Tagged: Cosmos Magazine

  • richardmitnick 12:20 pm on November 7, 2018
    Tags: Cosmos Magazine, Demonstrated that there is an upper limit – now called the Chandrasekhar limit – to the mass of a white dwarf star

    From COSMOS Magazine: “Science history: The astrophysicist who defined how stars behave”


    From COSMOS Magazine

    07 November 2018
    Jeff Glorfeld

    Subrahmanyan Chandrasekhar meets the press in 1983, shortly after winning the Nobel Prize. Bettmann / Contributor / Getty Images

    Subrahmanyan Chandrasekhar was so influential that NASA honoured him by naming an orbiting observatory after him.

    NASA/Chandra X-ray Telescope

    The NASA webpage devoted to astrophysicist Subrahmanyan Chandrasekhar says he “was known to the world as Chandra. The word chandra means ‘moon’ or ‘luminous’ in Sanskrit.”

    Subrahmanyan Chandrasekhar was born on October 19, 1910, in Lahore, then part of British India (the city is now in Pakistan). NASA says that he was “one of the foremost astrophysicists of the 20th century. He was one of the first scientists to couple the study of physics with the study of astronomy.”

    The Encyclopaedia Britannica adds that, with William A. Fowler, he won the 1983 Nobel Prize for physics, “for key discoveries that led to the currently accepted theory on the later evolutionary stages of massive stars”.

    According to an entry on the website of the Harvard-Smithsonian Center for Astrophysics, early in his career, between 1931 and 1935, he demonstrated that there is an upper limit – now called the Chandrasekhar limit – to the mass of a white dwarf star.

    “This discovery is basic to much of modern astrophysics, since it shows that stars much more massive than the Sun must either explode or form black holes,” the article explains.
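    The limit itself is textbook physics rather than anything derived in the article. As a rough sketch, the Chandrasekhar mass can be computed from fundamental constants using the standard expression M_Ch = ω · (√(3π)/2) · (ħc/G)^(3/2) / (μₑ m_H)², where ω ≈ 2.018 is the constant from the n = 3 polytrope and μₑ = 2 is the mean molecular weight per electron of a carbon-oxygen white dwarf:

```python
import math

# Physical constants, SI units
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735575e-27      # mass of a hydrogen atom, kg
M_sun = 1.98892e30       # solar mass, kg

omega = 2.018236         # Lane-Emden constant for the n = 3 polytrope
mu_e = 2.0               # nucleons per electron in a C/O white dwarf

m_ch = omega * (math.sqrt(3 * math.pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
print(f"Chandrasekhar mass: {m_ch / M_sun:.2f} solar masses")  # ~1.4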

    When he first proposed his theory, however, it was opposed by many, including Albert Einstein, “who refused to believe that Chandrasekhar’s findings could result in a star collapsing down to a point”.

    Writing for the Nobel Prize committee, Chandra described how he approached a project.

    “My scientific work has followed a certain pattern, motivated, principally, by a quest after perspectives,” he wrote.

    “In practice, this quest has consisted in my choosing (after some trials and tribulations) a certain area which appears amenable to cultivation and compatible with my taste, abilities, and temperament. And when, after some years of study, I feel that I have accumulated a sufficient body of knowledge and achieved a view of my own, I have the urge to present my point of view, ab initio, in a coherent account with order, form, and structure.

    “There have been seven such periods in my life: stellar structure, including the theory of white dwarfs (1929-1939); stellar dynamics, including the theory of Brownian motion (1938-1943); the theory of radiative transfer, including the theory of stellar atmospheres and the quantum theory of the negative ion of hydrogen and the theory of planetary atmospheres, including the theory of the illumination and the polarisation of the sunlit sky (1943-1950); hydrodynamic and hydromagnetic stability, including the theory of the Rayleigh-Benard convection (1952-1961); the equilibrium and the stability of ellipsoidal figures of equilibrium, partly in collaboration with Norman R. Lebovitz (1961-1968); the general theory of relativity and relativistic astrophysics (1962-1971); and the mathematical theory of black holes (1974-1983).”

    In 1999, four years after his death on August 21, 1995, NASA launched an X-ray observatory named Chandra in his honour. The observatory studies the universe in the X-ray portion of the electromagnetic spectrum.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:55 pm on September 24, 2018
    Tags: Cosmos Magazine

    From COSMOS Magazine: “A galactic near-miss set stars on an unexpected path around the Milky Way” 


    From COSMOS Magazine

    24 September 2018
    Ben Lewis

    A close pass from the Sagittarius dwarf galaxy sent ripples through the Milky Way that are still visible today.

    Image Credit: R. Ibata (UBC), R. Wyse (JHU), R. Sword (IoA)

    Milky Way. Credit: NASA/JPL-Caltech/ESO/R. Hurt

    Tiny galaxy; big trouble. Gaia imaging shows the Sagittarius galaxy, circled in red. ESA/Gaia/DPAC

    ESA/Gaia satellite

    Between 300 and 900 million years ago the Sagittarius dwarf galaxy made a close pass by the Milky Way, setting millions of stars in motion, like ripples on a pond. The after-effects of that galactic near miss are still visible today, according to newly published findings.

    The unique pattern of stars left over from the event was detected by the European Space Agency’s star mapping mission, Gaia. The details are contained in a paper written by Teresa Antoja and colleagues from the Universitat de Barcelona in Spain, and published in the journal Nature.

    The movements of over six million stars in the Milky Way were tracked by Gaia to reveal that groups of them follow different courses as they orbit the galactic centre.

    In particular, the researchers found a pattern that resembled a snail shell in a graph that plotted star altitudes above or below the plane of the galaxy, measured against their velocity in the same direction. This is not to say that the stars themselves are moving in a spiral, but rather that the roughly circular orbits correlate with up-and-down motion in a pattern that has never been seen before.

    While some perturbations in densities and velocities had been seen previously, it was generally assumed that the movement of the disk’s stars was largely in dynamic equilibrium and symmetric about the galactic plane. Instead, Antoja’s team discovered something had knocked the disk askew.

    “It is a bit like throwing a stone in a pond, which displaces the water as ripples and waves,” she explains.

    Whereas water will eventually settle after being disturbed, a star’s motion carries signatures of the change in movement. While the ripples in the stellar distribution caused by Sagittarius passing by have evened out, the motions of the stars themselves still carry the pattern.

    “At the beginning the features were very weird to us,” says Antoja. “I was a bit shocked and I thought there could be a problem with the data because the shapes are so clear.”

    The new revelations came about because of a huge increase in quality of the Gaia data, compared to what had been captured previously. The new information provided, for the first time, a measurement of three-dimensional speeds for the stars. This allowed the study of stellar motion using the combination of position and velocity, known as “phase space”.
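    As a loose illustration of what “phase space” means here, the sketch below bins a stellar sample in vertical position z against vertical velocity v_z, the plane in which Antoja’s team saw the snail-shell pattern. The input arrays are random placeholders, not Gaia data; with the real catalogue the overdensities would trace the spiral rather than a smooth blob:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder sample standing in for Gaia stars: vertical position z (kpc)
# and vertical velocity v_z (km/s) for stars near the Sun.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.4, 100_000)
v_z = rng.normal(0.0, 20.0, 100_000)

# Bin the stars in the (z, v_z) plane; in the real data the star counts
# (or the mean velocity per bin) reveal the snail-shell pattern.
counts, z_edges, v_edges = np.histogram2d(
    z, v_z, bins=80, range=[[-1.0, 1.0], [-60.0, 60.0]])

plt.imshow(counts.T, origin="lower", aspect="auto",
           extent=[z_edges[0], z_edges[-1], v_edges[0], v_edges[-1]])
plt.xlabel("z (kpc)")
plt.ylabel("v_z (km/s)")
plt.title("Vertical phase space (z, v_z)")
plt.show()
```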

    “It looks like suddenly you have put the right glasses on and you see all the things that were not possible to see before,” says Antoja.

    Computer models suggest the disturbance occurred between 300 and 900 million years ago – a point in time when it’s known the Sagittarius galaxy came near ours.

    In cosmic terms, that’s not very long ago, which also came as a surprise. It was known that the Milky Way had endured some much earlier collisions – smashing into a dwarf galaxy some 10 billion years ago, for instance – but until now more recent events had not been suspected. The Gaia results have changed that view.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:39 am on September 18, 2018
    Tags: Cosmos Magazine, Super-Kamioka Neutrino Detection Experiment at Kamioka Observatory Tokyo Japan

    From COSMOS Magazine: “Hints of a fourth type of neutrino create more confusion” 


    From COSMOS Magazine

    18 September 2018
    Katie Mack

    Anomalous experimental results hint at the possibility of a fourth kind of neutrino, but more data only makes the situation more confusing.

    Inside the Super-Kamioka Neutrino Detection Experiment (Super-Kamiokande) at the Kamioka Observatory, Japan. Credit: Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), The University of Tokyo

    It was a balmy summer in 1998 when I first became aware of the confounding weirdness of neutrinos. I have vivid memories of that day, as an embarrassingly young student researcher, walking along a river in Japan, listening to a graduate student tell me about her own research project: an attempt to solve a frustrating neutrino–related mystery. We were both visiting a giant detector experiment called Super-Kamiokande, in the heady days right after it released data that forever altered the Standard Model of Particle Physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    What Super-K found was that neutrinos – ghostly, elusive particles that are produced in the hearts of stars and can pass through the whole Earth with only a miniscule chance of interacting with anything – have mass.

    A particle having mass might not sound like a big deal, but the original version of the otherwise fantastically successful Standard Model described neutrinos as massless – just like photons, the particles that carry light and other electromagnetic waves. Unlike photons, however, neutrinos come in three ‘flavours’: electron, muon, and tau.

    Super-K’s discovery was that neutrinos could change from one flavour to another as they travelled, in a process called oscillation. This can only happen if the three flavours have different masses from one another, which means they cannot all be massless.
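    For a feel for how oscillation depends on mass, here is a minimal sketch of the standard two-flavour oscillation probability; the mixing angle and mass-squared splitting below are illustrative placeholders, not fitted Super-K values. If the splitting Δm² were zero, the probability would vanish, which is why oscillation implies mass:

```python
import math

def oscillation_probability(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavour oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative numbers: roughly atmospheric-scale splitting, maximal
# mixing, and a 1 GeV neutrino crossing the Earth's diameter.
print(oscillation_probability(L_km=12_742, E_GeV=1.0,
                              sin2_2theta=1.0, dm2_eV2=2.5e-3))
```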


    This discovery was a big deal, but it wasn’t the mystery the grad student was working to solve. A few years before, an experiment called the Liquid Scintillator Neutrino Detector (LSND), based in the US, had seen tantalising evidence that neutrinos were oscillating in a way that made no sense at all with the results of other experiments, including Super-K. The LSND finding indirectly suggested there had to be a fourth neutrino in the picture that the other neutrinos were sometimes oscillating into. This fourth neutrino would be invisible in experiments, lacking the kind of interactions that made the others detectable, which gave it the name ‘sterile neutrino’. And it would have to be much more massive than the other three.

    As I learned that day by the river, the result had persisted, unexplained, for years. Most people assumed something had gone wrong with the experiment, but no one knew what.

    In 2007, the plot thickened. An experiment called MiniBooNE, designed primarily to figure out what the heck happened with LSND, didn’t find the distribution of neutrinos it should have seen to confirm the LSND result.

    FNAL/MiniBooNE

    But some extra neutrinos did show up in MiniBooNE in a different energy range. They were inconsistent with LSND and every other experiment, perhaps suggesting the existence of even more flavours of neutrino.

    Meanwhile, experiments looking at neutrinos produced by nuclear reactors were seeing numbers that also couldn’t easily be explained without a sterile neutrino, though some physicists wrote these off as possibly due to calibration errors.

    And now the plot has grown even thicker.

    In May, MiniBooNE announced new results that seem more consistent with LSND, but even less palatable in the context of other experiments. MiniBooNE works by creating a beam of muon neutrinos and shooting them through the dirt at an underground detector 450 m away. The detector, meanwhile, is monitoring the arrival of electron neutrinos, in case any muon neutrinos are shape-shifting. More of these electron neutrinos turn up than standard neutrino models predict, which implies that some muon neutrinos transform by oscillating into sterile neutrinos too. (Technically, all neutrinos would be swapping around with all others, but this beam only makes sense if there’s an extra, massive one in the mix.)

    But there are several reasons this explanation is facing resistance. One is that experiments just looking for muon neutrinos disappearing (becoming sterile neutrinos or anything else) don’t find a consistent picture. Secondly, if sterile neutrinos at the proposed mass exist, they should have been around in the very early universe, and measurements we have from the cosmic microwave background of the number of neutrino types kicking around then strongly suggest it was just the normal three.

    So, as usual, there’s more work to be done. A MiniBooNE follow-up called MicroBooNE is currently taking data and might make the picture clearer, and other experiments are on the way.

    FNAL/MicroBooNE

    It seems very likely that something strange is happening in the neutrino sector. It just remains to be seen exactly what, and how, over the next 20 years of constant neutrino bombardment, it will change our understanding of everything else.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:19 am on September 18, 2018
    Tags: Cosmos Magazine, Earth’s most volcanic places

    From COSMOS Magazine: “Earth’s most volcanic places” 


    From COSMOS Magazine

    18 September 2018
    Vhairi Mackintosh

    Some countries are famous for images of spewing lava and mountainous destruction. However, appearances can be deceiving. Not all volcanoes are the same.

    Credit: Stocktrek Images / Getty Images

    Volcanic activity. It’s the reason why the town of El Rodeo in Guatemala is currently uninhabitable, why the Big Island of Hawaii gained 1.5 kilometres of new coastline in June, and why Denpasar airport in Bali has closed twice this year.

    But these eruptions should not be seen as destructive attacks on certain places or the people that live in them. They have nothing to do even with the country that hosts them. They occur in specific regions because of much larger-scale processes originating deep within the Earth.

    According to the United States Geological Survey (USGS), approximately 1,500 potentially active volcanoes exist on land around the globe. Here’s a look at four of the world’s most volcanically active spots, and the different processes responsible for their eruptions. As you’ll see, there is no one-size-fits-all volcano.

    ICELAND

    Most volcanic eruptions go unnoticed. That’s because they happen continuously on the ocean floor where cracks in the Earth’s outer layer, the lithosphere (comprising the crust and solid upper mantle), form at so-called divergent plate boundaries. These margins form due to convection in the underlying mantle, which causes hot, less dense molten material, called magma, to rise to the surface. As it forces its way through the lithospheric plate, magma breaks the outer shell. Lava, the surface-equivalent of magma, fills the crack and pushes the broken pieces in opposite directions.

    Volcanism from this activity created Iceland. The country is located on the Mid-Atlantic Ridge, which forms the seam between the Eurasian and North American plates. Iceland is one of the few places where this type of spreading centre pops above sea level.

    However, volcanism on Iceland also happens because of its location over a hot spot. These spots develop above abnormally hot, deep regions of the mantle known as plumes.

    Each plume melts the overlying material and buoyant magma rises through the lithosphere – picture a lava lamp – to erupt at the surface.

    This volcanic double whammy produces both gentle fissure eruptions of basaltic lava as well as stratovolcanoes that are characterised by periodic non-explosive lava flows and explosive, pyroclastic eruptions, which produce clouds of ash, gas and debris.

    In 2010, the two-month eruption of the ice-capped Eyjafjallajökull stratovolcano – the one that no one outside Iceland can pronounce – attracted a lot of media attention because the resulting ash cloud grounded thousands of flights across Europe.

    Eruption at Fimmvörðuháls at dusk. Boaworm

    In fact, it was a relatively small eruption. It is believed that a major eruption in Iceland is long overdue. Four other volcanoes are all showing signs of increased activity, including the country’s most feared one, called Katla.

    Credit: Westend61 / Getty Images

    Katla volcano erupting through the Mýrdalsjökull ice cap in 1918. Icelandic Glacial Landscapes / public domain

    INDONESIA

    More than 197 million Indonesians live within 100 km of a volcano, with nearly nine million of those within 10 km. Indonesia has more volcanoes than any other country in the world. The 1815 eruption of its Mount Tambora still holds the record for the largest in recent history.

    Indonesia is one of many places located within the world’s most volcanically, and seismically, active zone, known as the Pacific Ring of Fire. This 40,000 km horseshoe-shaped region, bordering the Pacific Ocean, is where many tectonic plates bang into each other.

    In this so-called convergent plate boundary setting, the process of subduction generates volcanism. Subduction occurs because when two plates collide, the higher density plate containing oceanic crust sinks beneath another less dense plate, which contains either continental crust or younger, hotter and therefore less dense oceanic crust. As the plate descends into the mantle, it releases fluids that trigger melting of the overriding plate, thus producing magma. This then rises and erupts at the surface to form an arc-shaped chain of volcanoes, inward of, but parallel to, the subducting plate margin.

    Indonesia marks the junction between many converging plates and, thus, the subduction processes and volcanism are complex. Most of Indonesia’s volcanoes, however, are part of the Sunda Arc, a volcanic island chain caused by the subduction of the Indo-Australian Plate beneath the Eurasian Plate. Volcanism in eastern Indonesia is mainly caused by the subduction of the Pacific Plate under the Eurasian Plate.

    The stratovolcanoes that form in convergent plate boundary settings are the most dangerous because they are characterised by incredibly fast, highly explosive pyroclastic flows. One of Indonesia’s stratovolcanoes, Mount Agung, erupted on 29 June for the second time in a year, spewing ash more than two km into the air and grounding hundreds of flights to the popular tourist destination, Bali.

    Mount Agung, November 2017 eruption – 27 Nov 2017. Michael W. Ishak (http://www.myreefsdiary.com)

    Credit: shayes17 / Getty Images

    GUATEMALA

    The June 3 eruption of the Guatemalan stratovolcano, Volcan de Fuego (Volcano of Fire), devastated Guatemalans, and the rest of the world, as horrifying images and videos of people trying to escape the quick-moving pyroclastic flow filled the news.

    As in Indonesia, Guatemala’s location within the Ring of Fire, and the subduction-related processes that go with it, is responsible for the volcanoes found here. On this side of the Pacific Ocean, volcanism is caused by the subduction of the much smaller Cocos Plate beneath the North American and Caribbean plates.

    Unlike Indonesia, however, the convergent boundary between these two plates occurs on land instead of within the ocean. Therefore, the Guatemalan arc does not form islands but a northwest-southeast trending chain of onshore volcanoes.

    The same process is responsible for the formation of the Andes – the world’s longest continental mountain range – further south along the western coast of South America. In this case, subduction of the Nazca-Antarctic Plate beneath the South American Plate causes volcanism in countries such as Chile and Peru.

    October 1974 eruption of Volcán de Fuego — seen from Antigua Guatemala, Guatemala. Paul Newton, Smithsonian Institution

    Credit: ShaneMyersPhoto / Getty Images

    HAWAII

    When someone mentions Hawaii, it’s hard not to picture a volcano. But Hawaii’s volcanoes are actually not typical. That’s because they are not found on a plate boundary. In fact, Hawaii is slap-bang in the middle of the Pacific Plate – the world’s largest.

    Like Iceland, Hawaii is also underlain by a hot spot. However, because the Pacific Plate is moving to the northwest over this relatively fixed mantle anomaly, the resulting volcanism creates a linear chain of islands within the Pacific Ocean. A volcano forming over the hot spot will be carried away, over millions of years, by the moving tectonic plate. As a new volcano begins to form, the older one becomes extinct, cools and sinks to form a submarine mountain. Through this process, the islands of Hawaii have been forming for the past 70 million years.
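    The age progression along the chain allows a simple consistency check: an island’s distance from the hot spot divided by its age gives the plate speed. The figures below are round, illustrative numbers for Kauai relative to the currently active Big Island:

```python
# Back-of-the-envelope plate speed from the Hawaiian island chain.
distance_km = 500.0   # rough Kauai-to-Kilauea separation, illustrative
age_years = 5.0e6     # rough age of Kauai's shield volcano, illustrative

speed_cm_per_year = distance_km * 1e5 / age_years
print(f"Implied Pacific Plate speed: {speed_cm_per_year:.0f} cm/yr")  # ~10
```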

    The typical shield volcanoes that form in this geological setting are produced by gentle eruptions of basaltic lava and are rarely explosive. The youngest Hawaiian shield volcano, Kilauea, erupted intensely on 3 May this year, and lava at 1,170 degrees Celsius has been flowing over the island and into the ocean ever since. Kilauea, which has been continuously oozing since 1983, is regarded as one of the world’s most active volcanoes, if not the most active.

    Looking up the slope of Kilauea, a shield volcano on the island of Hawaii. In the foreground, the Puu Oo vent has erupted fluid lava to the left. The Halemaumau crater is at the peak of Kilauea, visible here as a rising vapor column in the background. The peak behind the vapor column is Mauna Loa, a volcano that is separate from Kilauea. USGS

    An aerial view of the erupting Pu’u ‘O’o crater on Hawaii’s Kilauea volcano taken at dusk on June 29, 1983.
    Credit: G.E. Ulrich, USGS

    AND THE WORLD’S LEAST VOLCANIC PLACE?

    It may be surprising to hear that despite the Himalayas, like the Andes, being located on a very active convergent plate boundary, they are not volcanically active. In fact, there are barely any volcanoes at all within the mountain range.

    This is because the two colliding plates that are responsible for the formation of the Himalayas contain continental crust at the convergent plate boundary, distinct from the oceanic-continental or oceanic-oceanic crustal boundaries in the Guatemalan and Indonesian cases, respectively.

    As the two colliding plates have similar compositions, and therefore densities, and both their densities are much lower than the underlying mantle, neither plate is subducted. It’s a bit like wood floating on water. As subduction causes the lithospheric partial melting that generates the magma in convergent plate boundary settings, volcanism is not common in continent-continent collisions.

    Unfortunately, Himalayan people don’t get off that easily though, because devastating earthquakes go hand-in-hand with this sort of setting.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:48 am on August 20, 2018
    Tags: Cosmos Magazine, KELT-9b

    From IAC via COSMOS: “The planet KELT-9b literally has an iron sky” 

    IAC

    From Instituto de Astrofísica de Canarias – IAC

    via

    COSMOS

    20 August 2018
    Ben Lewis

    The Gran Telescopio Canarias on La Palma in the Canary Islands was instrumental in determining the constituents of the exoplanet’s atmosphere. Dominic Dähncke/Getty Images

    KELT-9b, one of the most unlikely planets ever discovered, has surprised astronomers yet again with the discovery that its atmosphere contains the metals iron and titanium, according to research published in the journal Nature.

    NASA/JPL-Caltech

    The planet is truly like no other. Located around 620 light-years away from Earth in the constellation Cygnus, it is known as a Hot Jupiter – which gives a hint to its nature. Nearly three times the size of Jupiter, it has a surface temperature that tops 3,780 degrees Celsius, making it the hottest exoplanet ever discovered. It is even hotter than the surface of some stars. In some ways it straddles the line between a star and a gas-giant exoplanet.

    And it’s that super-hot temperature, created by a very close orbit to its host star, that allows the metals to become gaseous and fill the atmosphere, say the findings from a team led by Jens Hoeijmakers of the University of Geneva in Switzerland.

    On the night of 31 July 2017, as KELT-9b passed across the face of its star, the HARPS-North spectrograph attached to the Telescopio Nazionale Galileo, located on the Spanish Canary island of La Palma, began watching. The telescope recorded changes in colour in the planet’s atmosphere, the result of chemicals with different light-filtering properties.

    Telescopio Nazionale Galileo – Harps North


    Telescopio Nazionale Galileo, a 3.58-metre Italian telescope located at the Roque de los Muchachos Observatory on the island of La Palma in the Canary Islands, Spain, at an altitude of 2,396 m (7,861 ft)

    By subtracting the plain starlight from the light that had passed through the atmosphere, the team was left with a spectrum of its chemical make-up.

    They then homed in on titanium and iron, because the relative abundances of uncharged and charged atoms tend to change dramatically at the temperatures seen on KELT-9b. After a complex process of analysis and cross-correlation of results, they saw dramatic peaks in the ionised forms of both metals.
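    Conceptually, the cross-correlation step slides a model spectrum of the target species across the observed transmission spectrum over a grid of trial Doppler shifts and records how well the two line up; a peak marks the planet’s signal at its radial velocity. The sketch below uses a synthetic single-line spectrum as a stand-in for the HARPS-N data:

```python
import numpy as np

C_KMS = 299_792.458  # speed of light, km/s

def cross_correlate(wl, observed, template, velocities_kms):
    """Correlate an observed spectrum against a template shifted over a
    grid of trial radial velocities; returns one value per velocity."""
    ccf = []
    for v in velocities_kms:
        # Doppler-shift the template to the trial velocity, resampled on wl
        shifted = np.interp(wl, wl * (1 + v / C_KMS), template)
        ccf.append(np.dot(observed - observed.mean(),
                          shifted - shifted.mean()))
    return np.array(ccf)

# Synthetic stand-in: one absorption line, injected at +50 km/s.
wl = np.linspace(500.0, 510.0, 2000)                    # wavelength, nm
template = -np.exp(-0.5 * ((wl - 505.0) / 0.05) ** 2)   # line at rest
observed = np.interp(wl, wl * (1 + 50.0 / C_KMS), template)

v_grid = np.arange(-200.0, 201.0, 1.0)
ccf = cross_correlate(wl, observed, template, v_grid)
print("CCF peak at", v_grid[np.argmax(ccf)], "km/s")    # recovers +50
```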

    It has been long suspected that iron and titanium exist on some exoplanets, but to date they have been difficult to detect. Somewhat like Earth, where the two elements are mostly found in solid form, the cooler conditions of most exoplanets means that the iron and titanium atoms are generally “trapped in other molecules,” as co-author Kevin Heng from the University of Bern in Switzerland recently told Space.com.

    However, the permanent heatwave on KELT-9b means the metals are floating in the atmosphere as individual charged atoms, unable to condense or form compounds.

    While this is the first time iron has been detected in an exoplanet’s atmosphere, titanium has previously been detected in the form of titanium dioxide on Kepler 13Ab, another Hot Jupiter. The discovery on KELT-9b however, is the first detection of elemental titanium in an atmosphere.

    KELT-9b’s atmosphere is also known to contain hydrogen, which was easily identifiable without requiring the type of complex analysis needed to identify iron and titanium. However, a study in July [Nature Astronomy] found that the hydrogen is literally boiling off the planet, leading to the hypothesis that its escape could also be dragging the metals higher into the atmosphere, making their detection easier.

    Further studies into KELT-9b’s atmosphere are continuing, with suggestions that announcements of other metals could be forthcoming. In addition, the complex analysis required in this study could be useful for identifying obscure components in the atmospheres of other planets.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Instituto de Astrofísica de Canarias (IAC) is an international research centre in Spain which comprises:

    The Instituto de Astrofísica, the headquarters, which is in La Laguna (Tenerife).
    The Centro de Astrofísica en La Palma (CALP)
    The Observatorio del Teide (OT), in Izaña (Tenerife).
    The Observatorio del Roque de los Muchachos (ORM), in Garafía (La Palma).

    Roque de los Muchachos Observatory is an astronomical observatory located in the municipality of Garafía on the island of La Palma in the Canary Islands, at an altitude of 2,396 m (7,861 ft)

    These centres, with all the facilities they bring together, make up the European Northern Observatory (ENO).

    The IAC is constituted administratively as a Public Consortium, created by statute in 1982, with involvement from the Spanish Government, the Government of the Canary Islands, the University of La Laguna and Spain’s Science Research Council (CSIC).

    The International Scientific Committee (CCI) manages participation in the observatories by institutions from other countries. A Time Allocation Committee (CAT) allocates the observing time reserved for Spain at the telescopes in the IAC’s observatories.

    The exceptional quality of the sky over the Canaries for astronomical observations is protected by law. The IAC’s Sky Quality Protection Office (OTPC) regulates the application of the law and its Sky Quality Group continuously monitors the parameters that define observing quality at the IAC Observatories.

    The IAC’s research programme includes astrophysical research and technological development projects.

    The IAC is also involved in researcher training, university teaching and outreach activities.

    The IAC has devoted much energy to developing technology for the design and construction of a large 10.4-metre diameter telescope, the Gran Telescopio CANARIAS (GTC), which is sited at the Observatorio del Roque de los Muchachos.



    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain

     
  • richardmitnick 10:23 am on August 17, 2018
    Tags: A step closer to a theory of quantum gravity, Cosmos Magazine

    From COSMOS Magazine: “A step closer to a theory of quantum gravity” 


    From COSMOS Magazine

    17 August 2018
    Phil Dooley

    Resolving differences between the theory of general relativity and the predictions of quantum physics remains a huge challenge. Credit: diuno / Getty Images

    A new approach to combining Einstein’s General Theory of Relativity with quantum physics could come out of a paper published in the journal Nature Physics. The insights could help build a successful theory of quantum gravity, something that has so far eluded physicists.

    Magdalena Zych from the University of Queensland in Australia and Caslav Brukner from the University of Vienna in Austria have devised a set of principles that compare the way objects behave as predicted by Einstein’s theory with their behaviour as predicted by quantum theory.

    Quantum physics has very successfully described the behaviour of tiny particles such as atoms and electrons, while relativity is very accurate for forces at cosmic scales. However, in some cases, notably gravity, the two theories produce incompatible results.

    Einstein’s theory revolutionised the concept of gravity by showing that it is caused by curves in spacetime rather than by a force. In contrast, quantum theory has successfully shown that other forces, such as magnetism, are the result of fleeting particles being exchanged between interacting objects.

    The difference between the two cases throws up a surprising question: do objects attracted by electrical or magnetic forces behave the same way as when attracted by the gravity of a nearby planet?

    In physics language, an object’s inertial mass and its gravitational mass are held to be the same, a property known as the Einstein equivalence principle. But, given that the two theories are so different, it is not clear that the idea still holds at the quantum level.

    Zych and Brukner combined two principles to formulate the problem. From relativity they took the equation E = mc², which holds that when objects gain more energy they become heavier. This even applies to an atom moving from a low energy level to a more excited state.

    To this they added the principle of quantum superposition, which holds that particles can be smeared into more than one state at once. And since the different energy levels have different masses, then the total mass gets smeared across a range of values, too.

    This prediction allowed the pair to propose tests that would tease out the quantum behaviour of gravitational acceleration.

    “For example, for an object in freefall in a superposition of accelerations, quantum correlations – entanglement – would develop between the internal states of the particle and their position,” Zych explains.

    “So, the particle would actually smear across space as it falls, which would violate the equivalence principle.”

    As most current theories of quantum gravity predict that the equivalence principle will indeed be violated, the tests proposed by Zych and Brukner could help evaluate whether these approaches are on the right track.

    Zych was inspired to tackle the problem when thinking about a variant of Einstein’s “twin paradox”. This arises as a consequence of relativity, and says that one twin travelling at high speed will age more slowly than the other, who remains stationary.

    Instead, Zych imagined a kind of quantum conjoined twins, built from the quantum superposition of two different energy states – and therefore two superposed masses.

    “It was surprising to find these corners of quantum physics that have not been explored before,” Zych says.

    She estimates the difference caused by the quantum behaviour of an atom interacting with a visible wavelength laser would be around one part in 10^11.
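    That one-in-10^11 figure follows from a back-of-the-envelope E = mc² estimate: the mass difference between the two internal states is the transition energy divided by c², compared to the atom’s total mass. A sketch, assuming a visible-wavelength transition of about 2 eV in a rubidium-sized atom (both numbers are illustrative):

```python
E_TRANSITION_EV = 2.0          # visible-wavelength transition, illustrative
EV_TO_J = 1.602176634e-19      # joules per electronvolt
C = 2.99792458e8               # speed of light, m/s
M_ATOM = 1.44e-25              # roughly a rubidium atom, kg (illustrative)

delta_m = E_TRANSITION_EV * EV_TO_J / C**2   # mass gained in the excited state
print(f"Relative mass difference: {delta_m / M_ATOM:.1e}")  # ~2e-11
```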

    An Italian group has already begun work on such experiments and found no deviation from the equivalence principle up to one part in 10^9.

    If Einstein’s work does turn out to be violated, it could have consequences for the use of quantum systems as very precise atomic clocks.

    “If the Einstein principle was violated only as allowed in classical physics, clocks could fail to be time-dilated, as predicted by relativity,” says Zych.

    “But if it is violated as allowed in quantum theory, clocks would generically cease to be clocks at all.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:38 am on August 8, 2018
    Tags: Cosmos Magazine, Incredibly tiny explosion packs a big punch, Nanoplasma

    From COSMOS Magazine: “Incredibly tiny explosion packs a big punch” 


    From COSMOS Magazine

    08 August 2018
    Phil Dooley

    Japanese researchers record for the first time the birth of a nanoplasma.

    Credit: piranka / iStock

    By bombarding xenon atoms with X-rays, researchers can create nanoplasma. Credit: Science Picture Co / Getty Images

    Japanese researchers have captured the birth of a nanoplasma – a mixture of highly charged ions and electrons – in exquisite detail, as a high-powered X-ray laser roasted a microscopic cluster of atoms, tearing off electrons.

    While it’s cool to witness an explosion lasting just half a trillionth of a second and occupying one-hundredth the diameter of a human hair, caused by an X-ray beam 12,000 times brighter than the sun, it’s also important for studies of tiny structures such as proteins and crystals.

    To study small things you need light of a comparably small wavelength. The wavelength of the X-rays used by Yoshiaki Kumagai and his colleagues in this experiment at the SPring-8 Angstrom Compact free electron LAser (SACLA) in Japan is one ten-billionth of a metre: you could fit a million wavelengths into the thickness of a sheet of paper.

    SACLA Free-Electron Laser Riken Japan

    This is the perfect wavelength for probing the structure of crystals and proteins, and the brightness of a laser gives a good strong signal. The problem, however, is that the laser itself damages the structure, says Kumagai, a physicist from Tohoku University in the city of Sendai.

    “Some proteins are very sensitive to irradiation,” he explains. “It is hard to know if we are actually detecting the pure protein structure, or whether there is already radiation damage.”

    The tell-tale sign of radiation damage is the formation of a nanoplasma, as the X-rays break bonds and punch out electrons from deep inside atoms to form ions. This happens in tens of femtoseconds (that is, quadrillionths of a second) and sets off complex cascades of collisions, recombinations and internal rearrangements of atoms. SACLA’s ultra short pulses, only 10 femtoseconds long, are the perfect tool to map out the progress of the tiny explosion moment by moment.

    To untangle the complicated web of processes going on, the team chose a very simple structure to study: a cluster of about 5,000 xenon atoms injected into a vacuum, which they then hit with an X-ray laser pulse.

    A second laser pulse followed, this time from an infrared laser, which was absorbed by the fragments and ions. The patterns of the absorption told the scientists what the nanoplasma contained. By repeating the experiment, each time delaying the infrared laser a little more, they built a set of snapshots of the nanoplasma’s birth.

    Previous experiments had shown that on average at least six electrons eventually get blasted off each xenon atom, but the team’s set of new snapshots, published in the journal Physical Review X, show that it doesn’t all happen immediately.


    Instead, within 10 femtoseconds many of the xenon atoms have absorbed a lot of energy but not lost any electrons. Some atoms do lose electrons, and the attraction between the positive ions and the free electrons holds the plasma together. This leads to many collisions, which share the energy among the neutral atoms. The number of these atoms then declines over the next several hundred femtoseconds, as more ions form.
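    The time ordering described above (a prompt population of excited neutral atoms that slowly converts into ions) can be caricatured with a two-state rate model; the time constant below is invented for illustration and is not fitted to the SACLA measurements:

```python
import numpy as np

TAU_FS = 300.0                         # conversion time constant, invented
delays = np.arange(0.0, 1000.0, 50.0)  # pump-probe delays, fs

neutrals = np.exp(-delays / TAU_FS)    # excited neutral fraction (toy model)
ions = 1.0 - neutrals                  # fraction converted into ions

# One line per simulated pump-probe snapshot, every 200 fs
for t, n, i in zip(delays[::4], neutrals[::4], ions[::4]):
    print(f"delay {t:5.0f} fs: neutrals {n:.2f}, ions {i:.2f}")
```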

    Kumagai says the large initial population of highly-excited neutral xenon atoms were gateway states to the nanoplasma formation.

    “The excited atoms play an important role in the charge transfer and energy migration. It’s the first time we’ve caught this very fast step in nanoplasma formation,” he says.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:30 am on July 30, 2018
    Tags: Cosmos Magazine, Hello quantum world

    From COSMOS Magazine: “Hello quantum world” 


    From COSMOS Magazine

    30 July 2018
    Will Knight

    Quantum computing – IBM

    Inside a small laboratory in lush countryside about 80 kilometres north of New York City, an elaborate tangle of tubes and electronics dangles from the ceiling. This mess of equipment is a computer. Not just any computer, but one on the verge of passing what may, perhaps, go down as one of the most important milestones in the history of the field.

    Quantum computers promise to run calculations far beyond the reach of any conventional supercomputer. They might revolutionise the discovery of new materials by making it possible to simulate the behaviour of matter down to the atomic level. Or they could upend cryptography and security by cracking otherwise invincible codes. There is even hope they will supercharge artificial intelligence by crunching through data more efficiently.

    Yet only now, after decades of gradual progress, are researchers finally close to building quantum computers powerful enough to do things that conventional computers cannot. It’s a landmark somewhat theatrically dubbed ‘quantum supremacy’. Google has been leading the charge toward this milestone, while Intel and Microsoft also have significant quantum efforts. And then there are well-funded startups including Rigetti Computing, IonQ and Quantum Circuits.

    No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?

    Credit: Graham Carlow

    Why we think we need a quantum computer

    The research center, located in Yorktown Heights, looks a bit like a flying saucer as imagined in 1961. It was designed by the neo-futurist architect Eero Saarinen and built during IBM’s heyday as a maker of large mainframe business machines. IBM was the world’s largest computer company, and within a decade of the research centre’s construction it had become the world’s fifth-largest company of any kind, just behind Ford and General Electric.

    While the hallways of the building look out onto the countryside, the design is such that none of the offices inside have any windows. It was in one of these cloistered rooms that I met Charles Bennett. Now in his 70s, he has large white sideburns, wears black socks with sandals and even sports a pocket protector with pens in it.

    Charles Bennett was one of the pioneers who realised quantum computers could solve some problems exponentially faster than conventional computers. Credit:Bartek Sadowski

    Surrounded by old computer monitors, chemistry models and, curiously, a small disco ball, he recalled the birth of quantum computing as if it were yesterday.

    When Bennett joined IBM in 1972, quantum physics was already half a century old, but computing still relied on classical physics and the mathematical theory of information that Claude Shannon had developed at Bell Labs in the 1940s. It was Shannon who defined the quantity of information in terms of the number of ‘bits’ (a term he popularised but did not coin) required to store it. Those bits, the 0s and 1s of binary code, are the basis of all conventional computing.

    A year after arriving at Yorktown Heights, Bennett helped lay the foundation for a quantum information theory that would challenge all that. It relies on exploiting the peculiar behaviour of objects at the atomic scale. At that size, a particle can exist ‘superposed’ in many states (e.g., many different positions) at once. Two particles can also exhibit ‘entanglement’, so that changing the state of one may instantaneously affect the other.

    Bennett and others realised that some kinds of computations that are exponentially time consuming, or even impossible, could be efficiently performed with the help of quantum phenomena. A quantum computer would store information in quantum bits, or qubits. Qubits can exist in superpositions of 1 and 0, and entanglement and a trick called interference can be used to find the solution to a computation over an exponentially large number of states. It’s annoyingly hard to compare quantum and classical computers, but roughly speaking, a quantum computer with just a few hundred qubits would be able to perform more calculations simultaneously than there are atoms in the known universe.

    In the summer of 1981, IBM and MIT organised a landmark event called the First Conference on the Physics of Computation. It took place at Endicott House, a French-style mansion not far from the MIT campus.

    In a photo that Bennett took during the conference, several of the most influential figures from the history of computing and quantum physics can be seen on the lawn, including Konrad Zuse, who developed the first programmable computer, and Richard Feynman, an important contributor to quantum theory. Feynman gave the conference’s keynote speech, in which he raised the idea of computing using quantum effects. “The biggest boost quantum information theory got was from Feynman,” Bennett told me. “He said, ‘Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.’”

    IBM’s quantum computer – one of the most promising in existence – is located just down the hall from Bennett’s office. The machine is designed to create and manipulate the essential element in a quantum computer: the qubits that store information.

    The gap between the dream and the reality

    The IBM machine exploits quantum phenomena that occur in superconducting materials. For instance, sometimes current will flow clockwise and counterclockwise at the same time. IBM’s computer uses superconducting circuits in which two distinct electromagnetic energy states make up a qubit.

    The superconducting approach has key advantages. The hardware can be made using well-established manufacturing methods, and a conventional computer can be used to control the system. The qubits in a superconducting circuit are also easier to manipulate and less delicate than individual photons or ions.

    Inside IBM’s quantum lab, engineers are working on a version of the computer with 50 qubits. You can run a simulation of a simple quantum computer on a normal computer, but at around 50 qubits it becomes nearly impossible.
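    The 50-qubit figure is roughly where brute-force simulation runs out of memory: a dense state vector holds 2^n complex amplitudes. A quick sketch of the arithmetic, assuming 16 bytes per complex double:

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense n-qubit state vector: 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.0f} GiB")
# 30 qubits fit in a laptop (16 GiB); 50 qubits need ~16 million GiB.
```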

    That means IBM is theoretically approaching the point where a quantum computer can solve problems a classical computer cannot: in other words, quantum supremacy.

    But as IBM’s researchers will tell you, quantum supremacy is an elusive concept. You would need all 50 qubits to work perfectly, when in reality quantum computers are beset by errors that need to be corrected. It is also devilishly difficult to maintain qubits for any length of time; they tend to ‘decohere’, or lose their delicate quantum nature, much as a smoke ring breaks up at the slightest air current. And the more qubits, the harder both challenges become.

    The cutting-edge science of quantum computing requires nanoscale precision mixed with the tinkering spirit of home electronics. Researcher Jerry Chow is shown here fitting a circuit board in the IBM quantum research lab. Credit: Jon Simon

    “If you had 50 or 100 qubits and they really worked well enough, and were fully error-corrected – you could do unfathomable calculations that can’t be replicated on any classical machine, now or ever,” says Robert Schoelkopf, a Yale professor and founder of a company called Quantum Circuits. “The flip side to quantum computing is that there are exponential ways for it to go wrong.”

    Another reason for caution is that it isn’t obvious how useful even a perfectly functioning quantum computer would be. It doesn’t simply speed up any task you throw at it; in fact, for many calculations, it would actually be slower than classical machines. Only a handful of algorithms have so far been devised where a quantum computer would clearly have an edge. And even for those, that edge might be short-lived. The most famous quantum algorithm, developed by Peter Shor at MIT, is for finding the prime factors of an integer. Many common cryptographic schemes rely on the fact that this is hard for a conventional computer to do. But cryptography could adapt, creating new kinds of codes that don’t rely on factorisation.

    This is why, even as they near the 50-qubit milestone, IBM’s own researchers are keen to dispel the hype around it. At a table in the hallway that looks out onto the lush lawn outside, I encountered Jay Gambetta, a tall, easygoing Australian who researches quantum algorithms and potential applications for IBM’s hardware. “We’re at this unique stage,” he said, choosing his words with care. “We have this device that is more complicated than you can simulate on a classical computer, but it’s not yet controllable to the precision that you could do the algorithms you know how to do.”

    What gives the IBMers hope is that even an imperfect quantum computer might still be a useful one.

    Gambetta and other researchers have zeroed in on an application that Feynman envisioned back in 1981. Chemical reactions and the properties of materials are determined by the interactions between atoms and molecules. Those interactions are governed by quantum phenomena. A quantum computer can – at least in theory – model those in a way a conventional one cannot.

    Last year, Gambetta and colleagues at IBM used a seven-qubit machine to simulate the precise structure of beryllium hydride. At just three atoms, it is the most complex molecule ever modelled with a quantum system. Ultimately, researchers might use quantum computers to design more efficient solar cells, more effective drugs or catalysts that turn sunlight into clean fuels.

    Those goals are a long way off. But, Gambetta says, it may be possible to get valuable results from an error-prone quantum machine paired with a classical computer.

    Credit: Cosmos Magazine

    Physicist’s dream to engineer’s nightmare

    “The thing driving the hype is the realisation that quantum computing is actually real,” says Isaac Chuang, a lean, soft-spoken MIT professor. “It is no longer a physicist’s dream – it is an engineer’s nightmare.”

    Chuang led the development of some of the earliest quantum computers, working at IBM in Almaden, California, during the late 1990s and early 2000s. Though he is no longer working on them, he thinks we are at the beginning of something very big – that quantum computing will eventually even play a role in artificial intelligence.

    But he also suspects that the revolution will not really begin until a new generation of students and hackers get to play with practical machines. Quantum computers require not just different programming languages but a fundamentally different way of thinking about what programming is. As Gambetta puts it: “We don’t really know what the equivalent of ‘Hello, world’ is on a quantum computer.”

    We are beginning to find out. In 2016 IBM connected a small quantum computer to the cloud. Using a programming tool kit called QISKit, you can run simple programs on it; thousands of people, from academic researchers to schoolkids, have built QISKit programs that run basic quantum algorithms. Now Google and other companies are also putting their nascent quantum computers online. You can’t do much with them, but at least they give people outside the leading labs a taste of what may be coming.
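    As a taste of what that first program might look like, here is a minimal Bell-state circuit in QISKit, the toolkit named above. This sketch only builds and draws the circuit, so it runs locally without access to IBM’s cloud backends:

```python
from qiskit import QuantumCircuit

# Two qubits and two classical bits to hold the measurement results.
qc = QuantumCircuit(2, 2)
qc.h(0)                     # superposition on qubit 0
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # read both qubits out

print(qc.draw())            # ASCII rendering of the circuit
```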

    The startup community is also getting excited. A short while after seeing IBM’s quantum computer, I went to the University of Toronto’s business school to sit in on a pitch competition for quantum startups. Teams of entrepreneurs nervously got up and presented their ideas to a group of professors and investors. One company hoped to use quantum computers to model the financial markets. Another planned to have them design new proteins. Yet another wanted to build more advanced AI systems. What went unacknowledged in the room was that each team was proposing a business built on a technology so revolutionary that it barely exists. Few seemed daunted by that fact.

    This enthusiasm could sour if the first quantum computers are slow to find a practical use. The best guess from those who truly know the difficulties – people like Bennett and Chuang – is that the first useful machines are still several years away. And that’s assuming the problem of managing and manipulating a large collection of qubits won’t ultimately prove intractable.

    Still, the experts hold out hope. When I asked him what the world might be like when my two-year-old son grows up, Chuang, who learned to use computers by playing with microchips, responded with a grin. “Maybe your kid will have a kit for building a quantum computer,” he said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:46 am on July 2, 2018
    Tags: Australia’s reputation for research integrity at the crossroads, Cosmos Magazine

    From COSMOS Magazine: “Australia’s reputation for research integrity at the crossroads” 


    From COSMOS Magazine

    02 July 2018
    David Vaux
    Peter Brooks
    Simon Gandevia

    Changes to Australia’s code of research conduct endanger its reputation for world-standard output.

    Researchers are under pressure to deliver publications and win grants. Shutterstock

    In 2018, Australia still does not have appropriate measures in place to maintain research integrity. And recent changes to our code of research conduct have weakened our already inadequate position.

    In contrast, China’s recent crackdown on academic misconduct brings it into line with more than twenty European countries, the UK, USA, Canada and others that have national offices for research integrity.

    Australia risks its reputation by turning in the opposite direction.

    Research integrity is vital

    Our confidence in science relies on its integrity – relating to both the research literature (its freedom from errors), and the researchers themselves (that they behave in a principled way).

    However, the pressures on scientists to publish and win grants can lead to misconduct. This can range from cherry-picking results that support a favoured hypothesis, to making up experimental, animal or patient results from thin air. A recent report found that around 1 in 25 papers contained duplicated images (inconsistent with good research practice), and about half of these had features suggesting deliberate manipulation.

    For science to progress efficiently, and to remain credible, we need good governance structures, and as transparent and open a system as possible. Measures are needed to identify and correct errors, and to rectify misbehaviour.

    In Australia, one such measure is the Australian Code for the Responsible Conduct of Research. But recently published revisions of this code allow research integrity to be handled internally by institutions, and investigations to be kept secret. This puts at risk the hundreds of millions of dollars provided by the taxpayer to fund research.

    As a nation, we can and must do much better, before those who invest in and conduct research go elsewhere – to countries that are serious about the governance of research integrity.

    Learning from experience – the Hall affair

    Developed jointly by the National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) and Universities Australia, the Australian Code for the Responsible Conduct of Research has the stated goal of improving research integrity in Australia.

    The previous version of the Australian Code was written in 2007, partly in response to the “Hall affair”.

    In 2001, complaints of research misconduct were levelled at Professor Bruce Hall, an immunologist at University of New South Wales (UNSW). After multiple inquiries, UNSW Vice Chancellor Rory Hume concluded that Hall was not guilty of scientific misconduct but had “committed errors of judgement sufficiently serious in two instances to warrant censure.” All allegations were denied by Hall.

    Commenting on the incident in 2004, Editor-in-Chief of the Medical Journal of Australia Martin Van Der Weyden highlighted the importance of external and independent review in investigating research practice:

    “The initial inquiry by the UNSW’s Dean of Medicine [was] patently crippled by perceptions of conflicts of interest — including an institution investigating allegations of improprieties carried out in its own backyard!

    Herein lies lesson number one — once allegations of scientific misconduct and fraud have been made, these should be addressed from the beginning by an external and independent inquiry.”

    An external and independent panel

    Avoiding conflicts of interest – real or perceived – was one of the reasons the 2007 version of the Australian Code required “institutions to establish independent external research misconduct inquiries to evaluate allegations of serious research misconduct that are contested.”

    But it seems this lesson has been forgotten. With respect to establishing a panel to investigate alleged misconduct, the revised Code says meekly:

    “There will be occasions where some or all members should be external to the institution.”

    Institutions will now be able to decide for themselves the terms of reference for investigations, and the number and composition of inquiry panels.

    Reducing research misconduct in Australia

    The chief justification for revising the 2007 Australian Code was to reduce research misconduct.

    In the initial 2016 draft, the committee charged with this task suggested simply removing the term “research misconduct” from the Code, meaning that research misconduct would no longer officially exist in Australia.

    Unsurprisingly, this created a backlash, and, in the final version of the revised Code, a definition of the term “research misconduct” has returned:

    “Research misconduct: a serious breach of the Code which is also intentional or reckless or negligent.”

    However, institutions now have the option of “whether and how to use the term ‘research misconduct’ in relation to serious breaches of the Code”.

    Principles not enough

    The new Code is split into a set of principles of responsible research conduct that lists the responsibilities of researchers and institutions, together with a set of guides. The first guide describes how potential breaches of the Code should be investigated and managed.

    The principles of responsible research conduct are fine, and exhort researchers to be honest and fair, rigorous and respectful. No one would have an issue with this.

    Similarly, no one would think it unreasonable that institutions also have responsibilities, such as to identify and comply with relevant laws, regulations, guidelines and policies related to the conduct of research.

    However, having a set of lofty principles alone is not sufficient; there also need to be mechanisms to ensure compliance, not just by researchers, but also by institutions.

    Transparency, accountability, and trust

    The new Code says that institutions must ensure that all investigations are confidential. There is no requirement to make the outcome public, only to “consider whether a public statement is appropriate to communicate the outcome of an investigation”.

    Combining mandatory confidentiality with self-regulation is bound to undermine trust in the governance of research integrity.

    In the new Code there is no mechanism for oversight. The outcome of a misconduct investigation can be appealed to the Australian Research Integrity Committee (ARIC), but only on the grounds of improper process, and not based on evidence or facts.

    Given that the conduct of investigations as well as the findings are to be confidential, it will be difficult to make an appeal to ARIC on any grounds.

    We need a national office of research integrity

    It is not clear why Australia does not learn from the experience of countries with independent agencies for research integrity, and adopt one of the models that is already working elsewhere in the world.

    Those who care about research and careers in research should ask their politicians and university Vice Chancellors why a national office of research integrity is necessary in the nations of Europe, the UK, US, Canada and now China, but not in Australia.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:21 am on June 12, 2018 Permalink | Reply
    Tags: , , , , , Cosmos Magazine,   

    From COSMOS Magazine: “‘Galactic archaeology’ provides clues to star formation, and origin of gold” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    12 June 2018
    Richard A Lovett

    Old stars in our own galaxy are yielding information that illuminates conditions in the early universe.

    1
    Analysing ancient neighbourhood stars is a promising avenue of research. Image credit temniy/Getty Images

    Cosmologists looking for fingerprints of the early universe need look no further than old stars in our own galaxy and its neighbours, astronomers say.

    Not that these stars date back to the dawn of time, but a few were formed when the universe was only a fraction of its current age, and the composition of their atmospheres reveals much about how conditions have changed between then and the time, much later, when our own sun was formed. They can even reveal the origin of important elements such as silver and gold.

    It’s a type of study that Gina Duggan, a graduate student in astrophysics at the California Institute of Technology in Pasadena, US, calls galactic archaeology. “[It] uses elements in stars alive today to probe the galaxy’s history,” she says.

    In fact, adds Timothy Beers, an astrophysicist at The University of Notre Dame, in Indiana, US, it’s not just our own galaxy’s history that can be probed in this manner. Such stars provide clues to conditions throughout the early universe.

    Both researchers recently presented their ideas to the annual meeting of the American Astronomical Society in Denver, US.

    The first stars, cosmologists believe, were composed entirely of hydrogen and helium — the only elements formed directly in the Big Bang. These elements still compose the bulk of today’s stars; the sun, for instance, is 98% hydrogen and helium.

    But there’s a big difference between 98% and 100%. Pure hydrogen and helium stars tend to be hot and big, burning bright and dying young in giant explosions. In the process, they spray other elements into the cosmos – elements that enrich the next generation of stars, building toward the 2% of heavier elements found in the sun.

    Such chemically enriched stars, Beers says, don’t necessarily burn as brightly or die as young. Some can be smaller, with lifetimes of 10 billion or more years. “These low-mass stars we can still see today,” he says.

    Spectroscopic analysis can determine how much “pollution” these stars picked up from materials ejected by their predecessors. This lets astronomers pick out early second-generation stars from the other stars populating the Milky Way galaxy and its neighbours, allowing them to be used as cosmological time capsules.
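    To make “picking out” such stars concrete, astronomers use the metallicity index [Fe/H] – the logarithm of a star’s iron-to-hydrogen ratio relative to the Sun’s. Here is a minimal Python sketch of that selection; the solar ratio and the survey values are illustrative assumptions, not data from the article:

    import math

    # [Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_sun.
    # Stars below roughly [Fe/H] = -3 are conventionally called
    # "extremely metal-poor" and are candidate early-generation stars.
    def fe_h(iron_to_hydrogen, solar_ratio=3.2e-5):  # solar ratio is a rough, assumed value
        return math.log10(iron_to_hydrogen) - math.log10(solar_ratio)

    # Hypothetical measurements: (star, iron-to-hydrogen number ratio)
    survey = [("halo star A", 8.0e-9), ("disk star B", 2.5e-5)]

    for name, ratio in survey:
        index = fe_h(ratio)
        label = "candidate time capsule" if index < -3 else "later generation"
        print(f"{name}: [Fe/H] = {index:+.1f} -> {label}")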

    “We can learn about the chemistry of the very early universe right in our own backyard, not just from studying faint sources more than 10 billion light years away,” Beers says.

    In fact, one of these stars, known as BD+44:493, is only 600 light years away.

    “It’s visible with binoculars,” Beers exclaims. “But it’s preserving stuff from the early universe!”

    Kris Youakim of the Leibniz Institute for Astrophysics in Potsdam, Germany, adds that such stars can also be used to study the way large galaxies like our own were formed by mergers of numerous smaller ones. Such mergers, he says, tore the smaller galaxies apart, producing long “spaghettified” streamers.

    But by using old stars similar to those studied by Beers as markers, he adds, it’s possible to find these streamers and trace the history of how our galaxy came together.

    Other researchers believe that nearby dwarf galaxies that have not yet merged into larger galaxies are good laboratories for understanding processes in the early universe, where dwarf galaxies dominated.

    Small Magellanic Cloud. NASA/ESA Hubble and ESO/Digitized Sky Survey 2

    Large Magellanic Cloud. Adrian Pingstone December 2003

    Magellanic Bridge. ESA Gaia satellite. Image credit: V. Belokurov / D. Erkal / A. Mellinger.

    “This is an under-utilised but important way to get at where and how the first stars might have formed and the kind of galaxies that helped,” says Aparna Venkatesan of the University of San Francisco, California, US.

    But the most exciting find involves the origin of the Earth’s gold.

    Geologically, of course, we know it comes from gold mines. But before the Earth was formed there had to have been gold in the dust cloud that created the solar system, and there are two theories for how that gold could have been made.

    One, says Duggan, is that it was formed in the heart of giant stellar explosions called magnetorotational supernovae. Another is that it was made in an equally titanic process: the collision of the remnants of dead stars known as neutron stars.

    The former tended to occur early in the universe’s history, when giant stars met their catastrophic ends. The latter mostly came later, following the deaths of later-generation stars.

    To figure out which it was, Duggan’s team looked at the concentration of a related element, barium, in stars of a variety of ages. By comparing the amount of barium to that of iron, which is known to build up steadily with each new generation of stars, she was able to determine whether barium – acting as a proxy for gold – appeared on the scene early, a sign of production in magnetorotational supernovae, or more recently, a sign of production in neutron star collisions.
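    The logic of that comparison boils down to a few lines. In the toy Python sketch below (the abundance values are invented for illustration, not Duggan’s data), [Fe/H] acts as the clock, and the question is whether [Ba/Fe] is already high in the oldest, most iron-poor stars or rises only later:

    # ([Fe/H], [Ba/Fe]) for hypothetical stars, from oldest (iron-poor)
    # to youngest (iron-rich); values invented for illustration.
    samples = [(-3.0, -1.2), (-2.5, -0.9), (-2.0, -0.4),
               (-1.5, -0.1), (-1.0, +0.1)]

    old = [ba for feh, ba in samples if feh < -2.0]
    young = [ba for feh, ba in samples if feh >= -2.0]

    def mean(xs):
        return sum(xs) / len(xs)

    if mean(young) - mean(old) > 0.3:   # barium rises late relative to iron
        print("late rise -> points to neutron star mergers")
    else:
        print("early presence -> points to magnetorotational supernovae")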

    Evan Kirby, a researcher on the project, calls it another example of galactic archaeology in operation.

    “This study … used elements present in stars today to ‘dig up’ evidence of the history of element production in galaxies,” he says.

    “By measuring the ratio of elements in stars with different ages, we are able to say when these elements were created.”

    The conclusion: gold and related elements were largely formed later on, in neutron star collisions.

    UC Santa Cruz

    14

    A UC Santa Cruz special report

    Tim Stephens

    Astronomer Ryan Foley says “observing the explosion of two colliding neutron stars” [see https://sciencesprings.wordpress.com/2017/10/17/from-ucsc-first-observations-of-merging-neutron-stars-mark-a-new-era-in-astronomy ] – the first visible event ever linked to gravitational waves – is probably the biggest discovery he’ll make in his lifetime. That’s saying a lot for a young assistant professor who presumably has a long career still ahead of him.

    2
    The first optical image of a gravitational wave source was taken by a team led by Ryan Foley of UC Santa Cruz using the Swope Telescope at the Carnegie Institution’s Las Campanas Observatory in Chile. This image of Swope Supernova Survey 2017a (SSS17a, indicated by arrow) shows the light emitted from the cataclysmic merger of two neutron stars. (Image credit: 1M2H Team/UC Santa Cruz & Carnegie Observatories/Ryan Foley)

    Carnegie Institution Swope telescope at Las Campanas Observatory, Chile, 100 kilometres (62 mi) northeast of the city of La Serena, near the north end of a 7 km (4.3 mi) long mountain ridge. Cerro Las Campanas, near the southern end, rises over 2,500 m (8,200 ft) high.

    A neutron star forms when a massive star runs out of fuel and explodes as a supernova, throwing off its outer layers and leaving behind a collapsed core composed almost entirely of neutrons. Neutrons are the uncharged particles in the nucleus of an atom, where they are bound together with positively charged protons. In a neutron star, they are packed together just as densely as in the nucleus of an atom, resulting in an object with one to three times the mass of our sun but only about 12 miles wide.

    “Basically, a neutron star is a gigantic atom with the mass of the sun and the size of a city like San Francisco or Manhattan,” said Foley, an assistant professor of astronomy and astrophysics at UC Santa Cruz.

    These objects are so dense that a cup of neutron star material would weigh as much as Mount Everest, and a teaspoon would weigh a billion tons. It’s as dense as matter can get without collapsing into a black hole.
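    Those density claims are easy to sanity-check with back-of-envelope arithmetic. A quick Python calculation, assuming a 1.5-solar-mass star about 12 miles (19 km) across:

    import math

    M_SUN = 1.989e30                 # kg
    mass = 1.5 * M_SUN               # a typical neutron star mass
    radius = 19e3 / 2                # 19 km diameter, in metres

    volume = (4 / 3) * math.pi * radius ** 3
    density = mass / volume                        # kg per cubic metre

    teaspoon_m3 = 5e-6               # a 5 ml teaspoon, in cubic metres
    print(f"density:  {density:.1e} kg/m^3")
    print(f"teaspoon: {density * teaspoon_m3 / 1000:.1e} tonnes")  # ~ billions of tons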

    THE MERGER

    Like other stars, neutron stars sometimes occur in pairs, orbiting each other and gradually spiraling inward. Eventually, they come together in a catastrophic merger that distorts space and time (creating gravitational waves) and emits a brilliant flare of electromagnetic radiation, including visible, infrared, and ultraviolet light, x-rays, gamma rays, and radio waves. Merging black holes also create gravitational waves, but there’s nothing to be seen because no light can escape from a black hole.

    Foley’s team was the first to observe the light from a neutron star merger that took place on August 17, 2017, and was detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO).


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    1
    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    Now, for the first time, scientists can study both the gravitational waves (ripples in the fabric of space-time), and the radiation emitted from the violent merger of the densest objects in the universe.

    3
    The UC Santa Cruz team found SSS17a by comparing a new image of the galaxy NGC 4993 (right) with images taken four months earlier by the Hubble Space Telescope (left). The arrows indicate where SSS17a was absent from the Hubble image and visible in the new image from the Swope Telescope. (Image credits: Left, Hubble/STScI; Right, 1M2H Team/UC Santa Cruz & Carnegie Observatories/Ryan Foley)

    It’s that combination of data, and all that can be learned from it, that has astronomers and physicists so excited. The observations of this one event are keeping hundreds of scientists busy exploring its implications for everything from fundamental physics and cosmology to the origins of gold and other heavy elements.


    A small team of UC Santa Cruz astronomers was the first to observe light from two neutron stars merging in August. The implications are huge.

    ALL THE GOLD IN THE UNIVERSE

    It turns out that the origins of the heaviest elements, such as gold, platinum, uranium—pretty much everything heavier than iron—have been an enduring conundrum. All the lighter elements have well-explained origins in the nuclear fusion reactions that make stars shine or in the explosions of stars (supernovae). Initially, astrophysicists thought supernovae could account for the heavy elements, too, but there have always been problems with that theory, says Enrico Ramirez-Ruiz, professor and chair of astronomy and astrophysics at UC Santa Cruz.

    4
    The violent merger of two neutron stars is thought to involve three main energy-transfer processes, shown in this diagram, that give rise to the different types of radiation seen by astronomers, including a gamma-ray burst and a kilonova explosion seen in visible light. (Image credit: Murguia-Berthier et al., Science)

    A theoretical astrophysicist, Ramirez-Ruiz has been a leading proponent of the idea that neutron star mergers are the source of the heavy elements. Building a heavy atomic nucleus means adding a lot of neutrons to it. This process is called rapid neutron capture, or the r-process, and it requires some of the most extreme conditions in the universe: extreme temperatures, extreme densities, and a massive flow of neutrons. A neutron star merger fits the bill.
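    A cartoon version of the r-process helps show why “rapid” matters: as long as a nucleus captures neutrons faster than it beta-decays, it climbs in mass number before decaying back toward stability. The rates in this Python toy are invented purely for illustration:

    capture_rate = 1e3    # neutron captures per second in the merger's neutron flood (invented)
    decay_rate = 1.0      # beta decays per second for an unstable nucleus (invented)

    mass_number = 56      # start from iron, where fusion in stellar cores stops
    for _ in range(141):  # follow a brief burst of captures
        if capture_rate > decay_rate:
            mass_number += 1          # rapid capture wins the race
    print(f"reached mass number ~ {mass_number}")   # 197 is gold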

    Ramirez-Ruiz and other theoretical astrophysicists use supercomputers to simulate the physics of extreme events like supernovae and neutron star mergers. This work always goes hand in hand with observational astronomy. Theoretical predictions tell observers what signatures to look for to identify these events, and observations tell theorists if they got the physics right or if they need to tweak their models. The observations by Foley and others of the neutron star merger now known as SSS17a are giving theorists, for the first time, a full set of observational data to compare with their theoretical models.

    According to Ramirez-Ruiz, the observations support the theory that neutron star mergers can account for all the gold in the universe, as well as about half of all the other elements heavier than iron.

    RIPPLES IN THE FABRIC OF SPACE-TIME

    Einstein predicted the existence of gravitational waves in 1916 in his general theory of relativity, but until recently they were impossible to observe. LIGO’s extraordinarily sensitive detectors achieved the first direct detection of gravitational waves, from the collision of two black holes, in 2015. Gravitational waves are created by any massive accelerating object, but the strongest waves (and the only ones we have any chance of detecting) are produced by the most extreme phenomena.

    Two massive compact objects—such as black holes, neutron stars, or white dwarfs—orbiting around each other faster and faster as they draw closer together are just the kind of system that should radiate strong gravitational waves. Like ripples spreading in a pond, the waves get smaller as they spread outward from the source. By the time they reached Earth, the ripples detected by LIGO caused distortions of space-time thousands of times smaller than the nucleus of an atom.
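    That figure follows directly from the measured strain. A quick Python check, using representative numbers (a strain of about 1e-21 and a heavy nucleus about 1e-14 m across; both are assumptions, not values from the article):

    strain = 1e-21        # dimensionless strain h, typical of LIGO detections
    arm = 4_000.0         # LIGO arm length in metres
    nucleus = 1e-14       # rough diameter of a heavy atomic nucleus, metres

    delta_l = strain * arm            # change in arm length as the wave passes
    print(f"arm stretch: {delta_l:.0e} m, "
          f"~{nucleus / delta_l:,.0f}x smaller than the nucleus")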

    The faint signals recorded by LIGO’s detectors not only prove the existence of gravitational waves, they also provide crucial information about the events that produced them. Combined with the telescope observations of the neutron star merger, it’s an incredibly rich set of data.

    LIGO can tell scientists the masses of the merging objects and the mass of the new object created in the merger, which reveals whether the merger produced another neutron star or a more massive object that collapsed into a black hole. To calculate how much mass was ejected in the explosion, and how much mass was converted to energy, scientists also need the optical observations from telescopes. That’s especially important for quantifying the nucleosynthesis of heavy elements during the merger.

    LIGO can also provide a measure of the distance to the merging neutron stars, which can now be compared with the distance measurement based on the light from the merger. That’s important to cosmologists studying the expansion of the universe, because the two measurements are based on different fundamental forces (gravity and electromagnetism), giving completely independent results.
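    In its simplest form this comparison is just Hubble’s law, v = H0 × d, with the distance d supplied by the gravitational waves instead of the traditional distance ladder. A sketch with round numbers in the right neighbourhood for GW170817 (illustrative, not the published analysis):

    # Hubble's law: recession velocity = H0 * distance.
    distance_mpc = 40.0       # GW-inferred distance, roughly right for GW170817
    velocity_km_s = 3000.0    # host galaxy's recession velocity (illustrative)

    h0 = velocity_km_s / distance_mpc
    print(f"H0 ~ {h0:.0f} km/s per Mpc")   # ~75, in the expected ballpark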

    “This is a huge step forward in astronomy,” Foley said. “Having done it once, we now know we can do it again, and it opens up a whole new world of what we call ‘multi-messenger’ astronomy, viewing the universe through different fundamental forces.”


    5
    Graduate students and post-doctoral scholars at UC Santa Cruz played key roles in the dramatic discovery and analysis of colliding neutron stars.

    Astronomer Ryan Foley leads a team of young graduate students and postdoctoral scholars who have pulled off an extraordinary coup. Following up on the detection of gravitational waves from the violent merger of two neutron stars, Foley’s team was the first to find the source with a telescope and take images of the light from this cataclysmic event. In so doing, they beat much larger and more senior teams with much more powerful telescopes at their disposal.

    “We’re sort of the scrappy young upstarts who worked hard and got the job done,” said Foley, an untenured assistant professor of astronomy and astrophysics at UC Santa Cruz.

    7
    David Coulter, graduate student

    The discovery on August 17, 2017, has been a scientific bonanza, yielding over 100 scientific papers from numerous teams investigating the new observations. Foley’s team is publishing seven papers, each of which has a graduate student or postdoc as the first author.

    “I think it speaks to Ryan’s generosity and how seriously he takes his role as a mentor that he is not putting himself front and center, but has gone out of his way to highlight the roles played by his students and postdocs,” said Enrico Ramirez-Ruiz, professor and chair of astronomy and astrophysics at UC Santa Cruz and the most senior member of Foley’s team.

    “Our team is by far the youngest and most diverse of all of the teams involved in the follow-up observations of this neutron star merger,” Ramirez-Ruiz added.

    8
    Charles Kilpatrick, postdoctoral scholar

    Charles Kilpatrick, a 29-year-old postdoctoral scholar, was the first person in the world to see an image of the light from colliding neutron stars. He was sitting in an office at UC Santa Cruz, working with first-year graduate student Cesar Rojas-Bravo to process image data as it came in from the Swope Telescope in Chile. To see if the Swope images showed anything new, he had also downloaded “template” images taken in the past of the same galaxies the team was searching.
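    At its heart, that comparison is image differencing: subtract a registered template from the new frame and flag significant residuals. A minimal numpy sketch of the idea (real pipelines also align the frames, match their point-spread functions, and vet candidates; the injected source here is fake):

    import numpy as np

    def find_transients(new_frame, template, n_sigma=5.0):
        """Flag pixels where the new frame brightens well beyond the noise."""
        diff = new_frame - template
        return np.argwhere(diff > n_sigma * diff.std())

    rng = np.random.default_rng(0)
    template = rng.normal(100.0, 5.0, size=(64, 64))     # fake galaxy field
    new_frame = template + rng.normal(0.0, 5.0, size=(64, 64))
    new_frame[40, 22] += 500.0                           # inject a fake transient

    print(find_transients(new_frame, template))          # -> [[40 22]]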

    9
    Ariadna Murguia-Berthier, graduate student

    “In one image I saw something there that was not in the template image,” Kilpatrick said. “It took me a while to realize the ramifications of what I was seeing. This opens up so much new science, it really marks the beginning of something that will continue to be studied for years down the road.”

    At the time, Foley and most of the others in his team were at a meeting in Copenhagen. When they found out about the gravitational wave detection, they quickly got together to plan their search strategy. From Copenhagen, the team sent instructions to the telescope operators in Chile telling them where to point the telescope. Graduate student David Coulter played a key role in prioritizing the galaxies they would search to find the source, and he is the first author of the discovery paper published in Science.
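    The prioritisation Coulter worked on can be captured in a few lines: rank the galaxies inside the gravitational-wave localisation region by, for example, localisation probability weighted by luminosity, since brighter galaxies hold more stars and hence more potential mergers. This is a plausible heuristic sketched in Python with made-up numbers, not the team’s published scheme:

    # Hypothetical catalogue: (galaxy, localisation probability at its
    # position, luminosity in units of 1e10 solar luminosities).
    catalogue = [("NGC 4993", 0.012, 3.0),
                 ("galaxy B", 0.020, 0.4),
                 ("galaxy C", 0.004, 5.0)]

    ranked = sorted(catalogue, key=lambda g: g[1] * g[2], reverse=True)
    for name, prob, lum in ranked:
        print(f"{name}: score = {prob * lum:.4f}")   # observe highest first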

    10
    Matthew Siebert, graduate student

    “It’s still a little unreal when I think about what we’ve accomplished,” Coulter said. “For me, despite the euphoria of recognizing what we were seeing at the moment, we were all incredibly focused on the task at hand. Only afterward did the significance really sink in.”

    Just as Coulter finished writing his paper about the discovery, his wife went into labor, giving birth to a baby girl on September 30. “I was doing revisions to the paper at the hospital,” he said.

    It’s been a wild ride for the whole team, first in the rush to find the source, and then under pressure to quickly analyze the data and write up their findings for publication. “It was really an all-hands-on-deck moment when we all had to pull together and work quickly to exploit this opportunity,” said Kilpatrick, who is first author of a paper comparing the observations with theoretical models.

    11
    César Rojas Bravo, graduate student

    Graduate student Matthew Siebert led a paper analyzing the unusual properties of the light emitted by the merger. Astronomers have observed thousands of supernovae (exploding stars) and other “transients” that appear suddenly in the sky and then fade away, but never before have they observed anything that looks like this neutron star merger. Siebert’s paper concluded that there is only a one in 100,000 chance that the transient they observed is not related to the gravitational waves.

    Ariadna Murguia-Berthier, a graduate student working with Ramirez-Ruiz, is first author of a paper synthesizing data from a range of sources to provide a coherent theoretical framework for understanding the observations.

    Another aspect of the discovery of great interest to astronomers is the nature of the galaxy and the galactic environment in which the merger occurred. Postdoctoral scholar Yen-Chen Pan led a paper analyzing the properties of the host galaxy. Enia Xhakaj, a new graduate student who had just joined the group in August, got the opportunity to help with the analysis and be a coauthor on the paper.

    12
    Yen-Chen Pan, postdoctoral scholar

    “There are so many interesting things to learn from this,” Foley said. “It’s a great experience for all of us to be part of such an important discovery.”

    13
    Enia Xhakaj, graduate student


    Scientific Papers from the 1M2H Collaboration

    Coulter et al., Science, Swope Supernova Survey 2017a (SSS17a), the Optical Counterpart to a Gravitational Wave Source

    Drout et al., Science, Light Curves of the Neutron Star Merger GW170817/SSS17a: Implications for R-Process Nucleosynthesis

    Shappee et al., Science, Early Spectra of the Gravitational Wave Source GW170817: Evolution of a Neutron Star Merger

    Kilpatrick et al., Science, Electromagnetic Evidence that SSS17a is the Result of a Binary Neutron Star Merger

    Siebert et al., ApJL, The Unprecedented Properties of the First Electromagnetic Counterpart to a Gravitational-wave Source

    Pan et al., ApJL, The Old Host-galaxy Environment of SSS17a, the First Electromagnetic Counterpart to a Gravitational-wave Source

    Murguia-Berthier et al., ApJL, A Neutron Star Binary Merger Model for GW170817/GRB170817a/SSS17a

    Kasen et al., Nature, Origin of the heavy elements in binary neutron star mergers from a gravitational wave event

    Abbott et al., Nature, A gravitational-wave standard siren measurement of the Hubble constant (The LIGO Scientific Collaboration and The Virgo Collaboration, The 1M2H Collaboration, The Dark Energy Camera GW-EM Collaboration and the DES Collaboration, The DLT40 Collaboration, The Las Cumbres Observatory Collaboration, The VINROUGE Collaboration & The MASTER Collaboration)

    Abbott et al., ApJL, Multi-messenger Observations of a Binary Neutron Star Merger

    PRESS RELEASES AND MEDIA COVERAGE


    Watch Ryan Foley tell the story of how his team found the neutron star merger in the video below (2.5 hours).

    Press releases:

    UC Santa Cruz Press Release

    UC Berkeley Press Release

    Carnegie Institution of Science Press Release

    LIGO Collaboration Press Release

    National Science Foundation Press Release

    Media coverage:

    The Atlantic – The Slack Chat That Changed Astronomy

    Washington Post – Scientists detect gravitational waves from a new kind of nova, sparking a new era in astronomy

    New York Times – LIGO Detects Fierce Collision of Neutron Stars for the First Time

    Science – Merging neutron stars generate gravitational waves and a celestial light show

    CBS News – Gravitational waves – and light – seen in neutron star collision

    CBC News – Astronomers see source of gravitational waves for 1st time

    San Jose Mercury News – A bright light seen across the universe, proving Einstein right

    Popular Science – Gravitational waves just showed us something even cooler than black holes

    Scientific American – Gravitational Wave Astronomers Hit Mother Lode

    Nature – Colliding stars spark rush to solve cosmic mysteries

    National Geographic – In a First, Gravitational Waves Linked to Neutron Star Crash

    Associated Press – Astronomers witness huge cosmic crash, find origins of gold

    Science News – Neutron star collision showers the universe with a wealth of discoveries

    UCSC press release
    First observations of merging neutron stars mark a new era in astronomy

    Credits

    Writing: Tim Stephens
    Video: Nick Gonzales
    Photos: Carolyn Lagattuta
    Header image: Illustration by Robin Dienel courtesy of the Carnegie Institution for Science
    Design and development: Rob Knight
    Project managers: Sherry Main, Scott Hernandez-Jason, Tim Stephens

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7200 feet.

    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile, at an altitude of 7200 feet

    Noted in the video but not in the article:

    NASA/Chandra Telescope

    NASA/SWIFT Telescope

    NRAO/Karl V Jansky VLA, on the Plains of San Agustin fifty miles west of Socorro, NM, USA

    Prompt telescope CTIO Chile

    NASA NuSTAR X-ray telescope

    See the full UCSC article here.

    Without such impacts, perhaps everything from gold rushes to the history of precious coins might have been entirely different.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     