Tagged: Cosmos Magazine

  • richardmitnick 10:32 am on December 4, 2018 Permalink | Reply
    Tags: Cosmos Magazine, First stars may have been in massive dark matter halos, Institute for Advanced Study in Princeton New Jersey US, New observations challenge universe model

    From COSMOS Magazine: “New observations challenge universe model” 

    From COSMOS Magazine

    04 December 2018
    Lauren Fuge

    First stars may have been in massive dark matter halos.

    The cosmic microwave background, captured by NASA’s Wilkinson Microwave Anisotropy Probe. NASA

    Observations of the very first stars to form might change accepted models of the dawn of the universe.

    A team of astronomers led by Alexander Kaurov of the Institute for Advanced Study in New Jersey, US, says these observations may indicate that the majority of the first stellar generation were located in rare and massive dark matter halos.

    First, though, a quick cosmology refresher.

    Current models tell us that for almost 400,000 years after the Big Bang, the universe was so hot that atoms couldn’t form yet. All that existed was a searing soup of plasma, with photons trapped within it like a fog. But when the universe finally cooled enough for protons and electrons to combine into hydrogen atoms, those photons escaped.

    Today, this jailbreak radiation is known as the cosmic microwave background (CMB). It’s like the universe’s baby photo, and by studying it and the tiny fluctuations within it, we can learn about the system’s infancy and how stars and galaxies began to form.

    The first generation of stars appear so faint and distant that they’re difficult to detect directly. However, astronomers theorised that these stars emitted ultraviolet radiation that heated up the gas around them, which in turn absorbed some of the CMB – at radio wavelengths of 21 centimetres, to be specific.

    In March 2018, the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) detected this signal as a small distortion in the CMB, like a fingerprint of the first stars.
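
    As a rough consistency check of these numbers (the ~78 MHz figure comes from the published EDGES result rather than from this article), the redshift of the absorbing gas follows directly from how far the 21-centimetre line has been stretched. A minimal sketch in Python:

        # The 21 cm hydrogen line is emitted at ~1420 MHz; the EDGES absorption
        # dip was reported centred near 78 MHz (assumed here, not stated above).
        rest_frequency_mhz = 1420.4
        observed_frequency_mhz = 78.0
        redshift = rest_frequency_mhz / observed_frequency_mhz - 1
        print(round(redshift, 1))   # ~17.2: light from the universe's first few hundred million years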

    EDGES telescope in a radio quiet zone at the Murchison Radio-astronomy Observatory in Western Australia.

    But upon analysis, the EDGES team realised that the signal’s shape was much deeper than predicted, with sharper boundaries.

    A number of studies have since attempted to explain the unexpected depth, using new physics or astrophysics. Now, Kaurov’s team at the Institute for Advanced Study has tackled the signal’s sharp boundaries.

    In a study published in The Astrophysical Journal Letters, he and co-authors argue that this feature indicates that as the first stars lit up, ultraviolet photons flooded the universe much more quickly than expected. The team’s computer simulations showed that this suddenness would occur naturally if the first stars were concentrated in the most massive and rarest dark matter halos – rather than distributed evenly throughout the universe as previously thought.

    These halos, with masses more than a billion times that of our Sun, exploded in number in the universe’s infancy and could have easily produced the huge influx of ultraviolet photons necessary to explain the EDGES signal.

    If this scenario is correct, then these rare halos might be bright enough to be observed by the James Webb Space Telescope, which will launch in 2021.

    Time, thus, in more ways than one, will tell.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:20 pm on November 7, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Demonstrated that there is an upper limit – now called the Chandrasekhar limit – to the mass of a white dwarf star

    From COSMOS Magazine: “Science history: The astrophysicist who defined how stars behave” Subrahmanyan Chandrasekhar 

    From COSMOS Magazine

    07 November 2018
    Jeff Glorfeld

    Subrahmanyan Chandrasekhar meets the press in 1983, shortly after winning the Nobel Prize. Bettmann / Contributor / Getty Images

    Subrahmanyan Chandrasekhar was so influential, NASA honoured him by naming an orbiting observatory after him.

    NASA/Chandra X-ray Telescope

    The NASA webpage devoted to astrophysicist Subrahmanyan Chandrasekhar says he “was known to the world as Chandra. The word chandra means ‘moon’ or ‘luminous’ in Sanskrit.”

    Subrahmanyan Chandrasekhar was born on October 19, 1910, in Lahore, then part of British India and now in Pakistan. NASA says that he was “one of the foremost astrophysicists of the 20th century. He was one of the first scientists to couple the study of physics with the study of astronomy.”

    The Encyclopaedia Britannica adds that, with William A. Fowler, he won the 1983 Nobel Prize for physics, “for key discoveries that led to the currently accepted theory on the later evolutionary stages of massive stars”.

    According to an entry on the website of the Harvard-Smithsonian Center for Astrophysics, early in his career, between 1931 and 1935, he demonstrated that there is an upper limit – now called the Chandrasekhar limit – to the mass of a white dwarf star.

    “This discovery is basic to much of modern astrophysics, since it shows that stars much more massive than the Sun must either explode or form black holes,” the article explains.

    When he first proposed his theory, however, it was opposed by many, including Albert Einstein, “who refused to believe that Chandrasekhar’s findings could result in a star collapsing down to a point”.

    Writing for the Nobel Prize committee, Chandra described how he approached a project.

    “My scientific work has followed a certain pattern, motivated, principally, by a quest after perspectives,” he wrote.

    “In practice, this quest has consisted in my choosing (after some trials and tribulations) a certain area which appears amenable to cultivation and compatible with my taste, abilities, and temperament. And when, after some years of study, I feel that I have accumulated a sufficient body of knowledge and achieved a view of my own, I have the urge to present my point of view, ab initio, in a coherent account with order, form, and structure.

    “There have been seven such periods in my life: stellar structure, including the theory of white dwarfs (1929-1939); stellar dynamics, including the theory of Brownian motion (1938-1943); the theory of radiative transfer, including the theory of stellar atmospheres and the quantum theory of the negative ion of hydrogen and the theory of planetary atmospheres, including the theory of the illumination and the polarisation of the sunlit sky (1943-1950); hydrodynamic and hydromagnetic stability, including the theory of the Rayleigh-Benard convection (1952-1961); the equilibrium and the stability of ellipsoidal figures of equilibrium, partly in collaboration with Norman R. Lebovitz (1961-1968); the general theory of relativity and relativistic astrophysics (1962-1971); and the mathematical theory of black holes (1974-1983).”

    In 1999, four years after his death on August 21, 1995, NASA launched an x-ray observatory named Chandra, in his honour. The observatory studies the universe in the x-ray portion of the electromagnetic spectrum.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:55 pm on September 24, 2018 Permalink | Reply
    Tags: Cosmos Magazine

    From COSMOS Magazine: “A galactic near-miss set stars on an unexpected path around the Milky Way” 

    From COSMOS Magazine

    24 September 2018
    Ben Lewis

    A close pass from the Sagittarius dwarf galaxy sent ripples through the Milky Way that are still visible today.

    Image Credit: R. Ibata (UBC), R. Wyse (JHU), R. Sword (IoA)

    Milky Way NASA/JPL-Caltech /ESO R. Hurt

    Tiny galaxy; big trouble. Gaia imaging shows the Sagittarius galaxy, circled in red. ESA/Gaia/DPAC

    ESA/GAIA satellite

    Between 300 and 900 million years ago, the Sagittarius dwarf galaxy made a close pass by the Milky Way, setting millions of stars in motion, like ripples on a pond. The after-effects of that galactic near miss are still visible today, according to newly published findings.

    The unique pattern of stars left over from the event was detected by the European Space Agency’s star mapping mission, Gaia. The details are contained in a paper written by Teresa Antoja and colleagues from the Universitat de Barcelona in Spain, and published in the journal Nature.

    The movements of over six million stars in the Milky Way were tracked by Gaia to reveal that groups of them follow different courses as they orbit the galactic centre.

    In particular, the researchers found a pattern that resembled a snail shell in a graph that plotted star altitudes above or below the plane of the galaxy, measured against their velocity in the same direction. This is not to say that the stars themselves are moving in a spiral, but rather that the roughly circular orbits correlate with up-and-down motion in a pattern that has never been seen before.

    While some perturbations in densities and velocities had been seen previously, it was generally assumed that the movement of the disk’s stars was largely in dynamic equilibrium and symmetric about the galactic plane. Instead, Antoja’s team discovered something had knocked the disk askew.

    “It is a bit like throwing a stone in a pond, which displaces the water as ripples and waves,” she explains.

    Whereas water will eventually settle after being disturbed, a star’s motion carries signatures of the change in movement. While the ripples in the stellar distribution caused by Sagittarius passing by have evened out, the motions of the stars themselves still carry the pattern.

    “At the beginning the features were very weird to us,” says Antoja. “I was a bit shocked and I thought there could be a problem with the data because the shapes are so clear.”

    The new revelations came about because of a huge increase in quality of the Gaia data, compared to what had been captured previously. The new information provided, for the first time, a measurement of three-dimensional speeds for the stars. This allowed the study of stellar motion using the combination of position and velocity, known as “phase space”.

    “It looks like suddenly you have put the right glasses on and you see all the things that were not possible to see before,” says Antoja.

    Computer models suggest the disturbance occurred between 300 and 900 million years ago – a point in time when it’s known the Sagittarius galaxy came near ours.

    In cosmic terms, that’s not very long ago, which also came as a surprise. It was known that the Milky Way had endured some much earlier collisions – smashing into a dwarf galaxy some 10 billion years ago, for instance – but until now more recent events had not been suspected. The Gaia results have changed that view.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:39 am on September 18, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Super-Kamioka Neutrino Detection Experiment at Kamioka Observatory Tokyo Japan

    From COSMOS Magazine: “Hints of a fourth type of neutrino create more confusion” 

    From COSMOS Magazine

    18 September 2018
    Katie Mack

    Anomalous experimental results hint at the possibility of a fourth kind of neutrino, but more data only makes the situation more confusing.

    Inside the Super-Kamioka Neutrino Detection Experiment at Kamioka Observatory, Tokyo, Japan. Credit: Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), The University of Tokyo

    It was a balmy summer in 1998 when I first became aware of the confounding weirdness of neutrinos. I have vivid memories of that day, as an embarrassingly young student researcher, walking along a river in Japan, listening to a graduate student tell me about her own research project: an attempt to solve a frustrating neutrino–related mystery. We were both visiting a giant detector experiment called Super-Kamiokande, in the heady days right after it released data that forever altered the Standard Model of Particle Physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    What Super-K found was that neutrinos – ghostly, elusive particles that are produced in the hearts of stars and can pass through the whole Earth with only a minuscule chance of interacting with anything – have mass.

    A particle having mass might not sound like a big deal, but the original version of the otherwise fantastically successful Standard Model described neutrinos as massless – just like photons, the particles that carry light and other electromagnetic waves. Unlike photons, however, neutrinos come in three ‘flavours’: electron, muon, and tau.

    Super-K’s discovery was that neutrinos could change from one flavour to another as they travelled, in a process called oscillation. This can only happen if the three flavours have different masses from one another, which means they can’t be massless.

    Another anomaly, meanwhile, hinted that there might be a fourth neutrino, one invisible in experiments.

    This discovery was a big deal, but it wasn’t the mystery the grad student was working to solve. A few years before, an experiment called the Liquid Scintillator Neutrino Detector (LSND), based in the US, had seen tantalising evidence that neutrinos were oscillating in a way that made no sense at all with the results of other experiments, including Super-K. The LSND finding indirectly suggested there had to be a fourth neutrino in the picture that the other neutrinos were sometimes oscillating into. This fourth neutrino would be invisible in experiments, lacking the kind of interactions that made the others detectable, which gave it the name ‘sterile neutrino’. And it would have to be much more massive than the other three.

    As I learned that day by the river, the result had persisted, unexplained, for years. Most people assumed something had gone wrong with the experiment, but no one knew what.

    In 2007, the plot thickened. An experiment called MiniBooNE, designed primarily to figure out what the heck happened with LSND, didn’t find the distribution of neutrinos it should have seen to confirm the LSND result.

    FNAL/MiniBooNE

    But some extra neutrinos did show up in MiniBooNE in a different energy range. They were inconsistent with LSND and every other experiment, perhaps suggesting the existence of even more flavours of neutrino.

    Meanwhile, experiments looking at neutrinos produced by nuclear reactors were seeing numbers that also couldn’t easily be explained without a sterile neutrino, though some physicists wrote these off as possibly due to calibration errors.

    And now the plot has grown even thicker.

    In May, MiniBooNE announced new results that seem more consistent with LSND, but even less palatable in the context of other experiments. MiniBooNE works by creating a beam of muon neutrinos and shooting them through the dirt at an underground detector 450 m away. The detector, meanwhile, is monitoring the arrival of electron neutrinos, in case any muon neutrinos are shape-shifting. More of these electron neutrinos turn up than standard neutrino models predict, which implies that some muon neutrinos transform by oscillating into sterile neutrinos too. (Technically, all neutrinos would be swapping around with all others, but this beam only makes sense if there’s an extra, massive one in the mix.)
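
    To see why the beam energy and the 450 m baseline matter, here is a minimal sketch of the standard two-flavour oscillation probability. The mixing parameters plugged in below are illustrative assumptions only, not fitted values from LSND or MiniBooNE:

        import math

        def appearance_probability(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
            """Two-flavour nu_mu -> nu_e appearance probability (standard approximation)."""
            return sin2_2theta * math.sin(1.27 * dm2_ev2 * baseline_km / energy_gev) ** 2

        # Illustrative numbers: a ~1 eV^2 mass splitting, small mixing, and a
        # MiniBooNE-like baseline (~0.45 km) and beam energy (~0.5 GeV).
        print(appearance_probability(0.003, 1.0, 0.45, 0.5))   # ~0.0025, a fraction of a percent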

    But there are several reasons this explanation is facing resistance. One is that experiments just looking for muon neutrinos disappearing (becoming sterile neutrinos or anything else) don’t find a consistent picture. Another is that if sterile neutrinos at the proposed mass exist, they should have been around in the very early universe, and measurements we have from the cosmic microwave background of the number of neutrino types kicking around then strongly suggest it was just the normal three.

    So, as usual, there’s more work to be done. A MiniBooNE follow-up called MicroBooNE is currently taking data and might make the picture clearer, and other experiments are on the way.

    FNAL/MicroBooNE

    It seems very likely that something strange is happening in the neutrino sector. It just remains to be seen exactly what, and how, over the next 20 years of constant neutrino bombardment, it will change our understanding of everything else.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:19 am on September 18, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Earth’s most volcanic places

    From COSMOS Magazine: “Earth’s most volcanic places” 

    From COSMOS Magazine

    18 September 2018
    Vhairi Mackintosh

    Some countries are famous for images of spewing lava and mountainous destruction. However, appearances can be deceiving. Not all volcanoes are the same.

    Credit: Stocktrek Images / Getty Images

    Volcanic activity. It’s the reason why the town of El Rodeo in Guatemala is currently uninhabitable, why the Big Island of Hawaii gained 1.5 kilometres of new coastline in June, and why Denpasar airport in Bali has closed twice this year.

    But these eruptions should not be seen as destructive attacks on certain places or the people that live in them. They have nothing to do even with the country that hosts them. They occur in specific regions because of much larger-scale processes originating deep within the Earth.

    According to the United States Geological Survey (USGS), approximately 1,500 potentially active volcanoes exist on land around the globe. Here’s a look at four of the world’s most volcanically active spots, and the different processes responsible for their eruptions. As you’ll see, there is no one-size-fits-all volcano.

    ICELAND

    Most volcanic eruptions go unnoticed. That’s because they happen continuously on the ocean floor where cracks in the Earth’s outer layer, the lithosphere (comprising the crust and solid upper mantle), form at so-called divergent plate boundaries. These margins form due to convection in the underlying mantle, which causes hot, less dense molten material, called magma, to rise to the surface. As it forces its way through the lithospheric plate, magma breaks the outer shell. Lava, the surface-equivalent of magma, fills the crack and pushes the broken pieces in opposite directions.

    Volcanism from this activity created Iceland. The country is located on the Mid-Atlantic Ridge, which forms the seam between the Eurasian and North American plates. Iceland is one of the few places where this type of spreading centre pops above sea level.

    However, volcanism on Iceland also happens because of its location over a hot spot. These spots develop above abnormally hot, deep regions of the mantle known as plumes.

    Each plume melts the overlying material and buoyant magma rises through the lithosphere – picture a lava lamp – to erupt at the surface.

    This volcanic double whammy produces both gentle fissure eruptions of basaltic lava as well as stratovolcanoes that are characterised by periodic non-explosive lava flows and explosive, pyroclastic eruptions, which produce clouds of ash, gas and debris.

    In 2010, the two-month eruption of the ice-capped Eyjafjallajökull stratovolcano – the one that no one outside Iceland can pronounce – attracted a lot of media attention because the resulting ash cloud grounded thousands of flights across Europe.

    Eruption at Fimmvörðuháls at dusk. Boaworm

    In fact, it was a relatively small eruption. It is believed that a major eruption in Iceland is long overdue. Four other volcanoes are all showing signs of increased activity, including the country’s most feared one, called Katla.

    Credit: Westend61 / Getty Images

    Photograph of Katla volcano erupting through the Mýrdalsjökull ice cap in 1918. Icelandic Glacial Landscapes, public domain.

    INDONESIA

    More than 197 million Indonesians live within 100 km of a volcano, with nearly nine million of those within 10 km. Indonesia has more volcanoes than any other country in the world. The 1815 eruption of its Mount Tambora still holds the record for the largest in recent history.

    Indonesia is one of many places located within the world’s most volcanically, and seismically, active zone, known as the Pacific Ring of Fire. This 40,000 km horseshoe-shaped region, bordering the Pacific Ocean, is where many tectonic plates bang into each other.

    In this so-called convergent plate boundary setting, the process of subduction generates volcanism. Subduction occurs because when two plates collide, the higher density plate containing oceanic crust sinks beneath another less dense plate, which contains either continental crust or younger, hotter and therefore less dense oceanic crust. As the plate descends into the mantle, it releases fluids that trigger melting of the overriding plate, thus producing magma. This then rises and erupts at the surface to form an arc-shaped chain of volcanoes, inward of, but parallel to, the subducting plate margin.

    Indonesia marks the junction between many converging plates and, thus, the subduction processes and volcanism are complex. Most of Indonesia’s volcanoes, however, are part of the Sunda Arc, an island volcanic range caused by the subduction of the Indo-Australian Plate beneath the Eurasian Plate. Volcanism in eastern Indonesia is mainly caused by the subduction of the Pacific Plate under the Eurasian Plate.

    The stratovolcanoes that form in convergent plate boundary settings are the most dangerous because they are characterised by incredibly fast, highly explosive pyroclastic flows. One of Indonesia’s stratovolcanoes, Mount Agung, erupted on 29 June for the second time in a year, spewing ash more than two km into the air and grounding hundreds of flights to the popular tourist destination, Bali.

    Mount Agung, November 2017 eruption – 27 Nov 2017. Michael W. Ishak (http://www.myreefsdiary.com)

    Credit: shayes17 / Getty Images

    GUATEMALA

    The June 3 eruption of the Guatemalan stratovolcano, Volcan de Fuego (Volcano of Fire), devastated Guatemalans, and the rest of the world, as horrifying images and videos of people trying to escape the quick-moving pyroclastic flow filled the news.

    Like Indonesia, Guatemala sits within the Ring of Fire, and the subduction-related processes that go along with that location are responsible for the volcanoes found here. On the other side of the Pacific Ocean from Indonesia, volcanism here is caused by the subduction of the much smaller Cocos Plate beneath the North American and Caribbean plates.

    Unlike Indonesia, however, the convergent boundary between these two plates occurs on land instead of within the ocean. Therefore, the Guatemalan arc does not form islands but a northwest-southeast trending chain of onshore volcanoes.

    The same process is responsible for the formation of the Andes – the world’s longest continental mountain range – further south along the western coast of South America. In this case, subduction of the Nazca-Antarctic Plate beneath the South American Plate causes volcanism in countries such as Chile and Peru.

    October 1974 eruption of Volcán de Fuego — seen from Antigua Guatemala, Guatemala. Paul Newton, Smithsonian Institution

    Credit: ShaneMyersPhoto / Getty Images

    HAWAII

    When someone mentions Hawaii, it’s hard not to picture a volcano. But Hawaii’s volcanoes are actually not typical. That’s because they are not found on a plate boundary. In fact, Hawaii is slap-bang in the middle of the Pacific Plate – the world’s largest.

    Like Iceland, Hawaii is also underlain by a hot spot. However, because the Pacific Plate is moving to the northwest over this relatively fixed mantle anomaly, the resulting volcanism creates a linear chain of islands within the Pacific Ocean. A volcano forming over the hot spot will be carried away, over millions of years, by the moving tectonic plate. As a new volcano begins to form, the older one becomes extinct, cools and sinks to form a submarine mountain. Through this process, the islands of Hawaii have been forming for the past 70 million years.
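
    A back-of-envelope calculation shows how this conveyor belt separates the islands. The ~8 centimetres per year used below is a commonly quoted speed for the Pacific Plate, an assumption rather than a figure from this article:

        # How far a hot-spot volcano is carried once it forms.
        plate_speed_m_per_year = 0.08      # assumed ~8 cm/yr Pacific Plate motion
        years = 5_000_000                  # five million years
        distance_km = plate_speed_m_per_year * years / 1000
        print(distance_km)                 # ~400 km of drift away from the hot spot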

    The typical shield volcanoes that form in this geological setting are produced from gentle eruptions of basaltic lava and are rarely explosive. The youngest Hawaiian shield volcano, Kilauea, erupted intensely on 3 May of this year, and 1,170 degree Celsius lava has been flowing over the island and into the ocean ever since. Kilauea, which has been continuously oozing since 1983, is regarded as one of the world’s most active volcanoes, if not the most.

    Looking up the slope of Kilauea, a shield volcano on the island of Hawaii. In the foreground, the Puu Oo vent has erupted fluid lava to the left. The Halemaumau crater is at the peak of Kilauea, visible here as a rising vapor column in the background. The peak behind the vapor column is Mauna Loa, a volcano that is separate from Kilauea. USGS

    An aerial view of the erupting Pu’u ‘O’o crater on Hawaii’s Kilauea volcano taken at dusk on June 29, 1983.
    Credit: G.E. Ulrich, USGS

    AND THE WORLD’S LEAST VOLCANIC PLACE?

    It may be surprising to hear that despite the Himalayas, like the Andes, being located on a very active convergent plate boundary, they are not volcanically active. In fact, there are barely any volcanoes at all within the mountain range.

    This is because the two colliding plates that are responsible for the formation of the Himalayas contain continental crust at the convergent plate boundary, distinct from the oceanic-continental or oceanic-oceanic crustal boundaries in the Guatemalan and Indonesian cases, respectively.

    As the two colliding plates have similar compositions, and therefore densities, and both their densities are much lower than the underlying mantle, neither plate is subducted. It’s a bit like wood floating on water. As subduction causes the lithospheric partial melting that generates the magma in convergent plate boundary settings, volcanism is not common in continent-continent collisions.

    Unfortunately, the people of the Himalayas don’t get off that easily, though, because devastating earthquakes go hand-in-hand with this sort of setting.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:48 am on August 20, 2018 Permalink | Reply
    Tags: Cosmos Magazine, KELT-9b

    From IAC via COSMOS: “The planet KELT-9b literally has an iron sky” 

    IAC

    From Instituto de Astrofísica de Canarias – IAC

    via

    COSMOS

    20 August 2018
    Ben Lewis

    The Gran Telescopio Canarias on La Palma in the Canary Islands was instrumental in determining the constituents of the exoplanet’s atmosphere. Dominic Dähncke/Getty Images

    KELT-9b, one of the most unlikely planets ever discovered, has surprised astronomers yet again with the discovery that its atmosphere contains the metals iron and titanium, according to research published in the journal Nature.

    NASA/JPL-Caltech

    The planet is truly like no other. Located around 620 light-years away from Earth in the constellation Cygnus, it is known as a Hot Jupiter – which gives a hint to its nature. Nearly three times the size of Jupiter, it has a surface temperature topping 3,780 degrees Celsius, making it the hottest exoplanet ever discovered. It is even hotter than the surface of some stars. In some ways it straddles the line between a star and a gas-giant exoplanet.

    And it’s that super-hot temperature, created by a very close orbit to its host star, that allows the metals to become gaseous and fill the atmosphere, say the findings from a team led by Jens Hoeijmakers of the University of Geneva in Switzerland.

    On the night of 31 July 2017, as KELT-9b passed across the face of its star, the HARPS-North spectrograph attached to the Telescopio Nazionale Galileo, located on the Spanish Canary Island of La Palma, began watching. The telescope recorded changes in colour in the planet’s atmosphere, the result of chemicals with different light-filtering properties.

    Telescopio Nazionale Galileo – Harps North


    Telescopio Nazionale Galileo, a 3.58-meter Italian telescope located at the Roque de los Muchachos Observatory on the island of La Palma in the Canary Islands, Spain. Altitude 2,396 m (7,861 ft).

    By subtracting the plain starlight from the light that had passed through the atmosphere, the team were left with a spectrum of its chemical make-up.

    They then homed in on titanium and iron, because the relative abundances of uncharged and charged atoms tend to change dramatically at the temperatures seen on KELT-9b. After a complex process of analysis and cross-correlation of results, they saw dramatic peaks in the ionised forms of both metals.

    It has been long suspected that iron and titanium exist on some exoplanets, but to date they have been difficult to detect. Somewhat like Earth, where the two elements are mostly found in solid form, the cooler conditions of most exoplanets means that the iron and titanium atoms are generally “trapped in other molecules,” as co-author Kevin Heng from the University of Bern in Switzerland recently told Space.com.

    However, the permanent heatwave on KELT-9b means the metals are floating in the atmosphere as individual charged atoms, unable to condense or form compounds.

    While this is the first time iron has been detected in an exoplanet’s atmosphere, titanium has previously been detected in the form of titanium dioxide on Kepler 13Ab, another Hot Jupiter. The discovery on KELT-9b, however, is the first detection of elemental titanium in an atmosphere.

    KELT-9b’s atmosphere is also known to contain hydrogen, which was easily identifiable without requiring the type of complex analysis needed to identify iron and titanium. However, a study in July [Nature Astronomy] found that the hydrogen is literally boiling off the planet, leading to the hypothesis that its escape could also be dragging the metals higher into the atmosphere, making their detection easier.

    Further studies into KELT-9b’s atmosphere are continuing, with suggestions that announcements of other metals could be forthcoming. In addition, the complex analysis required in this study could be useful for identifying obscure components in the atmospheres of other planets.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Instituto de Astrofísica de Canarias (IAC) is an international research centre in Spain which comprises:

    The Instituto de Astrofísica, the headquarters, which is in La Laguna (Tenerife).
    The Centro de Astrofísica en La Palma (CALP)
    The Observatorio del Teide (OT), in Izaña (Tenerife).
    The Observatorio del Roque de los Muchachos (ORM), in Garafía (La Palma).

    Roque de los Muchachos Observatory is an astronomical observatory located in the municipality of Garafía on the island of La Palma in the Canary Islands, at an altitude of 2,396 m (7,861 ft)

    These centres, with all the facilities they bring together, make up the European Northern Observatory (ENO).

    The IAC is constituted administratively as a Public Consortium, created by statute in 1982, with involvement from the Spanish Government, the Government of the Canary Islands, the University of La Laguna and Spain’s Science Research Council (CSIC).

    The International Scientific Committee (CCI) manages participation in the observatories by institutions from other countries. A Time Allocation Committee (CAT) allocates the observing time reserved for Spain at the telescopes in the IAC’s observatories.

    The exceptional quality of the sky over the Canaries for astronomical observations is protected by law. The IAC’s Sky Quality Protection Office (OTPC) regulates the application of the law and its Sky Quality Group continuously monitors the parameters that define observing quality at the IAC Observatories.

    The IAC’s research programme includes astrophysical research and technological development projects.

    The IAC is also involved in researcher training, university teaching and outreach activities.

    The IAC has devoted much energy to developing technology for the design and construction of a large 10.4-metre-diameter telescope, the Gran Telescopio CANARIAS (GTC), which is sited at the Observatorio del Roque de los Muchachos.



    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain.

     
  • richardmitnick 10:23 am on August 17, 2018 Permalink | Reply
    Tags: A step closer to a theory of quantum gravity, Cosmos Magazine   

    From COSMOS Magazine: “A step closer to a theory of quantum gravity” 

    From COSMOS Magazine

    17 August 2018
    Phil Dooley

    Resolving differences between the theory of general relativity and the predictions of quantum physics remains a huge challenge. Credit: diuno / Getty Images

    A new approach to combining Einstein’s General Theory of Relativity with quantum physics could come out of a paper published in the journal Nature Physics. The insights could help build a successful theory of quantum gravity, something that has so far eluded physicists.

    Magdalena Zych from the University of Queensland in Australia and Caslav Brukner from the University of Vienna in Austria have devised a set of principles that compare the way objects behave as predicted by Einstein’s theory with their behaviour predicted by quantum theory.

    Quantum physics has very successfully described the behaviour of tiny particles such as atoms and electrons, while relativity is very accurate for forces at cosmic scales. However, in some cases, notably gravity, the two theories produce incompatible results.

    Einstein’s theory revolutionised the concept of gravity, by showing that it was caused by curves in spacetime rather than by a force. In contrast, quantum theory has successfully shown other forces, such as magnetism, are the result of fleeting particles being exchanged between interacting objects.

    The difference between the two cases throws up a surprising question: do objects attracted by electrical or magnetic forces behave the same way as when attracted by the gravity of a nearby planet?

    In physics language, an object’s inertial mass and its gravitational mass are held to be the same, a property known as the Einstein equivalence principle. But, given that the two theories are so different, it is not clear that the idea still holds at the quantum level.

    Zych and Brukner combined two principles to formulate the problem. From relativity they took the equation E = mc^2, which holds that when objects gain more energy they become heavier. This even applies to an atom moving from a low energy level to a more excited state.

    To this they added the principle of quantum superposition, which holds that particles can be smeared into more than one state at once. And since the different energy levels have different masses, then the total mass gets smeared across a range of values, too.

    This prediction allowed the pair to propose tests that would tease out the quantum behaviour of gravitational acceleration.

    “For example, for an object in freefall in a superposition of accelerations, quantum correlations – entanglement – would develop between the internal states of the particle and their position,” Zych explains.

    “So, the particle would actually smear across space as it falls, which would violate the equivalence principle.”

    As most current theories of quantum gravity predict that the equivalence principle will indeed be violated, the tests proposed by Zych and Brukner could help evaluate whether these approaches are on the right track.

    Zych was inspired to tackle the problem when thinking about a variant of Einstein’s “twin paradox”. This arises as a consequence of relativity, and says that one twin travelling at high speed will age more slowly than the other, who remains stationary.

    Instead, Zych imagined a kind of quantum conjoined twins, built from the quantum superposition of two different energy states – and therefore two superposed masses.

    “It was surprising to find these corners of quantum physics that have not been explored before,” Zych says.

    She estimates the difference caused by the quantum behaviour of an atom interacting with a visible wavelength laser would be around one part in 10^11.
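
    That order of magnitude is easy to reproduce. The choice below of a ~2 eV optical transition and an atom of roughly rubidium's mass is an assumption for illustration, not a detail taken from the paper:

        # Fractional mass change when an atom absorbs a visible photon (E = mc^2).
        transition_energy_ev = 2.0          # assumed ~2 eV optical transition
        rest_energy_ev = 87 * 931.5e6       # assumed ~87 u atom, rest energy in eV
        print(transition_energy_ev / rest_energy_ev)   # ~2.5e-11, about one part in 10^11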

    An Italian group has already begun work on such experiments and found no deviation from the equivalence principle up to one part in 10^9.

    If the Einstein equivalence principle does turn out to be violated, it could have consequences for the use of quantum systems as very precise atomic clocks.

    “If the Einstein principle was violated only as allowed in classical physics, clocks could fail to be time-dilated, as predicted by relativity,” says Zych.

    “But if it is violated as allowed in quantum theory, clocks would generically cease to be clocks at all.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:38 am on August 8, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Incredibly tiny explosion packs a big punch, Nanoplasma

    From COSMOS Magazine: “Incredibly tiny explosion packs a big punch” 

    From COSMOS Magazine

    08 August 2018
    Phil Dooley

    Japanese researchers record for the first time the birth of nanoplasma.

    By bombarding xenon atoms with X-rays, researchers can create nanoplasma. Credit: Science Picture Co / Getty Images

    Japanese researchers have captured the birth of a nanoplasma – a mixture of highly charged ions and electrons – in exquisite detail, as a high-powered X-ray laser roasted a microscopic cluster of atoms, tearing off electrons.

    While it’s cool to witness an explosion lasting just half a trillionth of a second and occupying one-hundredth the diameter of a human hair, caused by an X-ray beam 12,000 times brighter than the sun, it’s also important for studies of tiny structures such as proteins and crystals.

    To study small things you need light of a comparably small wavelength. The wavelength of the X-rays used by Yoshiaki Kumagai and his colleagues in this experiment at the SPring-8 Angstrom Compact free electron LAser (SACLA) in Japan is one ten billionth of a meter: you could fit a million wavelengths into the thickness of a sheet of paper.
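
    The arithmetic is easy to check; the 0.1-millimetre sheet thickness assumed below is a typical value, not one given in the article:

        # Roughly a million X-ray wavelengths fit across a sheet of paper.
        wavelength_m = 1e-10           # one ten-billionth of a metre
        paper_thickness_m = 1e-4       # assumed ~0.1 mm sheet of paper
        print(paper_thickness_m / wavelength_m)   # 1,000,000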

    SACLA Free-Electron Laser Riken Japan

    This is the perfect wavelength for probing the structure of crystals and proteins, and the brightness of a laser gives a good strong signal. The problem, however, is that the laser itself damages the structure, says Kumagai, a physicist from Tohoku University in the city of Sendai.

    “Some proteins are very sensitive to irradiation,” he explains. “It is hard to know if we are actually detecting the pure protein structure, or whether there is already radiation damage.”

    The tell-tale sign of radiation damage is the formation of a nanoplasma, as the X-rays break bonds and punch out electrons from deep inside atoms to form ions. This happens in tens of femtoseconds (that is, quadrillionths of a second) and sets off complex cascades of collisions, recombinations and internal rearrangements of atoms. SACLA’s ultra short pulses, only 10 femtoseconds long, are the perfect tool to map out the progress of the tiny explosion moment by moment.

    To untangle the complicated web of processes going on, the team chose a very simple structure to study: a cluster of about 5000 xenon atoms injected into a vacuum, which they then hit with an X-ray laser pulse.

    A second laser pulse followed, this time from an infrared laser, which was absorbed by the fragments and ions. The patterns of the absorption told the scientists what the nanoplasma contained. By repeating the experiment, each time delaying the infrared laser a little more, they built a set of snapshots of the nanoplasma’s birth.

    Previous experiments had shown that on average at least six electrons eventually get blasted off each xenon atom, but the team’s set of new snapshots, published in the journal Physical Review X, show that it doesn’t all happen immediately.

    Instead, within 10 femtoseconds many of the xenon atoms have absorbed a lot of energy but not lost any electrons. Some atoms do lose electrons, and the attraction between the positive ions and the free electrons holds the plasma together. This leads to many collisions, which share the energy among the neutral atoms. The number of these atoms then declines over the next several hundred femtoseconds, as more ions form.

    Kumagai says the large initial population of highly excited neutral xenon atoms served as a gateway to the nanoplasma formation.

    “The excited atoms play an important role in the charge transfer and energy migration. It’s the first time we’ve caught this very fast step in nanoplasma formation,” he says.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:30 am on July 30, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Hello quantum world

    From COSMOS Magazine: “Hello quantum world” 

    From COSMOS Magazine

    30 July 2018
    Will Knight

    Quantum computing – IBM

    Inside a small laboratory in lush countryside about 80 kilometres north of New York City, an elaborate tangle of tubes and electronics dangles from the ceiling. This mess of equipment is a computer. Not just any computer, but one on the verge of passing what may, perhaps, go down as one of the most important milestones in the history of the field.

    Quantum computers promise to run calculations far beyond the reach of any conventional supercomputer. They might revolutionise the discovery of new materials by making it possible to simulate the behaviour of matter down to the atomic level. Or they could upend cryptography and security by cracking otherwise invincible codes. There is even hope they will supercharge artificial intelligence by crunching through data more efficiently.

    Yet only now, after decades of gradual progress, are researchers finally close to building quantum computers powerful enough to do things that conventional computers cannot. It’s a landmark somewhat theatrically dubbed ‘quantum supremacy’. Google has been leading the charge toward this milestone, while Intel and Microsoft also have significant quantum efforts. And then there are well-funded startups including Rigetti Computing, IonQ and Quantum Circuits.

    No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?

    Credit: Graham Carlow

    Why we think we need a quantum computer

    The research center, located in Yorktown Heights, looks a bit like a flying saucer as imagined in 1961. It was designed by the neo-futurist architect Eero Saarinen and built during IBM’s heyday as a maker of large mainframe business machines. IBM was the world’s largest computer company, and within a decade of the research centre’s construction it had become the world’s fifth-largest company of any kind, just behind Ford and General Electric.

    While the hallways of the building look out onto the countryside, the design is such that none of the offices inside have any windows. It was in one of these cloistered rooms that I met Charles Bennett. Now in his 70s, he has large white sideburns, wears black socks with sandals and even sports a pocket protector with pens in it.

    Charles Bennett was one of the pioneers who realised quantum computers could solve some problems exponentially faster than conventional computers. Credit:Bartek Sadowski

    Surrounded by old computer monitors, chemistry models and, curiously, a small disco ball, he recalled the birth of quantum computing as if it were yesterday.

    When Bennett joined IBM in 1972, quantum physics was already half a century old, but computing still relied on classical physics and the mathematical theory of information that Claude Shannon had developed at Bell Labs in the late 1940s. It was Shannon who defined the quantity of information in terms of the number of ‘bits’ (a term he popularised but did not coin) required to store it. Those bits, the 0s and 1s of binary code, are the basis of all conventional computing.

    A year after arriving at Yorktown Heights, Bennett helped lay the foundation for a quantum information theory that would challenge all that. It relies on exploiting the peculiar behaviour of objects at the atomic scale. At that size, a particle can exist ‘superposed’ in many states (e.g., many different positions) at once. Two particles can also exhibit ‘entanglement’, so that changing the state of one may instantaneously affect the other.

    Bennett and others realised that some kinds of computations that are exponentially time consuming, or even impossible, could be efficiently performed with the help of quantum phenomena. A quantum computer would store information in quantum bits, or qubits. Qubits can exist in superpositions of 1 and 0, and entanglement and a trick called interference can be used to find the solution to a computation over an exponentially large number of states. It’s annoyingly hard to compare quantum and classical computers, but roughly speaking, a quantum computer with just a few hundred qubits would be able to perform more calculations simultaneously than there are atoms in the known universe.
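
    That comparison is simple arithmetic; the 10^80 figure used below is the usual order-of-magnitude estimate for the number of atoms in the observable universe:

        # Basis states spanned by a 300-qubit register versus atoms in the universe.
        n_qubits = 300
        n_states = 2 ** n_qubits           # 2^300, roughly 2 x 10^90
        atoms_in_universe = 10 ** 80       # common order-of-magnitude estimate
        print(n_states > atoms_in_universe)   # True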

    In the summer of 1981, IBM and MIT organised a landmark event called the First Conference on the Physics of Computation. It took place at Endicott House, a French-style mansion not far from the MIT campus.

    In a photo that Bennett took during the conference, several of the most influential figures from the history of computing and quantum physics can be seen on the lawn, including Konrad Zuse, who developed the first programmable computer, and Richard Feynman, an important contributor to quantum theory. Feynman gave the conference’s keynote speech, in which he raised the idea of computing using quantum effects. “The biggest boost quantum information theory got was from Feynman,” Bennett told me. “He said, ‘Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.’”

    IBM’s quantum computer – one of the most promising in existence – is located just down the hall from Bennett’s office. The machine is designed to create and manipulate the essential element in a quantum computer: the qubits that store information.

    The gap between the dream and the reality

    The IBM machine exploits quantum phenomena that occur in superconducting materials. For instance, sometimes current will flow clockwise and counterclockwise at the same time. IBM’s computer uses superconducting circuits in which two distinct electromagnetic energy states make up a qubit.

    The superconducting approach has key advantages. The hardware can be made using well-established manufacturing methods, and a conventional computer can be used to control the system. The qubits in a superconducting circuit are also easier to manipulate and less delicate than individual photons or ions.

    Inside IBM’s quantum lab, engineers are working on a version of the computer with 50 qubits. You can run a simulation of a simple quantum computer on a normal computer, but at around 50 qubits it becomes nearly impossible.

    That means IBM is theoretically approaching the point where a quantum computer can solve problems a classical computer cannot: in other words, quantum supremacy.

    But as IBM’s researchers will tell you, quantum supremacy is an elusive concept. You would need all 50 qubits to work perfectly, when in reality quantum computers are beset by errors that need to be corrected. It is also devilishly difficult to maintain qubits for any length of time; they tend to ‘decohere’, or lose their delicate quantum nature, much as a smoke ring breaks up at the slightest air current. And the more qubits, the harder both challenges become.

    The cutting-edge science of quantum computing requires nanoscale precision mixed with the tinkering spirit of home electronics. Researcher Jerry Chow is here shown fitting a circuit board in the IBM quantum research lab. Jon Simon

    “If you had 50 or 100 qubits and they really worked well enough, and were fully error-corrected – you could do unfathomable calculations that can’t be replicated on any classical machine, now or ever,” says Robert Schoelkopf, a Yale professor and founder of a company called Quantum Circuits. “The flip side to quantum computing is that there are exponential ways for it to go wrong.”

    Another reason for caution is that it isn’t obvious how useful even a perfectly functioning quantum computer would be. It doesn’t simply speed up any task you throw at it; in fact, for many calculations, it would actually be slower than classical machines. Only a handful of algorithms have so far been devised where a quantum computer would clearly have an edge. And even for those, that edge might be short-lived. The most famous quantum algorithm, developed by Peter Shor at MIT, is for finding the prime factors of an integer. Many common cryptographic schemes rely on the fact that this is hard for a conventional computer to do. But cryptography could adapt, creating new kinds of codes that don’t rely on factorisation.

    This is why, even as they near the 50-qubit milestone, IBM’s own researchers are keen to dispel the hype around it. At a table in the hallway that looks out onto the lush lawn outside, I encountered Jay Gambetta, a tall, easygoing Australian who researches quantum algorithms and potential applications for IBM’s hardware. “We’re at this unique stage,” he said, choosing his words with care. “We have this device that is more complicated than you can simulate on a classical computer, but it’s not yet controllable to the precision that you could do the algorithms you know how to do.”

    What gives the IBMers hope is that even an imperfect quantum computer might still be a useful one.

    Gambetta and other researchers have zeroed in on an application that Feynman envisioned back in 1981. Chemical reactions and the properties of materials are determined by the interactions between atoms and molecules. Those interactions are governed by quantum phenomena. A quantum computer can – at least in theory – model those in a way a conventional one cannot.

    Last year, Gambetta and colleagues at IBM used a seven-qubit machine to simulate the precise structure of beryllium hydride. At just three atoms, it is the most complex molecule ever modelled with a quantum system. Ultimately, researchers might use quantum computers to design more efficient solar cells, more effective drugs or catalysts that turn sunlight into clean fuels.

    Those goals are a long way off. But, Gambetta says, it may be possible to get valuable results from an error-prone quantum machine paired with a classical computer.

    Credit: Cosmos Magazine

    Physicist’s dream to engineer’s nightmare

    “The thing driving the hype is the realisation that quantum computing is actually real,” says Isaac Chuang, a lean, soft-spoken MIT professor. “It is no longer a physicist’s dream – it is an engineer’s nightmare.”

    Chuang led the development of some of the earliest quantum computers, working at IBM in Almaden, California, during the late 1990s and early 2000s. Though he is no longer working on them, he thinks we are at the beginning of something very big – that quantum computing will eventually even play a role in artificial intelligence.

    But he also suspects that the revolution will not really begin until a new generation of students and hackers get to play with practical machines. Quantum computers require not just different programming languages but a fundamentally different way of thinking about what programming is. As Gambetta puts it: “We don’t really know what the equivalent of ‘Hello, world’ is on a quantum computer.”

    We are beginning to find out. In 2016 IBM connected a small quantum computer to the cloud. Using a programming tool kit called QISKit, you can run simple programs on it; thousands of people, from academic researchers to schoolkids, have built QISKit programs that run basic quantum algorithms. Now Google and other companies are also putting their nascent quantum computers online. You can’t do much with them, but at least they give people outside the leading labs a taste of what may be coming.
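
    For readers curious what such a program looks like, here is a minimal "hello, quantum world" sketch in Python using Qiskit, the toolkit mentioned above. The packaging and API details have shifted between Qiskit versions, so treat the exact calls as illustrative:

        from qiskit import QuantumCircuit
        from qiskit.quantum_info import Statevector

        # Build the simplest interesting circuit: a two-qubit Bell state.
        qc = QuantumCircuit(2)
        qc.h(0)        # put qubit 0 into an equal superposition of 0 and 1
        qc.cx(0, 1)    # entangle qubit 1 with qubit 0
        print(qc)      # text drawing of the circuit

        # Compute the ideal final state and its measurement probabilities.
        state = Statevector.from_instruction(qc)
        print(state.probabilities_dict())   # {'00': 0.5, '11': 0.5}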

    The startup community is also getting excited. A short while after seeing IBM’s quantum computer, I went to the University of Toronto’s business school to sit in on a pitch competition for quantum startups. Teams of entrepreneurs nervously got up and presented their ideas to a group of professors and investors. One company hoped to use quantum computers to model the financial markets. Another planned to have them design new proteins. Yet another wanted to build more advanced AI systems. What went unacknowledged in the room was that each team was proposing a business built on a technology so revolutionary that it barely exists. Few seemed daunted by that fact.

    This enthusiasm could sour if the first quantum computers are slow to find a practical use. The best guess from those who truly know the difficulties – people like Bennett and Chuang – is that the first useful machines are still several years away. And that’s assuming the problem of managing and manipulating a large collection of qubits won’t ultimately prove intractable.

    Still, the experts hold out hope. When I asked him what the world might be like when my two-year-old son grows up, Chuang, who learned to use computers by playing with microchips, responded with a grin. “Maybe your kid will have a kit for building a quantum computer,” he said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:46 am on July 2, 2018 Permalink | Reply
    Tags: Australia’s reputation for research integrity at the crossroads, Cosmos Magazine   

    From COSMOS Magazine: “Australia’s reputation for research integrity at the crossroads” 

    From COSMOS Magazine

    02 July 2018
    David Vaux
    Peter Brooks
    Simon Gandevia

    Changes to Australia’s code of research conduct endanger its reputation for world-standard output.

    Researchers are under pressure to deliver publications and win grants. Shutterstock

    In 2018, Australia still does not have appropriate measures in place to maintain research integrity. And recent changes to our code of research conduct have weakened our already inadequate position.

    In contrast, China’s recent crackdown on academic misconduct brings it into line with more than twenty European countries, the UK, USA, Canada and others that have national offices for research integrity.

    Australia risks its reputation by turning in the opposite direction.

    Research integrity is vital

    Our confidence in science relies on its integrity – both in the research literature (its freedom from errors) and in the researchers themselves (that they behave in a principled way).

    However, the pressures on scientists to publish and win grants can lead to misconduct. This can range from cherry-picking results that support a favoured hypothesis, to making up experimental, animal or patient results from thin air. A recent report found that around 1 in 25 papers contained duplicated images (inconsistent with good research practice), and about half of these had features suggesting deliberate manipulation.

    For science to progress efficiently, and to remain credible, we need good governance structures, and as transparent and open a system as possible. Measures are needed to identify and correct errors, and to rectify misbehaviour.

    In Australia, one such measure is the Australian Code for the Responsible Conduct of Research. But recently published revisions of this code allow research integrity matters to be handled internally by institutions, and investigations to be kept secret. This puts at risk the hundreds of millions of dollars provided by the taxpayer to fund research.

    As a nation, we can and must do much better, before those who invest in and conduct research go elsewhere – to countries that are serious about the governance of research integrity.

    Learning from experience – the Hall affair

    Developed jointly by the National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) and Universities Australia, the Australian Code for the Responsible Conduct of Research has the stated goal of improving research integrity in Australia.

    The previous version of the Australian Code was written in 2007, partly in response to the “Hall affair”.

    In 2001, complaints of research misconduct were levelled at Professor Bruce Hall, an immunologist at the University of New South Wales (UNSW). After multiple inquiries, UNSW Vice Chancellor Rory Hume concluded that Hall was not guilty of scientific misconduct but had “committed errors of judgement sufficiently serious in two instances to warrant censure.” Hall denied all allegations.

    Commenting on the incident in 2004, Martin Van Der Weyden, Editor-in-Chief of the Medical Journal of Australia, highlighted the importance of external and independent review in investigating research practice:

    “The initial inquiry by the UNSW’s Dean of Medicine [was] patently crippled by perceptions of conflicts of interest — including an institution investigating allegations of improprieties carried out in its own backyard!

    Herein lies lesson number one — once allegations of scientific misconduct and fraud have been made, these should be addressed from the beginning by an external and independent inquiry.”

    An external and independent panel

    Avoiding conflicts of interest – real or perceived – was one of the reasons the 2007 version of the Australian Code required “institutions to establish independent external research misconduct inquiries to evaluate allegations of serious research misconduct that are contested.”

    But it seems this lesson has been forgotten. With respect to establishing a panel to investigate alleged misconduct, the revised Code says meekly:

    “There will be occasions where some or all members should be external to the institution.”

    Institutions will now be able to decide for themselves the terms of reference for investigations, and the number and composition of inquiry panels.

    Reducing research misconduct in Australia

    The chief justification for revising the 2007 Australian Code was to reduce research misconduct.

    In the initial 2016 draft, the committee charged with this task suggested simply removing the term “research misconduct” from the Code, meaning that research misconduct would no longer officially exist in Australia.

    Unsurprisingly, this created a backlash, and, in the final version of the revised Code, a definition of the term “research misconduct” has returned:

    “Research misconduct: a serious breach of the Code which is also intentional or reckless or negligent.”

    However, institutions now have the option of “whether and how to use the term ‘research misconduct’ in relation to serious breaches of the Code”.

    Principles not enough

    The new Code is split into a set of principles of responsible research conduct, which list the responsibilities of researchers and institutions, together with a set of guides. The first guide describes how potential breaches of the Code should be investigated and managed.

    The principles of responsible research conduct are fine, and exhort researchers to be honest and fair, rigorous and respectful. No one would have an issue with this.

    Similarly, no one would think it unreasonable that institutions also have responsibilities, such as to identify and comply with relevant laws, regulations, guidelines and policies related to the conduct of research.

    However, having a set of lofty principles alone is not sufficient; there also need to be mechanisms to ensure compliance, not just by researchers, but also by institutions.

    Transparency, accountability, and trust

    The new Code says that institutions must ensure that all investigations are confidential. There is no requirement to make the outcome public; institutions need only “consider whether a public statement is appropriate to communicate the outcome of an investigation”.

    Combining mandatory confidentiality with self-regulation is bound to undermine trust in the governance of research integrity.

    In the new Code there is no mechanism for oversight. The outcome of a misconduct investigation can be appealed to the Australian Research Integrity Committee (ARIC), but only on the grounds of improper process, and not based on evidence or facts.

    Given that the conduct of investigations as well as the findings are to be confidential, it will be difficult to make an appeal to ARIC on any grounds.

    We need a national office of research integrity

    It is not clear why Australia does not learn from the experience of countries with independent agencies for research integrity, and adopt one of the models already working elsewhere in the world.

    Those who care about research and careers in research should ask their politicians and university Vice Chancellors why a national office of research integrity is necessary in the nations of Europe, the UK, US, Canada and now China, but not in Australia.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     