Tagged: Cosmos Magazine

  • richardmitnick 8:48 am on August 20, 2018 Permalink | Reply
    Tags: Cosmos Magazine, KELT-9b

    From IAC via COSMOS: “The planet KELT-9b literally has an iron sky” 

    IAC

    From Instituto de Astrofísica de Canarias – IAC

    via

    COSMOS

    20 August 2018
    Ben Lewis

    The Gran Telescopio Canarias on La Palma in the Canary Islands was instrumental in determining the constituents of the exoplanet’s atmosphere. Dominic Dähncke/Getty Images

    KELT-9b, one of the most unlikely planets ever discovered, has surprised astronomers yet again with the discovery that its atmosphere contains the metals iron and titanium, according to research published in the journal Nature.


    The planet is truly like no other. Located around 620 light-years from Earth in the constellation Cygnus, it is known as a Hot Jupiter – which gives a hint to its nature. Nearly three times the mass of Jupiter, its dayside temperature tops 3780 degrees Celsius – the hottest of any exoplanet yet discovered. It is even hotter than the surface of some stars. In some ways it straddles the line between a star and a gas-giant exoplanet.

    And it’s that super-hot temperature, created by a very close orbit to its host star, that allows the metals to become gaseous and fill the atmosphere, say the findings from a team led by Jens Hoeijmakers of the University of Geneva in Switzerland.

    On the night of 31 July 2017, as KELT-9b passed across the face of its star, the HARPS-North spectrograph attached to the Telescopio Nazionale Galileo, located on the Spanish Canary island of La Palma, began watching. The telescope recorded changes in colour in the planet’s atmosphere, the result of chemicals with different light-filtering properties.

    Telescopio Nazionale Galileo – Harps North


    Telescopio Nazionale Galileo, a 3.58-metre Italian telescope located at the Roque de los Muchachos Observatory on the island of La Palma in the Canary Islands, Spain, at an altitude of 2,396 m (7,861 ft)

    By subtracting the plain starlight from the light that had passed through the atmosphere, the team were left with a spectrum revealing its chemical make-up.

    They then homed in on titanium and iron, because the relative abundances of uncharged and charged atoms tend to change dramatically at the temperatures seen on KELT-9b. After a complex process of analysis and cross-correlation of results, they saw dramatic peaks in the ionised forms of both metals.
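    The cross-correlation step can be illustrated with a toy sketch: slide a template containing a known absorption line across the starlight-subtracted spectrum, recording the overlap at each shift; a sharp peak marks where the template lines up with a real spectral line. The arrays below are made-up illustrative numbers, not real HARPS-North data.

```python
# Toy cross-correlation of a residual (starlight-subtracted) spectrum
# against an absorption-line template. Synthetic numbers for illustration.
def cross_correlate(spectrum, template):
    """Correlation of the template at every shift across the spectrum."""
    n = len(spectrum) - len(template) + 1
    return [sum(s * t for s, t in zip(spectrum[i:], template))
            for i in range(n)]

template = [0.0, 1.0, 0.0]                       # a single absorption "line"
spectrum = [0.0, 0.1, 0.0, 0.0, 1.0, 0.1, 0.0]   # toy residual spectrum

ccf = cross_correlate(spectrum, template)
best_shift = max(range(len(ccf)), key=ccf.__getitem__)
print(best_shift)  # the shift where the template lines up with the line
```

    In the real analysis the template encodes thousands of predicted iron or titanium lines, so even lines individually buried in noise add up to a detectable correlation peak.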

    It has long been suspected that iron and titanium exist on some exoplanets, but to date they have been difficult to detect. Somewhat like Earth, where the two elements are mostly found in solid form, the cooler conditions of most exoplanets mean that the iron and titanium atoms are generally “trapped in other molecules,” as co-author Kevin Heng from the University of Bern in Switzerland recently told Space.com.

    However, the permanent heatwave on KELT-9b means the metals are floating in the atmosphere as individual charged atoms, unable to condense or form compounds.

    While this is the first time iron has been detected in an exoplanet’s atmosphere, titanium has previously been detected in the form of titanium dioxide on Kepler-13Ab, another Hot Jupiter. The discovery on KELT-9b, however, is the first detection of elemental titanium in an atmosphere.

    KELT-9b’s atmosphere is also known to contain hydrogen, which was easily identifiable without requiring the type of complex analysis needed to identify iron and titanium. However, a study in July [Nature Astronomy] found that the hydrogen is literally boiling off the planet, leading to the hypothesis that its escape could also be dragging the metals higher into the atmosphere, making their detection easier.

    Further studies into KELT-9b’s atmosphere are continuing, with suggestions that announcements of other metals could be forthcoming. In addition, the complex analysis required in this study could be useful for identifying obscure components in the atmospheres of other planets.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Instituto de Astrofísica de Canarias (IAC) is an international research centre in Spain which comprises:

    The Instituto de Astrofísica, the headquarters, which is in La Laguna (Tenerife).
    The Centro de Astrofísica en La Palma (CALP)
    The Observatorio del Teide (OT), in Izaña (Tenerife).
    The Observatorio del Roque de los Muchachos (ORM), in Garafía (La Palma).

    Roque de los Muchachos Observatory is an astronomical observatory located in the municipality of Garafía on the island of La Palma in the Canary Islands, at an altitude of 2,396 m (7,861 ft)

    These centres, with all the facilities they bring together, make up the European Northern Observatory (ENO).

    The IAC is constituted administratively as a Public Consortium, created by statute in 1982, with involvement from the Spanish Government, the Government of the Canary Islands, the University of La Laguna and Spain’s Science Research Council (CSIC).

    The International Scientific Committee (CCI) manages participation in the observatories by institutions from other countries. A Time Allocation Committee (CAT) allocates the observing time reserved for Spain at the telescopes in the IAC’s observatories.

    The exceptional quality of the sky over the Canaries for astronomical observations is protected by law. The IAC’s Sky Quality Protection Office (OTPC) regulates the application of the law and its Sky Quality Group continuously monitors the parameters that define observing quality at the IAC Observatories.

    The IAC’s research programme includes astrophysical research and technological development projects.

    The IAC is also involved in researcher training, university teaching and outreach activities.

    The IAC has devoted much energy to developing technology for the design and construction of a large 10.4-metre-diameter telescope, the Gran Telescopio CANARIAS (GTC), which is sited at the Observatorio del Roque de los Muchachos.



    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain

     
  • richardmitnick 10:23 am on August 17, 2018 Permalink | Reply
    Tags: A step closer to a theory of quantum gravity, Cosmos Magazine   

    From COSMOS Magazine: “A step closer to a theory of quantum gravity” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    17 August 2018
    Phil Dooley

    Resolving differences between the theory of general relativity and the predictions of quantum physics remains a huge challenge. Credit: diuno / Getty Images

    A new approach to combining Einstein’s General Theory of Relativity with quantum physics could come out of a paper published in the journal Nature Physics. The insights could help build a successful theory of quantum gravity, something that has so far eluded physicists.

    Magdalena Zych from the University of Queensland in Australia and Caslav Brukner from the University of Vienna in Austria have devised a set of principles that compare the way objects behave as predicted by Einstein’s theory with their behaviour as predicted by quantum theory.

    Quantum physics has very successfully described the behaviour of tiny particles such as atoms and electrons, while relativity is very accurate for forces at cosmic scales. However, in some cases, notably gravity, the two theories produce incompatible results.

    Einstein’s theory revolutionised the concept of gravity by showing that it is caused by curves in spacetime rather than by a force. In contrast, quantum theory has successfully shown that other forces, such as magnetism, are the result of fleeting particles being exchanged between interacting objects.

    The difference between the two cases throws up a surprising question: do objects attracted by electrical or magnetic forces behave the same way as when attracted by the gravity of a nearby planet?

    In physics language, an object’s inertial mass and its gravitational mass are held to be the same, a property known as the Einstein equivalence principle. But, given that the two theories are so different, it is not clear that the idea still holds at the quantum level.

    Zych and Brukner combined two principles to formulate the problem. From relativity they took the equation E = mc², which holds that when objects gain energy they become heavier. This applies even to an atom moving from a low energy level to a more excited state.

    To this they added the principle of quantum superposition, which holds that particles can be smeared into more than one state at once. And since the different energy levels have different masses, then the total mass gets smeared across a range of values, too.

    This prediction allowed the pair to propose tests that would tease out the quantum behaviour of gravitational acceleration.

    “For example, for an object in freefall in a superposition of accelerations, quantum correlations – entanglement – would develop between the internal states of the particle and their position,” Zych explains.

    “So, the particle would actually smear across space as it falls, which would violate the equivalence principle.”

    As most current theories of quantum gravity predict that the equivalence principle will indeed be violated, the tests proposed by Zych and Brukner could help evaluate whether these approaches are on the right track.

    Zych was inspired to tackle the problem when thinking about a variant of Einstein’s “twin paradox”. This arises as a consequence of relativity, and says that one twin travelling at high speed will age more slowly than the other, who remains stationary.

    Instead, Zych imagined a kind of quantum conjoined twin, built from the quantum superposition of two different energy states – and therefore two superposed masses.

    “It was surprising to find these corners of quantum physics that have not been explored before,” Zych says.

    She estimates the difference caused by the quantum behaviour of an atom interacting with a visible wavelength laser would be around one part in 10^11.
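    That order of magnitude can be checked with a back-of-the-envelope calculation using E = mc²: the extra mass an atom carries in its excited state is the photon energy divided by c². The visible wavelength and the choice of rubidium as the test atom below are illustrative assumptions, not details from the study.

```python
# Rough estimate of the mass defect for an atom absorbing a visible photon,
# via E = m c^2. Illustrative round numbers, not values from the paper.
h = 6.626e-34            # Planck constant, J s
c = 2.998e8              # speed of light, m/s
wavelength = 500e-9      # a visible-light transition, m (assumed)
m_atom = 87 * 1.661e-27  # mass of a rubidium-87 atom, kg (assumed test mass)

photon_energy = h * c / wavelength  # energy gained in the excited state, ~4e-19 J
delta_m = photon_energy / c**2      # corresponding extra mass
relative_shift = delta_m / m_atom   # fractional mass difference between the states

print(f"relative mass shift ~ {relative_shift:.1e}")  # ~3e-11, i.e. one part in 10^11
```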

    An Italian group has already begun work on such experiments and found no deviation from the equivalence principle to one part in 10^9.

    If Einstein’s principle does turn out to be violated, it could have consequences for the use of quantum systems as very precise atomic clocks.

    “If the Einstein principle was violated only as allowed in classical physics, clocks could fail to be time-dilated, as predicted by relativity,” says Zych.

    “But if it is violated as allowed in quantum theory, clocks would generically cease to be clocks at all.”

    See the full article here.



     
  • richardmitnick 8:38 am on August 8, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Incredibly tiny explosion packs a big punch, Nanoplasma

    From COSMOS Magazine: “Incredibly tiny explosion packs a big punch” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    08 August 2018
    Phil Dooley

    Japanese researchers record for the first time the birth of nanoplasma.

    By bombarding xenon atoms with X-rays, researchers can create nanoplasma. Credit: Science Picture Co / Getty Images

    Japanese researchers have captured the birth of a nanoplasma – a mixture of highly charged ions and electrons – in exquisite detail, as a high-powered X-ray laser roasted a microscopic cluster of atoms, tearing off electrons.

    While it’s cool to witness an explosion lasting just half a trillionth of a second and occupying one-hundredth the diameter of a human hair, caused by an X-ray beam 12,000 times brighter than the sun, it’s also important for studies of tiny structures such as proteins and crystals.

    To study small things you need light of a comparably small wavelength. The wavelength of the X-rays used by Yoshiaki Kumagai and his colleagues in this experiment at the SPring-8 Angstrom Compact free electron LAser (SACLA) in Japan is one ten-billionth of a metre: you could fit a million wavelengths into the thickness of a sheet of paper.

    SACLA Free-Electron Laser Riken Japan

    This is the perfect wavelength for probing the structure of crystals and proteins, and the brightness of a laser gives a good strong signal. The problem, however, is that the laser itself damages the structure, says Kumagai, a physicist from Tohoku University in the city of Sendai.

    “Some proteins are very sensitive to irradiation,” he explains. “It is hard to know if we are actually detecting the pure protein structure, or whether there is already radiation damage.”

    The tell-tale sign of radiation damage is the formation of a nanoplasma, as the X-rays break bonds and punch out electrons from deep inside atoms to form ions. This happens in tens of femtoseconds (that is, quadrillionths of a second) and sets off complex cascades of collisions, recombinations and internal rearrangements of atoms. SACLA’s ultra short pulses, only 10 femtoseconds long, are the perfect tool to map out the progress of the tiny explosion moment by moment.

    To untangle the complicated web of processes going on, the team chose a very simple structure to study: a cluster of about 5000 xenon atoms injected into a vacuum, which they then hit with an X-ray laser pulse.

    A second laser pulse followed, this time from an infrared laser, which was absorbed by the fragments and ions. The patterns of the absorption told the scientists what the nanoplasma contained. By repeating the experiment, each time delaying the infrared laser a little more, they built a set of snapshots of the nanoplasma’s birth.
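    The pump-probe logic can be sketched in a few lines: model the population of excited neutral atoms as decaying after the X-ray pump, and "probe" it at a series of increasing delays to build up the time trace snapshot by snapshot. The 300-femtosecond time constant is an assumed, purely illustrative number, not a measured value from the experiment.

```python
# Toy model of the pump-probe scheme: the X-ray "pump" creates excited
# neutral atoms, which convert to ions over a few hundred femtoseconds;
# an infrared "probe" fired after a variable delay samples the population
# at that instant. TAU_FS is an illustrative assumption.
import math

TAU_FS = 300.0  # assumed decay time of excited neutrals, femtoseconds

def excited_neutral_fraction(delay_fs):
    """Fraction of atoms still excited-but-neutral at a given probe delay."""
    return math.exp(-delay_fs / TAU_FS)

# Repeat the "experiment" with increasing probe delay to build snapshots.
for delay in (0, 10, 100, 300, 900):
    print(f"{delay:4d} fs  ->  {excited_neutral_fraction(delay):.2f}")
```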

    Previous experiments had shown that on average at least six electrons eventually get blasted off each xenon atom, but the team’s set of new snapshots, published in the journal Physical Review X, show that it doesn’t all happen immediately.


    Instead, within 10 femtoseconds many of the xenon atoms have absorbed a lot of energy but not lost any electrons. Some atoms do lose electrons, and the attraction between the positive ions and the free electrons holds the plasma together. This leads to many collisions, which share the energy among the neutral atoms. The number of these atoms then declines over the next several hundred femtoseconds, as more ions form.

    Kumagai says the highly excited neutral xenon atoms in the large initial population were gateway states to the formation of the nanoplasma.

    “The excited atoms play an important role in the charge transfer and energy migration. It’s the first time we’ve caught this very fast step in nanoplasma formation,” he says.

    See the full article here.



     
  • richardmitnick 10:30 am on July 30, 2018 Permalink | Reply
    Tags: Cosmos Magazine, Hello quantum world

    From COSMOS Magazine: “Hello quantum world” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    30 July 2018
    Will Knight

    Quantum computing – IBM

    Inside a small laboratory in lush countryside about 80 kilometres north of New York City, an elaborate tangle of tubes and electronics dangles from the ceiling. This mess of equipment is a computer. Not just any computer, but one on the verge of passing what may, perhaps, go down as one of the most important milestones in the history of the field.

    Quantum computers promise to run calculations far beyond the reach of any conventional supercomputer. They might revolutionise the discovery of new materials by making it possible to simulate the behaviour of matter down to the atomic level. Or they could upend cryptography and security by cracking otherwise invincible codes. There is even hope they will supercharge artificial intelligence by crunching through data more efficiently.

    Yet only now, after decades of gradual progress, are researchers finally close to building quantum computers powerful enough to do things that conventional computers cannot. It’s a landmark somewhat theatrically dubbed ‘quantum supremacy’. Google has been leading the charge toward this milestone, while Intel and Microsoft also have significant quantum efforts. And then there are well-funded startups including Rigetti Computing, IonQ and Quantum Circuits.

    No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?


    Why we think we need a quantum computer

    The research centre, located in Yorktown Heights, looks a bit like a flying saucer as imagined in 1961. It was designed by the neo-futurist architect Eero Saarinen and built during IBM’s heyday as a maker of large mainframe business machines. IBM was the world’s largest computer company, and within a decade of the research centre’s construction it had become the world’s fifth-largest company of any kind, just behind Ford and General Electric.

    While the hallways of the building look out onto the countryside, the design is such that none of the offices inside have any windows. It was in one of these cloistered rooms that I met Charles Bennett. Now in his 70s, he has large white sideburns, wears black socks with sandals and even sports a pocket protector with pens in it.

    Charles Bennett was one of the pioneers who realised quantum computers could solve some problems exponentially faster than conventional computers. Credit: Bartek Sadowski

    Surrounded by old computer monitors, chemistry models and, curiously, a small disco ball, he recalled the birth of quantum computing as if it were yesterday.

    When Bennett joined IBM in 1972, quantum physics was already half a century old, but computing still relied on classical physics and the mathematical theory of information that Claude Shannon had developed at Bell Labs in the 1940s. It was Shannon who defined the quantity of information in terms of the number of ‘bits’ (a term he popularised but did not coin) required to store it. Those bits, the 0s and 1s of binary code, are the basis of all conventional computing.

    A year after arriving at Yorktown Heights, Bennett helped lay the foundation for a quantum information theory that would challenge all that. It relies on exploiting the peculiar behaviour of objects at the atomic scale. At that size, a particle can exist ‘superposed’ in many states (e.g., many different positions) at once. Two particles can also exhibit ‘entanglement’, so that changing the state of one may instantaneously affect the other.

    Bennett and others realised that some kinds of computations that are exponentially time consuming, or even impossible, could be efficiently performed with the help of quantum phenomena. A quantum computer would store information in quantum bits, or qubits. Qubits can exist in superpositions of 1 and 0, and entanglement and a trick called interference can be used to find the solution to a computation over an exponentially large number of states. It’s annoyingly hard to compare quantum and classical computers, but roughly speaking, a quantum computer with just a few hundred qubits would be able to perform more calculations simultaneously than there are atoms in the known universe.
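    The "more states than atoms in the universe" comparison is just the arithmetic of exponential growth: describing an n-qubit state takes 2^n complex amplitudes. A quick check against the common order-of-magnitude estimate of ~10^80 atoms in the observable universe:

```python
# The number of complex amplitudes in an n-qubit state vector is 2**n.
ATOMS_IN_UNIVERSE = 10**80  # common order-of-magnitude estimate

for n in (50, 100, 300):
    print(n, 2**n > ATOMS_IN_UNIVERSE)
# 50 and 100 qubits fall short; 300 qubits already exceed the estimate,
# consistent with "a few hundred qubits" in the text above
```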

    In the summer of 1981, IBM and MIT organised a landmark event called the First Conference on the Physics of Computation. It took place at Endicott House, a French-style mansion not far from the MIT campus.

    In a photo that Bennett took during the conference, several of the most influential figures from the history of computing and quantum physics can be seen on the lawn, including Konrad Zuse, who developed the first programmable computer, and Richard Feynman, an important contributor to quantum theory. Feynman gave the conference’s keynote speech, in which he raised the idea of computing using quantum effects. “The biggest boost quantum information theory got was from Feynman,” Bennett told me. “He said, ‘Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.’”

    IBM’s quantum computer – one of the most promising in existence – is located just down the hall from Bennett’s office. The machine is designed to create and manipulate the essential element in a quantum computer: the qubits that store information.

    The gap between the dream and the reality

    The IBM machine exploits quantum phenomena that occur in superconducting materials. For instance, sometimes current will flow clockwise and counterclockwise at the same time. IBM’s computer uses superconducting circuits in which two distinct electromagnetic energy states make up a qubit.

    The superconducting approach has key advantages. The hardware can be made using well-established manufacturing methods, and a conventional computer can be used to control the system. The qubits in a superconducting circuit are also easier to manipulate and less delicate than individual photons or ions.

    Inside IBM’s quantum lab, engineers are working on a version of the computer with 50 qubits. You can run a simulation of a simple quantum computer on a normal computer, but at around 50 qubits it becomes nearly impossible.
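    A rough sketch of why roughly 50 qubits marks the limit for direct simulation: storing the full state vector takes 2^n complex amplitudes, so memory doubles with every added qubit. Sixteen bytes per double-precision complex amplitude is the usual accounting.

```python
# Memory needed to hold the full state vector of an n-qubit simulation.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

def statevector_bytes(n_qubits):
    """Bytes to store all 2**n complex amplitudes."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

print(f"30 qubits: {statevector_bytes(30) / 2**30:.0f} GiB")  # ~16 GiB: a workstation
print(f"50 qubits: {statevector_bytes(50) / 2**50:.0f} PiB")  # ~16 PiB: beyond any single machine
```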

    That means IBM is theoretically approaching the point where a quantum computer can solve problems a classical computer cannot: in other words, quantum supremacy.

    But as IBM’s researchers will tell you, quantum supremacy is an elusive concept. You would need all 50 qubits to work perfectly, when in reality quantum computers are beset by errors that need to be corrected. It is also devilishly difficult to maintain qubits for any length of time; they tend to ‘decohere’, or lose their delicate quantum nature, much as a smoke ring breaks up at the slightest air current. And the more qubits, the harder both challenges become.

    The cutting-edge science of quantum computing requires nanoscale precision mixed with the tinkering spirit of home electronics. Researcher Jerry Chow is shown here fitting a circuit board in the IBM quantum research lab. Credit: Jon Simon

    “If you had 50 or 100 qubits and they really worked well enough, and were fully error-corrected – you could do unfathomable calculations that can’t be replicated on any classical machine, now or ever,” says Robert Schoelkopf, a Yale professor and founder of a company called Quantum Circuits. “The flip side to quantum computing is that there are exponential ways for it to go wrong.”

    Another reason for caution is that it isn’t obvious how useful even a perfectly functioning quantum computer would be. It doesn’t simply speed up any task you throw at it; in fact, for many calculations, it would actually be slower than classical machines. Only a handful of algorithms have so far been devised where a quantum computer would clearly have an edge. And even for those, that edge might be short-lived. The most famous quantum algorithm, developed by Peter Shor at MIT, is for finding the prime factors of an integer. Many common cryptographic schemes rely on the fact that this is hard for a conventional computer to do. But cryptography could adapt, creating new kinds of codes that don’t rely on factorisation.
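    The number-theoretic core of Shor's algorithm can be shown classically on a toy example: factoring N reduces to finding the period r of a^x mod N. A quantum computer finds r efficiently; here the period is found by brute force just to illustrate the reduction, and the choice of N = 15 and a = 7 is an arbitrary illustrative one.

```python
# Classical sketch of the reduction behind Shor's algorithm: the period r
# of f(x) = a**x mod N yields a factor of N via gcd(a**(r//2) - 1, N)
# (when r is even and the result is nontrivial).
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1, found by brute force."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 15, 7                      # toy semiprime and a coprime base
r = find_period(a, N)             # period of 7**x mod 15
factor = gcd(a**(r // 2) - 1, N)  # a nontrivial factor of N
print(r, factor)                  # prints: 4 3
```

    The quantum speed-up lies entirely in the period-finding step: brute force takes time exponential in the number of digits of N, while Shor's quantum subroutine does not.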

    This is why, even as they near the 50-qubit milestone, IBM’s own researchers are keen to dispel the hype around it. At a table in the hallway that looks out onto the lush lawn outside, I encountered Jay Gambetta, a tall, easygoing Australian who researches quantum algorithms and potential applications for IBM’s hardware. “We’re at this unique stage,” he said, choosing his words with care. “We have this device that is more complicated than you can simulate on a classical computer, but it’s not yet controllable to the precision that you could do the algorithms you know how to do.”

    What gives the IBMers hope is that even an imperfect quantum computer might still be a useful one.

    Gambetta and other researchers have zeroed in on an application that Feynman envisioned back in 1981. Chemical reactions and the properties of materials are determined by the interactions between atoms and molecules. Those interactions are governed by quantum phenomena. A quantum computer can – at least in theory – model those in a way a conventional one cannot.

    Last year, Gambetta and colleagues at IBM used a seven-qubit machine to simulate the precise structure of beryllium hydride. At just three atoms, it is the most complex molecule ever modelled with a quantum system. Ultimately, researchers might use quantum computers to design more efficient solar cells, more effective drugs or catalysts that turn sunlight into clean fuels.

    Those goals are a long way off. But, Gambetta says, it may be possible to get valuable results from an error-prone quantum machine paired with a classical computer.

    Credit: Cosmos Magazine

    Physicist’s dream to engineer’s nightmare

    “The thing driving the hype is the realisation that quantum computing is actually real,” says Isaac Chuang, a lean, soft-spoken MIT professor. “It is no longer a physicist’s dream – it is an engineer’s nightmare.”

    Chuang led the development of some of the earliest quantum computers, working at IBM in Almaden, California, during the late 1990s and early 2000s. Though he is no longer working on them, he thinks we are at the beginning of something very big – that quantum computing will eventually even play a role in artificial intelligence.

    But he also suspects that the revolution will not really begin until a new generation of students and hackers get to play with practical machines. Quantum computers require not just different programming languages but a fundamentally different way of thinking about what programming is. As Gambetta puts it: “We don’t really know what the equivalent of ‘Hello, world’ is on a quantum computer.”

    We are beginning to find out. In 2016 IBM connected a small quantum computer to the cloud. Using a programming tool kit called QISKit, you can run simple programs on it; thousands of people, from academic researchers to schoolkids, have built QISKit programs that run basic quantum algorithms. Now Google and other companies are also putting their nascent quantum computers online. You can’t do much with them, but at least they give people outside the leading labs a taste of what may be coming.
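    For a taste of what such a first program does, the canonical two-qubit example – a Hadamard gate followed by a CNOT, producing an entangled Bell state – can be simulated in plain Python with no quantum toolkit installed. This is only a sketch of the kind of circuit QISKit tutorials typically open with, not QISKit code itself.

```python
# A "Hello, quantum world" in plain Python: apply a Hadamard to the first
# of two qubits, then a CNOT, and obtain the entangled Bell state
# (|00> + |11>) / sqrt(2). Basis ordering: index = 2*q0 + q1.
from math import sqrt

def apply(gate, state):
    """Multiply a 4x4 gate matrix into a 4-amplitude state vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / sqrt(2)
H0 = [[h, 0, h, 0],    # Hadamard acting on the first qubit only
      [0, h, 0, h],
      [h, 0, -h, 0],
      [0, h, 0, -h]]
CNOT = [[1, 0, 0, 0],  # flip the second qubit when the first is 1
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]   # start in |00>
state = apply(CNOT, apply(H0, state))
print(state)           # amplitudes of |00>, |01>, |10>, |11>
```

    The result puts equal amplitude on |00> and |11> and none on the other two states: measuring one qubit instantly fixes the other, which is the entanglement described earlier in the article.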

    The startup community is also getting excited. A short while after seeing IBM’s quantum computer, I went to the University of Toronto’s business school to sit in on a pitch competition for quantum startups. Teams of entrepreneurs nervously got up and presented their ideas to a group of professors and investors. One company hoped to use quantum computers to model the financial markets. Another planned to have them design new proteins. Yet another wanted to build more advanced AI systems. What went unacknowledged in the room was that each team was proposing a business built on a technology so revolutionary that it barely exists. Few seemed daunted by that fact.

    This enthusiasm could sour if the first quantum computers are slow to find a practical use. The best guess from those who truly know the difficulties – people like Bennett and Chuang – is that the first useful machines are still several years away. And that’s assuming the problem of managing and manipulating a large collection of qubits won’t ultimately prove intractable.

    Still, the experts hold out hope. When I asked him what the world might be like when my two-year-old son grows up, Chuang, who learned to use computers by playing with microchips, responded with a grin. “Maybe your kid will have a kit for building a quantum computer,” he said.

    See the full article here.



     
  • richardmitnick 9:46 am on July 2, 2018 Permalink | Reply
    Tags: Australia’s reputation for research integrity at the crossroads, Cosmos Magazine   

    From COSMOS Magazine: “Australia’s reputation for research integrity at the crossroads” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    02 July 2018
    David Vaux
    Peter Brooks
    Simon Gandevia

    Changes to Australia’s code of research conduct endanger its reputation for world-standard output.

    Researchers are under pressure to deliver publications and win grants. Shutterstock

    In 2018, Australia still does not have appropriate measures in place to maintain research integrity. And recent changes to our code of research conduct have weakened our already inadequate position.

    In contrast, China’s recent move to crack down on academic misconduct moves it into line with more than twenty European countries, the UK, USA, Canada and others that have national offices for research integrity.

    Australia risks its reputation by turning in the opposite direction.

    Research integrity is vital

    Our confidence in science relies on its integrity – relating to both the research literature (its freedom from errors), and the researchers themselves (that they behave in a principled way).

    However, the pressures on scientists to publish and win grants can lead to misconduct. This can range from cherry-picking results that support a favoured hypothesis, to making up experimental, animal or patient results from thin air. A recent report found that around 1 in 25 papers contained duplicated images (inconsistent with good research practice), and about half of these had features suggesting deliberate manipulation.

    For science to progress efficiently, and to remain credible, we need good governance structures, and as transparent and open a system as possible. Measures are needed to identify and correct errors, and to rectify misbehaviour.

    In Australia, one such measure is the Australian Code for the Responsible Conduct of Research. But recently published revisions of this code allow research integrity to be handled internally by institutions, and investigations to be kept secret. This puts at risk the hundreds of millions of dollars provided by the taxpayer to fund research.

    As a nation, we can and must do much better, before those who invest in and conduct research go elsewhere – to countries that are serious about the governance of research integrity.

    Learning from experience – the Hall affair

    Developed jointly by the National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) and Universities Australia, the Australian Code for the Responsible Conduct of Research has the stated goal of improving research integrity in Australia.

    The previous version of the Australian Code was written in 2007, partly in response to the “Hall affair”.

    In 2001, complaints of research misconduct were levelled at Professor Bruce Hall, an immunologist at the University of New South Wales (UNSW). After multiple inquiries, UNSW Vice Chancellor Rory Hume concluded that Hall was not guilty of scientific misconduct but had “committed errors of judgement sufficiently serious in two instances to warrant censure.” All allegations were denied by Hall.

    Commenting on the incident in 2004, Martin Van Der Weyden, Editor-in-Chief of the Medical Journal of Australia, highlighted the importance of external and independent review in investigating research practice:

    “The initial inquiry by the UNSW’s Dean of Medicine [was] patently crippled by perceptions of conflicts of interest — including an institution investigating allegations of improprieties carried out in its own backyard!

    Herein lies lesson number one — once allegations of scientific misconduct and fraud have been made, these should be addressed from the beginning by an external and independent inquiry.”

    An external and independent panel

    Avoiding conflicts of interest – real or perceived – was one of the reasons the 2007 version of the Australian Code required “institutions to establish independent external research misconduct inquiries to evaluate allegations of serious research misconduct that are contested.”

    But it seems this lesson has been forgotten. With respect to establishing a panel to investigate alleged misconduct, the revised Code says meekly:

    “There will be occasions where some or all members should be external to the institution.”

    Institutions will now be able to decide for themselves the terms of reference for investigations, and the number and composition of inquiry panels.

    Reducing research misconduct in Australia

    The chief justification for revising the 2007 Australian Code was to reduce research misconduct.

    In its initial draft form in 2016, the committee charged with this task suggested simply removing the term “research misconduct” from the Code, meaning that research misconduct would no longer officially exist in Australia.

    Unsurprisingly, this created a backlash, and, in the final version of the revised Code, a definition of the term “research misconduct” has returned:

    “Research misconduct: a serious breach of the Code which is also intentional or reckless or negligent.”

    However, institutions now have the option of “whether and how to use the term ‘research misconduct’ in relation to serious breaches of the Code”.

    Principles not enough

    The new Code is split into a set of principles of responsible research conduct that lists the responsibilities of researchers and institutions, together with a set of guides. The first guide describes how potential breaches of the Code should be investigated and managed.

    The principles of responsible research conduct are fine, and exhort researchers to be honest and fair, rigorous and respectful. No one would have an issue with this.

    Similarly, no one would think it unreasonable that institutions also have responsibilities, such as to identify and comply with relevant laws, regulations, guidelines and policies related to the conduct of research.

    However, having a set of lofty principles alone is not sufficient; there also need to be mechanisms to ensure compliance, not just by researchers, but also by institutions.

    Transparency, accountability, and trust

    The new Code says that institutions must ensure that all investigations are confidential. There is no requirement to make the outcome public, but only to “consider whether a public statement is appropriate to communicate the outcome of an investigation”.

    Combining mandatory confidentiality with self-regulation is bound to undermine trust in the governance of research integrity.

    In the new Code there is no mechanism for oversight. The outcome of a misconduct investigation can be appealed to the Australian Research Integrity Committee (ARIC), but only on the grounds of improper process, and not based on evidence or facts.

    Given that the conduct of investigations as well as the findings are to be confidential, it will be difficult to make an appeal to ARIC on any grounds.

    We need a national office of research integrity

    It is not clear why Australia does not learn from the experience of countries with independent agencies for research integrity, and adopt one of the models that is already working elsewhere in the world.

    Those who care about research and careers in research should ask their politicians and university Vice Chancellors why a national office of research integrity is considered necessary in European nations, the UK, the US, Canada and now China, but not in Australia.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:21 am on June 12, 2018 Permalink | Reply
    Tags: Cosmos Magazine

    From COSMOS Magazine: “‘Galactic archaeology’ provides clues to star formation, and origin of gold” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    12 June 2018
    Richard A Lovett

    Old stars in our own galaxy are yielding information that illuminates conditions in the early universe.

    Analysing ancient neighbourhood stars is a promising avenue of research. Image credit: temniy/Getty Images

    Cosmologists looking for fingerprints of the early universe need look no further than old stars in our own galaxy and its neighbours, astronomers say.

    Not that these stars date back to the dawn of time, but a few were formed when the universe was only a fraction of its current age, and the composition of their atmospheres reveals much about how conditions have changed between then and the time, much later, when our own sun was formed. They can even reveal the origin of important elements such as silver and gold.

    It’s a type of study that Gina Duggan, a graduate student in astrophysics at the California Institute of Technology in Pasadena, US, calls “galactic archaeology”. “[It] uses elements in stars alive today to probe the galaxy’s history,” she says.

    In fact, adds Timothy Beers, an astrophysicist at the University of Notre Dame in Indiana, US, it’s not just our own galaxy’s history that can be probed in this manner. Such stars provide clues to conditions throughout the early universe.

    Both researchers recently presented their ideas to the annual meeting of the American Astronomical Society in Denver, US.

    The first stars, cosmologists believe, were composed entirely of hydrogen and helium — the only elements formed directly in the Big Bang. These elements still compose the bulk of today’s stars; the sun, for instance, is 98% hydrogen and helium.

    But there’s a big difference between 98% and 100%. Pure hydrogen and helium stars tend to be hot and big, burning bright and dying young in giant explosions. In the process, they spray other elements into the cosmos – elements that enrich the next generation of stars, building toward the 2% of heavier elements found in the sun.

    Such chemically enriched stars, Beers says, don’t necessarily burn as brightly or die as young. Some can be smaller, with lifetimes of 10 billion or more years. “These low-mass stars we can still see today,” he says.

    Spectroscopic analysis can determine how much “pollution” these stars picked up from materials ejected by their predecessors. This allows astronomers to pick out early second-generation stars from other stars populating the Milky Way galaxy and its neighbours, allowing them to be used as cosmological time capsules.

    “We can learn about the chemistry of the very early universe right in our own backyard, not just from studying faint sources more than 10 billion light years away,” Beers says.

    In fact, one of these stars, known as BD+44:493, is only 600 light years away.

    “It’s visible with binoculars,” Beers exclaims. “But it’s preserving stuff from the early universe!”

    Kris Youakim of the Leibniz Institute for Astrophysics in Potsdam, Germany, adds that such stars can also be used to study the way large galaxies like our own were formed by mergers of numerous smaller ones. Such mergers, he says, tore the smaller galaxies apart, producing long “spaghettified” streamers.

    But by using old stars similar to those studied by Beers as markers, he adds, it’s possible to find these streamers and trace the history of how our galaxy came together.

    Other researchers believe that nearby dwarf galaxies that have not yet merged into larger galaxies are good laboratories for understanding processes in the early universe, where dwarf galaxies dominated.

    Small Magellanic Cloud. NASA/ESA Hubble and ESO/Digitized Sky Survey 2

    Large Magellanic Cloud. Adrian Pingstone December 2003

    Magellanic Bridge ESA Gaia satellite. Image credit V. Belokurov D. Erkal A. Mellinger.

    “This is an under-utilised but important way to get at where and how the first stars might have formed and the kind of galaxies that helped,” says Aparna Venkatesan of the University of San Francisco, California, US.

    But the most exciting find involves the origin of the Earth’s gold.

    Geologically, of course, we know it comes from gold mines. But before the Earth was formed there had to have been gold in the dust cloud that created the solar system, and there are two theories for how that gold could have been made.

    One, says Duggan, is that it was formed in the heart of giant stellar explosions called magnetorotational supernovae. Another is that it was made in an equally titanic process: the collision of the remnants of dead stars known as neutron stars.

    The former tended to occur early in the universe’s history, when giant stars met their catastrophic ends. The latter mostly came later, following the deaths of later-generation stars.

    To figure out which it was, Duggan’s team looked at the concentration of a related element, barium, in stars of a variety of ages. By comparing the amount of barium to that of iron, which is known to build up steadily with each new generation of stars, she was able to determine whether barium – acting as a proxy for gold – appeared on the scene early, a sign of production by magnetorotational supernovae, or more recently, a sign that it came from neutron star collisions.
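Comparisons like this are normally expressed in the astronomers’ bracket notation, [X/Y]: the logarithm of the ratio of two elements’ abundances in a star, measured relative to the same ratio in the Sun. The sketch below illustrates the notation only; the abundance numbers are made up for illustration and are not Duggan’s data.

```python
import math

def bracket(n_x, n_y, solar_n_x, solar_n_y):
    """Astronomers' [X/Y] notation: the log10 abundance ratio of two
    elements in a star, measured relative to the same ratio in the Sun."""
    return math.log10(n_x / n_y) - math.log10(solar_n_x / solar_n_y)

# Purely illustrative numbers: a star with the solar barium-to-iron ratio
# has [Ba/Fe] = 0 ...
print(bracket(1.0, 10.0, 1.0, 10.0))   # prints 0.0
# ... and a star with ten times less barium per iron atom has [Ba/Fe] near -1.
print(bracket(0.1, 10.0, 1.0, 10.0))
```

In this notation, a population of stars whose [Ba/Fe] is already high at low [Fe/H] (i.e. early) would point to supernova production; a ratio that climbs only in younger stars points to the delayed neutron-star-merger channel.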

    Evan Kirby, a researcher on the project, calls it another example of galactic archaeology in operation.

    “This study … used elements present in stars today to ‘dig up’ evidence of the history of element production in galaxies,” he says.

    “By measuring the ratio of elements in stars with different ages, we are able to say when these elements were created.”

    The conclusion: gold and related elements were largely formed later on, in neutron star collisions.

    UC Santa Cruz


    A UC Santa Cruz special report

    Tim Stephens

    Astronomer Ryan Foley says “observing the explosion of two colliding neutron stars” [see https://sciencesprings.wordpress.com/2017/10/17/from-ucsc-first-observations-of-merging-neutron-stars-mark-a-new-era-in-astronomy ]–the first visible event ever linked to gravitational waves–is probably the biggest discovery he’ll make in his lifetime. That’s saying a lot for a young assistant professor who presumably has a long career still ahead of him.

    The first optical image of a gravitational wave source was taken by a team led by Ryan Foley of UC Santa Cruz using the Swope Telescope at the Carnegie Institution’s Las Campanas Observatory in Chile. This image of Swope Supernova Survey 2017a (SSS17a, indicated by arrow) shows the light emitted from the cataclysmic merger of two neutron stars. (Image credit: 1M2H Team/UC Santa Cruz & Carnegie Observatories/Ryan Foley)

    Carnegie Institution Swope telescope at Las Campanas Observatory, Chile, 100 kilometres (62 mi) northeast of the city of La Serena, near the north end of a 7 km (4.3 mi) long mountain ridge. Cerro Las Campanas, near the southern end, stands over 2,500 m (8,200 ft) high.

    A neutron star forms when a massive star runs out of fuel and explodes as a supernova, throwing off its outer layers and leaving behind a collapsed core composed almost entirely of neutrons. Neutrons are the uncharged particles in the nucleus of an atom, where they are bound together with positively charged protons. In a neutron star, they are packed together just as densely as in the nucleus of an atom, resulting in an object with one to three times the mass of our sun but only about 12 miles wide.

    “Basically, a neutron star is a gigantic atom with the mass of the sun and the size of a city like San Francisco or Manhattan,” said Foley, an assistant professor of astronomy and astrophysics at UC Santa Cruz.

    These objects are so dense that a cup of neutron star material would weigh as much as Mount Everest, and a teaspoon would weigh a billion tons. It’s as dense as matter can get without collapsing into a black hole.
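Those comparisons can be sanity-checked with back-of-envelope arithmetic. A sketch, assuming a typical 1.4-solar-mass neutron star, the roughly 12-mile diameter quoted above, and an assumed 250 mL cup:

```python
import math

M_SUN = 1.989e30              # kg, mass of the Sun
mass = 1.4 * M_SUN            # kg, a typical neutron star mass
radius = 6.0 * 1609.0         # m, half of the ~12-mile diameter

volume = (4.0 / 3.0) * math.pi * radius**3
density = mass / volume       # mean density, roughly 7e17 kg/m^3

cup = 2.5e-4                  # m^3, an assumed 250 mL cup
print(f"mean density: {density:.1e} kg/m^3")
print(f"one cup:      {density * cup:.1e} kg")  # ~2e14 kg, the scale of Mount Everest
```

The mean density comes out near 10^18 kg/m^3, and a cup of it weighs on the order of 10^14 kg, matching the Everest comparison.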

    THE MERGER

    Like other stars, neutron stars sometimes occur in pairs, orbiting each other and gradually spiraling inward. Eventually, they come together in a catastrophic merger that distorts space and time (creating gravitational waves) and emits a brilliant flare of electromagnetic radiation, including visible, infrared, and ultraviolet light, x-rays, gamma rays, and radio waves. Merging black holes also create gravitational waves, but there’s nothing to be seen because no light can escape from a black hole.

    Foley’s team was the first to observe the light from a neutron star merger that took place on August 17, 2017, and was detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO).


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    Now, for the first time, scientists can study both the gravitational waves (ripples in the fabric of space-time), and the radiation emitted from the violent merger of the densest objects in the universe.

    The UC Santa Cruz team found SSS17a by comparing a new image of the galaxy N4993 (right) with images taken four months earlier by the Hubble Space Telescope (left). The arrows indicate where SSS17a was absent from the Hubble image and visible in the new image from the Swope Telescope. (Image credits: Left, Hubble/STScI; Right, 1M2H Team/UC Santa Cruz & Carnegie Observatories/Ryan Foley)

    It’s that combination of data, and all that can be learned from it, that has astronomers and physicists so excited. The observations of this one event are keeping hundreds of scientists busy exploring its implications for everything from fundamental physics and cosmology to the origins of gold and other heavy elements.


    A small team of UC Santa Cruz astronomers was the first to observe light from two neutron stars merging in August. The implications are huge.

    ALL THE GOLD IN THE UNIVERSE

    It turns out that the origins of the heaviest elements, such as gold, platinum and uranium—pretty much everything heavier than iron—have been an enduring conundrum. All the lighter elements have well-explained origins in the nuclear fusion reactions that make stars shine or in the explosions of stars (supernovae). Initially, astrophysicists thought supernovae could account for the heavy elements, too, but there have always been problems with that theory, says Enrico Ramirez-Ruiz, professor and chair of astronomy and astrophysics at UC Santa Cruz.

    The violent merger of two neutron stars is thought to involve three main energy-transfer processes, shown in this diagram, that give rise to the different types of radiation seen by astronomers, including a gamma-ray burst and a kilonova explosion seen in visible light. (Image credit: Murguia-Berthier et al., Science)

    A theoretical astrophysicist, Ramirez-Ruiz has been a leading proponent of the idea that neutron star mergers are the source of the heavy elements. Building a heavy atomic nucleus means adding a lot of neutrons to it. This process is called rapid neutron capture, or the r-process, and it requires some of the most extreme conditions in the universe: extreme temperatures, extreme densities, and a massive flow of neutrons. A neutron star merger fits the bill.

    Ramirez-Ruiz and other theoretical astrophysicists use supercomputers to simulate the physics of extreme events like supernovae and neutron star mergers. This work always goes hand in hand with observational astronomy. Theoretical predictions tell observers what signatures to look for to identify these events, and observations tell theorists if they got the physics right or if they need to tweak their models. The observations by Foley and others of the neutron star merger now known as SSS17a are giving theorists, for the first time, a full set of observational data to compare with their theoretical models.

    According to Ramirez-Ruiz, the observations support the theory that neutron star mergers can account for all the gold in the universe, as well as about half of all the other elements heavier than iron.

    RIPPLES IN THE FABRIC OF SPACE-TIME

    Einstein predicted the existence of gravitational waves in 1916 in his general theory of relativity, but until recently they were impossible to observe. LIGO’s extraordinarily sensitive detectors achieved the first direct detection of gravitational waves, from the collision of two black holes, in 2015. Gravitational waves are created by any massive accelerating object, but the strongest waves (and the only ones we have any chance of detecting) are produced by the most extreme phenomena.

    Two massive compact objects—such as black holes, neutron stars, or white dwarfs—orbiting around each other faster and faster as they draw closer together are just the kind of system that should radiate strong gravitational waves. Like ripples spreading in a pond, the waves get smaller as they spread outward from the source. By the time they reached Earth, the ripples detected by LIGO caused distortions of space-time thousands of times smaller than the nucleus of an atom.
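The size of that distortion follows from the strain (the fractional change in length) multiplied by the detector’s arm length. A rough sketch with order-of-magnitude numbers; the strain and nucleus size here are illustrative assumptions, not measured values for this event:

```python
strain = 1e-21       # dimensionless strain of a strong event, order of magnitude
arm = 4.0e3          # m, length of one LIGO interferometer arm
nucleus = 1.0e-14    # m, rough diameter of a heavy atomic nucleus

delta_L = strain * arm   # absolute change in arm length
print(f"arm length changes by ~{delta_L:.0e} m")
print(f"a nucleus is ~{nucleus / delta_L:.0f} times larger")
```

The arm-length change is around 4 × 10^-18 m, a few thousand times smaller than a heavy nucleus, consistent with the figure quoted above.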

    The rarefied signals recorded by LIGO’s detectors not only prove the existence of gravitational waves, they also provide crucial information about the events that produced them. Combined with the telescope observations of the neutron star merger, it’s an incredibly rich set of data.

    LIGO can tell scientists the masses of the merging objects and the mass of the new object created in the merger, which reveals whether the merger produced another neutron star or a more massive object that collapsed into a black hole. To calculate how much mass was ejected in the explosion, and how much mass was converted to energy, scientists also need the optical observations from telescopes. That’s especially important for quantifying the nucleosynthesis of heavy elements during the merger.

    LIGO can also provide a measure of the distance to the merging neutron stars, which can now be compared with the distance measurement based on the light from the merger. That’s important to cosmologists studying the expansion of the universe, because the two measurements are based on different fundamental forces (gravity and electromagnetism), giving completely independent results.
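The comparison works because the Hubble constant is simply recession velocity divided by distance. A hedged sketch with round, illustrative numbers in the ballpark reported for this event, not the published values:

```python
velocity = 3000.0   # km/s, host galaxy recession velocity (illustrative)
distance = 43.0     # Mpc, distance from the gravitational-wave amplitude (illustrative)

H0 = velocity / distance          # Hubble constant, km/s per Mpc
print(f"H0 ~ {H0:.0f} km/s/Mpc")  # ~70, consistent with other methods
```

This “standard siren” estimate needs no cosmic distance ladder: the wave amplitude gives the distance directly, and the host galaxy’s light gives the velocity.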

    “This is a huge step forward in astronomy,” Foley said. “Having done it once, we now know we can do it again, and it opens up a whole new world of what we call ‘multi-messenger’ astronomy, viewing the universe through different fundamental forces.”

    IN THIS REPORT

    Neutron stars
    A team from UC Santa Cruz was the first to observe the light from a neutron star merger that took place on August 17, 2017 and was detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO)

    Graduate students and postdoctoral scholars at UC Santa Cruz played key roles in the dramatic discovery and analysis of colliding neutron stars.

    Astronomer Ryan Foley leads a team of young graduate students and postdoctoral scholars who have pulled off an extraordinary coup. Following up on the detection of gravitational waves from the violent merger of two neutron stars, Foley’s team was the first to find the source with a telescope and take images of the light from this cataclysmic event. In so doing, they beat much larger and more senior teams with much more powerful telescopes at their disposal.

    “We’re sort of the scrappy young upstarts who worked hard and got the job done,” said Foley, an untenured assistant professor of astronomy and astrophysics at UC Santa Cruz.

    David Coulter, graduate student

    The discovery on August 17, 2017, has been a scientific bonanza, yielding over 100 scientific papers from numerous teams investigating the new observations. Foley’s team is publishing seven papers, each of which has a graduate student or postdoc as the first author.

    “I think it speaks to Ryan’s generosity and how seriously he takes his role as a mentor that he is not putting himself front and center, but has gone out of his way to highlight the roles played by his students and postdocs,” said Enrico Ramirez-Ruiz, professor and chair of astronomy and astrophysics at UC Santa Cruz and the most senior member of Foley’s team.

    “Our team is by far the youngest and most diverse of all of the teams involved in the follow-up observations of this neutron star merger,” Ramirez-Ruiz added.

    Charles Kilpatrick, postdoctoral scholar

    Charles Kilpatrick, a 29-year-old postdoctoral scholar, was the first person in the world to see an image of the light from colliding neutron stars. He was sitting in an office at UC Santa Cruz, working with first-year graduate student Cesar Rojas-Bravo to process image data as it came in from the Swope Telescope in Chile. To see if the Swope images showed anything new, he had also downloaded “template” images taken in the past of the same galaxies the team was searching.

    Ariadna Murguia-Berthier, graduate student

    “In one image I saw something there that was not in the template image,” Kilpatrick said. “It took me a while to realize the ramifications of what I was seeing. This opens up so much new science, it really marks the beginning of something that will continue to be studied for years down the road.”

    At the time, Foley and most of the others in his team were at a meeting in Copenhagen. When they found out about the gravitational wave detection, they quickly got together to plan their search strategy. From Copenhagen, the team sent instructions to the telescope operators in Chile telling them where to point the telescope. Graduate student David Coulter played a key role in prioritizing the galaxies they would search to find the source, and he is the first author of the discovery paper published in Science.

    Matthew Siebert, graduate student

    “It’s still a little unreal when I think about what we’ve accomplished,” Coulter said. “For me, despite the euphoria of recognizing what we were seeing at the moment, we were all incredibly focused on the task at hand. Only afterward did the significance really sink in.”

    Just as Coulter finished writing his paper about the discovery, his wife went into labor, giving birth to a baby girl on September 30. “I was doing revisions to the paper at the hospital,” he said.

    It’s been a wild ride for the whole team, first in the rush to find the source, and then under pressure to quickly analyze the data and write up their findings for publication. “It was really an all-hands-on-deck moment when we all had to pull together and work quickly to exploit this opportunity,” said Kilpatrick, who is first author of a paper comparing the observations with theoretical models.

    César Rojas Bravo, graduate student

    Graduate student Matthew Siebert led a paper analyzing the unusual properties of the light emitted by the merger. Astronomers have observed thousands of supernovae (exploding stars) and other “transients” that appear suddenly in the sky and then fade away, but never before have they observed anything that looks like this neutron star merger. Siebert’s paper concluded that there is only a one in 100,000 chance that the transient they observed is not related to the gravitational waves.

    Ariadna Murguia-Berthier, a graduate student working with Ramirez-Ruiz, is first author of a paper synthesizing data from a range of sources to provide a coherent theoretical framework for understanding the observations.

    Another aspect of the discovery of great interest to astronomers is the nature of the galaxy and the galactic environment in which the merger occurred. Postdoctoral scholar Yen-Chen Pan led a paper analyzing the properties of the host galaxy. Enia Xhakaj, a new graduate student who had just joined the group in August, got the opportunity to help with the analysis and be a coauthor on the paper.

    Yen-Chen Pan, postdoctoral scholar

    “There are so many interesting things to learn from this,” Foley said. “It’s a great experience for all of us to be part of such an important discovery.”

    Enia Xhakaj, graduate student

    IN THIS REPORT

    Scientific Papers from the 1M2H Collaboration

    Coulter et al., Science, Swope Supernova Survey 2017a (SSS17a), the Optical Counterpart to a Gravitational Wave Source

    Drout et al., Science, Light Curves of the Neutron Star Merger GW170817/SSS17a: Implications for R-Process Nucleosynthesis

    Shappee et al., Science, Early Spectra of the Gravitational Wave Source GW170817: Evolution of a Neutron Star Merger

    Kilpatrick et al., Science, Electromagnetic Evidence that SSS17a is the Result of a Binary Neutron Star Merger

    Siebert et al., ApJL, The Unprecedented Properties of the First Electromagnetic Counterpart to a Gravitational-wave Source

    Pan et al., ApJL, The Old Host-galaxy Environment of SSS17a, the First Electromagnetic Counterpart to a Gravitational-wave Source

    Murguia-Berthier et al., ApJL, A Neutron Star Binary Merger Model for GW170817/GRB170817a/SSS17a

    Kasen et al., Nature, Origin of the heavy elements in binary neutron star mergers from a gravitational wave event

    Abbott et al., Nature, A gravitational-wave standard siren measurement of the Hubble constant (The LIGO Scientific Collaboration and The Virgo Collaboration, The 1M2H Collaboration, The Dark Energy Camera GW-EM Collaboration and the DES Collaboration, The DLT40 Collaboration, The Las Cumbres Observatory Collaboration, The VINROUGE Collaboration & The MASTER Collaboration)

    Abbott et al., ApJL, Multi-messenger Observations of a Binary Neutron Star Merger

    PRESS RELEASES AND MEDIA COVERAGE


    Watch Ryan Foley tell the story of how his team found the neutron star merger in the video below (2.5 hours).

    Press releases:

    UC Santa Cruz Press Release

    UC Berkeley Press Release

    Carnegie Institution of Science Press Release

    LIGO Collaboration Press Release

    National Science Foundation Press Release

    Media coverage:

    The Atlantic – The Slack Chat That Changed Astronomy

    Washington Post – Scientists detect gravitational waves from a new kind of nova, sparking a new era in astronomy

    New York Times – LIGO Detects Fierce Collision of Neutron Stars for the First Time

    Science – Merging neutron stars generate gravitational waves and a celestial light show

    CBS News – Gravitational waves – and light – seen in neutron star collision

    CBC News – Astronomers see source of gravitational waves for 1st time

    San Jose Mercury News – A bright light seen across the universe, proving Einstein right

    Popular Science – Gravitational waves just showed us something even cooler than black holes

    Scientific American – Gravitational Wave Astronomers Hit Mother Lode

    Nature – Colliding stars spark rush to solve cosmic mysteries

    National Geographic – In a First, Gravitational Waves Linked to Neutron Star Crash

    Associated Press – Astronomers witness huge cosmic crash, find origins of gold

    Science News – Neutron star collision showers the universe with a wealth of discoveries

    UCSC press release
    First observations of merging neutron stars mark a new era in astronomy

    Credits

    Writing: Tim Stephens
    Video: Nick Gonzales
    Photos: Carolyn Lagattuta
    Header image: Illustration by Robin Dienel courtesy of the Carnegie Institution for Science
    Design and development: Rob Knight
    Project managers: Sherry Main, Scott Hernandez-Jason, Tim Stephens

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Gemini South telescope on Cerro Pachón, Chile, near the Cerro Tololo Inter-American Observatory (CTIO) and La Serena, at an altitude of about 8,900 feet

    Noted in the video but not in the article:

    NASA/Chandra Telescope

    NASA/SWIFT Telescope

    NRAO/Karl V Jansky VLA, on the Plains of San Agustin fifty miles west of Socorro, NM, USA

    Prompt telescope CTIO Chile

    NASA NuSTAR X-ray telescope

    See the full UCSC article here

    Without such impacts, perhaps everything from gold rushes to the history of precious coins might have been entirely different.

    See the full article here.



     
  • richardmitnick 10:18 am on May 14, 2018 Permalink | Reply
    Tags: Cosmos Magazine, University of Sydney

    From University of Sydney and Durham University via COSMOS: “Multiverse theory cops a blow after dark energy findings” 

    From

    U Sydney bloc

    University of Sydney

    and

    Durham U bloc

    Durham University

    via

    COSMOS

    14 May 2018
    Andrew Masterson

    Each universe in a multiverse contains different levels of dark energy, according to the dominant theory. Credit: Stolk/Getty Images

    The question of dark energy in one universe does not require others to provide an answer.

    A hypothetical multiverse seems less likely after modelling by researchers in Australia and the UK threw one of its key assumptions into doubt.

    The multiverse concept suggests that our universe is but one of many. It finds support among some of the world’s most accomplished physicists, including Brian Greene, Max Tegmark, Neil deGrasse Tyson and the late Stephen Hawking.

    One of the prime attractions of the idea is that it potentially accounts for an anomaly in calculations for dark energy.

    The mysterious force is thought to be responsible for the accelerating expansion of our own universe. Current theories, however, predict there should be rather more of it around than there appears to be. This throws up another set of problems: if the amount of dark energy around was as much as equations require – and that is many trillions of times the level that seems to exist – the universe would expand so rapidly that stars and planets would not form – and life, thus, would not be possible.

    The multiverse idea to an extent accounts for and accommodates this oddly small – but life-permitting – dark energy quotient. Essentially it permits a curiously self-serving explanation: there are a vast number of universes all with differing amounts of dark energy. We exist in one that has an amount low enough to permit stars and so on to form, and thus life to exist. (And we find ourselves here, runs the logic, because we couldn’t find ourselves anywhere else.)

    So far, so anthropic. But now a group of astronomers, including Luke Barnes from the University of Sydney in Australia and Jaime Salcido from Durham University in the UK, has published two papers in the journal Monthly Notices of the Royal Astronomical Society [Galaxy formation efficiency and the multiverse explanation of the cosmological constant with EAGLE simulations and The impact of dark energy on galaxy formation. What does the future of our Universe hold?] that show the dark energy and star formation balance isn’t quite as fine as previous estimates have suggested.

    The team created simulations of the universe using the supercomputer architecture contained within the Evolution and Assembly of GaLaxies and their Environments (EAGLE) project. This is a UK-based collaboration that models some 10,000 galaxies over a distance of 300 million light-years, and compares the results with actual observations from the Hubble Space Telescope and other observatories.

    The simulations allowed the researchers to adjust the amount of dark energy in the universe and watch what happened.

    The results were a surprise. The research revealed that the amount of dark energy could be increased a couple of hundred times – or reduced equally drastically – without substantially affecting anything else.

    “For many physicists, the unexplained but seemingly special amount of dark energy in our universe is a frustrating puzzle,” says Salcido.

    “Our simulations show that even if there was much more dark energy or even very little in the universe then it would only have a minimal effect on star and planet formation.”

    And this, he suggests, implies that life could potentially exist in many multiverse universes – ironically enough, an uncomfortable conclusion.

    “The multiverse was previously thought to explain the observed value of dark energy as a lottery – we have a lucky ticket and live in the universe that forms beautiful galaxies which permit life as we know it,” says Barnes.

    “Our work shows that our ticket seems a little too lucky, so to speak. It’s more special than it needs to be for life. This is a problem for the multiverse; a puzzle remains.”

    It is a puzzle that goes right to the heart of the matter: if the dark energy assumptions are flawed, does a multiverse even exist? The researchers acknowledge that their results do not preclude it – but they do diminish the likelihood.

    “The formation of stars in a universe is a battle between the attraction of gravity, and the repulsion of dark energy,” says co-author Richard Bower, also from Durham University.

    “We have found in our simulations that universes with much more dark energy than ours can happily form stars. So why such a paltry amount of dark energy in our universe?

    “I think we should be looking for a new law of physics to explain this strange property of our universe, and the multiverse theory does little to rescue physicists’ discomfort.”

    See the full article here .

    Please help promote STEM in your local schools.

    stem

    Stem Education Coalition

    Durham U campus

    Durham University is distinctive – a residential collegiate university with long traditions and modern values. We seek the highest distinction in research and scholarship and are committed to excellence in all aspects of education and transmission of knowledge. Our research and scholarship affect every continent. We are proud to be an international scholarly community which reflects the ambitions of cultures from around the world. We promote individual participation, providing a rounded education in which students, staff and alumni gain both the academic and the personal skills required to flourish.

    U Sydney campus

    Our founding principle as Australia’s first university was that we would be a modern and progressive institution. It’s an ideal we still hold dear today.

    When William Charles Wentworth proposed the idea of Australia’s first university, the University of Sydney, in 1850, he imagined “the opportunity for the child of every class to become great and useful in the destinies of this country”.

    We’ve stayed true to that original value and purpose by promoting inclusion and diversity for the past 160 years.

    It’s the reason that, as early as 1881, we admitted women on an equal footing to male students. Oxford University didn’t follow suit until 30 years later, and Jesus College at Cambridge University did not begin admitting female students until 1974.

    It’s also why, from the very start, talented students of all backgrounds were given the chance to access further education through bursaries and scholarships.

    Today we offer hundreds of scholarships to support and encourage talented students, and a range of grants and bursaries to those who need a financial helping hand.

     
  • richardmitnick 9:41 am on May 14, 2018 Permalink | Reply
    Tags: , Biography, Charles Proteus Steinmetz, Cosmos Magazine   

    From COSMOS Magazine: “This week in science history: The “Wizard of Schenectady” is born” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    09 April 2018
    Jeff Glorfeld

    Einstein’s pal Charles Proteus Steinmetz transformed the electric power industry.

    1

    Charles Steinmetz taking time out from General Electric to relax with an ostrich. Credit: Bettmann/Getty Images

    He was called the Wizard of Schenectady, and counted as friends Albert Einstein, Nikola Tesla and Thomas Edison. He stood little more than 120 centimetres tall, his body contorted by a hump caused by a congenital deformity known as abnormal kyphosis, an extreme curvature of the upper spine. He was one of the greatest mathematicians and electrical engineers of his time, whose discoveries continue to resonate today.

    Charles Proteus Steinmetz was born Karl August Rudolph Steinmetz on April 9, 1865. He Americanised his name when he emigrated to the United States in 1889. He chose Proteus as his middle name, derived from a nickname bestowed on him in Germany.

    Steinmetz went to work for a small electrical firm in New York, and his experiments on power losses in the magnetic materials used in machinery led to his first important work, the law of hysteresis [IEEE EXPLORE DIGITAL LIBRARY], which deals with the power loss that occurs in electrical devices when magnetic action is converted to unusable heat. His discovery, published in 1892, allowed engineers to calculate and minimise losses of electric power owing to magnetism and change their designs accordingly.
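The relationship Steinmetz published is still used in magnetics engineering as Steinmetz's equation: hysteresis loss per unit core volume scales roughly as the frequency times the peak flux density raised to the power 1.6. A minimal sketch, with a purely hypothetical material coefficient `eta` chosen for illustration:

```python
# Sketch of Steinmetz's empirical hysteresis-loss law:
#   P = eta * f * B_max**1.6   (loss per unit core volume)
# eta is a material-dependent coefficient; the value below is hypothetical.

def hysteresis_loss(f_hz, b_max_tesla, eta=0.025):
    """Per-unit-volume hysteresis loss via Steinmetz's empirical law."""
    return eta * f_hz * b_max_tesla ** 1.6

# Doubling the peak flux density multiplies the loss by 2**1.6 (about 3x),
# which is why designers keep B_max low in transformer cores.
loss_low = hysteresis_loss(60, 1.0)
loss_high = hysteresis_loss(60, 2.0)
print(loss_high / loss_low)  # ≈ 3.03
```

The exponent 1.6 is the figure Steinmetz fitted from his 1892 measurements; modern ferrites are characterised with generalised exponents fitted per material.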

    More important was his development of a practical method for making mathematical calculations when dealing with alternating-current circuits. Steinmetz formulated a symbolic method of calculating alternating-current phenomena, which simplified an extremely complicated field so that average engineers could work in it.

    This development was largely responsible for the rapid progress made in the commercial introduction of alternating-current apparatus.
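Steinmetz's symbolic method survives today as phasor analysis: representing AC voltages and currents as complex numbers turns differential equations into ordinary algebra. A minimal sketch, using assumed illustrative values (230 V, 50 Hz, a series resistor and inductor):

```python
import cmath
import math

# Phasor (complex-number) analysis of a series RL circuit.
# All component values here are illustrative assumptions.
R = 10.0   # resistance, ohms
L = 0.05   # inductance, henries
f = 50.0   # supply frequency, Hz
V = 230.0  # RMS supply voltage, volts

omega = 2 * math.pi * f
Z = complex(R, omega * L)  # impedance as one complex number: R + j*omega*L
I = V / Z                  # Ohm's law now works directly on complex values

print(abs(I))                        # RMS current magnitude, amperes
print(math.degrees(cmath.phase(I)))  # current lags voltage by this angle
```

One complex division replaces what would otherwise be a trigonometric bookkeeping exercise; that is precisely the simplification that let "average engineers" work with alternating current.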

    In 1893, the newly formed General Electric Company, based in Schenectady, New York, bought Steinmetz’s employer, primarily for its patents, but Steinmetz was considered one of its major assets.

    There is a story about Steinmetz which first appeared in Life Magazine in 1965. Jack B. Scott wrote to tell of his father’s encounter with the Wizard of Schenectady at Henry Ford’s factory in Dearborn, Michigan.

    Ford’s engineers couldn’t solve the problems they were having with a gigantic generator, so Scott Senior asked GE to send Steinmetz to the plant. Upon arriving, Steinmetz asked for a notebook, pencil and cot. He listened to the generator and made notes for two days and nights.

    On the second night, he asked for a ladder, climbed up the generator, and made a chalk mark on its side. Then he told Ford’s engineers to remove a plate at the mark and replace sixteen windings from the field coil. They did, and the generator performed to perfection.

    Ford was thrilled, until he got an invoice from GE for $10,000. He asked for an itemised bill.

    Steinmetz responded personally to Ford’s request with the following:

    “Making chalk mark on generator: $1.

    “Knowing where to make mark $9,999.”

    Ford paid the bill.

    Steinmetz died on October 26, 1923.

    See the full article here .


     
  • richardmitnick 12:08 pm on March 27, 2018 Permalink | Reply
    Tags: , , , , Cosmos Magazine, , , , National Computational Infrastructure at the Australian National University in Canberra, SkyMapper telescope at Siding Spring Observatory, SN KSN 2015K,   

    From Space Telescope Science Institute via COSMOS: “Gone in a flash: supernova burns up in just 25 days” 

    Space Telescope Science Institute

    COSMOS

    27 March 2018
    Lauren Fuge

    Huge, bright and incredibly violent, a new supernova provides new challenges for astronomers.

    1
    An artist’s impression of how the explosive light of the supernova was hidden for a while behind a cocoon of ejected dust. Credit: Nature Astronomy.

    Astronomers have witnessed a blazing supernova explosion that faded away 10 times faster than expected.

    A supernova is the violent death of a massive star, typically occurring when it exhausts its fuel supply and collapses under its own weight, generating a powerful shockwave that blasts light and material out into space.

    Supernovae often blaze so brightly that they briefly outshine all the other stars in their host galaxy. They show off for months on end — in 1054, a supernova could be seen during the day for three weeks and only disappeared completely after two years. Its remnants are known as the Crab Nebula.

    2
    The Crab Nebula in all its glory. NASA, ESA, NRAO/AUI/NSF and G. Dubner (University of Buenos Aires).

    Now an international team of astronomers, led by Armin Rest from the Space Telescope Science Institute in Baltimore, US, has observed a supernova that rapidly soared to its peak brightness in 2.2 days then faded away in just 25.

    “When I first saw the Kepler data, and realised how short this transient is, my jaw dropped,” recalls Rest.

    The supernova, dubbed KSN 2015K, is part of a puzzling class of rare events called Fast-Evolving Luminous Transients (FELTs).

    4
    KSN 2015K’s host is the star-forming spiral galaxy 2MASX-J13315109-1044061. Image credit: Rest et al: https://www.nature.com/articles/s41550-018-0423-2.

    FELTs don’t fit into existing supernova models and astronomers are still debating their sources. Previous suggestions include the afterglow of a gamma-ray burst, a supernova turbo-boosted by a magnetically powerful neutron star, or a failed example of a special type of binary-star supernova known as a Type Ia. KSN 2015K is the most extreme example found so far.

    In a paper published in the journal Nature Astronomy, the team says that KSN 2015K’s behaviour can most likely be explained by its surroundings: the star was swathed in dense gas and dust that it ejected in its old age, like a caterpillar spinning a cocoon. When the supernova detonated, it took some time for the resulting shock wave to slam into the shell of material and produce a burst of light, becoming visible to astronomers.

    KSN 2015K was captured by NASA’s Kepler Space Telescope, which is designed to hunt for planets by noticing the tiny, temporary dips in light from far-away stars when planets pass in front of them.

    NASA/Kepler Telescope

    Planet transit. NASA/Ames

    This exact skill is also useful in studying supernovae and other brief, explosive events.

    “Using Kepler’s high-speed light-measuring capabilities, we’ve been able to see this exotic star explosion in incredible detail,” says team member Brad Tucker, an astrophysicist from the Australian National University.

    Co-author David Khatami from the University of California, Berkeley, US, adds that this is the first time astronomers can test FELT models to a high degree of accuracy. “The fact that Kepler completely captured the rapid evolution really constrains the exotic ways in which stars die,” he says.

    Australian researchers and facilities were also key to this discovery. Follow-up observations were made with the SkyMapper telescope at Siding Spring Observatory, and then processed by the National Computational Infrastructure at the Australian National University in Canberra.

    ANU Skymapper telescope, a fully automated 1.35 m (4.4 ft) wide-angle optical telescope, at Siding Spring Observatory , near Coonabarabran, New South Wales, Australia, Altitude 1,165 m (3,822 ft)

    Siding Spring Observatory, near Coonabarabran, New South Wales, Australia, Altitude 1,165 m (3,822 ft)

    4
    The National Computational Infrastructure building at the Australian National University

    Tucker says that by learning more about how stars live and die, astronomers can better understand solar systems as a whole, including the potential life on orbiting planets.

    He concludes: “With the imminent launch of NASA’s new space telescope, TESS, we hope to find even more of these rare and violent explosions.”

    NASA/TESS

    See the full article here . Other articles here and here and here.


    We are the Space Telescope Science Institute in Baltimore, Maryland, operated by the Association of Universities for Research in Astronomy. We help humanity explore the universe with advanced space telescopes and ever-growing data archives.


    Association of Universities for Research in Astronomy

    Founded in 1981, we have helped guide the most famous observatory in history, the Hubble Space Telescope.

    NASA/ESA Hubble Telescope

    Since its launch in 1990, we have performed the science operations for Hubble. We also lead the science and mission operations for the James Webb Space Telescope (JWST), scheduled for launch in 2019.

    NASA/ESA/CSA Webb Telescope annotated

    We will perform parts of the science operations for the Wide Field Infrared Survey Telescope (WFIRST), in formulation for launch in the mid-2020s, and we are partners on several other NASA missions.

    NASA/WFIRST

    Our staff conducts world-class scientific research; our Barbara A. Mikulski Archive for Space Telescopes (MAST) curates and disseminates data from over 20 astronomical missions;

    Mikulski Archive For Space Telescopes

    and we bring science to the world through internationally recognized news, education, and public outreach programs. We value our diverse workforce and civility in the workplace, and seek to be an example for others to follow.

     
  • richardmitnick 8:27 am on February 19, 2018 Permalink | Reply
    Tags: , Cosmos Magazine, Meteotsunami,   

    From COSMOS Magazine: “Prevalence and danger of little known tsunami type revealed” 

    Cosmos Magazine bloc

    COSMOS Magazine

    19 February 2018
    Richard A Lovett

    1
    huffpost.

    On 4 July 2003, beachgoers at Warren Dunes State Park, in the US state of Michigan, were enjoying America’s Independence Day holiday when a fast-moving line of thunderstorms blew in from Lake Michigan. They scurried for shelter, but the event passed so quickly it didn’t appear that their holiday was ruined.

    “In 15 minutes it was gone,” says civil engineer Alvaro Linares of the University of Wisconsin, Madison.

    But when swimmers re-entered the water, rip currents appeared seemingly from nowhere, pulling eight people out into the lake, where seven drowned.

    What these people had encountered, Linares says, was a meteotsunami — an aquatic hazard of which few people, including scientists, were aware until recently.

    Few scientists have researched the phenomenon. Many of those who have gathered recently at the annual American Geophysical Union Ocean Sciences meeting, held in Portland, Oregon, US, to compare notes.

    Conventional tsunamis are caused by underwater processes such as earthquakes and submarine landslides. Meteotsunamis, as the name indicates, are caused by weather. But while the catalysts are different, the effects are not.

    “The wave characteristics are very similar,” says Eric Anderson of the Great Lakes Environmental Research Laboratory of the National Oceanic and Atmospheric Administration (NOAA) in Ann Arbor, Michigan.

    To create a meteotsunami, what’s required is a combination of a strong, fast-moving storm and relatively shallow water. The sudden increase in winds along the storm front, possibly combined with changes in air pressure, starts the process by kicking up a tsunami-style wave that runs ahead of it. But the process would quickly fizzle out if the water was too deep, because in deep water, such waves propagate very quickly and would soon outrun the storm.

    What’s needed to produce a meteotsunami is a water depth at which the storm’s speed and the wave’s speed match, allowing the wave to build as it and the storm move in tandem. “The storm puts all its energy into that wave,” Anderson says.

    Furthermore, the wave can magnify even more when it hits shallower water or shoals. “That is when these become destructive,” Anderson says.

    In 2004, for example, a storm front 300 kilometres wide sped across the East China Sea at 31 metres per second (112 kilometres per hour), says Katsutoshi Fukuzawa of the University of Tokyo.

    Water there is shallow, he adds, with depths mostly under 100 metres. This limits wave speed to about 30 metres per second — a near-perfect match to the storm’s. As a result, parts of the island of Kyushu were hit with a tsunami as big as 1.6 metres.
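The speed match described here follows from the standard long-wave relation: in shallow water, a tsunami-style wave travels at the square root of gravity times depth. A quick sketch of the arithmetic behind the East China Sea numbers:

```python
import math

# Shallow-water (long) gravity waves travel at c = sqrt(g * h),
# where g is gravitational acceleration and h the water depth.
def shallow_water_wave_speed(depth_m, g=9.81):
    """Long-wave (tsunami) speed in m/s for water of the given depth."""
    return math.sqrt(g * depth_m)

# At the ~100 m depths of the East China Sea, the wave speed is about 31 m/s,
# closely matching the 31 m/s storm front — the resonance that builds a meteotsunami.
print(shallow_water_wave_speed(100))  # ≈ 31.3 m/s
```

The same formula explains why deep water kills the effect: at 4,000 m depth the wave would race away at nearly 200 m/s, far outrunning any storm.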

    Not that meteotsunamis have to be that big to be dangerous. The one at Warren Dunes was probably no more than 30 centimetres, says Linares — small enough not even to be visible in the lake’s normal chop.

    But unlike normal surf, meteotsunamis produce a sustained slosh that lasts several minutes between run-up and retreat. That means that even low-height waves carry a lot of water, creating the potential for strong rip currents when they withdraw. According to Linares’ models [Journal of Geophysical Research], these currents would have persisted for about an hour — plenty long enough to drag unwary swimmers far out into the lake, long after the storm had passed.

    It’s also possible for meteotsunamis to become “detached” from the storm front that created them, striking shores far away. Researchers reviewing records in the Great Lakes have concluded that that is what happened when such a wave hit Chicago in 1954, killing 10 people.

    “The wave came out of nowhere,” Anderson says. “It was a calm, sunny day.”

    It’s not just Japan and America’s Great Lakes that have seen such events. In May 2017, a storm raced up the English Channel, kicking up a metre-high wave that swept beaches in The Netherlands as bystanders looked on with awe, says Ap van Dongeren of the Deltares research institute in Delft, The Netherlands.

    Quirks of topography can magnify the effects of such tsunamis. On 13 June 2013, a group of spearfishermen in New Jersey were stunned when a surge of water threw them across a breakwater into the open ocean [nj.com]. A few minutes later, another surge threw them back where they’d come from. And that came from a meteotsunami that measured well under a metre on local tide gauges, says Gregory Dusek, a NOAA oceanographer at Camp Springs, Maryland.

    Meteotsunamis have occurred on all inhabited continents, including one that hit the port of Fremantle, near the Australian city of Perth, in 2014, causing a ship to break free from its moorings and crash into a railroad bridge, Sarath Wijeratne of the University of Western Australia reported in a conference abstract. In fact, Wijeratne concluded, a look back at historical water level records indicates that Western Australia may have seen more than 15 such events each year between 2008 and 2016.

    Other researchers are also finding these events to be surprisingly frequent. By studying tide gauge records back to 1996, Dusek has concluded that they occur on America’s eastern seaboard at a rate of 23 per year — though most are small enough nobody would ever notice. In Holland, Van Dongeren says that a quick check of historical tide gauge records revealed at least three such events in the past decade that had gone unnoticed because they happened at low tide. “They’re not that rare,” he says.

    Fukuzawa says that Japan saw 37 meteotsunamis exceeding one metre from 1961 to 2005.

    Furthermore, bigger ones are possible. In June 2014, Croatia was hit by a two-to-three metre tsunami sweeping in from the Adriatic Sea, says Clea Denamiel, of the Croatian Institute of Oceanography and Fisheries.

    But the mother of all meteotsunamis came in 1978, when Vela Luka, at the southern end of Croatia’s scenic Dalmatian coast, was smashed by a meteotsunami measuring a full six metres, with giant waves surging and retreating about every 17 minutes, just as might have occurred in the aftermath of a large offshore earthquake.

    As of now, scientists don’t know enough about meteotsunamis to be able to predict them, though efforts are under way to create models that can do just that. But as they dig back through old records, they are increasingly realising that meteotsunamis might have been with us for a long time.

    Or as Linares puts it with typical scientific understatement, “meteotsunamis are a beach hazard that has been overlooked”.

    See the full article here .


     