Tagged: COSMOS

  • richardmitnick 10:35 am on April 21, 2017 Permalink | Reply
    Tags: COSMOS

    From COSMOS: “Sixteen ways of looking at a supernova” 

    Cosmos Magazine bloc


    21 April 2017
    Andrew Masterson

    Thanks to fast thinking, luck, and gravitational lensing, four telescopes managed to observe a quadruple image of a single supernova. Andrew Masterson reports.

    The light from the supernova iPTF16geu and its host galaxy is warped and amplified by the curvature of space induced by the mass of a foreground galaxy.
    ALMA (ESO/NRAO/NAOJ), L. Calçada (ESO), Y. Hezaveh et al., edited and modified by Joel Johansson

    In September 2016, when astronomer Ariel Goobar and his colleagues at the Intermediate Palomar Transient Factory in California saw the image recorded by the facility’s field camera, they knew they had to move fast.

    Caltech’s Intermediate Palomar Transient Factory, using the Samuel Oschin Telescope at Palomar Observatory, located in San Diego County, California, United States

    They were looking at something that was simultaneously massive, spectacular, new, short-lived, and a triumphant demonstration of Einstein’s theory of general relativity.

    As reported in the journal Science, Goobar, from Stockholm University in Sweden, and his team had discovered a brand new Type Ia supernova, which they later dubbed iPTF16geu.

    Any freshly discovered supernova is a significant astronomical find, but in this case its importance was magnified – quite literally – by circumstance.

    Einstein’s theory of general relativity predicts that matter curves the spacetime surrounding it. The region of curved spacetime around a particularly massive object – a galaxy, say – can, if the alignment is correct, bend the paths of light travelling through it in such a way as to act as a lens, enlarging the appearance of objects in the distance behind it.

    The effect is known as “gravitational lensing” and is well known to astronomers.

    Gravitational Lensing NASA/ESA

    From left: an image from the SDSS survey; a zoomed view showing the foreground lensing galaxy; two versions of the four resolved images of the supernova, resolved by the Hubble Space Telescope and the Keck/NIRC2 instrument. Joel Johansson

    Goobar’s team quickly realised that its view of iPTF16geu was an extreme example of the phenomenon. A galaxy situated between Earth and the supernova was magnifying its light 50 times, providing an unparalleled view of the stellar explosion. They were also able to see four separate images of the supernova, each formed by light taking a different path around the galaxy.

    The light burst from a Type Ia supernova starts to fade precipitously after only a couple of weeks, and disappears pretty much completely after a year.

    Realising that the window of opportunity was limited and closing fast, the team hit the phones and did some rapid talking. In a very short period, three other big facilities homed in on iPTF16geu.

    As well as the initial Palomar shot, the astronomers captured images from the Hubble Telescope, the Very Large Telescope in Chile, and the Keck Observatory in Hawaii.

    NASA/ESA Hubble Telescope

    ESO/VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    Keck Observatory, Mauna Kea, Hawaii, USA

    The results – multiple observations of multiple images of the supernova event – provide data that will offer insights not only into the supernova itself, but also into the structure of the intervening galaxy and the physics of gravitational lensing.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:04 am on April 10, 2017 Permalink | Reply
    Tags: COSMOS, Dengue fever often goes unrecognised by Australian doctors study finds

    From COSMOS: “Dengue fever often goes unrecognised by Australian doctors, study finds” 



    10 April 2017
    Jana Howden

    An Aedes aegypti mosquito – the kind that carry dengue – feeding. Muhammad Mahdi Karim

    Infecting 50 to 100 million people each year and causing symptoms ranging from a rash to haemorrhaging, dengue virus is categorised by the World Health Organization (WHO) as both a major international public health problem, and a neglected one.

    A new study published in the Medical Journal of Australia has revealed that the mosquito-borne virus is indeed flying under the radar. The study found that a significant number of Australian travellers bringing home the unwanted souvenir – predominantly those returning from Indonesia and Thailand – presented with warning signs that were not recognised by clinicians, with more than 20% of patients prescribed medication that could in fact increase their risk of haemorrhage.

    In a collaborative project conducted by researchers from Austin Health, Monash Health, Monash University, the University of Melbourne, the Victorian Infectious Diseases Services in Melbourne, and the Royal Darwin Hospital, 208 hospitalised patients from January 2012 to May 2015 were included in the study.

    Analysing the archives of four health care networks in Australia, the researchers searched the hospitals’ databases to see what symptoms dengue sufferers were presenting with, where they had travelled, and what the response of their health care facility was.

    They found that WHO guidelines for the classifications of dengue – designed to make classification of the condition simpler to better determine a patient’s treatment plan – were followed in only 10 of the 208 cases.

    They also found that only 14% of the patients had a complete fluid balance chart for at least one day. The authors write that “managing the patient’s fluid balance is vital when treating dengue,” calling this lack of fluid monitoring “concerning”.

    Yet “even more worrying,” according to the researchers, was the discovery that 22% of patients were prescribed NSAIDs – a family of common anti-inflammatory drugs, including aspirin – which can worsen the impact of dengue on patients through risk of bleeding complications.

    As Australian travel to Asia continues to increase, the researchers urge Australian GPs and clinicians to increase their familiarity with the variety of clinical manifestations of the disease to ensure treatment errors, including the prescription of NSAIDs, are avoided.

    See the full article here.


  • richardmitnick 8:17 am on April 6, 2017 Permalink | Reply
    Tags: COSMOS, Paradoxes of probability and other statistical strangeness

    From COSMOS: “Paradoxes of probability and other statistical strangeness” 



    Statistics and probability can sometimes yield mind-bending results. Shutterstock

    Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.

    You don’t have to wait long to see a headline proclaiming that some food or behaviour is associated with either an increased or a decreased health risk, or often both. How can it be that seemingly rigorous scientific studies can produce opposite conclusions?

    Nowadays, researchers can access a wealth of software packages that can readily analyse data and output the results of complex statistical tests. While these are powerful resources, they also make it easy for people without a full statistical understanding to misread the subtleties within a dataset and draw wildly incorrect conclusions.

    Here are a few common statistical fallacies and paradoxes and how they can lead to results that are counterintuitive and, in many cases, simply wrong.

    Simpson’s paradox
    What is it?

    This is where trends that appear within different groups disappear when data for those groups are combined. When this happens, the overall trend might even appear to be the opposite of the trends in each group.

    One example of this paradox is where a treatment can be detrimental in all groups of patients, yet can appear beneficial overall once the groups are combined.
    How does it happen?

    This can happen when the sizes of the groups are uneven. A trial with careless (or unscrupulous) selection of the numbers of patients could conclude that a harmful treatment appears beneficial.

    Consider the following double-blind trial of a proposed medical treatment. A group of 120 patients (split into subgroups of sizes 10, 20, 30 and 60) receive the treatment, and 120 patients (split into subgroups of corresponding sizes 60, 30, 20 and 10) receive no treatment.

    The overall results make it look like the treatment was beneficial to patients, with a higher recovery rate for patients with the treatment than for those without it.

    The Conversation, CC BY-ND

    However, when you drill down into the various groups that made up the cohort in the study, you see that in every group of patients the recovery rate was 50% higher for patients who had no treatment.

    The Conversation, CC BY-ND

    But note that the size and age distribution of each group is different between those who took the treatment and those who didn’t. This is what distorts the numbers. In this case, the treatment group is disproportionately stacked with children, whose recovery rates are typically higher, with or without treatment.
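    A few lines of Python make the reversal concrete. The subgroup recovery counts below are hypothetical (the article's table is not reproduced here), chosen so that every subgroup favours no treatment by exactly 50%, while the large treated subgroup of 60 patients plays the role of the children, whose recovery rates are highest with or without treatment:

    ```python
    # Hypothetical (patients, recovered) counts per subgroup, chosen to
    # reproduce the pattern described above.
    treated   = [(10, 1), (20, 4), (30, 9), (60, 36)]
    untreated = [(60, 9), (30, 9), (20, 9), (10, 9)]

    def pooled_rate(groups):
        patients  = sum(n for n, _ in groups)
        recovered = sum(r for _, r in groups)
        return recovered / patients

    # Within every subgroup, the untreated recovery rate is 50% higher...
    for (nt, rt), (nu, ru) in zip(treated, untreated):
        assert ru / nu == 1.5 * (rt / nt)

    # ...yet pooling the subgroups makes the treatment look better overall.
    print(f"treated:   {pooled_rate(treated):.1%}")    # 41.7%
    print(f"untreated: {pooled_rate(untreated):.1%}")  # 30.0%
    ```

    The uneven subgroup sizes do all the work: stacking the treated arm with the best-recovering subgroup flips the pooled comparison.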

    Base rate fallacy
    What is it?

    This fallacy occurs when we disregard important information when making a judgement on how likely something is.

    If, for example, we hear that someone loves music, we might think it’s more likely they’re a professional musician than an accountant. However, there are many more accountants than there are professional musicians. Here we have neglected that the base rate for the number of accountants is far higher than the number of musicians, so we were unduly swayed by the information that the person likes music.
    How does it happen?

    The base rate fallacy occurs when the base rate for one option is substantially higher than for another, but we neglect that difference when weighing the evidence in front of us.

    Consider testing for a rare medical condition, such as one that affects only 4% (1 in 25) of a population.

    Let’s say there is a test for the condition, but it’s not perfect. If someone has the condition, the test will correctly identify them as being ill around 92% of the time. If someone doesn’t have the condition, the test will correctly identify them as being healthy 75% of the time.

    So if we test a group of people, and find that over a quarter of them are diagnosed as being ill, we might expect that most of these people really do have the condition. But we’d be wrong.

    In a typical sample of 300 patients, for every 11 people correctly identified as unwell, a further 72 are incorrectly identified as unwell. The Conversation, CC BY-ND

    According to our numbers above, of the 4% of patients who are ill, almost 92% will be correctly diagnosed as ill (that is, about 3.67% of the overall population). But of the 96% of patients who are not ill, 25% will be incorrectly diagnosed as ill (that’s 24% of the overall population).

    What this means is that of the approximately 27.67% of the population who are diagnosed as ill, only around 3.67% actually are. So of the people who were diagnosed as ill, only around 13% (that is, 3.67%/27.67%) actually are unwell.
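    The arithmetic can be checked in a few lines of Python, using the figures from the passage above (the article's 3.67% comes from rounding within its 300-patient sample; working with the raw percentages gives 3.68%, and the same conclusion):

    ```python
    prevalence  = 0.04   # 1 in 25 people have the condition
    sensitivity = 0.92   # ill people correctly flagged as ill
    specificity = 0.75   # healthy people correctly flagged as healthy

    true_positives  = prevalence * sensitivity              # ~3.7% of everyone
    false_positives = (1 - prevalence) * (1 - specificity)  # 24% of everyone

    # Positive predictive value: of those diagnosed ill, how many are ill?
    ppv = true_positives / (true_positives + false_positives)
    print(f"P(actually ill | diagnosed ill) = {ppv:.0%}")  # about 13%
    ```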

    Worryingly, when a famous study asked general practitioners to perform a similar calculation to inform patients of the correct risks associated with mammogram results, just 15% of them did so correctly.

    Will Rogers paradox
    What is it?

    This occurs when moving something from one group to another raises the average of both groups, even though no values actually increase.

    The name comes from the American comedian Will Rogers, who joked that “when the Okies left Oklahoma and moved to California, they raised the average intelligence in both states”.

    Former New Zealand Prime Minister Rob Muldoon provided a local variant on the joke in the 1980s, regarding migration from his nation into Australia.

    How does it happen?

    When a datapoint is reclassified from one group to another, if the point is below the average of the group it is leaving, but above the average of the one it is joining, both groups’ averages will increase.

    Consider the case of six patients whose life expectancies (in years) have been assessed as being 40, 50, 60, 70, 80 and 90.

    The patients who have life expectancies of 40 and 50 have been diagnosed with a medical condition; the other four have not. This gives an average life expectancy within diagnosed patients of 45 years and within non-diagnosed patients of 75 years.

    If an improved diagnostic tool is developed that detects the condition in the patient with the 60-year life expectancy, then the average within both groups rises by 5 years.

    The Conversation, CC BY-ND
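    A minimal Python sketch of the reclassification, using the six life expectancies above:

    ```python
    def avg(xs):
        return sum(xs) / len(xs)

    diagnosed     = [40, 50]          # life expectancies, in years
    not_diagnosed = [60, 70, 80, 90]

    # The improved test reclassifies the 60-year patient as diagnosed.
    diagnosed_after     = [40, 50, 60]
    not_diagnosed_after = [70, 80, 90]

    print(avg(diagnosed), "->", avg(diagnosed_after))          # 45.0 -> 50.0
    print(avg(not_diagnosed), "->", avg(not_diagnosed_after))  # 75.0 -> 80.0
    ```

    No individual value changes, yet both group averages rise by 5 years, because the moved value sits below the average of the group it leaves and above the average of the group it joins.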

    Berkson’s paradox
    What is it?

    Berkson’s paradox can make it look like there’s an association between two independent variables when there isn’t one.
    How does it happen?

    This happens when we have a set with two independent variables, which means they should be entirely unrelated. But if we only look at a subset of the whole population, it can look like there is a negative trend between the two variables.

    This can occur when the subset is not an unbiased sample of the whole population. It has been frequently cited in medical statistics. For example, if patients only present at a clinic with disease A, disease B or both, then even if the two diseases are independent, a negative association between them may be observed.


    Consider the case of a school that recruits students based on both academic and sporting ability. Assume that these two skills are totally independent of each other. That is, in the whole population, an excellent sportsperson is just as likely to be strong or weak academically as is someone who’s poor at sport.

    If the school admits only students who are excellent academically, excellent at sport or excellent at both, then within this group it would appear that sporting ability is negatively correlated with academic ability.

    To illustrate, assume that every potential student is ranked on both academic and sporting ability from 1 to 10. There are an equal proportion of people in each band for each skill. Knowing a person’s band in either skill does not tell you anything about their likely band in the other.

    Assume now that the school only admits students who are at band 9 or 10 in at least one of the skills.

    If we look at the whole population, the average academic rank of the weakest sportsperson and the best sportsperson are both equal (5.5).

    However, within the set of admitted students, the average academic rank of the elite sportsperson is still that of the whole population (5.5), but the average academic rank of the weakest sportsperson is 9.5, wrongly implying a negative correlation between the two abilities.

    The Conversation, CC BY-ND
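    This selection effect is easy to reproduce. The sketch below enumerates one hypothetical student per pair of bands, so the two skills are independent by construction, then applies the school's admission rule:

    ```python
    # One person per (academic, sporting) band pair, bands 1-10, chosen
    # independently -- the two skills are uncorrelated by construction.
    population = [(a, s) for a in range(1, 11) for s in range(1, 11)]

    # The school admits anyone at band 9 or 10 in at least one skill.
    admitted = [(a, s) for a, s in population if a >= 9 or s >= 9]

    def pearson(pairs):
        """Pearson correlation between the two coordinates of a list of pairs."""
        n = len(pairs)
        mx = sum(a for a, _ in pairs) / n
        my = sum(s for _, s in pairs) / n
        cov = sum((a - mx) * (s - my) for a, s in pairs) / n
        va = sum((a - mx) ** 2 for a, _ in pairs) / n
        vs = sum((s - my) ** 2 for _, s in pairs) / n
        return cov / (va * vs) ** 0.5

    print(f"whole population: r = {pearson(population):+.2f}")  # zero: independent
    print(f"admitted only:    r = {pearson(admitted):+.2f}")    # clearly negative

    # Average academic band of admitted students in the weakest sporting band:
    weakest = [a for a, s in admitted if s == 1]
    print(sum(weakest) / len(weakest))  # 9.5, as in the text
    ```

    The negative correlation appears purely because weak sportspeople can only get in by being academically excellent; nothing about the underlying skills is related.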

    Multiple comparisons fallacy
    What is it?

    This is where unexpected trends can occur through random chance alone in a data set with a large number of variables.

    How does it happen?

    When looking at many variables and mining for trends, it is easy to overlook how many possible trends you are testing. For example, with 1,000 variables, there are almost half a million (1,000×999/2) potential pairs of variables that might appear correlated by pure chance alone.

    While each pair is extremely unlikely to look dependent, the chances are that from the half million pairs, quite a few will look dependent.

    The Birthday paradox is a classic example of the multiple comparisons fallacy.

    In a group of 23 people (assuming each of their birthdays is an independently chosen day of the year with all days equally likely), it is more likely than not that at least two of the group have the same birthday.

    People often disbelieve this, recalling that it is rare that they meet someone who shares their own birthday. If you just pick two people, the chance they share a birthday is, of course, low (roughly 1 in 365, which is less than 0.3%).

    However, with 23 people there are 253 (23×22/2) pairs of people who might have a common birthday. So by looking across the whole group you are testing to see if any one of these 253 pairings, each of which independently has a 0.3% chance of coinciding, does indeed match. These many possibilities of a pair actually make it statistically very likely for coincidental matches to arise.

    For a group of as few as 40 people, it is almost nine times as likely that there is a shared birthday as not.

    The probability of no shared birthdays drops as the number of people in a group increases. The Conversation, CC BY-ND
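    The probabilities are straightforward to compute by multiplying the chances that each successive person misses all the birthdays already taken. A short Python sketch:

    ```python
    def p_shared_birthday(n, days=365):
        """Probability that at least two of n people share a birthday."""
        p_all_distinct = 1.0
        for i in range(n):
            p_all_distinct *= (days - i) / days
        return 1 - p_all_distinct

    print(f"23 people: {p_shared_birthday(23):.1%}")  # just over 50%
    print(f"40 people: {p_shared_birthday(40):.1%}")  # roughly 89%
    ```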

    See the full article here.

  • richardmitnick 8:18 am on March 31, 2017 Permalink | Reply
    Tags: COSMOS, direct collapse black hole ala Avi Loeb

    From COSMOS: “When giants warped the universe” 



    31 March 2017
    Graham Phillips

    They don’t make them like they used to: supermassive black holes emerged billions of years earlier than thought. Getty Images

    They gobble stars, bend space, warp time and may even provide gateways to other universes.

    Black holes fire the imagination of scientists and science-fiction aficionados alike. But at least one thing about them wasn’t all that mind-bending: we’ve long understood black holes to be the end point in the life of a big star, when it runs out of fuel and collapses on itself.

    However, in recent times astronomers have been confronted with a paradox: gigantic black holes that existed when the universe was less than a billion years old.

    Since average-sized black holes take many billions of years to form, astrophysicists have been scratching their heads to figure out how these monsters could have arisen so early. It now seems that rather than being the end game in the evolution of stars and galaxies, supermassive black holes were around at their beginnings and played a major role in shaping them.

    Recommended reading: The bright side of black holes

    It was the little-known English clergyman and scientist John Michell who, in 1783, first articulated the idea of “dark stars” whose gravity was so great they would prevent light from escaping them. The concept was astonishingly prescient even if parts of his theory – particularly those based on Newton’s idea that light particles had mass – were flawed.

    The first accurate description of black holes came in 1916 from German physicist and astronomer Karl Schwarzschild. Schwarzschild was serving in the German Army at the time, despite already being over 40 years of age.

    After seeing action on both the western and eastern fronts, Schwarzschild was sent home due to a serious auto-immune skin disease, pemphigus.

    It was late 1915 and Einstein’s theory of General Relativity had just been published. Inspired, Schwarzschild lost no time writing a paper that predicted the existence of black holes; it was published just months before he succumbed to his disease in May 1916.

    According to Einstein’s theory, the force of gravity was the result of a mass distorting the fabric of space-time. In the same way that a bowling ball dimples the fabric of a trampoline, a star’s mass dimpled the space-time fabric of its system, keeping planets circling around it.

    The theory was underpinned by equations laying out the interaction of energy, mass, space and time. Schwarzschild’s achievement was to apply Einstein’s equations to a simplified scenario: a perfectly spherical star. One of the things that jumped out of his mathematical musings was an object with such a strong gravitational pull that not even light could escape it.
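    The object that jumped out of Schwarzschild's solution corresponds to a critical radius, now called the Schwarzschild radius, given by the standard formula r_s = 2GM/c² (textbook general relativity, not spelled out in the article). A quick Python sketch with approximate constants:

    ```python
    # Physical constants (approximate, SI units)
    G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c     = 2.998e8     # speed of light, m/s
    M_SUN = 1.989e30    # one solar mass, kg

    def schwarzschild_radius(mass_kg):
        """r_s = 2GM/c^2: the radius from within which light cannot escape."""
        return 2 * G * mass_kg / c ** 2

    print(f"Sun:            {schwarzschild_radius(M_SUN) / 1000:.1f} km")  # ~3 km
    print(f"4 million Suns: {schwarzschild_radius(4e6 * M_SUN) / 1000:.2e} km")
    ```

    Compress any mass inside its Schwarzschild radius and a black hole forms; for the Sun that radius is only about 3 km.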

    While Schwarzschild’s idea made sense in the theoretical realm of mathematics, most physicists did not expect to find an exemplar in the real universe.

    By the 1960s, however, expectations were changing. Astronomers discovered the existence of extremely dense objects known as neutron stars. Detected by their unusual pulsing of electromagnetic radiation, they were the dense corpses of large stars that had exhausted their fuel. Without the force of the burning fuel pushing against their own gravity, they collapsed, compressing their matter until only the pressure of neutron against neutron halted the crush.

    Neutron stars got astrophysicists thinking back to Schwarzschild’s idea. What happens when really big suns with even stronger gravity cave in? All the matter would be squeezed down to a point with an extraordinarily strong gravitational field.

    Sometime in the 1960s, physicists coined the term “black hole”, and the hunt for something more than just a mathematical artefact was on.

    The first evidence that black holes weren’t just theoretical came in 1964, when a rocket decked with sensitive instruments was shot into sub-orbital space. It detected suspicious X-rays emanating from the constellation of Cygnus (the swan).

    The X-ray source became known as Cygnus X-1. By the early 1970s most astronomers inferred the X-rays were radiated by super-heated matter being sucked into the gravitational field of the black hole. It would take decades more, however, before the first conclusive evidence emerged that black holes exist and obey Einstein’s equations of general relativity.

    This came in September 2015 with the detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the United States.

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    These ripples in the fabric of space-time had been generated by two black holes colliding 1.3 billion years ago. Theorists had predicted that if such a titanic event occurred somewhere in our galaxy, the reverberations should be measurable on Earth.

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    LIGO’s detection of gravitational waves thus also confirmed the existence of black holes. Yet even as the evidence that black holes truly exist has firmed up, our understanding of how they arise seems to be crumbling.

    The cracks in the theory grew gradually as astronomers accumulated evidence for the existence of a very different kind of black hole. While most black holes have a mass that is equivalent to 10-100 times that of our Sun, these monsters were equivalent to a million or a billion solar masses. With typical prosaicness, astronomers dubbed them supermassive black holes.

    Unlike smaller black holes, they also resided at the centres of galaxies. Most surprising of all, far-reaching telescopes like the European Southern Observatory’s Very Large Telescope detected them in extremely distant galaxies.

    ESO/VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    Because of the extreme length of time it takes for their light to reach Earth, these galaxies provide snapshots of the universe in its infancy.

    “A billion years after the big bang you have black holes that are as massive as the biggest black holes we find around us today,” says Avi Loeb, an astrophysicist at Harvard University.

    That simply doesn’t make sense according to the accepted understanding that black holes come only at the end of a star’s life. “It’s sort of like going to the delivery room in a hospital and finding giant babies.”

    Were these monster babies the result of many black holes colliding? Or did they arise from moderately sized black holes that ballooned by feeding on gas and other stars? Neither of these scenarios sits well with astrophysicists.

    “Getting from even a hundred solar masses up to several billion solar masses in less than a billion years is quite challenging,” says Mitch Begelman, an astrophysicist from the University of Colorado. “Black holes are not vacuum cleaners. That’s a popular misconception. It’s very difficult to get a black hole to swallow lots of stuff [in a short period of time].”

    Loeb, who has been captivated by supermassive black holes since he got into astrophysics, thinks he might have a solution to the mystery: in 1994, he came up with the idea that a different kind of process gave birth to black holes in the early universe.

    In the modern universe, a black hole takes billions of years to form. The black hole’s precursor star (which must be greater than 10 solar masses to muster the required gravitational force) must first burn through its fuel, then explode as a supernova before it collapses.

    But while the biggest stars today reach the size of 300 solar masses, the early universe might have blazed with stars equivalent to as much as a million solar masses. Such a super star, according to Loeb’s calculations, would burn so feverishly it would use up its fuel in just a million years.

    Then it would collapse directly into a black hole a million times the mass of the Sun – what Loeb calls “a direct collapse black hole”.

    According to Loeb, the reason super stars formed only in the embryonic universe is that back then stars were made of simpler stuff: “The gas was pristine. It came from the big bang and had only hydrogen and helium,” he explains.

    Lacking heavier elements to radiate heat, the clouds stayed relatively warm. That allowed them to grow without fragmenting, forming super stars.

    By contrast, in today’s universe star dust contains heavy atoms like carbon, silicon and oxygen – forged in the nuclear furnaces of the first generation of stars and blown throughout the cosmos when those stars exploded.

    As a result, modern-day dust clouds can cool to extremely low temperatures and fragment, mostly forming stars about the size of the Sun.

    If Loeb is right, early super stars gave rise to the direct collapsers, which gave rise to supermassive black holes. These monsters have had an enormous influence on how the universe evolved. They shaped galaxies in two ways.

    First, they gobbled up clouds and stars in their immediate vicinity. Second, like some cosmic air blower, they beamed out jets of energy that propelled dust and gas out of their galaxy.

    “Within tens of millions of years the black holes can remove the gas from the host galaxy,” Loeb says. By cleaning the galaxy of the raw material for star creation and growth, the black holes have capped the size of galaxies.

    If not for the supermassive black hole at the centre of the Milky Way, Loeb estimates, our galaxy could have grown a thousand times bigger than it is today. That would be some night sky to look up at.

    “The growth of black holes seems to be a crucial element in galaxy formation,” Begelman agrees. “Galaxies would look very different if there weren’t these black holes.”

    Of course, the absolute proof that direct collapse black holes exist will come when one is observed.

    In the past year astronomers have seen some tantalising clues. One is a galaxy known as CR7, which hosts a source of light much brighter than its stars – perhaps the radiation caused by a black hole sucking in gas.

    “You see evidence for a galaxy that has mainly hydrogen and helium,” Loeb says. “That could potentially be the birthplace of a direct collapse black hole.”

    See the full article here.


  • richardmitnick 5:41 am on March 30, 2017 Permalink | Reply
    Tags: Ancient Earth leaves a fading signature, COSMOS

    From COSMOS: “Ancient Earth leaves a fading signature” 



    17 March 2017
    Richard A Lovett

    Granite such as this along the eastern shore of Hudson Bay reveals remnants of the Earth’s crust. Rick Carlson

    Scientists studying ancient rocks in northeastern Canada have found them to be composed of remnants of even older rocks, dating back to within a few hundred million years of the formation of the Earth.

    These remnants suggest that tectonic processes in the planet’s first 1.5 billion years may have been very different to what we know today. The find is important in part because on most of the planet’s surface, geological processes have long ago erased visible traces of the Earth’s primitive crust.

    There are a few places with rocks believed to be at least four billion years old, and in Western Australia, geologists have found crystals, called zircons, that might have formed 4.4 billion years ago, only 150 million years or so after the Earth’s formation.

    But in general, says Richard Carlson, a geochemist from the Carnegie Institution for Science in Washington DC, “finding really old rock has been almost impossible.” Not that Carlson and his colleague, Jonathan O’Neil of the University of Ottawa, Canada, actually found a new trove of super-ancient rocks.

    Instead, in a study published in Science, they looked for isotopic traces of earlier rocks in ones not quite so ancient. The rocks in question are granites lying east of Canada’s Hudson Bay.

    Scientists have long known that these formed about 2.7 billion years ago. Their chemical composition says they didn’t erupt directly from the mantle, but were instead formed from pre-existing basalts that were pulled below the surface, heated, and then recycled back to the surface to form the granites we see today.

    In the process, the physical remnants of the older rocks were destroyed, but their isotopic signatures remain. The isotope in question is neodymium-142; neodymium is a rare-earth element used to make extremely powerful magnets.

    Neodymium-142 is one of five stable isotopes of neodymium, but it’s important because it is the decay product from the radioactive decay of an isotope of another rare-earth element, samarium-146.

    Samarium-146 has a half-life of 103 million years. That may sound like a lot in human terms, but in the context of the world’s most ancient rocks, it is actually fairly short, especially because within five or six half-lives it would have been “basically gone,” Carlson says.

    What this means is that by carefully measuring the relative quantities of various isotopes of neodymium, including neodymium-142, scientists can determine whether a rock includes ingredients that come from an older rock that formed before the earth ran out of samarium-146.
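    The half-life arithmetic is easy to verify; in the sketch below, "basically gone" is read as a few per cent remaining:

    ```python
    half_life = 103e6  # samarium-146 half-life, in years

    def fraction_remaining(years):
        """Fraction of the original samarium-146 left after a given time."""
        return 0.5 ** (years / half_life)

    for n in (1, 5, 6):
        years = n * half_life
        print(f"{n} half-lives ({years / 1e9:.3f} billion years): "
              f"{fraction_remaining(years):.1%} remains")
    ```

    After five or six half-lives, barely 600 million years, only a few per cent of the original samarium-146 survives, so any neodymium-142 excess must have been locked in very early.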

    “You can see it with a mass spectrometer, but you can’t see it with a microscope,” Carlson says.

    Using this method, he and O’Neil found that the basalts that were reprocessed to form the 2.7-billion-year-old granites must have formed at least 4.2 billion years ago.

    That’s an interesting find in and of itself, says Tim Johnson, from Curtin University in Perth, Australia, who was not part of the study team, because it provides “convincing evidence” that the Earth’s most ancient crust was indeed recycled into granites, such as those studied by Carlson and O’Neil, rocks that Johnson calls “the nuclei of the continents.”

    But it’s also important because it means that the basalts that formed Carlson and O’Neil’s 2.7-billion-year-old granites survived for 1.5 billion years before they were subducted and metamorphosed into them. That’s a long time, given that today’s basalts survive for only a couple of hundred million years before modern plate tectonics recycles them.

    One explanation might be that the basalts that formed the Canadian granites came from a gigantic block of rock that somehow resisted subduction for 1.5 billion years. Another is that tectonics on the early Earth moved very slowly, if at all, allowing basalts to remain on the surface of the earth for much longer than is possible in today’s tectonic regime.

    Johnson thinks it’s the latter. Other research, including his own, has been finding that plate tectonics may well not have been occurring on the early Earth.

    “In my view,” he says, “[this] is another nail in the coffin for the view that plate tectonics best explains the geodynamic evolution of the Earth in its first billion years.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 1:10 pm on March 28, 2017 Permalink | Reply
    Tags: , , COSMOS,   

    From COSMOS: “Solar jet stream promises better flare forecasting” 

    28 March 2017
    Richard A. Lovett

    This image from the ESA/NASA Solar and Heliospheric Observatory (SOHO) shows a solar flare erupting from a giant sunspot. NASA/Getty Images


    Scientists studying 360-degree images of the sun have discovered that deep in its atmosphere, its magnetic field makes looping meanders intriguingly analogous to the earth’s jet stream.

    Technically known as Rossby waves, these meanders were traced by observing their effect on coronal brightpoints — small bright features that dot the sun.

    Their movements can be used to track motions deeper in the solar atmosphere. They are not particularly fast, especially when measured against the huge scale of the sun itself.

    “We get speeds of three metres per second,” says Scott McIntosh, a solar physicist from the National Center for Atmospheric Research in Boulder, Colorado, USA. “Slow, but measurable.”

    Tracking the movements is difficult from earth, however, because we can only see one side of the sun at a time. That’s a problem because it rotates approximately once every 24 days, meaning that each portion of its surface is out of sight for 12.
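    “Slow, but measurable” can be put on a scale with some back-of-envelope arithmetic; the solar radius below is the standard IAU value, and the other figures are the ones quoted above:

```python
import math

SOLAR_RADIUS_M = 6.957e8        # IAU nominal solar radius, metres
ROTATION_PERIOD_S = 24 * 86400  # the ~24-day rotation quoted above
WAVE_SPEED_MS = 3.0             # drift speed McIntosh reports

# Distance the pattern drifts during one full solar rotation:
drift_per_rotation_km = WAVE_SPEED_MS * ROTATION_PERIOD_S / 1000
print(f"Drift per rotation: about {drift_per_rotation_km:,.0f} km")

# Time for a feature moving at 3 m/s to circle the solar equator:
equator_m = 2 * math.pi * SOLAR_RADIUS_M  # roughly 4.4 million km
years = equator_m / WAVE_SPEED_MS / (365.25 * 86400)
print(f"Full circuit of the equator: about {years:.0f} years")
```

    A few thousand kilometres of drift per rotation, against a circumference of millions of kilometres, is why the motion only shows up when the whole surface can be watched continuously.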

    However, for three years, from 2011 to 2014, solar scientists had a unique opportunity, because there were three deep-space satellites observing the sun all at once, spaced so their divergent angles allowed the entire surface to be seen simultaneously. Two were a pair known as the Solar TErrestrial RElations Observatory (STEREO), specifically designed for the purpose. The third was NASA’s Solar Dynamics Observatory (SDO), which observes the sun continuously from orbit around the earth.

    NASA/STEREO spacecraft


    Collectively, the trio was able to monitor the whole shebang until 2014, when something went wrong with one of the STEREO spacecraft and it lost contact with its controllers. But three years of data were more than enough for McIntosh’s team to track the slow movements of the brightpoints and realise what that revealed about the existence of Rossby waves in the underlying magnetic field.

    Rossby waves are important, because on earth changes in the jet stream are major factors in influencing local weather patterns. And now that we know similar features exist in the sun’s magnetic field, McIntosh says, we may be able to learn how they relate to the formation of sunspots, active regions, and solar flares. If so, it opens the door to forecasting solar storms long before they might hit us.

    “This is exciting work,” says Daniel Baker, director of the Laboratory for Atmospheric and Space Physics at the University of Colorado, Boulder, who was not part of the study team. “Those of us interested in the ‘space weather’ effects of solar activity can really applaud.”

    Predicting space weather is important, because solar storms can hurl dangerous radiation at astronauts, damage satellites, interfere with communications and navigation systems, potentially take out electrical generators and wreak havoc on electronics.

    “Estimates put the cost of space weather hazards at $10 billion per year,” says Ilia Roussev, program director in the US National Science Foundation (NSF) Division of Atmospheric and Geospace Sciences.

    Historically, there have occasionally been truly giant solar storms that, if replicated today, would have a devastating effect on modern technological society.

    “I always tell people we live in the atmosphere of our star,” McIntosh says, referring to the solar wind. “What we have [in terms of technology], it could easily take away any time, in the blink of an eye. But because 99.99% of the time it rises in the morning and sets in the evening without doing any damage, we take it for granted.”

    What’s now needed, he adds, is to restore our ability to view the whole 360-degree surface of the sun, all at once, perhaps by such methods as placing a constellation of spacecraft in orbit around it. “These are things I’d like to see in my lifetime,” he says.

    The study was published 27 March in Nature Astronomy.

    See the full article here.


  • richardmitnick 11:05 am on March 24, 2017 Permalink | Reply
    Tags: , , , COSMOS, nicotinamide adenine dinucleotide (NAD+)   

    From COSMOS: “Can ageing be held at bay by injections and pills?” 

    24 March 2017
    Elizabeth Finkel

    Two fast-ageing mice. The one on the left was treated with a FOXO4 peptide, which targets senescent cells and leads to hair regrowth in 10 days.
    Peter L.J. de Keizer

    The day we pop a pill or get a jab to stave off ageing is closer, thanks to two high-profile papers published today.

    A Science paper from a team led by David Sinclair of Harvard Medical School and the University of NSW shows how popping a pill that raises the levels of a natural molecule called nicotinamide adenine dinucleotide (NAD+) staves off the DNA damage that leads to ageing.

    The other paper, published in Cell, led by Peter de Keizer’s group at Erasmus University in the Netherlands, shows how a short course of injections to kill off defunct “senescent cells” reversed kidney damage, hair loss and muscle weakness in aged mice.

    Taken together, the two reports give a glimpse of how future medications might work together to forestall ageing when we are young, and delete damaged cells as we grow old. “This is what we in the field are planning”, says Sinclair.

    Sinclair has been searching for factors that might slow the clock of ageing for decades. His group stumbled upon the remarkable effects of NAD+ in the course of studying powerful anti-ageing molecules known as sirtuins, a family of seven proteins that mastermind a suite of anti-ageing mechanisms, including protecting DNA and proteins.

    Resveratrol, a compound found in red wine, stimulates their activity. But back in 2000, Sinclair’s then boss Lenny Guarente at MIT discovered a far more powerful activator of sirtuins – NAD+. It was a big surprise.

    “It would have to be the most boring molecule in the world”, notes Sinclair.

    It was regarded as so common and boring that no-one thought it could play a role in something as profound as tweaking the ageing clock. But Sinclair found that NAD+ levels decline with age.

    “By the time you’re 50, the levels are halved,” he notes.
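    If the decline were a simple exponential that halves every 50 years, it could be modelled as below. This is purely an illustrative assumption: the article only reports the level at age 50, not the shape of the curve.

```python
def nad_level(age_years: float, halving_age: float = 50.0) -> float:
    """Relative NAD+ level, normalised to 1.0 at birth, under a purely
    illustrative assumption that the decline is exponential and halves
    every `halving_age` years. Only the age-50 figure comes from Sinclair."""
    return 0.5 ** (age_years / halving_age)

print(f"At 25: {nad_level(25):.0%}")  # about 71% of the youthful level
print(f"At 50: {nad_level(50):.0%}")  # 50%, matching Sinclair's figure
print(f"At 75: {nad_level(75):.0%}")  # about 35%
```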

    And in 2013, his group showed [Cell] that raising NAD+ levels in old mice restored the performance of their cellular power plants, mitochondria.

    One of the key findings of the Science paper is identifying the mechanism by which NAD+ improves the ability to repair DNA. It acts like a basketball defence, staying on the back of a troublesome protein called DBC1 to keep it away from the key player PARP1 – a protein that repairs DNA.

    When NAD+ levels fall, DBC1 tackles PARP1. End result: DNA damage goes unrepaired and the cell ‘ages’.

    “We’ve discovered the reason why DNA repair declines as we get older. After 100 years, that’s exciting,” says Sinclair.

    His group has helped develop a compound, nicotinamide mononucleotide (NMN), that raises NAD+ levels. As reported in the Science paper, when injected into aged mice it restored the ability of their liver cells to repair DNA damage. In young mice that had been exposed to DNA-damaging radiation, it also boosted their ability to repair it. The effects were seen within a week of the injection.

    These kinds of results have impressed NASA. The organisation is looking for methods to protect its astronauts from radiation damage during their one-year trip to Mars. Last December it hosted a competition for the best method of preventing that damage. Out of 300 entries, Sinclair’s group won.

    As well as astronauts, children who have undergone radiation therapy for cancer might also benefit from this treatment. According to Sinclair, clinical trials for NMN should begin in six months. While many claims have been made for NAD+ to date, and compounds are being sold to raise its levels, this will be the first clinical trial, says Sinclair.

    By boosting rates of DNA repair, Sinclair’s drug holds the hope of slowing down the ageing process itself. The work from de Keizer’s lab, however, offers the hope of reversing age-related damage.

    His approach stems from exploring the role of senescent cells. Until 2001, these cells were not really on the radar of researchers who study ageing. They were considered part of a protective mechanism that mothballs damaged cells, preventing them from ever multiplying into cancer cells.

    The classic example of senescent cells is a mole. These pigmented skin cells have incurred DNA damage, usually triggering dangerous cancer-causing genes. To keep them out of action, the cells are shut down.

    If humans lived only the 50-year lifespan they were designed for, there’d be no problem. But because we exceed our use-by date, senescent cells end up doing harm.

    As Judith Campisi at the Buck Institute, California, showed in 2001, they secrete inflammatory factors that appear to age the tissues around them.

    But cells have another option. They can self-destruct in a process dubbed apoptosis. It’s quick and clean, and there are no nasty compounds to deal with.

    So what relegates some cells to one fate over another? That’s the question Peter de Keizer set out to solve when he did a post-doc in Campisi’s lab back in 2009.

    Finding the answer didn’t take all that long. A crucial protein called p53 was known to give the order for the coup de grace. But sometimes it showed clemency, relegating the cell to senescence instead.

    De Keizer used sensitive new techniques to identify that in senescent cells, it was a protein called FOXO4 that tackled p53, preventing it from giving the execution order.

    The solution was to interfere with this liaison. But it’s not easy to wedge proteins apart; not something that small diffusible molecules – the kind that make great drugs – can do.

    De Keizer, who admits to “being stubborn”, was undaunted. He began developing a protein fragment that might act as a wedge. It resembled part of the normal FOXO4 protein, but instead of being built from normal L-amino acids it was built from D-amino acids. It proved to be a very powerful wedge.

    Meanwhile other researchers were beginning to show that executing senescent cells was indeed a powerful anti-ageing strategy. For instance, a group from the Mayo Clinic last year showed that mice genetically engineered to destroy 50-70% of their senescent cells in response to a drug experienced a greater “health span”.

    Compared to their peers they were more lively and showed less damage to their kidney and heart muscle. Their average lifespan was also boosted by 20%.

    But humans are not likely to undergo mass genetic engineering. To achieve similar benefits requires a drug that works on its own. Now de Keizer’s peptide looks like it could be the answer.

    As the paper in Cell shows, in aged mice, three injections of the peptide per week had dramatic effects. After three weeks, the aged balding mice regrew hair and showed improvements to kidney function. And while untreated aged mice could be left to flop onto the lab bench while the technician went for coffee, treated mice would scurry away.

    “It’s remarkable. It’s the best result I’ve seen in age reversal,” says Sinclair of his erstwhile competitor’s paper.

    Dollops of scepticism are healthy when it comes to claims of a fountain of youth – even de Keizer admits his work “sounds too good to be true”. Nevertheless some wary experts are impressed.

    “It raises my optimism that in our lifetime we will see treatments that can ameliorate multiple age-related diseases”, says Campisi.

    See the full article here.


  • richardmitnick 10:53 am on March 24, 2017 Permalink | Reply
    Tags: , , COSMOS, Fight looms over evolution's essence, Palaeontology, Species selection

    From COSMOS: “Macro or micro? Fight looms over evolution’s essence” 

    24 March 2017
    Stephen Fleischfresser

    Evolution over deep time: is it in the genes, or the species?
    Roger Harris/Science Photo Library

    A new paper threatens to pit palaeontologists against the rest of the biological community and promises to reignite the often-prickly debate over the question of the level at which selection operates.

    Carl Simpson, a researcher in palaeobiology at the Smithsonian Institution National Museum of Natural History, has revived the controversial idea of ‘species selection’: that selective forces in nature operate on whole species at a macroevolutionary scale, rather than on individuals at the microevolutionary level.

    Macroevolution, mostly concerned with extinct species, is the study of large-scale evolutionary phenomena across vast time spans. By contrast, microevolution focusses on evolution in individuals and species over shorter periods, and is the realm of biologists concerned with living organisms, sometimes called neontologists.

    Neontologists, overall, maintain that all evolutionary phenomena can be explained in microevolutionary terms. Macroevolutionists often disagree.

    In a paper on the biological preprint repository bioRxiv, yet to be peer-reviewed, Simpson has outlined a renewed case for species selection, using recent research and new insights, both scientific and philosophical. And this might be too much for the biological community to swallow.

    The debate over levels of selection dates to Charles Darwin himself and concerns the question of what the ‘unit of selection’ is in evolutionary biology.

    The default assumption is that the individual organism is the unit of selection. If individuals of a particular species possess a trait that gives them reproductive advantage over others, then these individuals will have more offspring.

    If this trait is heritable, the offspring too will reproduce at a higher rate than other members of the species. With time, this leads to the advantageous trait becoming species-typical.

    Here, selection is operating on individuals, and this percolates up to cause species-level characteristics.

    While Darwin favoured this model, he recognised that certain biological phenomena, such as the sterility of workers in eusocial insects such as bees and ants, could best be explained if selection operated at a group level.

    Since Darwin, scientists have posited different units of selection: genes, organelles, cells, colonies, groups and species among them.

    Simpson’s argument hinges on the kind of macroevolutionary phenomena common in palaeontology: speciation and extinction over deep time. Species selection is real, he says, and is defined as “a macroevolutionary analogue of natural selection, with species playing an analogous part to that played by organisms in microevolution”.

    Simpson takes issue with the argument that microevolutionary processes such as individual selection percolate up to cause macroevolutionary phenomena.

    He presents evidence contradicting the idea, and concludes that the “macroevolutionary patterns we actually observe are not simply the accumulation of microevolutionary change… macroevolution occurs by changes within a population of species.”

    How this paper will be received, only time will tell. A 2010 paper in Nature saw the famous evolutionary biologist E. O. Wilson recant decades of commitment to the gene as the unit of selection, hinting instead at group selection. The mere suggestion of this brought a sharp rebuke from 137 scientists.

    Simpson’s claim is more radical again, so we can only wait for the controversy to deepen.

    See the full article here.


  • richardmitnick 9:39 am on March 9, 2017 Permalink | Reply
    Tags: , Autism Spectrum Disorder (ASD), Big data reveals more suspect autism genes, COSMOS,   

    From COSMOS: “Big data reveals more suspect autism genes” 

    09 March 2017
    Paul Biegler

    Deep data dives are revealing more complexities in the autism story. luckey_sun

    Researchers have isolated 18 new genes believed to increase risk for Autism Spectrum Disorder (ASD), a finding that may pave the way for earlier diagnosis and possible future drug treatments for the disorder.

    The study, published this week in Nature Neuroscience, used a technique called whole genome sequencing (WGS) to map the genomes of 5193 people with ASD.

    WGS goes beyond traditional analyses that look at the roughly 1% of DNA that makes up our genes to take in the remaining “noncoding” or “junk” DNA once thought to have little biological function.

    The study, led by Ryan Yuen of the Hospital for Sick Children in Toronto, Canada, used a cloud-based “big data” approach to link genetic variations with participants’ clinical data.

    Researchers identified 18 genes that increased susceptibility to ASD, noting people with mutations in those genes had reduced “adaptive functioning”, including the ability to communicate and socialise.

    “Detection of the mutation would lead to prioritisation of these individuals for comprehensive clinical assessment and referral for earlier intervention and could end long-sought questions of causation,” the authors write.

    But the study also found increased variations in the noncoding DNA of people with ASD, including so-called “copy number variations” where stretches of DNA are repeated. The finding highlights the promise of big data to link fine-grained genetic changes with real world illness, something the emerging discipline of precision medicine will harness to better target treatments.

    Commenting on the study, Dr Jake Gratten from the Institute for Molecular Bioscience at the University of Queensland said, “whole genome sequencing holds real promise for understanding the genetics of ASD, but establishing the role of noncoding variation in the disorder is an enormous challenge.”

    “This study is a good first step but we’re not there yet – much larger studies will be needed,” he said.

    ASD affects around 1% of the population, and is characterised by impaired social and emotional communication, something poignantly depicted by John Elder Robison in his 2016 memoir Switched On.

    But the study findings went beyond autism, isolating ASD-linked genetic changes that increase risk for heart problems and diabetes, raising the possibility of preventative screening for participants and relatives.

    The authors note that 80% of the 61 ASD-risk genes already discovered by the project, a collaboration between advocacy group Autism Speaks and Verily Life Sciences, and known as MSSNG, are potential research targets for new drug treatments.

    But the uncomfortable nexus between scientific advances and public policy is also highlighted this week in an editorial in the New England Journal of Medicine. Health policy researchers David Mandell and Colleen Barry argue that planned Trump administration rollbacks threaten services to people with autism.

    Any repeal of the Affordable Care Act (“Obamacare”), they write, could include cuts to the public insurer Medicaid and subsequent limits on physical, occupational and language therapy for up to 250,000 children with autism.

    The authors also warn that comments made by US Attorney General Jeff Sessions bode ill for the Individuals with Disabilities Education Act (IDEA), legislation that guarantees free education for children with disabilities such as autism. Sessions has reportedly said the laws “may be the single most irritating problem for teachers throughout America today.”

    The authors also voice concern that the Trump administration’s embrace of debunked links between vaccination and autism is a major distraction from these “growing threats to essential policies that support the health and well-being of people with autism or other disabilities”.

    See the full article here.


  • richardmitnick 2:44 pm on March 4, 2017 Permalink | Reply
    Tags: , COSMOS, ,   

    From COSMOS: “Resistance is futile: the super science of superconductivity” 

    30 May 2016 [Re-issued?]
    Cathal O’Connell

    From maglev trains to prototype hoverboards to the Large Hadron Collider, superconductors are finding more and more uses in modern technology. Here is what superconductors are and how they work.

    A superconducting ceramic operates at the relatively high temperature of 123 Kelvin in a Japanese lab.

    What are superconductors?

    All the electronic devices around you – your phone, your computer, even your bedside lamp – are based on moving electrons through materials. In most materials, there is an opposition to this movement (kind of like friction, but for electrons) called electrical resistance, which wastes some of the energy as heat.

    This is why your laptop heats up during use, and the same effect is used to boil water in a kettle.

    Superconductors are materials that carry electrical current with exactly zero electrical resistance. This means you can move electrons through them without losing any energy as heat.

    Sounds amazing. What’s the catch?

    The snag is you have to cool a superconductor below a critical temperature for it to work. That critical temperature depends on the material, but it’s usually below -100 °C.

    A room temperature superconductor, if one could be found, could revolutionise modern technology, letting us transmit power across continents without any loss.

    How was superconductivity discovered?

    When you cool a metal, its electrical resistance tends to decrease. This is because the atoms in the metal jiggle around less, and so are less likely to get in an electron’s way.

    Around the turn of the 20th century, physicists were debating what would happen at absolute zero, when the jiggling stops altogether.

    Some wondered whether the resistance would continue to decrease until it reached zero.

    Others, such as Lord Kelvin (after whom the temperature scale is named), argued that the resistance would become infinite as electrons themselves would stop moving.

    In April 1911, Dutch physicist Heike Kamerlingh Onnes cooled a solid mercury wire to 4.2 Kelvin and found the electrical resistance suddenly vanished – the mercury became a perfect conductor. It was a shocking discovery, both because of the abruptness of the change and because it happened a good four degrees above absolute zero.

    Kamerlingh Onnes had discovered superconductivity, although it took another 40 years for his results to be fully explained.

    What’s the explanation for superconductivity?

    It turns out there are at least two kinds of superconductivity, and physicists can only explain one of them.

    In the simplest case, when you cool a single element down below its critical temperature (as with the mercury example above) physicists can explain superconductivity pretty well: it arises from a weird quantum effect which causes the electrons to pair up within the material. When paired, the electrons gain the ability to flow through the material without getting knocked about by atoms.

    But more complex materials, such as some ceramics which are superconducting at higher temperatures, can’t be explained using this theory.

    Physicists don’t have a good explanation for what causes superconductivity in these “non-traditional superconductor” materials, although the answer must be another quantum effect which links up the electrons in some way.

    What are high-temperature superconductors?

    Physicists have a loose definition of what a “high temperature” is. In this case, it usually means anything above 70 Kelvin (or -203 °C). They choose this temperature because it means the superconductor can be cooled using liquid nitrogen, making it relatively cheap to run (liquid nitrogen costs only about 10-15 cents a litre).

    The threshold temperature for superconductivity has been increasing for decades. The current record (-70 °C, or 203 Kelvin) is held by hydrogen sulfide (yes, the same molecule that gives rotten eggs their distinctive smell), although only when squeezed to enormous pressures.
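    The practical stakes of the “high temperature” label come down to which coolant you can afford. The sketch below uses the standard boiling points of the two workhorse cryogens; the helper function is hypothetical, purely for illustration:

```python
# Boiling points of the two workhorse cryogens, in kelvin (standard values).
LIQUID_HELIUM_K = 4.2
LIQUID_NITROGEN_K = 77.0

def cheapest_coolant(critical_temp_k: float) -> str:
    """Hypothetical helper: which everyday cryogen can hold a material
    below its critical temperature? (Operating margins are ignored.)"""
    if critical_temp_k > LIQUID_NITROGEN_K:
        return "liquid nitrogen"  # the cheap option, ~10-15 cents a litre
    return "liquid helium"        # much costlier, needed for low-Tc materials

print(cheapest_coolant(203.0))  # hydrogen sulfide record (-70 C): liquid nitrogen
print(cheapest_coolant(9.2))    # niobium, a classic low-Tc metal: liquid helium
```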

    The hope is that one day scientists will produce a material that superconducts at room temperature with no cooling required.

    What are superconductors used for now?

    Superconductors are used to make incredibly strong magnets for magnetic levitation (maglev) trains, for the magnetic resonance imaging (MRI) machines in hospitals, and to keep particles on track as they race around the Large Hadron Collider.

    CERN LHC particles

    The reason superconductors can make strong magnets comes down to Ampère’s law (an electric current creates a magnetic field). With no resistance, you can create a huge current, which makes for a correspondingly large magnetic field.

    For example, maglev trains have a series of superconducting coils along each wagon. Each superconductor carries a persistent electric current of about 700,000 amperes.

    The Japanese SCMaglev’s EDS suspension is powered by the magnetic fields induced on either side of the vehicle by the passage of the vehicle’s superconducting magnets.


    The current runs round and round the coil without ever winding down, and so the magnetic field it generates is constant and incredibly strong. As the train passes over other electromagnets in the track, it levitates.

    With no friction to slow them down, maglev trains can reach over 600 kilometres per hour, making them the fastest in the world.
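    To see why such a current makes a formidable magnet, the textbook ideal-solenoid formula B = mu0 * n * I gives a ballpark. The winding density below is an assumed round number, not a figure from the article, and real maglev coils are racetrack-shaped rather than ideal solenoids:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
CURRENT_A = 700_000        # persistent current quoted above
TURNS_PER_METRE = 10       # assumed winding density; not from the article

# Ideal infinite-solenoid estimate: B = mu0 * n * I
b_field_tesla = MU_0 * TURNS_PER_METRE * CURRENT_A
print(f"Ballpark field: {b_field_tesla:.1f} T")  # roughly 8.8 T
```

    Even this crude estimate lands in the several-tesla range of hospital MRI magnets, which is consistent with the train levitating on the induced fields.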

    A prototype hoverboard designed by Lexus also uses superconducting magnets for levitation.

    Lexus via Wired

    What uses might superconductors have in the future?

    About 6% of all the electricity generated by power plants is lost in transmitting and distributing it around the country along copper wires.

    By replacing copper wires with superconducting wires, we could potentially transmit electrical power across entire continents without any loss. The problem, at the moment, is this would be ludicrously expensive.
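    The scale of those losses follows from the basic I-squared-R formula. Every line parameter in the sketch below is an illustrative assumption, chosen only to show that the arithmetic lands in the same ballpark as the 6% figure:

```python
# Resistive loss on a long copper line: P_loss = I^2 * R.
# Every line parameter below is an illustrative assumption, not article data.
RESISTIVITY_CU = 1.68e-8   # ohm-metres, copper at room temperature
LINE_LENGTH_M = 1_000_000  # a 1,000 km transmission corridor
CONDUCTOR_AREA_M2 = 1e-3   # ~1000 mm^2 of bundled conductor
POWER_W = 1e9              # 1 GW delivered
VOLTAGE_V = 500e3          # 500 kV transmission voltage

current_a = POWER_W / VOLTAGE_V  # I = P / V = 2,000 A
resistance_ohm = RESISTIVITY_CU * LINE_LENGTH_M / CONDUCTOR_AREA_M2
loss_fraction = current_a ** 2 * resistance_ohm / POWER_W
print(f"Resistive loss: {loss_fraction:.1%}")  # about 6.7% for these numbers
# A superconducting line has R = 0, so this I^2*R loss vanishes entirely.
```

    The same arithmetic also shows why grids transmit at very high voltage: halving the current by doubling the voltage cuts the resistive loss by a factor of four.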

    In 2014, the German city of Essen installed a kilometre-long superconducting cable for transmitting electrical power. It can transmit five times more power than a conventional cable, and with hardly any loss, although it’s a complicated bit of kit.

    To keep the superconductor below its critical temperature, liquid nitrogen must be pumped through the core and the whole thing is encased in several layers of insulation, a bit like a thermos flask.

    For a more practical solution, we’ll need to wait for cheap superconductors that can operate closer to room temperature, an advance that can be expected to take decades.

    Closer to reality, perhaps, are superconducting computers. Scientists have already developed computer chips based on superconductors, such as the Hypres Superconducting Microchip. Using such processors could lead to supercomputers requiring 1/500th the power of a regular supercomputer.

    Hypres Superconducting Microchip, Incorporating 6000 Josephson Junctions. No image credit. http://www.superconductors.org/uses.htm

    See the full article here.

