Tagged: COSMOS Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 11:05 am on March 24, 2017 Permalink | Reply
    Tags: COSMOS, nicotinamide adenine dinucleotide (NAD+)

    From COSMOS: “Can ageing be held at bay by injections and pills?” 

    Cosmos Magazine bloc


    24 March 2017
    Elizabeth Finkel

    Two fast ageing mice. The one on the left was treated with a FOXO4 peptide, which targets senescent cells and leads to hair regrowth in 10 days.
    Peter L.J. de Keizer

    The day we pop up a pill or get a jab to stave off ageing is closer, thanks to two high profile papers just published today.

    A Science paper from a team led by David Sinclair of Harvard Medical School and the University of NSW shows how popping a pill that raises levels of a natural molecule called nicotinamide adenine dinucleotide (NAD+) staves off the DNA damage that leads to ageing.

    The other paper, published in Cell, led by Peter de Keizer’s group at Erasmus University in the Netherlands, shows how a short course of injections to kill off defunct “senescent cells” reversed kidney damage, hair loss and muscle weakness in aged mice.

    Taken together, the two reports give a glimpse of how future medications might work together to forestall ageing when we are young, and delete damaged cells as we grow old. “This is what we in the field are planning”, says Sinclair.

    Sinclair has been searching for factors that might slow the clock of ageing for decades. His group stumbled upon the remarkable effects of NAD+ in the course of studying powerful anti-ageing molecules known as sirtuins, a family of seven proteins that mastermind a suite of anti-ageing mechanisms, including protecting DNA and proteins.

    Resveratrol, a compound found in red wine, stimulates their activity. But back in 2000, Sinclair’s then boss Lenny Guarente at MIT discovered a far more powerful activator of sirtuins – NAD+. It was a big surprise.

    “It would have to be the most boring molecule in the world”, notes Sinclair.

    It was regarded as so common and boring that no-one thought it could play a role in something as profound as tweaking the ageing clock. But Sinclair found that NAD+ levels decline with age.

    “By the time you’re 50, the levels are halved,” he notes.

    And in 2013, his group showed [Cell] that raising NAD+ levels in old mice restored the performance of their cellular power plants, mitochondria.

    One of the key findings of the Science paper is identifying the mechanism by which NAD+ improves the ability to repair DNA. It acts like a basketball defence, staying on the back of a troublesome protein called DBC1 to keep it away from the key player PARP1 – a protein that repairs DNA.

    When NAD+ levels fall, DBC1 tackles PARP1. End result: DNA damage goes unrepaired and the cell ‘ages’.

    “We’ve discovered the reason why DNA repair declines as we get older. After 100 years, that’s exciting,” says Sinclair.

    His group has helped develop a compound, nicotinamide mononucleotide (NMN), that raises NAD+ levels. As reported in the Science paper, when injected into aged mice it restored the ability of their liver cells to repair DNA damage. In young mice that had been exposed to DNA-damaging radiation, it also boosted their ability to repair it. The effects were seen within a week of the injection.

    These kinds of results have impressed NASA. The organisation is looking for methods to protect its astronauts from radiation damage during their one-year trip to Mars. Last December it hosted a competition for the best method of preventing that damage. Out of 300 entries, Sinclair’s group won.

    As well as astronauts, children who have undergone radiation therapy for cancer might also benefit from this treatment. According to Sinclair, clinical trials for NMN should begin in six months. While many claims have been made for NAD+ to date, and compounds are being sold to raise its levels, this will be the first clinical trial, says Sinclair.

    By boosting rates of DNA repair, Sinclair’s drug holds the hope of slowing down the ageing process itself. The work from de Keizer’s lab, however, offers the hope of reversing age-related damage.

    His approach stems from exploring the role of senescent cells. Until 2001, these cells were not really on the radar of researchers who study ageing. They were considered part of a protective mechanism that mothballs damaged cells, preventing them from ever multiplying into cancer cells.

    The classic example of senescent cells is a mole. These pigmented skin cells have incurred DNA damage, usually triggering dangerous cancer-causing genes. To keep them out of action, the cells are shut down.

    If humans lived only the 50-year lifespan they were designed for, there’d be no problem. But because we exceed our use-by date, senescent cells end up doing harm.

    As Judith Campisi at the Buck Institute, California, showed in 2001, they secrete inflammatory factors that appear to age the tissues around them.

    But cells have another option. They can self-destruct in a process dubbed apoptosis. It’s quick and clean, and there are no nasty compounds to deal with.

    So what relegates some cells to one fate over another? That’s the question Peter de Keizer set out to solve when he did a post-doc in Campisi’s lab back in 2009.

    Finding the answer didn’t take all that long. A crucial protein called p53 was known to give the order for the coup de grace. But sometimes it showed clemency, relegating the cell to senesce instead.

    De Keizer used sensitive new techniques to identify that in senescent cells, it was a protein called FOXO4 that tackled p53, preventing it from giving the execution order.

    The solution was to interfere with this liaison. But it’s not easy to wedge proteins apart; not something that small diffusible molecules – the kind that make great drugs – can do.

    De Keizer, who admits to “being stubborn”, was undaunted. He began developing a protein fragment that might act as a wedge. It resembled part of the normal FOXO4 protein, but instead of being built from normal L-amino acids it was built from D-amino acids. It proved to be a very powerful wedge.

    Meanwhile other researchers were beginning to show that executing senescent cells was indeed a powerful anti-ageing strategy. For instance, a group from the Mayo Clinic last year showed that mice genetically engineered to destroy 50-70% of their senescent cells in response to a drug experienced a greater “health span”.

    Compared to their peers they were more lively and showed less damage to their kidney and heart muscle. Their average lifespan was also boosted by 20%.

    But humans are not likely to undergo mass genetic engineering. To achieve similar benefits requires a drug that works on its own. Now de Keizer’s peptide looks like it could be the answer.

    As the paper in Cell shows, in aged mice, three injections of the peptide per week had dramatic effects. After three weeks, the aged balding mice regrew hair and showed improvements to kidney function. And while untreated aged mice could be left to flop onto the lab bench while the technician went for coffee, treated mice would scurry away.

    “It’s remarkable. It’s the best result I’ve seen in age reversal,” says Sinclair of his erstwhile competitor’s paper.

    Dollops of scepticism are healthy when it comes to claims of a fountain of youth – even de Keizer admits his work “sounds too good to be true”. Nevertheless some wary experts are impressed.

    “It raises my optimism that in our lifetime we will see treatments that can ameliorate multiple age-related diseases”, says Campisi.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 10:53 am on March 24, 2017 Permalink | Reply
    Tags: COSMOS, Fight looms over evolution's essence, Palaeontology, Species selection

    From COSMOS: “Macro or micro? Fight looms over evolution’s essence” 



    24 March 2017
    Stephen Fleischfresser

    Evolution over deep time: is it in the genes, or the species?
    Roger Harris/Science Photo Library

    A new paper threatens to pit palaeontologists against the rest of the biological community and promises to reignite the often-prickly debate over the question of the level at which selection operates.

    Carl Simpson, a researcher in palaeobiology at the Smithsonian Institution National Museum of Natural History, has revived the controversial idea of ‘species selection’: that selective forces in nature operate on whole species at a macroevolutionary scale, rather than on individuals at the microevolutionary level.

    Macroevolution, mostly concerned with extinct species, is the study of large-scale evolutionary phenomena across vast time spans. By contrast, microevolution focusses on evolution in individuals and species over shorter periods, and is the realm of biologists concerned with living organisms, sometimes called neontologists.

    Neontologists, overall, maintain that all evolutionary phenomena can be explained in microevolutionary terms. Macroevolutionists often disagree.

    In a paper, yet to be peer-reviewed, on the biological pre-print repository bioRxiv, Simpson has outlined a renewed case for species selection, using recent research and new insights, both scientific and philosophical. And this might be too much for the biological community to swallow.

    The debate over levels of selection dates to Charles Darwin himself and concerns the question of what the ‘unit of selection’ is in evolutionary biology.

    The default assumption is that the individual organism is the unit of selection. If individuals of a particular species possess a trait that gives them reproductive advantage over others, then these individuals will have more offspring.

    If this trait is heritable, the offspring too will reproduce at a higher rate than other members of the species. With time, this leads to the advantageous trait becoming species-typical.

    Here, selection is operating on individuals, and this percolates up to cause species-level characteristics.

    While Darwin favoured this model, he recognised that certain biological phenomena, such as the sterility of workers in eusocial insects such as bees and ants, could best be explained if selection operated at a group level.

    Since Darwin, scientists have posited different units of selection: genes, organelles, cells, colonies, groups and species among them.

    Simpson’s argument hinges on the kind of macroevolutionary phenomena common in palaeontology: speciation and extinction over deep time. Species selection is real, he says, and is defined as, “a macroevolutionary analogue of natural selection, with species playing an analogous part akin to that played by organisms in microevolution”.

    Simpson takes issue with the argument that microevolutionary processes such as individual selection percolate up to cause macroevolutionary phenomena.

    He presents evidence contradicting the idea, and concludes that the “macroevolutionary patterns we actually observe are not simply the accumulation of microevolutionary change… macroevolution occurs by changes within a population of species.”

    How this paper will be received, only time will tell. A 2010 paper in Nature saw the famous evolutionary biologist E. O. Wilson recant decades of commitment to the gene as the unit of selection, hinting instead at group selection. The mere suggestion of this brought a sharp rebuke from 137 scientists.

    Simpson’s claim is more radical still, so we can only wait for the controversy to deepen.

    See the full article here.


  • richardmitnick 9:39 am on March 9, 2017 Permalink | Reply
    Tags: Autism Spectrum Disorder (ASD), Big data reveals more suspect autism genes, COSMOS

    From COSMOS: “Big data reveals more suspect autism genes” 



    09 March 2017
    Paul Biegler

    Deep data dives are revealing more complexities in the autism story. luckey_sun

    Researchers have isolated 18 new genes believed to increase risk for Autism Spectrum Disorder (ASD), a finding that may pave the way for earlier diagnosis and possible future drug treatments for the disorder.

    The study, published this week in Nature Neuroscience, used a technique called whole genome sequencing (WGS) to map the genomes of 5193 people with ASD.

    WGS goes beyond traditional analyses that look at the roughly 1% of DNA that makes up our genes to take in the remaining “noncoding” or “junk” DNA once thought to have little biological function.

    The study, led by Ryan Yuen of the Hospital for Sick Children in Toronto, Canada, used a cloud-based “big data” approach to link genetic variations with participants’ clinical data.

    Researchers identified 18 genes that increased susceptibility to ASD, noting people with mutations in those genes had reduced “adaptive functioning”, including the ability to communicate and socialise.

    “Detection of the mutation would lead to prioritisation of these individuals for comprehensive clinical assessment and referral for earlier intervention and could end long-sought questions of causation,” the authors write.

    But the study also found increased variations in the noncoding DNA of people with ASD, including so-called “copy number variations” where stretches of DNA are repeated. The finding highlights the promise of big data to link fine-grained genetic changes with real world illness, something the emerging discipline of precision medicine will harness to better target treatments.

    Commenting on the study, Dr Jake Gratten from the Institute for Molecular Bioscience at the University of Queensland said, “whole genome sequencing holds real promise for understanding the genetics of ASD, but establishing the role of noncoding variation in the disorder is an enormous challenge.”

    “This study is a good first step but we’re not there yet – much larger studies will be needed,” he said.

    ASD affects around 1% of the population, and is characterised by impaired social and emotional communication, something poignantly depicted by John Elder Robison in his 2016 memoir Switched On.

    But the study findings went beyond autism, isolating ASD-linked genetic changes that increase risk for heart problems and diabetes, raising the possibility of preventative screening for participants and relatives.

    The authors note that 80% of the 61 ASD-risk genes already discovered by the project, a collaboration between advocacy group Autism Speaks and Verily Life Sciences, and known as MSSNG, are potential research targets for new drug treatments.

    But the uncomfortable nexus between scientific advances and public policy is also highlighted this week in an editorial in the New England Journal of Medicine. Health policy researchers David Mandell and Colleen Barry argue that planned Trump administration rollbacks threaten services to people with autism.

    Any repeal of the Affordable Care Act (“Obamacare”) they write, could include cuts to the public insurer Medicaid and subsequent limits on physical, occupational and language therapy for up to 250,000 children with autism.

    The authors also warn that comments made by US Attorney General Jeff Sessions bode ill for the Individuals with Disabilities Education Act (IDEA), legislation that guarantees free education for children with disabilities such as autism. Sessions has reportedly said the laws “may be the single most irritating problem for teachers throughout America today.”

    The authors also voice concern that the Trump administration’s embrace of debunked links between vaccination and autism is a major distraction from these “growing threats to essential policies that support the health and well-being of people with autism or other disabilities”.

    See the full article here.


  • richardmitnick 2:44 pm on March 4, 2017 Permalink | Reply
    Tags: COSMOS

    From COSMOS: “Resistance is futile: the super science of superconductivity” 



    30 May 2016 [Re-issued?]
    Cathal O’Connell

    From maglev trains to prototype hoverboards and the Large Hadron Collider, superconductors are finding more and more uses in modern technology. Here’s what superconductors are and how they work.

    A superconducting ceramic operates at the relatively high temperature of 123 Kelvin in a Japanese lab.

    What are superconductors?

    All the electronic devices around you – your phone, your computer, even your bedside lamp – are based on moving electrons through materials. In most materials, there is an opposition to this movement (kind of like friction, but for electrons) called electrical resistance, which wastes some of the energy as heat.

    This is why your laptop heats up during use, and the same effect is used to boil water in a kettle.

    Superconductors are materials that carry electrical current with exactly zero electrical resistance. This means you can move electrons through them without losing any energy as heat.
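    To see why zero resistance matters, here is a minimal sketch in Python comparing the heat wasted by an ordinary copper cable with a superconducting one. The wire dimensions and current below are illustrative assumptions, not figures from the article:

    ```python
    # Power lost as heat in a cable: P = I^2 * R (Joule heating),
    # where R = resistivity * length / cross-sectional area.

    def joule_loss_watts(current_amps: float, resistance_ohms: float) -> float:
        """Heat dissipated per second in a conductor carrying a steady current."""
        return current_amps ** 2 * resistance_ohms

    # Hypothetical copper cable: 1 km long, 1 cm^2 cross-section.
    RESISTIVITY_COPPER = 1.68e-8   # ohm metres, at room temperature
    length = 1000.0                # metres
    area = 1e-4                    # square metres
    r_copper = RESISTIVITY_COPPER * length / area   # ~0.168 ohm

    current = 1000.0               # amps

    loss_copper = joule_loss_watts(current, r_copper)   # ~168 kW of waste heat
    loss_super = joule_loss_watts(current, 0.0)         # zero resistance: zero loss

    print(f"copper: {loss_copper / 1000:.0f} kW lost; superconductor: {loss_super} W lost")
    ```

    Because the loss grows with the square of the current, the penalty for resistance gets steep exactly where you want to push the most power through.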

    Sounds amazing. What’s the catch?

    The snag is you have to cool a superconductor below a critical temperature for it to work. That critical temperature depends on the material, but it’s usually below -100 °C.

    A room temperature superconductor, if one could be found, could revolutionise modern technology, letting us transmit power across continents without any loss.

    How was superconductivity discovered?

    When you cool a metal, its electrical resistance tends to decrease. This is because the atoms in the metal jiggle around less, and so are less likely to get in an electron’s way.

    Around the turn of the 20th century, physicists were debating what would happen at absolute zero, when the jiggling stops altogether.

    Some wondered whether the resistance would continue to decrease until it reached zero.

    Others, such as Lord Kelvin (after whom the temperature scale is named), argued that the resistance would become infinite as electrons themselves would stop moving.

    In April 1911, Dutch physicist Heike Kamerlingh Onnes cooled a solid mercury wire to 4.2 Kelvin and found the electrical resistance suddenly vanished – the mercury became a perfect conductor. It was a shocking discovery, both because of the abruptness of the change, and the fact that it happened a good four degrees above absolute zero.

    Kamerlingh Onnes had discovered superconductivity, although it took another 40 years for his results to be fully explained.

    What’s the explanation for superconductivity?

    It turns out there are at least two kinds of superconductivity, and physicists can only explain one of them.

    In the simplest case, when you cool a single element down below its critical temperature (as with the mercury example above) physicists can explain superconductivity pretty well: it arises from a weird quantum effect which causes the electrons to pair up within the material. When paired, the electrons gain the ability to flow through the material without getting knocked about by atoms.

    But more complex materials, such as some ceramics which are superconducting at higher temperatures, can’t be explained using this theory.

    Physicists don’t have a good explanation for what causes superconductivity in these “non-traditional superconductor” materials, although the answer must be another quantum effect which links up the electrons in some way.

    What are high-temperature superconductors?

    Physicists have a loose definition of what a “high temperature” is. In this case, it usually means anything above 77 Kelvin (or -196 °C), the boiling point of liquid nitrogen. They choose this temperature because it means the superconductor can be cooled using liquid nitrogen, making it relatively cheap to run (liquid nitrogen costs only about 10-15 cents a litre).

    The threshold temperature for superconductivity has been increasing for decades. The current record (-70 °C) is held by hydrogen sulfide (yes, the same molecule that gives rotten eggs their distinctive smell) – though only when squeezed to enormous pressures.

    The hope is that one day scientists will produce a material that superconducts at room temperature with no cooling required.

    What are superconductors used for now?

    Superconductors are used to make incredibly strong magnets for magnetic levitation (maglev) trains, for the magnetic resonance imaging (MRI) machines in hospitals, and to keep particles on track as they race around the Large Hadron Collider.

    CERN LHC particles

    The reason superconductors can make strong magnets comes down to Ampère’s law (an electric current creates a magnetic field). With no resistance, you can drive a huge current, which makes for a correspondingly large magnetic field.

    For example, maglev trains have a series of superconducting coils along each wagon. Each superconductor contains a persistent electric current of about 700,000 amperes.
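    The current-to-field relationship can be sketched with the textbook formula for the field inside a long solenoid, B = μ₀ · n · I: the field grows in proportion to the current and to the turns of wire per metre. The coil numbers below are illustrative assumptions, not the actual geometry of a maglev magnet:

    ```python
    import math

    MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, in T·m/A

    def solenoid_field_tesla(turns_per_metre: float, current_amps: float) -> float:
        """Magnetic field inside an ideal long solenoid: B = mu_0 * n * I."""
        return MU_0 * turns_per_metre * current_amps

    # A hypothetical tightly wound coil: 10,000 turns per metre.
    n = 10_000.0

    # With zero resistance, a superconducting coil can sustain a large
    # current indefinitely, so the field it generates never winds down.
    for current in (10.0, 100.0, 1000.0):
        print(f"{current:7.0f} A -> {solenoid_field_tesla(n, current):8.3f} T")
    ```

    For comparison, a typical fridge magnet is around 0.005 T, so even the modest currents above dwarf everyday magnets once the coil is dense enough.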

    The Japanese SCMaglev’s EDS suspension is powered by the magnetic fields induced either side of the vehicle by the passage of the vehicle’s superconducting magnets.


    The current runs round and round the coil without ever winding down, and so the magnetic field it generates is constant and incredibly strong. As the train passes over other electromagnets in the track, it levitates.

    With no friction to slow them down, maglev trains can reach over 600 kilometres per hour, making them the fastest in the world.

    A prototype hoverboard designed by Lexus also uses superconducting magnets for levitation

    Lexus via Wired

    What uses might superconductors have in the future?

    About 6% of all the electricity generated by power plants is lost in transmitting and distributing it around the country along copper wires.

    By replacing copper wires with superconducting wires, we could potentially transmit electrical power across entire continents without any loss. The problem, at the moment, is this would be ludicrously expensive.

    In 2014, the German city of Essen installed a kilometre-long superconducting cable for transmitting electrical power. It can transmit five times more power than a conventional cable, and with hardly any loss, although it’s a complicated bit of kit.

    To keep the superconductor below its critical temperature, liquid nitrogen must be pumped through the core and the whole thing is encased in several layers of insulation, a bit like a thermos flask.

    For a more practical solution, we’ll need to wait for cheap superconductors that can operate closer to room temperature, an advance that can be expected to take decades.

    Closer to reality, perhaps, are superconducting computers. Scientists have already developed computer chips based on superconductors, such as the Hypres Superconducting Microchip. Using such processors could lead to supercomputers requiring 1/500th the power of a regular supercomputer.

    Hypres Superconducting Microchip, incorporating 6,000 Josephson junctions. No image credit. http://www.superconductors.org/uses.htm

    See the full article here.


  • richardmitnick 9:48 am on March 2, 2017 Permalink | Reply
    Tags: COSMOS, Great Oxidation, Humans have created at least 208 new types of mineral, International Mineralogical Association (IMA)

    From COSMOS: “Humans have created at least 208 new types of mineral” 



    02 March 2017
    Richard A Lovett

    Simonkolleite [Zn5(OH)8Cl2·H2O] is an anthropogenic mineral, found on a copper mining artifact, Rowley mine, Maricopa County, Arizona. RRUFF

    Scientists have discovered that humans are adding to our planet’s catalogue of mineral types at a rate never before seen.

    It’s happening so fast that human-created minerals now total 208 of the 5,208 types recorded by the International Mineralogical Association (IMA)—and these are merely the ones that have been officially recognised. There are probably hundreds more currently not acknowledged.

    Nor are these substances mere laboratory curiosities created by bored scientists.

    “We make bricks,” says Robert Hazen, a mineralogist at the Carnegie Institution for Science in Washington DC. “We make cement. We make reinforced concrete. We have porcelain in glassware. We have all sorts of crystals in technology, batteries, and magnets. We have pigments and paints and glues and things that include mineral-like crystal substances which never before existed in the history of the world.”

    Other substances are created by accident. “Many are associated with mining,” says Edward Grew, a mineralogist and petrologist from the University of Maine, who collaborated with Hazen on a paper published in the current issue of American Mineralogist.

    “Mining disturbs the environment under the earth or at the earth’s surface,” he says, “and that disturbance makes for environments where new minerals can form. Some have been dated from the Bronze Age, but for the most part they are much newer.”

    To figure out when minerals first appeared, the scientists went through geological databases, looking for the time when each officially recognised mineral first appeared in the geological record.

    “This one formed in a mine tunnel, this one in a shipwreck, and this one in an old Egyptian statue,” Hazen says, adding that his favourite was a mineral – calclacite – that formed in a museum drawer where a mineral specimen reacted with acetic acid from the wood to create an entirely new substance. “You had a new mineral forming in a museum!” he says.

    And while the wood-drawer mineral might be dismissed as a curiosity, the overall effect is important, the scientists say.

    That’s because the only other time in Earth’s history when there was a remotely comparable growth in the number of mineral types was during the “Great Oxidation,” which occurred when oxygen began to build up in the Earth’s atmosphere, about 2.2 billion years ago.

    This caused the oxidation of pre-existing minerals, producing the first appearances of as many as two-thirds of the minerals currently in the IMA’s catalogue (including economically important iron ores).

    But the Great Oxidation event took place over the course of hundreds of millions of years. Today’s explosion in new minerals has occurred in a tiny fraction of that time. That’s important, Hazen says, because minerals are durable and the ones created in recent history will likely outlive the civilisation that produced them.

    “They will be preserved for billions of years in the sedimentary record,” he says.

    And that, he says, bolsters the argument for designating modern times as a new geological epoch: the Anthropocene.

    Abhurite [Sn21O6(OH)14Cl16] from the wreck of the SS Cheerful, which foundered off St. Ives, Cornwall, England. RRUFF

    The designation of geological epochs might seem arcane, but in the long-run view of geologists, it is anything but. The issue, Hazen says, is what a geologist of the future might think, a billion years from now, going through the strata of the Grand Canyon and examining sediments laid down in our era.

    “Cubic zirconium, laser crystals, silicon chips, and stuff like that are very stable materials,” he says. “Future geologists will be able to hammer out chunks of materials and say, ‘Look at this.’”

    Other scientists have suggested that a similar worldwide stratigraphic layer might be created by fallout from nuclear testing, or from the fumes of leaded gasoline. But that, Hazen says, is “nothing” compared to “minerals that are being produced in huge volumes all around the world.”

    Other scientists agree. J. Kelly Russell, a volcanologist and igneous petrologist at the University of British Columbia in Vancouver, notes that Hazen is “a great ‘big-idea’ guy”. He didn’t have time to review the new paper for this article, but “I heard some of [his] ideas a few years ago and it was quite stimulating,” he says.

    Allen Glazer, an igneous petrologist from the University of North Carolina, Chapel Hill, agrees. “I’m not an expert, but the concept makes sense to me,” he says.
    “If you go forward, say a million years, there will be plenty of markers showing how humans affected the landscape and the stratigraphic record. I remember as a lad going into a mine and finding beautiful blue curving crystals of chalcanthite on mine timbers. That’s a natural mineral, but was on human workings. That’s one of the sorts of things they’re talking about.”

    Richard Alley, a geoscientist at Pennsylvania State University, calls the new study “one more demonstration of the large and growing human impact on the planet”. “I still occasionally meet people who don’t believe that humans are powerful enough for our actions to have global consequences,” he says. “I’m not sure this paper will change those people’s minds, but [it] does re-confirm the increasingly pervasive reach of human influence.”

    But Ken Caldeira, a climate scientist at the Carnegie Institution for Science’s Department of Global Ecology in Stanford, California, notes that if the future has any geologists to look back across vast spans of time, “the presence of humans on the face of this planet will be marked by the widespread extinction of many species, large mammals in particular”. “It is this loss of biodiversity that I mourn. Whatever the mineralogical markers of human activity on this planet, the extinction record is the real tragedy of irrecoverable loss.”

    By their names shall you know them.
    Here are just a few of the 208 man-made minerals officially recognised by the International Mineralogical Association.

    These are all associated in various ways with mines and mining.


    See the full article here.


  • richardmitnick 2:39 pm on February 27, 2017 Permalink | Reply
    Tags: A quartz crystal vibrates when an electrical current is passed through it, Atomic clocks, COSMOS, Frequency reference, Piezoelectric, The quartz clock was invented in 1927

    From COSMOS: “How atomic clocks can keep accurate time for billions of years” 



    16 February 2017
    Vishnu Varma

    In caesium atomic clocks, atoms of vaporised caesium-133 oscillate between two energy levels as they pass between magnets at each end of the resonator. Science Photo Library / Getty Images

    The most accurate clock ever made. At the clock’s ticking heart is a chamber filled with strontium atoms that are excited by laser light. Ye group and Brad Baxley / JILA

    It’s easy to take clocks for granted – fewer people are wearing wristwatches, instead preferring to check their smartphone or laptop for the time. But making sure that timekeeping is accurate was a problem only solved in the 1940s.

    The basic principles of how a clock works haven’t really changed much in more than 350 years. The most important part of any time-keeping device is called the “frequency reference” which ensures each second is exactly the same.

    Take, for instance, a pendulum. A small force taps it to make sure it takes a second to complete a swing, known in the time business as an oscillation. The problem, though, is that a slight jostle, or even a temperature fluctuation, can change its duration.

    The quartz clock was invented in 1927, replacing the pendulum with a quartz crystal oscillator as the frequency reference. Quartz is piezoelectric: it accumulates electrical charge when flexed, and, conversely, it flexes – vibrates – when a voltage is applied to it.

    A small power source – a watch battery, for instance – feeds a microchip circuit, which makes the crystal oscillate at a set frequency: 32,768 times each second. The circuit then detects these vibrations and converts them into electric pulses – one every second – which drive a miniature motor to keep the second hand sweeping along.
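    That 32,768 figure is no accident: it is 2¹⁵, so a chain of 15 divide-by-two flip-flop stages reduces the crystal's vibration to exactly one pulse per second. A toy sketch of the divider chain:

    ```python
    CRYSTAL_HZ = 32_768  # 2**15 vibrations per second

    freq_hz = CRYSTAL_HZ
    stages = 0
    while freq_hz > 1:
        freq_hz //= 2   # each flip-flop stage halves the frequency
        stages += 1

    print(stages, freq_hz)  # 15 divider stages leave exactly 1 pulse per second
    ```

    Any power of two works this way, which is why watch crystals are cut to 32,768 Hz rather than a round decimal number.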

    Today, many clocks continue to use quartz because it’s easy to maintain and keeps time with reasonable accuracy. But like pendulums, the oscillation of a quartz crystal can change due to temperature and pressure. And while these tiny changes don’t affect everyday life, advancements in technology and highly accurate experiments require much more reliable time-keeping.

    In 1949, scientists from the National Institute of Standards and Technology in the US developed the first atomic clock. Its frequency reference was what’s called the “resonant frequency” – the natural rate of vibration determined by the physical parameters of the vibrating object, at which it will attain a high-energy state.

    For instance, a caesium-133 atom has a resonant frequency of 9,192,631,770 hertz. This means its outermost electron will most likely “jump” to the next higher energy level – producing a high-energy version of the atom – in the presence of microwave radiation at that exact frequency.

    Caesium is commonly used in atomic clocks because in 1967 the second was redefined as 9,192,631,770 periods of the radiation emitted when a caesium-133 atom jumps between these two energy levels.


    In a standard atomic clock, caesium – a soft metal that melts just above room temperature – is heated in an oven to a gas, which boosts some atoms to a high-energy state. The gas is funnelled through a magnet, which filters out those high-energy atoms.

    The remaining stream of low-energy atoms then passes through a wave transmitter, which bombards them with microwaves at 9,192,631,770 hertz. If the frequency is exactly right, all the caesium-133 atoms should flick into a high-energy state.

    The atoms then pass through another magnet. But this time, only the high-energy ones are allowed to pass and hit a detector.

    If the detector senses gaps between impacts, it knows that not all the atoms were boosted – and that therefore the wave transmitter isn’t generating the correct frequency.

    An electrical signal is then sent back to the generator, correcting the transmitted frequency until a steady stream of caesium-133 atoms hits the detector. The locked signal’s oscillations are then counted: every 9,192,631,770 cycles mark exactly one second.
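    The feedback loop can be caricatured as a hill-climb on the detector signal, which peaks when the generator sits exactly on the caesium resonance. The response curve, starting offset and step size below are all invented for illustration; real servo electronics are far more refined:

    ```python
    # Toy model of an atomic clock's self-correcting feedback loop.
    RESONANT_HZ = 9_192_631_770  # caesium-133 hyperfine transition frequency

    def detector_signal(freq_hz):
        """Fraction of atoms reaching the detector: peaks at resonance.

        A Lorentzian-like response, invented for this demo."""
        detuning = freq_hz - RESONANT_HZ
        return 1.0 / (1.0 + (detuning / 50.0) ** 2)

    freq = RESONANT_HZ + 50.0  # generator starts slightly off-frequency
    for _ in range(100):
        # steer the generator in whichever direction improves the signal
        if detector_signal(freq + 1.0) > detector_signal(freq):
            freq += 1.0
        elif detector_signal(freq - 1.0) > detector_signal(freq):
            freq -= 1.0

    print(round(freq - RESONANT_HZ))  # 0: locked onto the caesium resonance
    ```

    Once locked, the generator's output is itself the stable frequency that gets counted down into seconds.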

    This self-correcting ability is what makes atomic clocks such accurate devices. Left alone, the best of today’s clocks – which use elements such as strontium – would drift by less than a second over more than a billion years.

    These accuracies may seem unnecessary but, aside from enabling precise experimental measurements, atomic clocks are an essential part of the Global Positioning System (GPS). Without such precision, a timing error as small as a microsecond could make your GPS think you are hundreds of metres away from where you actually are.
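    A GPS fix comes from light-travel times, so a clock error maps directly onto a distance error at the speed of light. The arithmetic behind the "hundreds of metres" claim:

    ```python
    C_M_PER_S = 299_792_458  # speed of light in a vacuum, m/s

    clock_error_s = 1e-6  # a one-microsecond timing error
    position_error_m = C_M_PER_S * clock_error_s

    print(round(position_error_m))  # ~300 metres of position error
    ```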


  • richardmitnick 9:21 am on February 20, 2017 Permalink | Reply
    Tags: , , COSMOS, ,   

    From COSMOS: “Fast radio bursts: enigmatic and infuriating” 

    Cosmos Magazine bloc


    13 February 2017
    Katie Mack

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    The best science stories are mystery stories. Something unexplained occurs, the detectives gather their clues, theories are proposed and shot down. In the end, if all goes well, the mystery is solved – at least until the next time something goes bump in the night.

    One of the most perplexing mysteries in astronomy today is the fast radio burst, or FRB. Almost 10 years ago, astronomer Duncan Lorimer at West Virginia University noticed a shockingly bright, incredibly quick signal in data collected by the Parkes radio telescope observatory in New South Wales a few years before. Only a few milliseconds long, the burst was as brilliant as some of the brightest galaxies radio astronomers had ever observed.

    Intriguingly, the signal swept across radio frequencies, mimicking the behaviour of bright flashes of radiation from very distant pulsars – ultra-dense stars that emit regular pulses of light. A signal that spreads across frequencies usually indicates that cosmic matter is dispersing the light, in the same way a prism spreads white light into a rainbow.
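    That frequency sweep can be quantified: ionised matter along the line of sight delays a radio signal in proportion to its "dispersion measure" (DM) and the inverse square of the observing frequency, commonly approximated as Δt ≈ 4.15 ms × DM / f². The DM value below is illustrative, roughly the figure reported for the Lorimer burst:

    ```python
    def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
        """Arrival delay in ms relative to an infinitely high frequency.

        Lower frequencies arrive later, with delay proportional to 1/f^2."""
        return 4.15 * dm_pc_cm3 / freq_ghz ** 2

    DM = 375.0  # illustrative dispersion measure, pc/cm^3

    # The burst sweeps across the band: the 1.2 GHz edge lags the 1.5 GHz edge.
    sweep_ms = dispersion_delay_ms(DM, 1.2) - dispersion_delay_ms(DM, 1.5)
    print(round(sweep_ms))  # a few hundred milliseconds across the band
    ```

    A DM far larger than our galaxy can account for along that sight line is exactly what suggested the burst came from cosmological distances.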

    But while the burst looked a lot like a pulsar blip, it didn’t repeat the way pulsar signals do, and no other telescope detected it. Dubbed the “Lorimer Burst”, it stood for years as a one-off event.

    Given its uniqueness, some suggested it must have been some kind of Earth-based interference, or perhaps simply a glitch in the Parkes telescope.

    Today, fast radio bursts are no longer anomalies. With a hint of what to look for – very short, bright events – astronomers have scoured data from the Parkes telescope and other radio telescopes around the world. FRBs are now so numerous it’s hard to keep up with their discovery.

    Yet FRBs are a study in contradictions. So far, only one source repeats, but at such irregular intervals that astronomers have not been able to determine a pattern. Only two bursts have coincided with emissions in visible or any other kind of light, which is necessary to pinpoint the source of the FRB since the radio telescopes can’t give an exact location.

    However, one of those two bursts now appears more likely to be a chance alignment than a true correlation, and the other paints the picture of an explosion with such odd characteristics it is hard to reconcile with any known model.

    Careful analysis of different FRB signals has suggested explosions of young stars, or old stars, or even collisions between stars, but none of those fit with an FRB that repeats.

    One of the biggest open questions is exactly how far away FRBs are. Every attempt to work out their distance has been inconclusive. Even the pattern of their locations in the sky is odd. If they’re all far beyond our own galaxy, we would expect them to appear at random places in the sky.

    If they’re all in our galaxy, we should see them mostly along the plane of the Milky Way, where most of the stars are. In actuality, we’ve found them to lie somewhat more often above or below the plane of the galaxy, not randomly like distant sources, and not in the plane like close ones. But with only 20 or so seen so far, it is hard to draw a conclusion.

    Thanks to FRBs, we are now looking at the universe in a new way, redesigning our observation strategies and scouring the data for super-short-duration events. Just as every new observing wavelength we try or instrumental technique we develop opens a new window to the universe, this new frontier may allow us to see an entire zoo of cosmic events that were happening all along, unseen. It wouldn’t be surprising to find that FRBs represent a diverse family of cosmic explosions rather than one kind of thing.

    The key to solving this mystery will be to catch an FRB in the act and, at the same time, see its fingerprints on a signal detected with another kind of light, thus allowing us to see the galaxy it came from.

    Astronomers are already designing surveys that watch for FRBs with radio telescopes and scour the sky with optical, infrared, or gamma ray telescopes around the world simultaneously. Once we have a handful of real-time FRBs along with their host galaxies, we will start to close this case and, more likely than not, open several exciting new ones.


  • richardmitnick 5:15 pm on February 10, 2017 Permalink | Reply
    Tags: , COSMOS, Deccan Traps eruption,   

    From COSMOS: “Two huge magma plumes fed the Deccan Traps eruption” 

    Cosmos Magazine bloc


    10 February 2017
    Kate Ravilious

    Thick lava flows in Hawaii are nothing compared to the mammoth rivers of hot rock that rolled across India in the late Cretaceous. New research suggests those flows were fed by two magma sources. Justinreznick / Getty Images

    Some 65 million years ago, the skies over India darkened as one of Earth’s biggest volcanic eruptions burbled from below. It rumbled on for millions of years, blocking out sunlight and casting a chill globally, to produce what we know today as the Deccan Traps.

    Many believe the eruption sent the dinosaurs into severe decline before an asteroid collision finally finished them off. But just how the Earth produced such vast volumes of lava (covering an area greater than the Australian states of New South Wales and Victoria combined) has remained a bit of a mystery. Now a new study by a pair of geologists in Canada shows that the eruption may have been fed by not one, but two deep mantle plumes.

    Like the hot air that rises to create a thundercloud, mantle plumes are thought to be narrow regions of convection that fast-track hot material all the way up from the core-mantle boundary and through the Earth’s 2,900-kilometre-thick layer of hot rock called the mantle.

    There are thought to be a number of active mantle plumes today, some of which have created chains of volcanic islands as the oceanic plate glides across the plume top. The Hawaiian-Emperor seamount chain, Easter Island and the Walvis Ridge (culminating in the island of Tristan da Cunha) are just a few examples.

    By calculating past movements of tectonic plates, scientists have shown that the mantle plume currently underneath the Indian Ocean Island of Réunion was probably responsible for melting the mantle underneath the Deccan region 66 million years ago. But scientists have remained perplexed as to how one mantle plume could produce such a prodigious volume of melt.

    Petar Glišović and Alessandro Forte from the University of Quebec in Montréal, Canada, decided to revisit the Deccan conundrum using a model of mantle convection and running it in reverse for 70 million years.

    “This is a really hard problem as it is impossible to undo heat diffusion,” explains James Wookey, a geophysicist at the University of Bristol in the UK, who wasn’t involved with the study.

    So the pair ran many iterations of their model, each scenario starting 2.5 million years ago with a different mantle structure configuration, and ran each forwards until one reproduced current mantle conditions.

    Taking the best fit and rewinding mantle dynamics by 70 million years, Glišović and Forte’s model showed that the Réunion mantle plume was situated underneath the Deccan region of India, as expected, but to their surprise there was also another mantle plume nearby at that time, responsible for feeding the volcanism on the East African island of Comoros today.

    Publishing in Science, Glišović and Forte calculated that the combined heat of the Réunion and Comoros mantle plumes would have been sufficient to melt around 60 million cubic kilometres of mantle at the time of the eruption; more than enough to feed the Deccan Traps. “We see mantle plumes merging and splitting in our forward running models of mantle convection, so the idea that these two plumes merged in the past is certainly plausible,” says Wookey.

    The model also shows that the Comoros plume had lost most of its heat by 40 million years ago, while the Réunion mantle plume ran out of steam around 20 million years ago. Today, both plumes are mere shadows of their former selves. But Wookey cautions against taking the findings too literally, adding: “the physics of the model is reasonable, but whether the mantle movements are precisely what the Earth actually did is another matter.”


  • richardmitnick 1:13 pm on February 7, 2017 Permalink | Reply
    Tags: COSMOS, Network for Observation of Volcanic and Atmospheric Change (NOVAC), The tricky science of tracking and predicting volcanic eruptions,   

    From COSMOS: “The tricky science of tracking and predicting volcanic eruptions” 

    Cosmos Magazine bloc


    07 February 2017
    Kate Ravilious

    The Japanese city of Kagoshima sits near the base of the active volcano Sakurajima. Jim Holmes / Getty Images

    It was just after 3pm on 13 November 1985 when the Colombian volcano Nevado del Ruiz erupted. Within minutes, four deadly rivers of clay, ice and molten rock raced down its flanks, destroying towns and villages.

    More than 23,000 people died, making it the second deadliest volcanic disaster in the 20th century – outranked only by the 1902 eruption of Mount Pelée in the Caribbean, which killed 30,000.

    There had been mini-eruptions and earthquakes in the run-up to the Colombian event, but while scientists noted such rumblings, they had no way of knowing whether they were just minor tantrums or harbingers of something worse.

    Since then, the science of eruption forecasting has come a long way. In 1991, 75,000 people were evacuated prior to the massive explosion of Mount Pinatubo on the Philippine island of Luzon. In 2010, 70,000 were moved out of harm’s way before Indonesia’s Mount Merapi erupted.

    That’s not to say forecasting has become infallible. In 2014, Mount Ontake in Japan erupted unexpectedly, killing 57 people. And in many areas people live in the shadows of dangerous volcanoes that are not monitored at all.

    But new methods of remote forecasting, combined with powerful computer models, promise to be a game changer.

    Around the world are an estimated 1,550 active volcanoes. Most signal their vitality with just an occasional rumble, around 20 are non-stop fumers that don’t erupt – and about 50 explode each year. A handful of these are big enough to cause problems.

    To forecast big blasts, scientists measure fumes emanating from within the rocks. Changes in the ratio of carbon dioxide to sulfur dioxide can be a clue to restlessness down below. When magma starts to move upwards, carbon dioxide, being less soluble, bubbles out first. It’s followed by a belch of sulfur dioxide as the magma nears the surface.

    For decades, the only way to measure these gases involved walking up the slopes towards the crater, or swooping past in an aircraft – both risky activities, especially once an eruption is underway – and especially if scientists wanted to see how gas composition changed while the volcano was actually erupting.

    Since 2005, though, an international group of researchers has been developing instruments to monitor the target gases remotely and continuously. Known as the Network for Observation of Volcanic and Atmospheric Change (NOVAC) [no link to any organization found, but many articles], group members use portable low-cost spectrometers that analyse gas concentrations based on how sunlight is absorbed as it passes through the volcanic plume.

    Another meter measures changing levels of sulfur dioxide and can be installed kilometres downwind of active vents or on aircraft and satellites, allowing continuous monitoring.

    At present, some 35 volcanoes around the world are watched this way.

    A type of spectrometer known as a multi-GAS analyser continuously measures the ratio of carbon dioxide to sulfur dioxide. It is installed right on the edge of volcanic craters, inside the plume.

    In Costa Rica, these instruments are now successfully sniffing out tell-tale bad volcano breath, providing a valuable early warning service. Early in 2014, Maarten de Moor, from the country’s Volcanic and Seismic Observatory, installed gas sensors on Turrialba, a volcano that threatens the capital city of San José, which lies just 30 kilometres to its west.

    Around six months later, an eruption kicked off. Prior to each ejection, de Moor and his colleagues saw a sharp increase in the carbon-sulfur ratio. “It is a really promising result and a huge step forward for eruption forecasting,” de Moor says.
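    The logic of that warning can be sketched as a simple threshold on the gas ratio. The function name, readings and threshold below are all invented for illustration; real monitoring is far more involved:

    ```python
    def co2_so2_alert(co2, so2, baseline_ratio, factor=2.0):
        """Flag restlessness when the CO2/SO2 ratio climbs well above baseline,
        since CO2 exsolves first from magma rising toward the surface."""
        return (co2 / so2) > factor * baseline_ratio

    # (CO2, SO2) readings in arbitrary concentration units, invented for the demo
    readings = [(400, 100), (450, 100), (900, 100)]
    alerts = [co2_so2_alert(c, s, baseline_ratio=4.0) for c, s in readings]
    print(alerts)  # [False, False, True] -- only the last reading trips the alarm
    ```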

    On 20 May 2016, the Turrialba volcano started erupting columns of smoke and ash that the wind carried towards the Costa Rican capital of San José. EZEQUIEL BECERRA / AFP / Getty Images

    So far, the activity of Turrialba has been small, but de Moor is worried. “The last large eruption on Turrialba was in 1864,” he says. “The ash deposits suggest that it started with small eruptions, like those we are seeing now.”

    The little disturbances, he continued, gave way to an enormous outburst – dubbed “Strombolian” in the jargon of the discipline, a reference to an ultra-active volcano on the island of Stromboli, off the coast of Sicily in the Tyrrhenian Sea. Such a powerful eruption from Turrialba would devastate the surrounding terrain, potentially killing thousands and crippling Costa Rica’s economy.

    The change in the carbon/sulfur ratio picked up by NOVAC’s spectrometers, though, turns out not to be a reliable early warning signal in every case. It was not recorded, for instance, on Turrialba’s neighbour, Poás, before its most recent eruption.

    The explanation concerns the acidic lake that fills Poás’s crater. The lake normally absorbs sulfur dioxide while allowing carbon dioxide to bubble through, giving the gas plume a permanently high carbon/sulfur ratio. But in the days prior to the eruption, that ratio fell: the lake could not keep pace with the excess sulfur dioxide accompanying the rising magma, and the surplus gas passed through into the atmosphere. The signal was the opposite of Turrialba’s, but equally telling.

    Deciphering this signal from Poás was a milestone, de Moor says, since many of the world’s most unpredictable and explosive volcanoes – including Nevado del Ruiz in Colombia and Mount Ontake in Japan – have crater lakes.

    The two Costa Rican volcanoes underscore that “there is no one size fits all” eruption signal, he adds.

    The key to successful prediction is to combine gas and classic seismic monitoring, as well as deploying new techniques that reveal whether the volcano is actually swelling with magma.

    Satellite-based GPS measurements can monitor the movement of a volcano’s surface. Volcanologist James Hickey at the University of Exeter in the UK used this type of data to generate a computer model of what was happening underneath Sakurajima, an active volcano in Kyushu, Japan.

    Sakurajima’s last major eruption took place in 1914, killing 58 people and causing a massive flood in the nearby seaside city of Kagoshima. Its magma chambers have been refilling since, causing minor eruptions virtually every day.

    Hickey and his colleagues incorporated the area’s topography and underlying rock types into their model, along with very precise GPS measurements of surface movement, to gauge just how fast the magma was replenishing. Their results, published in Scientific Reports last September, indicate the tank needs roughly 130 years to fill.

    “In other words […] enough magma might be stored in the next 30 years for an eruption of the same scale as one in 1914,” Hickey says.
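    The 30-year figure follows directly from the model's refill estimate and the date of the last major eruption (2017 being the article's publication year):

    ```python
    LAST_MAJOR_ERUPTION = 1914
    REFILL_YEARS = 130   # model's estimate for the magma reservoir to recharge

    refill_complete = LAST_MAJOR_ERUPTION + REFILL_YEARS
    years_remaining_from_2017 = refill_complete - 2017

    print(refill_complete, years_remaining_from_2017)  # 2044, 27 -- within ~30 years
    ```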

    That finding prompted the Kagoshima City Office to review its evacuation plans. Meanwhile, Hickey is developing similar models for volcanoes in Ecuador and the Lesser Antilles in the Caribbean.

    But even with all the high-tech advances, people in the poorest parts of the world are still at risk. Despite the efforts of NOVAC, right now there are still too few experts to analyse every volcano’s halitosis and generate the models that reveal what is going on deep underground. “We hope to interest more people in coming to do this kind of work,” de Moor says.

    In many countries with significant populations living in volcanic danger zones, there is barely any monitoring at all. Indonesia and the Philippines top the list for populations most at threat, according to a 2015 United Nations report.

    But at least in Colombia, 30 years after the devastation, villagers living under the menacing shadow of Nevado del Ruiz are placing their hopes in science.

    Continuous gas monitoring instruments were installed in the volcano’s vent last year and scientists are schooling themselves in how to read warning signs.

    With luck, they’ll have sussed it out before she blows again.


  • richardmitnick 9:29 am on January 24, 2017 Permalink | Reply
    Tags: , , COSMOS,   

    From COSMOS: “Seven elusive dwarf galaxy groups revealed” 

    Cosmos Magazine bloc


    24 January 2017
    No writer credit found

    Four dwarf galaxies identified by astronomers. These tiny galaxies can offer insight into the formation of larger ones, such as the Milky Way.
    Kelsey E Johnson, Sandra E Liss and Sabrina Stierwalt.

    The discovery forms an early piece of the galactic evolution puzzle.

    A piece of the galactic growth chart has been revealed, with seven gangs of tiny galaxies – long sought by astronomers – confirmed in Nature Astronomy.

    The finds provide insights into how mid-sized galaxies, such as our own Milky Way, formed.

    Astronomers think most medium-to-large galaxies grew through collisions. You can see evidence for such mergers – streams of stars and gas can be flung out as two galaxies combine.

    The Milky Way and our nearest major galactic neighbour Andromeda are on a collision course, tipped to combine into a larger galaxy in around four billion years.

    Of course, that’s a long wait. So researchers find and examine groups of dwarf galaxies, 10 to 1,000 times smaller than the Milky Way, to see if they might show signs of such mergers.

    The problem is that dwarf galaxies are hard to find, let alone groups of them – and even in a universe 13.8 billion years old, groups out on their own in space consisting only of dwarfs have proved harder still to track down.

    Systems previously identified were quite close to a large galaxy, or the galaxies in the group were very far from each other – conditions that could affect their behaviour.

    So to hunt down tightly bound dwarf galaxy congregations that were far enough from massive galaxies, Sabrina Stierwalt from the National Radio Astronomy Observatory in the US and colleagues searched the Sloan Digital Sky Survey for dwarf galaxy pairs. They turned up 60 candidates.

    Confirmation came from observations with telescopes such as the 3.5-metre telescope at Apache Point Observatory in the US and the Magellan telescopes in Chile. Given time, it’s thought the dwarf groups will merge into intermediate-mass galaxies.

    SDSS Telescope at Apache Point, NM, USA

    Carnegie 6.5 meter Magellan Baade and Clay Telescopes located at Carnegie’s Las Campanas Observatory, Chile.

    They even found one dwarf galaxy, dubbed DDO68, which appeared to be the product of a collision of two even tinier galaxies. It too had star streams indicating a merger.

    DDO68. https://www.sao.ru/Doc-en/SciNews/LBV-DDO68/

    This example, the researchers write, will be “fertile ground” for future telescopes and deeper surveys such as the planned Large Synoptic Survey Telescope.

    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

