Tagged: The Conversation

  • richardmitnick 8:57 am on February 20, 2017 Permalink | Reply
    Tags: Man-made earthquakes, The Conversation

    From The Conversation: “Earthquakes triggered by humans pose growing risk” 

    The Conversation

    January 22, 2017
    No writer credit found

    Devastation in Sichuan province after the 2008 Wenchuan earthquake, thought to be induced by industrial activity at a nearby reservoir. dominiqueb/flickr

    People knew we could induce earthquakes before we knew what they were. As soon as people started to dig minerals out of the ground, rockfalls and tunnel collapses must have become recognized hazards.

    Today, earthquakes caused by humans occur on a much greater scale. Events over the last century have shown mining is just one of many industrial activities that can induce earthquakes large enough to cause significant damage and death. Filling of water reservoirs behind dams, extraction of oil and gas, and geothermal energy production are just a few of the modern industrial activities shown to induce earthquakes.

    As more and more types of industrial activity were recognized to be potentially seismogenic, the Nederlandse Aardolie Maatschappij BV, an oil and gas company based in the Netherlands, commissioned us to conduct a comprehensive global review of all human-induced earthquakes.

    Our work assembled a rich picture from the hundreds of jigsaw pieces scattered throughout the scientific literature of many nations. The sheer breadth of industrial activity we found to be potentially seismogenic came as a surprise to many scientists. As the scale of industry grows, the problem of induced earthquakes is increasing also.

    In addition, we found that, because small earthquakes can trigger larger ones, industrial activity has the potential, on rare occasions, to induce extremely large, damaging events.

    How humans induce earthquakes

    As part of our review we assembled a database of cases that is, to our knowledge, the fullest drawn up to date. On Jan. 28, we will release this database publicly. We hope it will inform citizens about the subject and stimulate scientific research into how to manage this very new challenge to human ingenuity.

    Our survey showed mining-related activity accounts for the largest number of cases in our database.

    Earthquakes caused by humans

    Last year, the Nederlandse Aardolie Maatschappij BV commissioned a comprehensive global review of all human-induced earthquakes. The sheer breadth of industrial activity that is potentially seismogenic came as a surprise to many scientists. These examples are now catalogued at The Induced Earthquakes Database.

    Mining 37.4%
    Water reservoir impoundment 23.3%
    Conventional oil and gas 15%
    Geothermal 7.8%
    Waste fluid injection 5%
    Fracking 3.9%
    Nuclear explosion 3%
    Research experiments 1.8%
    Groundwater extraction 0.7%
    Construction 0.3%
    Carbon capture and storage 0.3%

    Source: Earth-Science Reviews
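
    For readers who want to work with these figures directly, here is a minimal Python sketch (not part of the original article) that tabulates the published breakdown above and prints it as a rough text bar chart. The numbers are simply copied from the list; nothing is recomputed from the underlying database.

    ```python
    # Breakdown of human-induced earthquake cases by cause, as listed above
    # (percentages from the Earth-Science Reviews survey cited in the article).
    causes = {
        "Mining": 37.4,
        "Water reservoir impoundment": 23.3,
        "Conventional oil and gas": 15.0,
        "Geothermal": 7.8,
        "Waste fluid injection": 5.0,
        "Fracking": 3.9,
        "Nuclear explosion": 3.0,
        "Research experiments": 1.8,
        "Groundwater extraction": 0.7,
        "Construction": 0.3,
        "Carbon capture and storage": 0.3,
    }

    # One '#' per percentage point, largest contributor first.
    for name, pct in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:<28} {pct:>5.1f}% {'#' * round(pct)}")
    ```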

    Initially, mining technology was primitive. Mines were small and relatively shallow. Collapse events would have been minor – though this might have been little comfort to anyone caught in one.

    But modern mines exist on a totally different scale. Precious minerals are extracted from mines that may be over two miles deep or extend several miles offshore under the oceans. The total amount of rock removed by mining worldwide now amounts to several tens of billions of tons per year. That’s double what it was 15 years ago – and it’s set to double again over the next 15. Meanwhile, much of the coal that fuels the world’s industry has already been exhausted from shallow layers, and mines must become bigger and deeper to satisfy demand.

    As mines expand, mining-related earthquakes become bigger and more frequent. Damage and fatalities, too, scale up. Hundreds of deaths have occurred in coal and mineral mines over the last few decades as a result of induced earthquakes of up to magnitude 6.1.

    Other activities that might induce earthquakes include the erection of heavy superstructures. The 700,000-tonne Taipei 101 building, erected in Taiwan in the early 2000s, was blamed for the increasing frequency and size of nearby earthquakes.

    Since the early 20th century, it has been clear that filling large water reservoirs can induce potentially dangerous earthquakes. This came into tragic focus in 1967 when, just five years after the 32-mile-long Koyna reservoir in west India was filled, a magnitude 6.3 earthquake struck, killing at least 180 people and damaging the dam.

    Throughout the following decades, ongoing cyclic earthquake activity accompanied rises and falls in the annual reservoir-level cycle. An earthquake larger than magnitude 5 occurs there on average every four years. Our report found that, to date, some 170 reservoirs the world over have reportedly induced earthquake activity.

    Magnitude of human-induced earthquakes

    The magnitudes of the largest earthquakes postulated to be associated with projects of different types vary greatly. This graph shows the number of cases reported for projects of various types vs. maximum earthquake magnitude for the 577 cases for which data are available.

    *”Other” category includes carbon capture and storage, construction, groundwater extraction, nuclear explosion, research experiments, and unspecified oil, gas and waste water.

    Source: Earth-Science Reviews [links are above]

    The production of oil and gas was implicated in several destructive earthquakes in the magnitude 6 range in California. This industry is becoming increasingly seismogenic as oil and gas fields become depleted. In such fields, in addition to mass removal by production, fluids are also injected to flush out the last of the hydrocarbons and to dispose of the large quantities of salt water that accompany production in expiring fields.

    A relatively new technology in oil and gas is shale-gas hydraulic fracturing, or fracking, which by its very nature generates small earthquakes as the rock fractures. Occasionally, this can lead to a larger-magnitude earthquake if the injected fluids leak into a fault that is already stressed by geological processes.

    The largest fracking-related earthquake that has so far been reported occurred in Canada, with a magnitude of 4.6. In Oklahoma, multiple processes are underway simultaneously, including oil and gas production, wastewater disposal and fracking. There, earthquakes as large as magnitude 5.7 have rattled skyscrapers that were erected long before such seismicity was expected. If such an earthquake is induced in Europe in the future, it could be felt in the capital cities of several nations.

    Our research shows that production of geothermal steam and water has been associated with earthquakes up to magnitude 6.6 in the Cerro Prieto Field, Mexico. Geothermal energy is not renewable by natural processes on the timescale of a human lifetime, so water must be reinjected underground to ensure a continuous supply. This process appears to be even more seismogenic than production. There are numerous examples of earthquake swarms accompanying water injection into boreholes, such as at The Geysers, California.

    What this means for the future

    Nowadays, earthquakes induced by large industrial projects no longer meet with surprise or even denial. On the contrary, when an event occurs, the tendency may be to look for an industrial project to blame. In 2008, an earthquake in the magnitude 8 range struck Ngawa Prefecture, China, killing about 90,000 people, devastating over 100 towns, and collapsing houses, roads and bridges. Attention quickly turned to the nearby Zipingpu Dam, whose reservoir had been filled just a few months previously, although the link between the earthquake and the reservoir has yet to be proven.

    The minimum amount of stress loading scientists think is needed to induce earthquakes is creeping steadily downward. The great Three Gorges Dam in China, which now impounds 10 cubic miles of water, has already been associated with earthquakes as large as magnitude 4.6 and is under careful surveillance.

    Scientists are now presented with some exciting challenges. Earthquakes can produce a “butterfly effect”: Small changes can have a large impact. Thus, not only can a plethora of human activities load Earth’s crust with stress, but just tiny additions can become the last straw that breaks the camel’s back, precipitating great earthquakes that release the accumulated stress loaded onto geological faults by centuries of geological processes. Whether or when that stress would have been released naturally in an earthquake is a challenging question.

    An earthquake in the magnitude 5 range releases as much energy as the atomic bomb dropped on Hiroshima in 1945. An earthquake in the magnitude 7 range releases as much energy as the largest nuclear weapon ever tested, the Tsar Bomba test conducted by the Soviet Union in 1961. The risk of inducing such earthquakes is extremely small, but the consequences if it were to happen are extremely large. This poses a health and safety issue that may be unique in industry for the maximum size of disaster that could, in theory, occur. However, rare and devastating earthquakes are a fact of life on our dynamic planet, regardless of whether or not there is human activity.
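
    For readers who want to put rough numbers on those comparisons, the standard link between magnitude and radiated seismic energy is the Gutenberg–Richter energy relation. This is general seismological background, not a formula quoted in the article:

    ```latex
    % Radiated seismic energy E (in joules) as a function of magnitude M:
    \log_{10} E = 1.5\,M + 4.8
    % Each whole step in magnitude therefore corresponds to a factor of
    % 10^{1.5} \approx 32 in energy, and two steps (say, magnitude 5 to 7)
    % to a factor of roughly 1000.
    ```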

    Our work suggests that the only evidence-based way to limit the size of potential earthquakes may be to limit the scale of the projects themselves. In practice, this would mean smaller mines and reservoirs, less extraction of minerals, oil and gas from fields, shallower boreholes and smaller injected volumes. A balance must be struck between the growing need for energy and resources and the level of risk that is acceptable in every individual project.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:17 am on February 16, 2017 Permalink | Reply
    Tags: SCOAP³, The Conversation

    From The Conversation: “How the insights of the Large Hadron Collider are being made open to everyone” 

    The Conversation

    January 12, 2017 [Just appeared in social media.]
    Virginia Barbour

    CERN CMS Higgs Event

    If you visit the Large Hadron Collider (LHC) exhibition, now at the Queensland Museum, you’ll see the recreation of a moment when the scientist who saw the first results indicating discovery of the Higgs boson laments she can’t yet tell anyone.

    It’s a transitory problem for her, lasting as long as it takes for the result to be thoroughly cross-checked. But it illustrates a key concept in science: it’s not enough to do it; it must be communicated.

    That’s what is behind one of the lesser known initiatives of CERN (European Organization for Nuclear Research): an ambitious plan to make all its research in particle physics available to everyone, with a big global collaboration inspired by the way scientists came together to make discoveries at the LHC.

    This initiative is called SCOAP³, the Sponsoring Consortium for Open Access in Particle Physics Publishing, and is now about to enter its fourth year of operation. It’s a worldwide collaboration of more than 3,000 libraries (including six in Australia), key funding agencies and research centres in 44 countries, together with three intergovernmental organisations.

    It aims to make work previously only available to paying subscribers of academic journals freely and immediately available to everyone. In its first three years it has made more than 13,000 articles available.

    Not only are these articles free for anyone to read, but because they are published under a Creative Commons attribution license (CC BY), they are also available for anyone to use in any way they wish, such as to illustrate a talk, pass on to a class of school children, or feed to an artificial intelligence program to extract information from. And these usage rights are enshrined forever.

    The concept of sharing research is not new in physics. Open access to research is now a growing worldwide initiative, including in Australasia. CERN, which runs the LHC, was also where the World Wide Web was invented in 1989 by Tim Berners-Lee, a British computer scientist.

    The main purpose of the web was to enable researchers contributing to CERN from all over the world to share documents, including scientific drafts, no matter what computer systems they were using.

    Before the web, physicists had been sharing paper drafts by post for decades, so they were one of the first groups to really embrace the new online opportunities for sharing early research. Today, the preprint site arxiv.org hosts more than a million free article drafts covering physics, mathematics, astronomy and more.
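
    As an aside for readers who want to explore that openness for themselves, arXiv exposes a public query API. The short Python sketch below is a minimal illustration, not something from the original article; the search category and result count are arbitrary choices.

    ```python
    # Minimal example of querying the public arXiv API for recent preprints.
    # The endpoint and parameters are documented at https://arxiv.org/help/api.
    import urllib.request
    import xml.etree.ElementTree as ET

    url = ("http://export.arxiv.org/api/query"
           "?search_query=cat:hep-ex&start=0&max_results=5"
           "&sortBy=submittedDate&sortOrder=descending")

    with urllib.request.urlopen(url) as response:
        feed = response.read()

    # The API returns an Atom XML feed; print each entry's title.
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed)
    for entry in root.findall("atom:entry", ns):
        print(entry.find("atom:title", ns).text.strip())
    ```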

    But, with such a specialised field, do these “open access” papers really matter? The short answer is “yes”. Downloads from journals participating in SCOAP³ have doubled.

    With millions of open access articles now being downloaded across all specialities, there is enormous opportunity for new ideas and collaborations to spring from chance readership. This is an important trend: the concept of serendipity enabled by open access was explored in 2015 in an episode of ABC RN’s Future Tense program.

    Greater than the sum of the parts

    There’s also a bigger picture to SCOAP³’s open access model. Not long ago, the research literature was fragmented. Individual papers and the connections between them were only as good as the physical library, with its paper journals, that academics had access to.

    Now we can do searches in much less time than we spend thinking of the search question, and the results we are presented with are crucially dependent on how easily available the findings themselves are. And availability is not just a function of whether an article is free or not but whether it is truly open, i.e. connected and reusable.

    One concept is whether research is “FAIR”, or Findable, Accessible, Interoperable and Reusable. In short, can anyone find, read, use and reuse the work?
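
    In practice, “FAIR” largely comes down to attaching enough machine-readable metadata to a research output that software, not just people, can find and reuse it. The sketch below is purely illustrative: the field names are hypothetical and do not follow any particular metadata standard, but they suggest the kind of record the principles imply.

    ```python
    # Illustrative (hypothetical) metadata record hinting at what FAIR asks for.
    dataset = {
        "identifier": "doi:10.xxxx/example",       # Findable: a persistent identifier (placeholder)
        "title": "Example collision summary data",
        "access_url": "https://example.org/data",  # Accessible: a resolvable location (placeholder)
        "format": "CSV",                           # Interoperable: an open, common format
        "license": "CC-BY-4.0",                    # Reusable: explicit reuse terms
    }

    # A toy completeness check: does the record carry the minimum FAIR-style fields?
    required = ["identifier", "access_url", "format", "license"]
    missing = [field for field in required if not dataset.get(field)]
    print("FAIR-style metadata complete" if not missing else f"Missing: {missing}")
    ```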

    The principle is most advanced for data, but in Australia work is ongoing to apply it to all research outputs. This approach was also proposed at the November 2016 G20 Science, Technology and Innovation Ministers Meeting. Research findings that are not FAIR can, effectively, be invisible. It’s a huge waste of millions of taxpayer dollars to fund research that won’t be seen.

    There is an even bigger picture that research and research publications have to fit into: that of science in society.

    Across the world we see politicians challenging accepted scientific norms. Is the fact that most academic research remains available only to those who can pay to see it contributing to an acceptance of such misinformed views?

    If one role for science is to inform public debate, then restricting access to that science will necessarily hinder any informed public debate. Although no one suggests that most readers of news sites will regularly want to delve into the details of papers in high energy physics, open access papers are 47% more likely to end up being cited in Wikipedia, which is a source that many non-scientists do turn to.

    Even worse, work that is not available openly now may not even be available in perpetuity, something that is being discussed by scientists in the USA.

    So in the same way that CERN itself is an example of the power of international collaboration to ask some of the fundamental scientific questions of our time, SCOAP³ provides a way to ensure that the answers, whatever they are, are available to everyone, forever.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:29 am on January 30, 2017 Permalink | Reply
    Tags: The Conversation

    From The Conversation: “Giant atoms could help unveil ‘dark matter’ and other cosmic secrets” 

    The Conversation

    January 5, 2017
    Diego A. Quiñones

    Composite image showing the galaxy cluster 1E 0657-56. Chandra X-Ray Observatory/NASA

    The universe is an astonishingly secretive place. Mysterious substances known as dark matter and dark energy account for some 95% of it. Despite huge effort to find out what they are, we simply don’t know.

    We know dark matter exists because of the gravitational pull of galaxy clusters – the matter we can see in a cluster just isn’t enough to hold it together by gravity. So there must be some extra material there, made up of unknown particles that simply aren’t visible to us. Several candidate particles have already been proposed.

    Scientists are trying to work out what these unknown particles are by looking at how they affect the ordinary matter we see around us. But so far this has proven difficult, which tells us that dark matter interacts with normal matter only weakly, at best. Now my colleague Benjamin Varcoe and I have come up with a new way to probe dark matter that may just prove successful: by using atoms that have been stretched to be 4,000 times larger than usual.

    Advantageous atoms

    We have come a long way from the Greeks’ vision of atoms as the indivisible components of all matter. The first evidence-based argument for the existence of atoms was presented in the early 1800s by John Dalton. But it wasn’t until the beginning of the 20th century that JJ Thomson and Ernest Rutherford discovered that atoms consist of electrons and a nucleus. Soon after, Erwin Schrödinger described the atom mathematically using what is today called quantum theory.

    Modern experiments have been able to trap and manipulate individual atoms with outstanding precision. This knowledge has been used to create new technologies, like lasers and atomic clocks, and future computers may use single atoms as their primary components.

    Individual atoms are hard to study and control because they are very sensitive to external perturbations. This sensitivity is usually an inconvenience, but our study suggests that it makes some atoms ideal as probes for the detection of particles that don’t interact strongly with regular matter – such as dark matter.

    Our model is based on the fact that a weakly interacting particle must bounce off the nucleus of the atom it collides with and exchange a small amount of energy with it – similar to the collision between two pool balls. The energy exchange will produce a sudden displacement of the nucleus that will eventually be felt by the electron. This means the entire energy of the atom changes, which can be analysed to obtain information about the properties of the colliding particle.

    However the amount of transferred energy is very small, so a special kind of atom is necessary to make the interaction relevant. We worked out that the so-called “Rydberg atom” would do the trick. These are atoms with long distances between the electron and the nucleus, meaning they possess high potential energy. Potential energy is a form of stored energy. For example, a ball on a high shelf has potential energy because this could be converted to kinetic energy if it falls off the shelf.

    In the lab, it is possible to trap atoms and prepare them in a Rydberg state – making them as big as 4,000 times their original size. This is done by illuminating the atoms with a laser with light at a very specific frequency.

    This prepared atom is likely much heavier than the dark matter particles. So rather than a pool ball striking another, a more appropriate description will be a marble hitting a bowling ball. It seems strange that big atoms are more perturbed by collisions than small ones – one may expect the opposite (smaller things are usually more affected when a collision occurs).
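
    The marble-and-bowling-ball picture can be quantified with the textbook result for a head-on elastic collision between a light projectile of mass m and a stationary heavy target of mass M. This is standard mechanics offered as context, not a formula quoted from the study:

    ```latex
    % Maximum fraction of the projectile's kinetic energy E transferred to the target:
    \frac{\Delta E_{\max}}{E} = \frac{4\,m\,M}{(m + M)^{2}}
                             \approx \frac{4\,m}{M} \quad \text{for } m \ll M .
    % The transfer is tiny when the target is much heavier, which is why only an
    % exquisitely sensitive state such as a Rydberg atom could register the recoil.
    ```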

    The explanation is related to two features of Rydberg atoms: they are highly unstable because of their elevated energy, so minor perturbations would disturb them more. Also, because of their large size, the probability of the atoms interacting with passing particles is increased, so they will suffer more collisions.

    Spotting the tiniest of particles

    Current experiments typically look for dark matter particles by trying to detect their scattering off atomic nuclei or electrons on Earth. They do this by looking for the light or free electrons generated in big tanks of liquid noble gases by energy transfer between a dark matter particle and the atoms of the liquid.

    The Large Underground Xenon experiment installed 4,850 ft underground inside a 70,000-gallon water tank shield. Gigaparsec at English Wikipedia, CC BY-SA

    But, according to the laws of quantum mechanics, there needs to be a certain minimum energy transfer for the light to be produced. An analogy would be a particle colliding with a guitar string: it will produce a note that we can hear, but if the particle is too small the string will not vibrate at all.

    So the problem with these methods is that the dark matter particle has to be big enough if we are to detect it in this way. However, our calculations show that the Rydberg atoms will be disturbed in a significant way even by low-mass particles – meaning they can be applied to search for dark matter candidates that other experiments miss. One such particle is the axion, a hypothetical particle that is a strong candidate for dark matter.

    Experiments would require the atoms to be treated with extreme care, but they would not need to be done in a deep underground facility like other experiments, as Rydberg atoms are expected to be less susceptible to cosmic rays than they are to dark matter.

    We are working to further improve the sensitivity of the system, aiming to extend the range of particles that it may be able to perceive.

    Beyond dark matter, we are also aiming to one day apply it to the detection of gravitational waves, the ripples in the fabric of space predicted by Einstein a long time ago. These perturbations of the space-time continuum have recently been discovered, but we believe that by using atoms we may be able to detect gravitational waves at a different frequency from the ones already observed.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
    • gregoriobaquero 9:46 am on January 30, 2017 Permalink | Reply

      Precisely to the point of my paper. If I am right nothing is going to be found. No new particles. The density of neutrinos ("hot Dark Matter") we can measure in our frame of reference does not tell the whole picture, since we have the same local time as the neutrinos passing by. What had not been taken into account is that gravitational time dilation is accumulating neutrinos when compared to neutrinos passing far away from the galaxy.

      Like

    • gregoriobaquero 9:48 am on January 30, 2017 Permalink | Reply

      Also, this phenomenon is similar to how relativity explains electromagnetism. Veritasium has a good video about it.

      Like

    • richardmitnick 10:19 am on January 30, 2017 Permalink | Reply

      Thank you so much for coming on to comment. I appreciate it very much.

      Like

  • richardmitnick 12:06 pm on January 16, 2017 Permalink | Reply
    Tags: ASKAP finally hits the big-data highway, The Conversation, WALLABY - Widefield ASKAP L-band Legacy All-sky Blind surveY

    From The Conversation for SKA: “The Australian Square Kilometre Array Pathfinder finally hits the big-data highway” 

    The Conversation

    SKA Square Kilometre Array

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia

    January 15, 2017
    Douglas Bock
    Director of Astronomy and Space Science, CSIRO

    Antony Schinckel
    ASKAP Director, CSIRO

    You know how long it takes to pack the car to go on holidays. But there’s a moment when you’re all in, everyone has their seatbelt on, you pull out of the drive and you’re off.

    Our ASKAP (Australian Square Kilometre Array Pathfinder) telescope has just pulled out of the drive, so to speak, at its base in Western Australia at the Murchison Radio-astronomy Observatory (MRO), about 315km northeast of Geraldton.

    ASKAP is made of 36 identical 12-metre-wide dish antennas that all work together, 12 of which are currently in operation. Thirty ASKAP antennas have now been fitted with specialised phased array feeds; the rest will be installed later in 2017.

    Until now, we’d been taking data mainly to test how ASKAP performs. Having shown the telescope’s technical excellence, it’s now off on its big trip, starting to make observations for the big science projects it’ll be doing for the next five years.

    And it’s taking lots of data. Its antennas are now churning out 5.2 terabytes of data per second (about 15 per cent of the internet’s current data rate).
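
    To get a feel for that rate, here is a quick back-of-the-envelope conversion in Python. Only the 5.2 terabytes-per-second figure comes from the article; the rest is simple arithmetic.

    ```python
    # Rough daily volume implied by the quoted ASKAP data rate of 5.2 TB/s.
    rate_tb_per_s = 5.2
    seconds_per_day = 86_400

    tb_per_day = rate_tb_per_s * seconds_per_day   # ~449,280 TB
    pb_per_day = tb_per_day / 1_000                # ~449 petabytes per day

    print(f"{tb_per_day:,.0f} TB/day (~{pb_per_day:,.0f} PB/day)")
    ```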

    Once out of the telescope, the data is going through a new, almost automatic data-processing system we’ve developed.

    It’s like a bread-making machine: put in the data, make some choices, press the button and leave it overnight. In the morning you have a nice batch of freshly made images from the telescope.

    Go the WALLABIES

    The first project we’ve been taking data for is one of ASKAP’s largest surveys, WALLABY (Widefield ASKAP L-band Legacy All-sky Blind surveY).

    On board the survey are a happy band of 100-plus scientists – affectionately known as the WALLABIES – from many countries, led by one of our astronomers, Bärbel Koribalski, and Lister Staveley-Smith of the International Centre for Radio Astronomy Research (ICRAR), University of Western Australia.

    They’re aiming to detect and measure neutral hydrogen gas in galaxies over three-quarters of the sky. To see the farthest of these galaxies they’ll be looking three billion years back into the universe’s past, with a redshift of 0.26.
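
    The “three billion years” figure is just the redshift converted to a lookback time with a standard cosmology. As a quick cross-check (assuming the Planck 2015 parameters bundled with astropy, which the article does not specify), the conversion looks like this:

    ```python
    # Convert WALLABY's maximum redshift to a lookback time using astropy.
    # Using Planck15 parameters is an assumption; the article only quotes z = 0.26.
    from astropy.cosmology import Planck15

    z_max = 0.26
    lookback = Planck15.lookback_time(z_max)
    print(f"Lookback time at z = {z_max}: {lookback:.2f}")   # roughly 3 Gyr
    ```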

    Neutral hydrogen gas in one of the galaxies, IC 5201 in the southern constellation of Grus (The Crane), imaged in early observations for the WALLABY project. Matthew Whiting, Karen Lee-Waddell and Bärbel Koribalski (all CSIRO); WALLABY team, Author provided

    Neutral hydrogen – just lonely individual hydrogen atoms floating around – is the basic form of matter in the universe. Galaxies are made up of stars but also dark matter, dust and gas – mostly hydrogen. Some of the hydrogen turns into stars.

    Although the universe has been busy making stars for most of its 13.7-billion-year life, there’s still a fair bit of neutral hydrogen around. In the nearby (low-redshift) universe, most of it hangs out in galaxies. So mapping the neutral hydrogen is a useful way to map the galaxies, which isn’t always easy to do with just starlight.

    But as well as mapping where the galaxies are, we want to know how they live their lives, get on with their neighbours, grow and change over time.

    When galaxies live together in big groups and clusters they steal gas from each other, processes called accretion and stripping. Seeing how the hydrogen gas is disturbed or missing tells us what the galaxies have been up to.

    We can also use the hydrogen signal to work out a lot of a galaxy’s individual characteristics, such as its distance, how much gas it contains, its total mass, and how much dark matter it contains.

    This information is often used in combination with characteristics we learn from studying the light of the galaxy’s stars.

    Oh what big eyes you have ASKAP

    ASKAP sees large pieces of sky with a field of view of 30 square degrees. The WALLABY team will observe 1,200 of these fields. Each field contains about 500 galaxies detectable in neutral hydrogen, giving a total of 600,000 galaxies.
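
    Multiplying out the numbers in that paragraph (my arithmetic; the remark about overlapping fields is an inference from the raw figures, not a statement from the article):

    ```python
    # Back-of-the-envelope totals from the survey numbers quoted above.
    n_fields = 1_200
    field_of_view_sq_deg = 30
    galaxies_per_field = 500

    total_galaxies = n_fields * galaxies_per_field   # 600,000 galaxies
    raw_coverage = n_fields * field_of_view_sq_deg   # 36,000 square degrees of pointings
    whole_sky_sq_deg = 41_253                        # area of the full celestial sphere
    three_quarters_sky = 0.75 * whole_sky_sq_deg     # about 30,940 square degrees

    print(f"{total_galaxies:,} galaxies; {raw_coverage:,} sq deg of pointings "
          f"to cover ~{three_quarters_sky:,.0f} sq deg, so adjacent fields overlap.")
    ```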

    One of the first fields targeted by WALLABY, the NGC 7232 galaxy group. Ian Heywood (CSIRO); WALLABY team, Author provided

    This image (above) of the NGC 7232 galaxy group was made with just two nights’ worth of data.

    ASKAP has now made 150 hours of observations of this field, which has been found to contain 2,300 radio sources (the white dots), almost all of them galaxies.

    It has also observed a second field, one containing the Fornax cluster of galaxies, and started on two more fields over the Christmas and New Year period.

    Even more will be dug up by targeted searches. Simply detecting all the WALLABY galaxies will take more than two years, and interpreting the data even longer. ASKAP’s data will live in a huge archive that astronomers will sift through over many years with the help of supercomputers at the Pawsey Centre in Perth, Western Australia.

    ASKAP has nine other big survey projects planned, so this is just the beginning of the journey. It’s really a very exciting time for ASKAP and the more than 350 international scientists who’ll be working with it.

    Who knows where this Big Trip will take them, and what they’ll find along the way?

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 11:43 am on August 4, 2016 Permalink | Reply
    Tags: The Conversation

    From The Conversation: “Expanding citizen science models to enhance open innovation” 

    The Conversation

    August 3, 2016
    Kendra L. Smith

    Over the years, citizen scientists have provided vital data and contributed in invaluable ways to various scientific quests. But they’re typically relegated to helping traditional scientists complete tasks the pros don’t have the time or resources to deal with on their own. Citizens are asked to count wildlife, for instance, or classify photos that are of interest to the lead researchers.

    This type of top-down engagement has consigned citizen science to the fringes, where it fills a manpower gap but not much more. As a result, its full value has not been realized. Marginalizing the citizen scientists and their potential contribution is a grave mistake – it limits how far we can go in science and the speed and scope of discovery.

    Instead, by harnessing globalization’s increased interconnectivity, citizen science should become an integral part of open innovation. Science agendas can be set by citizens, data can be open, and open-source software and hardware can be shared to assist in the scientific process. And as the model proves itself, it can be expanded even further, into nonscience realms.

    Since 1900 the Audubon Society has sponsored its annual Christmas Bird Count, which relies on amateur volunteers nationwide. USFWS Mountain-Prairie, CC BY

    Some major citizen science successes

    Citizen-powered science has been around for over 100 years, utilizing the collective brainpower of regular, everyday people to collect, observe, input, identify and crossmatch data that contribute to and expand scientific discovery. And there have been some marked successes.

    eBird allows scores of citizen scientists to record bird abundance via field observation; those data have contributed to over 90 peer-reviewed research articles. Did You Feel It? crowdsources information from people around the world who have experienced an earthquake. Snapshot Serengeti uses volunteers to identify, classify and catalog photos taken daily in this African ecosystem.

    FoldIt is an online game where players are tasked with using the tools provided to virtually fold protein structures. The goal is to help scientists figure out if these structures can be used in medical applications. A set of users determined the crystal structure of an enzyme involved in the monkey version of AIDS in just three weeks – a problem that had previously gone unsolved for 15 years.

    Galaxy Zoo is perhaps the most well-known online citizen science project. It uploads images from the Sloan Digital Sky Survey [SDSS] and allows users to assist with the morphological classification of galaxies. The citizen astronomers discovered an entirely new class of galaxy – “green pea” galaxies – that have gone on to be the subject of over 20 academic articles.

    SDSS Telescope at Apache Point, NM, USA

    These are all notable successes, with citizens contributing to the projects set out by professional scientists. But there’s so much more potential in the model. What does the next generation of citizen science look like?

    People can contribute to crowdsourced projects from just about anywhere. Nazareth College, CC BY

    Open innovation could advance citizen science

    The time is right for citizen science to join forces with open innovation. This is a concept that describes partnering with other people and sharing ideas to come up with something new. The assumption is that more can be achieved when boundaries are lowered and resources – including ideas, data, designs and software and hardware – are opened and made freely available.

    Open innovation is collaborative, distributed, cumulative and it develops over time. Citizen science can be a critical element here because its professional-amateurs can become another significant source of data, standards and best practices that could further the work of scientific and lay communities.

    Globalization has spurred on this trend through the ubiquity of internet and wireless connections, affordable devices to collect data (such as cameras, smartphones, smart sensors, wearable technologies), and the ability to easily connect with others. Increased access to people, information and ideas points the way to unlock new synergies, new relationships and new forms of collaboration that transcend boundaries. And individuals can focus their attention and spend their time on anything they want.

    We are seeing this emerge in what has been termed the “solution economy” – where citizens find fixes to challenges that are traditionally managed by government.

    Consider the issue of accessibility. Passage of the 1990 Americans with Disabilities Act aimed to address accessibility problems in the U.S. But more than two decades later, individuals with disabilities are still dealing with substantial mobility issues in public spaces – due to street conditions, cracked or nonexistent sidewalks, missing curb cuts, obstructions or only portions of a building being accessible. These all can create physical and emotional challenges for the disabled.

    To help deal with this issue, several individual solution seekers have merged citizen science, open innovation and open sourcing to create mobile and web applications that provide information about navigating city streets. For instance, Jason DaSilva, a filmmaker with multiple sclerosis, developed AXS Map – a free online and mobile app powered by Google Places API. It crowdsources information from people across the country about wheelchair accessibility in cities nationwide.

    Broadening the model

    There’s no reason the diffuse resources and open process of the citizen scientist model need be applied only to science questions.

    For instance, Science Gossip is a Zooniverse citizen science project. It’s rooted in Victorian-era natural history – the period considered to be the dawn of modern science – but it crosses disciplinary boundaries. At the time, scientific information was produced everywhere and recorded in letters, books, newspapers and periodicals (it was also the beginning of mass printing). Science Gossip allows citizen scientists to pore through pages of Victorian natural history periodicals. The site prompts them with questions meant to ensure continuity with other user entries.

    The final product is digitized data based on the 140,000 pages of 19th-century periodicals. Anyone can access it on Biodiversity Heritage Library easily and for free. This work has obvious benefits for natural history researchers but it also can be used by art enthusiasts, ethnographers, biographers, historians, rhetoricians, or authors of historical fiction or filmmakers of period pieces who seek to create accurate settings. The collection possesses value that goes beyond scientific data and becomes critical to understanding the period in which data was collected.

    It’s also possible to imagine flipping the citizen science script, with the citizens themselves calling the shots about what they want to see investigated. Implementing this version of citizen science in disenfranchised communities could be a means of access and empowerment. Imagine Flint, Michigan residents directing expert researchers on studies of their drinking water.

    Or consider the aim of many localities to become so-called smart cities – connected cities that integrate information and communication technologies to improve the quality of life for residents as well as manage the city’s assets. Citizen science could have a direct impact on community engagement and urban planning via data consumption and analysis, feedback loops and project testing. Or residents can even collect data on topics important to local government. With technology and open innovation, much of this is practical and possible.

    What stands in the way?

    Perhaps the most pressing limitation on scaling up the citizen science model is reliability. While many of these projects have proven reliable, others have fallen short.

    For instance, crowdsourced damage assessments from satellite images following 2013’s Typhoon Haiyan in the Philippines faced challenges. According to aid agencies, remote damage assessments by citizen scientists had a devastatingly low accuracy of 36 percent, and they overrepresented “destroyed” structures by 134 percent.

    Crowds can’t reliably rate typhoon damage like this without adequate training. Bronze Yu, CC BY-NC-ND

    Reliability problems often stem from a lack of training, coordination and standardization in platforms and data collection. It turned out in the case of Typhoon Haiyan the satellite imagery did not provide enough detail or high enough resolution for contributors to accurately classify buildings. Further, volunteers weren’t given proper guidance on making accurate assessments. There also were no standardized validation review procedures for contributor data.

    Another challenge for open source innovation is organizing and standardizing data in a way that would be useful to others. Understandably, we collect data to fit our own needs – there isn’t anything wrong with that. However, those in charge of databases need to commit to data collection and curation standards so anyone may use the data with complete understanding of why, by whom and when they were collected.

    Finally, deciding to open data – making it freely available for anyone to use and republish – is critical. There’s been a strong, popular push for government to open data of late but it isn’t done widely or well enough to have widespread impact. Further, the opening of nonproprietary data from nongovernment entities – nonprofits, universities, businesses – is lacking. If they are in a position to, organizations and individuals should seek to open their data to spur innovation ecosystems in the future.

    Citizen science has proven itself in some fields and has the potential to expand to others as organizers leverage the effects of globalization to enhance innovation. To do so, we must keep an eye on citizen science reliability, open data whenever possible, and constantly seek to expand the model to new disciplines and communities.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:30 pm on July 17, 2016 Permalink | Reply
    Tags: The Conversation

    From The Conversation via ANU: “How to keep more women in science, technology, engineering and mathematics (STEM)” 

    The Conversation

    Australian National University

    July 12, 2016

    http://www.masterstudies.com/article/Why-Science-is-%28also%29-for-Women/

    There have been myriad promises made by the major political parties over the years focused on funding programs aimed at increasing the number of women pursuing careers in science, technology, engineering and mathematics (STEM).

    Although some of the policies do target disciplines where women are underrepresented, there seems to be very little acknowledgement of the bigger problem.

    Attracting women to STEM careers is one issue, retaining them is another. And that does not seem to get the same level of attention.

    Simply trying to get more women into STEM without addressing broader systemic issues will achieve nothing except more loss through a leaky pipeline.

    Higher Education Research Data from 2014 shows more females than males were being awarded undergraduate degrees in STEM fields. Early career researchers, classified as level A and B academics, are equally represented across the genders.

    Gender disparity in STEM fields at the higher academic levels (C-E) based on Higher Education Research Data, 2014. Science in Australia Gender Equity (SAGE)

    At senior levels, though, the gender disparity plainly manifests – males comprise almost 80% of the most senior positions.

    A biological and financial conundrum

    Studies in the United States found that women having children within five to ten years of completing their PhD are less likely to have tenured or tenure-track positions, and are more likely to earn less than their male or childless female colleagues.

    Angela (name changed) is a single parent and a PhD student in the sciences. She told me she is determined to forge a career for herself in academia, despite the bureaucratic and financial hurdles she has to overcome.

    ” Finding ways to get enough money to afford childcare […] jumping through bureaucratic hoops […] It was ridiculous and at times I wondered if it was all worth it.”

    It may be just one reason for women leaving STEM, especially those with children, and doubly so for single parent women.

    Women tend to be the primary caregivers for children, and are more likely to work part time, so perhaps this could explain the financial disparity. But according to the latest report from the Office of the Chief Scientist on Australia’s STEM workforce, men who also work part time consistently earn more, irrespective of their level of qualification.

    Percentage of doctorate level STEM graduates working part time who earned more than $104 000 annually, by age group and gender. Australia’s STEM Workforce March 2016 report from the Office of the Australian Chief Scientist., CC BY-NC-SA

    The same report also shows that women who do not have children tend to earn more than women who do, but both groups still earn less than men.

    Perhaps children do play a part in earning capacity, but the pay disparities or part-time employment do not seem to fully explain why women leave STEM.

    Visible role models

    The absence of senior females in STEM removes a source of visible role models for existing and aspiring women scientists. This is a problem for attracting and retaining female scientists.

    Having female role models in STEM helps younger women envision STEM careers as potential pathways they can take, and mentors can provide vital support.

    Yet even with mentoring, women in STEM still have higher attrition rates than their male colleagues.

    So what else can we do?

    There are many programs and initiatives that are already in place to attract and support women in STEM, including the Science in Australia Gender Equity (SAGE) pilot, based on the United Kingdom’s Athena SWAN charter.

    But women’s voices are still absent from leadership tables to our detriment.

    Homeward Bound

    This absence is especially noticeable in STEM and policy making arenas, and was the impetus for Australian leadership expert, Fabian Dattner, in collaboration with Dr Jess Melbourne-Thomas from the Australian Antarctic Division, to create Homeward Bound.

    Dattner says she believes the absence of women from leadership “possibly, if not probably, places us at greatest peril”.

    To address this, Homeward Bound is aimed at developing the leadership, strategic and scientific capabilities of female scientists to enhance their impact in influencing policy and decisions affecting the sustainability of the planet.

    Initially, it will involve 77 women scientists from around the world. But this is only the first year of the program, and it heralds the beginning of a global collaboration of 1,000 women over ten years.

    These women are investing heavily – financially, emotionally and professionally – and it is clearly not an option for everyone.

    Flexible approaches

    There are other simple ways to support women in STEM, which anyone can do.

    Simply introducing genuinely flexible work arrangements could do a lot towards alleviating the pressure as Angela shows:

    ” My supervisor made sure that we never had meetings outside of childcare hours […] or I could Skype her from home once my child was in bed. They really went above and beyond to make sure that I was not disadvantaged.”

    We have already attracted some of the best and brightest female minds to STEM.

    If keeping them there means providing support, publicly celebrating high-achieving women, and being flexible in how meetings are held, surely that’s an investment we can all make.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 5:48 am on July 14, 2016 Permalink | Reply
    Tags: Americans want a say in what happens to their donated blood and tissue in biobanks, The Conversation

    From The Conversation: “Americans want a say in what happens to their donated blood and tissue in biobanks” 

    The Conversation

    July 13, 2016
    Raymond G. De Vries
    Co-Director, Center for Bioethics and Social Sciences in Medicine, University of Michigan

    Tom Tomlinson
    Chair Professor, Michigan State University

    No image caption. No image credit. Image obtained independently of the article.

    The last time you went to a hospital, you probably had to fill out forms listing the medications you are taking and updating your emergency contacts. You also might have been asked a question about what is to be done with “excess tissues or specimens” that may be removed during diagnosis or treatment. Are you willing to donate these leftover bits of yourself (stripped of your name, of course) for medical research?

    If you are inclined to answer, “Sure, why not?” you will join the majority of Americans who would agree to donate, allowing your leftovers, such as blood or unused bits from biopsies or even embryos, to be sent to a “biobank” that collects specimens and related medical information from donors.

    But what, exactly, will be done with your donation? Can the biobank guarantee that information about your genetic destiny will not find its way to insurance companies or future employers? Could, for example, a pharmaceutical company use it to develop and patent a new drug that will be sold back to you at an exorbitant price?

    These questions may soon become a lot more real for many of us.

    Precision medicine, a promising new approach to treating and preventing disease, will require thousands, or even millions, of us to provide samples for genetic research. So how much privacy are we willing to give up in the name of cutting-edge science? And do we care about the kinds of research that will be done with our donations?

    Precision medicine needs you

    In January 2015, President Obama announced his “Precision Medicine Initiative” (PMI), asking for US$215 million to move medical care from a “one size fits all” approach to one that tailors treatments to each person’s genetic makeup. In his words, precision medicine is “one of the greatest opportunities for new medical breakthroughs that we have ever seen,” allowing doctors to provide “the right treatments at the right time, every time, to the right person.”

    The PMI is now being implemented, and a critical part of the initiative is the creation of a “voluntary national research cohort” of one million people who will provide the “data” researchers need to make this big jump in medical care. And yes, those “data” will include blood, urine and information from your electronic health records, all of which will help scientists find the link between genes, illness and treatments.

    Recognizing that there may be some reluctance to donate, the drafters of the initiative bent over backwards to assure future donors that their privacy will be “rigorously protected.” But privacy is not the only thing donors are worrying about.

    Together with our colleagues at the Center for Bioethics and Social Sciences in Medicine at the University of Michigan and the Center for Ethics and Humanities in the Life Sciences at Michigan State University, we asked the American public about their willingness to donate blood and tissue to researchers.

    Data from our national survey – published in the Journal of the American Medical Association – reveal that while most Americans are willing to donate to biobanks, they have serious concerns about how we ask for their consent and about how their donations may be used in future research.

    What are you consenting to?

    We asked our respondents – a sample representative of the U.S. population – if they would be willing to donate to a biobank using the current method of “blanket consent” where donors are asked to agree that their tissue can be used for any research study approved by the biobank, “without further consent from me.”

    A healthy majority – 68 percent – agreed. But when we asked if they would still be willing to give blanket consent if their specimens might be used “to develop patents and earn profits for commercial companies,” that number dropped to 55 percent. Only 57 percent agreed to donate if there was a possibility their donation would be used to develop vaccines against biological weapons, research that might first require creating biological weapons. And less than 50 percent of our sample agreed to donate if told their specimen may be used “to develop more safe and effective abortion methods.”

    You may think that some of these scenarios are far-fetched, but we consulted with a biobank researcher who reviewed all of our scenarios and confirmed that such research could be done with donations to biobanks, or associated data. And some scenarios are real. For instance, biobanked human embryos have been used to confirm how mifepristone, a drug which is used to induce miscarriages, works.

    Trust in science is important

    Should we take these moral concerns about biobank research seriously? Yes, because progress in science and medicine depends on public trust in the research enterprise. If scientists violate that trust they risk losing public support – including funding – for their work.

    Witness the story of the Havasupai tribe of Arizona. Researchers collected DNA from members of the tribe in an effort to better understand their high rate of diabetes. That DNA was then used, without informing those who donated, for a study tracing the migration of Havasupai ancestors. The findings of that research undermined the tribal story of its origins. The result? The tribe banished all researchers.

    Rebecca Skloot’s best-seller, “The Immortal Life of Henrietta Lacks,” revealed the way tissues and blood taken for clinical uses can be used for purposes unknown to the donors.

    In the early 1950s, Ms. Lacks was unsuccessfully treated for cervical cancer. Researchers harvested her cells without her knowledge, and after her death they used these cells to develop the HeLa cell line. Because of their unique properties, HeLa cells have become critical to medical research. They have been used to secure more than 17,000 patents, but neither she nor her family members were compensated.

    In a similar case, blood cells from the spleen of a man named John Moore, taken as part of his treatment for leukemia, were used to create a patented cell line for fighting infection. Moore sued for his share of the profits generated by the patent, but his suit was dismissed by local, state and federal courts. As a result of these and similar cases, nearly all biobank consent forms now include a clause indicating that donations might be used to develop commercial products and that the donor has no claim on the proceeds.

    Researchers can ill afford to undermine public trust in their work. In our sample we found that lack of trust in scientists and scientific research was the strongest predictor of unwillingness to donate to a biobank.

    Those who ask you to donate some of yourself must remember that it is important not only to protect your privacy but also to ensure that your decision to do good for others does not violate your sense of what is good.

    The “Proposed Privacy and Trust Principles” issued by the PMI in 2015 are a hopeful sign. They call for transparency about “how [participant] data will be used, accessed, and shared,” including “the types of studies for which the individual’s data may be used.” The PMI soon will be asking us to donate bits of ourselves, and if these principles are honored, they will go a long way toward building the trust that biobanks – and precision medicine – need to succeed.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 8:59 am on June 8, 2016 Permalink | Reply
    Tags: , , , The Conversation   

    From The Conversation: “Why the Deep Space Atomic Clock is key for future space exploration” 

    Conversation
    The Conversation

    June 7, 2016
    Todd Ely
    Principal Investigator on Deep Space Atomic Clock Technology Demonstration Mission,
    Jet Propulsion Laboratory, NASA

    NASA Deep Space Atomic Clock Part I

    NASA Deep Space Atomic Clock Part II

    We all intuitively understand the basics of time. Every day we count its passage and use it to schedule our lives.

    We also use time to navigate our way to the destinations that matter to us. In school we learned that speed and time will tell us how far we went in traveling from point A to point B; with a map we can pick the most efficient route – simple.

    But what if point A is the Earth, and point B is Mars – is it still that simple? Conceptually, yes. But to actually do it we need better tools – much better tools.

    At NASA’s Jet Propulsion Laboratory, I’m working to develop one of these tools: the Deep Space Atomic Clock, or DSAC for short. DSAC is a small atomic clock that could be used as part of a spacecraft navigation system. It will improve accuracy and enable new modes of navigation, such as unattended or autonomous.

    In its final form, the Deep Space Atomic Clock will be suitable for operations in the solar system well beyond Earth orbit. Our goal is to develop an advanced prototype of DSAC and operate it in space for one year, demonstrating its use for future deep space exploration.

    Speed and time tell us distance

    To navigate in deep space, we measure the transit time of a radio signal traveling back and forth between a spacecraft and one of our transmitting antennae on Earth (usually one of NASA’s Deep Space Network complexes located in Goldstone, California; Madrid, Spain; or Canberra, Australia).

    NASA Deep Space Network Canberra, Australia

    We know the signal is traveling at the speed of light, a constant at approximately 300,000 km/sec (186,000 miles/sec). Then, from how long our “two-way” measurement takes to go there and back, we can compute distances and relative speeds for the spacecraft.

    For instance, an orbiting satellite at Mars is an average of 250 million kilometers from Earth. The time the radio signal takes to travel there and back (called its two-way light time) is about 28 minutes. We can measure the travel time of the signal and then relate it to the total distance traversed between the Earth tracking antenna and the orbiter to better than a meter, and the orbiter’s relative speed with respect to the antenna to within 0.1 mm/sec.
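
    To make those numbers concrete, here is a minimal back-of-the-envelope sketch in Python, using only the round figures quoted above (the names and values are illustrative, not mission software), showing how a measured round-trip signal time converts into a distance:

        # Back-of-the-envelope sketch of two-way radio ranging (illustrative values).
        C = 299_792.458        # speed of light, km/s
        range_km = 250e6       # average Earth-Mars distance quoted above, km

        two_way_light_time_s = 2 * range_km / C
        print(f"Two-way light time: {two_way_light_time_s / 60:.1f} minutes")   # ~27.8 min

        # Navigation works the other way round: measure the round-trip time,
        # then recover the distance between the antenna and the spacecraft.
        recovered_range_km = C * two_way_light_time_s / 2
        print(f"Recovered range: {recovered_range_km:,.0f} km")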

    We collect the distance and relative speed data over time, and when we have a sufficient amount (for a Mars orbiter this is typically two days) we can determine the satellite’s trajectory.

    Measuring time, way beyond Swiss precision

    Fundamental to these precise measurements are atomic clocks. By measuring very stable and precise frequencies of light emitted by certain atoms (examples include hydrogen, cesium, rubidium and, for DSAC, mercury), an atomic clock can regulate the time kept by a more traditional mechanical (quartz crystal) clock. It’s like a tuning fork for timekeeping. The result is a clock system that can be ultra stable over decades.

    The precision of the Deep Space Atomic Clock relies on an inherent property of mercury ions – they transition between neighboring energy levels at a frequency of exactly 40.5073479968 GHz. DSAC uses this property to measure the error in a quartz clock’s “tick rate,” and, with this measurement, “steers” it towards a stable rate. DSAC’s resulting stability is on par with ground-based atomic clocks, gaining or losing less than a microsecond per decade.
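
    As a rough sanity check on that stability figure, the short sketch below (round numbers assumed by me, not a DSAC specification) converts "less than a microsecond per decade" into a fractional rate error and compares it with an ordinary quartz wristwatch:

        # What "less than a microsecond per decade" means as a fractional rate error.
        SECONDS_PER_YEAR = 365.25 * 24 * 3600
        dsac_fractional_error = 1e-6 / (10 * SECONDS_PER_YEAR)
        print(f"DSAC-class stability: {dsac_fractional_error:.1e}")      # ~3e-15

        # A typical quartz wristwatch drifting about half a second per month:
        quartz_fractional_error = 0.5 / (30 * 24 * 3600)
        print(f"Quartz wristwatch:    {quartz_fractional_error:.1e}")    # ~2e-7, roughly eight orders of magnitude worse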

    Continuing with the Mars orbiter example, the error that the Deep Space Network’s ground-based atomic clocks contribute to the orbiter’s two-way light time measurement is on the order of picoseconds, adding only fractions of a meter to the overall distance error. Likewise, the clocks’ contribution to the error in the orbiter’s speed measurement is a minuscule fraction of the overall error (1 micrometer/sec out of the 0.1 mm/sec total).

    The distance and speed measurements are collected by the ground stations and sent to teams of navigators who process the data using sophisticated computer models of spacecraft motion. They compute a best-fit trajectory that, for a Mars orbiter, is typically accurate to within 10 meters (about the length of a school bus).

    Sending an atomic clock to deep space

    The ground clocks used for these measurements are the size of a refrigerator and operate in carefully controlled environments – definitely not suitable for spaceflight. In comparison, DSAC, even in its current prototype form as seen above, is about the size of a four-slice toaster. By design, it’s able to operate well in the dynamic environment aboard a deep-space exploring craft.

    One key to reducing DSAC’s overall size was miniaturizing the mercury ion trap. Shown in the figure above, it’s about 15 cm (6 inches) in length. The trap confines the plasma of mercury ions using electric fields. Then, by applying magnetic fields and external shielding, we provide a stable environment where the ions are minimally affected by temperature or magnetic variations. This stable environment enables measuring the ions’ transition between energy states very accurately.

    The DSAC technology doesn’t really consume anything other than power. All these features together mean we can develop a clock that’s suitable for very long duration space missions.

    Because DSAC is as stable as its ground counterparts, spacecraft carrying DSAC would not need to turn signals around to get two-way tracking. Instead, the spacecraft could send the tracking signal to the Earth station or it could receive the signal sent by the Earth station and make the tracking measurement on board. In other words, traditional two-way tracking can be replaced with one-way, measured either on the ground or on board the spacecraft.
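
    A small illustration (my own numbers, not from the article) of why the onboard clock matters so much more for one-way tracking: any offset between the spacecraft clock and the ground clock maps directly into the measured range.

        # One-way range error caused by a spacecraft-vs-ground clock offset.
        C_M_PER_S = 299_792_458.0   # speed of light, m/s

        def one_way_range_error_m(clock_offset_s: float) -> float:
            """The full clock offset shows up as a distance error."""
            return C_M_PER_S * clock_offset_s

        for offset in (1e-9, 10e-9, 1e-6):   # 1 ns, 10 ns, 1 microsecond
            print(f"{offset:.0e} s offset -> {one_way_range_error_m(offset):>12.1f} m error")

        # In two-way tracking the same ground clock starts and stops the timer,
        # so a constant offset largely cancels; that is why two-way tracking has
        # tolerated far less stable clocks on the spacecraft side.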

    So what does this mean for deep space navigation? Broadly speaking, one-way tracking is more flexible, scalable (since it could support more missions without building new antennas) and enables new ways to navigate.

    3
    DSAC enables the next generation of deep space tracking. Jet Propulsion Laboratory, CC BY

    DSAC advances us beyond what’s possible today

    The Deep Space Atomic Clock has the potential to solve a bunch of our current space navigation challenges.

    Places like Mars are “crowded” with many spacecraft: Right now, there are five orbiters competing for radio tracking. Two-way tracking requires spacecraft to “time-share” the resource. But with one-way tracking, the Deep Space Network could support many spacecraft simultaneously without expanding the network. All that’s needed are capable spacecraft radios coupled with DSAC.

    With the existing Deep Space Network, one-way tracking can be conducted at a higher-frequency band than current two-way. Doing so improves the precision of the tracking data by upwards of 10 times, producing range rate measurements with only 0.01 mm/sec error.

    One-way uplink transmissions from the Deep Space Network are very high-powered. They can be received by smaller spacecraft antennas with greater fields of view than the typical high-gain, focused antennas used today for two-way tracking. This change allows the mission to conduct science and exploration activities without interruption while still collecting high-precision data for navigation and science. As an example, use of one-way data with DSAC to determine the gravity field of Europa, an icy moon of Jupiter, can be achieved in a third of the time it would take using traditional two-way methods with the flyby mission currently under development by NASA.

    Collecting high-precision one-way data on board a spacecraft means the data are available for real-time navigation. Unlike two-way tracking, there is no delay with ground-based data collection and processing. This type of navigation could be crucial for robotic exploration; it would improve accuracy and reliability during critical events – for example, when a spacecraft inserts into orbit around a planet. It’s also important for human exploration, when astronauts will need accurate real-time trajectory information to safely navigate to distant solar system destinations.

    4
    The Next Mars Orbiter (NeMO) currently in concept development by NASA is one mission that could potentially benefit from the one-way radio navigation and science that DSAC would enable. NASA, CC BY

    Countdown to DSAC launch

    The DSAC mission is a hosted payload on the Surrey Satellite Technology Orbital Test Bed (OTB) spacecraft. The DSAC Demonstration Unit, together with an ultra-stable quartz oscillator and a GPS receiver with antenna, will enter low-altitude Earth orbit after launch on a SpaceX Falcon Heavy rocket in early 2017.

    While it’s on orbit, DSAC’s space-based performance will be measured in a yearlong demonstration, during which Global Positioning System tracking data will be used to determine precise estimates of OTB’s orbit and DSAC’s stability. We’ll also be running a carefully designed experiment to confirm DSAC-based orbit estimates are as accurate or better than those determined from traditional two-way data. This is how we’ll validate DSAC’s utility for deep space one-way radio navigation.

    In the late 1700s, navigating the high seas was forever changed by John Harrison’s development of the H4 “sea watch.” H4’s stability enabled seafarers to accurately and reliably determine longitude, which until then had eluded mariners for thousands of years. Today, exploring deep space requires traveling distances that are orders of magnitude greater than the lengths of oceans, and demands tools with ever more precision for safe navigation. DSAC is at the ready to respond to this challenge.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 2:08 pm on April 25, 2016 Permalink | Reply
    Tags: , , , The Conversation   

    From The Conversation: “Has China’s coal use peaked? Here’s how to read the tea leaves” 

    Conversation
    The Conversation

    April 12, 2016
    Valerie J. Karplus

    As the largest emitter of carbon dioxide in the world, how much coal China is burning is of global interest.

    In March, the country’s National Bureau of Statistics reported that the tonnage of coal consumed had fallen for the second year in a row. Indeed, there are reports that China will halt construction of new coal plants as the country grapples with overcapacity, and efforts to phase out inefficient and outdated coal plants are expected to continue.

    A sustained reduction in coal, the main fuel used to generate electricity in China, will be good news for the local environment and global climate. But it also raises questions: what is driving the drop? And can we expect this nascent trend to continue?

    It appears many of the forces that led coal use to slow down in recent years are here to stay. Nevertheless, uncertainties abound.

    The future of coal in China will depend on economic factors, including whether alternatives are cheaper and whether a return to high oil prices will encourage production of liquid fuels from coal. Also crucial to coal’s future trajectory are the pace of China’s economic growth and the country’s national climate and air pollution policies.

    Overcapacity

    First, let’s consider how certain we are that the rise in China’s coal use has reversed course. Unpacking that requires understanding the context in which the data is produced.

    China’s national energy statistics are subject to ongoing adjustments. The most recent one, in 2014, revised China’s energy use upward, mainly as a result of adjustments to coal use. The revisions follow the Third National Economic Census, a comprehensive survey of energy use and economic activity that better represents the energy use of small- and medium-sized enterprises.

    There is good reason to believe these revised figures better reflect reality, because they help to explain a well-recognized gap between previously published national totals and the sum of provincial energy statistics, and because these regular revisions capture more sources of energy consumption.

    In short, the latest numbers show China is using more coal and energy than previously thought, but the last two years of data suggest China’s coal use may be peaking earlier than expected.

    1
    China’s latest available energy statistics, based on a more thorough accounting of energy use in the country, show coal use, measured in energy terms, plateauing and declining over the past two years. Valerie Karplus. Data: China Statistical Yearbook 2015 and 2014. Author provided.

    Working from the revised numbers, the observed leveling off of China’s coal use may be both real and sustainable. Efforts to eliminate overcapacity of coal and raise energy efficiency in electric power and heavy industries are biting: in 2014, coal use in electric power fell by 7.8 percent year-on-year, while annual growth in coal consumption in manufacturing fell to 1.6 percent from 4 percent in the previous year, according to data from CEIC.

    The drop in coal use is partly due to structural shifts and partly to good fortune. In electric power, a shift to larger, more efficient power plants and industrial boilers, as well as a reduction in operating hours, has reduced overall coal use.

    The contribution of hydro, wind, solar, nuclear and natural gas in the power generation mix also continues to expand. Abundant rainfall in 2014 allowed the contribution of hydroelectric power to total generation to increase significantly as well.

    Also, the government-led war on air pollution is giving new impetus to clean up or shut down the oldest, dirtiest plants, and bring online new plants with air pollution controls in place.

    These trends, along with slower economic growth that is increasingly driven by domestic consumption and less by the expansion of heavy industry, suggest that coal demand is likely to continue leveling off. This is true even though prices for coal are falling because of overcapacity.

    The impending launch of a national carbon market in 2017 will further penalize coal in several sectors that use it intensively. The carbon market will require heavy emitters to either reduce carbon dioxide emissions by using less coal or purchase credits for emissions reductions from other market participants.

    Under that scenario, China’s coal is instead likely to be increasingly exported to other energy-hungry nations less focused on air quality and climate concerns.

    A role for oil prices and air pollution policy

    However, there is at least one scenario in which coal use could easily reverse its downward trend: a return to high oil and natural gas prices.

    Globally, oil prices have plummeted in the past two years, while natural gas prices in China are relatively low for domestic users. High prices for oil and natural gas would make it attractive to convert coal into products that can be used in place of oil, natural gas or chemicals.

    China already has facilities for producing these products, often referred to as synthetic fuels. Plans to substantially expand these activities have been postponed or scuttled due to the lack of an economic rationale.

    But a return to high oil and gas prices would give new life to these projects, which even with a modest price on carbon emissions are likely to be economically viable. The scale of existing synthetic fuels capacity is sufficient to reverse the downward trend in coal use, should it become economic to bring it online.

    2
    Solar, wind and hydro are making a bigger contribution to China’s power generation mix but given the country’s huge energy needs, it is too early to say the country’s carbon emissions are going down. Jason Lee/Reuters

    Meanwhile, China’s carbon and air pollution rules are starting to have an impact on coal use, although the ultimate size of any reduction will depend on resources and incentives to implement policies at the local level.

    Under the National Air Pollution Action Plan, three major urban regions on China’s populous east coast face significant pressure to reduce the concentration of ambient particulate matter pollution by 20-25 percent before 2017, while a 10 percent reduction target is set for the nation as a whole. Cleaning up the air will involve mobilizing an enormous number of local actors on the ground, financing technology upgrades, and introducing policies to reduce pollution-intensive fuels through efficiency and substitution.

    Yet coal use reductions are unlikely to follow in lock step with air pollution reductions. Slower economic growth would be expected to reduce energy demand. But reducing the energy intensity of growth – the amount of energy needed to produce a unit of GDP – will likely become harder over time.

    As economic growth slows, localities will be under pressure to expand opportunities for existing businesses and create new ones. If this pressure leads local officials to resort to expanding energy-intensive activities, such as iron and steel, cement, and heavy manufacturing, to boost local GDP, it will become more difficult to continue reducing China’s coal use per unit of economic output.

    Carbon market in 2017

    Whether coal use continues to decline or goes back up has implications for the timing of China’s emissions peak. At the Paris Climate Summit, China pledged to peak its emissions by 2030 at the latest, meaning emissions of carbon dioxide would start to fall in absolute terms.

    While it is too early to say that China’s carbon dioxide emissions will continue to fall, it is unlikely that they will rise much further even if the country’s economic aspirations are realized.

    The expansion of hydro and other forms of low- or zero-carbon energy will help. If the challenges of integrating the energy generated from solar and wind – less predictable sources of energy compared to dispatchable sources such as coal and gas – can be solved, renewable energy that is already installed has the potential to displace significant additional coal use as well, while contributing to the reduction in air pollution and carbon dioxide emissions.

    So the answer to the question of whether or not China’s coal use has peaked is: perhaps. China’s coal cap of 4.2 billion tons in 2020 is set roughly 7 percent higher than the 2013 peak, suggesting that even if the decline reverses course, coal use at least will not rise much higher. (Note that China’s coal cap limits coal use on a mass basis, while the figure above reports coal use on an energy basis, so the two are not directly comparable.)
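
    For readers who want the implied figure, a quick arithmetic check on the cap quoted above (mass basis, using only the numbers in this paragraph):

        # Implied 2013 peak from a 2020 cap set roughly 7% above it (illustrative arithmetic).
        cap_2020_billion_tons = 4.2
        implied_2013_peak = cap_2020_billion_tons / 1.07
        print(f"Implied 2013 peak: ~{implied_2013_peak:.2f} billion tons")                            # ~3.93
        print(f"Headroom under the cap: ~{cap_2020_billion_tons - implied_2013_peak:.2f} billion tons")  # ~0.27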

    This is good news because China’s carbon dioxide emissions have already reached levels in line with previously projected peak levels for 2030, prior to the data revisions. So earlier peaks in both coal use and carbon dioxide emissions now look not only desirable, but possible.

    Sustaining a commitment in China to cultivating cleaner forms of energy production and use will be challenging in the current economic headwinds, but the potential benefits to human health are great, especially in the medium to longer term.

    The country’s national carbon market, planned to launch in 2017, is an important step in the right direction. A sufficiently high and stable carbon price could form the cornerstone of a sustained transition away from coal in favor of clean and renewable energy, developments that would be consistent with existing targets and air quality goals.

    Any transition away from coal in China has the potential to help the world curb its carbon dioxide emissions and to improve domestic air quality – something that will allow us all to breathe a little easier.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 8:52 am on February 27, 2016 Permalink | Reply
    Tags: , , , The Conversation   

    From The Conversation: “What is time – and why does it move forward?” 

    Conversation
    The Conversation

    February 22, 2016
    Thomas Kitching

    Imagine time running backwards. People would grow younger instead of older and, after a long life of gradual rejuvenation – unlearning everything they know – they would end as a twinkle in their parents’ eyes. That’s time as represented in a novel by science fiction writer Philip K Dick but, surprisingly, time’s direction is also an issue that cosmologists are grappling with.

    While we take for granted that time has a given direction, physicists don’t: most natural laws are “time reversible” which means they would work just as well if time was defined as running backwards. So why does time always move forward? And will it always do so?

    Does time have a beginning?

    Any universal concept of time must ultimately be based on the evolution of the cosmos itself. When you look up at the universe you’re seeing events that happened in the past – it takes light time to reach us. In fact, even the simplest observation can help us understand cosmological time: for example the fact that the night sky is dark. If the universe had an infinite past and was infinite in extent, the night sky would be completely bright – filled with the light from an infinite number of stars in a cosmos that had always existed.

    For a long time scientists, including Albert Einstein, thought that the universe was static and infinite. Observations have since shown that it is in fact expanding, and at an accelerating rate. This means that it must have originated from a more compact state that we call the Big Bang, implying that time does have a beginning. In fact, if we look for light that is old enough we can even see the relic radiation from the Big Bang – the cosmic microwave background [CMB]. Realising this was a first step in determining the age of the universe (see below).

    Cosmic microwave background (CMB) radiation, as mapped by the Planck satellite

    But there is a snag: Einstein’s special theory of relativity shows that time is … relative. The faster you move relative to me, the slower time will pass for you relative to my perception of time. So in our universe of expanding galaxies, spinning stars and swirling planets, experiences of time vary: everything’s past, present and future is relative.

    So is there a universal time that we could all agree on?

    Into what is the universe expanding? NASA/Goddard

    It turns out that because the universe is on average the same everywhere, and on average looks the same in every direction, there does exist a cosmic time. To measure it, all we have to do is measure the properties of the cosmic microwave background. Cosmologists have used this to determine the age of the universe; its cosmic age. It turns out that the universe is 13.799 billion years old.

    Time’s arrow

    So we know time most likely started during the Big Bang. But there is one nagging question that remains: what exactly is time?

    To unpack this question, we have to look at the basic properties of space and time. In the dimension of space, you can move forwards and backwards; commuters experience this every day. But time is different: it has a direction, and you always move forward, never in reverse. So why is the dimension of time irreversible? This is one of the major unsolved problems in physics.

    To explain why time itself is irreversible, we need to find processes in nature that are also irreversible. One of the few such concepts in physics (and life!) is that things tend to become less “tidy” as time passes. We describe this using a physical property called entropy that encodes how ordered something is.

    Imagine a box of gas in which all the particles were initially placed in one corner (an ordered state). Over time they would naturally seek to fill the entire box (a disordered state) – and to put the particles back into an ordered state would require energy. This is irreversible. It’s like cracking an egg to make an omelette – once it spreads out and fills the frying pan, it will never go back to being egg-shaped. It’s the same with the universe: as it evolves, the overall entropy increases.
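
    A toy numerical sketch (my own construction, purely to illustrate the idea) makes the same point: start the particles in one corner, let them jiggle at random, and a coarse-grained entropy climbs toward its maximum and stays there.

        # Toy model: particles start in one corner of a unit box; random kicks
        # spread them out, and a coarse-grained (four-quadrant) entropy rises.
        import math
        import random

        random.seed(1)
        N, STEPS = 400, 2000
        particles = [[random.uniform(0, 0.5), random.uniform(0, 0.5)] for _ in range(N)]  # ordered start

        def coarse_entropy(pts):
            """Shannon entropy of the four-quadrant occupation (maximum is ln 4, about 1.386)."""
            counts = [0, 0, 0, 0]
            for x, y in pts:
                counts[(x > 0.5) + 2 * (y > 0.5)] += 1
            return -sum((c / len(pts)) * math.log(c / len(pts)) for c in counts if c)

        for step in range(STEPS + 1):
            if step % 500 == 0:
                print(f"step {step:4d}  entropy = {coarse_entropy(particles):.3f}")
            for p in particles:                      # small random kick, clamped at the box walls
                for i in (0, 1):
                    p[i] = min(1.0, max(0.0, p[i] + random.uniform(-0.02, 0.02)))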

    It turns out entropy is a pretty good way to explain time’s arrow. And while it may seem like the universe is becoming more ordered rather than less – going from a wild sea of relatively uniformly spread out hot gas in its early stages to stars, planets, humans and articles about time – it’s nevertheless possible that it is increasing in disorder. That’s because the gravity associated with large masses may be pulling matter into seemingly ordered states – with the increase in disorder that we think must have taken place being somehow hidden away in the gravitational fields. So disorder could be increasing even though we don’t see it.

    But given nature’s tendency to prefer disorder, why did the universe start off in such an ordered state in the first place? This is still considered a mystery. Some researchers argue that the Big Bang may not even have been the beginning; there may in fact be “parallel universes” where time runs in different directions.

    Will time end?

    Time had a beginning, but whether it will have an end depends on the nature of the dark energy that is causing the universe to expand at an accelerating rate. The rate of this expansion may eventually tear the universe apart, forcing it to end in a Big Rip; alternatively dark energy may decay, reversing the Big Bang and ending the universe in a Big Crunch; or the universe may simply expand forever.

    But would any of these future scenarios end time? Well, according to the strange rules of quantum mechanics, tiny random particles can momentarily pop out of a vacuum – something seen constantly in particle physics experiments. Some have argued that dark energy could cause such “quantum fluctuations” giving rise to a new Big Bang, ending our time line and starting a new one. While this is extremely speculative and highly unlikely, what we do know is that only when we understand dark energy will we know the fate of the universe.

    So what is the most likely outcome? Only time will tell.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     