Tagged: aeon

  • richardmitnick 10:16 pm on November 24, 2017
    Tags: 16S ribosomal RNA, aeon, Nematodes

    From aeon: “Life goes deeper” 



    24 November, 2017
    Gaetan Borgonie, Extreme Life Isyensya in Gentbrugge, Belgium
    Maggie Lau, Princeton University


    The living landscape all around us is just a thin veneer atop the vast, little-understood bulk of the Earth’s interior. A widespread misconception about the deep subsurface is that this realm consists of a continuous mass of uniform compressed solid rock. Few are aware that this mass of rock is heavily fractured, and water runs in many of these fractures and faults, down to depths of many kilometres. The deep Earth supports an entire biosphere, largely cut off from the surface world, and is still only beginning to be explored and understood.

    The amount of water in the subsurface is considerable. Globally, the freshwater reservoir in the subsurface is estimated to be up to 100 times as great as all the available fresh water in rivers, lakes and swamps combined. This water, ranging in age from seven years to 2 billion years, is being intensely studied by researchers because it defines the location and scope of deep life. We know now that the deep terrestrial subsurface is home to one quintillion simple (prokaryotic) cells. That is two to 20 times as many cells as live in all the open ocean. By some estimates, the deep biosphere could contain up to one third of Earth’s entire biomass.

    To comprehend the deep biosphere, we must look past the familiar rules of biology. On the surface, life without the Sun for an extended period of time is dangerous or deadly. Without daylight, no plants or crops can grow. Temperatures get colder and colder. Few organisms, including human beings, can long tolerate such conditions. For instance, people living within the Arctic Circle – as well as the maintenance staff at Antarctic research stations during winter – experience 24-hour darkness for several months each year. They are more vulnerable to health issues such as depression. They find ways to adapt and get through the long, dark, cold winter, but it isn’t easy.

    Now imagine the challenges in places that have been isolated from sunlight and organic compounds derived from light-dependent reactions for millions or even billions of years. It seems incomprehensible that anything could survive there. Yet scientists, including the members of our team at Princeton University in New Jersey, have found surprisingly diverse microorganisms in the deep Earth, adapted to a lifestyle independent of the Sun.

    Sunlight can filter down to depths of about 1,000 metres in ocean water, but light penetrates no more than a few centimetres into soils or rocks. Cold is not a problem down there, however. Quite the opposite: rainwater that percolates kilometres deep into the crust along fractures and faults between rocks can reach temperatures of 60°C (140°F) or higher. The further down you go from the surface, the closer you are to the mantle. Heat rising from the inner Earth is what warms the fissure water. Additionally, the water is under high pressure, contains very little or no oxygen, and is bombarded by radiation from natural radioactive elements in the rocks.

    Within this hellish environment, though, are crucial ingredients for nurturing life. Underground water reacts with minerals in the continental crust, and the longer the water has been trapped down there, the more time there has been for the results of those reactions to accumulate along the flow path. The slow reactions between water and rock dissolve minerals into the water, and break up some of the water molecules, producing molecular hydrogen. This hydrogen is an important fuel for microorganisms in the deep subsurface.

    We are also beginning to map the different ecosystems and populations of the deep Earth. Generally speaking, the older subterranean fissure water is brinier (saltier) and has higher concentrations of dissolved hydrogen. Our studies, and those of some of our colleagues, have shown an apparent trend: the microbes living in the older, more saline water are distinctly different from those in the younger, less saline water.

    Old-water ecosystems are dominated by hydrogen-utilising microorganisms such as sulphate-reducing bacteria and methane-producing archaea. Those methane-producing archaea, or methanogens, are microbes that visually resemble bacteria but are so structurally and genetically distinct that they belong to a completely separate domain of life. Sulphate-reducing bacteria and methanogens are among the life forms that appeared earliest in evolutionary history. In contrast, young-water ecosystems are dominated by metabolically diverse and versatile bacteria of the phylum proteobacteria.

    Studies of the deep ecosystem are already resonating across many fields of science. They are sparking new ideas about the origin of life and about the limits of metabolism. They are filling in new details about the cycling, distribution and storage of carbon on Earth. Deep continental ecosystems will aid the search for underground life on rocky planets such as Mars; deep-sea and sub-seafloor ecosystems, in turn, will help researchers assess the likelihood and possible nature of organisms living on the ocean moons Europa and Enceladus. The implications of this research are truly cosmic in scope.

    Subsurface microorganisms are estimated to be extraordinarily long-lived. In our studies, they show a turnover time as slow as 1,000 years, meaning that a cell divides only about once a millennium. To put that in perspective, the common gut bacterium E. coli divides once every 20 minutes. One of the long-standing questions is: how do the deep microbes achieve such a slow-motion lifestyle?
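    The gulf between those two lifestyles is easy to quantify. As a back-of-envelope sketch, using only the figures quoted above (a roughly 1,000-year turnover versus a 20-minute doubling time):

```python
# Back-of-envelope comparison of division rates, using figures from the text:
# deep subsurface microbes turn over roughly once per 1,000 years,
# while E. coli divides about once every 20 minutes.
MINUTES_PER_YEAR = 365.25 * 24 * 60

ecoli_doubling_min = 20
deep_doubling_min = 1_000 * MINUTES_PER_YEAR

ratio = deep_doubling_min / ecoli_doubling_min
print(f"E. coli divides roughly {ratio:,.0f} times faster")
```

    On these numbers, E. coli divides about 26 million times faster than its deep-Earth counterparts.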

    It is not easy to make a living in the subsurface because the biochemical reactions that harness energy from minerals and geological gases – a set of processes known as chemotrophy – are not as efficient as photosynthesis, the process that green plants use to capture energy from photons of sunlight on the surface. Some subsurface microorganisms can form stress-resistant spores and remain inactive in order to withstand extreme subsurface conditions; otherwise, microorganisms have to invest at least a certain amount of energy, which varies from one taxon (evolutionary group) to another, to maintain the integrity and functionality of their cells.

    Nowadays, genetic sequencing techniques allow us to investigate in great detail which organism has the potential to metabolise what component of the environment. We can also probe the metabolic potential of the community as a whole using metagenomics, a way to study the collective genetic diversity. Together, these approaches are revealing the overall structure and functioning of the deep biome.

    Our studies of the proteobacteria-dominated communities (collected from several sites 1 to 3 km below land surface) show that they share a high degree of similarity with each other, as determined by a genetic marker known as the 16S ribosomal RNA. However, the same functional traits are carried out by different taxa. This variation cannot be explained by physical separation of the sites, nor by each location’s unique physico-chemical features – normally the most ecologically influential factors for such segregation. Neither depth nor water-residence time appear to be a significant contributor to differences, either. Future investigations on the origins of subsurface microorganisms, along with their evolution and movement over the geological history, will aid our understanding of the biogeography, or living landscape, of the subsurface.
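    The 16S comparison at the heart of that analysis boils down to measuring sequence similarity. As a toy illustration only (real surveys rely on dedicated alignment and clustering pipelines, and the fragments below are invented, not actual 16S data), a percent-identity calculation over two pre-aligned sequences might look like this:

```python
def percent_identity(a: str, b: str) -> float:
    """Percentage of matching positions between two equal-length aligned sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100 * matches / len(a)

# Hypothetical aligned fragments (illustrative, not real 16S rRNA data):
seq1 = "ACGTACGGTCCA"
seq2 = "ACGTACGATCCA"
print(f"{percent_identity(seq1, seq2):.1f}% identity")  # one mismatch in 12 positions
```

    In practice, communities are compared by clustering many such sequences at a chosen identity threshold, but the underlying similarity measure is this simple.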

    We recently completed a study of subsurface microbes using high-throughput sequencing to look at the total population of RNA and proteins. In a 2015 paper [Nature Communications], we described for the first time the comprehensive network of metabolic functions being actively executed in the subsurface. At 1.3 km below land surface at the Beatrix gold mine in South Africa, the active community comprised 39 phyla from all three domains of life: bacteria, archaea and eukarya – the domain of complex organisms that includes humans. Overall, the ecosystem was dominated by proteobacteria.

    The molecular data, together with isotope geochemistry and thermodynamic modelling, presented a unified story: the most successful group down there is the betaproteobacteria, a class of proteobacteria that obtain energy by coupling nitrate reduction with sulphur oxidation in order to fix carbon dioxide for cellular growth. The demand for nitrate among deep microbes was unexpected; it had gone unnoticed prior to our study because the measured nitrate concentrations in the subsurface water samples were tiny. More interestingly, we deduce that deep microbial groups have established strong, paired metabolic partnerships, or syntrophic relationships, which help the organisms overcome the challenges of extracting the limited energy that originates from rocks. Rather than competing directly with each other, these microbes establish a win-win collaboration.

    Scanning electron microscope image of some of the eukarya recovered from two different mines. (a) Dochmiotrema sp. (Platyhelminthes), (b) A. hemprichi (Annelida), (c) Mylonchulus brachyurus (Nematoda), (d) Amphiascoides? (Arthropoda). Scale bar, 50 μm (a,b), 100 μm (c), 20 μm (d).

    Most of the carbon in microbial cells appears to be derived directly and indirectly from methane. This is true even though methanogens and methane-oxidising microorganisms together accounted for less than 1 per cent of the organisms in our samples – an astonishingly low fraction, given that methane was the most abundant dissolved gas (~80 per cent) in the water samples we studied. The different kinds of microbial taxa that recycle methane in the subsurface occur at varying abundance over time and space.

    Despite the advantages of metabolic partnerships, some deep microbes have evolved to go it alone. Through metagenomics and genome-based analysis, the research scientist Dylan Chivian of Lawrence Berkeley National Laboratory (building on work by Tullis Onstott, the head of our team at Princeton University) discovered a sulphate-reducing bacterium, Candidatus Desulforudis audaxviator, that is completely self-reliant within the subsurface ecosystem. Since the publication of this discovery in 2008 [Science], Ca. Desulforudis has been detected elsewhere in both the continental and marine subsurface. Single-cell genomic data suggests that ancient viral infections transported archaeal genes into Ca. Desulforudis cells, which gave the bacterium the genetic machinery for its self-reliance.

    Single-cell genomic data has not only permitted us to investigate cell-to-cell variations in the genomic materials of subsurface microbes, but also to recover the genomic blueprints of microbes that cannot be cultivated. These overlooked organisms are sometimes called ‘microbial dark matter’ because they evade detection by conventional laboratory methods. As with astronomical dark matter, microbial dark matter vastly exceeds the amount that is ‘visible’ to us. Some 99 per cent of the microorganisms do not grow under artificial laboratory conditions. We must rely on single-cell genomics and metagenomics to hunt for microbial dark matter in the deep subsurface.

    Even after we and several other research teams realised that bacteria and viruses have colonised the harsh, deep subsurface, most scientists still considered it unlikely that anything more complex than these unicellular organisms would be able to survive down there. More complex, multicellular organisms generally cope less well with low oxygen levels and high pressure, and they require more food. All the same, in 2006 our group (led by Onstott and Gaetan Borgonie) started to look for nematodes at great depths.

    Nematodes (commonly called roundworms, not to be confused with earthworms, which belong to a group all of their own, the Annelida) are extremely common multicellular organisms. Together with insects, they are among the most dominant animals on the planet. Nematodes are mostly very small. Although some can range up to several metres in length, most are less than 1 mm long. Their origin extends back 1.1 billion years, to a time not long after the divergence of plants and animals in evolution. Nematodes are considered to be among the oldest multicellular organisms still known on the planet. They have conquered almost every niche, from soils to oceans; some have even evolved to parasitise plants and animals, including humans.

    What made nematodes a logical choice to look for in the deep subsurface is their proven track record for being able to survive in extreme environments. Many species are able to alter their life cycle when confronted with life-threatening conditions. They form a survival stage in which their metabolism is greatly reduced. In this way, they are able to withstand anoxia, heat, drought, freezing and toxic conditions for several decades, and then revive when wetted or when conditions are adequate again.

    Nematodes can withstand huge pressures, too. When the Space Shuttle Columbia broke up during re-entry in 2003, a biological experiment on board containing nematodes made a free fall from an altitude as high as 42 km. Their canister hit the ground with a force of roughly 2,500 g. (Transient centrifugation at up to 10,000 g, which would liquefy a human, is a common manipulation in standard nematode laboratory procedures.) A few weeks later, the experiment was recovered. The nematodes inside the canister had not only survived the ordeal, they were reproducing. Their oxygen tolerance is equally impressive: humans need the roughly 21 per cent oxygen in our atmosphere to breathe, but nematodes can make do indefinitely with only 0.5 per cent oxygen, and many species can survive extended periods with even less, or none at all.

    Our search for deep-Earth nematodes resulted in the 2011 discovery [Nature] of a new species of nematode, Halicephalobus mephisto. Its name literally means ‘the devil worm’. The nematode was recovered from water that flowed out of a fissure at a depth of about 1.3 km in the Beatrix gold mine. Carbon-dating showed the water there to be around 3,000 years old. In the years that followed, we found more nematodes living at an even more remarkable depth of 3.8 km.

    After the discovery of the devil-worm nematode, we ran a filtration sampling campaign that lasted two years. During that time, we filtered 12,845,647 litres of water at a depth of 1.4 km. (The search for deep life is painstaking work!) This effort resulted in the discovery of a whole zoo of invertebrates in water that was 12,300 years old. We recovered species of flatworms, nematodes, rotifers, arthropods, annelids, fungi and protozoa – a whole community thriving inside the filter.
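    Those raw numbers translate into a modest but relentless flow. A quick check of the implied average rate, assuming the filtering ran steadily over the full two years:

```python
# Average filtration rate implied by the figures in the text:
# 12,845,647 litres over a two-year campaign.
total_litres = 12_845_647
days = 2 * 365.25

per_day = total_litres / days
per_minute = per_day / (24 * 60)
print(f"about {per_day:,.0f} litres/day, or {per_minute:.1f} litres/minute")
```

    That works out to roughly 17,600 litres a day, a steady trickle of about 12 litres per minute sustained for two years.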

    Genetic analysis revealed that none of these was a new species, but that they were all species already known from the surface. Further investigation revealed that nearly all the complex subsurface dwellers shared a common characteristic: they were known to be cosmopolitan generalists, tolerant of a wide range of conditions and therefore well-suited to living in extreme environments. At that time, we also made the first video footage of a biofilm – a thin, self-contained living layer – attached to crevices deep inside the rock. The biofilm is composed of bacteria and an organic matrix, and it is home to all these animals.

    We also found several non-animal species, such as fungi and protozoa, living in deep fissure water that ranged in ages from 7,000 to 500,000 years old. Often their abundance in the fissure water was low, just one specimen per 10,000 litres. In contrast, in certain areas we found patches of bacterial biofilm containing worms at population densities of more than 1,000,000 individuals per square metre. Because the known subsurface animals are small, a cavity the size of your thumb can hold an entire ecosystem containing several hundred small invertebrates, fungi and protozoa.

    The commonality of species on the surface and in the subsurface posed a consistent research challenge. At all times, we had to perform extensive analyses to be sure that any specimen found was not the result of contamination from the mines where we were conducting our research. We also measured the age of the water, using both chemical and bacteriological techniques, to be sure it was not recent. And we had to maintain aseptic conditions at all times. These are similar to, though milder than, the kinds of precautions that might soon be needed for analysing samples from Mars for evidence of extraterrestrial life.

    Except for Halicephalobus mephisto, we never did find any completely new species of multicellular organisms in the Beatrix mine. This seemed counterintuitive at first, as we expected that a long process of adaptive selection in the deep subsurface would lead to novel life forms. With the advantage of hindsight, though, it is not so surprising.

    If you consider any patch of soil anywhere in the world, the nematodes (or any other small invertebrate) living there undergo a daily and seasonal cycle of stress. On bright days, sunshine can dry out the soil; when it rains, puddles might cut off all oxygen; at night, the freezing of water or a bigger animal stepping on that patch adds pressure and disturbs the soil. In summary, animals living in the soil on the surface already experience stress every day. Many of the organisms transported to the deep subsurface would have adapted to extreme conditions long ago, so they would not need a long adaptive selection process to be able to survive. That would account for the paucity of undiscovered deep species.

    Even after we got past the surprise of what organisms we found living in the subsurface, we were still caught off-guard by where we found them. During our survey of the Beatrix mine, we discovered nematodes living inside salty stalactites at a depth of about 1.4 km. Moreover, this species of nematode was adapted to living in salty water and could not even survive in fresh water. On the surface, this species had been found years before to live in brackish water conditions. Although the Beatrix mine is situated in a dry salt pan, it is still an enigma how a salt-dependent surface worm managed to get that deep without encountering a deadly layer of fresh water in between.

    The process of transport to the deep subsurface is not yet understood, and is the subject of much current research. Even in the absence of answers, the broader realisation that complex surface life forms can also survive indefinitely in the deep subsurface is good news for the search for life on planets and moons in our solar system. A similar process of migration could have transported life forms to the deep subsurface long before the surface conditions became inhospitable on Mars, for instance.

    And our journey into the inner life of the Earth is just beginning. We are interested in determining whether species from the deep subsurface truly are as isolated as they seem, and if the migrations go in both directions. It is possible that some subsurface organisms reappear on the surface via hot springs. Our analyses of hot-spring waters in the Limpopo region as well as the southern and western Cape regions of South Africa did not turn up any evidence of such resurfacing. Nevertheless, this is a provocative issue that we are continuing to investigate because it will tell us how frequently genetic materials are being exchanged between the surface and the deep subsurface.

    Finally, we recognise that we have probably explored only a tiny fraction of the deep biosphere, and might not yet have encountered its most significant inhabitants. It stands to reason that, if cosmopolitan species from the surface can survive in the deep subsurface, isolated from their surface brethren, then over a long period of time some organisms might have adapted to even more extreme conditions deeper in the subsurface. It could be that the real treasure trove of new and weird life forms still awaits discovery far beneath our feet.

    See the full article here.


    Since 2012, Aeon has established itself as a unique digital magazine, publishing some of the most profound and provocative thinking on the web. We ask the big questions and find the freshest, most original answers, provided by leading thinkers on science, philosophy, society and the arts.

    Aeon has three channels, and all are completely free to enjoy:

    Essays – Longform explorations of deep issues written by serious and creative thinkers

    Ideas – Short provocations, maintaining Aeon’s high editorial standards but in a more nimble and immediate form. Our Ideas are published under a Creative Commons licence, making them available for republication.

    Video – A mixture of curated short documentaries and original Aeon productions

    Through our Partnership program, we publish pieces from university research groups, university presses and other selected cultural organisations.

    Aeon was founded in London by Paul and Brigid Hains. It now has offices in London, Melbourne and New York. We are a not-for-profit, registered charity operated by Aeon Media Group Ltd. Aeon is endorsed as a Deductible Gift Recipient (DGR) organisation in Australia and, through its affiliate Aeon America, registered as a 501(c)(3) charity in the US.

    We are committed to big ideas, serious enquiry and a humane worldview. That’s it.

  • richardmitnick 7:45 am on October 16, 2017
    Tags: aeon

    From aeon: “Does science need mavericks?” 



    Adrian Currie

    Staid and conformist, science risks losing its creative spark. Does it need more mavericks, or are they part of the problem?

    Nikola Tesla reading a book in his laboratory in Colorado Springs, 1900. Photo by Getty

    At the end of his life, the English naturalist Charles Darwin became intrigued by the musicality of worms. In the last book he ever wrote, in 1881, he describes a series of experiments on his vermicular subjects. Worms, Darwin discovered, are sensitive to vibrations transmitted through a solid surface, but tone-deaf and unresponsive to the shriek of a whistle or the bellow of a bassoon.

    Earlier, in the 1760s, the French natural philosopher Comte de Buffon heated up balls of iron and other minerals until they were white-hot. Then, by sense of touch alone, he recorded how long it took them to cool to room temperature.

    A hundred years before that, Isaac Newton wrote about the time he slid a bodkin – a kind of thick tailor’s needle – between his skull and his eye, and rubbed the needle so as to distort the shape of his own eyeball.

    ‘An Experiment to Put Pressure on the Eye’, from Isaac Newton’s notebooks (1665-6). Photo courtesy Cambridge University Library

    These experiments are all pretty wacky, but they still bear the mark of the scientific. Each one involves the careful recording and assessment of data. Darwin was excluding the hypothesis that hearing explained earthworm behaviour; Buffon extrapolated the age of the Earth from a wide range of geological materials (his estimate: 75,000 years); and Newton’s unpleasant self-surgery helped to develop his theory of optics, by clarifying the relationship between the eye’s geometry and the resulting visual effects. Their methods might have been unorthodox, but they were following their intellectual instincts about what the enquiry demanded. They had licence to be scientific mavericks.

    The word ‘maverick’ has a surprising history. In the 1860s, a Texan lawyer and rancher stopped branding his cattle. Of course, the unbranded livestock quickly became identified as his. The man’s name? Samuel Maverick. By the 1880s, through one of those strange transformations so distinctive of language, the term had come to mean anyone who refuses to abide by the rules. I like the connection with cattle: where most cows and steers follow the herd, some – the unbranded – find their own path.

    Nowadays scientists tend to shun the ‘maverick’ label. If you’ve hung out in a lab lately, you’ll notice that scientific researchers are often terrible gossips. Being labelled a ‘maverick’, a ‘crank’ or a ‘little bit crazy’ can be career-killing. The result is what the philosopher Huw Price at the University of Cambridge calls ‘reputation traps’: if an area of study gets a bad smell, a waft of the illegitimate, serious scientists won’t go anywhere near it.

    Mavericks such as Newton, Buffon and Darwin operated in a very different time to our own. Theirs was the age of the ‘gentleman scholar’, in which research was pursued by a moneyed class with time to kill. Today, though, modern science encourages conformity. For a start, you need to get a degree to become a scientist of some stripe. You also need to publish, get peer-reviewed, obtain money from a funder, and find a job. These things all mould the young scientist: you aren’t just taught proper pipette technique, but also take on a kind of disciplinary worldview. The process of acculturation is part of what the philosopher and historian Thomas Kuhn called a ‘paradigm’, a set of values, practices and basic concepts that scientists hold in common.

    On top of this standardisation, careers in science are now extremely hard to come by. There’s a scarcity of jobs compared with the number of applicants, and very few high-ranking and ‘big impact’ journals. This means that the research decisions that scientists make, particularly early on, are high-risk wagers about what will be fruitful and lead to a decent career. The road to academic stardom (and, for that matter, academic mediocrity) is littered with brilliant, passionate people who simply made bad bets. In such an environment, researchers are bound to be conservative – with the stakes set so high, taking a punt on something outlandish, and that you know is likely to hurt your career, is not a winning move.

    Of course, all these filters help to ensure that the science we read about is well-supported and reliable, compared with Darwin’s day. There’s much good in sharing a paradigm; it makes communication easier and helps knowledge accumulate from a common base. But professional training also involves learning how to convince colleagues in your field that your work is legitimate, that it meets their ideas of what the good questions are and what good answers look like. This makes science more productive, but less creative. Enquiries can become hidebound and unadventurous. As a result, truly revolutionary research – the domain of the maverick – is increasingly hard to pursue.

    The biologist Barbara McClintock devoted enormous effort and paid a very high price for her path-breaking research into so-called ‘jumping genes’ in the mid-20th century. She was certainly no scientific outsider – by the 1940s, she was widely recognised for her foundational work on heredity. She took up several fellowships at Cornell University, worked at Stanford University, and had a permanent research position at Cold Spring Harbor Laboratory, with membership at the National Academy of Sciences to boot. But things changed when she became interested in how genes were controlled – that is, how identical genetic sequences could express themselves differently at various stages of growth and in different parts of the same living structure.

    It was widely believed that a cell’s chromosomes consisted of a fixed line-up of genes, which were like a static blueprint or map for the organism. What McClintock found, instead, was that the genes in any particular cell could move around on the chromosome, as well as turn themselves on and off in response to environmental factors, other genes, and bits and pieces of the developmental soup. The response from the scientific community was initially hostile – not to the basic phenomenon McClintock had discovered, but to the complex, interwoven picture of biological systems that she developed on the back of the discovery. For 20 years, McClintock was forced to switch gears completely, working instead on the origins of maize. It wasn’t until the 1970s that her peers came around, and she was duly awarded the Nobel Prize in 1983. ‘One must await the right time for conceptual change,’ she wrote in a shoulder-shrugging letter to a fellow geneticist in 1973.

    Even from a position of scientific respectability, ‘maverick’ thinking such as McClintock’s can involve decades in the academic hinterlands. Modern science might be a high-output activity, but it seems to lack an appreciation for the freewheeling inventiveness of its forebears. Surely, then, that’s why we need to cherish and preserve these mavericks, to make space for the visionary who is willing to swim against the tide, for the unappreciated genius who will break rank? Mavericks are outsiders by definition, and if science wants to challenge its own orthodoxies, it needs outsiders. Or so the story goes.

    Perhaps this tale already contains the germ of a solution: we should recover the ‘gentleman (and gentlewoman) scholar’, in the form of the modern-day entrepreneur, venture capitalist or Silicon Valley billionaire. In July 2012, a Californian businessman called Russ George, in collaboration with the Old Massett Village of the Haida Nation, arranged for 100 tonnes of iron sulphate to be dumped into the Pacific Ocean off the west coast of Canada. It was a massive experiment with a geoengineering technique called ocean fertilisation. The idea was to give a huge boost to the local phytoplankton, those tiny microorganisms that teem all over the Earth’s waters and need iron to photosynthesise. They would then act as a food source to help replenish local fish supplies, and also to create a carbon sink as the phytoplankton died and settled on the ocean floor. If this trial was successful, perhaps ocean fertilisation could be used on a wider scale to help with food stocks and reduce atmospheric carbon.

    The outcry was immediate. The International Maritime Organization said the experiment hadn’t met the necessary guidelines. The Canadian government executed a search warrant against George, seized his data, and suggested he had violated various UN moratoria. George defended his actions in the press and said he was suffering ‘under this dark cloud of vilification’ for ‘daring to go where none have gone before’. He claimed that, while the scientific establishment only talked about the catastrophic risks of climate change, he was the one taking action. George took on the mantle of the maverick with pride.

    What’s the lesson of this story? As we’ve said, science needs maverick thinking, but it’s very bad at accommodating it. So shouldn’t we just leave it to those with the will, bravery and ingenuity – as well as the hubris, privilege and cash – to simply get on with the job outside of those conservative institutions? Surely ambitious leaders backed by well-funded private companies can step in and lead the charge on ‘moonshot’ thinking. After all, who better to steer technological and scientific development towards a brighter future than the inventors and owners of those same innovations? Can we put our faith in the Russ Georges of this world?

    Well, to put it bluntly – no.

    One reason for skepticism boils down to the basic architecture of the maverick story. The maverick is an archetype from a narrative of human progress in which specific individuals – iconoclasts, geniuses – are what shapes history’s trajectory. Yes, there’s something very satisfying about the tale of the solitary battler going it alone against the ignorant mainstream, and finally triumphing over orthodox opinion. That’s a version of the Great Man view of history (and, McClintock notwithstanding, such stories in science are almost always about men). But are these narratives accurate? How can we be sure that they’re true stories, as opposed to merely good ones?

    In fact, economic, environmental and socio-political factors – rather than unique individuals – are often better at explaining the origin of scientific innovations. Take the venerated maverick Galileo, who played an important role in overturning Ptolemaic astronomy. In addition to his astronomical and mathematical prowess, we might also point to major advances in lens-grinding, which produced much better telescopes; to the spread of printing, which enabled Galileo to propagate his ideas (thanks to his undoubted genius for propaganda); and to the political and religious context of reformation and counter-reformation Europe, which led to a much more diverse and open intellectual climate (even if Galileo ultimately ran up against the Catholic Church). On these accounts, the players who act out history’s drama become much less relevant. Instead, we should pay much more attention to the historical backdrop – and we shouldn’t be so quick to think that the solution to our problem is a few good mavericks.

    There’s also a worry about the kind of mavericks we’re listening to. Most of those floating around today are wealthy, white and male. That’s not surprising: to be a successful maverick, chances are you’re already pretty privileged. Why? Well, you need a sense of entitlement and the confidence to take on the mainstream. You need the emotional and physical support to enable risk-taking. You need credentials so people will pay attention to you. Realistically, only a small minority of people will be lucky enough to acquire all these things, and those people are likely to be drawn from a pretty narrow band of society. From this perspective, it’s no surprise that the folk who’ve garnered most of the accolades for their individual contributions to science have been gentlemen scholars.

    The upshot is that mavericks aren’t diverse. That’s not only unfair as a matter of justice – it’s also dangerously limiting if you assess science on its own terms. Most obviously, it gives you a shallower pool of potential researchers. Any structural features that block people from thinking independently, and that have nothing to do with how good they are as scientists, constrain our scientific investigations. Homogeneity produces suboptimal science. This is not just a matter of the quantity of research, but also its breadth and quality. It seems plain to me that your background, experiences and personal history affect the kinds of ideas you have – and so if we want a diversity of ideas, we need a diversity of people.

    A classic example comes from palaeoanthropology, the study of human evolution. In the 1960s, the male-dominated field was fixated on a particular narrative about what drove human evolution, a narrative exemplified by the ‘Man the Hunter’ conference at the University of Chicago in 1966. Human evolution was about hunting: a meat-based diet supplied the energy for bigger brains, while the cooperation and cognitive acumen that the hunt demanded drove that growth, and so forth. And indeed, the presumed masculine ‘hunting’ side of the equation was firmly emphasised over the presumed feminine ‘gathering’ side.

    As women entered the field, however, they poked holes in this picture. They suggested that hunting perhaps wasn’t so central to human subsistence – at best, the occasional bounty of meat from big game was a supplement to the steady fare provided by gathering. Further, emphasis shifted to the needs of childbirth: cooperation could be explained by the fact that the longer childhood that goes along with increased brain size required collective child-raising, provisioning and protection. The anthropologist Kristen Hawkes at the University of Utah put forward the grandmother hypothesis, which noted the near-unique long post-menopausal life of human women, and suggested that non-breeding women – grandmas – played a crucial role in maintaining well-functioning social groups. Theoretically, it would have been possible for men to come up with these ideas. And it’s true that new technologies and finds also played their part in diversifying our vision of human evolution. But the experiences and perspectives of women definitely helped these new theories to flourish.

    Inevitably, if it falls to a small set of people to shape how science and technology develops, we’re likely to make decisions that protect the interests of that narrow group. Which brings me to a final criticism: trusting in anti-establishment mavericks makes us very vulnerable to their idiosyncrasies. We have very little control over the whims of our heroes (male or female); they could very well pick the best, most important and carefully thought-out application of their time and money. Or they might not. It is their time and money, after all. They’re not ultimately accountable to the people they affect. No matter how benevolent the intentions, if wealthy white men are the only people who can be mavericks, that’s bad news for the underprivileged.

    The international aid and development sector is regrettably full of sobering examples of what can happen when the gap between ‘helper’ and ‘helped’ is too wide; the ‘voluntourism’ industry, which sends students and travellers from wealthy nations to help out with development projects abroad, has been roundly criticised for being inefficient and even exploitative. Good intentions without the right knowledge and perspective often do more harm than good. It seems to me that what goes for aid likely goes for technology and science, too. Without a range of people from different backgrounds, these fields will develop in ways that entrench existing inequality, inadvertently or not.

    Paradoxically, then, I don’t think the solution to the lack of creative thinking is to look for more mavericks. Sure, we want people who battle received opinion, who resist old truths. But instead of ‘maverick’ signalling someone with the grit to swim against the scientific tide, perhaps speculative, risky thinking should simply become more central to science itself – part of going with the tide, not against it.

    That’s not easy. Remember what we said about gate-keeping, peer review and betting on different avenues of research. These all play important roles in maintaining scientific authority, and can be enormously productive. But they’re productive only in certain contexts. I’m not suggesting that we give up these things entirely; rather, what we need to do is to relinquish the idea that there’s one single good way to organise scientific communities.

    One area that’s in dire need of a revamp – and that’s rife with accusations of crankery – is a field that’s dear to my heart: the study of the existential risk from emerging technologies and scientific innovations. Think rogue artificial intelligence, deadly superbugs, geoengineering gone wrong, an asteroid colliding with Earth. These perils to humanity fall into what the legal scholar Jonathan B Wiener at Duke University in North Carolina calls the ‘tragedy of the uncommons’: we’re all remarkably bad at thinking about large-scale, low-probability, high-impact events.

    But think we must. Even if the chances of extinction-level events are minimal, if there are actions we can take now to reduce the likelihood of their occurrence, we should. And this requires a science to help address those threats.

    Because of the inherent conservatism of science, however, such speculative research programmes are often seen as doomsday propheteering. Most respectable researchers won’t even get close for fear of falling into a reputation trap. This potentially stops scientists from considering some of the more unlikely – but possible and high-consequence – upshots of their own research. Admittedly, the chances of civilisation-squashing events are hard to calculate or even express, and perceptions and predictions differ wildly. But again, just because existential risk is hard to study, that doesn’t mean we shouldn’t try. It’s surely worth devoting even just a little of our energy to mapping out safer futures for our species and our planet.

    The diversity question looms particularly large here. Just because existential risks are everyone’s risks, it doesn’t mean that underprivileged groups count for any less. Consider the possibility of geoengineering as a means of preventing catastrophic climate change. Lower-lying, and usually poorer, countries have been the first to experience the impact of changing weather patterns and rising sea levels, so the perceived urgency of taking action is unevenly distributed. But to be effective, geoengineering will need to be wide-ranging, and probably very dangerous – so who gets to decide which risks we ought to run? When is the threat of climate change sufficient to take the chance on an unpredictable new technology? Answering these questions is well above my pay grade, but if the people making these choices are drawn from a particular – and privileged – bit of society, the results are unlikely to be good for everyone else.

    In areas of science where we’re pretty confident about having unearthed the most fruitful lines of enquiry, focusing our research efforts is less problematic. But in existential risk, we don’t even know what the most useful questions are. When the stakes are so high, and the field is so daunting and unclear, that’s precisely the moment to get creative. Instead of having platoons of scientists thinking about the same kind of problem in the same kind of way, now is when we should be spreading them out across the whole landscape of research. When we don’t know what’s good and what’s bad, putting too many eggs into a small number of baskets is a bad idea.

    How science works now isn’t necessarily how science will always work, or how it should work. The rules aren’t handed down in stone from the mountain top; rather, they’re painfully and slowly built through decades of hard work, hard thought, and more than a little happenstance, power-play and other features typical of any human endeavour. Scientists are human, and as such they’ll respond to incentives. If we want a less conservative science, then, we should ‘tweak’ the system by changing the allocation of incentives.

    For example, we might bring lotteries into science funding to boost the number of risky research bets. If referees have less power over who gets funded, those looking for money won’t have to work so hard to please those referees. We might also set up more institutions – well-supported, well-funded and hopefully high-status – to encourage ‘exploratory’ research. Perhaps we also need more recognition of the fact that scientific success and significance can’t be summed up in a single measure.
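    One way such a funding lottery could work is a two-stage scheme: referees first screen out clearly non-viable proposals, then winners are drawn at random from the remaining pool. A minimal sketch of the idea in Python – the proposal names, scores and threshold here are purely illustrative, not any agency’s actual process:

```python
import random

def funding_lottery(proposals, budget, viability_threshold=0.5, seed=None):
    """Two-stage funding lottery: referees triage, then chance decides.

    proposals: list of (title, referee_score) pairs, scores in [0, 1].
    budget: number of grants to award.
    Referees only gate entry; they no longer rank the winners, so an
    unconventional-but-viable proposal keeps the same odds as an
    orthodox one.
    """
    rng = random.Random(seed)
    # Stage 1: triage -- keep anything referees deem basically sound.
    pool = [title for title, score in proposals if score >= viability_threshold]
    # Stage 2: draw winners uniformly at random from the viable pool.
    rng.shuffle(pool)
    return pool[:budget]

proposals = [
    ("Incremental follow-up study", 0.9),
    ("Risky exploratory idea", 0.6),
    ("Methodologically unsound plan", 0.3),
    ("Orthodox but solid project", 0.8),
]
winners = funding_lottery(proposals, budget=2, seed=42)
print(winners)  # two of the three viable proposals, chosen by chance
```

    The point of the design is that referees can no longer push a merely unfashionable idea below a fashionable one: once a proposal clears the soundness bar, its fate no longer depends on pleasing the panel.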

    These ideas could be reflected in the standards of journals, reviewers, funding bodies and hiring committees. Scientific findings could still be required to meet a standard of significance, but we should expand what that looks like: from the ‘probably wrong but probably productive’, to the ‘likely right but only in specific circumstances’, to the ‘imaginative and the opening up new areas of research’, and so on. Plus, if science is going to get less conservative and more wacky, we need to think about insulating the more outré research from public misunderstanding or even misuse – we’ll have to get better at communicating and contextualising scientific data.

    These are all extremely tentative suggestions. A lot of work needs to be done to reimagine the institutions that control, shape and enable science to thrive. But first we need to recognise that our very romanticism about mavericks might be the thing that’s holding us back from unearthing more audacious ideas. You shouldn’t have to be a maverick in order to think like one: to be speculative, weird and risky. In some areas, maverick thinking should be the status quo. We don’t need to reincarnate idiosyncratic gentlemen scholars such as Newton, Darwin and Buffon. Instead, let’s try to bake their kind of thinking into the structures of modern science.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Since 2012, Aeon has established itself as a unique digital magazine, publishing some of the most profound and provocative thinking on the web. We ask the big questions and find the freshest, most original answers, provided by leading thinkers on science, philosophy, society and the arts.

    Aeon has three channels, and all are completely free to enjoy:

    Essays – Longform explorations of deep issues written by serious and creative thinkers

    Ideas – Short provocations, maintaining Aeon’s high editorial standards but in a more nimble and immediate form. Our Ideas are published under a Creative Commons licence, making them available for republication.

    Video – A mixture of curated short documentaries and original Aeon productions

    Through our Partnership program, we publish pieces from university research groups, university presses and other selected cultural organisations.

    Aeon was founded in London by Paul and Brigid Hains. It now has offices in London, Melbourne and New York. We are a not-for-profit, registered charity operated by Aeon Media Group Ltd. Aeon is endorsed as a Deductible Gift Recipient (DGR) organisation in Australia and, through its affiliate Aeon America, registered as a 501(c)(3) charity in the US.

    We are committed to big ideas, serious enquiry and a humane worldview. That’s it.

  • richardmitnick 12:02 pm on September 19, 2017 Permalink | Reply
    Tags: aeon, To find aliens, we must think of life as we don’t know it

    From aeon: “To find aliens, we must think of life as we don’t know it” 



    Ramin Skibba

    Jupiter’s moon, Europa, is believed to conceal a buried ocean. Photo NASA/JPL-Caltech/SETI Institute

    From blob-like jellyfish to rock-like lichens, our planet teems with such diversity of life that it is difficult to recognise some organisms as even being alive. That complexity hints at the challenge of searching for life as we don’t know it – the alien biology that might have taken hold on other planets, where conditions could be unlike anything we’ve seen before. ‘The Universe is a really big place. Chances are, if we can imagine it, it’s probably out there on a planet somewhere,’ said Morgan Cable, an astrochemist at the Jet Propulsion Laboratory in Pasadena, California. ‘The question is, will we be able to find it?’

    For decades, astronomers have come at that question by confining their search to organisms broadly similar to the ones here. In 1976, NASA’s Viking landers examined soil samples on Mars, and tried to animate them using the kind of organic nutrients that Earth microbes like, with inconclusive results.

    NASA/Viking 1 Lander

    Later this year, the European Space Agency’s ExoMars Trace Gas Orbiter will begin scoping out methane in the Martian atmosphere, which could be produced by Earth-like bacterial life.

    ESA/ExoMars Trace Gas Orbiter


    NASA’s Mars 2020 rover will likewise scan for carbon-based compounds from possible past or present Mars organisms.

    NASA Mars 2020 rover schematic

    NASA Mars 2020 rover depiction

    But the environment on Mars isn’t much like that on Earth, and the exoplanets that astronomers are finding around other stars are stranger still – many of them quite unlike anything in our solar system. For that reason, it’s important to broaden the search for life. We need to open our minds to genuinely alien kinds of biological, chemical, geological and physical processes. ‘Everybody looks for “biosignatures”, but they’re meaningless because we don’t have any other examples of biology,’ said the chemist Lee Cronin at the University of Glasgow.

    To open our minds, we need to go back to basics and consider the fundamental conditions that are necessary for life. First, it needs some form of energy, such as from volcanic hot springs or hydrothermal vents. That would seem to rule out any planets or moons lacking a strong source of internal heat. Life also needs protection from space radiation, such as an atmospheric ozone layer. Many newly discovered Earth-size worlds, including ones around TRAPPIST-1 and Proxima Centauri, orbit red dwarf stars whose powerful flares could strip away a planet’s atmosphere.

    The TRAPPIST-1 star, an ultracool dwarf, is orbited by seven Earth-size planets (NASA).

    ESO Belgian robotic Trappist National Telescope at Cerro La Silla, Chile interior

    ESO Belgian robotic Trappist-South National Telescope at Cerro La Silla, Chile

    Alpha Centauri A and B, with Proxima Centauri, 27 February 2012. Image: Skatebiker

    Studies by the James Webb Space Telescope (JWST), set to launch next year, will reveal whether we should rule out these worlds, too.

    NASA/ESA/CSA Webb Telescope annotated

    Finally, everything we know about life indicates that it requires some kind of liquid solvent in which chemical interactions can lead to self-replicating molecules. Water is exceptionally effective in that regard. It facilitates making and breaking chemical bonds, assembling proteins or other structural molecules, and – for an actual organism – feeding and getting rid of waste. That’s why planetary scientists currently focus on the ‘habitable zone’ around stars, the locations where a world could have the right temperature for liquid water on its surface.
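    The ‘habitable zone’ idea can be put in rough numbers: the zone’s distance from the star scales with the square root of the star’s luminosity. The boundary constants below are approximate, Kasting-style values chosen for illustration; real limits depend heavily on a planet’s atmosphere:

```python
import math

def habitable_zone_au(luminosity_solar):
    """Approximate habitable-zone boundaries (in AU) for a star of the
    given luminosity (in solar units).  The inner edge receives roughly
    1.1x Earth's insolation, the outer edge roughly 0.53x -- crude
    constants in the spirit of classic habitable-zone estimates."""
    inner = math.sqrt(luminosity_solar / 1.1)
    outer = math.sqrt(luminosity_solar / 0.53)
    return inner, outer

# The Sun: roughly 0.95-1.37 AU, comfortably bracketing Earth's orbit.
print(habitable_zone_au(1.0))

# A dim red dwarf like Proxima Centauri (L ~ 0.0017 L_sun) has a zone
# only a few hundredths of an AU wide -- which is why its planets huddle
# so close to the star, within reach of its flares.
print(habitable_zone_au(0.0017))
```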

    These constraints still leave a bewildering range of possibilities. Perhaps other liquids could take the place of water. Or a less exotic possibility: maybe biology could arise in the buried ocean on an ice-covered alien world. Such a setting could offer energy, protection and liquid water, yet provide almost no outward sign of life, making it tough to detect. For planets around other stars, we simply do not know enough yet to say what is (or is not) happening there. ‘It’s difficult to imagine that we could definitively find life on an exoplanet,’ conceded Jonathan Lunine, a planetary scientist at Cornell University. ‘But the outer solar system is accessible to us.’

    The search for exotic life therefore must begin close to home. The moons of Saturn and Jupiter offer a test case of whether biology could exist without an atmosphere. Jupiter’s Europa and Saturn’s Enceladus both have inner oceans and internal heat sources. Enceladus spews huge geysers of water vapour from its south pole; Europa appears to puff off occasional plumes as well. Future space missions could fly through the plumes and study them for possible biochemicals. NASA’s proposed Europa lander, which could launch in about a decade, could seek out possible microbe-laced ocean water that seeped up or snowed back down onto the surface.

    An artist’s concept of a Europa lander, which would look for evidence of past or present life on the icy moon of Jupiter during a 20-day mission on the surface. Credit: NASA/JPL-Caltech

    Meanwhile, another Saturn moon, Titan, could tell us whether life can arise without liquid water. Titan is dotted with lakes of methane and ethane, filled by a seasonal hydrocarbon rain. Lunine and his colleagues have speculated that life could arise in this frigid setting. Several well-formulated (but as-yet unfunded) concepts exist for a lander that could investigate Titan’s methane lakes, looking for microbial life.

    For the motley bunch of exoplanets that have no analog in our solar system, however, scientists have to rely on laboratory experiments and sheer imagination. ‘We’re still looking for the basic physical and chemical requirements that we think life needs, but we’re trying to keep the net as broad as possible,’ Cable said. Exoplanet researchers such as Sara Seager at the Massachusetts Institute of Technology and Victoria Meadows at the University of Washington are modelling disparate types of possible planetary atmospheres and the kinds of chemical signatures that life might imprint onto them.

    Now the onus is on NASA and other space agencies to design instruments capable of detecting as many signs of life as possible. Most current telescopes access only a limited range of wavelengths. ‘If you think of the spectrum like a set of venetian blinds, there are only a few slats removed. That’s not a very good way to get at the composition,’ Lunine said. In response, astronomers led by Seager and Scott Gaudi of the Ohio State University have proposed the Habitable Exoplanet Imaging Mission (HabEx) for NASA in the 2030s or 2040s. It would scan exoplanets over a wide range of optical and near-infrared wavelengths for signs of oxygen and water vapour.

    Casting a wide search for ET won’t be easy and it won’t be cheap, but it will surely be transformative. Even if astrobiologists find nothing, that knowledge will tell us how special life is here on Earth. And any kind of success will be Earth-shattering. Finding terrestrial-style bacteria on Mars would tell us we’re not alone. Finding methane-swimming organisms on Titan would tell us, even more profoundly, that ours is not the only way to make life. Either way, we Earthlings will never look at the cosmos the same way again.

    See the full article here .


  • richardmitnick 11:39 am on June 30, 2017 Permalink | Reply
    Tags: aeon

    From aeon: “In the dark” 



    Alexander B Fry

    Photomultiplier array at LUX, SURF, South Dakota. Photo courtesy of Luxdarkmatter.org

    Dark matter is the commonest, most elusive stuff there is. Can we grasp this great unsolved problem in physics?

    Lux Dark Matter Experiment

    SURF building in Lead SD USA

    LUX Xenon experiment at SURF, Lead, SD, USA

    I’m sitting at my desk at the University of Washington trying to conserve energy. It isn’t me who’s losing it; it’s my computer simulations. Actually, colleagues down the hall might say I was losing it as well. When I tell people I’m working on speculative theories about dark matter, they start to speculate about me. I don’t think everyone who works in the building even believes in it.

    In presentations, I point out how many cosmological puzzles it helps to solve. Occam’s Razor is my silver bullet: the fact that just one posit can explain so much. Then I talk about the things that standard dark matter doesn’t fix. There don’t seem to be enough satellite galaxies around our Milky Way. The inner structure of small galaxies doesn’t match predictions. I invoke Occam’s Razor again and argue that you can resolve these issues by adding a weak self-interaction to standard dark matter, a feeble scattering pattern when its particles collide. Then someone will ask me if I really believe in all this stuff. Tough question.

    The world we see is an illusion, albeit a highly persistent one. We have gradually got used to the idea that nature’s true reality is one of uncertain quantum fields; that what we see is not necessarily what is. Dark matter is a profound extension of this concept. It appears that the majority of matter in the universe has been hidden from us. That puts physicists and the general public alike in an uneasy place. Physicists worry that they can’t point to an unequivocal confirmed prediction or a positive detection of the stuff itself. The wider audience finds it hard to accept something that is necessarily so shadowy and elusive. The situation, in fact, bears an ominous resemblance to the aether controversy of more than a century ago.

    In the late-1800s, scientists were puzzled at how electromagnetic waves (for instance, light) could pass through vacuums. Just as the most familiar sort of waves are constrained to water — it’s the water that does the waving — it seemed obvious that there had to be some medium in which electromagnetic waves were ripples. Hence the notion of ‘aether’, an imperceptible field that was thought to permeate all of space.

    The American scientists Albert Michelson and Edward Morley carried out the most famous experiment to probe the existence of aether in 1887. If light needed a medium to propagate, they reasoned, then the Earth ought to be moving through this same medium. They set up an ingenious apparatus to test the idea: a rigid optics table floating on a cushioning vat of liquid mercury such that the table could rotate in any direction. The plan was to compare light beams travelling in different relative directions, as the apparatus rotated or as the Earth swung around the sun. A beam travelling against the background aether should be slowed relative to one travelling across it, shifting the interference fringes where the beams recombined. Six months later, with the Earth on the opposite leg of its orbit, the shift should reverse. But to the surprise of many, the fringes stayed put no matter what direction the beams travelled in. There was no sign of the expected medium. Aether appeared to be a mistake.
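    The effect the aether hypothesis predicted was small but well within the apparatus’s reach. On rotating the interferometer by 90 degrees, the expected fringe shift is roughly 2Lv²/(λc²); plugging in the experiment’s approximate parameters (an effective optical path of about 11 metres from multiple reflections, Earth’s orbital speed, visible light) gives:

```python
# Predicted Michelson-Morley fringe shift under the aether hypothesis.
# Approximate historical parameters:
L = 11.0             # effective optical path length, metres
v = 3.0e4            # Earth's orbital speed, m/s (~30 km/s)
c = 3.0e8            # speed of light, m/s
wavelength = 5.0e-7  # visible light, ~500 nm

# Shift in interference fringes when the apparatus rotates 90 degrees:
fringe_shift = 2 * L * v**2 / (wavelength * c**2)
print(f"predicted shift: {fringe_shift:.2f} fringes")  # ~0.44 fringes
```

    A shift of about 0.4 of a fringe would have been easily visible; Michelson and Morley saw at most a few hundredths of a fringe, far below the prediction.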

    This didn’t rule out its existence in every physicist’s opinion. Disagreement about the question rumbled on until at least some of the aether proponents died. Morley himself didn’t believe his own results. Only with perfect hindsight is the Michelson-Morley experiment seen as evidence for the absence of aether and, as it turned out, confirmation of Albert Einstein’s more radical theory of relativity.

    Dark matter, dark energy, dark money, dark markets, dark biomass, dark lexicon, dark genome: scientists seem to add dark to any influential phenomenon that is poorly understood and somehow obscured from direct perception. The darkness, in other words, is metaphorical. At first, however, it was intended quite literally. In the 1930s, the Swiss astronomer Fritz Zwicky observed a cluster of galaxies, all gravitationally bound to each other and orbiting one another much too fast. Only the gravitational pull of a very large, unseen mass seemed capable of explaining why they did not simply spin apart. Zwicky postulated the presence of some kind of ‘dark’ matter in the most casual sense possible: he just thought there was something he couldn’t see. But astronomers have continued to find the signature of unseen mass throughout the cosmos. For example, the stars of galaxies also rotate too fast. In fact, it looks as if dark matter is the commonest form of matter in our universe.
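    The ‘rotating too fast’ argument can be made concrete with Newtonian gravity: a star on a circular orbit satisfies v = √(GM(r)/r), so if the luminous matter were all there is, orbital speeds in a galaxy’s outskirts should fall off as 1/√r. Observed rotation curves stay roughly flat instead, which forces the enclosed mass to keep growing with radius. A minimal sketch, using round illustrative numbers rather than a fit to any particular galaxy:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
KPC = 3.086e19      # kiloparsec, metres

def keplerian_speed(enclosed_mass_kg, radius_m):
    """Circular orbital speed due to the mass enclosed within radius."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

def implied_mass(flat_speed_ms, radius_m):
    """Enclosed mass required to sustain a flat rotation speed."""
    return flat_speed_ms**2 * radius_m / G

# Suppose ~1e11 solar masses of visible matter, almost all inside 10 kpc.
visible_mass = 1e11 * M_SUN
v10 = keplerian_speed(visible_mass, 10 * KPC)
v40 = keplerian_speed(visible_mass, 40 * KPC)
print(f"{v10/1e3:.0f} km/s at 10 kpc, {v40/1e3:.0f} km/s at 40 kpc")

# Visible matter alone predicts the speed halving between 10 and 40 kpc.
# A flat curve at the inner speed instead implies four times the visible
# mass within 40 kpc -- the signature of an extended dark halo:
print(f"{implied_mass(v10, 40 * KPC) / M_SUN:.1e} solar masses")
```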

    It is also the most elusive. It does not interact strongly with itself or with the regular matter found in stars, planets or us. Its presence is inferred purely through its gravitational effects, and gravity, vexingly, is the weakest of the fundamental forces. But gravity is the only significant long-range force, which is why dark matter dominates the universe’s architecture at the largest scales.

    In the past half-century, we have developed a standard model of cosmology that describes our observed universe quite well.

    The standard cosmological model (ΛCDM): cosmic energy-budget pie chart after Planck.

    In the beginning, a hot Big Bang caused a rapid expansion of space and sowed the seeds for fluctuations in the density of matter throughout the universe. Over the next 13.7 billion years, those density patterns were scaled up thanks to the relentless force of gravity, ultimately forming the cosmic scaffolding of dark matter whose gravitational pull suspends the luminous galaxies we can see.

    This standard model of cosmology is supported by a lot of data, including the pervasive radiation field of the universe, the distribution of galaxies in the sky, and colliding clusters of galaxies. These robust observations combine expertise and independent analysis from many fields of astronomy. All are in strong agreement with a cosmological model that includes dark matter. Astrophysicists who try to trifle with the fundamentals of dark matter tend to find themselves cut off from the mainstream. It isn’t that anybody thinks it makes for an especially beautiful theory; it’s just that no other consistent, predictively successful alternative exists. But none of this explains what dark matter actually is. That really is a great, unsolved problem in physics.

    So the hunt is on. Particle accelerators sift through data, detectors wait patiently underground, and telescopes strain upwards. The current generation of experiments has already placed strong constraints on viable theories. Optimistically, the nature of dark matter could be understood within a few decades. Pessimistically, it might never be understood.

    We are in an era of discovery. A body of well-confirmed theory governs the assortment of fundamental particles that we have already observed. The same theory allows the existence of other, hitherto undetected particles. A few decades ago, theorists realised that a so-called Weakly Interacting Massive Particle (WIMP) might exist. This generic particle would have all the right characteristics to be dark matter, and it would be able to hide right under our noses. If dark matter is indeed a WIMP, it would interact so feebly with regular matter that we would have been able to detect it only with the generation of dark matter experiments that are just now coming on stream. The most promising might be the Large Underground Xenon (LUX) experiment in South Dakota, the biggest dark matter detector in the world. The facility opened in a former gold mine this February and is receptive to the most elusive of subatomic particles. And yet, despite LUX’s exquisite sensitivity, the hunt for dark matter itself has been something of a waiting game. So far, the only particles to turn up in the detector’s trap are bits of cosmic noise: nothing more than a nuisance.

    The past success of standard paradigms in theoretical physics leads us to hunt for a single generic dark matter particle — the dark matter. Arguably, though, we have little justification for supposing that there is anything to be found at all; as the English physicist John D Barrow said in 1994: ‘There is no reason that the universe should be designed for our convenience.’ With that caveat in mind, it appears the possibilities are as follows. Either dark matter exists or it doesn’t. If it exists, then either we can detect it or we can’t. If it doesn’t exist, either we can show that it doesn’t exist or we can’t. The observations that led astronomers to posit dark matter in the first place seem too robust to dismiss, so the most common argument for non-existence is to say there must be something wrong with our understanding of gravity – that it must not behave as Einstein predicted. That would be a drastic change in our understanding of physics, so not many people want to go there. On the other hand, if dark matter exists and we can’t detect it, that would put us in a very inconvenient position indeed.

    But we are living through a golden age of cosmology. In the past two decades, we have discovered so much: we have measured variations in the relic radiation of the Big Bang, learnt that the universe’s expansion is accelerating, glimpsed black holes and spotted the brightest explosions ever in the universe. In the next decades, we are likely to observe the first stars in the universe, map nearly the entire distribution of matter, and hear the cataclysmic merging of black holes through gravitational waves. Even among these riches, dark matter offers a uniquely inviting prospect, sitting at a confluence of new observations, theory, technology and (we hope) new funding.

    The various proposals to get its measure tend to fall into one of three categories: artificial creation (in a particle accelerator), indirect detection, and direct detection. The last, in which researchers attempt to catch WIMPs in the wild, is where the excitement is. The underground LUX detector is one of the first in a new generation of ultra-sensitive experiments. It counts on the WIMP interacting with the nucleus of a regular atom. These experiments generally consist of a very pure detector target, such as pristine elemental germanium or xenon, cooled to extremely low temperatures and shielded from outside particles. The problem is that stray particles tend to sneak in anyway. Interloper interactions are carefully monitored. Noise reduction, shielding and careful statistics are the only ways to distinguish real dark-matter interaction events from false alarms.
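    The statistical side of that filtering can be sketched with a toy calculation (illustrative numbers only, not any experiment’s actual analysis): given an expected number of background events, how likely is it that background alone fakes the count you observed?

    ```python
    from math import exp, factorial

    def poisson_p_value(observed, expected_background):
        """P(N >= observed) for a Poisson background with the given mean:
        the chance that background alone produces this many events."""
        # Poisson tail: P(N >= k) = 1 - sum_{n < k} P(N = n)
        cumulative = sum(
            expected_background**n * exp(-expected_background) / factorial(n)
            for n in range(observed)
        )
        return 1.0 - cumulative

    # Illustrative: with 2 expected background events, seeing 10 would be
    # a roughly one-in-20,000 fluke, so a claimed signal needs scrutiny
    # of exactly this kind.
    print(poisson_p_value(10, 2.0))  # ≈ 4.6e-5
    ```

    In practice the collaborations model the background far more carefully, but the logic is the same: a detection claim is only as good as the probability that noise could mimic it.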

    Theorists have considered many possibilities for how the standard WIMP might interact with ordinary matter. The first generation of experiments has already ruled out so-called Z-boson-mediated scattering. What is left is Higgs-boson-mediated scattering, which would involve the same particle that the Large Hadron Collider in Geneva discovered in 2012.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    Higgs: always the last place you look.

    That implies a very weak interaction, but it would be perfectly matched to the current sensitivity threshold of the new generation of experiments.

    Then again, science is less about saying what is than what is not, and non-detections have placed relatively interesting constraints on dark matter. They have also, in a development that is strikingly reminiscent of the aether controversy, thrown out some anomalies that need to be cleared up. Using a different detector target from LUX, the Italian DAMA (short for ‘DArk MAtter’) experiment claims to have found an annual modulation of their dark matter signal.

    DAMA-LIBRA at Gran Sasso

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    Detractors dispute whether they really have any signal at all. Just like with the aether, we expected to see this kind of yearly variation, as the Earth orbits the Sun, sometimes moving with the larger galactic rotation and sometimes against it. The DAMA collaboration measured such an annual modulation. Other competing projects (XENON, CDMS, Edelweiss and ZEPLIN, for example) didn’t, but these experiments cannot be compared directly, so we should probably reserve judgment.
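    The shape of the expected modulation is easy to sketch (a toy model with approximate velocity values and a simplified geometry, not the DAMA collaboration’s actual analysis): the detector’s speed through the galactic dark matter halo is the Sun’s speed plus a small yearly term from the Earth’s orbit, peaking around early June.

    ```python
    import math

    V_SUN = 232.0       # km/s, Sun's speed through the halo (approximate)
    V_EARTH = 29.8      # km/s, Earth's orbital speed
    INCLINATION = 60.0  # degrees between Earth's orbit and the galactic plane

    def detector_speed(day_of_year, peak_day=152):
        """Speed through the halo on a given day (km/s); peak_day ~ 2 June."""
        phase = 2.0 * math.pi * (day_of_year - peak_day) / 365.25
        return V_SUN + V_EARTH * math.cos(math.radians(INCLINATION)) * math.cos(phase)

    # The event rate tracks this speed, so it should be a few per cent
    # higher in early June (day 152) than in early December (day 335):
    print(detector_speed(152), detector_speed(335))
    ```

    The modulation amplitude is only a few per cent of an already tiny rate, which is why the competing experiments can disagree so stubbornly.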

    XENON1T at Gran Sasso

    LBNL SuperCDMS, at SNOLAB (Vale Inco Mine, Sudbury, Canada)

    Edelweiss Dark Matter Experiment, located at the Modane Underground Laboratory in France

    Lux Zeplin project at SURF

    Nature can be cruel. Physicists could take non-detection as a hint to give up, but there is always the teasing possibility that we just need a better experiment. Or perhaps dark matter will reveal itself to be almost as complex as regular matter. Previous experiments imposed quite strict limitations on just how much complexity we can expect — there’s no prospect of dark-matter people, or even dark-matter chemistry, really — but it could still come in multiple varieties. We might find a kind of particle that explains only a fraction of the expected total mass of dark matter.

    In a sense, this has already occurred. Neutrinos are elusive but widespread (60 billion of them pass through an area the size of your pinky every second). They hardly ever interact with regular matter, and until 1998 we thought they were entirely massless. In fact, neutrinos make up a tiny fraction of the mass budget of the universe, and they do act like an odd kind of dark matter. They aren’t ‘the’ dark matter, but perhaps there is no single type of dark matter to find.

    To say that we are in an era of discovery is really just to say that we are in an era of intense interest. Physicists say we would have achieved something if we determine that dark matter is not a WIMP. Would that not be a discovery? At the same time, the field is burgeoning with ideas and rival theories. Some are exploring the idea that dark matter has interactions, but we will never be privy to them. In this scenario, dark matter would have an interaction at the smallest of scales which would leave standard cosmology unchanged. It might even have an exotic universe of its own: a dark sector. This possibility is at once terrifying and entrancing to physicists. We could posit an intricate dark matter realm that will always escape our scrutiny, save for its interaction with our own world through gravity. The dark sector would be akin to a parallel universe.

    It is rather easy to tinker with the basic idea of dark matter when you make all of your modifications very feeble. And so this is what all dark matter theorists are doing. I have run with the idea that dark matter might have self-interactions and worked that into supercomputer simulations of galaxies. On the largest scales, where cosmology has made firm predictions, this modification does nothing, but on small scales, where the theory of dark matter shows signs of faltering, it helps with several issues. The simulations are pretty to look at and they make acceptable predictions. There are too many free parameters, though — what scientists call fine-tuning — such that the results can seem tailored to fit the observations. That’s why I reserve judgement, and you would be well advised to do the same.

    We will probably never know for certain whether dark matter has self-interactions. At best, we might put an upper limit on how strong such interactions could be. So, when people ask me if I think self-interacting dark matter is the correct theory, I say no. I am constraining what is possible, not asserting what is. But this is kind of disappointing, isn’t it? Surely cosmology should hold some deep truth that we can hope to grasp.

    One day, perhaps, LUX or one of its competitors might discover just what they are looking for. Or maybe on some unassuming supercomputer, I will uncover a hidden truth about dark matter. Regardless, such a discovery will feel removed from us, mediated as it will be through several layers of ghosts in machines. The dark matter universe is part of our universe, but it will never feel like our universe.

    Nature plays an epistemological trick on us all. The things we observe each have one kind of existence, but the things we cannot observe could have limitless kinds of existence. A good theory should be just complex enough. Dark matter is the simplest solution to a complicated problem, not a complicated solution to a simple problem. Yet there is no guarantee that it will ever be illuminated. And whether or not astrophysicists find it in a conceptual sense, we will never grasp it in our hands. It will remain out of touch. To live in a universe that is largely inaccessible is to live in a realm of endless possibilities, for better or worse.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:35 am on June 29, 2017 Permalink | Reply
    Tags: "Do we matter in the cosmos?", aeon,   

    From aeon: “Do we matter in the cosmos?” 



    Nick Hughes

    The Spaceship of the Imagination voyages out to distant galaxies and into the mysteries of DNA in the new cosmos. (Credit: Fox)

    Humanity is nothing more than a microscopic blip in the universe. But does that mean we are insignificant?

    Humanity occupies a very small place in an unfathomably vast Universe. Travelling at the speed of light – 671 million miles per hour – it would take us 100,000 years to cross the Milky Way.

    Milky Way NASA/JPL-Caltech /ESO R. Hurt

    But we still wouldn’t have gone very far. By recent estimates, the Milky Way is just one of 2 trillion galaxies in the observable Universe, and the region of space that they occupy spans at least 90 billion light-years.

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    If you imagine Earth shrunk down to the size of a single grain of sand, and you imagine the size of that grain of sand relative to the entirety of the Sahara Desert, you are still nowhere near to comprehending how infinitesimally small a position we occupy in space. The American astronomer Carl Sagan put the point vividly in 1994 when discussing the famous ‘Pale Blue Dot’ photograph taken by Voyager 1. Our planet, he said, is nothing more than ‘a mote of dust suspended in a sunbeam’.

    And that’s just the spatial dimension. The observable Universe has existed for around 13.8 billion years. If we shrink that span of time down to a single year, with the Big Bang occurring at midnight on 1 January, the first Homo sapiens made an appearance at 22:24 on 31 December. It’s now 23:59:59, as it has been for the past 438 years, and at the rate we’re going it’s entirely possible that we’ll be gone before midnight strikes again. The Universe, on the other hand, might well continue existing forever, for all we know. Sagan could have added, then, that our time on this mote of dust will amount to nothing more than a blip. In the grand scheme of things we are very, very small.
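    The arithmetic behind this ‘cosmic calendar’ is easy to reproduce (a sketch using the round numbers above):

    ```python
    # Compress the 13.8-billion-year age of the observable Universe
    # into a single calendar year, Big Bang at midnight on 1 January.
    UNIVERSE_AGE_YR = 13.8e9
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    # Each calendar second stands for roughly 437 real years, which is
    # why the final second has lasted about four centuries.
    years_per_second = UNIVERSE_AGE_YR / SECONDS_PER_YEAR

    def calendar_seconds_before_midnight(years_ago):
        """How many calendar seconds before midnight on 31 December an
        event that happened `years_ago` real years ago falls."""
        return years_ago / years_per_second

    # All of recorded human history (~5,000 years) fits in the final
    # dozen seconds of the year:
    print(calendar_seconds_before_midnight(5000))
    ```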

    For Sagan, the Pale Blue Dot underscores our responsibility to treat one another with kindness and compassion. But reflection on the vastness of the Universe and our physical and temporal smallness within it often takes on an altogether darker hue. If the Universe is so large, and we are so small and so fleeting, doesn’t it follow that we are utterly insignificant and inconsequential? This thought can be a spur to nihilism. If we are so insignificant, if our existence is so trivial, how could anything we do or are – our successes and failures, our anxiety and sadness and joy, all our busy ambition and toil and endeavour, all that makes up the material of our lives – how could any of that possibly matter? To think of one’s place in the cosmos, as the American philosopher Susan Wolf puts it in The Meanings of Lives (2007), is ‘…to recognise the possibility of a perspective … from which one’s life is merely gratuitous….’

    The sense that we are somehow insignificant seems to be widely felt. The American author John Updike expressed it in 1985 when he wrote of modern science that:

    “…We shrink from what it has to tell us of our perilous and insignificant place in the cosmos … our century’s revelations of unthinkable largeness and unimaginable smallness, of abysmal stretches of geological time when we were nothing, of supernumerary galaxies … of a kind of mad mathematical violence at the heart of the matter have scorched us deeper than we know….”

    In a similar vein, the French philosopher Blaise Pascal wrote in Pensées (1669):

    “…When I consider the short duration of my life, swallowed up in an eternity before and after, the little space I fill engulfed in the infinite immensity of spaces whereof I know nothing, and which know nothing of me, I am terrified. The eternal silence of these infinite spaces frightens me….”

    Commenting on this passage in Between Man and Man (1947), the Austrian-Israeli philosopher Martin Buber said that Pascal had experienced the “…uncanniness of the heavens…”, and thereby came to know ‘man’s limitation, his inadequacy, the casualness of his existence’. In the film Monty Python’s The Meaning of Life (1983), John Cleese and Eric Idle conspire to persuade a character, played by Terry Gilliam, to give up her liver for donation. Understandably reluctant, she is eventually won over by a song that sharply details just how comically inconsequential she is in the cosmic frame.

    Even the relatively upbeat Sagan wasn’t, in fact, immune to the pessimistic point of view. As well as viewing it as a lesson in the need for collective good will, he also argued that the Pale Blue Dot challenges ‘our posturings, our imagined self-importance, and the delusion that we have some privileged position in the Universe’.

    The pessimistic view, then, is that, because we occupy such a small and brief place in the cosmos, we and the things we do are insignificant and inconsequential. But is that right? Are we insignificant and inconsequential? And if we are, should we respond with despair and nihilism? These questions are paradigmatically philosophical, but they have received little attention from contemporary philosophers. To the extent that they address the question of whether we are cosmically insignificant at all, they have typically dismissed it as confused.

    The English moral philosopher Bernard Williams is representative of the dismissers. As he understands it, having significance from the cosmic point of view is the same thing as having objective value. Something has objective value when it is not only valuable to some person or other, but valuable independently of whether anyone judges it to be so – valuable, Williams might say, from a universal perspective. By contrast, something can be subjectively valuable even if it is not objectively valuable. Provided that someone finds a thing valuable, then it has subjective value to them, though not necessarily to the rest of us. Williams takes it to be a consequence of a naturalistic, atheistic worldview that nothing has objective value. In his posthumous essay ‘The Human Prejudice’ (2006), he argues that the only kind of value that exists is the subjective kind. I value Mozart’s Requiem. Maybe you do too. But even so, Williams would say, it is valuable only insofar as we judge it to be. Its value is not an independent fact lying out there, beyond the reach of our opinions, waiting to be uncovered.

    Since, according to Williams, to be significant from the cosmic point of view is to be objectively valuable, and there is no such thing as objective value, it follows that there is no such thing as cosmic significance. The very idea, he argues, is “…a relic of a world not yet thoroughly disenchanted….” In other words, of a world that still believes in the existence of God. Once we recognise that there is no such thing, he says, there is “…no other point of view except ours in which our activities can have or lack a significance….” The question of what is significant from the point of view of the cosmos is incoherent: one might as well ask what is significant from the point of view of a pile of rocks. The philosopher Simon Blackburn at the University of Cambridge puts it even more bluntly in Being Good (2001). When we ask if human life has meaning or significance, he simply responds: “To whom?”

    Is the whole worry about cosmic insignificance nothing more than a muddle then? Guy Kahane at the University of Oxford is one of the few contemporary philosophers to have written about these issues in detail. He disagrees. In Our Cosmic Insignificance (2013) he points out that if the naturalistic worldview does indeed rule out the possibility of anything having objective value, then it would still do so if the Universe were the size of a matchbox, or came into existence only moments ago. If, on the other hand, there is such a thing as objective value, then it would exist no less in an infinitely large, old and silent universe. Matters of cosmological size and scale don’t even come into the equation. Kahane thinks this is obvious. But if so, is it really plausible that we are making such an elementary error? Or is it more likely that there is something else driving our sense of cosmic insignificance?

    Kahane thinks that there is a better way of thinking about the matter. He disputes Williams’s claim that nothing has objective value: intelligent life, he argues, has it in spades (and little else comes close). But more importantly, the dismissers have misunderstood what it means for something to be significant or insignificant. Kahane argues that the significance of something is the product of two things: how valuable (or disvaluable) it is, but also how worthy it is of attention. As he points out, when one’s frame of reference expands to encompass more and more, the attention-worthiness of something within it, and so that thing’s significance within the frame of reference, tends to decrease. What’s significant from the point of view of your life – the birth of your child, perhaps – might be less significant, less noteworthy, from the point of view of the town you live in. And what’s significant from the point of view of the town you live in – the closure of the local hospital, let’s say – might be relatively insignificant from the point of view of the entire country. What’s significant from the point of view of the country could, in turn, merit little attention from the point of view of the entire world.

    The cosmic point of view encompasses literally everything in the Universe: the entirety of [spacetime], from edge to edge, and beginning to end. From that point of view, we are nothing more than a microscopic blip, physically and temporally speaking at least. And this, Kahane argues, is what gives rise to our sense of insignificance. Since the cosmic point of view encompasses so much, and the significance of things tends to diminish as the frame of reference expands, it is natural to think that we couldn’t possibly stand out as worthy of special attention within it; there is simply too much to compete with. If not, we conclude, then we must be insignificant.

    But, Kahane argues, this is all too quick. We mustn’t forget that significance is also a function of value. If, for some reason, human life stands out as a source of value compared with everything else, then even from the cosmic point of view we might be significant. A single diamond sitting on display in a huge empty warehouse might be small by comparison with its surroundings, but that doesn’t mean that it’s insignificant or that it merits no attention. Since, Kahane argues, the primary source of value is intelligent life, it follows that our cosmic significance depends on how much intelligent life there is out there. If the Universe is teeming with it, if we are just one diamond among millions or billions of others, many of which are just as large and bright, or more so, then we are indeed cosmically insignificant. If, however, we are the sole exemplars of intelligent life, then we are of immense cosmic significance: we are a single diamond shining forth, surrounded by nothingness, like an incandescent beacon of light in the Stygian night. The rub, of course, is that we currently cannot tell: we don’t know what, or rather who, we share the cosmos with.

    Kahane’s view, then, is that intelligent life is the primary source of value, and since only that which has value is significant, whether or not we matter depends on the quantity of intelligent life in the Universe. If it is abundant, then we are insignificant and matter little. But if we alone exemplify it, then we are of immense significance even from the supremely broad perspective of the entire Universe.

    Is that right? I think that, like Williams and the dismissers, Kahane has misdiagnosed the issue. It is a striking fact that none of the passages quoted earlier expressing the idea that we are cosmically insignificant makes any reference to the possibility that we are only one among many communities of intelligent life spread throughout the Universe. If that was the crux of the matter, wouldn’t we at least expect it to be mentioned? In fact, wouldn’t we expect it to be front and centre? Yet it is nowhere. Not in the passages quoted, nor, as far as I know, anywhere else. Instead, what we find are evocative descriptions of the minute location we occupy in space and the disheartening brevity of our temporal span. Worse still, when considering the possibilities that Kahane describes for our significance, it is easy to remain unmoved. Speaking for myself, insofar as reflection on our tiny place in the Universe leads me to the feeling that we are unimportant and that nothing we do matters, that feeling remains unwavering whether or not I imagine a universe full of life, or a vast barren wasteland. In fact, if anything, things get worse when I contemplate the second possibility. I suspect I am not alone in feeling this way.

    A better diagnosis might be that when we reflect on our place in the Universe we find ourselves wanting on an altogether different scale of significance. To see what I have in mind, notice that something can be significant while being neither valuable nor disvaluable. Suppose that a group of meteorologists is trying to establish whether a rapidly developing tropical storm will turn into a hurricane before it hits land. There is, it transpires, a large body of moist warm air 50 miles offshore, into which the storm will shortly collide. Moist warm air tends to intensify tropical storms. As a result, upon learning of its presence, the meteorologists conclude that the storm will indeed turn into a hurricane. This, let’s suppose, is exactly what happens. When explaining events to the public, it would be perfectly natural for the meteorologists to say that the formation of the body of air was significant in the chain of events that led to the storm turning into a hurricane. But there need not be any suggestion of value or disvalue here. The body of air, and, we may suppose, the hurricane itself, had no positive or negative value whatsoever. Only that which affects intelligent life has value or disvalue – or at least, only that which affects sentient life, if we want to include other species – and the land where the hurricane hit was wholly unpopulated; no human or animal concerns were affected. The body of air was significant, yet it doesn’t register anywhere on the value scale.

    In what sense was it significant then? The obvious answer is that it was causally significant in virtue of being one of the main causes of the storm turning into a hurricane. Clearly, causal significance needn’t involve value. The causal significance of something is the result of the degree of influence it has within a causal chain. The more influence it has, the more significant it is. The less influence it has, the less significant it is. The presence of moist warm air offshore was significant because it played an important role in the tropical storm developing into a hurricane. Perhaps, unrelatedly, a forest fire started on the other side of the world at the same time. If so, that fact was not significant – at least for the meteorologists – since it made no difference whatsoever to the chain of events that they were interested in.

    I think that, contra Kahane, it is a sense of causal, rather than value, insignificance that is central to the sense that we are cosmically insignificant. Recognition of the tiny place we occupy in the Universe throws a stark light on our distinct lack of causal power. Those of us who are thoroughly disenchanted know that almost all of space is completely beyond our control, and that, living on no more than a mote of dust, we will be borne away by the slightest breeze that happens to drift our way. Worse still, we know that once we have been snuffed out, the Universe will continue to roll on as though nothing had happened. Causally speaking, we really are insignificant from the point of view of the whole Universe.

    Why think that it’s the comprehension of our causal insignificance that drives the pessimistic line of thought? Well, for one thing, it makes sense of the fact that our sense of insignificance can easily remain unshaken upon considering the possibility that we are alone in the Universe. Whether the Universe is teeming with intelligent life, or almost wholly barren, makes no difference whatsoever to our degree of causal influence within it. But more importantly, the primary source of our concern regarding our cosmic insignificance is, it seems, that we occupy a very small place in the Universe. Given this, it presumably makes sense to think that, were we not so small, we would correspondingly not feel so insignificant.

    The causal-powers explanation (as we might call it) makes sense of this. Holding fixed our causal powers as they actually are, the smaller the Universe is, the greater our size and the degree of our causal influence within it; and the larger the Universe, the smaller our size and the lesser our degree of causal influence. This might explain why the sense that we are cosmically insignificant is a largely modern phenomenon. With a few exceptions, most of our predecessors had no inkling of the coming revelations of astronomy, and believed that the Earth was at the centre of a rather small universe. There is little evidence that they felt insignificant in the way that we are liable to. If the causal-powers explanation is correct, this should come as no surprise: they might have seen themselves as wielding a considerable degree of causal power.

    The causal-powers explanation also makes sense of a related hypothetical scenario. Suppose that, rather than imagining a situation in which our causal powers are held fixed and the size of the Universe is altered, we instead hold fixed the size of the Universe as it actually is, and instead imaginatively alter our causal powers. Suppose, then, that we were to have control over the trajectory of distant stars, and the future of far-flung galaxies; that we could bend and warp the course of the Universe to fit our purposes, and so on. Would we still feel cosmically insignificant? I doubt it. Probably we would feel rather pleased with ourselves.

    The causal-powers explanation might also explain at least some of the appeal of theism. Religious believers sometimes say that their faith gives significance to their lives, and fear that a life without God would be meaningless. One way in which this might seem to be true – though presumably this is not all they have in mind – is that through allegiance to a supremely powerful being they are able to share in its power. If it worked, prayer would open the door to the possibility of causal powers far outstripping those we can effect in the corporeal realm.

    Still, for the disenchanted, it is hard to deny that our causal powers are insignificant from the point of view of the entire Universe. But should we be troubled by this? Should it lead us to nihilism and despair? I don’t think so. To see why, we need to go back to the issue of value and draw another distinction. Some of the things that we care about – happiness and human flourishing, for example – are intrinsically valuable to us. That is to say, we find them to be valuable in themselves. That doesn’t necessarily mean that they’re objectively valuable. Maybe they are, maybe they’re not (we need not go along with Williams and Kahane in taking a stand on that matter). What it does mean, however, is that we value them for their own sake. But not everything that has value is intrinsically valuable. Some things are only instrumentally valuable – valuable only as a means to an end. Cash, for example, doesn’t have any intrinsic value – it’s just paper with ink printed on it – but it is instrumentally valuable, since you can use it to acquire other things of value. Perhaps not happiness, if the cliché is to be believed, but comfort at least.

    We tend to treat power as though it is intrinsically valuable. We seek it out and covet it, quite irrespective of how we might wield it and what it might get us. One need only look at the history of totalitarian politics to recognise this tendency in its most grotesque form. But power isn’t intrinsically valuable, it’s only instrumentally valuable – valuable as a means to an end. And whether or not they are objectively valuable, the ends that matter to us, the things that we care about most – our relationships, our projects and goals, our shared experiences, social justice, the pursuit of knowledge, the creation and appreciation of art, music and literature, and the future and fate of ours and other species – do not depend to any considerable extent on our having control over a vast but largely irrelevant Universe. We might be distinctly lacking in power from the cosmic perspective, and so, in a sense, insignificant. But having such power and such significance wouldn’t make much of a difference anyway. To lament its lack and respond with despair and nihilism is merely a form of narcissism. Most of what matters to us is right here on Earth.

    See the full article here .


  • richardmitnick 8:54 am on June 22, 2017 Permalink | Reply
    Tags: aeon, , , , Center for High Angular Resolution Astronomy (CHARA) Array, , , Michigan Infra-Red Combiner (MIRC), Mount Wilson Observatry perched atop the San Gabriel Mountains outside Los Angeles CA USA, Mt Wilson 100 inch Hooker Telescope   

    From aeon: “How the face of a distant star reveals our place in the cosmos” 27 July, 2016, but worth a look 



    Rachael Roettenbacher
    27 July, 2016

    Courtesy Dr Rachael Roettenbacher, University of Michigan

    Mount Wilson Observatory, perched atop the San Gabriel Mountains outside Los Angeles, has been the site of some of the greatest expansions in human knowledge of the cosmos.

    Mt Wilson 100 inch Hooker Telescope, perched atop the San Gabriel Mountains outside Los Angeles, CA, USA

    It is here that, in 1924, Edwin Hubble proved the existence of galaxies beyond our own, and here that he also collected the clinching evidence that the Universe is expanding. Now Mount Wilson is home to another observational leap: bringing the stars into view, not as points of light but as evolving, dynamic suns every bit as tangible as our own.

    The essential tool for this breakthrough is interferometry, in which astronomers combine light from widely separated telescopes to create a virtual telescope with a diameter as large as that separation. This technique makes it possible to resolve details far too small to discern using a standard telescope. The first such observations of stars took place at Mount Wilson in the 1920s. Using a 20-foot interferometer (two small mirrors mounted 20 feet apart on the 100-inch Hooker reflector to effectively make a 20-foot telescope), Albert A Michelson and Francis G Pease managed to measure the angular size of stars other than the Sun for the first time. Their interferometer was powerful enough to measure only a few of the closest stars. Building a much larger device was beyond the practical engineering capabilities of the time.

    After that, the field fell dormant for many decades. In 1950, the astronomer Gerald E Kron mused on the possibility of resolving the surfaces of other stars but concluded that they are ‘too distant to be observed as resolved disks with optical equipment now available, or, probably, with optical equipment that will ever be available to us’. (He later managed to infer the presence of dark surface features on other stars, albeit indirectly.) With the recent rebirth of optical interferometry using distinct telescopes, though, the technology has progressed far beyond what Kron could imagine.

    Several such facilities are now operating, but Mount Wilson boasts the world’s longest optical interferometer: the Center for High Angular Resolution Astronomy (CHARA) Array. The CHARA Array is resolving the surfaces of nearby stars, providing unprecedented glimpses of the Sun’s neighbours.


    The CHARA Array consists of six one-metre telescopes arranged in a Y-shaped configuration, with baselines of various lengths up to 331 metres. Those six telescopes can be combined into 15 telescope pairs that fill in unique parts of the virtual 331-metre telescope with each observation. An instrument called the Michigan Infra-Red Combiner (MIRC), developed by John D Monnier and his group at the University of Michigan, can simultaneously combine light from all six telescopes to take full advantage of the Array. MIRC has previously been used to image the oblate (flattened) surfaces of rapidly rotating stars, circumstellar disks, and the expanding shell of a nova explosion.

    Using the CHARA Array and MIRC together, it is now possible to do what Kron thought impossible: directly image the spotted, active surface of distant stars. The job is still hugely taxing. Most stars are too small to resolve even with the current state-of-the-art technology. Creating a resolved image requires selecting the right targets. The stars have to appear bright and relatively large in the sky. They must have starspots – regions of magnetic activity, analogous to sunspots on the Sun – so that there are dark features for us to observe. Finally, the stars must spin quickly enough that we can watch them through a full rotation without the spots evolving too much.

    I was excited to take on the challenge as part of my doctoral dissertation. I chose as my target the primary member of the binary system zeta Andromedae, a star dimly visible to the naked eye in the autumn sky. Zeta And (as it is commonly called) is fairly near us (181 light years away) and is 16 times the radius of the Sun. It has an approximate prolate spheroid shape, akin to the shape of an American football, caused by the gravity of its close companion; it has also been shown via indirect imaging to host dark spots, including one on its visible pole, so I knew it would be a perfect target for my dissertation work. Getting a clear look at zeta And required a group of 14 collaborators, including my advisor (and MIRC’s creator), Monnier. We observed the star for as many nights as possible through a single stellar rotation, spanning 18 nights, in September 2013 at the CHARA Array. Combining all the data and mapping it on the rotating surface required a great deal of additional time and effort.
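The figures above are enough to check why zeta And is a feasible target, using the small-angle approximation. A sketch under stated assumptions (the solar radius and light-year values are standard constants; the 1.65 µm H-band wavelength is an assumption about MIRC's infrared band):

```python
import math
from itertools import combinations

R_SUN_M = 6.957e8   # solar radius in metres
LY_M = 9.4607e15    # one light year in metres

def angular_diameter_mas(radius_rsun: float, distance_ly: float) -> float:
    """Angular diameter on the sky, in milliarcseconds (small-angle approximation)."""
    rad = 2 * radius_rsun * R_SUN_M / (distance_ly * LY_M)
    return math.degrees(rad) * 3600e3

zeta_and = angular_diameter_mas(16, 181)  # roughly 2.7 mas across

# Six telescopes give 15 distinct pairs, as the article says
pairs = len(list(combinations(range(6), 2)))

# Resolution of the 331 m baseline, assuming an H-band wavelength of 1.65 um
chara_res = math.degrees(1.65e-6 / 331) * 3600e3  # about 1 mas
```

With a resolution element of about 1 mas and a disc about 2.7 mas across, zeta And spans a few resolution elements, enough to map starspots on its surface.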

    This May, we published [Nature] our triumphant result: the highest-resolution image ever of a star outside of our Solar System. We were able to detect the spot on the pole of zeta And, along with starspots that form with seemingly no pattern on the surface. The star’s behaviour is quite unlike that of the Sun, which forms sunspots only at very specific latitudes on its surface. Part of the reason for the difference is that zeta And is an older, evolved star with a different internal structure. Theoretical models suggest that much of zeta And’s interior outside of its core is convective, with hotter material rising and cooler material falling like a boiling pot of water on a stove; in contrast, only the Sun’s outermost layers behave that way. Zeta And’s 18-day spin is also significantly faster than the 27-day rotation period of the Sun.

    Our study of zeta And constrains theories attempting to link solar magnetism to that of other stars. It also offers an intriguing glimpse into the past. Evolutionary models indicate that the young Sun similarly had a thick convective layer and rotated more rapidly than it does today. By examining the spotted surface of zeta And, we get new insights into the solar activity that could have influenced the formation of the solar system 4.5 billion years ago, and also the subsequent development of life on Earth.

    Best of all, our mapping of zeta And is only the beginning. Planned upgrades to the CHARA Array and MIRC will make it possible to observe the surfaces of fainter stars, including ‘young solar analogs’ – that is, infant stars surrounded by disks that are currently adding mass to the star and forming new planets. By resolving a variety of types of stars and their features, we can constrain our theories on stellar structure, magnetism, formation and evolution.

    The power of interferometry is just beginning to be harnessed, and the images of zeta And demonstrate the great potential of this under-utilised technique.


    Four centuries ago, Galileo, an avid observer of sunspots, realised that the Milky Way is composed of ‘a mass of innumerable stars planted together in clusters’. Today, we are at last beginning to find out what those other stars really look like.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 6:32 am on June 14, 2017 Permalink | Reply
    Tags: aeon, Philosophy and Philosophers

    From aeon: “The idea of creating a new universe in the lab is no joke” 



    Zeeya Merali

    Artwork illustrating the concept of an alternate ‘bubble’ universe in which our universe (left) is not the only one. Some scientists think that bubble universes may pop into existence all the time, and occasionally nudge ours. NASA/JPL-Caltech/R. Hurt (IPAC)

    Physicists aren’t often reprimanded for using risqué humour in their academic writings, but in 1991 that is exactly what happened to the cosmologist Andrei Linde at Stanford University. He had submitted a draft article entitled Hard Art of the Universe Creation to the journal Nuclear Physics B. In it, he outlined the possibility of creating a universe in a laboratory: a whole new cosmos that might one day evolve its own stars, planets and intelligent life. Near the end, Linde made a seemingly flippant suggestion that our Universe itself might have been knocked together by an alien ‘physicist hacker’. The paper’s referees objected to this ‘dirty joke’; religious people might be offended that scientists were aiming to steal the feat of universe-making out of the hands of God, they worried. Linde changed the paper’s title and abstract but held firm over the line that our Universe could have been made by an alien scientist. ‘I am not so sure that this is just a joke,’ he told me.

    Fast-forward a quarter of a century, and the notion of universe-making – or ‘cosmogenesis’ as I dub it – seems less comical than ever. I’ve travelled the world talking to physicists who take the concept seriously, and who have even sketched out rough blueprints for how humanity might one day achieve it. Linde’s referees might have been right to be concerned, but they were asking the wrong questions. The issue is not who might be offended by cosmogenesis, but what would happen if it were truly possible. How would we handle the theological implications? What moral responsibilities would come with fallible humans taking on the role of cosmic creators?

    Theoretical physicists have grappled for years with related questions as part of their considerations of how our own Universe began. In the 1980s, the cosmologist Alex Vilenkin at Tufts University in Massachusetts came up with a mechanism through which the laws of quantum mechanics could have generated an inflating universe from a state in which there was no time, no space and no matter. There’s an established principle in quantum theory that pairs of particles can spontaneously, momentarily pop out of empty space. Vilenkin took this notion a step further, arguing [Physical Review D] that quantum rules could also enable a minuscule bubble of space itself to burst into being from nothing, with the impetus to then inflate to astronomical scales. Our cosmos could thus have been burped into being by the laws of physics alone. To Vilenkin, this result put an end to the question of what came before the Big Bang: nothing. Many cosmologists have made peace with the notion of a universe without a prime mover, divine or otherwise.

    At the other end of the philosophical spectrum, I met with Don Page, a physicist and evangelical Christian at the University of Alberta in Canada, noted for his early collaboration with Stephen Hawking [Physical Review D] on the nature of black holes. To Page, the salient point is that God created the Universe ex nihilo – from absolutely nothing. The kind of cosmogenesis envisioned by Linde, in contrast, would require physicists to cook up their cosmos in a highly technical laboratory, using a far more powerful cousin of the Large Hadron Collider near Geneva. It would also require a seed particle called a ‘monopole’ (which is hypothesised to exist by some models of physics, but has yet to be found). The idea goes that if we could impart enough energy to a monopole, it will start to inflate.

    Rather than growing in size within our Universe, the expanding monopole would bend spacetime within the accelerator to create a tiny wormhole tunnel leading to a separate region of space. From within our lab we would see only the mouth of the wormhole; it would appear to us as a mini black hole, so small as to be utterly harmless. But if we could travel into that wormhole, we would pass through a gateway into a rapidly expanding baby universe that we had created.

    We have no reason to believe that even the most advanced physics hackers could conjure a cosmos from nothing at all, Page argues. Linde’s concept of cosmogenesis, audacious as it might be, is still fundamentally technological. Page, therefore, sees little threat to his faith. On this first issue, then, cosmogenesis would not necessarily upset existing theological views.

    But flipping the problem around, I started to wonder: what are the implications of humans even considering the possibility of one day making a universe that could become inhabited by intelligent life? As I discuss in my book A Big Bang in a Little Room (2017), current theory suggests that, once we have created a new universe, we would have little ability to control its evolution or the potential suffering of any of its residents. Wouldn’t that make us irresponsible and reckless deities? I posed the question to Eduardo Guendelman, a physicist at Ben Gurion University in Israel, who was one of the architects of the cosmogenesis model back in the 1980s. Today, Guendelman is engaged in research that could bring baby-universe-making within practical grasp. I was surprised to find that the moral issues did not cause him any discomfort. Guendelman likens scientists pondering their responsibility over making a baby universe to parents deciding whether or not to have children, knowing they will inevitably introduce them to a life filled with pain as well as joy.

    Other physicists are more wary. Nobuyuki Sakai of Yamaguchi University in Japan, one of the theorists who proposed [Physical Review D] that a monopole could serve as the seed for a baby universe, admitted that cosmogenesis is a thorny issue that we should ‘worry’ about as a society in the future. But he absolved himself of any ethical concerns today. Although he is performing the calculations that could allow cosmogenesis, he notes that it will be decades before such an experiment might feasibly be realised. Ethical concerns can wait.

    Many of the physicists I approached were reluctant to wade into such potential philosophical quandaries. So I turned to a philosopher, Anders Sandberg at the University of Oxford, who contemplates the moral implications of creating artificial sentient life in computer simulations. He argues that the proliferation of intelligent life, regardless of form, can be taken as something that has inherent value. In that case, cosmogenesis might actually be a moral obligation.

    Looking back on my numerous conversations with scientists and philosophers on these issues, I’ve concluded that the editors at Nuclear Physics B did a disservice both to physics and to theology. Their little act of censorship served only to stifle an important discussion. The real danger lies in fostering an air of hostility between the two sides, leaving scientists afraid to speak honestly about the religious and ethical consequences of their work out of concerns of professional reprisal or ridicule.

    We will not be creating baby universes anytime soon, but scientists in all areas of research must feel able to freely articulate the implications of their work without concern for causing offence. Cosmogenesis is an extreme example that tests the principle. Parallel ethical issues are at stake in the more near-term prospects of creating artificial intelligence or developing new kinds of weapons, for instance. As Sandberg put it, although it is understandable that scientists shy away from philosophy, afraid of being thought weird for veering beyond their comfort zone, the unwanted result is that many of them keep quiet on things that really matter.

    As I was leaving Linde’s office at Stanford, after we’d spent a day riffing on the nature of God, the cosmos and baby universes, he pointed at my notes and commented ruefully: ‘If you want to have my reputation destroyed, I guess you have enough material.’ This sentiment was echoed by a number of the scientists I had met, whether they identified as atheists, agnostics, religious or none of the above. The irony was that if they felt able to share their thoughts with each other as openly as they had with me, they would know that they weren’t alone among their colleagues in pondering some of the biggest questions of our being.

    See the full article here .


  • richardmitnick 1:39 pm on May 20, 2017 Permalink | Reply
    Tags: aeon

    From aeon: “Creative blocks” (how close are we to AI?)



    Illustration by Sam Green

    The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?

    David Deutsch

    It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

    But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

    Why? Because, as an unknown sage once remarked, ‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either). I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

    Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I proved it using the quantum theory of computation.

    Babbage came upon universality from an unpromising direction. He had been much exercised by the fact that tables of mathematical functions (such as logarithms and cosines) contained mistakes. At the time they were compiled by armies of clerks, known as ‘computers’, which is the origin of the word. Being human, the computers were fallible. There were elaborate systems of error correction, but even proofreading for typographical errors was a nightmare. Such errors were not merely inconvenient and expensive: they could cost lives. For instance, the tables were extensively used in navigation. So, Babbage designed a mechanical calculator, which he called the Difference Engine. It would be programmed by initialising certain cogs. The mechanism would drive a printer, in order to automate the production of the tables. That would bring the error rate down to negligible levels, to the eternal benefit of humankind.
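The Difference Engine evaluated these functions by the method of finite differences: for a polynomial, some order of differences is constant, so every further table entry follows from additions alone, which is exactly what trains of cogs can do reliably. A small sketch of the idea (the principle, not Babbage's actual mechanical design):

```python
def difference_table(poly_values):
    """Seed column of finite differences, taken from the first few exact values."""
    diffs = [list(poly_values)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    return [col[0] for col in diffs]

def tabulate(initial_diffs, n):
    """Extend the table n steps using additions only, as the Engine's cogs did."""
    d = list(initial_diffs)
    out = []
    for _ in range(n):
        out.append(d[0])
        for i in range(len(d) - 1):
            d[i] += d[i + 1]
    return out

# Four exact values of x^2 seed the engine; additions alone reproduce the squares
seed = difference_table([x * x for x in range(4)])
table = tabulate(seed, 8)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Once the initial differences are set (the cogs ‘initialised’), no multiplication is ever needed, which is why the design could be both mechanical and reliable.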

    Unfortunately, Babbage’s project-management skills were so poor that despite spending vast amounts of his own and the British government’s money, he never managed to get the machine built. Yet his design was sound, and has since been implemented by a team led by the engineer Doron Swade at the Science Museum in London.

    Slow but steady: a detail from Charles Babbage’s Difference Engine, assembled nearly 170 years after it was designed. Courtesy Science Museum

    Here was a cognitive task that only humans had been able to perform. Nothing else in the known universe even came close to matching them, but the Difference Engine would perform better than the best humans. And therefore, even at that faltering, embryonic stage of the history of automated computation — before Babbage had considered anything like AGI — we can see the seeds of a philosophical puzzle that is controversial to this day: what exactly is the difference between what the human ‘computers’ were doing and what the Difference Engine could do? What type of cognitive task, if any, could either type of entity perform that the other could not in principle perform too?

    One immediate difference between them was that the sequence of elementary steps (of counting, adding, multiplying by 10, and so on) that the Difference Engine used to compute a given function did not mirror those of the human ‘computers’. That is to say, they used different algorithms. In itself, that is not a fundamental difference: the Difference Engine could have been modified with additional gears and levers to mimic the humans’ algorithm exactly. Yet that would have achieved nothing except an increase in the error rate, due to increased numbers of glitches in the more complex machinery. Similarly, the humans, given different instructions but no hardware changes, would have been capable of emulating every detail of the Difference Engine’s method — and doing so would have been just as perverse. It would not have copied the Engine’s main advantage, its accuracy, which was due to hardware not software. It would only have made an arduous, boring task even more arduous and boring, which would have made errors more likely, not less.

    For humans, that difference in outcomes — the different error rate — would have been caused by the fact that computing exactly the same table with two different algorithms felt different. But it would not have felt different to the Difference Engine. It had no feelings. Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans. Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general. In fact, its repertoire was confined to evaluating a tiny class of specialised mathematical functions (basically, power series in a single variable).

    Thinking about how he could enlarge that repertoire, Babbage first realised that the programming phase of the Engine’s operation could itself be automated: the initial settings of the cogs could be encoded on punched cards. And then he had an epoch-making insight. The Engine could be adapted to punch new cards and store them for its own later use, making what we today call a computer memory. If it could run for long enough — powered, as he envisaged, by a steam engine — and had an unlimited supply of blank cards, its repertoire would jump from that tiny class of mathematical functions to the set of all computations that can possibly be performed by any physical object. That’s universality.

    Babbage called this improved machine the Analytical Engine. He and Lovelace understood that its universality would give it revolutionary potential to improve almost every scientific endeavour and manufacturing process, as well as everyday life. They showed remarkable foresight about specific applications. They knew that it could be programmed to do algebra, play chess, compose music, process images and so on. Unlike the Difference Engine, it could be programmed to use exactly the same method as humans used to make those tables, to prove that the two methods must give the same answers, and to do the same error-checking and proofreading (using, say, optical character recognition) as well.

    But could the Analytical Engine feel the same boredom? Could it feel anything? Could it want to better the lot of humankind (or of Analytical Enginekind)? Could it disagree with its programmer about its programming? Here is where Babbage and Lovelace’s insight failed them. They thought that some cognitive functions of the human brain were beyond the reach of computational universality. As Lovelace wrote, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.’

    And yet ‘originating things’, ‘following analysis’, and ‘anticipating analytical relations and truths’ are all behaviours of brains and, therefore, of the atoms of which brains are composed. Such behaviours obey the laws of physics. So it follows inexorably from universality that, with the right program, an Analytical Engine would undergo them too, atom by atom and step by step. True, the atoms in the brain would be emulated by metal cogs and levers rather than organic material — but in the present context, inferring anything substantive from that distinction would be rank racism.

    Despite their best efforts, Babbage and Lovelace failed almost entirely to convey their enthusiasm about the Analytical Engine to others. In one of the great might-have-beens of history, the idea of a universal computer languished on the back burner of human thought. There it remained until the 20th century, when Alan Turing arrived with a spectacular series of intellectual tours de force, laying the foundations of the classical theory of computation, establishing the limits of computability, participating in the building of the first universal classical computer and, by helping to crack the Enigma code, contributing to the Allied victory in the Second World War.


    Turing fully understood universality. In his 1950 paper ‘Computing Machinery and Intelligence’, he used it to sweep away what he called ‘Lady Lovelace’s objection’, and every other objection both reasonable and unreasonable. He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

    This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular.

    But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.

    Why? I call the core functionality in question creativity: the ability to produce new explanations. For example, suppose that you want someone to write you a computer program to convert temperature measurements from Centigrade to Fahrenheit. Even the Difference Engine could have been programmed to do that. A universal computer like the Analytical Engine could achieve it in many more ways. To specify the functionality to the programmer, you might, for instance, provide a long list of all inputs that you might ever want to give it (say, all numbers from -89.2 to +57.8 in increments of 0.1) with the corresponding correct outputs, so that the program could work by looking up the answer in the list on each occasion. Alternatively, you might state an algorithm, such as ‘divide by five, multiply by nine, add 32 and round to the nearest 10th’. The point is that, however the program worked, you would consider it to meet your specification — to be a bona fide temperature converter — if, and only if, it always correctly converted whatever temperature you gave it, within the stated range.
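Both styles of specification described above can be made concrete. A sketch with the input range and rounding rule taken directly from the text (the function names are mine, for illustration):

```python
def convert_formula(c: float) -> float:
    """The stated algorithm: divide by five, multiply by nine, add 32, round to the nearest tenth."""
    return round(c / 5 * 9 + 32, 1)

# The lookup-table alternative: precompute every input the spec allows,
# -89.2 to +57.8 in increments of 0.1.
TABLE = {round(c / 10, 1): convert_formula(round(c / 10, 1))
         for c in range(-892, 579)}

def convert_lookup(c: float) -> float:
    return TABLE[c]
```

Either implementation meets the specification over the stated range; the spec constrains only input-output behaviour, which is exactly the point being set up here.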

    Now imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.

    Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!

    I’m sorry Dave, I’m afraid I can’t do that: HAL, the computer intelligence from Stanley Kubrick’s 2001: A Space Odyssey. Courtesy MGM

    Traditionally, discussions of AGI have evaded that issue by imagining only a test of the program, not its specification — the traditional test having been proposed by Turing himself. It was that (human) judges be unable to detect whether the program is human or not, when interacting with it via some purely textual medium so that only its cognitive abilities would affect the outcome. But that test, being purely behavioural, gives no clue for how to meet the criterion. Nor can it be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves. (For how I think biological evolution gave us the ability in the first place, see my book The Beginning of Infinity.)

    And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

    The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

    Such a theory is beyond present-day knowledge. What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile. Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible. I myself remember, for example, observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’. Not only was I not surprised, I fully expected that there would be an interval of 17,000 years until the next such ‘19’, a period that neither I nor any other human being had previously experienced even once.

    How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.

    So, why is it still conventional wisdom that we get our theories by induction? For some reason, beyond the scope of this article, conventional wisdom adheres to a trope called the ‘problem of induction’, which asks: ‘How and why can induction nevertheless somehow be done, yielding justified true beliefs after all, despite being impossible and invalid respectively?’ Thanks to this trope, every disproof (such as that by Popper and David Miller back in 1988), rather than ending inductivism, simply causes the mainstream to marvel in even greater awe at the depth of the great ‘problem of induction’.

    In regard to how the AGI problem is perceived, this has the catastrophic effect of simultaneously framing it as the ‘problem of induction’, and making that problem look easy, because it casts thinking as a process of predicting that future patterns of sensory experience will be like past ones. That looks like extrapolation — which computers already do all the time (once they are given a theory of what causes the data). But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself.

    Now, the truth is that knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data. And therefore, attempts to work towards creating an AGI that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there.

    Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience, as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour, while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
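    For concreteness, the updating scheme that Bayesianism attributes to minds fits in a few lines. This sketch (the hypothesis and numbers are invented for illustration) shows the mechanism being criticised, not an endorsement of it: probabilities shift with each 'experience', but nothing in the scheme says where the hypothesis came from.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # One application of Bayes' rule: P(H|E) = P(E|H) P(H) / P(E).
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# Start undecided on some hypothesis H, then observe, three times,
# evidence that is four times likelier if H is true than if it is false.
p = 0.5
for _ in range(3):
    p = bayes_update(p, 0.8, 0.2)
print(round(p, 3))  # -> 0.985
```

    The arithmetic is impeccable; the philosophical objection is that assigning and shifting such numbers is all the model lets a mind do — it contains no account of conjecturing H in the first place.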

    Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified logic. Hence the term ‘general’ in AGI. A computer program either has that yet-to-be-fully-understood logic, in which case it can perform human-type thinking about anything, including its own thinking and how to improve it, or it doesn’t, in which case it is in no sense an AGI. Consequently, another hopeless approach to AGI is to start from existing knowledge of how to program specific tasks — such as playing chess, performing statistical analysis or searching databases — and then to try to improve those programs in the hope that this will somehow generate AGI as a side effect, as happened to Skynet in the Terminator films.

    Nowadays, an accelerating stream of marvellous and useful functionalities for computers is coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvellous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do.

    The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviourist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up. Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess.
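    The contrast with the grandmaster can be seen in the shape of a chess program's core algorithm. A bare minimax search over a toy game tree (the tree and the scores below are invented for illustration) scores positions by exhaustive look-ahead; nothing in it represents why a move is good, or what the game is for.

```python
def minimax(position, depth, maximising, moves, evaluate):
    # Score a position by searching ahead: the maximiser picks the
    # best successor, the minimiser the worst, down to the leaves.
    successors = moves(position)
    if depth == 0 or not successors:
        return evaluate(position)
    scores = [minimax(s, depth - 1, not maximising, moves, evaluate)
              for s in successors]
    return max(scores) if maximising else min(scores)

# A toy two-ply game tree with scores at the leaves.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

best = minimax("root", 2, True,
               lambda p: tree.get(p, []),
               lambda p: leaf_scores.get(p, 0))
print(best)  # -> 3
```

    The search correctly avoids branch 'b' (where the opponent can force a score of -2), but it can no more explain that choice than it can write a book about it.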

    An AGI is qualitatively, not quantitatively, different from all other computer programs. The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality. Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.

    In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been.

    This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.

    The very term ‘AGI’ is an example of one. The field used to be called ‘AI’ — artificial intelligence. But ‘AI’ was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for ‘general’ was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot.

    Another class of rationalisations runs along the general lines of: AGI isn’t that great anyway; existing software is already as smart or smarter, but in a non-human way, and we are too vain or too culturally biased to give it due credit. This gets some traction because it invokes the persistently popular irrationality of cultural relativism, and also the related trope that: ‘We humans pride ourselves on being the paragon of animals, but that pride is misplaced because they, too, have language, tools … and self-awareness.’

    Remember the significance attributed to Skynet’s becoming ‘self-aware’? That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

    Perhaps the reason that self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Kurt Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. So has consciousness. And here we have the problem of ambiguous terminology again: the term ‘consciousness’ has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (‘qualia’), which is intimately connected with the problem of AGI. At the other, ‘consciousness’ is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.

    AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

    Ironically, that group of rationalisations (AGI has already been done / is trivial / exists in apes / is a cultural conceit) is a mirror image of arguments that originated in the AGI-is-impossible camp. For every argument of the form ‘You can’t do AGI because you’ll never be able to program the human soul, because it’s supernatural’, the AGI-is-easy camp has the rationalisation, ‘If you think that human cognition is qualitatively different from that of apes, you must believe in a supernatural soul.’

    ‘Anything we don’t yet know how to program is called human intelligence,’ is another such rationalisation. It is the mirror image of the argument advanced by the philosopher John Searle (from the ‘impossible’ camp), who has pointed out that before computers existed, steam engines and later telegraph systems were used as metaphors for how the human mind must work. Searle argues that the hope for AGI rests on a similarly insubstantial metaphor, namely that the mind is ‘essentially’ a computer program. But that’s not a metaphor: the universality of computation follows from the known laws of physics.

    Some, such as the mathematician Roger Penrose, have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. To explain why I, and most researchers in the quantum theory of computation, disagree that this is a plausible source of the human brain’s unique functionality is beyond the scope of this essay. (If you want to know more, read the 2006 paper by Litt et al., ‘Is the Brain a Quantum Computer?’, published in the journal Cognitive Science.)

    That AGIs are people has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI. Using non-cognitive attributes (such as percentage carbon content) to define personhood would, again, be racist. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

    Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.

    For example, the mere fact that it is not the computer but the running program that is a person raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist. Once an AGI program is running in a computer, to deprive it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (ie before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote?

    Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word ‘programming’. To treat AGIs like any other computer programs would constitute brainwashing, slavery, and tyranny. And cruelty to children, too, for ‘programming’ an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. To ignore the rights and personhood of AGIs would not only be the epitome of evil, but also a recipe for disaster: creative beings cannot be enslaved forever.

    Some people are wondering whether we should welcome our new robot overlords. Some hope to learn how we can rig their programming to make them constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good.

    These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of ‘good’ needs continual improvement. How should society be organised so as to promote that improvement? ‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better.

    One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

    I do not highlight all these philosophical issues because I fear that AGIs will be invented before we have developed the philosophical sophistication to understand them and to integrate them into civilisation. It is for almost the opposite reason: I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that is essential to their future integration is also a prerequisite for developing them in the first place.

    The lack of progress in AGI is due to a severe logjam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.

    Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

    Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.
