Tagged: The Conversation

  • richardmitnick 1:29 pm on May 19, 2019 Permalink | Reply
    Tags: RNA messages in the cell drive function, The Conversation, Today there is no medical treatment for autism

    From The Conversation: “New autism research on single neurons suggests signaling problems in brain circuits” 

    From The Conversation

    Artist impression of neurons communicating in the brain. whitehoune/Shutterstock.com

    May 17, 2019
    Dmitry Velmeshev

    Autism affects at least 2% of children in the United States – an estimated 1 in 59. This is challenging for both the patients and their parents or caregivers. What’s worse is that today there is no medical treatment for autism. That is in large part because we still don’t fully understand how autism develops and alters normal brain function.

    One of the main reasons it is hard to decipher the processes that cause the disease is that it is highly variable. So how do we understand how autism changes the brain?

    Using a new technology called single-nucleus RNA sequencing, we analyzed the chemistry inside specific brain cells from both healthy people and those with autism and identified dramatic differences that may cause this disease. These autism-specific differences could provide valuable new targets for drug development.

    I am a neuroscientist in the lab of Arnold Kriegstein, a researcher of human brain development at the University of California, San Francisco. Since I was a teenager, I have been fascinated by the human brain and computers, and the similarities between the two. A computer works by directing a flow of information through interconnected electronic elements called transistors. Wiring together many of these small elements creates a complex machine capable of functions from processing a credit card payment to autopiloting a rocket ship. Though it is an oversimplification, the human brain is, in many respects, like a computer. It has connected cells called neurons that process and direct information flow – a process called synaptic transmission, in which one neuron sends a signal to another.

    When I started doing science professionally, I realized that many diseases of the human brain are due to specific types of neurons malfunctioning, just like a transistor on a circuit board can malfunction either because it was not manufactured properly or due to wear and tear.

    RNA messages in the cell drive function

    Every cell in any living organism is made of the same types of biological molecules. Molecules called proteins create cellular structures, catalyze chemical reactions and perform other functions within the cell.

    Two related types of molecules – DNA and RNA – are made of sequences of just four basic elements and used by the cell to store information. DNA is used for hereditary long-term information storage; RNA is a short-lived message that signals how active a gene is and how much of a particular protein the cell needs to make. By counting the number of RNA molecules carrying the same message, researchers can get insights into the processes happening inside the cell.

    When it comes to the brain, scientists can measure RNA inside individual cells, identify the type of brain cell and analyze the processes taking place inside it – for instance, synaptic transmission. By comparing RNA analyses of brain cells from healthy people with those done in patients with autism, researchers like myself can figure out which processes are different and in which cells.
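    The counting logic behind such a comparison can be sketched in a few lines. This is an illustrative toy example with made-up numbers, not data from the study; the gene names and counts are placeholders.

```python
# Toy sketch: compare per-gene RNA counts for one cell type between two
# groups of cells. All numbers here are invented for illustration.

def mean_expression(cells, gene):
    """Average RNA count of `gene` across a list of per-cell count dicts."""
    return sum(cell.get(gene, 0) for cell in cells) / len(cells)

# Each dict is one nucleus: gene -> number of RNA molecules counted.
control_cells = [{"SYN1": 12, "GAPDH": 50}, {"SYN1": 10, "GAPDH": 48}]
autism_cells  = [{"SYN1": 4,  "GAPDH": 51}, {"SYN1": 5,  "GAPDH": 49}]

for gene in ("SYN1", "GAPDH"):
    diff = mean_expression(autism_cells, gene) - mean_expression(control_cells, gene)
    print(gene, round(diff, 2))  # SYN1 -6.5, GAPDH 1.0
```

    A large difference in one gene's counts within one cell type, seen across many individuals, is the kind of signal the study looked for.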

    Until recently, however, simultaneously measuring all RNA molecules in a single cell was not possible. Researchers could perform these analyses only from a piece of brain tissue containing millions of different cells. This was complicated further because it was possible to collect these tissue samples only from patients who have already died.

    New tech pinpoints neurons affected in autism

    Recent advances in technology, however, allowed our team to measure the RNA contained within the nucleus of a single brain cell. The nucleus of a cell contains the genome, as well as newly synthesized RNA molecules. This structure remains intact even after the death of a cell and thus can be isolated from dead (also called postmortem) brain tissue.

    Neurons in the upper (left) and deep layers of the human developing cortex. Chen & Kriegstein, 2015 Science/American Association for the Advancement of Science, CC BY-SA

    By analyzing single cell nuclei from the postmortem brains of people with and without autism, we profiled the RNA within 100,000 individual brain cells from many such individuals.

    Comparing RNA in specific types of brain cells between the individuals with and without autism, we found that some specific cell types are more altered than others in the disease.

    In particular, we found [Science] that certain neurons, called upper-layer cortical neurons, which exchange information between different regions of the cerebral cortex, contain abnormal amounts of RNA encoding proteins located at the synapse – the points of contact between neurons where signals are transmitted from one nerve cell to another. These changes were detected in regions of the cortex vital for higher-order cognitive functions, such as social interaction.

    This suggests that synapses in these upper-layer neurons are malfunctioning, leading to changes in brain functions. In our study, we showed that upper-layer neurons had very different quantities of certain RNA compared to the same cells in healthy people. That was especially true in autism patients who suffered from the most severe symptoms, like not being able to speak.

    New results suggest that the synapses formed by neurons in the upper layers of the cerebral cortex are not functioning correctly. CI Photos/Shutterstock.com

    Glial cells are also affected in autism

    In addition to neurons, which are directly responsible for synaptic communication, we also saw changes in the RNA of non-neuronal cells called glia. Glia play important roles in regulating the behavior of neurons, including how they send and receive messages via the synapse, and they too may play an important role in causing autism.

    So what do these findings mean for future medical treatment of autism?

    From these results, my colleagues and I understand that the same parts of the synaptic machinery that are critical for sending signals and transmitting information in the upper-layer neurons might be broken in many autism patients, leading to abnormal brain function.

    If we can repair these parts, or fine-tune neuronal function to a near-normal state, it might offer dramatic relief of symptoms for the patients. Studies are underway to deliver drugs and gene therapy to specific cell types in the brain, and many scientists including myself believe such approaches will be indispensable for future treatments of autism.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 11:05 am on May 16, 2019 Permalink | Reply
    Tags: Phonon lasers, The Conversation, The optical tweezer

    From The Conversation: “Laser of sound promises to measure extremely tiny phenomena” 

    From The Conversation

    May 16, 2019
    Mishkat Bhattacharya
    Associate Professor of Physics and Astronomy, Rochester Institute of Technology

    Nick Vamivakas
    Associate Professor of Quantum Optics & Quantum Physics, University of Rochester

    The crests (bright) and troughs (dark) of waves spread out after they were produced. The picture applies to both light and sound waves. Titima Ongkantong

    Most people are familiar with optical lasers through their experience with laser pointers. But what about a laser made from sound waves?

    What makes optical laser light different from a light bulb or the sun is that all the light waves emerging from it are moving in the same direction and are pretty much in perfect step with each other. This is why the beam coming out of the laser pointer does not spread out in all directions.

    In contrast, rays from the sun and light from a light bulb go in every direction. This is a good thing because otherwise it would be difficult to illuminate a room; or worse still, the Earth might not receive any sunlight. But keeping the light waves in step – physicists call it coherence – is what makes a laser special. Sound is also made of waves.

    Recently there has been considerable scientific interest in creating phonon lasers in which the oscillations of light waves are replaced by the vibrations of a tiny solid particle. By generating sound waves that are perfectly synchronized, we figured out how to make a phonon laser – or a “laser for sound.”

    In work recently published in the journal Nature Photonics, we constructed our phonon laser using the oscillations of a particle – about a hundred nanometers in diameter – levitated by an optical tweezer.

    A red laser beam from a high-power lab laser. Doug McLean/Shutterstock.com

    Waves in sync

    An optical tweezer is simply a laser beam which goes through a lens and traps a nanoparticle in midair, like the tractor beam in “Star Wars.” The nanoparticle does not stay still. It swings back and forth like a pendulum, along the direction of the trapping beam.

    Since the nanoparticle is not clamped to a mechanical support or tethered to a substrate, it is very well isolated from its surrounding environment. This enables physicists like us to use it for sensing weak electric, magnetic and gravitational forces whose effects would be otherwise obscured.

    To improve the sensing capability, we slow or “cool” the nanoparticle motion. This is done by measuring the position of the particle as it changes with time. We then feed that information back into a computer that controls the power in the trapping beam. Varying the trapping power allows us to constrain the particle so that it slows down. This setup has been used by several groups around the world in applications that have nothing to do with sound lasers. We then took a crucial step that makes our device unique and is essential for building a phonon laser.
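    The measure-and-feed-back loop described above can be caricatured in a few lines of code. This is a toy simulation of "cold damping" on an idealized harmonic oscillator, with made-up parameters; the real experiment modulates the optical trapping power based on the measured position, rather than applying a damping force directly.

```python
# Toy sketch of feedback cooling, assuming an idealized harmonic
# oscillator: measure the velocity, apply a small opposing force.
# All parameters are illustrative, not from the experiment.

omega = 1.0      # trap oscillation frequency (arbitrary units)
gain  = 0.1      # feedback gain: force opposes measured velocity
dt    = 0.001    # simulation time step

x, v = 1.0, 0.0  # initial position and velocity (initial energy = 0.5)
for _ in range(100_000):
    a = -omega**2 * x - gain * v   # trap restoring force + feedback force
    v += a * dt
    x += v * dt

energy = 0.5 * v**2 + 0.5 * omega**2 * x**2
print(energy)  # far below the initial energy of 0.5: the motion is "cooled"
```

    The feedback term drains energy from the oscillation on a timescale set by the gain, which is the sense in which the particle's motion is slowed, or cooled.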

    This involved modulating the trapping beam to make the nanoparticle oscillate faster, yielding laser-like behavior: The mechanical vibrations of the nanoparticle produced synchronized sound waves, or a phonon laser.

    The phonon laser is a series of synchronized sound waves. A detector can monitor the phonon laser and identify changes in the pattern of these sound waves that reveal the presence of a gravitational or magnetic force.

    It might appear that the particle becomes less sensitive because it is oscillating faster, but having all the oscillations in sync more than compensates, making the device a more sensitive instrument.

    An artist’s depiction of optical tweezers (pink) holding the nanoparticle in midair, while allowing it to move back and forth and create sound waves. A. Nick Vamivakas and Michael Osadciw, University of Rochester illustration, CC BY-SA

    Possible applications

    It is clear that optical lasers are very useful. They carry information over optical fiber cables, read bar codes in supermarkets and run the atomic clocks which are essential for GPS.

    We originally developed the phonon laser as a tool for detecting weak electric, magnetic and gravitational fields, which affect the sound waves in a way we can detect. But we hope that others will find new uses for this technology in communication and sensing – for example, measuring the mass of very small molecules.

    On the fundamental side, our work leverages current interest in testing quantum physics theories about the behavior of collections of a billion atoms – roughly the number contained in our nanoparticle. Lasers are also the starting point for creating exotic quantum states like the famous Schrödinger cat state, which allows an object to be in two places at the same time. Of course, the most exciting uses of the optical tweezer phonon laser may well be ones we cannot currently foresee.

    See the full article here.


     
  • richardmitnick 1:50 pm on April 30, 2019 Permalink | Reply
    Tags: "Why the idea of alien life now seems inevitable and possibly imminent", 6,500 light years away a giant cloud of space alcohol floats among the stars, Amino acids just like those that make up every protein in our bodies have been found in the tails of comets, At least two other places in our Solar System might be inhabited, Habitable planets seem to be common, Life appeared on Earth so soon after the planet was formed, The ancient question "Are we alone?" has graduated from being a philosophical musing to a testable hypothesis, The Conversation, Two frozen ice worlds but the gravity of their colossal planets is enough to churn up their insides melting water to create vast subglacial seas

    From The Conversation: “Why the idea of alien life now seems inevitable and possibly imminent” 

    From The Conversation

    Relative sizes of planets that are in a zone potentially compatible with life: Kepler-22b, Kepler-69c, Kepler-62e, Kepler-62f and Earth (named left to right; except for Earth, these are artists’ renditions). NASA, CC BY

    April 25, 2019
    Cathal D. O’Connell

    Extraterrestrial life, that familiar science-fiction trope, that kitschy fantasy, that CGI nightmare, has become a matter of serious discussion, a “risk factor”, a “scenario”.

    How has ET gone from sci-fi fairytale to a serious scientific endeavour modelled by macroeconomists, funded by fiscal conservatives and discussed by theologians?

    Because, following a string of remarkable discoveries over the past two decades, the idea of alien life is not as far-fetched as it used to seem.

    Discovery now seems inevitable and possibly imminent.

    It’s just chemistry

    While life is a special kind of complex chemistry, the elements involved are nothing special: carbon, hydrogen, oxygen and so on are among the most abundant elements in the universe. Complex organic chemistry is surprisingly common.

    Amino acids, just like those that make up every protein in our bodies, have been found in the tails of comets [Meteoritics and Planetary Science]. There are other organic compounds in Martian soil [Science].

    And 6,500 light years away a giant cloud of space alcohol [phys.org] floats among the stars.

    Habitable planets seem to be common too. The first planet beyond our Solar System was discovered in 1995. Since then astronomers have catalogued thousands.

    Based on this catalogue, astronomers from the University of California, Berkeley worked out there could be as many as 40 billion Earth-sized exoplanets in the so-called “habitable zone” around their star, where temperatures are mild enough for liquid water to exist on the surface.
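    As a rough, illustrative back-of-envelope check (the inputs below are assumed round numbers, not figures taken from the Berkeley analysis): with on the order of 200 billion stars in the Milky Way, a one-in-five rate of Earth-sized habitable-zone planets reproduces the quoted 40 billion.

```python
# Back-of-envelope check of the "40 billion" figure. Both inputs are
# assumed round numbers for illustration only.
stars_in_galaxy = 200e9           # ~200 billion stars in the Milky Way (assumed)
fraction_with_hz_earth = 0.2      # ~1 in 5 stars hosts such a planet (assumed)
print(stars_in_galaxy * fraction_with_hz_earth)  # 40000000000.0, i.e. 40 billion
```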

    There’s even a potentially Earth-like world [Nature] orbiting our nearest neighbouring star, Proxima Centauri. At just four light years away, that system might be close enough for us to reach using current technology. With the Breakthrough Starshot project launched by Stephen Hawking in 2016, plans for this are already afoot.

    Life is robust

    It seems inevitable other life is out there, especially considering that life appeared on Earth so soon after the planet was formed.

    The oldest fossils ever found here are 3.5 billion years old, while clues in our DNA suggest life could have started as far back as 4 billion years ago, just when giant asteroids stopped crashing into the surface.

    Our planet was inhabited as soon as it was habitable – and the definition of “habitable” has proven to be a rather flexible concept too.

    Life survives in all manner of environments that seem hellish to us:

    floating on a lake of sulphuric acid
    inside barrels of nuclear waste
    in water superheated to 122 degrees Celsius
    in the wastelands of Antarctica
    in rocks five kilometres below ground.

    Tantalisingly, some of these conditions seem to be duplicated elsewhere in the Solar System.

    Snippets of promise

    Mars was once warm and wet, and was probably a fertile ground for life before the Earth.

    Today, Mars still has liquid water underground [Science]. One gas strongly associated with life on Earth, methane, has already been found in the Martian atmosphere, and at levels that mysteriously rise and fall with the seasons [NASA]. (However, the methane result is under debate, with one Mars orbiter recently confirming the methane detection [Nature Geoscience] and another detecting nothing [Nature].)

    Martian bugs might turn up as soon as 2021 when the ExoMars rover Rosalind Franklin will hunt for them with a two-metre drill.

    ESA/ExoMars Rosalind Franklin

    Besides Earth and Mars, at least two other places in our Solar System might be inhabited. Jupiter’s moon Europa and Saturn’s moon Enceladus are both frozen ice worlds, but the gravity of their colossal planets is enough to churn up their insides, melting water to create vast subglacial seas [NASA].

    In 2017, specialists in sea ice from the University of Tasmania concluded [International Journal of Astrobiology] that some Antarctic microbes could feasibly survive on these worlds. Both Europa and Enceladus have undersea hydrothermal vents, just like those on Earth where life may have originated.

    When a NASA probe tasted the material geysered into space out of Enceladus last June it found large organic molecules [Nature]. Possibly there was something living among the spray; the probe just didn’t have the right tools to detect it.

    Russian billionaire Yuri Milner has been so enthused by this prospect, he wants to help fund a return mission.

    A second genesis?

    A discovery, if it came, could turn the world of biology upside down.

    All life on Earth is related, descended ultimately from the first living cell to emerge some 4 billion years ago.

    Bacteria, fungi, cacti and cockroaches are all our cousins, and we all share the same basic molecular machinery: DNA that makes RNA, and RNA that makes protein.

    A second sample of life, though, might represent a “second genesis” – totally unrelated to us. Perhaps it would use a different coding system in its DNA. Or it might not have DNA at all, but some other method of passing on genetic information.

    By studying a second example of life, we could begin to figure out which parts of the machinery of life are universal, and which are just the particular accidents of our primordial soup.

    Perhaps amino acids are always used as essential building blocks, perhaps not.

    We might even be able to work out some universal laws of biology, the same way we have for physics – not to mention new angles on the question of the origin of life itself.

    A second independent “tree of life” would mean that the rapid appearance of life on Earth was no fluke; life must abound in the universe.

    It would greatly increase the chances that, somewhere among those billions of habitable planets in our galaxy, there could be something we could talk to.

    Perhaps life is infectious

    If, on the other hand, the discovered microbes were indeed related to us that would be a bombshell of a different kind: it would mean life is infectious.

    When a large meteorite hits a planet, the impact can splash pulverised rock right out into space, and this rock can then fall onto other planets as meteorites.

    Life from Earth has probably already been taken to other planets – perhaps even to the moons of Saturn and Jupiter. Microbes might well survive the trip.

    In 1969, Apollo 12 astronauts retrieved an old probe that had sat on the Moon for three years in extreme cold and vacuum – there were viable bacteria still inside [Life Science Data Archive].

    As Mars was probably habitable before Earth, it’s possible life originated there before hitchhiking on a space rock to here. Perhaps we’re all Martians.

    Even if we never find other life in our Solar System, we might still detect it on any one of thousands of known exoplanets.

    It is already possible to look at starlight filtered through an exoplanet and tell something about the composition of its atmosphere; an abundance of oxygen could be a telltale sign of life.

    A testable hypothesis

    The James Webb Space Telescope, planned for a 2021 launch, will be able to take these measurements for some of the Earth-like worlds already discovered.

    Just a few years later will come space-based telescopes that will take pictures of these planets directly.

    Using a trick a bit like the sun visor in your car, planet-snapping telescopes will be paired with giant parasols called starshades that will fly in tandem 50,000 kilometres away in just the right spot to block the blinding light of the star, allowing the faint speck of a planet to be captured.

    The colour and the variability of that point of light could tell us the length of the planet’s day, whether it has seasons, whether it has clouds, whether it has oceans, possibly even the colour of its plants.

    The ancient question “Are we alone?” has graduated from being a philosophical musing to a testable hypothesis. We should be prepared for an answer.

    See the full article here.


     
  • richardmitnick 3:29 pm on April 23, 2019 Permalink | Reply
    Tags: Dmitri Mendeleev's early Periodic Table, FOCS 1 – a continuous cold caesium fountain atomic clock in Switzerland that started operating in 2004 at an uncertainty of one second in 30 million years, Group I metals also known as alkali metals are very reactive, The Conversation, Volatile Group I metals

    From The Conversation: “Understanding the periodic table through the lens of the volatile Group I metals” 

    From The Conversation

    April 23, 2019
    Erwin Boschmann


    The news broke that a railroad car, loaded with pure sodium, had just derailed and was spilling its contents. A television reporter called me for an explanation of why firefighters were not allowed to use water on the flames bursting from the mangled car. While on the air, I added some sodium to a bit of water in a petri dish and we observed the vigorous reaction. For further dramatic effect, I also placed some potassium into water and astonished everyone with the explosive bluish flames.

    Because Group I metals, also known as alkali metals, are very reactive, like the sodium from the rail car or the potassium, they are not found in nature in pure form but only as salts. Not only are they very reactive, they are soft and shiny, can easily be cut even with a dull knife and are the most metallic of all known elements.

    I am a chemist who spent his career building new molecules, sometimes using Group I elements. By studying the behavior and trends of Group I elements, we can get a glimpse of how the periodic table is arranged and how to interpret it.

    Periodic Table of the Elements. The Group I metals are on the far left colored red. Humdan/Shutterstock.com

    The basics

    The arrangement of the periodic table, and the properties of each element in it, are based on the atomic number and the arrangement of the electrons orbiting the nucleus. The atomic number is the number of protons in the nucleus of the element. Hydrogen’s atomic number is 1, helium’s is 2, lithium’s is 3 and so on.

    Each of the 18 columns in the table is called a group or a family. Elements in the same group share similar properties, and an element’s properties can be predicted from its location within its group. Going from the top of Group I to the bottom, for example, the atomic radius – the distance from the nucleus to the outer electrons – increases. But the amount of energy needed to rip off an outer electron decreases from top to bottom, because the electrons are farther from the nucleus and not held as tightly.

    This is important because how elements interact and react with each other depends on their ability to lose and gain electrons to make new compounds.

    The horizontal rows of the table are called periods. Moving from the left side of the period to the right, the atomic radius becomes smaller because each element has one additional proton and one additional electron. More protons means that electrons are pulled in more tightly toward the nucleus. For the same reason electronegativity – the degree to which an element tends to gain electrons – increases from left to right.

    The force required to remove the outermost electron, known as the ionization potential, also increases from the left-hand side of the table, where elements have a metallic character, to the right side, where elements are nonmetals.

    Electronegativity decreases from the top of the column to the bottom. The melting point of the elements within a group also decreases from the top to the bottom of a group.

    Trends of the periodic table. Sandbh/Wikipedia, CC BY-SA

    The outermost electron of the cesium atom is far from the nucleus and thus easy to remove. That makes cesium highly reactive. gstraub/Shutterstock.com

    Applying the basics to Group I elements

    As its name implies, Group I occupies the first column in the periodic table, and each of its elements starts a new period. Lithium, Li, is at the top and is followed by sodium, Na; potassium, K; rubidium, Rb; and cesium, Cs. The group ends with the radioactive francium, Fr; because francium is highly radioactive, virtually no chemistry is performed with it.

    Because each element in this column has a single outer electron in a new shell, the volumes of these elements are large and increase dramatically when moving from the top to the bottom of the group.

    Of all the Group I elements, cesium has the largest volume because its outermost single electron is most loosely held.

    In spite of these trends, the properties of the elements of Group I are more similar to each other than those of any other group.
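    These trends can be made concrete with rounded reference values. The numbers below are standard textbook figures (first ionization energies in kJ/mol and metallic radii in picometers), included purely for illustration:

```python
# Group I trends with rounded reference values: first ionization energy
# (kJ/mol) falls down the group while atomic radius (pm) grows.
alkali = [
    ("Li", 520, 152),
    ("Na", 496, 186),
    ("K",  419, 227),
    ("Rb", 403, 248),
    ("Cs", 376, 265),
]

energies = [e for _, e, _ in alkali]
radii    = [r for _, _, r in alkali]

# Both trends are strictly monotonic down the group.
assert energies == sorted(energies, reverse=True)  # easier to ionize going down
assert radii == sorted(radii)                      # atoms grow going down
```

    Cesium, at the bottom, combines the largest radius with the lowest ionization energy, which is exactly why it is the most reactive of the stable alkali metals.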

    Alkali metals through history

    Using chemical properties as his guide, Russian chemist Dmitri Mendeleev correctly ordered the first Group I elements into his 1869 periodic table.

    Dmitri Mendeleev’s early periodic table

    It is called periodic because every eighth element repeats the properties of the one above it in the table. After arranging all of the then known elements, Mendeleev took the bold step of leaving blanks where his extrapolation of chemical properties showed that an element should exist. Subsequent discovery of these new elements proved his prediction correct.

    Some alkali metals had been known and put to good use long before Mendeleev created the periodic table. For instance, the Old Testament mentions salt – a combination of the alkali metal sodium with chlorine – 31 times. The New Testament refers to it 10 times and calls sodium carbonate “neter” and potassium nitrate “saltpeter.”

    People have known since antiquity that wood ashes produce a potassium salt which, when combined with animal fat, will yield soap. Samuel Hopkins obtained the first U.S. patent on July 31, 1790, for soap under the new patent statute just signed into law by President George Washington a few months earlier.

    The pyrotechnic industry loves these Group I elements for their vibrant colors and explosive nature.

    Fireworks owe their vivid colors to the Group I metals. elena_prosvirova/Shutterstock.com

    Burning lithium produces a vivid crimson red color; sodium, a yellow one; potassium, lilac; rubidium, red; and cesium, violet. These colors are produced as electrons jump from their home orbits around the nucleus to higher energy levels and then return again.

    The cesium atomic clock, the most accurate timepiece ever developed, functions by measuring the frequency of cesium electrons jumping back and forth between energy states.

    FOCS 1, a continuous cold caesium fountain atomic clock in Switzerland, started operating in 2004 at an uncertainty of one second in 30 million years.

    Clocks based on electrons jumping provide an extremely precise way to count seconds.
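    For a sense of scale, "one second in 30 million years" can be converted to a fractional uncertainty (a rough conversion, using a Julian year of 365.25 days):

```python
# Express "one second in 30 million years" as a fractional uncertainty,
# alongside the exact cesium transition frequency that defines the SI second.
seconds_per_year = 365.25 * 24 * 3600          # Julian year
fractional = 1 / (30e6 * seconds_per_year)
print(f"{fractional:.1e}")                     # 1.1e-15

cs_frequency_hz = 9_192_631_770                # defined value for the SI second
```

    In other words, the clock's frequency is stable to about one part in a quadrillion.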

    Other applications include sodium vapor lamps and lithium batteries.

    In my own research I have used Group I metals as tools to perform other chemistry. Once I was in need of absolutely dry alcohol, and the driest I could buy still contained minute traces of water. The only way to get rid of the last remnant of water was by treating the water-containing alcohol with sodium – a rather dramatic way to remove water.

    The alkali elements not only occupy the first column in the periodic table, but they also show the most reactivity of all groups in the entire table and have the most dramatic trends in volume and ionization potential, while maintaining great similarity among themselves.

    See the full article here.


     
  • richardmitnick 3:15 pm on April 17, 2019 Permalink | Reply
    Tags: "Russia isn’t the first country to protest Western control over global telecommunications", In 1938 U.S. companies and the British and French government-owned telecommunications companies that controlled more than 96% of the telegraph cables that connected the world, In the 1860s European states established the International Telegraph Union to oversee technical work., The Conversation, The international telegraph network in 1901 spanned the globe with particular emphasis on the North Atlantic., The internet is less regulated than any of the technologies that preceded it and made it possible., The ITU’s first task was to ensure that telegraph cable technologies were universally compatible, To make a global network states had to link their national networks together, U.S. policy as it has unfolded over the last six decades has played a major role in establishing the world’s current international communications system   

    From The Conversation: “Russia isn’t the first country to protest Western control over global telecommunications” 

    Conversation
    From The Conversation

    April 17, 2019
    Sarah Nelson, Vanderbilt University

    1

    As the international community becomes increasingly concerned about misinformation and data breaches, the Russian government has announced plans to test its own, sweeping solution to the problem: disconnecting Russia from the global internet.

    Russian President Vladimir Putin has argued that internet administration is too concentrated in the U.S., and that online misinformation campaigns threaten Russia’s national security. Reaction from the international press and tech experts has ranged from horrified to bemused, calling Russia’s behavior an act of totalitarian censorship or economic and technological recklessness.

    But neither these problems nor Putin’s intended solution are particularly new. In fact, my research on the history of international telecommunications and information policy suggests that these criticisms echo – if not co-opt – a set of arguments and policy proposals based in other, less powerful nations’ historically reasonable objections about the West’s (especially the United States’) disproportionate power over international communications.

    When developing states demanded global telecommunications reform after World War II, the U.S. began to evade and undermine efforts by intergovernmental organizations to manage information flows between countries. This U.S. policy, as it has unfolded over the last six decades, has played a major role in establishing the world’s current international communications system, centered on an internet that is less regulated than any of the technologies that preceded it and made it possible.

    2
    The Great Eastern laying a transatlantic telegraph cable in 1866. Internet Archive Book Images/Wikimedia Commons

    The origins of the international network

    The international communications network originated with the telegraph. Terrestrial cables created the first national networks in the 1840s; submarine cables began traversing the Atlantic in the 1860s and by the early 1900s crossed the Pacific and Indian oceans. For the first time in history, communication – even to distant continents – was no longer tethered to the speed of human movement.

    3
    The international telegraph network in 1901 spanned the globe, with particular emphasis on the North Atlantic. A.B.C. Telegraphic Code 5th Edition/Malus Catulus/Wikimedia Commons

    But to make a global network, states had to link their national networks together. In the 1860s, European states established the International Telegraph Union to oversee that technical work.

    The ITU’s first task was to ensure that telegraph cable technologies were universally compatible, so that a message from any nation could be sent to any other nation. Second, it regulated costs and rates of network use. And after the ITU became responsible for regulating radio broadcast in the 1930s, it was tasked with assigning portions of the broadcast spectrum to states, which then distributed the frequencies among public and private radio companies.

    These three tasks facilitated the movement of information that we often associate with the creation of the modern global information network.

    Postwar promise

    After World War II, it wasn’t just the physical equipment of international telecommunications that was in disarray. Most nations believed strongly that one of the war’s root causes was the international community’s failure to regulate the global flow and quality of information. The ITU had made a global network possible – but what good was that network if it helped to circulate fascistic propaganda and ignite world war?

    Observers insisted that to avoid World War III, the newly formed United Nations would have to supplement the ITU by considering not just the technical elements of communication but the content of the information sent and received. They wanted an international organization to address censorship, misinformation and incitement to violence, in particular. At its very first meeting in January 1946, the U.N. General Assembly called for a global conference on “Freedom of Information and the Press.”

    4
    The first U.N. General Assembly, in London in 1946. United Nations/Marcel Bolomey

    But the resulting conference, in 1948, revealed deep fissures over what “freedom of information” meant in practical terms. The U.S. and most of Western Europe wanted to guarantee Western news and telecommunications firms’ freedom to set up networks wherever and however they saw fit; journalists’ freedom of movement; and uniformly low, standardized telegraph rates for press use.

    The developing countries, however, wanted to address the global inequality baked into international telecommunications and information flows. In 1938, for instance, U.S. companies and the British and French government-owned telecommunications companies controlled more than 96% of the telegraph cables that connected the world – much of which was still colonized. Four news agencies enjoyed a monopoly over international news: the U.S.’s Associated Press, Britain’s Reuters, France’s Havas and (until 1939) Germany’s Wolff. In 1946-47, four of every 10 telegraphs sent internationally came from the U.S. alone.

    Developing nations therefore sought measures that would bring more equality to international communications: making access to information a human right, holding international journalists and news agencies accountable for biased or untrue reporting and creating international funds to develop poorer nations’ telecommunications and news industries.

    Shifting power

    The U.S. State Department was scandalized by suggestions that press rights should be tempered with some form of international accountability, or that truly free information flows would require addressing global inequality. U.S. representatives to the U.N. played a leading role in quashing the freedom of information question altogether. The Human Rights Sub-Commission on Freedom of Information and the Press was disbanded entirely, and the ITU continued its exclusively technological responsibilities, reserving only nominal sums to invest in poorer nations’ telecommunications development.

    But as colonies won their independence in the 1950s and ’60s, they joined the U.N. and ITU as voting members. Those developing countries soon commanded a voting majority in both organizations, fundamentally shifting the balance of power. Proposals to curtail the overwhelming telecommunications power of the U.S. and its allies threatened to gain influence.

    In response, the U.S. began to evade and undermine the ITU as a telecom regulator, going so far as to create an entirely separate organization, Intelsat, to administer satellite communications in the 1960s. Originally admitting members only by invitation and basing voting power on financial contributions, Intelsat locked most poorer and post-colonial nations out of early satellite development.

    5
    The first Intelsat satellite, nicknamed ‘Early Bird,’ was launched in 1965. NASA

    See the full article here .


     
  • richardmitnick 7:38 am on March 13, 2019 Permalink | Reply
    Tags: "Old stone walls record the changing location of magnetic North", , , , Geomagnetism, The Conversation   

    From The Conversation: “Old stone walls record the changing location of magnetic North” 

    Conversation
    From The Conversation

    March 12, 2019
    John Delano

    1
    The orientations of the stone walls that crisscross the Northeastern U.S. can tell a geomagnetic tale as well as a historical one. John Delano, CC BY-ND

    When I was a kid living in southern New Hampshire, my family home was on the site of an abandoned farmstead consisting of massive stone foundations of quarried granite where dwellings once stood. Stone walls snaked throughout the forest. As I explored the deep woods of tall oaks and maples, I wondered about who had built these walls, and why. What stories did these walls contain?

    Decades later, while living in a rural setting in upstate New York and approaching retirement as a geologist, my long dormant interest was rekindled by treks through the neighboring woods. By now I knew that stone walls in New England and New York are iconic vestiges from a time when farmers, in order to plant crops and graze livestock, needed to clear the land of stones. Tons and tons of granite had been deposited throughout the region during the last glaciation that ended about 10,000 years ago.

    By the late 1800s, nearly 170,000 subsistence farming families had built an estimated 246,000 miles of stone walls across the Northeast. But by then, the Industrial Revolution had already started to contribute to the widespread abandonment of these farms in the northeastern United States. They were overgrown by forests within a few decades.

    During my more recent walks through the woods, on a whim I used a hand-held GPS unit to map several miles of stone walls. And that was how I realized that in addition to being part of an American legacy, their locations record a centuries-long history of the Earth’s wandering magnetic field.

    Connecting the walls with historical maps

    The complex array of walls that emerged from my GPS readings made no sense to me until I found an old map of my town’s property boundaries at the local historical society. Suddenly I saw that some of the stone walls on my map lay along property lines from 1790. They marked boundaries.

    My subsequent searches of church records and decades of the federal census revealed the names of these farm families and details of their lives, including annual yields from their harvests. I started to feel like the stone walls were letting me connect with the long-gone folks who had worked this land.

    Now the wheels in my scientist’s mind really started spinning. Did the original land surveys from the 18th and 19th centuries in this part of town still exist? What were the magnetic compass-bearings of those boundaries on the original surveys?

    2
    Historical maps and surveys underscore the orderly way plots were divvied up from the landscape in a grid. Charles Peirce/Stoddard, New Hampshire

    I knew that the location of magnetic north drifts over time due to changes in the Earth’s core. Could I determine its drift using stone walls and the old land surveys? My preliminary map of stone walls and a few historical surveys showed that the approach had potential.

    To have any scientific value, though, this work had to encompass much larger areas. I needed a different strategy for finding and mapping stone walls. Luckily I found two troves of useful information. First, the New York State Archives held hundreds of the original land surveys from the 18th and 19th centuries. Second, publicly available airborne LiDAR (light detection and ranging) images could reveal stone walls hidden beneath the forest canopy over much larger areas than I could ever cover on foot.

    3
    Magnetic north and geographic north aren’t the same – and their difference changes over time. Siberian Art/Shutterstock.com

    Tracking magnetic north’s drift over time

    The Earth rotates on its axis once every 24 hours. The point in the Northern Hemisphere where that spin axis meets the surface is called true north, and it wanders very slowly. On a timescale of a few centuries, though, the location of true north can be considered stationary.

    But that’s not where a compass aims when it points north. The location of the north magnetic pole is not only at a different location from true north, but also changes rapidly – currently about one degree per 10 years in New England.

    The difference in direction between true north and magnetic north (at a specific time and location on the Earth) is known as the magnetic declination. Global information about historic variations in magnetic declination is currently based on thousands of magnetic compass-bearings recorded in ships’ navigational logs from 1590 onwards.

    But now my work on 726 miles of stone walls provides an independent check [JGR Solid Earth] on magnetic declination between 1685 and 1910.

    Here’s the logic. When settlers were piling up those stones along the boundaries of their plots, they were using property lines that had been laid out according to the surveyors’ compass readings. Using LiDAR images, the bearings of those stone walls could be determined with respect to true north and compared with the surveyors’ magnetic bearings. The difference is the magnetic declination at the time of the original survey.

    For example, the original surveys divided New Hampshire’s Stoddard township into hundreds of lots whose boundaries had magnetic compass-bearings of N80 degrees W and N14 degrees E in 1768. As the land was cleared for farming, owners built stone walls along and within those 1768 surveyed boundaries.

    4
    Lidar reveals the stone walls hidden beneath the canopy. Comparing their orientation with true north provides the magnetic declination at this location when boundaries were surveyed in 1768. CC BY-ND

    Now one can compare the bearings of those stone wall-defined boundaries relative to magnetic north and true north today. The difference shows that the magnetic declination at this location in 1768 was 7.6 ± 0.3 degrees W. That’s a good match for scientists’ current geophysical model. Since the magnetic declination at this location today is 14.2 degrees W, the direction to magnetic north at this location has moved about 6.6 degrees further west since 1768.
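
    The arithmetic behind this comparison is simple enough to sketch in a few lines of Python. The declination at survey time is the surveyor’s magnetic bearing minus the wall’s true bearing (both measured clockwise from their respective norths); the true-azimuth value below is back-computed from the article’s numbers for illustration, not an actual LiDAR reading:

```python
def declination_west(magnetic_azimuth_deg: float, true_azimuth_deg: float) -> float:
    """Westward magnetic declination at survey time: the surveyor's magnetic
    bearing minus the wall's true bearing, both clockwise from north, degrees."""
    return (magnetic_azimuth_deg - true_azimuth_deg) % 360

# Article example: a boundary surveyed at N80°W magnetic in 1768, i.e. an
# azimuth of 280°. The true azimuth (272.4°) is back-computed here so the
# example reproduces the published 7.6°W declination.
survey_magnetic = 280.0
lidar_true = 272.4
d_1768 = declination_west(survey_magnetic, lidar_true)
d_today = 14.2  # present-day declination at this site, from the article

print(round(d_1768, 1), round(d_today - d_1768, 1))  # → 7.6  6.6
```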

    Data from these stone walls strengthen the current geophysical model about the Earth’s magnetic field.

    See the full article here .


     
  • richardmitnick 1:23 am on March 5, 2019 Permalink | Reply
    Tags: Deuterium and tritium- called heavy hydrogen have been used to make hydrogen bombs, Fusion Technology-when burned in a controlled way hydrogen offers the cleanest fuel producing only water as the waste product, , , Protons also are the key component of fuel cells. Rather than burn the hydrogen fuel cells convert it to electricity and are seen as the way of the future. They do this by splitting the hydrogen gas i, The Conversation, With rapid advances in chemistry and engineering hydrogen stations could start to appear soon becoming as commonplace as gasoline filling stations are today.   

    From The Conversation: “Lightweight of periodic table plays big role in life on Earth” 

    Conversation
    From The Conversation

    3.3.19
    Nicholas Leadbeater

    Periodic table Sept 2017. Wikipedia

    Although hydrogen is the lightweight of the chemical elements, it packs a real punch when it comes to its role in life and its potential as a solution to some of the world’s challenges. As we celebrate the 150th anniversary of the periodic table, it seems reasonable to tip our hat to this, the first element on the table.

    1
    One oxygen atom is connected to two hydrogen atoms to make water. Liaskovskaia Ekaterina/Shutterstock.com

    Hydrogen is the most abundant element in the universe, but not on Earth, where its light weight allows the gas to float off into space. Hydrogen is essential to our life – it fuels the sun, which converts hundreds of millions of tons of hydrogen into helium every second. And two hydrogen atoms are attached to one oxygen atom to make water. Both these things make our planet habitable.

    Not only does hydrogen enable the sun to warm the Earth and help create the water that sustains life, but this simplest of all the elements may also provide the key to finding a clean fuel source to power the planet.

    Hydrogen’s yin and yang as an energy source

    Like many other chemical elements, hydrogen is of immense value to us but also has a darker side. Being lighter than air, it makes things float, which is why it was used in early airships. But hydrogen is highly explosive, and in 1937 the German airship Hindenburg exploded on its attempt to dock with its mooring mast after a transatlantic journey, killing 36 people.

    3
    Isotopes of hydrogen: protium, deuterium and tritium. Designua/Shutterstock.com

    Hydrogen’s cousins, deuterium and tritium, called heavy hydrogen, have been used to make hydrogen bombs. Here, the heavy hydrogen atoms merge together in a process called nuclear fusion to make helium, a bit like the reaction that takes place in the sun. The amount of energy produced by this process is greater than any other known process – the area at the center of the explosion is essentially vaporized, generating shock waves that destroy anything in their way. The bright white light produced can blind people many miles away. It also produces radioactive products that are carried in the air and cause widespread contamination of the environment.

    Taming the beast, however, could be the solution to the energy problems of the future. When burned in a controlled way, hydrogen offers the cleanest fuel, producing only water as the waste product. That’s refreshing when compared with a gasoline engine that produces climate change-inducing carbon dioxide and a range of other nasty gases. When stored under pressure at a very low temperature of about -423 degrees Fahrenheit, hydrogen exists as a liquid, and its combustion with oxygen is used for propelling rockets into space.

    However, a car with a tank of highly explosive hydrogen rocket fuel doesn’t sound like a safe bet. There’s currently lots of research focused on solving the storage problem. Large numbers of scientists are trying to develop chemical compounds that safely hold and release hydrogen. This is actually a hard nut to crack and is something that will take time and many great minds to solve.

    The power of hydrogen

    Hydrogen atoms also give things like lemon juice and vinegar their distinctive tart taste. Positively charged hydrogen atoms, called protons, having been stripped of their only electron, float around in these solutions and are the key component of acids. The chemistry of these protons is also responsible for driving photosynthesis, the process whereby plants turn light energy into chemical energy, and powering many processes in the human body.
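
    That proton concentration is exactly what the pH scale measures – pH is the negative base-10 logarithm of the H+ concentration. A minimal sketch with order-of-magnitude concentrations (illustrative textbook values, not measurements from the article):

```python
import math

def ph(h_plus_molar: float) -> float:
    """pH is the negative base-10 log of the proton (H+) concentration."""
    return -math.log10(h_plus_molar)

# Rough, order-of-magnitude proton concentrations for illustration:
print(ph(1e-2))  # lemon juice, roughly 0.01 M H+  → pH ≈ 2 (very tart)
print(ph(1e-3))  # vinegar, roughly 0.001 M H+     → pH ≈ 3
print(ph(1e-7))  # pure water                      → pH = 7 (neutral)
```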

    3
    This is the symbol and electron diagram for hydrogen. BlueRingMedia/Shutterstock.com

    Protons also are the key component of fuel cells. Rather than burn the hydrogen, fuel cells convert it to electricity and are seen as the way of the future. They do this by splitting the hydrogen gas into protons and electrons on one side of the fuel cell. The positively charged protons move over to the other side of the cell, leaving behind the negatively charged electrons. This creates a flow of electricity between the sides of the cell when connected with an external circuit. This current can power an electric motor placed in this circuit. Hydrogen-powered trains are already in operation in Germany, and several international car manufacturers are developing fuel-cell powered cars. Again, the only byproduct of the process is water.
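
    The ideal voltage of such a cell follows directly from thermodynamics: E = -ΔG/(nF), where ΔG is the Gibbs free energy of forming liquid water, n the number of electrons transferred per hydrogen molecule, and F the Faraday constant. A quick back-of-the-envelope check, using standard textbook values rather than anything from the article:

```python
# Ideal (open-circuit) voltage of a hydrogen fuel cell from thermodynamics:
# E = -ΔG / (n·F), with ΔG for H2 + 1/2 O2 -> H2O(l) at standard conditions.
FARADAY = 96485.0      # C per mole of electrons
DELTA_G = -237_100.0   # J/mol, Gibbs free energy of forming liquid water
N_ELECTRONS = 2        # electrons transferred per H2 molecule

cell_voltage = -DELTA_G / (N_ELECTRONS * FARADAY)
print(round(cell_voltage, 2))  # → 1.23 (volts per cell)
```

    Real cells deliver somewhat less than this ideal 1.23 V, which is why practical stacks wire many cells in series.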

    In the future, I think we will see increasing use of hydrogen as a fuel. For it to be useful, there are two major challenges. A big one is the storage issue. Engineers need to figure out how to store hydrogen safely and start to build places where people can fill up. With rapid advances in chemistry and engineering, hydrogen stations could start to appear soon, becoming as commonplace as gasoline filling stations are today. This sort of infrastructure is going to be essential. You don’t want to run out of fuel on a journey because, unlike a gas-powered car, you can’t call a friend to bring you a canister of hydrogen.

    4
    Hydrogen fuel pump at Shell station, for automobiles running on pollution-free hydrogen-powered fuel cells. Rob Crandall/Shutterstock.com

    See the full article here .


     
  • richardmitnick 11:04 am on February 23, 2019 Permalink | Reply
    Tags: , , The Conversation, Utilities are starting to invest in big batteries instead of building new power plants   

    From The Conversation: “Utilities are starting to invest in big batteries instead of building new power plants” 

    Conversation
    From The Conversation

    February 22, 2019
    Jeremiah Johnson
    Associate Professor of Environmental Engineering
    North Carolina State University

    Joseph F. DeCarolis
    Associate Professor of Environmental Engineering
    North Carolina State University

    1
    Utilities are starting to invest in big batteries instead of building new power plants. This is what a 5-megawatt, lithium-ion energy storage system looks like. phys.org
    Credit: Pacific Northwest National Laboratory.

    Due to their decreasing costs, lithium-ion batteries now dominate a range of applications including electric vehicles, computers and consumer electronics.

    You might only think about energy storage when your laptop or cellphone is running out of juice, but utilities can plug bigger versions into the electric grid. And thanks to rapidly declining lithium-ion battery prices, they can now use energy storage to stretch electricity generation capacity.

    Based on our research on energy storage costs and performance in North Carolina, and our analysis of the potential role energy storage could play within the coming years, we believe that utilities should prepare for the advent of cheap grid-scale batteries and develop flexible, long-term plans that will save consumers money.

    2
    All of the new utility-scale electricity capacity coming online in the U.S. in 2019 will be generated through natural gas, wind and solar power as coal, nuclear and some gas plants close. U.S. Energy Information Administration

    Peak demand is pricey

    The amount of electricity consumers use varies according to the time of day and between weekdays and weekends, as well as seasonally and annually as everyone goes about their business.

    Those variations can be huge.

    For example, peak electricity use in many regions is nearly double the average amount of power consumers typically draw. Utilities often meet peak demand by building power plants that run on natural gas, due to their lower construction costs and ability to operate whenever they are needed.

    However, it’s expensive and inefficient to build these power plants just to meet demand in those peak hours. It’s like purchasing a large van that you will only use for the three days a year when your brother and his three kids visit.

    The grid requires power supplied right when it is needed, and usage varies considerably throughout the day. When grid-connected batteries help supply enough electricity to meet demand, utilities don’t have to build as many power plants and transmission lines.

    Given how long this infrastructure lasts and how rapidly battery costs are dropping, utilities now face new long-term planning challenges.
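
    As a rough illustration of how a battery substitutes for peaker capacity, here is a toy greedy dispatch in Python. The demand profile and battery ratings are hypothetical, and the battery is assumed to start fully charged:

```python
def peak_shave(demand_mw, battery_mw, battery_mwh):
    """Toy greedy dispatch: discharge the battery whenever hourly demand
    exceeds the day's average, limited by the battery's power rating and
    its remaining stored energy. Charging is ignored for simplicity."""
    target = sum(demand_mw) / len(demand_mw)  # aim net load at the average
    energy = battery_mwh
    net = []
    for d in demand_mw:
        discharge = min(max(d - target, 0.0), battery_mw, energy)
        energy -= discharge
        net.append(d - discharge)
    return net

# Hypothetical hourly evening profile (MW) with a sharp peak:
demand = [600, 650, 900, 1100, 1050, 800, 620, 600]
net = peak_shave(demand, battery_mw=300, battery_mwh=800)
print(max(demand), max(net))  # → 1100 800.0 (the battery clips the peak)
```

    In this sketch a 300-megawatt battery trims the 1,100-megawatt peak down to 800 megawatts – capacity a utility would otherwise have to build a peaker plant to cover.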

    3
    Grid-scale batteries are being installed coast-to-coast as this snapshot from 2017 indicates. Source: U.S. Energy Information Administration, U.S. Battery Storage Market Trends, 2018.

    Cheaper batteries

    About half of the new generation capacity built in the U.S. annually since 2014 has come from solar, wind or other renewable sources. Natural gas plants make up much of the rest, but in the future that industry may need to compete with energy storage for market share.

    In practice, we can see how the pace of natural gas-fired power plant construction might slow down in response to this new alternative.

    So far, utilities have only installed the equivalent of one or two traditional power plants in grid-scale lithium-ion battery projects, all since 2015. But across California, Texas, the Midwest and New England, these devices are benefiting the overall grid by improving operations and bridging gaps when consumers need more power than usual.

    Based on our own experience tracking lithium-ion battery costs, we see the potential for these batteries to be deployed at a far larger scale and disrupt the energy business.

    When we were given approximately one year to conduct a study on the benefits and costs of energy storage in North Carolina, keeping up with the pace of technological advances and increasing affordability was a struggle.

    Projected battery costs changed so significantly from the beginning to the end of our project that we found ourselves rushing at the end to update our analysis.

    Once utilities can easily take advantage of these huge batteries, they will not need as much new power-generation capacity to meet peak demand.

    What energy-storage batteries cost

    Grid-scale lithium-ion battery costs per kilowatt hour have plummeted in the past four years. They will probably fall further.

    Utility planning

    Even before batteries could be used for large-scale energy storage, it was hard for utilities to make long-term plans due to uncertainty about what to expect in the future.

    For example, most energy experts did not anticipate the dramatic decline in natural gas prices due to the spread of hydraulic fracturing, or fracking, starting about a decade ago – or the incentive that it would provide utilities to phase out coal-fired power plants.

    In recent years, solar energy and wind power costs have dropped far faster than expected, also displacing coal – and in some cases natural gas – as a source of energy for electricity generation.

    Something we learned during our storage study is illustrative.

    We found that lithium-ion batteries at 2019 prices were a bit too expensive in North Carolina to compete with natural gas peaker plants – the natural gas plants used occasionally when electricity demand spikes. However, when we modeled projected 2030 battery prices, energy storage proved to be the more cost-effective option.

    Federal, state and even some local policies are another wild card. For example, Democratic lawmakers have outlined the Green New Deal, an ambitious plan that could rapidly address climate change and income inequality at the same time.

    And no matter what happens in Congress, the increasingly frequent bouts of extreme weather hitting the U.S. are also expensive for utilities. Droughts reduce hydropower output and heatwaves make electricity usage spike.

    4
    The Scattergood power plant in Los Angeles is one of three natural gas power plants slated to shut down by 2029. AP Photo/Marcio Jose Sanchez

    The future

    Several utilities are already investing in energy storage.

    California utility Pacific Gas & Electric, for example, got permission from regulators to build a massive 567.5 megawatt energy-storage battery system near San Francisco, although the utility’s bankruptcy could complicate the project.

    Hawaiian Electric Company is seeking approval for projects that would establish several hundred megawatts of energy storage across the islands. And Arizona Public Service and Puerto Rico Electric Power Authority are looking into storage options as well.

    We believe these and other decisions will reverberate for decades to come. If utilities miscalculate and spend billions on power plants they turn out not to need, instead of investing in energy storage, their customers could pay more than they should to keep the lights on through the middle of this century.

    See the full article here .


     
  • richardmitnick 2:48 pm on February 7, 2019 Permalink | Reply
    Tags: A massive leap forward in nuclear physics, , Nuclear fission, , , She was excluded from the victory celebration [The Nobel Prize] because she was a Jewish woman, The Conversation, Today Lise Meitner remains obscure and largely forgotten   

    From The Conversation: “Lise Meitner — the forgotten woman of nuclear physics who deserved a Nobel Prize” 

    From The Conversation

    February 7, 2019
    Timothy J. Jorgensen

    Nuclear fission – the physical process by which very large atoms like uranium split into pairs of smaller atoms – is what makes nuclear bombs and nuclear power plants possible. But for many years, physicists believed it energetically impossible for atoms as large as uranium (atomic mass = 235 or 238) to be split into two.

    That all changed on Feb. 11, 1939, with a letter to the editor of Nature – a premier international scientific journal – that described exactly how such a thing could occur and even named it fission. In that letter, physicist Lise Meitner, with the assistance of her young nephew Otto Frisch, provided a physical explanation of how nuclear fission could happen.

    It was a massive leap forward in nuclear physics, but today Lise Meitner remains obscure and largely forgotten.

    Lise Meitner (7 November 1878 – 27 October 1968) Smithsonian Institution

    She was excluded from the victory celebration because she was a Jewish woman. Her story is a sad one.

    What happens when you split an atom

    Meitner based her fission argument on the “liquid droplet model” of nuclear structure – a model that likened the forces that hold the atomic nucleus together to the surface tension that gives a water droplet its structure.

    She noted that the surface tension of an atomic nucleus weakens as the charge of the nucleus increases, and could even approach zero tension if the nuclear charge was very high, as is the case for uranium (charge = 92+). The lack of sufficient nuclear surface tension would then allow the nucleus to split into two fragments when struck by a neutron – a chargeless subatomic particle – with each fragment carrying away very high levels of kinetic energy. Meitner remarked: “The whole ‘fission’ process can thus be described in an essentially classical [physics] way.” Just that simple, right?
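    The competition Meitner described can be made quantitative with the textbook liquid-drop energies (a standard sketch using modern semi-empirical coefficients, not the notation of her original letter). A surface-energy term holds the drop together while Coulomb repulsion between protons tries to tear it apart:

    $$E_S = a_S A^{2/3}, \qquad E_C = a_C \frac{Z^2}{A^{1/3}},$$

    with $a_S \approx 17.8\ \mathrm{MeV}$ and $a_C \approx 0.71\ \mathrm{MeV}$ (the fitted values vary slightly between references). A spherical nucleus becomes unstable to splitting when the fissility parameter

    $$x = \frac{E_C}{2E_S} = \frac{Z^2/A}{2a_S/a_C} \approx \frac{Z^2/A}{50}$$

    reaches 1. Uranium, with $Z^2/A = 92^2/238 \approx 35.6$, sits below that limit – its effective “surface tension” is weakened but not zero – so a modest nudge of energy from a captured neutron is enough to push it over the fission barrier.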

    Meitner went further to explain how her scientific colleagues had gotten it wrong. When scientists bombarded uranium with neutrons, they believed the uranium nucleus, rather than splitting, captured some neutrons. These captured neutrons were then converted into positively charged protons and thus transformed the uranium into the incrementally larger elements on the periodic table of elements – the so-called “transuranium,” or beyond uranium, elements.

    Some people were skeptical that neutron bombardment could produce transuranium elements, including Irene Joliot-Curie – Marie Curie’s daughter – and Meitner. Joliot-Curie had found that one of these new alleged transuranium elements actually behaved chemically just like radium, the element her mother had discovered. Joliot-Curie suggested that it might be just radium (atomic mass = 226) – an element somewhat smaller than uranium – that was coming from the neutron-bombarded uranium.

    Meitner had an alternative explanation. She thought that, rather than radium, the element in question might actually be barium – an element with a chemistry very similar to radium. The issue of radium versus barium was very important to Meitner because barium (atomic mass = 139) was a possible fission product according to her split uranium theory, but radium was not – it was too big (atomic mass = 226).

    When a neutron bombards a uranium atom, the uranium nucleus splits into two different smaller nuclei. Stefan-Xp/Wikimedia Commons, CC BY-SA

    Meitner urged her chemist colleague Otto Hahn to try to further purify the uranium bombardment samples and assess whether they were, in fact, made up of radium or its chemical cousin barium. Hahn complied, and he found that Meitner was correct: the element in the sample was indeed barium, not radium. Hahn’s finding suggested that the uranium nucleus had split into pieces – becoming two different elements with smaller nuclei – just as Meitner had suspected.
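    The “very high levels of kinetic energy” can be estimated the same way Meitner and Frisch did: treat the two fragments as touching charged spheres and compute their mutual Coulomb repulsion. The sketch below is a back-of-the-envelope reconstruction, not their original arithmetic; the nuclear radius constant (here r0 ≈ 1.4 fm, an era-appropriate value) and the choice of a krypton-like partner fragment are assumptions, but the result lands near the famous figure of roughly 200 MeV per fission.

    ```python
    # Coulomb-repulsion estimate of the energy released per fission.
    # Fragments are modeled as uniformly charged spheres in contact:
    #   E = k*e^2 * Z1*Z2 / (R1 + R2),  with  R = r0 * A**(1/3).

    K_E2 = 1.44   # Coulomb constant times e^2, in MeV*fm
    R0 = 1.4      # nuclear radius constant in fm (assumed, era-appropriate)

    def fragment_energy(z1, a1, z2, a2):
        """Mutual Coulomb energy (MeV) of two touching nuclear spheres."""
        r1 = R0 * a1 ** (1 / 3)
        r2 = R0 * a2 ** (1 / 3)
        return K_E2 * z1 * z2 / (r1 + r2)

    # Barium-139 (Z=56) plus a krypton-like partner (Z=36, A=95)
    # from neutron-bombarded uranium:
    energy = fragment_energy(56, 139, 36, 95)
    print(f"~{energy:.0f} MeV released per fission")  # on the order of 200 MeV
    ```

    That single fission releases tens of millions of times more energy than a typical chemical reaction – which is why the barium result mattered so much.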

    As a Jewish woman, Meitner was left behind

    Meitner should have been the hero of the day, and the physicists and chemists should have jointly published their findings and waited to receive the world’s accolades for their discovery of nuclear fission. But unfortunately, that’s not what happened.

    Meitner had two difficulties: She was a Jew living as an exile in Sweden because of the Jewish persecution going on in Nazi Germany, and she was a woman. She might have overcome either one of these obstacles to scientific success, but both proved insurmountable.

    Lise Meitner and Otto Hahn in Berlin, 1913.

    Meitner had been working as Hahn’s academic equal when they were on the faculty of the Kaiser Wilhelm Institute in Berlin together. By all accounts they were close colleagues and friends for many years. When the Nazis took over, however, Meitner was forced to leave Germany. She took a position in Stockholm, and continued to work on nuclear issues with Hahn and his junior colleague Fritz Strassmann through regular correspondence. This working relationship, though not ideal, was still highly productive. The barium discovery was the latest fruit of that collaboration.

    Yet when it came time to publish, Hahn knew that including a Jewish woman on the paper would cost him his career in Germany. So he published without her, falsely claiming that the discovery was based solely on insights gleaned from his own chemical purification work, and that any physical insight contributed by Meitner played an insignificant role. All this despite the fact that he wouldn’t even have thought to isolate barium from his samples had Meitner not directed him to do so.

    Hahn had trouble explaining his own findings, though. In his paper, he put forth no plausible mechanism as to how uranium atoms had split into barium atoms. But Meitner had the explanation. So a few weeks later, Meitner wrote her famous fission letter to the editor, ironically explaining the mechanism of “Hahn’s discovery.”

    Even that didn’t help her situation. The Nobel Committee awarded the 1944 Nobel Prize in Chemistry “for the discovery of the fission of heavy nuclei” to Hahn alone. Paradoxically, the word “fission” never appeared in Hahn’s original publication, as Meitner had been the first to coin the term in the letter published afterward.

    A controversy has raged about the discovery of nuclear fission ever since, with critics claiming it represents one of the worst examples of blatant racism and sexism by the Nobel committee. Unlike Marie Curie – another prominent female nuclear physicist, whose career preceded Meitner’s – Meitner was never recognized by the Nobel committee for her contributions to nuclear physics. She has been totally left out in the cold, and remains unknown to most of the public.

    Meitner received the Enrico Fermi Award in 1966. Her nephew Otto Frisch is on the left. IAEA, CC BY-SA

    After the war, Meitner remained in Stockholm and became a Swedish citizen. Later in life, she decided to let bygones be bygones. She reconnected with Hahn, and the two octogenarians resumed their friendship. Although the Nobel committee never acknowledged its mistake, the slight to Meitner was partly mitigated in 1966 when the U.S. Department of Energy jointly awarded her, Hahn and Strassmann its prestigious Enrico Fermi Award “for pioneering research in the naturally occurring radioactivities and extensive experimental studies leading to the discovery of fission.” The two-decades-late recognition came just in time for Meitner. She and Hahn died within months of each other in 1968; they were both 89 years old.

    See the full article here.


     
  • richardmitnick 1:21 pm on December 16, 2018 Permalink | Reply
    Tags: Huge previously-undetected coral reef off US East Coast, The Conversation

    From The Conversation: “Deepwater corals thrive at the bottom of the ocean, but can’t escape human impacts” 

    From The Conversation

    December 3, 2018
    Sandra Brooke

    When people think of coral reefs, they typically picture warm, clear waters with brightly colored corals and fishes. But other corals live in deep, dark, cold waters, often far from shore in remote locations. These varieties are just as ecologically important as their shallow water counterparts. They also are just as vulnerable to human activities like fishing and energy production.

    Deep sea corals off Florida. Image via NOAA.

    Earlier this year I was part of a research expedition conducted by the Deep Search project, which is studying little-known deep-sea ecosystems off the southeast U.S. coast. We were exploring areas that had been mapped and surveyed by the U.S. National Oceanic and Atmospheric Administration’s research ship Okeanos Explorer.

    Map of target areas to be surveyed during the first phase of the Deepwater Atlantic Habitats II study, DEEP SEARCH, including seep targets. USGS image.

    NOAA Ship Okeanos Explorer

    NOAA Ship Okeanos Explorer is the only federal vessel dedicated to exploring our largely unknown ocean for the purpose of discovery and the advancement of knowledge about the deep ocean. The ship is operated by the NOAA Commissioned Officer Corps and civilians as part of NOAA’s fleet managed by NOAA’s Office of Marine and Aviation Operations. Mission equipment is operated by NOAA’s Office of Ocean Exploration and Research in partnership with the Global Foundation for Ocean Exploration.

    Missions of the 224-foot vessel include mapping, site characterization, reconnaissance, advancing technology, education, and outreach—all focused on understanding, managing, and protecting our ocean. Expeditions are planned collaboratively, with input from partners and stakeholders, and with the goal of providing data that will benefit NOAA, the scientific community, and the public.

    During Okeanos Explorer expeditions, data are collected using a variety of advanced technologies to explore and characterize unknown or poorly known deepwater ocean areas, features, and phenomena at depths ranging from 250 to 6,000 meters (820 to 19,700 feet). The ship is equipped with four different types of mapping sonars that collect high-resolution data about the seafloor and the water column, a dual-body remotely operated vehicle (ROV) capable of diving to depths of 6,000 meters, and a suite of other instruments to help characterize the deep ocean. Expeditions typically consist of either 24-hour mapping operations or a combination of daytime ROV dives and overnight mapping operations.

    In an area 160 miles off South Carolina we deployed Alvin, a three-person research submersible, to explore some features revealed during the mapping.

    Human Occupied Vehicle (HOV) Alvin is part of the National Deep Submergence Facility (NDSF). Alvin enables in-situ data collection and observation by two scientists to depths reaching 4,500 meters, during dives lasting up to ten hours.

    What the scientists aboard Alvin found was a huge “forest” of coldwater corals. I went down on the second dive in this area and saw another dense coral ecosystem. These were just two features in a series that covered about 85 miles, in water nearly 2,000 feet deep. This unexpected find shows how much we still have to learn about life on the ocean floor.


    Scientists from the August 2018 Deep Search expedition discuss the significance of finding a huge, previously undetected deepwater coral reef off the U.S. East Coast.

    Life in the dark

    Deep corals are found in all of the world’s oceans. They grow in rocky habitats on the seafloor as it slopes down into the deep oceans, on seamounts (underwater mountains), and in submarine canyons. Most are found at depths greater than 650 feet (200 meters), but where surface waters are very cold, they can grow at much shallower depths.

    Shallow corals get much of their energy from sunlight that filters down into the water. Like plants on land, tiny algae that live within the corals’ polyps use sunlight to make energy, which they transfer to the coral polyps. Deep-sea species grow below the sunlit zone, so they feed on organic material and zooplankton, delivered to them by strong currents.

    In both deep and shallow waters, stony corals – which create hard skeletons – are the reef builders, while others such as soft corals add to reef diversity. Just five deep-sea stony coral species create reefs like the one we found in August.

    Stylaster californicus at 135 feet depth on Farnsworth Bank off southern California. NOAA

    The most widely distributed and well-studied is Lophelia pertusa, a branching stony coral that begins life as a tiny larva, settles on hard substrate and grows into a bushy colony.

    Lophelia pertusa

    As the colony grows, its outside branches block the flow of water that delivers food and oxygen to inner branches and washes away waste. Without flow, the inner branches die and weaken, then break apart, and the outer live branches overgrow the dead skeleton.

    This sequence of growth, death, collapse, and overgrowth continues for thousands of years, creating reefs that can be hundreds of feet tall. These massive, complex structures provide habitat for diverse and abundant assemblages of invertebrates and fishes, some of which are economically valuable.

    Other important types include gorgonians and black corals, often called “tree corals.” These species can grow very large and form dense “coral gardens” in rocky, current-swept areas. Small invertebrates and fishes use their branches for shelter, feeding and nursery habitat.

    Probing the deep oceans

    Organisms that live in deep, cold waters grow slowly, mature late and have long lifespans. Deep-sea black corals are among the oldest animals on earth: One specimen has been dated at 4,265 years old. As they grow, corals incorporate ocean elements into their skeletons. This makes them archives of ocean conditions that long predate human records. They also can provide valuable insights into the likely effects of future changes in the oceans.

    To protect these ecosystems, scientists need to find them. This is challenging because most of the seafloor has not been mapped. Once they have maps, researchers know where to deploy underwater vehicles so they can begin to understand how these ecosystems function.

    Scientists use submersibles like Alvin or remotely operated vehicles to study deep-water corals because other gear, such as trawls and dredges, would become entangled in these fragile colonies and damage them. Submersibles can conduct visual surveys and collect samples without damaging reefs.

    The NOAA ROV Deep Discoverer documents benthic communities at Paganini Seamount in the north-central Pacific. NOAA

    This work is expensive and logistically challenging. It requires large ships to transport and launch the submersibles, and can only be done when seas are calm enough to work.

    Looming threats

    The greatest threat to deep corals globally is industrial bottom-trawl fishing, which can devastate deep reefs. Trawling is indiscriminate, sweeping up unwanted animals – including corals – as “bycatch.” It also stirs up sediment, which clogs deep-sea organisms’ feeding and breathing structures. Other forms of fishing, including traps, bottom longlines and dredges, can also impact the seafloor.

    Offshore energy production creates other problems. Oil and gas operations can release drilling muds and stir up sediments. Anchors and cables can directly damage reefs, and oil spills can have long-term impacts on coral health. Studies have shown that exposure to oil from the 2010 Deepwater Horizon spill caused stress and tissue damage in Gulf of Mexico deep-sea corals.

    Yet another growing concern is deep sea mining for materials such as cobalt, which is used to build batteries for cell phones and electric cars. The International Seabed Authority, a United Nations agency, is working with scientists and non-government organizations to develop a global regulatory code for deep sea mining, which is expected to be completed in 2020 or 2021. However, the International Union for the Conservation of Nature has warned that not enough is known about deep sea life to ensure that the code will protect it effectively.

    Finally, deep-sea corals are not immune to climate change. Ocean currents circulate around the planet, transporting warm surface waters into the deep sea. Warming temperatures could drive corals deeper, but deep waters are naturally higher in carbon dioxide than surface waters. As their waters become more acidified, deep-sea corals will be restricted to an increasingly narrow band of optimal conditions.

    Conservation and management

    Vast areas of deep coral habitats are on the high seas and are extremely difficult to manage. However, many countries have taken measures to protect deep corals within their territorial waters. For example, the United States has created several deep coral protected areas. And the U.S. Bureau of Ocean Energy Management restricts industry activities near deep corals and funds deep sea coral research.

    These are useful steps, but nations can only protect what they know about. Without exploration, no one would have known about the coral zone that we found off South Carolina, along one of the busiest coastlines in the United States. As a scientist, I believe it is imperative to explore and understand our deep ocean resources so we can preserve them into the future.

    See the full article here.


     