Tagged: The Conversation

  • richardmitnick 11:50 am on October 12, 2021 Permalink | Reply
    Tags: "The Electron-Ion Collider- new accelerator could solve the mystery of how matter holds together", , , The Conversation,   

    From The Conversation: “The Electron-Ion Collider- new accelerator could solve the mystery of how matter holds together”

    From The Conversation

    October 11, 2021
    Daria Sokhan

    DOE’s Brookhaven National Laboratory (US) campus. Credit: Brookhaven National Laboratory.

    When the Nobel Prize-winning US physicist Robert Hofstadter and his team fired highly energetic electrons at a small vial of hydrogen at the Stanford Linear Accelerator Center in 1956, they opened the door to a new era of physics. Until then, it was thought that protons and neutrons, which make up an atom’s nucleus, were the most fundamental particles in nature. They were considered to be “dots” in space, lacking physical dimensions. Now it suddenly became clear that these particles were not fundamental at all, and had a size and complex internal structure as well.

    What Hofstadter and his team saw was a small deviation in how electrons “scattered”, or bounced, when hitting the hydrogen. This suggested there was more to a nucleus than the dot-like protons and neutrons they had imagined. The experiments that followed around the world at accelerators – machines that propel particles to very high energies – heralded a paradigm shift in our understanding of matter.

    Yet there is a lot we still don’t know about the atomic nucleus – as well as the “strong force”, one of four fundamental forces of nature, that holds it together. Now a brand-new accelerator, the Electron-Ion Collider, to be built within the decade at the DOE’s Brookhaven National Laboratory (US), with the help of 1,300 scientists from around the world, could help take our understanding of the nucleus to a new level.

    Strong but strange force

    After the revelations of the 1950s, it soon became clear that particles called quarks and gluons are the fundamental building blocks of matter. They are the constituents of hadrons, which is the collective name for protons and other particles. Sometimes people imagine that these kinds of particles fit together like Lego, with quarks in a certain configuration making up protons, and then protons and neutrons coupling up to create a nucleus, and the nucleus attracting electrons to build an atom. But quarks and gluons are anything but static building blocks.

    A theory called quantum chromodynamics describes how the strong force works between quarks, mediated by gluons, which are force carriers. Yet it cannot help us to analytically calculate the proton’s properties. This isn’t some fault of our theorists or computers — the equations themselves are simply not solvable.

    This is why the experimental study of the proton and other hadrons is so crucial: to understand the proton and the force that binds it, one must study it from every angle. For this, the accelerator is our most powerful tool.

    Yet when we look at the proton with a collider (a type of accelerator that uses two beams), what we see depends on how deep, and with what, we look: sometimes it appears as three constituent quarks, at other times as an ocean of gluons, or a teeming sea of pairs of quarks and their antiparticles (antiparticles are nearly identical to particles, but have the opposite charge or other quantum properties).

    How an electron colliding with a charged atom can reveal its nuclear structure. Brookhaven National Lab/Flickr, CC BY-NC.

    So while our understanding of matter at this tiniest of scales has made great progress in the past 60 years, many mysteries remain that the tools of today cannot fully address. What is the nature of the confinement of quarks within a hadron? How does the mass of the proton arise from quarks that are almost massless, roughly a thousand times lighter than the proton itself?
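
    To get a feel for the size of that puzzle, here is a back-of-the-envelope sketch (my own illustration, using approximate Particle Data Group mass values rather than anything from the article): the rest masses of the proton's three valence quarks add up to only about 1% of the proton's mass.

```python
# Back-of-the-envelope check: how much of the proton's mass do the
# rest masses of its three valence quarks account for?
# Approximate PDG values, in MeV/c^2.
m_up = 2.2        # up-quark mass
m_down = 4.7      # down-quark mass
m_proton = 938.3  # proton mass

valence_sum = 2 * m_up + m_down  # a proton is two up quarks plus one down quark
fraction = valence_sum / m_proton

print(f"Sum of valence-quark masses: {valence_sum:.1f} MeV/c^2")
print(f"Fraction of the proton mass: {fraction:.1%}")
# ~1% -- the remaining ~99% arises from the dynamics of the strong force.
```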

    To answer such questions, we need a microscope that can image the structure of the proton and nucleus across the widest range of magnifications in exquisite detail, and build 3D images of their structure and dynamics. That’s exactly what the new collider will do.

    Experimental set-up

    The Electron-Ion Collider (EIC) will use a very intense beam of electrons as its probe, with which it will be possible to slice the proton or nucleus open and look at the structure inside it. It will do that by colliding a beam of electrons with a beam of protons or ions (charged atoms) and looking at how the electrons scatter. The ion beam is the first of its kind in the world.

    Effects which are barely perceptible, such as scattering processes so rare that you only observe them once in a billion collisions, will become visible. By studying these processes, other scientists and I will be able to reveal the structure of protons and neutrons, how it is modified when they are bound by the strong force, and how new hadrons are created. We could also uncover what sort of matter is made up of pure gluons – something that has never been seen.

    The collider will be tuneable to a wide range of energies. This is like turning the magnification dial on a microscope: the higher the energy, the deeper inside the proton or nucleus one can look, and the finer the features one can resolve.
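
    As a rough, illustrative version of that magnification analogy (my own sketch, not an EIC design figure), the standard relation λ ≈ ħc/E links a probe's energy to the length scale it can resolve:

```python
# Illustrative only: the length scale a probe of energy E can resolve,
# using the standard relation  lambda ≈ hbar*c / E.
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm (femtometres)

def resolution_fm(energy_gev: float) -> float:
    """Approximate resolvable length scale, in femtometres, for a probe of the given energy."""
    energy_mev = energy_gev * 1000.0
    return HBAR_C_MEV_FM / energy_mev

for e in (1, 10, 100):  # hypothetical probe energies in GeV
    print(f"{e:>4} GeV  ->  ~{resolution_fm(e):.4f} fm")
# A proton is roughly 0.8 fm across, so higher energies resolve
# progressively finer detail inside it.
```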

    Newly formed collaborations of scientists across the world, which are part of the EIC team, are also designing detectors, which will be placed at two different collision points in the collider. Aspects of this effort are led by UK teams, which have just been awarded a grant to lead the design of three key components of the detectors and develop the technologies needed to realise them: sensors for precision tracking of charged particles, sensors for the detection of electrons scattered extremely close to the beam line, and detectors to measure the polarisation (direction of spin) of the particles scattered in the collisions.

    While it may take another ten years before the collider is fully designed and built, it is likely to be well worth the effort. Understanding the structure of the proton and, through it, the fundamental force that gives rise to over 99% of the visible mass in the universe, is one of the greatest challenges in physics today.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:57 am on January 31, 2021 Permalink | Reply
    Tags: "Answering the biggest question of all: why is there something rather than nothing?", , , , The Conversation   

    From The Conversation: “Answering the biggest question of all: why is there something rather than nothing?” 

    From The Conversation

    November 11, 2016 [Brought forward today: a never-answered question, the most important question.]
    Lloyd Strickland

    Credit: nienora/Shutterstock.

    In an ideal world, every extraordinary philosophical question would come with an extraordinary story telling the tale of how someone first thought of it. Unfortunately, we can only guess at what led a German philosopher, perhaps today best known for the Choco Leibniz biscuits later named after him, to come up with what is often described as the greatest philosophical question of all, namely: why is there something rather than nothing?

    The philosopher was Gottfried Wilhelm Leibniz, the man who also bequeathed us calculus and the binary system at the heart of modern computers. He died 300 years ago, on November 14, 1716.

    Many earlier thinkers had asked why our universe is the way it is, but Leibniz went a step further, wondering why there is a universe at all. The question is a challenging one because it seems perfectly possible that there might have been nothing whatsoever – no Earth, no stars, no galaxies, no universe. Leibniz even thought that nothing would have been “simpler and easier”. If nothing whatsoever had existed then no explanation would have been needed – not that there would have been anyone around to ask for an explanation, of course, but that’s a different matter.

    Leibniz thought that the fact that there is something and not nothing requires an explanation. The explanation he gave was that God wanted to create a universe – the best one possible – which makes God the simple reason that there is something rather than nothing.

    In the years since Leibniz’s death, his great question has continued to exercise philosophers and scientists, though in an increasingly secular age it is not surprising that many have been wary of invoking God as the answer to it.

    Quantum gods

    One kind of answer is to say that there had to be something; that it would have been impossible for there to have been nothing. This was the view of the 17th century philosopher Spinoza, who claimed that the entire universe, along with all of its contents, laws and events, had to exist, and exist in the way it does. Einstein, who counted himself a follower of Spinoza’s philosophy, appears to have held a similar view.

    Other scientists, such as theoretical physicist Lawrence Krauss in his popular book A Universe from Nothing (2012), offer a more nuanced version of this answer to Leibniz’s great question. Krauss claims that our universe arose naturally and inevitably from the operation of gravity on the quantum vacuum, empty space teeming with virtual particles that spontaneously pop into existence before disappearing again. Krauss’s theory implies that there could not have been nothing because there has always been something: first there was gravity and the quantum vacuum, and out of that was born the universe as we know it.

    Other theories in cosmology also seem to presuppose that there must always have been something in existence from which our universe arose, such as strings or membranes.

    The trouble with such scientific answers to the question of “why there is something and not nothing” is that it is not clear why we should think that there had to be gravity, or the quantum vacuum, or strings, or even a universe at all. It seems entirely possible that instead of these things there could have been absolutely nothing.

    What question?

    Another response to Leibniz’s great question is simply to deny that it has an answer. The philosopher Bertrand Russell took this line in a famous radio debate in 1948. He was asked why he thought the universe exists, and responded: “I should say that the universe is just there, and that’s all.”

    On this account, the universe would be what philosophers call a brute fact – something that does not have an explanation. Russell’s point was not that humans hadn’t yet explained why there is something rather than nothing but that there is no possible explanation. Those who believe that our universe is part of the larger multiverse also take this line, suggesting that the multiverse – and hence our universe – has no ultimate explanation. Although it is now a popular response to Leibniz’s great question to say the universe is ultimately inexplicable, it does have the drawback of being intellectually unsatisfying (though of course that does not mean the response is false).

    The most novel answer to Leibniz’s great question is to say that our universe exists because it should. The thinking here is that all possible universes have an innate tendency to exist, but that some have a greater tendency to exist than others. The idea is actually Leibniz’s, who entertained the thought that there may be a struggle for existence between possible worlds, with the very best one coming out on top as if through a process of virtual natural selection. In the end he did not accept the idea, and retreated instead to the more traditional view that the universe exists because God chose to make it so.

    But the idea of a virtual struggle among possible universes has appealed to some modern philosophers, who have followed it to its logical conclusion and claimed that the possible universe with the greatest tendency to exist – which might be because it is the best, or because it contains some important feature such as the conditions that permit life to arise – will actually bring itself into existence.

    According to this theory, our universe becomes actual not because God or anything else made it so but because it literally lifted itself out of non-existence and made itself actual. Weird? Yes. But we shouldn’t let that put us off. After all, an extraordinary philosophical question might just require an extraordinary answer.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 1:07 pm on December 27, 2020 Permalink | Reply
    Tags: "Quantum philosophy- 4 ways physics will challenge your reality", , Physicists Niels Bohr and Albert Einstein famously disagreed about what quantum mechanics meant for the nature of reality., , The Conversation   

    From The Conversation: “Quantum philosophy- 4 ways physics will challenge your reality” 

    From The Conversation

    December 24, 2020
    Peter Evans, The University of Queensland (AU)

    Credit: Shutterstock

    Imagine opening the weekend paper and looking through the puzzle pages for the Sudoku. You spend your morning working through this logic puzzle, only to realise by the last few squares there’s no consistent way to finish it.

    “I must have made a mistake,” you think. So you try again, this time starting from the corner you couldn’t finish and working back the other way. But the same thing happens again. You’re down to the last few squares and find there is no consistent solution.

    Working out the basic nature of reality according to quantum mechanics is a little bit like an impossible Sudoku. No matter where we start with quantum theory, we always end up at a conundrum that forces us to rethink the way the world fundamentally works. (This is what makes quantum mechanics so much fun.)

    Let me take you on a brief tour, through the eyes of a philosopher, of the world according to quantum mechanics.

    1. Spooky action-at-a-distance

    As far as we know, the speed of light (around 300 million metres per second) is the universe’s ultimate speed limit. Albert Einstein famously scoffed at the prospect of physical systems influencing each other faster than a light signal could travel between them.

    Back in the 1940s Einstein called this “spooky action-at-a-distance”. When quantum mechanics had earlier appeared to predict such spooky goings-on, he argued the theory must not yet be finished, and some better theory would tell the true story.

    We know today it is very unlikely there is any such better theory. And if we think the world is made up of well-defined, independent pieces of “stuff”, then our world has to be one where spooky action-at-a-distance between these pieces of stuff is allowed.

    2. Loosening our grip on reality

    “What if the world isn’t made of well-defined, independent pieces of ‘stuff’?” I hear you say. “Then can we avoid this spooky action?”

    Yes, we can. And many in the quantum physics community think this way, too. But this would be no consolation to Einstein.

    Einstein had a long-running debate with his friend Niels Bohr, a Danish physicist, about this very question. Bohr argued we should indeed give up the idea of the stuff of the world being well defined, so we can avoid spooky action-at-a-distance. In Bohr’s view, the world doesn’t have definite properties unless we’re looking at it. When we’re not looking, Bohr thought, the world as we know it isn’t really there.

    Physicists Niels Bohr (left) and Albert Einstein famously disagreed about what quantum mechanics meant for the nature of reality. Credit: Paul Ehrenfest

    But Einstein insisted the world has to be made of something whether we look at it or not, otherwise we couldn’t talk to each other about the world, and so do science. But Einstein couldn’t have both a well-defined, independent world and no spooky action-at-a-distance … or could he?

    3. Back to the future

    The Bohr-Einstein debate is reasonably familiar fare in the history of quantum mechanics. Less familiar is the foggy corner of this quantum logic puzzle where we can rescue both a well-defined, independent world and no spooky action. But we will need to get weird in other ways.

    If doing an experiment to measure a quantum system in the lab could somehow affect what the system was like before the measurement, then Einstein could have his cake and eat it too. This hypothesis is called “retrocausality”, because the effects of doing the experiment would have to travel backwards in time.

    If you think this is strange, you’re not alone. This is not a very common view in the quantum physics community, but it has its supporters. If you are faced with having to accept spooky action-at-a-distance, or no world-as-we-know-it when we don’t look, retrocausality doesn’t seem like such a weird option after all.

    4. No view from Olympus

    Imagine Zeus perched atop Mount Olympus, surveying the world. Imagine he were able to see everything that has happened, and will happen, everywhere and for all time. Call this the “God’s eye view” of the world. It is natural to think there must be some way the world is, even if it can only be known by an all-seeing God.

    Recent research [Nature Physics] in quantum mechanics suggests a God’s eye view of the world is impossible, even in principle. In certain strange quantum scenarios, different scientists can look carefully at the systems in their labs and make thorough recordings of what they see – but they will disagree about what happened when they come to compare notes. And there might well be no absolute fact of the matter about who’s correct – not even Zeus could know!

    So next time you encounter an impossible Sudoku, rest assured you’re in good company. The entire quantum physics community, and perhaps even Zeus himself, knows exactly how you feel.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 12:37 pm on December 27, 2020 Permalink | Reply
    Tags: "CERN: discovery sheds light on the great mystery of why the universe has less ‘antimatter’ than matter", , , CERN LHCb (CH), , , , , The Conversation   

    From The Conversation: “CERN- discovery sheds light on the great mystery of why the universe has less ‘antimatter’ than matter” 

    From The Conversation

    December 21, 2020 [Catching up.]
    Lars Eklund

    There’s a lot of matter in the universe; here, the Cat’s Paw Nebula of dust and gas. Credit: NASA.

    It’s one of the greatest puzzles in physics. All the particles that make up the matter around us, such as electrons and protons, have antimatter versions which are nearly identical, but with mirrored properties such as the opposite electric charge. When an antimatter and a matter particle meet, they annihilate in a flash of energy.

    If antimatter and matter are truly identical but mirrored copies of each other, they should have been produced in equal amounts in the Big Bang. The problem is that this would have caused it all to annihilate. But today, there’s nearly no antimatter left in the universe – it appears only in some radioactive decays and in a small fraction of cosmic rays. So what happened to it? Using the LHCb experiment at CERN to study the difference between matter and antimatter, we have discovered a new way that this difference can appear [Observation of CP violation in two-body B0(s)-meson decays to charged pions and kaons].

    The LHCb detector at CERN.

    The existence of antimatter was predicted by physicist Paul Dirac’s equation describing the motion of electrons in 1928. At first, it was not clear if this was just a mathematical quirk or a description of a real particle. But in 1932 Carl Anderson discovered an antimatter partner to the electron – the positron – while studying cosmic rays that rain down on Earth from space. Over the next few decades physicists found that all matter particles have antimatter partners.

    Scientists believe that in the very hot and dense state shortly after the Big Bang, there must have been processes that gave preference to matter over antimatter. This created a small surplus of matter, and as the universe cooled, all the antimatter was destroyed, or annihilated, by an equal amount of matter, leaving a tiny surplus of matter. And it is this surplus that makes up everything we see in the universe today.

    Exactly what processes caused the surplus is unclear, and physicists have been on the lookout for decades.

    Known asymmetry

    The behaviour of quarks, which are the fundamental building blocks of matter along with leptons, can shed light on the difference between matter and antimatter. Quarks come in many different kinds, or “flavours”, known as up, down, charm, strange, bottom and top plus six corresponding anti-quarks.

    The up and down quarks are what make up the protons and neutrons in the nuclei of ordinary matter, and the other quarks can be produced by high-energy processes – for instance by colliding particles in accelerators such as the Large Hadron Collider at CERN.

    Particles consisting of a quark and an anti-quark are called mesons, and there are four neutral mesons (B0S, B0, D0 and K0) that exhibit a fascinating behaviour. They can spontaneously turn into their antiparticle partner and then back again, a phenomenon that was observed for the first time in the 1960s. Since they are unstable, they will “decay” – fall apart – into other more stable particles at some point during their oscillation. This decay happens slightly differently for mesons compared with anti-mesons, which combined with the oscillation means that the rate of the decay varies over time.

    The rules for the oscillations and decays are given by a theoretical framework called the Cabibbo-Kobayashi-Maskawa (CKM) mechanism. It predicts that there is a difference in the behaviour of matter and antimatter, but one that is too small to generate the surplus of matter in the early universe required to explain the abundance we see today.

    This indicates that there is something we don’t understand and that studying this topic may challenge some of our most fundamental theories in physics.

    New physics?

    Our recent result from the LHCb experiment is a study of neutral B0S mesons, looking at their decays into pairs of charged K mesons. The B0S mesons were created by colliding protons with other protons in the Large Hadron Collider where they oscillated into their anti-meson and back three trillion times per second. The collisions also created anti-B0S mesons that oscillate in the same way, giving us samples of mesons and anti-mesons that could be compared.

    We counted the number of decays from the two samples and compared the two numbers, to see how this difference varied as the oscillation progressed. There was a slight difference – with more decays happening for one of the B0S mesons. And for the first time for B0S mesons, we observed that the difference in decay, or asymmetry, varied according to the oscillation between the B0S meson and the anti-meson.
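
    To make the shape of such a measurement concrete, here is a minimal, purely illustrative sketch (my own, not the LHCb analysis): with the measured B0S oscillation frequency of roughly 17.77 inverse picoseconds, and hypothetical CP-violation coefficients C and S, the decay-rate asymmetry between anti-mesons and mesons varies with decay time roughly as A(t) ≈ -C cos(Δm t) + S sin(Δm t).

```python
import math

# Illustrative model of a time-dependent CP asymmetry for B0_s mesons.
# dm_s is the measured oscillation (angular) frequency in 1/ps;
# C and S are HYPOTHETICAL asymmetry coefficients chosen only for illustration.
dm_s = 17.77     # ps^-1, approximate B0_s oscillation frequency
C, S = 0.2, 0.2  # hypothetical CP-violation coefficients

def asymmetry(t_ps: float) -> float:
    """Simplified decay-rate asymmetry between anti-B0_s and B0_s at decay time t (ps)."""
    return -C * math.cos(dm_s * t_ps) + S * math.sin(dm_s * t_ps)

# The oscillation frequency in cycles per second:
freq_hz = dm_s / (2 * math.pi) * 1e12
print(f"Oscillation frequency ~ {freq_hz:.2e} Hz (about three trillion per second)")

for t in (0.0, 0.1, 0.2, 0.3, 0.4):  # decay times in picoseconds
    print(f"t = {t:.1f} ps  ->  A(t) = {asymmetry(t):+.3f}")
```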

    In addition to being a milestone in the study of matter-antimatter differences, we were also able to measure the size of the asymmetries. This can be translated into measurements of several parameters of the underlying theory. Comparing the results with other measurements provides a consistency check, to see if the currently accepted theory is a correct description of nature. Since the small preference of matter over antimatter that we observe on the microscopic scale cannot explain the overwhelming abundance of matter that we observe in the universe, it is likely that our current understanding is an approximation of a more fundamental theory.

    Investigating this mechanism that we know can generate matter-antimatter asymmetries, probing it from different angles, may tell us where the problem lies. Studying the world on the smallest scale is our best chance to be able to understand what we see on the largest scale.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 2:44 pm on December 16, 2020 Permalink | Reply
    Tags: "Where does the Earth’s heat come from?", , , Detecting neutrinos generated by radioactive decay within the Earth-geoneutrinos- should give us an idea of what is happening at its deepest levels., Earth generates heat. The deeper you go the higher the temperature., Everything on earth is radioactive., Experimental searches for geoneutrinos way underground: KamLAND in Japan; Borexino in Italy; SNO+ in Canada; and Juno in China., , , Here we will be talking about “beta” decay where an election and a neutrino are emitted., Other heavier nuclei like deuterium and helium formed at the same time in a process called Big Bang nucleosynthesis., , , Scientific knowledge of deeper levels has been obtained through seismic measurements., The Big Bang produced matter in the form of protons; neutrons; electrons and neutrinos., The Conversation, When the stars died in supernovae these heavy elements spread out across space to be captured in the form of planets.   

    From The Conversation: “Where does the Earth’s heat come from?” 

    From The Conversation

    December 15, 2020
    François Vannucci

    The Piton de la Fournaise in eruption, 2015. Credit: Greg de Serra/Flickr, CC BY

    Earth generates heat. The deeper you go, the higher the temperature. At 25km down, temperatures rise as high as 750°C; at the core, it is said to be 4,000°C. Humans have been making use of hot springs as far back as antiquity, and today we use geothermal technology to heat our apartments. Volcanic eruptions, geysers and earthquakes are all signs of the Earth’s internal powerhouse.

    The average heat flow from the earth’s surface is 87 mW/m^2 – that is, 1/10,000th of the energy received from the sun, meaning the earth emits a total of 47 terawatts, the equivalent of several thousand nuclear power plants. The source of the earth’s heat has long remained a mystery, but we now know that most of it is the result of radioactivity.
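
    As a quick consistency check on those figures (my own arithmetic, not the author's), multiplying the quoted average heat flux by the Earth's surface area gives a total of roughly 44 terawatts, the same order as the ~47 TW quoted above:

```python
import math

# Average geothermal heat flux times Earth's surface area ~ total heat output.
flux = 87e-3            # W/m^2 (87 mW/m^2, as quoted in the article)
earth_radius = 6.371e6  # m, mean radius of the Earth
surface_area = 4 * math.pi * earth_radius**2

total_watts = flux * surface_area
print(f"Total heat flow ~ {total_watts / 1e12:.0f} TW")
# ~44 TW, consistent with the article's ~47 TW figure.
```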

    The birth of atoms

    To understand where all this heat is coming from, we have to go back to the birth of the atomic elements.

    The Big Bang produced matter in the form of protons, neutrons, electrons, and neutrinos. It took around 370,000 years for the first atoms to form – protons attracted electrons, producing hydrogen. Other, heavier nuclei, like deuterium and helium, formed at the same time, in a process called Big Bang nucleosynthesis.

    The creation of heavy elements was far more arduous. First, stars were born and heavy nuclei formed via accretion in their fiery crucible. This process, called stellar nucleosynthesis, took billions of years. Then, when the stars died, these elements spread out across space to be captured in the form of planets.

    The earth’s composition is therefore highly complex. Luckily for us, and our existence, it includes all the natural elements, from the simplest atom, hydrogen, to heavy atoms such as uranium, and everything in between – carbon, iron, the entire periodic table. Inside the bowels of the earth is an entire panoply of elements, arranged within various onion-like layers.

    Our planet contains all the elements of the periodic table. Credit: Sandbh/Wikipedia, CC BY

    We know little about the inside of our planet. The deepest mines reach down 10km at the most, while the earth has a radius of 6,500km. Scientific knowledge of deeper levels has been obtained through seismic measurements. Using this data, geologists divided the earth’s structure into various strata, with the core at the center, solid on the inside and liquid on the outside, followed by the lower and upper mantles and, finally, the crust. The earth is made up of heavy, unstable elements and is therefore radioactive, meaning there is another way to find out about its depths and understand the source of its heat.

    Drugs and cosmetics containing a small dose of radium, early 20th century. Credit: Rama/Wikimedia, CC BY-SA

    What is radioactivity?

    Radioactivity is a common and inescapable natural phenomenon. Everything on earth is radioactive – that is to say, everything spontaneously produces elementary particles (humans emit a few thousand per second). In Marie Curie’s day, no one was afraid of radioactivity.

    On the contrary, it was said to have beneficial effects: beauty creams were certified radioactive and contemporary literature extolled the radioactive properties of mineral water. Maurice Leblanc wrote of a thermal spring saving his protagonist Arsène Lupin during one of his adventures:

    “The water contained such energy and power as to make it a veritable fountain of youth, properties arising from its incredible radioactivity.” (Maurice Leblanc, “La demoiselle aux yeux verts”, 1927)

    There are various kinds of radioactivity, each involving the spontaneous release of particles and emitting energy that can be detected in the form of heat deposits. Here, we will be talking about “beta” decay, where an electron and a neutrino are emitted. The electron is absorbed as soon as it is produced, but the neutrino has the surprising ability to penetrate a wide range of materials. The whole of the Earth is transparent to neutrinos, so detecting neutrinos generated by radioactive decay within the Earth should give us an idea of what is happening at its deepest levels.

    These kinds of particles are called geoneutrinos, and they provide an original way to investigate the depths of the Earth. Although detecting them is no easy matter, since neutrinos interact little with matter, some detectors are substantial enough to perform this kind of research.

    Geoneutrinos mainly arise from heavy elements with very long half-lives, whose properties are now thoroughly understood through lab studies: chiefly uranium, thorium and potassium. The decay of one uranium-238 nucleus, for example, releases an average of 6 neutrinos, and 52 megaelectronvolts of energy carried by the released particles that then lodge in matter and deposit heat. Each neutrino carries around two megaelectronvolts of energy. According to standardized measures, one megaelectronvolt is equivalent to 1.6 × 10^-13 joules, so it would take around 10^25 decays per second to reach the earth’s total heat. The question is, can these neutrinos be detected?
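
    The arithmetic behind that estimate can be sketched as follows (my own back-of-the-envelope version of the numbers quoted above, assuming the roughly 12 MeV carried off by the six neutrinos escapes rather than heating the Earth):

```python
# Rough estimate of the number of decays per second needed to supply
# the Earth's heat, using the figures quoted in the article.
MEV_TO_J = 1.602e-13  # joules per megaelectronvolt

energy_per_decay_mev = 52.0    # total energy released per U-238 decay chain
neutrinos_per_decay = 6
energy_per_neutrino_mev = 2.0  # carried away by each neutrino, not deposited as heat

heat_per_decay_j = (energy_per_decay_mev
                    - neutrinos_per_decay * energy_per_neutrino_mev) * MEV_TO_J

earth_heat_w = 47e12  # ~47 TW total heat flow
decays_per_second = earth_heat_w / heat_per_decay_j
print(f"Decays needed: ~{decays_per_second:.1e} per second")  # ~7e24, i.e. of order 10^25
```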

    Detecting geoneutrinos

    In practice, we have to take aggregate measurements at the detection site of flows coming from all directions. It is difficult to ascertain the exact source of the flows, since we cannot measure their direction. We have to use models to create computer simulations. Knowing the energy spectrum of each decay mode and modeling the density and position of the various geological strata affecting the final result, we get an overall spectrum of expected neutrinos which we then deduct from the number of events predicted for a given detector. This number is always very low – only a handful of events per kiloton of detector per year.

    Two recent experiments have added to the research: KamLAND, a detector weighing 1,000 metric tons underneath a Japanese mountain, and Borexino, which is located in a tunnel under the Gran Sasso mountain in Italy and weighs 280 metric tons.

    KamLAND, at the Kamioka Observatory, is located in a mine in Hida, Japan.

    The Borexino solar neutrino detector (INFN) at the Laboratori Nazionali del Gran Sasso, beneath the Gran Sasso mountain in Italy.

    Both use “liquid scintillators”. To detect neutrinos from the earth or the cosmos, you need a detection method that is effective at low energies; this means exciting atoms in a scintillating liquid. Neutrinos interact with protons, and the resulting particles emitted produce observable light.

    KamLAND has announced more than 100 events and Borexino around 20 that could be attributed to geoneutrinos, with an uncertainty factor of 20-30%. We cannot pinpoint their source, but this overall measurement – while fairly rough – is in line with the predictions of the simulations, within the limits of the low statistics obtained.

    Therefore, the traditional hypothesis of a kind of nuclear reactor at the center of the earth, consisting of a ball of fissioning uranium like those in nuclear power plants, has now been excluded. Fission is not a spontaneous radioactivity but is stimulated by slow neutrons in a chain reaction.

    There are now new, more effective detectors being developed: Canada’s SNO+, and China’s Juno, which will improve our knowledge of geoneutrinos.

    The SNO+ experiment uses the SNOLAB facility in Canada to detect geoneutrinos, among other things. Credit: SNOLAB.

    The JUNO underground neutrino observatory at Kaiping, Jiangmen, in southern China.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 6:10 pm on December 9, 2020 Permalink | Reply
    Tags: "Fragments of energy – not waves or particles – may be the fundamental building blocks of the universe", , , The Conversation   

    From The Conversation: “Fragments of energy – not waves or particles – may be the fundamental building blocks of the universe” 

    From The Conversation

    December 9, 2020
    Larry M. Silverberg

    New mathematics has shown that lines of energy can be used to describe the universe. Credit: zf L/Moment via Getty Images.

    “Matter is what makes up the universe, but what makes up matter? This question has long been tricky for those who think about it – especially for the physicists. Reflecting recent trends in physics, my colleague Jeffrey Eischen and I have described an updated way to think about matter. We propose that matter is not made of particles or waves, as was long thought, but – more fundamentally – that matter is made of fragments of energy [Physics Essays].

    In ancient times, five elements were thought to be the building blocks of reality. Credit: IkonStudio/iStock via Getty Images.

    From five to one

    The ancient Greeks conceived of five building blocks of matter – from bottom to top: earth, water, air, fire and aether. Aether was the matter that filled the heavens and explained the rotation of the stars, as observed from Earth’s vantage point. These were thought to be the most basic elements from which one could build up a world. Their conceptions of the physical elements did not change dramatically for nearly 2,000 years.

    Then, about 300 years ago, Sir Isaac Newton introduced the idea that all matter exists at points called particles. One hundred fifty years after that, James Clerk Maxwell introduced the electromagnetic wave – the underlying and often invisible form of magnetism, electricity and light. The particle served as the building block for mechanics and the wave for electromagnetism – and the public settled on the particle and the wave as the two building blocks of matter. Together, the particles and waves became the building blocks of all kinds of matter.

    This was a vast improvement over the ancient Greeks’ five elements, but was still flawed. In a famous series of experiments, known as the double-slit experiments, light sometimes acts like a particle and at other times acts like a wave. And while the theories and math of waves and particles allow scientists to make incredibly accurate predictions about the universe, the rules break down at the largest and tiniest scales.

    Einstein proposed a remedy in his theory of general relativity. Using the mathematical tools available to him at the time, Einstein was able to better explain certain physical phenomena and also resolve a longstanding paradox relating to inertia and gravity. But instead of improving on particles or waves, he eliminated them as he proposed the warping of space and time.

    Using newer mathematical tools, my colleague and I have demonstrated a new theory that may accurately describe the universe. Instead of basing the theory on the warping of space and time, we considered that there could be a building block that is more fundamental than the particle and the wave. Scientists understand that particles and waves are existential opposites: A particle is a source of matter that exists at a single point, and waves exist everywhere except at the points that create them. My colleague and I thought it made logical sense for there to be an underlying connection between them.

    A new building block of matter can model both the largest and smallest of things – from stars to light. Credit: Christopher Terrell, CC BY-ND.

    Our theory begins with a new fundamental idea – that energy always “flows” through regions of space and time.

    Think of energy as made up of lines that fill up a region of space and time, flowing into and out of that region, never beginning, never ending and never crossing one another.

    Working from the idea of a universe of flowing energy lines, we looked for a single building block for the flowing energy. If we could find and define such a thing, we hoped we could use it to accurately make predictions about the universe at the largest and tiniest scales.

    There were many building blocks to choose from mathematically, but we sought one that had the features of both the particle and wave – concentrated like the particle but also spread out over space and time like the wave. The answer was a building block that looks like a concentration of energy – kind of like a star – having energy that is highest at the center and that gets smaller farther away from the center.

    Much to our surprise, we discovered that there were only a limited number of ways to describe a concentration of energy that flows. Of those, we found just one that works in accordance with our mathematical definition of flow. We named it a fragment of energy.


    Fragment of Energy | The Primitive Stuff from Which Everything Originates. Credit: Ricky Puyana.

    For the math and physics aficionados, it is defined as A = -α/r, where α is intensity and r is the distance function.
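
    As a purely illustrative sketch (my own, with an arbitrary value for the intensity α, which is not given in the article), evaluating the function shows the behaviour described above: the magnitude is largest near the centre and falls away with distance.

```python
# Illustrative evaluation of the "fragment of energy" profile A = -alpha / r.
alpha = 1.0  # arbitrary intensity, chosen only for illustration

def fragment(r: float) -> float:
    """Value of the fragment function at radial distance r (r > 0)."""
    return -alpha / r

for r in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"r = {r:>5}: A = {fragment(r):+.3f}")
# |A| is largest near the centre and tends to zero far away --
# concentrated like a particle, yet spread out over space like a wave.
```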

    Using the fragment of energy as a building block of matter, we then constructed the math necessary to solve physics problems. The final step was to test it out.

    Back to Einstein, adding universality

    More than 100 years ago, Einstein had turned to two legendary problems in physics to validate general relativity: the ever-so-slight yearly shift – or precession – in Mercury’s orbit, and the tiny bending of light as it passes the Sun.

    These problems were at the two extremes of the size spectrum. Neither wave nor particle theories of matter could solve them, but general relativity did. The theory of general relativity warped space and time in such a way as to cause the trajectory of Mercury to shift and light to bend in precisely the amounts seen in astronomical observations.

    If our new theory was to have a chance at replacing the particle and the wave with the presumably more fundamental fragment, we would have to be able to solve these problems with our theory, too.

    For the precession-of-Mercury problem, we modeled the Sun as an enormous stationary fragment of energy and Mercury as a smaller but still enormous slow-moving fragment of energy. For the bending-of-light problem, the Sun was modeled the same way, but the photon was modeled as a minuscule fragment of energy moving at the speed of light. In both problems, we calculated the trajectories of the moving fragments and got the same answers as those predicted by the theory of general relativity. We were stunned.
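
    For reference, the two benchmark numbers involved can be reproduced from the standard general-relativity formulas. The sketch below uses those textbook formulas (not the authors' fragment-of-energy calculation) and yields roughly 43 arcseconds of precession per century and 1.75 arcseconds of light bending.

```python
import math

# Standard general-relativity benchmarks (textbook formulas, not the
# fragment-of-energy calculation described in the article).
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg
R_sun = 6.96e8    # solar radius, m (closest approach for grazing light)

# Mercury's orbit
a = 5.79e10       # semi-major axis, m
e = 0.2056        # eccentricity
period_days = 87.97

# Perihelion precession per orbit: 6*pi*G*M / (a*(1-e^2)*c^2)
dphi = 6 * math.pi * G * M_sun / (a * (1 - e**2) * c**2)  # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = math.degrees(dphi * orbits_per_century) * 3600
print(f"Mercury precession: ~{arcsec_per_century:.0f} arcseconds per century")  # ~43

# Deflection of light grazing the Sun: 4*G*M / (R*c^2)
deflection = 4 * G * M_sun / (R_sun * c**2)
print(f"Light bending at the Sun's limb: ~{math.degrees(deflection) * 3600:.2f} arcseconds")  # ~1.75
```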

    Our initial work demonstrated how a new building block is capable of accurately modeling bodies from the enormous to the minuscule. Where particles and waves break down, the fragment of energy building block held strong. The fragment could be a single potentially universal building block from which to model reality mathematically – and update the way people think about the building blocks of the universe.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 1:00 pm on December 7, 2020 Permalink | Reply
    Tags: "Peatlands keep a lot of carbon out of Earth’s atmosphere but that could end with warming and development", , , The Conversation   

    From The Conversation and Texas A&M University: “Peatlands keep a lot of carbon out of Earth’s atmosphere but that could end with warming and development” 

    From The Conversation

    and

    Texas A&M logo

    Texas A&M

    December 7, 2020
    Julie Loisel

    More valuable than it looks. Credit: David Stanley/Flickr, CC BY.

    Peatlands [International Peat Society] are a type of wetland where dead plant material doesn’t fully decompose because it’s too soggy. In these ecosystems, peat builds up as spongy dark soil that’s sometimes referred to as sod or turf. Over thousands of years, yards-thick layers of peat accumulate and trap huge amounts of carbon, helping to cool the climate on a global scale.

    But that might not be true for much longer. Warming temperatures and human actions, such as draining bogs and converting them for agriculture, threaten to turn the world’s peatlands from carbon reservoirs to carbon sources.

    In a newly published study [Nature Climate Change], our multidisciplinary team of 70 scientists from around the world analyzed existing research and surveyed 44 leading experts to identify factors that could change peatlands’ carbon balance now and in the future. We found that permafrost degradation, warming temperatures, rising sea levels and drought are causing many peatlands around the world to lose some of their stored carbon. This is in addition to rapid degradation caused by human activity. And unless steps are taken to protect peatlands, carbon loss could accelerate.

    Peatlands are found in an estimated 180 countries. Many of them have not been recognized and are not yet properly mapped. Credit: Levi Westerveld/GRID-Arendal, CC BY-ND.

    From carbon sink to carbon source

    Although they only occupy 3% of the global land area, peatlands contain about 25% of global soil carbon — twice as much as the world’s forests. Peatlands exist on every continent, even in Antarctica. In the U.S. they are found in many states, including Maine, Pennsylvania, Washington and Wisconsin. These ecosystems form where partially decayed organic matter accumulates in cold soil that is nearly always wet, which dramatically slows decomposition.


    What is Peat? | Distiller.
    Humans have used peat for centuries as a fuel, and also to flavor whiskey.

    But now climate change is altering those conditions. For example, in many regions of the Arctic, rapid permafrost thawing [Nature] promotes microbial activity that releases greenhouse gases into the atmosphere. These microbes feed off carbon-rich peats that were once frozen.

    Massive peatland fires also contribute. Recent wildfires like those in Russia are known to release as much carbon in a few months as total human carbon dioxide emissions in an entire year. And these fires are especially tricky to put out. Embers within the dense organic matter can reignite many months or even years later.

    Human activities are also increasing greenhouse gas releases from these carbon-rich ecosystems. In the United Kingdom, for example, extracting peat for use in gardening has caused peatlands to emit an estimated 16 million tons of carbon every year – roughly equivalent to the annual greenhouse gas emissions from over 12 million cars.

    In Indonesia and Malaysia, as fertile land becomes increasingly scarce, peatlands are being burned, drained, and repurposed. Already, most peatlands in Indonesia have been destroyed in order to build palm oil plantations.

    Peat cut into blocks and drying on racks in Tierra del Fuego, Argentina. Credit: Julie Loisel, CC BY-ND.

    The World Resources Institute estimates that in Indonesia and Malaysia, peatland draining results in total annual emissions equal to those of nearly 70 coal plants. These activities also endanger vulnerable animal populations, such as orangutans and various species of freshwater fish. Peatland degradation due to human activity accounts for 5-10% of annual carbon dioxide emissions from human activity, despite these zones’ tiny geographic footprint.

    Quantifying peatland carbon

    Predicting how much carbon will be released from peatlands worldwide is hard to do, especially because no models can adequately represent these ecosystems and the many factors that influence their carbon balance.

    Peatlands are not included in most earth system models that scientists use to make future climate change projections. There is a long-held view that peatlands are minor players in the global carbon cycle on a year-to-year basis, but our study and many others show that climate change and human intervention are making these ecosystems very dynamic. Our study highlights the need to integrate peatlands into these models; we also hope it can help direct new research.

    Even though models are not ready, decisions need to be made now about how to manage peatlands. That’s why we surveyed experts as a first step towards predicting the fate of peat carbon worldwide.

    Based on their responses, we estimate that 100 billion tons of carbon could be emitted from peatlands by 2100 – an amount equivalent to about 10 years of emissions from all human activities, including burning fossil fuels and clearing forests. The experts we consulted have not reached a consensus, and our estimate is highly uncertain: Net changes in peat carbon over the next 80 years could range from a gain of 103 billion tons to a loss of 360 billion tons.
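
    That comparison implies annual human emissions of roughly 10 billion tons of carbon per year, which is in line with commonly cited figures; here is a trivial sketch of the arithmetic (my own, using the article's round numbers):

```python
# Comparing the surveyed estimate of peatland carbon loss with annual
# human emissions (round numbers, in billions of tons of carbon).
peatland_loss_by_2100 = 100.0  # Gt C, central estimate from the expert survey
annual_human_emissions = 10.0  # Gt C per year, approximate (fossil fuels plus land use)

years_equivalent = peatland_loss_by_2100 / annual_human_emissions
print(f"Equivalent to ~{years_equivalent:.0f} years of human emissions")
```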

    Not every region will be affected the same way. High-latitude peatlands might see an increase in carbon storage under a warming climate because of increased plant growth and greater peat accumulation. Tropical peats, on the other hand, are more likely to dry out and burn due to warming temperatures and human activity. These factors and human choices about peatland use will affect whether these areas become carbon sources or sinks in the future.

    Overall, our results suggest that carbon releases will surpass carbon gains in the coming years, primarily because of human impacts in tropical peatlands. This switch from carbon sink to carbon source will feed a positive feedback loop, with peatlands releasing carbon that makes Earth’s climate warmer, which makes peatlands release more carbon, and so on.

    Despite the uncertainty in our findings, we believe our results show that peatlands should be included in climate models, and that nations should take steps to preserve them.

    Toward sustainable use

    A balance must be achieved between wise peatland use and local economic needs. Given how much carbon peatlands hold and how vulnerable they are, many surveyed experts believe people soon will adopt more sustainable practices for managing them. But others are not so optimistic. In regions such as the Amazon and the Congo basins, where large peatland complexes were recently discovered, it is critical to take action to preserve them.

    Peatlands should also be considered in integrated assessment models that researchers use to understand climate change impacts and options for mitigating them. Models that project future socioeconomic change and carbon emission pathways could help develop incentives such as peatland carbon pricing and sustainable use practices. This would change the way these ecosystems are valued and managed.

    The first step, however, is to raise awareness around the world of this precious natural resource and the consequences of continuing to exploit it.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Texas A&M University, is located in College Station, Texas, about 90 miles northwest of Houston and within a two to three-hour drive from Austin and Dallas.
    Home to more than 50,000 students, ranking as the sixth-largest university in the country, with more than 370,000 former students worldwide.
    Holds membership in the prestigious Association of American Universities, one of only 62 institutions with this distinction.
    More than $820 million in research expenditures generated by faculty-researchers
    Has an endowment valued at more than $5 billion, which ranks fourth among U.S. public universities and 10th overall.

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 12:05 pm on December 6, 2020 Permalink | Reply
    Tags: "The Atlantic- The driving force behind ocean circulation and our taste for cod", , , The Conversation   

    From The Conversation: “The Atlantic- The driving force behind ocean circulation and our taste for cod” 

    From The Conversation

    December 6, 2020

    Suzanne OConnell
    Pascal Le Floc’h

    “Did the Atlantic close and then reopen?” That was the question posed in a 1966 paper by the Canadian geophysicist J. Tuzo Wilson.

    The answer? Yes, over millions of years. And it was the breakup of the supercontinent Pangea, starting some 180 million years ago, that began creating the Atlantic Ocean basin as we know it today.

    Pangea with modern borders http://www.visualcapitalist.com.

    Earth’s surface is made up of intersecting tectonic plates.

    The tectonic plates of the world were mapped in 1996, USGS.

    For much of our planet’s history these plates have been bumping into one another, forming chains of mountains and volcanoes, and then rifting apart, creating oceans.

    When Pangea existed it would have been possible to walk from modern Connecticut or Georgia in the U.S. to what is now Morocco in Africa. Geologists don’t know what causes continents to break up, but we know that when rifting occurs, continents thin and pull apart. Magma intrudes into the continental rocks.

    The oldest portions of crust in the Atlantic Ocean lie off of North America and Africa, which were adjacent in Pangea. They show that these two continents separated about 180 million years ago, forming the North Atlantic Ocean basin. The rest of Africa and South America rifted apart about 40 million or 50 million years later, creating what is now the South Atlantic Ocean basin.

    Magma wells upward from beneath the ocean floor at the Mid-Atlantic Ridge, creating new crust where the plates move apart.

    Mid-Atlantic Ridge. Britannica.

    Some of this ocean crust is younger than you or me, and more is being created today. The Atlantic is still growing.

    3
    This map shows how ocean crust rises upward at rifts between tectonic plates and spreads outward. In the Atlantic, light blue crust began forming 180 million years ago when North America and Africa rifted apart. Green crust was produced 128 million to 84 million years ago when Africa and South America rifted apart. Dark red crust is the youngest, formed up to 10 million years ago. Credit: NOAA/NatGeo.

    Winds and currents

    Once the ocean basin formed after Pangea’s breakup, water entered from rain and rivers. Winds began to move the surface water.

    Thanks to the unequal heating of Earth’s surface and its rotation, these winds blow in different directions.


    What is global circulation?

    The Earth is warmer at the equator than near the poles, which puts air in motion. At the equator the planet’s heat causes moist air to warm, expand and rise. At the polar regions cold, dry, heavier air descends.

    This motion creates “cells” of rising and descending air that control global wind patterns. Earth's rotation dictates that different parts of the globe travel at different speeds. At a pole, a molecule of air would essentially spin in place, while a particle of air at the equator in Quito, Ecuador, would travel roughly 24,900 miles (40,075 kilometers) – Earth's full circumference – in a single day.
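    A rough way to see this speed difference is to work out the eastward speed of a point fixed to Earth's surface at a given latitude, using the planet's circumference and its 24-hour rotation. The short sketch below is purely illustrative: the 6,371 km mean radius and the simple cosine-of-latitude formula are textbook values assumed here, not figures from the article.

        import math

        EARTH_RADIUS_KM = 6371.0   # mean Earth radius, assumed for illustration
        DAY_HOURS = 24.0           # one rotation

        def rotation_speed_kmh(latitude_deg):
            """Eastward speed of a point fixed to Earth's surface at the given latitude."""
            circumference_km = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(latitude_deg))
            return circumference_km / DAY_HOURS

        for lat in (0, 30, 60, 90):   # equator, subtropics, high latitude, pole
            print(f"{lat:>2} deg latitude: {rotation_speed_kmh(lat):6.0f} km/h")

    At the equator this works out to roughly 1,670 km/h (about 40,000 km per day); at the poles it drops to essentially zero.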

    This different movement causes the air cells to break up. For example, in the Hadley Cell, tropical air, which rose at the equator, cools in the upper atmosphere and descends at about 30 degrees north and south latitude – roughly, near the northern and southern tips of Africa. Earth’s rotation turns this descending air, creating trade winds that flow from east to west across the Atlantic and back to the equator. At higher latitudes in the North and South Atlantic, the same forces create mid-latitude cells with winds that blow from west to east.

    4
    Earth’s atmospheric circulation, showing the Hadley, midlatitude and polar cells, and the wind patterns they produce. Credit: NASA/Wikimedia.

    As air flows across the ocean’s surface, it moves water. This creates a circulating system of gyres, or rotating currents, that move clockwise in the North Atlantic and counterclockwise in the South Atlantic. These gyres are part of a global conveyor belt that transports and redistributes heat and nutrients throughout the global ocean.

    The Gulf Stream, which follows the U.S. East Coast before heading east across the North Atlantic, is part of the North Atlantic gyre. Because it carries warm water – and therefore heat – northward, the current is easy to see on false-color infrared satellite images. Like a river, it also meanders.

    Moving water masses

    These wind-blown surface currents are important for many reasons, including human navigation, but they affect only about 10% of the Atlantic’s volume. Most of the ocean operates in a different system, which is called thermohaline circulation because it is driven by heat (thermo) and salt (saline).

    Like many processes in the ocean, salinity is tied to weather and circulation. For example, trade winds blow moist air from the Atlantic across Central America and into the Pacific Ocean, which concentrates salinity in the Atlantic waters left behind. As a result, the Atlantic is slightly saltier than the Pacific.

    This extra salinity makes the Atlantic the driving force in ocean circulation. As currents move surface waters poleward, the water cools and becomes denser. Eventually, at high latitudes, this cold, salty water sinks to the ocean floor. From there it flows along the bottom back toward the opposite pole, creating density-driven currents with names such as North Atlantic Deep Water and Antarctic Bottom Water.
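    The “thermo” and “haline” parts can be illustrated with a simplified linear equation of state for seawater, ρ ≈ ρ₀[1 − α(T − T₀) + β(S − S₀)]: density rises as water cools and as it gets saltier. The sketch below is a minimal illustration; the coefficient values are typical textbook numbers assumed here, not figures from the article.

        # Minimal sketch: why cold, salty water sinks (linearized equation of state).
        # All coefficients below are illustrative assumptions.
        RHO0 = 1027.0        # reference density, kg/m^3
        T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (g/kg)
        ALPHA = 1.7e-4       # thermal expansion coefficient, 1/deg C
        BETA = 7.6e-4        # haline contraction coefficient, kg/g

        def density(temp_c, salinity):
            return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

        print(f"warm tropical surface water: {density(25.0, 36.5):.2f} kg/m^3")
        print(f"cold high-latitude water:    {density(2.0, 35.0):.2f} kg/m^3")
        # The colder water comes out denser, so it sinks and feeds the deep currents.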

    5
    Global thermohaline circulation is driven primarily by the formation and sinking of deep water. It moves heat from the equator toward the poles. Credit: Hugo Ahlenius, UNEP/GRID-Arendal, CC BY-ND.

    As these deep currents move, they collect surface organisms that have died and fallen to the bottom. With time, the organisms decompose, filling the deep water with essential nutrients.

    In some locations this nutrient-rich water rises back up to the surface, a process called upwelling. When it reaches the ocean's sunlit zone, within 650 feet (200 meters) of the surface, tiny organisms called phytoplankton feed on the nutrients. In turn, they become food for zooplankton and larger organisms higher up the food chain. Some of the Atlantic's richest fishing grounds, such as the Grand Banks to the southeast of Newfoundland in Canada and the Falkland/Malvinas Islands in the South Atlantic, are upwelling areas.

    Much about the Atlantic remains to be discovered, especially in a changing climate. Will rising carbon dioxide levels and resulting ocean acidification disrupt marine food chains? How will a warmer ocean affect circulation and hurricane intensity? What we do know is that the Atlantic’s winds, currents and sea life are intricately connected, and disrupting them can have far-reaching effects.

    Atlantic cod fishing

    Now, let’s head back up to the surface, and into the wake of the first sailboats that set out to fish for cod along the Canadian coast. These pioneering ships paved the way for greater exploitation of the Atlantic’s wealth of fishery resources – particularly cod. Communities of people greatly benefited from these resources over the following centuries, until the threat of overfishing became impossible to ignore.

    The history of fishing in the Atlantic is often said to trace back to the discovery of the cod-rich Canadian waters of Newfoundland, attributed to the Italian navigator and explorer John Cabot, who led an English expedition there in 1497. From the 16th to the 20th centuries, cod-fishing mania swept European fleets. Between 1960 and 1976, ships from Spain, Portugal and France were responsible for 40% of the catch. In 1977, however, Canada extended its jurisdiction 200 nautical miles offshore, taking possession of the Newfoundland cod fisheries, which accounted for 70% of cod production in the Northwest Atlantic.

    7
    Fishermen aboard a boat with a haul of cod. Credit: Georg Kristiansen/Shutterstock.

    For five centuries, the only thing that mattered was the size of the catch. This drove innovations in the design and equipment of fishing boats. The sailboat cod-fishing industry in Newfoundland and Iceland hit its peak in the late 19th century; from 1800 to 1900, France – the main fishing operator alongside Britain – outfitted more than 30,000 schooners.

    At the end of the 19th century, the rowboat was replaced by the dory, a small (two-person) boat from North America, which sharply increased production. A plaque in the French Museum of Fisheries in Normandy – dedicated to the history of commercial cod fishing – comments on the dory's safety, noting that the hazard of losing a man overboard was “built into the mindset of cod-fishing.” But by the early 20th century, steamers had begun to replace these boats.

    New productivity gains came with new techniques, such as the shift from side trawling to stern trawling in the 1950s and 1960s, alongside reduced crew sizes.

    The biggest cod catch, at nearly 1.9 million tons, was recorded in 1968. After that, overall production declined year after year, reaching less than a million tons in 1973. Numbers slowly picked up again in the 1980s after European fleets were excluded from the Newfoundland area, but this comeback was short-lived. On July 2, 1992, the Canadian government announced a moratorium on cod fishing, confirming that populations had collapsed. This collapse in the northwestern Atlantic has since become a textbook example of the risks of overfishing.

    The wider catch

    Seafood production in the Atlantic rose from an estimated 9 million tons in 1950 to more than 23 million tons in both 1980 and 2000, and stood at 22 million tons in 2018. Overall production has remained broadly stable since 1970.

    In the North Atlantic, whiting and herring are the two most fished species by tonnage. Sardine and sardinella hold the top spots in the Central Atlantic. In the South Atlantic, mackerel and Argentine hake dominate the catch.

    The Food and Agriculture Organization of the United Nations (FAO) has identified six production areas in the Atlantic Ocean, divided up cardinally, as shown on the map below. In 1950, these areas together accounted for 52% of the worldwide catch. From 1960 to 1980, that share fell to between 37% and 43%. Since 1990, about one-quarter of global seafood production has been caught by fleets operating in the Atlantic.

    Nearly 60% of seafood production now comes from fisheries in the Pacific Ocean, and 15% from the Indian Ocean.

    7
    FAO has identified six production areas in the Atlantic Ocean. Credit: Le Floc’h (adapted from FAO’s map, 2003), CC BY-NC-ND.

    The northeastern Atlantic (FAO Area 27) covers fisheries operated by European fleets. This area is, by far, the most bountiful of the entire Atlantic zone, with a total catch of 9.6 million tons in 2018. Norway took the lead for seafood production by tonnage (2.5 million tons) in 2018, ahead of Spain (just under a million tons). It is also the most diversified zone, with more than 450 commercial species.

    The northwestern Atlantic (FAO Area 21) stretches from the Rhode Island and Gulf of Maine coastlines in the U.S. to the Canadian coasts, including the Gulf of Saint Lawrence and the waters of Newfoundland and Labrador. Cod has dominated the history of fishing in this area since the 16th century. The biggest overall catch was recorded in 1970, at more than 4 million tons. But, after 1990, that number dropped, as a consequence of the 1992 moratorium. Since 2000, the northwest area has accounted for around 10% of the Atlantic catch (1.7 million tons in 2018). There are 220 monitored species in the area.

    Eastern Central Atlantic (FAO Area 34) stretches from the coast of Morocco down to that of the Democratic Republic of the Congo (formerly Zaire). Species caught include sardine, anchovy and herring. In 2018, this area accounted for a quarter of the total seafood production of all six Atlantic areas. That same year, West African fisheries recorded the second-biggest catches after the northeastern Atlantic. The high number of commercial species identified by the FAO – nearly 300 – sets this region apart.

    Western Central Atlantic (FAO Area 31) stretches from the southern U.S. to the north of Brazil, including the Caribbean. Since 1970, catch size has remained between 1.3 million and 1.8 million tons (5% to 10% of the entire Atlantic catch). Lobster and shrimp are the target species in the Caribbean waters.

    Southeast Atlantic (FAO Area 47) connects the African coastlines of Angola, Namibia and South Africa. Production surpassed 2 million tons in 1970 and 1980, accounting for 10% of the total Atlantic catch. Since 1990, the catch has been stable, plateauing at around 1.5 million tons. It's the least diversified region in the Atlantic, with 160 species monitored by the FAO. Mackerel, hake and anchovy make up 59% of total production.

    Southwest Atlantic (FAO Area 41), which stretches along the coastlines of Brazil, Uruguay and Argentina in South America, was the lowest-producing of the six areas until 1980, recording no more than 5% of the total Atlantic catch. From 1990 onward, however, its fisheries produced 1.8 million to 2 million tons (8% to 10% of the overall catch), a rise that can be attributed to the Argentinian government's investment in fishing fleets in the 1980s. Some 225 commercial species are statistically monitored, with 52% of total production coming from hake, shortfin squid and shrimp.
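    As a rough cross-check of the area-by-area figures above against the roughly 22 million tons quoted earlier for the whole Atlantic in 2018, the tonnages can simply be added up. Where the text gives only a range or a share, the sketch below assumes a midpoint or derived value purely for illustration.

        # Rough 2018 catch by FAO Atlantic area, in millions of tons.
        # Entries marked "assumed" are midpoints/estimates where the text gives a range or share.
        catch_2018_mt = {
            "27 Northeast":       9.6,  # stated in the text
            "21 Northwest":       1.7,  # stated in the text
            "34 Eastern Central": 5.5,  # assumed: "a quarter" of a ~22 Mt total
            "31 Western Central": 1.5,  # assumed midpoint of 1.3-1.8 Mt
            "47 Southeast":       1.5,  # "plateau of 1.5 million tons"
            "41 Southwest":       1.9,  # assumed midpoint of 1.8-2 Mt
        }
        total = sum(catch_2018_mt.values())
        for area, tons in catch_2018_mt.items():
            print(f"Area {area:<18} {tons:4.1f} Mt  ({100 * tons / total:4.1f}% of the Atlantic)")
        print(f"Rough total: {total:.1f} Mt (the text cites about 22 Mt for 2018)")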

    8
    Catches in the Atlantic (1950-2018) according to the FAO areas. Credit: Le Floc’h, CC BY-NC-ND.

    Protecting the entire ecosystem

    At a time when some scientific research has projected that all commercially fished stocks could collapse by 2048, a new fisheries approach is required to avoid further tragedies like the one that befell the cod populations of the northwestern Atlantic.

    In this context, protecting ecosystems has become a priority. This growing acknowledgment of the impacts of fishing is a direct result of the successful work undertaken by ecological and social science researchers since the 1970s, who placed the concept of resilience at the heart of their studies.

    This new ecosystem-based management approach, now inscribed in law in Europe and Canada, has been positive. A similar U.S. policy was revoked by President Donald Trump, but likely will be restored by incoming president Joe Biden. However, there is still work to do to tackle the main challenge – making this approach a reality in all Atlantic fisheries.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:13 am on December 5, 2020 Permalink | Reply
    Tags: , , , , , , The Aliens- where are they?, The Conversation,   

    From The Conversation: “I’m an astronomer and I think aliens may be out there – but UFO sightings aren’t persuasive” 

    From The Conversation

    December 4, 2020

    Chris Impey
    University Distinguished Professor of Astronomy,
    University of Arizona

    1
    Many people who say they have seen UFOs are either dog walkers or smokers. Credit: Aaron Foster/The Image Bank/Getty Images.

    If intelligent aliens visit the Earth, it would be one of the most profound events in human history.

    Surveys show that nearly half of Americans believe that aliens have visited the Earth, either in the ancient past or recently. That percentage has been increasing. Belief in alien visitation is greater than belief that Bigfoot is a real creature, but less than belief that places can be haunted by spirits.

    Scientists dismiss these beliefs as not representing real physical phenomena. They don’t deny the existence of intelligent aliens. But they set a high bar for proof that we’ve been visited by creatures from another star system. As Carl Sagan said, “Extraordinary claims require extraordinary evidence.”

    I’m a professor of astronomy who has written extensively on the search for life in the universe. I also teach a free online class on astrobiology. Full disclosure: I have not personally seen a UFO.

    Unidentified flying objects

    UFO means unidentified flying object. Nothing more, nothing less.

    There’s a long history of UFO sightings. Air Force studies of UFOs have been going on since the 1940s. In the United States, “ground zero” for UFOs occurred in 1947 in Roswell, New Mexico. The fact that the Roswell incident was soon explained as the crash landing of a military high-altitude balloon didn’t stem a tide of new sightings. The majority of UFOs appear to people in the United States. It’s curious that Asia and Africa have so few sightings despite their large populations, and even more surprising that the sightings stop at the Canadian and Mexican borders.

    Most UFOs have mundane explanations. Over half can be attributed to meteors, fireballs and the planet Venus. Such bright objects are familiar to astronomers but are often not recognized by members of the public. Reports of visits from UFOs inexplicably peaked about six years ago.

    Many people who say they have seen UFOs are either dog walkers or smokers. Why? Because they’re outside the most. Sightings concentrate in evening hours, particularly on Fridays, when many people are relaxing with one or more drinks.

    A few people, like former NASA employee James Oberg, have the fortitude to track down and find conventional explanations for decades of UFO sightings. Most astronomers find the hypothesis of alien visits implausible, so they concentrate their energy on the exciting scientific search for life beyond the Earth.


    Animated Maps: A Century of UFO Sightings.
    Most UFO sightings have been in the United States.

    Are we alone?

    While UFOs continue to swirl in the popular culture, scientists are trying to answer the big question that is raised by UFOs: Are we alone?

    Astronomers have discovered over 4,000 exoplanets, or planets orbiting other stars, a number that doubles every two years. Some of these exoplanets are considered habitable, since they are close to the Earth’s mass and at the right distance from their stars to have water on their surfaces. The nearest of these habitable planets are less than 20 light years away, in our cosmic “back yard.” Extrapolating from these results leads to a projection of 300 million habitable worlds in our galaxy. Each of these Earth-like planets is a potential biological experiment, and there have been billions of years since they formed for life to develop and for intelligence and technology to emerge.
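    Taking the “doubles every two years” figure at face value, the growth of the known-exoplanet count can be sketched as a simple exponential. This is purely illustrative: the ~4,000 starting count around 2020 and the two-year doubling time are just the figures quoted above, not a forecast.

        # Illustrative extrapolation of the confirmed-exoplanet count, assuming
        # ~4,000 known around 2020 and a two-year doubling time (figures quoted above).
        KNOWN_2020 = 4000
        DOUBLING_YEARS = 2

        def projected_count(year):
            return KNOWN_2020 * 2 ** ((year - 2020) / DOUBLING_YEARS)

        for year in (2022, 2026, 2030):
            print(f"{year}: ~{projected_count(year):,.0f} known exoplanets")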

    Astronomers are very confident there is life beyond the Earth. As astronomer and ace exoplanet-hunter Geoff Marcy puts it, “The universe is apparently bulging at the seams with the ingredients of biology.” There are many steps in the progression from Earths with suitable conditions for life to intelligent aliens hopping from star to star. Astronomers use the Drake Equation to estimate the number of technological alien civilizations in our galaxy.

    Frank Drake with his Drake Equation. Credit: Frank Drake.


    Drake Equation. Credit: Frank Drake, SETI Institute.

    There are many uncertainties in the Drake Equation, but interpreting it in the light of recent exoplanet discoveries makes it very unlikely that we are the only, or the first, advanced civilization.
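    For reference, the Drake Equation is just a product of seven factors, N = R* × fp × ne × fl × fi × fc × L. The sketch below plugs in one set of purely illustrative values; every parameter value here is an assumption for demonstration, not an estimate endorsed by the article.

        # Minimal sketch of the Drake Equation with illustrative, assumed parameter values.
        def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
            """Estimated number of detectable technological civilizations in the galaxy."""
            return R_star * f_p * n_e * f_l * f_i * f_c * L

        N = drake(
            R_star=2.0,  # new stars formed per year (assumed)
            f_p=0.9,     # fraction of stars with planets (assumed)
            n_e=0.5,     # habitable planets per planetary system (assumed)
            f_l=0.5,     # fraction of those where life arises (assumed)
            f_i=0.1,     # fraction developing intelligence (assumed)
            f_c=0.1,     # fraction producing detectable technology (assumed)
            L=10_000,    # years a civilization remains detectable (assumed)
        )
        print(f"N ~ {N:.0f} civilizations")

    With these particular guesses the answer comes out in the tens, but changing any single factor by a factor of ten changes N by the same amount – which is exactly why the uncertainties mentioned above matter so much.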

    This confidence has fueled an active search for intelligent life, which has been unsuccessful so far.



    SETI@home, a BOINC [Berkeley Open Infrastructure for Network Computing] project that originated in the Space Sciences Laboratory at UC Berkeley.

    So researchers have recast the question “Are we alone?” to “Where are they?”

    6
    “Wow!” signal from the Ohio State Big Ear Radio Telescope, Aug. 15, 1977.

    The absence of evidence for intelligent aliens is called the Fermi Paradox. Even if intelligent aliens do exist, there are a number of reasons why we might not have found them and they might not have found us. Scientists do not discount the idea of aliens. But they aren’t convinced by the evidence to date because it is unreliable, or because there are so many other more mundane explanations.

    Modern myth and religion

    UFOs are part of the landscape of conspiracy theories, including accounts of abduction by aliens and crop circles created by aliens. I remain skeptical that intelligent beings with vastly superior technology would travel trillions of miles just to press down our wheat.

    It’s useful to consider UFOs as a cultural phenomenon. Diana Pasulka, a professor at the University of North Carolina, notes that myths and religions are both means for dealing with unimaginable experiences. To my mind, UFOs have become a kind of new American religion.

    So no, I don’t think belief in UFOs is crazy, because some flying objects are unidentified, and the existence of intelligent aliens is scientifically plausible.

    But a study of young adults did find that UFO belief is associated with schizotypal personality, a tendency toward social anxiety, paranoid ideas and transient psychosis. If you believe in UFOs, you might look at what other unconventional beliefs you have.

    I’m not signing on to the UFO “religion,” so call me an agnostic. I recall the aphorism popularized by Carl Sagan, “It pays to keep an open mind, but not so open your brains fall out.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 11:55 am on November 28, 2020 Permalink | Reply
    Tags: "The periodic table is 150 – but it could have looked very different", Alfred Werner, , , Charles Janet, , Dimitri Mendeleev, E. V. Babaev and Ray Hefferlin, Heinrich Baumhauer, John Dalton, The Conversation   

    From The Conversation: “The periodic table is 150 – but it could have looked very different” 

    From The Conversation

    January 2, 2019 [Re-presented now in social media.]
    Mark Lorch

    1
    Theodor Benfey’s spira table (1964). Credit: DePiep/Wikipedia.

    The periodic table stares down from the walls of just about every chemistry lab.

    Periodic Table from the International Union of Pure and Applied Chemistry, 2019.

    The credit for its creation generally goes to Dmitri Mendeleev, a Russian chemist who in 1869 wrote out the known elements (of which there were 63 at the time) on cards and then arranged them in columns and rows according to their chemical and physical properties. To celebrate the 150th anniversary of this pivotal moment in science, the UN proclaimed 2019 the International Year of the Periodic Table.

    2
    John Dalton’s element list. Credit: Wikimedia Commons.

    But the periodic table didn’t actually start with Mendeleev. Many had tinkered with arranging the elements. Decades before, chemist John Dalton tried to create a table as well as some rather interesting symbols for the elements (they didn’t catch on). And just a few years before Mendeleev sat down with his deck of homemade cards, John Newlands also created a table sorting the elements by their properties.

    3
    Dmitri Mendeleev's table, complete with missing elements. Credit: Wikimedia Commons.

    Notice the question marks in his table above? For example, next to Al (aluminium) there's space for an unknown metal. Mendeleev foretold it would have an atomic mass of 68, a density of six grams per cubic centimetre and a very low melting point. Six years later, Paul-Émile Lecoq de Boisbaudran isolated gallium, and sure enough it slotted right into the gap with an atomic mass of 69.7, a density of 5.9g/cm³ and a melting point so low that it becomes liquid in your hand. Mendeleev did the same for scandium, germanium and technetium (which wasn't discovered until 1937, 30 years after his death).
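    To put numbers on how close that prediction was, here is a small comparison of Mendeleev's “eka-aluminium” forecast with gallium as measured. The figures are exactly those quoted above; the percentage-error arithmetic is added only for illustration.

        # Mendeleev's eka-aluminium prediction vs. measured gallium (figures quoted above).
        predicted = {"atomic mass": 68.0, "density (g/cm^3)": 6.0}
        measured  = {"atomic mass": 69.7, "density (g/cm^3)": 5.9}

        for prop in predicted:
            err = 100 * abs(predicted[prop] - measured[prop]) / measured[prop]
            print(f"{prop:<17} predicted {predicted[prop]:5.1f}  measured {measured[prop]:5.1f}  "
                  f"(off by {err:.1f}%)")

    Both predictions land within a few percent of the measured values.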

    At first glance Mendeleev’s table doesn’t look much like the one we are familiar with. For one thing, the modern table has a bunch of elements that Mendeleev overlooked (and failed to leave room for), most notably the noble gases (such as helium, neon, argon). And the table is oriented differently to our modern version, with elements we now place together in columns arranged in rows.

    But once you give Mendeleev's table a 90-degree turn, the similarity to the modern version becomes apparent. For example, the halogens – fluorine (F), chlorine (Cl), bromine (Br) and iodine (I) (the J symbol in Mendeleev's table) – all appear next to one another. Today they are arranged in the table's 17th column (or group 17 as chemists prefer to call it).

    Period of experimentation

    It may seem a small leap from this to the familiar diagram but, years after Mendeleev’s publications, there was plenty of experimentation with alternative layouts for the elements. Even before the table got its permanent right-angle flip, folks suggested some weird and wonderful twists.

    4
    Heinrich Baumhauer’s spiral. Reprinted (adapted) with permission from Types of graphic classifications of the elements. III. Spiral, helical, and miscellaneous charts, G. N. Quam, Mary Battell Quam. Copyright (1934) American Chemical Society.

    One particularly striking example is Heinrich Baumhauer’s spiral, published in 1870, with hydrogen at its centre and elements with increasing atomic mass spiralling outwards. The elements that fall on each of the wheel’s spokes share common properties just as those in a column (group) do so in today’s table. There was also Henry Basset’s rather odd “dumb-bell” formulation of 1892.

    5
    Credit: E. V. Babaev (Chemistry Department, Moscow State University) and Ray Hefferlin (Physics Department, Southern College), “Concept of Chemical Periodicity: from Mendeleev Table to Molecular Hyper-Periodicity Patterns”.

    Nevertheless, by the beginning of the 20th century, the table had settled down into a familiar horizontal format with the strikingly modern looking version from Alfred Werner in 1905. For the first time, the noble gases appeared in their now familiar position on the far right of the table. Werner also tried to take a leaf out of Mendeleev's book by leaving gaps, although he rather overdid the guesswork with suggestions for elements lighter than hydrogen and another sitting between hydrogen and helium (none of which exist).

    6
    Alfred Werner’s modern incarnation. Reprinted (adapted) with permission from Types of graphic classifications of the elements. I. Introduction and short tables, G. N. Quam, Mary Battell Quam. Copyright (1934) American Chemical Society.

    Despite this rather modern looking table, there was still a bit of rearranging to be done. Particularly influential was Charles Janet's version. He took a physicist's approach to the table and used the newly discovered quantum theory to create a layout based on electron configurations. The resulting “left step” table is still preferred by many physicists. Interestingly, Janet also provided space for elements right up to number 120 despite only 92 being known at the time (we're only at 118 now).
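    Janet's layout follows the order in which electron subshells fill. A standard approximation to that order is the Madelung (n + l) rule: fill subshells by increasing n + l, breaking ties with the smaller n first. The sketch below generates that ordering; the rule itself is textbook quantum chemistry, and using it here to echo Janet's scheme is my gloss rather than something stated in the article.

        # Subshell filling order via the Madelung (n + l) rule - the ordering that
        # underlies "left step" style layouts. Illustrative sketch only.
        SUBSHELL_LETTERS = "spdf"

        # All subshells up to 7p, enough to cover the 118 known elements.
        subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4)) if n + l <= 8]
        # Madelung rule: sort by n + l, breaking ties with the smaller n.
        subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

        print(" -> ".join(f"{n}{SUBSHELL_LETTERS[l]}" for n, l in subshells))
        # 1s -> 2s -> 2p -> 3s -> 3p -> 4s -> 3d -> 4p -> 5s -> 4d -> 5p -> 6s -> 4f -> 5d -> 6p -> 7s -> 5f -> 6d -> 7p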

    8
    Charles Janet’s left-step table. Credit: Wikipedia, CC BY-SA.

    Settling on a design

    The modern table is actually a direct evolution of Janet’s version. The alkali metals (the group topped by lithium) and the alkaline earth metals (topped by beryllium) got shifted from far right to the far left to create a very wide looking (long form) periodic table. The problem with this format is that it doesn’t fit nicely on a page or poster, so largely for aesthetic reasons the f-block elements are usually cut out and deposited below the main table. That’s how we arrived at the table we recognise today.

    That’s not to say folks haven’t tinkered with layouts, often as an attempt to highlight correlations between elements that aren’t readily apparent in the conventional table. There are literally hundreds of variations (check out Mark Leach’s database) with spirals and 3D versions being particularly popular, not to mention more tongue-in-cheek variants.

    9
    3D ‘Mendeleev flower’ version of the table. Credit: Тимохова Ольга/Wikipedia, CC BY-SA.

    How about my own fusion of two iconic graphics, Mendeleev's table and Harry Beck's London Underground map below?

    10
    The author’s underground map of the elements. Mark Lorch, Author provided.

    Or the dizzying array of imitations that aim to give a science feel to categorising everything from beer to Disney characters, and my particular favourite, “irrational nonsense”. All of which goes to show how the periodic table of elements has become the iconic symbol of science.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     