Tagged: The Conversation (AU)

  • richardmitnick 12:58 pm on March 16, 2023 Permalink | Reply
    Tags: "The multiverse – our universe is suspiciously unlikely to exist – unless it is one of many", The Conversation (AU), The theory of inflation

    From “The Conversation (AU)” : “The multiverse – our universe is suspiciously unlikely to exist – unless it is one of many” Martin Rees 

    From “The Conversation (AU)”

    3.15.23
    Martin Rees | University of Cambridge

    Do universes pop up as bubbles from a multiverse? arda savasciogullari/Shutterstock.

    “It’s easy to envisage other universes, governed by slightly different laws of physics, in which no intelligent life, nor indeed any kind of organized complex systems, could arise. Should we therefore be surprised that a universe exists in which we were able to emerge?

    That’s a question physicists including me have tried to answer for decades. But it is proving difficult. Although we can confidently trace cosmic history back to one second after the Big Bang, what happened before is harder to gauge. Our accelerators simply can’t produce enough energy to replicate the extreme conditions that prevailed in the first nanosecond.

    But we expect that it’s in that first tiny fraction of a second that the key features of our universe were imprinted.

    The conditions of the universe can be described through its “fundamental constants” – fixed quantities in nature, such as the gravitational constant (called G) or the speed of light (called c). There are about 30 of these, representing the sizes and strengths of parameters such as particle masses, forces or the universe’s expansion. But our theories don’t explain what values these constants should have. Instead, we have to measure them and plug their values into our equations to accurately describe nature.

    The values of the constants are in the range that allows complex systems such as stars, planets, carbon and ultimately humans to evolve. Physicists have discovered that if we tweaked some of these parameters by just a few percent, it would render our universe lifeless. The fact that life exists therefore takes some explaining.

    Some argue it is just a lucky coincidence. An alternative explanation, however, is that we live in a multiverse, containing domains with different physical laws and values of fundamental constants. Most might be wholly unsuitable for life. But a few should, statistically speaking, be life-friendly.
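The statistical version of this argument can be made concrete with a toy simulation. The sketch below is purely illustrative – the parameter range and the “life-permitting” window are invented for demonstration, not drawn from real physics:

```python
import random

random.seed(1)

# Toy model: each "universe" draws one dimensionless constant uniformly
# from a wide range; only draws landing in a narrow window count as
# life-friendly. All numbers here are invented for illustration.
LIFE_WINDOW = (0.99, 1.01)   # hypothetical few-percent life-permitting band
N_UNIVERSES = 100_000

friendly = sum(
    1 for _ in range(N_UNIVERSES)
    if LIFE_WINDOW[0] <= random.uniform(0.0, 2.0) <= LIFE_WINDOW[1]
)

print(f"{friendly} of {N_UNIVERSES} toy universes are life-friendly")
```

Even though the window covers only 1% of the sampled range, roughly a thousand of the 100,000 toy domains land inside it – rare for any single draw, but common given enough of them.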

    Impending revolution?

    What is the extent of physical reality? We’re confident that it’s more extensive than the domain that astronomers can ever observe, even in principle. That domain is definitely finite. That’s essentially because, like on the ocean, there’s a horizon that we can’t see beyond. And just as we don’t think the ocean stops just beyond our horizon, we expect galaxies beyond the limit of our observable universe. In our accelerating universe, our remote descendants will also never be able to observe them.

    Most physicists would agree there are galaxies that we can’t ever see, and that these outnumber the ones we can observe. If space stretches far enough, then everything we could ever imagine happening may be repeated over and over. Far beyond the horizon, we could all have avatars.

    This vast (and mainly unobservable) domain would be the aftermath of “our” Big Bang – and would probably be governed by the same physical laws that prevail in the parts of the universe we can observe. But was our Big Bang the only one?

    The theory of inflation, which suggests that the early universe underwent a period when it doubled in size every trillionth of a trillionth of a trillionth of a second, has genuine observational support [Astronomy & Astrophysics (below)].

    ___________________________________________________________________
    Inflation

    In physical cosmology, cosmic inflation, or cosmological inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch lasted from 10^−36 seconds after the conjectured Big Bang singularity to some time between 10^−33 and 10^−32 seconds after the singularity. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).

    Inflation theory was developed in the late 1970s and early 1980s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at the Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at the Lebedev Physical Institute. Starobinsky, Guth, and Linde won the 2014 Kavli Prize “for pioneering the theory of cosmic inflation.” The theory explains the origin of the large-scale structure of the cosmos: quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe. Many physicists also believe that inflation explains why the universe appears the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.

    The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.

    In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize “for development of the concept of inflation in cosmology”. In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.

    Alan Guth, from M.I.T., who first proposed Cosmic Inflation.

    Alan Guth’s original notes on inflation.
    ___________________________________________________________________

    It accounts for why the universe is so large and smooth, except for fluctuations and ripples that are the “seeds” for galaxy formation.
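Taking the figures quoted above at face value, a quick back-of-envelope calculation shows how extreme this growth is. The sketch below assumes the quoted doubling time and a conservative end to the inflationary epoch; it is arithmetic, not a cosmological model:

```python
import math

doubling_time = 1e-36           # seconds, the doubling time quoted above
t_start, t_end = 1e-36, 1e-33   # rough inflationary epoch (conservative end)

n_doublings = (t_end - t_start) / doubling_time
log10_growth = n_doublings * math.log10(2)  # orders of magnitude of expansion

print(f"~{n_doublings:.0f} doublings -> a factor of ~10^{log10_growth:.0f} in size")
```

Roughly a thousand doublings – about 300 orders of magnitude of expansion – crammed into a minuscule fraction of a second.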

    But physicists including Andrei Linde have shown that, under some specific but plausible assumptions about the uncertain physics at this ancient era, there would be an “eternal” production of Big Bangs – each giving rise to a new universe.

    String theory, which is an attempt to unify gravity with the laws of microphysics, conjectures everything in the universe is made up of tiny, vibrating strings. But it makes the assumption that there are more dimensions than the ones we experience. These extra dimensions, it suggests, are compacted so tightly together that we don’t notice them all. And each type of compactification could create a universe with different microphysics – so other Big Bangs, when they cool down, could be governed by different laws.

    The “laws of nature” may therefore in this still grander perspective be local by-laws governing our own cosmic patch.

    We can only see a fraction of the universe. NASA/ESA/CSA James Webb telescope.

    If physical reality is like this, then there’s a real motivation to explore “counterfactual” universes – places with different gravity, different physics and so forth – to work out what range of parameters would allow complexity to emerge, and which would lead to a sterile or “stillborn” cosmos. Excitingly, this is ongoing, with recent research [Physics Reports (below)] suggesting you could imagine universes that are even more friendly to life than our own. Most “tweakings” of the physical constants, however, would render a universe stillborn.

    That said, some don’t like the concept of the multiverse. They worry it would render the hope for a fundamental theory to explain the constants as vain as Kepler’s numerological quest to relate planetary orbits to nested platonic solids.

    But our preferences are irrelevant to the way physical reality actually is – so we should surely be open minded to the possibility of an imminent grand cosmological revolution. First we had the Copernican realization that the Earth wasn’t the centre of the Solar System – it revolves around the Sun. Then we realized that there are zillions of planetary systems in our galaxy, and that there are zillions of galaxies in our observable universe.

    So could it be that our observable domain – indeed our Big Bang – is a tiny part of a far larger and possibly diverse ensemble?

    Physics or metaphysics?

    How do we know just how atypical our universe is? To answer that we need to work out the probabilities of each combination of constants. And that’s a can of worms that we can’t yet open – it will have to await huge theoretical advances.

    We don’t ultimately know if there are other Big Bangs. But they’re not just metaphysics. We might one day have reasons to believe that they exist.

    Specifically, if we had a theory that described physics under the extreme conditions of the ultra-early Big Bang – and if that theory had been corroborated in other ways, for instance by deriving some unexplained parameters in the standard model of particle physics – then if it predicted multiple Big Bangs we should take it seriously.

    Critics sometimes argue that the multiverse is unscientific because we can’t ever observe other universes. But I disagree. We can’t observe the interior of black holes, but we believe what physicist Roger Penrose says about what happens there – his theory has gained credibility by agreeing with many things we can observe.

    About 15 years ago, I was on a panel at Stanford where we were asked how seriously we took the multiverse concept – on the scale “would you bet your goldfish, your dog, or your life” on it. I said I was nearly at the dog level. Linde said he’d almost bet his life. Later, on being told this, physicist Steven Weinberg said he’d “happily bet Martin Rees’ dog and Andrei Linde’s life”.

    Sadly, I suspect Linde, my dog and I will all be dead before we have an answer.”

    Indeed, we can’t even be sure we’d understand the answer – just as quantum theory is too difficult for monkeys. It’s conceivable that machine intelligence could explore the geometrical intricacies of some string theories and spew out, for instance, some generic features of the standard model. We’d then have confidence in the theory and take its other predictions seriously.

    But we’d never have the “aha” insight moment that’s the greatest satisfaction for a theorist. Physical reality at its deepest level could be so profound that its elucidation would have to await posthuman species – depressing or exhilarating as that may be, according to taste. But it’s no reason to dismiss the multiverse as unscientific.

    Astronomy & Astrophysics 2016
    Physics Reports 2019

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation (AU) launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 12:19 pm on March 11, 2023 Permalink | Reply
    Tags: "Causation", "Quantum mechanics – how the future might influence the past", "Retrocausality", The Conversation (AU)

    From “The Conversation (AU)” : “Quantum mechanics – how the future might influence the past” 

    From “The Conversation (AU)”

    3.8.23
    Huw Price
    Emeritus Fellow, Trinity College
    University of Cambridge

    Ken Wharton
    Professor of Physics and Astronomy
    San José State University


    In 2022, the physics Nobel prize was awarded for experimental work showing that the quantum world must break some of our fundamental intuitions about how the universe works.

    Many look at those experiments and conclude that they challenge “locality” — the intuition that distant objects need a physical mediator to interact. And indeed, a mysterious connection between distant particles would be one way to explain these experimental results.

    Others instead think the experiments challenge “realism” — the intuition that there’s an objective state of affairs underlying our experience. After all, the experiments are only difficult to explain if our measurements are thought to correspond to something real. Either way, many physicists agree about what’s been called “the death by experiment” of local realism.

    But what if both of these intuitions can be saved, at the expense of a third? A growing group of experts think that we should abandon instead the assumption that present actions can’t affect past events. Called “retrocausality”, this option claims to rescue both locality and realism.

    “Causation”

    What is causation anyway? Let’s start with the line everyone knows: correlation is not causation. Some correlations are causation, but not all. What’s the difference?

    Consider two examples. (1) There’s a correlation between a barometer needle and the weather – that’s why we learn about the weather by looking at the barometer. But no one thinks that the barometer needle is causing the weather. (2) Drinking strong coffee is correlated with a raised heart rate. Here it seems right to say that the first is causing the second.

    The difference is that if we “wiggle” the barometer needle, we won’t change the weather. The weather and the barometer needle are both controlled by a third thing, the atmospheric pressure – that’s why they are correlated. When we control the needle ourselves, we break the link to the air pressure, and the correlation goes away.

    But if we intervene to change someone’s coffee consumption, we’ll usually change their heart rate, too. Causal correlations are those that still hold when we wiggle one of the variables.

    These days, the science of looking for these robust correlations is called “causal discovery”. It’s a big name for a simple idea: finding out what else changes when we wiggle things around us.
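The barometer and coffee examples can be played out in a small simulation. This is a toy sketch with made-up noise levels, not real data, but it shows why intervening (“wiggling”) distinguishes the two kinds of correlation:

```python
import random

random.seed(0)
N = 50_000

def corr(xs, ys):
    # Pearson correlation, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Common cause: atmospheric pressure drives both barometer and weather.
pressure = [random.gauss(0, 1) for _ in range(N)]
barometer = [p + random.gauss(0, 0.1) for p in pressure]
weather = [p + random.gauss(0, 0.1) for p in pressure]
c_common = corr(barometer, weather)       # strongly correlated

# Intervention: set the needle ourselves, breaking the link to pressure.
barometer_wiggled = [random.gauss(0, 1) for _ in range(N)]
c_wiggle = corr(barometer_wiggled, weather)  # correlation vanishes

# Direct cause: coffee drives heart rate, so the correlation survives
# even when we choose the coffee doses freely.
coffee = [random.gauss(0, 1) for _ in range(N)]
heart = [0.8 * c + random.gauss(0, 0.5) for c in coffee]
c_direct = corr(coffee, heart)

print(f"observed: {c_common:.2f}, wiggled needle: {c_wiggle:.2f}, "
      f"wiggled coffee: {c_direct:.2f}")
```

Wiggling the barometer needle kills its correlation with the weather; wiggling coffee consumption leaves its correlation with heart rate intact. That asymmetry is what causal discovery looks for.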

    In ordinary life, we usually take for granted that the effects of a wiggle are going to show up later than the wiggle itself. This is such a natural assumption that we don’t notice that we’re making it.

    But nothing in the scientific method requires this to happen, and it is easily abandoned in fantasy fiction. Similarly, in some religions, we pray that our loved ones are among the survivors of yesterday’s shipwreck, say. We’re imagining that something we do now can affect something in the past. That’s retrocausality.

    Quantum retrocausality

    The quantum threat to locality (that distant objects need a physical mediator to interact) stems from an argument by the Northern Irish physicist John Bell in the 1960s. Bell considered experiments in which two hypothetical physicists, Alice and Bob, each receive particles from a common source. Each chooses one of several measurement settings, and then records a measurement outcome. Repeated many times, the experiment generates a list of results.

    Bell realised that quantum mechanics predicts that there will be strange correlations (now confirmed) in this data. They seemed to imply that Alice’s choice of setting has a subtle “nonlocal” influence on Bob’s outcome, and vice versa – even though Alice and Bob might be light years apart. Bell’s argument is said to pose a threat to Albert Einstein’s theory of special relativity, which is an essential part of modern physics.
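The strength of those strange correlations can be checked with a few lines of arithmetic. The sketch below uses the textbook quantum prediction for a spin-entangled (singlet) pair, E(a, b) = −cos(a − b), and the standard CHSH combination of measurement settings; the local models Bell considered can never push this quantity above 2:

```python
import math

def E(a, b):
    # Quantum prediction for the correlation between the two outcomes
    # when the sides measure a singlet pair at angles a and b (radians)
    return -math.cos(a - b)

# Standard CHSH choice of measurement settings
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"S = {S:.3f}  (local realism caps S at 2)")
```

The quantum value, 2√2 ≈ 2.83, comfortably exceeds the local-realist bound – which is what the Nobel-winning experiments confirmed.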

    But that’s because Bell assumed that quantum particles don’t know what measurements they are going to encounter in the future. Retrocausal models propose that Alice’s and Bob’s measurement choices affect the particles back at the source. This can explain the strange correlations, without breaking special relativity.

    In recent work, we’ve proposed [Foundations of Physics (below)] a simple mechanism for the strange correlation – it involves a familiar statistical phenomenon called Berkson’s bias (see our popular summary here).
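Berkson’s bias itself is easy to demonstrate: selecting on a common effect of two independent variables makes them look correlated. This toy simulation (invented numbers, not the quantum model itself) shows the effect:

```python
import random

random.seed(2)

# x and y are generated independently; we then keep only cases where a
# common effect of both (their sum) exceeds a threshold.
xs, ys = [], []
for _ in range(200_000):
    x = random.gauss(0, 1)
    y = random.gauss(0, 1)
    if x + y > 1.5:          # selection on a common effect of x and y
        xs.append(x)
        ys.append(y)

def corr(a, b):
    # Pearson correlation, computed from scratch
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

r = corr(xs, ys)
print(f"correlation within the selected subset: {r:.2f}")
```

The two variables are independent by construction, yet within the selected subset they come out strongly anti-correlated – a correlation created purely by the selection.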

    There’s now a thriving group of scholars who work on quantum retrocausality. But it’s still invisible to some experts in the wider field. It gets confused with a different view called “superdeterminism”.

    Superdeterminism

    Superdeterminism agrees with retrocausality that measurement choices and the underlying properties of the particles are somehow correlated.

    But superdeterminism treats it like the correlation between the weather and the barometer needle. It assumes there’s some mysterious third thing – a “superdeterminer” – that controls and correlates both our choices and the particles, the way atmospheric pressure controls both the weather and the barometer.

    So superdeterminism denies that measurement choices are things we are free to wiggle at will; instead, they are predetermined. Free wiggles would break the correlation, just as in the barometer case. Critics object that superdeterminism thus undercuts core assumptions necessary to undertake scientific experiments. They also say that it means denying free will, because something is controlling both the measurement choices and particles.

    These objections don’t apply to retrocausality. Retrocausalists do scientific causal discovery in the usual free, wiggly way. We say it is folk who dismiss retrocausality who are forgetting the scientific method, if they refuse to follow the evidence where it leads.

    Evidence

    What is the evidence for retrocausality? Critics ask for experimental evidence, but that’s the easy bit: the relevant experiments just won a Nobel Prize. The tricky part is showing that retrocausality gives the best explanation of these results.

    We’ve mentioned the potential to remove the threat to Einstein’s special relativity. That’s a pretty big hint, in our view, and it’s surprising it has taken so long to explore it. The confusion with superdeterminism seems mainly to blame.

    In addition, we and others have argued that retrocausality makes better sense of the fact that the microworld of particles doesn’t care about the difference between past and future.

    We don’t mean that it is all plain sailing. The biggest worry about retrocausation is the possibility of sending signals to the past, opening the door to the paradoxes of time travel. But to make a paradox, the effect in the past has to be measured. If our young grandmother can’t read our advice to avoid marrying grandpa, meaning we wouldn’t come to exist, there’s no paradox. And in the quantum case, it’s well known that we can never measure everything at once.

    Still, there’s work to do in devising concrete retrocausal models that enforce this restriction that you can’t measure everything at once. So we’ll close with a cautious conclusion. At this stage, it’s retrocausality that has the wind in its sails, so hull down towards the biggest prize of all: saving locality and realism from “death by experiment”.

    Foundations of Physics

    Fig. 1
    Entanglement swapping is a procedure in which entanglement may be “swapped” from a pair of jointly measured particles to a pair of particles lacking common preparation. The technique has become quite commonplace in experiments involving entanglement and has numerous applications in quantum information theory. A simple experimental arrangement is depicted.

    See the full article here.


     
  • richardmitnick 1:49 pm on February 20, 2023 Permalink | Reply
    Tags: "Were viruses around on Earth before living cells emerged? A microbiologist explains", The Conversation (AU)

    From “The Conversation (AU)” : “Were viruses around on Earth before living cells emerged? A microbiologist explains” 

    From “The Conversation (AU)”

    2.20.23
    Kenneth Noll | Professor Emeritus of Microbiology, University of Connecticut

    Maybe the first life on Earth was part of an ‘RNA world.’ Artur Plawgo/Science Photo Library via Getty Images.

    “How life on Earth started has puzzled scientists for a long time. And it still does.

    Fossils provide very important evidence about the evolution of plants and animals. Unfortunately, there are very few fossils of ancient microbes available, so scientists rely on modern microbes to devise theories about how life started. I studied bacteria and another type of microbe called archaea from hot environments for many years to learn how they might have evolved on early Earth, but I still have so many unanswered questions.

    Based on the fossil evidence we have, single-celled microbes appeared on Earth before larger cellular life like plants and animals. But which kinds of microbes were the very first kind of life?


    The mysterious origins of life on Earth – Luka Seamus Wright.
    Some scientists think hydrothermal vents are the cradle of early life on Earth.

    Hydrothermal vent. NOAA.

    Which microbes are considered alive?

    Microbes are living, single-celled creatures surrounded by a membrane. They consume and convert nutrients into biological molecules or energy and are too small to be seen without a microscope.

    By this definition, bacteria, archaea and single-celled eukaryotes are microbes. Bacteria and archaea are single-celled creatures that lack internal membrane-enclosed structures, like a nucleus to hold their genetic material. Single-celled eukaryotes have a nucleus and may have other membrane-enclosed structures.

    Unlike prokaryotic cells, eukaryotic cells have membrane-enclosed structures like a nucleus and mitochondria. VectorMine/iStock via Getty Images Plus

    Some scientists consider viruses to be microbes made of genetic material enclosed in a protein coat. They are unable to replicate on their own and hijack the machinery of other cells to make copies of themselves. Because they don’t have many features of living cells, they are not technically alive.

    Evidence for early life on Earth

    Fossils can provide scientists with clues as to when life started, but they best record hard things like bones and teeth. Microbes are made of soft materials that do not fossilize well. However, some live together in very large groups of cells that can accumulate minerals and leave behind quite large fossils.

    For example, cyanobacteria formed large structures called stromatolites in the oceans of early Earth. Scientists have found fossil stromatolites that date back to 3.48 billion years ago.

    Stromatolites can provide information about life on early Earth. Jana Kriz/Moment via Getty Images.

    Other scientists found what they believe are fossilized archaea in rocks from a 3.4 billion-year-old hot seafloor. The Earth became habitable about 4 billion years ago, so bacteria and archaea must have appeared between 3.5 billion and 4 billion years ago.

    Looking at the chemical reactions that cells carry out can also provide clues. The reactions that make biological molecules and generate energy make up what’s called the cell’s metabolism. Scientists have found evidence that some metabolic reactions were occurring at least 4.1 billion years ago. These reactions may have been occurring on their own before cells had evolved, perhaps on the surfaces of clays or minerals.

    Theories about how life started on Earth

    Cells copy their genetic material, made of DNA and RNA, to pass it on to new generations. Although DNA is the form of genetic material most living organisms use today, some scientists believe that RNA was the first information storage molecule on early Earth because it can make copies of itself.

    Because some modern viruses use RNA to store genetic information, some scientists believe that viruses could have evolved from self-replicating RNAs. This possibility would mean that viruses may have appeared before bacteria. But because viruses don’t leave fossils behind, there isn’t available evidence to support this idea.


    The RNA Origin of Life
    The RNA-world hypothesis proposes that self-replicating RNA evolved before DNA or proteins.

    At some point, metabolic reactions and replication processes had to come together inside a membrane to make an early form of a cell: a pre-cell. Perhaps this happened when a viruslike structure infected a collection of metabolic reactions enclosed within a membrane. The pre-cell could then duplicate itself, leading to the evolution of the first living cell. This cell would have been like today’s bacteria and archaea.

    Maybe viruslike structures did form before cells. However, those simple viruslike structures might have been just pieces of DNA or RNA, so could they really be considered “viruses”?

    Another popular theory states that viruses evolved from degenerated bacteria or archaea that lost most of the genetic instructions for carrying out metabolism and forming cells. There are many examples of similar smaller degenerations that have occurred in the bacterial world today.

    Uncovering early life

    The surface of the Earth today is very different from what it was 4 billion years ago. Some have speculated that deep under the Earth’s surface, where it is too hot for modern life, these early conditions might still be present, allowing some protolife forms to continue to exist where they are protected from being consumed by other microbes.

    When people can explore other planets or moons, perhaps we will find processes similar to those that were at work on early Earth. This kind of discovery could help us solve the puzzle of life’s origin here.”

    See the full article here.


     
  • richardmitnick 8:12 am on February 4, 2023 Permalink | Reply
    Tags: "Astronomers Studied More Than 5000 Black Holes to Figure Out Why They Twinkle", The Conversation (AU)

    From “The Conversation (AU)” : “Astronomers Studied More Than 5000 Black Holes to Figure Out Why They Twinkle” 

    From “The Conversation (AU)”

    2.4.23
    Christian Wolf

    Black holes are bizarre things, even by the standards of astronomers. Their mass is so great it bends space around them so tightly that nothing can escape – not even light itself.

    And yet, despite their famous blackness, some black holes are quite visible. The gas and stars these galactic vacuums devour are sucked into a glowing disc before their one-way trip into the hole, and these discs can shine more brightly than entire galaxies.

    Stranger still, these black holes twinkle.

    This illustration shows a disk of hot gas swirling around a black hole. The stream of gas stretching to the right is what remains of a star that was pulled apart by the black hole. Credit: NASA/JPL-Caltech.

    The brightness of the glowing discs can fluctuate from day to day, and nobody is entirely sure why.

    We piggy-backed on NASA’s asteroid defense effort to watch more than 5,000 of the fastest-growing black holes in the sky for five years in an attempt to understand why this twinkling occurs.

    In a new paper in Nature Astronomy [below], we report our answer: a kind of turbulence driven by friction and intense gravitational and magnetic fields.

    Gigantic star-eaters

    We study supermassive black holes, the kind that sit at the centers of galaxies and are as massive as millions or billions of Suns.

    Our own galaxy, the Milky Way, has one of these giants at its center, with a mass of about four million Suns.
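For scale, the event horizon implied by that mass follows from the Schwarzschild radius formula r_s = 2GM/c². The quick sketch below plugs in standard constants:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

M = 4e6 * M_SUN                  # mass quoted for the Milky Way's black hole
r_s = 2 * G * M / C**2           # Schwarzschild radius
print(f"~{r_s:.1e} m, or ~{r_s / AU:.2f} au")
```

About 12 million kilometres, or roughly 0.08 astronomical units – well inside the orbit of Mercury.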

    For the most part, the 200 billion or so stars that make up the rest of the galaxy (including our Sun) happily orbit around the black hole at the center.

    However, things are not so peaceful in all galaxies. When pairs of galaxies pull on each other via gravity, many stars may end up tugged too close to their galaxy’s black hole. This ends badly for the stars: They are torn apart and devoured.

    We are confident this must have happened in galaxies with black holes that weigh as much as a billion Suns, because we can’t imagine how else they could have grown so large. It may also have happened in the Milky Way in the past.

    Black holes can also feed in a slower, more gentle way: by sucking in clouds of gas blown out by geriatric stars known as red giants.

    Feeding time

    In our new study, we looked closely at the feeding process among the 5,000 fastest-growing black holes in the Universe.

    In earlier studies, we discovered the black holes with the most voracious appetites. Last year, we found a black hole that eats an Earth’s worth of stuff every second [PASA (below)]. In 2018, we found one that eats a whole Sun every 48 hours [PASA (below)].

    But we have lots of questions about their actual feeding behavior. We know material on its way into the hole spirals into a glowing “accretion disc” that can be bright enough to outshine entire galaxies. These visibly feeding black holes are called quasars.

    Most of these black holes are a long, long way away – much too far for us to see any detail of the disc. We have some images of accretion discs around nearby black holes, but they are merely breathing in some cosmic gas rather than feasting on stars.

    Five years of flickering black holes

    In our new work, we used data from NASA’s ATLAS telescope in Hawaii. It scans the entire sky every night (weather permitting), monitoring for asteroids approaching Earth from the outer darkness.

    These whole-sky scans also happen to provide a nightly record of the glow of hungry black holes, deep in the background. Our team put together a five-year movie of each of those black holes, showing the day-to-day changes in brightness caused by the bubbling and boiling glowing maelstrom of the accretion disc.

    The twinkling of these black holes can tell us something about accretion discs.

    In 1998, astrophysicists Steven Balbus and John Hawley proposed a theory of “magneto-rotational instabilities” that describes how magnetic fields can cause turbulence in the discs [Reviews of Modern Physics (below)]. If that is the right idea, the discs should sizzle in a predictable way: they would twinkle in random patterns that unfold at the pace of the disc’s orbit. Larger discs orbit more slowly, producing a slow twinkle; the tighter, faster orbits of smaller discs produce more rapid twinkling.

    But would the discs in the real world prove this simple, without any further complexities? (Whether “simple” is the right word for turbulence in an ultra-dense, out-of-control environment embedded in intense gravitational and magnetic fields where space itself is bent to breaking point is perhaps a separate question.)

    Using statistical methods, we measured how much the light emitted from our 5,000 discs flickered over time. The pattern of flickering in each one looked somewhat different.

    But when we sorted them by size, brightness, and color, we began to see intriguing patterns. We were able to determine the orbital speed of each disc – and once you set your clock to run at the disc’s speed, all the flickering patterns started to look the same.
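One common statistical model for this kind of flickering is the “damped random walk”, whose autocorrelation depends only on the time lag divided by a characteristic timescale. The sketch below illustrates the rescaling idea (it is not the paper's actual analysis, and the two timescales are invented): once lags are measured against each disc's own clock, different discs collapse onto a single curve.

```python
import numpy as np

def drw_acf(lag_days, tau_days):
    # Autocorrelation of a damped random walk with characteristic timescale tau.
    return np.exp(-np.asarray(lag_days) / tau_days)

# Lags expressed in units of each disc's own timescale (its "clock").
x = np.linspace(0.0, 5.0, 50)

fast_disc = drw_acf(x * 10.0, tau_days=10.0)   # small disc, hypothetical 10-day clock
slow_disc = drw_acf(x * 50.0, tau_days=50.0)   # large disc, hypothetical 50-day clock

# Measured against its own clock, each disc shows the same universal
# flickering statistics:
print(np.allclose(fast_disc, slow_disc))       # True
```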

    This universal behavior is indeed predicted by the theory of “magneto-rotational instabilities”.

    That was comforting! It means these mind-boggling maelstroms are “simple” after all.

    And it opens new possibilities. We think the remaining subtle differences between accretion discs occur because we are looking at them from different orientations.

    The next step is to examine these subtle differences more closely and see whether they hold clues to discern a black hole’s orientation. Eventually, our future measurements of black holes could be even more accurate.

    Nature Astronomy 2022
    PASA 2022
    PASA 2018
    See the above science papers for instructive material with images.
    Reviews of Modern Physics 1998

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation (AU) launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 3:31 pm on January 23, 2023 Permalink | Reply
    Tags: "How has the inside of the Earth stayed as hot as the Sun’s surface for billions of years?", Every atom of a given element has the same number of protons but different isotope cousins have varying numbers of neutrons., Geoscienes, , Plates change the environment and force life to adapt to new conditions., Potassium-40 and thorium-232 and uranium-235 and uranium-238 are four of the radioactive isotopes keeping Earth’s interior hot., Radioactive isotopes are not stable. They release a steady stream of energy that converts to heat., The Conversation (AU), The core is at such high pressure deep within the planet the iron it’s made up of remains liquid or solid even at nearly 10800 F., The lithosphere is divided into several large blocks called plates which move by plate techtonics., The upper part of the mantle typically moves together with the crust. Together, These energy-releasing isotopes provide the heat to drive the motion of the plates., they are called the lithosphere., Where does all that heat come from? Two sources: the heat that Earth inherited during its formation 4.5 billion years ago; the decay of radioactive isotopes distributed everywhere in the Earth., Without plate tectonics human beings probably would not exist.   

    From “The Conversation (AU)” : “How has the inside of the Earth stayed as hot as the Sun’s surface for billions of years?” 

    From “The Conversation (AU)”

    1.23.23
    Shichun Huang

    “Our Earth is structured sort of like an onion – it’s one layer after another.

    Starting from the top down, there’s the crust, which includes the surface you walk on; then farther down, the mantle, mostly solid rock; then even deeper, the outer core, made of liquid iron; and finally, the inner core, made of solid iron, and with a radius that’s 70% the size of the Moon’s. The deeper you dive, the hotter it gets – parts of the core are as hot as the surface of the Sun.

    As a professor of earth and planetary sciences, I study the insides of our world. Just as a doctor can use a technique called sonography to make pictures of the structures inside your body with ultrasound waves, scientists use a similar technique to image the Earth’s internal structures. But instead of ultrasound, geoscientists use seismic waves – sound waves produced by earthquakes.

    At the Earth’s surface, you see dirt, sand, grass and pavement, of course. Seismic vibrations reveal what’s below that: rocks, large and small. This is all part of the crust, which may go down as far as 20 miles (30 kilometers); it floats on top of the layer called the mantle.

    The upper part of the mantle typically moves together with the crust. Together, they are called the lithosphere, which is about 60 miles (100 kilometers) thick on average, although it can be thicker at some locations.

    The lithosphere is divided into several large blocks called plates. For example, the Pacific plate is beneath the whole Pacific Ocean, and the North American plate covers most of North America. Plates are kind of like puzzle pieces that fit roughly together and cover the surface of the Earth.

    The plates are not static; instead, they move.

    Sometimes it’s the tiniest fraction of inches over a period of years. Other times, there’s more movement, and it’s more sudden. This sort of movement is what triggers earthquakes and volcanic eruptions.

    What’s more, plate movement is a critical, and probably essential, factor driving the evolution of life on Earth, because the moving plates change the environment and force life to adapt to new conditions.


    What Would a Journey to the Earth’s Core Be Like?

    The heat is on

    Plate motion requires a hot mantle. And indeed, as you go deeper into the Earth, the temperature increases.

    At the bottom of the plates, around 60 miles (100 kilometers) deep, the temperature is about 2,400 degrees Fahrenheit (1,300 degrees Celsius).

    By the time you get to the boundary between the mantle and the outer core, which is 1,800 miles (2,900 kilometers) down, the temperature is nearly 5,000 F (2,700 C).

    Then, at the boundary between outer and inner cores, the temperature doubles, to nearly 10,800 F (over 6,000 C). That’s the part that’s as hot as the surface of the Sun. At that temperature, virtually everything – metals, diamonds, human beings – vaporizes into gas. But because the core is under such high pressure deep within the planet, the iron it’s made of remains liquid or solid.


    The World Before Plate Tectonics. Without plate tectonics human beings probably would not exist.

    Collisions in outer space

    Where does all that heat come from?

    It is not from the Sun. While it warms us and all the plants and animals on Earth’s surface, sunlight can’t penetrate through miles of the planet’s interior.

    Instead, there are two sources. One is the heat that Earth inherited during its formation 4.5 billion years ago. The Earth was made from the solar nebula, a gigantic gaseous cloud, amid endless collisions and mergers between bits of rock and debris called planetesimals. This process took tens of millions of years.

    An enormous amount of heat was produced during those collisions, enough to melt the whole Earth. Although some of that heat was lost in space, the rest of it was locked away inside the Earth, where much of it remains even today.

    The other heat source: the decay of radioactive isotopes, distributed everywhere in the Earth.

    To understand this, first imagine an element as a family with isotopes as its members. Every atom of a given element has the same number of protons, but different isotope cousins have varying numbers of neutrons.

    Radioactive isotopes are not stable. They release a steady stream of energy that converts to heat. Potassium-40, thorium-232, uranium-235 and uranium-238 are four of the radioactive isotopes keeping Earth’s interior hot.

    Some of those names may sound familiar to you. Uranium-235, for example, is used as a fuel in nuclear power plants. Earth is in no danger of running out of these sources of heat: Although most of the original uranium-235 and potassium-40 are gone, there’s enough thorium-232 and uranium-238 to last for billions more years.
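The half-life law N(t) = N₀ · 2^(−t/t½) makes this claim easy to check. A rough sketch (half-lives taken from standard tables; the calculation is illustrative and ignores decay-chain details):

```python
# Fraction of each heat-producing isotope surviving after 4.5 billion years,
# from N(t) = N0 * 2**(-t / half_life).
half_lives_gyr = {
    "potassium-40": 1.25,
    "uranium-235": 0.704,
    "uranium-238": 4.468,
    "thorium-232": 14.05,
}
age_gyr = 4.5

for isotope, t_half in half_lives_gyr.items():
    remaining = 2 ** (-age_gyr / t_half)
    print(f"{isotope:>12}: {remaining:6.1%} of the original amount remains")
```

Roughly 1% of the uranium-235 and 8% of the potassium-40 survive, while about half the uranium-238 and 80% of the thorium-232 remain – matching the statement above.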

    Along with the hot core and mantle, these energy-releasing isotopes provide the heat to drive the motion of the plates.

    No heat, no plate movement, no life

    Even now, the moving plates keep changing the surface of the Earth, constantly making new lands and new oceans over millions and billions of years. The plates also affect the atmosphere over similarly lengthy time scales.

    But without the Earth’s internal heat, the plates would not have been moving. The Earth would have cooled down. Our world would likely have been uninhabitable. You wouldn’t be here.

    Think about that, the next time you feel the Earth under your feet.”

    See the full article here .


     
  • richardmitnick 10:56 am on January 9, 2023 Permalink | Reply
    Tags: "Visualizing the inside of cells at previously impossible resolutions provides vivid insights into how they work", All life is made up of cells several magnitudes smaller than a grain of salt., , , , , , Cryo-electron tomography, , Researchers are beginning to be able to visualize this complex molecular activity to a level of detail they haven’t been able to before., The Conversation (AU), , There has been a resolution gap between a cell’s smallest structures e.g. the cytoskeleton that supports the cell’s shape and its largest structures e.g. the ribosomes that make proteins in cells., Understanding how biological structures fit together in a cell is key to understanding how organisms function.,   

    From The University of Pittsburgh Via “The Conversation (AU)” : “Visualizing the inside of cells at previously impossible resolutions provides vivid insights into how they work” 


    From The University of Pittsburgh

    Via

    “The Conversation (AU)”

    1.6.23
    Jeremy Berg

    Cryo-electron tomography shows what molecules look like in high-resolution – in this case, the virus that causes COVID-19. Nanographics, CC BY-SA.

    “All life is made up of cells, each several orders of magnitude smaller than a grain of salt. Their seemingly simple-looking structures mask the intricate and complex molecular activity that enables them to carry out the functions that sustain life. Researchers are beginning to be able to visualize this activity at a level of detail they haven’t been able to before.

    Biological structures can be visualized by either starting at the level of the whole organism and working down, or starting at the level of single atoms and working up. However, there has been a resolution gap between a cell’s smallest structures, such as the cytoskeleton that supports the cell’s shape, and its largest structures, such as the ribosomes that make proteins in cells.

    By analogy with Google Maps: while scientists have been able to see entire cities and individual houses, they did not have the tools to see how the houses came together to make up neighborhoods. Seeing these neighborhood-level details is essential for understanding how individual components work together in the environment of a cell.

    New tools are steadily bridging this gap. And ongoing development of one particular technique, cryo-electron tomography, or cryo-ET, has the potential to deepen how researchers study and understand how cells function in health and disease.


    Cryo-EM won the 2017 Nobel Prize in chemistry: Cryo-electron microscopy explained.

    As the former editor-in-chief of Science magazine and as a researcher who has studied hard-to-visualize large protein structures for decades, I have witnessed astounding progress in the development of tools that can determine biological structures in detail. Just as it becomes easier to understand how complicated systems work when you know what they look like, understanding how biological structures fit together in a cell is key to understanding how organisms function.

    A brief history of microscopy

    In the 17th century, “light microscopy” first revealed the existence of cells. In the 20th century, “electron microscopy” offered even greater detail, revealing the elaborate structures within cells, including organelles like the endoplasmic reticulum, a complex network of membranes that play key roles in protein synthesis and transport.

    From the 1940s to 1960s, biochemists worked to separate cells into their molecular components and learn how to determine the 3D structures of proteins and other macromolecules at or near atomic resolution. This was first done using “X-ray crystallography” to visualize the structure of myoglobin, a protein that supplies oxygen to muscles.

    Over the past decade, techniques based on nuclear magnetic resonance, which produces images based on how atoms interact in a magnetic field, and cryo-electron microscopy have rapidly increased the number and complexity of the structures scientists can visualize.

    What is cryo-EM and cryo-ET?

    Cryo-electron microscopy, or cryo-EM, uses a camera to detect how a beam of electrons is deflected as the electrons pass through a sample to visualize structures at the molecular level. Samples are rapidly frozen to protect them from radiation damage. Detailed models of the structure of interest are made by taking multiple images of individual molecules and averaging them into a 3D structure.

    Cryo-ET shares similar components with cryo-EM but uses different methods. Because most cells are too thick to be imaged clearly, a region of interest in a cell is first thinned by using an ion beam. The sample is then tilted to take multiple pictures of it at different angles, analogous to a CT scan of a body part – although in this case the imaging system itself is tilted, rather than the patient. These images are then combined by a computer to produce a 3D image of a portion of the cell.
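The step of combining tilted views into one volume is tomographic reconstruction. Below is a deliberately tiny toy sketch of the core idea, back-projection, using a 2D “sample” and just two viewing directions (real cryo-ET pipelines use dozens of tilt angles and far more sophisticated algorithms; every number here is invented for illustration):

```python
import numpy as np

# Toy 2D "sample" with one dense feature at row 1, column 2.
sample = np.zeros((4, 4))
sample[1, 2] = 5.0

# Two projections, as if imaging the sample from two tilt angles:
side_view = sample.sum(axis=1)   # sum along rows
top_view = sample.sum(axis=0)    # sum along columns

# Back-projection: smear each view back across the grid and add them.
reconstruction = side_view[:, None] + top_view[None, :]

# The result is blurry, but it peaks exactly where the feature really is.
peak = tuple(int(i) for i in
             np.unravel_index(np.argmax(reconstruction), reconstruction.shape))
print(peak)   # (1, 2)
```

With many more viewing angles, the blur can be undone and fine structural detail recovered, which is what makes the tilt series so informative.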

    This is a cryo-ET image of the chloroplast of an algal cell. Engel et al. (2015), CC BY.

    The resolution of this image is high enough that researchers – or computer programs – can identify the individual components of different structures in a cell. Researchers have used this approach, for example, to show how proteins move and are degraded inside an algal cell.

    Many of the steps researchers once had to do manually to determine the structures of cells are becoming automated, allowing scientists to identify new structures at vastly higher speeds. For example, combining cryo-EM with artificial intelligence programs like AlphaFold [Nature (below)] can facilitate image interpretation by predicting protein structures that have not yet been characterized.

    Understanding cell structure and function

    As imaging methods and workflows improve, researchers will be able to tackle some key questions in cell biology with different strategies.

    The first step is to decide what cells and which regions within those cells to study. Another visualization technique, called correlated light and electron microscopy, or CLEM [FEBS Letters (below)], uses fluorescent tags to help locate regions where interesting processes are taking place in living cells.

    This is a cryo-EM image of a human T-cell leukemia virus type-1 (HTLV-1). vdvornyk/iStock via Getty Images Plus.

    Comparing the genetic difference between cells [iScience (below)] can provide additional insight. Scientists can look at cells that are unable to carry out particular functions and see how this is reflected in their structure. This approach can also help researchers study how cells interact with each other.

    Cryo-ET is likely to remain a specialized tool for some time. But further technological developments and increasing accessibility will allow the scientific community to examine the link between cellular structure and function at previously inaccessible levels of detail. I anticipate new theories of how cells work, recasting them from disorganized bags of molecules into intricately organized and dynamic systems.”

    Science papers:
    Nature 2021
    FEBS Letters 2022
    iScience 2018
    See the science papers for instructive material with images.

    See the full article here .



    The University of Pittsburgh is a state-related research university, founded as the Pittsburgh Academy in 1787. Pitt is a member of The Association of American Universities, which comprises 62 preeminent doctorate-granting research institutions in North America.

    From research achievements to the quality of its academic programs, the University of Pittsburgh ranks among the best in higher education.

    Faculty members have expanded knowledge in the humanities and sciences, earning such prestigious honors as the National Medal of Science, the MacArthur Foundation’s “genius” grant, the Lasker-DeBakey Clinical Medical Research Award, and election to The National Academy of Sciences and The Institute of Medicine.
    Pitt students have earned Rhodes, Goldwater, Marshall, and Truman Scholarships, among other highly competitive national and international scholarships.

    Alumni have pioneered MRI and TV, won Nobels and Pulitzers, led corporations and universities, served in government and the military, conquered Hollywood and The New York Times bestsellers list, and won Super Bowls and NBA championships.

     
  • richardmitnick 4:29 pm on December 7, 2022 Permalink | Reply
    Tags: "How far has nuclear fusion power come? We could be at a turning point for the technology", , , EAST experiment in China, , ITER Tokamak in Saint-Paul-lès-Durance France, Korea’s flagship experiment KSTAR, , , The Conversation (AU), The Joint European Torus [JET] tokamak generator based at the Culham Center for Fusion Energy located at the Culham Science Centre near Culham in Oxfordshire England, The National Ignition Facility at Lawrence Livermore National Laboratory in California   

    From “The Conversation (AU)” : “How far has nuclear fusion power come? We could be at a turning point for the technology” 

    From “The Conversation (AU)”

    12.6.22
    Nathan Garland
    Lecturer in Applied Mathematics and Physics
    Griffith University

    Matthew Hole
    Senior Research Fellow, Mathematical Sciences Institute
    Australian National University

    Our society faces the grand challenge of providing sustainable, secure and affordable means of generating energy, while trying to reduce carbon dioxide emissions to net zero by around 2050.

    To date, developments in fusion power, which potentially ticks all these boxes, have been funded almost exclusively by the public sector. However, something is changing.

    Private equity investment in the global fusion industry has more than doubled in just one year – from US$2.1 billion in 2021 to US$4.7 billion in 2022, according to a survey from the Fusion Industry Association.

    So, what is driving this recent change? There’s lots to be excited about.

    The U.K.-based JET laboratory recently managed to produce and maintain a comparatively high level of thermal energy over a five-second period, a promising sign for the viability of nuclear fusion.
    Courtesy of Euro Fusion.

    Before we explore that, let’s take a quick detour to recap what fusion power is.

    Merging atoms together

    Fusion works the same way our Sun does, by merging two heavy hydrogen atoms under extreme heat and pressure to release vast amounts of energy.
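The released energy comes from the small amount of mass lost in the merger, via E = mc². A back-of-the-envelope check for the deuterium-tritium reaction most reactor designs target (atomic masses are standard table values; the script is illustrative, not from the article):

```python
# D + T -> He-4 + n: the energy released equals the mass defect times c^2.
# Atomic masses in unified mass units (u); 1 u = 931.494 MeV/c^2.
m_deuterium = 2.014102
m_tritium = 3.016049
m_helium4 = 4.002602
m_neutron = 1.008665

mass_defect_u = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect_u * 931.494
print(f"~{energy_mev:.1f} MeV released per reaction")   # ~17.6 MeV
```

Per kilogram of fuel, this is roughly millions of times the energy released by burning the same mass of coal, which is what makes fusion so attractive.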

    It’s the opposite of the fission process used by nuclear power plants, in which atoms are split to release large amounts of energy.

    Sustaining nuclear fusion at scale has the potential to produce a safe, clean, almost inexhaustible power source.

    Our Sun sustains fusion at its core with a plasma of charged particles at around 15 million degrees Celsius. Down on Earth, we are aiming for hundreds of millions of degrees Celsius, because we don’t have the enormous mass of the Sun compressing the fuel down for us.

    Scientists and engineers have worked out several designs for how we might achieve this, but most fusion reactors use strong magnetic fields to “bottle” and confine the hot plasma.

    The main challenge on the road to commercial fusion power is to provide an environment that can contain the intensely hot burning plasma long enough for the reaction to become self-sustaining – producing more energy than was needed to get it started.

    Joining the public and private

    Fusion development has been progressing since the 1950s. Most of it was driven by government funding for fundamental science.

    Now, a growing number of private fusion companies around the world are forging ahead towards commercial fusion energy. A change in government attitudes has been crucial to this.

    The US and UK governments are fostering public-private partnerships to complement their strategic research programs.

    For example, the White House recently announced it would develop a “bold decadal vision for commercial fusion energy”.

    In the United Kingdom, the government has invested in a program aimed at connecting a fusion generator to the national electricity grid.

    The technology has actually advanced, too.

    In addition to public-private resourcing, the technologies we need for fusion plants have come along in leaps and bounds.

    In 2021, MIT scientists and Commonwealth Fusion Systems developed a record-breaking magnet that will allow them to build a compact fusion device called SPARC “that is substantially smaller, lower cost, and on a faster timeline”.

    In recent years, several fusion experiments have also reached the all-important milestone of sustaining plasma temperatures of 100 million degrees Celsius or above. These include the EAST experiment in China, Korea’s flagship experiment KSTAR, and UK-based company Tokamak Energy.

    These incredible feats demonstrate an unprecedented ability to replicate conditions found inside our Sun and keep extremely hot plasma trapped long enough to encourage fusion to occur.

    In February, the Joint European Torus [above] – the world’s most powerful operational tokamak – announced world-record energy confinement.

    And the next-step fusion energy experiment to demonstrate net power gain, ITER, is under construction in France and now about 80% complete.

    Magnets aren’t the only path to fusion either. In November 2021, The National Ignition Facility at Lawrence Livermore National Laboratory in California achieved a historic step forward for inertial confinement fusion.

    By focusing nearly 200 powerful lasers to confine and compress a target the size of a pencil’s eraser, they produced a small fusion “hot spot” generating fusion energy over a short time period.

    In Australia, a company called HB11 is developing proton-boron fusion technology through a combination of high-powered lasers and magnetic fields.

    Fusion and renewables can go hand in hand

    It is crucial that investment in fusion is not at the cost of other forms of renewable energy and the transition away from fossil fuels.

    We can afford to expand adoption of current renewable energy technology like solar, wind, and pumped hydro while also developing next-generation solutions for electricity production.

    This exact strategy was outlined recently by the United States in its Net-Zero Game Changers Initiative. In this plan, resource investment will be targeted to developing a path to rapid decarbonization in parallel with the commercial development of fusion.

    History shows us that incredible scientific and engineering progress is possible when we work together with the right resources – the rapid development of COVID-19 vaccines is just one recent example.

    It is clear many scientists, engineers, and now governments and private investors (and even fashion designers) have decided fusion energy is a solution worth pursuing, not a pipe dream. Right now, it’s the best shot we’ve yet had to make fusion power a viable reality.

    See the full article here .


     
  • richardmitnick 12:03 pm on November 30, 2022 Permalink | Reply
    Tags: "Where did the Earth’s oxygen come from? New study hints at an unexpected source", A tantalizing new possibility for oxygenation: that at least some of the Earth’s early oxygen came from a tectonic source via the movement and destruction of the Earth’s crust., Aerobics, , , , , In the deep past — as far back as the Neoarchean era 2.8 to 2.5 billion years ago — this oxygen was almost absent., , The amount of oxygen in the Earth’s atmosphere makes it a habitable planet., The Archean eon represents one third of our planet’s history from 2.5 billion years ago to four billion years ago., The Conversation (AU), There is considerable debate over whether plate tectonics operated back in the Archean era., This early Earth was a water-world covered in green oceans and shrouded in a methane haze and completely lacking multi-cellular life., This research aimed to test whether the absence of oxidized materials in Archean bottom waters and sediments could prevent the formation of oxidized magmas.   

    From “The Conversation (AU)” : “Where did the Earth’s oxygen come from? New study hints at an unexpected source” 

    From “The Conversation (AU)”

    11.28.22
    David Mole
    Postdoctoral fellow, Earth Sciences
    Laurentian University

    Adam Charles Simon
    Arthur F. Thurnau Professor, Earth & Environmental Sciences
    University of Michigan

    Xuyang Meng
    Postdoctoral Fellow, Earth and Environmental Sciences
    University of Michigan

    An artist’s impression of the Earth around 2.7 billion years ago in the Archean Eon. With green iron-rich seas, an orange methane-rich atmosphere and a surface dominated by oceans, the Archean Earth would have been a very different place. (Illustration by Andrey Atuchin), Author provided (no reuse)[Used under “Fair Use” for academic teaching purposes.]

    “The amount of oxygen in the Earth’s atmosphere makes it a habitable planet.

    Twenty-one per cent of the atmosphere consists of this life-giving element. But in the deep past — as far back as the Neoarchean era 2.8 to 2.5 billion years ago — this oxygen was almost absent [Science Advances (below)].

    So, how did Earth’s atmosphere become oxygenated?

    Our research, published in Nature Geoscience [below], adds a tantalizing new possibility: that at least some of the Earth’s early oxygen came from a tectonic source via the movement and destruction of the Earth’s crust.

    The Archean Earth

    The Archean eon, from four billion to 2.5 billion years ago, represents one third of our planet’s history.

    This alien Earth was a water-world, covered in green oceans, shrouded in a methane haze and completely lacking multi-cellular life. Another alien aspect of this world was the nature of its tectonic activity.

    The cross-section of a subduction zone, where oceanic lithosphere slides under a continental margin. (Xuyang Meng), Author provided (no reuse)[Used under “Fair Use” for academic teaching purposes.]

    On modern Earth, the dominant tectonic activity is called plate tectonics, where oceanic crust — the outermost layer of the Earth under the oceans — sinks into the Earth’s mantle (the area between the Earth’s crust and its core) at points of convergence called subduction zones.

    However, there is considerable debate over whether plate tectonics operated back in the Archean era.

    One feature of modern subduction zones is their association with oxidized magmas. These magmas are formed when oxidized sediments and bottom waters — cold, dense water near the ocean floor — are introduced into the Earth’s mantle [PNAS (below)]. This produces magmas with high oxygen and water contents.

    Our research aimed to test whether the absence of oxidized materials in Archean bottom waters and sediments could prevent the formation of oxidized magmas. The identification of such magmas in Neoarchean magmatic rocks could provide evidence that subduction and plate tectonics occurred 2.7 billion years ago.

    The experiment

    We collected samples of 2750- to 2670-million-year-old granitoid rocks from across the Abitibi-Wawa subprovince of the Superior Province — the largest preserved Archean continent stretching over 2000 km from Winnipeg, Manitoba to far-eastern Quebec. This allowed us to investigate the level of oxidation of magmas generated across the Neoarchean era.

    Measuring the oxidation-state of these magmatic rocks — formed through the cooling and crystallization of magma or lava — is challenging. Post-crystallization events may have modified these rocks through later deformation, burial or heating.

    So, we decided to look at the mineral apatite, which is present in the zircon crystals in these rocks. Zircon crystals can withstand the intense temperatures and pressures of post-crystallization events. They retain clues about the environments in which they were originally formed and provide precise ages for the rocks themselves.

    Small apatite crystals that are less than 30 microns wide — the size of a human skin cell — are trapped in the zircon crystals. They contain sulfur. By measuring the amount of sulfur in apatite, we can establish whether the apatite grew from an oxidized magma.

    3
    Map of the Superior Province that stretches from central Manitoba to eastern Quebec in Canada. (Xuyang Meng), Author provided.

    We were able to successfully measure the oxygen fugacity of the original Archean magma — which is essentially the amount of free oxygen in it — using a specialized technique called sulfur X-ray Absorption Near Edge Structure spectroscopy (S-XANES) at the Advanced Photon Source synchrotron at The DOE’s Argonne National Laboratory in Illinois.

    Creating oxygen from water?

    We found that the magma sulfur content, which was initially around zero, increased to 2000 parts per million around 2705 million years ago, indicating the magmas had become more sulfur-rich. Additionally, the predominance of S6+ — a type of sulfur ion — in the apatite [Journal of Petrology (below)] suggested that the sulfur came from an oxidized source, matching the data from the host zircon crystals [Precambrian Research (below)].

    These new findings indicate that oxidized magmas did form in the Neoarchean era 2.7 billion years ago. The data show that the lack of dissolved oxygen in the Archean ocean reservoirs did not prevent the formation of sulfur-rich, oxidized magmas in the subduction zones. The oxygen in these magmas must have come from another source, and was ultimately released into the atmosphere during volcanic eruptions.

    We found that the occurrence of these oxidized magmas correlates with major gold mineralization events in the Superior Province and the Yilgarn Craton (Western Australia), demonstrating a connection between these oxygen-rich magmas and the formation of world-class ore deposits worldwide.

    The implications of these oxidized magmas go beyond the understanding of early Earth geodynamics. Previously, it was thought unlikely that Archean magmas could be oxidized, when the ocean water [Science (below)] and ocean floor rocks or sediments [Nature (below)] were not.

    While the exact mechanism is unclear, the occurrence of these magmas suggests that the process of subduction, where ocean water is taken hundreds of kilometres into our planet, generates free oxygen. This then oxidizes the overlying mantle.

    Our study shows that Archean subduction could have been a vital, unforeseen factor in the oxygenation of the Earth, the early whiffs of oxygen 2.7 billion years ago [Nature Geoscience (below)] and also the Great Oxidation Event, which marked an increase in atmospheric oxygen by two per cent 2.45 to 2.32 billion years ago [Treatise on Geochemistry (Second Edition) (below)].

    As far as we know, the Earth is the only place in the solar system — past or present — with plate tectonics and active subduction. This suggests that this study could also help explain the lack of oxygen, and ultimately of life, on the other rocky planets.”

    Science papers:
    PNAS 2019
    Science Advances 2020
    Journal of Petrology 2021
    Precambrian Research 2021
    Treatise on Geochemistry (Second Edition) 2014
    Nature Geoscience 2017
    Nature 2018
    Science 2002
    Nature Geoscience
    See these science papers for instructive material with images and tables.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation (AU) launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.

    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:08 pm on November 16, 2022 Permalink | Reply
    Tags: "Powerful linear accelerator begins smashing atoms – 2 scientists on the team explain how it could reveal rare forms of matter", 1. What are the properties of atomic nuclei with a large difference between the numbers of protons and neutrons?, 2. How are elements formed in the cosmos?, 3. Do physicists understand the fundamental symmetries of the universe like why there is more matter than antimatter in the universe?, 4. How can the information from rare isotopes be applied in medicine and industry and national security?, A community of roughly 1600 nuclear scientists from all over the world has been waiting for a decade to begin doing science enabled by the new particle accelerator., At full strength the FRIB will be the most powerful heavy-ion accelerator on Earth., Even though the facility is currently running at only a fraction of its full power multiple scientific collaborations working at FRIB have already produced and detected about 100 rare isotopes., Experiments at FRIB promise to provide new insights into the fundamental nature of the universe., Nuclear Chemistry, Over the coming years FRIB is set to explore four big questions in nuclear physics:, The Conversation (AU), The study of rare isotopes

    From The Facility for Rare Isotope Beams [FRIB] At The Michigan State University Via “The Conversation (AU)” : “Powerful linear accelerator begins smashing atoms – 2 scientists on the team explain how it could reveal rare forms of matter” 

    From The Facility for Rare Isotope Beams [FRIB]

    At

    The Michigan State University

    Via

    “The Conversation (AU)”

    11.14.22

    1
    A new particle accelerator at Michigan State University is set to discover thousands of never-before-seen isotopes. Facility for Rare Isotope Beams, CC BY-ND

    Sean Liddick
    Associate Professor of Chemistry, Michigan State University

    Artemis Spyrou
    Professor of Nuclear Physics, Michigan State University

    “Just a few hundred feet from where we are sitting is a large metal chamber devoid of air and draped with the wires needed to control the instruments inside. A beam of particles passes through the interior of the chamber silently at around half the speed of light until it smashes into a solid piece of material, resulting in a burst of rare isotopes.

    This is all taking place in the Facility for Rare Isotope Beams, or FRIB, which is operated by Michigan State University for The DOE Office of Science. Starting in May 2022, national and international teams of scientists converged at Michigan State University and began running scientific experiments at FRIB with the goal of creating, isolating and studying new isotopes. The experiments promised to provide new insights into the fundamental nature of the universe.

    We are two professors in nuclear chemistry and nuclear physics who study rare isotopes. Isotopes are, in a sense, different flavours of an element with the same number of protons in their nucleus but different numbers of neutrons.

    The accelerator at FRIB started working at low power, but when it finishes ramping up to full strength, it will be the most powerful heavy-ion accelerator on Earth. By accelerating heavy ions – electrically charged atoms of elements – FRIB will allow scientists like us to create and study thousands of never-before-seen isotopes. A community of roughly 1,600 nuclear scientists from all over the world has been waiting for a decade to begin doing science enabled by the new particle accelerator.

    The first experiments at FRIB were completed over the summer of 2022. Even though the facility is currently running at only a fraction of its full power, multiple scientific collaborations working at FRIB have already produced and detected about 100 rare isotopes. These early results are helping researchers learn about some of the rarest physics in the universe.


    Put some Uranium 238 in a cloud chamber to see the radioactive particles.

    What is a rare isotope?

    It takes incredibly high amounts of energy to produce most isotopes. In nature, heavy rare isotopes are produced during the cataclysmic deaths of massive stars called supernovas or during the merging of two neutron stars.

    To the naked eye, two isotopes of any element look and behave the same way – all isotopes of the element mercury would look just like the liquid metal used in old thermometers. However, because the nuclei of isotopes of the same element have different numbers of neutrons, they differ in how long they live, what type of radioactivity they emit and in many other ways.

    For example, some isotopes are stable and do not decay or emit radiation, so they are common in the universe. Other isotopes of the very same element can be radioactive so they inevitably decay away as they turn into other elements. Since radioactive isotopes disappear over time, they are relatively rarer.

    Not all decay happens at the same rate though. Some radioactive elements – like potassium-40 – emit particles through decay at such a low rate that a small amount of the isotope can last for billions of years. Other, more highly radioactive isotopes like magnesium-38 exist for only a fraction of a second before decaying away into other elements. Short-lived isotopes, by definition, do not survive long and are rare in the universe. So if you want to study them, you have to make them yourself.
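
    The contrast between potassium-40 and magnesium-38 is just the exponential decay law, N(t) = N0 · (1/2)^(t/t½), evaluated with wildly different half-lives. A minimal Python sketch (the potassium-40 half-life of roughly 1.25 billion years is well established; the sub-second value used for the short-lived case is only illustrative):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioactive sample remaining after time t (same units as half_life)."""
    return 0.5 ** (t / half_life)

# Potassium-40 (half-life about 1.25 billion years): an appreciable fraction
# survives even over the age of the Earth.
print(remaining_fraction(4.5e9, 1.25e9))   # roughly 0.08 of the original K-40 left

# A short-lived isotope (0.3 s is an illustrative half-life, not a measured value)
# is essentially gone within seconds.
print(remaining_fraction(1.0, 0.3))        # roughly 0.1 left after one second
```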

    2
    FRIB at Michigan State University for the DOE delineated.

    Creating isotopes in a lab

    While only about 250 isotopes naturally occur on Earth, theoretical models predict that about 7,000 isotopes should exist in nature. Scientists have used particle accelerators to produce around 3,000 of these rare isotopes.

    3
    The green-colored chambers use electromagnetic waves to accelerate charged ions to nearly half the speed of light. Facility for Rare Isotope Beams, CC BY-ND.

    The FRIB accelerator is 1,600 feet long and made of three segments folded in roughly the shape of a paperclip. Within these segments are numerous, extremely cold vacuum chambers that alternately pull and push the ions using powerful electromagnetic pulses. FRIB can accelerate any naturally occurring isotope – whether it is as light as oxygen or as heavy as uranium – to approximately half the speed of light.

    To create radioactive isotopes, you only need to smash this beam of ions into a solid target like a piece of beryllium metal or a rotating disk of carbon.

    4
    There are many different instruments designed to measure specific attributes of the particles created during experiments at FRIB – like this instrument called FDSi, which is built to measure charged particles, neutrons and photons. Facility for Rare Isotope Beams, CC BY-ND.

    The impact of the ion beam on the fragmentation target breaks the nucleus of the stable isotope apart and produces many hundreds of rare isotopes simultaneously. To isolate the interesting or new isotopes from the rest, a separator sits between the target and the sensors. Particles with the right momentum and electrical charge will be passed through the separator while the rest are absorbed. Only a subset of the desired isotopes will reach the many instruments built to observe the nature of the particles.
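
    Separators of this kind sort fragments by magnetic rigidity, Bρ = p/q: for a fixed field setting, only ions whose momentum-to-charge ratio falls in a narrow window follow the reference trajectory. The sketch below is a schematic illustration with invented fragment values, not FRIB separator parameters:

```python
def rigidity(p_gev_per_c, charge):
    """Magnetic rigidity B*rho in T·m for momentum in GeV/c and charge in units of e.
    Uses the standard conversion B*rho [T·m] = 3.3356 * p [GeV/c] / q."""
    return 3.3356 * p_gev_per_c / charge

def select(fragments, target_brho, acceptance=0.02):
    """Keep only fragments within a fractional rigidity window of the setting."""
    return [f for f in fragments
            if abs(rigidity(f["p"], f["q"]) - target_brho) / target_brho <= acceptance]

# Invented fragments: momentum in GeV/c, charge in units of e.
fragments = [
    {"name": "wanted isotope", "p": 40.0, "q": 20},
    {"name": "too fast",       "p": 42.0, "q": 20},
    {"name": "wrong charge",   "p": 40.0, "q": 19},
]
passed = select(fragments, target_brho=rigidity(40.0, 20))
print([f["name"] for f in passed])  # only "wanted isotope" survives the cut
```

    A real separator also uses energy-loss degraders and position slits, but the rigidity cut is the core of the momentum-and-charge selection described above.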

    The probability of creating any specific isotope during a single collision can be very small. The odds of creating some of the rarer exotic isotopes can be on the order of 1 in a quadrillion – roughly the same odds as winning back-to-back Mega Millions jackpots. But the powerful beams of ions used by FRIB contain so many ions and produce so many collisions in a single experiment that the team can reasonably expect to find even the rarest of isotopes. According to calculations, FRIB’s accelerator should be able to produce approximately 80% of all theorized isotopes.
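
    The arithmetic behind that claim is Poisson statistics: treating each collision as an independent trial, the expected yield is N·p and the chance of seeing at least one event is 1 − e^(−N·p). The collision count below is a made-up round number for illustration, not an FRIB figure:

```python
import math

def expected_yield(n_collisions, p_per_collision):
    """Return the expected event count and the probability of at least one event,
    treating each collision as an independent trial (Poisson limit)."""
    mean = n_collisions * p_per_collision
    return mean, 1.0 - math.exp(-mean)

# Hypothetical: 10^16 collisions in an experiment, 1-in-a-quadrillion odds each.
mean, p_any = expected_yield(1e16, 1e-15)
print(mean)   # 10 expected events
print(p_any)  # probability of at least one event is essentially 1
```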

    The first two FRIB scientific experiments

    A multi-institution team led by researchers at The DOE’s Lawrence Berkeley National Laboratory, The DOE’s Oak Ridge National Laboratory, University of Tennessee, Knoxville, Mississippi State University and Florida State University, together with researchers at MSU, began running the first experiment at FRIB on May 9, 2022. The group directed a beam of calcium-48 – a calcium nucleus with 28 neutrons instead of the usual 20 – into a beryllium target at 1 kW of power. Even at one quarter of a percent of the facility’s 400-kW maximum power, approximately 40 different isotopes passed through the separator to the instruments.

    The FDSi device recorded the time each ion arrived, what isotope it was and when it decayed away. Using this information, the collaboration deduced the half-lives of the isotopes; the team has already reported on five previously unknown half-lives.
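
    In the simplest case, deducing a half-life from decay timestamps reduces to fitting an exponential distribution: the maximum-likelihood estimate of the mean lifetime τ is just the average observed decay time, and t½ = τ·ln 2. A toy version with simulated timestamps (not FDSi data, which must also handle backgrounds and decay chains):

```python
import math
import random

def estimate_half_life(decay_times):
    """Maximum-likelihood half-life for pure exponential decay:
    tau_hat = mean(decay_times), t_half = tau_hat * ln(2)."""
    tau_hat = sum(decay_times) / len(decay_times)
    return tau_hat * math.log(2)

# Simulate 10,000 decays of an isotope with a true half-life of 50 ms.
random.seed(42)
true_half_life = 0.050
tau = true_half_life / math.log(2)
times = [random.expovariate(1.0 / tau) for _ in range(10_000)]
print(estimate_half_life(times))  # should land close to 0.050
```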

    The second FRIB experiment began on June 15, 2022, led by a collaboration of researchers from The DOE’s Lawrence Livermore National Laboratory, ORNL, UTK and MSU. The facility accelerated a beam of selenium-82 and used it to produce rare isotopes of the elements scandium, calcium and potassium. These isotopes are commonly found in neutron stars, and the goal of the experiment was to better understand what type of radioactivity these isotopes emit as they decay. Understanding this process could shed light on how neutron stars lose energy.

    The first two FRIB experiments were just the tip of the iceberg of this new facility’s capabilities. Over the coming years, FRIB is set to explore four big questions in nuclear physics: First, what are the properties of atomic nuclei with a large difference between the numbers of protons and neutrons? Second, how are elements formed in the cosmos? Third, do physicists understand the fundamental symmetries of the universe, like why there is more matter than antimatter in the universe? Finally, how can the information from rare isotopes be applied in medicine, industry and national security?”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Michigan State Campus

    Michigan State University is a public research university located in East Lansing, Michigan, United States. Michigan State University was founded in 1855 and became the nation’s first land-grant institution under the Morrill Act of 1862, serving as a model for future land-grant universities.

    The university was founded as the Agricultural College of the State of Michigan, one of the country’s first institutions of higher education to teach scientific agriculture. After the introduction of the Morrill Act, the college became coeducational and expanded its curriculum beyond agriculture. Today, Michigan State University is one of the largest universities in the United States (in terms of enrollment) and has approximately 634,300 living alumni worldwide.

    U.S. News & World Report ranks its graduate programs the best in the U.S. in elementary teacher’s education, secondary teacher’s education, industrial and organizational psychology, rehabilitation counseling, African history (tied), supply chain logistics and nuclear physics in 2019. Michigan State University pioneered the studies of packaging, hospitality business, supply chain management, and communication sciences. Michigan State University is a member of the Association of American Universities and is classified among “R1: Doctoral Universities – Very high research activity”. The university’s campus houses the National Superconducting Cyclotron Laboratory, the W. J. Beal Botanical Garden, the Abrams Planetarium, the Wharton Center for Performing Arts, the Eli and Edythe Broad Art Museum, the Facility for Rare Isotope Beams, and the country’s largest residence hall system.

    Research

    The university has a long history of academic research and innovation. In 1877, botany professor William J. Beal performed the first documented genetic crosses to produce hybrid corn, which led to increased yields. Michigan State University dairy professor G. Malcolm Trout improved the process for the homogenization of milk in the 1930s, making it more commercially viable. In the 1960s, Michigan State University scientists developed cisplatin, a leading cancer fighting drug, and followed that work with the derivative, carboplatin. Albert Fert, an Adjunct professor at Michigan State University, was awarded the 2007 Nobel Prize in Physics together with Peter Grünberg.

    Today Michigan State University continues its research with facilities such as the Department of Energy -sponsored Plant Research Laboratory and a particle accelerator called the National Superconducting Cyclotron Laboratory [below]. The Department of Energy Office of Science named Michigan State University as the site for the Facility for Rare Isotope Beams (FRIB). The $730 million facility will attract top researchers from around the world to conduct experiments in basic nuclear science, astrophysics, and applications of isotopes to other fields.

    Michigan State University FRIB [Facility for Rare Isotope Beams] .

    In 2004, scientists at the Cyclotron produced and observed a new isotope of the element germanium, called Ge-60. In that same year, Michigan State University, in consortium with the University of North Carolina at Chapel Hill and the government of Brazil, broke ground on the 4.1-meter Southern Astrophysical Research Telescope (SOAR) in the Andes Mountains of Chile.

    The consortium telescope will allow the Physics & Astronomy department to study galaxy formation and origins. Since 1999, MSU has been part of a consortium called the Michigan Life Sciences Corridor, which aims to develop biotechnology research in the State of Michigan. Finally, the College of Communication Arts and Sciences’ Quello Center researches issues of information and communication management.


    The Michigan State University Spartans compete in the NCAA Division I Big Ten Conference. Michigan State Spartans football won the Rose Bowl Game in 1954, 1956, 1988 and 2014, and the university claims a total of six national football championships. Spartans men’s basketball won the NCAA National Championship in 1979 and 2000 and has attained the Final Four eight times since the 1998–1999 season. Spartans ice hockey won NCAA national titles in 1966, 1986 and 2007. The women’s cross country team was named Big Ten champions in 2019. In the fall of 2019, MSU student-athletes posted all-time highs for graduation success rates and federal graduation rates, according to NCAA statistics.

     
  • richardmitnick 2:11 pm on November 11, 2022 Permalink | Reply
    Tags: "We tested Einstein’s theory of gravity on the scale of the universe – here’s what we found", The Conversation (AU)

    From The University of Portsmouth (UK) And Simon Fraser University (CA) Via “The Conversation (AU)” : “We tested Einstein’s theory of gravity on the scale of the universe – here’s what we found” 

    From The University of Portsmouth (UK)

    And

    Simon Fraser University (CA)

    Via

    “The Conversation (AU)”

    11.10.22

    Kazuya Koyama
    Professor of Cosmology, University of Portsmouth

    Levon Pogosian
    Professor of Physics, Simon Fraser University

    1
    Thousands of galaxies seen by the James Webb Space Telescope. Credit: NASA

    Everything in the universe has gravity – and feels it too. Yet this most common of all fundamental forces is also the one that presents the biggest challenges to physicists. Albert Einstein’s Theory of General Relativity has been remarkably successful in describing the gravity of stars and planets, but it doesn’t seem to apply perfectly on all scales.

    General Relativity has passed many years of observational tests, from Eddington’s measurement of the deflection of starlight by the Sun in 1919 to the recent detection of gravitational waves.

    However, gaps in our understanding start to appear when we try to apply it to extremely small distances, where the laws of quantum mechanics operate, or when we try to describe the entire universe.

    Our new study, published in Nature Astronomy [below], has now tested Einstein’s theory on the largest of scales. We believe our approach may one day help resolve some of the biggest mysteries in Cosmology, and the results hint that the Theory of General Relativity may need to be tweaked on this scale.

    Faulty model?

    Quantum theory predicts that empty space, the vacuum, is packed with energy. We do not notice its presence because our devices can only measure changes in energy rather than its total amount.

    However, according to Einstein, the vacuum energy has a repulsive gravity – it pushes the empty space apart. Interestingly, in 1998, it was discovered that the expansion of the universe is in fact accelerating (a finding awarded with the 2011 Nobel prize in physics).

    However, the amount of vacuum energy, or dark energy as it has been called, necessary to explain the acceleration is many orders of magnitude smaller than what quantum theory predicts.

    Hence the big question, dubbed “the old cosmological constant problem”, is whether the vacuum energy actually gravitates – exerting a gravitational force and changing the expansion of the universe.

    If yes, then why is its gravity so much weaker than predicted? If the vacuum does not gravitate at all, what is causing the cosmic acceleration?

    We don’t know what dark energy is, but we need to assume it exists in order to explain the universe’s expansion. Similarly, we also need to assume there is a type of invisible matter presence, dubbed dark matter, to explain how galaxies and clusters evolved to be the way we observe them today.

    These assumptions are baked into scientists’ standard cosmological theory, called the lambda cold dark matter (LCDM) model – suggesting there is 70% dark energy, 25% dark matter and 5% ordinary matter in the cosmos. And this model has been remarkably successful in fitting all the data collected by cosmologists over the past 20 years.

    But the fact that most of the universe is made up of dark forces and substances, taking odd values that don’t make sense, has prompted many physicists to wonder if Einstein’s theory of gravity needs modification to describe the entire universe.

    A new twist appeared a few years ago when it became apparent that different ways of measuring the rate of cosmic expansion, dubbed the “Hubble constant”, give different answers – a problem known as the “Hubble tension”.

    The disagreement, or tension, is between two values of the Hubble constant. One is the number predicted by the LCDM cosmological model, which has been developed to match the light left over from the Big Bang (the cosmic microwave background radiation). The other is the expansion rate measured by observing exploding stars known as supernovas in distant galaxies.
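
    For two independent Gaussian measurements, the tension is conventionally quoted in standard deviations as |H1 − H2| / sqrt(σ1² + σ2²). A sketch with round numbers close to the published values (Planck CMB fit: about 67.4 ± 0.5 km/s/Mpc; supernova distance ladder: about 73.0 ± 1.0 km/s/Mpc):

```python
import math

def tension_sigma(v1, err1, v2, err2):
    """Discrepancy between two independent measurements, in standard deviations,
    assuming Gaussian, uncorrelated uncertainties."""
    return abs(v1 - v2) / math.hypot(err1, err2)

print(tension_sigma(67.4, 0.5, 73.0, 1.0))  # roughly 5 sigma
```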

    Many theoretical ideas have been proposed for ways of modifying LCDM to explain the Hubble tension. Among them are alternative gravity theories.

    Digging for answers

    We can design tests to check if the universe obeys the rules of Einstein’s theory. General Relativity describes gravity as the curving or warping of space and time, bending the pathways along which light and matter travel. Importantly, it predicts that the trajectories of light rays and matter should be bent by gravity in the same way.

    Together with a team of cosmologists, we put the basic laws of general relativity to test. We also explored whether modifying Einstein’s theory could help resolve some of the open problems of cosmology, such as the Hubble tension.

    To find out whether General Relativity is correct on large scales, we set out, for the first time, to simultaneously investigate three aspects of it. These were the expansion of the universe, the effects of gravity on light and the effects of gravity on matter.

    Using a statistical method known as Bayesian inference, we reconstructed the gravity of the universe through cosmic history in a computer model based on these three parameters. We estimated the parameters using the cosmic microwave background data from the Planck satellite, supernova catalogues, and observations of the shapes and distribution of distant galaxies by the SDSS and DES telescopes. We then compared our reconstruction to the prediction of the LCDM model (essentially Einstein’s model).
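
    The statistical idea can be illustrated with a toy one-parameter version: the posterior is proportional to likelihood times prior, evaluated on a grid. The "measurements" and the deviation parameter μ below are invented for illustration; the real analysis reconstructs several functions of cosmic time from CMB, supernova and galaxy-survey data.

```python
import math

def grid_posterior(data, sigma, mu_grid):
    """Normalized posterior over mu with a flat prior and Gaussian noise of width sigma."""
    log_post = [sum(-0.5 * ((d - mu) / sigma) ** 2 for d in data) for mu in mu_grid]
    peak = max(log_post)                          # subtract the peak for numerical stability
    weights = [math.exp(lp - peak) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]

# Invented mock measurements of a deviation-from-GR parameter (mu = 1 is pure Einstein gravity).
data = [1.02, 0.97, 1.05, 0.99, 1.01]
mu_grid = [i / 1000 for i in range(800, 1201)]    # mu from 0.800 to 1.200
posterior = grid_posterior(data, sigma=0.05, mu_grid=mu_grid)
mu_mean = sum(mu * p for mu, p in zip(mu_grid, posterior))
print(mu_mean)  # posterior mean sits near the sample average, about 1.008
```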

    ___________________________________________________________________
    Apache Point Observatory
    SDSS Telescope at Apache Point Observatory, near Sunspot NM, USA, Altitude 2,788 meters (9,147 ft).
    ___________________________________________________________________
    The Dark Energy Survey

    Dark Energy Camera [DECam] built at The DOE’s Fermi National Accelerator Laboratory.

    NSF NOIRLab Cerro Tololo Inter-American Observatory (CL) Víctor M. Blanco 4-meter Telescope, which houses the Dark Energy Camera (DECam), at Cerro Tololo, Chile, at an altitude of 7,200 feet.

    NSF NOIRLab Cerro Tololo Inter-American Observatory (CL), approximately 80 km to the east of La Serena, Chile, at an altitude of 2,200 meters.

    The Dark Energy Survey is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. The Dark Energy Survey began searching the Southern skies on August 31, 2013.

    According to Albert Einstein’s Theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up.

    Nobel Prize in Physics for 2011 Expansion of the Universe

    4 October 2011

    The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics for 2011

    with one half to

    Saul Perlmutter
    The Supernova Cosmology Project
    The DOE’s Lawrence Berkeley National Laboratory and The University of California-Berkeley,

    and the other half jointly to

    Brian P. Schmidt
    The High-z Supernova Search Team, The Australian National University, Weston Creek, Australia.

    and

    Adam G. Riess

    The High-z Supernova Search Team, The Johns Hopkins University and The Space Telescope Science Institute, Baltimore, MD.

    Written in the stars

    “Some say the world will end in fire, some say in ice…” *

    What will be the final destiny of the Universe? Probably it will end in ice, if we are to believe this year’s Nobel Laureates in Physics. They have studied several dozen exploding stars, called supernovae, and discovered that the Universe is expanding at an ever-accelerating rate. The discovery came as a complete surprise even to the Laureates themselves.

    In 1998, cosmology was shaken at its foundations as two research teams presented their findings. Headed by Saul Perlmutter, one of the teams had set to work in 1988. Brian Schmidt headed another team, launched at the end of 1994, where Adam Riess was to play a crucial role.

    The research teams raced to map the Universe by locating the most distant supernovae. More sophisticated telescopes on the ground and in space, as well as more powerful computers and new digital imaging sensors (CCD, Nobel Prize in Physics in 2009), opened the possibility in the 1990s to add more pieces to the cosmological puzzle.

    The teams used a particular kind of supernova, called a Type Ia supernova. It is an explosion of an old compact star that is as heavy as the Sun but as small as the Earth. A single such supernova can emit as much light as a whole galaxy. All in all, the two research teams found over 50 distant supernovae whose light was weaker than expected – this was a sign that the expansion of the Universe was accelerating. The potential pitfalls had been numerous, and the scientists found reassurance in the fact that both groups had reached the same astonishing conclusion.

    For almost a century, the Universe has been known to be expanding as a consequence of the Big Bang about 14 billion years ago. However, the discovery that this expansion is accelerating is astounding. If the expansion continues to speed up, the Universe will end in ice.

    The acceleration is thought to be driven by dark energy, but what that dark energy is remains an enigma – perhaps the greatest in physics today. What is known is that dark energy constitutes about three quarters of the Universe. Therefore the findings of the 2011 Nobel Laureates in Physics have helped to unveil a Universe that to a large extent is unknown to science. And everything is possible again.

    *Robert Frost, Fire and Ice, 1920
    ______________________________________________________________________________

    We found interesting hints of a possible mismatch with Einstein’s prediction, albeit with rather low statistical significance. This means that there is nevertheless a possibility that gravity works differently on large scales, and that the theory of general relativity may need to be tweaked.

    Our study also found that it is very difficult to solve the Hubble tension problem by only changing the theory of gravity. The full solution would probably require a new ingredient in the cosmological model, present before the time when protons and electrons first combined to form hydrogen around 380,000 years after the Big Bang, such as a special form of dark matter, an early type of dark energy or primordial magnetic fields. Or, perhaps, there’s a yet unknown systematic error in the data.

    That said, our study has demonstrated that it is possible to test the validity of general relativity over cosmological distances using observational data. While we haven’t yet solved the Hubble tension, we will have a lot more data from new probes in a few years.

    This means that we will be able to use these statistical methods to continue tweaking general relativity, exploring the limits of modifications, to pave the way to resolving some of the open challenges in cosmology.

    Science paper:
    Nature Astronomy

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Simon Fraser University (CA) is a public research university in British Columbia, Canada, with three campuses: Burnaby (main campus), Surrey, and Vancouver. The 170-hectare (420-acre) main Burnaby campus on Burnaby Mountain, located 20 kilometres (12 mi) from downtown Vancouver, was established in 1965 and comprises more than 30,000 students and 160,000 alumni. The university was created in an effort to expand higher education across Canada.

    Simon Fraser University (CA) is a member of multiple national and international higher education associations, including the Association of Commonwealth Universities, International Association of Universities, and Universities Canada (CA). Simon Fraser University has also partnered with other universities and agencies to operate joint research facilities such as TRIUMF – Canada’s particle accelerator centre [Centre canadien d’accélération des particules] (CA) for particle and nuclear physics, which houses the world’s largest cyclotron, and Bamfield Marine Station, a major centre for teaching and research in marine biology.

    Undergraduate and graduate programs at Simon Fraser University (CA) operate on a year-round, three-semester schedule. Consistently ranked as Canada’s top comprehensive university and named to the Times Higher Education list of 100 world universities under 50, Simon Fraser University (CA) is also the first Canadian member of the National Collegiate Athletic Association, the world’s largest college sports association. In 2015, Simon Fraser University (CA) became the second Canadian university to receive accreditation from the Northwest Commission on Colleges and Universities. Simon Fraser University (CA) faculty and alumni have won 43 fellowships to the Royal Society of Canada [Société royale du Canada] (CA), three Rhodes Scholarships and one Pulitzer Prize. Alumni include two former premiers of British Columbia, Gordon Campbell and Ujjal Dosanjh; owner of the Vancouver Canucks NHL team, Francesco Aquilini; Prime Minister of Lesotho, Pakalitha Mosisili; director at the MPG Society [MPG Gesellschaft] (DE), Robert Turner; and humanitarian and cancer research activist, Terry Fox.

    The University of Portsmouth (UK) is a public university in the city of Portsmouth, Hampshire, England. The history of the university dates back to 1908, when the Park building opened as a municipal college and public library. It was known as Portsmouth Polytechnic until 1992, when it was granted university status through the Further and Higher Education Act 1992. It is ranked among the top 100 universities under 50 in the world.

    We’re a New Breed of University
    We’re proud to be a breath of fresh air in the academic world – a place where everyone gets the support they need to achieve their best.
    We’re always discovering. Through the work we do, we engage with our community and world beyond our hometown. We don’t fit the mould, we break it.
    We educate and transform the lives of our students and the people around us. We recruit students for their promise and potential and for where they want to go.
    We stand out, not just in the UK but in the world, in innovation and research, with excellence in areas from cosmology and forensics to cyber security, epigenetics and brain tumour research.
    Just as the world keeps moving, so do we. We’re closely involved with our local community and we take our ideas out into the global marketplace. We partner with business, industry and government to help improve, navigate and set the course for a better future.
    Since the first day we opened our doors, our story has been about looking forward. We’re interested in the future, and here to help you shape it.
    The university offers a range of disciplines, from Pharmacy, International relations and politics, to Mechanical Engineering, Paleontology, Criminology, Criminal Justice, among others. The Guardian University Guide 2018 ranked its Sports Science number one in England, while Criminology, English, Social Work, Graphic Design and Fashion and Textiles courses are all in the top 10 across all universities in the UK. Furthermore, 89% of its research conducted in Physics, and 90% of its research in Allied Health Professions (e.g. Dentistry, Nursing and Pharmacy) have been rated as world-leading or internationally excellent in the most recent Research Excellence Framework (REF2014).

    The University is a member of the University Alliance and The Channel Islands Universities Consortium. Alumni include Tim Peake, Grayson Perry, Simon Armitage and Ben Fogle.
    Portsmouth was named the UK’s most affordable city for students in the Natwest Student Living Index 2016. On Friday 4 May 2018, the University of Portsmouth was revealed as the main shirt sponsor of Portsmouth F.C. for the 2018–19, 2019–20 and 2020–21 seasons.
