Tagged: WIRED

  • richardmitnick 12:39 pm on January 10, 2021 Permalink | Reply
    Tags: "A Newfound Source of Cellular Order in the Chemistry of Life", , , Cell biologists seem to find condensates everywhere they look., Cell signaling proteins can also exhibit phase separation behavior., , Condensates also helped to solve a different cellular mystery—not inside the nucleus but along the cell membrane., Enzymes need to find their substrates and signaling molecules need to find their receptors., , Inside cells droplets called condensates merge divide and dissolve. Their dance may regulate vital processes., Methyltransferases, Oligomer formation, , Ribosomes are cells’ protein-making factories and the number of them in a cell often limits its rate of growth., Some proteins spontaneously gather into transient assemblies called condensates., WIRED   

    From WIRED: “A Newfound Source of Cellular Order in the Chemistry of Life” 


    From WIRED

    01.10.2021
    Viviane Callier

    Inside cells, droplets called condensates merge, divide, and dissolve. Their dance may regulate vital processes.

    Credit: Ed Reschke/Getty Images.

    Imagine packing all the people in the world into the Great Salt Lake in Utah—all of us jammed shoulder to shoulder, yet also charging past one another at insanely high speeds. That gives you some idea of how densely crowded the 5 billion proteins in a typical cell are, said Anthony Hyman, a British cell biologist and a director of the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany.

    Somehow in that bustling cytoplasm, enzymes need to find their substrates, and signaling molecules need to find their receptors, so the cell can carry out the work of growing, dividing and surviving. If cells were sloshing bags of evenly mixed cytoplasm, that would be difficult to achieve. But they are not. Membrane-bounded organelles help to organize some of the contents, usefully compartmentalizing sets of materials and providing surfaces that enable important processes, such as the production of ATP, the biochemical fuel of cells. But, as scientists are still only beginning to appreciate, they are only one source of order.

    Recent experiments reveal that some proteins spontaneously gather into transient assemblies called condensates, in response to molecular forces that precisely balance transitions between the formation and dissolution of droplets inside the cell. Condensates, sometimes referred to as membraneless organelles, can sequester specific proteins from the rest of the cytoplasm, preventing unwanted biochemical reactions and greatly increasing the efficiency of useful ones. These discoveries are changing our fundamental understanding of how cells work.

    For instance, condensates may explain the speed of many cellular processes. “The key thing about a condensate—it’s not like a factory; it’s more like a flash mob. You turn on the radio, and everyone comes together, and then you turn it off and everyone disappears,” Hyman said.

    As such, the mechanism is “exquisitely regulatable,” said Gary Karpen, a cell biologist at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory. “You can form these things and dissolve them quite readily by just changing concentrations of molecules” or chemically modifying the proteins. This precision provides leverage for control over a host of other phenomena, including gene expression.

    The first hint of this mechanism arrived in the summer of 2008, when Hyman and his then-postdoctoral fellow Cliff Brangwynne (now a Howard Hughes Medical Institute investigator at Princeton University) were teaching at the famed Marine Biological Laboratory physiology course and studying the embryonic development of C. elegans roundworms. When they and their students observed that aggregates of RNA in the fertilized worm egg formed droplets that could split away or fuse with each other, Hyman and Brangwynne hypothesized that these “P granules” formed through phase separation in the cytoplasm, just like oil droplets in a vinaigrette.

    That proposal, published in 2009 in Science, didn’t get much attention at the time. But more papers on phase separation in cells trickled out around 2012, including a key experiment [Nature] in Michael Rosen’s lab at the University of Texas Southwestern Medical Center in Dallas, which showed that cell signaling proteins can also exhibit this phase separation behavior. By 2015, the stream of papers had turned into a torrent, and since then there’s been a veritable flood of research on biomolecular condensates, these liquid-like cell compartments with both elastic and viscous properties.

    Credit: Samuel Velasco/Quanta Magazine.

    Now cell biologists seem to find condensates everywhere they look: in the regulation of gene expression, the formation of mitotic spindles, the assembly of ribosomes, and many more cellular processes in the nucleus and cytoplasm. These condensates aren’t just novel but thought-provoking: The idea that their functions emerge from the collective behaviors of the molecules has become the central concept in condensate biology, and it contrasts sharply with the classic picture of pairs of biochemical agents and their targets fitting together like locks and keys. Researchers are still figuring out how to probe the functionality of these emergent properties; that will require the development of new techniques to measure and manipulate the viscosity and other properties of tiny droplets in a cell.

    What Drives Droplet Formation

    When biologists were first trying to explain what drives the phase separation phenomenon behind condensation in living cells, the structure of the proteins themselves offered a natural place to start. Well-folded proteins typically have a mix of hydrophilic and hydrophobic amino acids. The hydrophobic amino acids tend to bury themselves inside the protein folds, away from water molecules, while the hydrophilic amino acids get drawn to the surface. These hydrophobic and hydrophilic amino acids determine how the protein folds and holds its shape.

    But some protein chains have relatively few hydrophobic amino acids, so they have no reason to fold. Instead, these intrinsically disordered proteins (IDPs) fluctuate in form and engage in many weak multivalent interactions. IDP interactions were thought for years to be the best explanation for the fluidlike droplet behavior.

    Nucleoli appear as green dots in this stained tissue from a roundworm. Each cell, regardless of its size, has a single nucleolus. Recent research has shown that the size of nucleoli depends on the concentration of nucleolar proteins in a cell. Credit: Stephanie Weber.

    Last year, however, Brangwynne published a couple of papers highlighting that IDPs are important, but that “the field has gone too far in emphasizing them.” Most proteins involved in condensates, he says, have a common architecture with some structured domains and some disordered regions. To seed condensates, the molecules must have many weak multivalent interactions with others, and there’s another way to achieve that: oligomerization.

    Oligomerization occurs when proteins bind to each other and form larger complexes with repeating units, called oligomers. As the concentration of proteins increases, so do phase separation and oligomer formation. In a talk at the American Society for Cell Biology meeting in December, Brangwynne showed that as the concentration of oligomers increases, the strength of their interactions eventually overcomes the nucleation barrier, the energy required to create a surface separating the condensate from the rest of the cytoplasm. At that point, the proteins condense into a droplet.
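
    To make the nucleation-barrier idea concrete, here is a minimal classical-nucleation-theory sketch in Python. It illustrates the general physics described above, not Brangwynne's actual calculation, and the surface-tension and free-energy values (gamma, delta_mu) are made-up numbers.

```python
import numpy as np

# A minimal classical-nucleation-theory sketch (illustrative values, not the
# authors' measurements): forming a droplet of radius r costs surface energy
# (4*pi*r^2 * gamma) but gains bulk free energy (-4/3*pi*r^3 * delta_mu).
gamma = 1.0      # surface tension of the condensate interface (arbitrary units)
delta_mu = 0.5   # free-energy gain per unit volume; grows with protein/oligomer concentration

def free_energy(r):
    """Free-energy cost of forming a droplet of radius r."""
    return 4 * np.pi * r**2 * gamma - (4 / 3) * np.pi * r**3 * delta_mu

# The nucleation barrier sits at the critical radius r* = 2*gamma/delta_mu;
# droplets smaller than r* tend to dissolve, larger ones tend to grow.
r_star = 2 * gamma / delta_mu
barrier = free_energy(r_star)   # equals 16*pi*gamma**3 / (3*delta_mu**2)
print(f"critical radius: {r_star:.2f}, nucleation barrier: {barrier:.2f}")

# Raising the effective concentration (for example through oligomerization)
# raises delta_mu, which shrinks the barrier roughly as 1/delta_mu**2 --
# so droplets appear rather suddenly once interactions are strong enough.
```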

    In the past five years, researchers have taken big strides in understanding how this collective behavior of proteins arises from tiny physical and chemical forces. But they are still learning how (and whether) cells actually use this phenomenon to grow and divide.

    Condensates and Gene Expression

    Condensates seem to be involved in many aspects of cellular biology, but one area that has received particular attention is gene expression and the production of proteins.

    Ribosomes are cells’ protein-making factories, and the number of them in a cell often limits its rate of growth. Work by Brangwynne and others suggests that fast-growing cells might get some help from the biggest condensate in the nucleus: the nucleolus. The nucleolus facilitates the rapid transcription of ribosomal RNAs by gathering up all of the required transcription machinery, including the specific enzyme (RNA polymerase I) that makes them.

    A few years ago, Brangwynne and his then-postdoc Stephanie Weber, who is now an assistant professor at McGill University in Montreal, investigated how the size of the nucleolus (and therefore the speed of ribosomal RNA synthesis) was controlled in early C. elegans embryos. Because the mother worm contributes the same number of proteins to every embryo, small embryos have high concentrations of proteins and large embryos have low concentrations. And as the researchers reported in a 2015 Current Biology paper, the size of the nucleoli is concentration-dependent: Small cells have large nucleoli and large cells have small ones.

    Brangwynne and Weber found that by artificially changing cell size, they could raise and lower the protein concentration and the size of the resulting nucleoli. In fact, if they lowered the concentration below a critical threshold, there was no phase separation and no nucleolus. The researchers derived a mathematical model based on the physics of condensate formation that could exactly predict the size of nucleoli in cells.
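
    The concentration dependence that Weber and Brangwynne reported can be captured by a simple mass-balance argument: whatever protein exceeds the saturation concentration partitions into the dense droplet. The sketch below is an illustration of that logic with made-up numbers (c_sat, c_dense, the protein amount), not the published model or its measured parameters.

```python
# A minimal mass-balance sketch of concentration-dependent condensate size
# (illustrative numbers only, not the values measured by Weber and Brangwynne).
# If the total amount of nucleolar protein is fixed, everything above the
# saturation concentration c_sat partitions into a dense droplet of
# concentration c_dense, so the droplet volume follows a simple lever rule.

def nucleolus_volume(cell_volume, c_total, c_sat=1.0, c_dense=50.0):
    """Predicted condensate volume (same units as cell_volume).

    c_total, c_sat and c_dense are protein concentrations in arbitrary units.
    Below c_sat there is no phase separation and no nucleolus.
    """
    if c_total <= c_sat:
        return 0.0
    return cell_volume * (c_total - c_sat) / (c_dense - c_sat)

# Same amount of maternal protein deposited in a small vs. a large embryo cell:
protein_amount = 100.0
for cell_volume in (20.0, 50.0, 200.0):          # small, medium, large cells
    c_total = protein_amount / cell_volume        # smaller cell -> higher concentration
    v = nucleolus_volume(cell_volume, c_total)
    print(f"cell volume {cell_volume:6.1f}: nucleolus volume {v:5.2f}")
# Small cells end up with the largest nucleoli, and below the threshold
# concentration no nucleolus forms at all -- the trend reported in the paper.
```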

    Now Weber is looking for condensates in bacteria, which have smaller cells and no membrane-bound compartments. “Maybe this is an even more important mechanism for compartmentalization, because they [bacteria] don’t have an alternative,” she suggested.

    In this series of images, purified bacterial transcription factor in solution acts like a fluid by condensing into spherical droplets that then fuse together. Researchers are studying whether condensates might play a role in regulating bacterial cells as well as eukaryotic ones. Credit: John Wall.

    Last summer, Weber published a study [PNAS] showing that in cells of slow-growing E. coli bacteria, the RNA polymerase enzyme is uniformly distributed, but in fast-growing cells it clusters in droplets. The fast-growing cells may need to concentrate the polymerase around ribosomal genes to synthesize ribosomal RNA efficiently.

    “It looks like it [phase separation] is in all domains of life, and a universal mechanism that has then been able to specialize into a whole bunch of different functions,” Weber said.

    Although Weber and Brangwynne showed that active transcription occurs in one large condensate, the nucleolus, other condensates in the nucleus do the opposite. Large portions of the DNA in the nucleus are classified as heterochromatin because they are more compact and generally not expressed as proteins. In 2017, Karpen, Amy Strom (who is now a postdoc in Brangwynne’s lab) and their colleagues showed [Nature] that a certain protein will undergo phase separation and form droplets on the heterochromatin in Drosophila embryos. These droplets can fuse with each other, possibly providing a mechanism for compacting heterochromatin inside the nucleus.

    The results also suggested an exciting possible explanation for a long-standing mystery. Years ago, geneticists discovered that if they took an actively expressed gene and placed it right next to the heterochromatin, the gene would be silenced, as if the heterochromatin state was spreading. “This phenomenon of spreading was something that arose early on, and no one really understood it,” Karpen said.

    Later, researchers discovered enzymes involved in epigenetic regulation called methyltransferases, and they hypothesized that the methyltransferases would simply proceed from one histone to the next down the DNA strand from the heterochromatin into the adjacent euchromatin, a kind of “enzymatic, processive mechanism,” Karpen said. This has been the dominant model to explain the spreading phenomenon for the last 20 years. But Karpen thinks that the condensates that sit on the heterochromatin, like wet beads on a string, could be products of a different mechanism that accounts for the spreading of the silent heterochromatin state. “These are fundamentally different ways to think about how the biology works,” he said. He’s now working to test the hypothesis.


    In these fruit fly embryos, the chromosomes (pink) thicken and separate as the cells divide. A heterochromatin protein (green) then begins to condense into small droplets that grow and fuse, seemingly to help organize the genetic material for the cell’s use. Credit: Gary Karpen.

    The Formation of Filaments

    Condensates also helped to solve a different cellular mystery—not inside the nucleus but along the cell membrane. When a ligand binds to a receptor protein on a cell’s surface, it initiates a cascade of molecular changes and movements that convey a signal through the cytoplasm. But for that to happen, something first has to gather together all the dispersed players in the mechanism. Researchers now think phase separation might be a trick cells use to cluster the required signaling molecules at the membrane receptor, explains Lindsay Case, who trained in the Rosen lab as a postdoc and is starting her own lab at the Massachusetts Institute of Technology this month.

    Case notes that protein modifications that are commonly used for transducing signals, such as the addition of phosphoryl groups, change the valency of a protein—that is, its capacity to interact with other molecules. The modifications therefore also affect proteins’ propensity to form condensates. “If you think about what a cell is doing, it is actually regulating this parameter of valency,” Case said.

    Condensates may also play an important role in regulating and organizing the polymerization of small monomer subunits into long protein filaments. “Because you’re bringing molecules together for a longer period of time than you would outside the condensate, that favors polymerization,” Case said. In her postdoctoral research, she found that condensates enhance the polymerization of actin into filaments that help specialized kidney cells maintain their unusual shapes.

    The polymerization of tubulin is key to the formation of the mitotic spindles that help cells divide. Hyman became interested in understanding the formation of mitotic spindles during his graduate studies in the Laboratory of Molecular Biology at the University of Cambridge in the 1980s. There, he studied how the single-celled C. elegans embryo forms a mitotic spindle before splitting into two cells. Now he’s exploring the role of condensates in this process.

    Credit: Samuel Velasco/Quanta Magazine.

    In one in vitro experiment, Hyman and his team created droplets of the microtubule-binding tau protein and then added tubulin, which migrates into the tau droplets. When they added nucleotides to the drops to stimulate polymerization, the tubulin monomers assembled into beautiful microtubules. Hyman and his colleagues have proposed that phase separation could be a general way for cells to initiate the polymerization of microtubules and the formation of the mitotic spindle.

    The tau protein is also known for forming the protein aggregates that are the hallmarks of Alzheimer’s disease. In fact, many neurodegenerative conditions, such as amyotrophic lateral sclerosis (ALS) and Parkinson’s disease, involve the faulty formation of protein aggregates in cells.

    To investigate how these aggregates might form, Hyman’s team focused on a protein called FUS that has mutant forms associated with ALS. The FUS protein is normally found in the nucleus, but in stressed cells, the protein leaves the nucleus and moves into the cytoplasm, where it forms droplets. Hyman’s team found that when they made droplets of mutated FUS proteins in vitro, after only about eight hours the droplets solidified into what he calls “horrible aggregates.” The mutant proteins drove a liquid-to-solid phase transition far faster than the normal form of FUS did.

    Maybe the question isn’t why the aggregates form in disease, but why they don’t form in healthy cells. “One of the things I often ask in group meetings is: Why is the cell not scrambled eggs?” Hyman said in his talk at the cell biology meeting; the protein content of the cytoplasm is “so concentrated that it should just crash out of solution.”

    Two types of proteins (red, yellow) isolated from the nucleoli of frog eggs can spontaneously organize into condensate droplets. By altering the concentrations of each protein in the solution, researchers can make either or both of the types of condensates grow or disappear. Credit: Marina Feric & Clifford Brangwynne.

    A clue came when researchers in Hyman’s lab added the cellular fuel ATP to condensates of purified stress granule proteins and saw those condensates vanish. To investigate further, the researchers put egg whites in test tubes, added ATP to one tube and salt to the other, and then heated them. While the egg whites in the salt aggregated, the ones with ATP did not: The ATP was preventing protein aggregation at the concentrations found in living cells.

    But how? It remained a puzzle until Hyman fortuitously met a chemist when presenting a seminar in Bangalore. The chemist noted that in industrial processes, additives called hydrotropes are used to increase the solubility of hydrophobic molecules. Returning to his lab, Hyman and his colleagues found that ATP worked exceptionally well as a hydrotrope.

    Intriguingly, ATP is a very abundant metabolite in cells, with a typical concentration of 3-5 millimolar. Most enzymes that use ATP operate efficiently at concentrations three orders of magnitude lower, in the low micromolar range. Why, then, is ATP so concentrated inside cells, if such high levels aren’t needed to drive metabolic reactions?

    One candidate explanation, Hyman suggests, is that ATP doesn’t act as a hydrotrope below 3-5 millimolar. “One possibility is that in the origin of life, ATP might have evolved as a biological hydrotrope to keep biomolecules soluble in high concentration and was later co-opted as energy,” he said.

    It’s difficult to test that hypothesis experimentally, Hyman admits, because it is challenging to manipulate ATP’s hydrotropic properties without also affecting its energy function. But if the idea is correct, it might help to explain why protein aggregates commonly form in diseases associated with aging, because ATP production becomes less efficient with age.

    Other Uses for Droplets

    Protein aggregates are clearly bad in neurodegenerative diseases. But the transition from liquid to solid phases can be adaptive in other circumstances.

    Take primordial oocytes, cells in the ovaries that can lie dormant for decades before maturing into an egg. Each of these cells has a Balbiani body, a large condensate of amyloid protein found in the oocytes of organisms ranging from spiders to humans. The Balbiani body is believed to protect mitochondria during the oocyte’s dormant phase by clustering a majority of the mitochondria together [PubMed.gov] with long amyloid protein fibers. When the oocyte starts to mature into an egg, those amyloid fibers dissolve and the Balbiani body disappears, explains Elvan Böke, a cell and developmental biologist at the Center for Genomic Regulation in Barcelona. Böke is working to understand how these amyloid fibers assemble and dissolve, which could lead to new strategies for treating infertility or neurodegenerative diseases.

    Protein aggregates can also solve problems that require very quick physiological responses, like stopping bleeding after injury. For example, Mucor circinelloides is a fungal species with interconnected, pressurized networks of rootlike hyphae through which nutrients flow. Researchers at the Temasek Life Sciences Laboratory led by the evolutionary cell biologist Greg Jedd recently discovered [Current Biology] that when they injured the tip of a Mucor hypha, the protoplasm gushed out at first but almost instantaneously formed a gelatinous plug that stopped the bleeding.

    Jedd suspected that this response was mediated by a long polymer, probably a protein with a repetitive structure. The researchers identified two candidate proteins and found that, without them, injured fungi catastrophically bled out into a puddle of protoplasm.

    Jedd and his colleagues studied the structure of the two proteins, which they called gellin A and gellin B. The proteins had 10 repetitive domains, some of which had hydrophobic amino acids that could bind to cell membranes. The proteins also unfolded at forces similar to those they would experience when the protoplasm comes gushing out at the site of an injury. “There’s this massive acceleration in flow, and so we were thinking that maybe this is the trigger that is telling the gellin to change its state,” Jedd said. In other words, a physical cue triggers the gellin to transition from a liquid to a solid phase, and the resulting plug is irreversibly solidified.

    In contrast, in the fungal species Neurospora, the hyphae are divided into compartments, with pores that regulate the flow of water and nutrients. Jedd wanted to know how the pores were opened and closed. “What we discovered [PNAS] is some intrinsically disordered proteins that seem to be undergoing a condensation to aggregate at the pore, to provide a mechanism for closing it,” Jedd explained.

    The Neurospora proteins that were candidates for this job, Jedd’s team learned, had repeated mixed-charge domains that are also found in some mammalian proteins. When the researchers synthesized proteins of varying compositions but with similar lengths and charge patterning and introduced them into mammalian cells, the proteins were incorporated into nuclear speckles, condensates in the mammalian cell nucleus that help to regulate gene expression. Jedd’s team and colleagues led by Rohit Pappu of Washington University in St. Louis reported the finding in a 2020 Molecular Cell paper.

    The fungal and mammalian kingdoms seem to have arrived independently at a strategy of using disordered sequences in mechanisms based on condensation, Jedd said, “but they’re using it for entirely different reasons, in different compartments.”

    Reconsidering Old Explanations

    Phase separation has turned out to be ubiquitous, and researchers have generated lots of ideas about how this phenomenon could be involved in various cell functions. “There’s lots of exciting possibilities that [phase separation] raises, so that’s what I think drives … interest in the field,” Karpen said. But he also cautions that while it is relatively easy to show that a molecule undergoes phase separation in a test tube, demonstrating that phase separation has a function in the cell is much more challenging. “We still don’t know so much,” he said.

    Brangwynne agreed. “If you’re really honest, it’s still pretty much at a hand-wavy stage, the whole field,” he said. “It’s very early days for understanding how this all works. The fact that it’s hand-wavy doesn’t mean that liquid phase separation isn’t the key driving force. In fact, I think it is. But how does it really work?”

    The uncertainties do not discourage Hyman, either. “What phase separation is allowing everyone to do is go back and look at old problems which stalled out and think: Can we now think about this a different way?” he said. “All the structural biology that was done has just been brilliant—but many problems stalled out. They couldn’t actually explain things. And that’s what phase separation has allowed, is for everyone to think again about these problems.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 4:05 pm on January 3, 2021 Permalink | Reply
    Tags: "The Milky Way Gets a New Origin Story", , , , , , WIRED   

    From WIRED: “The Milky Way Gets a New Origin Story” 


    From WIRED

    01.03.2021
    Charlie Wood



    A large Milky Way-like galaxy collides with a smaller dwarf galaxy in this digital simulation. Astronomers believe that at least one major collision like this happened early in the Milky Way’s development. Credit: Video: Koppelman, Villalobos & Helmi.

    When the Khoisan hunter-gatherers of sub-Saharan Africa gazed upon the meandering trail of stars and dust that split the night sky, they saw the embers of a campfire. Polynesian sailors perceived a cloud-eating shark. The ancient Greeks saw a stream of milk, gala, which would eventually give rise to the modern term “galaxy.”

    In the 20th century, astronomers discovered that our silver river is just one piece of a vast island of stars, and they penned their own galactic origin story. In the simplest telling, it held that our Milky Way galaxy came together nearly 14 billion years ago when enormous clouds of gas and dust coalesced under the force of gravity. Over time, two structures emerged: first, a vast spherical “halo,” and later, a dense, bright disk. Billions of years after that, our own solar system spun into being inside this disk, so that when we look out at night, we see spilt milk—an edge-on view of the disk splashed across the sky.

    Yet over the past two years, researchers have rewritten nearly every major chapter of the galaxy’s history. What happened? They got better data.

    On April 25, 2018, a European spacecraft by the name of Gaia released a staggering quantity of information about the sky.

    ESA (EU)/Gaia satellite.

    Critically, Gaia’s years-long data set described the detailed motions of roughly 1 billion stars. Previous surveys had mapped the movement of just thousands. The data brought a previously static swath of the galaxy to life. “Gaia started a new revolution,” said Federico Sestito, an astronomer at the Strasbourg Astronomical Observatory in France.


    The river of stars in the southern sky. ESA/GAIA (Gaia DR2 skymap)

    Data from more than 1.8 billion stars have been used to create this map of the entire sky. It shows the total brightness and color of stars observed by ESA’s Gaia satellite and released as part of Gaia’s Early Data Release 3.

    Astronomers raced to download the dynamic star map, and a flurry of discoveries followed. They found that parts of the disk, for example, appeared impossibly ancient. They also found evidence of epic collisions that shaped the Milky Way’s violent youth, as well as new signs that the galaxy continues to churn in an unexpected way.


    The Gaia satellite has revolutionized our understanding of the Milky Way since its launch in December 2013. Credit: Video from ESA/ATG Media Lab.

    Taken together, these results have spun a new story about our galaxy’s turbulent past and its ever-evolving future. “Our picture of the Milky Way has changed so quickly,” said Michael Petersen, an astronomer at the University of Edinburgh. “The theme is that the Milky Way is not a static object. Things are changing rapidly everywhere.”

    The Earliest Stars

    To peer back to the galaxy’s earliest days, astronomers seek stars that were around back then. These stars were fashioned only from hydrogen and helium, the cosmos’s rawest materials. Fortunately, the smaller stars from this early stock are also slow to burn, so many are still shining.

    After decades of surveys, researchers had assembled a catalog of 42 such ancients, known as ultra metal-poor stars (to astronomers, any atom bulkier than helium qualifies as metallic). According to the standard story of the Milky Way, these stars should be swarming throughout the halo, the first part of the galaxy to form. By contrast, stars in the disk—which was thought to have taken perhaps an additional billion years to spin itself flat—should be contaminated with heavier elements such as carbon and oxygen.

    In late 2017, Sestito set out to study how this metal-poor swarm moves by writing code to analyze the upcoming Gaia results. Perhaps their spherical paths could offer some clues as to how the halo came to be, he thought.

    In the days following Gaia’s data release, he extracted the 42 ancient stars from the full data set, then tracked their motions. He found that most were streaming through the halo, as predicted. But some—roughly 1 in 4—weren’t. Rather, they appeared stuck in the disk [MNRAS], the Milky Way’s youngest region. “What the hell,” Sestito wondered, though he used a different four-letter term. “What’s going on?”
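
    As a rough illustration of that kind of analysis, the sketch below classifies a single star's orbit as disk-like or halo-like from Gaia-style astrometry using astropy. It is not Sestito's actual pipeline: the velocity and height cuts are illustrative assumptions, and the example star's numbers are invented.

```python
# A rough sketch (not Sestito's actual pipeline) of sorting stars into
# disk-like vs. halo-like orbits from Gaia astrometry, assuming each star has
# a parallax, proper motions, and a radial velocity. Thresholds are
# illustrative, not published cuts.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

def classify_orbit(ra_deg, dec_deg, parallax_mas, pmra, pmdec, rv_kms):
    """Return 'disk-like' or 'halo-like' from simple kinematic cuts."""
    star = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg,
                    distance=(1000.0 / parallax_mas) * u.pc,
                    pm_ra_cosdec=pmra * u.mas / u.yr,
                    pm_dec=pmdec * u.mas / u.yr,
                    radial_velocity=rv_kms * u.km / u.s)
    g = star.transform_to(Galactocentric())
    x, y = g.x.to_value(u.kpc), g.y.to_value(u.kpc)
    r = np.hypot(x, y)
    # Rotation velocity from the Cartesian components; the sign is chosen so
    # that prograde disk rotation comes out positive in astropy's convention.
    v_phi = -(x * g.v_y.to_value(u.km / u.s) - y * g.v_x.to_value(u.km / u.s)) / r
    z = abs(g.z.to_value(u.kpc))
    # Disk stars circle the Galactic center quickly and stay near the plane.
    return "disk-like" if (v_phi > 150.0 and z < 3.0) else "halo-like"

# Example with made-up numbers for a single star:
print(classify_orbit(ra_deg=120.0, dec_deg=-30.0, parallax_mas=2.0,
                     pmra=-5.0, pmdec=3.0, rv_kms=40.0))
```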

    Follow-up research confirmed that the stars really are long-term residents of the disk, and not just tourists passing through. From two recent surveys, Sestito and colleagues amassed a library of roughly 5,000 metal-poor stars. A few hundred of them appear to be permanent denizens of the disk [MNRAS]. Another group sifted through about 500 stars identified by another survey, finding that about 1 in 10 of these stars lie flat in circular, sunlike orbits [MNRAS]. And a third research group found stars of various metallicities (and therefore various ages) moving in flat disk orbits. “This is something completely new,” said lead author Paola Di Matteo, an astronomer at the Paris Observatory.

    How did these anachronisms get there? Sestito speculated that perhaps pockets of pristine gas managed to dodge all the metals expelled from supernovas for eons, then collapsed to form stars that looked deceptively old. Or the disk may have started taking shape when the halo did, nearly 1 billion years ahead of schedule.

    To see which was more probable, he connected with Tobias Buck, a researcher at the Leibniz Institute for Astrophysics in Potsdam, Germany, who specializes in crafting digital galaxy simulations. Past efforts had generally produced halos first and disks second, as expected. But these were relatively low-resolution efforts.


    Galaxy simulation
    In these digital simulations, a Milky Way–like galaxy forms and evolves over 13.8 billion years — from the early universe to the present day. The leftmost column shows the distribution of invisible dark matter; the center column the temperature of gas (where blue is cold and red is hot); and the right column the density of stars. Each row highlights a different size scale: The top row is a zoomed-in look at the galactic disk; the center row a mid-range view of the galactic halo; and the bottom row a zoomed-out view of the environment around the galaxy.

    Buck increased the crispness of his simulations by about a factor of 10. At that resolution, each run demanded intensive computational resources. Even though he had access to Germany’s Leibniz Supercomputing Center, a single simulation required three months of computing time. He repeated the exercise six times.

    Of those six, five produced Milky Way doppelgängers. Two of those featured substantial numbers of metal-poor disk stars.

    How did those ancient stars get into the disk? Simply put, they were stellar immigrants. Some of them were born in clouds that predated the Milky Way. Then the clouds just happened to deposit some of their stars into orbits that would eventually form part of the galactic disk. Other stars came from small “dwarf” galaxies that slammed into the Milky Way and aligned with an emerging disk.

    The results, which the group published in November [MNRAS], suggest that the classic galaxy formation models were incomplete. Gas clouds do collapse into spherical halos, as expected. But stars arriving at just the right angles can kick-start a disk at the same time. “[Theorists] weren’t wrong,” Buck said. “They were missing part of the picture.”

    A Violent Youth

    The complications don’t end there. With Gaia, astronomers have found direct evidence of cataclysmic collisions. Astronomers assumed that the Milky Way had a hectic youth, but Helmer Koppelman, an astronomer now at the Institute for Advanced Study in Princeton, New Jersey, used the Gaia data to help pinpoint specific debris from one of the largest mergers.

    Gaia’s 2018 data release fell on a Wednesday, and the mad rush to download the catalog froze its website, Koppelman recalled. He processed the data on Thursday, and by Friday he knew he was on to something big. In every direction, he saw a huge number of halo stars ping-ponging back and forth in the center of the Milky Way in the same peculiar way—a clue that they had come from a single dwarf galaxy. Koppelman and his colleagues had a brief paper [The Astrophysical Journal Letters] ready by Sunday and followed it up with a more detailed analysis that June [Nature].

    The galactic wreckage was everywhere. Perhaps half of all the stars in the inner 60,000 light-years of the halo (which extends hundreds of thousands of light-years in every direction) came from this lone collision, which may have boosted the young Milky Way’s mass by as much as 10 percent. “This is a game changer for me,” Koppelman said. “I expected many different smaller objects.”


    A simulation shows the formation and evolution of a Milky Way–like galaxy over about 10 billion years. Many smaller dwarf galaxies accrete onto the main galaxy, often becoming a part of it. Video credit: Tobias Buck.

    The group named the incoming galaxy Gaia-Enceladus, after the Greek goddess Gaia—one of the primordial deities—and her giant son Enceladus. Another team at the University of Cambridge independently discovered the galaxy around the same time [MNRAS], dubbing it the Sausage for its appearance in certain orbital charts.

    When the Milky Way and Gaia-Enceladus collided, perhaps 10 billion years ago, the Milky Way’s delicate disk may have suffered widespread damage. Astronomers debate why our galactic disk seems to have two parts: a thin disk, and a thicker one where stars bungee up and down while orbiting the galactic center. Research led by Di Matteo [Astronomy & Astrophysics] now suggests that Gaia-Enceladus exploded much of the disk, puffing it up during the collision. “The first ancient disk formed pretty fast, and then we think Gaia-Enceladus kind of destroyed it,” Koppelman said.

    Hints of additional mergers have been spotted in bundles of stars known as globular clusters. Diederik Kruijssen, an astronomer at Heidelberg University in Germany, used galaxy simulations to train a neural network to scrutinize globular clusters. He had it study their ages, makeup, and orbits. From that data, the neural network could reconstruct the collisions that assembled the galaxies. Then he set it loose on data from the real Milky Way. The program reconstructed known events such as Gaia-Enceladus, as well as an older, more significant merger that the group has dubbed Kraken.
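
    The sketch below is a toy version of that idea, not Kruijssen's actual network, training data, or features: a small neural network is fit to randomly generated "simulated" galaxies, each summarized by a few invented globular-cluster statistics, and used to predict a merger count.

```python
# A toy illustration of the approach (not Kruijssen's model or features):
# train a small neural network on simulated galaxies, each summarized by a
# few globular-cluster statistics, to predict how many major mergers built it.
# All data here are randomly generated stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_sim = 500
# Fake per-galaxy summaries: [median cluster age (Gyr), median metallicity,
# spread in orbital energy]. In a real analysis these come from simulations.
X = np.column_stack([
    rng.uniform(8, 13, n_sim),
    rng.uniform(-2.5, 0.0, n_sim),
    rng.uniform(0.1, 1.0, n_sim),
])
# Fake "number of mergers" loosely tied to the features, plus noise.
y = 2 + 3 * X[:, 2] - 0.3 * (X[:, 0] - 10) + rng.normal(0, 0.3, n_sim)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", model.score(X[400:], y[400:]))

# Applied to the real Milky Way, the same trick means feeding the trained
# model the observed globular-cluster summaries and reading off the inferred
# merger history.
```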

    In August, Kruijssen’s group published a merger lineage of the Milky Way and the dwarf galaxies that formed it [MNRAS]. They also predicted the existence of 10 additional past collisions that they’re hoping will be confirmed with independent observations. “We haven’t found the other 10 yet,” Kruijssen said, “but we will.”

    All these mergers have led some astronomers to suggest [The Astrophysical Journal] that the halo may be made almost exclusively of immigrant stars. Models from the 1960s and ’70s predicted that most Milky Way halo stars should have formed in place. But as more and more stars have been identified as galactic interlopers, astronomers may not need to assume that many, if any, stars are natives, said Di Matteo.

    A Still-Growing Galaxy

    The Milky Way has enjoyed a relatively quiet history in recent eons, but newcomers continue to stream in. Stargazers in the Southern Hemisphere can spot with the naked eye a pair of dwarf galaxies called the Large and Small Magellanic Clouds. Astronomers long believed the pair to be our steadfast orbiting companions, like moons of the Milky Way.

    Then a series of Hubble Space Telescope observations [The Astrophysical Journal] between 2006 and 2013 found that they were more like incoming meteorites. Nitya Kallivayalil, an astronomer at the University of Virginia, clocked the clouds as coming in hot at about 330 kilometers per second—nearly twice as fast as had been predicted.

    When a team led by Jorge Peñarrubia, an astronomer at the Royal Observatory of Edinburgh, crunched the numbers a few years later, they concluded that the speedy clouds must be extremely hefty—perhaps 10 times bulkier than previously thought.

    “It’s been surprise after surprise,” Peñarrubia said.

    Various groups have predicted that the unexpectedly beefy dwarfs might be dragging parts of the Milky Way around, and this year Peñarrubia teamed up with Petersen to find proof.

    The problem with looking for galaxy-wide motion is that the Milky Way is a raging blizzard of stars, with astronomers looking outward from one of the snowflakes. So Peñarrubia and Petersen spent most of lockdown figuring out how to neutralize the motions of the Earth and the sun, and how to average out the motion of halo stars so that the halo’s outer fringe could serve as a stationary backdrop.
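
    Schematically, the averaging step works like the sketch below, which uses made-up numbers rather than anything from the Nature Astronomy analysis: once velocities are expressed in a frame that already has the Earth's and Sun's own motion removed, the mean velocity of many distant halo stars should vanish, so any residual mean is read as our own drift, with the sign flipped.

```python
# A schematic of the averaging idea (made-up numbers, not the published
# analysis): halo stars have large random motions, but those average out over
# many stars. A small common "apparent" drift left over in the mean is then
# interpreted as the disk (and us with it) sliding against the outer halo.
import numpy as np

rng = np.random.default_rng(1)
n_stars = 10_000
# Mock outer-halo velocities (km/s): ~100 km/s random motion per axis, plus a
# shared apparent drift of -30 km/s in y that mirrors the disk's own lurch.
true_disk_drift = np.array([0.0, 30.0, 0.0])
v_halo = rng.normal(0.0, 100.0, size=(n_stars, 3)) - true_disk_drift

mean_drift = v_halo.mean(axis=0)            # random motions average away
estimated_disk_motion = -mean_drift         # flip sign: we move, not the halo
print("estimated disk motion (km/s):", np.round(estimated_disk_motion, 1))
```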

    When they calibrated the data in this way, they found that the Earth, the sun, and the rest of the disk in which they sit are lurching in one direction—not toward the Large Magellanic Cloud’s current position, but toward its position around a billion years ago (the galaxy is a lumbering beast with slow reflexes, Petersen explained). They recently detailed their findings in Nature Astronomy.

    The sliding of the disk against the halo undermines a fundamental assumption: that the Milky Way is an object in balance. It may spin and slip through space, but most astronomers assumed that after billions of years, the mature disk and the halo had settled into a stable configuration.

    Peñarrubia and Petersen’s analysis proves that assumption wrong. Even after 14 billion years, mergers continue to sculpt the overall shape of the galaxy. This realization is just the latest change in how we understand the great stream of milk across the sky.

    “Everything we thought we knew about the future and the history of the Milky Way,” said Petersen, “we need a new model to describe that.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:52 pm on December 31, 2020 Permalink | Reply
    Tags: "Timnit Gebru’s Exit From Google Exposes a Crisis in AI", Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions., Corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital., First and foremost tech workers need a union., Opinion-"Timnit Gebru’s Exit From Google Exposes a Crisis in AI", the civility politics that yoked the young effort to construct the necessary guardrails around AI have been torn apart., The field is dominated by an elite primarily white male workforce and it is controlled and funded primarily by large industry players—Microsoft; Facebook; Amazon; IBM; and yes- Google., The situation has made clear that the field needs to change., We also need protections and funding for critical research outside of the corporate environment that’s free of corporate influence., WIRED, With the proliferation of AI into domains such as health care; criminal justice and education researchers and advocates are raising urgent concerns.   

    From WIRED: Opinion-“Timnit Gebru’s Exit From Google Exposes a Crisis in AI” 


    From WIRED

    12.31.2020
    Alex Hanna
    Meredith Whittaker

    The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.

    Timnit Gebru, a former AI researcher at Google, is one of the few Black women in the field. Credit: Noam Galai/Getty Images.

    This year has held many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved “artificial general intelligence,” while others lauded Alphabet subsidiary DeepMind’s protein-folding algorithm—AlphaFold—and its capacity to “transform biology.” While the basis of such claims is thinner than the effusive headlines, this hasn’t done much to dampen enthusiasm across the industry, whose profits and prestige are dependent on AI’s proliferation.

    It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

    Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is controlled and funded primarily by large industry players—Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the civility politics that yoked the young effort to construct the necessary guardrails around AI have been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this situation has also made clear that—however sincere a company like Google’s promises may seem—corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.

    This should concern us all. With the proliferation of AI into domains such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, at the same time that they are embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those designing and using them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.

    The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor—those who’ve borne the brunt of structural discrimination. Here we have a clear racialized divide between those benefiting—the corporations and the primarily white male researchers and developers—and those most likely to be harmed.

    Take facial-recognition technologies, for instance, which have been shown to “recognize” darker skinned people less frequently than those with lighter skin. This alone is alarming. But these racialized “errors” aren’t the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while cities that have had success in banning and pushing back against facial recognition’s use are predominately white.

    Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the overhyped claims made by industry is significantly hampered. Google’s treatment of Gebru makes increasingly clear where the company’s priorities seem to lie when critical work pushes back on its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their damage.

    Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at tech companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means that the two groups look awfully similar—AI research in academia suffers from the same pernicious racial and gender homogeneity issues as its corporate counterparts. Moreover, the top computer science departments accept copious amounts of Big Tech research funding. We have only to look to Big Tobacco and Big Oil for troubling templates that expose just how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.

    Gebru’s firing suggests this dynamic is at work once again. Powerful companies like Google have the ability to co-opt, minimize, or silence criticisms of their own large-scale AI systems—systems that are at the core of their profit motives. Indeed, according to a recent Reuters report, Google leadership went as far as to instruct researchers to “strike a positive tone” in work that examined technologies and issues sensitive to Google’s bottom line. Gebru’s firing also highlights the danger the rest of the public faces if we allow an elite, homogenous research cohort, made up of people who are unlikely to experience the negative effects of AI, to drive and shape the research on it from within corporate environments. The handful of people who are benefiting from AI’s proliferation are shaping the academic and public understanding of these systems, while those most likely to be harmed are shut out of knowledge creation and influence. This inequity follows predictable racial, gender, and class lines.

    As the dust begins to settle in the wake of Gebru’s firing, one question resounds: What do we do to contest these incentives, and to continue critical work on AI in solidarity with the people most at risk of harm? To that question, we have a few, preliminary answers.

    First and foremost, tech workers need a union. Organized workers are a key lever for change and accountability, and one of the few forces that has been shown capable of pushing back against large firms. This is especially true in tech, given that many workers have sought-after expertise and are not easily replaceable, giving them significant labor power. Such organizations can act as a check on retaliation and discrimination, and can be a force pushing back against morally reprehensible uses of tech. Just look at Amazon workers’ fight against climate change or Google employees’ resistance to military uses of AI, which changed company policies and demonstrated the power of self-organized tech workers. To be effective here, such an organization must be grounded in anti-racism and cross-class solidarity, taking a broad view of who counts as a tech worker, and working to prioritize the protection and elevation of BIPOC tech workers across the board. It should also use its collective muscle to push back on tech that hurts historically marginalized people beyond Big Tech’s boundaries, and to align with external advocates and organizers to ensure this.

    We also need protections and funding for critical research outside of the corporate environment that’s free of corporate influence. Not every company has a Timnit Gebru prepared to push back against reported research censorship. Researchers outside of corporate environments must be guaranteed greater access to technologies currently hidden behind claims of corporate secrecy, such as access to training data sets, and policies and procedures related to data annotation and content moderation. Such spaces for protected, critical research should also prioritize supporting BIPOC, women, and other historically excluded researchers and perspectives, recognizing that racial and gender homogeneity in the field contribute to AI’s harms. This endeavor would need significant funding, which could be achieved through a tax levied on these companies.

    Finally, the AI field desperately needs regulation. Local, state, and federal governments must step in and pass legislation that protects privacy and ensures meaningful consent around data collection and the use of AI; increases protections for workers, including whistle-blower protections and measures to better protect BIPOC workers and others subject to discrimination; and ensures that those most vulnerable to the risks of AI systems can contest—and refuse—their use.

    This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn’t just important for its own sake, it provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:52 am on December 28, 2020 Permalink | Reply
    Tags: "2020 Was a Breakout Year for Crispr", , , , For thousands of years humans have been modifying the DNA by breeding animals to produce the most desirable traits. With Crispr one no longer has to wait generations to make significant genetic change, , , On December 21 the US Department of Agriculture which regulates changes made to crops announced a proposal to take charge of overseeing gene editing in animals bred for food as well., Scientists in Seattle and Boston published a study showing they had discovered a way to harness a strange enzyme in biofilm-forming bacteria to make precise changes to mitochondrial DNA., The FDA authorized two Crispr-based tests both for detecting SARS-CoV-2., The Nobel Prize, The pandemic sped up the need to develop commercial diagnostics without the need for expensive lab instruments., The US Food and Drug Administration decided in 2017 to regulate changes made by Crispr and other molecular tools as animal drugs., WIRED   

    From WIRED: “2020 Was a Breakout Year for Crispr” 


    From WIRED

    12.28.2020
    Megan Molteni

    Between glimpses of a medical cure and winning science’s shiniest prize, this proved to be the gene-editing technology’s biggest year yet.

    Credit: Tracy J. Lee; Elena Lacey; Getty Images.

    It will be difficult to remember 2020 as anything other than the year Covid-19 drew the world to a socially distanced standstill. But while thousands of life scientists pivoted to trying to understand how the novel coronavirus wreaks havoc on the human body, and others transformed their labs into pop-up testing facilities, the field of Crispr gene editing nevertheless persisted. In fact, it triumphed. Here are five of the (mostly coronavirus-free) breakthroughs in the Crisprsphere that you might have missed in 2020.

    1. Crispr takes on blood diseases

    Last summer, doctors in Tennessee injected Victoria Gray—a 34-year-old sickle cell disease patient—with billions of her own stem cells that scientists in Massachusetts had reprogrammed with Crispr to produce healthy blood cells. The hours-long infusion made her the first American with a heritable disease to be treated with the experimental gene-editing technology. And it appears to be working.

    This July, Gray celebrated a year of being symptom-free. In December, a team that includes researchers from the two companies that developed the treatment—CRISPR Therapeutics and Vertex Pharmaceuticals—published promising results from a clinical trial, which is also treating patients in Germany who suffer from a related disease called β-thalassaemia. In both groups of patients, the treatment seems to be safe, and it so far has eliminated the need for regular blood transfusions. It’s still too soon to say how long the effects will last, so don’t call it a cure just yet. But the consequences could be huge. Sickle cell disease and β-thalassaemia are among the most common genetic disorders caused by mutations to a single gene, affecting millions of people worldwide.

    2. The stable of gene-edited animals grows

    For thousands of years, humans have been modifying the DNA of our closest furry and feathered friends by breeding animals to produce the most desirable traits. With Crispr, one no longer has to wait generations to make significant genetic changes. This year, researchers welcomed a raft of world-first barnyard creatures. Among them are pandemic-proof pigs, whose cells have been edited to remove the molecular lock-and-key mechanism that a variety of respiratory viruses use to infect them, and chickens Crispr’d to make them impervious to a common bird disease caused by the avian leukosis virus.

    In April, scientists at UC Davis birthed Cosmo, a black bull calf whose genome had been altered so that 75 percent of his future offspring—rather than the natural 50 percent—will be male. He’s the first Crispr knock-in bovine, and proof that one day making all-male beef herds might be possible. (Female beef cattle convert feed to protein less efficiently, so in theory, the approach could mean fewer animals on the land, making it a win both for ranchers and the environment.)
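
    The 75 percent figure follows from simple Mendelian bookkeeping if, as the UC Davis team has described for Cosmo, a copy of the male-determining SRY gene was knocked into one copy of an autosome (chromosome 17): an offspring develops as male if it inherits either the sire's Y chromosome or the edited autosome. A quick back-of-the-envelope check of that assumption:

```python
# Why ~75 percent male offspring? A back-of-the-envelope check, assuming (as
# described for Cosmo) that a copy of the male-determining SRY gene sits on
# one copy of an autosome, so a calf develops as male if it inherits either
# the sire's Y chromosome or the edited autosome.
p_y = 0.5                # chance a calf gets Y (rather than X) from the sire
p_edited_autosome = 0.5  # chance it gets the edited copy of the autosome
p_male = p_y + (1 - p_y) * p_edited_autosome
print(p_male)            # 0.75, versus the usual 0.5
```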

    For years, the future of gene editing in agricultural animals has been uncertain, since the US Food and Drug Administration decided in 2017 to regulate changes made by Crispr and other molecular tools as animal drugs. But on December 21, the US Department of Agriculture, which (much more leniently) regulates similar changes made to crops, announced a proposal to take charge of overseeing gene editing in animals bred for food as well. The move, if it goes through, could make it much easier for breeders to bring Crispr’d cows, chickens, pigs, and sheep to market in the US.

    3. Disease detectors hit the market

    For the past few years, startups spun out of Crispr patent rivals UC Berkeley and the Broad Institute have been sprinting to develop commercial diagnostics without the need for expensive lab instruments. The idea is to use Crispr’s programmable gene-seeking capabilities to pick up bits of foreign genetic material—from a virus, bacteria, or fungus—circulating in a sick person’s bodily fluids, and deliver those results via something that looks like a pregnancy test. Tests made with disposable paper strips are cheap and can go into the field or into people’s homes, greatly expanding their reach.
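
    At its core, the "programmable gene-seeking" step amounts to looking for a short guide sequence in the sample's genetic material. The toy sketch below illustrates only that matching idea with invented sequences; it is not Sherlock's or Mammoth's actual assay, which relies on Cas enzymes whose cleavage of a reporter molecule produces the visible readout.

```python
# A toy picture of the "programmable gene-seeking" step (not any company's
# actual assay): a guide sequence is just a short string we look for in the
# sample's genetic material, on either strand.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def guide_detects(guide: str, sample_sequence: str) -> bool:
    """True if the guide matches the sample on either strand."""
    return guide in sample_sequence or reverse_complement(guide) in sample_sequence

# Made-up sequences for illustration only.
viral_fragment = "ATGCGTACCTTAGGACGTTACGGATCCA"
guide = "CCTTAGGACGTTACG"   # in practice, designed against the real genome
print(guide_detects(guide, viral_fragment))   # True -> signal on the test strip
```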

    The pandemic sped up the need for such tests. This summer, the FDA authorized two Crispr-based tests, both for detecting SARS-CoV-2. Boston-based Sherlock Biosciences received the green light for its test in May, and the Bay Area’s Mammoth Biosciences followed in August. It marked the first time the FDA has allowed a Crispr-based diagnostic tool to be used on patients. The tests still need to be analyzed in a lab, but they are faster than the standard method for detecting SARS-CoV-2, called PCR, which typically takes four to eight hours to run. The new tests return results in about one hour. Both companies are currently working toward versions of the test that can be conducted at home.

    “Before the pandemic, there was a lot of general excitement about the potential of next-generation diagnostics to decentralize the testing industry, but there was still a lot of inertia,” Mammoth Biosciences CEO Trevor Martin told WIRED this summer. The coronavirus, he says, shocked the industry out of it. “Things that would have taken years are now things that must be done in months.”

    4. Mitochondria join the genome-editing party

    Crispr can make precise cuts to the genomes of pretty much any organism on the planet. But mitochondria—cells’ energy-producing nanofactories—have their own DNA separate from the rest of the genome. Until recently, this DNA-targeting tool couldn’t manage to make changes to the genetic code coiled inside them.

    And unlike chromosomes, which you inherit from both parents, mitochondrial DNA comes only from your maternal side. Mutations in mitochondrial DNA can cripple the cell’s ability to generate energy and lead to debilitating, often fatal conditions that affect about one in 6,500 people worldwide. Up until now, scientists have tried preventing mitochondrial disease by swapping out one egg’s mitochondria for another, a procedure commonly known as three-person IVF, which is currently banned in the US.

    But this summer, scientists in Seattle and Boston published a study [Nature] showing they had discovered a way to harness a strange enzyme found in biofilm-forming bacteria to make precise changes to mitochondrial DNA.

    3
    A mitochondrion, as seen through a transmission electron microscope. The membranes (in pink) prevent CRISPR–Cas9 genome editing. Credit: CNRI/SPL.

    The work was led by David Liu, whose evolution-hacking lab at the Broad Institute and Harvard University has churned out a series of groundbreaking DNA-altering tools over the last few years. The new system has not yet been tested in humans, and clinical trials are still a long way off, but the discovery opens up another promising avenue for treating mitochondrial disease.

    5. Crispr’s Nobel victory

    Last but certainly not least, in October, the 2020 Nobel Prize in Chemistry was awarded to Emmanuelle Charpentier and Jennifer Doudna for Crispr genome editing. It was both a stunning choice (as a DNA-altering tool, Crispr has been around for only eight years) and a completely expected one. Crispr has revolutionized biological research since its arrival in 2012; scientists have since published more than 300,000 studies using the tool to manipulate the genomes of organisms across every kingdom, including mosquitoes, tomatoes, King Charles Spaniels, and even humans. It’s cheap, fast, and easy enough for almost anyone to use. Today, scientists can order custom-made Crispr components with the click of a button.

    The win also broke barriers of another sort. Doudna and Charpentier are the first women to win a Nobel Prize in the sciences together. And there had been much speculation about who the prize would actually go to, since credit for the creation story of Crispr is still a matter of hot debate (and litigation). “Many women think that, no matter what they do, their work will never be recognized the way it would be if they were a man,” said Doudna upon learning the news. “And I think [this prize] refutes that. It makes a strong statement that women can do science, women can do chemistry, and that great science is recognized and honored.” In other words, she continued, “women rock.” We couldn’t agree more.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:56 am on December 26, 2020 Permalink | Reply
    Tags: "Can the Paris Agreement Still Avert Climate Chaos?", , , , WIRED   

    From WIRED: “Can the Paris Agreement Still Avert Climate Chaos?” 


    From WIRED

    12.26.2020
    Fiona Harvey

    As the Trump era wanes, there is a sense of optimism about what the accord could achieve, five years in—but only if countries meet their targets.

    1
    Credit: Daniel Grizelj/Getty Images.

    No one who was in the hall that winter evening in a gloomy conference center [The Guardian] on the outskirts of the French capital will ever forget it. Tension had been building throughout the afternoon, as after two weeks of fraught talks the expected resolution was delayed and then delayed yet again. Rumors swirled—had the French got it wrong? Was another climate failure approaching, the latest botched attempt at solving the world’s global heating crisis?

    Finally, as the mood in the hall was growing twitchy, the UN security guards cleared the platform, and the top officials of the landmark Paris climate talks took to the podium. For two weeks, 196 countries had huddled in countless meetings, wrangling over dense pages of text, scrutinizing every semicolon. And they had finally reached agreement. Laurent Fabius, the French foreign minister in charge of the grueling talks, looking exhausted but delighted, reached for his gavel and brought it down with a resounding crack. The Paris agreement was approved at last.

    Climate economist Nicholas Stern found himself hugging Xie Zhenhua, the normally reserved Chinese minister, while whoops and shouts echoed round the hall. “I felt that the Paris agreement was the moment when the world decided it really had to manage climate change in a serious way,” he said. “We were all in it together—that’s what people realized.”

    At Paris, for the first time, rich and poor countries joined together in a legally binding treaty pledging to hold global heating well below 2 degrees Celsius, or 3.6 degrees Fahrenheit, the scientifically advised limit of safety, with an aspiration not to breach 1.5 C above preindustrial levels [The Guardian]. Those two weeks of tense talks in the French capital were the climax of 25 years of tortuous negotiations on the climate, since governments were warned of the dangers of climate chaos in 1990. The failure, discord, and recriminations of those decades were left behind as delegates from 196 countries hugged, wept, and cheered in Paris.

    Todd Stern, climate envoy to President Barack Obama, recalls: “My team and I had been working toward this for seven years … and the story of climate negotiations had so often been one of disappointment. And yet here we were, and we knew that we had—all together—done a really big thing. A very special moment. An unforgettable one.”

    The accord itself has proven remarkably resilient. Bringing together 196 nations in 2015 was not easy—even as Fabius brought down the gavel on the agreement there was a little chicanery, as Nicaragua had planned to object to the required consensus but was ignored. Yet that consensus has remained robust. When the US—the world’s biggest economy and second biggest emitter—began the process of withdrawing from Paris under President Donald Trump in 2017 [The Guardian], a disaster might have been expected. The 1997 Kyoto protocol fell apart after the US signed but failed to ratify the agreement, leaving climate negotiations in limbo for a decade.

    If Trump was hoping to wreck Paris, he was disappointed: The rest of the world shrugged and carried on. There was no exodus of other countries, although some did pursue more aggressive tactics [The Guardian] at the annual UN talks. The key axis of China and the EU remained intact, deliberately underlined by Chinese president Xi Jinping when he chose to surprise the world with a net-zero-emissions target [The Guardian] at the UN general assembly in September, just as the US election race was heating up.

    Remy Rioux, one of the French government team who led the talks, now chief executive of the French Development Agency, said: “The Paris agreement has proven to be inclusive and at scale, with the participation of countries representing 97 percent of global emissions, as well as that of nonstate actors such as businesses, local governments, and financial institutions—and very resilient, precisely because it is inclusive. The Paris agreement is a powerful signal of hope in the face of the climate emergency.”

    On some measures, Paris could be judged a failure. Emissions in 2015 were about 50 billion tons. By 2019 they had risen to about 55 billion tons [The Guardian], according to the UN Environment Programme (UNEP). Carbon output fell dramatically, by about 17 percent overall and far more in some regions, in this spring’s coronavirus lockdowns, but the plunge also revealed an uncomfortable truth: Even when transport, industry, and commerce grind to a halt, the majority of emissions remain. Far greater systemic change is needed, particularly in energy generation around the world, to meet the Paris goals.

    Ban Ki-moon, former UN secretary-general, told The Guardian: “We have lost a lot of time. Five years after the agreement in Paris was adopted with huge expectations and commitment by world leaders, we have not done enough.”

    What’s more, we are still digging up and burning fossil fuels at a frantic rate. UNEP reported last week [The Guardian] that production of fossil fuels is planned to increase by 2 percent a year. Meanwhile, we continue to destroy the world’s carbon sinks, by cutting down forests—the world is still losing an area of forest the size of the UK each year [The Guardian], despite commitments to stop deforestation—as well as drying out peatlands and wetlands, and reducing the ocean’s capacity to absorb carbon from the air.

    Global temperatures have already risen by more than 1 C above pre-industrial levels [The Guardian], and the results in extreme weather are evident around the world. Wildfires [The Guardian] raged across Australia and the US this year, more than 30 hurricanes struck [The Guardian], heatwaves blasted Siberia [The Guardian], and the Arctic ice is melting faster [The Guardian].

    António Guterres, secretary-general of the UN, put it in stark terms: “Humanity is waging war on nature. This is suicidal. Nature always strikes back—and it is already doing so with growing force and fury. Biodiversity is collapsing. One million species are at risk of extinction. Ecosystems are disappearing before our eyes.”

    But to judge Paris solely by these portents of disaster would be to lose sight of the remarkable progress that has been made on climate change since. This year, renewable energy will make up about 90 percent of the new energy generation capacity installed around the world [The Guardian], according to the International Energy Agency, and by 2025 it will be the biggest source of power, displacing coal. That massive increase reflects rapid falls in the price of wind turbines and solar panels, which are now competitive with or cheaper than fossil fuel generation in many countries, even without subsidy.

    “We never expected to see prices come down so fast,” said Adair Turner, chair of the Energy Transitions Commission and former chief of the UK’s Committee on Climate Change. “We have done better than the most optimistic forecasts.”

    Oil prices plunged this spring as coronavirus lockdowns grounded planes and swept cities free of cars, and some analysts predict that the oil business will never recover its old hegemony. Some oil companies, including BP and Shell, now plan to become carbon-neutral.

    Electric vehicles have also improved much faster than expected, and that is reflected in the stunning share price rise of Tesla. The rise of low-carbon technology has meant that when the Covid-19 crisis struck, leading figures quickly called for a green recovery and set out plans for ensuring the world “builds back better.”

    Most importantly, the world has coalesced around a new target, based on the Paris goals but not explicit in the accord: net zero emissions. In the last two years, first a trickle and now a flood of countries have come forward with long-term goals to reduce their greenhouse gas emissions to a fraction of their current amount, to the point where they are equal to or outweighed by carbon sinks, such as forests.

    The UK, EU member states, Norway, Chile, and a host of developing nations led the way in adopting net zero targets. In September, China’s president surprised the world by announcing his country would achieve net zero emissions in 2060. Japan and South Korea quickly followed suit. US president-elect Joe Biden has also pledged to adopt a target of net zero emissions by 2050. That puts more than two-thirds of the global economy under pledges to reach net zero carbon around midcentury.

    If all of these countries meet their targets, the world will be almost on track to meet the upper limit of the Paris agreement. Climate Action Tracker, which analyzes carbon data, has calculated that the current pledges would lead to a temperature rise of 2.1 C, bringing the world within “striking distance” of fulfilling the 2015 promise.

    Niklas Höhne of NewClimate Institute, one of the partner organizations behind Climate Action Tracker, said: “Five years on, it’s clear the Paris agreement is driving climate action. Now we’re seeing a wave of countries signing up [to net zero emissions]. Can anyone really afford to miss catching this wave?”

    The key issue, though, is whether countries will meet these long-term targets [The Guardian]. Making promises for 2050 is one thing, but major policy changes are needed now to shift national economies on to a low-carbon footing. “None of these [net zero] targets will be meaningful without very aggressive action in this decade of the 2020s,” said Todd Stern. “I think there is growing, but not yet broad enough, understanding of that reality.”

    Renewing the shorter-term commitments in the Paris agreement will be key. As well as the overarching and legally binding limit of 1.5 C or 2 C, governments submitted nonbinding national plans at Paris to reduce their emissions, or to curb the projected rise in their emissions, in the case of smaller developing countries. The first round of those national plans—called nationally determined contributions—submitted in 2015 was inadequate, however, and would lead to a disastrous 3 C of warming.

    The accord also contained a ratchet mechanism, by which countries must submit new national plans every five years, to bring them in line with the long-term goal, and the first deadline is now looming on December 31. UN climate talks were supposed to take place this November in Glasgow, but they had to be postponed because of the pandemic. The UK will host the Cop26 summit next November instead, and that will be the crucial meeting.

    The signs for that decisive moment are good, according to Laurent Fabius. The election of Biden means the US will be aligned with the EU and China in pushing for net zero emissions to be fully implemented. “Civil society, politics, business all came together for the Paris agreement,” Fabius told The Guardian. “We are looking at the same conjunction of the planets now with the US, the EU, China, Japan—if the big ones are going in the right direction, there will be a very strong incentive for all countries to go in the right direction.”

    As host of the Cop26 talks, the UK is redoubling its diplomatic efforts towards next year’s conference. The French government brought all of its diplomatic might to bear on Paris, instructing its ambassadors in every country to make climate change their top priority and sending out ministers around the globe to drum up support.

    Laurence Tubiana, France’s top diplomat at the talks, said another key innovation was what she termed “360-degree diplomacy.” That means not just working through the standard government channels, with ministerial meetings and chats among officials, but reaching out far beyond, making businesses, local government and city mayors, civil society, academics, and citizens part of the talks.

    “That was a very important part of [the success] of Paris,” she said. The UK has taken up a similar stance, with a civil society forum to ensure people’s voices are heard, and a specially convened council of young people advising the UN secretary-general. The UK’s high-level champion, Nigel Topping, is also coordinating a “race to zero” by which companies, cities, states, and other sub-national governments are themselves committing to reach net zero emissions.

    One massive issue outstanding ahead of Cop26 is finance. Bringing developing countries, which have suffered the brunt of a problem that they did little to cause, into the Paris agreement was essential. Key to that, said Fabius, was the pledge of financial assistance. The French government had to reassure poorer nations at the talks that $100 billion a year in financial assistance, for poor countries to cut their emissions and cope with the impacts of the climate crisis, would be forthcoming. “Money, money, money,” Fabius insisted, was at the heart of the talks. “If you don’t have that $100 billion [the talks will fail].”

    For the UK as hosts of Cop26, the question of money presents more of a problem since the chancellor, Rishi Sunak, swung his ax at the overseas aid budget in the recent spending review. Although the £11 billion designated for climate aid will be ring-fenced, persuading other developed countries to part with cash—and showing developing countries that the UK is on their side—has suddenly become more difficult. Amber Rudd, the former UK energy and climate minister who represented the UK at the Paris talks, said, “A country that understood the seriousness of Cop26 would not be cutting international aid right now.”

    Alok Sharma, president of Cop26 and the UK’s business secretary, will draw on his experience as the UK’s former international development minister in dealing with developing countries’ expectations. He said, “I completely recognize making sure we have the finance for climate change action is very important. That’s why we have protected international climate finance. I think people understand we are in a difficult economic situation. We have said when the economy recovers we would look to restore [overseas aid as 0.7 percent of GDP]. I do think when it comes to climate change we are putting our best foot forward.”

    Boris Johnson will be hoping to smooth over these tricky issues when he, alongside the French government and the UN, presides over a virtual meeting of world leaders on December 12, the fifth anniversary of the Paris accord. At least 70 world leaders are expected to attend, and they will be pushed to bring forward new NDCs and other policy commitments, as a staging post toward the Cop26 summit.

    Johnson kicked off preparations for the meeting on December 4 by announcing the UK’s own NDC, setting out a 68 percent cut in emissions compared with 1990 levels, by 2030. That would put the UK ahead of other developed economies, cutting emissions further and faster than any G20 country has yet committed to do.

    Critics pointed out, however, that the UK is not on track to meet its own current climate targets, for 2023. Far more detailed policy measures are likely to be required, some of them involving major changes and economic losers as well as winners, before the path to net zero is clear.

    The world is facing the task of a global economic reboot after the ravages of the coronavirus pandemic. The green recovery from that crisis is itself in need of rescue, Guardian analysis has shown, as countries are still pouring money into fossil fuel bailouts. But with so many countries now committed to net zero emissions, and an increasing number coming forward with short-term targets for 2030 to set us on that path, there are still grounds for optimism. This week’s climate ambition summit will be an important milestone, but the Cop26 summit next year will be the key test. The Paris agreement five years on still provides the best hope of avoiding the worst ravages of climate breakdown. The question is whether countries are prepared to back it up with action, rather than more hot air.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:18 pm on December 21, 2020 Permalink | Reply
    Tags: "The Oldest Crewed Deep Sea Submarine Just Got a Big Makeover", , , , WIRED   

    From WIRED: “The Oldest Crewed Deep Sea Submarine Just Got a Big Makeover” 


    From WIRED

    The 60-year-old sub is preparing to take its deepest plunge yet. But in the age of autonomous machines, why are humans exploring the ocean floor at all?

    1
    Credit: Woods Hole Oceanographic Institution.

    In early March, a gleaming white submarine called Alvin surfaced off the Atlantic coast of North Carolina after spending the afternoon thousands of feet below the surface. The submarine’s pilot and two marine scientists had just returned from collecting samples around a methane seep, an oasis for carbon-munching microbes and the larger species of bottom dwellers that feed on them. It was the final dive of a month-long expedition that had taken the crew from the Gulf of Mexico up the East Coast, with stops along the way to explore a massive deep sea coral reef that had recently been discovered off the coast of South Carolina.

    For Bruce Strickrott, Alvin’s chief pilot and the leader of the expedition, these sorts of missions to the bottom of the world are a regular part of life. Since he first started working on Alvin as an engineer nearly 25 years ago, Strickrott has logged more than 2,000 hours in the deep ocean, where he learned to expertly navigate the seabed’s alien landscape and probe for samples with the submarine’s spindly robotic arms. Alvin makes dozens of trips to the seabed every year, but the mission to the methane seep this spring marked a milestone in Strickrott’s career as an explorer: It was the last time that the sub would have meaningful limits on how deep it could dive.

    Since the end of that expedition, Alvin has been ashore getting a major upgrade at the Woods Hole Oceanographic Institution in Massachusetts, which operates the submersible on behalf of the US Navy. By the time Alvin’s makeover is wrapped up in late 2021, the storied submarine will rank among the most capable human-rated deep sea submersibles in the world. When Alvin hits the water again next autumn for a trip into abyssal trenches near Puerto Rico, Strickrott will be among the first to pilot what is effectively a brand new vessel. During that trip, he and a team of oceanographers and US Navy observers will push the submarine to 6,500 meters—far deeper than it has ever gone before.

    Earlier this month, Strickrott and a small team from Woods Hole presented the progress on Alvin’s upgrades at the annual meeting of the American Geophysical Union, which was held remotely as a precaution against the pandemic. Arguably the most important improvements are Alvin’s new titanium ballast spheres and a pressurized crew compartment that will enable the submarine to carry up to three occupants just over four miles below the surface. This upgrade alone will extend Alvin’s maximum depth by more than a mile and put approximately 99 percent of the seafloor within its reach. “We’ll have access to almost the full ocean,” says Strickrott. “It really opens up a lot of opportunities.”

    Alvin is a cross between a robotic laboratory and an excavator. It has a portly white hull with a metallic crew sphere protruding from the front of its belly, and a bright red fin up top. Two jointed sampling arms—the upgrade will give it a third—extend from the front of the crew sphere and are used to shovel up to 500 pounds of sediment and other material into a sample hold on the craft. As part of the upgrade, Alvin will get some more powerful thrusters mounted to its back end, a suite of sophisticated imaging systems, and an acoustic transmission system so that its occupants can wirelessly send images and metadata from the bottom of the ocean to a ship on the surface.


    Credit: Woods Hole Oceanographic Institution.

    To upgrade Alvin, engineers had to tear the sub down to its metal skeleton at the National Deep Submergence Facility, a federally funded research space hosted at Woods Hole. This is a regular occurrence for Alvin, which gets stripped to its nuts and bolts every five years even when there’s not a major upgrade planned. The vessel is made almost entirely from custom components designed to withstand the uniquely hostile environment in the deep ocean, and the regular teardowns ensure that everything is in good shape.

    Adam Soule, the Chief Scientist for Deep Submergence at Woods Hole, says it’s this meticulous attention to detail that’s helped Alvin avoid having even a single serious accident after more than 5,000 dives. “We’re not developing prototypes,” Soule says. “All the technology we develop has to be bulletproof, so there’s a lot of engineering that’s done before anything makes it onto the sub.” Still, there have been some close calls. Only a few years after Alvin was commissioned, a mechanical failure on its carrier ship caused it to fall into the ocean and it began to sink with three crew members inside. The crew narrowly escaped, but it took a year to recover Alvin from the bottom of the ocean.

    Alvin has been in service for nearly six decades, but due to regular teardowns and rebuilds, the submarine piloted by Strickrott has little more than a name in common with its progenitor. For the philosophically inclined, Alvin calls to mind the Ship of Theseus, an ancient thought experiment in which the boards of a ship are torn out and replaced one by one until nothing of the original remains. Over the years, Alvin has been upgraded several times so it can carry researchers ever deeper into the ocean, spend more time at depth, and carry more samples plucked from the seabed. But until its most recent remodel, Alvin’s depth rating only gave it access to around two thirds of the seabed. There was a lot more ocean to explore.

    Alvin’s current upgrade is the second and final phase of an overhaul that began nearly a decade ago. Funded by a $40 million grant from the National Science Foundation, the first phase laid the foundation for subsequent improvements that would extend the sub’s maximum depth from 4,500 to 6,500 meters, which is deep enough to cover 99 percent of the world’s seabed. By the time that phase was finished in 2013, many of Alvin’s components were already rated to the full 6,500 meter depth, including the sub’s personnel carrier, a cramped titanium alloy sphere. But Alvin has had to wait to venture into those depths until after the final improvements were completed during the second and final phase of the upgrade this year. “Back in 2013, about 70 percent of the sub was replaced,” says Strickrott. “We knew that we were going to operate for a period before we finished the last bits and pieces, which is what we’re doing now.”

    Once engineers at Woods Hole have put the finishing touches on Alvin in the spring, it will undergo a rigorous testing process to prepare for its first dive to 6,500 meters. The first tests of the full vehicle will be uncrewed and will demonstrate that Alvin can run its life support systems for 24 hours without creating any harmful gases that would endanger its passengers. Next, a three-person crew will spend 12 hours inside Alvin on the shore to test its life support system again. If everything goes well, the Navy will give the Woods Hole team the go-ahead to begin tests in the water.

    Next September, Alvin will be transported by ship to Puerto Rico, where it will begin its first wet tests. Over the course of a week, Alvin and its crew will dive progressively deeper in roughly 500-meter increments. By the end of the week, Alvin will have reached its maximum depth and touched the seafloor in the abyssal trenches off the Puerto Rican coast. If the tests go well, the Navy will officially authorize Alvin for regular crewed expeditions to that depth, and the submarine will spend most of the next five years in the water around the US conducting scientific research until it’s dragged back to Massachusetts for its regular tuneup.

    The expedition in Puerto Rico will be the first time that Alvin has ventured into the hadal zone, the deepest and least understood region of the ocean. The hadal zone is dark and cold, and the ambient pressure is up to 1,000 times higher than the air pressure at the surface. Life is scarce here. A few species of fish can survive down to around 8,000 meters below the surface, but the deepest regions of the hadal zone are occupied entirely by invertebrates and microscopic organisms.
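    That pressure figure is easy to sanity-check with the hydrostatic relation P = ρgh. The sketch below uses a typical seawater density and illustrative depths (assumptions made for the example, not numbers from Woods Hole):

```python
# Rough hydrostatic pressure at depth: P = rho * g * h, plus one atmosphere at the surface.
# Seawater density and the chosen depths are illustrative assumptions.
RHO_SEAWATER = 1025.0   # kg/m^3, typical near-surface value; deep water is slightly denser
G = 9.81                # m/s^2
ATM = 101_325.0         # Pa, one standard atmosphere

def pressure_in_atmospheres(depth_m: float) -> float:
    """Approximate absolute pressure at the given depth, expressed in atmospheres."""
    return (ATM + RHO_SEAWATER * G * depth_m) / ATM

for depth in (6_500, 11_000):   # Alvin's new limit and roughly the deepest hadal trenches
    print(f"{depth:>6} m  ->  ~{pressure_in_atmospheres(depth):,.0f} atm")
# Roughly 650 atmospheres at 6,500 m and about 1,100 at 11,000 m, consistent
# with the "up to 1,000 times" surface pressure described above.
```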

    Altogether, the world’s hadal trenches occupy an area larger than Australia, but scientists have only just begun to discover what lurks in their depths. The hadal zone extends from 6,000 to 11,000 meters below the surface, and only four people in history have made it to the very bottom. The deepest spot in the ocean, known as the Marianas Trench, received its first visitors in 1960 and wasn’t visited again until the last decade, when filmmaker James Cameron and the adventurer Victor Vescovo made independent dives to the end of the abyss. Although Alvin will only be skimming the surface of the hadal zone, it will be one of only a handful of human-rated crafts that are capable of going that deep.

    When it comes to learning about the deep ocean, Alvin’s upgrade couldn’t come soon enough. Marine scientists are in a race to study the bottom of the ocean before it’s irreparably damaged by human activity. The deep ocean absorbs a significant amount of Earth’s carbon dioxide and heat, but the process is poorly understood. It’s still unclear how rising emissions and temperatures will affect the Earth’s feedback loops with the deep ocean, so gathering data from the bottom of the ocean today will be critical for understanding how it will change in the future.

    “Our knowledge of this abyssal zone is minimal,” Soule said during a presentation at the annual AGU meeting earlier this month. “We can almost count on discoveries of novel species and new processes each time we venture to these newly accessible depths.”

    But the seabed harbors more than just knowledge. It is a treasure trove of valuable metals like cobalt and manganese that go into our electronics; there is likely more of these metals on the seabed than on all of the Earth’s surface. Deep sea mining companies are already doing exploratory work to prepare to harvest these valuable substances at scale. But current approaches to deep sea mining are incredibly destructive, and the detrimental effects of this activity on deep sea ecosystems are not fully understood. The UN-led International Seabed Authority is still hashing out regulatory frameworks that will serve as the rule book for the deep sea gold rush, which buys scientists some valuable time to study the ocean floor before it’s dredged up. Now that it can reach most of the seabed, Alvin will be able to play an even greater role in that scientific mission.

    “You can’t possibly manage a resource or protect its environment if you don’t know what’s there,” says Strickrott. “In the big picture I think it’s pretty important that we get to the deep ocean to understand the biodiversity. If we don’t go there, we won’t know what to do, or have a stake in those decisions with respect to mineral resources.”

    Despite Alvin’s promise, it’s reasonable to wonder why Woods Hole, the National Science Foundation, the US Navy, and their many collaborators want to go to all the time and effort of sprucing up a 60-year-old submarine. These days, updating a vehicle usually means looking for ways to take humans out of the loop—we have autonomous cars, self-landing planes, and smart ships. Ocean explorers have also leaned into autonomous and remotely operated submarines that can explore the ocean floor for a fraction of the cost of Alvin and with none of the risk to human life. Why not let robots do the dirty work of collecting data and leave the humans to pursue pure science?

    Uncrewed submarines have been diving into the hadal zone for decades, but Brennan Phillips, an ocean engineer at the University of Rhode Island who specializes in remotely operated and autonomous deep sea robotics, says it’s hard to beat a human when it comes to exploring the seabed. For starters, humans can see more. Our eyes are amazing sensors and modern underwater cameras—or any cameras, for that matter—can’t come close to matching their resolution, especially in the low light of the deep ocean. “I’ve been in a manned submersible in the deep ocean and seen things with my own eyes that you can’t repeat yet with a camera,” says Phillips. “They’re still a long way short of what the human eye can do.”

    Humans are also important for discovery. Scientists cruising the seabed in Alvin are better equipped to recognize something they’ve never seen before and take a sample of it to study once they’re back on the surface. While this can also be done with a remotely operated sub that is connected to a human controller on the surface via a long tether, it’s more challenging for remote operators to identify promising sample sites. The miles-long tether can also create problems for the robot and limit where it can travel. Untethered autonomous robots have a harder time still, since they don’t have access to GPS for guidance and can struggle to recognize promising sample sites on their own.

    Phillips thinks that relying on robots might also compromise what scientists can see in the deep ocean. They tend to be much louder than subs built for humans, and they use much brighter lights because of the limited resolution of their cameras. Phillips says this likely frightens bottom dwellers, which makes it harder for researchers to make new discoveries. He suggests that part of the reason the hadal zone appears so desolate is because by the time these lumbering robots get to the bottom, they’ve scared away all the inhabitants.

    “There’s only been a handful of dives to these depths, so we really need to go more often,” says Phillips. “The hadal zone is considered to be basically featureless, but some of that might be coming down to our methodology. If you just make it a bit more stealthy, you can probably find things down there that we’ve been missing this whole time.”

    After 25 years of roaming the seafloor in Alvin, Strickrott isn’t afraid of a robot taking his job anytime soon. He acknowledges the important scientific reasons for keeping humans in the loop, but for Strickrott, human-driven deep sea exploration taps into something more profound. While many people might not relish the idea of being trapped in a cramped metal bubble in the pitch blackness of the deep ocean, Strickrott says that’s his “happy place.” He can still recall the thrill he had working on Alvin as a young ocean engineer, and he relishes accompanying budding marine scientists on their first trip to the bottom of the ocean.

    “There is, without a doubt, this really aspirational part of oceanography that involves humans exploring these parts of our planet that have never been seen before,” says Strickrott. “In order to keep the science of oceanography vibrant, we need to ensure that there are lots of people who are excited by the science.”

    Strickrott feels that establishing that connection with the ocean—by immersing yourself in it, by going as deep as you can in this alien environment, by seeing how life can thrive in an environment that would kill land dwellers instantly—is critical to its future and our own. We may need cutting edge technology to survive a trip into its depths, but the ocean and life on land are deeply intertwined. It’s that connection that Strickrott channels each time he climbs into Alvin. “Once you’re underwater, you get into this place in your mind that’s sort of Zen,” he says. “You’re part of the system.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 2:01 pm on December 20, 2020 Permalink | Reply
    Tags: "The Universe Is Expanding Faster Than Expected", , , , , , ESA/GAIA DR 3, WIRED   

    From WIRED: “The Universe Is Expanding Faster Than Expected” 


    From WIRED

    12.20.2020
    Natalie Wolchover

    1
    The Gaia telescope gauges the distances to stars by measuring their parallax, or apparent shift over the course of a year. Closer stars have a larger parallax. Credit: Samuel Velasco/Quanta Magazine.

    On December 3, humanity suddenly had information at its fingertips that people have wanted for, well, forever: the precise distances to the stars.

    “You type in the name of a star or its position, and in less than a second you will have the answer,” Barry Madore, a cosmologist at the University of Chicago and Carnegie Observatories, said on a Zoom call last week. “I mean …” He trailed off.

    “We’re drinking from a firehose right now,” said Wendy Freedman, also a cosmologist at Chicago and Carnegie and Madore’s wife and collaborator.

    “I can’t overstate how excited I am,” Adam Riess of Johns Hopkins University, who won the 2011 Nobel Prize in Physics for co-discovering dark energy, said in a phone call. “Can I show you visually what I’m so excited about?” We switched to Zoom so he could screen-share pretty plots of the new star data.

    Gaia EDR3 star trails. Credit: Gaia EDR3, https://cds.unistra.fr/GaiaEDR3News

    The data comes from the European Space Agency’s Gaia spacecraft, which has spent the past six years stargazing from a perch 1 million miles high.

    ESA (EU)/Gaia satellite.

    The telescope has measured the “parallaxes” of 1.3 billion stars—tiny shifts in the stars’ apparent positions in the sky that reveal their distances. “The Gaia parallaxes are by far the most accurate and precise distance determinations ever,” said Jo Bovy, an astrophysicist at the University of Toronto.

    Best of all for cosmologists, Gaia’s new catalogue includes the special stars whose distances serve as yardsticks for measuring all farther cosmological distances. Because of this, the new data has swiftly sharpened the biggest conundrum in modern cosmology: the unexpectedly fast expansion of the universe, known as the Hubble tension.

    The tension is this: The cosmos’s known ingredients and governing equations predict that it should currently be expanding at a rate of 67 kilometers per second per megaparsec—meaning we should see galaxies flying away from us 67 kilometers per second faster for each additional megaparsec of distance. Yet actual measurements consistently overshoot the mark. Galaxies are receding too quickly. The discrepancy thrillingly suggests that some unknown quickening agent may be afoot in the cosmos.
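    That relationship is Hubble’s law, v = H0 × d. A minimal sketch of what the quoted rate implies, with 73 used here as a round stand-in for the measured values discussed later in the article:

```python
# Hubble's law: recession velocity v = H0 * d.
# 67 km/s/Mpc is the predicted rate quoted above; 73 is a round stand-in
# for the measured values discussed later in the article.

def recession_velocity_km_s(H0_km_s_per_Mpc: float, distance_Mpc: float) -> float:
    """Speed at which a galaxy at the given distance recedes from us."""
    return H0_km_s_per_Mpc * distance_Mpc

for H0 in (67.0, 73.0):
    v = recession_velocity_km_s(H0, 100.0)   # a galaxy 100 megaparsecs away
    print(f"H0 = {H0:.0f} km/s/Mpc  ->  v = {v:,.0f} km/s at 100 Mpc")
# The same galaxy recedes about 600 km/s faster under the measured rate than
# under the predicted one; that persistent gap is the Hubble tension.
```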

    “It would be incredibly exciting if there was new physics,” Freedman said. “I have a secret in my heart that I hope there is, that there’s a discovery to be made there. But we want to make sure we’re right. There’s work to do before we can say so unequivocally.”

    That work involves reducing possible sources of error in measurements of the cosmic expansion rate. One of the biggest sources of that uncertainty has been the distances to nearby stars—distances that the new parallax data appears to all but nail down.

    In a paper posted online December 15 and submitted to The Astrophysical Journal [“Cosmic Distances Calibrated to 1% Precision with Gaia EDR3 Parallaxes and Hubble Space Telescope Photometry of 75 Milky Way Cepheids Confirm Tension with LambdaCDM”], Riess’s team has used the new data to peg the expansion rate at 73.2 kilometers per second per megaparsec, in line with their previous value, but now with a margin of error of just 1.8 percent. That seemingly cements the discrepancy with the far lower predicted rate of 67.
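    To put that 1.8 percent margin in the same units as the rates themselves, a quick arithmetic check using only the numbers quoted in this paragraph:

```python
# Convert the quoted 1.8 percent uncertainty on 73.2 km/s/Mpc into absolute
# units and compare against the predicted 67 km/s/Mpc.
measured, relative_error = 73.2, 0.018
predicted = 67.0

absolute_error = measured * relative_error   # ~1.3 km/s/Mpc
gap = measured - predicted                   # 6.2 km/s/Mpc
print(f"+/- {absolute_error:.1f} km/s/Mpc; gap to prediction = {gap:.1f} km/s/Mpc")
print(f"gap / measurement uncertainty ~ {gap / absolute_error:.1f}")
# The gap is several times the measurement's own error bar (a full significance
# estimate would also fold in the uncertainty on the predicted value).
```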

    Freedman and Madore expect to publish their group’s new and improved measurement of the cosmic expansion rate in January. They too expect the new data to firm up, rather than shift, their measurement, which has tended to land lower than Riess’s and those of other groups but still higher than the prediction.

    Since Gaia launched in December 2013, it has released two other massive data sets that have revolutionized our understanding of our cosmic neighborhood. Yet Gaia’s earlier parallax measurements were a disappointment. “When we looked at the first data release” in 2016, Freedman said, “we wanted to cry.”

    An Unforeseen Problem

    If parallaxes were easier to measure, the Copernican revolution might have happened sooner.

    Copernicus proposed in the 16th century that the Earth revolves around the sun. But even at the time, astronomers knew about parallax. If Earth moved, as Copernicus held, then they expected to see nearby stars shifting in the sky as it did so, just as a lamppost appears to shift relative to the background hills as you cross the street. The astronomer Tycho Brahe didn’t detect any such stellar parallax and thereby concluded that Earth does not move.

    And yet it does, and the stars do shift—albeit barely, because they’re so far away.

    It took until 1838 for a German astronomer named Friedrich Bessel to detect stellar parallax. By measuring the angular shift of the star system 61 Cygni relative to the surrounding stars, Bessel concluded that it was 10.3 light-years away. His measurement differed from the true value by only 10 percent—Gaia’s new measurements place the two stars in the system at 11.4030 and 11.4026 light-years away, give or take one or two thousandths of a light-year.

    The 61 Cygni system is exceptionally close. More typical Milky Way stars shift by mere ten-thousandths of an arcsecond—just hundredths of a pixel in a modern telescope camera. Detecting the motion requires specialized, ultra-stable instruments. Gaia was designed for the purpose, but when it switched on, the telescope had an unforeseen problem.
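    Converting a parallax into a distance is a one-liner: a star whose apparent position shifts by p arcseconds sits 1/p parsecs away, and a parsec is about 3.26 light-years. A small sketch, with the 61 Cygni parallax back-computed from the distance quoted above, so treat it as an illustration rather than a catalogue value:

```python
# Distance from parallax: d [parsecs] = 1 / p [arcseconds].
LY_PER_PARSEC = 3.2616

def distance_light_years(parallax_arcsec: float) -> float:
    """Distance in light-years implied by a measured parallax."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# ~0.286 arcsec is roughly what the quoted ~11.4 light-year distance to 61 Cygni
# implies (back-computed here, not a catalogue value).
print(f"61 Cygni: ~{distance_light_years(0.286):.1f} light-years")

# A more typical Milky Way star shifting by a few ten-thousandths of an arcsecond:
print(f"p = 0.0003 arcsec: ~{distance_light_years(0.0003):,.0f} light-years")
```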

    The telescope works by looking in two directions at once and tracking the angular differences between stars in its two fields of view, explained Lennart Lindegren, who co-proposed the Gaia mission in 1993 and led the analysis of its new parallax data [Astronomy and Astrophysics “Gaia Early Data Release 3: Parallax bias versus magnitude, colour, and position”]. Accurate parallax estimates require the angle between the two fields of view to stay fixed. But early in the Gaia mission, scientists discovered that it does not. The telescope flexes slightly as it rotates with respect to the sun, introducing a wobble into its measurements that mimics parallax. Worse, this parallax “offset” depends in complicated ways on objects’ positions, colors and brightness.

    However, as data has accrued, the Gaia scientists have found it easier to separate the fake parallax from the real. Lindegren and colleagues managed to remove much of the telescope’s wobble from the newly released parallax data, while also devising a formula that researchers can use to correct the final parallax measurements depending on a star’s position, color and brightness.
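    In practice that correction amounts to subtracting a small, star-dependent zero-point offset from each raw parallax before using it. The sketch below is a schematic stand-in: Lindegren and colleagues’ published formula depends on magnitude, color, and sky position, and the constant offset used here only represents its typical size.

```python
# Schematic parallax zero-point correction (the real Gaia EDR3 recipe is a
# published function of magnitude, colour, and position; the constant offset
# below only represents its typical scale).

def corrected_parallax_mas(p_observed_mas: float, zero_point_mas: float = -0.017) -> float:
    """Remove an instrumental zero-point offset from an observed parallax (milliarcseconds)."""
    return p_observed_mas - zero_point_mas

p_raw = 0.250                           # hypothetical distant Cepheid, 0.25 mas
p_fix = corrected_parallax_mas(p_raw)   # 0.267 mas once the bias is removed
print(f"distance shifts from {1000 / p_raw:,.0f} pc to {1000 / p_fix:,.0f} pc")
# Even a ~0.02 mas bias moves a kiloparsec-scale distance by several percent,
# which is why getting the offset right matters so much for what follows.
```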

    Climbing the Ladder

    With the new data in hand, Riess, Freedman and Madore and their teams have been able to recalculate the universe’s expansion rate. In broad strokes, the way to gauge cosmic expansion is to figure out how far away distant galaxies are and how fast they’re receding from us. The speed measurements are straightforward; distances are hard.

    The most precise measurements rely on intricate “cosmic distance ladders.” The first rung consists of “standard candle” stars in and around our own galaxy that have well-defined luminosities, and which are close enough to exhibit parallax—the only sure way to tell how far away things are without traveling there. Astronomers then compare the brightness of these standard candles with that of fainter ones in nearby galaxies to deduce their distances. That’s the second rung of the ladder. Knowing the distances of these galaxies, which are chosen because they contain rare, bright stellar explosions called Type Ia supernovas, allows cosmologists to gauge the relative distances of farther-away galaxies that contain fainter Type Ia supernovas. The ratio of these faraway galaxies’ speeds to their distances gives the cosmic expansion rate.
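    Each rung of that ladder is, at heart, a comparison of brightnesses. The sketch below strings the steps together using the textbook distance-modulus relation and invented magnitudes and velocities; it illustrates the logic, not either team’s actual pipeline:

```python
import math

# Rung 1: a parallax distance fixes a standard candle's absolute magnitude.
def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """M = m - 5*log10(d / 10 pc): how bright the star would appear from 10 parsecs."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# Rung 2: the same kind of candle, seen fainter in another galaxy, gives that galaxy's distance.
def distance_from_candle_pc(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus: d = 10 pc * 10**((m - M) / 5)."""
    return 10.0 * 10 ** ((apparent_mag - absolute_mag) / 5.0)

# Final step: expansion rate = recession velocity / distance (all values hypothetical).
M = absolute_magnitude(apparent_mag=4.0, distance_pc=500.0)        # nearby candle with a parallax distance
d_pc = distance_from_candle_pc(apparent_mag=26.0, absolute_mag=M)  # same candle type in a distant galaxy
d_Mpc = d_pc / 1e6
H0 = 920.0 / d_Mpc   # 920 km/s is an invented recession velocity for that galaxy
print(f"M = {M:.2f}, d = {d_Mpc:.1f} Mpc, H0 ~ {H0:.0f} km/s/Mpc")
```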

    Parallaxes are thus crucial to the whole construction. “You change the first step—the parallaxes—then everything that follows changes as well,” said Riess, who is one of the leaders of the distance ladder approach. “If you change the precision of the first step, then the precision of everything else changes.”

    Riess’s team has used Gaia’s new parallaxes of 75 Cepheids—pulsating stars that are their preferred standard candles—to recalibrate their measurement of the cosmic expansion rate.

    Freedman and Madore, Riess’s chief rivals at the top of the distance ladder game, have argued in recent years that Cepheids foster possible missteps on higher rungs of the ladder. So rather than lean too heavily on them, their team is combining measurements based on multiple kinds of standard-candle stars from the Gaia data set, including Cepheids, RR Lyrae stars, tip-of-the-red-giant-branch stars and so-called carbon stars.

    “Gaia’s [new data release] is providing us with a secure foundation,” said Madore. Although a series of papers by Madore and Freedman’s team aren’t expected for a few weeks, they noted that the new parallax data and correction formula appear to work well. When used with various methods of plotting and dissecting the measurements, data points representing Cepheids and other special stars fall neatly along straight lines, with very little of the “scatter” that would indicate random error.

    “It’s telling us we’re really looking at the real stuff,” Madore said.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:23 am on December 5, 2020 Permalink | Reply
    Tags: "Physicists Are Closer to Knowing the Size of a Proton … Sort of", "Physics is somewhat less broken but still not quite right", , If the muon and electron don't behave equivalently then quantum chromodynamics- a major theory in physics- is irretrievably broken in some way., , Physicists went and did something funny: they replaced the electron with its heavier and somewhat unstable equivalent the muon., , The muon should behave exactly like the electron except for the mass difference, WIRED   

    From ars technica: “Physics is somewhat less broken, but still not quite right” 

    From ars technica

    12/2/2020
    John Timmer
    jtimmer@arstechnica.com

    Measurements of the proton’s charge radius still disagree, but by less.
    A new and potentially improved measurement of a proton’s charge radius brings scientists closer to an answer. But the issue is still unresolved.

    2
    Credit: Getty Images.

    1
    One way to measure the charge radius of a proton is to bounce something off it (proton-sized clamp is only available via metaphor).

    How big is a proton? This doesn’t sound like a very complicated question, but it’s one that turned out to have the potential to wreck a lot of modern physics. That’s because different methods of measuring the proton’s charge radius produced results that disagreed—and not just by a little bit: the answers were four standard deviations apart. But now, a new and potentially improved measurement brings them much closer to agreement, although not quite close enough that we can consider the issue resolved.

    The quark structure of the proton. Credit: Arpad Horvath, 16 March 2006.

    We seem to have a problem

    There are a couple of ways to measure a proton’s charge radius. One is to bounce other charged particles off the proton and measure its size based on their deflections. Another is to explore how the proton’s charge influences the behavior of an electron orbiting it in a hydrogen atom, which consists of only a single proton and electron. The exact energy difference between different orbitals depends in part on the proton’s charge radius. And, if an electron transitions from one orbital to another, it will emit (or absorb) a photon with an energy that corresponds to that difference. Measure the photon, and you can work back to the energy difference and thus the proton’s charge radius.
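    The link between the emitted photon and the energy gap is E = hc/λ. A minimal sketch; the 205-nanometer wavelength is simply an illustrative ultraviolet value, not necessarily the transition measured in the paper:

```python
# Photon energy from wavelength: E = h * c / lambda.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"{photon_energy_ev(205.0):.3f} eV")   # ~6.05 eV for a 205 nm ultraviolet photon
# Pin down the wavelength precisely enough and the orbital energy difference --
# and, through it, the proton's charge radius -- is pinned down to matching precision.
```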

    (The actual wavelength depends on both the charge radius and a physical constant, so you actually need to measure the wavelengths of two transitions in order to produce values for both the charge radius and the physical constant. But for the purposes of this article, we’ll just focus on one measurement.)

    A rough agreement between these two methods seemed to leave physics in good shape. But then the physicists went and did something funny: they replaced the electron with its heavier and somewhat unstable equivalent, the muon. According to what we understand of physics, the muon should behave exactly like the electron except for the mass difference. So, if you can measure the muon orbiting a proton in the brief flash of time before it decays, you should be able to produce the same value for the proton’s charge radius.

    Naturally, it produced a different value. And the difference was large enough that a simple experimental error was unlikely to account for it.

    If the measurements really were different, then that indicates a serious problem in our understanding of physics. If the muon and electron don’t behave equivalently, then quantum electrodynamics, the theory that treats the two leptons as identical apart from their mass, is irretrievably broken in some way. And having a broken theory is something that makes physicists very excited.

    Combing the frequencies

    The new work is largely an improved version of past experiments in that it measures a specific orbital transition in standard hydrogen composed of an electron and a proton. To begin with, the hydrogen itself was brought to a very low temperature by passing it through an extremely cold metal nozzle on its way into the vacuum container where the measurements were made. This limits the impact of thermal noise on the measurements.

    The second improvement is that the researchers worked in the ultraviolet, where shorter wavelengths helped improve the precision. They measured the wavelength of the photons emitted by the hydrogen atoms using what’s called a frequency comb, which produces photons at an evenly spaced series of wavelengths that act a bit like the marks on a ruler. All of this let them measure the orbital transition 20 times more precisely than in the team’s earlier attempt.
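    A frequency comb’s “ruler marks” sit at evenly spaced optical frequencies, f_n = f_ceo + n·f_rep, where both f_ceo and f_rep are radio frequencies that can be tied to an atomic clock. The toy model below, with made-up parameters, shows how an unknown optical frequency is read off by finding the nearest comb tooth and measuring the beat note against it:

```python
# Toy frequency-comb measurement: comb teeth at f_n = f_ceo + n * f_rep.
# All parameter values here are invented for illustration.
f_rep = 250e6   # repetition rate, Hz (typical order of magnitude)
f_ceo = 35e6    # carrier-envelope offset frequency, Hz

def nearest_tooth(f_optical_hz: float) -> tuple[int, float]:
    """Index of the nearest comb tooth and the beat note (Hz) against it."""
    n = round((f_optical_hz - f_ceo) / f_rep)
    beat = f_optical_hz - (f_ceo + n * f_rep)
    return n, beat

f_unknown = 1.4623450017e15   # hypothetical ultraviolet emission, roughly 205 nm
n, beat = nearest_tooth(f_unknown)
print(f"tooth n = {n:,}, beat = {beat / 1e6:.1f} MHz")
# Counting the tooth index and measuring a radio-frequency beat note fixes an
# optical frequency near 1e15 Hz to the comb's own (clock-limited) precision.
```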

    The result the researchers get also disagrees with earlier measurements of normal hydrogen (though not a more recent one). And it’s much, much closer to the measurements made using muons orbiting protons. So, from the perspective of quantum mechanics being accurate, this is good news.

    But…

    But not great news, since the two results are still outside of each other’s error bars. Part of the problem there is that the added mass of the muon makes the error bars on those experiments extremely small. That makes it very difficult for any results obtained with a normal electron to be consistent with the muon results without completely overlapping with them. And the authors acknowledge that the difference is likely to just be unaccounted for errors that broaden the uncertainty enough to allow overlap, citing the prospect of “systematic effects in either (or both) of these measurements.”

    So, the work is an important landmark in finding ways to improve the precision of the results, and the outcome suggests that quantum electrodynamics is probably fine. But it doesn’t completely resolve the difference, meaning we’re going to need some more work before we can truly breathe easily. Which is annoying enough to possibly explain why Science chose to run the paper on Thanksgiving, when fewer people would be paying attention.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 12:26 pm on November 30, 2020 Permalink | Reply
    Tags: "Beautiful Yet Unnerving Photos of the Arctic Getting Greener", , , , WIRED   

    From WIRED: “Beautiful Yet Unnerving Photos of the Arctic Getting Greener” 


    From WIRED

    11.30.2020
    Matt Simon

    Using tricked-out drones, scientists are watching vegetation boom in the far north. Their findings could have big implications for the whole planet.

    1
    The Arctic tundra of the Yukon, Canada. Credit: Jeff Kerby/National Geographic Society.

    The Arctic is getting greener, and it’s about as pretty as you might expect—vast stretches of coastal land positively glowing against cobalt seas. But all that green is in fact an alarm: Vegetation is growing more abundant as this region warms twice as fast as the rest of the planet. Northern landscapes are undergoing massive change, with potential consequences not just for the Arctic itself, but the world as a whole.

    One group of researchers has been on a multiyear quest to understand that change on a fine scale. They’re combining satellite data, quadcopter measurements, and good old boots-on-the-frigid-ground fieldwork. We’re talking about labor like measuring individual leaves on plants to determine how much they’re growing, year after year. “So it kind of scales up from all of these little dramas of individual plants playing out, that then influence which plants exist on the landscape,” says Jeffrey Kerby, an ecologist at Aarhus University in Denmark and coauthor of a new paper [Environmental Research Letters] from the team. “And when you spread that out over a huge area, it can have very consequential impacts on the carbon cycle.”

    2
    The island Qikiqtaruk, in the Yukon Territory, Canada, where the scientists do their surveys. Credit: Jeff Kerby/National Geographic Society.

    That’s because perhaps a third of the carbon stored in the soils of the world is in the Arctic permafrost—essentially frozen dirt. Growing in that soil are mostly grasses and shrubs, none of which grow above knee height. But these scientists are finding that as the Arctic warms, the period between when the snow melts and when it returns is getting longer, so plants are greening up earlier in the year. Some are also growing taller.

    Normally, the shrubs and grasses of the tundra trap snow in the winter, and keep it from blowing around the landscape. But as temperatures rise, taller shrub species are becoming more abundant, trapping thicker layers of snow. That might seem great—all that snow keeps the permafrost from warming up—but in fact it prevents the chill of winter from penetrating the soil enough to keep it frozen. And that’s a problem, because if the permafrost doesn’t get cold enough to stay frozen—well, permanently—it will start to release that trapped carbon dioxide and methane, an extremely potent greenhouse gas.

    3
    Researchers Isla Myers-Smith and Gergana Daskalova do good-old boots-on-the-ground science, surveying a plot of plants. Credit: Jeff Kerby/National Geographic Society.

    “In other instances, shrubs are darker than grasses, so that changes the albedo,” says Kerby, referring to the way that the landscape reflects light back into space. The white snow reflects light, while darker bare earth and green plants absorb it. “It’s kind of like wearing a black T-shirt on a summer day versus a white T-shirt: You’re just going to feel hotter, because black is absorbing more heat,” Kerby continues. “And so that will melt the snow faster, or it can thaw permafrost faster.”
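
    To get a feel for the albedo effect Kerby describes, here is a back-of-the-envelope comparison in Python. The albedo values and the insolation figure are rough, illustrative numbers drawn from typical textbook ranges, not measurements from this study:

```python
# Toy comparison of how much solar energy different Arctic surfaces absorb.
# Albedo values are rough textbook figures, not measurements from this study.
SOLAR_FLUX = 500.0  # W/m^2, illustrative midsummer insolation at high latitude

albedos = {
    "fresh snow": 0.85,         # highly reflective
    "tundra grasses": 0.20,
    "dark shrub canopy": 0.12,  # absorbs most incoming light
}

for surface, albedo in albedos.items():
    absorbed = (1.0 - albedo) * SOLAR_FLUX  # whatever isn't reflected is absorbed
    print(f"{surface:>18}: absorbs ~{absorbed:.0f} W/m^2 of incoming sunlight")
```

    In this toy example the dark shrub canopy absorbs roughly six times as much energy per square meter as fresh snow, which is exactly the kind of feedback the researchers worry about.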

    To make the Arctic carbon cycle even more complicated, all that vegetation of course sequesters carbon: Plants suck in CO2 and spit out oxygen. “So one of the big questions is, will this greening signal, these increases in plants, offset the losses of carbon from the systems as permafrost thaws?” says Isla Myers-Smith, an ecologist at the University of Edinburgh, who supervises the research and coauthored the paper.

    4
    Researcher Jeff Kerby calibrates a drone for flight. Credit: Andrew C. Cunliffe.

    5
    Jakob Assmann (front) and Santeri Lethonen start up a drone. Credit: Jeff Kerby/National Geographic Society.

    The team is beginning to answer that question and a slew of others with fancy drones. Being able to point a satellite at vegetation in the Arctic is great for collecting data about a large area, but the resolution is usually not super; it’s on the scale of 30 meters if you’re lucky, but usually more like 250 meters. That’s like a microbiologist only being able to study bacteria with a magnifying glass. With off-the-shelf quadcopters, like the DJI Phantom, the team can now fly over a hectare of Arctic vegetation and scan it in fine detail. In a way, they’ve now got themselves a microscope.

    These drones are equipped with cameras that see into the near-infrared, rather than the visible world you and I see. “The way a leaf reflects near-infrared—and red—light is dependent on both the chlorophyll content in there, as well as the structure of the cell layers on the leaf,” says Aarhus University ecologist Jakob Assmann, who led the work and coauthored the paper. “People have been using this to estimate vegetation productivity, the amount of photosynthetic activity.” That activity is an indicator of growth and the extent to which the Arctic is greening.

    This data is much richer and more reliable than just pointing a regular old camera at a plant and determining how green it is, given that lighting can change dramatically out in the field. By looking in the near-infrared, the team can more accurately show how plant productivity may be changing as the Arctic warms.
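
    The article doesn’t name the specific index the team computes from those two bands, but the standard way to turn red and near-infrared reflectance into a single “greenness” number is the normalized difference vegetation index (NDVI). A minimal sketch, assuming you already have two co-registered reflectance arrays from the drone camera (the array names and sample values below are hypothetical):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense, photosynthetically active vegetation,
    values near 0 indicate bare ground, and snow or water tend to go negative.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on pixels with no signal in either band.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Hypothetical 2x2 patch: two vegetated pixels (top row), two bare-soil pixels.
nir_band = np.array([[0.45, 0.50], [0.30, 0.28]])
red_band = np.array([[0.08, 0.07], [0.22, 0.24]])
print(ndvi(nir_band, red_band))  # high values on top, low values on the bottom
```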

    These drones are what the team needed to fully characterize how the region is changing. Not that satellite data is now worthless—in fact, that data combined with the drone work and the on-the-ground measurements of plants provides a more holistic picture of the landscape. “So even though we can’t cover the whole landscape with the drones, we can still cover a large enough section that we can then statistically relate to the changes that we see in the satellite data and make sense of that,” Assmann says.
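
    The statistics behind that scaling step aren’t spelled out in the article, but a common way to link the two data sets is to average the fine-grained drone measurements over each coarse satellite pixel footprint and fit a simple regression between the two. A sketch with entirely synthetic numbers, just to show the shape of the approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: mean drone-derived greenness aggregated over each 30 m
# satellite pixel footprint, and the value the satellite reports there.
drone_greenness = rng.uniform(0.2, 0.7, size=50)
satellite_greenness = 0.9 * drone_greenness + 0.03 + rng.normal(0, 0.02, size=50)

# Ordinary least-squares fit: satellite value as a function of drone value.
slope, intercept = np.polyfit(drone_greenness, satellite_greenness, deg=1)
print(f"satellite ~ {slope:.2f} * drone + {intercept:.2f}")
# A calibration like this lets the fine-scale drone surveys be used to
# interpret and sanity-check greening trends in the coarser satellite record.
```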

    6
    A stroll for science. Credit: Jeff Kerby/National Geographic Society.

    7
    Gergana Daskalova measures leaf growth. Credit: Jeff Kerby/National Geographic Society.

    With this font of new data, the team has set out to answer a bevy of questions. For instance, what are the consequences of plants greening up earlier in the year? “One of the big questions that we have is whether this means that the plants are going to grow longer across the summer season, or whether they’re just going to move their growth to earlier in the season,” says Myers-Smith. How might this in turn affect the carbon cycle? Will an increasingly green Arctic release that locked-up carbon, or will it at the same time sequester more carbon in the new vegetation? And how will this affect herbivores like musk oxen and caribou—surely their feeding habits will change as plant communities do?


    How the Arctic changes over the year, seen from a satellite. Video credit: Jeff Kerby/National Geographic Society.

    A greening Arctic is at once a beautiful and alarming sight, climate change visualized on a massive scale. And this new work is one of our first looks that combines both minute close-ups and a large-scale portrait of the landscape from above. “Plants are kind of books that you can read, because they experience environments all year round, and then reflect that in themselves,” says Kerby. “And so if you want to see what’s changing with the climate, you can just visualize that in the plants.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:58 pm on November 21, 2020 Permalink | Reply
    Tags: "A Solar-Powered Rocket Might Be Our Ticket to Interstellar Space", , , WIRED   

    From JHU Applied Physics Lab via WIRED: “A Solar-Powered Rocket Might Be Our Ticket to Interstellar Space” 

    From Johns Hopkins University Applied Physics Lab

    via

    WIRED

    The idea for solar thermal propulsion has been around for decades, but researchers tapped by NASA just conducted a first test.

    1
    Credit: NASA.

    If Jason Benkoski is right, the path to interstellar space begins in a shipping container tucked behind a laboratory high bay in Maryland. The setup looks like something out of a low-budget sci-fi film: One wall of the container is lined with thousands of LEDs, an inscrutable metal trellis runs down the center, and a thick black curtain partially obscures the apparatus. This is the Johns Hopkins University Applied Physics Laboratory solar simulator, a tool that can shine with the intensity of 20 suns. On Thursday afternoon, Benkoski mounted a small black and white tile onto the trellis and pulled a dark curtain around the setup before stepping out of the shipping container. Then he hit the light switch.

    Once the solar simulator was blistering hot, Benkoski started pumping liquid helium through a small embedded tube that snaked across the slab. The helium absorbed heat from the LEDs as it wound through the channel and expanded until it was finally released through a small nozzle. It might not sound like much, but Benkoski and his team just demonstrated solar thermal propulsion, a previously theoretical type of rocket engine that is powered by the sun’s heat. They think it could be the key to interstellar exploration.

    “It’s really easy for someone to dismiss the idea and say, ‘On the back of an envelope, it looks great, but if you actually build it, you’re never going to get those theoretical numbers,’” says Benkoski, a materials scientist at the Applied Physics Laboratory and the leader of the team working on a solar thermal propulsion system. “What this is showing is that solar thermal propulsion is not just a fantasy. It could actually work.”

    Only two spacecraft, Voyager 1 and Voyager 2, have left our solar system.

    Heliosphere-heliopause showing positions of Voyager spacecraft. Credit: NASA.

    But that was a scientific bonus after they completed their main mission to explore Jupiter and Saturn. Neither spacecraft was equipped with the right instruments to study the boundary between our star’s planetary fiefdom and the rest of the universe. Plus, the Voyager twins are slow. Plodding along at 30,000 miles per hour, it took them nearly a half century to escape the sun’s influence.

    But the data they have sent back from the edge is tantalizing. It showed that much of what physicists had predicted about the environment at the edge of the solar system was wrong. Unsurprisingly, a large group of astrophysicists, cosmologists, and planetary scientists are clamoring for a dedicated interstellar probe to explore this new frontier.

    In 2019, NASA tapped the Applied Physics Laboratory to study concepts for a dedicated interstellar mission. At the end of next year, the team will submit its research to the National Academies of Sciences, Engineering, and Medicine’s Heliophysics decadal survey, which determines sun-related science priorities for the next 10 years. APL researchers working on the Interstellar Probe program are studying all aspects of the mission, from cost estimates to instrumentation. But simply figuring out how to get to interstellar space in any reasonable amount of time is by far the biggest and most important piece of the puzzle.

    The edge of the solar system—called the heliopause—is extremely far away. By the time a spacecraft reaches Pluto, it’s only a third of the way to interstellar space. And the APL team is studying a probe that would go three times farther than the edge of the solar system, a journey of 50 billion miles, in about half the time it took the Voyager spacecraft just to reach the edge. To pull off that type of mission, they’ll need a probe unlike anything that’s ever been built. “We want to make a spacecraft that will go faster, further, and get closer to the sun than anything has ever done before,” says Benkoski. “It’s like the hardest thing you could possibly do.”

    In mid-November, the Interstellar Probe researchers met online for a weeklong conference to share updates as the study enters its final year. At the conference, teams from APL and NASA shared the results of their work on solar thermal propulsion, which they believe is the fastest way to get a probe into interstellar space. The idea is to power a rocket engine with heat from the sun, rather than combustion. According to Benkoski’s calculations, this engine would be around three times more efficient than the best conventional chemical engines available today. “From a physics standpoint, it’s hard for me to imagine anything that’s going to beat solar thermal propulsion in terms of efficiency,” says Benkoski. “But can you keep it from exploding?”
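
    The efficiency argument comes down to exhaust velocity: the hotter and lighter the propellant, the faster it leaves the nozzle. A rough ideal-rocket estimate for solar-heated hydrogen, ignoring nozzle losses and real-gas effects (the temperature and gas properties below are illustrative assumptions, not APL design values):

```python
import math

# Ideal exhaust velocity for a heated gas expanding through a nozzle,
# assuming complete expansion and no losses:
#   v_e = sqrt( 2 * gamma / (gamma - 1) * (R_u / M) * T )
R_UNIVERSAL = 8.314       # J/(mol*K)
GAMMA_H2 = 1.40           # ratio of specific heats for H2; drops somewhat when very hot
MOLAR_MASS_H2 = 2.016e-3  # kg/mol
T_CHAMBER = 3000.0        # K, assumed temperature of hydrogen leaving the shield

v_e = math.sqrt(2 * GAMMA_H2 / (GAMMA_H2 - 1)
                * (R_UNIVERSAL / MOLAR_MASS_H2) * T_CHAMBER)
isp = v_e / 9.81  # specific impulse, seconds

print(f"ideal exhaust velocity ~{v_e / 1000:.1f} km/s, Isp ~{isp:.0f} s")
```

    Even this crude estimate lands around 950 seconds of specific impulse, well above the roughly 450 seconds of the best hydrogen-oxygen chemical engines; hotter operating temperatures would push the advantage further toward the factor Benkoski cites.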

    Unlike a conventional engine mounted on the aft end of a rocket, the solar thermal engine that the researchers are studying would be integrated with the spacecraft’s shield. The rigid flat shell is made from a black carbon foam with one side coated in a white reflective material. Externally it would look very similar to the heat shield on the Parker Solar Probe. The critical difference is the tortuous pipeline hidden just beneath the surface. If the interstellar probe makes a close pass by the sun and pushes hydrogen into its shield’s vasculature, the hydrogen will expand and explode from a nozzle at the end of the pipe. The heat shield will generate thrust.

    It’s simple in theory, but incredibly hard in practice. A solar thermal rocket is only effective if it can pull off an Oberth maneuver, an orbital mechanics hack that turns the sun into a giant slingshot. The sun’s gravity acts like a force multiplier that dramatically increases the craft’s speed if a spacecraft fires its engines as it loops around the star. The closer a spacecraft gets to the sun during an Oberth maneuver, the faster it will go. In APL’s mission design, the interstellar probe would pass just a million miles from its roiling surface.

    To put this in perspective, by the time NASA’s Parker Solar Probe makes its closest approach in 2025, it will be within 4 million miles of the sun’s surface and booking it at nearly 430,000 miles per hour. That’s about twice the speed the interstellar probe aims to hit, and the Parker Solar Probe built up that speed with gravity assists from the sun and Venus over the course of seven years. The Interstellar Probe will have to accelerate from around 30,000 miles per hour to around 200,000 miles per hour in a single shot around the sun, which means getting close to the star. Really close.
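
    To see why burning deep in the sun’s gravity well pays off so dramatically, here is a sketch of the energy bookkeeping for an idealized Oberth maneuver: the probe falls in on a near-parabolic path, fires its engine at a perihelion about a million miles above the solar surface, and keeps whatever kinetic energy is left over after climbing back out. The numbers are round, illustrative values, not APL’s actual trajectory design:

```python
import math

MU_SUN = 1.327e20    # m^3/s^2, solar gravitational parameter
R_SUN = 6.957e8      # m, solar radius
MILE = 1609.34       # m
MPH = MILE / 3600.0  # m/s per mile-per-hour

# Perihelion roughly a million miles above the solar surface (illustrative).
r_perihelion = R_SUN + 1.0e6 * MILE
v_escape = math.sqrt(2 * MU_SUN / r_perihelion)  # local escape speed, ~340 km/s

# A near-parabolic fall means the probe arrives at perihelion moving at about
# the local escape speed; a burn of dv there leaves this much speed "at infinity":
for dv_km_s in (5, 10, 12):
    dv = dv_km_s * 1000.0
    v_infinity = math.sqrt((v_escape + dv) ** 2 - v_escape ** 2)
    print(f"{dv_km_s:>2} km/s burn at perihelion -> "
          f"~{v_infinity / MPH:,.0f} mph of leftover outbound speed")
```

    In this toy model a burn of roughly 12 km/s yields an outbound speed near the 200,000 miles per hour the mission is targeting; the same burn fired in deep space, far from any gravity well, would add only about 27,000 miles per hour.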

    Cozying up to a sun-sized thermonuclear explosion creates all sorts of materials challenges, says Dean Cheikh, a materials technologist at NASA’s Jet Propulsion Laboratory who presented a case study on the solar thermal rocket during the recent conference. For the APL mission, the probe would spend around two-and-a-half hours in temperatures around 4,500 degrees Fahrenheit as it completed its Oberth maneuver. That’s more than hot enough to melt through the Parker Solar Probe’s heat shield, so Cheikh’s team at NASA found new materials that could be coated on the outside to reflect away thermal energy. Combined with the cooling effect of hydrogen flowing through channels in the heat shield, these coatings would keep the interstellar probe cool while it blitzed by the sun. “You want to maximize the amount of energy that you’re kicking back,” says Cheikh. “Even small differences in material reflectivity start to heat up your spacecraft significantly.”
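
    Cheikh’s point about reflectivity can be illustrated with a toy radiative-balance model: for a flat shield at a fixed distance from the sun, the equilibrium temperature scales with the fourth root of the absorbed fraction, so losing a few percentage points of reflectivity costs hundreds of degrees. The flat-plate sketch below ignores the cooling hydrogen, conduction, and the real shield geometry, so the absolute temperatures are only indicative; the sensitivity is the point:

```python
import math

SIGMA = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
L_SUN = 3.828e26  # W, solar luminosity
R_SUN = 6.957e8   # m
MILE = 1609.34    # m

# Solar flux on a shield about a million miles above the solar surface.
r = R_SUN + 1.0e6 * MILE
flux = L_SUN / (4 * math.pi * r ** 2)

EMISSIVITY = 0.9  # assumed emissivity of the shield's radiating faces
for reflectivity in (0.90, 0.95, 0.99):
    absorbed = (1 - reflectivity) * flux
    # Flat plate re-radiating from both faces; no conduction, no coolant flow.
    temp_k = (absorbed / (2 * EMISSIVITY * SIGMA)) ** 0.25
    temp_f = (temp_k - 273.15) * 9 / 5 + 32
    print(f"reflectivity {reflectivity:.2f}: equilibrium ~{temp_k:.0f} K (~{temp_f:.0f} F)")
```

    In this simplified picture, dropping from 99 percent to 90 percent reflectivity raises the shield’s equilibrium temperature by well over a thousand degrees Fahrenheit, which is why the coatings matter so much.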

    A still greater problem is how to handle the hot hydrogen flowing through the channels. At extremely high temperatures, the hydrogen would eat right through the carbon-based core of the heat shield, which means the inside of the channels will have to be coated in a stronger material. The team identified a few materials that could do the job, but there’s just not a lot of data on their performance, especially at extreme temperatures. “There’s not a lot of materials that can fill these demands,” says Cheikh. “In some ways that’s good, because we only have to look at these materials. But it’s also bad because we don’t have a lot of options.”

    The big takeaway from his research, says Cheikh, is there’s a lot of testing that needs to be done on heat shield materials before a solar thermal rocket is sent around the sun. But it’s not a dealbreaker. In fact, incredible advances in materials science make the idea finally seem feasible more than 60 years after it was first conceived by engineers in the US Air Force. “I thought I came up with this great idea independently, but people were talking about it in 1956,” says Benkoski. “Additive manufacturing is a key component of this, and we couldn’t do that 20 years ago. Now I can 3D-print metal in the lab.”

    Even if Benkoski wasn’t the first to float the idea of solar thermal propulsion, he believes he’s the first to demonstrate a prototype engine. During his experiments with the channeled tile in the shipping container, Benkoski and his team showed that it was possible to generate thrust using sunlight to heat a gas as it passed through embedded ducts in a heat shield. These experiments had several limitations. They didn’t use the same materials or propellant that would be used on an actual mission, and the tests occurred at temperatures well below what an interstellar probe would experience. But the important thing, says Benkoski, is that the data from the low-temperature experiments matched the models that predict how an interstellar probe would perform on its actual mission once adjustments are made for the different materials. “We did it on a system that would never actually fly. And now the second step is we start to substitute each of these components with the stuff that you would put on a real spacecraft for an Oberth maneuver,” Benkoski says.

    The concept has a long way to go before it’s ready to be used on a mission—and with only a year left in the Interstellar Probe study, there’s not enough time to launch a small satellite to do experiments in low Earth orbit. But by the time Benkoski and his colleagues at APL submit their report next year, they will have generated a wealth of data that lays the foundation for in-space tests. There’s no guarantee that the National Academies will select the interstellar probe concept as a top priority for the coming decade. But whenever we are ready to leave the sun behind, there’s a good chance we’ll have to use it for a boost on our way out the door.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    JHUAPL campus.

    Founded on March 10, 1942—just three months after the United States entered World War II—the Applied Physics Lab was created as part of a federal government effort to mobilize scientific resources to address wartime challenges.

    APL was assigned the task of finding a more effective way for ships to defend themselves against enemy air attacks. The Laboratory designed, built, and tested a radar proximity fuze (known as the VT fuze) that significantly increased the effectiveness of anti-aircraft shells in the Pacific—and, later, ground artillery during the invasion of Europe. The product of the Laboratory’s intense development effort was later judged to be, along with the atomic bomb and radar, one of the three most valuable technology developments of the war.

    On the basis of that successful collaboration, the government, The Johns Hopkins University, and APL made a commitment to continue their strategic relationship. The Laboratory rapidly became a major contributor to advances in guided missiles and submarine technologies. Today, more than seven decades later, the Laboratory’s numerous and diverse achievements continue to strengthen our nation.

    APL continues to relentlessly pursue the mission it has followed since its first day: to make critical contributions to critical challenges for our nation.

    Johns Hopkins University campus.

    Johns Hopkins University opened in 1876, with the inauguration of its first president, Daniel Coit Gilman. “What are we aiming at?” Gilman asked in his installation address. “The encouragement of research … and the advancement of individual scholars, who by their excellence will advance the sciences they pursue, and the society where they dwell.”

    The mission laid out by Gilman remains the university’s mission today, summed up in a simple but powerful restatement of Gilman’s own words: “Knowledge for the world.”

    What Gilman created was a research university, dedicated to advancing both students’ knowledge and the state of human knowledge through research and scholarship. Gilman believed that teaching and research are interdependent, that success in one depends on success in the other. A modern university, he believed, must do both well. The realization of Gilman’s philosophy at Johns Hopkins, and at other institutions that later attracted Johns Hopkins-trained scholars, revolutionized higher education in America, leading to the research university system as it exists today.

     