Tagged: WIRED

  • richardmitnick 10:54 am on April 3, 2022 Permalink | Reply
    Tags: "Peptides on Stardust May Have Provided a Shortcut to Life", , , , , , WIRED   

    From WIRED: “Peptides on Stardust May Have Provided a Shortcut to Life” 

    From WIRED

    Apr 3, 2022
    Yasemin Saplakoglu

    The discovery that short peptides can form spontaneously on cosmic dust hints at more of a role for them in the origin of life, on Earth or elsewhere.

    The spontaneous formation of peptide molecules on cosmic dust in interstellar clouds could have implications for theories about the origin of life. Illustration: Kristina Armitage/Quanta Magazine

    Billions of years ago, some unknown location on the sterile, primordial Earth became a cauldron of complex organic molecules from which the first cells emerged. Origin-of-life researchers have proposed countless imaginative ideas about how that occurred and where the necessary raw ingredients came from. Some of the most difficult to account for are proteins, the critical backbones of cellular chemistry, because in nature today they are made exclusively by living cells. How did the first protein form without life to make it?

    Scientists have mostly looked for clues on Earth. Yet a new discovery suggests that the answer could be found beyond the sky, inside dark interstellar clouds.

    Last month in Nature Astronomy, a group of astrobiologists showed that peptides, the molecular subunits of proteins, can spontaneously form on the solid, frozen particles of cosmic dust drifting through the universe. Those peptides could in theory have traveled inside comets and meteorites to the young Earth—and to other worlds—to become some of the starting materials for life.

    The simplicity and favorable thermodynamics of this new space-based mechanism for forming peptides make it a more promising alternative to the known purely chemical processes that could have occurred on a lifeless Earth, according to Serge Krasnokutski, the lead author on the new paper and a researcher at The MPG Institute for Astronomy [MPG Institut für Astronomie](DE) and The Friedrich Schiller University Jena [Friedrich-Schiller-Universität Jena](DE). And that simplicity “suggests that proteins were among the first molecules involved in the evolutionary process leading to life,” he said.

    Whether those peptides could have survived their arduous trek from space and contributed meaningfully to the origin of life is very much an open question. Paul Falkowski, a professor at the School of Environmental and Biological Sciences at Rutgers University, said that the chemistry demonstrated in the new paper is “very cool” but “doesn’t yet bridge the phenomenal gap between proto-prebiotic chemistry and the first evidence of life.” He added, “There’s a spark that’s still missing.”

    Still, the finding by Krasnokutski and his colleagues shows that peptides might be a much more readily available resource throughout the universe than scientists believed, a possibility that could also have consequences for the prospects for life elsewhere.

    Cosmic Dust in a Vacuum

    Cells make the production of proteins look easy. They manufacture both peptides and proteins extravagantly, empowered by environments rich in useful molecules like amino acids and their own stockpiles of genetic instructions and catalytic enzymes (which are themselves typically proteins).

    But before cells existed, there wasn’t an easy way to do it on Earth, Krasnokutski said. Without any of the enzymes that biochemistry provides, the production of peptides is an inefficient two-step process that involves first making amino acids and then removing water as the amino acids link up into chains in a process called polymerization. Both steps have a high energy barrier, so they occur only if large amounts of energy are available to help kick-start the reaction.

    Because of these requirements, most theories about the origin of proteins have either centered on scenarios in extreme environments, such as near hydrothermal vents on the ocean floor, or assumed the presence of molecules like RNA with catalytic properties that could lower the energy barrier enough to push the reactions forward. (The most popular origin-of-life theory proposes that RNA preceded all other molecules, including proteins.) And even under those circumstances, Krasnokutski says that “special conditions” would be needed to concentrate the amino acids enough for polymerization. Though there have been many proposals, it isn’t clear how and where those conditions could have arisen on the primordial Earth.

    But now researchers say they’ve found a shortcut to proteins—a simpler chemical pathway that re-energizes the theory that proteins were present very early in the genesis of life.

    Last year in Low Temperature Physics, Krasnokutski predicted through a series of calculations that a more direct way to make peptides could exist under the conditions available in space, inside the extremely dense and frigid clouds of dust and gas that linger between the stars. These molecular clouds, the nurseries of new stars and solar systems, are packed with cosmic dust and chemicals, some of the most abundant of which are carbon monoxide, atomic carbon and ammonia.

    In their new paper, Krasnokutski and his colleagues showed that these reactions in the gas clouds would likely lead to the condensation of carbon onto cosmic dust particles and the formation of small molecules called aminoketenes. These aminoketenes would spontaneously link up to form a very simple peptide called polyglycine. By skipping the formation of amino acids, reactions could proceed spontaneously, without needing energy from the environment.

    To test their claim, the researchers experimentally simulated the conditions found in molecular clouds. Inside an ultrahigh vacuum chamber, they mimicked the icy surface of cosmic dust particles by depositing carbon monoxide and ammonia onto substrate plates chilled to minus 263 degrees Celsius. They then deposited carbon atoms on top of this ice layer to simulate their condensation inside molecular clouds. Chemical analyses confirmed that the vacuum simulation had indeed produced various forms of polyglycines, up to chains 10 or 11 subunits long.

    The researchers hypothesized that billions of years ago, as cosmic dust stuck together and formed asteroids and comets, simple peptides on the dust could have hitchhiked to Earth in meteorites and other impactors. They might have done the same on countless other worlds, too.

    The Gap From Peptides to Life

    The delivery of peptides to Earth and other planets “certainly would provide a head start” to forming life, said Daniel Glavin, an astrobiologist at NASA’s Goddard Space Flight Center. But “I think there’s a large jump to go from interstellar ice dust chemistry to life on Earth.”

    First the peptides would have to endure the perils of their journey through the universe, from radiation to water exposure inside asteroids, both of which can fragment the molecules. Then they’d have to survive the impact of hitting a planet. And even if they made it through all that, they would still have to go through a lot of chemical evolution to get large enough to fold into proteins that are useful for biological chemistry, Glavin said.

    Is there evidence that this has happened? Astrobiologists have discovered many small molecules, including amino acids, inside meteorites, and one study from 2002 discovered that two meteorites held extremely small, simple peptides made from two amino acids. But researchers have yet to discover other convincing evidence for the presence of such peptides and proteins in meteorites or samples returned from asteroids or comets, Glavin said. It’s unclear if the nearly total absence of even relatively small peptides in space rocks means that they don’t exist or if we just haven’t detected them yet.

    But Krasnokutski’s work could encourage more scientists to really start looking for these more complex molecules in extraterrestrial materials, Glavin said. For example, next year NASA’s OSIRIS-REx spacecraft is expected to bring back samples from the asteroid Bennu, and Glavin and his team plan to look for some of these types of molecules.

    The researchers are now planning to test whether bigger peptides or different types of peptides can form in molecular clouds. Other chemicals and energetic photons in the interstellar medium might be able to trigger the formation of larger and more complex molecules, Krasnokutski said. Through their unique laboratory window into molecular clouds, they hope to witness peptides getting longer and longer, and one day folding, like natural origami, into beautiful proteins that burst with potential.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 2:52 pm on March 1, 2022 Permalink | Reply
    Tags: "A New Super-High Satellite Will Eye Weather on Earth—and in Space", , , GOES stands for Geostationary Operational Environmental Satellites., GOES-T, , , WIRED   

    From WIRED: “A New Super-High Satellite Will Eye Weather on Earth—and in Space” 

    From WIRED

    Mar 1, 2022
    Ramin Skibba

    Photograph: Kim Shiflett/NASA.

    Today, the newest member of a family of storm-spotting satellites will head to space, carrying high-resolution cameras that will be used in real time to track everything from hurricanes and floods to wildfires and smoke, and even space weather. The GOES-T satellite is scheduled to blast off at 4:38 pm Eastern time—weather permitting, of course—on a United Launch Alliance Atlas V 541 rocket from Cape Canaveral in Florida.

    “It’s a very all-purpose spacecraft. Basically, any kind of good or bad weather, any kind of hazardous environmental condition, the cameras on GOES-T will see them,” says Pamela Sullivan, director of the GOES-R program at the National Oceanic and Atmospheric Administration, which together with NASA designed and built the new satellite. “The GOES satellites really help people every day, before, during and after a disaster.”

    The new satellite will be part of a pair of eyes that spy on North America—one looking west and the other looking east. GOES-T will focus on the western continental US, Alaska, Hawaii, Mexico, some parts of Central America, and the Pacific Ocean. Its sibling, which has been orbiting since 2016, covers the eastern continental US, Canada, and Mexico.

    NOAA has been maintaining this twin set of satellites (and sometimes, a triplet set) since the 1970s, retiring orbiters as they age and swapping new ones in. Once it’s in orbit, GOES-T will be renamed GOES-18, since it’s the 18th satellite in the program, and it will also be known as GOES-West, since it’s the west-looking eye. It will replace the satellite currently covering the west, which in 2018 developed a problem with its Advanced Baseline Imager, one of its most important instruments. A loop heat pipe system has been malfunctioning and not transferring enough heat from the electronics to the radiator. As a result, the heat has become a contaminant; at certain times, the infrared detectors become saturated, degrading their images.

    The older satellite isn’t useless, though. After GOES-T takes its place, it will be put in “standby mode” and maintained as an on-orbit spare, Sullivan says. Thirteen previous satellites have been retired, while two more remain in orbit as backups. The new satellite also isn’t the last. Eventually, another satellite (GOES-U) will follow it, likely to replace the east-looking satellite, ensuring that the dynasty stretches into at least the mid-2030s.

    GOES-T is an upgrade over its predecessors. It is the third member of the new generation of GOES spacecraft that come with improved versions of the Advanced Baseline Imager that can snap high-resolution photos of the entire western hemisphere every five minutes. It takes those images at 16 different spectral bands or “channels”—a red and a blue channel at visual wavelengths, and then 14 others that range from near-infrared to mid-infrared wavelengths. (Earlier GOES imagers only had five channels.) This allows researchers to pick their favorite channels to best map out wildfires, clouds, storms, smoke, dust, water vapor, ozone, and many other atmospheric phenomena.

    While most satellites fly a few hundred miles above the ground in the relatively crowded low Earth orbit, looping the globe every two hours or so, GOES-T will ascend to 22,000 miles—about a tenth of the way to the moon. In this sparsely populated area known as geostationary orbit, spacecraft orbit as fast as the world turns, allowing them to remain positioned over the same spot on the globe. That key feature allows the GOES satellites to continuously monitor weather, which can change quickly. (GOES stands for Geostationary Operational Environmental Satellites.)
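The altitude quoted above follows directly from Kepler’s third law: a satellite whose orbital period matches one sidereal day must sit at one particular radius. A quick sanity check using standard constants (none of these values come from the article beyond the ~22,000-mile figure):

```python
import math

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM, solved for orbital radius r.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # one sidereal day, s
R_EARTH = 6.378e6     # Earth's equatorial radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius from Earth's center
altitude_km = (r - R_EARTH) / 1e3
altitude_mi = altitude_km * 0.621371

print(round(altitude_km), round(altitude_mi))   # ~35786 km, ~22236 mi
```

The result, roughly 35,786 km (about 22,200 miles) above the equator, is the unique altitude at which a satellite keeps pace with Earth’s rotation, which is why every GOES spacecraft parks there.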

    “That is the number one big advantage of the GOES instruments,” says Amy Huff, an atmospheric scientist at the NOAA Center for Satellite Applications and Research. “It has really revolutionized the way we respond to fires and smoke.”

    With increasingly intense and destructive blazes in the western US, like the Dixie Fire in California, the Bootleg Fire in Oregon, and the Marshall Fire in Colorado, firefighters and other emergency management officials need real-time images, Huff says. Using combinations of GOES-T’s infrared channels, Huff’s colleagues will be able to continue their work tracking a fire’s location, intensity, size, and temperature all day and night. Huff’s team’s specialty is smoke: They monitor the movement of smoke plumes and air pollution, producing maps and other resources for the aviation industry and public health officials.

    Researchers will also use GOES-T to map clouds—not just the storm-generating cumulonimbus ones, but also wispy cirrus clouds. “That’s why I’m really excited to get GOES West replaced with GOES-T. It will then be providing information over the Pacific Ocean, which is very much a data void. And since most of our weather comes from the West, that’s a problem,” says Jason Otkin, an atmospheric scientist at the University of Wisconsin who frequently uses these satellites’ data. GOES-T will ultimately help improve weather forecasts across the US, he says.

    Researchers and meteorologists also like to take advantage of the satellites’ other instruments, like the Geostationary Lightning Mapper, which spots flashes of light by monitoring an area with a time resolution of 500 frames per second. With GOES-T’s predecessors, lightning-watching scientists have already broken world records, says Michael Peterson, an atmospheric scientist at The DOE’s Los Alamos National Laboratory, who frequently uses these satellite images to study the physics of lightning strikes. “We can see some rare cases where lightning can last not just one second but more than 10 seconds. It truly breaks the mold of what we think lightning can be capable of,” he says. By mapping lightning from space, he and his colleagues have also found giant flashes, some more than 450 miles long.

    GOES-T and its brethren also count as space weather trackers, Sullivan says, since some of their sensors are pointed upward. The new satellite will watch for the sun to fling giant blobs of charged particles, and track their impacts if they collide with the Earth’s magnetic field—a phenomenon often called a geomagnetic storm. The spacecraft comes equipped with two sun-focused ultraviolet and x-ray sensors, while another sensor and a magnetometer monitor the number of electrons and protons and the magnetic field around the satellite. Detecting a sudden fluctuation among those could be a sign that satellites and astronauts in lower orbits are about to get hit by a solar storm.

    As the GOES spacecraft beam down their images and data, NOAA makes them freely and publicly available, Huff says. “That’s exciting as well: People don’t have to go through emergency management officials; they can actually go to NOAA’s websites and look directly at the imagery themselves,” she says.

    On Tuesday, GOES-T is expected to launch under the gaze of its east-looking sibling, which will help monitor conditions from space. The weather looks good so far, though if for some reason the launch can’t happen during its planned two-hour window, NASA will try again the following afternoon. The launch will be aired live on NASA TV.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 11:53 am on February 13, 2022 Permalink | Reply
    Tags: "Symmetries Reveal Clues About the Holographic Universe", , , How might our universe emerge like a hologram out of a two-dimensional sheet? An infinitely distant “celestial sphere” could hold answers., In the bizarre curves of AdS space a finite boundary can encapsulate an infinite world., One of the most promising of those efforts treats gravity as something like a hologram—a three-dimensional effect that pops out of a flat two-dimensional surface., Our best theory of gravity describes it as bent spacetime., Physicists want to determine the rules for a CFT that can give rise to gravity in a world without the curves of AdS space., , , Quantum gravity reproduces the predictions of general relativity., Recent research results have given physicists hope that they’re on the right track., They’re looking for a CFT for flat space—a celestial CFT., WIRED   

    From WIRED: “Symmetries Reveal Clues About the Holographic Universe” 

    From WIRED

    Feb 13, 2022
    Katie McCormick

    How might our universe emerge like a hologram out of a two-dimensional sheet? An infinitely distant “celestial sphere” could hold answers.

    Researchers have long studied how gravity might emerge from a two-dimensional surface in hyperbolic spaces such as this one. In our own universe, the surface would be infinitely far away. Illustration: Kwok Wai Chung.

    We’ve known about gravity since Newton’s apocryphal encounter with the apple, but we’re still struggling to make sense of it. While the other three forces of nature are all due to the activity of quantum fields, our best theory of gravity describes it as bent spacetime. For decades, physicists have tried to use quantum field theories to describe gravity, but those efforts are incomplete at best.

    One of the most promising of those efforts treats gravity as something like a hologram—a three-dimensional effect that pops out of a flat two-dimensional surface. Currently, the only concrete example of such a theory is the AdS/CFT correspondence, in which a particular type of quantum field theory, called a conformal field theory (CFT), gives rise to gravity in so-called anti-de Sitter (AdS) space. In the bizarre curves of AdS space a finite boundary can encapsulate an infinite world. Juan Maldacena, the theory’s discoverer, has called it a “universe in a bottle.”

    But our universe isn’t a bottle. Our universe is (largely) flat. Any bottle that would contain our flat universe would have to be infinitely far away in space and time. Physicists call this cosmic capsule the “celestial sphere.”

    Physicists want to determine the rules for a CFT that can give rise to gravity in a world without the curves of AdS space. They’re looking for a CFT for flat space—a celestial CFT.

    The celestial CFT would be even more ambitious than the corresponding theory in AdS/CFT. Since it lives on a sphere of infinite radius, concepts of space and time break down. As a consequence, the CFT wouldn’t depend on space and time; instead, it could explain how space and time come to be.

    Recent research results have given physicists hope that they’re on the right track. These results use fundamental symmetries to constrain what this CFT might look like. Researchers have discovered a surprising set of mathematical relationships between these symmetries—relationships that have appeared before in certain string theories, leading some to wonder if the connection is more than coincidence.

    “There’s a very large, amazing animal out here,” said Nima Arkani-Hamed, a theoretical physicist at The Institute for Advanced Study in Princeton, New Jersey. “The thing we’re going to find is going to be pretty mind-blowing, hopefully.”

    Symmetries on the Sphere

    Perhaps the primary way that physicists probe the fundamental forces of nature is by blasting particles together to see what happens. The technical term for this is “scattering.” At facilities such as the Large Hadron Collider, particles fly in from distant points, interact, then fly out to the detectors in whatever transformed state has been dictated by quantum forces.

    If the interaction is governed by any of the three forces other than gravity, physicists can in principle calculate the results of these scattering problems using quantum field theory. But what many physicists really want to learn about is gravity.

    Luckily, Steven Weinberg showed [Physical Review Journals Archive] in the 1960s that certain quantum gravitational scattering problems—ones that involve low-energy gravitons—can be calculated. In this low-energy limit, “we’ve nailed the behavior,” said Monica Pate of Harvard University. “Quantum gravity reproduces the predictions of general relativity.” Celestial holographers like Pate and Sabrina Pasterski of Princeton University are using these low-energy scattering problems as the starting point to determine some of the rules the hypothetical celestial CFT must obey.

    They do this by looking for symmetries. In a scattering problem, physicists calculate the products of scattering—the “scattering amplitudes”—and what they should look like when they hit the detectors. After calculating these amplitudes, researchers look for patterns the particles make on the detector, which correspond to rules or symmetries the scattering process must obey. The symmetries demand that if you apply certain transformations to the detector, the outcome of a scattering event should remain unchanged.

    Just as quantum interactions can be translated into scattering amplitudes that then lead to symmetries, researchers working on quantum gravity hope to translate scattering problems into symmetries on the celestial sphere, then use these symmetries to fill out the celestial CFT rulebook.

    “We’re trying to just start from the basic ingredients of the dictionary,” said Pasterski, referring to the symmetries, “and then move up from there.”

    In November, a group led by Andrew Strominger of Harvard University published a paper [Physical Review Letters] that describes the “symmetry algebra” the celestial CFT must obey. The algebra dictates how different symmetry transformations combine to form new transformations. By studying the structure of the composition of the transformations, Strominger and his colleagues, including Pate, have managed to further constrain the potential CFT. They discovered that the group of symmetries on the celestial sphere obeyed a thoroughly studied and well-established algebra—one that has already appeared in certain string theories and is related to the description of well-known quantum systems such as the quantum Hall effect.

    “The fact that the structure you landed on is something that people have explored and played with before gives you encouragement that maybe there’s something to it,” said David Skinner, a theoretical physicist at The University of Cambridge.

    Infinite Issues

    When you have a theory that applies to an infinitely distant sphere, problems arise. Consider two particles that come together and scatter apart. If they scatter apart at any nonzero angle, by the time they reach the infinitely distant celestial sphere, they will also be infinitely far apart. The notion of distance breaks down. Our normal theories rely on locality, in which the strength of interactions between objects depends on their distance from one another. But if everything is infinitely far from everything else, the CFT must transcend locality.

    Even more perplexing: What is the concept of time on the celestial sphere, which is infinitely far away in both the past and the future? The notion has no meaning there.

    Arkani-Hamed considers the fact that concepts of space and time break down on the celestial sphere to be a feature, not a bug. It offers the potential to explain spacetime as an emergent property of a more fundamental theory.

    Others temper their enthusiasm. “I think it’s exciting, but I think there’s a long way to go,” said Skinner. “There are some things that I would say are major challenges to overcome.”

    Arkani-Hamed doesn’t disagree. “The whole thing is sort of grasping and figuring out what the question is. But the stakes are also similarly high.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 12:03 pm on January 2, 2022 Permalink | Reply
    Tags: "At the Dawn of Life Heat May Have Driven Cell Division", , , , , During cell division structural proteins and enzymes coordinate the duplication of DNA., For a protocell to grow before it divides it would have to increase not only the volume inside the cell but also the surface area of the surrounding membrane., Getting these processes right is crucial because errors can lead to daughter cells that are abnormal or unviable., , Protocells must have had some kind of heritable information they could pass down to daughter cells., The asymmetry in lipid membranes could play a role in primitive life., The energy produced by the primitive cellular metabolism would heat up the lipids on the inside of the membrane more quickly than those on the outside., The work is purely theoretical., WIRED   

    From WIRED: “At the Dawn of Life Heat May Have Driven Cell Division” 

    From WIRED

    Carrie Arnold

    A mathematical model shows how a thermodynamic mechanism could have made protocells split in two.


    Membrane-bound vesicles that were the forerunners of living cells may have divided under the influence of internally generated heat, according to a recent study. Video: Getty Images.

    An elegant ballet of proteins enables modern cells to replicate themselves. During cell division structural proteins and enzymes coordinate the duplication of DNA, the division of a cell’s cytoplasmic contents, and the cinching of the membrane that cleaves the cell. Getting these processes right is crucial because errors can lead to daughter cells that are abnormal or unviable.

    Billions of years ago, the same challenge must have faced the first self-organizing membranous bundles of chemicals arising spontaneously from inanimate materials. But these protocells almost certainly had to replicate without relying on large proteins. How they did it is a key question for astrobiologists and biochemists studying the origins of life.

    “If you delete all enzymes in the cell, nothing happens. They’re just inert sacks,” said Anna Wang, an astrobiologist at The University of New South Wales Sydney (AU). “They’re really stable, and that’s kind of the point.”

    However, in a recent paper in Biophysical Journal, Romain Attal, a physicist at The City of Science and Industry [Cité des Sciences et de l’Industrie](FR), and the cancer biologist Laurent Schwartz of the Paris Public Hospitals developed a series of mathematical equations that model how heat alone could have been enough to drive one important part of the replication process: the fission of one protocell into two.

    Attal thinks that the chemical and physical processes active in early life were probably quite simple, and that thermodynamics alone could therefore have played a significant role in how life began. He said that the kinds of basic equations he has been working on could spell out some of the rules that governed how life first emerged.

    “Temperature gradients are important to life,” Attal said. “If you understand a subject, you need to be able to write down its principles.”

    Flipping for Fission

    For primitive cells to divide themselves without complex protein machinery, the process would have needed a physical or chemical driver. “It’s really about stripping a cell down to its basic functions and thinking, ‘What are the basic physical and chemical principles, and how can we mimic that without proteins?’” Wang said.

    Figuring out these processes becomes more challenging when you consider that scientists still can’t agree on a definition of life in general, and of protocells specifically.

    What scientists do agree on is that protocells must have had some kind of heritable information they could pass down to daughter cells, a metabolism that carried out chemical reactions, and a lipid membrane isolating the metabolism and heritable information from the randomness in the rest of Earth’s primordial soup. Whereas the outside chemical world was inherently random, the partitioning provided by the lipid membrane could create an area of lower entropy.

    For a protocell to grow before it divides it would have to increase not only the volume inside the cell but also the surface area of the surrounding membrane. To create two smaller daughter cells with the same total volume as the parent cell would require additional lipids for their membranes, because their surface area would be larger relative to their volume. The chemical reactions needed to fuel the synthesis of these lipids would give off energy in the form of heat.
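The geometric claim above is easy to verify: because a sphere’s radius scales as the cube root of its volume and its area as the square of the radius, two daughter spheres holding half the parent’s volume each have a combined surface area 2^(1/3) ≈ 1.26 times the parent’s. A minimal check (the unit parent radius is arbitrary):

```python
import math

# Split a parent sphere into two daughters of equal volume.
R = 1.0                                   # parent radius (arbitrary units)
parent_area = 4 * math.pi * R**2

r = R / 2 ** (1 / 3)                      # daughter radius: volume halves, so r = R * (1/2)^(1/3)
daughters_area = 2 * (4 * math.pi * r**2)

ratio = daughters_area / parent_area      # = 2**(1/3), about 1.26
print(round(ratio, 4))
```

That extra ~26 percent of membrane is exactly the lipid synthesis the text says must be fueled by heat-releasing chemical reactions.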

    As Attal discussed these ideas with Schwartz, he began to wonder whether this energy was enough to drive early cell division. A search of the research literature revealed a study finding that mitochondria (the cell’s energy center, which began as a symbiotic bacterium billions of years ago) have a slightly higher temperature than the surrounding cell. Attal wanted to know whether that energy difference could be generated in protocells, and whether it was adequate to drive fission.

    He began sketching out a series of equations to model what might be happening. He started with a series of assumptions, such as that the protocell would be rod-shaped and that it had a double-layered membrane allowing nutrients to diffuse in and wastes to diffuse out.

    “It’s a very, very rough model,” he said. “I was surprised that it could be reduced to a single differential equation.”

    Attal realized that the energy produced by the primitive cellular metabolism would heat up the lipids on the inside of the membrane more quickly than those on the outside. Thermodynamics would then force the energetic inner lipids to “flip” to the outside, causing the outer membrane layer to expand at the expense of the inner layer. One easy solution to this imbalance would be for the cell to pinch together into two daughter cells. This pinching would occur at the middle of the parent cell, where it was hottest and the lipid movements were most pronounced.

    Too Small to Get Hot?

    The work is purely theoretical, but Attal said it can be tested experimentally by creating similar vesicles in the lab and measuring whether the temperature inside is different from the temperature outside.

    Wang says the work is important as a reminder that the asymmetry in lipid membranes could play a role in primitive life. However, both she and the biophysicist Paul Higgs of McMaster University (CA) are skeptical of some of the assumptions Attal made. They both pointed out that because cells and protocells are small, only minimal heat could be generated, and they questioned whether that temperature difference would be large enough to drive fission before the heat diffused across the membrane.

    Wang also has doubts about the proposed movement of the lipids between the inner and outer membrane. In modern membranes, lipids don’t flip-flop readily between the inside and the outside because their molecules have complex structures. That may not be the case for the simpler lipids that early life is thought to have used. When scientists create vesicles from these compounds in the lab, “they move around like crazy. You can’t stop it from happening,” she said.

    Higgs questioned Attal’s assumption that the cells would be rods. That shape requires specific proteins to stiffen the membrane, which protocells almost certainly lacked. As a result, they would be spherical, not rod-shaped.

    “I don’t see how you can maintain a rod shape without a hard wall,” he said.

    Neither of these issues means that heat didn’t play a role in early cell division, only that Attal’s mathematical model may not be the most accurate, Wang says. Still, Claudia Bonfio, a biochemist at the University of Strasbourg in France, says that the paper adds to the literature on early life because “it’s a nice starting point for experiments. We too often forget that reactions consume and produce heat, which could have an effect on things like fission.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 12:40 pm on December 28, 2021
    Tags: "The Quest to Trap Carbon in Stone—and Beat Climate Change", WIRED

    From WIRED: “The Quest to Trap Carbon in Stone—and Beat Climate Change”

    From WIRED

    Vince Beiser

    Climeworks’ direct air capture plant, dubbed “Orca,” scrubs 4,000 tons of carbon per year from the air and is the largest test of the technology to date. Photograph: Tanya Houghton.

    On a barren lava plateau in Iceland, a new facility is sucking in air and stashing the carbon dioxide in rock. The next step: Build 10,000 more.

    It was undoubtedly the most august gathering ever convened on the uninhabited lava plains of Hellisheidi, Iceland. Some 200 guests were seated in the modernist three-story visitors’ center of a geothermal power plant—the country’s prime minister and an ex-president, journalists from New York and Paris, financiers from London and Geneva, and researchers and policy wonks from around the world. Floor-to-ceiling windows looked out on miles of moss-carpeted rock, luminously green in the September morning sunlight. Transmission towers marched away to the horizon, carrying energy from the power plant to the capital, Reykjavik, half an hour’s drive away.

    The occasion: the formal unveiling of the world’s biggest machine for sucking carbon out of the air. The geothermally powered contraption represented a rare hopeful development in our climatically imperiled world—a way to not just limit carbon emissions but shift them into reverse. Prime Minister Katrín Jakobsdóttir declared it “an important step in the race to net zero greenhouse gas emissions.” Former president Ólafur Ragnar Grímsson predicted that “future historians will write of the success of this project.” Julio Friedmann, a prominent carbon expert at Columbia University (US), hailed it as “the birth of a new species” of planet-saving technology.

    Jan Wurzbacher and Christoph Gebald, cofounders of Climeworks, the company behind the carbon capture plant, strode up to the front of the room together. The fresh-faced Germans, both 38, were dressed in nearly identical white shirts and blue suits. They spoke in well-rehearsed, Teutonically accented English. “This year could turn into a turning point in how climate change is perceived,” said Wurzbacher (slightly taller, stubbly brown beard). “Thirty years down the road, this can be one of the largest industries on the planet,” enthused Gebald (slightly broader, curly brown hair).

    A nearby geothermal plant provides clean power to Climeworks’ carbon capture facility. Photograph: Tanya Houghton.

    These are some mighty bold claims for a small industrial plant in a tiny, peripheral country. Climeworks’ facility is capable of pulling down only about 4,000 tons of carbon per year—an eye-dropper’s worth of the 40 billion tons the world emits annually. The plant uses a technique known as direct air capture, in which enormous fans suck in vast amounts of air from our despoiled atmosphere and run it over chemical-laden filters. It’s similar in principle to the tech that factories and refineries use to scrub CO2 from their exhaust streams. But what’s potentially much better about direct air capture is that it can be deployed anywhere, and it removes carbon already in the atmosphere, whether it was belched out 10 years ago by a cement factory in Alabama or last week by a pickup truck in Zanzibar.

    True believers have been trying to turn the idea into reality for at least 20 years. For most of that time they were ignored by investors, dismissed by scientists, and regarded with suspicion by environmentalists, who worry the technology will give businesses license to keep on polluting. Now the ground is shifting rapidly. The Climeworks facility is just the first of a handful of large direct air capture plants slated to go up in the next several years, propelled by nine-figure investments and the support of powerful allies, including in the US government.

    An inflection point came in 2018, when the UN’s Intergovernmental Panel on Climate Change declared that the world will need to both cut new carbon emissions and somehow start reducing the amount of CO2 already up in the air—and that direct air capture was a promising approach. The following year, Climeworks’ top competitor, Canada-based Carbon Engineering, raised over $80 million in private investment. In 2020, Climeworks pulled in more than $100 million. Several newer startups have also leapt into the arena, and for what it’s worth, in December Elon Musk tweeted that SpaceX is starting its own atmosphere-scrubbing program.

    But direct air capture faces huge obstacles. Despite carbon’s enormous impact down at ground level, it is barely a trace element in the air—only about 415 out of every 1 million atmospheric molecules are CO2. Imagine putting a single drop of ink into an Olympic-size swimming pool; the challenge of direct air capture is akin to taking that drop back out. The cost is staggering: To pull in any meaningful amount of carbon requires armies of giant machines and titanic amounts of energy to run them. Then there is the question of how to get all that energy. If you burn carbon-spewing fossil fuels to run your carbon-capturing machines, you’re kind of defeating the point. Finally, there is the carbon itself; once you’ve gathered up a few million tons of CO2, what do you do with it?
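    The dilution problem can be put in rough numbers. A back-of-envelope estimate, using the 415-parts-per-million figure above together with standard values for air density and molar masses (the calculation and its assumed constants are mine, not from the article):

    ```python
    # Back-of-envelope: how much air must be processed to collect one
    # ton of CO2 at 100% capture efficiency? Assumed figures: 415 ppm
    # CO2 by volume, air density ~1.2 kg/m^3, molar masses ~44 g/mol
    # for CO2 and ~29 g/mol for dry air.
    CO2_PPM = 415
    AIR_DENSITY = 1.2            # kg per cubic meter
    M_AIR, M_CO2 = 28.97, 44.01  # g/mol

    # Convert the volume fraction to a mass fraction of CO2 in air
    mass_frac = CO2_PPM / 1e6 * (M_CO2 / M_AIR)   # roughly 0.063%

    # Air mass and volume that contain 1 metric ton (1,000 kg) of CO2
    air_kg = 1000 / mass_frac
    air_m3 = air_kg / AIR_DENSITY

    print(f"{air_m3:,.0f} m^3 of air per ton of CO2")  # on the order of a million m^3
    ```

    Well over a million cubic meters of air per ton captured, before accounting for imperfect capture, which is why the fans and the energy bill loom so large.
    
    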

    Oh, and one more thing to consider: Among the technology’s first beneficiaries might be oil and gas companies.

    Klaus Lackner is the guy who started it all. One summer evening back in 1992, Lackner, then a particle physicist at DOE’s Los Alamos National Lab (US), was in his living room, knocking back beers with a friend and lamenting how no one seemed to be going after big, audacious science projects anymore. As the night wore on, they came up with one of their own—a system of solar-powered machines that would autonomously harvest raw materials from common dirt, use them to build more machines, and then perform useful tasks such as sucking carbon out of the atmosphere. The replico-bots didn’t pan out, because—well, do I really need to explain? But the idea of capturing atmospheric carbon took root in Lackner’s head. The basic technology existed; submarines and the International Space Station had systems for scrubbing carbon from air, developed to keep their inhabitants from suffocating. Several years later, Lackner and some colleagues published a research paper on doing the same in the open air. They concluded that, at least from a technical perspective, “there are no fundamental obstacles.”

    Lackner moved on to Columbia University and took his idea with him. Concern over climate change was mounting, and polluters were coming under growing public pressure to scrub their smokestack emissions. Lackner was among the few calling for a different approach, one focused more on the end of the process than the beginning. “Roughly half of our emissions come from distributed sources,” like cars, says Lackner, now a cheerily verbose, silver-haired professor at The Arizona State University (US), where he runs the Center for Negative Carbon Emissions. Rather than chasing the long tail of emitters, “we need to figure out how to get rid of CO2.”

    In 2004, backed by $5 million from the founder of Lands’ End, Lackner helped to launch Global Research Technologies, the first serious attempt to commercialize direct air capture. He and his colleagues spent several years building a small prototype and burned through all their money in the process. The company withered away, but Lackner’s faith did not. He has continued researching and talking up direct air capture ever since. The idea slowly spread to Europe, where Gebald and Wurzbacher learned about it as students.

    On the day after their big launch in Iceland, I’m sitting with the pair in a former fish factory in Reykjavik that is now a stylish startup space. Once again, the guys are dressed like twins, in collared shirts under neutral-­colored sweaters. It doesn’t end there. They were born three months apart, and so were their respective 3-year-old sons. Gebald, the (slightly) more emotive of the pair, oversees more of the marketing and sales these days, while the (slightly) more cerebral, detail-­oriented Wurzbacher handles operations and finance. When they do butt heads, Wurzbacher estimates, the average dispute lasts 30 to 60 minutes.

    The two met in October 2003, on their very first day as engineering undergrads at The Swiss Federal Institute of Technology in Zürich [ETH Zürich] [Eidgenössische Technische Hochschule Zürich](CH). They were both outdoor-­sports-loving, overconfident sons of engineers, drawn to the school as much by its proximity to Alpine ski slopes and mountain-biking trails as by its stellar academic reputation. At the orientation session for new students, they bonded over the difficulty they had understanding the Swiss dialect spoken by most other students. This is how they tell the story today: “What are you doing here?” Gebald asked his new acquaintance. “I came to study engineering. I want to have my own company someday,” Wurzbacher replied. “Cool!” said Gebald. “I have the same dream! Let’s do that!” They high-fived, and they have been working together ever since.

    Casting about for an idea to turn into a suitably grand business, they ran across a professor, Aldo Steinfeld, who was (and still is) researching ways to manufacture synthetic fuels, which involved combining carbon dioxide with water and eventually producing a kerosene-­like substance. Steinfeld had learned about Lackner’s work, and he thought direct air capture might be a clean way to get the carbon dioxide he needed for his fuels. He encouraged Wurzbacher and Gebald to help him try to build a machine to make it work. They liked the idea of combating climate change. Among other things, as avid skiers, they were shocked by how much the glacier at one of their favorite Swiss resorts had retreated over the years. Plus, there was potentially a lot of money to be made.

    Steinfeld took them on as graduate students. Wurzbacher and Gebald started by tinkering with the systems found in submarines, which use chemicals such as soda lime that lock onto CO2 molecules. Among other challenges, they had to come up with a mechanical design that could be scaled up to handle millions of cubic meters of air. Their first prototype was bare-bones: a couple of hoses running air over a heap of filters coated with carbon-grabbing nitrogen-hydrogen amines—derivatives of ammonia—sitting in an aluminum bucket. It wasn’t exactly world-changing. It took a full day to capture about half a gram of carbon dioxide. But it was a solid proof of concept. “We were proud, like we just landed on the moon,” Gebald says. A Swiss foundation chipped in some $300,000, and Climeworks spun off from the university in 2009. “It was a really cool time,” Gebald says. “We were skiing and dreaming. We were like, ‘Yeah, we’ve got a company! Yeah, we’re gonna solve it!’”

    At almost the same time, David Keith, a Harvard University (US) professor and adviser to Bill Gates, was getting Carbon Engineering into gear in Canada. Another well-­credentialed pair of experts was launching Global Thermostat in the US. Climeworks was the runt of the litter. “We were the young guys with zero track record, right out of uni,” Gebald says. But the competition was helpful in a way; the fact that more-established scientists were pitching the same wild-sounding idea gave it more credibility. Richard Branson even offered a $25 million prize to companies that could commercialize ways to extract greenhouse gases from the atmosphere. Nobody wound up winning, but Climeworks made it to the finals.

    In 2011, however, The American Physical Society (US), a leading academic physics organization, released a report that basically concluded direct air capture was an expensive waste of time. “It was published in the local newspaper, and our investors were all wealthy people and they typically read it,” Gebald says. The pair managed to scare up around $2 million, but their investors had a condition: By the year’s end, they wanted to see a prototype capable of capturing a kilogram of CO2 per day. Wurzbacher and Gebald scrambled to hack it together, trying out different setups and combinations of chemicals. By mid-December, they had a refrigerator-­sized box stuffed with filters and a tube that pulled in air through the room’s window. They tested the machine, which seemed to operate as planned—but the readout showed it capturing barely 200 grams. The guys were flummoxed.

    With the clock ticking down, they tried everything they could think of—double-checking the filters, rerunning parts of the process. Nothing helped. A few days before Christmas, Wurzbacher was staring forlornly at the machine, trying once again to figure out what was wrong. Then he heard a small, strange hissing sound; it was coming from one end of a tiny carbon-­dioxide-carrying hose that had popped loose. It turned out the machine was in fact capturing several kilograms of CO2—but the gas was leaking out just before it hit the sensor that would have recorded it.

    Meanwhile, Climeworks’ competitors were also moving forward, each opening small demonstration facilities by the mid-2010s. Later in the decade, Climeworks retook the lead by opening its first real-world plant, just outside of Zürich. The team installed 18 silver-colored, barrel-sized fans and filters on the roof of a waste incineration facility. “I stood in front of many, many tons of steel and thought, ‘We actually built it!’” Wurzbacher says. Waste heat from the incinerator helps run the system, which pulls in some 900 tons of CO2 per year from the atmosphere. Climeworks pipes the purified gas directly to a nearby greenhouse, where it helps the plants grow.

    The rooftop machine is a small operation, but its launch marked the first time anyone had managed to use direct air capture to gather carbon and then sell it. It brought Climeworks plenty of admiring press, a visit from Greta Thunberg, and some $30 million in investment. With that plant, “we blew away the first layer of criticism” by proving that the technology works, Gebald says. But there aren’t enough incinerators to heat thousands of direct air capture machines, and greenhouses can’t absorb gigatons of carbon dioxide. To level their system up to the next order of magnitude, Wurzbacher and Gebald still had to contend with the questions of where the energy would come from and where the captured carbon would go. Which brings us to Iceland—by way of Morocco.

    One evening in November 2016, Gebald was at a swanky party in Marrakesh thrown by the philanthropist Laurene Powell Jobs. He was feeling a little out of place among her guests, a gaggle of prominent climate researchers, activists, and policymakers who were in town for the COP conference, a major annual event in climate circles. Dutifully making the rounds, he met a gregarious man with richly coiffed white hair. It was Ólafur Ragnar Grímsson, the recently retired president of Iceland. Gebald gave him the spiel about Climeworks. “That’s fantastic!” Gebald recalls Grímsson saying. “I can store CO2 underground in my country. But we’ve been lacking the technology to capture it.”

    Grímsson was talking about Carbfix, a subsidiary of publicly owned Reykjavik Energy, which was developing a system to sequester carbon by injecting it into underground geologic formations. Reykjavik Energy also happens to operate a couple of nice, clean geothermal power plants. Grímsson made some introductions, and soon after, Gebald and Wurzbacher were hammering out a partnership with Carbfix.

    Icelandic officials may have been welcoming, but Iceland itself was less so. Wurzbacher and Gebald built a small experimental plant with a single intake fan near Hellisheidi in 2017, but in short order “it literally froze,” Gebald says. One day when the temperature dropped below zero, steam from the geothermal plant hit the machine’s bare metal, covering it in ice. Another time, a giant storm almost carried away the whole multiton structure. “We had to bolt it to the ground,” Gebald says.

    Four years and many hitches later, Climeworks’ new plant, dubbed “Orca” (after both killer whales and the Icelandic word for “energy”), came online. It sits in the verdant volcanic plain, a short drive from the visitors’ center where the opening ceremony was held. Eight olive-green steel boxes the size of shipping containers stand on concrete risers, connected by elevated pipes to a low white building that is the control center. The steel vessels, dubbed CO2 collectors, are fronted by large black fans that pull in rivers of air.

    Inside the collector boxes, the air runs over filters coated with amine-based sorbents and other materials that grab hold of the CO2 molecules. The carbon eventually saturates the filters, like water bloating a sponge. At that point, sliding gates seal off the air intake, and hot air is piped in from the control center to heat the filters to around 100 degrees Celsius, which releases the CO2. Vacuums then pull the free-floating molecules to the control center, where gleaming tanks, ducts, and other hardware compress the gas. It’s then piped over to a handful of igloo-sized geodesic steel domes a couple miles away, squatting on the plain like emergency housing for Martians.
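    The collector's routine described above is a classic temperature-swing adsorption cycle. As a schematic only (the phase names and sequencing here are my summary of the article; Climeworks' actual control logic is not public):

    ```python
    # Schematic of Orca's capture cycle as described in the article.
    # Phase names are illustrative, not Climeworks terminology.
    from enum import Enum, auto

    class Phase(Enum):
        ADSORB = auto()    # fans pull air over amine-coated filters
        SEAL = auto()      # sliding gates close once filters saturate
        DESORB = auto()    # filters heated to ~100 °C, releasing CO2
        COMPRESS = auto()  # vacuumed-off CO2 is compressed for piping

    def capture_cycle():
        """One pass through a collector's temperature-swing cycle."""
        return [Phase.ADSORB, Phase.SEAL, Phase.DESORB, Phase.COMPRESS]

    for phase in capture_cycle():
        print(phase.name)
    ```

    The desorption step is the costly one: as the article notes later, most of the plant's energy goes into that ~100 °C heating rather than into running the fans.
    
    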

    Orca’s giant arrays of fans pull in rivers of air. Photograph: Tanya Houghton.

    Photograph: Tanya Houghton.

    Carbfix technicians and machines handle the next steps. Inside the domes, a powerful motor pushes an incoming stream of water down into an injection well. The CO2 pipeline dumps the gas into the water. “It’s an underground SodaStream!” says Sandra Snæbjörnsdóttir, a Carbfix scientist with shoulder-length brown hair and earnest green eyes framed by tortoise-shell glasses who helped design the system. A few hundred meters down, the soda stream flows into the ground, where it reacts with basalt deposits that turn it into a solid mineral. In other words, the climate-warming carbon gas is turned into stone, like the villain in a fairy tale. “It’s essentially nature’s way of storing CO2,” says Snæbjörnsdóttir. There’s plenty of room for this tactic. Worldwide, there are probably enough suitable geologic formations to store trillions of tons of carbon.

    On the most basic level, the system does what it’s supposed to: Climeworks extracts carbon from the air, and Carbfix buries it underground. And they both use geothermal power, which produces only minor greenhouse emissions. But the capturing part is still tremendously energy intensive, and therefore expensive. The fans need electricity, of course, but the bulk of the power goes to heating up the carbon to liberate it from the sorbent.

    Jennifer Wilcox, a veteran carbon researcher and principal deputy assistant secretary at The Department of Energy (US), has estimated that to grab a million tons of carbon per year, a direct air capture plant could continuously draw on the order of 300 to 500 megawatts of power—enough to power some 30,000 American homes. (And remember, that power has to be clean; otherwise you’re generating at least as much carbon as you’re capturing.) Wurzbacher reckons that’s in the right ballpark. Climeworks engineers estimate it costs around $750 to capture a single ton of carbon. Independent estimates of various direct air capture approaches reach as high as $1,000 per ton. If the industry were to grow significantly, those costs would almost certainly fall. Components such as the collector boxes would become cheaper and easier to make, and the energy efficiency could improve. Climeworks and Carbon Engineering, along with several outside experts, believe they can get down to $100 a ton.

    Inside the geodesic domes, carbon dioxide gas mixes with water and flows into the ground, where it reacts with basalt. Photograph: Tanya Houghton.

    But even if that bears out, multiply $100 by even a single gigaton—barely enough to make a dent in our annual emissions—and you’re talking about $100 billion. (The National Academy of Sciences has estimated that by 2050 we need to be removing at least 10 gigatons of carbon. Every year.) That’s on top of the hundreds of billions of dollars that would be required to build the plants themselves.

    Wurzbacher and Gebald aren’t expecting to cover those costs by selling carbon to greenhouses. Nor by using it for synthetic fuels, which is still one of their sidelines. The big money, they figure, lies in selling carbon sequestration to the hundreds of corporations, cities, and other entities that have pledged to reduce their emissions. Insurance giant Swiss Re, Microsoft, Stripe, the Economist Group, and Audi (not to mention Coldplay) have already signed up to pay Climeworks millions of dollars to bury carbon for them.

    Meanwhile, on the other side of the world, Climeworks’ principal rival is racing to build a facility that will also enable a giant corporation to bury carbon—but for quite a different purpose.

    Steve Oldham, a middle-aged Brit from Manchester, is CEO of that rival company, Carbon Engineering. I visited him last summer at the company’s headquarters in the almost unbearably scenic town of Squamish, British Columbia, Canada. It sits between majestic, waterfall-lined mountains and a fjord-like inlet of the Pacific Ocean. I arrived on a warm morning, in the run-up to an epochal heat wave. Three days after my visit, the town of Lytton, a couple of hours away, was hit with the highest temperature ever recorded in Canada. The next day was even hotter, and the next hotter again. The day after that, Lytton caught fire and burned to the ground. Hello, climate change.

    We sat in Oldham’s office in a trailer on Carbon Engineering’s site, windows looking out on the mountains. He wore a cornflower-blue short-sleeved button-up and gray slacks. A software engineer by training, he came to Carbon Engineering in 2018 from a Canadian space-tech company. That same year, the IPCC report endorsing direct air capture came out, and founder David Keith published a research paper mapping out how, given certain design choices and energy prices, Carbon Engineering’s capture costs could be brought down as low as $94 a ton. (Keith is still on the company’s board but isn’t involved in day-to-day operations.) Since then, the company has been on a roll. Carbon Engineering has raised $160 million. In the past three years, its staff has nearly quadrupled, to a total of 146.

    The demonstration plant set up in 2015 is still there, a cobbled-together collection of machines inside a beat-up, corrugated-metal building inherited from the chemical company that used to occupy the site. Powered largely by natural gas, the machine sucked in about a ton of carbon per day. When I visited, a construction crew was working on a bigger facility that is expected to come fully online in 2022.

    Oldham and I put on hard hats, steel-toed boots, and garish hi-viz vests to tour the place, at times shouting over the roar of diesel-powered construction machinery and clanging hammers, the acetic smell of welding drifting through the air. We clambered up three stories of steel stairs to the top of an air intake tower crowned with a giant fan. From there we looked down on the hodgepodge of tanks, walkways, ladders, and ducts, freshly painted in bright blues and yellows and the company’s signature shade of fuchsia. The plant, which will capture only about 1,000 tons of carbon per year, will serve as an experimental lab for much larger facilities coming soon. First up: a 1 million-ton-per-year plant that’s slated to break ground in Texas in 2022. Systems in Scotland and Norway are in the design phases and will capture 500,000 to 1 million tons per year.

    Carbon Engineering’s tech works on the same basic principles as Climeworks’, but the two companies have very different business models. That million-ton plant in Texas is a partnership with a subsidiary of Occidental Petroleum, a major oil and gas company based in Houston. Oxy, as it’s commonly known, plans to inject the captured carbon into the ground to push more oil into its wells, a process known as enhanced oil recovery. The CO2 will stay underground—but putting it there will drive more fossil fuels into the maw of the American economy, which will belch them back out as greenhouse gases. In other words, the plant will capture carbon and use it to help put more carbon into the air.

    Back in Oldham’s office after our tour, I ask him: Doesn’t that seem counterproductive? “We get this criticism a lot,” he tells me, leaning back in his chair.

    “I’m a pragmatist,” he says. “We have to solve this problem, we have to get climate change figured out.” It “makes a lot of sense,” he adds, to get the energy sector involved. Chevron has also invested in Carbon Engineering, and ExxonMobil has a partnership with Global Thermostat. Fossil fuel executives are among the rare people willing to pay for the machines and the CO2 they capture, no doubt because it helps them extract more oil while scoring public-relations points. What’s more, those companies are already set up to wrangle large amounts of carbon dioxide—they’ve got pipelines to shuttle it around, knowledge of where favorable geological formations are, and experience with putting the stuff into the ground. Oldham says the Texas plant’s energy will come mainly from dedicated solar or wind plants. The net result is “carbon-free fossil fuel,” he says. “We’re pulling as much CO2 out of the air as is contained in the crude that comes up.” He doesn’t expect to stick with enhanced oil recovery forever. Like Climeworks, Carbon Engineering is also trying to spin captured carbon into synthetic fuel. But in the meantime, Oldham needs customers, and the world still runs on oil. “If we can make fossil fuel carbon-free,” he asks, “why is that a bad thing?”

    It is a logically sound argument. But it is shoulder-shrugging logic. It could be a bit like funding a drug rehab clinic by renting space to a pill mill. Climeworks has staked out a different position: The company won’t get involved with enhanced oil recovery, period. “We want to change something substantially with how we fight climate change,” Wurzbacher says; to him, working with oil companies is not substantial enough. Unimpressed by this distinction, many environmentalists condemn the whole field of direct air capture. They argue that it undercuts efforts to shrink greenhouse gas emissions by dangling the illusion that we can keep burning fossil fuels and simply vacuum up their CO2. In July, more than 500 groups signed an open letter to American and Canadian political leaders declaring carbon capture “a dangerous distraction.”

    Everyone I spoke to in the direct air capture industry says that they, too, believe the world needs to cut CO2 emissions as deeply as possible. But that will take time, and by now, there’s already so much CO2 in the air that even if we magically quit burning all fossil fuels tomorrow, the planet would continue to feel the effects of climate change. What’s more, renewables won’t solve all our emissions problems anytime soon: Large airplanes can’t yet run on electric batteries, and cement production generates CO2 as a byproduct, for instance. “We’re at a stage where avoiding carbon is no longer enough,” says Wilcox, the Energy Department official. “We’re going to have to be taking it back out of the atmosphere.”

    There are other ways we might do that—we might plant billions of trees or spread out tons of minerals, such as olivine, that bind to carbon in the air. Those strategies have significant costs and risks of their own, of course. Among other things, trees can burn and re-release all their carbon, and mining and crushing minerals eats up a lot of energy. No single method is potent enough to capture the 10 gigatons of carbon dioxide per year The National Academy of Sciences (US) prescribes. We’ll need to deploy several. But which ones?

    For direct air capture to have a real impact, the industry has to find a way to expand at a stupefying rate. Climeworks, Carbon Engineering, and their ilk need to build thousands of plants to capture even a few gigatons of carbon dioxide. That’s not impossible, but it is a very tall order. Most countries don’t penalize dumping carbon into the atmosphere, so business leaders have little incentive, beyond the goodness of their hearts, to spend billions to clean up their emissions.

    Klaus Lackner, the direct air capture pioneer, thinks we should treat carbon emissions the way we do sewage or municipal garbage: as a waste product to be cleaned up, perhaps with taxpayer funds. Support of that sort is starting to emerge. Canada is one of Carbon Engineering’s investors, and the European Union is backing Climeworks. The United Kingdom has pledged up to about $125 million for direct air capture research. Until recently, the US provided only piddling support, but in August the Department of Energy doled out $24 million in research grants, and the Biden administration’s infrastructure law allocates $3.5 billion for the construction of four 1 million-ton direct air capture “hubs” around the country.

    Government incentives can also push polluters to clean up their atmospheric mess. American companies are eligible for a federal tax credit of up to $50 for every ton of carbon they sequester, an amount Congress may soon boost; California offers additional credits. That’s helpful, but it still doesn’t come near to covering the current costs of paying a direct air capture company to do that sequestering.
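    The mismatch described above is easy to quantify with the article's own numbers (the comparison itself is mine):

    ```python
    # Gap between the US federal sequestration tax credit and direct
    # air capture costs, using figures cited in the article.
    CREDIT = 50          # $/ton federal tax credit for sequestration
    COST_TODAY = 750     # $/ton, Climeworks' current capture estimate
    COST_TARGET = 100    # $/ton, the industry's long-run goal

    gap_today = COST_TODAY - CREDIT    # credit covers under 7% of today's cost
    gap_target = COST_TARGET - CREDIT  # even at target, half the cost is uncovered

    print(f"uncovered today: ${gap_today}/ton; at target: ${gap_target}/ton")
    ```
    
    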

    At the end of the day, direct air capture may turn out to be impractical or unsustainable or less effective than other tactics for removing carbon from the atmosphere. We need to find out, and quickly. If the real-world results from facilities such as Orca show the tech can take a serious bite out of atmospheric CO2 at a cost somewhere below insane, we should pour money into getting more built, ASAP. If they don’t, we should pour money into planting trees or spreading minerals or whatever other techniques work better. (Need I add that we should also be moving full-speed from fossil fuels to renewables?)

    All of which would require tremendous public investment in technologies that might not pay off. It’s worth remembering that we make gambles like that all the time. In the past year and a half, for instance, the United States has invested billions into developing Covid vaccines, many of which didn’t pan out.

    We make those kinds of investments when we believe the well-being of the entire nation is in danger. We don’t wait around for a market to develop when we’re confronted with a crisis that imperils millions of lives. We pulled out all the stops to fight an airborne virus; we need to do the same to fight an even worse threat that’s also carried in the air.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 9:11 am on November 15, 2021 Permalink | Reply
Tags: "NASA Tries to Save Hubble- Again", WIRED

From WIRED: “NASA Tries to Save Hubble- Again”

    From WIRED


    The space telescope’s latest hardware problem has kept it offline for two weeks, raising concerns that the decades-old spacecraft is running out of time.

    When engineers encounter an unknown problem, they’re meticulous. The slow process is designed to protect Hubble’s systems so it can assist scientists for as long as possible. Photograph: NASA.

The Hubble Space Telescope, one of the most famous telescopes of the 20th and 21st centuries, has faltered once again. After a computer hardware problem arose in late October, NASA engineers put Hubble into a coma, suspending its science operations as they carefully attempt to bring its systems back online.

    Engineers managed to revive one of its instruments earlier this week, offering hope that they will end the telescope’s convalescence as they restart its other systems, one at a time. “I think we are on a path to recovery,” says Jim Jeletic, Hubble’s deputy project manager.

The problem began on October 23, when the school bus-sized spacecraft’s instruments didn’t receive a standard synchronization message generated by its control unit. Two days later, NASA engineers saw that the instruments had missed multiple such messages, so they put them in “safe mode,” powering down some systems and shuttering the cameras.

    Some problems are fairly easy to fix, like when a random high-energy particle hits the probe and flips a bit on a switch. But when engineers encounter an unknown problem, they’re meticulous. The slow process is designed to protect Hubble’s systems and make sure the spacecraft continues to thrive and enable scientific discovery for as long as possible. “You don’t want to continually put the instruments in and out of safe mode. You’re powering things on and off, you’re changing the temperature of things over and over again, and we try to minimize that,” Jeletic says.

    In this case, they successfully brought the Advanced Camera for Surveys back online on November 7.

The Advanced Camera for Surveys (ACS) on the NASA/ESA Hubble Space Telescope.

    It’s one of the newer cameras, installed in 2002, and it’s designed for imaging large areas of the sky at once and in great detail. Now they’re watching closely as it collects data again this week, checking to see whether the error returns. If the camera continues working smoothly, the engineers will proceed to testing Hubble’s other instruments.

Hubble has had its share of hiccups over its long and productive career, during which it has documented everything from ancient galaxies to the birth and death of nearby stars. It launched in 1990, just a few months after the fall of the Berlin Wall, and it was deployed by the crew aboard the space shuttle Discovery. It now orbits about 340 miles above the Earth. On five occasions since its deployment, astronauts on NASA shuttles have conducted servicing missions to repair and upgrade its systems, boosting the impressive longevity of the telescope, which was originally expected to last only about a decade. Astronauts aboard the shuttle Atlantis completed the final such mission in May 2009, when they repaired its spectrograph, among other things. Since then, all other reboot attempts have been conducted from Earth; engineers are no longer able to replace the telescope’s hardware.

    Hubble’s current glitch isn’t unprecedented. In fact, it’s the second one this year. In July, engineers put the telescope’s instruments in safe mode for about a month when the payload computer, which coordinates and monitors the science instruments, went offline. When they started using a backup power unit, they were able to make the science instruments operational again.

    Jeletic and his team also try to anticipate potential mishaps. For example, they found that the thin wires Hubble’s gyroscopes depend on gradually corrode and break, and three of its six gyros have failed. Without gyros, Hubble can’t target anything properly. But on the last servicing mission, astronauts replaced the gyros and enhanced the wires so that they can’t corrode, solving the problem.

    Nevertheless, each new hitch inevitably raises concerns about the aging telescope, which has been instrumental in so many astronomical accomplishments, including pinning down the age of the universe and discovering the smaller moons of Pluto. “I think it’s been utterly transformational,” says Adam Riess, an astronomer at The Johns Hopkins University (US) in Baltimore. He shared the 2011 Nobel Prize in Physics for showing how measurements of exploding stars, or supernovas, reveal the accelerating expansion of the universe, a project that benefited from Hubble data.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    To this day, the telescope continues to be oversubscribed by at least fivefold, Riess says, meaning astronomers have more than five times as many proposals for using Hubble as there is available telescope time.

    The space telescope has also served as an educational tool and kindled public interest in space science for a whole generation. “Everybody knows Hubble,” says Jeyhan Kartaltepe, an astronomer at The Rochester Institute of Technology (US), whose work on multiple galaxy surveys makes extensive use of Hubble images. “It has become a household name. People enjoy reading articles about what Hubble has discovered, and they enjoy seeing the pictures. I think people have an immediate association of Hubble with astronomy.”

    Hubble’s latest hardware challenges come just a month before its successor, the James Webb Space Telescope, is scheduled to launch into orbit.

The NASA/ESA/Canadian Space Agency James Webb Space Telescope, annotated. Originally scheduled for launch in October 2021, delayed to December 2021.

    Like its iconic predecessor, the new telescope will collect troves of spectacular images, though it’s designed to probe wavelengths more in the infrared range, allowing it to penetrate dusty parts of galaxies and stellar nebulae. Riess expects it to be similarly popular with astronomers and with the public.

    Hubble has easily surpassed its expected lifespan, and the same goes for NASA’s Chandra X-ray Observatory, which launched in 1999 and remains operational, although it was designed to last only five years.

The NASA Chandra X-ray Observatory.

    This is a good sign for Webb, similarly planned for a five-year lifespan. Unlike Hubble, however, it will orbit much farther away, making it inaccessible to astronauts. That means any problems that arise will have to be fixed remotely.

    But Hubble helped set the stage for its successor. For example, after Hubble launched, engineers realized that its mirror wasn’t curved properly, initially resulting in blurry images. Webb’s design allows for engineers to adjust the curvature remotely if an error like that crops up.

    Astronomers appreciate the hard work of Hubble’s engineers and operators. “Their dedication to keep on rescuing the telescope from all its fits of pique and changes of mood is fantastic. I’m so proud of them backing the scientists who are using the data,” says Julianne Dalcanton, an astronomer at The University of Washington (US) who has used Hubble frequently throughout her career, including to map Andromeda, our galactic neighbor.

    Andromeda Galaxy. Credit: Adam Evans.

    She, Kartaltepe, and other astronomers look forward to a time when both Hubble and Webb are in the sky, taking observations together, especially as they’ll learn different things from the telescopes’ respective instruments and wavelength coverage.

    While Jeletic and his team don’t yet know when Hubble will be back online, he expects all systems to eventually be up and running once again. “Some day Hubble will die, like every other spacecraft,” he says. “But hopefully that’s still a long ways off.”

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 9:43 am on November 6, 2021 Permalink | Reply
Tags: "How to Prepare for Power Outages", WIRED

From WIRED: “How to Prepare for Power Outages”

    From WIRED

    Tushar Nene

    Whether there’s a flood or a fallen tree, your power will go out eventually. Here’s how to prepare for an outage that lasts minutes, hours, or days.

    Photograph: Scott Heins/Getty Images.

I live in the Philadelphia area, and that puts me in the direct line of fire for two major water-type attacks. We get the remnants of hurricanes in the summertime and what’s known as nor’easters in the winter. (For those not from the Northeast, that’s a cyclone of cold frozen hatred that hovers up our coast.) Sure, they each bring their own brand of natural strife, but they also make us vulnerable to every geek’s nightmare: the dreaded power outage. And since my place fully runs on electricity (no gas or oil), I’ve had to develop a playbook for those dark times.

Whether it’s feet of snow or downed power lines, we need our electricity. Having been a Cub Scout as a lad, I am thankfully well prepared, but I realize that there are probably many people out there who aren’t. This guide is for you to bookmark forever.


    The best all-around solution is also the most expensive. You can get your home rigged with a built-in generator that will take over when your main power goes out.

    For full-home protection, a generator will set you back a few thousand dollars. It may be pricey, but it is still a viable option for those who’ve got the scratch, and pretty much solves everything all in one go. You could invest in a portable generator to save money, but the money you save comes at the cost of how long and how many devices it can power. Oh, and regardless of what you do, make sure to follow these generator safety tips from The Department of Energy (US).

    Preventative Measures: Protect Your Electronics

So first off—protecting your electronics isn’t paranoia. I’ve seen it all in my IT pro day job, including boxes that fry for no apparent reason. Shielding your gear from the spikes or surges a power outage may bring is important. You may rely on cloud services, but your desktop workstation or gaming rig still needs you to look out for it. Forget having to redownload your stuff again—replacing hardware, especially these days, is out of control. You may need to trade in your first-born to replace a video card (and not even one that does ray tracing).

    Spikes happen. In rough weather power lines can fall from the weight of ice and snow, felled trees can cause massive damage, and transformers can pop in a glittering array of sparks.

What does that mean? That $5.99 surge protector you bought and plugged your computer/TV/game console into isn’t really doing you a lot of good. Instead of cheap surge protectors, battery backups or UPS (uninterruptible power supply) units are a far better choice to protect your hardware. In addition to shielding against spikes and surges, a UPS picks up the load for everything plugged into it when power drops. This gives you a window of time to shut down your equipment properly without the risk of it going up in cinders or losing any data.

    In IT we use massive ones to make sure servers and other large-scale devices stay up during power issues, but you can buy smaller home models on the cheap to do the same. For a standard user’s computer system (plus monitor and printer), a 450 VA or 650 VA UPS unit should do just fine and will set you back south of $100. The more stuff you plug into it, the higher VA rating you want. For a modern gaming rig, you’re probably looking at something more in the 1200 to 1500 VA range to keep it safe. Which is still only around $200.
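If you want to sanity-check a UPS purchase before buying, the arithmetic is easy to script. Here’s a minimal Python sketch; the wattages and the 0.6 power factor are illustrative assumptions, not specs for any particular unit (UPS models are rated in volt-amperes, which run higher than the real watt draw):

```python
# Rough UPS sizing: convert a combined watt draw into a minimum VA rating.
# Assumes a 0.6 power factor (typical for budget units) plus 20% headroom.

def ups_va_needed(total_watts, power_factor=0.6, headroom=1.2):
    """Estimate a minimum UPS VA rating for a given combined watt draw."""
    return total_watts / power_factor * headroom

# A standard setup: desktop (~250 W) + monitor (~30 W) + printer idling (~10 W)
print(round(ups_va_needed(250 + 30 + 10)))   # 580 -> the 450-650 VA class

# A gaming rig pulling ~600 W under load needs something much bigger
print(round(ups_va_needed(600)))             # 1200 -> the 1200-1500 VA class
```

Add up the nameplate wattage of whatever you actually plan to plug in, run the numbers, and buy the next rating up.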

    And what’s that compared to trying to replace an RTX 3080?

    Let There Be Light

    The worst part of an outage is when night falls, and in the winter months that can come early in the evening. Without power your place is enveloped in darkness, and basic tasks like just walking to the kitchen can result in slips, bumps, and unnecessary injury in general.

    The first thing I keep on hand—in strategic places around the house—are LED lanterns. They’re low-cost, use very little power, and can go for months without having to replace the batteries. Keep one near a stairwell or on the kitchen counter so you can navigate your now-enshrouded home safely. And if you need to go somewhere else in the house? They’re portable. While being able to traverse your own place safely is important, the secondary effect is eliminating the need for the flashlight app on your phone, which is usually a battery vampire.

And since you’re likely going to have to be better friends with your analog entertainment, setting a lantern next to your couch or favorite chair instantly creates a cozy reading nook in the dark. And if you have several to spare, you have a lit gaming surface for tabletop RPGs or board games (which everyone should always have in stock).

    Alternatively, I have a couple of shake flashlights that rely on human power instead of double A’s. Shaking the flashlight runs a magnet back and forth through a coil to store charge in a capacitor, and voilà! A powered light. They’re not only effective but fun for kids too, and if nothing else give you a solid reason to thank the world’s lucky stars for Michael Faraday and his legendary work in electromagnetics.

    Next on the list is something that’s a bit more old school—the candle. It may sound obvious, but don’t act like just having one didn’t get you an extra heart container in The Legend of Zelda back in the day. Having candles (and, of course, matches or lighters) can again light a path for you to get wherever you need to go. Granted, it is, you know, fire, so you’ll have to pay attention to them unlike no-fuss LED lanterns, but they’re cheap, burn for a while, and I dare say contribute visual and olfactory ambiance to the occasion.

    Right now I’m running a scent called Black Tea and Lemon because I have excellent taste. I even have the sadly limited-edition A1 steak-scented ones, so, you know, you can find whatever floats your boat.

    If you have a fireplace, building a fire is an easy and cheap way to not only light a room up but heat it up when the mercury starts to drop. If you have gas or oil heat this may not be too big of an issue for you, but I have an electric heat pump, so my living room fireplace is my go-to power outage hangout.

    I try to always keep a cord of firewood on hand during the winter along with kindling or starter cubes in my inventory, but if you don’t have kindling and aren’t the Human Torch, this might add a degree of difficulty. And that’s why I keep paper phone books instead of pitching them. Sure, “you’ve got the internet,” but the thin pages from phone books make for great kindling, especially if you store your firewood outside and it’s not totally dried out yet.

    That’s right, we can keep the phone book in business for alternative service in our digital age. Look at that. I’m a jobs creator.

    The Juice Must Flow

    So now we know how to prevent our larger electronics from taking hits, but what about your mobile tech? Your smartphone is probably the most important tool you have: It can still handle calls even when the power’s out, and you can get data on your cellular network. If you have it as part of your data plan (you should) it can also serve as a Wi-Fi hotspot to provide laptops and tablets with wireless data. All combined, this can be a crushing power draw. And without electricity, those connected devices will start running out of juice soon too. Keeping portable chargers and battery packs in stock and juiced up is a great way to keep power going for your mobile devices.

    Now before you go buy some random ones, take an inventory of what you have, how much juice those things need, and what kind of plugs you require. Let’s look at a basic example:

    Tushar’s phone: Samsung Note20 Ultra – 4,500 mAh battery / USB-C.
    Tushar’s tablet: Samsung Galaxy S6 Lite – 7,040 mAh battery / USB-C.

    To be able to fully charge both devices once when they drop to zero with a full charge would require the sum of those plus about 25 percent, which is 14,425 mAh. I’m not about to get into rated versus real battery capacity and efficiency here, so for now just trust me. So to be safe, I should have 15,000 mAh in capacity available in my power banks. I mean I actually have 20,000, but you know. That only set me back $100, and we have some great suggestions here. This puts mobile gaming back on the menu, and extends the life of my phone’s Wi-Fi hotspot. Now that many laptops also come with USB-C fast charging and power, that equipment can also be included in your calculations.
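That back-of-the-envelope math is easy to turn into a few lines of Python. This sketch just encodes the same sum-plus-25-percent estimate from above, using the two device capacities listed:

```python
# Power-bank sizing: sum the device batteries (mAh), then pad by ~25 percent
# to cover charging losses and rated-versus-real capacity.

def bank_capacity_needed(device_mah, padding=0.25):
    """Total power-bank capacity (mAh) to recharge every device once from zero."""
    return sum(device_mah) * (1 + padding)

devices = {
    "Samsung Note20 Ultra": 4_500,      # phone, USB-C
    "Samsung Galaxy S6 Lite": 7_040,    # tablet, USB-C
}
print(bank_capacity_needed(devices.values()))  # 14425.0 mAh -> round up to 15,000
```

Add a laptop’s battery (in mAh at its charging voltage) to the dictionary and the same estimate covers your USB-C-powered computers too.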

    For some additional references on mobile gaming more advanced than your phone, a Nintendo Switch has about a 4,300 mAh battery, and if you’re one of the folks that reserved a Steam Deck for this winter, that battery should run about the same.

For larger things that require AC power and outlets, like a TV, you’re going to need an electric power station or generator. Again, the more juice you want, the more it’s going to cost. You can get a 1,440 W power station with 660 Wh of capacity for around $750, and that can charge your gaming laptop and, if needed, run a TV for about 10 hours. If you have a Chromecast handy, then you have your streaming apps at your fingertips on a large screen.
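That 10-hour figure falls out of simple watt-hour division. A quick sketch, assuming a ~66 W TV draw (my assumption, not a spec) and ignoring inverter losses:

```python
# Runtime estimate for an AC power station: capacity (Wh) divided by load (W).

def runtime_hours(capacity_wh, load_watts):
    """Hours a fully charged battery can run a constant load, losses ignored."""
    return capacity_wh / load_watts

print(runtime_hours(660, 66))   # 10.0 hours for the 660 Wh station above
```

Real inverters waste 10 to 15 percent of the stored energy, so treat the result as an upper bound.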

    Fuel the Body

    We’ve talked about fueling your tech, now let’s talk about fueling your body. Depending on whether your range is gas or electric (and how long you can keep your fridge—which we assume is running on electricity—closed and cold), you may need some other options for hot, healthy food.

It should go without saying that you should have chips and crackers and all other sorts of snacks on hand, but none of that is a meal. You have no idea how long a power outage is going to last, your refrigerator isn’t working, and really, friends, have some self-respect! Keep the pantry stocked with bread and shelf-stable sandwich materials—PB&J or otherwise. I also make sure I’ve got some canned goods too. There’s not really much point to keeping your home lit and your games and tech going if you’re in zombie mode because you haven’t eaten.

    If your home is all electric and you have dietary needs, really have to cook, or are super picky about, you know, having hot food, then a small butane, propane, or charcoal camping/tailgate grill should be in your inventory as well. You can procure a small portable charcoal grill for as low as $50, and those butane cassette grills YouTubers use for cooking videos are about the same price. And while I know we’re trying to shield ourselves from the elements, I’ve braved going outside to use my full-size propane grill when necessary.

This guide isn’t by any means complete, but it will protect your tech, keep you warm and lit, and make sure you still have food, the internet, and portable power. Other things you should have prepared are blankets to keep warm, pots filled with water or bottled water to stay hydrated, and a first aid kit, just in case. And don’t forget: Sometimes the best emergency tools at your disposal are your friends and neighbors.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 8:24 am on July 24, 2021 Permalink | Reply
Tags: "20 Years Ago Steve Jobs Built the ‘Coolest Computer Ever.’ It Bombed", Apple Computer, The Power Mac G4 Cube, WIRED

From WIRED: “20 Years Ago Steve Jobs Built the ‘Coolest Computer Ever.’ It Bombed”

    From WIRED

    07.24.2020 [Re-issued 7.24.21]
    Steven Levy

    Plus: An interview from the archives, the most-read story in WIRED history, and bottled-up screams.

    The Power Mac G4 Cube, released in 2000 and discontinued in 2001, violated the wisdom of Jobs’ product plan. Photograph: Apple/Getty Images.

    The Plain View

    This month marks the 20th anniversary of the Power Mac G4 Cube, which debuted July 19, 2000. It also marks the 19th anniversary of Apple’s announcement that it was “putting the Cube on ice”. That’s not my joke, it’s Apple’s, straight from the headline of its July 3, 2001, press release that officially pulled the plug.

The idea of such a quick turnaround was nowhere in the mind of Apple CEO Steve Jobs on the eve of the product’s announcement at that summer 2000 Macworld Expo. I was reminded of this last week, as I listened to a cassette tape recorded 20 years prior, almost to the day. It documented a two-hour session with Jobs in Cupertino, California, shortly before the launch. The main reason he had summoned me to Apple’s headquarters was sitting under the cover of a dark sheet of fabric on the long table in the boardroom of One Infinite Loop.

    “We have made the coolest computer ever,” he told me. “I guess I’ll just show it to you.”

    He yanked off the fabric, exposing an 8-inch stump of transparent plastic with a block of electronics suspended inside. It looked less like a computer than a toaster born from an immaculate conception between Philip K. Dick and Ludwig Mies van der Rohe. (But the fingerprints were, of course, Jony Ive’s.) Alongside it were two speakers encased in Christmas-ornament-sized, glasslike spheres.

    “The Cube,” Jobs said, in a stage whisper, hardly containing his excitement.

    He began by emphasizing that while the Cube was powerful, it was air-cooled. (Jobs hated fans. Hated them.) He demonstrated how it didn’t have a power switch, but could sense a wave of your hand to turn on the juice. He showed me how Apple had eliminated the tray that held CDs—with the Cube, you just hovered the disk over the slot and the machine inhaled it.

    And then he got to the plastics. It was as if Jobs had taken to heart that guy in The Graduate who gave career advice to Benjamin Braddock. “We are doing more with plastics than anyone else in the world,” he told me. “These are all specially formulated, and it’s all proprietary, just us. It took us six months just to formulate these plastics. They make bulletproof vests out of it! And it’s incredibly sturdy, and it’s just beautiful! There’s never been anything like that. How do you make something like that? Nobody ever made anything like that! Isn’t that beautiful? I think it’s stunning!”

I admitted it was gorgeous. But I had a question for him. Earlier in the conversation, he had drawn Apple’s product matrix, four squares representing laptop and desktop, high and low end. Since returning to Apple in 1997, he had filled in all the quadrants with the iMac, Power Mac, iBook, and PowerBook. The Cube violated the wisdom of his product plan. It didn’t have the power features of the high-end Power Mac, like slots or huge storage. And it was way more expensive than the low-end iMac, even before you paid for the separate display that Cube owners needed. Knowing I was risking his ire, I asked him: Just who was going to buy this?

    Jobs didn’t miss a beat. “That’s easy!” he said. “A ton of people who are pros. Every designer is going to buy one.”

    Here was his justification for violating his matrix theory: “We realized there was an incredible opportunity to make something in the middle, sort of a love child, that was truly a breakthrough,” he said. The implicit message was that it was so great that people would alter their buying patterns to purchase one.

That didn’t happen. For one thing, the price was prohibitive—by the time you bought the display, it was almost three times the price of an iMac and even more than some Power Macs. By and large, people don’t spend their art budget on computers.

    That wasn’t the only issue with the G4 Cube. Those plastics were hard to manufacture, and people reported flaws. The air cooling had problems. If you left a sheet of paper on top of the device, it would shut down to prevent overheating. And because it had no On button, a stray wave of your hand would send the machine into action, like it or not.

    In any case, the G4 Cube failed to push buttons on the computer-buying public. Jobs told me it would sell millions. But Apple sold fewer than 150,000 units. The apotheosis of Apple design was also the apex of Apple hubris. Listening to the tape, I was struck by how much Jobs had been drunk on the elixir of aesthetics. “Do you really want to put a hole in this thing and put a button there?” Jobs asked me, justifying the lack of a power switch. “Look at the energy we put into this slot drive so you wouldn’t have a tray, and you want to ruin that and put a button in?”

    But here is something else about Jobs and the Cube that speaks not of failure but why he was a successful leader: Once it was clear that his Cube was a brick, he was quick to cut his losses and move on.

    In a 2017 talk at University of Oxford (UK), Apple CEO Tim Cook talked about the G4 Cube, which he described as “a spectacular commercial failure, from the first day, almost.” But Jobs’ reaction to the bad sales figures showed how quickly, when it became necessary, he could abandon even a product dear to his heart. “Steve, of everyone I’ve known in life,” Cook said at Oxford, “could be the most avid proponent of some position, and within minutes or days, if new information came out, you would think that he never ever thought that before.”

    But he did think it, and I have the tape to prove it. Happy birthday to Steve Jobs’ digital love child.

    Time Travel

    My July 2000 Newsweek article about the Cube came with a sidebar of excerpts from my interview with Steve Jobs. Here are a few:

    Levy: Last January you dropped the “interim” from your CEO title. Has this had any impact?

    Jobs: No, even when I first came and wasn’t sure how long I’d be here, I made decisions for the long term. The reason I finally changed the title was that it was becoming sort of a joke. And I don’t want anything at Apple to become a joke.

    Levy: Rumors have recirculated about you becoming CEO of Disney. Is there anything about running a media giant that appeals to you?

    Jobs: I was thinking of giving you a witty answer, like “Isn’t that what I’m doing now?” But no, it doesn’t appeal to me at all. I’m a product person. I believe it’s possible to express your feelings and your caring about things from your products, whether that product is a computer system or Toy Story 2. It’s wonderful to make a pure expression of something and then make a million copies. Like the G4 Cube. There will be a million copies of this out there.

    Levy: The G4 Cube reminds a lot of people that your previous company, Next, also made a cube-shaped machine.

    Jobs: Yeah, we did one before. Cubes are very efficient spaces. What makes this one [special] for me is not the fact that it’s a cube but it’s like a brain in a beaker. It’s just hanging from this perfectly clear, pristine crystal enclosure. That’s what’s so drop-dead about it. It’s incredibly functional. The whole thing is perfect.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 9:57 am on June 20, 2021 Permalink | Reply
Tags: "The US Government Is Finally Moving at the Speed of Tech", WIRED

From WIRED: Women in STEM - Lina Khan, “The US Government Is Finally Moving at the Speed of Tech”

    From WIRED

    Gilad Edelman

    Lina Khan’s ascendance to the top of the FTC, and a set of bipartisan antitrust proposals, shows just how much has changed in Washington—and how suddenly.

    Lina Khan’s overwhelming confirmation to the FTC is likely a harbinger of antitrust reform. Photograph: Saul Loeb/AFP/Bloomberg/Getty Images.

    In the summer of 2017, my boss at the Washington Monthly, a policy-focused magazine in DC, asked me to cover a bombshell story: the Democratic Party had included an anti-monopoly section in its “Better Deal” 2018 midterm agenda.

I use the term “bombshell” ironically. The Monthly had been publishing meticulous stories about the tolls of lax antitrust enforcement for a decade, to little fanfare. Now, finally, people in power were paying attention. To the general public, a few broad statements about economic concentration in a document that hardly anyone read did not amount to a major story. But in our corner of the policy world, in 2017, it was a big deal merely to hear Chuck Schumer speak the word “antitrust.” My piece went on the cover.

    I’ve been thinking about that experience recently, as antitrust headlines seem to be everywhere. It is often suggested that law and government can never keep up with the pace of technology. And yet the events of the past few weeks suggest that the recent effort to regulate the biggest tech companies may be an exception to that rule. Amazon Prime membership didn’t exist until 2005, 11 years after Amazon’s founding, and didn’t hit even 20 million subscribers until 2013. Google was 10 years old when it launched the Chrome browser. Facebook had been around for eight years before it bought Instagram and 10 when it acquired WhatsApp.

    Now consider antitrust. Four years ago, Lina Khan was a month out of law school, where she had published a groundbreaking article arguing that the prevailing legal doctrine was allowing Amazon to get away with anticompetitive behavior. Antitrust law was not yet a high-profile issue, and Khan’s suggestion that it might apply to tech companies whose core consumer offerings were free or famously cheap was considered bizarre by much of the legal establishment. This week, Khan, at all of 32 years old, was appointed chair of the Federal Trade Commission, one of the two agencies with the most power to enforce competition law. Congress, meanwhile, has introduced a set of bills that represent the most ambitious bipartisan proposals to update antitrust law in decades, with the tech industry as their explicit target. Politics, in other words, may finally be moving at the speed of tech.

    In hindsight, what seems most remarkable about the Better Deal agenda is that it didn’t mention tech companies at all. Up to that point, the anti-monopoly movement in DC policy circles had been much more focused on traditional industries. Khan got her start writing about consolidation in businesses like meatpacking and Halloween candy. Silicon Valley still seemed politically untouchable. Taking on the likes of Facebook and Google, I wrote at the time, would “require angering some of the Democrats’ most important and deep-pocketed donors, something the party has not yet revealed an appetite for.”

    How did things change so quickly? There is no one smoking gun, but rather an accumulation of grievances that turned both Democrats and Republicans more and more against the tech companies. For Democrats, the key factor was the creeping sense that social media platforms, whatever the political leanings of their founders, had helped Donald Trump get elected. Facebook’s Cambridge Analytica scandal in 2018 supercharged those suspicions. Investigative reports, meanwhile, kept finding evidence that far-right and racist material was spreading on social media. At the same time—and in part as a reaction to social media platforms implementing more aggressive content moderation to mollify both advertisers and liberal critics—conservatives were growing ever more concerned that liberals in Silicon Valley were discriminating against them. And Republican politicians were picking up on the political potency of that talking point.

    The result is that we find ourselves living in a world that looks very different from the one we were living in just a few years ago. New antitrust cases against tech giants are popping up left and right, keeping the issue firmly in the public consciousness. The companies are devoting unprecedented sums toward lobbying, advocacy, and advertising to try to avert a crackdown. And in the sharpest break with the past, Congress and the White House are taking concrete steps to restructure markets that have been left to their own devices for two and a half decades.

    It’s all so much, so fast, that it’s hard to keep track of the various subplots. The introduction of the five House antitrust bills and the elevation of Khan to FTC chair, for example, look like two separate stories. But they’re really two parts of the same story: Khan was herself the key investigator behind the House antitrust subcommittee’s investigation of Apple, Amazon, Facebook, and Google, begun in 2019. The bills introduced last week are the fruits of that investigation. (While the time between the start of the investigation and the release of legislative proposals has felt like an eternity to those of us who follow this closely, it wouldn’t be bad for a Silicon Valley product launch. It took Amazon three years to bring the Kindle to market.)

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 2:01 pm on June 6, 2021 Permalink | Reply
    Tags: "Will a Volcanic Eruption Be a Burp or a Blast?", , Geldingadalur volcano - Iceland, Japan’s Ontake volcano., Kīlauea Volcano in Hawaii (US), La Soufrière- a volcano on the Caribbean island of St. Vincent., Mount Stromboli-one of three active volcanoes in Italy., New Zealand’s Whakaari volcano., Nyiragongo-a mountainous volcano in the Democratic Republic of Congo., Reykjanes volcano- Iceland, Scientists have begun to decipher the seismic signals that reveal how explosive a volcanic eruption is going to be., , WIRED   

    From WIRED: Women in STEM- Arianna Soldati NC State (US); Diana Roman Carnegie Institution for Science (US); Jackie Caplan-Auerbach Western Washington University (US) “Will a Volcanic Eruption Be a Burp or a Blast?” 

    From WIRED

    Robin George Andrews

    Scientists have begun to decipher the seismic signals that reveal how explosive a volcanic eruption is going to be.

    Volcanoes such as the recent outburst in Iceland, seen here on May 24, can switch from effusive to explosive. Much depends on the consistency of the magma itself. Courtesy of Sigurjón Jónsson.

    Last December, a gloopy ooze of lava began extruding out of the summit of La Soufrière, a volcano on the Caribbean island of St. Vincent. The effusion was slow at first; no one was threatened. Then in late March and early April, the volcano began to emit seismic waves associated with swiftly rising magma. Noxious fumes vigorously vented from the peak.

    Fearing a magmatic bomb was imminent, scientists sounded the alarm, and the government ordered a full evacuation of the island’s north on April 8. The next day, the volcano began catastrophically exploding. The evacuation had come just in time: At the time of writing, no lives have been lost.

    Simultaneously, something superficially similar but profoundly different was happening up on the edge of the Arctic.

    Increasingly intense tectonic earthquakes had been rumbling beneath Iceland’s Reykjanes Peninsula since late 2019, strongly implying that the underworld was opening up, making space for magma to ascend. Early in 2021, as a subterranean serpent of magma migrated around the peninsula, looking for an escape hatch to the surface, the ground itself began to change shape. Then in mid-March, the first fissure of several snaked through the earth roughly where scientists expected it might, spilling lava into an uninhabited valley named Geldingadalur.

    Here, locals immediately flocked to the eruption, picnicking and posing for selfies a literal stone’s throw away from the lava flows. A concert recently took place there, with people treating the ridges like the seats of an amphitheater.

    In both cases, scientists didn’t just accurately suggest a new eruption was on its way. They also forecast the two very different forms these eruptions would take. And while the “when” part of the equation is never easy to forecast, getting the “how” part right is even harder, particularly in the case of the explosive eruption at La Soufrière. “That’s a tricky one, and they nailed it, they absolutely nailed it,” said Diana Roman, a volcanologist at Carnegie Institution for Science (US).

    Volcanologists have developed an increasingly detailed understanding of the conditions that are likely to produce an explosive eruption. The presence or absence of underground water matters, for instance, as does the gassiness and gloopiness of the magma itself. And in a recent series of studies, researchers have shown how to read hidden signals—from seismic waves to satellite observations—so that they may better forecast exactly how the eruption will develop: with a bang or a whimper.

    Something Wicked This Way Comes

    As with skyscrapers or cathedrals, the architectural designs of Earth’s volcanoes differ wildly. You can get tall and steep volcanoes, ultra-expansive and shallowly sloped volcanoes, and colossal, wide-open calderas. Sometimes there isn’t a volcano at all, but chains of small depressions or swarms of fissures scarring the earth like claw marks.

    Lava flows from the Geldingadalur volcano have been relatively languid and predictable. Photograph: Anton Brink/Anadolu Agency/Getty Images.

    Eruption forecasting asks a lot of questions. Chief among them is: When? At its core, this question is equivalent to asking when magma from below will travel up through a conduit (the pipe between the magma and the surface opening) and break through, as lava flows and ash, as volcanic glass and bombs.

    When magma ascends from depth, it can alter a volcano’s architecture, literally changing the shape of the land above. Migrating magma flows can also force rock apart, generating volcano-tectonic earthquakes. And when the pressure keeping magma trapped underground declines, it liberates trapped gas, which can escape to the surface.

    Eruption forecasters look for any of those three signs: changes in a volcano’s shape, its seismic soundscape, or its outgassing. If you spy changes in all three—changes that are clearly very different from the volcano’s everyday behavior—then “there is no doubt that something is going to happen,” said Maurizio Ripepe, a geophysicist at the University of Florence [Università degli Studi di Firenze] (IT). That something is often, eventually, an eruption.
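    The three-signal heuristic Ripepe describes can be sketched as a toy monitoring check. This is an illustrative simplification, not any observatory’s real alert logic; the function names, readings, and the z-score threshold of 3 are all assumptions for the sake of the example:

    ```python
    # Toy version of the three-signal heuristic: raise an alert only when
    # deformation, seismicity, AND outgassing all depart clearly from a
    # volcano's baseline behavior. All thresholds here are illustrative.
    from statistics import mean, stdev

    def z_score(history, latest):
        """How many standard deviations the latest reading sits above baseline."""
        mu, sigma = mean(history), stdev(history)
        return (latest - mu) / sigma if sigma else 0.0

    def eruption_watch(deformation, seismicity, gas, threshold=3.0):
        """Each argument is (baseline readings, latest reading). Returns True
        only when all three channels are simultaneously anomalous."""
        return all(z_score(hist, now) > threshold
                   for hist, now in (deformation, seismicity, gas))

    quiet = ([1.0, 1.1, 0.9, 1.0, 1.05], 1.1)    # latest reading near baseline
    spiking = ([1.0, 1.1, 0.9, 1.0, 1.05], 9.0)  # latest reading far above it

    print(eruption_watch(spiking, quiet, quiet))      # False: only one signal
    print(eruption_watch(spiking, spiking, spiking))  # True: all three anomalous
    ```

    The point of requiring all three channels, as the article notes, is robustness: any single signal can fluctuate for mundane reasons, but a simultaneous departure in shape, seismicity, and gas is hard to explain without moving magma.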

    Change doesn’t always mean an uptick in activity. Most volcanoes get noisier and twitchier before erupting, but sometimes the opposite is true. Seismologists in Iceland, for example, recorded a drop in volcanic tremor immediately prior to the opening of Reykjanes’ first five fissures. When the sixth drop happened, said Thorbjörg Ágústsdóttir, a seismologist at Iceland Geosurvey [jarðmælingar á íslandi](IS), scientists forecast that a sixth fissure was about to appear—and they were right.

    The “How” of the Equation

    Increasingly, it’s also possible to forecast not just when or if a volcano will erupt, but how.

    Unspooling the history of each specific volcano is key, as individual volcanoes tend to have their own eruptive style. To find it, scientists will examine the geological strata around a volcano, forensically exhuming and examining the remains of old eruptions. The last eruption on Iceland’s Reykjanes Peninsula had occurred 800 years ago, long before the advent of modern science. But because of this sort of detective work, scientists knew that the eruptions there have always been relatively tranquil affairs. If a recent eruption history is available, one documented in real time by scientists, all the better; that’s why scientists knew La Soufrière was likely to speedily switch from an effusive to an explosive eruption style.

    The latest work on eruption forecasting goes far beyond these historical catalogs. Take Stromboli, a volcano barely sticking above the waters of the Tyrrhenian Sea. This picturesque isle spends much of its time exploding—usually small blasts that harm no one. After studying how it changes shape for two decades, Ripepe and his colleagues have determined that it inflates just before it explodes [Nature Communications]. Moreover, the exact change in shape reveals whether the blast will be major or minor. Since October 2019, the volcano has had an early warning system. It can detect the type of inflation indicative of the most extreme explosions, the sort that have killed people in the past, up to 10 minutes before the blast arrives.

    Stromboli subtly inflates just before it explodes. Photograph: Bruno Guerreiro/Getty Images.

    Stromboli is a relatively simple volcano, though, one in which the plumbing from the magma to the skylight up top remains more or less open. “The magma movement does not generate any fractures. It just comes up,” Ripepe said.

    Most volcanoes are more complicated: They harbor a diverse array of magma types that need to force their way out of the volcano. That means they produce eruptions that “change a lot as they happen,” said Arianna Soldati, a volcanologist at North Carolina State University (US). Over the course of days, weeks, months, or years, an eruption can go back and forth between oozing and exploding. Is it possible to forecast these changes?

    Soldati, Roman, and their colleagues found a way to test this by looking to the Big Island of Hawaii. Kīlauea, near the island’s southeastern coast, had been continuously erupting in some form or another since 1983.

    But in the spring and summer of 2018, the volcano put on a hell of a show: The lava lake at its summit drained away, as if someone had pulled the plug from a bath; magma made its way underground to the eastern flanks of the volcano and tore open cracks in the earth, gushing out of them for three months straight, sometimes shooting skyward as tall fountains of molten rock.

    As this happened, the researchers took lava samples, concentrating on one feature in particular: viscosity. Gloopier, stickier magma traps more gas. When this viscous magma reaches the surface, its gas violently decompresses, creating an explosion. Runnier magma, by contrast, lets gas escape gradually, like a soda left unattended on a table.

    In 2018, the viscosity of the lava on Kīlauea kept changing. Older, colder magma was more viscous, while newly tapped magma from depth was hotter and more fluid.

    A study of the 2018 eruptions on Kīlauea, Hawai‘i, connected the consistency of the magma coming up to specific seismic signals. Courtesy of Cedric Letsch.

    KILAUEA VOLCANO. U.S. Geological Survey.

    Roman and colleagues discovered that they could track these changes by monitoring the seismic waves emerging from the volcano and comparing them with the varying viscosity of the lava they sampled. For reasons yet to be determined, as runnier magma ascends, it forces the rocky walls on either side of it only a little bit apart. Gloopier magma, by contrast, exerts a strong force, pushing open a wider pathway. In a paper published this April in Nature, the researchers showed that they could use seismic waves, which differed depending on the way the rock was forced open, to forecast the change in the erupted lava’s viscosity hours to days in advance of that magma’s eruption.

    “Having found something that tells us, yes, if you have this kind of seismicity, viscosity is increasing, and if it’s above this threshold, it could be more explosive—that is super cool,” said Soldati. “For monitoring and hazards, this actually has the potential to be impactful now.”

    Nanoscopic Nuisances

    Many factors influence magma viscosity. One in particular has been overlooked, mostly because it’s nearly invisible.

    Danilo Di Genova, a geoscientist at the University of Bayreuth [Universität Bayreuth] (DE), studies nanolites—crystals about one-hundredth of the size of your average bacterium. They are thought to form at the top of the conduit as magma gushes up it. If you get enough of these crystals, they can lock up the magma, imprison trapped gas, and increase the viscosity. But unless you have very powerful microscopes to look at freshly erupted lava, they’ll be imperceptible.

    Di Genova has long been interested in how nanolites form. His experiments using silicone oil—a proxy for basalt, a commonplace runny magma—showed that if just 3 percent of an oil-particle mixture is made of nano-size particles, the viscosity spikes.
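    A quick calculation shows why that 3 percent figure is striking. The classic Einstein-Roscoe relation for a dilute particle suspension (an illustration chosen here, not a formula from the article; the packing fraction of 0.6 is a commonly assumed value) predicts only a modest viscosity rise at that loading, so the observed spike points to nanolite-specific effects beyond simple particle volume:

    ```python
    # Einstein-Roscoe relative viscosity of a particle suspension:
    # eta_rel = (1 - phi/phi_max)^(-2.5), with phi the particle volume
    # fraction and phi_max the maximum packing fraction (~0.6 assumed here).
    def einstein_roscoe(phi, phi_max=0.6):
        """Relative viscosity (dimensionless multiple of the pure-melt value)."""
        return (1.0 - phi / phi_max) ** -2.5

    # At 3% particles, classical suspension rheology predicts only ~14% thickening,
    # nothing like the spike measured in the nanoparticle experiments.
    print(f"{einstein_roscoe(0.03):.2f}x")  # ~1.14x
    ```

    In other words, if nano-size crystals merely behaved as inert suspended particles, 3 percent of them would barely matter; the dramatic thickening suggests they alter the melt itself.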

    He then turned to the real thing. He and his colleagues attempted to simulate what magma would experience as it rose through a conduit to the surface. They subjected lab-melted basaltic rock from Mount Etna to gradual heating, pulses of sudden cooling, hydration, and dehydration. At times, they placed the magma inside a synchrotron, a type of particle accelerator. Within this contraption, powerful x-rays interact with a crystal’s atoms to reveal their properties and—if the crystals are small enough—their existence.

    As reported last year in Science Advances, the experiments gave the team a working model of how nanolites form. If an eruption begins and magma suddenly accelerates up through the conduit, it rapidly depressurizes. That lets water come out of the molten rock and form bubbles, which dehydrates the magma.

    This action changes the thermal properties of the magma, making it a lot easier for crystals to be present even at extremely high temperatures. If the magma’s ascent is sufficiently rapid and the magma is speedily dehydrated, a cornucopia of nanolites comes into being, which significantly increases the magma’s viscosity.

    This change doesn’t give off a noticeable signal. But merely knowing it exists, said Di Genova, may enable researchers to explain why volcanoes with otherwise runny magma, like Vesuvius or Etna, can sometimes produce epic explosions. Seismic signals can trace how quickly magma is ascending, so perhaps that may be used to forecast a last-minute nanolite population boom, one that leads to a catastrophic blast.

    Sweeping Away the Fog

    These advances aside, scientists are still a long way from replacing eruption probabilities with certainties.

    One reason is that “most of the world’s volcanoes are not that well monitored,” said Seth Moran, a research seismologist at the US Geological Survey’s Cascades Volcano Observatory. This includes many of America’s Cascades volcanoes, several of which have a propensity for giant explosions. “It’s not easy to forecast an eruption even if there are sufficient instruments on the ground,” said Roman. “But it’s very, very difficult to forecast an eruption if there are no instruments on the volcano.”

    Another problem is that some eruptions currently have no clear-cut precursors. One notorious type is called a phreatic blast: Magma cooks overlying pockets of water, eventually triggering pressure-cooker-like detonations. One rocked New Zealand’s Whakaari volcano in December 2019, killing 22 people visiting the small island. Another shook Japan’s Ontake volcano in 2014, killing 63 hikers.

    New Zealand’s Whakaari volcano gave no warning before it catastrophically exploded in December 2019, killing 22 people. Photograph: Westend61/Getty Images.

    A recent study led by Társilo Girona, a geophysicist at the University of Alaska, Fairbanks (US), found that satellites can detect gradual, year-over-year upticks in the thermal radiation coming off all sorts of volcanoes in the run-up to an eruption. A retrospective analysis showed that such a temperature increase was detected before Ontake’s 2014 phreatic explosion, with a peak around the time of the event.
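    The kind of retrospective trend analysis described above can be sketched as a least-squares fit over yearly readings. This is an assumed toy version, not Girona et al.’s actual method, and the radiance values are invented for illustration:

    ```python
    # Toy thermal-trend test: fit a line to yearly mean radiant-temperature
    # anomalies and treat a sustained positive slope as a possible precursor.
    def trend_slope(values):
        """Least-squares slope of the values against their index (years)."""
        n = len(values)
        x_mean = (n - 1) / 2
        y_mean = sum(values) / n
        num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
        den = sum((x - x_mean) ** 2 for x in range(n))
        return num / den

    # Hypothetical yearly radiance anomalies (kelvin) in the run-up to an eruption.
    yearly_temps = [0.10, 0.12, 0.15, 0.19, 0.24]
    print(trend_slope(yearly_temps) > 0)  # True: a gradual warming trend
    ```

    A single hot year proves little; the appeal of the satellite approach is that a slope fitted over several years smooths out seasonal and weather noise.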

    Perhaps monitoring from space will become the best way to see future phreatic eruptions coming. But so far, no successful long-term forecast of a phreatic eruption has taken place. “Phreatic eruptions are terrifying,” said Jackie Caplan-Auerbach, a volcanologist and seismologist at Western Washington University (US). “You really don’t know they’re coming.”

    It’s not just explosions that can prove tricky to forecast. Nyiragongo, a mountainous volcano in the Democratic Republic of Congo, suddenly erupted on May 22 of this year, spilling fast-moving lava toward the city of Goma. Despite being monitored, the volcano gave no clear warning it was about to erupt, and several people perished.

    And no matter what type of eruption you are forecasting, the price of a false positive is crippling. “When you evacuate people and nothing happens, then the next evacuation is going to be orders of magnitude more difficult to get people to take seriously,” said Roman.

    But there are reasons to be optimistic. Scientists are grasping the physics underlying all volcanoes better than ever. Individual volcanoes are also becoming more familiar because of “a mixture of instinct and experience and learned knowledge,” said David Pyle, a volcanologist at the University of Oxford (UK). Soon, he predicts, machine learning programs, capable of identifying patterns in data faster than any human, will become a major player.

    Certainty in eruption forecasting—the if, when, or how—will probably never come to pass. But day by day, the potentially deadly fog of uncertainty dissipates a little more, and someone who would have died a few decades ago during an eruption now gets to live.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition
