Tagged: WIRED

  • richardmitnick 9:11 am on November 15, 2021 Permalink | Reply
    Tags: "NASA Tries to Save Hubble- Again", WIRED

    From WIRED : “NASA Tries to Save Hubble- Again” 

    From WIRED


    The space telescope’s latest hardware problem has kept it offline for two weeks, raising concerns that the decades-old spacecraft is running out of time.

    When engineers encounter an unknown problem, they’re meticulous. The slow process is designed to protect Hubble’s systems so it can assist scientists for as long as possible. Photograph: NASA.

    The Hubble Space Telescope, one of the most famous telescopes of the 20th and 21st centuries, has faltered once again. After a computer hardware problem arose in late October, NASA engineers put Hubble into a coma, suspending its science operations as they carefully attempt to bring its systems back online.

    Engineers managed to revive one of its instruments earlier this week, offering hope that they will end the telescope’s convalescence as they restart its other systems, one at a time. “I think we are on a path to recovery,” says Jim Jeletic, Hubble’s deputy project manager.

    The problem began on October 23, when the school-bus-sized spacecraft’s instruments didn’t receive a standard synchronization message generated by its control unit. Two days later, NASA engineers saw that the instruments had missed multiple such messages, so they put them in “safe mode,” powering down some systems and shuttering the cameras.

    Some problems are fairly easy to fix, like when a random high-energy particle hits the probe and flips a bit on a switch. But when engineers encounter an unknown problem, they’re meticulous. The slow process is designed to protect Hubble’s systems and make sure the spacecraft continues to thrive and enable scientific discovery for as long as possible. “You don’t want to continually put the instruments in and out of safe mode. You’re powering things on and off, you’re changing the temperature of things over and over again, and we try to minimize that,” Jeletic says.

    In this case, they successfully brought the Advanced Camera for Surveys back online on November 7.

    The Advanced Camera for Surveys (ACS) on the NASA/ESA Hubble Space Telescope.

    It’s one of the newer cameras, installed in 2002, and it’s designed for imaging large areas of the sky at once and in great detail. Now they’re watching closely as it collects data again this week, checking to see whether the error returns. If the camera continues working smoothly, the engineers will proceed to testing Hubble’s other instruments.

    Hubble has had its share of hiccups over its long and productive career, during which it has documented everything from ancient galaxies to the birth and death of nearby stars. It launched in 1990, just a few months after the fall of the Berlin Wall, and it was deployed by the crew aboard the space shuttle Discovery. It now orbits about 340 miles above the Earth. On five occasions since its deployment, astronauts on NASA shuttles have conducted servicing missions to repair and upgrade its systems, boosting the impressive longevity of the telescope, which was originally expected to last only about a decade. Astronauts aboard the shuttle Atlantis completed the final such mission in May 2009, when they repaired its spectrograph, among other things. Since then, all repair attempts have been conducted from Earth; engineers are no longer able to replace the telescope’s hardware.

    Hubble’s current glitch isn’t unprecedented. In fact, it’s the second one this year. In July, engineers put the telescope’s instruments in safe mode for about a month when the payload computer, which coordinates and monitors the science instruments, went offline. When they started using a backup power unit, they were able to make the science instruments operational again.

    Jeletic and his team also try to anticipate potential mishaps. For example, they found that the thin wires Hubble’s gyroscopes depend on gradually corrode and break, and three of its six gyros have failed. Without gyros, Hubble can’t target anything properly. But on the last servicing mission, astronauts replaced the gyros and enhanced the wires so that they can’t corrode, solving the problem.

    Nevertheless, each new hitch inevitably raises concerns about the aging telescope, which has been instrumental in so many astronomical accomplishments, including pinning down the age of the universe and discovering the smaller moons of Pluto. “I think it’s been utterly transformational,” says Adam Riess, an astronomer at The Johns Hopkins University (US) in Baltimore. He shared the 2011 Nobel Prize in Physics for showing how measurements of exploding stars, or supernovas, reveal the accelerating expansion of the universe, a project that benefited from Hubble data.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    To this day, the telescope continues to be oversubscribed by at least fivefold, Riess says, meaning astronomers have more than five times as many proposals for using Hubble as there is available telescope time.

    The space telescope has also served as an educational tool and kindled public interest in space science for a whole generation. “Everybody knows Hubble,” says Jeyhan Kartaltepe, an astronomer at The Rochester Institute of Technology (US), whose work on multiple galaxy surveys makes extensive use of Hubble images. “It has become a household name. People enjoy reading articles about what Hubble has discovered, and they enjoy seeing the pictures. I think people have an immediate association of Hubble with astronomy.”

    Hubble’s latest hardware challenges come just a month before its successor, the James Webb Space Telescope, is scheduled to launch into orbit.

    The NASA/ESA/Canadian Space Agency James Webb Space Telescope, annotated. Launch, originally scheduled for October 2021, was delayed to December 2021.

    Like its iconic predecessor, the new telescope will collect troves of spectacular images, though it’s designed to probe wavelengths more in the infrared range, allowing it to penetrate dusty parts of galaxies and stellar nebulae. Riess expects it to be similarly popular with astronomers and with the public.

    Hubble has easily surpassed its expected lifespan, and the same goes for NASA’s Chandra X-ray Observatory, which launched in 1999 and remains operational, although it was designed to last only five years.

    NASA’s Chandra X-ray Observatory.

    This is a good sign for Webb, similarly planned for a five-year lifespan. Unlike Hubble, however, it will orbit much farther away, making it inaccessible to astronauts. That means any problems that arise will have to be fixed remotely.

    But Hubble helped set the stage for its successor. For example, after Hubble launched, engineers realized that its mirror wasn’t curved properly, initially resulting in blurry images. Webb’s design allows for engineers to adjust the curvature remotely if an error like that crops up.

    Astronomers appreciate the hard work of Hubble’s engineers and operators. “Their dedication to keep on rescuing the telescope from all its fits of pique and changes of mood is fantastic. I’m so proud of them backing the scientists who are using the data,” says Julianne Dalcanton, an astronomer at The University of Washington (US) who has used Hubble frequently throughout her career, including to map Andromeda, our galactic neighbor.

    Andromeda Galaxy. Credit: Adam Evans.

    She, Kartaltepe, and other astronomers look forward to a time when both Hubble and Webb are in the sky, taking observations together, especially as they’ll learn different things from the telescopes’ respective instruments and wavelength coverage.

    While Jeletic and his team don’t yet know when Hubble will be back online, he expects all systems to eventually be up and running once again. “Some day Hubble will die, like every other spacecraft,” he says. “But hopefully that’s still a long ways off.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 9:43 am on November 6, 2021 Permalink | Reply
    Tags: "How to Prepare for Power Outages", WIRED

    From WIRED : “How to Prepare for Power Outages” 

    From WIRED

    Tushar Nene

    Whether there’s a flood or a fallen tree, your power will go out eventually. Here’s how to prepare for an outage that lasts minutes, hours, or days.

    Photograph: Scott Heins/Getty Images.

    I live in the Philadelphia area, and that puts me in the direct line of fire for two major water-type attacks. We get the remnants of hurricanes in the summertime and what’s known as nor’easters in the winter. (For those not from the Northeast, that’s a cyclone of cold frozen hatred that hovers up our coast.) Sure, they each bring their own brand of natural strife, but they also make us vulnerable to every geek’s nightmare: the dreaded power outage. And since my place fully runs on electricity (no gas or oil), I’ve had to develop a playbook for those dark times.

    Whether it’s feet of snow or downed power lines, we need our electricity. Having been a Cub Scout as a lad, I am thankfully well prepared, but I realize that there are probably many people out there who aren’t. This guide is for you to bookmark forever.


    The best all-around solution is also the most expensive. You can get your home rigged with a built-in generator that will take over when your main power goes out.

    For full-home protection, a generator will set you back a few thousand dollars. It may be pricey, but it is still a viable option for those who’ve got the scratch, and pretty much solves everything all in one go. You could invest in a portable generator to save money, but the money you save comes at the cost of how long and how many devices it can power. Oh, and regardless of what you do, make sure to follow these generator safety tips from The Department of Energy (US).

    Preventative Measures: Protect Your Electronics

    So first off: protecting your electronics isn’t paranoia. I’ve seen it all in my IT pro day job, including boxes that fry for no apparent reason. Protecting your gear from the spikes or surges a power outage may bring is important. You may rely on cloud services, but your desktop workstation or gaming rig still needs you to look out for it. Forget having to redownload your stuff; replacing hardware, especially these days, is out of control. You may need to trade in your firstborn to replace a video card (and not even one that does ray tracing).

    Spikes happen. In rough weather power lines can fall from the weight of ice and snow, felled trees can cause massive damage, and transformers can pop in a glittering array of sparks.

    What does that mean? That $5.99 surge protector you bought and plugged your computer/TV/game console into isn’t really doing you a lot of good. Instead of cheap surge protectors, battery backups or UPS (uninterruptible power supply) units are a far better choice to protect your hardware. In addition to guarding against spikes and surges, a UPS picks up the load for everything plugged into it when power drops. This gives you a window of time to shut down your equipment properly, without the risk of it going up in cinders or losing any data.

    In IT we use massive ones to make sure servers and other large-scale devices stay up during power issues, but you can buy smaller home models on the cheap to do the same. For a standard user’s computer system (plus monitor and printer), a 450 VA or 650 VA UPS unit should do just fine and will set you back south of $100. The more stuff you plug into it, the higher VA rating you want. For a modern gaming rig, you’re probably looking at something more in the 1200 to 1500 VA range to keep it safe. Which is still only around $200.
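    As a sketch of that sizing rule: sum the wattage of everything you plan to plug in, convert watts to VA with a power factor, and leave some headroom. The device wattages, the 0.6 power factor, and the 20 percent headroom below are illustrative assumptions, not figures from the article:

```python
# Rough UPS sizing: total watts / power factor * headroom = minimum VA rating.

def ups_va_needed(watts, power_factor=0.6, headroom=1.2):
    """Estimate the minimum UPS VA rating for a set of loads.

    watts: per-device power draws in watts (illustrative values below)
    power_factor: assumed typical consumer-UPS power factor
    headroom: safety margin so the UPS never runs at its limit
    """
    return sum(watts) / power_factor * headroom

# A standard desk setup: computer, monitor, printer (assumed draws)
print(round(ups_va_needed([150, 30, 20])))   # 400 -> a 450 VA unit fits
# A modern gaming rig: tower with a big GPU, plus monitor (assumed draws)
print(round(ups_va_needed([550, 50])))       # 1200 -> the 1200-1500 VA range
```

    Under those assumed draws, the first case lands near the 450 and 650 VA models mentioned above, and the second falls in the 1200 to 1500 VA range suggested for gaming rigs.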

    And what’s that compared to trying to replace an RTX 3080?

    Let There Be Light

    The worst part of an outage is when night falls, and in the winter months that can come early in the evening. Without power your place is enveloped in darkness, and basic tasks like just walking to the kitchen can result in slips, bumps, and unnecessary injury in general.

    The first thing I keep on hand—in strategic places around the house—are LED lanterns. They’re low-cost, use very little power, and can go for months without having to replace the batteries. Keep one near a stairwell or on the kitchen counter so you can navigate your now-enshrouded home safely. And if you need to go somewhere else in the house? They’re portable. While being able to traverse your own place safely is important, the secondary effect is eliminating the need for the flashlight app on your phone, which is usually a battery vampire.

    And since you’re likely going to have to be better friends with your analog entertainment, setting a lantern next to your couch or favorite chair instantly creates a cozy reading nook in the dark. And if you have several to spare, you have a lit gaming surface for tabletop RPGs or board games (which everyone should always have in stock).

    Alternatively, I have a couple of shake flashlights that rely on human power instead of double A’s. Shaking the flashlight runs a magnet back and forth through a coil to store charge in a capacitor, and voilà! A powered light. They’re not only effective but fun for kids too, and if nothing else give you a solid reason to thank the world’s lucky stars for Michael Faraday and his legendary work in electromagnetics.

    Next on the list is something that’s a bit more old school—the candle. It may sound obvious, but don’t act like just having one didn’t get you an extra heart container in The Legend of Zelda back in the day. Having candles (and, of course, matches or lighters) can again light a path for you to get wherever you need to go. Granted, it is, you know, fire, so you’ll have to pay attention to them unlike no-fuss LED lanterns, but they’re cheap, burn for a while, and I dare say contribute visual and olfactory ambiance to the occasion.

    Right now I’m running a scent called Black Tea and Lemon because I have excellent taste. I even have the sadly limited-edition A1 steak-scented ones, so, you know, you can find whatever floats your boat.

    If you have a fireplace, building a fire is an easy and cheap way to not only light a room up but heat it up when the mercury starts to drop. If you have gas or oil heat this may not be too big of an issue for you, but I have an electric heat pump, so my living room fireplace is my go-to power outage hangout.

    I try to always keep a cord of firewood on hand during the winter along with kindling or starter cubes in my inventory, but if you don’t have kindling and aren’t the Human Torch, this might add a degree of difficulty. And that’s why I keep paper phone books instead of pitching them. Sure, “you’ve got the internet,” but the thin pages from phone books make for great kindling, especially if you store your firewood outside and it’s not totally dried out yet.

    That’s right, we can keep the phone book in business for alternative service in our digital age. Look at that. I’m a jobs creator.

    The Juice Must Flow

    So now we know how to prevent our larger electronics from taking hits, but what about your mobile tech? Your smartphone is probably the most important tool you have: It can still handle calls even when the power’s out, and you can get data on your cellular network. If you have it as part of your data plan (you should) it can also serve as a Wi-Fi hotspot to provide laptops and tablets with wireless data. All combined, this can be a crushing power draw. And without electricity, those connected devices will start running out of juice soon too. Keeping portable chargers and battery packs in stock and juiced up is a great way to keep power going for your mobile devices.

    Now before you go buy some random ones, take an inventory of what you have, how much juice those things need, and what kind of plugs you require. Let’s look at a basic example:

    Tushar’s phone: Samsung Note20 Ultra – 4,500 mAh battery / USB-C.
    Tushar’s tablet: Samsung Galaxy S6 Lite – 7,040 mAh battery / USB-C.

    To fully recharge both devices once from zero would require the sum of those capacities plus about 25 percent, which is 14,425 mAh. I’m not about to get into rated versus real battery capacity and efficiency here, so for now just trust me. So to be safe, I should have 15,000 mAh in capacity available in my power banks. I mean I actually have 20,000, but you know. That only set me back $100, and we have some great suggestions here. This puts mobile gaming back on the menu and extends the life of my phone’s Wi-Fi hotspot. Now that many laptops also come with USB-C fast charging and power, that equipment can also be included in your calculations.
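    The arithmetic above, written out. The device capacities and the roughly 25 percent overhead are the numbers given in the text; the function name is just for illustration:

```python
# Power-bank sizing: sum the device battery capacities, then add about
# 25 percent overhead for charging losses (rated vs. real capacity).

def bank_capacity_needed(device_mah, overhead=0.25):
    """Minimum power-bank capacity (mAh) to recharge each device once from zero."""
    return sum(device_mah) * (1 + overhead)

phone = 4_500   # Samsung Note20 Ultra battery, mAh
tablet = 7_040  # Samsung Galaxy S6 Lite battery, mAh

print(bank_capacity_needed([phone, tablet]))  # 14425.0 -> round up to 15,000 mAh
```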

    For some additional references on mobile gaming more advanced than your phone, a Nintendo Switch has about a 4,300 mAh battery, and if you’re one of the folks that reserved a Steam Deck for this winter, that battery should run about the same.

    For larger things that need AC power and an outlet, like a TV, you’re going to need a battery power station or generator. Again, the more juice you want, the more it’s going to cost. You can get a 1,440 W power station with 660 Wh of capacity for around $750, and that can charge your gaming laptop and, if needed, run a TV for about 10 hours. If you have a Chromecast handy, then you have your streaming apps at your fingertips on a large screen.
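    The runtime behind that estimate is just capacity times inverter efficiency, divided by load. The roughly 60 W TV draw and 90 percent inverter efficiency here are illustrative assumptions, not figures from the article:

```python
def runtime_hours(capacity_wh, load_watts, inverter_efficiency=0.9):
    """Hours a battery power station can run a load, after inverter losses."""
    return capacity_wh * inverter_efficiency / load_watts

# A 660 Wh station driving a ~60 W LED TV (assumed draw)
print(round(runtime_hours(660, 60), 1))  # 9.9 -> in line with "about 10 hours"
```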

    Fuel the Body

    We’ve talked about fueling your tech, now let’s talk about fueling your body. Depending on whether your range is gas or electric (and how long you can keep your fridge—which we assume is running on electricity—closed and cold), you may need some other options for hot, healthy food.

    It should go without saying that you should have chips and crackers and all other sorts of snacks, but none of that is a meal. You have no idea how long a power outage is going to last, your refrigerator isn’t working, and really friends, have some self-respect! Keep the pantry stocked with bread and shelf-stable sandwich materials—PB&J or otherwise. I also make sure I’ve got some canned goods too. There’s not really much point to keeping your home lit and your games and tech going if you’re in zombie mode because you haven’t eaten.

    If your home is all electric and you have dietary needs, really have to cook, or are super picky about, you know, having hot food, then a small butane, propane, or charcoal camping/tailgate grill should be in your inventory as well. You can procure a small portable charcoal grill for as low as $50, and those butane cassette grills YouTubers use for cooking videos are about the same price. And while I know we’re trying to shield ourselves from the elements, I’ve braved going outside to use my full-size propane grill when necessary.

    This guide isn’t by any means complete, but it will protect your tech, keep you warm and lit, and make sure you still have food, the internet, and portable power. Other things you should have prepared are blankets to keep warm, pots filled with water or bottled water to stay hydrated, and a first aid kit, just in case. And don’t forget: Sometimes the best emergency tools at your disposal are your friends and neighbors.


  • richardmitnick 8:24 am on July 24, 2021 Permalink | Reply
    Tags: "20 Years Ago Steve Jobs Built the ‘Coolest Computer Ever.’ It Bombed", Apple Computer, The Power Mac G4 Cube, WIRED

    From WIRED : “20 Years Ago Steve Jobs Built the ‘Coolest Computer Ever.’ It Bombed” 

    From WIRED

    07.24.2020 [Re-issued 7.24.21]
    Steven Levy

    Plus: An interview from the archives, the most-read story in WIRED history, and bottled-up screams.

    The Power Mac G4 Cube, released in 2000 and discontinued in 2001, violated the wisdom of Jobs’ product plan. Photograph: Apple/Getty Images.

    The Plain View

    This month marks the 20th anniversary of the Power Mac G4 Cube, which debuted July 19, 2000. It also marks the 19th anniversary of Apple’s announcement that it was “putting the Cube on ice.” That’s not my joke; it’s Apple’s, straight from the headline of its July 3, 2001, press release that officially pulled the plug.

    The idea of such a quick turnaround was nowhere in the mind of Apple CEO Steve Jobs on the eve of the product’s announcement at that summer 2000 Macworld Expo. I was reminded of this last week, as I listened to a cassette tape recorded 20 years prior, almost to the day. It documented a two-hour session with Jobs in Cupertino, California, shortly before the launch. The main reason he had summoned me to Apple’s headquarters was sitting under the cover of a dark sheet of fabric on the long table in the boardroom of One Infinite Loop.

    “We have made the coolest computer ever,” he told me. “I guess I’ll just show it to you.”

    He yanked off the fabric, exposing an 8-inch stump of transparent plastic with a block of electronics suspended inside. It looked less like a computer than a toaster born from an immaculate conception between Philip K. Dick and Ludwig Mies van der Rohe. (But the fingerprints were, of course, Jony Ive’s.) Alongside it were two speakers encased in Christmas-ornament-sized, glasslike spheres.

    “The Cube,” Jobs said, in a stage whisper, hardly containing his excitement.

    He began by emphasizing that while the Cube was powerful, it was air-cooled. (Jobs hated fans. Hated them.) He demonstrated how it didn’t have a power switch, but could sense a wave of your hand to turn on the juice. He showed me how Apple had eliminated the tray that held CDs—with the Cube, you just hovered the disk over the slot and the machine inhaled it.

    And then he got to the plastics. It was as if Jobs had taken to heart that guy in The Graduate who gave career advice to Benjamin Braddock. “We are doing more with plastics than anyone else in the world,” he told me. “These are all specially formulated, and it’s all proprietary, just us. It took us six months just to formulate these plastics. They make bulletproof vests out of it! And it’s incredibly sturdy, and it’s just beautiful! There’s never been anything like that. How do you make something like that? Nobody ever made anything like that! Isn’t that beautiful? I think it’s stunning!”

    I admitted it was gorgeous. But I had a question for him. Earlier in the conversation, he had drawn Apple’s product matrix, four squares representing laptop and desktop, high and low end. Since returning to Apple in 1997, he had filled in all the quadrants with the iMac, Power Mac, iBook, and PowerBook. The Cube violated the wisdom of his product plan. It didn’t have the power features of the high-end Power Mac, like slots or huge storage. And it was way more expensive than the low-end iMac, even before you paid for the separate display that Cube owners needed. Knowing I was risking his ire, I asked him: Just who was going to buy this?

    Jobs didn’t miss a beat. “That’s easy!” he said. “A ton of people who are pros. Every designer is going to buy one.”

    Here was his justification for violating his matrix theory: “We realized there was an incredible opportunity to make something in the middle, sort of a love child, that was truly a breakthrough,” he said. The implicit message was that it was so great that people would alter their buying patterns to purchase one.

    That didn’t happen. For one thing, the price was prohibitive—by the time you bought the display, it was almost three times the price of an iMac and even more than some PowerMacs. By and large, people don’t spend their art budget on computers.

    That wasn’t the only issue with the G4 Cube. Those plastics were hard to manufacture, and people reported flaws. The air cooling had problems. If you left a sheet of paper on top of the device, it would shut down to prevent overheating. And because it had no On button, a stray wave of your hand would send the machine into action, like it or not.

    In any case, the G4 Cube failed to push buttons on the computer-buying public. Jobs told me it would sell millions. But Apple sold fewer than 150,000 units. The apotheosis of Apple design was also the apex of Apple hubris. Listening to the tape, I was struck by how much Jobs had been drunk on the elixir of aesthetics. “Do you really want to put a hole in this thing and put a button there?” Jobs asked me, justifying the lack of a power switch. “Look at the energy we put into this slot drive so you wouldn’t have a tray, and you want to ruin that and put a button in?”

    But here is something else about Jobs and the Cube that speaks not of failure but why he was a successful leader: Once it was clear that his Cube was a brick, he was quick to cut his losses and move on.

    In a 2017 talk at University of Oxford (UK), Apple CEO Tim Cook talked about the G4 Cube, which he described as “a spectacular commercial failure, from the first day, almost.” But Jobs’ reaction to the bad sales figures showed how quickly, when it became necessary, he could abandon even a product dear to his heart. “Steve, of everyone I’ve known in life,” Cook said at Oxford, “could be the most avid proponent of some position, and within minutes or days, if new information came out, you would think that he never ever thought that before.”

    But he did think it, and I have the tape to prove it. Happy birthday to Steve Jobs’ digital love child.

    Time Travel

    My July 2000 Newsweek article about the Cube came with a sidebar of excerpts from my interview with Steve Jobs. Here are a few:

    Levy: Last January you dropped the “interim” from your CEO title. Has this had any impact?

    Jobs: No, even when I first came and wasn’t sure how long I’d be here, I made decisions for the long term. The reason I finally changed the title was that it was becoming sort of a joke. And I don’t want anything at Apple to become a joke.

    Levy: Rumors have recirculated about you becoming CEO of Disney. Is there anything about running a media giant that appeals to you?

    Jobs: I was thinking of giving you a witty answer, like “Isn’t that what I’m doing now?” But no, it doesn’t appeal to me at all. I’m a product person. I believe it’s possible to express your feelings and your caring about things from your products, whether that product is a computer system or Toy Story 2. It’s wonderful to make a pure expression of something and then make a million copies. Like the G4 Cube. There will be a million copies of this out there.

    Levy: The G4 Cube reminds a lot of people that your previous company, NeXT, also made a cube-shaped machine.

    Jobs: Yeah, we did one before. Cubes are very efficient spaces. What makes this one [special] for me is not the fact that it’s a cube but it’s like a brain in a beaker. It’s just hanging from this perfectly clear, pristine crystal enclosure. That’s what’s so drop-dead about it. It’s incredibly functional. The whole thing is perfect.


  • richardmitnick 9:57 am on June 20, 2021 Permalink | Reply
    Tags: "The US Government Is Finally Moving at the Speed of Tech", WIRED

    From WIRED : Women in STEM-Lina Khan “The US Government Is Finally Moving at the Speed of Tech” 

    From WIRED

    Gilad Edelman

    Lina Khan’s ascendance to the top of the FTC, and a set of bipartisan antitrust proposals, shows just how much has changed in Washington—and how suddenly.

    Lina Khan’s overwhelming confirmation to the FTC is likely a harbinger of antitrust reform. Photograph: Saul Loeb/AFP/Bloomberg/Getty Images.

    In the summer of 2017, my boss at the Washington Monthly, a policy-focused magazine in DC, asked me to cover a bombshell story: the Democratic Party had included an anti-monopoly section in its “Better Deal” 2018 midterm agenda.

    I use the term “bombshell” ironically. The Monthly had been publishing meticulous stories about the tolls of lax antitrust enforcement for a decade, to little fanfare. Now, finally, people in power were paying attention. To the general public, some general statements about economic concentration in a document that hardly anyone paid attention to did not amount to a major story. But in our corner of the policy world, in 2017, it was a big deal merely to hear Chuck Schumer speak the word “antitrust.” My piece went on the cover.

    I’ve been thinking about that experience recently, as antitrust headlines seem to be everywhere. It is often suggested that law and government can never keep up with the pace of technology. And yet the events of the past few weeks suggest that the recent effort to regulate the biggest tech companies may be an exception to that rule. Amazon Prime membership didn’t exist until 2005, 11 years after Amazon’s founding, and didn’t hit even 20 million subscribers until 2013. Google was 10 years old when it launched the Chrome browser. Facebook had been around for eight years before it bought Instagram and 10 when it acquired WhatsApp.

    Now consider antitrust. Four years ago, Lina Khan was a month out of law school, where she had published a groundbreaking article arguing that the prevailing legal doctrine was allowing Amazon to get away with anticompetitive behavior. Antitrust law was not yet a high-profile issue, and Khan’s suggestion that it might apply to tech companies whose core consumer offerings were free or famously cheap was considered bizarre by much of the legal establishment. This week, Khan, at all of 32 years old, was appointed chair of the Federal Trade Commission, one of the two agencies with the most power to enforce competition law. Congress, meanwhile, has introduced a set of bills that represent the most ambitious bipartisan proposals to update antitrust law in decades, with the tech industry as their explicit target. Politics, in other words, may finally be moving at the speed of tech.

    In hindsight, what seems most remarkable about the Better Deal agenda is that it didn’t mention tech companies at all. Up to that point, the anti-monopoly movement in DC policy circles had been much more focused on traditional industries. Khan got her start writing about consolidation in businesses like meatpacking and Halloween candy. Silicon Valley still seemed politically untouchable. Taking on the likes of Facebook and Google, I wrote at the time, would “require angering some of the Democrats’ most important and deep-pocketed donors, something the party has not yet revealed an appetite for.”

    How did things change so quickly? There is no one smoking gun, but rather an accumulation of grievances that turned both Democrats and Republicans more and more against the tech companies. For Democrats, the key factor was the creeping sense that social media platforms, whatever the political leanings of their founders, had helped Donald Trump get elected. Facebook’s Cambridge Analytica scandal in 2018 supercharged those suspicions. Investigative reports, meanwhile, kept finding evidence that far-right and racist material was spreading on social media. At the same time—and in part as a reaction to social media platforms implementing more aggressive content moderation to mollify both advertisers and liberal critics—conservatives were growing ever more concerned that liberals in Silicon Valley were discriminating against them. And Republican politicians were picking up on the political potency of that talking point.

    The result is that we find ourselves living in a world that looks very different from the one we were living in just a few years ago. New antitrust cases against tech giants are popping up left and right, keeping the issue firmly in the public consciousness. The companies are devoting unprecedented sums toward lobbying, advocacy, and advertising to try to avert a crackdown. And in the sharpest break with the past, Congress and the White House are taking concrete steps to restructure markets that have been left to their own devices for two and a half decades.

    It’s all so much, so fast, that it’s hard to keep track of the various subplots. The introduction of the five House antitrust bills and the elevation of Khan to FTC chair, for example, look like two separate stories. But they’re really two parts of the same story: Khan was herself the key investigator behind the House antitrust subcommittee’s investigation of Apple, Amazon, Facebook, and Google, begun in 2019. The bills introduced last week are the fruits of that investigation. (While the time between the start of the investigation and the release of legislative proposals has felt like an eternity to those of us who follow this closely, it wouldn’t be bad for a Silicon Valley product launch. It took Amazon three years to bring the Kindle to market.)

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 2:01 pm on June 6, 2021 Permalink | Reply
    Tags: "Will a Volcanic Eruption Be a Burp or a Blast?", , Geldingadalur volcano - Iceland, Japan’s Ontake volcano., Kīlauea Volcano in Hawaii (US), La Soufrière- a volcano on the Caribbean island of St. Vincent., Mount Stromboli-one of three active volcanoes in Italy., New Zealand’s Whakaari volcano., Nyiragongo-a mountainous volcano in the Democratic Republic of Congo., Reykjanes volcano- Iceland, Scientists have begun to decipher the seismic signals that reveal how explosive a volcanic eruption is going to be., , WIRED   

    From WIRED : Women in STEM- Arianna Soldati NC State (US); Diana Roman Carnegie Institution for Science (US); Jackie Caplan-Auerbach Western Washington University (US) “Will a Volcanic Eruption Be a Burp or a Blast?” 

    From WIRED

    Robin George Andrews

    Scientists have begun to decipher the seismic signals that reveal how explosive a volcanic eruption is going to be.

Volcanoes such as the recent outburst in Iceland, seen here on May 24, can switch from effusive to explosive. Much depends on the consistency of the magma itself. Courtesy of Sigurjón Jónsson.

Last December, a gloopy ooze of lava began extruding from the summit of La Soufrière, a volcano on the Caribbean island of St. Vincent. The effusion was slow at first; no one was threatened. Then in late March and early April, the volcano began to emit seismic waves associated with swiftly rising magma. Noxious fumes vigorously vented from the peak.

    Fearing a magmatic bomb was imminent, scientists sounded the alarm, and the government ordered a full evacuation of the island’s north on April 8. The next day, the volcano began catastrophically exploding. The evacuation had come just in time: At the time of writing, no lives have been lost.

    Simultaneously, something superficially similar but profoundly different was happening up on the edge of the Arctic.

    Increasingly intense tectonic earthquakes had been rumbling beneath Iceland’s Reykjanes Peninsula since late 2019, strongly implying that the underworld was opening up, making space for magma to ascend. Early in 2021, as a subterranean serpent of magma migrated around the peninsula, looking for an escape hatch to the surface, the ground itself began to change shape. Then in mid-March, the first fissure of several snaked through the earth roughly where scientists expected it might, spilling lava into an uninhabited valley named Geldingadalur.

    Here, locals immediately flocked to the eruption, picnicking and posing for selfies a literal stone’s throw away from the lava flows. A concert recently took place there, with people treating the ridges like the seats of an amphitheater.

In both cases, scientists didn’t just accurately suggest a new eruption was on its way. They also forecast the two very different forms these eruptions would take. And while the “when” part of the equation is never easy to forecast, getting the “how” part right is even more challenging, especially in the case of the explosive eruption at La Soufrière. “That’s a tricky one, and they nailed it, they absolutely nailed it,” said Diana Roman, a volcanologist at Carnegie Institution for Science (US).

    Volcanologists have developed an increasingly detailed understanding of the conditions that are likely to produce an explosive eruption. The presence or absence of underground water matters, for instance, as does the gassiness and gloopiness of the magma itself. And in a recent series of studies, researchers have shown how to read hidden signals—from seismic waves to satellite observations—so that they may better forecast exactly how the eruption will develop: with a bang or a whimper.

    Something Wicked This Way Comes

    As with skyscrapers or cathedrals, the architectural designs of Earth’s volcanoes differ wildly. You can get tall and steep volcanoes, ultra-expansive and shallowly sloped volcanoes, and colossal, wide-open calderas. Sometimes there isn’t a volcano at all, but chains of small depressions or swarms of fissures scarring the earth like claw marks.

    Lava flows from the Geldingadalur volcano have been relatively languid and predictable. Photograph: Anton Brink/Anadolu Agency/Getty Images.

    Eruption forecasting asks a lot of questions. Chief among them is: When? At its core, this question is equivalent to asking when magma from below will travel up through a conduit (the pipe between the magma and the surface opening) and break through, as lava flows and ash, as volcanic glass and bombs.

    When magma ascends from depth, it can alter a volcano’s architecture, literally changing the shape of the land above. Migrating magma flows can also force rock apart, generating volcano-tectonic earthquakes. And when the pressure keeping magma trapped underground declines, it liberates trapped gas, which can escape to the surface.

    Eruption forecasters look for any of those three signs: changes in a volcano’s shape, its seismic soundscape, or its outgassing. If you spy changes in all three—changes that are clearly very different from the volcano’s everyday behavior—then “there is no doubt that something is going to happen,” said Maurizio Ripepe, a geophysicist at the University of Florence [Università degli Studi di Firenze] (IT). That something is often, eventually, an eruption.

    Change doesn’t always mean an uptick in activity. Most volcanoes get noisier and twitchier before erupting, but sometimes the opposite is true. Seismologists in Iceland, for example, recorded a drop in volcanic tremor immediately prior to the opening of Reykjanes’ first five fissures. When the sixth drop happened, said Thorbjörg Ágústsdóttir, a seismologist at Iceland Geosurvey [jarðmælingar á íslandi](IS), scientists forecast that a sixth fissure was about to appear—and they were right.

    The “How” of the Equation

    Increasingly, it’s also possible to forecast not just when or if a volcano will erupt, but how.

    Unspooling the history of each specific volcano is key, as individual volcanoes tend to have their own eruptive style. To find it, scientists will examine the geological strata around a volcano, forensically exhuming and examining the remains of old eruptions. The last eruption on Iceland’s Reykjanes Peninsula had occurred 800 years ago, long before the advent of modern science. But because of this sort of detective work, scientists knew that the eruptions there have always been relatively tranquil affairs. If a recent eruption history is available, one documented in real time by scientists, all the better; that’s why scientists knew La Soufrière was likely to speedily switch from an effusive to an explosive eruption style.

    The latest work on eruption forecasting goes far beyond these historical catalogs. Take Stromboli, a volcano barely sticking above the waters of the Tyrrhenian Sea. This picturesque isle spends much of its time exploding—usually small blasts that harm no one. After studying how it changes shape for two decades, Ripepe and his colleagues have determined that it inflates just before it explodes [Nature Communications]. Moreover, the exact change in shape reveals whether the blast will be major or minor. Since October 2019, the volcano has had an early warning system. It can detect the type of inflation indicative of the most extreme explosions, the sort that have killed people in the past, up to 10 minutes before the blast arrives.

Stromboli subtly inflates just before it explodes. Photograph: Bruno Guerreiro/Getty Images.

    Stromboli is a relatively simple volcano, though, one in which the plumbing from the magma to the skylight up top remains more or less open. “The magma movement does not generate any fractures. It just comes up,” Ripepe said.

    Most volcanoes are more complicated: They harbor a diverse array of magma types that need to force their way out of the volcano. That means they produce eruptions that “change a lot as they happen,” said Arianna Soldati, a volcanologist at North Carolina State University (US). Over the course of days, weeks, months, or years, an eruption can go back and forth between oozing and exploding. Is it possible to forecast these changes?

    Soldati, Roman, and their colleagues found a way to test this by looking to the Big Island of Hawaii. Kīlauea, near the island’s southeastern coast, had been continuously erupting in some form or another since 1983.

    But in the spring and summer of 2018, the volcano put on a hell of a show: The lava lake at its summit drained away, as if someone had pulled the plug from a bath; magma made its way underground to the eastern flanks of the volcano and tore open cracks in the earth, gushing out of them for three months straight, sometimes shooting skyward as tall fountains of molten rock.

    As this happened, the researchers took lava samples, concentrating on one feature in particular: viscosity. Gloopier, stickier magma traps more gas. When this viscous magma reaches the surface, its gas violently decompresses, creating an explosion. Runnier magma, by contrast, lets gas escape gradually, like a soda left unattended on a table.

    In 2018, the viscosity of the lava on Kīlauea kept changing. Older, colder magma was more viscous, while newly tapped magma from depth was hotter and more fluid.

    A study of the 2018 eruptions on Kīlauea, Hawai‘i, connected the consistency of the magma coming up to specific seismic signals. Courtesy of Cedric Letsch.

Kīlauea Volcano. U.S. Geological Survey.

    Roman and colleagues discovered that they could track these changes by monitoring the seismic waves emerging from the volcano and comparing them with the varying viscosity of the lava they sampled. For reasons yet to be determined, as runnier magma ascends, it forces the rocky walls on either side of it only a little bit apart. Gloopier magma, by contrast, exerts a strong force, pushing open a wider pathway. In a paper published this April in Nature, the researchers showed that they could use seismic waves, which differed depending on the way the rock was forced open, to forecast the change in the erupted lava’s viscosity hours to days in advance of that magma’s eruption.

    “Having found something that tells us, yes, if you have this kind of seismicity, viscosity is increasing, and if it’s above this threshold, it could be more explosive—that is super cool,” said Soldati. “For monitoring and hazards, this actually has the potential to be impactful now.”

    Nanoscopic Nuisances

    Many factors influence magma viscosity. One in particular has been overlooked, mostly because it’s nearly invisible.

    Danilo Di Genova, a geoscientist at the University of Bayreuth [Universität Bayreuth] (DE), studies nanolites—crystals about one-hundredth of the size of your average bacterium. They are thought to form at the top of the conduit as magma gushes up it. If you get enough of these crystals, they can lock up the magma, imprison trapped gas, and increase the viscosity. But unless you have very powerful microscopes to look at freshly erupted lava, they’ll be imperceptible.

Di Genova has long been interested in how nanolites form. His experiments using silicone oil—a proxy for basalt, a commonplace runny magma—showed that if just 3 percent of an oil-particle mixture is made of nano-size particles, the viscosity spikes.

    He then turned to the real thing. He and his colleagues attempted to simulate what magma would experience as it rose through a conduit to the surface. They subjected lab-melted basaltic rock from Mount Etna to gradual heating, pulses of sudden cooling, hydration, and dehydration. At times, they placed the magma inside a synchrotron, a type of particle accelerator. Within this contraption, powerful x-rays interact with a crystal’s atoms to reveal their properties and—if the crystals are small enough—their existence.

    As reported last year in Science Advances, the experiments gave the team a working model of how nanolites form. If an eruption begins and magma suddenly accelerates up through the conduit, it rapidly depressurizes. That lets water come out of the molten rock and form bubbles, which dehydrates the magma.

    This action changes the thermal properties of the magma, making it a lot easier for crystals to be present even at extremely high temperatures. If the magma’s ascent is sufficiently rapid and the magma is speedily dehydrated, a cornucopia of nanolites comes into being, which significantly increases the magma’s viscosity.

    This change doesn’t give off a noticeable signal. But merely knowing it exists, said Di Genova, may enable researchers to explain why volcanoes with otherwise runny magma, like Vesuvius or Etna, can sometimes produce epic explosions. Seismic signals can trace how quickly magma is ascending, so perhaps that may be used to forecast a last-minute nanolite population boom, one that leads to a catastrophic blast.

    Sweeping Away the Fog

    These advances aside, scientists are still a long way from replacing eruption probabilities with certainties.

One reason is that “most of the world’s volcanoes are not that well monitored,” said Seth Moran, a research seismologist at the US Geological Survey’s Cascades Volcano Observatory. This includes many of America’s Cascades volcanoes, several of which have a propensity for giant explosions. “It’s not easy to forecast an eruption even if there are sufficient instruments on the ground,” said Roman. “But it’s very, very difficult to forecast an eruption if there are no instruments on the volcano.”

    Another problem is that some eruptions currently have no clear-cut precursors. One notorious type is called a phreatic blast: Magma cooks overlying pockets of water, eventually triggering pressure-cooker-like detonations. One rocked New Zealand’s Whakaari volcano in December 2019, killing 22 people visiting the small island. Another shook Japan’s Ontake volcano in 2014, killing 63 hikers.

New Zealand’s Whakaari volcano gave no warning before it catastrophically exploded in December 2019, killing 22 people. Photograph: Westend61/Getty Images.

    A recent study led by Társilo Girona, a geophysicist at the University of Alaska, Fairbanks (US), found that satellites can detect gradual, year-over-year upticks in the thermal radiation coming off all sorts of volcanoes in the run-up to an eruption. A retrospective analysis showed that such a temperature increase was detected before Ontake’s 2014 phreatic explosion, with a peak around the time of the event.

    Perhaps monitoring from space will become the best way to see future phreatic eruptions coming. But so far, no successful long-term forecast of a phreatic eruption has taken place. “Phreatic eruptions are terrifying,” said Jackie Caplan-Auerbach, a volcanologist and seismologist at Western Washington University (US). “You really don’t know they’re coming.”

It’s not just explosions that can prove tricky to forecast. Nyiragongo, a mountainous volcano in the Democratic Republic of Congo, suddenly erupted on May 22 of this year, spilling fast-moving lava toward the city of Goma. Despite being monitored, the volcano gave no clear warning it was about to erupt, and several people perished.

    And no matter what type of eruption you are forecasting, the price of a false positive is crippling. “When you evacuate people and nothing happens, then the next evacuation is going to be orders of magnitude more difficult to get people to take seriously,” said Roman.

    But there are reasons to be optimistic. Scientists are grasping the physics underlying all volcanoes better than ever. Individual volcanoes are also becoming more familiar because of “a mixture of instinct and experience and learned knowledge,” said David Pyle, a volcanologist at the University of Oxford (UK). Soon, he predicts, machine learning programs, capable of identifying patterns in data faster than any human, will become a major player.

Certainty in eruption forecasting—the if, when, or how—will probably never come to pass. But day by day, the potentially deadly fog of uncertainty dissipates a little more, and someone who would have died a few decades ago during an eruption now gets to live.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 11:08 am on May 2, 2021 Permalink | Reply
    Tags: "High-Energy Cosmic Ray Sources Get Mapped Out for the First Time", , , , , High-energy neutrinos are produced when even-higher-energy cosmic rays scatter off light or matter in the environment where they’re created., Pierre Auger Observatory in Argentina., Telescope Array project located in the high desert in Millard County Utah (US), The question is: What’s out there in space doing the accelerating?, , WIRED, Zwicky Transient Factory   

    From WIRED : “High-Energy Cosmic Ray Sources Get Mapped Out for the First Time” 

    From WIRED

    Natalie Wolchover

    Starburst galaxies, active galactic nuclei, and tidal disruption events (from left) have emerged as top candidates for the dominant source of ultrahigh-energy cosmic rays.Photograph: Daniel Chang/Quanta Magazine.

    In the 1930s, the French physicist Pierre Auger placed Geiger counters along a ridge in the Alps and observed that they would sometimes spontaneously click at the same time, even when they were up to 300 meters apart. He knew that the coincident clicks came from cosmic rays, charged particles from space that bang into air molecules in the sky, triggering particle showers that rain down to the ground. But Auger realized that for cosmic rays to trigger the kind of enormous showers he was seeing, they must carry fantastical amounts of energy—so much, he wrote in 1939, that “it is actually impossible to imagine a single process able to give to a particle such an energy.”

    Upon constructing larger arrays of Geiger counters and other kinds of detectors, physicists learned that cosmic rays reach energies at least 100,000 times higher than Auger supposed.

A cosmic ray is just an atomic nucleus—a proton or a cluster of protons and neutrons. Yet the rare ones known as “ultrahigh-energy” cosmic rays have as much energy as professionally served tennis balls. They’re millions of times more energetic than the protons that hurtle around the circular tunnel of the Large Hadron Collider in Europe at 99.9999991 percent of the speed of light. In fact, the most energetic cosmic ray ever detected, nicknamed the “Oh-My-God particle,” struck the sky in 1991 going something like 99.99999999999999999999951 percent of the speed of light, giving it roughly the energy of a bowling ball dropped from shoulder height onto a toe. “You would have to build a collider as large as the orbit of the planet Mercury to accelerate protons to the energies we see,” said Ralph Engel, an astrophysicist at KIT – Karlsruhe Institute of Technology [Karlsruher Institut für Technologie] (DE) and the coleader of the world’s largest cosmic-ray observatory, the Pierre Auger Observatory in Argentina.
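Those speed and energy figures can be sanity-checked with a few lines of arithmetic. Here is a minimal sketch; the proton rest energy is a standard constant, while the bowling-ball mass and drop height are assumed round numbers, not values from the article:

```python
# Back-of-the-envelope check of the "Oh-My-God particle" numbers quoted above.
# Given the speed deficit 1 - v/c, the Lorentz factor of an ultra-relativistic
# particle is gamma ≈ 1/sqrt(2 * deficit). (Computing 1 - (v/c)**2 directly
# would underflow in floating point, so we work with the deficit itself.)
import math

deficit = 4.9e-24                  # 1 - v/c, from the quoted 99.99999999999999999999951 percent
gamma = 1.0 / math.sqrt(2.0 * deficit)

proton_rest_energy_J = 1.503e-10   # ~938 MeV, expressed in joules
E = gamma * proton_rest_energy_J
print(f"particle energy ≈ {E:.0f} J")

# Compare with a ~6 kg bowling ball dropped from ~0.8 m (assumed values)
print(f"bowling ball    ≈ {6.0 * 9.81 * 0.8:.0f} J")
```

The result, a few tens of joules carried by a single nucleus, is macroscopic, which is why the bowling-ball comparison in the article works at all.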

    The question is: What’s out there in space doing the accelerating?

Supernova explosions are now thought to be capable of producing the astonishingly energetic cosmic rays that Auger first observed 82 years ago. Yet supernovas can’t possibly yield the far more astonishing particles that have been seen since. The origins of these ultrahigh-energy cosmic rays remain uncertain. But a series of recent advances has significantly narrowed the search.

    In 2017, the Auger Observatory announced a major discovery [Science]. With its 1,600 particle detectors and 27 telescopes dotting a patch of Argentinian prairie the size of Rhode Island, the observatory had recorded the air showers of hundreds of thousands of ultrahigh-energy cosmic rays over the previous 13 years.

    The team reported that 6 percent more of the rays come from one half of the sky than the other—the first pattern ever definitively detected in the arrival directions of cosmic rays.
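A hemispheric excess like this is the signature of a dipole in the arrival-direction distribution (for a small dipole amplitude d, one half of the sky receives roughly d more rays than the other). As a toy illustration only, and not the Auger collaboration’s actual analysis, such an amplitude can be recovered from a set of arrival directions with the standard mean-vector estimator, 3 times the magnitude of the average unit vector:

```python
# Toy dipole-anisotropy estimate: simulate arrival directions with
# flux proportional to 1 + d*cos(theta), then recover d as
# 3 * |mean unit vector|. Illustrative sketch with assumed numbers,
# not the Auger collaboration's method or data.
import math
import random

random.seed(1)
d_true = 0.065                     # assumed amplitude, consistent with the quoted ~6 percent excess

def sample_direction():
    """Rejection-sample a unit vector from pdf proportional to 1 + d_true*z."""
    while True:
        z = random.uniform(-1.0, 1.0)                 # cos(theta), uniform on the sphere
        if random.random() < (1.0 + d_true * z) / (1.0 + d_true):
            phi = random.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - z * z)
            return (s * math.cos(phi), s * math.sin(phi), z)

n = 200_000                        # toy event count
sx = sy = sz = 0.0
for _ in range(n):
    x, y, z = sample_direction()
    sx += x; sy += y; sz += z

# For a 1 + d*cos(theta) flux, the mean unit vector has length d/3,
# so multiplying by 3 recovers the dipole amplitude.
d_est = 3.0 * math.sqrt(sx**2 + sy**2 + sz**2) / n
print(f"recovered dipole amplitude ≈ {d_est:.3f}")
```

With this many simulated events, the estimator lands close to the injected 0.065; the real measurement is much harder because detector exposure is uneven across the sky.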

    Recently, three theorists at New York University (US) offered an elegant explanation for the imbalance that experts see as highly convincing. The new paper [The Astrophysical Journal], by Chen Ding, Noémie Globus, and Glennys Farrar, implies that ultra-powerful cosmic-ray accelerators are ubiquitous, cosmically speaking, rather than rare.

    The Auger Observatory and the Telescope Array in Utah have also detected smaller, subtler cosmic ray “hot spots” in the sky—presumably the locations of nearby sources. Certain candidate objects sit at the right locations.

    More clues have arrived in the form of super-energetic neutrinos, which are produced by ultrahigh-energy cosmic rays. Collectively, the recent discoveries have focused the search for the universe’s ultra-powerful accelerators on three main contenders. Now theorists are busy modeling these astrophysical objects to see whether they’re indeed capable of flinging fast-enough particles toward us, and if so, how.

    These speculations are brand-new and unconstrained by any data. “If you go to high energies, things are really unexplored,” Engel said. “You really go somewhere where everything is blank.”

    A Fine Imbalance

    To know what’s making ultrahigh-energy cosmic rays, step one is to see where they’re coming from. The trouble is that, because the particles are electrically charged, they don’t travel here in straight lines; their paths bend as they pass through magnetic fields.
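The scale of that bending is set by the particle’s Larmor radius, roughly E/(ZeBc) for an ultra-relativistic nucleus of charge Z. A rough sketch, where the ~3 microgauss galactic field strength is an assumed typical value, not a figure from the article:

```python
# Rough Larmor-radius estimate for an ultrahigh-energy cosmic ray.
# For a nucleus of charge number Z and energy E in magnetic field B,
# r_L ≈ E / (Z * e * B * c). The field strength is an assumption.
E_eV = 1e20                 # cosmic-ray energy in eV
Z = 1                       # proton
e = 1.602e-19               # elementary charge, C
c = 3.0e8                   # speed of light, m/s
B = 3e-10                   # ~3 microgauss, in tesla
kpc = 3.086e19              # kiloparsec in meters

E_joules = E_eV * e
r_L = E_joules / (Z * e * B * c)   # meters
print(f"Larmor radius ≈ {r_L / kpc:.0f} kpc")
```

The answer comes out at a few tens of kiloparsecs, comparable to the size of the Milky Way itself, which is why the very highest-energy protons bend only gently while lower-energy or higher-charge nuclei have their directions scrambled far more.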

    Moreover, the ultrahigh-energy particles are rare, striking each square kilometer of Earth’s sky only about once per year. Identifying any pattern in their arrival directions requires teasing out subtle statistical imbalances from a huge data set.

    No one knew how much data would be needed before patterns would emerge. Physicists spent decades building ever larger arrays of detectors without seeing even a hint of a pattern. Then in the early 1990s, the Scottish astrophysicist Alan Watson and the American physicist Jim Cronin decided to go really big. They embarked on what would become the 3,000-square-kilometer Auger Observatory.

    Finally, that was enough. When the Auger team reported in Science in 2017 that it had detected a 6 percent imbalance between two halves of the sky—where an excess of particles from one particular direction in the sky smoothly transitioned into a deficit centered in the opposite direction—“that was fantastically exciting,” said Watson. “I’ve worked in this field for a very, very long time”—since the 1960s—“and this is the first time we’ve had an anisotropy.”

    Illustration: Samuel Velasco/Quanta Magazine; Source: arxiv.org/pdf/2101.04564 [above].

    But the data was also puzzling. The direction of the cosmic-ray excess was nowhere near the center of the Milky Way galaxy, supporting the long-standing hypothesis that ultrahigh-energy cosmic rays come from outside the galaxy. But it was nowhere near anything. It didn’t correspond to the location of some powerful astrophysical object like a supermassive black hole in a neighboring galaxy. It wasn’t the Virgo cluster, the dense nearby concentration of galaxies. It was just a dull, dark spot near the constellation Canis Major.

    Noémie Globus, then a postdoc at the Hebrew University of Jerusalem (IL), immediately saw a way to explain the pattern. She began by making a simplification: that every bit of matter in the universe has equal probability of producing some small number of ultrahigh-energy cosmic rays. She then mapped out how those cosmic rays would bend slightly as they emanate from nearby galaxies, galaxy groups, and clusters—collectively known as the large-scale structure of the cosmos—and travel here through the weak magnetic fields of intergalactic space. Naturally, her pretend map was just a blurry picture of the large-scale structure itself, with the highest concentration of cosmic rays coming from Virgo.

    Her cosmic-ray excess wasn’t in the right spot to explain Auger’s data, but she thought she knew why: because she hadn’t adequately accounted for the magnetic field of the Milky Way. In 2019, Globus moved to NYU to work with the astrophysicist Glennys Farrar, whose 2012 model of the Milky Way’s magnetic field, developed with her then graduate student Ronnie Jansson, remains state of the art. Although no one yet understands why the galaxy’s magnetic field is shaped the way it is, Farrar and Jansson inferred its geometry from 40,000 measurements of polarized light. They ascertained that magnetic field lines arc both clockwise and counterclockwise along the spiral arms of the galaxy and emanate vertically from the galactic disk, twisting as they rise.

    Farrar’s graduate student Chen Ding wrote code that refined Globus’ map of ultrahigh-energy cosmic rays coming from the large-scale structure, then passed this input through the distorting lens of the galactic magnetic field as modeled by Farrar and Jansson. “And lo and behold we get this remarkable agreement with the observations,” Farrar said.

    Virgo-originating cosmic rays bend around in the galaxy’s twisting field lines so that they strike us from the direction of Canis Major, where Auger sees the center of its excess. The researchers analyzed how the resulting pattern would change for cosmic rays of different energies. They consistently found a close match with different subsets of Auger’s data.

    The researchers’ “continuous model” of the origins of ultrahigh-energy cosmic rays is a simplification—every piece of matter does not emit ultrahigh-energy cosmic rays. But its striking success reveals that the actual sources of the rays are abundant and spread evenly throughout all matter, tracing the large-scale structure. The study, which will appear in The Astrophysical Journal Letters [above], has garnered widespread praise. “This is really a fantastic step,” Watson said.

    Immediately, certain stocks have risen: in particular, three types of candidate objects that thread the needle of being relatively common in the cosmos yet potentially special enough to yield Oh-My-God particles.

    Icarus Stars

    In 2008, Farrar and a coauthor proposed that cataclysms called tidal disruption events (TDEs) might be the source of ultrahigh-energy cosmic rays.

    A TDE occurs when a star pulls an Icarus and gets too close to a supermassive black hole. The star’s front feels so much more gravity than its back that the star gets ripped to smithereens and swirls into the abyss. The swirling lasts about a year. While it lasts, two jets of material—the subatomic shreds of the disrupted star—shoot out from the black hole in opposite directions. Shock waves and magnetic fields in these beams might then conspire to accelerate nuclei to ultrahigh energies before slingshotting them into space.

    Tidal disruption events occur roughly once every 100,000 years in every galaxy, which is the cosmological equivalent of happening everywhere all the time. Since galaxies trace the matter distribution, TDEs could explain the success of Ding, Globus, and Farrar’s continuous model.

    Moreover, the relatively brief flash of a TDE solves other puzzles. By the time a TDE’s cosmic ray reaches us, the TDE will have been dark for thousands of years. Other cosmic rays from the same TDE might take separate bent paths; some might not arrive for centuries. The transient nature of a TDE could explain why there seems to be so little pattern to cosmic rays’ arrival directions, with no strong correlations with the positions of known objects. “I’m inclined now to believe they are transients, mostly,” Farrar said of the rays’ origins.

    The TDE hypothesis got another boost recently, from an observation reported in Nature Astronomy in February.

Robert Stein, one of the paper’s authors, was operating a telescope in California called the Zwicky Transient Facility in October 2019 when an alert came in from the IceCube neutrino observatory in Antarctica. IceCube had spotted a particularly energetic neutrino.

    High-energy neutrinos are produced when even-higher-energy cosmic rays scatter off light or matter in the environment where they’re created. Luckily, the neutrinos, being neutral, travel to us in straight lines, so they point directly back to the source of their parent cosmic ray.

    Stein swiveled the telescope in the arrival direction of IceCube’s neutrino. “We immediately saw there was a tidal disruption event from the position that the neutrino had arrived from,” he said.

    The correspondence makes it more likely that TDEs are at least one source of ultrahigh-energy cosmic rays. However, the neutrino’s energy was probably too low to prove that TDEs produce the very highest-energy rays. Some researchers strongly question whether these transients can accelerate nuclei to the extreme end of the observed energy spectrum; theorists are still exploring how the events might accelerate particles in the first place.

    Meanwhile, other facts have turned some researchers’ attention elsewhere.

    Starburst Superwinds

    Cosmic-ray observatories such as Auger and the Telescope Array have also found a few hot spots—small, subtle concentrations in the arrival directions of the very highest-energy cosmic rays. In 2018, Auger published the results of a comparison of its hot spots to the locations of astrophysical objects within a few hundred million light-years of here. (Cosmic rays from farther away would lose too much energy in mid-journey collisions.)

    In the cross-correlation contest, no type of object performed exceptionally well—understandably, given the deflection cosmic rays experience. But the strongest correlation surprised many experts: About 10 percent of the rays came from within 13 degrees of the directions of so-called “starburst galaxies.” “They were not on my plate originally,” said Michael Unger of the Karlsruhe Institute of Technology, a member of the Auger team.

    Illustration: Samuel Velasco/Quanta Magazine; Source: arxiv.org/pdf/2101.04564.

    No one was more thrilled than Luis Anchordoqui, an astrophysicist at Lehman College of the City University of New York (US), who proposed starburst galaxies as the origin of ultrahigh-energy cosmic rays in 1999 [Physical Review D]. “I can be kind of biased on these things because I was the one proposing the model that now the data is pointing to,” he said.

    Starburst galaxies constantly churn out huge numbers of massive stars. These stars live fast and die young in supernova explosions, and Anchordoqui argues that the “superwind” formed by the collective shock waves of all the supernovas is what accelerates cosmic rays to the mind-boggling speeds that we detect.

    Not everyone is sure that this mechanism would work. “The question is: How fast are those shocks?” said Frank Rieger, an astrophysicist at Ruprecht Karl University of Heidelberg [Ruprecht-Karls-Universität Heidelberg] (DE). “Should I expect those to go to the highest energies? At the moment I am doubtful about it.”

    Other researchers argue that objects inside starburst galaxies might be acting as cosmic-ray accelerators, and that the cross-correlation study is simply picking up on an abundance of these other objects. “As a person who thinks of transient events as a natural source, those are very enriched in starburst galaxies, so I have no trouble,” said Farrar.

    Active Galaxies

    In the cross-correlation study, another kind of object performed almost but not quite as well as starburst galaxies: objects called active galactic nuclei, or AGNs.

    AGNs are the white-hot centers of “active” galaxies, in which plasma engulfs the central supermassive black hole. The black hole sucks the plasma in while shooting out enormous, long-lasting jets.

    The high-power members of an especially bright subset called “radio-loud” AGNs are the most luminous persistent objects in the universe, so they’ve long been leading candidates for the source of ultrahigh-energy cosmic rays.

    However, these powerful radio-loud AGNs are too rare in the cosmos to pass the Ding, Globus and Farrar test: They couldn’t possibly be tracers for the large-scale structure. In fact, within our cosmic neighborhood, there are almost none. “They’re nice sources but not in our backyard,” Rieger said.

    Less powerful radio-loud AGNs are much more common and could potentially resemble the continuous model. Centaurus A, for instance, the nearest radio-loud AGN, sits right at the Auger Observatory’s most prominent hot spot. (So does a starburst galaxy.)

    For a long time, Rieger and other specialists struggled to get low-power AGNs to accelerate protons to Oh-My-God-particle levels. But a recent finding has brought them “back in the game,” he said.

    Astrophysicists have long known that about 90 percent of all cosmic rays are protons (that is, hydrogen nuclei); another 9 percent are helium nuclei. The rays can be heavier nuclei such as oxygen or even iron, but experts long assumed that these would get ripped apart by the violent processes needed to accelerate ultrahigh-energy cosmic rays.

    Then, in surprising findings in the early 2010s, Auger Observatory scientists inferred from the shapes of the air showers that ultrahigh-energy rays are mostly middleweight nuclei, such as carbon, nitrogen and silicon. These nuclei will achieve the same energy as protons while traveling at lower speeds. And that, in turn, makes it easier to imagine how any of the candidate cosmic accelerators might work.
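The point about heavier nuclei reaching the same energy at lower speeds is plain relativistic kinematics: total energy is E = γmc², so at fixed E a nucleus with more rest mass needs a smaller Lorentz factor γ. A back-of-the-envelope sketch (the 10²⁰ eV benchmark is illustrative, not a measured event):

```python
# Approximate rest energies in eV.
PROTON_REST_E = 938.3e6    # ~0.94 GeV for a proton
CARBON12_REST_E = 11.18e9  # ~11.2 GeV for a carbon-12 nucleus

E_TOTAL = 1e20  # eV, roughly an ultrahigh-energy cosmic ray

# E = gamma * m * c^2  =>  gamma = E / (m * c^2)
gamma_proton = E_TOTAL / PROTON_REST_E
gamma_carbon = E_TOTAL / CARBON12_REST_E

print(f"proton: gamma ~ {gamma_proton:.2e}")
print(f"carbon: gamma ~ {gamma_carbon:.2e}")
# The carbon nucleus carries the same total energy at a Lorentz factor
# roughly 12 times smaller -- i.e., at a (slightly) lower speed.
```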

    For example, Rieger has identified a mechanism that would allow low-power AGNs to accelerate heavier cosmic rays to ultrahigh energies: A particle could drift from side to side in an AGN’s jet, getting kicked each time it reenters the fastest part of the flow. “In that case they find they can do that with the low-power radio sources,” Rieger said. “Those would be much more in our backyard.”

    Another paper explored whether tidal disruption events would naturally produce middleweight nuclei. “The answer is that it could happen if the stars that are disrupted are white dwarfs,” said Cecilia Lunardini, an astrophysicist at Arizona State University (US) who co-authored the paper. “White dwarfs have this sort of composition—carbon, nitrogen.” Of course, TDEs can happen to any “unfortunate star,” Lunardini said. “But there are lots of white dwarfs, so I don’t see this as something very contrived.”

    Researchers continue to explore the implications of the highest-energy cosmic rays being on the heavy side. But they can agree that it makes the problem of how to accelerate them easier. “The heavy composition towards higher energy relaxes things much more,” Rieger said.

    Primary Source

    As the short list of candidate accelerators crystallizes, the search for the right answer will continue to be led by new observations. Everyone is excited for AugerPrime, an upgraded observatory; starting later this year, it will identify the composition of each individual cosmic ray event, rather than estimating the overall composition. That way, researchers can isolate the protons, which deflect the least on their way to Earth, and look back at their arrival directions to identify individual sources. (These sources would presumably produce the heavier nuclei as well.)

    Many experts suspect that a mix of sources might contribute to the ultrahigh-energy cosmic-ray spectrum. But they generally expect one source type to dominate, and only one to reach the extreme end of the spectrum. “My money is on that it’s only one,” said Unger.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 2:02 pm on February 28, 2021 Permalink | Reply
    Tags: "A Decades-Long Quest Reveals New Details of Antimatter", Around 1970 researchers at DOE’s SLAC Accelerator Laboratory seemed to triumphantly confirm the quark model when they shot high-speed electrons at protons and saw the electrons ricochet off objects , At DOE's Brookhaven National Laboratory’s planned Electron-Ion Collider experimenters will probe the spin of the proton sea., “NuSea” experiment at DOE's Fermi National Accelerator Laboratory., , Complications arise because gluons feel the very force that they carry., Down antiquarks seemed to significantly outnumber up antiquarks., For a brief period around half a century ago physicists thought they had the proton sorted., In 1964 Murray Gell-Mann and George Zweig independently proposed what became known as the quark model—the idea that protons neutrons and related rarer particles are bundles of three quarks., In 1991 the New Muon Collaboration in Geneva scattered muons-the heavier siblings of electrons off of protons and deuterons., In reality the proton’s interior swirls with a fluctuating number of six kinds of quarks-their oppositely charged antimatter counterparts (antiquarks) and “gluon” particles that bind the others , It often goes unmentioned that protons-the positively charged matter particles at the center of atoms-are part antimatter., , , Pions and other mesons are made of one quark and one antiquark., QCD or quantum chromodynamics formulated in 1973 describes the “strong force” the strongest force of nature in which particles called gluons connect bundles of quarks., SeaQuest experiment at DOE's Fermi National Accelerator Laboratory, Self-dealing gluons render the QCD equations generally unsolvable., Since the 1940s physicists have seen protons and neutrons passing pions back and forth inside atomic nuclei like teammates tossing basketballs to each other-an activity that helps link them together., The proton morphs into a neutron and a pion made of one up quark and one down antiquark., 
Twenty years ago physicists began investigating a mysterious asymmetry inside the proton. Their results show how antimatter helps stabilize every atom’s core., We learn that a proton is a bundle of three elementary particles-quarks-two “up” quarks and a “down” quark with electric charges (+2/3 and −1/3 respectively) making its charge of +1., WIRED   

    From WIRED: “A Decades-Long Quest Reveals New Details of Antimatter” 

    From WIRED

    Natalie Wolchover

    Twenty years ago physicists began investigating a mysterious asymmetry inside the proton. Their results show how antimatter helps stabilize every atom’s core.

    From a distance, a proton appears to be made out of three particles called quarks. But look closer, and a sea of particles pops in and out of existence. Video credit: Olena Shmahalo/Quanta Magazine.

    It often goes unmentioned that protons, the positively charged matter particles at the center of atoms, are part antimatter.

    We learn in school that a proton is a bundle of three elementary particles called quarks—two “up” quarks and a “down” quark whose electric charges (+2/3 and −1/3 respectively) combine to give the proton its charge of +1. But that simplistic picture glosses over a far stranger, as-yet-unresolved story.
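That textbook charge bookkeeping is simple enough to verify with exact fractions:

```python
from fractions import Fraction

up = Fraction(2, 3)     # electric charge of an up quark
down = Fraction(-1, 3)  # electric charge of a down quark

proton = up + up + down      # two ups and a down
neutron = up + down + down   # one up and two downs

print(proton, neutron)  # 1 0
```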

    In reality, the proton’s interior swirls with a fluctuating number of six kinds of quarks, their oppositely charged antimatter counterparts (antiquarks), and “gluon” particles that bind the others together, morph into them, and readily multiply. Somehow, the roiling maelstrom winds up perfectly stable and superficially simple—mimicking, in certain respects, a trio of quarks. “How it all works out, that’s quite frankly something of a miracle,” said Donald Geesaman, a nuclear physicist at Argonne National Laboratory in Illinois.

    Thirty years ago, researchers discovered a striking feature of this “proton sea.” Theorists had expected it to contain an even spread of different types of antimatter; instead, down antiquarks seemed to significantly outnumber up antiquarks. Then, a decade later, another group saw hints of puzzling variations in the down-to-up antiquark ratio. But the results were right on the edge of the experiment’s sensitivity.

    So, 20 years ago, Geesaman and a colleague, Paul Reimer, embarked on a new experiment to investigate. That experiment, called SeaQuest, has finally finished, and the researchers report their findings in the journal Nature. They measured the proton’s inner antimatter in more detail than ever before, finding that there are, on average, 1.4 down antiquarks for every up antiquark.

    Credit: Samuel Velasco/Quanta Magazine.

    The data immediately favors two theoretical models of the proton sea. “This is the first real evidence backing up those models that has come out,” said Reimer.

    One is the “pion cloud” model, a popular, decades-old approach that emphasizes the proton’s tendency to emit and reabsorb particles called pions, which belong to a group of particles known as mesons. The other model, the so-called statistical model, treats the proton like a container full of gas.

    Planned future experiments will help researchers choose between the two pictures. But whichever model is right, SeaQuest’s hard data about the proton’s inner antimatter will be immediately useful, especially for physicists who smash protons together at nearly light speed in Europe’s Large Hadron Collider. When they know exactly what’s in the colliding objects, they can better pick through the collision debris looking for evidence of new particles or effects. Juan Rojo of Vrije University of Amsterdam [Vrije Universiteit Amsterdam](NL), who helps analyze LHC data, said the SeaQuest measurement “could have a big impact” on the search for new physics, which is currently “limited by our knowledge of the proton structure, in particular of its antimatter content.”

    Three’s Company

    For a brief period around half a century ago, physicists thought they had the proton sorted.

    In 1964, Murray Gell-Mann and George Zweig independently proposed what became known as the quark model—the idea that protons, neutrons, and related rarer particles are bundles of three quarks (as Gell-Mann dubbed them), while pions and other mesons are made of one quark and one antiquark. The scheme made sense of the cacophony of particles spraying from high-energy particle accelerators, since their spectrum of charges could all be constructed out of two- and three-part combos. Then, around 1970, researchers at Stanford’s SLAC accelerator seemed to triumphantly confirm the quark model when they shot high-speed electrons at protons and saw the electrons ricochet off objects inside [Science].

    But the picture soon grew murkier. “As we started trying to measure the properties of those three quarks more and more, we discovered that there were some additional things going on,” said Chuck Brown, an 80-year-old member of the SeaQuest team at the Fermi National Accelerator Laboratory who has worked on quark experiments since the 1970s.

    Scrutiny of the three quarks’ momentum indicated that their masses accounted for a minor fraction of the proton’s total mass. Furthermore, when SLAC shot faster electrons at protons, researchers saw the electrons ping off of more things inside. The faster the electrons, the shorter their wavelengths, which made them sensitive to more fine-grained features of the proton, as if they’d cranked up the resolution of a microscope. More and more internal particles were revealed, seemingly without limit. There’s no highest resolution “that we know of,” Geesaman said.

    The results began to make more sense as physicists worked out the true theory that the quark model only approximates: quantum chromodynamics, or QCD. Formulated in 1973, QCD describes the “strong force,” the strongest force of nature, in which particles called gluons connect bundles of quarks.

    QCD predicts the very maelstrom that scattering experiments observed. The complications arise because gluons feel the very force that they carry. (They differ in this way from photons, which carry the simpler electromagnetic force.) This self-dealing creates a quagmire inside the proton, giving gluons free rein to arise, proliferate and split into short-lived quark-antiquark pairs. From afar, these closely spaced, oppositely charged quarks and antiquarks cancel out and go unnoticed. (Only three unbalanced “valence” quarks—two ups and a down—contribute to the proton’s overall charge.) But physicists realized that when they shot in faster electrons, they were hitting the small targets.

    Yet the oddities continued.

    Self-dealing gluons render the QCD equations generally unsolvable, so physicists couldn’t—and still can’t—calculate the theory’s precise predictions. But they had no reason to think gluons should split more often into one type of quark-antiquark pair—the down type—than the other. “We would expect equal amounts of both to be produced,” said Mary Alberg, a nuclear theorist at Seattle University, explaining the reasoning at the time.

    Hence the shock when, in 1991, the New Muon Collaboration in Geneva scattered muons, the heavier siblings of electrons, off of protons and deuterons (consisting of one proton and one neutron), compared the results, and inferred [Physical Review Letters] that more down antiquarks than up antiquarks seemed to be splashing around in the proton sea.

    Proton Parts

    Theorists soon came out with a number of possible ways to explain the proton’s asymmetry.

    One involves the pion. Since the 1940s, physicists have seen protons and neutrons passing pions back and forth inside atomic nuclei like teammates tossing basketballs to each other, an activity that helps link them together. In mulling over the proton, researchers realized that it can also toss a basketball to itself—that is, it can briefly emit and reabsorb a positively charged pion, turning into a neutron in the meantime. “If you’re doing an experiment and you think you’re looking at a proton, you’re fooling yourself, because some of the time that proton is going to fluctuate into this neutron-pion pair,” said Alberg.

    Specifically, the proton morphs into a neutron and a pion made of one up quark and one down antiquark. Because this phantasmal pion has a down antiquark (a pion containing an up antiquark can’t materialize as easily), theorists such as Alberg, Gerald Miller and Tony Thomas argued that the pion cloud idea explains the proton’s measured down antiquark surplus.

    Credit: Samuel Velasco/Quanta Magazine.

    Several other arguments emerged as well. Claude Bourrely and collaborators in France developed the statistical model, which treats the proton’s internal particles as if they’re gas molecules in a room, whipping about at a distribution of speeds that depend on whether they possess integer or half-integer amounts of angular momentum. When tuned to fit data from numerous scattering experiments, the model divined a down-antiquark excess.

    The models did not make identical predictions. Much of the proton’s total mass comes from the energy of individual particles that burst in and out of the proton sea, and these particles carry a range of energies. Models made different predictions for how the ratio of down and up antiquarks should change as you count antiquarks that carry more energy. Physicists measure a related quantity called the antiquark’s momentum fraction.

    When the “NuSea” experiment at Fermilab measured [Nuclear Physics B – Proceedings Supplements] the down-to-up ratio as a function of antiquark momentum in 1999, their answer “just lit everybody up,” Alberg recalled. The data suggested that among antiquarks with ample momentum—so much, in fact, that they were right on the end of the apparatus’s range of detection—up antiquarks suddenly became more prevalent than downs. “Every theorist was saying, ‘Wait a minute,’” said Alberg. “Why, when those antiquarks get a bigger share of the momentum, should this curve start to turn over?”

    As theorists scratched their heads, Geesaman and Reimer, who worked on NuSea and knew that the data on the edge sometimes isn’t trustworthy, set out to build an experiment that could comfortably explore a larger antiquark momentum range. They called it SeaQuest.

    Junk Spawned

    Long on questions about the proton but short on cash, they started assembling the experiment out of used parts. “Our motto was: Reduce, reuse, recycle,” Reimer said.

    They acquired some old scintillators from a lab in Hamburg, leftover particle detectors from Los Alamos National Laboratory, and radiation-blocking iron slabs first used in a cyclotron at Columbia University in the 1950s. They could repurpose NuSea’s room-size magnet, and they could run their new experiment off of Fermilab’s existing proton accelerator. The Frankenstein assemblage was not without its charms. The beeper indicating when protons were flowing into their apparatus dated back five decades, said Brown, who helped find all the pieces. “When it beeps, it gives you a warm feeling in your tummy.”

    The nuclear physicist Paul Reimer (left) amid SeaQuest, an experiment at Fermilab assembled mostly out of used parts. Credit: DOE’s Fermi National Accelerator Laboratory.

    Gradually they got it working. In the experiment, protons strike two targets: a vial of hydrogen, which is essentially protons, and a vial of deuterium—atoms with one proton and one neutron in the nucleus.

    When a proton hits either target, one of its valence quarks sometimes annihilates with one of the antiquarks in the target proton or neutron. “When annihilation occurs, it has a unique signature,” Reimer said, yielding a muon and an antimuon. These particles, along with other “junk” produced in the collision, then encounter those old iron slabs. “The muons can go through; everything else stops,” he said. By detecting the muons on the other side and reconstructing their original paths and speeds, “you can work backwards to work out what momentum fraction the antiquarks carry.”

    Because protons and neutrons mirror each other—each has up-type particles in place of the other’s down-type particles, and vice versa—comparing the data from the two vials directly indicates the ratio of down antiquarks to up antiquarks in the proton—directly, that is, after 20 years of work.

    In 2019, Alberg and Miller calculated [Physical Review C] what SeaQuest should observe based on the pion cloud idea. Their prediction matches the new SeaQuest data well.

    The new data—which shows a gradually rising, then plateauing, down-to-up ratio, not a sudden reversal—also agrees with Bourrely and company’s more flexible statistical model. Yet Miller calls this rival model “descriptive, rather than predictive,” since it’s tuned to fit data rather than to identify a physical mechanism behind the down antiquark excess. By contrast, “the thing I’m really proud of in our calculation is that it was a true prediction,” Alberg said. “We didn’t dial any parameters.”

    In an email, Bourrely argued that “the statistical model is more powerful than that of Alberg and Miller,” since it accounts for scattering experiments in which particles both are and aren’t polarized. Miller vehemently disagreed, noting that pion clouds explain not only the proton’s antimatter content but various particles’ magnetic moments, charge distributions and decay times, as well as the “binding, and therefore existence, of all nuclei.” He added that the pion mechanism is “important in the broad sense of why do nuclei exist, why do we exist.”

    In the ultimate quest to understand the proton, the deciding factor might be its spin, or intrinsic angular momentum. A muon scattering experiment in the late 1980s showed that the spins of the proton’s three valence quarks account for no more than 30 percent of the proton’s total spin. The “proton spin crisis” is: What contributes the other 70 percent? Once again, said Brown, the Fermilab old-timer, “something else must be going on.”

    At Fermilab, and eventually at Brookhaven National Laboratory’s planned Electron-Ion Collider, experimenters will probe the spin of the proton sea.

    Electron-Ion Collider (EIC) at DOE’s Brookhaven National Laboratory, to be built inside the tunnel that currently houses the Relativistic Heavy Ion Collider [RHIC].

    Already Alberg and Miller are working on calculations of the full “meson cloud” surrounding protons, which includes, along with pions, rarer “rho mesons.” Pions don’t possess spin, but rho mesons do, so they must contribute to the overall spin of the proton in a way Alberg and Miller hope to determine.

    Fermilab’s SpinQuest experiment, involving many of the same people and parts as SeaQuest, is “almost ready to go,” Brown said. “With luck we’ll take data this spring; it will depend”—at least, partly—“on the progress of the vaccine against the virus. It’s sort of amusing that a question this deep and obscure inside the nucleus is depending on the response of this country to the Covid virus. We’re all interconnected, aren’t we?”


  • richardmitnick 1:47 pm on January 31, 2021 Permalink | Reply
    Tags: "How Universes Might Bubble Up and Collide", , , “True” vacuum states vs “False” vacuum states, , BEC-Bose-Einstein condensate, , , False vacuum decay: It’s this process that may have started our cosmos with a bang., Quantum annealer: a limited quantum computer., The difference between bubble universes and their surroundings comes down to the energy of space itself., To understand how universes might inflate and bump into each other in the hypothetical multiverse physicists are studying digital and physical analogs of the process., WIRED   

    From WIRED: “How Universes Might Bubble Up and Collide” 

    From WIRED

    Charlie Wood

    To understand how universes might inflate and bump into each other in the hypothetical multiverse, physicists are studying digital and physical analogs of the process.

    Credit: E.R. DEGGINGER/Science Source.

    What lies beyond all we can see? The question may seem unanswerable. Nevertheless, some cosmologists have a response: Our universe is a swelling bubble. Outside it, more bubble universes exist, all immersed in an eternally expanding and energized sea—the multiverse.

    The idea is polarizing. Some physicists embrace the multiverse to explain why our bubble looks so special (only certain bubbles can host life), while others reject the theory for making no testable predictions (since it predicts all conceivable universes). But some researchers expect that they just haven’t been clever enough to work out the precise consequences of the theory yet.

    Now, various teams are developing new ways to infer exactly how the multiverse bubbles and what happens when those bubble universes collide.

    “It’s a long shot,” said Jonathan Braden, a cosmologist at the University of Toronto who is involved in the effort, but, he said, it’s a search for evidence “for something you thought you could never test.”

    The multiverse hypothesis sprang from efforts to understand our own universe’s birth. In the large-scale structure of the universe, theorists see signs of an explosive growth spurt during the cosmos’s infancy. In the early 1980s, as physicists investigated how space might have started—and stopped—inflating, an unsettling picture emerged. The researchers realized that while space may have stopped inflating here (in our bubble universe) and there (in other bubbles), quantum effects should continue to inflate most of space, an idea known as eternal inflation.

    The difference between bubble universes and their surroundings comes down to the energy of space itself. When space is as empty as possible and can’t possibly lose more energy, it exists in what physicists call a “true” vacuum state. Think of a ball lying on the floor—it can’t fall any further. But systems can also have “false” vacuum states. Imagine a ball in a bowl on a table. The ball can roll around a bit while more or less staying put. But a large enough jolt will land it on the floor—in the true vacuum.

    In the cosmological context, space can get similarly stuck in a false vacuum state. A speck of false vacuum will occasionally relax into true vacuum (likely through a random quantum event), and this true vacuum will balloon outward as a swelling bubble, feasting on the false vacuum’s excess energy, in a process called false vacuum decay. It’s this process that may have started our cosmos with a bang. “A vacuum bubble could have been the first event in the history of our universe,” said Hiranya Peiris, a cosmologist at University College London (UK).
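The ball-in-a-bowl picture corresponds to a “tilted” double-well potential: the false vacuum is a local minimum of the energy, the true vacuum a deeper one, and decay is a jump over (or quantum tunneling through) the barrier between them. A minimal numerical sketch, with made-up coefficients chosen only to make the two wells unequal:

```python
def V(x):
    # Double well with a small linear tilt: the well near x = -1
    # (the "true vacuum") sits lower than the well near x = +1
    # (the "false vacuum").
    return (x * x - 1.0) ** 2 + 0.3 * x

# Scan a grid from -2 to 2 and locate the two minima numerically.
xs = [i / 1000.0 for i in range(-2000, 2001)]
true_vac = min(xs, key=V)                         # global minimum, near x = -1
false_vac = min((x for x in xs if x > 0), key=V)  # local minimum, near x = +1

print(f"true vacuum near x = {true_vac:.2f}, V = {V(true_vac):.3f}")
print(f"false vacuum near x = {false_vac:.2f}, V = {V(false_vac):.3f}")
```

A speck of field stuck in the shallower right-hand well can linger there for a long time before a fluctuation carries it over the barrier, which is the behavior the decay process described above formalizes.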

    But physicists struggle mightily to predict how vacuum bubbles behave. A bubble’s future depends on countless minute details that add up. Bubbles also change rapidly—their walls approach the speed of light as they fly outward—and feature quantum mechanical randomness and waviness. Different assumptions about these processes give conflicting predictions, with no way to tell which ones might resemble reality. It’s as though “you’ve taken a lot of things that are just very hard for physicists to deal with and mushed them all together and said, ‘Go ahead and figure out what’s going on,’” Braden said.

    Since they can’t prod actual vacuum bubbles in the multiverse, physicists have sought digital and physical analogs of them.

    One group recently coaxed vacuum bubble-like behavior out of a simple simulation. The researchers, including John Preskill, a prominent theoretical physicist at the California Institute of Technology (USA), started with “the [most] baby version of this problem that you can think of,” as co-author Ashley Milsted put it: a line of about 1,000 digital arrows that could point up or down. The place where a string of mainly up arrows met a string of largely down arrows marked a bubble wall, and by flipping arrows, the researchers could make bubble walls move and collide. In certain circumstances, this model perfectly mimics the behavior of more complicated systems in nature. The researchers hoped to use it to simulate false vacuum decay and bubble collisions.

    At first the simple setup didn’t act realistically. When bubble walls crashed together, they rebounded perfectly, with none of the expected intricate reverberations or outflows of particles (in the form of flipped arrows rippling down the line). But after adding some mathematical flourishes, the team saw colliding walls that spewed out energetic particles—with more particles appearing as the collisions grew more violent.
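A much-reduced version of that arrow chain is easy to write down: represent the arrows as +1 and -1, call any site where neighbors disagree a “bubble wall,” and move walls by flipping arrows. This toy is a guess at the spirit of the setup, not the team's actual model, but it shows the basic bookkeeping:

```python
# A chain of "arrows": a bubble of down arrows (-1) inside an up background (+1).
chain = [1] * 6 + [-1] * 4 + [1] * 6

def wall_positions(c):
    """Indices i where arrow i and arrow i+1 disagree, i.e. bubble walls."""
    return [i for i in range(len(c) - 1) if c[i] != c[i + 1]]

print(wall_positions(chain))  # two walls bounding the bubble: [5, 9]

# "Grow" the bubble by flipping the arrow just outside its right wall.
chain[10] = -1
print(wall_positions(chain))  # the right wall has moved outward: [5, 10]
```

The hard part, as the researchers found, is not tracking walls but the quantum entanglement among the particles the collisions produce, which this classical sketch has no notion of.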

    Credit: Quanta Magazine; S. M. Freeney et al., Physical Review Letters.

    But the results, which appeared in December, foreshadow a dead end in this problem for traditional computation. The researchers found that as the resulting particles mingle, they become “entangled,” entering a shared quantum state. Their state grows exponentially more complicated with each additional particle, choking simulations on even the mightiest supercomputers.

    For that reason, the researchers say that further discoveries about bubble behavior might have to wait for mature quantum computers—devices whose computational elements (qubits) can handle quantum entanglement because they experience it firsthand.

    Meanwhile, other researchers hope to get nature to do the math for them.

    Michael Spannowsky and Steven Abel, physicists at Durham University in the United Kingdom, believe they can sidestep the tricky calculations by using an apparatus that plays by the same quantum rules that the vacuum does. “If you can encode your system on a device that’s realized in nature, you don’t have to calculate it,” Spannowsky said. “It becomes more of an experiment than a theoretical prediction.”

    That device is known as a quantum annealer. A limited quantum computer, it specializes in solving optimization problems by letting qubits seek out the lowest-energy configuration available—a process not unlike false vacuum decay.

    Using a commercial quantum annealer called D-Wave, Abel and Spannowsky programmed a string of about 200 qubits to emulate a quantum field with a higher- and a lower-energy state, analogous to a false vacuum and a true vacuum. They then let the system loose and watched how the former decayed into the latter—leading to the birth of a vacuum bubble.

    The experiment, described last June, merely verified known quantum effects and did not reveal anything new about vacuum decay. But the researchers hope to eventually use D-Wave to tiptoe beyond current theoretical predictions.

    A third approach aims to leave the computers behind and blow bubbles directly.

    Quantum bubbles that inflate at nearly light speed aren’t easy to come by, but in 2014, physicists in Australia and New Zealand proposed a way to make some in the lab using an exotic state of matter known as a Bose-Einstein condensate (BEC). When cooled to nearly absolute zero, a thin cloud of gas can condense into a BEC, whose uncommon quantum mechanical properties include the ability to interfere with another BEC, much as two lasers can interfere. If two condensates interfere in just the right way, the group predicted, experimentalists should be able to capture direct images of bubbles forming in the condensate—ones that act similarly to the putative bubbles of the multiverse.

    “Because it’s an experiment, it contains by definition all the physics that nature wants to put in it including quantum effects and classical effects,” Peiris said.

    Peiris leads a team of physicists studying how to steady the condensate blend against collapse from unrelated effects. After years of work, she and her colleagues are finally ready to set up a prototype experiment, and they hope to be blowing condensate bubbles in the next few years.

    If all goes well, they’ll answer two questions: the rate at which bubbles form, and how the inflation of one bubble changes the odds that another bubble will inflate nearby. These queries can’t even be formulated with current mathematics, said Braden, who contributed to the theoretical groundwork for the experiment.

    That information will help cosmologists like Braden and Peiris to calculate exactly how a whack from a neighboring bubble universe in the distant past might have set our cosmos quivering. One likely scar from such an encounter would be a circular cold spot in the sky, which Peiris and others have searched for and not found. But other details—such as whether the collision also produces gravitational waves—depend on unknown bubble specifics.

    If the multiverse is just a mirage, physics may still benefit from the bounty of tools being developed to uncover it. To understand the multiverse is to understand the physics of space, which is everywhere.

    False vacuum decay “seems like a ubiquitous feature of physics,” Peiris said, and “I personally don’t believe pencil-and-paper theory calculations are going to get us there.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 12:39 pm on January 10, 2021 Permalink | Reply
    Tags: "A Newfound Source of Cellular Order in the Chemistry of Life", Cell biologists seem to find condensates everywhere they look., Cell signaling proteins can also exhibit phase separation behavior., Condensates also helped to solve a different cellular mystery—not inside the nucleus but along the cell membrane., Enzymes need to find their substrates and signaling molecules need to find their receptors., Inside cells droplets called condensates merge divide and dissolve. Their dance may regulate vital processes., Methyltransferases, Oligomer formation, Ribosomes are cells’ protein-making factories and the number of them in a cell often limits its rate of growth., Some proteins spontaneously gather into transient assemblies called condensates., WIRED

    From WIRED: “A Newfound Source of Cellular Order in the Chemistry of Life” 

    From WIRED

    Viviane Callier

    Inside cells, droplets called condensates merge, divide, and dissolve. Their dance may regulate vital processes.

    Credit: Ed Reschke/Getty Images.

    Imagine packing all the people in the world into the Great Salt Lake in Utah—all of us jammed shoulder to shoulder, yet also charging past one another at insanely high speeds. That gives you some idea of how densely crowded the 5 billion proteins in a typical cell are, said Anthony Hyman, a British cell biologist and a director of the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany.

    Somehow in that bustling cytoplasm, enzymes need to find their substrates, and signaling molecules need to find their receptors, so the cell can carry out the work of growing, dividing and surviving. If cells were sloshing bags of evenly mixed cytoplasm, that would be difficult to achieve. But they are not. Membrane-bounded organelles help to organize some of the contents, usefully compartmentalizing sets of materials and providing surfaces that enable important processes, such as the production of ATP, the biochemical fuel of cells. But, as scientists are still only beginning to appreciate, they are only one source of order.

    Recent experiments reveal that some proteins spontaneously gather into transient assemblies called condensates, in response to molecular forces that precisely balance transitions between the formation and dissolution of droplets inside the cell. Condensates, sometimes referred to as membraneless organelles, can sequester specific proteins from the rest of the cytoplasm, preventing unwanted biochemical reactions and greatly increasing the efficiency of useful ones. These discoveries are changing our fundamental understanding of how cells work.

    For instance, condensates may explain the speed of many cellular processes. “The key thing about a condensate—it’s not like a factory; it’s more like a flash mob. You turn on the radio, and everyone comes together, and then you turn it off and everyone disappears,” Hyman said.

    As such, the mechanism is “exquisitely regulatable,” said Gary Karpen, a cell biologist at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory. “You can form these things and dissolve them quite readily by just changing concentrations of molecules” or chemically modifying the proteins. This precision provides leverage for control over a host of other phenomena, including gene expression.

    The first hint of this mechanism arrived in the summer of 2008, when Hyman and his then-postdoctoral fellow Cliff Brangwynne (now a Howard Hughes Medical Institute investigator at Princeton University) were teaching at the famed Marine Biological Laboratory physiology course and studying the embryonic development of C. elegans roundworms. When they and their students observed that aggregates of RNA in the fertilized worm egg formed droplets that could split away or fuse with each other, Hyman and Brangwynne hypothesized that these “P granules” formed through phase separation in the cytoplasm, just like oil droplets in a vinaigrette.

    That proposal, published in 2009 in Science, didn’t get much attention at the time. But more papers on phase separation in cells trickled out around 2012, including a key experiment [Nature] in Michael Rosen’s lab at the University of Texas Southwestern Medical Center in Dallas, which showed that cell signaling proteins can also exhibit this phase separation behavior. By 2015, the stream of papers had turned into a torrent, and since then there’s been a veritable flood of research on biomolecular condensates, these liquid-like cell compartments with both elastic and viscous properties.

    Credit: Samuel Velasco/Quanta Magazine.

    Now cell biologists seem to find condensates everywhere they look: in the regulation of gene expression, the formation of mitotic spindles, the assembly of ribosomes, and many more cellular processes in the nucleus and cytoplasm. These condensates aren’t just novel but thought-provoking: The idea that their functions emerge from the collective behaviors of the molecules has become the central concept in condensate biology, and it contrasts sharply with the classic picture of pairs of biochemical agents and their targets fitting together like locks and keys. Researchers are still figuring out how to probe the functionality of these emergent properties; that will require the development of new techniques to measure and manipulate the viscosity and other properties of tiny droplets in a cell.

    What Drives Droplet Formation

    When biologists were first trying to explain what drives the phase separation phenomenon behind condensation in living cells, the structure of the proteins themselves offered a natural place to start. Well-folded proteins typically have a mix of hydrophilic and hydrophobic amino acids. The hydrophobic amino acids tend to bury themselves inside the protein folds, away from water molecules, while the hydrophilic amino acids get drawn to the surface. These hydrophobic and hydrophilic amino acids determine how the protein folds and holds its shape.

    But some protein chains have relatively few hydrophobic amino acids, so they have no reason to fold. Instead, these intrinsically disordered proteins (IDPs) fluctuate in form and engage in many weak multivalent interactions. IDP interactions were thought for years to be the best explanation for the fluidlike droplet behavior.

    Nucleoli appear as green dots in this stained tissue from a roundworm. Each cell, regardless of its size, has a single nucleolus. Recent research has shown that the size of nucleoli depends on the concentration of nucleolar proteins in a cell. Credit: Stephanie Weber.

    Last year, however, Brangwynne published a couple of papers highlighting that IDPs are important, but that “the field has gone too far in emphasizing them.” Most proteins involved in condensates, he says, have a common architecture with some structured domains and some disordered regions. To seed condensates, the molecules must have many weak multivalent interactions with others, and there’s another way to achieve that: oligomerization.

    Oligomerization occurs when proteins bind to each other and form larger complexes with repeating units, called oligomers. As the concentration of proteins increases, so do phase separation and oligomer formation. In a talk at the American Society for Cell Biology meeting in December, Brangwynne showed that as the concentration of oligomers increases, the strength of their interactions eventually overcomes the nucleation barrier, the energy required to create a surface separating the condensate from the rest of the cytoplasm. At that point, the proteins condense into a droplet.
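    The barrier-crossing picture Brangwynne describes parallels classical nucleation theory, in which a droplet must pay a surface-energy cost before its bulk free-energy gain takes over. The sketch below is an illustrative aside, not the model from his talk; the surface-tension and driving-force numbers are invented placeholders chosen only to show the scaling.

    ```python
    import math

    def nucleation_barrier(gamma, delta_g):
        """Classical nucleation theory for a spherical droplet.

        gamma   : surface tension of the condensate interface (J/m^2)
        delta_g : bulk free-energy density gained per unit droplet volume
                  (J/m^3); it grows as the oligomer concentration rises.
        Returns (critical_radius_m, barrier_J).
        """
        # Below r*, surface cost dominates and the droplet redissolves;
        # above r*, bulk gain dominates and the droplet grows.
        r_star = 2.0 * gamma / delta_g
        barrier = 16.0 * math.pi * gamma**3 / (3.0 * delta_g**2)
        return r_star, barrier

    # Illustrative values only: doubling the bulk driving force
    # (e.g. via higher oligomer concentration) cuts the barrier roughly 4x,
    # so droplets nucleate far more readily past a concentration threshold.
    r1, b1 = nucleation_barrier(gamma=1e-6, delta_g=1e3)
    r2, b2 = nucleation_barrier(gamma=1e-6, delta_g=2e3)
    print(b1 / b2)
    ```

    The 1/Δg² dependence of the barrier is why condensate formation can behave like a switch: a modest rise in concentration sharply lowers the barrier, and droplets that were effectively forbidden suddenly nucleate.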

    In the past five years, researchers have taken big strides in understanding how this collective behavior of proteins arises from tiny physical and chemical forces. But they are still learning how (and whether) cells actually use this phenomenon to grow and divide.

    Condensates and Gene Expression

    Condensates seem to be involved in many aspects of cellular biology, but one area that has received particular attention is gene expression and the production of proteins.

    Ribosomes are cells’ protein-making factories, and the number of them in a cell often limits its rate of growth. Work by Brangwynne and others suggests that fast-growing cells might get some help from the biggest condensate in the nucleus: the nucleolus. The nucleolus facilitates the rapid transcription of ribosomal RNAs by gathering up all of the required transcription machinery, including the specific enzyme (RNA polymerase I) that makes them.

    A few years ago, Brangwynne and his then-postdoc Stephanie Weber, who is now an assistant professor at McGill University in Montreal, investigated how the size of the nucleolus (and therefore the speed of ribosomal RNA synthesis) was controlled in early C. elegans embryos. Because the mother worm contributes the same number of proteins to every embryo, small embryos have high concentrations of proteins and large embryos have low concentrations. And as the researchers reported in a 2015 Current Biology paper, the size of the nucleoli is concentration-dependent: Small cells have large nucleoli and large cells have small ones.

    Brangwynne and Weber found that by artificially changing cell size, they could raise and lower the protein concentration and the size of the resulting nucleoli. In fact, if they lowered the concentration below a critical threshold, there was no phase separation and no nucleolus. The researchers derived a mathematical model based on the physics of condensate formation that could exactly predict the size of nucleoli in cells.
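    The threshold behavior they observed can be illustrated with a minimal mass-balance ("lever rule") sketch. This is not the model from the Current Biology paper; the saturation and dense-phase concentrations below are placeholder values chosen only to reproduce the qualitative trend.

    ```python
    def nucleolus_volume(c_total, v_cell, c_sat=1.0, c_dense=50.0):
        """Minimal lever-rule model of condensate size.

        c_total : total nucleolar-protein concentration in the cell (a.u.)
        v_cell  : cell volume (a.u.)
        c_sat   : saturation concentration of the dilute phase; below this,
                  no phase separation occurs and no nucleolus forms
        c_dense : protein concentration inside the condensate
        Returns the condensate volume (0 below the threshold).
        """
        if c_total <= c_sat:
            return 0.0
        # Mass balance: c_total*v_cell = c_dense*v_drop + c_sat*(v_cell - v_drop)
        return v_cell * (c_total - c_sat) / (c_dense - c_sat)

    # Fixed maternal protein amount split into cells of different sizes:
    # smaller cell -> higher concentration -> larger nucleolus, mirroring
    # the small-embryo/large-nucleolus observation.
    amount = 100.0
    for v_cell in (80.0, 40.0, 20.0):
        print(v_cell, nucleolus_volume(amount / v_cell, v_cell))
    ```

    The same lever-rule logic also captures the experiment's sharpest result: dilute the protein below `c_sat` and the droplet does not merely shrink, it vanishes entirely.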

    Now Weber is looking for condensates in bacteria, which have smaller cells and no membrane-bound compartments. “Maybe this is an even more important mechanism for compartmentalization, because they [bacteria] don’t have an alternative,” she suggested.

    In this series of images, purified bacterial transcription factor in solution acts like a fluid by condensing into spherical droplets that then fuse together. Researchers are studying whether condensates might play a role in regulating bacterial cells as well as eukaryotic ones. Credit: John Wall.

    Last summer, Weber published a study [PNAS] showing that in cells of slow-growing E. coli bacteria, the RNA polymerase enzyme is uniformly distributed, but in fast-growing cells it clusters in droplets. The fast-growing cells may need to concentrate the polymerase around ribosomal genes to synthesize ribosomal RNA efficiently.

    “It looks like it [phase separation] is in all domains of life, and a universal mechanism that has then been able to specialize into a whole bunch of different functions,” Weber said.

    Although Weber and Brangwynne showed that active transcription occurs in one large condensate, the nucleolus, other condensates in the nucleus do the opposite. Large portions of the DNA in the nucleus are classified as heterochromatin because they are more compact and generally not expressed as proteins. In 2017, Karpen, Amy Strom (who is now a postdoc in Brangwynne’s lab) and their colleagues showed [Nature] that a certain protein will undergo phase separation and form droplets on the heterochromatin in Drosophila embryos. These droplets can fuse with each other, possibly providing a mechanism for compacting heterochromatin inside the nucleus.

    The results also suggested an exciting possible explanation for a long-standing mystery. Years ago, geneticists discovered that if they took an actively expressed gene and placed it right next to the heterochromatin, the gene would be silenced, as if the heterochromatin state was spreading. “This phenomenon of spreading was something that arose early on, and no one really understood it,” Karpen said.

    Later, researchers discovered enzymes involved in epigenetic regulation called methyltransferases, and they hypothesized that the methyltransferases would simply proceed from one histone to the next down the DNA strand from the heterochromatin into the adjacent euchromatin, a kind of “enzymatic, processive mechanism,” Karpen said. This has been the dominant model to explain the spreading phenomenon for the last 20 years. But Karpen thinks that the condensates that sit on the heterochromatin, like wet beads on a string, could be products of a different mechanism that accounts for the spreading of the silent heterochromatin state. “These are fundamentally different ways to think about how the biology works,” he said. He’s now working to test the hypothesis.

    In these fruit fly embryos, the chromosomes (pink) thicken and separate as the cells divide. A heterochromatin protein (green) then begins to condense into small droplets that grow and fuse, seemingly to help organize the genetic material for the cell’s use. Credit: Gary Karpen.

    The Formation of Filaments

    Condensates also helped to solve a different cellular mystery—not inside the nucleus but along the cell membrane. When a ligand binds to a receptor protein on a cell’s surface, it initiates a cascade of molecular changes and movements that convey a signal through the cytoplasm. But for that to happen, something first has to gather together all the dispersed players in the mechanism. Researchers now think phase separation might be a trick cells use to cluster the required signaling molecules at the membrane receptor, explains Lindsay Case, who trained in the Rosen lab as a postdoc and is starting her own lab at the Massachusetts Institute of Technology this month.

    Case notes that protein modifications that are commonly used for transducing signals, such as the addition of phosphoryl groups, change the valency of a protein—that is, its capacity to interact with other molecules. The modifications therefore also affect proteins’ propensity to form condensates. “If you think about what a cell is doing, it is actually regulating this parameter of valency,” Case said.

    Condensates may also play an important role in regulating and organizing the polymerization of small monomer subunits into long protein filaments. “Because you’re bringing molecules together for a longer period of time than you would outside the condensate, that favors polymerization,” Case said. In her postdoctoral research, she found that condensates enhance the polymerization of actin into filaments that help specialized kidney cells maintain their unusual shapes.

    The polymerization of tubulin is key to the formation of the mitotic spindles that help cells divide. Hyman became interested in understanding the formation of mitotic spindles during his graduate studies in the Laboratory of Molecular Biology at the University of Cambridge in the 1980s. There, he studied how the single-celled C. elegans embryo forms a mitotic spindle before splitting into two cells. Now he’s exploring the role of condensates in this process.

    Credit: Samuel Velasco/Quanta Magazine.

    In one in vitro experiment, Hyman and his team created droplets of the microtubule-binding tau protein and then added tubulin, which migrates into the tau droplets. When they added nucleotides to the drops to stimulate polymerization, the tubulin monomers assembled into beautiful microtubules. Hyman and his colleagues have proposed that phase separation could be a general way for cells to initiate the polymerization of microtubules and the formation of the mitotic spindle.

    The tau protein is also known for forming the protein aggregates that are the hallmarks of Alzheimer’s disease. In fact, many neurodegenerative conditions, such as amyotrophic lateral sclerosis (ALS) and Parkinson’s disease, involve the faulty formation of protein aggregates in cells.

    To investigate how these aggregates might form, Hyman’s team focused on a protein called FUS, which has mutant forms associated with ALS. The FUS protein is normally found in the nucleus, but in stressed cells it leaves the nucleus for the cytoplasm, where it forms droplets. Hyman’s team found that when they made droplets of mutated FUS proteins in vitro, after only about eight hours the droplets solidified into what he calls “horrible aggregates.” The mutant proteins drove a liquid-to-solid phase transition far faster than the normal form of FUS did.

    Maybe the question isn’t why the aggregates form in disease, but why they don’t form in healthy cells. “One of the things I often ask in group meetings is: Why is the cell not scrambled eggs?” Hyman said in his talk at the cell biology meeting; the protein content of the cytoplasm is “so concentrated that it should just crash out of solution.”

    Two types of proteins (red, yellow) isolated from the nucleoli of frog eggs can spontaneously organize into condensate droplets. By altering the concentrations of each protein in the solution, researchers can make either or both of the types of condensates grow or disappear. Credit: Marina Feric & Clifford Brangwynne.

    A clue came when researchers in Hyman’s lab added the cellular fuel ATP to condensates of purified stress granule proteins and saw those condensates vanish. To investigate further, the researchers put egg whites in test tubes, added ATP to one tube and salt to the other, and then heated them. While the egg whites in the salt aggregated, the ones with ATP did not: The ATP was preventing protein aggregation at the concentrations found in living cells.

    But how? It remained a puzzle until Hyman fortuitously met a chemist when presenting a seminar in Bangalore. The chemist noted that in industrial processes, additives called hydrotropes are used to increase the solubility of hydrophobic molecules. Returning to his lab, Hyman and his colleagues found that ATP worked exceptionally well as a hydrotrope.

    Intriguingly, ATP is a very abundant metabolite in cells, with a typical concentration of 3-5 millimolar. Most enzymes that use ATP operate efficiently with concentrations three orders of magnitude lower. Why, then, is ATP so concentrated inside cells, if it isn’t needed to drive metabolic reactions?

    One candidate explanation, Hyman suggests, is that ATP doesn’t act as a hydrotrope below 3-5 millimolar. “One possibility is that in the origin of life, ATP might have evolved as a biological hydrotrope to keep biomolecules soluble in high concentration and was later co-opted as energy,” he said.

    It’s difficult to test that hypothesis experimentally, Hyman admits, because it is challenging to manipulate ATP’s hydrotropic properties without also affecting its energy function. But if the idea is correct, it might help to explain why protein aggregates commonly form in diseases associated with aging, because ATP production becomes less efficient with age.

    Other Uses for Droplets

    Protein aggregates are clearly bad in neurodegenerative diseases. But the transition from liquid to solid phases can be adaptive in other circumstances.

    Take primordial oocytes, cells in the ovaries that can lie dormant for decades before maturing into an egg. Each of these cells has a Balbiani body, a large condensate of amyloid protein found in the oocytes of organisms ranging from spiders to humans. The Balbiani body is believed to protect mitochondria during the oocyte’s dormant phase by clustering a majority of the mitochondria together [PubMed.gov] with long amyloid protein fibers. When the oocyte starts to mature into an egg, those amyloid fibers dissolve and the Balbiani body disappears, explains Elvan Böke, a cell and developmental biologist at the Center for Genomic Regulation in Barcelona. Böke is working to understand how these amyloid fibers assemble and dissolve, which could lead to new strategies for treating infertility or neurodegenerative diseases.

    Protein aggregates can also solve problems that require very quick physiological responses, like stopping bleeding after injury. For example, Mucor circinelloides is a fungal species with interconnected, pressurized networks of rootlike hyphae through which nutrients flow. Researchers at the Temasek Life Sciences Laboratory led by the evolutionary cell biologist Greg Jedd recently discovered [CurrentBiology] that when they injured the tip of a Mucor hypha, the protoplasm gushed out at first but almost instantaneously formed a gelatinous plug that stopped the bleeding.

    Jedd suspected that this response was mediated by a long polymer, probably a protein with a repetitive structure. The researchers identified two candidate proteins and found that, without them, injured fungi catastrophically bled out into a puddle of protoplasm.

    Jedd and his colleagues studied the structure of the two proteins, which they called gellin A and gellin B. The proteins had 10 repetitive domains, some of which had hydrophobic amino acids that could bind to cell membranes. The proteins also unfolded at forces similar to those they would experience when the protoplasm comes gushing out at the site of an injury. “There’s this massive acceleration in flow, and so we were thinking that maybe this is the trigger that is telling the gellin to change its state,” Jedd said. Once this physical cue triggers the gellin’s liquid-to-solid transition, the plug solidifies irreversibly.

    In contrast, in the fungal species Neurospora, the hyphae are divided into compartments, with pores that regulate the flow of water and nutrients. Jedd wanted to know how the pores were opened and closed. “What we discovered [PNAS] is some intrinsically disordered proteins that seem to be undergoing a condensation to aggregate at the pore, to provide a mechanism for closing it,” Jedd explained.

    The Neurospora proteins that were candidates for this job, Jedd’s team learned, had repeated mixed-charge domains that could be found in some mammalian proteins, too. When the researchers synthesized proteins of varying compositions but with similar lengths and charge patterning, then introduced them into mammalian cells, the proteins were incorporated into nuclear speckles, condensates in the mammalian cell nucleus that help to regulate gene expression. They and colleagues led by Rohit Pappu of Washington University in St. Louis reported the work in a 2020 Molecular Cell paper.

    The fungal and mammalian kingdoms seem to have arrived independently at a strategy of using disordered sequences in mechanisms based on condensation, Jedd said, “but they’re using it for entirely different reasons, in different compartments.”

    Reconsidering Old Explanations

    Phase separation has turned out to be ubiquitous, and researchers have generated lots of ideas about how this phenomenon could be involved in various cell functions. “There’s lots of exciting possibilities that [phase separation] raises, so that’s what I think drives … interest in the field,” Karpen said. But he also cautions that while it is relatively easy to show that a molecule undergoes phase separation in a test tube, demonstrating that phase separation has a function in the cell is much more challenging. “We still don’t know so much,” he said.

    Brangwynne agreed. “If you’re really honest, it’s still pretty much at a hand-wavy stage, the whole field,” he said. “It’s very early days for understanding how this all works. The fact that it’s hand-wavy doesn’t mean that liquid phase separation isn’t the key driving force. In fact, I think it is. But how does it really work?”

    The uncertainties do not discourage Hyman, either. “What phase separation is allowing everyone to do is go back and look at old problems which stalled out and think: Can we now think about this a different way?” he said. “All the structural biology that was done has just been brilliant—but many problems stalled out. They couldn’t actually explain things. And that’s what phase separation has allowed, is for everyone to think again about these problems.”


  • richardmitnick 4:05 pm on January 3, 2021 Permalink | Reply
    Tags: "The Milky Way Gets a New Origin Story", WIRED

    From WIRED: “The Milky Way Gets a New Origin Story” 

    From WIRED

    Charlie Wood

    A large Milky Way-like galaxy collides with a smaller dwarf galaxy in this digital simulation. Astronomers believe that at least one major collision like this happened early in the Milky Way’s development. Credit: Koppelman, Villalobos & Helmi.

    When the Khoisan hunter-gatherers of sub-Saharan Africa gazed upon the meandering trail of stars and dust that split the night sky, they saw the embers of a campfire. Polynesian sailors perceived a cloud-eating shark. The ancient Greeks saw a stream of milk, gala, which would eventually give rise to the modern term “galaxy.”

    In the 20th century, astronomers discovered that our silver river is just one piece of a vast island of stars, and they penned their own galactic origin story. In the simplest telling, it held that our Milky Way galaxy came together nearly 14 billion years ago when enormous clouds of gas and dust coalesced under the force of gravity. Over time, two structures emerged: first, a vast spherical “halo,” and later, a dense, bright disk. Billions of years after that, our own solar system spun into being inside this disk, so that when we look out at night, we see spilt milk—an edge-on view of the disk splashed across the sky.

    Yet over the past two years, researchers have rewritten nearly every major chapter of the galaxy’s history. What happened? They got better data.

    On April 25, 2018, a European spacecraft by the name of Gaia released a staggering quantity of information about the sky.

    ESA’s Gaia satellite.

    Critically, Gaia’s years-long data set described the detailed motions of roughly 1 billion stars. Previous surveys had mapped the movement of just thousands. The data brought a previously static swath of the galaxy to life. “Gaia started a new revolution,” said Federico Sestito, an astronomer at the Strasbourg Astronomical Observatory in France.

    The river of stars in the southern sky. Credit: ESA/Gaia (Gaia DR2 sky map).

    Data from more than 1.8 billion stars have been used to create this map of the entire sky. It shows the total brightness and color of stars observed by ESA’s Gaia satellite and released as part of Gaia’s Early Data Release 3.

    Astronomers raced to download the dynamic star map, and a flurry of discoveries followed. They found that parts of the disk, for example, appeared impossibly ancient. They also found evidence of epic collisions that shaped the Milky Way’s violent youth, as well as new signs that the galaxy continues to churn in an unexpected way.

    The Gaia satellite has revolutionized our understanding of the Milky Way since its launch in December 2013. Credit: ESA/ATG Media Lab video.

    Taken together, these results have spun a new story about our galaxy’s turbulent past and its ever-evolving future. “Our picture of the Milky Way has changed so quickly,” said Michael Petersen, an astronomer at the University of Edinburgh. “The theme is that the Milky Way is not a static object. Things are changing rapidly everywhere.”

    The Earliest Stars

    To peer back to the galaxy’s earliest days, astronomers seek stars that were around back then. These stars were fashioned only from hydrogen and helium, the cosmos’s rawest materials. Fortunately, the smaller stars from this early stock are also slow to burn, so many are still shining.

    After decades of surveys, researchers had assembled a catalog of 42 such ancients, known as ultra metal-poor stars (to astronomers, any atom bulkier than helium qualifies as metallic). According to the standard story of the Milky Way, these stars should be swarming throughout the halo, the first part of the galaxy to form. By contrast, stars in the disk—which was thought to have taken perhaps an additional billion years to spin itself flat—should be contaminated with heavier elements such as carbon and oxygen.

    In late 2017, Sestito set out to study how this metal-poor swarm moves by writing code to analyze the upcoming Gaia results. Perhaps their spherical paths could offer some clues as to how the halo came to be, he thought.

    In the days following Gaia’s data release, he extracted the 42 ancient stars from the full data set, then tracked their motions. He found that most were streaming through the halo, as predicted. But some—roughly 1 in 4—weren’t. Rather, they appeared stuck in the disk [MNRAS], the Milky Way’s youngest region. “What the hell,” Sestito wondered, though he used a different four-letter term. “What’s going on?”

    Follow-up research confirmed that the stars really are long-term residents of the disk, and not just tourists passing through. From two recent surveys, Sestito and colleagues amassed a library of roughly 5,000 metal-poor stars. A few hundred of them appear to be permanent denizens of the disk [MNRAS]. Another group sifted through about 500 stars identified by another survey, finding that about 1 in 10 of these stars lie flat in circular, sunlike orbits [MNRAS]. And a third research group found stars of various metallicities (and therefore various ages) moving in flat disk orbits. “This is something completely new,” said lead author Paola Di Matteo, an astronomer at the Paris Observatory.

    How did these anachronisms get there? Sestito speculated that perhaps pockets of pristine gas managed to dodge all the metals expelled from supernovas for eons, then collapsed to form stars that looked deceptively old. Or the disk may have started taking shape when the halo did, nearly 1 billion years ahead of schedule.

    To see which was more probable, he connected with Tobias Buck, a researcher at the Leibniz Institute for Astrophysics in Potsdam, Germany, who specializes in crafting digital galaxy simulations. Past efforts had generally produced halos first and disks second, as expected. But these were relatively low-resolution efforts.

    Galaxy simulation
    In these digital simulations, a Milky Way–like galaxy forms and evolves over 13.8 billion years — from the early universe to the present day. The leftmost column shows the distribution of invisible dark matter; the center column the temperature of gas (where blue is cold and red is hot); and the right column the density of stars. Each row highlights a different size scale: The top row is a zoomed-in look at the galactic disk; the center row a mid-range view of the galactic halo; and the bottom row a zoomed-out view of the environment around the galaxy.

    Buck increased the crispness of his simulations by about a factor of 10. At that resolution, each run demanded intensive computational resources. Even though he had access to Germany’s Leibniz Supercomputing Center, a single simulation required three months of computing time. He repeated the exercise six times.

    Of those six, five produced Milky Way doppelgängers. Two of those featured substantial numbers of metal-poor disk stars.

    How did those ancient stars get into the disk? Simply put, they were stellar immigrants. Some of them were born in clouds that predated the Milky Way. Then the clouds just happened to deposit some of their stars into orbits that would eventually form part of the galactic disk. Other stars came from small “dwarf” galaxies that slammed into the Milky Way and aligned with an emerging disk.

    The results, which the group published in November [MNRAS], suggest that the classic galaxy formation models were incomplete. Gas clouds do collapse into spherical halos, as expected. But stars arriving at just the right angles can kick-start a disk at the same time. “[Theorists] weren’t wrong,” Buck said. “They were missing part of the picture.”

    A Violent Youth

    The complications don’t end there. With Gaia, astronomers have found direct evidence of cataclysmic collisions. Astronomers assumed that the Milky Way had a hectic youth, but Helmer Koppelman, an astronomer now at the Institute for Advanced Study in Princeton, New Jersey, used the Gaia data to help pinpoint specific debris from one of the largest mergers.

    Gaia’s 2018 data release fell on a Wednesday, and the mad rush to download the catalog froze its website, Koppelman recalled. He processed the data on Thursday, and by Friday he knew he was on to something big. In every direction, he saw a huge number of halo stars ping-ponging back and forth in the center of the Milky Way in the same peculiar way—a clue that they had come from a single dwarf galaxy. Koppelman and his colleagues had a brief paper [The Astrophysical Journal Letters] ready by Sunday and followed it up with a more detailed analysis that June [Nature].

    The galactic wreckage was everywhere. Perhaps half of all the stars in the inner 60,000 light-years of the halo (which extends hundreds of thousands of light-years in every direction) came from this lone collision, which may have boosted the young Milky Way’s mass by as much as 10 percent. “This is a game changer for me,” Koppelman said. “I expected many different smaller objects.”

    A simulation shows the formation and evolution of a Milky Way–like galaxy over about 10 billion years. Many smaller dwarf galaxies accrete onto the main galaxy, often becoming a part of it. Video credit: Tobias Buck.

    The group named the incoming galaxy Gaia-Enceladus, after the primordial Greek goddess Gaia and her son Enceladus, one of the Giants. Another team at the University of Cambridge independently discovered the galaxy around the same time [MNRAS], dubbing it the Sausage for its appearance in certain orbital charts.
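    The "Sausage" nickname comes from how the debris looks in a chart of radial versus azimuthal velocity: stars on plunging, low-angular-momentum orbits trace an elongated blob centered on zero rotation. A minimal sketch of that kind of kinematic selection, using invented toy velocities rather than real Gaia data (the cuts at 60 and 100 km/s are illustrative, not published values):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy sample: disk stars circle the galaxy at ~230 km/s with small scatter;
    # accreted "Sausage" stars plunge radially with almost no net rotation.
    disk = np.column_stack([rng.normal(0, 40, 500),     # v_r (km/s)
                            rng.normal(230, 25, 500)])  # v_phi (km/s)
    debris = np.column_stack([rng.normal(0, 150, 200),
                              rng.normal(0, 30, 200)])
    stars = np.vstack([disk, debris])

    v_r, v_phi = stars[:, 0], stars[:, 1]

    # Flag stars on hot, radial, low-angular-momentum orbits --
    # the elongated "sausage" in the (v_r, v_phi) plane.
    candidates = (np.abs(v_phi) < 60) & (np.abs(v_r) > 100)
    print(f"{candidates.sum()} of {len(stars)} stars look like radial-merger debris")
    ```

    In the real analyses the selection is done on millions of stars with full 3D velocities and orbital energies, but the underlying idea is the same: merger debris clumps together in velocity space long after it has dispersed across the sky.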

    When the Milky Way and Gaia-Enceladus collided, perhaps 10 billion years ago, the Milky Way’s delicate disk may have suffered widespread damage. Astronomers debate why our galactic disk seems to have two parts: a thin disk, and a thicker one where stars bungee up and down while orbiting the galactic center. Research led by Di Matteo [Astronomy & Astrophysics] now suggests that Gaia-Enceladus exploded much of the disk, puffing it up during the collision. “The first ancient disk formed pretty fast, and then we think Gaia-Enceladus kind of destroyed it,” Koppelman said.

    Hints of additional mergers have been spotted in bundles of stars known as globular clusters. Diederik Kruijssen, an astronomer at Heidelberg University in Germany, used galaxy simulations to train a neural network to scrutinize globular clusters. He had it study their ages, makeup, and orbits. From that data, the neural network could reconstruct the collisions that assembled the galaxies. Then he set it loose on data from the real Milky Way. The program reconstructed known events such as Gaia-Enceladus, as well as an older, more significant merger that the group has dubbed Kraken.
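    Kruijssen's approach follows a common pattern: train a model on simulated galaxies, where the true merger history is known, then apply it to the real Milky Way's globular clusters. The pattern can be caricatured in a few lines; everything here, the feature choice, the labels, and the nearest-neighbor stand-in for the neural network, is invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented training set: each simulated galaxy is summarized by two
    # globular-cluster statistics (mean cluster age in Gyr, spread in
    # metallicity), labeled with the number of mergers that built it.
    features = rng.uniform([8.0, 0.1], [13.0, 1.5], size=(200, 2))
    n_mergers = np.round(8.0 * features[:, 1] - 0.3 * (features[:, 0] - 8)).clip(0)

    def predict_mergers(gc_summary):
        """Nearest-neighbor 'emulator': return the merger count of the
        most similar simulated galaxy (a stand-in for the trained network)."""
        mu, sd = features.mean(0), features.std(0)
        scaled = (features - mu) / sd
        q = (np.asarray(gc_summary) - mu) / sd
        nearest = np.argmin(np.linalg.norm(scaled - q, axis=1))
        return int(n_mergers[nearest])

    # "Observed" galaxy with old clusters and a wide metallicity spread
    print(predict_mergers([12.0, 1.2]))
    ```

    The real analysis uses far richer cluster data and a neural network rather than a lookup, but the design choice is the same: simulations supply the labeled examples that observations alone can never provide.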

    In August, Kruijssen’s group published a merger lineage of the Milky Way and the dwarf galaxies that formed it [MNRAS]. They also predicted the existence of 10 additional past collisions that they’re hoping will be confirmed with independent observations. “We haven’t found the other 10 yet,” Kruijssen said, “but we will.”

    All these mergers have led some astronomers to suggest [The Astrophysical Journal] that the halo may be made almost exclusively of immigrant stars. Models from the 1960s and ’70s predicted that most Milky Way halo stars should have formed in place. But as more and more stars have been identified as galactic interlopers, it may turn out that few halo stars, if any, are natives, said Di Matteo.

    A Still-Growing Galaxy

    The Milky Way has enjoyed a relatively quiet history in recent eons, but newcomers continue to stream in. Stargazers in the Southern Hemisphere can spot with the naked eye a pair of dwarf galaxies called the Large and Small Magellanic Clouds. Astronomers long believed the pair to be our steadfast orbiting companions, like moons of the Milky Way.

    Then a series of Hubble Space Telescope observations [The Astrophysical Journal] between 2006 and 2013 found that they were more like incoming meteorites. Nitya Kallivayalil, an astronomer at the University of Virginia, clocked the clouds as coming in hot at about 330 kilometers per second—nearly twice as fast as had been predicted.

    When a team led by Jorge Peñarrubia, an astronomer at the Royal Observatory of Edinburgh, crunched the numbers a few years later, they concluded that the speedy clouds must be extremely hefty—perhaps 10 times bulkier than previously thought.

    “It’s been surprise after surprise,” Peñarrubia said.

    Various groups have predicted that the unexpectedly beefy dwarfs might be dragging parts of the Milky Way around, and this year Peñarrubia teamed up with Petersen to find proof.

    The problem with looking for galaxy-wide motion is that the Milky Way is a raging blizzard of stars, with astronomers looking outward from one of the snowflakes. So Peñarrubia and Petersen spent most of lockdown figuring out how to neutralize the motions of the Earth and the sun, and how to average out the motion of halo stars so that the halo’s outer fringe could serve as a stationary backdrop.

    When they calibrated the data in this way, they found that the Earth, the sun, and the rest of the disk in which they sit are lurching in one direction—not toward the Large Magellanic Cloud’s current position, but toward its position around a billion years ago (the galaxy is a lumbering beast with slow reflexes, Petersen explained). They recently detailed their findings in Nature Astronomy.
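    The averaging trick described above can be caricatured in a toy setup. The key assumption, as in the real analysis, is that the outer halo's stars move fast individually but have no net drift, so their average velocity defines a rest frame against which the disk's bulk "travel velocity" shows up. The numbers and direction below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy outer-halo stars: individually fast, but with no net drift.
    halo_v = rng.normal(0, 120, size=(2000, 3))  # km/s, Galactocentric frame

    # Invented disk "travel velocity": the whole disk lurching at ~30 km/s
    # toward where the Large Magellanic Cloud used to be.
    travel = np.array([20.0, -15.0, 12.0])

    # Observed from the moving disk, halo velocities are shifted by -travel;
    # averaging many stars cancels their random motions and recovers it.
    measured = halo_v - travel
    estimate = -measured.mean(axis=0)
    print(np.round(estimate, 1))  # close to [20., -15., 12.]
    ```

    The hard part in practice is the calibration this sketch assumes away: removing the Earth's and sun's own motions, and showing that the distant halo really is a fair stationary backdrop.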

    The sliding of the disk against the halo undermines a fundamental assumption: that the Milky Way is an object in balance. It may spin and slip through space, but most astronomers assumed that after billions of years, the mature disk and the halo had settled into a stable configuration.

    Peñarrubia and Petersen’s analysis proves that assumption wrong. Even after nearly 14 billion years, mergers continue to sculpt the overall shape of the galaxy. This realization is just the latest change in how we understand the great stream of milk across the sky.

    “Everything we thought we knew about the future and the history of the Milky Way,” said Petersen, “we need a new model to describe that.”

