Tagged: Science News

  • richardmitnick 10:36 am on July 7, 2020 Permalink | Reply
    Tags: "Self-destructive civilizations may doom our search for alien intelligence", , , Science News   

    From Science News: “Self-destructive civilizations may doom our search for alien intelligence” 

    From Science News

    July 6, 2020
    Tom Siegfried

    A lack of signals from space may also be bad news for Earthlings.

    SETI/Allen Telescope Array situated at the Hat Creek Radio Observatory, 290 miles (470 km) northeast of San Francisco, California, USA, Altitude 986 m (3,235 ft)

    On Earth, civilizations have limited lifetimes.

    Roman civilization, for instance, lasted less than a thousand years from the founding of its republic to the fall of its empire (after a long decline). In the New World, Maya civilization spanned roughly two millennia (maybe a little longer depending on when you date its beginning). In the late Bronze Age, the Greek Mycenaean civilization lasted a mere five centuries or so. As for American civilization (as in the United States of), at the rate things are going it won’t last even that long.

    For some reason, civilization is not a self-perpetuating state of affairs on this planet. And perhaps not on other planets, either. In fact, limits to civilization lifetimes may explain why extraterrestrial aliens have not yet communicated with Earthlings. A new analysis suggests that the entire Milky Way galaxy currently houses only a few dozen worlds equipped with sufficiently sophisticated technology to send us a message. They are probably scattered at such great distances that any signals sent our way haven’t had time to get here. And by the time a signal arrives, there may be nobody here to hear it.

    “We may imagine a galaxy in which intelligent life is widespread, but communication unlikely,” write Tom Westby and Christopher Conselice on June 10 in The Astrophysical Journal.

    Westby and Conselice, of the University of Nottingham in England, base their analysis on a modified version of the Drake equation, proposed nearly 60 years ago by the astronomer Frank Drake.

    Drake Equation, Frank Drake, SETI Institute

    At a time when most scientists didn’t take communicating with E.T. seriously, Drake identified the factors that would, in principle, permit an estimate of how many communicating civilizations might exist in the galaxy. His equation provided the framework for all subsequent scientific assessment of the prospects for extraterrestrial intelligence.

    Westby and Conselice accept the Drake equation as “a tool for estimating the number of planets in our galaxy that host intelligent life with the capability of releasing signals which could be detectable from Earth.” (Such Communicating Extra-Terrestrial Intelligent civilizations are sometimes referred to by the acronym CETI.) But since some of its terms are impossible to measure today (such as how many stars have planets, and how many planets are capable of hosting life), Westby and Conselice adopt a novel approach by making assumptions that can circumvent the lack of data needed to fill in the Drake equation’s blanks.

    Westby and Conselice begin by assuming it takes 5 billion years for intelligent, technologically advanced life to evolve — because that’s (approximately) how long it took on Earth. In some scenarios they assume that any habitable planet that lasts that long will, in fact, evolve such life. Given those data points, the task of counting galactic civilizations then involves figuring out how many stars are old enough and how many planets orbit those stars at a distance providing Goldilocks temperatures plus water and other raw materials needed to create and sustain biological beings.

    For one thing, that means the stellar system must possess sufficient quantities of metals — in astronomers’ argot, elements heavier than hydrogen or helium. Carbon, oxygen, nitrogen and other more complex substances must be available for life to both evolve and build radio transmitters or lasers to send signals through space.

    So in their new CETI equation, Westby and Conselice show how the number of intelligent, communicating civilizations in the galaxy today depends on how many stars the galaxy contains, how many of them are more than 5 billion years old, how many of those stars host habitable planets, and how long an advanced civilization lasts on average. Crunching all sorts of numbers about star formation rates and ages, results of planet searches and other astronomical studies yields estimates for each term in the CETI equation. It turns out that some of those factors don’t limit alien life’s prospects very much. Almost all the stars in the galaxy are older than 5 billion years, for instance (and their average age is almost 10 billion years).

    Some of those stars would be ruled out as E.T. habitats because of a lack of raw materials. Assuming the most pessimistic scenario — that life requires stars to have at least as much metal as the sun — eliminates about two-thirds of the galaxy’s stars. Of those remaining, the fraction with planets in an orbit conducive to habitability is probably about 20 percent.

    Since the galaxy is home to more than 200 billion stars, age, metal content and habitability limits still leave billions of possible CETI abodes. But that’s before factoring in civilization lifetime. It’s safe to say that a communicating civilization can last 100 years, since Earth’s technology has been emitting radio waves for that long. But if no high-tech society survives for more than a century, very few will be around at this particular time to communicate with us. Under the strictest set of assumptions, an average CETI life span of 100 years works out to only 36 communicating civilizations in the galaxy today. If so, far more movies have been made on Earth about alien civilizations than there actually are alien civilizations.
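
    To make the multiplicative logic concrete, here is a minimal sketch of a Drake-style product of factors, using round numbers quoted in this article (200 billion stars, 97 percent old enough, one-third meeting the strict metallicity cut, 20 percent with a habitable-zone planet, a 100-year lifetime). The 5-billion-year window and the resulting count are illustrative assumptions only; the paper's actual equation and inputs differ, which is why this toy does not reproduce the 36-civilization figure.

```python
# Toy Drake-style product using round numbers quoted in this article.
# Every value is an illustrative assumption; the Westby-Conselice paper
# uses its own equation and inputs, so this does NOT reproduce their 36.

N_STARS  = 2.0e11     # "more than 200 billion stars"
F_OLD    = 0.97       # almost all stars are older than 5 billion years
F_METAL  = 1.0 / 3.0  # strictest case: two-thirds lack solar metallicity
F_HABIT  = 0.20       # fraction of remaining stars with a habitable-zone planet
L_CIV    = 100.0      # assumed average civilization lifetime, in years
T_WINDOW = 5.0e9      # assumed span (years) over which civilizations could arise

n_active = N_STARS * F_OLD * F_METAL * F_HABIT * (L_CIV / T_WINDOW)
print(f"Toy estimate of currently active communicating civilizations: ~{n_active:.0f}")
```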

    Among those 36, the closest neighbor would probably be about 17,000 light-years away, “making communication or even detection of these systems nearly impossible with present technology,” Westby and Conselice write. For an ambitious civilization lifetime of 2,000 years, the nearest CETI neighbor could still be thousands of light-years away. In a wildly optimistic case, with an average high-tech lifetime of a million years, the closest civilization should be within 300 light-years and maybe as close as 20.
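
    A rough way to see where a number like 17,000 light-years comes from is to spread a few dozen civilizations across the area of the galactic disk and take the typical spacing as the square root of the area per civilization. This is only a back-of-envelope sketch, not the authors' calculation, and the 50,000 light-year disk radius is an assumed round number.

```python
import math

# Back-of-envelope nearest-neighbor spacing for N civilizations spread
# uniformly over a thin galactic disk (a sketch, not the paper's method).

DISK_RADIUS_LY = 5.0e4      # assumed round number for the Milky Way's disk
N_CIVILIZATIONS = 36

area = math.pi * DISK_RADIUS_LY ** 2
typical_spacing = math.sqrt(area / N_CIVILIZATIONS)
print(f"Typical separation: about {typical_spacing:,.0f} light-years")
# Prints roughly 15,000 ly, the same ballpark as the 17,000 ly quoted above.
```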

    “The lifetime of civilizations in our galaxy is a big unknown … and is by far the most important factor in the CETI equation,” Westby and Conselice note. “It is clear that … very long lifetimes are needed for … the galaxy to contain even a few possible active contemporary civilizations.”

    If you’re wondering how different assumptions can affect the prospects for getting alien e-mail, you can check out a tool at the Alien Civilization Calculator website created by physicists Steve Wooding and Dominik Czernia. Their tool permits you to plug in values to either the new CETI equation or the original Drake equation to see how different assumptions affect the galaxy’s population of alien civilizations.

    All such calculations are pretty imprecise. The uncertainty range for Westby and Conselice’s estimate of 36 civilizations, for instance, is four to 211. But the lack of precision is not as meaningful as the underlying message — the importance of civilization lifetime for the odds of receiving a message. And that message implies, as Westby and Conselice emphasize, that no news from E.T. is a bad sign for the lifetime of civilization on Earth.

    Since most stars in the galaxy are much older than the sun, the absence of signals so far suggests that most communicating civilizations have already come and gone, like the Maya and Mycenaeans. If that’s the case, an ability to communicate may signify an ability to self-annihilate.

    “Perhaps the key aspect of intelligent life, at least as we know it, is the ability to self-destroy,” Westby and Conselice comment. “As far as we can tell, when a civilization develops the technology to communicate over large distances it also has the technology to destroy itself and this is unfortunately likely universal.”

    In other words, Earth’s entire civilization will go the way of the Roman Empire sooner rather than later. There are plenty of likely roads to ruin. Nuclear holocaust is always a possibility, although nowadays it seems more likely that a viral pandemic will reboot the planet’s biosphere. Or climate change might do the job. If all else fails, there’s always social media.

    Yet there is always hope that high-tech societies can survive longer. Maybe long-lived alien civilizations are not so far away after all, but simply have chosen not to communicate with us because we don’t seem to be sufficiently civilized.

    See the full article here.

    Further to this discussion:

    Breakthrough Listen Project


    UC Observatories Lick Automated Planet Finder, fully robotic 2.4-meter optical telescope at Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California, USA




    GBO radio telescope, Green Bank, West Virginia, USA


    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia


    SKA MeerKAT telescope, 90 km outside the small Northern Cape town of Carnarvon, South Africa

    Newly added

    CfA/VERITAS, a major ground-based gamma-ray observatory with an array of four Čerenkov Telescopes for gamma-ray astronomy in the GeV – TeV energy range. Located at the Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, USA, Altitude 2,606 m (8,550 ft)



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:04 pm on June 25, 2020 Permalink | Reply
    Tags: "Physicists spot a new class of neutrinos from the sun", , , , Borexino Collaboration, , , Science News,   

    From Science News: “Physicists spot a new class of neutrinos from the sun” 

    From Science News

    June 24, 2020
    Emily Conover

    Neutrinos from the sun’s second-most prominent nuclear fusion process have been spotted in the Borexino detector (inside shown with light-detecting sensors). Borexino Collaboration

    Neutrinos spit out by the main processes that power the sun are finally accounted for, physicists report.

    Two sets of nuclear fusion reactions predominate in the sun’s core and both produce the lightweight subatomic particles in abundance. Scientists had previously detected neutrinos from the most prevalent process. Now, for the first time, neutrinos from the second set of reactions have been spotted, researchers with the Borexino experiment said June 23 in a talk at the Neutrino 2020 virtual meeting.

    “With this outcome, Borexino has completely unraveled the two processes powering the sun,” said physicist Gioacchino Ranucci of Italy’s National Institute for Nuclear Physics in Milan.

    In the sun’s core, hydrogen fuses into helium in two ways. One, known as the proton-proton chain, is the source of about 99 percent of the star’s energy. The other group of fusion reactions is the CNO cycle, for carbon, nitrogen and oxygen — elements that allow the reactions to proceed. Borexino had previously spotted neutrinos from the proton-proton chain (SN: 9/1/14). But until now, neutrinos from the CNO cycle were MIA.

    “They’re top of everybody’s list to try and identify and to spot,” says physicist Malcolm Fairbairn of King’s College London. “Now they think they’ve spotted them, which is a major achievement, really an extremely difficult measurement to make.”

    Located deep underground at the Gran Sasso National Laboratory in Italy, Borexino searches for flashes of light produced as neutrinos knock into electrons in a large vat of liquid.

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    INFN/Borexino Solar Neutrino detector, at Laboratori Nazionali del Gran Sasso, situated below Gran Sasso mountain in Italy

    Researchers have spent years fine-tuning the experiment to detect the elusive neutrinos that herald the CNO cycle. Although difficult to observe, the particles are plentiful, Borexino confirmed. On Earth, around 700 million neutrinos from the sun’s CNO cycle pass through a square centimeter each second, the researchers report.

    The result, presented for the first time at the virtual meeting, must still clear the hurdle of peer review in a scientific journal before it is fully official.

    Studying these particles could help reveal how much of the sun is composed of elements heavier than hydrogen and helium, a property known as metallicity. That’s because the rate at which CNO cycle neutrinos are produced depends on the sun’s content of carbon, nitrogen and oxygen. Different types of measurements currently disagree about the sun’s metallicity, with one technique suggesting higher metallicity than another. In the future, more sensitive measurements of CNO neutrinos could help scientists disentangle the problem.

    The CNO cycle is even more important in stars heavier than the sun, where it is the main fusion process. Studying this cycle in the sun can help physicists understand the inner workings of other stars, says Zara Bagdasarian, a physicist at the University of California, Berkeley and a member of the Borexino Collaboration. “It’s very important for us to understand how the sun works.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:46 am on June 21, 2020 Permalink | Reply
    Tags: "Machine learning helped demystify a California earthquake swarm", , , , , , Science News   

    From Science News: “Machine learning helped demystify a California earthquake swarm” 

    From Science News

    June 18, 2020
    Carolyn Gramling

    New data show the spread of the tiny quakes through complex fault networks over time.

    By training computers to identify tiny earthquake signals recorded by seismographs, scientists found that circulating groundwater probably triggered a four-year-long earthquake swarm in Southern California. Credit: Furchin/E+/Getty Images

    Circulating groundwater triggered a four-year-long swarm of tiny earthquakes that rumbled beneath the Southern California town of Cahuilla, researchers report in the June 19 Science. By training computers to recognize such faint rumbles, the scientists were able not only to identify the probable culprit behind the quakes, but also to track how such mysterious swarms can spread through complex fault networks in space and time.

    Seismic signals are constantly being recorded in tectonically active Southern California, says seismologist Zachary Ross of Caltech. Using that rich database, Ross and colleagues have been training computers to distinguish the telltale ground movements of minute earthquakes from other things that gently shake the ground, such as construction reverberations or distant rumbles of the ocean (SN: 4/18/19). The millions of tiny quakes revealed by this machine learning technique, he says, can be used to create high-resolution, 3-D images of what lies beneath the ground’s surface in a particular region.
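
    For readers curious what "training computers to distinguish" tiny quakes from other ground noise can look like in code, here is a deliberately simplified sketch on synthetic data. The Caltech team's actual pipeline (template matching and deep learning on real Southern California seismograms) is far more sophisticated; the feature choices and data below are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy illustration only: build synthetic "noise" and "noise + small burst"
# windows, extract two crude features, and train a classifier to tell them
# apart. Real detection uses full waveforms and much richer models.

rng = np.random.default_rng(0)

def features(window):
    # Crude features: peak amplitude, and energy in the final tenth of the
    # window relative to the whole window (where our synthetic burst sits).
    peak = np.max(np.abs(window))
    tail = np.mean(window[-50:] ** 2)
    total = np.mean(window ** 2) + 1e-12
    return [peak, tail / total]

noise = rng.normal(0, 1.0, size=(200, 500))          # background only
burst = rng.normal(0, 1.0, size=(200, 500))          # background + tiny quake
t = np.arange(50)
burst[:, -50:] += 4.0 * np.exp(-t / 15.0) * np.sin(t / 2.0)

X = np.array([features(w) for w in np.vstack([noise, burst])])
y = np.array([0] * len(noise) + [1] * len(burst))    # 0 = noise, 1 = quake

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Training accuracy on the synthetic set:", clf.score(X, y))
```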

    In 2017, the researchers noted an uptick in tiny quake activity in the Cahuilla region that had, at that point, been going on for about a year. Most of the quakes were far too small to be felt but were detectable by the sensors. Over the next few years, the team used their computer algorithm to identify 22,000 such quakes from early 2016 to late 2019, ranging in magnitude from 0.7 to 4.4.

    Such a cluster of small quakes, with no standout, large mainshock, is called a swarm. “Swarms are different from a standard mainshock-aftershock sequence,” which are typically linked to the transfer of stress from fault to fault in the subsurface, Ross says. The leading candidates for swarm triggering come down to groundwater circulation or a kind of slow slippage on an active fault, known as fault creep.

    “Swarms have been somewhat enigmatic for quite a while,” says David Shelly, a U.S. Geological Survey geophysicist based in Golden, Colo., who was not connected with the study. They are particularly common in volcanic and hydrothermal areas, he says, “and so sometimes, it’s a bit harder to interpret the ones that aren’t in those types of areas,” like the Cahuilla swarm (SN: 5/14/20).

    “This one is particularly cool, because it’s [a] rare, slow-motion swarm,” Shelly adds. “Most might last a few days, weeks or months. This one lasted four years. Having it spread out in time like that gives a little more opportunity to examine some of the nuances of what’s going on.”

    Data from the Cahuilla swarm, which is winding down but “not quite over,” Ross says, revealed not only the complex network of faults beneath the surface, but also the evolution of the fault zone over time. “You can see that the sequence [of earthquakes] originated from a region that’s only on the order of tens of meters wide,” Ross says. But over the next four years, he adds, that region grew, creating an expanding front of earthquake epicenters that spread out at a rate of about 5 meters per day, until it became about 30 times the size of the original zone.

    That diffusive spread, Ross says, suggests that moving groundwater is triggering the swarm. Although the team didn’t directly observe fluids moving underground, the scientists speculate that beneath the fault zone lies a reservoir of groundwater that previously had been sealed off from the zone. At some point, that seal broke, and the groundwater was able to seep into one of the faults, triggering the first quakes. From there, it moved through the fault system over the next few years, triggering more quakes in its wake. Eventually, the seeping groundwater probably ran up against an impermeable barrier, which is bringing the swarm to a gradual halt.
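
    The numbers quoted above already sketch the physical picture. Below is a minimal back-of-envelope calculation, assuming the front advanced steadily at about 5 meters per day for roughly four years and that it follows the commonly used pore-pressure diffusion scaling r ≈ sqrt(4πDt); that scaling is an assumed model for illustration, not a result reported in the study.

```python
import math

# Rough arithmetic on the quoted numbers (illustration, not the authors' analysis).
rate_m_per_day = 5.0
years = 4.0

total_spread_m = rate_m_per_day * years * 365.25
seconds = years * 365.25 * 86400
diffusivity = total_spread_m ** 2 / (4 * math.pi * seconds)   # from r ~ sqrt(4*pi*D*t)

print(f"Front spread over four years: ~{total_spread_m / 1000:.1f} km")
print(f"Implied hydraulic diffusivity: ~{diffusivity:.3f} m^2/s")
```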

    Being able to identify what causes such mysterious events is extremely important when it comes to communicating with people about earthquake hazards, Ross says. “Typically, we have very limited explanations that we can provide to the public on what’s happening,” he says. “It gives us something that we can explain in concrete terms.”

    And this discovery, he adds, “gives me a lot of confidence” to continue to apply this technique, such as on the last 40 years of amassed seismic data in Southern California, which likely contains many more previously undetected swarms.

    The study highlights how seismologists are increasingly acknowledging the importance of fluids in the crust, Shelly says. And, he adds, it emphasizes how having so many tiny quakes can illuminate the hidden world of the subsurface. “It’s kind of like having a special telescope to look down into the crust,” he adds. Combining this wealth of seismic data with machine learning is “the future of earthquake analysis.”

    ______________________________________________

    Earthquake Alert


    Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network


    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC is a leader in the fields of Distributed Computing, Grid Computing and Citizen Cyberscience. BOINC is more properly the Berkeley Open Infrastructure for Network Computing, developed at UC Berkeley.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
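
    As a rough illustration of where those warning times come from (a sketch, not ShakeAlert's actual processing chain), the usable warning is roughly the S-wave travel time minus the P-wave travel time, less whatever the system needs to detect the quake and issue the alert. The wave speeds and the 5-second processing delay below are assumed round numbers for crustal rock.

```python
# Illustrative warning-time arithmetic; all constants are assumed values.
VP_KM_S = 6.0        # typical crustal P-wave speed (assumed)
VS_KM_S = 3.5        # typical crustal S-wave speed (assumed)
PROCESSING_DELAY_S = 5.0

def warning_time(distance_km):
    return distance_km / VS_KM_S - distance_km / VP_KM_S - PROCESSING_DELAY_S

for d in (20, 50, 100, 200):
    print(f"{d:4d} km from the epicenter: ~{max(warning_time(d), 0):.0f} s of warning")
# Output ranges from essentially zero nearby to a few tens of seconds far away,
# consistent with the range described in the paragraph above.
```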

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:14 pm on June 11, 2020 Permalink | Reply
    Tags: "This weird quantum state of matter was made in orbit for the first time", , , , , Science News, To make a Bose-Einstein condensate atoms must be cooled while trapped with magnetic fields.   

    From Science News: “This weird quantum state of matter was made in orbit for the first time” 

    From Science News

    6.11.20
    Emily Conover

    Scientists created weird quantum matter in orbit with the Cold Atom Lab, delivered to the International Space Station in 2018 by the Cygnus spacecraft, as seen from inside the space station. Credit: NASA

    On the International Space Station, astronauts are weightless. Atoms are, too.

    That weightlessness makes it easier to study a weird quantum state of matter known as a Bose-Einstein condensate. Now, the first Bose-Einstein condensates made on the space station are reported in the June 11 Nature.

    The ability to study the strange state of matter in orbit will aid scientists’ understanding of fundamental physics as well as make possible new, more sensitive quantum measurements, says Lisa Wörner of the German Aerospace Center Institute of Quantum Technologies in Bremen. “I cannot overstate the importance of this experiment to the community,” she says.

    A Bose-Einstein condensate occurs when certain types of atoms are cooled to such low temperatures that they take on one unified state. “It’s as though they’re joining arms and behaving as one harmonious object,” says physicist David Aveline of NASA’s Jet Propulsion Laboratory in Pasadena, Calif. To produce the weird state of matter in orbit, he and colleagues created the Cold Atom Lab, which was installed on the space station in 2018.

    In orbit, the atoms are in free fall, continuously plummeting under the force of gravity, producing a weightlessness like that felt by riders when a roller coaster suddenly drops. Those conditions, known as microgravity, make the space station an ideal environment for studying Bose-Einstein condensates.

    To make a Bose-Einstein condensate, atoms must be cooled while trapped with magnetic fields. On Earth, the trap must be strong enough to prop the atoms up against gravity. Because that’s not a concern in microgravity, the trap can be weakened, allowing the cloud of atoms to expand and cool. This process allows the condensate to achieve lower temperatures than are possible with the same methods on Earth. In the Cold Atom Lab, rubidium atoms reached tenths of billionths of kelvins.

    Within the Cold Atom Lab, atoms are gradually cooled and condensed into progressively tighter clouds (four clumps are shown in this illustration, growing tighter from bottom to top). Coils produce magnetic fields that help trap the atoms, while lasers (red) assist in cooling. Credit: NASA

    Bose-Einstein condensates are already the record holders for the lowest known temperatures (SN: 4/13/15). With further improvement to the cooling techniques, scientists expect that the Cold Atom Lab could go even colder, to temperatures below any known in the universe.

    Another boon of microgravity is that measurements of the bizarre matter can be made for longer periods of time. Normally, atoms are released from the trap and then imaged quickly before gravity pulls them out of view. But in microgravity, researchers found that they could observe the released atoms for as long as 1.1 seconds. On Earth, the same techniques yield observation times of about 40 milliseconds.
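
    Simple free-fall kinematics shows why the longer hold is impractical on the ground; this is just an illustration of the constraint described above, not the experiment's analysis. Once released from the trap on Earth, the cloud drops a distance of g·t²/2 and quickly leaves the imaging region.

```python
# Why free fall matters: distance a released atom cloud drops on Earth.
g = 9.81  # m/s^2, standard gravitational acceleration

def drop_m(t_seconds):
    return 0.5 * g * t_seconds ** 2

print(f"Drop after 40 ms on Earth: ~{100 * drop_m(0.040):.1f} cm")   # under a centimeter
print(f"Drop a 1.1 s observation would require: ~{drop_m(1.1):.1f} m")  # several meters
```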

    Those longer observation times could allow for more sensitive measurements. The atoms could be used to detect forces, including how Earth’s gravity varies over time and across different parts of the planet. Future experiments on dedicated satellites could make new, more sensitive measurements of phenomena such as sea level rise.

    And the quantum matter could also be used to test fundamental principles of physics, such as Einstein’s equivalence principle (SN: 12/4/17). That’s the idea that objects of different masses or compositions — or in this case, different types of atoms — will fall due to gravity at the same rate.

    Previous experiments have studied Bose-Einstein condensates on a rocket shot into space that quickly fell back to Earth and in a tower that launched an apparatus upward and let it fall back down. But the short duration of such flights limits how many experiments can be performed.

    “This is of course the big advantage” of the space station, says physicist Maike Lachmann of Leibniz University Hannover in Germany, who coauthored a perspective article that appears in the same issue of Nature. With about two years already logged on the space station, the Cold Atom Lab has already had plenty of time for experiments. “They can do very, very exciting things,” Lachmann says.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:12 am on June 4, 2020 Permalink | Reply
    Tags: "A Milky Way flash implicates magnetars as a source of fast radio bursts", A young active magnetar about 30000 light-years away dubbed SGR 1935+2154, , , , , Science News, STARE2-Survey for Transient Astronomical Radio Emission 2   

    From Science News: “A Milky Way flash implicates magnetars as a source of fast radio bursts” 

    From Science News

    6.4.20
    Maria Temming

    A bright radio burst generated by a magnetar (one illustrated) in our galaxy hints that similar objects are responsible for at least some of the fast radio bursts in other galaxies, which have puzzled astronomers for over a decade. L. Calçada/ESO

    High-energy event nearby could help explain mystery signals from distant galaxies.

    Astronomers think they’ve spotted the first example of a superbright blast of radio waves, called a fast radio burst, originating within the Milky Way.

    Dozens of these bursts have been sighted in other galaxies — all too far away to see the celestial engines that power them (SN: 2/7/20). But the outburst in our own galaxy, detected simultaneously by two radio arrays on April 28, was close enough to see that it was generated by a highly magnetic neutron star called a magnetar.

    That observation is a smoking gun that magnetars are behind at least some of the extragalactic fast radio bursts, or FRBs, that have defied explanation for over a decade (SN: 7/25/14). Researchers describe the magnetar’s radio burst online at arXiv.org on May 20 and May 21.

    “When I first heard about it, I thought, ‘No way. Too good to be true,’” says Ben Margalit, an astrophysicist at the University of California, Berkeley, who wasn’t involved in the observations. “Just, wow. It’s really an incredible discovery.”

    In addition to giving magnetars an edge over other proposed explanations for FRBs, such as those involving black holes and stellar collisions, observations of this Milky Way magnetar may clear up a debate among theorists about how magnetars crank out such powerful radio waves.

    Researchers first noted an intense radio outburst from a young, active magnetar about 30,000 light-years away, dubbed SGR 1935+2154, in an astronomer’s telegram. The Canadian Hydrogen Intensity Mapping Experiment, or CHIME, radio telescope in British Columbia had detected about 30 decillion, or 3 × 10^34, ergs of energy from the burst.

    CHIME, the Canadian Hydrogen Intensity Mapping Experiment, a partnership between the University of British Columbia, the University of Toronto, McGill University, Yale and the National Research Council, at the Dominion Radio Astrophysical Observatory in Penticton, British Columbia, Canada, Altitude 545 m (1,788 ft)

    That was far brighter than any flash of radio waves previously seen from any of the five magnetars in and around the Milky Way known to emit radio pulses.

    That report inspired another group of astronomers to check concurrent data from the Survey for Transient Astronomical Radio Emission 2, or STARE2, detectors in the southwestern United States. STARE2, which watches the sky for radio signals at a different set of frequencies than CHIME, measured a whopping 2.2 × 10^35 ergs from the burst.

    “This thing put out, in a millisecond, as much energy as the sun puts out in 100 seconds,” says Caltech astronomer Vikram Ravi, who was on the team that analyzed the STARE2 data. That made this event 4,000 times as energetic as the brightest millisecond radio pulse ever seen in the Milky Way. If such an intense burst had happened in a nearby galaxy, it would have looked just like a fast radio burst [ https://arxiv.org/abs/2005.10828 ].
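
    That solar comparison is easy to sanity-check. The snippet below uses the 2.2 × 10^35 erg STARE2 figure quoted above together with the standard solar luminosity, which is an assumed constant of about 3.8 × 10^33 erg per second, not a number given in the article.

```python
# Quick order-of-magnitude check of the "100 seconds of sunlight" comparison.
L_SUN_ERG_PER_S = 3.8e33     # standard solar luminosity (assumed here)
BURST_ERG = 2.2e35           # STARE2 measurement quoted in the text

print(f"Seconds of solar output matched by the burst: ~{BURST_ERG / L_SUN_ERG_PER_S:.0f}")
# Prints roughly 60, the same order of magnitude as the quoted 100 seconds.
```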

    “I was basically in shock,” says radio astronomer Christopher Bochenek of Caltech, who combed through the STARE2 data to find the burst. “It took me a while, and a call to a friend, to calm me down enough to go and make sure that this thing was actually real.”

    The weakest FRB that has been observed in another galaxy was still about 40 times more energetic than SGR 1935+2154’s radio flare. But that’s “pretty close, on astronomical terms,” says Keith Bannister, a radio astronomer at Australia’s Commonwealth Scientific and Industrial Research Organization in Sydney, who was not involved in the work. Magnetars like this “could be responsible for some fraction, if not all of the FRBs that we’ve seen so far,” he says. “This motivates future studies to try and find similar sorts of objects in other, nearby galaxies.”

    If magnetars do generate extragalactic FRBs, then SGR 1935+2154 could give new insight about how these objects do it. Theorists currently have many competing ideas about magnetar FRBs, Margalit says. Some think the FRB radio waves originate right in the thick of the star’s intense magnetic fields. Others suspect radio waves are emitted when matter ejected from the magnetar collides with material farther out in space.

    Different magnetar FRB scenarios come with different predictions about the appearance of X-rays that should be emitted along with the radio waves. Extragalactic FRBs are so far away that “the X-rays are kind of hopeless to detect,” Margalit says. But SGR 1935+2154 is close enough that spaceborne detectors saw a gush of X-rays from the magnetar at the same time as the radio burst. A closer look at the brightness, timing and frequency of those X-rays could help theorists evaluate magnetar FRB models, Margalit says.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:38 am on June 2, 2020 Permalink | Reply
    Tags: Book review: "Tree Story", Dendrochronology, Peel away the hard rough bark and there is a living document history recorded in rings of wood cells., Science News, Trouet of the University of Arizona’s Laboratory of Tree-Ring Research in Tucson is a dendroclimatologist; she uses tree rings to study Earth’s past climate., What tree rings can tell us about the past.

    From Science News a book review: “‘Tree Story’ explores what tree rings can tell us about the past” 

    From Science News

    June 1, 2020
    Carolyn Gramling

    Tree Story
    Valerie Trouet
    Johns Hopkins University Press
    $27

    Every tree has a unique pattern of tree rings, and studying these patterns can help scientists learn about past climates and ancient civilizations. Credit: Ja’Crispy/iStock/Getty Images Plus

    Once you look at trees through the eyes of a dendrochronologist, you never quite see the leafy wonders the same way again. Peel away the hard, rough bark and there is a living document, history recorded in rings of wood cells. Each tree ring pattern of growth is unique, as the width of a ring depends on how much water was available that year. By comparing and compiling databases of these “fingerprints” from many different trees in many different parts of the world, scientists can peer into past climates, past ecosystems and even past civilizations.

    Humans’ and trees’ histories have long been intertwined. In her new book Tree Story, tree ring researcher Valerie Trouet examines this shared past as she describes the curious, convoluted history of dendrochronology. It’s a field that was born a little over a century ago, almost as a hobby for an astronomer at the University of Arizona.

    Andrew Douglass was interested in tree rings for what they might tell him about how past solar cycles influenced Earth’s climate. He began amassing a tree ring collection dating back to the mid-15th century. Then Douglass began examining an even older source of data: ancient wooden beams from Puebloan ruins in the U.S. Southwest. By linking the patterns in the beams to his own tree ring samples, he created a long chronological history for the region — and so the science of dendrochronology was born. Through this new dating technique, Douglass also solved a long-standing mystery, calculating ages for the different Puebloan sites ranging from the 10th to the 14th century.

    Tree rings have documented other pivotal moments in human history, Trouet explains. Unusually wet years from 1211 to 1225 may have given a boost to grasses in central Asia’s steppe — fodder for Genghis Khan’s mounted forces and key to the rapid expansion of the Mongol Empire. The 1986 Chernobyl nuclear power plant accident left its mark in the strangely aligned wood cells of surviving pine trees. Wood patterns in a violin crafted by Antonio Stradivari (and worth an estimated $20 million) authenticated not only the violin’s age but its geographic origins.

    Tree ring data spanning over 1,000 years was also instrumental in helping scientists reconstruct the planet’s recent climate history and in highlighting the dramatic warming observed in the last century.

    Tree rings preserve clues about past environments. A wide ring, for example, records a rainy year; a thin ring corresponds to a dry one. Even forest fires leave telltale marks. Credit: C. Chang

    Trouet, a member of the University of Arizona’s Laboratory of Tree-Ring Research in Tucson, is a dendroclimatologist; she uses tree rings to study Earth’s past climate. She tells of “the thrill of the chase” to find the oldest, least disturbed trees on Earth, with circular rings and growth related only to changes in climate. These trees have helped her identify, for example, periods of medieval drought in northern Africa that are linked to a large-scale weather pattern known as the North Atlantic Oscillation — also the probable reason for a historically documented period of warmth in Europe known as the Medieval Climate Anomaly, she suggests.

    Now, she and colleagues are examining tree rings from Europe to trace how the high-speed jet stream winds that encircle the Northern Hemisphere have shifted over time. The waviness of the jet stream — how far south these winds might dip and curl — is linked to patterns of storms across the northern latitudes. Understanding those links in the past, Trouet argues, could provide clues to how storminess may change in the future, as the planet’s climate changes.

    Tree Story gives readers a lively, sometimes visceral feel for Trouet’s work. She describes the beauty of tiny wood cells smaller in diameter than a human hair, and the elbow grease involved in manually twisting a borer into the heart of a tree to retrieve a sample. “This requires quite a bit of upper-body strength, especially if you’re coring dozens of trees a day, and this often comes as a surprise to dendro newbies.” Trouet’s humor also comes through when she describes how fieldwork is sometimes driven by testosterone-fueled stubbornness, and how she has had to convince male colleagues hunting for trees in the mountains that it’s OK to admit to being tired, hungry or cold. “As a woman scientist, I got 99 problems, but at least starving or freezing to death to protect my ego ain’t one.”

    Peppered throughout the book are italicized terms and helpful definitions of scientific jargon such as “crossdating” (matching ring patterns among different trees, whether alive or dead, to create a consistent chronology). I particularly enjoyed getting a glimpse into odd tree ring lingo: To “hit the pith” is to core all the way to the oldest part of a tree; “cookies” are the round cross sections of a fallen trunk, cut with a chainsaw or an ax.
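
    As a toy illustration of crossdating (real dendrochronology detrends the ring-width series, tests statistical significance and draws on large reference databases), the sketch below slides a short ring-width sequence from an "undated" sample along a dated master chronology and picks the offset with the highest correlation. All data here are synthetic.

```python
import numpy as np

# Toy crossdating: find where a short ring-width sequence best matches a
# dated reference chronology by sliding correlation. Synthetic data only.
rng = np.random.default_rng(1)
reference = rng.normal(1.0, 0.3, size=200)            # dated master chronology
true_offset = 120
sample = reference[true_offset:true_offset + 40] + rng.normal(0, 0.05, size=40)

def best_match(reference, sample):
    scores = []
    for lag in range(len(reference) - len(sample) + 1):
        window = reference[lag:lag + len(sample)]
        scores.append(np.corrcoef(window, sample)[0, 1])
    best = int(np.argmax(scores))
    return best, scores[best]

lag, r = best_match(reference, sample)
print(f"Best alignment at ring {lag} (true offset {true_offset}), correlation r = {r:.2f}")
```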

    Trouet loves trees, but she says she is not a tree-hugger, nor does she believe trees are sentient. Instead, she is drawn to unlocking the secrets the trees contain. “Wood is gorgeous,” she writes. “And finding matching tree ring patterns is like solving a puzzle — it is addictive.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:14 am on June 2, 2020 Permalink | Reply
    Tags: "Chicxulub collision put Earth’s crust in hot water for over a million years", , Asteroid crater on the Yucatán peninsula reveals surprising new geologic details., , Science News   

    From Science News: “Chicxulub collision put Earth’s crust in hot water for over a million years” 

    From Science News

    6.1.20
    Carolyn Gramling

    Asteroid crater on the Yucatán peninsula reveals surprising new geologic details.

    About 66 million years ago, an asteroid slammed into Earth at what is now Chicxulub, on Mexico’s Yucatán peninsula (illustrated). In addition to triggering a mass extinction, the impact sent superheated water cycling deep into the crust.
    Science Photo Library/Alamy Stock Photo

    The asteroid that slammed into Earth 66 million years ago left behind more than a legacy of mass destruction. That impact also sent superheated seawater swirling through the crust below for more than a million years, chemically overhauling the rocks. Similar transformative hydrothermal systems, left in the wake of powerful impacts much earlier in Earth’s history, may have been a crucible for early microbial life on Earth, researchers report May 29 in Science Advances.

    The massive Chicxulub crater on Mexico’s Yucatán peninsula is the fingerprint of a killer, probably responsible for the destruction of more than 75 percent of life on Earth, including all nonbird dinosaurs (SN: 1/25/17). In 2016, a team of scientists made a historic trek to the partially submerged crater, drilling deep into the rock to study the crime scene from numerous angles.

    One of those researchers was planetary scientist David Kring of the Lunar and Planetary Institute in Houston. A dozen years earlier, Kring had found evidence at Chicxulub that the layers of rock bearing the signs of impact — telltale features such as shocked quartz and melted spherules — were subsequently cut through by veins of newer minerals such as quartz and anhydrite. Such veins, Kring thought, suggest that hot hydrothermal fluids had been circulating beneath Chicxulub some time after the impact.

    Hydrothermal systems can occur where Earth is tectonically active, such as where tectonic plates pull the seafloor apart, or where mantle plumes like the one beneath Yellowstone rise up into the crust. The molten rock rising through the crust in these regions superheats water already circulating within the crust.

    But the Yucatán peninsula is tectonically quiescent, and has been for 66 million years, Kring says. So, as part of the International Ocean Discovery Program’s Expedition 364 to Chicxulub, he and colleagues drilled 1,335 meters below the ring of the crater, retrieving long cores of sediment and rock.

    The team then analyzed the minerals found in the cores. “It was immediately obvious that they had been hydrothermally altered. It was pervasive and apparent,” Kring says. The intense heat of the circulating seawater caused chemical reactions within the rock, transforming some minerals into others. By identifying the different types of minerals, the team determined that the initial temperature of the fluids was more than 300° Celsius, later cooling to about 90° C.

    The chemically altered rocks beneath the crater extended down about four or five kilometers below the crater’s peak ring, a circular, mountainous region within the vast crater. The hydrothermally altered zone covers a volume more than nine times that of the Yellowstone Caldera system, Kring says. Paleomagnetic data suggest that the hydrothermal system lasted for more than a million years.

    A core of rock and sediment extracted from within the Chicxulub impact crater revealed centimeter-sized cavities within the rocks containing hydrothermally altered minerals. Here, tiny cavities within impact breccia — a type of rock formed of broken fragments cemented together by fine-grained sediment — contain analcime (transparent crystals), which forms at temperatures around 200° Celsius, and dachiardite (red crystals), which forms at temperatures around 250° C. Credit: D. Kring

    Those conditions, the researchers say, may have also been capable of fostering life akin to the extremophiles that thrive in Yellowstone’s boiling pools. In addition to the metal-rich fluids that could provide an energy source for microbes, the Chicxulub cores revealed that the rocks were both porous and permeable — in other words, filled with interconnected nooks and crannies that could have been cozy shelters for microbes.

    “It looks like a perfect habitat,” Kring says.

    Kring has previously suggested that the very same destructive impacts that annihilate life may also create appealing habitats — not just on Earth, but potentially on other planetary bodies such as Mars. Even more tantalizing is the possibility that hydrothermal systems, engendered beneath ancient impacts, may have been where life on Earth began (SN: 3/1/13).

    Evidence from lunar craters suggests that Earth was heavily bombarded by asteroids about 3.9 billion years ago (SN: 10/18/04). Most of those more ancient craters on Earth have long since vanished or been altered by the constant tectonic recycling of Earth’s surface (SN: 12/18/18). So the hydrothermal system beneath Chicxulub offers a window into what such systems might have actually looked like much deeper in the past, says geophysicist Norman Sleep of Stanford University, who was not involved in the study. “It shows the reality of the process,” Sleep says.

    The new study may set the stage for the possibility of life thriving beneath an impact. But whether a microbial cast of characters was actually present beneath Chicxulub is a question for future studies, Kring says.

    “Let me be clear: This paper has no evidence of microbial life,” Kring says. “We just have all the properties of hydrothermal systems that do support life elsewhere on Earth.”

    Ancient environments that provided water, chemical building blocks and energy “are very promising candidates for hosting [life’s] origins and early evolution,” says NASA astrobiologist David Des Marais, who was not involved in the study. Impact-generated hydrothermal systems aren’t the only such environments; researchers have also made a compelling case for hot springs, Des Marais says [Astrobiology].

    That’s an ongoing debate, he notes, adding “I consider hydrothermal systems to be highly promising exploration targets for astrobiology.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:06 am on May 26, 2020 Permalink | Reply
    Tags: "A star shredded by a black hole may have spit out an extremely energetic neutrino", , , , , Science News   

    From Science News: “A star shredded by a black hole may have spit out an extremely energetic neutrino” 

    From Science News

    If true, this would be only the second time such a neutrino has been traced back to its source.

    A high-energy neutrino may have been born when a star was ripped apart by a black hole (illustrated), scientists report. Gas (red) stripped from the star spirals toward the black hole. Some stellar material is swallowed up while some is flung outward (blue). Credit: M.Weiss/CXC/NASA

    5.26.20
    Emily Conover

    A neutrino that plowed into the Antarctic ice offers up a cautionary message: Don’t stray too close to the edge of an abyss.

    The subatomic particle may have been blasted outward when a star was ripped to pieces during a close encounter with a black hole, physicists report May 11 at arXiv.org. If it holds up, the result would be the first direct evidence that such star-shredding events can accelerate subatomic particles to extreme energies. And it would mark only the second time that a high-energy neutrino has been traced back to its cosmic origins.

    With no electric charge and very little mass, neutrinos are known to blast across the cosmos at high energies. But scientists have yet to fully track down how the particles get so juiced up.

    One potential source of energetic neutrinos is what’s called a tidal disruption event. When a star gets too close to a supermassive black hole, gravitational forces pull the star apart (SN: 10/11/19). Some of the star’s guts spiral toward the black hole, forming a hot pancake of gas called an accretion disk before the black hole gobbles the gas up. Other bits of the doomed star are spewed outward. Scientists had predicted that such violent events might beget energetic neutrinos like the one detected.

    Spotted on October 1, 2019, the little neutrino packed a punch: an energy of 200 trillion electron volts. That’s about 30 times the energy of the protons in the most powerful human-made particle accelerator, the Large Hadron Collider. The neutrino’s signature was picked up by IceCube, a detector frozen deep in the Antarctic ice. That detector senses light produced when neutrinos interact with the ice.
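
    The "about 30 times" comparison is simple arithmetic, assuming the LHC's roughly 6.5 trillion electron volt per-proton beam energy, a standard Run 2 figure that is not stated in the article itself.

```python
# Checking the "about 30 times the LHC" comparison quoted above.
NEUTRINO_ENERGY_TEV = 200.0     # 200 trillion electron volts, from the article
LHC_PROTON_ENERGY_TEV = 6.5     # assumed LHC per-proton beam energy (Run 2)

print(f"Ratio: ~{NEUTRINO_ENERGY_TEV / LHC_PROTON_ENERGY_TEV:.0f}x")   # roughly 31
```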

    When IceCube finds a high-energy neutrino, astronomers scour the sky for anything unusual in the direction from which the particle came, such as a short-lived flash of light, or transient, in the sky. This time, astronomers with the Zwicky Transient Facility came up with a possible match: a tidal disruption event called AT2019dsg.

    First observed in April 2019, that event had been spied emitting light of various wavelengths: visible, ultraviolet, radio and X-rays. And the maelstrom was still raging when IceCube detected the neutrino, according to a team of physicists including Marek Kowalski of the Deutsches Elektronen-Synchrotron, or DESY, in Zeuthen, Germany.

    While intriguing, the association between the neutrino and the shredded star is not certain, says IceCube physicist Francis Halzen of the University of Wisconsin–Madison, who was not involved with the new study. “I don’t know if I have to bet my wallet, but I probably would,” Halzen says. “But it doesn’t have much money in it.”

    The probability that a neutrino and a similar tidal disruption event would overlap by chance is only 0.2 percent, the researchers report. But that doesn’t meet physicists’ stringent burden of proof. “Just one event is difficult to convince [us] this source is really a neutrino emitter,” says astrophysicist Kohta Murase of Penn State University. “I am waiting for more data.”
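
    For readers used to seeing significance quoted in "sigma," here is a minimal sketch of what that 0.2 percent corresponds to and why it falls short of the 5-sigma discovery convention. The one-sided Gaussian conversion below is a common convention I am assuming, not something stated in the article.

```python
# Minimal sketch: convert the reported 0.2% chance-coincidence probability
# into an equivalent Gaussian significance and compare with 5 sigma.
from scipy.stats import norm

p_chance = 0.002                 # 0.2 percent, as reported by the researchers
z_equiv = norm.isf(p_chance)     # one-sided Gaussian equivalent (~2.9 sigma)
p_5sigma = norm.sf(5)            # p-value physicists demand for a "discovery"

print(f"0.2% ~ {z_equiv:.1f} sigma; a 5-sigma claim needs p ~ {p_5sigma:.1e}")
```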

    Kowalski declined to comment for this article, as the paper has not yet been accepted for publication in a scientific journal.

    To have birthed such an energetic neutrino, the star-shredding event must have first accelerated protons to high energies. Those protons must then have crashed into other protons or photons (particles of light). That process produces other particles, called pions, that emit neutrinos as they decay.
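
    The article does not spell out the chain, but the standard textbook route runs roughly as follows (my summary, not the paper's), and the rule of thumb that each neutrino carries about one-twentieth of the parent proton's energy is a common astroparticle estimate rather than a figure from the study:

\[
p + \gamma \rightarrow \Delta^{+} \rightarrow n + \pi^{+}, \qquad
\pi^{+} \rightarrow \mu^{+} + \nu_{\mu}, \qquad
\mu^{+} \rightarrow e^{+} + \nu_{e} + \bar{\nu}_{\mu},
\]

    with \(E_{\nu} \sim E_{p}/20\), so a 200-TeV neutrino would point back to parent protons of very roughly a few thousand TeV.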

    Now, scientists are aiming to pin down exactly how that acceleration happened. The protons might have been launched within a wind of debris that flowed outward in all directions. Or they could have been accelerated in a powerful, geyserlike jet of matter and radiation.

    AT2019dsg shows some unusual features that any explanation should be able to account for. X-rays produced in the event, for example, appeared to drop off rapidly. So physicists Walter Winter of DESY and Cecilia Lunardini of Arizona State University in Tempe suggest May 13 at arXiv.org that the event did produce a jet, but that a cocoon of material gradually shrouded the region, hiding the X-rays from view while still allowing the neutrino to escape. Lunardini declined to comment because the paper is not yet published in a journal.

    But Murase argues that for the jet to be hidden, that means it can’t be that powerful of an outflow, making it hard to explain the energetic neutrino this way. “If it injects a lot of energy, this energy gets out,” he says. In a third study posted May 18 at arXiv.org, Murase and colleagues favor the idea that the protons get accelerated in an outward flowing wind or in a corona, a superhot region near the black hole’s accretion disk.

    Determining where these particles come from can help scientists better understand some of the most extreme environments in the cosmos. Previously, astronomers had matched up a different energetic neutrino with a blazar experiencing a flare-up (SN: 7/12/18). A blazar is a bright source of light powered by a supermassive black hole at the center of a galaxy. Both a blazar flare and a tidal disruption event “are very special activities, which is when a lot of energy is released in a small amount of time,” says astrophysicist Ke Fang of Stanford University, who was not involved with the study.

    Making more observations of high-energy neutrinos is crucial, Fang says. “This is the only way we can clearly understand how the universe is operating at this extreme energy.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:13 am on May 11, 2020 Permalink | Reply
    Tags: "How tiny ‘dead’ galaxies get their groove back and make stars again", , , , , Science News   

    From Science News: “How tiny ‘dead’ galaxies get their groove back and make stars again” 

    From Science News

    May 8, 2020
    Ken Croswell

    Gas falling into the dwarf galaxy must fight the galaxy’s old stars before making new ones.

    Unlike most of its tiny peers, little Leo P glitters with freshly minted stars, seen here behind bright stars in the Milky Way and in front of distant galaxies. From left to right, this Hubble image spans just 4,800 light-years, showing how small Leo P is. Credit: Hubble Space Telescope/NASA, ESA; K. McQuinn/Rutgers Univ.

    Talk about sibling rivalry. Most of the smallest galaxies are “dead,” making no new stars. Now, computer simulations reveal why it is so hard for a tiny galaxy to rejuvenate itself: The galaxy’s existing stars fight the birth of any new ones, even after fresh fuel for star formation falls into the galaxy.

    These simulations also show that eventually new stars can arise [ https://arxiv.org/abs/2004.09530 ] and make the galaxy sparkle again. But it can take many billions of years for a little galaxy to get its star-making mojo back, researchers report April 20 at arXiv.org.

    Galaxies spawn new stars from gas, but the gas must be cold and dense to collapse into stars. That requirement spelled big trouble for little galaxies soon after the universe’s birth, when ultraviolet radiation from galaxies broke intergalactic hydrogen atoms into protons and electrons (SN: 11/7/19). This process, called reionization, let radiation stream through space and heat gas inside galaxies. The smallest galaxies had so little gas to begin with that it all got zapped by the ionizing radiation. As a result, the typical low-mass dwarf galaxy stopped creating stars long ago.
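
    A back-of-the-envelope way to see why heated gas refuses to collapse is the Jeans mass, which grows steeply with temperature. The sketch below is my own illustration with assumed round-number densities, not a calculation from the paper.

```python
# Rough illustration: the Jeans mass is the minimum gas mass that can collapse
# under its own gravity. Heating gas from ~100 K to ~10,000 K (roughly what
# reionization does) raises that threshold by a factor of ~1,000.
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27        # hydrogen atom mass, kg
M_sun = 1.989e30        # solar mass, kg

def jeans_mass(T, n_H, mu=1.22):
    """Jeans mass in solar masses for gas at temperature T (kelvin)
    and hydrogen number density n_H (atoms per cubic centimeter)."""
    rho = mu * m_H * n_H * 1e6               # mass density in kg/m^3
    c_s = np.sqrt(k_B * T / (mu * m_H))      # isothermal sound speed, m/s
    M_J = (np.pi**2.5 / 6.0) * c_s**3 / np.sqrt(G**3 * rho)
    return M_J / M_sun

for T in (100.0, 1.0e4):
    print(f"T = {T:7.0f} K -> Jeans mass ~ {jeans_mass(T, n_H=1.0):.1e} M_sun")
```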

    In these tiny galaxies, “reionization killed star formation,” says Martin Rey, an astrophysicist at Lund Observatory in Sweden. All the stars in most low-mass dwarf galaxies today are therefore ancient (SN: 4/3/15).

    But two unrelated dwarf galaxies in the constellation Leo, named Leo P and Leo T, never got the memo. Puny even by dwarf standards, both galaxies still eke out new stars. They are the least luminous star-forming galaxies known: Leo P’s stellar mass is only 560,000 solar masses, about 0.001 percent of the Milky Way’s total, and Leo T has even fewer stars (SN: 5/9/18). Both galaxies are the Milky Way’s neighbors — Leo P is 5 million light-years and Leo T is just 1.3 million light-years away — so astronomers can see the newborn stars.
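
    Those two figures hang together: 0.001 percent is a fraction of one part in 100,000, and the Milky Way's stellar mass is commonly estimated at roughly 6 × 10^10 solar masses (that Milky Way figure is my addition, not from the article), so

\[
\frac{5.6\times10^{5}\ M_{\odot}}{6\times10^{10}\ M_{\odot}} \approx 10^{-5} = 0.001\ \text{percent}.
\]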

    To explain how such small galaxies thrive today, Rey’s team ran computer simulations of the gas, stars and dark matter in low-mass dwarf galaxies. The simulations showed that infalling gas can resuscitate dwarf galaxies and reignite their star formation, a finding supported by earlier work (SN: 7/12/18). But “we find that it takes quite a long time,” Rey says.

    The problem is that old stars in the dwarf galaxy prevent the birth of new stars by stirring up the gas, the researchers found. In particular, exploding white dwarf stars and winds from large red aging stars heat the gas, delaying the new era of star birth. In fact, 6 to 8 billion years — about half the age of the universe (SN: 7/24/18) — can pass before little dwarf galaxies resume their star-making careers and resemble Leo P and Leo T, the scientists report.

    “Their results are very plausible,” says Kristen McQuinn, an astrophysicist at Rutgers University in Piscataway, N.J., who has studied Leo P in the past but was not involved with the new work. She says including the negative effects of the galaxy’s own stars makes the simulations unique.

    Rey and his colleagues also found something else. “The kind of surprise and cherry on the cake was the fact that we can predict a new class of galaxies,” Rey says. The simulations show that some low-mass dwarf galaxies have acquired gas but not yet begun to mint new stars.

    No definite examples of gas-rich dwarf galaxies that lack star formation are known, Rey says, but he predicts future observations will uncover them. New optical telescopes should find the faint old stars in these galaxies, and radio telescopes should detect their hydrogen gas, which may someday spawn new stars.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:07 am on May 7, 2020 Permalink | Reply
    Tags: "Warming water can create a tropical ecosystem but a fragile one", , , , , Science News   

    From Science News: “Warming water can create a tropical ecosystem, but a fragile one” 

    From Science News

    May 6, 2020
    Jake Buehler

    Warm water discharged into the Sea of Japan let tropical fish flourish in an artificial hot spot.

    A cutribbon wrasse (Stethojulis interrupta) is one of the tropical species that disappeared from the Otomi Peninsula in Japan after a nearby power plant shut down operations and stopped releasing warm water. Credit: Mark Rosenstein/iNaturalist.org (CC BY-NC-SA 4.0)

    A decade ago, the waters off the Otomi Peninsula in the Sea of Japan were a tepid haven. Schools of sapphire damselfish flitted above herds of long-spined urchins. The site was a hot spot of tropical biodiversity far from the equator, thanks to warm water exhaust from a nearby nuclear power plant. But when the plant ceased operations in 2012, those tropical species vanished.

    After the plant shut down, Otomi’s average bottom temperature fell by 3 degrees Celsius, and the site lost most of its tropical fishes, fisheries scientist Reiji Masuda of Kyoto University reports May 6 in PLOS ONE. The die-off of tropical fishes and invertebrates was “striking,” he says. Otomi quickly reverted to a cool-water ecosystem.

    The life and death of the reef is providing a sneak peek into the future of temperate habitats under climate change. This research suggests that even modest warming can result in dramatic changes to cool-water reefs, with some temperate habitats converting to more tropical ones. But these emerging reefs may not match the diversity or health of other more established tropical reefs at first, leaving them as ecologically fragile as the Otomi reef proved to be.

    While some temperate reefs are changing rapidly with global warming, they aren’t exact transplants of more established tropical ecosystems, says David Booth, a marine ecologist at the University of Technology Sydney not involved in the new study. Booth studies increasingly tropical Australian reefs.

    “People always ask us, ‘Oh, that means even though the Barrier Reef’s in trouble with bleaching, in a couple of years Sydney will be the new Barrier Reef?’” Booth says. Sydney is merely acquiring a handful of tropical fish and coral, he says, “so, it ain’t the Barrier Reef by any means. Just a coral community starting, that’s all.”

    Rapid die-off

    In October 2003, while studying groupers at Otomi, Masuda noticed lots of tropical fishes that seemed out of place. Parts of southern Japan host tropical reefs, but Otomi sits at about 35° N, a zone typically occupied by seaweeds and associated fishes. The source of this anomaly was the Takahama nuclear power plant, only 2 kilometers away, which released warm water into the ocean after using it to cool reactors.

    In 2004, Masuda began surveying Otomi and two other nearby sites, cataloging and counting fish. Then the Tōhoku earthquake and tsunami struck in 2011, precipitating the Fukushima Daiichi nuclear disaster. Japan stopped running all of its nuclear plants in response, including Takahama in 2012. As the warm discharge ceased, Otomi became an impromptu natural experiment in resiliency (SN: 12/5/14), and Masuda kept collecting data for the next five years.

    Soon, he started seeing dead and dying fish everywhere. “In normal marine environments, we scarcely see a dead fish,” says Masuda, since fish usually die by being eaten. But around Otomi, fish were succumbing en masse to the cold temperatures instead.

    Neon damselfish (Pomacentrus coelestis) once congregated in Otomi’s warm water (left). But after a nearby nuclear power plant shut down, the waters cooled. Now, Japanese rockbass (Sebastes cheni) and the wrasse Halichoeres tenuispinis, typical species in temperate Japan, swim among sargassum seaweed (right). Credit: Reiji Masuda

    Masuda was also surprised at how quickly Otomi shifted back to a temperate ecosystem. “Only two months after the die-out of tropical, poisonous sea urchins, temperate sea urchins appeared,” he says. “The sargassum seaweed bed recovered with some temperate fishes such as common wrasse and rockfish.”

    Sneak peek

    Otomi may provide a preview of some of the changes temperate reefs could experience as the global climate warms. After decades of warm water, Otomi still had no shelter-providing corals or large, tropical predators.

    That lack of predators may have been behind Otomi’s high densities of tropical urchins, which had stripped the seabed clear of algae, obliterating access to food and shelter for many other species. There was nothing “to control their number and thus to maintain a healthy ecosystem,” he says.

    Masuda thinks the die-offs may have been so severe and abrupt because of this poor ecosystem health. With species diversity lower than in other tropical systems, the lack of redundancy can make the whole ecosystem more susceptible to stressors. In this case, that stress was a drop in temperature.

    If there were many different species of urchin in the tropicalized reef, there’d be a higher chance that some could tolerate lower temperatures, Masuda points out. “This applies to fishes, too,” he says. “In healthy tropical ecosystems, there are many species — some should be relatively robust to temperature changes.”
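
    As a toy illustration of that redundancy argument (the per-species survival probability below is an arbitrary assumed number, not data from the study):

```python
# Toy model: if each species independently has some chance of tolerating a
# sudden temperature drop, more species means a better chance that at least
# one survives to keep the ecosystem functioning.
p_tolerant = 0.2   # assumed per-species chance of tolerating the change

for n_species in (1, 3, 10):
    p_any_survives = 1 - (1 - p_tolerant) ** n_species
    print(f"{n_species:2d} species -> {p_any_survives:.0%} chance at least one tolerates it")
```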

    Elsewhere in Japan, warming seas have already led to complete ecosystem shifts from kelp forests to coral, upending fisheries, Booth notes.

    As for Otomi, it may get another chance to be a natural experiment. In May 2017, the Takahama nuclear reactor turned back on, and Masuda has been diving and collecting data on the return of tropical fishes and urchins as the waters warm. Analyzing this much slower change, he says, “will be another fish to fry.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     