Tagged: Science Magazine

  • richardmitnick 7:25 pm on May 11, 2021
    Tags: "‘Something went wrong.’ Some astronomers feel left out of European road map", adioNet – Radio Astronomy in Europe (EU), APPEC-Astroparticle Physics European Consortium (EU), Astronet, Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020) (US), OPTICON-Optical Infrared Coordination Network for Astronomy, Science Magazine   

    From Science Magazine: “‘Something went wrong.’ Some astronomers feel left out of European road map” 

    From Science Magazine

    May. 11, 2021
    Daniel Clery

    A 2008 Astronet road map called for the European Southern Observatory’s 39-meter Extremely Large Telescope, now under construction atop Cerro Armazones in the Atacama Desert of northern Chile at an altitude of 3,060 meters (10,040 feet). Some astronomers are upset by the group’s latest effort. Credit: ESO [Observatoire européen austral][Europäische Südsternwarte] (EU).

    For more than 1 year, Astronet, a group of more than 50 astronomers, has labored to draw up priorities for the next 2 decades of European astronomy. A partial draft plan, released in February, lists the field’s most pressing scientific questions, such as how primordial gases coalesced into the first stars and galaxies and whether the atmospheres of exoplanets betray signs of life. To answer them, the plan calls for new facilities including the Einstein Telescope, a gravitational wave detector to be built in a network of underground tunnels; antennas installed on the radio-quiet far side of the Moon; and a fleet of orbiting telescopes to probe exoplanet atmospheres.

    But some are unhappy with what the draft plan left out—particularly in radio and gamma ray astronomy, as well as the study of high-energy particles from space. “Something went wrong,” says Leonid Gurvits, a radio astronomer at the Delft University of Technology [Technische Universiteit Delft] (NL). “It’s not anyone’s intention, it just happened in this unfortunate way.” Astronet organizers say the drafts were not intended to be exhaustive and later revisions will reflect the roughly 200 comments received before a 1 May deadline.

    Astronet mirrors the U.S. decadal surveys in astronomy and astrophysics, which since the 1960s have provided funding agencies and legislators with infrastructure priorities—essentially a wish list of big telescopes and space missions. The current iteration, the Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020) (US), is expected to release its report in the coming months. It was put together by more than 150 committee and panel members with input from hundreds of submitted white papers as well as dozens of virtual meetings and town halls.

    In its first incarnation, Astronet aimed for something similarly comprehensive, producing a science vision in 2007 and the following year, a road map of facilities and missions. It endorsed efforts now under construction including the Extremely Large Telescope, the Square Kilometre Array, and several space missions.

    Astronet was set up under the auspices of the European Union in 2005 with a 4-year budget of €2.5 million. After Astronet updated its vision and road map in 2015, the European Union cut off funding. But the group continued with the support of a few tens of thousands of euros per year from funding bodies in eight nations plus the ESO [Observatoire européen austral][Europäische Südsternwarte] (EU). The Astronet board, made up of funding agency representatives, decided this time to produce “something more precise, direct, and to the point,” says board chair Colin Vincent of the STFC [Science & Technology Facilities Council] (UK). The new report, he says, aimed to answer, “What are the science questions, where are we now, and what do we need to progress over the next 20 years?”

    Astronet formed panels of as many as 12 researchers in each of five fields, ranging from the origin and evolution of the universe to understanding the Solar System and conditions for life. It also formed a panel covering computing and another for outreach, education, and diversity. COVID-19 thwarted plans to gather input at town hall meetings. The panels did not solicit white papers but instead drafted reports from their own experience. Draft reports from the five subject panels were posted on the Astronet website for comments; the computing and workforce reports are still being drafted. The plan was to “throw them out there and see what the community makes of them,” Vincent says. After the comment period closed, Astronet planned virtual town halls and revisions, with a final release before the end of the year.

    Not everyone was impressed by this approach. Although Astronet tried to get the word out to astronomers across Europe, some complained they only heard about the draft reports 1 month or less before the deadline for comments. “It came totally out of the blue. Most people didn’t know about it,” says radio astronomer Heino Falcke of Radboud University [Radboud Universiteit] (NL). “Everyone’s screaming: ‘What’s going on?’” Gurvits says. RadioNet – Radio Astronomy in Europe (EU), a network representing radio astronomy, requested an extension to the comment deadline, but the request was declined.

    Others have complained about what they see as glaring omissions in the draft reports. According to Andreas Haungs of the KIT Karlsruhe Institute of Technology [Karlsruher Institut für Technologie] (DE), who heads the APPEC-Astroparticle Physics European Consortium (EU), the drafts don’t sufficiently credit the work done by astronomers using high-energy gamma rays, neutrinos, or gravitational waves. “I don’t think it really worked,” he says. Falcke says the report contains “not a single word” on the Event Horizon Telescope, which in 2019 produced the first image of a black hole’s shadow. “This is almost an embarrassment,” he says.

    Gerry Gilmore of the University of Cambridge’s (UK) Institute of Astronomy, head of the OPTICON-Optical Infrared Coordination Network for Astronomy (EU) network of optical and infrared astronomers, counters that the Astronet reports are “discussion documents, the start of a conversation.” Vincent also defends the process. “The important thing was to get people to respond. That’s the point of the consultation,” he says, acknowledging that the drafts may have appeared parochial to some. He says the subject panels are revising their drafts, and in June, the EAS European Astronomical Society (EU) will hold an open meeting to discuss those revisions, with further iterations continuing over the summer.

    Astronet has a difficult task. Europe already has the European Space Agency [Agence spatiale européenne][Europäische Weltraumorganisation] (EU), which launches many space telescopes, and the ESO, which manages a group of top optical telescopes in Chile. With independent budgets, funded directly by national governments, those agencies autonomously work out their own long-term strategies. Linda Tacconi of the MPG Institute for extraterrestrial Physics [Max-Planck-Institut für außerirdische Physik] (DE), who is leading Voyage 2050, ESA’s latest science planning process, says it has been slowed by COVID-19. “Therefore, Voyage 2050 could not be included in the Astronet report,” she says.

    That leaves Astronet to bring some order to the disconnected groups of astronomers who aren’t covered by those two agencies. Dealing with so many national funders isn’t easy either, Vincent says. “A prescriptive approach wouldn’t be as successful,” he says. “We need a common understanding on what is needed and [national agencies] can make a variety of responses on what to bring to the party.”

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition

     
  • richardmitnick 11:40 am on February 26, 2021
    Tags: "Fleets of radar satellites are measuring movements on Earth like never before", , Science Magazine   

    From Science Magazine: “Fleets of radar satellites are measuring movements on Earth like never before” 

    From Science Magazine

    Feb. 25, 2021
    Julia Rosen

    The surface of Ethiopia’s Corbetti volcano has been rising nearly 7 centimeters per year, a sign of underground activity. In this image, red shows the most cumulative inflation. Credit: UNIVERSITY OF BRISTOL(UK)/COMET.

    East Africa has been called the cradle of humanity. But the geologically active region has also given birth to dozens of volcanoes. Few have been monitored for warnings of a potential eruption, and until recently, most were believed to be dormant. Then, Juliet Biggs decided to take a closer look—or rather, a farther look.

    Biggs, a geophysicist at the University of Bristol(UK), uses a technique called interferometric synthetic aperture radar (InSAR) to detect tiny movements of Earth’s surface from space. In a series of studies, she and her co-authors analyzed satellite data on the East African volcanoes. According to their latest results, which were published last month, 14 have been imperceptibly growing or shrinking in the past 5 years—a clue that magma or water is moving underground and that the volcanoes are not completely asleep. “It’s really changed the way these volcanoes are viewed, from something that’s kind of dormant to really very active systems,” Biggs says. After data showed that the Corbetti volcano, which abuts the fast-growing city of Hawassa, Ethiopia, is inflating steadily at a rate of 6.6 centimeters per year, Biggs’s Ethiopian colleagues included it in the country’s geological hazard monitoring network.

    No other technology could produce such a comprehensive survey. Individual GPS stations can track surface movements of less than 1 millimeter, but InSAR can measure changes almost as subtle across a swath hundreds of kilometers wide. That has made it a vital tool for earth scientists studying the heaves and sighs of our restive planet. “We tend to think of the ground as this solid platform,” Biggs says, “and actually, it’s really not.”

    With InSAR, scientists are tracking how ice streams flow, how faults slip in earthquakes, and how the ground moves as fluids are pumped in or out. “Everywhere you look on Earth, you see something new,” says Paul Rosen, an InSAR pioneer at NASA’s Jet Propulsion Laboratory (JPL). “It’s a little bit like kids in a candy store.”

    And the flood of InSAR data is growing fast. Since 2018, the number of civil and commercial SAR satellites in orbit has more than doubled. And at least a dozen more are set to launch this year, which would bring the total to more than 60. With the help of computing advances that make data processing easier, the satellite fleets may soon be able to detect daily or even hourly surface changes at just about every patch of ground on Earth.

    A port explosion rocked Beirut in August 2020. Before-and-after radar images were used to identify areas (red) with surface changes due to damaged buildings.
    ARIA; JPL-CALTECH(US); EARTH OBSERVATORY OF SINGAPORE at NANYANG TECHNOLOGICAL UNIVERSITY(SG); NASA EARTH APPLIED SCIENCES DISASTERS PROGRAM(US); MODIFIED COPERNICUS SENTINEL(EU) DATA (2020).

    ESA (EU) Sentinel-1B.

    As the technology grows more powerful and ubiquitous, InSAR is spreading beyond the geosciences. With InSAR data, railroads are monitoring the condition of their tracks and cities are monitoring shifts in buildings caused by construction. “It’s popping up everywhere,” says Dáire Boyle, who follows trends in the space industry for Evenflow, a consulting firm in Brussels. Analysts value the SAR market at roughly $4 billion, and expect that figure to nearly double over the next 5 years.

    Many believe InSAR will eventually underpin our daily lives. From measuring the water stored in mountain snowpacks to enabling quick responses to natural disasters, InSAR data will prove invaluable to governments and industries, says Cathleen Jones, a science team leader for NISAR, an upcoming joint SAR mission from NASA and the Indian Space Research Organisation (IN). “I want it to become so socially relevant that they can’t go back to not having this data.”

    Synthetic aperture radar, the “SAR” on which InSAR depends, originated in the 1950s as a tool for airborne military reconnaissance. Like traditional radar, SAR instruments captured images of the planet by sending out microwave pulses and recording the echoes. And like a traditional radar, the instruments could penetrate clouds and worked equally well at night. A key difference was the “synthetic” aspect of SAR. Larger radar antennas, like larger apertures on a camera, collect more of the echoes and enable sharper pictures. But building a single antenna large enough to take a high-resolution image isn’t practical. Researchers realized they could instead create an artificially large aperture by combining the signals received on a much smaller antenna as it moved through space. Today, SAR satellites with antennas just a few meters across can produce images with pixel resolutions as sharp as half a meter—better than many satellite-borne cameras.
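
    The resolution gain from synthesizing an aperture can be made concrete with the standard textbook relations (a back-of-the-envelope sketch; the symbols below are generic, not drawn from the article):

    $$
    \delta_{\mathrm{real}} \approx \frac{\lambda R}{D}, \qquad
    \delta_{\mathrm{SAR}} \approx \frac{\lambda R}{2 L_{\mathrm{syn}}} \approx \frac{D}{2},
    \qquad L_{\mathrm{syn}} \approx \frac{\lambda R}{D},
    $$

    where λ is the radar wavelength, R the range to the ground, D the physical antenna length, and L_syn the distance the satellite travels while a ground point stays within the beam. The synthetic-aperture resolution of roughly D/2 is independent of range, which is why an antenna just a few meters long in orbit can deliver the sub-meter pixels described above.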

    SAR images, on their own, suffice for many types of surveillance, from counterterrorism to tracking oil spills in the ocean. But InSAR goes further, by looking for differences between multiple SAR images. The technique takes advantage of phase information in the returning microwaves—in other words, where a signal is in its sinusoidal path when it hits the antenna. Any phase difference in the signal between SAR images taken from the same position at different times means the round-trip distance has changed, and can reveal surface movements down to a few millimeters. “There’s nothing else that compares to it,” says Michelle Sneed, a hydrologist at the U.S. Geological Survey. “I’m still amazed by it after a couple of decades.”
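
    The arithmetic behind that phase-to-displacement conversion is simple enough to sketch in code (a minimal illustration assuming Sentinel-1’s C-band wavelength of about 5.5 centimeters; the function and numbers are ours, not from the article):

```python
import numpy as np

# Approximate Sentinel-1 C-band radar wavelength, in meters.
WAVELENGTH_M = 0.0555

def los_displacement_m(phase_diff_rad):
    """Convert an interferometric phase difference (radians) into
    line-of-sight displacement (meters).

    A full 2*pi cycle corresponds to half a wavelength of ground
    motion, because the radar signal travels the extra distance
    twice (out and back).
    """
    return (WAVELENGTH_M / (4 * np.pi)) * phase_diff_rad

# Example: a quarter-cycle phase shift between two acquisitions
# corresponds to roughly 7 millimeters of line-of-sight motion.
print(los_displacement_m(np.pi / 2))  # ~0.0069 m
```

    The sign convention (motion toward versus away from the satellite) differs between processing systems, so real pipelines track it explicitly.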

    The 1978 launch of Seasat, NASA’s first ocean-observing satellite, provided data for early InSAR efforts.

    Seasat. Credit: NASA.

    Seasat operated for just 105 days before a power failure brought the mission to an untimely end. But in that time, it collected repeat images of California’s Imperial Valley taken over the course of 12 days. Scientists at JPL later compared those images using InSAR to show the subtle swelling of fields as they soaked up irrigation water. “It is not hard to think of numerous applications for the type of instrument demonstrated,” the authors wrote in a 1989 paper. And they were right.

    A classic InSAR study came in 1993, when a team of scientists in France used data from the SAR-enabled European Remote Sensing satellite to study a powerful earthquake that rocked Landers, California, the year before. By analyzing images taken before and after the quake, they calculated that the fault had slipped by up to 6 meters, which agreed with detailed field observations. The InSAR data also revealed how the ground buckled for kilometers around the fault—illustrating the full effects of the temblor at an unprecedented scale.

    Changes between radar images taken before and after 2019 earthquakes in Ridgecrest, California, reveal slip along perpendicular faults. Each color cycle represents 11.5 centimeters of ground motion. Credit: Sang-Ho Yun/JPL-Caltech/NASA; Original ALOS-2 data provided by JAXA (2019).

    High water marks

    Flooding damage in the Bahamas, from Hurricane Dorian in 2019, was remotely assessed by NASA’s Jet Propulsion Laboratory and the Earth Observatory of Singapore using synthetic aperture radar (SAR) data from Europe’s Sentinel-1 satellites. By comparing SAR images acquired before and after the hurricane, researchers can see shifts in the ground surface related to flooding. (Yellow, orange, and red indicate increasingly substantial surface change.) A surge in SAR data is enabling researchers to study changes like these at finer scales and frequencies.

    (Image) ARIA; JPL-Caltech; Earth Observatory of Singapore; Nanyang Technological University; NASA Earth Applied Sciences Disasters Program; modified Copernicus Sentinel data (2019).

    By the 2000s, many earth scientists were using InSAR—and grappling with its limitations. There were few SAR satellites in orbit, and they tended to switch between instruments or imaging modes to accommodate different users’ needs, making the data hard to use for InSAR. The early missions collected the repeat images needed for InSAR only about once a month, and researchers often had to correct for their wobbly orbits. That meant that although scientists could study an event after it happened, they could rarely watch it unfold in real time.

    Leaders at the European Space Agency (ESA) were convinced there was a better way.

    Malcolm Davidson remembers the excitement and anxiety he felt on 3 April 2014, the day the first Sentinel-1 satellite launched. “All your life goes into a few minutes,” says Davidson, mission scientist for ESA’s flagship SAR program. He also remembers the relief when the satellite safely reached orbit, and the awe that came over him when he saw its first image, of ocean swells. “It was very convincing that the mission was going to do great things,” he says.

    With Sentinel-1, the plan was simple: “We cut out all the experiments, and we said, ‘Look, this is a mapping machine.’” He and his colleagues chose a primary imaging mode to use over land—surveying a 250-kilometer swath at a resolution of 5 meters by 20 meters—that they hoped would satisfy most researchers, and made sure the orbits would overlap precisely, so all the data would be suitable for InSAR. The first satellite, Sentinel-1a, retraced its path every 12 days. Then, in 2016, ESA launched a clone that made repeat images available about every 6 days for many places on Earth.

    SAR missions like Italy’s COSMO-SkyMed and Germany’s TerraSAR-X also support InSAR and can achieve even higher resolutions.

    COSMO-SkyMed. Credit: https://www.telespazio.com/en/programmes/cosmo-skymed .

    TerraSAR-X. Credit: ESA.

    But they do not distribute data freely like Sentinel, which many credit for driving a transition from opportunistic experiments to what Davidson sees as “a more operational view of the world.” With Sentinel-1 data, Norway created a national deformation map that has helped identify rockslide hazards and revealed that parts of Oslo’s main train station were sinking. Water managers in California rely on the data to track groundwater use and subsidence. And in Belgium, it is used to monitor the structural integrity of bridges. “It can all be done remotely now, saving time, saving money,” Boyle says.

    Norway’s national deformation map, based on InSAR data from Sentinel-1 satellites, showed parts of Oslo’s train station sinking (red dots) by more than 1 centimeter per year, perhaps because of nearby construction.
    Credit: modified Copernicus Sentinel data (2014–16)/ESA (EU) SEOM INSARAP study; InSAR Norway project; NGU; Norut; PPO Labs/CC BY-SA 3.0 IGO.

    The large and growing body of InSAR data has also revealed small surface movements that were previously hidden by noise. As radar signals pass through the atmosphere, they slow down by an amount that depends on the weather, producing variability that can swamp tiny but important displacements. Thanks to long-term records from missions like Sentinel, researchers can now tease information from the noise, for example, helping them track movements of just a few millimeters per year in Earth’s crust—enough to strain faults and eventually cause earthquakes.
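
    A simplified phase budget shows why long records help (our sketch; the square-root scaling assumes the atmospheric delay is uncorrelated from one acquisition to the next, an assumption the article does not spell out):

    $$
    \phi_{\mathrm{obs}} = \phi_{\mathrm{defo}} + \phi_{\mathrm{atm}} + \phi_{\mathrm{noise}}, \qquad
    \sigma_{\bar{\phi}_{\mathrm{atm}}} \approx \frac{\sigma_{\phi_{\mathrm{atm}}}}{\sqrt{N}},
    $$

    so averaging N interferograms leaves a steady deformation signal intact while beating down the random atmospheric term, letting millimeter-per-year crustal motions emerge from centimeter-scale weather noise.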

    Such efforts would not have been possible without huge gains in computing power. In the 1990s, stacking a single pair of SAR images could take days, Sneed says, and interpreting the results could take much longer. Now, researchers can process hundreds of images overnight, and they increasingly rely on artificial intelligence (AI) algorithms to make sense of the data. In one recent test, an AI algorithm was tasked with identifying small fault movements known as slow earthquakes. It correctly found simulated and historical events, including ones that had eluded human InSAR experts, says Bertrand Rouet-Leduc, a geophysicist at DOE’s Los Alamos National Laboratory (US), who presented preliminary results in December 2020 at the annual meeting of the American Geophysical Union.

    Rouet-Leduc and his team now plan to monitor faults around the world using the same approach. He says it’s mostly a matter of exploiting the vast quantity of data that “sits on servers without being looked at,” because it’s simply too much for scientists to tackle. The researchers hope they will be able to answer questions like when and why slow earthquakes happen, and whether they can trigger big, damaging events by increasing stress on other parts of a fault.

    InSAR is being used to identify instabilities in pit mines that could lead to slope failures.
    Credit: Dares Technology.

    Commercial users often lack the expertise to process InSAR data, so hundreds of companies have sprung up to help. One, Dares Technology(ES), monitors the ground for the construction, mining, and oil and gas industries. By tracking surface changes as fluids are injected or extracted from an oil reservoir, for example, Dares can help companies estimate pumping efficiency and prevent dangerous well failures.

    In the beginning, convincing clients that InSAR data were useful and trustworthy was difficult, says Dares CEO Javier Duro. Now, he says, “Everybody wants to include InSAR in their operations.” Duro is particularly interested in detecting precursors to accidents, for example, by looking for signs of instability in the walls of open-pit mines or in the dams used to store mine tailings. The company usually sends out several alerts per month to clients, who can take actions to avoid disasters. “Typically, InSAR data have been used for back analysis,” Duro says. “Our mission is to focus on the present and the future, and try to predict what could happen.”

    The surge in satellites promises to bring yet another InSAR revolution. Italy, Japan, Argentina, and China all plan to launch additional SAR satellites soon, and NISAR, the NASA-ISRO mission, will take flight in late 2022 or early 2023. NISAR will image Earth’s full land surface every 6 days, on average, says Rosen, the mission’s project scientist. Its two radar sensors will help researchers track many things, including crop growth and changes in woody biomass—crucial for understanding the climate system. With a better view of Antarctica than other missions, NISAR can also monitor changes in ice.

    Taken together, Sentinel-1, NISAR, and the other civil satellites will image most places on Earth at least every 12 hours, Rosen says. But the temporal resolution of InSAR will remain constrained by the revisit rate of the individual missions, because the technique can’t be done with imagery from different missions. However, private companies with large constellations of microsatellites hope to vault the field into yet another realm, by radically increasing revisit frequencies.

    On 24 January, a SpaceX Falcon 9 rocket blasted off from Cape Canaveral, Florida, carrying three satellites, each about the size of a minifridge and weighing less than 100 kilograms, from Iceye(FI). The Finnish SAR startup has raised more than $150 million toward its audacious goal of imaging every square meter of Earth every hour.

    The launch brought Iceye’s commercial constellation to six, giving it an early lead over rival companies such as Capella Space(US)—which had two satellites on the same rocket—and Umbra, both based in California. Iceye plans to add at least eight more satellites this year, allowing it to revisit most of the globe once a day. “That is groundbreaking,” says Pekka Laurila, who co-founded Iceye as an undergraduate at Aalto University [Aalto-yliopisto](FI) and now serves as the company’s chief strategy officer.

    Ultimately, Iceye hopes to assemble a constellation of as many as 100 satellites as it approaches its hourly monitoring objective. That would open up new applications, like tracking how buildings and dams expand during the heat of the day and contract at night—a clue to their structural integrity. Already, Iceye data have been used to guide ships through Arctic sea ice and to track illegal fishing vessels. “If you can work closer to real time, you can actually do something about it,” Laurila says.

    So far, though, Iceye has focused on flood monitoring, which can guide disaster response efforts. In fact, the company provided some of the first images of Grand Bahama after Hurricane Dorian devastated the island in 2019, Laurila says. Precise flood data are also valuable to insurers, who can use them to trigger automatic insurance payouts after an event instead of processing claims and sending out inspectors. Until now, Iceye has tracked floods using regular SAR data, but it hopes to start to apply InSAR as it increases its revisit frequencies, because the technique can measure the height and extent of inundation much more precisely.

    And that’s just the beginning of what Laurila hopes Iceye will do. His ultimate goal is to build a “new layer of digital infrastructure” that will provide a “real-time, always-available, objective view on the world,” he says. He believes that, like modern GPS, reliable SAR and InSAR data will support myriad applications, many of which have yet to be imagined. “Nobody thought of your Uber and pizza delivery when they thought of GPS,” Laurila says.

    If Iceye and its peers succeed, they will expose the shifts and shudders of the planet, day in and day out. They will spy tilting buildings and slumping slopes, and they will witness the growth of crops and the flow of commodities around the world. If space-based imagery often portrays Earth as quiet and still, InSAR reveals the true restlessness of our living planet.

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition

     
  • richardmitnick 3:44 pm on February 6, 2021
    Tags: "The cloak-and-dagger tale behind this year’s most anticipated result in particle physics", Any hints of new physics will emerge from the gap between the measured result and theorists’ prediction., As early as March the Muon g-2 experiment at Fermi National Accelerator Laboratory (Fermilab) will report a new measurement of the magnetism of the muon-a heavier short-lived cousin of the electron., As they orbit each muon decays to produce a positron which flies into one of the detectors lining the ring., At Brookhaven the Muon G-2 experiment collected data from 1997 to 2001., , , Like the electron the muon spins like a top and its spin imbues it with magnetism., , Physicists eagerly await the new measurement because if the discrepancy is real something new must be causing it., Quantum theory also demands that the muon is enshrouded by "virtual particles and antiparticles"., Science Magazine, The extra magnetism makes the muons precess faster than they orbit-roughly 30 times for every 29 orbits-an effect that in principle makes it simple to measure the excess., The gap between theorists’ consensus value for the muon’s magnetism and the Brookhaven value is now 3.7 times the total uncertainty-not too far from the five times needed to claim something new., The measures that g-2 experimenters are taking to ensure they don’t fool themselves into claiming a false discovery are the stuff of spy novels ., The positrons have higher energy when the muons are spinning in the direction they are circulating and lower energy when they are spinning the opposite way., This would be a very clear sign of new physics so it would be a huge deal., Those “virtual particles” increase the muon’s magnetism by about 0.001%-an excess denoted as g-2., Using a purer more intense muon beam the revamped g-2 ultimately aims to reduce the experimental uncertainty to one-quarter of its current value.   

    From Fermi National Accelerator Laboratory via Science Magazine: “The cloak-and-dagger tale behind this year’s most anticipated result in particle physics” 

    FNAL Art Image by Angela Gonzales

    From DOE’s Fermi National Accelerator Laboratory, an enduring source of strength for the US contribution to scientific research worldwide.

    via

    Science Magazine

    Jan. 27, 2021
    Adrian Cho

    As muons race around a ring at the Fermi National Accelerator Laboratory, their spin axes twirl, reflecting the influence of unseen particles.

    In 1986, the TV journalist Dan Rather was attacked in New York City. A deranged assailant pummeled him while cryptically demanding, “Kenneth, what’s the frequency?” The query became a pop culture meme, and the rock band R.E.M. even based a hit song on it. Now, it could be the motto for the team about to deliver the year’s most anticipated result in particle physics.

    As early as March, the Muon g-2 experiment at Fermi National Accelerator Laboratory (Fermilab) will report a new measurement of the magnetism of the muon, a heavier, short-lived cousin of the electron. The effort entails measuring a single frequency with exquisite precision. In tantalizing results dating back to 2001, g-2 found that the muon is slightly more magnetic than theory predicts. If confirmed, the excess would signal, for the first time in decades, the existence of novel massive particles that an atom smasher might be able to produce, says Aida El-Khadra, a theorist at the University of Illinois, Urbana-Champaign. “This would be a very clear sign of new physics, so it would be a huge deal.”

    The measures that g-2 experimenters are taking to ensure they don’t fool themselves into claiming a false discovery are the stuff of spy novels, involving locked cabinets, sealed envelopes, and a second, secret frequency known to just two people, both outside the g-2 team. “My wife won’t pick me for responsible jobs like this, so I don’t know why an important experiment did,” says Joseph Lykken, Fermilab’s chief research officer, one of the keepers of the secret.

    Like the electron, the muon spins like a top, and its spin imbues it with magnetism. Quantum theory also demands that the muon is enshrouded by particles and antiparticles flitting in and out of the vacuum too quickly to be observed directly. Those “virtual particles” increase the muon’s magnetism by about 0.1%, an excess denoted as g-2. Theorists can predict the excess very precisely, assuming the vacuum fizzes with only the particles in the prevailing theory. But those predictions won’t jibe with the measured value if the vacuum also hides massive new particles. (The electron exhibits similar effects, but is less sensitive to new particles than the muon because it is much less massive.)
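
    In standard notation (textbook definitions added here for orientation, not taken from the article), the muon’s magnetic moment and its anomaly are

    $$
    \vec{\mu} = g \, \frac{e}{2 m_{\mu}} \, \vec{S}, \qquad
    a_{\mu} \equiv \frac{g-2}{2} \approx 0.00116,
    $$

    where Dirac’s equation predicts exactly g = 2 for a pointlike muon. The cloud of virtual particles nudges g about 0.1% higher, and it is this excess, conventionally quoted as the anomaly a_μ, that both the theorists and the experiment are chasing.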

    To measure the telltale magnetism, g-2 researchers fire a beam of muons (or, to be more precise, their antimatter counterparts) into a 15-meter-wide circular particle accelerator. Thousands of muons enter the ring with their spin axis pointing in the direction they travel, like a football thrown by a right-handed quarterback. A vertical magnetic field bends their trajectories around the ring and also makes their spin axis twirl, or precess, like a wobbling gyroscope.

    Were it not for the extra magnetism from the virtual particles, the muons would precess at the same rate that they orbit the ring and, thus, always spin in their direction of travel. However, the extra magnetism makes the muons precess faster than they orbit, roughly 30 times for every 29 orbits—an effect that, in principle, makes it simple to measure the excess.

    As they orbit, each muon decays to produce a positron, which flies into one of the detectors lining the ring. The positrons have higher energy when the muons are spinning in the direction they are circulating and lower energy when they are spinning the opposite way. So as the muons go around and around, the flux of high-energy positrons oscillates at a frequency that reveals how much extra magnetism the virtual particles create.
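
    In g-2 analyses, that oscillating flux is conventionally modeled with the so-called wiggle function (the standard leading-order form, shown as a sketch rather than the collaboration’s full fit):

    $$
    N(t) = N_{0} \, e^{-t/\gamma\tau_{\mu}} \left[ 1 + A \cos\left( \omega_{a} t + \varphi \right) \right], \qquad
    \omega_{a} = a_{\mu} \, \frac{e B}{m_{\mu}},
    $$

    where γτ_μ ≈ 64 microseconds is the time-dilated muon lifetime in the ring, A is the decay asymmetry, and ω_a is the precession-minus-orbit frequency whose extraction the whole experiment is built around.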

    To measure that frequency with enough precision to search for new particles, physicists must tightly control every aspect of the experiment, says Chris Polly, a physicist at Fermilab and co-spokesperson for the 200-member g-2 team. For example, to make the ring’s magnetic field uniform to 25 parts in 1 million, researchers have adorned the poles of its electromagnets with more than 9000 strips of steel thinner than a sheet of paper, says Polly, who has worked on the g-2 experiment since its inception in 1989 at Brookhaven National Laboratory in Upton, New York.

    Brookhaven Muon g-2 ring.

    Each sheet acts as a magnetic “shim” that makes a minuscule adjustment in the field.

    At Brookhaven, the experiment collected data from 1997 to 2001. Ultimately, researchers measured the muon’s magnetism to a precision of 0.6 parts in 1 billion, arriving at a value about 2.4 parts per billion bigger than the theoretical value at the time. In 2013, they hauled the 700-ton ring 5000 kilometers by barge to Fermilab in Batavia, Illinois.

    FNAL G-2 magnet from Brookhaven Lab finds a new home in the FNAL Muon G-2 experiment.

    Using a purer, more intense muon beam, the revamped g-2 ultimately aims to reduce the experimental uncertainty to one-quarter of its current value. The result to be announced this spring won’t reach that goal, says Lee Roberts, a g-2 physicist at Boston University. But if it matches the Brookhaven result, it would strengthen the case for new particles lurking in the vacuum.

    However, g-2 researchers must ensure they don’t fool themselves while making the more than 100 tiny corrections that the various aspects of the experiment require. To avoid subconsciously steering the frequency toward the value they want, the experimenters blind themselves to the true frequency until they’ve finalized their analysis.

    The blinding has multiple layers, but the last is the most important. To hide the true frequency at which the flux of positrons oscillates, the experiment runs on a clock that ticks not in real nanoseconds, but at an unknown frequency, chosen at random. At the start of each monthslong run, Lykken and Fermilab’s Greg Bock punch an eight-digit value into a frequency generator that’s kept under lock and key. The last step in the measurement is to open the sealed envelope containing the unknown frequency, the key to converting the clock readings into real time. “It’s like the Academy Awards,” Lykken says.
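
    A toy version of that final blinding layer might look like this (entirely hypothetical code; the names, numbers, and structure are our illustration, not the collaboration’s software):

```python
import numpy as np

def unblind_time(ticks, secret_clock_hz):
    """Convert blinded clock ticks into true seconds.

    During the analysis, times exist only as tick counts from a clock
    whose true rate is sealed in an envelope, so any frequency the
    team fits is in cycles per tick, not cycles per second.
    """
    return np.asarray(ticks) / secret_clock_hz

def unblind_frequency(freq_cycles_per_tick, secret_clock_hz):
    """Convert a fitted frequency from cycles per tick into hertz."""
    return freq_cycles_per_tick * secret_clock_hz

# Example: a fit to the positron wiggle yields 5.7e-3 cycles per tick.
# Only opening the envelope (say the secret rate is 39.997 MHz) pins
# down the physical precession frequency.
print(unblind_frequency(5.7e-3, 39.997e6))  # ~2.28e5 Hz
```

    Until the envelope is opened, every intermediate result carries the same unknown scale factor, so no one can steer the fit toward a hoped-for answer.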

    Any hints of new physics will emerge from the gap between the measured result and theorists’ prediction. That prediction has its own uncertainties, but over the past 15 years, the calculations have become more precise and consistent, and the disagreement between theory and experiment is now bigger than ever. The gap between theorists’ consensus value for the muon’s magnetism and the Brookhaven value is now 3.7 times the total uncertainty, El-Khadra says, not too far from the five times needed to claim a discovery.

    Nevertheless, the discrepancy may be less exciting than it was 20 years ago, says William Marciano, a theorist at Brookhaven. At that time, many physicists thought it could be a hint of supersymmetry, a theory that predicts a heavier partner for each standard model particle. But if such partners lurk in the vacuum, the world’s largest atom smasher, Europe’s Large Hadron Collider, probably would have blasted them out by now, Marciano says. “It’s not impossible to explain [the muon’s magnetism] with supersymmetry,” Marciano says, “but you have to stand on your head to do it.”

    Still, physicists eagerly await the new measurement because, if the discrepancy is real, something new must be causing it. The team is still deciding when it will unblind the data, says Roberts, who has worked on g-2 since it began. “At Brookhaven, I was always sitting on the edge of my chair [during unblinding], and I think I will be here, too.”

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 11:52 am on January 15, 2021
    Tags: "How the famed Arecibo telescope fell—and how it might rise again", , , , , , Science Magazine, Some 130 people worked at the observatory and many more derive indirect economic benefits from it., Some place blame at the feet of NSF’s astronomy division which for more than a decade tried to offload Arecibo so it could divert funds to operating newer telescopes., Some think the new plan is a pipe dream., The Gregorian dome and other new equipment added 300 tons to the platform., The loss dismayed scientists worldwide. Arecibo was still a scientific trailblazer. Its powerful radar could bounce radio waves off other planets and asteroids revealing the contours of their surfaces, Together with Arecibo staff researchers last month delivered a white paper to NSF describing plans for a new $400 million telescope on the same site., Why did cables that had held up the platform for decades suddenly fail so spectacularly?   

    From Science Magazine: “How the famed Arecibo telescope fell—and how it might rise again” 

    From Science Magazine

    Jan. 14, 2021
    Daniel Clery
    dclery@science-int.co.uk

    On 1 December 2020, the 900-ton instrument platform of the Arecibo Observatory crashed into its dish, which is cradled in a natural sinkhole. Credit: RICARDO ARDUENGO/AFP VIA GETTY IMAGES.

    In the early morning of 10 August 2020, Sravani Vaddi, a postdoc astronomer at the Arecibo Observatory in Puerto Rico, was working from home, but her thoughts were at Arecibo’s giant radio telescope. At 2 a.m., she had one precious hour to focus the 305-meter dish on NGC 7469, a distant galaxy. At its center, two supermassive black holes wheeled around each other, following an earlier galaxy merger. Vaddi wanted to see whether having two dark hearts instead of the usual one made the galaxy shine more brightly by stirring up gases and stoking starbirth. Radio emissions from the glowing gases would help her find out.

    When she checked in near the end of her observations, computer servers suggested the telescope wasn’t pointing at the galaxy anymore. She couldn’t get an on-site telescope operator on the phone, so she gave up and went to bed.

    She woke up to a full inbox. At 2:45 a.m., toward the end of her slot, an 8-centimeter-thick steel cable, one of 18 suspending a 900-ton instrument platform high above the dish, had pulled out of its socket at one end and fallen, slicing into the dish. “I was totally shocked. How could a cable break?” she says. Although she didn’t know it at the time, the photons she gathered from NGC 7469 would be the last ones Arecibo would ever scoop up.

    The rest of the story is now well known. A second support cable snapped 3 months later, on 6 November, and the National Science Foundation (NSF), which owns the observatory, said attempting repairs was too dangerous: Arecibo would be dismantled. On 1 December, fate took control as more cables snapped and the platform, as heavy as 2000 grand pianos, came crashing down into the dish.


    Snippet: Arecibo telescope collapses.

    The loss dismayed scientists worldwide. Although 57 years old, Arecibo was still a scientific trailblazer. Its powerful radar could bounce radio waves off other planets and asteroids, revealing the contours of their surfaces. Other antennas could heat plasma in Earth’s upper atmosphere, creating artificial aurorae for study. And for most of Arecibo’s life, it was the biggest radio dish in the world, able to sense the faintest emissions, from the metronomic beats of distant stellar beacons called pulsars to the whisper of rarefied gases between galaxies.

    The public, familiar with the majestic dish from films such as Contact and GoldenEye, also felt the loss. And it was a bitter blow to the people of Puerto Rico, who embraced hosting the technological marvel. Some 130 people work at the observatory, and many more derive indirect economic benefits from it. Every schoolchild on the island goes on a field trip to see the telescope, and those experiences often lead to science careers, says astrobiologist Abel Méndez of the University of Puerto Rico, Arecibo. With its fall, “Puerto Rico loses much more than any other place,” he says.

    Along with the grief have come sharper questions. After surviving numerous earthquakes and hurricanes, why did this scientific crown jewel collapse so unceremoniously on a calm winter morning? Some engineers and astronomers think manufacturing flaws or poor maintenance in a tropical, corrosive environment doomed the suspension cables. Others place blame at the feet of NSF’s astronomy division, which for more than a decade tried to offload Arecibo so it could divert funds to operating newer telescopes. “Somehow, we lost a $300 million instrument, a magnificent, really expensive instrument, for a few million dollars,” says Richard Behnke, an Arecibo staffer from 1970 to 1982 who went on to head the geospace science division at NSF. “Things should not collapse like that. It’s not acceptable stewardship at all.”

    Meanwhile, astronomers are looking to the future. “First we mourned, then we had a wake, then we got down to work,” says Joanna Rankin, an astronomer at the University of Vermont. Together with Arecibo staff, researchers last month delivered a white paper to NSF describing plans for a new $400 million telescope on the same site. Although any rebuilding effort faces major political and financial hurdles, the proposal aims for an instrument with even more dazzling capabilities than the one that was lost. “There’s been a remarkable amount of commitment and energy,” Rankin says.

    Originally, Arecibo had little to do with astronomy. The Pentagon’s Advanced Research Projects Agency funded its construction in the early 1960s as part of an effort to detect and intercept incoming Soviet missiles. Researchers thought radars might be able to spot missile trails left in the ionosphere, the upper part of the atmosphere where the Sun’s radiation ionizes air molecules. But little was known about the ionosphere at the time. Arecibo’s large dish, built in a natural sinkhole in Puerto Rico’s karst landscape, was meant to serve as a giant radar for probing it.

    Upgrades after NSF took over the facility in 1969 made it alluring for more kinds of science. The original wire-mesh surface was replaced with aluminum panels that enabled observations at higher frequencies. NASA added a more powerful radar transmitter that could track Earth-threatening asteroids—and also used it to beam a message to possible civilizations among the stars. In the subsequent decades, a string of high-profile discoveries burnished the telescope’s reputation: a binary pulsar system whose subtly slowing pulses provided the first indirect evidence of gravitational waves, radar maps of Venus’s cloud-veiled surface that revealed evidence for volcanic repaving, and the very first planet outside our Solar System (albeit one orbiting a pulsar).

    One of the telescope’s quirks is that the curve of its dish is spherical rather than parabolic like most other radio telescopes. That shape enables the telescope to track objects that aren’t directly overhead, even though the dish can’t tilt. But it also focuses incoming rays to a line rather than a point, requiring elongated receivers. A 1997 upgrade added the igloo-shaped “Gregorian dome,” which housed additional reflectors to focus the radio waves to a point where detectors and transmitters covering many frequencies could be mounted. “It became a completely different telescope and enabled it to stay on the cutting edge,” says Robert Kerr, who was observatory director for two spells in the past 15 years.

    The beefed-up scope won a starring role in the NANOGrav project, which in 2007 began to monitor pulsar beats for fluctuations caused by passing gravitational waves. Arecibo also aided the hunt for fast radio bursts, short and powerful blasts that have been one of radio astronomy’s biggest mysteries of the past decade. In 2016, the telescope detected the first burst that repeated, showing that whatever produces the blasts is not destroyed in the process. (Highly magnetized neutron stars are the leading candidate.) “There was a new discovery every year,” remembers astronomer Joan Schmelz, who was deputy director from 2015 to 2018.

    Although the 1997 upgrade kept Arecibo in the vanguard, it may also have contributed to its demise. The telescope’s cables were designed with a safety factor of just over two, so everyday loads on the cables would be less than half of the load that would break them. That surprises Robert Lark, a civil engineer at Cardiff University, who says that bridge cables typically have safety factors of six or more.

    The 110-ton Gregorian dome, added to Arecibo in 1997, boosted capabilities, but the added weight may have hastened the platform’s collapse. Credit: DAVID PARKER/SCIENCE SOURCE.

    The Gregorian dome and other new equipment added 300 tons to the platform. Although six auxiliary cables were added to bring the safety factor back to two, Kerr says it never quite got there. It was one of these auxiliary cables that failed in August. “One of the difficulties of adding or replacing cables is the accurate distribution of load,” Lark says. “The new cable could have been bearing more than it should.”

    The end of the cable pulled free from its socket at the top of one of the platform’s three support towers, says engineer Ramón Lugo, principal investigator for Arecibo at the University of Central Florida (UCF), which leads the consortium that now manages the observatory for NSF. Engineers make sockets by inserting the cable end into a cone-shaped steel cavity, splaying the cable’s wires, and filling the cavity with molten metal such as zinc. The zinc adheres to the wires and forms a plug that locks them in place.

    Engineers from Cornell University, which managed Arecibo from its construction until 2011, got an unexpected glimpse into one of Arecibo’s sockets in the early 1980s, after an old cable was replaced and shipped to Cornell for inspection. Engineer Leigh Phoenix, who was on the team that carried out the postmortem, says the socket appeared to be faulty. The zinc was distributed unevenly and was poorly adhered to the splayed wires. “It provided an avenue for water to get in,” Phoenix says. The team also found broken and cracked wires in the socket. “It would be alarming if it had been allowed to continue,” he says.

    After the August failure of the auxiliary cable, UCF brought in three engineering firms to assess the situation. Their suspicion was that similar manufacturing faults in this cable’s socket were to blame, Lugo says. They did not think the entire structure was at risk—even though staff were hearing individual wires break at a rate of about one per day across all of the telescope’s cables. The wires were known to corrode in the tropical environment, but with 160 of them bundled into each main cable, the breakage didn’t cause immediate alarm.

    The lead engineering firm, Thornton Tomasetti, built a full structural model of the telescope. It showed that the four main cables running to the platform from the crippled tower, known as Tower 4, were now bearing a load equal to about 60% of their breaking strength: a safety factor of 1.67. After inspecting the structure, all three firms concluded it was stable and that the loss of another cable wouldn’t cause a collapse.

    Thornton Tomasetti recommended replacing all the auxiliary cables because the socket failure made all of them suspect—and because inspections showed some other cables had slipped as much as 1 centimeter from their sockets. Lugo says Arecibo staff wrote up a 500-page proposal for the repairs in 2 weeks. NSF approved the $10.5 million request, and orders were placed for new cables. Then, on 6 November, the second cable broke: a main cable, with just six visible broken wires. And this time, it did not separate from its socket: It snapped.

    The mission to save the telescope was now urgent. The engineers had to reduce the load on the three main cables still attached to Tower 4, now shouldering more than 75% of their breaking load, but they couldn’t risk putting people on the towers or platform. They looked at using helicopters to install extra cables or sever platform components to reduce its weight. They even considered sacrificing the entire 110 tons of the Gregorian dome, but the violent recoil of the platform after the dome was cut loose would have been “a bad thing,” Lugo says. There was no good option.

    One firm—Wiss, Janney, Elstner Associates—favored stabilizing the telescope by relaxing the backstays that stretch from the towers to the ground, installing extra support cables, and removing mass from the platform before starting restoration work. But Thornton Tomasetti and the third firm, WSP, concluded that, after two cables had broken well below their design strength, none of them could be trusted. “Although it saddens us to make this recommendation, we believe the structure should be demolished in a controlled way as soon as pragmatically possible,” principal engineer John Abruzzo of Thornton Tomasetti said in his report. So, at a 19 November press briefing, NSF called time on the telescope. “We understand how much Arecibo means to [the scientific] community and to Puerto Rico,” said Sean Jones, head of the Directorate for Mathematical and Physical Sciences. “There is no path forward that allows us to do so safely.”

    On 1 December, less than 2 weeks later, Lugo, who had temporarily relocated to Puerto Rico, stopped to buy breakfast before driving up to the observatory. Just after 8 a.m., he got a call telling him the platform had collapsed. “I felt like throwing up,” he says. One hour later he was on-site talking to staff who had heard and felt the crash. “There were a lot of glazed over expressions, they were all crying,” he says. Cameras on a drone had caught the remaining Tower 4 cables snapping within seconds of each other while a fixed camera watched the platform fall. Arecibo’s giant telescope was no more.

    So why did cables that had held up the platform for decades suddenly fail so spectacularly? Decades earlier, staff noted cable wires snapping and suspected that corrosion from water was to blame. In 1976, managers tackled the problem by painting the cables to seal them off from the elements and installing fans to blow dry air through the length of the cables. Phoenix says that reduced the rate of wire breaks, but it’s unclear how long those practices were maintained. Kerr says the fans weren’t in use when he took over in 2007, nor was he aware of when the cables were last painted. “Someone may have dropped the ball,” he says.

    Lugo insists procedures were continued since UCF took over in 2018. “We were doing what was being done prior,” he says. “It was not poorly maintained,” Rankin agrees. “The Puerto Rico staff are incredible: They did every possible thing.”

    Natural disasters hastened the end, Lugo says. Hurricane Maria battered Puerto Rico in 2017. Phoenix says it was “an opportunity for trouble,” because the storm’s winds could have picked up seawater, whose salt makes it especially corrosive, and dumped some on the telescope. The observatory was also shaken by a series of earthquakes in December 2019 and January 2020.

    Others say the NSF astronomy division’s efforts to hand off the telescope didn’t help. In 2006, the division convened an independent panel of astronomers for one of its “senior reviews” of existing facilities. To pay for planned new telescopes, such as the Atacama Large Millimeter/submillimeter Array in Chile and the Daniel K. Inouye Solar Telescope in Hawaii, economies were needed. Among other measures, the panel recommended closing Arecibo by 2011 unless partners were found to share operating costs. The astronomy division began to ramp down its roughly $10 million annual spending on Arecibo. NSF’s atmospheric and geospace division increased its funding from $2 million to $4 million and NASA chipped in a few million dollars for tracking near-Earth asteroids. But Arecibo wasn’t out of the woods.

    Following an open competition, management of the observatory was transferred in 2011 from Cornell to a collaboration led by SRI International, a nonprofit research institute. NSF’s astronomy division still wanted more savings, however. In 2018, UCF stepped up to take over management, with support from Puerto Rico’s Metropolitan University and the company Yang Enterprises, on the understanding that the astronomy division would gradually reduce its contribution to $2 million annually.

    Two management changes in 7 years and the slow dwindling of funds took a toll, supporters say. “People would leave or retire when there are no raises. The best people would go elsewhere,” says planetary scientist Michael Nolan of the University of Arizona, who was Arecibo director from 2008 to 2011. And when old hands move on, something goes with them, Phoenix says. “Knowledge gets lost without that continuity.” In response to questions from Science, an NSF spokesperson says, “Funding from NSF covered scheduled maintenance for the facility and should not have negatively affected the observatory’s ability to maintain the 305-meter telescope.”

    Although Kerr is convinced neglect was a factor, he believes the collapse had no single cause. “We drove that telescope hard. It’s an old piece of steel in the tropics, too heavy, it failed.” But he does think the 1997 upgrade, although scientifically valuable, was a mistake. “If it had not been upgraded, it would still be standing.”

    After the shock of last month’s collapse wore off, observatory managers gave a group of staff and outside researchers 3 weeks to come up with a plan to replace the telescope. “We need something concrete to put in front of people,” Lugo says. “We want to develop a system that will be relevant for another 50 years.” The planners are aiming for a replacement that would surpass the capabilities of the original, be more flexible, and satisfy the needs of planetary and atmospheric scientists as well as astronomers. And they are trying to do that for less than $400 million—roughly the cost of making a Hollywood blockbuster.

    The researchers first considered a new fixed dish, along with an array of independently steerable smaller ones. But in the white paper delivered to NSF last month, they went with something more ambitious: a flat, rigid, 300-meter-wide platform bridging the sinkhole, studded with more than 1000 closely packed 9-meter dishes. The dishes would not steer, but the platform would, with hydraulics tilting it more than 45° from the horizontal. At such an extreme tilt, one edge of the platform would be higher than Arecibo’s existing support towers. Steering “will be a great mechanical challenge,” says Anish Roshi, head of astrophysics at the observatory.

    In this design, modern receivers built into each dish could cover a broader frequency range than its predecessor and, fired synchronously, the collective radar of 1000 dishes could send out a more powerful beam than a single transmitter. Dubbed the Next Generation Arecibo Telescope, it would be nearly twice as sensitive and have four times the radar power of the original. The steerable platform would enable it to see more than twice as much of the sky as its predecessor, while the field of view of its 1000 dishes would cover a swath 500 times larger.
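
    A rough collecting-area comparison makes the sensitivity claim plausible (back-of-the-envelope arithmetic; the roughly 220-meter illuminated aperture of the old dish is an outside figure, not stated in the article):

    $$
    A_{\mathrm{new}} \approx 1000 \times \pi \, (4.5\ \mathrm{m})^{2} \approx 6.4 \times 10^{4}\ \mathrm{m}^{2}, \qquad
    A_{\mathrm{old}} \approx \pi \, (110\ \mathrm{m})^{2} \approx 3.8 \times 10^{4}\ \mathrm{m}^{2},
    $$

    a ratio of about 1.7 even before counting the broader bandwidth of modern receivers.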

    The extreme tilt was designed to bring an important target within view: the supermassive black hole that sits in the galactic center. The 2020 Nobel Prize in Physics was awarded in part to astronomers who peered through a haze of dust and gas at the heart of the Galaxy to painstakingly track a star following a tortuous orbit in the grip of the black hole. If radio astronomers could discover a pulsar in a similar orbit, its steady clock would allow them to study the behemoth’s gravitational field in fine detail. “It would be a better probe than anything that exists now,” Roshi says.

    But some think the plan is a pipe dream. When choosing major projects, NSF and funders in Congress traditionally follow the recommendations of the decadal survey in astrophysics, a priority-setting exercise that at the turn of each decade asks the field what it wants to do next. The current one is already complete and will report in the coming months. “If you skip to the front of the line, those other projects would be furious,” Behnke says.

    In theory, Congress could choose to set aside extra funds for a pet project, as happened after the 90-meter telescope at Green Bank Observatory collapsed in 1988. West Virginia’s influential senator pushed through funding for a replacement, resulting in the Robert C. Byrd Green Bank Telescope, inaugurated in 2000 and the world’s largest steerable dish. But Puerto Rico, with only a nonvoting representative in Congress, has little clout, even though it could use a leg up after being battered by earthquakes and hurricanes. “In terms of economy, [Puerto Rico] needs it,” Méndez says.

    Lugo says advocates for a new telescope are talking to private foundations. And late last month Puerto Rico Governor Wanda Vázquez Garced allocated $8 million to clean up the site and design a replacement. Lugo says the money will go to a feasibility study of the new design. “We have to be optimistic that we will make this happen.”

    But for researchers who relied on data gathered by Arecibo’s big eye, it won’t happen soon enough, leaving them to cast around for other, less capable instruments to continue their work. “I had so many projects in mind,” Vaddi says. “Along with the cable, this broke all my projects.”

    With additional reporting by Rodrigo Pérez Ortega.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 4:46 pm on January 7, 2021 Permalink | Reply
    Tags: "After decades of effort scientists are finally seeing black holes—or are they?, , , , , , , Richard Genzel-MPE, Science Magazine   

    From Science Magazine: “After decades of effort scientists are finally seeing black holes—or are they?”

    From Science Magazine

    Jan. 7, 2021
    Adrian Cho

    1
    General relativity makes very specific predictions about what black holes are and how they should appear, as shown in this simulation. Credit: Goddard Space Flight Center/Jeremy Schnittman.

    While working on his doctorate in theoretical physics in the early 1970s, Saul Teukolsky solved a problem that seemed purely hypothetical. Imagine a black hole, the ghostly knot of gravity that forms when, say, a massive star burns out and collapses to an infinitesimal point. Suppose you perturb it, as you might strike a bell. How does the black hole respond?

    Teukolsky, then a graduate student at the California Institute of Technology (Caltech), attacked the problem with pencil, paper, and Albert Einstein’s theory of gravity, general relativity. Like a bell, the black hole would oscillate at one main frequency and multiple overtones, he found. The oscillations would quickly fade as the black hole radiated gravitational waves—ripples in the fabric of space itself. It was a sweet problem, says Teukolsky, now at Cornell University. And it was completely abstract—until 5 years ago.

    In February 2016, experimenters with the Laser Interferometer Gravitational-Wave Observatory (LIGO), a pair of huge instruments in Louisiana and Washington, reported the first observation of fleeting gravitational ripples, which had emanated from two black holes, each about 30 times as massive as the Sun, spiraling into each other 1.3 billion light-years away.


    Caltech/MIT Advanced aLIGO detector installation, Hanford, WA, USA


    Caltech/MIT Advanced aLIGO detector installation, Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA, the future of gravitational wave research

    LIGO even sensed the “ring down”: the shudder of the bigger black hole produced by the merger. Teukolsky’s old thesis was suddenly cutting-edge physics.

    “The thought that anything I did would ever have implications for anything measurable in my lifetime was so far-fetched that the last 5 years have seemed like living in a dream world,” Teukolsky says. “I have to pinch myself, it doesn’t feel real.”

    Fantastical though it may seem, scientists can now study black holes as real objects. Gravitational wave detectors have spotted four dozen black hole mergers since LIGO’s breakthrough detection. In April 2019, an international collaboration called the Event Horizon Telescope (EHT) produced the first image of a black hole. By training radio telescopes around the globe on the supermassive black hole in the heart of the nearby galaxy Messier 87 (M87), EHT imaged a fiery ring of hot gas surrounding the black hole’s inky “shadow.”

    Messier 87*, the first image of a black hole. This is the supermassive black hole at the center of the galaxy Messier 87. Image via JPL/Event Horizon Telescope Collaboration, released on 10 April 2019.

    EHT map.

    Meanwhile, astronomers are tracking stars that zip close to the black hole in the center of our own Galaxy, following paths that may hold clues to the nature of the black hole itself.

    The observations are already challenging astrophysicists’ assumptions about how black holes form and influence their surroundings. The smaller black holes detected by LIGO and, now, the European gravitational wave detector Virgo in Italy have proved heavier and more varied than expected, straining astrophysicists’ understanding of the massive stars from which they presumably form. And the environment around the supermassive black hole in our Galaxy appears surprisingly fertile, teeming with young stars not expected to form in such a maelstrom. But some scientists feel the pull of a more fundamental question: Are they really seeing the black holes predicted by Einstein’s theory?

    Some theorists say the answer is most likely a ho-hum yes. “I don’t think we’re going to learn anything more about general relativity or the theory of black holes from any of this,” says Robert Wald, a gravitational theorist at the University of Chicago. Others aren’t so sure. “Are black holes strictly the same as you would expect with general relativity or are they different?” asks Clifford Will, a gravitational theorist at the University of Florida. “That’s going to be a major thrust of future observations.” Any anomalies would require a rethink of Einstein’s theory, which physicists suspect is not the final word on gravity, as it doesn’t jibe with the other cornerstone of modern physics, quantum mechanics.

    Using multiple techniques, researchers are already gaining different, complementary views of these strange objects, says Andrea Ghez, an astrophysicist at the University of California, Los Angeles, who shared the 2020 Nobel Prize in Physics for inferring the existence of the supermassive black hole in the heart of our Galaxy. “We’re still a long way from putting a complete picture together,” she says, “but we’re certainly getting more of the puzzle pieces in place.”

    Andrea Ghez has centered her work at the W. M. Keck Observatory.

    W. M. Keck Observatory, two 10-meter telescopes operated by Caltech and the University of California, Maunakea, Hawaii, USA, altitude 4,207 m (13,802 ft). Credit: Caltech.

    Consisting of pure gravitational energy, a black hole is a ball of contradictions. It contains no matter, but, like a bowling ball, possesses mass and can spin. It has no surface, but has a size. It behaves like an imposing, weighty object, but is really just a peculiar region of space.

    Or so says general relativity, which Einstein published in 1915. Two centuries earlier, Isaac Newton had posited that gravity is a force that somehow reaches through space to attract massive objects to one another. Einstein went deeper and argued that gravity arises because massive things such as stars and planets warp space and time—more accurately, spacetime—causing the trajectories of freely falling objects to curve into, say, the parabolic arc of a thrown ball.

    Early predictions of general relativity differed only slightly from those of Newton’s theory. Whereas Newton predicted that a planet should orbit its star in an ellipse, general relativity predicts that the orientation of the ellipse should advance slightly, or precess, with each orbit. In the first triumph of the theory, Einstein showed it accounted for the previously unexplained precession of the orbit of the planet Mercury. Only years later did physicists realize the theory also implied something far more radical.
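
    The size of that first triumph is easy to make concrete. General relativity’s leading-order perihelion advance per orbit has a simple closed form; a standard textbook calculation, sketched here for orientation, recovers the famous number:

    ```latex
    \Delta\phi \;=\; \frac{6\pi G M_\odot}{c^{2}\,a\,(1 - e^{2})}
    \;\approx\; 5.0\times10^{-7}\ \text{rad per orbit}
    ```

    With Mercury’s semi-major axis a = 5.79 × 10^10 m, eccentricity e = 0.206, and roughly 415 orbits per century, this accumulates to about 43 arcseconds per century, precisely the unexplained residual in Mercury’s motion.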

    In 1939, theorist J. Robert Oppenheimer and colleagues calculated that when a sufficiently massive star burned out, no known force could stop its core from collapsing to an infinitesimal point, leaving behind its gravitational field as a permanent pit in spacetime. Within a certain distance of the point, gravity would be so strong that not even light could escape. Anything closer would be cut off from the rest of the universe, David Finkelstein, a theorist at Caltech, argued in 1958. This “event horizon” isn’t a physical surface. An astronaut falling past it would notice nothing special. Nevertheless, reasoned Finkelstein, who died just days before LIGO’s announcement in 2016, the horizon would act like a one-way membrane, letting things fall in, but preventing anything from getting out.

    According to general relativity, these objects—eventually named black holes by famed theorist John Archibald Wheeler—should also exhibit a shocking sameness. In 1963, Roy Kerr, a mathematician from New Zealand, worked out how a spinning black hole of a given mass would warp and twist spacetime. Others soon proved that, in general relativity, mass and spin are the only characteristics a black hole can have, implying that Kerr’s mathematical formula, known as the Kerr metric, describes every black hole there is. Wheeler dubbed the result the no-hair theorem to emphasize that two black holes of the same mass and spin are as indistinguishable as bald pates. Wheeler himself was bald, Teukolsky notes, “so maybe it was bald pride.”
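
    For readers who want to see that “baldness” written down: in geometrized units (G = c = 1) and Boyer–Lindquist coordinates, the Kerr metric takes the standard form below, reproduced here for reference.

    ```latex
    ds^{2} = -\Bigl(1 - \tfrac{2Mr}{\Sigma}\Bigr)\,dt^{2}
             - \frac{4Mar\sin^{2}\theta}{\Sigma}\,dt\,d\phi
             + \frac{\Sigma}{\Delta}\,dr^{2}
             + \Sigma\,d\theta^{2}
             + \Bigl(r^{2} + a^{2} + \tfrac{2Ma^{2}r\sin^{2}\theta}{\Sigma}\Bigr)\sin^{2}\theta\,d\phi^{2},

    \quad \Sigma \equiv r^{2} + a^{2}\cos^{2}\theta,
    \quad \Delta \equiv r^{2} - 2Mr + a^{2},
    \quad a \equiv J/M
    ```

    Only two parameters appear anywhere in the expression: the mass M and the spin a. The event horizon sits where Δ = 0, at r₊ = M + √(M² − a²); that economy of description is the entire content of the no-hair theorem.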

    Some physicists suspected black holes might not exist outside theorists’ imaginations, says Sean Carroll, a theorist at Caltech. Skeptics argued that black holes might be an artifact of general relativity’s subtle math, or that they might only form under unrealistic conditions, such as the collapse of a perfectly spherical star. However, in the late 1960s, Roger Penrose, a theorist at the University of Oxford, dispelled such doubts with rigorous math, for which he shared the 2020 Nobel Prize in Physics. “Penrose exactly proved that, no, no, even if you have a lumpy thing, as long as the density became high enough, it was going to collapse to a black hole,” Carroll says.

    Soon enough, astronomers began to see signs of actual black holes. They spotted tiny x-ray sources, such as Cygnus X-1, each in orbit around a star. Astrophysicists deduced that the x-rays came from gas flowing from the star and heating up as it fell onto the mysterious object. The temperature of the gas and the details of the orbit implied the x-ray source was too massive and too small to be anything but a black hole. Similar reasoning suggested quasars, distant galaxies spewing radiation, are powered by supermassive black holes in their centers.

    But no one could be sure those black holes actually are what theorists had pictured, notes Feryal Özel, an astrophysicist at the University of Arizona (UA). For example, “Very little that we have done so far establishes the presence of an event horizon,” she says. “That is an open question.”

    Now, with multiple ways to peer at black holes, scientists can start to test their understanding and look for surprises that could revolutionize physics. “Even though it’s very unlikely, it would be so amazingly important if we found that there was any deviation” from the predictions of general relativity, Carroll says. “It’s a very high-risk, high-reward question.”

    Scientists hope to answer three specific questions: Do the observed black holes really have event horizons? Are they as featureless as the no-hair theorem says? And do they distort spacetime exactly as the Kerr metric predicts?

    Perhaps the simplest tool for answering them is one that Ghez developed. Since 1995, she and colleagues have used the 10-meter Keck telescope in Hawaii to track stars around a radio source known as Sagittarius A* (Sgr A*) in the center of our Galaxy. In 1998, the stars’ high speeds revealed they orbit an object 4 million times as massive as the Sun. Because Sgr A* packs so much mass into such a small volume, general relativity predicts it must be a supermassive black hole. Reinhard Genzel, an astrophysicist at the Max Planck Institute for Extraterrestrial Physics, independently tracked the stars to reach the same conclusion and shared the Nobel Prize with Ghez.
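
    The mass estimate itself rests on little more than Kepler’s third law, since Newtonian gravity remains an excellent approximation at the tracked stars’ distances. Using round numbers for the star S0-2 discussed below (orbital period near 16 years, semi-major axis roughly 1,000 astronomical units; both are approximations to the published values):

    ```latex
    \frac{M}{M_\odot} \;\approx\; \frac{(a/\text{AU})^{3}}{(T/\text{yr})^{2}}
    \;\approx\; \frac{1000^{3}}{16^{2}} \;\approx\; 4\times10^{6}
    ```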

    Reinhard Genzel studied black holes at the VLT of the European Southern Observatory.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft). Its four Unit Telescopes are ANTU (UT1; the Sun), KUEYEN (UT2; the Moon), MELIPAL (UT3; the Southern Cross), and YEPUN (UT4; Venus, as evening star). Credit: J.L. Dauvergne & G. Hüdepohl/atacama photo.

    Much of the information comes from a single star, dubbed S0-2 by Ghez, which whips around Sgr A* once every 16 years.

    Star S0-2, tracked by Andrea Ghez and the Keck/UCLA Galactic Center Group as it orbits Sgr A*, the supermassive black hole at the center of the Milky Way.

    Just as the orbit of Mercury around the Sun precesses, so, too, should the orbit of S0-2. Ghez and colleagues are now trying to tease out that precession from the extremely complicated data. “We’re right on the cusp,” she says. “We have a signal, but we’re still trying to convince ourselves that it’s real.” (In April 2020, Genzel and colleagues claimed to have seen the precession.)
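
    The effect being teased out is the same relativistic precession computed for Mercury above, scaled to S0-2’s orbit. With approximate published elements (a ≈ 970 AU, e ≈ 0.88) and a 4-million-solar-mass black hole, the formula gives:

    ```latex
    \Delta\phi \;=\; \frac{6\pi G M}{c^{2}\,a\,(1 - e^{2})}
    \;\approx\; 3.4\times10^{-3}\ \text{rad}
    \;\approx\; 12\ \text{arcminutes per 16-year orbit}
    ```

    That is about the size of the signal Genzel’s team reported.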

    If they get a little lucky, Ghez and company hope to look for other anomalies that would probe the nature of the supermassive black hole. Close to the black hole, its spin should modify the precession of a star’s orbit in a way that’s predictable from Kerr’s mathematical description. “If there were stars even closer than the ones they’ve seen—maybe 10 times closer—then you could test whether the Kerr metric is exactly correct,” Will says.

    The star tracking will likely never probe very close to the event horizon of Sgr A*, which could fit within the orbit of Mercury. But EHT, which combines data from 11 radio telescopes or arrays around the world to form, essentially, one big telescope, has offered a closer look at a different supermassive black hole, the 6.5-billion-solar-mass beast in M87.

    The famous image the team released 2 years ago, which resembles a fiery circus hoop, is more complicated than it looks. The bright ring emanates from hot gas, but the dark center is not the black hole itself. Rather it is a “shadow” cast by the black hole as its gravity distorts or “lenses” the light from the gas in front of it. The edge of the shadow marks not the event horizon, but rather a distance about 50% farther out where spacetime is distorted just enough so that passing light circles the black hole, neither escaping nor falling into the maw.
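
    The geometry behind that “50% farther out” statement is compact enough to state exactly, at least for the idealized nonspinning case:

    ```latex
    r_{\mathrm{s}} = \frac{2GM}{c^{2}}, \qquad
    r_{\mathrm{ph}} = \frac{3GM}{c^{2}} = 1.5\,r_{\mathrm{s}}, \qquad
    b_{\mathrm{shadow}} = \sqrt{27}\,\frac{GM}{c^{2}} \approx 2.6\,r_{\mathrm{s}}
    ```

    Here r_ph is the radius of the circular photon orbit and b_shadow the apparent shadow radius after lensing. For M87’s 6.5-billion-solar-mass black hole at a distance of about 16.8 megaparsecs, the predicted shadow diameter works out to roughly 40 microarcseconds, consistent with the ring of about 42 microarcseconds that EHT measured.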

    Even so, the image holds clues about the object at its center. The spectrum of the glowing ring could reveal, for example, whether the object has a physical surface rather than an event horizon. Matter crashing onto a surface would shine even brighter than stuff sliding into a black hole, Özel explains. (So far researchers have seen no spectral distortion.) The shadow’s shape can also test the classical picture of a black hole. A spinning black hole’s event horizon should bulge at the equator. However, other effects in general relativity should counteract that effect on the shadow. “Because of a very funky cancellation of squishing in different directions, the shadow still looks circular,” Özel says. “That’s why the shape of the shadow becomes a direct test of the no-hair theorem.”

    Some researchers doubt EHT can image the black hole with enough precision for such tests. Samuel Gralla, a theorist at UA, questions whether EHT is even seeing a black hole shadow or merely viewing the disk of gas swirling around the black hole from the top down, in which case the dark spot is simply the eye of that astrophysical hurricane. But Özel says that even with limited resolution, EHT can contribute significantly to testing general relativity in the conceptual terra incognita around a black hole.

    Gravitational waves, in contrast, convey information straight from the black holes themselves. Churned out when black holes spiral together at half the speed of light, these ripples in spacetime pass unimpeded through ordinary matter. LIGO and Virgo have now detected mergers of black holes with masses ranging from three to 86 solar masses.

    The mergers can probe the black holes in several ways, says Frank Ohme, a gravitational theorist and LIGO member at the Max Planck Institute for Gravitational Physics. Assuming the objects are classical black holes, researchers can calculate from general relativity how the chirplike gravitational wave signal from a merger should speed up, climax in a spike, and then ring down. If the massive partners are actually larger material objects, then as they draw close they should distort each other, altering the peak of the signal. So far, researchers see no alterations, Ohme says.

    The merger produces a perturbed black hole just like the one in Teukolsky’s old thesis, offering another test of general relativity. The final black hole undulates briefly but powerfully, at one main frequency and multiple shorter lived overtones. According to the no-hair theorem, those frequencies and lifetimes only depend on the final black hole’s mass and spin. “If you analyze each mode individually, they all have to point to the same black hole mass and spin or something’s wrong,” Ohme says.
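
    That dependence can be made explicit. A widely used fit from the numerical-relativity literature (the coefficients below follow the fits of Echeverria and of Berti and colleagues, and should be treated as approximate) gives the frequency and quality factor of the dominant l = m = 2 fundamental mode in terms of only the remnant’s mass M and dimensionless spin j:

    ```latex
    f_{220} \;\approx\; \frac{c^{3}}{2\pi G M}\Bigl[\,1.5251 - 1.1568\,(1 - j)^{0.1292}\Bigr],
    \qquad
    Q_{220} \;\approx\; 0.700 + 1.4187\,(1 - j)^{-0.499}
    ```

    For a GW150914-like remnant (detector-frame mass near 68 solar masses, spin around 0.67), f_220 comes out near 250 hertz, squarely in LIGO’s sensitive band.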

    In September 2019, Teukolsky and colleagues teased out the main vibration and a single overtone from a particularly loud merger. If experimenters can improve the sensitivity of their detectors, Ohme says, they might be able to spot two or three overtones—enough to start to test the no-hair theorem.

    Future instruments may make such tests much easier. The 30-meter-class optical telescopes being built in Chile and Hawaii should scrutinize the neighborhood of Sgr A* with a resolution roughly 80 times better than current instruments, Ghez says, possibly spying closer stars. Similarly, EHT researchers are adding more radio dishes to their network, which should enable them to image the black hole in M87 more precisely. They’re also trying to image Sgr A*.

    Meanwhile, gravitational wave researchers are already planning the next generation of more sensitive detectors, including the Laser Interferometer Space Antenna (LISA), made up of three satellites flying in formation millions of kilometers apart. To be launched in the 2030s, LISA would be so sensitive that it could spot an ordinary stellar-mass black hole spiraling into a much bigger supermassive black hole in a distant galaxy, says Nicolas Yunes, a theoretical physicist at the University of Illinois, Urbana-Champaign.

    The smaller black hole would serve as a precise probe of the spacetime around the bigger black hole, revealing whether it warps and twists exactly as the Kerr metric dictates. An affirmative result would cement the case that black holes are what general relativity predicts, Yunes says. “But you have to wait for LISA.”

    In the meantime, the sudden observability of black holes has changed the lives of gravitational physicists. Once the domain of thought experiments and elegant but abstract calculations like Teukolsky’s, general relativity and black holes are suddenly the hottest things in fundamental physics, with experts in general relativity feeding vital input to billion-dollar experiments. “I felt this transition very literally myself,” Ohme says. “It was really a small niche community, and with the detection of gravitational waves that all changed.”



    Please help promote STEM in your local schools.

    Stem Education Coalition

    See the full article here.

     
  • richardmitnick 10:25 am on January 4, 2021 Permalink | Reply
    Tags: "A new mandate highlights costs and benefits of making all scientific articles free to read", , Are publishing fees affordable for authors?, Are publishing fees affordable for universities?, , How does open access benefit authors?, How does open access work for authors?, In 2018 a group of funders sent shock waves through the world of scientific publishing- proposing an unprecedented rule: The funded scientists would be required to make journal articles available., Is open access the future of scientific publishing?, Science Magazine, Who has qualms about open access?   

    From Science Magazine: “A new mandate highlights costs, benefits of making all scientific articles free to read” 

    From Science Magazine

    Jan. 1, 2021
    Jeffrey Brainard

    1
    Credit: DAVIDE BONAZZI/SALZMAN ART.

    In 2018, a group of mostly European funders sent shock waves through the world of scientific publishing by proposing an unprecedented rule: The scientists they funded would be required to make journal articles developed with their support immediately free to read when published.

    The new requirement, which takes effect starting this month, seeks to upend decades of tradition in scientific publishing, whereby scientists publish their research in journals for free and publishers make money by charging universities and other institutions for subscriptions. Advocates of the new scheme, called Plan S (the “S” stands for the intended “shock” to the status quo), hope to destroy subscription paywalls and speed scientific progress by allowing findings to be shared more freely. It’s part of a larger shift in scientific communication that began more than 20 years ago and has recently picked up steam.

    Scientists have several ways to comply with Plan S, including by paying publishers a fee to make an article freely available on a journal website, or depositing the article in a free public repository where anyone can download it. The mandate is the first by an international coalition of funders, which now includes 17 agencies and six foundations, including the Wellcome Trust and Howard Hughes Medical Institute, two of the world’s largest funders of biomedical research.

    The group, which calls itself Coalition S, has fallen short of its initial aspiration to catalyze a truly international movement, however. Officials in three top producers of scientific papers—China, India, and the United States—have expressed general support for open access, but have not signed on to Plan S. Its mandate for immediate open access will apply to authors who produced only about 6% of the world’s papers in 2017, according to an estimate by the Clarivate analytics firm, publisher of the Web of Science database.

    Still, there’s reason to think Coalition S will make an outsize impact, says Johan Rooryck, Coalition S’s executive director and a linguist at Leiden University. In 2017, 35% of papers published in Nature and 31% of those in Science cited at least one coalition member as a funding source. “The people who get [Coalition S] funding are very prominent scientists who put out very visible papers,” Rooryck says. “We punch above our weight.” In a dramatic sign of that influence, the Nature and Cell Press families of journals—stables of high-profile publications—announced in recent weeks that they would allow authors to publish papers outside their paywall, for hefty fees.

    Other recent developments point to growing support for open access. In 2017, for the first time, the majority of new papers across all scholarly disciplines, most of them in the sciences, were published open access, according to the Curtin Open Knowledge Initiative. More recently, most major publishers removed paywalls from articles about COVID-19 last year in an attempt to speed development of vaccines and treatments.

    Despite these and other signs of momentum, some publishing specialists say Plan S and other open-access measures could be financially stressful and ultimately unsustainable for publishers and the research institutions and authors who foot the bill. As debate continues about just how far and fast the movement will go, Science offers this guide for authors readying to plunge in.

    How does open access benefit authors?

    Authors who make their work open access may reap benefits, but their magnitude depends partly on what you measure.

    One yardstick is a paper’s impact. Some studies have reported up to triple the number of citations for open-access articles on average compared with paywalled ones. But authors may be likely to publish their best work open access, which might bring it more citations. A recent analysis that used statistical methods to control for this tendency found a far more modest citation advantage for open access—8%—and only for a minority of “superstar” papers.

    Mark McCabe of SKEMA Business School and Christopher Snyder of Dartmouth College studied how citations to articles changed when their journal volumes moved from behind paywalls to entirely open access, and compared them with citations for articles that remained paywalled. For each article in their sample of more than 200,000 papers in ecology and other fields, the researchers accounted for other characteristics that affect citations, such as a paper’s age: Newly published papers usually receive a burst of citations at first but fewer later. The modest citation advantage from open access accrued only to high-quality papers, defined as having already garnered 11 or more citations during a 2-year period before the paper became open access, McCabe and Snyder reported in November 2020.

    Other studies have found that open-access articles have a larger reach by other measures, including the number of downloads and online views. They also have an edge in Altmetric scores, a composite of an article’s mentions on social media and in news stories and policy documents.

    These nonscholarly mentions buttress reports that open access enables a broader audience, beyond the core scientific community, to read research findings. In November 2020, Springer Nature and partners released findings from a survey of 6000 visitors to its websites. They reported that an “astonishing” 28% were general users, including patients, teachers, and lawyers. Another 15% worked in industry or medical jobs that required them to read but not publish research.

    Even for faculty members who can read subscription-based journals through their institution’s libraries, open access could allow quicker access to articles in journals to which the institution doesn’t subscribe. Some 57% of academics surveyed said they “almost always” or “frequently” had trouble accessing the full content of Springer Nature’s articles.

    How does open access work for authors?

    Open access comes in different varieties, or colors, each with its own costs and benefits.

    In what’s called gold open access, articles carry a license making them freely available on publication. Typically the publisher charges a fee to offset lost subscription revenue and cover the cost of publishing. In recent years, the median fee paid, after discounts, was about $2600, according to a 2020 study by Nina Schönfelder of Bielefeld University. More selective journals, such as The Lancet Global Health, have charged up to $5000. The Nature Research family of journals has set its top open-access fee at €9500 (about $11,600), and Cell Press will charge $9900 for its flagship, Cell. Some journals are entirely gold open access; other, “hybrid” journals offer authors a choice between free publication behind a paywall or open access for a fee.

    A growing number of universities and research institutions, especially in Europe, are striking deals in which they pay a publisher a single fee that covers open-access publishing by their authors and also lets people on their campuses read content that remains behind paywalls. The largest such agreement was reached in 2019 between Springer Nature and 700 German research institutions and libraries. Since the first such deal in 2015, the number grew to 137 in 2020, according to the ESAC Transformative Agreement Registry. However, the deals last year covered publication fees for only 3% of papers produced globally.

    A variant called green open access allows authors to avoid publication fees. In this arrangement, authors publish in journals—even ones that use paywalls instead of charging authors—but also make their article freely available in an online repository. U.S. policy already requires the final, published versions of papers developed with federal funding to be deposited within 12 months in a repository such as the National Institutes of Health’s PubMed Central, and many publishers do this automatically. Other authors can use online tools to find repositories. The Directory of Open Access Repositories lists more than 5500 of them.

    Publishers typically impose a 6- or 12-month embargo before authors can deposit the final, peer-reviewed version of a paywalled article, but this runs afoul of the Plan S requirement for immediate open access. (The embargo policies of thousands of journals globally are listed in a database called Sherpa/Romeo.) As a compromise, many publishers including the Science family of journals allow authors to immediately post a nearly final, peer-reviewed version of a paper in an institutional repository. Plan S accepts this form of green open access, but has added a controversial provision that these accepted manuscripts be licensed for free distribution. Some publishers have complained that this approach threatens their subscription revenues because it could widen free reading of these articles.

    Rooryck says Coalition S canvassed major publishers and found none was planning to routinely reject submitted manuscripts funded by Coalition S members because of the prospect that the authors would immediately post them when accepted. A spokesperson for publishing giant Elsevier told Science that all its journals will offer authors funded by Coalition S members the option to publish open access for a fee, allowing authors to comply with Plan S without violating embargoes.

    See the full article for the infographic “The many colors of open access”.

    Are publishing fees affordable for authors?

    Where a researcher works strongly influences how much money is available for open-access fees. In Europe, institutions used dedicated internal funds to pay fees for 50% of articles their authors published in hybrid journals (those that publish both open-access and subscription content), but in the rest of the world, the figure was only 25%, according to a 2020 survey by Springer Nature. Authors also tap funders and other sources, including their own personal funds. European scholars reported paying out of their own wallets for just 1% of the articles, compared with 16% in other countries.

    In Italy, the Nature group’s new €9500 open-access fee has riled some researchers. That figure is “insane, there’s no way on Earth to justify that,” says Manlio De Domenico, who leads a network science lab at the Bruno Kessler Foundation. The annual research budget for his 10-person lab recently included a total of €8000 for open-access fees for the entire year. “We can spend the money better another way,” he says—to pay Ph.D. students and, in normal times, fund travel to conferences and other labs. “To me, the trade-off is clear.” (The Nature group says the price reflects its costs to produce such highly selective journals; journals don’t normally collect fees for papers they review but don’t publish.)

    Nor do open-access publication fees hew closely to the laws of supply and demand. One would expect fees to increase with the prestige of the journal, but a recent study by Schönfelder suggests that’s not always true. She examined the relationship between fees paid by U.K. funders and the impact factor—a measure based on the average number of citations per article—of the journals where the papers appeared. She found a strong correlation in journals that published only open-access articles but a weaker one in hybrid journals. Hybrid journals tended to cost more than purely open-access journals, too.
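
    At bottom, that analysis is a correlation between fees and impact factors, computed separately for the two journal types. A minimal sketch of such a check follows; the numbers are invented stand-ins chosen only to reproduce the qualitative pattern she describes, not her data.

    ```python
    import numpy as np

    # Hypothetical (fee in $, impact factor) pairs -- invented for illustration.
    fully_oa = np.array([[1500, 2.1], [2600, 4.0], [3500, 7.5], [5000, 12.0]])
    hybrid   = np.array([[3000, 2.5], [3200, 8.0], [3400, 3.0], [3600, 11.0]])

    def fee_impact_correlation(journals: np.ndarray) -> float:
        """Pearson correlation between publication fee and impact factor."""
        fees, impact = journals[:, 0], journals[:, 1]
        return float(np.corrcoef(fees, impact)[0, 1])

    # The qualitative pattern the study describes: fees track journal prestige
    # tightly for fully open-access journals, only loosely for hybrid ones.
    print(f"fully OA: r = {fee_impact_correlation(fully_oa):.2f}")  # ~0.99
    print(f"hybrid:   r = {fee_impact_correlation(hybrid):.2f}")    # ~0.65
    ```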

    In a paper published last year [The MIT Press Journals], Schönfelder suggested her findings reflect the legacy of the subscription prices of large, traditional publishers such as Elsevier and Springer Nature, which publish many hybrid journals. These highly profitable companies with large shares of the publishing market have operated with limited competitive pressure. “If [their] pricing behavior wins through, the open-access transformation will come at a much higher cost than expected today,” Schönfelder wrote.

    A complete shift to open access could lead publishers to boost publishing fees even further, to try to make up for lost subscription revenues, says Claudio Aspesi, a publishing industry consultant based in Switzerland. Although just over 30% of all papers published in 2019 were paid open access, subscriptions still accounted for more than 90% of publishers’ revenues that year, according to Delta Think, a consulting and marketing firm.

    Coalition S seeks to exert downward pressure on prices by increasing transparency. When a grantee’s research is published, Plan S requires publishers to disclose to funders the basis for their prices, including the cost of services such as proofreading, copy editing, and organizing peer review. Rooryck says the coalition will share the information with authors and libraries, many of which help fund publishing fees. He expects the practice will increase price competition or provide “at a minimum, confidence that some of these prices are fair.”

    Who has qualms about open access?

    Despite wide acknowledgment by scientists, publishers, librarians, and policymakers of open access’ potential benefits, many are reluctant to go all in.

    Even in Europe, where the movement for open access has been especially strong, Plan S is unusual. Of 60 funders surveyed in 2019, only 37 had an open-access policy, and only 23 monitored compliance, according to a report prepared for SPARC Europe, a nonprofit that advocates for open access.

    Some authors remain hesitant, too. In multiple surveys, authors have ranked open-access publishing below their need to publish in prestigious, high-impact journals to gain tenure and promotion. And they may be wary of a perception among some scientists that journals that carry only gold open-access articles lack rigor. (That view, researchers say, may reflect that such journals are relatively new, which lowers their impact factor.)

    A recent study also hints at inequities, finding that established, funded researchers at prestigious institutions are more likely to pay to publish their work open access. Anthony Olejniczak and Molly Wilson of the Academic Analytics Research Center, part of a data firm in Columbus, Ohio, examined the demographics and publishing patterns of more than 180,000 U.S. scholars. Overall, 84% of biological scientists and 66% in the physical and mathematical sciences had authored or co-authored at least one gold open-access paper between 2014 and 2018. Those authors were more likely to have advanced faculty rank and federal grants and to work at one of the 65 leading research universities that belong to the Association of American Universities, Olejniczak and Wilson report in an upcoming paper in Quantitative Science Studies.

    Olejniczak and Wilson hypothesize that scientists who choose to pay for open access not only need financial resources, but also the sense of job security that tenure confers. “This is a good news, bad news story,” Olejniczak says. “Open access is thriving, and it’s growing.” But, he adds, publishers collecting the fees should consider ways to accommodate a wider diversity of authors.

    Are publishing fees affordable for universities?

    One tenet of the open-access movement has been that publishing fees can be funded by redirecting money university libraries currently spend on journal subscriptions—but that assumption faces questions. Although the “transformative” agreements that cover both reading and publishing of articles have rapidly increased the percentage of articles published open access at some institutions, the details of these deals (like traditional, subscription-only ones) are often secret and have other features that make it difficult to compare bottom-line costs. Comparing costs across institutions is also challenging because these deals usually involve large packages of journals, with the exact lineup varying by institution.

    Still, it is clear that making most articles gold open access could wallop the library budgets of research-intensive universities whose scientists publish the most papers. Many institutions that publish little research would save money by dropping subscriptions and letting faculty members read articles for free, analysts say, and publishers would look to recoup the lost revenue through publishing fees.

    Pay It Forward, a report published by librarians at the University of California (UC) and colleagues in 2016, remains one of the most comprehensive analyses of the impact of these shifts on universities. They calculated what each of UC’s 10 campuses and three comparison institutions would have paid to publish as gold open access all articles published between 2009 and 2013 that listed one of their faculty members as a corresponding author.

    A key finding: At most of the research-intensive institutions studied—such as the UC campuses in Los Angeles and San Francisco and Harvard University—simply redirecting funds from journal subscriptions wouldn’t cover the open-access fees. Those institutions could charge the difference to federal grants, but they would still have to cover fees on papers from studies done without grant funding. Harvard, for example, might have to boost its total library spending by 71%, or nearly $6 million.

    Rich universities like Harvard could potentially tap their huge endowments and copious research funding to cover these costs, but other universities could struggle. U.S. university library budgets have lagged the rate of inflation in higher education for years and now face cuts because of the coronavirus pandemic.

    Some researchers interviewed for UC’s study said they were reluctant to spend grant money on open-access publishing fees because they would eat into funds for research. “But in practice, we found [faculty members] are independently spending millions of dollars” from grants on fees, says MacKenzie Smith, university librarian at UC Davis and one of the study’s co-authors. UC is conducting an experiment that limits the universities’ contribution to per-article publication fees in order to encourage faculty members to consider other funding sources and journals with lower fees. “We want to get authors more engaged in the cost aspect of publishing, or at least mindful of it,” Smith says.

    Is open access the future of scientific publishing?

    If paying for open-access publication becomes the default route for scientists, and publishers hike prices as expected, many analysts worry publishing will become a luxury that only better funded researchers can afford. That could create a self-reinforcing cycle in which well-funded researchers publish more, potentially attracting more attention—and more funding.

    If that comes to pass, it could be especially hard on early-career researchers and authors in the developing world who lack their own grants, and on those in disciplines that traditionally receive less funding, such as math. Although publishers offer fee waivers for such authors, the waivers do not always cover the entire publishing fee, and many publishers do not disclose what percentage of requests they grant.

    Small, nonprofit societies that currently depend on subscription fees from their journals could also lose out in an open-access world, because the dynamics of the pay-to-publish model tend to favor publishers and journals that produce a high volume of articles, which affords economies of scale.

    “I am worried that in the zeal to go that last mile” to make a larger portion of articles open access, “we could end up really hurting the scientific enterprise,” says Sudip Parikh, CEO of AAAS, which publishes the Science family of journals. One of them, Science Advances, charges an open-access fee of $4500, whereas the rest operate on the traditional subscription-only model. Parikh says AAAS is considering other options to make papers free to read, but he wasn’t ready to discuss them when Science went to press. “I don’t pretend to know the answer yet,” he says. “But it feels like there are other possibilities” besides publishing fees.

    One model for sustaining open access without relying on per-article publishing fees comes from Latin America. Brazil and other countries have funded the creation of free open-access journals and article repositories, and the region in 2019 had the world’s highest percentage of scholarly articles available open access, 61%, according to the Curtin Open Knowledge Initiative.

    Debate continues about how to control publishing costs. Many advocates for open access say making it more affordable will require a vast shift in the culture of science. In particular, tenure and promotion committees will need to lower their expectations that authors publish in prestigious, costly journals.

    But some argue that even if funders and institutions must cough up more money to help authors publish open access, the potential to accelerate scientific discovery would justify the added cost. The journal publishing industry’s annual revenues of about $10 billion represent less than 1% of total global spending on R&D—and, in this view, it’s reasonable to divert more of the total to scholarly communications that are essential to making the entire enterprise run.

    It’s unlikely, though, that all scientific articles will ever become open access, says Rick Anderson, university librarian at Brigham Young University, who has written extensively about business models for journal publishing. “It just seems to me like the barriers to universal open access are too great,” he says. “Every open-access model solves some problems and creates other problems.”

    “What I think is much more likely in the future, almost inevitable, is a fairly diverse landscape of open-access and subscription models,” Anderson adds. “I haven’t yet seen anything that has convinced me that toll [subscription-based] access is going to go away entirely.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:51 am on December 30, 2020 Permalink | Reply
    Tags: "Massive 2021 U.S. spending bill leaves research advocates hoping for more", Agricultural research, , , Census Bureau, DOD-Department Of Defense, , Environmental Protection Agency (EPA), , , , , Science Magazine,   

    From Science Magazine: “Massive 2021 U.S. spending bill leaves research advocates hoping for more” 

    From Science Magazine

    Dec. 22, 2020

    With reporting by Jeffrey Mervis, Jocelyn Kaiser, Adrian Cho, Erik Stokstad, and David Malakoff.

    1
    President Donald Trump will leave office next year having overseen robust growth in federal science spending over his 4 years in office despite his administration’s repeated efforts to slash research budgets. Credit: Carlos Barria/Reuters.

    The massive $1.4 trillion spending bill that the U.S. Congress finally agreed upon this week should once again reverse the deep cuts President Donald Trump had proposed for most science agencies, although the outgoing president has threatened not to sign the bill. Even if he does, the modest hikes for 2021 have left the research community wanting more.

    The final budget package includes increases of 3% for the National Institutes of Health (NIH), 2.5% for the National Science Foundation (NSF), 2.3% for NASA science, and 0.4% for the Department of Energy’s (DOE) science office. Those numbers (see details below) put the cherry on top of 4 years of robust growth under Trump despite his persistent attempts to eviscerate federal science budgets.

    NIH’s budget now stands at $42.9 billion, a 33% rise over its 2016 level of $32.3 billion. Similarly, spending by DOE science tops $7 billion, compared with $5.4 billion in 2017, a boost of 30%. NASA science programs rose by 8% and 11% in 2018 and 2019, respectively, before slowing in 2020 and 2021. NSF’s budget, now nearly $8.5 billion, has grown the least among the four biggest federal science agencies. But even so, a 14% rise since 2017 compares favorably with an overall increase of only 4% during the second term of former President Barack Obama.
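
    The growth figures above are simple before-and-after arithmetic, easy to verify from the numbers quoted in this story (levels in billions of dollars):

    ```python
    # Before-and-after budget levels in $ billions, as quoted in the story.
    budgets = {
        "NIH (2016 -> 2021)":         (32.3, 42.9),
        "DOE science (2017 -> 2021)": (5.4, 7.0),
    }

    for agency, (before, after) in budgets.items():
        growth_pct = 100 * (after - before) / before
        print(f"{agency}: +{growth_pct:.0f}%")  # NIH +33%, DOE science +30%
    ```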

    For 2021, lawmakers carved out increases for science despite giving themselves almost no additional money, compared with this year, to spend on all civilian programs. That generosity reflects a bipartisan consensus on the value of academic research.

    Yet many research advocates are greeting the 2021 numbers with a collective shrug. “The long-overdue full-year appropriations package will provide federal research agencies much-needed funding predictability after an incredibly challenging year,” says Lauren Brookmeyer of the Science Coalition, a group of 50 major research universities.

    Research advocates are reacting in a similar fashion to the $900 billion COVID-19 relief measure that was attached to the annual spending bill. Although they recognize the relief bill’s importance as an economic stimulus, it falls far short of what they had sought.

    University administrators had lobbied for $122 billion to recover from the impact of the pandemic on their faculty and students. In addition, they had calculated that federal agencies needed at least $26 billion more to finance research lost or delayed when campuses were shut down in the spring.

    The relief bill contains only $22 billion for higher education, however, and nothing explicitly for bolstering the research enterprise. The shortfall means academic researchers will look to both Congress and President-elect Joe Biden for help, says Peter McPherson, president of the Association of Public and Land-grant Universities. “We urge lawmakers to view this deal as only a step toward providing more comprehensive relief.”

    Although Biden won’t take the oath of office until 20 January 2021, his transition team is already laying the groundwork for both the next relief package and his first budget submission to Congress in February 2021. Together, they will be a test of whether science can retain its bipartisan support.

    Here are some highlights of what Congress allocated to key research agencies for the fiscal year that ends on 30 September 2021.

    NIH

    The omnibus bill gives NIH a $1.25 billion raise, to $42.9 billion. That 3% boost falls short of the 4.8% raise proposed by Senate appropriators and a whopping 13% increase approved by the U.S. House of Representatives that relied on emergency spending. Trump would have cut NIH’s budget by $2.87 billion.

    The 3% boost is the smallest in recent years for NIH. Research advocates are “disappointed,” says Yvette Seger, director of science policy for the Federation of American Societies for Experimental Biology, but they realize lawmakers were limited by statutory budget caps. “We hope that this is a 1-year anomaly,” she adds.

    The bill includes $300 million for Alzheimer’s disease, for a total of $3.1 billion, continuing a steep 5-year rise in funding for the disease. The National Cancer Institute’s $6.5 billion total includes $37.5 million more for investigator-initiated grants to address a glut of applications that have driven down success rates.

    The Brain Research through Advancing Innovative Neurotechnologies brain-mapping initiative receives $560 million, a $60 million increase. Research on a universal flu vaccine rises $20 million, to $220 million. New initiatives include $10 million each for research on premature births and tick-borne diseases, and $50 million for studies on using artificial intelligence to treat chronic diseases. Research to prevent gun violence holds steady, at $12.5 million, the second year of funding after a 23-year de facto ban.

    DOE science

    In the three previous budget cycles, the office’s budget had boomed, increasing by a total of 29.8%. This year, the office—the largest federal funder of the physical sciences—receives a boost of just 0.4% to $7.026 billion.

    Congress did minimal juggling of priorities among the office’s six research programs. Advanced scientific computing research, which supports the office’s supercomputing efforts, received a bump of 3.6% to $1.015 billion. Basic energy sciences, which funds research on chemistry, materials sciences, and related fields and also runs DOE’s x-ray and neutron sources, got a 1.4% increase, to $2.245 billion. The budget for biological and environmental research crept up 0.4% to $753 million. Fusion energy sciences and high-energy physics get only $1 million more than they received this year, some $671 million and $1.046 billion, respectively, whereas the budget for nuclear physics is flat, at $713 million.

    Lawmakers rallied behind the federal push to build exascale supercomputers, instructing the Office of Science to spend no less than $475 million on the effort. They also endorsed emerging quantum information sciences, requiring the office to spend $245 million on the work. The budget does not include a one-time infusion of $6.25 billion passed this summer by the House.

    Although DOE’s Advanced Research Projects Agency-Energy gets only a $2 million boost, to $427 million, another section of the massive spending bill authorizes rapid growth for its work translating basic research into commercially ready technologies. Lawmakers foresee the decade-old agency nearly doubling its budget by 2025, to $761 million.

    NSF

    This year’s final appropriation continues the slow but steady budget growth at the foundation, which celebrated its 70th anniversary this year. Both its research and education accounts will rise by 2.5%, to $6.9 billion and $968 million, respectively.

    Although lawmakers pledge fealty to the idea of keeping their hands off how the agency allocates its pot of money for academic research, they once again set either floors or targets for many programs aimed at groups traditionally underrepresented in science and engineering and states that lag in attracting NSF funding. They also instructed NSF, NASA, and the National Institute of Standards and Technology (NIST) to examine the “racial and cultural makeup” of their workforces and draw up plans to promote “greater racial and cultural acceptance and diversity.” Pointedly, the bill does not include a proposal from the chair of the House science committee, Representative Eddie Bernice Johnson (D–TX), for NSF to finance a study of systemic racism in U.S. academic research.

    The spending bill also asks NSF to outline its plans for the site in Arecibo, Puerto Rico, that now holds the remains of a 57-year-old, agency-funded radio telescope that recently collapsed. In particular, lawmakers want to know how NSF will decide whether to build a new observatory, and the estimated cost of such a facility.

    NASA

    The 2.3% increase for NASA’s $7.3 billion science mission directorate maintains the status quo among its five discipline-based program areas. It also provides $127 million for the space agency’s science education initiatives, once again ignoring the president’s request to eliminate the program.

    The spending bill frees up NASA to choose any commercial rocket to launch a multibillion-dollar payload that would orbit one of Jupiter’s moons. Previous spending bills had required that the Europa Clipper mission use the Space Launch System being developed to return astronauts to the Moon and beyond.

    Congress poured cold water on the Trump administration’s plans to land a human on the moon by 2024. Trump had requested $3.1 billion for a human landing system, but lawmakers provided just $850 million, not nearly enough to meet the administration’s timetable. The incoming Biden administration is expected to revisit the plan.

    Census Bureau

    Social scientists are applauding Congress for providing what they say is sufficient funding to complete work on the besieged 2020 census. The $1.025 billion includes $91 million from a contingency fund that can be tapped if needed.

    At the same time, lawmakers did not mandate a 4-month extension to deliver the results of this year’s census, something agency officials had previously said they needed to cope with the disruptions caused by the pandemic and a truncated count in the fall. The administration later withdrew that request, and social scientists are worried the time crunch could impact data quality.

    “Stakeholders will resume their efforts to convince Congress to provide these extensions as soon as the 117th Congress convenes next month,” says Terri Ann Lowenthal, a former congressional aide and longtime census watcher. “Congress must offer certainty to the Census Bureau’s career experts as bureau staff works to finish data processing, tabulate the apportionment counts, and then prepare the redistricting files for the states, which are more complex.”

    Agricultural research

    The U.S. Department of Agriculture (USDA) receives $3.3 billion for its research program, $125 million above this year, including the Agricultural Research Service and the National Institute of Food and Agriculture. USDA’s primary competitive grants program, the Agriculture and Food Research Initiative (AFRI), got a $10 million raise to $435 million. That marked another incremental win for Supporters of Agricultural Research, an advocacy coalition that has been working to boost AFRI’s budget. Over the past 6 years, it has helped persuade Congress to increase AFRI’s annual budget by $110 million.

    U.S. Geological Survey

    The agency’s overall budget remains essentially flat at $1.32 billion, an increase of just $45 million over this year. Within its natural hazards program, which will stay nearly constant at $175 million, the only new funding is $4 million for landslide research and preparedness, which doubles that effort to $8 million. Congress said that boost should go toward studying the risk of a serious landslide in Alaska’s Prince William Sound, which might cause a tsunami threatening towns including Whittier and Cordova.

    The water resources program fares better with a 12%, $29 million increase to $263 million. The Hydrologic Instrumentation Facility, which improves stream gages and other monitoring devices, wins a $16 million boost to work on a Next Generation Water Observing System, which will provide faster data on water quantity and quality.

    Environmental Protection Agency (EPA)

    2

    The agency survived the last of the Trump administration’s requests to gut its budget. Brushing aside the proposed 28% cut, lawmakers provided a 2% increase overall and a slightly lower 1.8% increase to its science and technology program. But Congress wants its focus sharpened on the high-profile chemicals known as per- and polyfluoroalkyl substances (PFAS), which are widely used in coatings that resist heat, water, and grease. EPA won a 20% increase to its PFAS research and regulatory activities, which will now be funded at $49 million.

    Defense research

    3

    Basic research funded by the Pentagon got a 2.6%, $68 million boost to $2.67 billion. Congress ended up rejecting many cuts proposed by the Trump administration and the Senate.

    NIST

    The agency’s core science programs got a 4.5%, $34 million raise to $788 million. The overall budget was essentially flat at $1.03 billion.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:59 am on December 30, 2020 Permalink | Reply
    Tags: "Report finds holes in U.S. policies on foreign influence in research", A new report by a congressional watchdog says U.S. agencies need to flesh out and clarify their policies for monitoring the foreign ties of the researchers they fund., , At least one of the agencies under scrutiny—the National Science Foundation (NSF)—is pushing back on the idea its policies are lax., At the Pentagon one unnamed unit has nine open cases “involving foreign influence at U.S. universities.”, , GAO says “NSF estimates that it has taken administrative action against nearly 20 grant recipients who failed to disclose foreign ties.”, Grassley asked GAO to focus on foreign influences over researchers working in the U.S. who receive federal funds., Joint Committee on Research Environments (JCORE), NASA “has 14 open cases of grantee fraud with a foreign influence component.”, Science Magazine, The data suggest NIH has been the most aggressive by far., The GAO report examines the practices of the government’s five biggest funders of research: NIH; NSF; NASA; DOE; and DOD., The GAO report was requested by Senator Chuck Grassley (R-IA) chairman of the Senate Committee on Finance., There are also nonfinancial conflicts that could sway the results- e.g. when a researcher takes on more work than he or she can handle., To date the effort has turned up 455 researchers “of possible concern.”   

    From Science Magazine: “Report finds holes in U.S. policies on foreign influence in research” 

    From Science Magazine

    Dec. 28, 2020
    Jeffrey Mervis

    The U.S. Department of Defense is one of five research agencies faulted for gaps in monitoring foreign influences.
    Credit: E. PETERSEN/SCIENCE.

    A new report by a congressional watchdog says U.S. agencies need to flesh out and clarify their policies for monitoring the foreign ties of the researchers they fund.

    The report, by the Government Accountability Office (GAO), is likely to spur efforts in Congress aimed at preventing China and other nations from using funding and other connections to gain improper access to research funded by the U.S. government. But at least one of the agencies under scrutiny—the National Science Foundation (NSF)—is pushing back on the idea that its policies are lax, warning that tougher rules could hinder its ability to fund the best science.

    The GAO report was requested by Senator Chuck Grassley (R-IA), chairman of the Senate Committee on Finance, who in hearings has prodded research agencies to “pick up their game” when it comes to preventing improper foreign influence. It examines the practices of the government’s five biggest funders of academic research: the National Institutes of Health (NIH), NSF, NASA, the Department of Energy (DOE), and the Department of Defense (DOD). The report recommends they adopt explicit and uniform policies on what grantees need to do to comply with federal laws relating to three issues:

    Financial conflicts of interest (CoI).
    Nonfinancial conflicts that include unrealistic time commitments or duplication of research.
    Disclosure of all sources of research funding, both foreign and domestic.

    Although all five agencies have disclosure policies, the GAO report says DOD and DOE lack an agencywide financial CoI policy, and none of the agencies defines potential nonfinancial conflicts or asks grantees to describe them. Those gaps and ambiguities, GAO concludes, have led to “incomplete or inaccurate information from researchers that … impede the agency’s ability to assess conflicts that could lead to foreign influence.”

    GAO also chides the outgoing administration for failing to deliver long-promised guidance that is being developed by the White House Office of Science and Technology Policy (OSTP) in an interagency process begun nearly 2 years ago. “Most agencies are waiting for the issuance of OSTP’s guidance before they update their policies,” the report notes, adding that the delay has deprived them of “timely information needed to fully address the threats.”

    Fresh numbers

    To date, federal research agencies have taken different approaches in dealing with unwanted foreign influence over their grantees. The GAO report documents, in some cases for the first time, the extent of those activities at the five agencies it examined.

    The data suggest NIH has been the most aggressive, by far. Three years ago, an in-house team at the biomedical behemoth began to try to identify grantees who had not properly disclosed foreign ties. One key element involved comparing information that grantees had provided in their funding applications about non-NIH sources of support with acknowledgments of support in the footnotes of their publications.

    To date, that effort has turned up 455 researchers “of possible concern,” the GAO report notes. And the effort appears to be ongoing; in June, NIH officials said they had vetted 399 such cases. Of those, NIH told GAO that six have led to criminal complaints filed by the U.S. Department of Justice. An additional 32 cases were referred to the inspector general of NIH’s parent agency, the Department of Health and Human Services.

    The report doesn’t provide a similar tally for the other agencies. “They vary in how they collect the data, and it’s hard to separate potential cases of foreign influence from other types of alleged violations,” says Candice Wright, the acting director of GAO’s Science, Technology Assessment, and Analytics office, which conducted the study.

    At the same time, GAO was able to collect some preliminary or incomplete numbers from the four other agencies; those figures suggest none has taken NIH’s proactive stance. Instead, GAO says, those agencies rely heavily on tips from sources outside the agency—including the FBI or an individual with insider knowledge of an alleged violation—to trigger an investigation.

    For example, GAO says “NSF estimates that it has taken administrative action against nearly 20 grant recipients who failed to disclose foreign ties.” That number matches what NSF officials reported this summer. (Wright says GAO received no information on the size of the initial pool of allegations.)

    NASA “has 14 open cases of grantee fraud with a foreign influence component,” according to the GAO report, which hints it is a growing problem. “The number of such cases has approximately doubled in the last year,” it notes.

    At the Pentagon, one unnamed unit has nine open cases “involving foreign influence at U.S. universities,” the report notes. DOE’s inspector general, meanwhile, has reported “21 active cases involving foreign influence.”

    Why they want to know

    Grassley asked GAO to focus on foreign influences over researchers working in the United States who receive federal funds. So GAO homed in on the lack of federal policies explicitly designed to detect efforts by foreign entities to game the traditionally open U.S. research enterprise, say, by telling grantees to keep mum about the relationship or by trying to shape the direction of the research.

    But agency officials say upholding research integrity consists of more than just learning about who else might be funding someone applying for a grant. For example, NSF says its disclosure policies are designed to obtain a wide range of information that helps the agency with its grantmaking. Knowing such things as an applicant’s background, collaborators, and access to relevant resources helps NSF make better decisions on who to fund, explains Rebecca Keiser, NSF’s chief of research security strategy and policy. And every bit of information is useful: “All means all” sources, she emphasizes.

    In contrast, Keiser says, NSF’s policy governing conflicts of interest is meant to ensure the results of the funded research have not been skewed because of any number of outside factors. The most obvious are financial conflicts, in which a scientist stands to profit from the outcome.

    But there are also nonfinancial conflicts that could sway the results. One example is when a researcher takes on more work than he or she can handle. That overbooking is called a conflict of commitment. The researcher’s institution is the arbiter of whether any particular relationship—such as with a company or foreign university—crosses the line, Keiser adds, and how the problem should be resolved.

    As federal officials press for more reporting rules on potential foreign influence, it’s important they not conflate conflict of interest and conflict of commitment policies, Keiser says, a point NSF Director Sethuraman Panchanathan emphasized in a 4-page written response to the GAO report. “Not every foreign relationship represents a conflict of commitment,” Keiser says. “And we wanted to make that clear to GAO.

    “We look at disclosures to determine capacity and potential overlapping research,” she continues. “We want investigators to be comfortable disclosing any connection that bears on their work, without fear that it will automatically be labeled a conflict.”

    What’s JCORE?

    Keiser is part of the interagency OSTP group, called the Joint Committee on Research Environments (JCORE), that is examining foreign influence policies. It has come up with definitions of both financial and nonfinancial conflicts as part of the pending guidelines relating to foreign collaborations. Although OSTP Director Kelvin Droegemeier and other committee members have made numerous presentations this year to the academic research community, GAO found the effort has been flying under the radar of its intended audience.

    GAO asked 52 rank-and-file scientists—chosen because they hold large awards from at least two of the five agencies being examined—whether they were familiar with JCORE and its attempt to refine federal policy on foreign influence in research; 49 said they didn’t know about it. “I’m concerned by that number,” Keiser says. “We obviously need to do more outreach.”

    Congress is likely to provide one such forum in the months ahead. A Grassley staffer says the issue remains “a high priority for” the senator, who is in line to lead the powerful Judiciary Committee should Republicans retain control of the Senate. “The government has a ton of blind spots” when it comes to foreign influence, according to the aide, “and the GAO has done a good job identifying those gaps.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:20 am on December 30, 2020 Permalink | Reply
    Tags: "For those pursuing scientific careers 2020 brought plenty of challenges", , , Science Magazine, Stories published by Science Careers., The COVID-19 pandemic posed daunting challenges to those pursuing scientific careers this year.   

    From Science Magazine: “For those pursuing scientific careers 2020 brought plenty of challenges” 

    From Science Magazine

    Dec. 29, 2020

    Credit: GREGORY BULL/AP PHOTO.

    The COVID-19 pandemic posed daunting challenges to those pursuing scientific careers this year, so it’s no surprise that it took center stage in stories published by Science Careers. Some of those stories are now months old, but the issues—limited job openings, obstacles for parents, mental health concerns—remain front of mind for many. And the pandemic wasn’t the only story of the year. Here’s a look back at some headlines from an eventful 12 months.

    ‘The spark has ignited.’ Latin American scientists intensify fight against sexual harassment.

    In February, we reported on progress made—and gaps that remain—in university policies and procedures.

    The pandemic is hitting scientist parents hard, and some solutions may backfire.

    Data from the early months of the pandemic quantified lost work hours and productivity.

    Virtual scientific conferences open doors to researchers around the world.

    Our survey of spring and summer virtual meetings showed that most saw higher—and perhaps more diverse—attendance than in previous years.

    Graduate programs drop GRE after online version raises concerns about fairness.

    In June, we reported on a new wrinkle to the “GRExit” movement, spurred by the pandemic.

    As the pandemic erodes grad student mental health, academics sound the alarm.

    Two surveys conducted over the summer put numbers to the issue.

    Amid pandemic, U.S. faculty job openings plummet.

    Our analysis of job boards during the fall application season revealed that STEM postings were down by about 70%.

    “A time of reckoning.” How scientists confronted anti-Black racism and built community in 2020.

    In a year of racial unrest, social media events celebrating Black scientists took off.

    Our top Working Life essays from the year.

    Each week we publish a personal essay in our Working Life series. These were the year’s most read pieces.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     