
  • richardmitnick 9:16 pm on June 3, 2021
    Tags: "New internet woven from ‘spooky’ quantum links could supercharge science and commerce", Billions of dollars have poured into research on quantum computers and sensors., In China a quantum satellite named Micius sent entangled particle pairs to ground stations 1200 kilometers apart., Quantum entanglement, Quantum networking has begun to muscle its way back into the spotlight., Quantum networks can start to prove their worth as soon as a few distant nodes are reliably entangled., Science, The Chinese achievement set off alarms in Washington, D.C., The devices will flourish only when they are yoked to each other over long distances., The first networks capable of transmitting individual entangled photons have begun to take shape., The U.S. passed the 2018 National Quantum Initiative Act.

    From Science: “New internet woven from ‘spooky’ quantum links could supercharge science and commerce”

    From Science

    Jun. 3, 2021
    Gabriel Popkin

    Eden Figueroa is trying to coax delicate quantum information out of the lab and into the connected world.
    John Paraskevas/Newsday/Pars International.

    A beam of ethereal blue laser light enters a specialized crystal. There it turns red, a sign that each photon has split into a pair with lower energies—and a mysterious connection. The particles are now quantum mechanically “entangled,” linked like identical twins who know each other’s thoughts despite living in distant cities. The photons zip through a tangle of fibers, then ever so gently deposit the information they encode into waiting clouds of atoms.

    The transmogrifications are “a little bit like magic,” exults Eden Figueroa, a physicist at Stony Brook University-SUNY (US). He and colleagues have concocted the setup on a few laboratory benches cluttered with lenses and mirrors. But they have a much bigger canvas in mind.

    By year’s end, drivers in the largest U.S. metro areas—including, largely thanks to Figueroa, the suburbs of New York City—may unwittingly rumble over the tenuous strands of a new and potentially revolutionary network: a “quantum internet” stitched together by entangled photons like those in Figueroa’s lab.

    Billions of dollars have poured into research on quantum computers and sensors, but many experts say the devices will flourish only when they are yoked to each other over long distances. The vision parallels the way the web vaulted the personal computer from a glorified typewriter and game console to an indispensable telecommunications portal. Through entanglement, a strange quantum mechanical property once derided by Albert Einstein as “spooky action at a distance,” researchers aim to create intimate, instantaneous links across long distances. A quantum internet could weld telescopes into arrays with ultrahigh resolution; precisely synchronize clocks; yield hypersecure communication networks for finance and elections; and make it possible to do quantum computing from anywhere. It could also lead to applications nobody’s yet dreamed of.

    Putting these fragile links into the warm, buzzing world will not be easy, however. Most strands that exist today can send entangled photons to receivers just tens of kilometers apart. And the quantum links are fleeting, destroyed as the photons are received and measured. Researchers dream of sustaining entanglement indefinitely, using streams of photons to weave lasting quantum connections across the globe.

    For that, they will need the quantum equivalent of optical repeaters, the components of today’s telecommunications networks that keep light signals strong across thousands of kilometers of optical fiber. Several teams have already demonstrated key elements of quantum repeaters and say they’re well on their way to building extended networks. “We’ve solved all the scientific problems,” says Mikhail Lukin, a physicist at Harvard University (US). “I’m extremely optimistic that on the scale of 5 to 10 years … we’ll have continental-scale network prototypes.”

    On the night of 29 October 1969, 2 months after Woodstock and as the Vietnam War raged, Charley Kline, a student at the University of California-Los Angeles (US), fired off a message to a computer just over 500 kilometers away at the Stanford Research Institute in Menlo Park, California. It was the launch of the Advanced Research Projects Agency Network (ARPANET). From that precarious two-node beginning—Kline’s intended message was “login” but only “lo” made it through before the system crashed—the internet has swelled into today’s globe-encompassing network. About 2 decades ago, physicists began to wonder whether the same infrastructure could shuttle around something more exotic: quantum information.

    It was a heady time: A mathematician named Peter Shor had, in 1994, devised a quantum algorithm that could break a leading encryption scheme, something classical computers could not do. Shor’s algorithm suggested quantum computers, which exploit the ability of very small or cold objects to simultaneously exist in multiple, “superposed” states, might have a killer application—cracking codes—and ignited a decadeslong effort to build them. Some researchers wondered whether a quantum internet might vastly enhance the power of those machines.

    But building a quantum computer was daunting enough. Like entanglement, the superposed states essential to its power are fragile, collapsing when measured or otherwise perturbed by the outside world. As the field focused on general-purpose quantum computers, thoughts about linking those computers were mostly banished to a distant future. The quantum internet, Figueroa quips, became “like the hipster version” of quantum computers.

    More recently, with quantum computing starting to become a reality, quantum networking has begun to muscle its way back into the spotlight. To do something useful, a quantum computer will require hundreds of quantum bits, or qubits—still well beyond today’s numbers. But quantum networks can start to prove their worth as soon as a few distant nodes are reliably entangled. “We don’t need many qubits in order to do something interesting,” says Stephanie Wehner, research lead for the quantum internet division at Delft University of Technology [Technische Universiteit Delft] (NL).

    The first networks capable of transmitting individual entangled photons have begun to take shape. A 2017 report from China was one of the most spectacular: A quantum satellite named Micius sent entangled particle pairs to ground stations 1200 kilometers apart. The achievement set off alarms in Washington, D.C., that eventually led to the passage of the 2018 National Quantum Initiative Act, signed into law by then-President Donald Trump and intended to spur U.S. quantum technology. The Department of Energy (DOE) (US), which has led efforts to envision a U.S. quantum internet, added to the momentum in April, announcing $25 million for R&D on a quantum internet to link up national labs and universities. “Let’s get our science facilities connected, show that this works, and provide a framework for the rest of the country to hop on and scale it up,” says Chris Fall, who until recently led the DOE Office of Science (US).

    The Chinese group, led by Jian-Wei Pan, a physicist at the University of Science and Technology [中国科学技术大学] (CN) at the Chinese Academy of Sciences [中国科学院] (CN), has continued to develop its network. According to a January Nature paper, it now spans more than 4600 kilometers, using fibers and nonquantum relays. Shorter quantum links have been demonstrated in other countries.

    Industry and government are starting to use those first links for secure communication through a method called quantum key distribution, often abbreviated QKD. QKD enables two parties to share a secret key by making simultaneous measurements on pairs of entangled photons. The quantum connection keeps the key safe from tampering or eavesdropping, because any intervening measurement would destroy the entanglement; information encrypted with the key then travels through ordinary channels. QKD is used to secure some Swiss elections, and banks have tested it. But many experts question its importance, because simpler encryption techniques are also impervious to known attacks, including Shor’s algorithm. Moreover, QKD does not guarantee security at sending and receiving nodes, which remain vulnerable.
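    The logic of entanglement-based key distribution can be sketched in a few lines of code. The toy model below is illustrative only: it assumes ideal devices, and the 25% error rate is the textbook figure for a naive intercept-and-resend attack, not a detail from the article. The point it demonstrates is the one in the paragraph above: matching-basis measurements on an entangled pair agree perfectly, so an eavesdropper who measures in transit leaves a detectable trail of mismatches.

```python
import random

def run(n_pairs, eavesdropper, rng):
    """Cartoon of entanglement-based QKD (BBM92-style, idealized)."""
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        a_basis, b_basis = rng.randint(0, 1), rng.randint(0, 1)
        outcome = rng.randint(0, 1)           # shared, correlated outcome
        a_bit, b_bit = outcome, outcome
        # An intervening measurement destroys the entanglement; in the
        # textbook intercept-resend attack this flips ~25% of sifted bits.
        if eavesdropper and a_basis == b_basis and rng.random() < 0.25:
            b_bit ^= 1
        if a_basis == b_basis:                # keep only matching bases
            alice_key.append(a_bit)
            bob_key.append(b_bit)
    return alice_key, bob_key

rng = random.Random(7)
a, b = run(2000, eavesdropper=False, rng=rng)
print(sum(x != y for x, y in zip(a, b)))      # 0: clean channel, keys agree

a, b = run(2000, eavesdropper=True, rng=rng)
print(sum(x != y for x, y in zip(a, b)) > 0)  # True: tampering is visible
```

    Real systems layer basis sifting over an authenticated classical channel, error correction, and privacy amplification on top of this bare comparison.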

    A full-fledged quantum network aims higher. It wouldn’t just transmit entangled particles; it “distributes entanglement as a resource,” says Neil Zimmerman, a physicist at the National Institute of Standards and Technology (US), enabling devices to be entangled for long periods, sharing and exploiting quantum information.

    Science might be the first to benefit. One possible use is very long baseline interferometry. The method has already linked radio telescopes around the globe, effectively creating a single, giant dish powerful enough to image a black hole at the center of a distant galaxy [Event Horizon Telescope-EHT]. Combining light from far-flung optical telescopes is far more challenging. But physicists have proposed schemes to capture light gathered by the telescopes in quantum memories and use entangled photons to extract and merge its phase information, the key to ultrahigh resolution. Entangling distributed quantum sensors could also lead to more sensitive detector networks for dark matter and gravitational waves.

    More practical applications include ultrasecure elections and hack-proof communication in which the information itself—and not just a secret key for decoding it, as in QKD—is shared between entangled nodes. Entanglement could synchronize atomic clocks and prevent the delays and errors that accumulate as information is sent between them. And it could offer a way to link up quantum computers, increasing their power. Quantum computers of the near future will likely be limited to a few hundred qubits each, but if entangled together, they may be able to tackle more sophisticated computations.

    A quantum internet would be woven together by photons that are entangled, meaning they share a quantum state. But quantum repeaters would be needed to relay the fragile photons between far-flung users. Credit: N. Desai/Science.

    Taking this idea further, some also envision an analog of cloud computing: so-called blind quantum computing. The thinking is that the most powerful quantum computers will one day be located at national laboratories, universities, and companies, much as supercomputers are today. Designers of drugs and materials or stock traders might want to run quantum algorithms from distant locations without divulging their programs’ contents. In theory, users could encode the problem on a local device that’s entangled with a remote quantum computer—exploiting the distant computer’s power while leaving it blind to the problem being solved.

    “As a physicist, I think [blind quantum computing] is very beautiful,” says Tracy Northup of the University of Innsbruck [Leopold-Franzens-Universität Innsbruck] (AT).

    Researchers have taken early steps toward fully entangled networks. In 2015, Wehner and colleagues entangled photons with electron spins in nitrogen atoms, encased within two tiny diamonds 1.3 kilometers apart on the TU Delft campus. The photons were then sent to an intermediate station, where they interacted with each other to entangle the diamond nodes. The experiment set a record for the distance of “heralded” entanglement—meaning researchers could confirm and use it—and the link lasted for up to several microseconds.

    More expansive networks, however, will likely require quantum repeaters to copy, correct, amplify, and rebroadcast virtually every signal. And although repeaters are a relatively straightforward technology for the classical internet, a quantum repeater has to elude the “no-cloning” theorem—which holds, essentially, that a quantum state cannot be copied.
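    The obstacle can be made concrete with the standard linearity argument (textbook material, not spelled out in the article): any machine that copied arbitrary quantum states would contradict the linearity of quantum mechanics.

```latex
% Suppose a unitary U copies every state: U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle.
% Then for |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}, linearity forces
U|+\rangle|0\rangle
  = \tfrac{1}{\sqrt{2}}\left(U|0\rangle|0\rangle + U|1\rangle|0\rangle\right)
  = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),
% while a faithful copy would instead be
|+\rangle|+\rangle
  = \tfrac{1}{2}\left(|00\rangle + |01\rangle + |10\rangle + |11\rangle\right).
% The two states differ, so no such U exists: a quantum repeater must relay
% entanglement without ever copying the underlying state.
```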

    One popular repeater design starts with two identical, entangled photon pairs at separate sources. One photon from each pair flies toward distant end points, which could be quantum computers, sensors, or other repeaters. Let’s call them Alice and Bob, as quantum physicists are wont to do.

    The other halves of each pair zip inward, toward the heart of the repeater. That device must trap the photon that arrives first, coax its information into a quantum memory—perhaps a diamond or atom cloud—correct any errors that have accumulated in transit, and coddle it until the other photon arrives. The repeater then needs to mate the two in a way that entangles their far-flung twins. This process, known as entanglement swapping, creates a link between the distant end points, Alice and Bob. Additional repeaters could daisy-chain Alice to a Carol and Bob to a Dave, ultimately spanning big distances.
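    Entanglement swapping is easy to verify with a four-qubit state vector. The sketch below is a minimal pencil-and-paper simulation, not any group's control code: it prepares two |Φ+⟩ pairs, projects the repeater's two inner photons onto a Bell state, and checks that the two outer qubits, which never interacted, emerge entangled.

```python
import math

# Four qubits |a b1 b2 c>, index = 8a + 4b1 + 2b2 + c.
# Source 1 entangles (a, b1); source 2 entangles (b2, c).
# Each source emits the Bell pair |Phi+> = (|00> + |11>) / sqrt(2).
state = [0.0] * 16
for a in range(2):
    for c in range(2):
        state[8*a + 4*a + 2*c + c] = 0.5   # b1 = a, b2 = c terms

# Bell measurement on the inner qubits (b1, b2): contract the state with
# <Phi+| = (1/sqrt(2)) (<00| + <11|) over b1, b2.
ac = [0.0] * 4                             # unnormalized state of (a, c)
for a in range(2):
    for c in range(2):
        for b in range(2):                 # surviving terms have b1 = b2 = b
            ac[2*a + c] += state[8*a + 4*b + 2*b + c] / math.sqrt(2)

prob = sum(x * x for x in ac)              # this Bell outcome occurs 1/4 of the time
ac = [x / math.sqrt(prob) for x in ac]

print(prob)  # ~0.25
print(ac)    # ~[0.707, 0.0, 0.0, 0.707]: the outer qubits are now in |Phi+>
```

    The other three Bell outcomes occur with the same probability and leave Alice and Bob in a related entangled state that a known local correction turns back into |Φ+⟩.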

    Figueroa traces his drive to build such a device to his 2008 Ph.D. thesis defense at the University of Calgary (CA). After the young Mexican-born physicist described how he entangled atoms with light, a theorist asked what he was going to do with the setup. “At the time—shame on me—I didn’t have an answer. To me, it was a toy I could play with,” Figueroa recalls. “He told me: ‘A quantum repeater is what you’re going to do with it.’”

    Inspired, Figueroa pursued the system at the MPG Institute for Quantum Optics [MPG Institut für Quantenoptik] (DE) before landing at Stony Brook. He decided early on that commercial quantum repeaters should operate at room temperature—a break from most quantum lab experiments, which are conducted at very cold temperatures to minimize thermal vibrations that could upset fragile quantum states.

    Figueroa is counting on rubidium vapor for one component of a repeater, the quantum memory. Atoms of rubidium, a heavy cousin of the more familiar lithium and sodium, are appealing because their internal quantum states can be set and controlled by light. In Figueroa’s lab, entangled photons from the frequency-splitting crystal enter plastic cells containing 1 trillion or so rubidium atoms each. There, each photon’s information is encoded as a superposition among the atoms, where it lasts for a fraction of a millisecond—pretty good for a quantum experiment.

    Figueroa is still developing the second stage of the repeater: using computer-controlled bursts of laser light to correct errors and sustain the clouds’ quantum states. Additional laser pulses will then send photons carrying entanglement from the memories to measurement devices to entangle the end users.

    Impurity atoms in minuscule diamonds like the one at the heart of this chip can store and relay quantum information. Credit: QuTech.

    Lukin builds quantum repeaters using a different medium: silicon atoms encased in diamonds. Incoming photons can tweak the quantum spin of a silicon electron, creating a potentially stable memory; in a 2020 Nature paper, his team reported catching and storing quantum states for more than one-fifth of a second, far longer than in the rubidium memory. Although the diamonds must be chilled to within a fraction of a degree above absolute zero, Lukin says the fridges needed are fast becoming compact and efficient. “Right now it’s the least of my worries.”

    At TU Delft, Wehner and her colleagues are pushing the diamond approach as well, but with nitrogen atoms instead of silicon. Last month in Science, the team reported entangling three diamonds in the lab, creating a miniature quantum network. First, the researchers used photons to entangle two different diamonds, Alice and Bob. At Bob, the entanglement was transferred from nitrogen to a spin in a carbon nucleus: a long-lived quantum memory. The entanglement process was then repeated between Bob’s nitrogen atom and one in a third diamond, Charlie. A joint measurement on Bob’s nitrogen atom and carbon nucleus then transferred the entanglement to the third leg, Alice to Charlie.

    Although the distances were much shorter and the efficiency lower than real-world quantum networks will require, the controllable swapping of entanglement demonstrated “the working principle of a quantum repeater,” says TU Delft physicist Ronald Hanson, who led the experiment. It is “something that has never been done.”

    Pan’s team has also demonstrated a partial repeater, with atom clouds serving as the quantum memories. But in a study published in 2019 in Nature Photonics, his team demonstrated an early prototype of a radically different scheme: sending such large numbers of entangled photons through parallel fibers that at least one might survive the journey. Although potentially avoiding the need for repeaters, the network would require the ability to entangle at least several hundred photons, Pan says; his current record is 12. Using satellites to generate entanglement, another technology Pan is developing, could also reduce the need for repeaters because photons can survive much longer journeys through space than through fibers.

    A true quantum repeater, most experts agree, remains years away, and may ultimately use technologies common in today’s quantum computers, such as superconductors or trapped ions, rather than diamonds or atom clouds. Such a device will need to capture nearly every photon that hits it and will probably require quantum computers of at least a few hundred qubits to correct and process signals. In a yin-yang sort of way, better quantum computers could boost the quantum internet—which in turn could supercharge quantum computing.

    While physicists labor to perfect repeaters, they are racing to link sites within single metropolitan areas, for which repeaters are not needed. In a study posted to arXiv in February, Figueroa sent photons from two atom-cloud memories in his lab through 79 kilometers of commercial fibers to Brookhaven National Laboratory, where the photons were merged—a step toward end-to-end entanglement of the type demonstrated by the TU Delft group. By next year, he plans to deploy two of his quantum memories—compacted to the size of a minirefrigerator—midway between his university and the New York City office of his startup company, Qunnect, to see if they boost the odds of photons surviving the journey.

    Embryonic quantum networks are also being built in the Boston, Los Angeles, and Washington, D.C., regions, and two networks will link DOE’s Argonne National Laboratory (US) and DOE’s Fermi National Accelerator Laboratory (US) in Illinois to several Chicago-area universities. TU Delft researchers hope to soon extend their record-long entanglement to a commercial telecommunications facility in The Hague, Netherlands, and other fledgling networks are growing in Europe and Asia.

    The ultimate goal is to use repeaters to link these small networks into an intercontinental internet. But first, researchers face more mundane challenges, including building better photon sources and detectors, minimizing losses at fiber connections, and efficiently converting photons between the native frequency of a particular quantum system—say, an atom cloud or diamond—and the infrared wavelengths that telecom fibers conduct. “Those real-world problems,” Zimmerman says, “may actually be bigger than fiber attenuation.”

    Some doubt the technology will live up to the hype. Entanglement “is a very odd, very special kind of property,” says Kurt Jacobs, a physicist at the Army Research Laboratory. “It doesn’t necessarily lend itself to all kinds of applications.” For clock synchronization, for example, the advantage over classical methods scales only as the square root of the number of entangled devices. A threefold gain requires linking nine clocks—which may be more trouble than it’s worth. “It’s always going to be harder to have a functional quantum network than a classical one,” Jacobs says.
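    The trade-off Jacobs describes is the standard quantum-metrology scaling; the arithmetic below simply restates the article's example.

```latex
\text{advantage}(N) = \sqrt{N}
\quad\Longrightarrow\quad
N = \text{advantage}^2,
\qquad
\text{so a threefold gain requires } N = 3^2 = 9 \text{ entangled clocks.}
```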

    To such doubts, David Awschalom, a physicist at the University of Chicago (US) who is spearheading one of the Midwest networks, counters, “We’re at the transistor level of quantum technology.” It took a few years after the transistor was invented in 1947 before companies found uses for it in radios, hearing aids, and other devices. Transistors are now etched by the billions into chips in every new computer, smartphone, and car.

    Future generations may look back on this moment the way we look nostalgically at ARPANET, a pure infant version of the internet, its vast potential yet to be recognized and commercialized. “You can be sure that we haven’t yet thought of some of the most important things this technology will do,” Awschalom says. “It would take extraordinary arrogance to believe you’ve done that.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

  • richardmitnick 1:52 pm on March 12, 2021
    Tags: "Giant gravitational wave detectors could hear murmurs from across universe", European Space Agency (EU)/National Aeronautics and Space Administration (US) eLISA space-based: the future of gravitational wave research., KAGRA Large-scale Cryogenic Gravitational Wave Telescope Project (JP), Science

    From Science Magazine: “Giant gravitational wave detectors could hear murmurs from across universe” 

    From Science Magazine

    Mar. 10, 2021
    Adrian Cho

    Just 5 years ago, physicists opened a new window on the universe when they first detected gravitational waves, ripples in space itself set off when massive black holes or neutron stars collide. Even as discoveries pour in, researchers are already planning bigger, more sensitive detectors. And a Ford versus Ferrari kind of rivalry has emerged, with scientists in the United States simply proposing bigger detectors, and researchers in Europe pursuing a more radical design.

    “Right now, we’re only catching the rarest, loudest events, but there’s a whole lot more, murmuring through the universe,” says Jocelyn Read, an astrophysicist at California State University, Fullerton (US), who’s working on the U.S. effort. Physicists hope to have the new detectors running in the 2030s, which means they have to start planning now, says David Reitze, a physicist at the California Institute of Technology (US). “Gravitational wave discoveries have captivated the world, so now is a great time to be thinking about what comes next.”

    Current detectors are all L-shaped instruments called interferometers. Laser light bounces between mirrors suspended at either end of each arm, and some of it leaks through to meet at the crook of the L. There, the light interferes in a way that depends on the arms’ relative lengths. By monitoring that interference, physicists can spot a passing gravitational wave, which will generally make the lengths of the arms waver by different amounts.

    Caltech/MIT Advanced aLIGO

    Caltech/MIT Advanced aLIGO installation at Hanford, WA, USA

    Caltech/MIT Advanced aLIGO detector installation at Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    European Space Agency (EU)/National Aeronautics and Space Administration (US) eLISA, space-based: the future of gravitational wave research.

    To tamp down other vibrations, the interferometer must be housed in a vacuum chamber and the weighty mirrors hung from sophisticated suspension systems. And to detect the tiny stretching of space, the interferometer arms must be long. In the Laser Interferometer Gravitational-Wave Observatory (LIGO), twin instruments in Louisiana and Washington state that spotted the first gravitational wave from two black holes whirling into each other, the arms are 4 kilometers long. Europe’s Virgo detector in Italy has 3-kilometer-long arms.

    In spite of the detectors’ sizes, a gravitational wave changes the relative lengths of their arms by less than the width of a proton.
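    The proton comparison follows from simple arithmetic: a wave of strain h changes an arm of length L by ΔL = hL. The strain value below is a typical textbook figure for a strong event reaching Earth, not a number taken from the article.

```python
# Back-of-envelope check of "less than the width of a proton".
strain = 1e-21            # typical fractional length change, dL / L
arm = 4_000.0             # LIGO arm length in meters
proton = 8.4e-16          # proton charge radius in meters (~0.84 fm)

dL = strain * arm
print(dL)                 # ~4e-18 m
print(proton / dL)        # the proton is roughly 200x wider than the stretch
```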

    The dozens of black hole mergers that LIGO and Virgo have spotted have shown that stellar-mass black holes, created when massive stars collapse to points, are more varied in mass than theorists expected.

    Masses in the Stellar Graveyard GWTC-2 plot v1.0 BY LIGO-Virgo. Credit: Frank Elavsky and Aaron Geller at Northwestern University(US).

    In 2017, LIGO and Virgo delivered another revelation, detecting two neutron stars spiraling together and alerting astronomers to the merger’s location on the sky. Within hours telescopes of all types had studied the aftermath of the resulting “kilonova,” observing how the explosion forged copious heavy elements.

    Researchers now want a detector 10 times more sensitive, which they say would have mind-boggling potential. It could spot all black hole mergers within the observable universe and even peer back to the time before the first stars to search for primordial black holes that formed in the big bang. It should also spot hundreds of kilonovae, laying bare the nature of the ultradense matter in neutron stars.

    The U.S. vision for such a dream machine is simple. “We’re just going to make it really, really big,” says Read, who is helping design Cosmic Explorer, an interferometer with arms 40 kilometers long—essentially, a LIGO detector scaled up 10-fold.

    The “cookie cutter design” might enable the United States to afford multiple, widely separated detectors, which would help pinpoint sources on the sky as LIGO and Virgo do now, says Barry Barish, a physicist at Caltech who directed the construction of LIGO.

    Siting such mammoth wave catchers may be tricky. The 40-kilometer arms have to be straight, but Earth is round. If the crook of the L sits on the ground, then the ends of the interferometers might have to rest on berms 30 meters high. So U.S. researchers hope to find bowl-like areas that might accommodate the structure more naturally.

    In contrast, European physicists envision a single subterranean gravitational wave observatory, called the Einstein Telescope (ET), that would do it all. “We want to realize an infrastructure that is able to host all the evolutions [of detectors] for 50 years,” says Michele Punturo, a physicist with Italy’s National Institute for Nuclear Physics (IT) in Perugia and co-chair of the ET steering committee.

    The ET would comprise multiple V-shaped interferometers with arms 10 kilometers long, arranged in an equilateral triangle deep underground to help shield out vibrations. With interferometers pointed in three directions, the ET could determine the polarization of gravitational waves—the direction in which they stretch space—to help locate sources on the sky and probe the fundamental nature of the waves.

    The tunnels would actually house two sets of interferometers. The signals detected by LIGO and Virgo hum at frequencies that range from about 10 to 2000 cycles per second and rise as a pair of objects spirals together. But picking up lower frequencies of just a few cycles per second would open new realms. To detect them, a second interferometer that uses a lower-power laser and mirrors cooled to near absolute zero would nestle in each corner of the ET. (Such mirrors are already in use at Japan’s KAGRA Large-scale Cryogenic Gravitational Wave Telescope Project (JP), which has 3-kilometer arms and is striving to catch up with LIGO and Virgo.)

    By going to lower frequencies, the ET could detect the merger of black holes hundreds of times as massive as the Sun. It could also catch neutron-star pairs hours before they actually merge, giving astronomers advance warning of kilonova explosions, says Marica Branchesi, an astronomer at Italy’s Gran Sasso Science Institute. “The early emission [of light] is extremely important, because there is a lot of physics there,” she says.

    The ET should cost €1.7 billion, including €900 million for the tunneling and basic infrastructure, Punturo says. Researchers are considering two sites, one near where Belgium, Germany, and the Netherlands meet and another on the island of Sardinia. The plan is under review by the European Strategy Forum on Research Infrastructures, which could put the ET on its to-do list this summer. “This is an important political step,” Punturo says, but not final approval for construction.

    The U.S. proposal is less mature. Researchers want the National Science Foundation(US) to provide $65 million for design work so a decision on the billion-dollar machine can be made in the mid-2020s, Barish says. Physicists hope to have both Cosmic Explorer and the ET running in the mid-2030s, at the same time as the planned Laser Interferometer Space Antenna, a constellation of three spacecraft millions of kilometers apart that will sense gravitational waves of far lower frequencies from supermassive black holes in the centers of galaxies.


    The push for new gravitational wave detectors isn’t necessarily a competition. “What we really want is to have ET and Cosmic Explorer and, ideally, even a third detector of similar sensitivity,” says Stefan Hild, a physicist at Maastricht University [Universiteit Maastricht](NL) who works on the ET. Reitze notes, however, that timing and cost could “push towards convergence and simplicity in designs.” Instead of a Ford and a Ferrari, perhaps physicists will end up building a few Audis.

    See the full article here.



  • richardmitnick 9:22 am on September 9, 2020
    Tags: "One of quantum physics’ greatest paradoxes may have lost its leading explanation", One of the most plausible mechanisms for quantum collapse—gravity—has suffered a setback., Science, The basic idea is that the gravitational field of any object stands outside quantum theory. It resists being placed into awkward combinations or “superpositions” of different states., The gravity hypothesis traces its origins to Hungarian physicists Károlyházy Frigyes in the 1960s and Lajos Diósi in the 1980s.

    From Science: “One of quantum physics’ greatest paradoxes may have lost its leading explanation” 

    From Science

    Sep. 7, 2020
    George Musser

    Gravity is unlikely to be the cause of quantum collapse, suggests an underground experiment at Italy’s Gran Sasso National Laboratory.
    Tommaso Guicciardini/Science Source.

    It’s one of the oddest tenets of quantum theory: a particle can be in two places at once—yet we only ever see it here or there. Textbooks state that the act of observing the particle “collapses” it, such that it appears at random in only one of its two locations. But physicists quarrel over why that would happen, if indeed it does. Now, one of the most plausible mechanisms for quantum collapse—gravity—has suffered a setback.

    The gravity hypothesis traces its origins to Hungarian physicists Károlyházy Frigyes in the 1960s and Lajos Diósi in the 1980s. The basic idea is that the gravitational field of any object stands outside quantum theory. It resists being placed into awkward combinations, or “superpositions,” of different states. So if a particle is made to be both here and there, its gravitational field tries to do the same—but the field cannot endure the tension for long; it collapses and takes the particle with it.

    Renowned University of Oxford mathematician Roger Penrose championed the hypothesis in the late 1980s because, he says, it removes the anthropocentric notion that the measurement itself somehow causes the collapse. “It takes place in the physics, and it’s not because somebody comes and looks at it.”

    Still, the hypothesis seemed impossible to probe with any realistic technology, notes Diósi, now at the Wigner Research Center, and a co-author on the new paper. “For 30 years, I had been always criticized in my country that I speculated on something which was totally untestable.”

    New methods now make this doable. In the new study, Diósi and other scientists looked for one of the many ways, whether by gravity or some other mechanism, that a quantum collapse would reveal itself: A particle that collapses would swerve randomly, heating up the system of which it is part. “It is as if you gave a kick to a particle,” says co-author Sandro Donadi of the Frankfurt Institute for Advanced Studies.

    If the particle is charged, it will emit a photon of radiation as it swerves. And multiple particles subject to the same gravitational lurch will emit in unison. “You have an amplified effect,” says co-author Cătălina Curceanu of the National Institute for Nuclear Physics in Rome.

    To test this idea, the researchers built a detector out of a crystal of germanium the size of a coffee cup. They looked for excess x-ray and gamma ray emissions from protons in the germanium nuclei, which create electrical pulses in the material. The scientists chose this portion of the spectrum to maximize the amplification. They then wrapped the crystal in lead and placed it 1.4 kilometers underground in the Gran Sasso National Laboratory in central Italy to shield it from other radiation sources.

    The Laboratori Nazionali del Gran Sasso, located in the Abruzzo region of central Italy.

    Over 2 months in 2014 and 2015, they saw 576 photons, close to the 506 expected from naturally occurring radioactivity, they report today in Nature Physics.

    By comparison, Penrose’s model predicted 70,000 such photons. “You should see some collapse effect in the germanium experiment, but we don’t,” Curceanu says. That suggests gravity is not, in fact, shaking particles out of their quantum superpositions. (The experiment also constrained, though did not rule out, collapse mechanisms that do not involve gravity.)
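    The arithmetic behind that conclusion is simple enough to sketch. The comparison below uses only the counts quoted above and ignores the experimental uncertainties a real analysis would carry:

    ```python
    # Counts quoted in the article; this naive comparison ignores
    # uncertainties on the background estimate.
    observed = 576      # photons detected over 2 months
    background = 506    # expected from natural radioactivity alone
    penrose = 70_000    # extra photons predicted by Penrose's model

    max_excess = observed - background   # room left for collapse radiation
    shortfall = penrose / max_excess     # factor by which the prediction overshoots
    print(max_excess, round(shortfall))  # 70 1000
    ```

    Even granting the entire 70-photon excess to collapse radiation, the gravity-collapse prediction overshoots what was seen by a factor of roughly a thousand.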

    To confirm the result, physicists need to engineer those superpositions directly, as opposed to relying on random natural occurrences, says Ivette Fuentes of the University of Southampton: “You should, in principle, be able to make a superposition of massive particles. So let’s do it.” She says her team is working to create clouds of 100 million sodium atoms at a temperature just above absolute zero.

    Although Penrose praises the new work, he thinks it’s not really possible to test his version of the model. He says he was never comfortable with particle swerves, because they might cause the universe to gain or lose energy, violating a basic principle of physics. He has spent the pandemic lockdown creating a new and improved model. “It doesn’t produce a heating or radiation,” he says. In that case, gravity might be causing collapse, yet hiding its tracks.

    Other factors such as interactions between germanium protons and electrons might also cloak the signal, says theoretical physicist Maaneli Derakhshani of Rutgers University, New Brunswick, NJ, USA. All in all, he says, if gravity does cause collapse, the process has to be more complicated than Penrose originally proposed. “One could reasonably argue that … the juice isn’t worth the squeeze.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 1:13 pm on January 25, 2018 Permalink | Reply
    Tags: Science

    From Science: “Renewed measurements of muon’s magnetism could open door to new physics” 

    Science Magazine

    Jan. 25, 2018
    Adrian Cho

    The magnetism of muons is measured as the short-lived particles circulate in a 700-ton ring. FNAL.

    Next week, physicists will pick up an old quest for new physics. A team of 190 researchers at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, will begin measuring to exquisite precision the magnetism of a fleeting particle called the muon. They hope to firm up tantalizing hints from an earlier incarnation of the experiment, which suggested that the particle is ever so slightly more magnetic than predicted by the prevailing standard model of particle physics. That would give researchers something they have desired for decades: proof of physics beyond the standard model.

    “Physics could use a little shot of love from nature right now,” says David Hertzog, a physicist at the University of Washington in Seattle and co-spokesperson for the experiment, which is known as Muon g-2 (pronounced “gee minus two”). Physicists are feeling increasingly stymied because the world’s biggest atom smasher, the Large Hadron Collider (LHC) near Geneva, Switzerland, has yet to blast out particles beyond those in the standard model. However, g-2 could provide indirect evidence of particles too heavy to be produced by the LHC.

    The muon is a heavier, unstable cousin of the electron. Because it is charged, it will circle in a magnetic field. Each muon is also magnetized like a miniature bar magnet. Place a muon in a magnetic field perpendicular to the orientation of its magnetization, and its magnetic polarity will turn, or precess, just like a twirling compass needle.

    At first glance, theory predicts that in a magnetic field a muon’s magnetism should precess at the same rate as the particle itself circulates, so that if it starts out polarized in the direction it’s flying, it will remain locked that way throughout its orbit. Thanks to quantum uncertainty, however, the muon continually emits and reabsorbs other particles. That haze of particles popping in and out of existence increases the muon’s magnetism and makes it precess slightly faster than it circulates.
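    The size of that extra precession can be sketched with the standard formula ω_a = a·eB/m, where a = (g−2)/2 is the anomaly. The field value below is an assumption based on the experiment’s roughly 1.45-tesla storage ring:

    ```python
    import math

    # Anomalous precession: the spin turns faster than the momentum by
    # omega_a = a * e * B / m, where a = (g-2)/2 is the "anomaly."
    e = 1.602176634e-19      # elementary charge, C
    m_mu = 1.883531627e-28   # muon mass, kg
    a_mu = 0.00116592        # (g-2)/2, the anomalous magnetic moment
    B = 1.45                 # storage-ring field, tesla (assumed)

    omega_a = a_mu * e * B / m_mu            # rad/s
    freq_khz = omega_a / (2 * math.pi) / 1e3
    print(round(freq_khz))  # ~229 (kHz)
    ```

    That few-hundred-kilohertz wobble, extracted from the decay positrons, is the quantity the experiment must pin down to sub-parts-per-million precision.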

    Because the muon can emit and reabsorb any particle, its magnetism tallies all possible particles—even new ones too massive for the LHC to make. Other charged particles could also sample this unseen zoo, says Aida El-Khadra, a theorist at the University of Illinois in Urbana. But, she adds, “The muon hits the sweet spot of being light enough to be long-lived and heavy enough to be sensitive to new physics.”

    From 1997 to 2001, researchers on the original g-2 experiment at Brookhaven National Laboratory in Upton, New York, tested this promise by shooting the particles by the thousands into a ring-shaped vacuum chamber 45 meters in diameter, sandwiched between superconducting magnets.

    Over hundreds of microseconds, the positively charged muons decay into positrons, which tend to be spat out in the direction of the muons’ polarization. Physicists can track the muons’ precession by watching for positrons with detectors lining the edge of the ring.

    The g-2 team first reported a slight excess in the muon’s magnetism in 2001. That result quickly faded as theorists found a simple math mistake in the standard model prediction (Science, 21 December 2001, p. 2449). Still, by the time the team reported on the last of its Brookhaven data in 2004, the discrepancy had re-emerged. Since then, the result has grown, as theorists improved their standard model calculations. They had struggled to account for the process in which the muon emits and reabsorbs particles called hadrons, says Michel Davier, a theorist at the University of Paris-South in Orsay, France. By using data from electron-positron colliders, he says, the theorists managed to reduce this largest uncertainty.

    Physicists measure the strength of signals in multiples of the experimental uncertainty, σ, and the discrepancy now stands at 3.5 σ—short of the 5 σ needed to claim a discovery, but interesting enough to warrant trying again.
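    The conversion between σ and the probability of a chance fluctuation is a one-liner; this sketch assumes the usual one-sided Gaussian convention:

    ```python
    import math

    def sigma_to_pvalue(sigma: float) -> float:
        """One-sided probability of a chance fluctuation at least
        `sigma` standard deviations above the mean (Gaussian)."""
        return 0.5 * math.erfc(sigma / math.sqrt(2))

    print(f"{sigma_to_pvalue(3.5):.1e}")  # ~2.3e-04, odds of roughly 1 in 4000
    print(f"{sigma_to_pvalue(5.0):.1e}")  # ~2.9e-07, odds of roughly 1 in 3.5 million
    ```

    The jump from 3.5 σ to 5 σ is thus a leap from thousand-to-one odds to million-to-one odds against a statistical fluke.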

    In 2013, the g-2 team lugged the experiment on a 5000-kilometer odyssey from Brookhaven to Fermilab, taking the ring by barge around the U.S. eastern seaboard and up the Mississippi River. Since then, they have made the magnetic field three times more uniform, and at Fermilab, they can generate far purer muon beams. “It’s really a whole new experiment,” says Lee Roberts, a g-2 physicist at Boston University. “Everything is better.”

    Over 3 years, the team aims to collect 21 times more data than during its time at Brookhaven, Roberts says. By next year, Hertzog says, the team hopes to have enough data for a first result, which could push the discrepancy above 5 σ.

    Will the muon end up being a portal to new physics? JoAnne Hewett, a theorist at SLAC National Accelerator Laboratory in Menlo Park, California, hesitates to wager. “In my physics lifetime, every 3-σ deviation from the standard model has gone away,” she says. “If it weren’t for that baggage, I’d be cautiously optimistic.”


    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

  • richardmitnick 3:40 pm on January 24, 2018 Permalink | Reply
    Tags: Science, SULF-Shanghai Superintense Ultrafast Laser Facility

    From Science: “Physicists are planning to build lasers so powerful they could rip apart empty space” 

    Science Magazine

    Jan. 24, 2018
    Edwin Cartlidge

    A laser in Shanghai, China, has set power records yet fits on tabletops.

    Inside a cramped laboratory in Shanghai, China, physicist Ruxin Li and colleagues are breaking records with the most powerful pulses of light the world has ever seen. At the heart of their laser, called the Shanghai Superintense Ultrafast Laser Facility (SULF), is a single cylinder of titanium-doped sapphire about the width of a Frisbee. After kindling light in the crystal and shunting it through a system of lenses and mirrors, the SULF distills it into pulses of mind-boggling power. In 2016, it achieved an unprecedented 5.3 million billion watts, or 5.3 petawatts (PW). The lights in Shanghai do not dim each time the laser fires, however. Although the pulses are extraordinarily powerful, they are also infinitesimally brief, lasting less than a trillionth of a second. The researchers are now upgrading their laser and hope to beat their own record by the end of this year with a 10-PW shot, which would pack more than 1000 times the power of all the world’s electrical grids combined.

    The group’s ambitions don’t end there. This year, Li and colleagues intend to start building a 100-PW laser known as the Station of Extreme Light (SEL). By 2023, it could be flinging pulses into a chamber 20 meters underground, subjecting targets to extremes of temperature and pressure not normally found on Earth, a boon to astrophysicists and materials scientists alike. The laser could also power demonstrations of a new way to accelerate particles for use in medicine and high-energy physics. But most alluring, Li says, would be showing that light could tear electrons and their antimatter counterparts, positrons, from empty space—a phenomenon known as “breaking the vacuum.” It would be a striking illustration that matter and energy are interchangeable, as Albert Einstein’s famous E=mc² equation states. Although nuclear weapons attest to the conversion of matter into immense amounts of heat and light, doing the reverse is not so easy. But Li says the SEL is up to the task. “That would be very exciting,” he says. “It would mean you could generate something from nothing.”

    The Chinese group is “definitely leading the way” to 100 PW, says Philip Bucksbaum, an atomic physicist at Stanford University in Palo Alto, California. But there is plenty of competition. In the next few years, 10-PW devices should switch on in Romania and the Czech Republic as part of Europe’s Extreme Light Infrastructure, although the project recently put off its goal of building a 100-PW-scale device. Physicists in Russia have drawn up a design for a 180-PW laser known as the Exawatt Center for Extreme Light Studies (XCELS), while Japanese researchers have put forward proposals for a 30-PW device.

    Largely missing from the fray are U.S. scientists, who have fallen behind in the race to high powers, according to a study published last month by a National Academies of Sciences, Engineering, and Medicine group that was chaired by Bucksbaum. The study calls on the Department of Energy to plan for at least one high-power laser facility, and that gives hope to researchers at the University of Rochester in New York, who are developing plans for a 75-PW laser, the Optical Parametric Amplifier Line (OPAL). It would take advantage of beamlines at OMEGA-EP, one of the country’s most powerful lasers. “The [Academies] report is encouraging,” says Jonathan Zuegel, who heads the OPAL.

    Invented in 1960, lasers use an external “pump,” such as a flash lamp, to excite electrons within the atoms of a lasing material—usually a gas, crystal, or semiconductor. When one of these excited electrons falls back to its original state, it emits a photon, which in turn stimulates another electron to emit a photon, and so on. Unlike the spreading beams of a flashlight, the photons in a laser emerge in a tightly packed stream at specific wavelengths.

    Because power equals energy divided by time, there are basically two ways to maximize it: Either boost the energy of your laser, or shorten the duration of its pulses. In the 1970s, researchers at Lawrence Livermore National Laboratory (LLNL) in California focused on the former, boosting laser energy by routing beams through additional lasing crystals made of glass doped with neodymium. Beams above a certain intensity, however, can damage the amplifiers. To avoid this, LLNL had to make the amplifiers ever larger, many tens of centimeters in diameter. But in 1983, Gerard Mourou, now at the École Polytechnique near Paris, and his colleagues made a breakthrough. He realized that a short laser pulse could be stretched in time—thereby making it less intense—by a diffraction grating that spreads the pulse into its component colors. After being safely amplified to higher energies, the light could be recompressed with a second grating. The end result: a more powerful pulse and an intact amplifier.

    This “chirped-pulse amplification” has become a staple of high-power lasers. In 1996, it enabled LLNL researchers to generate the world’s first petawatt pulse with the Nova laser.

    LLNL Nova Laser

    Since then, LLNL has pushed to higher energies in pursuit of laser-driven fusion. The lab’s National Ignition Facility (NIF) creates pulses with a mammoth 1.8 megajoules of energy in an effort to heat tiny capsules of hydrogen to fusion temperatures. However, those pulses are comparatively long, and they still generate only about 1 PW of power.

    To get to higher powers, scientists have turned to the time domain: packing the energy of a pulse into ever-shorter durations. One approach is to amplify the light in titanium-doped sapphire crystals, which produce light with a large spread of frequencies. In a mirrored laser chamber, those pulses bounce back and forth, and the individual frequency components can be made to cancel each other out over most of their pulse length, while reinforcing each other in a fleeting pulse just a few tens of femtoseconds long. Pump those pulses with a few hundred joules of energy and you get 10 PW of peak power. That’s how the SULF and other sapphire-based lasers can break power records with equipment that fits in a large room and costs just tens of millions of dollars, whereas NIF costs $3.5 billion and needs a building 10 stories high that covers the area of three U.S. football fields.
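    The power arithmetic is easy to check. The energy and duration below are illustrative values consistent with the figures in the article (a few hundred joules in tens of femtoseconds), not SULF’s actual specifications:

    ```python
    # Peak power = pulse energy / pulse duration.
    energy_j = 300          # pulse energy, joules (assumed)
    duration_s = 30e-15     # pulse duration, 30 femtoseconds (assumed)

    peak_power_pw = energy_j / duration_s / 1e15
    print(round(peak_power_pw))  # 10 (petawatts)
    ```

    Squeezing the same few hundred joules into half the duration would double the peak power, which is why the race has shifted to ever-shorter pulses.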

    Raising pulse power by another order of magnitude, from 10 PW to 100 PW, will require more wizardry. One approach is to boost the energy of the pulse from hundreds to thousands of joules. But titanium-sapphire lasers struggle to achieve those energies because the big crystals needed for damage-free amplification tend to lase at right angles to the beam—thereby sapping energy from the pulses. So scientists at the SEL, XCELS, and OPAL are pinning their hopes on what are known as optical parametric amplifiers. These take a pulse stretched out by an optical grating and send it into an artificial “nonlinear” crystal, in which the energy of a second, “pump” beam can be channeled into the pulse. Recompressing the resulting high-energy pulse raises its power.

    To approach 100 PW, one option is to combine several such pulses—four 30-PW pulses in the case of the SEL and a dozen 15-PW pulses at the XCELS. But precisely overlapping pulses just tens of femtoseconds long will be “very, very difficult,” says LLNL laser physicist Constantin Haefner. They could be thrown off course by even the smallest vibration or change in temperature, he argues. The OPAL, in contrast, will attempt to generate 75 PW using a single beam.

    Mourou envisions a different route to 100 PW: adding a second round of pulse compression. He proposes using thin plastic films to broaden the spectrum of 10-PW laser pulses, then squeezing the pulses to as little as a couple of femtoseconds to boost their power to about 100 PW.

    Once the laser builders summon the power, another challenge will loom: bringing the beams to a singularly tight focus. Many scientists care more about intensity—the power per unit area—than the total number of petawatts. Achieve a sharper focus, and the intensity goes up. If a 100-PW pulse can be focused to a spot measuring just 3 micrometers across, as Li is planning for the SEL, the intensity in that tiny area will be an astonishing 10²⁴ watts per square centimeter (W/cm²)—some 25 orders of magnitude, or 10 trillion trillion times, more intense than the sunlight striking Earth.
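    That intensity figure follows directly from dividing power by focal-spot area; a quick check, treating the 3-micrometer figure as the spot diameter:

    ```python
    import math

    power_w = 1e17              # 100 PW expressed in watts
    spot_diameter_cm = 3e-4     # 3 micrometers, taken as the spot diameter

    area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
    intensity = power_w / area_cm2
    print(f"{intensity:.1e}")  # ~1.4e+24 W/cm^2
    ```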

    Those intensities will open the possibility of breaking the vacuum. According to the theory of quantum electrodynamics (QED), which describes how electromagnetic fields interact with matter, the vacuum is not as empty as classical physics would have us believe. Over extremely short time scales, pairs of electrons and positrons, their antimatter counterparts, flicker into existence, born of quantum mechanical uncertainty. Because of their mutual attraction, they annihilate one another almost as soon as they form.

    But a very intense laser could, in principle, separate the particles before they collide. Like any electromagnetic wave, a laser beam contains an electric field that whips back and forth. As the beam’s intensity rises, so, too, does the strength of its electric field. At intensities around 10²⁴ W/cm², the field would be strong enough to start to break the mutual attraction between some of the electron-positron pairs, says Alexander Sergeev, former director of the Russian Academy of Sciences’ (RAS) Institute of Applied Physics (IAP) in Nizhny Novgorod and now president of RAS. The laser field would then shake the particles, causing them to emit electromagnetic waves—in this case, gamma rays. The gamma rays would, in turn, generate new electron-positron pairs, and so on, resulting in an avalanche of particles and radiation that could be detected. “This will be completely new physics,” Sergeev says. He adds that the gamma ray photons would be energetic enough to push atomic nuclei into excited states, ushering in a new branch of physics known as “nuclear photonics”—the use of intense light to control nuclear processes.

    Amplifiers for the University of Rochester’s OMEGA-EP, lit up by flash lamps, could drive a U.S. high-power laser. UNIVERSITY OF ROCHESTER LABORATORY FOR LASER ENERGETICS/EUGENE KOWALUK

    One way to break the vacuum would be to simply focus a single laser beam onto an empty spot inside a vacuum chamber. But colliding two beams makes it easier, because this jacks up the momentum needed to generate the mass for electrons and positrons. The SEL would collide photons indirectly. First, the pulses would eject electrons from a helium gas target. Other photons from the laser beam would ricochet off the electrons and be boosted into high-energy gamma rays. Some of these in turn would collide with optical photons from the beam.

    Documenting these head-on photon collisions would itself be a major scientific achievement. Whereas classical physics insists that two light beams will pass right through each other untouched, some of the earliest predictions of QED stipulate that converging photons occasionally scatter off one another. “The predictions go back to the early 1930s,” says Tom Heinzl, a theoretical physicist at Plymouth University in the United Kingdom. “It would be good if we could confirm them experimentally.”

    Besides making lasers more powerful, researchers also want to make them shoot faster. The flash lamps that pump the initial energy into many lasers must be cooled for minutes or hours between shots, making it hard to carry out research that relies on plenty of data, such as investigating whether, very occasionally, photons transform into particles of the mysterious dark matter thought to make up much of the universe’s mass. “Chances are you would need a lot of shots to see that,” says Manuel Hegelich, a physicist at the University of Texas in Austin.

    A higher repetition rate is also key to using a high-power laser to drive beams of particles. In one scheme, an intense beam would transform a metal target into a plasma, liberating electrons that, in turn, would eject protons from nuclei on the metal’s surface. Doctors could use those proton pulses to destroy cancers—and a higher firing rate would make it easier to administer the treatment in small, individual doses.

    Physicists, for their part, dream of particle accelerators powered by rapid-fire laser pulses. When an intense laser pulse strikes a plasma of electrons and positive ions, it shoves the lighter electrons forward, separating the charges and creating a secondary electric field that pulls the ions along behind the light like water in the wake of a speedboat. This “laser wakefield acceleration” can accelerate charged particles to high energies in the space of a millimeter or two, compared with many meters for conventional accelerators. Electrons thus accelerated could be wiggled by magnets to create a so-called free-electron laser (FEL), which generates exceptionally bright and brief flashes of x-rays that can illuminate short-lived chemical and biological phenomena. A laser-powered FEL could be far more compact and cheaper than those powered by conventional accelerators.

    In the long term, electrons accelerated by high-repetition PW pulses could slash the cost of particle physicists’ dream machine: a 30-kilometer-long electron-positron collider that would be a successor to the Large Hadron Collider at CERN, the European particle physics laboratory near Geneva, Switzerland. A device based on a 100-PW laser could be at least 10 times shorter and cheaper than the roughly $10 billion machine now envisaged, says Stuart Mangles, a plasma physicist at Imperial College London.

    Both the linear collider and rapid-fire FELs would need thousands, if not millions, of shots per second, well beyond current technology. One possibility, being investigated by Mourou and colleagues, is to try to combine the output of thousands of quick-firing fiber amplifiers, which don’t need to be pumped with flash tubes. Another option is to replace the flash tubes with diode lasers, which are expensive, but could get cheaper with mass production.

    For the moment, however, Li’s group in China and its U.S. and Russian counterparts are concentrating on power. Efim Khazanov, a laser physicist at IAP, says the XCELS could be up and running by about 2026—assuming the government agrees to the cost: roughly 12 billion rubles (about $200 million). The OPAL, meanwhile, would be a relative bargain at between $50 million and $100 million, Zuegel says.

    But the first laser to rip open the vacuum is likely to be the SEL, in China. An international committee of scientists last July described the laser’s conceptual design as “unambiguous and convincing,” and Li hopes to get government approval for funding—about $100 million—early this year. Li says other countries need not feel left in the shadows as the world’s most powerful laser turns on, because the SEL will operate as an international user facility. Zuegel says he doesn’t “like being second,” but acknowledges that the Chinese group is in a strong position. “China has plenty of bucks,” he says. “And it has a lot of really smart people. It is still catching up on a lot of the technology, but it’s catching up fast.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

  • richardmitnick 10:12 am on January 23, 2018 Permalink | Reply
    Tags: Online tool calculates reproducibility scores of PubMed papers, Science

    From Science: “Online tool calculates reproducibility scores of PubMed papers” 

    Science Magazine

    Jan. 22, 2018
    Dalmeet Singh Chawla

    Scientific societies are seeking new tools to measure the reproducibility of published research findings, amid concerns that many cannot be reproduced independently. National Eye Institute, National Institutes of Health/Flickr (CC BY NC 2.0).

    A new online tool unveiled 19 January measures the reproducibility of published scientific papers by analyzing data about articles that cite them.

    The software comes at a time when scientific societies and journals are alarmed by evidence that findings in many published articles are not reproducible and are struggling to find reliable methods to evaluate whether they are.

    The tool, developed by the for-profit firm Verum Analytics in New Haven, Connecticut, generates a metric called the r-factor that indicates the veracity of a journal article based on the number of other studies that confirm or refute its findings. The r-factor metric has drawn much criticism from academics who said its relatively simple approach might not be sufficient to solve the multifaceted problem that measuring reproducibility presents.

    Early reaction to the new tool suggests that Verum has not fully allayed those concerns. The Verum developers concede the tool still has limitations; they said they released it to receive feedback about how well it works and how it could be improved. Verum has developed the project as a labor of love, and co-founder Josh Nicholson said he hopes the release of this early version will attract potential funders to help improve it.

    Verum announced the methodology underlying the tool, based on the r-factor, in a preprint posted on bioRxiv last August and refined it for the new tool. It relies solely on data from freely available research papers in the popular biomedical search engine PubMed.

    Nicholson and his colleagues developed the tool by first manually examining 48,000 excerpts of text in articles that cited other published papers. Verum’s workers classified each of these passages as either confirming, refuting, or mentioning the other papers. Verum then used these classifications to train an algorithm to autonomously recognize each kind of passage in papers outside this sample group.

    Based on a sample of about 10,000 excerpts, Verum’s developers claim their tool classifies passages correctly 93% of the time. But it detects mentioning citations much more precisely than confirming or refuting ones, which were much less common in their sample. The vast majority of articles mention previous studies without confirming or refuting their claims; only about 8% of all citations are confirmatory and only about 1% are refuting.

    The tool’s users can apply the algorithm by entering an article’s unique PubMed identifier code. The algorithm scours PubMed to find articles that cite the paper of interest and all passages that confirm, refute, or mention the paper. The tool then generates an r-factor score for the paper by dividing the number of confirming papers by the sum of the confirming and refuting papers.

    This formula tends to assign high scores, close to 1, to papers seldom refuted. The low number of refuting papers in Verum’s database means that many articles have r-factors of 1—which tends to limit the tool’s usefulness. (R-factors also carry a subscript indicating the total number of studies that attempted to replicate the paper—an r-factor of 1 with the subscript 16 means the tool scanned 16 replication studies.)
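    As described, the score reduces to a few lines of code. This is a sketch of the published formula only, not Verum’s actual implementation:

    ```python
    def r_factor(confirming: int, refuting: int):
        """Verum's r-factor: confirming citations divided by all
        replication attempts. Returns None if no study has tried to
        confirm or refute the paper."""
        attempts = confirming + refuting
        if attempts == 0:
            return None
        return confirming / attempts

    print(r_factor(16, 0))  # 1.0  (would be reported with the subscript 16)
    print(r_factor(7, 3))   # 0.7
    ```

    The edge case is the telling one: with refutations so rare in the literature, most papers collapse onto the uninformative score of 1.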

    Psychologist Christopher Chartier of Ashland University in Ohio, who developed an online platform that assists with the logistics of replication studies, tried the new tool at the request of ScienceInsider. “It appears to do what it claims to do, but I don’t find much value in the results,” he says. One reason, he says, is that r-factors may be skewed by a publication bias—where scholarly journals favorably publish positive results over negative results. “We simply can’t trust the published literature to be a reliable and valid indicator of a finding’s replicability,” Chartier said.

    “Attempting to estimate the robustness of a published research finding is notoriously difficult,” said Marcus Munafò, a biological psychologist at the University of Bristol in the United Kingdom, a key figure in tackling irreproducibility (Nature Human Behaviour). It’s difficult, he said, to know the precision or quality of individual confirmatory or refuting studies without reading them.

    Another limitation in Verum’s tool is that because it trawls only freely available papers on PubMed, it misses paywalled scholarly literature.

    Still, the Verum team will press on. Next on their agenda is to increase the number of sample papers used to train their algorithm to improve its accuracy in recognizing confirming and refuting papers.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

  • richardmitnick 6:27 am on November 2, 2017 Permalink | Reply
    Tags: Earth-sized alien worlds are out there. Now astronomers are figuring out how to detect life on them, Exobiology, NASA Deep Space Climate Observatory, NASA HabEx, Science

    From Science: “Earth-sized alien worlds are out there. Now, astronomers are figuring out how to detect life on them” 

    Science Magazine

    Nov. 1, 2017
    Daniel Clery

    Stephen Kane spends a lot of time staring at bad pictures of a planet. The images are just a few pixels across and nearly featureless. Yet Kane, an astronomer at the University of California, Riverside, has tracked subtle changes in the pixels over time. They are enough for him and his colleagues to conclude that the planet has oceans, continents, and clouds. That it has seasons. And that it rotates once every 24 hours.

    He knows his findings are correct because the planet in question is Earth.

    An image from the Deep Space Climate Observatory satellite (left), degraded to a handful of pixels (right), is a stand-in for how an Earth-like planet around another star might look through a future space telescope.

    Kane took images from the Deep Space Climate Observatory satellite, which has a camera pointing constantly at Earth from a vantage partway to the sun, and intentionally degraded them from 4 million pixels to just a handful.

    NASA Deep Space Climate Observatory

    The images are a glimpse into a future when telescopes will be able to just make out rocky, Earth-sized planets around other stars. Kane says he and his colleagues are trying to figure out “what we can expect to see when we can finally directly image an exoplanet.” Their exercise shows that even a precious few pixels can help scientists make the ultimate diagnosis: Does a planet harbor life?
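    The degradation step itself is simple block averaging: each output pixel is the mean of a large patch of the original image. The sketch below is a generic illustration of that idea, not the actual processing Kane’s group used:

    ```python
    import numpy as np

    def degrade(image: np.ndarray, blocks: int) -> np.ndarray:
        """Block-average a 2-D image down to blocks x blocks pixels,
        mimicking how a distant world would appear through a telescope
        that barely resolves it."""
        h, w = image.shape
        bh, bw = h // blocks, w // blocks
        trimmed = image[: bh * blocks, : bw * blocks]  # drop ragged edges
        return trimmed.reshape(blocks, bh, blocks, bw).mean(axis=(1, 3))

    # A 2000x2000-pixel "full disk" reduced to a 3x3 handful of pixels
    full_view = np.random.rand(2000, 2000)
    print(degrade(full_view, 3).shape)  # (3, 3)
    ```

    Tracking how those few averaged pixels brighten and dim as the planet rotates is what lets continents, oceans, and clouds be inferred without ever resolving them.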

    Finding conclusive evidence of life, or biosignatures, on a planet light-years away might seem impossible, given that space agencies have spent billions of dollars sending robot probes to much closer bodies that might be habitable, such as Mars and the moons of Saturn, without detecting even a whiff of life. But astronomers hope that a true Earth twin, bursting with flora and fauna, would reveal its secrets to even a distant observer.

    Detecting them won’t be easy, considering the meager harvest of photons astronomers are likely to get from such a tiny, distant world, its signal almost swamped by its much brighter nearby star. The new generation of space telescopes heading toward the launch pad, including NASA’s mammoth James Webb Space Telescope (JWST), has only an outside chance of probing an Earth twin in sufficient detail.

    NASA/ESA/CSA Webb Telescope annotated

    But they will be able to sample light from a range of other planets, and astronomers are already dreaming of a space telescope that might produce an image of an Earth-like planet as good as Kane’s pixelated views of Earth. To prepare for the coming flood of exoplanet data, and help telescope designers know what to look for, researchers are now compiling lists of possible biosignatures, from spectral hints of gases that might emanate from living things to pigments that could reside in alien plants or microbes.

    There is unlikely to be a single smoking gun. Instead, context and multiple lines of evidence will be key to a detection of alien life. Finding a specific gas—oxygen, say—in an alien atmosphere isn’t enough without figuring out how the gas could have gotten there. Knowing that the planet’s average temperature supports liquid water is a start, but the length of the planet’s day and seasons and its temperature extremes count, too. Even an understanding of the planet’s star is imperative, to know whether it provides steady, nourishing light or unpredictable blasts of harmful radiation.

    “Each [observation] will provide crucial evidence to piece together to say if there is life,” says Mary Voytek, head of NASA’s astrobiology program in Washington, D.C.

    In the heady early days following the discovery of the first exoplanet around a normal star in 1995, space agencies drew up plans for extremely ambitious—and expensive—missions to study Earth twins that could harbor life. Some concepts for NASA’s Terrestrial Planet Finder and the European Space Agency’s Darwin mission envisaged multiple giant telescopes flying in precise formation and combining their light to increase resolution. But neither mission got off the drawing board. “It was too soon,” Voytek says. “We didn’t have the data to plan it or build it.”

    Instead, efforts focused on exploring the diversity of exoplanets, using both ground-based telescopes and missions such as NASA’s Kepler spacecraft.

    NASA/Kepler Telescope

    Altogether they have identified more than 3500 confirmed exoplanets, including about 30 roughly Earth-sized worlds capable of retaining liquid water. But such surveys give researchers only the most basic physical information about the planets: their orbits, size, and mass. In order to find out what the planets are like, researchers need spectra: light that has passed through the planet’s atmosphere or been reflected from its surface, broken into its component wavelengths.

    Most telescopes don’t have the resolution to separate a tiny, dim planet from its star, which is at least a billion times brighter. But even if astronomers can’t see a planet directly, they can still get a spectrum if the planet transits, or passes in front of the star, in the course of its orbit. As the planet transits, starlight shines through its atmosphere; gases there absorb particular wavelengths and leave characteristic dips in the star’s spectrum.
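    Back-of-envelope arithmetic shows how small these dips are. A sketch for an Earth-Sun analog, using round textbook values rather than figures from the article:

```python
# Transit depth: the fraction of starlight blocked, (Rp / Rs)^2.
R_SUN = 6.957e8      # m
R_EARTH = 6.371e6    # m

depth = (R_EARTH / R_SUN) ** 2
print(f"Transit depth: {depth:.2e}")   # ~8.4e-05, i.e. about 84 ppm

# The atmosphere adds only a thin annulus of height h (Earth's scale
# height is ~8.5 km), so the spectral signal is smaller still:
h = 8.5e3  # m
atmo_signal = 2 * R_EARTH * h / R_SUN ** 2
print(f"Atmospheric signal: {atmo_signal:.2e}")  # ~2e-07, well below 1 ppm
```

A sub-part-per-million dip is why only the easiest targets have yielded atmospheric detections so far.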

    Astronomers can also study a transiting planet by observing the star’s light as the planet’s orbit carries it behind the star.

    Planet transit. NASA/Ames

    Before the planet is eclipsed, the spectrum will include both starlight and light reflected from the planet; afterward, the planet’s contribution will disappear. Subtracting the two spectra should reveal traces of the planet.
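    As a sketch, the secondary-eclipse technique amounts to differencing two measured spectra. The arrays below are synthetic stand-ins, not real calibrated data:

```python
import numpy as np

# Flux vs. wavelength just before eclipse (star + planet) and during
# eclipse (star alone). Real spectra would be noisy and need calibration.
wavelengths = np.linspace(1.0, 5.0, 5)             # microns (illustrative)
star = np.array([100.0, 98.0, 95.0, 93.0, 90.0])   # star-only spectrum
planet = np.array([0.01, 0.03, 0.02, 0.05, 0.04])  # faint planet contribution

before_eclipse = star + planet   # what the telescope sees out of eclipse
in_eclipse = star                # planet hidden behind the star

recovered = before_eclipse - in_eclipse
print(recovered)  # the planet's reflected/thermal spectrum
```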

    Teasing a recognizable signal from the data is far from easy. Because only a tiny fraction of the star’s light probes the atmosphere, the spectral signal is minuscule, and hard to distinguish from irregularities in the starlight itself and from absorption by Earth’s own atmosphere. Most scientists would be “surprised at how horrible the data is,” says exoplanet researcher Sara Seager of the Massachusetts Institute of Technology in Cambridge.

    In spite of those hurdles, the Hubble and Spitzer space telescopes, plus a few others, have used these methods to detect atmospheric gases, including sodium, water, carbon monoxide and dioxide, and methane, from a handful of the easiest targets.

    NASA/ESA Hubble Telescope

    NASA/Spitzer Infrared Telescope

    Most are “hot Jupiters”—big planets in close-in orbits, their atmospheres puffed up by the heat of their star.

    In an artist’s concept, a petaled starshade flying at a distance of tens of thousands of kilometers from a space telescope blocks a star’s light, opening a clear view of its planets. NASA/JPL.

    The approach will pay much greater dividends after the launch of the JWST in 2019. Its 6.5-meter mirror will collect far more light from candidate stars than existing telescopes can, allowing it to tease out fainter exoplanet signatures, and its spectrographs will produce much better data.


    And it will be sensitive to the infrared wavelengths where the absorption lines of molecules such as water, methane, and carbon monoxide and dioxide are most prominent.

    Once astronomers have such spectra, one of the main gases that they hope to find is oxygen. Not only does it have strong and distinctive absorption lines, but many believe its presence is the strongest sign that life exists on a planet.

    Oxygen-producing photosynthesis made Earth what it is today. First cyanobacteria in the oceans and then other microbes and plants have pumped out oxygen for billions of years, so that it now makes up 21% of the atmosphere—an abundance that would be easily detectable from afar. Photosynthesis is evolution’s “killer app,” says Victoria Meadows, head of the NASA-sponsored Virtual Planet Laboratory (VPL) at the University of Washington in Seattle. It uses a prolific source of energy, sunlight, to transform two molecules thought to be common on most terrestrial planets—water and carbon dioxide—into sugary fuel for multicellular life. Meadows reckons it is a safe bet that something similar has evolved elsewhere. “Oxygen is still the first thing to go after,” she says.
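    The net reaction Meadows describes is the familiar one from Earth's biology, written here for reference:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{sunlight}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```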

    Fifteen years ago, when exoplanets were new and researchers started thinking about how to scan them for life, “Champagne would have flowed” if oxygen had been detected, Meadows recalls. But since then, researchers have realized that things are not that simple: Lifeless planets can have atmospheres full of oxygen, and life can proliferate without ever producing the gas. That was the case on Earth, where, for 2 billion years, microbes practiced a form of photosynthesis that did not produce oxygen or many other gases. “We’ve had to make ourselves more aware of how we could be fooled,” Meadows says.

    To learn what a genuine biosignature might look like, and what might be a false alarm, Meadows and her colleagues at the VPL explore computer models of exoplanet atmospheres, based on data from exoplanets as well as observations of more familiar planets, including Earth. They also do physical experiments in vacuum chambers. They recreate the gaseous cocktails that may surround exoplanets, illuminate them with simulated starlight of various kinds, and see what can be measured.

    Over the past few years, VPL researchers have used such models to identify nonbiological processes that could make oxygen and produce a “false positive” signal. For example, a planet with abundant surface water might form around a star that, in its early years, surges in brightness, perhaps heating the young planet enough to boil off its oceans. Intense ultraviolet light from the star would bombard the resulting water vapor, perhaps splitting it into hydrogen and oxygen. The lighter hydrogen could escape into space, leaving an atmosphere rich in oxygen around a planet devoid of life. “Know thy star, know thy planet,” recites Siddharth Hegde of Cornell University’s Carl Sagan Institute.

    Discovering methane in the same place as oxygen, however, would strengthen the case for life. Although geological processes can produce methane, without any need for life, most methane on Earth comes from microbes that live in landfill sites and in the guts of ruminants. Methane and oxygen together make a redox pair: two molecules that will readily react by exchanging electrons. If they both existed in the same atmosphere, they would quickly combine to produce carbon dioxide and water. But if they persist at levels high enough to be detectable, something must be replenishing them. “It’s largely accepted that if you have redox molecules in large abundance they must be produced by life,” Hegde says.
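    The chemistry behind that argument is ordinary combustion: left to themselves, the two gases react away to carbon dioxide and water, so detecting both at once implies continuous resupply.

```latex
\mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}
```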

    Some argue that by focusing on oxygen and methane—typical of life on Earth—researchers are ignoring other possibilities. If there is one thing astronomers have learned about exoplanets so far, it is that familiar planets are a poor guide to exoplanets’ huge diversity of size and nature. And studies of extremophiles, microbes that thrive in inhospitable environments on Earth, suggest life can spring up in unlikely places. Exobiology may be entirely unlike its counterpart on Earth, and so its gaseous byproducts might be radically different, too.

    But what gases to look for? Seager and her colleagues compiled a list of 14,000 compounds that might exist as a gas at “habitable” temperatures, between the freezing and boiling points of water; to keep the list manageable they restricted it to small molecules, with no more than six nonhydrogen atoms. About 2500 are made of the biogenic atoms carbon, nitrogen, oxygen, phosphorus, sulfur, and hydrogen, and about 600 are actually produced by life on Earth. Detecting high levels of any of these gases, if they can’t be explained by nonbiological processes, could be a sign of alien biology, Seager and her colleagues argue.
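    The winnowing Seager and colleagues describe can be sketched as a filter over a molecule catalog. The records below are invented placeholders for illustration, not entries from their actual list:

```python
# Hypothetical catalog entries: (formula, atom counts excluding hydrogen).
molecules = [
    ("O2",    {"O": 2}),
    ("CH4",   {"C": 1}),
    ("N2O",   {"N": 2, "O": 1}),
    ("SiH4",  {"Si": 1}),            # small, but not biogenic composition
    ("C6F14", {"C": 6, "F": 14}),    # too many non-hydrogen atoms
]

BIOGENIC_ATOMS = {"C", "N", "O", "P", "S"}  # plus hydrogen, not counted here

def small_enough(atoms: dict) -> bool:
    """Keep molecules with at most six non-hydrogen atoms."""
    return sum(atoms.values()) <= 6

def biogenic_composition(atoms: dict) -> bool:
    """Made only of C, N, O, P, S (and hydrogen)."""
    return set(atoms) <= BIOGENIC_ATOMS

candidates = [f for f, atoms in molecules if small_enough(atoms)]
bio = [f for f, atoms in molecules
       if small_enough(atoms) and biogenic_composition(atoms)]
print(candidates)  # ['O2', 'CH4', 'N2O', 'SiH4']
print(bio)         # ['O2', 'CH4', 'N2O']
```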


    Light shining through the atmospheres of transiting exoplanets is likely to be the mainstay of biosignature searches for years to come. But the technique tends to sample the thin upper reaches of a planet’s atmosphere; far less starlight may penetrate the thick gases that hug the surface, where most biological activity is likely to occur. The transit technique also works best for hot Jupiters, which by nature are less likely to host life than small rocky planets with thinner atmospheres. The JWST may be able to tease out atmospheric spectra from small planets if they orbit small, dim stars like red dwarfs, which won’t swamp the planet’s spectrum. But these red dwarfs have a habit of spewing out flares that would make it hard for life to establish itself on a nearby planet.

    To look for signs of life on a terrestrial planet around a sunlike star, astronomers will probably have to capture its light directly, to form a spectrum or even an actual image. That requires blocking the overwhelming glare of the star. Ground-based telescopes equipped with “coronagraphs,” which precisely mask a star so nearby objects can be seen, can now capture only the biggest exoplanets in the widest orbits. To see terrestrial planets will require a similarly equipped telescope in space, above the distorting effect of the atmosphere. NASA’s Wide Field Infrared Survey Telescope (WFIRST), expected to launch in the mid-2020s, is meant to fill that need.


    Even better, WFIRST could be used in concert with a “starshade”—a separate spacecraft stationed 50,000 kilometers from the telescope that unfurls a circular mask tens of meters across to block out starlight. A starshade is more effective than a coronagraph at limiting the amount of light going into the telescope. It not only blocks the star directly, but also suppresses diffraction with an elaborate petaled edge. That reduces the stray scattered light that can make it hard to spot faint planets. A starshade is a much more expensive prospect than a coronagraph, however, and aligning telescope and starshade over huge distances will be a challenge.
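    The geometry behind those numbers can be checked with rough arithmetic. A 40-meter shade is an assumed stand-in for "tens of meters":

```python
import math

shade_diameter = 40.0     # m (assumed value)
separation = 50_000e3     # m, 50,000 km from the telescope

# Angle the shade subtends as seen from the telescope, in milliarcseconds.
angle_rad = shade_diameter / separation
angle_mas = math.degrees(angle_rad) * 3600e3
print(f"Shade subtends ~{angle_mas:.0f} mas")  # ~165 mas

# For comparison: an Earth-Sun separation (1 AU) viewed from 10 parsecs
# subtends about 100 mas, so an Earth analog sits just inside that angle.
AU = 1.496e11       # m
PARSEC = 3.086e16   # m
planet_sep_mas = math.degrees(AU / (10 * PARSEC)) * 3600e3
print(f"1 AU at 10 pc: ~{planet_sep_mas:.0f} mas")
```

The star hides behind the shade while nearby planets peek around its edge, which is why the two spacecraft must hold their alignment so precisely.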

    Direct imaging will provide much better spectra than transit observations because light will pass through the full depth of the planet’s atmosphere twice, rather than skimming through its outer edges. But it also opens up the possibility of detecting life directly, instead of through its waste gases in the atmosphere. If organisms, whether they are plants, algae, or other microbes, cover a large proportion of a planet’s surface, their pigments may leave a spectral imprint in the reflected light. Earthlight contains an obvious imprint of this sort. Known as the “red edge,” it is the dramatic change in the reflectance of green plants at a wavelength of about 720 nanometers. Below that wavelength, plants absorb as much light as possible for photosynthesis, reflecting only a few percent. At longer wavelengths, the reflectance jumps to almost 50%, and the brightness of the spectrum rises abruptly, like a cliff. “An alien observer could easily tell if there is life on Earth,” Hegde says.
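    A minimal sketch of how a red-edge-like jump could be flagged in a reflectance spectrum; the values are illustrative of the ~720-nanometer cliff described above, not measured data:

```python
# Reflectance of green vegetation at a few wavelengths (nm -> fraction).
# A few percent below ~720 nm, roughly 50% above it.
spectrum = {
    650: 0.05,
    680: 0.04,
    700: 0.06,
    740: 0.45,
    780: 0.50,
}

def red_edge_strength(spec: dict, edge_nm: float = 720.0) -> float:
    """Mean reflectance above the edge minus mean reflectance below it."""
    below = [r for wl, r in spec.items() if wl < edge_nm]
    above = [r for wl, r in spec.items() if wl >= edge_nm]
    return sum(above) / len(above) - sum(below) / len(below)

strength = red_edge_strength(spectrum)
print(f"Red-edge jump: {strength:.2f}")  # ~0.4, the cliff-like rise
```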

    There’s no reason to assume that alien life will take the form of green plants. So Hegde and his colleagues are compiling a database of reflectance spectra for different types of microbes. Among the hundreds the team has logged are many extremophiles, which fill marginal niches on Earth but may be a dominant life form on an exoplanet. Many of the microbes on the list have not had their reflectance spectra measured, so the Cornell team is filling in those gaps. Detecting pigments on an exoplanet surface would be extremely challenging. But a tell-tale color in the faint light of a distant world could join other clues—spectral absorption lines from atmospheric gases, for example—to form “a jigsaw puzzle which overall gives us a picture of the planet,” Hegde says.

    None of the telescopes available now or in the next decade is designed specifically to directly image exoplanets, so biosignature searches must compete with other branches of astronomy for scarce observing time. What researchers really hanker after is a large space telescope purpose-built to image Earth-like alien worlds—a new incarnation of the idea behind NASA’s ill-fated Terrestrial Planet Finder.

    The Habitable Exoplanet Imaging Mission, or HabEx, a mission concept now being studied by NASA, could be the answer. Its telescope would have a mirror up to 6.5 meters across—as big as the JWST’s—but would be armed with instruments sensitive to a broader wavelength range, from the ultraviolet to the near-infrared, to capture the widest range of spectral biosignatures. The telescope would be designed to reduce scattered light and have a coronagraph and starshade to allow direct imaging of Earth-sized exoplanets.

    Such a mission would reveal Earth-like planets at a level of detail researchers can now only dream about—probing atmospheres, revealing any surface pigments, and even delivering the sort of blocky surface images that Kane has been simulating. But will that be enough to conclude we are not alone in the universe? “There’s a lot of uncertainty about what would be required to put the last nail in the coffin,” Kane says. “But if HabEx is built according to its current design, it should provide a pretty convincing case.”

    NASA HabEx: The Planet Hunter

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 8:20 am on October 2, 2017 Permalink | Reply
    Tags: Science, University of Tübingen   

    From Science: “Sloshing, supersonic gas may have built the baby universe’s biggest black holes” 


    Sep. 28, 2017
    Joshua Sokol

    Supermassive black holes a billion times heavier than the sun are too big to have formed conventionally. NASA Goddard Space Flight Center

    A central mystery surrounds the supermassive black holes that haunt the cores of galaxies: How did they get so big so fast? Now, a new, computer simulation–based study suggests that these giants were formed and fed by massive clouds of gas sloshing around in the aftermath of the big bang.

    “This really is a new pathway,” says Volker Bromm, an astrophysicist at the University of Texas in Austin who was not part of the research team. “But it’s not … the one and only pathway.”

    Astronomers know that, when the universe was just a billion years old, some supermassive black holes were already a billion times heavier than the sun. That’s much too big for them to have been built up through the slow mergers of small black holes formed in the conventional way, from collapsed stars a few dozen times the mass of the sun. Instead, the prevailing idea is that these behemoths had a head start. They could have condensed directly out of seed clouds of hydrogen gas weighing tens of thousands of solar masses, and grown from there by gravitationally swallowing up more gas. But the list of plausible ways for these “direct-collapse” scenarios to happen is short, and each option requires a perfect storm of circumstances.

    For theorists tinkering with computer models, the trouble lies in getting a massive amount of gas to pile up long enough to collapse all at once, into a vortex that feeds a nascent black hole like water down a sink drain. If any parts of the gas cloud cool down or clump up early, they will fragment and coalesce into stars instead. Once formed, radiation from the stars would blow away the rest of the gas cloud.

    Computer models show how supersonic streams of gas coalesce around nuggets of dark matter—forming the seed of a supermassive black hole. Shingo Hirano

    One option, pioneered by Bromm and others, is to bathe a gas cloud in ultraviolet light, perhaps from stars in a next-door galaxy, and keep it warm enough to resist clumping. But having a galaxy close enough to provide that service would be quite the coincidence.

    The new study proposes a different origin. Both the early universe and the current one are composed of familiar matter like hydrogen, plus unseen clumps of dark matter.

    Dark Matter Research

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    Dark matter cosmic web and the large-scale structure it forms The Millennium Simulation, V. Springel et al

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LUX Dark matter Experiment at SURF, Lead, SD, USA

    ADMX Axion Dark Matter Experiment, U Washington

    Today, these two components move in sync. But very early on, normal matter may have sloshed back and forth at supersonic speeds across a skeleton provided by colder, more sluggish dark matter. In the study, published today in Science, simulations show that where these surges were strong, and crossed the path of heavy clumps of dark matter, the gas resisted premature collapse into stars and instead flowed into the seed of a supermassive black hole. These scenarios would be rare, but would still roughly match the number of supermassive black holes seen today, says Shingo Hirano, an astrophysicist at the University of Texas and lead author of the study.

    Priya Natarajan, an astrophysicist at Yale University, says the new simulation represents important computational progress. But because it would have taken place at a very distant, early moment in the history of the universe, it will be difficult to verify. “I think the mechanism itself in detail is not going to be testable,” she says. “We will never see the gas actually sloshing and falling in.”

    But Bromm is more optimistic, especially if such direct-collapse black hole seeds also formed slightly later in the history of the universe. He, Natarajan, and other astronomers have been looking for these kinds of baby black holes, hoping to confirm that they do, indeed, exist and then trying to work out their origins from the downstream consequences.

    In 2016, they found several candidates, which seem to have formed through direct collapse and are now accreting matter from clouds of gas. And earlier this year, astronomers showed that the early, distant universe is missing the glow of x-ray light that would be expected from a multitude of small black holes—another sign favoring the sudden birth of big seeds that go on to be supermassive black holes. Bromm is hopeful that upcoming observations will provide more definite evidence, along with opportunities to evaluate the different origin theories. “We have these predictions, we have the signatures, and then we see what we find,” he says. “So the game is on.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 7:09 am on September 27, 2017 Permalink | Reply
    Tags: After lengthy campaign, Australia gets its own space agency, Science   

    From Science: “After lengthy campaign, Australia gets its own space agency” 


    Sep. 25, 2017
    Leigh Dayton

    Australia will establish a space agency to pursue commercial and research activities. inefekt69

    Australia was the third nation after the United States and the USSR to build and launch a satellite from its own rocket range. But after the Weapons Research Establishment Satellite (WRESAT) took to the skies on 29 November 1967, the country’s space efforts dwindled. Australia’s last microsatellite—launched from a Japanese facility—died in 2007. Along with Iceland, Australia was one of only two Organisation for Economic Co-operation and Development nations without a space agency.

    But that’s about to change. The government announced today at the 68th International Astronautical Congress in Adelaide, Australia, that it will establish a national space agency.

    The decision caps a yearlong campaign to boost Australia’s space efforts, led by groups from universities, industry, and government bodies. “The creation of an Australian space agency is very exciting news,” says Michael Brown, a Melbourne, Australia–based Monash University astronomer.

    “The establishment of an Australian Space Agency is a strong nod of support for the current space sector in Australia,” says astronomer and astrophysicist Lee Spitler of Macquarie University here. He adds that what is left of the country’s space industry operates as a “grassroots movement across a small number of companies, university groups, and the defense sector.”

    Australia depends heavily on foreign-built or foreign-operated satellites for communications, remote sensing, and astronomical research. Its share of the $330 billion global space economy is only 0.8%.

    Despite persistent calls for a national space agency, the current government took no steps until last July, when Arthur Sinodinos, the federal minister for industry, innovation and science, set up an expert review group to study the country’s space industry capabilities. To date, the group has received nearly 200 written submissions and held meetings across the country.

    Facing calls for action last week from the participants at the Adelaide meeting, Acting Industry Minister Michaelia Cash announced that the working group will develop a charter for the space agency that will be included in a wider space industry strategy.

    It is about time, says astronomer Alan Duffy at Swinburne University of Technology in Melbourne: “These announcements come at a special anniversary. It’s 50 years since the launch of WRESAT.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 11:07 am on September 25, 2017 Permalink | Reply
    Tags: Double-blind peer review, Nature Publishing Group (NPG) in London, Science   

    From Science: “Few authors choose anonymous peer review, massive study of Nature journals shows” 


    Sep. 22, 2017
    Martin Enserink

    Scientists from India and China far more often ask Nature’s journals for double-blind peer review than those from Western countries. Emily Petersen

    Once you’ve submitted your paper to a journal, how important is it that the reviewers know who wrote it?

    Surveys have suggested that many researchers would prefer anonymity because they think it would result in a more impartial assessment of their manuscript. But a new study by the Nature Publishing Group (NPG) in London shows that only one in eight authors actually chose to have their reviewers blinded when given the option. The study, presented here at the Eighth International Congress on Peer Review, also found that papers submitted for double-blind review are far less likely to be accepted.

    Most papers are reviewed in single-blind fashion—that is, the reviewers know who the authors are, but not vice versa. In theory, that knowledge allows them to exercise a conscious or unconscious bias against researchers from certain countries, ethnic minorities, or women, and be kinder to people who are already well-known in their field. Double-blind reviews, the argument goes, would remove those prejudices. A 2007 study of Behavioral Ecology found that the journal published more articles by female authors when using double-blind reviews—although that conclusion was challenged by other researchers a year later. In a survey of more than 4000 researchers published in 2013, three-quarters said they thought double-blind review is “the most effective method.”

    But that approach also has drawbacks. Journals have checklists for authors on how to make a manuscript anonymous by avoiding phrases like “we previously showed” and by removing certain types of meta-information from computer files—but some researchers say they find it almost impossible to ensure complete anonymity.

    “If I am going to remove every trace that could identify myself and my coauthors there wouldn’t be much left of the paper,” music researcher Alexander Jensenius from the University of Oslo wrote on his blog. Indeed, experience shows that reviewers can sometimes tell who wrote a paper, based on previous work or other information. At Conservation Biology, which switched to double-blind reviews in 2014, reviewers who make a guess get it right about half of the time, says the journal’s editor, Mark Burgman of Imperial College London. “But that’s not the end of the world,” he says. Double-blind review, he says, “sends a message that you’re determined to try and circumvent any unconscious bias in the review process.”

    In 2013 NPG began offering its authors anonymous peer review as an option for two journals, Nature Geoscience and Nature Climate Change. Only one in five authors requested it, Nature reported 2 years later—far less than editors had expected. But the authors’ reactions were so positive that NPG decided to expand the option to all of its journals.

    At the peer review congress last week, NPG’s Elisa De Ranieri presented data on 106,373 submissions to the group’s 25 Nature-branded journals between March 2015 and February 2017. In only 12% of cases did the authors opt for double-blind review. They chose double-blind reviews most often for papers in the group’s most prestigious journal, Nature (14%), compared to 12% for Nature “sister journals” and 9% for the open-access journal Nature Communications.

    The data suggest that concerns about possible discrimination may have been a factor. Some 32% of Indian authors and 22% of Chinese authors opted for double-blind review, compared with only 8% of authors from France and 7% from the United States. The option was also more popular among researchers from less prestigious institutes, based on their 2016 Times Higher Education rankings. There was no difference in the choices of men and women, De Ranieri noted, a finding that she called surprising.

    Burgman suspects that the demand for double-blind review is suppressed by fears that it could backfire on the author. “There’s the idea that if you go double blind, you have something to hide,” he says. That may also explain why women were not more likely to demand double blind reviews than men, he says. Burgman says he thinks making double-blind reviews the standard, as Conservation Biology has done, is the best course. “It has not markedly changed the kind or numbers of submissions we receive,” he says. “But we do get informal feedback from a lot of people who say: ‘This is a great thing.’”

    Authors choosing double-blind review in hope of improving their chances of success will be disappointed by the Nature study. Only 8% of those papers were actually sent out for review after being submitted, compared to 23% of those opting for single-blind review. (Nature’s editors decide whether to send a paper for review or simply reject it, and the editors know the identity of the authors.) And only 25% of papers under double-blind review were eventually accepted, versus 44% for papers that went the single-blind route.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
