Tagged: WIRED

  • richardmitnick 8:47 am on July 26, 2020 Permalink | Reply
    Tags: Quantum gravity (theories that seek to unify Albert Einstein’s general theory of relativity with quantum mechanics), WIRED

    From WIRED: “Looking for Gravitons? Check for the ‘Buzz’” 


    From WIRED

    07.26.2020
    Thomas Lewton


    If gravity plays by the rules of quantum mechanics, particles called gravitons should gingerly jostle ordinary objects. Video: Alexander Dracott/Quanta Magazine

    MANY PHYSICISTS ASSUME that gravitons exist, but few think that we will ever see them. These hypothetical elementary particles are a cornerstone of theories of quantum gravity, which seek to unify Albert Einstein’s general theory of relativity with quantum mechanics. But they are notoriously hard—perhaps impossible—to observe in nature.

    The world of gravitons only becomes apparent when you zoom in to the fabric of space-time at the smallest possible scales, which requires a device that can harness truly extreme amounts of energy. Unfortunately, any measuring device capable of directly probing down to this “Planck length” would necessarily be so massive that it would collapse into a black hole. “It appears that Nature conspires to forbid any measurement of distance with error smaller than the Planck length,” said Freeman Dyson, the celebrated theoretical physicist, in a 2013 talk presenting a back-of-the-envelope calculation of this limit.
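
    Dyson’s limit is set by the Planck length itself, which follows from three fundamental constants. As a quick check of the scale he was talking about (a standard textbook calculation, not Dyson’s full argument):

    ```python
    import math

    # Standard constants (SI units, approximate values)
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
    c = 2.99792458e8         # speed of light, m/s

    # Planck length: the scale at which quantum gravity effects become important
    planck_length = math.sqrt(hbar * G / c**3)
    print(f"Planck length ~ {planck_length:.2e} m")   # roughly 1.6e-35 m
    ```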

    And so gravitons, according to conventional thinking, might only reveal themselves in the universe’s most extreme places: around the time of the Big Bang, or in the heart of black holes. “The problem with black holes is that they’re black, and so nothing comes out,” said Daniel Holz, an astrophysicist at the University of Chicago. “And the quantum gravity stuff is happening right at the center of this—so that’s too bad.”

    But recently published papers challenge this view, suggesting that gravitons may create observable “noise” in gravitational wave detectors such as LIGO, the Laser Interferometer Gravitational-Wave Observatory.

    Caltech/MIT Advanced LIGO detector installation, Livingston, LA, USA

    VIRGO gravitational wave interferometer, near Pisa, Italy

    “We’ve found that the quantum fuzziness of space-time is imprinted on matter as a kind of jitter,” said Maulik Parikh, a cosmologist at Arizona State University and a coauthor of one of the papers.

    And while it’s still unclear if existing or even future gravitational wave observatories have the sensitivity needed to detect this noise, these calculations have made the near-impossible at least plausible. By considering how gravitons interact with a detector en masse, they have given a solid theoretical footing to the idea of graviton noise—and taken physicists one step closer to an experimental proof that deep down, gravity plays by the rules of quantum mechanics.

    The Jitter of the Wave

    Dyson’s 2013 calculation convinced many people that gravitational wave detectors were, at best, impractical probes for learning about quantum gravity.

    “There’s a kind of default consensus that it’s a waste of time to think about quantum effects and gravitational radiation,” said Frank Wilczek, a Nobel Prize-winning physicist at MIT who was a coauthor with Parikh on the new paper. Indeed, neither Wilczek, Parikh, nor George Zahariade, a cosmologist at Arizona State and the third coauthor, took the possibility seriously until after the 2015 discovery of gravitational waves by LIGO [Physical Review Letters]. “There’s nothing like actual experimental results to focus the attention,” said Wilczek.

    Maulik Parikh, Frank Wilczek and George Zahariade (from left) calculated how gravitational wave detectors could find evidence for gravitons. Courtesy of Maulik Parikh; Katherine Taylor for Quanta Magazine; Ryan Rahn.

    Gravitons are thought to carry the force of gravity in a way that’s similar to how photons carry the electromagnetic force. Just as light rays can be pictured as a well-behaved collection of photons, gravitational waves—ripples in space-time created by violent cosmic processes—are thought to be made up of gravitons. With this in mind, the authors asked whether gravitational wave detectors are, in principle, sensitive enough to see gravitons. “That’s like asking, how can a surfer on a wave tell just from the motion that the wave is made up of droplets of water?” said Parikh.

    Unlike Dyson, whose broad-brush calculation focused on a single graviton, they considered the effects of many gravitons. “We were always inspired by Brownian motion,” said Parikh, referring to the random jiggle and shake of microscopic particles in a fluid. Einstein used Brownian motion to deduce the existence of atoms, which bombard the microscopic particles. In the same way, the collective behavior of many gravitons might subtly reshape a gravitational wave.

    Gravitational wave detectors can, at their simplest, be thought of as two masses separated by some distance. When a gravitational wave passes by, this distance will increase and decrease as the wave stretches and squashes the space between the masses. Add gravitons into the mix, however, and you add a new motion on top of the usual ripples in space-time. As the detector absorbs and emits gravitons, the masses randomly jitter. This is graviton noise. How big the jitter is, and thus whether it can be detected, ultimately depends on the type of gravitational wave hitting the detector.
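
    As a purely illustrative toy model (not the authors’ calculation), the detector output can be pictured as a smooth gravitational-wave strain with a tiny Brownian-style jitter superposed on it; every amplitude below is an arbitrary placeholder chosen only to make the idea concrete:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy picture: detector output = smooth gravitational-wave strain
    # plus a tiny random-walk "jitter" standing in for graviton noise.
    # All amplitudes here are arbitrary illustrative numbers, not physical values.
    t = np.linspace(0.0, 1.0, 4096)                     # one second of samples
    wave = 1e-21 * np.sin(2 * np.pi * 150 * t)          # coherent 150 Hz ripple
    jitter = np.cumsum(rng.normal(0.0, 1e-24, t.size))  # Brownian-like random walk

    strain = wave + jitter
    print("rms of smooth wave:", np.sqrt(np.mean(wave ** 2)))
    print("rms of toy jitter :", np.sqrt(np.mean(jitter ** 2)))
    ```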

    Gravitational fields exist in different “quantum states,” depending on how they were created. Most often, a gravitational wave is produced in a “coherent state,” which is akin to ripples on a pond. Detectors like LIGO are tuned to search for these conventional gravitational waves, which are emitted from black holes and neutron stars as they spiral around each other and collide.

    Next-generation gravitational wave detectors could be made of fleets of spacecraft. The LISA Pathfinder mission, shown here being prepared for its December 2015 launch, successfully tested the technologies that would be needed for these next-gen detectors. Photograph: P. BAUDON/ESA-CNES-Arianespace/Optique Vidéo du CSG.

    ESA/LISA Pathfinder

    ESA/NASA eLISA, a planned space-based observatory and the future of gravitational wave research

    Even coherent gravitational waves produce graviton noise, but—as Dyson also found—it’s far too small to ever measure. This is because the jitter created as the detector absorbs gravitons is “exquisitely balanced” with the jitter created when it emits gravitons, said Wilczek, who had hoped that their calculation would lead to a bigger noise for coherent states. “It was a little disappointing,” he said.

    Undeterred, Parikh, Wilczek and Zahariade examined several other types of gravitational waves that Dyson did not consider. They found that one quantum state in particular, called a squeezed state, produces a much more pronounced graviton noise. In fact, Parikh, Wilczek and Zahariade found that the noise increases exponentially the more the gravitons are squeezed.

    Their theoretical exploration suggested—against prevailing wisdom—that graviton noise is in principle observable. Moreover, detecting this noise would tell physicists about the exotic sources that might create squeezed gravitational waves. “They are thinking about it in a very serious way, and they’re approaching it in a precise language,” said Erik Verlinde, a theoretical physicist at the University of Amsterdam.

    “We always had this image that gravitons would bombard detectors in some way, and so there would be a little bit of jitter,” said Parikh. “But,” Zahariade added, “when we understood how this graviton noise term arises mathematically, it was a beautiful moment.”

    Erik Verlinde has co-authored a proposal to look for graviton noise directly in the bubbling vacuum of space-time. Photograph: Ilvy Njiokiktjien/Quanta Magazine.

    The calculations were worked out over three years and are summarized in a recent paper [The Noise of Gravitons]. The paper describing the complete set of calculations is currently under peer review.

    Yet while squeezed light is routinely made in the lab—including at LIGO—it’s still unknown whether squeezed gravitational waves exist. Wilczek suspects that the final stages of black hole mergers, where gravitational fields are very strong and changing rapidly, could produce this squeezing effect. Inflation—a period in the early universe when space-time expanded very rapidly—could also lead to squeezing. The authors now plan to build precise models of these cosmological events and the gravitational waves they emit.

    “This opens the door to very difficult calculations that are going to be a challenge to carry through to the end,” said Wilczek. “But the good news is that it gets really interesting and potentially realistic as an experimental target.”

    A Hologram Shake

    Rather than looking to quantum sources in the cosmos, other physicists hope to see graviton noise directly in the bubbling vacuum of space-time, where particles fleetingly pop into existence and then disappear. As they appear, these virtual particles cause space-time to gently warp around them, creating random fluctuations known as space-time foam.

    This quantum world might seem inaccessible to experiment. But it’s not—if the universe obeys the “holographic principle,” in which the fabric of space-time emerges in the same way that a 3D hologram pops out of a 2D pattern. If the holographic principle is true, quantum particles like the graviton live on the lower-dimensional surface and encode the familiar force of gravity in higher-dimensional space-time.

    In such a scenario, the effects of quantum gravity can be amplified into the everyday world of experiments like LIGO. Recent work by Verlinde and Kathryn Zurek, a theoretical physicist at the California Institute of Technology, proposes using LIGO or another sensitive interferometer to observe the bubbling vacuum that surrounds the instrument.

    In a holographic universe, the interferometer sits in higher-dimensional space-time, which is closely wrapped in a lower-dimensional quantum surface. Adding up the tiny fluctuations across the surface creates a noise that is big enough to be detected by the interferometer. “We’ve shown that the effects due to quantum gravity are not just determined by the Planck scale, but also by [the interferometer’s] scale,” said Verlinde.

    Kathryn Zurek emphasizes that it’s important for theoretical physicists to think outside the narrow range of what is conventional and acceptable, especially when unorthodox ideas can be connected to experiment. “The principles of quantum mechanics are kind of crazy when you think about it,” she said, “but it’s based on a postulate that gives rise to consequences, and so you can go and see if it describes nature.” Courtesy of Caltech.

    If their assumptions about the holographic principle hold true, graviton noise will become an experimental target for LIGO, or even for a tabletop experiment. In 2015 at the Fermi National Accelerator Laboratory, a tabletop experiment called the Holometer looked for evidence that the universe is holographic—and was found wanting. “The theoretical ideas at that time were very primitive,” said Verlinde, noting that the calculations in his paper with Zurek are grounded on the more in-depth holographic methods developed since then. If the calculations enable researchers to precisely predict what this graviton noise looks like, he thinks their odds of discovery are better—although still rather unlikely.

    Zurek and Verlinde’s approach will only work if our universe is holographic—a conjecture that is far from established. Describing their attitude as “more of a wild west mentality,” Zurek said, “It’s high risk and unlikely to succeed, but what the heck, we don’t understand quantum gravity.”

    Uncharted Territory

    By contrast, Parikh, Wilczek and Zahariade’s calculation is built on physics that few would disagree with. “We did a very conservative calculation, which is almost certainly correct,” said Parikh. “It essentially just assumes there exists something called the graviton and that gravity can be quantized.”

    But the trio acknowledge that more theoretical legwork must be done before it’s known whether current or planned gravitational wave detectors can discover graviton noise. “It would require several lucky breaks,” said Parikh. Not only must the universe harbor exotic sources that create squeezed gravitational waves, but the graviton noise must be distinguishable from the many other sources of noise that LIGO is already subject to.

    “So far, LIGO hasn’t shown any signs of physics that breaks with the predictions of Einstein’s general relativity,” said Holz, who is a member of the LIGO collaboration. “That’s where you start: General relativity is amazing.” Still, he acknowledges that gravitational wave detectors are our best hope for making new fundamental discoveries about the universe, because the terrain is “completely uncharted.”

    Wilczek argues that if researchers develop an understanding of what graviton noise might look like, gravitational wave detectors can be adjusted to improve the chances of finding it. “Naturally, people have been focusing on trying to fish out signals, and not worrying about the interesting properties of the noise,” said Wilczek. “But if you have that in mind, you would maybe design something different.” (Holz clarified that LIGO researchers have already studied some possible cosmic noise signals [Nature].)

    Despite the challenges ahead, Wilczek said he is “guardedly optimistic” that their work will lead to predictions that can be probed experimentally. In any case, he hopes the paper will spur other theorists to study graviton noise.

    “Fundamental physics is a hard business. You can famously write the whole thing on a T-shirt, and it’s hard to make additions or changes to that,” Wilczek said. “I don’t see how this is going to lead there directly, but it opens a new window on the world.

    “And then we’ll see what we see.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:47 am on July 5, 2020 Permalink | Reply
    Tags: "Astronomers Are Uncovering the Magnetic Soul of the Universe", , , , , , , WIRED   

    From Quanta Magazine via WIRED: “Astronomers Are Uncovering the Magnetic Soul of the Universe” 

    From Quanta Magazine

    via


    WIRED

    07.05.2020
    Natalie Wolchover

    Researchers are discovering that magnetic fields permeate much of the cosmos. If these fields date back to the Big Bang, they could solve a cosmological mystery.

    Hidden magnetic field lines stretch millions of light years across the universe. Illustration: Pauline Voß/Quanta Magazine.

    Anytime astronomers figure out a new way of looking for magnetic fields in ever more remote regions of the cosmos, inexplicably, they find them.

    These force fields—the same entities that emanate from fridge magnets—surround Earth, the sun, and all galaxies. Twenty years ago, astronomers started to detect magnetism permeating entire galaxy clusters, including the space between one galaxy and the next. Invisible field lines swoop through intergalactic space like the grooves of a fingerprint.

    Last year, astronomers finally managed to examine a far sparser region of space—the expanse between galaxy clusters. There, they discovered the largest magnetic field yet: 10 million light-years of magnetized space spanning the entire length of this “filament” of the cosmic web [Science]. A second magnetized filament has already been spotted elsewhere in the cosmos by means of the same techniques. “We are just looking at the tip of the iceberg, probably,” said Federica Govoni of the National Institute for Astrophysics in Cagliari, Italy, who led the first detection.

    The question is: Where did these enormous magnetic fields come from?

    “It clearly cannot be related to the activity of single galaxies or single explosions or, I don’t know, winds from supernovae,” said Franco Vazza, an astrophysicist at the University of Bologna who makes state-of-the-art computer simulations of cosmic magnetic fields. “This goes much beyond that.”

    One possibility is that cosmic magnetism is primordial, tracing all the way back to the birth of the universe. In that case, weak magnetism should exist everywhere, even in the “voids” of the cosmic web—the very darkest, emptiest regions of the universe. The omnipresent magnetism would have seeded the stronger fields that blossomed in galaxies and clusters.

    The cosmic web, shown here in a computer simulation, is the large-scale structure of the universe. Dense regions are filled with galaxies and galaxy clusters. Thin filaments connect these clumps. Voids are nearly empty regions of space. Illustration: Springel & others/Virgo Consortium.

    Primordial magnetism might also help resolve another cosmological conundrum known as the Hubble tension—probably the hottest topic in cosmology.

    The problem at the heart of the Hubble tension is that the universe seems to be expanding significantly faster than expected based on its known ingredients. In a paper posted online in April and under review with Physical Review Letters, the cosmologists Karsten Jedamzik and Levon Pogosian argue that weak magnetic fields in the early universe would lead to the faster cosmic expansion rate seen today.

    Primordial magnetism relieves the Hubble tension so simply that Jedamzik and Pogosian’s paper has drawn swift attention. “This is an excellent paper and idea,” said Marc Kamionkowski, a theoretical cosmologist at Johns Hopkins University who has proposed other solutions to the Hubble tension.

    Kamionkowski and others say more checks are needed to ensure that the early magnetism doesn’t throw off other cosmological calculations. And even if the idea works on paper, researchers will need to find conclusive evidence of primordial magnetism to be sure it’s the missing agent that shaped the universe.

    Still, in all the years of talk about the Hubble tension, it’s perhaps strange that no one considered magnetism before. According to Pogosian, who is a professor at Simon Fraser University in Canada, most cosmologists hardly think about magnetism. “Everyone knows it’s one of those big puzzles,” he said. But for decades, there was no way to tell whether magnetism is truly ubiquitous and thus a primordial component of the cosmos, so cosmologists largely stopped paying attention.

    Meanwhile, astrophysicists kept collecting data. The weight of evidence has led most of them to suspect that magnetism is indeed everywhere.

    The Magnetic Soul of the Universe

    In the year 1600, the English scientist William Gilbert’s studies of lodestones—naturally magnetized rocks that people had been fashioning into compasses for thousands of years—led him to opine that their magnetic force “imitates a soul.” He correctly surmised that Earth itself is a “great magnet,” and that lodestones “look toward the poles of the Earth.”

    Magnetic fields arise anytime electric charge flows. Earth’s field, for instance, emanates from its inner “dynamo,” the current of liquid iron churning in its core. The fields of fridge magnets and lodestones come from electrons spinning around their constituent atoms.


    Cosmological simulations illustrate two possible explanations for how magnetic fields came to permeate galaxy clusters. At left, the fields grow from uniform “seed” fields that filled the cosmos in the moments after the Big Bang. At right, astrophysical processes such as star formation and the flow of matter into supermassive black holes create magnetized winds that spill out from galaxies. Video: F. Vazza.

    However, once a “seed” magnetic field arises from charged particles in motion, it can become bigger and stronger by aligning weaker fields with it. Magnetism “is a little bit like a living organism,” said Torsten Enßlin, a theoretical astrophysicist at the Max Planck Institute for Astrophysics in Garching, Germany, “because magnetic fields tap into every free energy source they can hold onto and grow. They can spread and affect other areas with their presence, where they grow as well.”

    Ruth Durrer, a theoretical cosmologist at the University of Geneva, explained that magnetism is the only force apart from gravity that can shape the large-scale structure of the cosmos, because only magnetism and gravity can “reach out to you” across vast distances. Electricity, by contrast, is local and short-lived, since the positive and negative charge in any region will neutralize overall. But you can’t cancel out magnetic fields; they tend to add up and survive.

    Yet for all their power, these force fields keep low profiles. They are immaterial, perceptible only when acting upon other things. “You can’t just take a picture of a magnetic field; it doesn’t work like that,” said Reinout van Weeren, an astronomer at Leiden University who was involved in the recent detections of magnetized filaments.

    In their paper last year, van Weeren and 28 coauthors inferred the presence of a magnetic field in the filament between galaxy clusters Abell 399 and Abell 401 from the way the field redirects high-speed electrons and other charged particles passing through it. As their paths twist in the field, these charged particles release faint “synchrotron radiation.”

    The synchrotron signal is strongest at low radio frequencies, making it ripe for detection by LOFAR, an array of 20,000 low-frequency radio antennas spread across Europe.

    ASTRON LOFAR European Map

    The team actually gathered data from the filament back in 2014 during a single eight-hour stretch, but the data sat waiting as the radio astronomy community spent years figuring out how to improve the calibration of LOFAR’s measurements. Earth’s atmosphere refracts radio waves that pass through it, so LOFAR views the cosmos as if from the bottom of a swimming pool. The researchers solved the problem by tracking the wobble of “beacons” in the sky—radio emitters with precisely known locations—and correcting for this wobble to deblur all the data. When they applied the deblurring algorithm to data from the filament, they saw the glow of synchrotron emissions right away.
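
    A schematic of that correction (an illustrative toy, not LOFAR’s actual calibration pipeline): because the atmospheric wobble is shared by nearby sources, measuring it on a beacon whose true signal is known lets you subtract it from the science target.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy beacon-based calibration (not LOFAR's actual pipeline).
    # The ionosphere adds a slowly wandering phase error shared by nearby sources.
    n = 200
    iono_wobble = np.cumsum(rng.normal(0.0, 0.05, n))   # unknown atmospheric error

    beacon_true = np.zeros(n)          # the beacon's phase is known in advance
    target_true = 0.8 * np.ones(n)     # the faint signal we want to recover

    beacon_measured = beacon_true + iono_wobble
    target_measured = target_true + iono_wobble

    # The beacon's measured deviation from its known value *is* the wobble,
    # so subtracting it "deblurs" the target.
    target_corrected = target_measured - (beacon_measured - beacon_true)
    print("max residual error:", np.max(np.abs(target_corrected - target_true)))
    ```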

    LOFAR consists of 20,000 individual radio antennas spread across Europe. Photograph: ASTRON.

    The filament looks magnetized throughout, not just near the galaxy clusters that are moving toward each other from either end. The researchers hope that a 50-hour data set they’re analyzing now will reveal more detail. Additional observations have recently uncovered magnetic fields extending throughout a second filament. Researchers plan to publish this work soon.

    The presence of enormous magnetic fields in at least these two filaments provides important new information. “It has spurred quite some activity,” van Weeren said, “because now we know that magnetic fields are relatively strong.”

    A Light Through the Voids

    If these magnetic fields arose in the infant universe, the question becomes: how? “People have been thinking about this problem for a long time,” said Tanmay Vachaspati of Arizona State University.

    In 1991, Vachaspati proposed that magnetic fields might have arisen during the electroweak phase transition—the moment, a split second after the Big Bang, when the electromagnetic and weak nuclear forces became distinct. Others have suggested that magnetism materialized microseconds later, when protons formed. Or soon after that: The late astrophysicist Ted Harrison argued in the earliest primordial magnetogenesis theory in 1973 that the turbulent plasma of protons and electrons might have spun up the first magnetic fields. Still others have proposed that space became magnetized before all this, during cosmic inflation—the explosive expansion of space that purportedly jump-started the Big Bang itself. It’s also possible that it didn’t happen until the growth of structures a billion years later.

    The way to test theories of magnetogenesis is to study the pattern of magnetic fields in the most pristine patches of intergalactic space, such as the quiet parts of filaments and the even emptier voids. Certain details—such as whether the field lines are smooth, helical, or “curved every which way, like a ball of yarn or something” (per Vachaspati), and how the pattern changes in different places and on different scales—carry rich information that can be compared to theory and simulations. For example, if the magnetic fields arose during the electroweak phase transition, as Vachaspati proposed, then the resulting field lines should be helical, “like a corkscrew,” he said.

    The hitch is that it’s difficult to detect force fields that have nothing to push on.

    One method, pioneered by the English scientist Michael Faraday back in 1845, detects a magnetic field from the way it rotates the polarization direction of light passing through it. The amount of “Faraday rotation” depends on the strength of the magnetic field and the frequency of the light. So by measuring the polarization at different frequencies, you can infer the strength of magnetism along the line of sight. “If you do it from different places, you can make a 3D map,” said Enßlin.
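
    The underlying relation is that the polarization angle rotates in proportion to the wavelength squared, with the slope (the “rotation measure”) set by the electron density and magnetic field along the line of sight. A minimal sketch of such a fit, with made-up numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Faraday rotation: chi(lambda) = chi_0 + RM * lambda^2, where the rotation
    # measure RM encodes the line-of-sight magnetic field. Values are illustrative.
    rm_true = 12.0                                  # rad / m^2
    chi0_true = 0.3                                 # intrinsic angle, rad
    freqs = np.linspace(120e6, 180e6, 30)           # a LOFAR-like band, Hz
    lam2 = (2.99792458e8 / freqs) ** 2              # wavelength squared, m^2

    chi_obs = chi0_true + rm_true * lam2 + rng.normal(0.0, 0.01, freqs.size)

    # A straight-line fit of angle against lambda^2 recovers the rotation measure.
    rm_fit, chi0_fit = np.polyfit(lam2, chi_obs, 1)
    print(f"fitted RM ~ {rm_fit:.1f} rad/m^2 (true value {rm_true})")
    ```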

    Illustration: Samuel Velasco/Quanta Magazine.

    Researchers have started to make [MNRAS] rough Faraday rotation measurements using LOFAR, but the telescope has trouble picking out the extremely faint signal. Valentina Vacca, an astronomer and a colleague of Govoni’s at the National Institute for Astrophysics, devised an algorithm a few years ago for teasing out subtle Faraday rotation signals statistically, by stacking together many measurements of empty places. “In principle, this can be used for voids,” Vacca said.

    But the Faraday technique will really take off when the next-generation radio telescope, a gargantuan international project called the Square Kilometer Array, starts up in 2027. “SKA should produce a fantastic Faraday grid,” Enßlin said.

    For now, the only evidence of magnetism in the voids is what observers don’t see when they look at objects called blazars located behind voids.

    Blazars are bright beams of gamma rays and other energetic light and matter powered by supermassive black holes. As the gamma rays travel through space, they sometimes collide with ancient microwaves, morphing into an electron and a positron as a result. These particles then fizzle and turn into lower-energy gamma rays.

    But if the blazar’s light passes through a magnetized void, the lower-energy gamma rays will appear to be missing, reasoned Andrii Neronov and Ievgen Vovk of the Geneva Observatory in 2010. The magnetic field will deflect the electrons and positrons out of the line of sight. When they decay into lower-energy gamma rays, those gamma rays won’t be pointed at us.

    Illustration: Samuel Velasco/Quanta Magazine.

    Indeed, when Neronov and Vovk analyzed data from a suitably located blazar, they saw its high-energy gamma rays, but not the low-energy gamma-ray signal. “It’s the absence of a signal that is a signal,” Vachaspati said.

    A nonsignal is hardly a smoking gun, and alternative explanations for the missing gamma rays have been suggested. However, follow-up observations have increasingly pointed to Neronov and Vovk’s hypothesis that voids are magnetized. “It’s the majority view,” Durrer said. Most convincingly, in 2015, one team overlaid many measurements of blazars behind voids and managed to tease [Physical Review Letters] out a faint halo of low-energy gamma rays around the blazars. The effect is exactly what would be expected if the particles were being scattered by faint magnetic fields—measuring only about a millionth of a trillionth as strong as a fridge magnet’s.

    Cosmology’s Biggest Mystery

    Strikingly, this exact amount of primordial magnetism may be just what’s needed to resolve the Hubble tension—the problem of the universe’s curiously fast expansion.

    That’s what Pogosian realized when he saw recent computer simulations [Physical Review Letters] by Karsten Jedamzik of the University of Montpellier in France and a collaborator. The researchers added weak magnetic fields to a simulated, plasma-filled young universe and found that protons and electrons in the plasma flew along the magnetic field lines and accumulated in the regions of weakest field strength. This clumping effect made the protons and electrons combine into hydrogen—an early phase change known as recombination—earlier than they would have otherwise.

    Pogosian, reading Jedamzik’s paper, saw that this could address the Hubble tension. Cosmologists calculate how fast space should be expanding today by observing ancient light emitted during recombination. The light shows a young universe studded with blobs that formed from sound waves sloshing around in the primordial plasma. If recombination happened earlier than supposed due to the clumping effect of magnetic fields, then sound waves couldn’t have propagated as far beforehand, and the resulting blobs would be smaller. That means the blobs we see in the sky from the time of recombination must be closer to us than researchers supposed. The light coming from the blobs must have traveled a shorter distance to reach us, meaning the light must have been traversing faster-expanding space. “It’s like trying to run on an expanding surface; you cover less distance,” Pogosian said.

    The upshot is that smaller blobs mean a higher inferred cosmic expansion rate—bringing the inferred rate much closer to measurements of how fast supernovas and other astronomical objects actually seem to be flying apart.
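
    In back-of-the-envelope terms, the expansion rate inferred from the cosmic microwave background scales roughly as the inverse of the sound-horizon size, so shrinking the blobs by a few percent nudges the inferred rate toward the locally measured one. A sketch with illustrative numbers (the real argument requires a full cosmological fit):

    ```python
    # Back-of-the-envelope scaling, not a cosmological fit.
    # The CMB pins down the angular size of the sound horizon very precisely,
    # so the inferred Hubble constant scales roughly as 1 / r_s.
    H0_cmb_standard = 67.4   # km/s/Mpc, inferred with standard recombination
    H0_local = 73.0          # km/s/Mpc, from supernovae and other local probes

    for shrink in (0.00, 0.03, 0.06, 0.08):   # fractional reduction of r_s
        H0_inferred = H0_cmb_standard / (1.0 - shrink)
        print(f"sound horizon smaller by {shrink:4.0%} -> H0 ~ {H0_inferred:.1f}")
    # A sound horizon smaller by several percent closes most of the gap
    # between the two measurements.
    ```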

    “I thought, wow,” Pogosian said, “this could be pointing us to [magnetic fields’] actual presence. So I wrote Karsten immediately.” The two got together in Montpellier in February, just before the lockdown. Their calculations indicated that, indeed, the amount of primordial magnetism needed to address the Hubble tension also agrees with the blazar observations and the estimated size of initial fields needed to grow the enormous magnetic fields spanning galaxy clusters and filaments. “So it all sort of comes together,” Pogosian said, “if this turns out to be right.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:35 am on May 14, 2020 Permalink | Reply
    Tags: "A Secret Space Plane is Carrying a Solar Experiment to Orbit", , Space-based solar power is all about getting solar power to Earth no matter the weather or the time of day., The next step would be to develop an experimental space solar power satellite and actually send energy from orbit to Earth., The science of microwave power beaming is fully understood., WIRED   

    From WIRED: “A Secret Space Plane is Carrying a Solar Experiment to Orbit” 


    From WIRED

    05.14.2020
    Daniel Oberhaus

    The idea of beaming solar energy to Earth with radio waves is decades old. But this weekend, the technology gets its first test in orbit.

    Photograph: U.S. Department of Defense

    On Saturday, the US Air Force is expected to launch its secret space plane, X-37B, for a long-duration mission in low Earth orbit. The robotic orbiter looks like a smaller version of the space shuttle and has spent nearly eight of the past 10 years in space conducting classified experiments for the military. Almost nothing is known about what X-37B does up there, but ahead of its sixth launch the Air Force gave some rare details about its cargo.

    In addition to its usual suite of secret military tech, the X-37B will also host a few unclassified experiments during its upcoming sojourn in space. NASA is sending up two experiments to study the effects of radiation on seeds, and the US Air Force Academy is using the space plane to deploy a small research satellite. But the real star of the show is a small solar panel developed by the physicists at the Naval Research Lab that will be used to conduct the first orbital experiment with space-based solar power.

    “This is a major step forward,” says Paul Jaffe, an electronics engineer at the Naval Research Lab and lead researcher on the project. “This is the first time that any component geared towards a solar-powered satellite system has ever been tested in orbit.”

    Space-based solar power is all about getting solar power to Earth no matter the weather or the time of day. The basic idea is to convert the sun’s energy into microwaves and beam it down. Unlike terrestrial solar panels, satellites in a sufficiently high orbit might only experience darkness for a few minutes per day. If this energy could be captured, it could provide an inexhaustible source of power no matter where you are on the planet.
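
    A rough sense of the scale comes from a simple power budget; every area and efficiency below is an assumption made for illustration, not a figure from the NRL experiment:

    ```python
    # Illustrative power budget for a space solar concept.
    # All efficiencies and the collector area are assumptions for this sketch,
    # not parameters of the NRL experiment described in the article.
    solar_constant = 1361.0        # W/m^2 above the atmosphere
    area_m2 = 1000.0               # collector area (assumed)
    eff_pv = 0.30                  # sunlight -> electricity (assumed)
    eff_rf = 0.50                  # electricity -> microwaves (assumed)
    eff_rx = 0.50                  # beam capture + rectification on the ground (assumed)

    hours_of_sun_per_day = 23.9    # a high orbit is in shadow only minutes per day

    power_delivered_w = solar_constant * area_m2 * eff_pv * eff_rf * eff_rx
    energy_per_day_kwh = power_delivered_w * hours_of_sun_per_day / 1000.0
    print(f"continuous power ~ {power_delivered_w / 1000:.0f} kW")
    print(f"energy per day   ~ {energy_per_day_kwh:.0f} kWh")
    ```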

    It’s an idea that was cooked up by the science fiction writer Isaac Asimov in the 1940s; since then, beamed power experiments have been successfully tested several times on Earth. But the experiment on X-37B will be the first time the core technologies behind microwave solar power will be tested in orbit.

    “The science of microwave power beaming is fully understood; it is the engineering challenges of scaling known technology to a size never before seen on orbit that we need to progress,” says Ian Cash, the director of the International Electric Company Limited, which is developing a space solar platform called CASSIOPeiA. “But every endeavour must start with a first step.”

    The experiment built by Jaffe and his colleagues at NRL is what he calls a “sandwich” module. It’s a three-tiered system for converting sunlight into electricity and then converting the electricity into microwaves. Usually, the conversion system is sandwiched between a high-performance solar panel and the antenna that is used to transmit the energy. But for this mission, Jaffe and his colleagues won’t be radiating the energy from space to Earth, because the radio signal would interfere with other experiments on the space plane. Instead, the sandwich module will send the radio signals through a cable so researchers at NRL can study the power output from the system.

    The entire NRL experiment could fit in a pizza box and won’t produce enough energy to power a light bulb. But Jaffe says the experiment is a critical step toward a free-flying space-based power satellite. “There’s been a lot of work doing studies and analyses, and a lot less work on actual prototyping,” Jaffe says. “This isn’t necessarily the most refined version of what could be accomplished, but the main goal was to get up to space with a proof of concept.”

    Jaffe has been working on space-based solar power for more than a decade at NRL and first unveiled his sandwich module prototype in 2014. This design was meant to solve a number of challenges that have plagued space-based solar power research for years. One of the biggest problems with the concept is that the solar panels in orbit have to be massive to collect enough sunlight to be useful for applications on Earth. Even if these structures could be built in principle, they would be incredibly expensive and challenging to launch.

    “It would be too large and cumbersome to launch a completed system,” says Chris DePuma, an NRL electronics engineer and the program manager for the experiment. “The sandwich module is a way to reduce mass and modularize the system so it could be assembled in orbit.” But before robots start building giant solar farms in space, there are a number of fundamental issues with the panels themselves that need to be addressed.

    Jaffe says one of the hardest challenges has been thermal management. In space, the solar panel facing the sun may reach temperatures of up to 300 degrees Fahrenheit, while the electronics facing away from it must operate at just a few degrees above absolute zero. These electronics are just a few inches away from each other, so Jaffe and his colleagues had to figure out how to accommodate both extremes. Jaffe says this mainly involved swapping out materials and redesigning parts of the module so that the solar panel was isolated from the electronics, which operate better at lower temperatures. The upcoming X-37B mission will put this space-hardened version of the sandwich module to the test.

    Flying the test on the Air Force’s secret space plane came with some compromises. If this kind of experiment was implemented on a satellite, it would be placed in an orbit where sunlight would almost always be available. But the X-37B will be flying in low Earth orbit, which means that it will pass through the planet’s shadow roughly every 90 minutes. Still, DePuma says the benefits of flying on the space plane are worth the trade-offs. “We got to focus a lot more on our experiment, rather than having to design a propulsion system and all the other things a satellite has,” he says. “It will just collect our data and send it to us in periodic distributions.”
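
    That roughly 90-minute cadence is just the period of a circular low Earth orbit; a quick check (the 400 km altitude is an assumed, typical value, since the X-37B’s actual orbit is not public):

    ```python
    import math

    # Period of a circular orbit: T = 2 * pi * sqrt(a^3 / GM).
    # The 400 km altitude is an assumption; the X-37B's actual orbit is not public.
    GM_EARTH = 3.986004418e14   # m^3 / s^2
    R_EARTH = 6.371e6           # mean Earth radius, m
    altitude = 400e3            # m (assumed)

    a = R_EARTH + altitude
    period_s = 2 * math.pi * math.sqrt(a ** 3 / GM_EARTH)
    print(f"orbital period ~ {period_s / 60:.0f} minutes")   # about 92 minutes
    ```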

    If everything goes well, Jaffe says the next step would be to develop an experimental space solar power satellite and actually send energy from orbit to Earth. He acknowledged that this would require convincing the Department of Defense that the time and money are worth the effort. But the military is clearly interested in the technology. Last October, the Air Force Research Lab announced a $100 million program to develop hardware for a solar power satellite.

    Jaffe sees space-based solar power initially enabling some unique use cases, like drones that never have to land or around-the-clock power for remote military bases. But Cash sees even bigger things in store for the technology. “Space solar power solves the biggest challenge of scaling existing terrestrial renewables, that of storage,” he says. “With the dramatic cost reductions offered by reusable space launch, space solar power could well become the cheapest source of continuous carbon-free power.”

    Jaffe likes to compare the space-based solar power concept to GPS. If you told someone a few decades ago that a network of satellites loaded with atomic clocks would become the linchpin of modern society, they’d have thought you were nuts. But today, GPS guides everything from ride-share services to nuclear warheads. In fact, many of its most salient applications weren’t even imagined when the first GPS satellites were launched. Jaffe believes the same may turn out to be true for space-based solar power. Beaming solar energy from space to Earth sounds extravagant, esoteric, and borderline impossible—until it isn’t.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:44 am on April 6, 2020 Permalink | Reply
    Tags: "Why Do Matter Particles Come in Threes?", , , , , The Standard Model of particle physics, WIRED   

    From WIRED: “Why Do Matter Particles Come in Threes?” 


    From WIRED

    04.05.2020
    Charlie Wood

    Nobel Prize–winning physicist Steven Weinberg’s new paper tackles the mystery of why the laws of nature appear to have been composed in triplicate.

    Puzzlingly, the laws of nature appear to be composed in triplicate, with three copies of all matter particles, each heavier than the last but otherwise identical. Illustration: Lucy Reading-Ikkanda/Quanta Magazine.

    The universe has cooked up all sorts of bizarre and beautiful forms of matter, from blazing stars to purring cats, out of just three basic ingredients. Electrons and two types of quarks, dubbed “up” and “down,” mix in various ways to produce every atom in existence.

    But puzzlingly, this family of matter particles—the up quark, down quark, and electron—is not the only one. Physicists have discovered that they make up the first of three successive “generations” of particles, each heavier than the last. The second- and third-generation particles transform into their lighter counterparts too quickly to form exotic cats, but they otherwise behave identically. It’s as if the laws of nature were composed in triplicate. “We don’t know why,” said Heather Logan, a particle physicist at Carleton University.

    In the 1970s, when physicists first worked out the standard model of particle physics—the still reigning set of equations describing the known elementary particles and their interactions—they sought some deep principle that would explain why three generations of each type of matter particle exist. No one cracked the code, and the question was largely set aside. Now, though, the Nobel Prize–winning physicist Steven Weinberg, one of the architects of the standard model, has revived the old puzzle. Weinberg, who is 86 and a professor at the University of Texas, Austin, argued in a recent paper in the journal Physical Review D that an intriguing pattern in the particles’ masses could lead the way forward.

    “Weinberg’s paper is a bit of lightning in the dark,” said Anthony Zee, a theoretical physicist at the University of California, Santa Barbara. “All of a sudden a titan in the field is suddenly working again on these problems.”

    “I’m very happy to see that he thinks it’s important to revisit this problem,” said Mu-Chun Chen, a physicist at the University of California, Irvine. Many theorists are ready to give up, she said, but “we should still be optimistic.”

    The standard model does not predict why each particle has the mass that it does. Physicists measure these values experimentally and manually plug the results into the equations. Measurements show that the minuscule electron weighs 0.5 megaelectron volts (MeV), while its second- and third-generation counterparts, called the muon and the tau particle, tip the scales at 105 and 1,776 MeV, respectively. Similarly, the first-generation up and down quarks are relative lightweights, while the “charm” and “strange” quarks comprising the second quark generation are middleweights, and the “top” and “bottom” quarks of the third generation are heavy, the top weighing a monstrous 173,210 MeV.

    The spread in the masses is vast. When physicists squint, though, they see a tantalizing structure in where the masses fall. The particles cluster into somewhat evenly spaced generations: The third-generation particles all weigh thousands of MeV, second-generation particles weigh roughly hundreds of MeV, and first-generation particles come in at around an MeV each. “As you go each level down, they get exponentially lighter,” says Patrick Fox, a particle physicist at the Fermi National Accelerator Laboratory in Illinois.
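
    The masses quoted above make that spacing concrete; the generation-to-generation ratios are the pattern model builders are trying to explain (values in MeV, as given in the text):

    ```python
    # Masses quoted in the article, in MeV.
    m_electron, m_muon, m_tau = 0.5, 105.0, 1776.0
    m_top = 173210.0

    # Ratios between successive charged-lepton generations, and up to the top quark.
    print("tau / muon      ~", round(m_tau / m_muon))        # ~17
    print("muon / electron ~", round(m_muon / m_electron))   # ~210
    print("top / tau       ~", round(m_top / m_tau))         # ~98
    # Each generation sits one to two orders of magnitude below the one above it,
    # the rough exponential spacing described in the text.
    ```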

    In the equations of the standard model, the mass of each particle corresponds to the degree to which it “feels” a universe-filling field known as the Higgs field. Top quarks are heavy because they experience intense drag as they move through the Higgs field, like a fly stuck in honey, while wispy electrons flit through it like butterflies in air. In this framework, how each particle feels the field is an intrinsic attribute of the particle.

    The Standard Model of particle physics includes three copies of each type of matter particle, which form the quadrants of the outer ring of the diagram. Illustration: Lucy Reading-Ikkanda/Quanta Magazine.

    In the heady days of the standard model’s youth, explaining where these attributes came from was seen as the next logical step. Zee recalls asking his then-graduate student Stephen Barr to calculate the mass of the electron as his doctoral project—a task Weinberg’s recent paper struggles with today, more than 40 years later. Barr and Zee published a rough idea in 1978 [Physical Review D], but string theory exploded onto the scene just a few years later, Zee says, sweeping away such efforts.

    Barr and Zee’s main idea, partly inspired by Weinberg’s earlier works, was to follow the mass. Compared with the top quark’s ponderous bulk, the masses of the electron and other particles look like rounding errors. Perhaps that’s because they are. Barr and Zee suggested that only the heft of the heavier particles is fundamental in some sense.

    A 2008 theory by Fox and Bogdan Dobrescu of Fermilab [JHEP] picked up where they left off. The top quark’s mass happens to be roughly the same as the average energy of the Higgs field, so Fox and Dobrescu assumed that only the top quark slogs through the field in the standard way. “The top is clearly special in some regard,” Fox said.

    The other particles experience the Higgs field indirectly. This is possible because quantum mechanical uncertainty allows particles to materialize for brief moments. These fleeting apparitions form clouds of “virtual” particles around more permanent entities. When virtual top quarks crowd around a (second-generation) muon, for example, they could expose the muon to the Higgs field by means of a mutual interaction with a new theoretical particle, giving the muon a bit of mass. But because the exposure is indirect, the particle stays much lighter than the top.

    A second round of this game of quantum telephone makes the first-generation electron lighter again by a similar factor, explaining the rough generational spacing of thousands, hundreds, and a few MeV of mass. (The lightest particles of all, neutrinos, also come in three generations. But they act so differently from the other fundamental massive particles that they don’t fit into such schemes.)

    Weinberg’s recent publication considers a whole variety of ways this telephone game could work. He grants the ability to feel the Higgs field to the entire third generation of matter particles—that is, the top quark, bottom quark and tau particle. Mass trickles down to the second and first generations from there via interactions with exotic virtual particles.

    Weinberg’s and Fox and Dobrescu’s attempts both fall short, however. The latter two ended up increasing (rather than decreasing) the number of unexplained constants in the standard model in order to account for the three-generation particle masses. Weinberg’s proposal gets the relationships between certain masses wrong and fails to describe how higher-generation particles can transform into lower-generation ones (the phenomenon that explains why we don’t see atoms made of second- or third-generation particles). Weinberg was not available to discuss his work, but Fox suggests that Weinberg likely wrote the paper to encourage newcomers to take up the challenge and to flag the problems they’re bound to run into.

    Fox sees these hurdles not as fatal blows, but as signs that the theories need more tweaking. “Nature is never exactly how you imagine it at first pass,” he said. “You have some beautiful idea and it sort of gets you 80 percent of the way there.”

    Others aren’t convinced that singling out the third generation and massaging temporary clouds of particles is the right path in the first place. “It seems rather ad hoc, because it’s something you put in by hand,” Chen said. She hopes to explain the three generations by embedding the standard model within a larger framework like string theory. One model she studies reduces the number of fundamental mass values by adding several new Higgs-like fields to the universe, although the exotic particles associated with these hypothesized fields are far too heavy to search for with Europe’s Large Hadron Collider.

    The only solid evidence that could support or distinguish between theories of the matter particles’ masses would be the discovery of the various exotic particles each predicts. The Large Hadron Collider hasn’t seen any, but Fox hasn’t entirely lost hope that the phantasms could someday show up. He believes that experiments probing rare particle transformations, such as the muon-to-electron decay that Fermilab’s Mu2e experiment will study when it goes online this year, have the best chance of indirectly detecting the meddling particles and shaking the standard model.

    “We don’t know if any of this makes sense,” he said. “We’ll have to wait and see.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:57 am on March 8, 2020 Permalink | Reply
    Tags: "A Computer Science Proof Holds Answers for Math and Physics", , , , Game Show Physics, , The commuting operator model of entanglement, The computer researchers: Henry Yuen the University of Toronto a; Zhengfeng Ji the University of Technology Sydney; Anand Natarajan and Thomas Vidick Caltech; John Wright the University of Texas, The Connes embedding conjecture, The correspondence between entanglement and computing came as a jolt to many researchers., The problems that can be verified through interactions with entangled quantum provers called MIP* equals the class of problems no harder than the halting problem a class called RE. “MIP*=RE.”, The tensor product model, WIRED   

    From WIRED: “A Computer Science Proof Holds Answers for Math and Physics” 


    From WIRED

    03.08.2020
    Kevin Hartnett

    An advance in our understanding of quantum computing offers stunning solutions to problems that have long puzzled mathematicians and physicists.

    In 1935, Albert Einstein, working with Boris Podolsky and Nathan Rosen, grappled with a possibility revealed by the new laws of quantum physics: that two particles could be entangled, or correlated, even across vast distances.

    The very next year, Alan Turing formulated the first general theory of computing and proved that there exists a problem that computers will never be able to solve.

    These two ideas revolutionized their respective disciplines. They also seemed to have nothing to do with each other. But now a landmark proof has combined them while solving a raft of open problems in computer science, physics, and mathematics.

    The new proof establishes that quantum computers that calculate with entangled quantum bits or qubits, rather than classical 1s and 0s, can theoretically be used to verify answers to an incredibly vast set of problems. The correspondence between entanglement and computing came as a jolt to many researchers.

    “It was a complete surprise,” said Miguel Navascués, who studies quantum physics at the Institute for Quantum Optics and Quantum Information in Vienna.

    The proof’s co-authors set out to determine the limits of an approach to verifying answers to computational problems. That approach involves entanglement. By finding that limit the researchers ended up settling two other questions almost as a byproduct: Tsirelson’s problem in physics, about how to mathematically model entanglement, and a related problem in pure mathematics called the Connes embedding conjecture.

    In the end, the results cascaded like dominoes.

    “The ideas all came from the same time. It’s neat that they come back together again in this dramatic way,” said Henry Yuen of the University of Toronto and an author of the proof, along with Zhengfeng Ji of the University of Technology Sydney, Anand Natarajan and Thomas Vidick of the California Institute of Technology, and John Wright of the University of Texas, Austin. The five researchers are all computer scientists.

    Undecidable Problems

    Turing defined a basic framework for thinking about computation before computers really existed. In nearly the same breath, he showed that there was a certain problem computers were provably incapable of addressing. It has to do with whether a program ever stops.

    Typically, computer programs receive inputs and produce outputs. But sometimes they get stuck in infinite loops and spin their wheels forever. When that happens at home, there’s only one thing left to do.

    “You have to manually kill the program. Just cut it off,” Yuen said.

    Turing proved that there’s no all-purpose algorithm that can determine whether a computer program will halt or run forever. You have to run the program to find out.
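
    The standard way to see why (a textbook diagonalization sketch, not taken from the article): if a universal halting checker existed, you could build a program that does the opposite of whatever the checker predicts about the program itself.

    ```python
    # Textbook sketch of Turing's argument; `halts` is a hypothetical oracle,
    # not something that can actually be implemented.
    def halts(program, argument) -> bool:
        """Pretend oracle: True if program(argument) would eventually halt."""
        raise NotImplementedError("no general-purpose halting checker can exist")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about running
        # `program` on its own source.
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        else:
            return           # predicted to loop, so halt immediately

    # Asking whether paradox(paradox) halts contradicts the oracle either way,
    # so no correct, all-purpose `halts` can exist.
    ```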

    The computer scientists Henry Yuen, Thomas Vidick, Zhengfeng Ji, Anand Natarajan and John Wright co-authored a proof about verifying answers to computational problems and ended up solving major problems in math and quantum physics. Courtesy of (Yuen) Andrea Lao; (Vidick) Courtesy of Caltech; (Ji) Anna Zhu; (Natarajan) David Sella; (Wright) Soya Park.

    “You’ve waited a million years and a program hasn’t halted. Do you just need to wait 2 million years? There’s no way of telling,” said William Slofstra, a mathematician at the University of Waterloo.

    In technical terms, Turing proved that this halting problem is undecidable — even the most powerful computer imaginable couldn’t solve it.

    After Turing, computer scientists began to classify other problems by their difficulty. Harder problems require more computational resources to solve — more running time, more memory. This is the study of computational complexity.

    Ultimately, every problem presents two big questions: “How hard is it to solve?” and “How hard is it to verify that an answer is correct?”

    Interrogate to Verify

    When problems are relatively simple, you can check the answer yourself. But when they get more complicated, even checking an answer can be an overwhelming task. However, in 1985 computer scientists realized it’s possible to develop confidence that an answer is correct even when you can’t confirm it yourself.

    The method follows the logic of a police interrogation.

    If a suspect tells an elaborate story, maybe you can’t go out into the world to confirm every detail. But by asking the right questions, you can catch your suspect in a lie or develop confidence that the story checks out.

    In computer science terms, the two parties in an interrogation are a powerful computer that proposes a solution to a problem—known as the prover—and a less powerful computer that wants to ask the prover questions to determine whether the answer is correct. This second computer is called the verifier.

    To take a simple example, imagine you’re colorblind and someone else—the prover—claims two marbles are different colors. You can’t check this claim by yourself, but through clever interrogation you can still determine whether it’s true.

    Put the two marbles behind your back and mix them up. Then ask the prover to tell you which is which. If they really are different colors, the prover should answer the question correctly every time. If the marbles are actually the same color—meaning they look identical—the prover will guess wrong half the time.
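
    The marble test translates directly into a few lines of simulation; over many rounds, the prover’s success rate reveals whether the marbles really differ (a sketch of the protocol exactly as described above):

    ```python
    import random

    def run_protocol(marbles_differ: bool, rounds: int = 1000) -> float:
        """Simulate the colorblind verifier's marble test; return success rate."""
        successes = 0
        for _ in range(rounds):
            secret = random.choice(["left", "right"])   # verifier shuffles behind back
            if marbles_differ:
                guess = secret                            # prover can always tell
            else:
                guess = random.choice(["left", "right"])  # identical marbles: a coin flip
            successes += (guess == secret)
        return successes / rounds

    print("different colors:", run_protocol(True))    # ~1.0
    print("same color      :", run_protocol(False))   # ~0.5
    ```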

    “If I see you succeed a lot more than half the time, I’m pretty sure they’re not” the same color, Vidick said.

    By asking a prover questions, you can verify solutions to a wider class of problems than you can on your own.

    In 1988, computer scientists considered what happens when two provers propose solutions to the same problem. After all, if you have two suspects to interrogate, it’s even easier to solve a crime, or verify a solution, since you can play them against each other.

    “It gives more leverage to the verifier. You interrogate, ask related questions, cross-check the answers,” Vidick said. If the suspects are telling the truth, their responses should align most of the time. If they’re lying, the answers will conflict more often.

    Similarly, researchers showed that by interrogating two provers separately about their answers, you can quickly verify solutions to an even larger class of problems than you can when you only have one prover to interrogate.

    Computational complexity may seem entirely theoretical, but it’s also closely connected to the real world. The resources that computers need to solve and verify problems—time and memory—are fundamentally physical. For this reason, new discoveries in physics can change computational complexity.

    “If you choose a different set of physics, like quantum rather than classical, you get a different complexity theory out of it,” Natarajan said.

    The new proof is the end result of 21st-century computer scientists confronting one of the strangest ideas of 20th-century physics: entanglement.

    The Connes Embedding Conjecture

    When two particles are entangled, they don’t actually affect each other—they have no causal relationship. Einstein and his co-authors elaborated on this idea in their 1935 paper. Afterward, physicists and mathematicians tried to come up with a mathematical way of describing what entanglement really meant.

    Yet the effort came out a little muddled. Scientists came up with two different mathematical models for entanglement—and it wasn’t clear that they were equivalent to each other.

    In a roundabout way, this potential dissonance ended up producing an important problem in pure mathematics called the Connes embedding conjecture. Eventually, it also served as a fissure that the five computer scientists took advantage of in their new proof.

    The first way of modeling entanglement was to think of the particles as spatially isolated from each other. One is on Earth, say, and the other is on Mars; the distance between them is what prevents causality. This is called the tensor product model.

    But in some situations, it’s not entirely obvious when two things are causally separate from each other. So mathematicians came up with a second, more general way of describing causal independence.

    When the order in which you perform two operations doesn’t affect the outcome, the operations “commute”: 3 x 2 is the same as 2 x 3. In this second model, particles are entangled when their properties are correlated but the order in which you perform your measurements doesn’t matter: Measure particle A to predict the momentum of particle B or vice versa. Either way, you get the same answer. This is called the commuting operator model of entanglement.

    Both descriptions of entanglement use arrays of numbers organized into rows and columns called matrices. The tensor product model uses matrices with a finite number of rows and columns. The commuting operator model uses a more general object that functions like a matrix with an infinite number of rows and columns.
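
    The relationship between the two models can be seen in miniature with finite matrices. When Alice's measurement acts only on her particle and Bob's only on his, the corresponding tensor product operators automatically commute, which is why the tensor product model is a special case of the commuting operator picture. A small numerical check (the Pauli matrices here are just illustrative stand-ins for measurements):

```python
import numpy as np

# Pauli matrices standing in for single-particle measurements.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

# In the tensor product picture, Alice's operator acts on the first factor and
# Bob's on the second, so the order of measurement never matters.
A = np.kron(X, I2)   # "measure particle A"
B = np.kron(I2, Z)   # "measure particle B"

print(np.allclose(A @ B, B @ A))  # True: the operators commute
```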

    Over time, mathematicians began to study these matrices as objects of interest in their own right, completely apart from any connection to the physical world. As part of this work, a mathematician named Alain Connes conjectured in 1976 that it should be possible to approximate many infinite-dimensional matrices with finite-dimensional ones. This is one implication of the Connes embedding conjecture.

    The following decade a physicist named Boris Tsirelson posed a version of the problem that grounded it in physics once more. Tsirelson conjectured that the tensor product and commuting operator models of entanglement were roughly equivalent. This makes sense, since they’re theoretically two different ways of describing the same physical phenomenon. Subsequent work showed that because of the connection between matrices and the physical models that use them, the Connes embedding conjecture and Tsirelson’s problem imply each other: Solve one, and you solve the other.

    Yet the solution to both problems ended up coming from a third place altogether.

    Game Show Physics

    In the 1960s, a physicist named John Bell came up with a test for determining whether entanglement was a real physical phenomenon, rather than just a theoretical notion. The test involved a kind of game whose outcome reveals whether something more than ordinary, non-quantum physics is at work.

    Computer scientists would later realize that this test about entanglement could also be used as a tool for verifying answers to very complicated problems.

    But first, to see how the games work, let’s imagine two players, Alice and Bob, and a 3-by-3 grid. A referee assigns Alice a row and tells her to enter a 0 or a 1 in each box so that the digits sum to an odd number. Bob gets a column and has to fill it out so that it sums to an even number. They win if they put the same number in the one place her row and his column overlap. They’re not allowed to communicate.

    Under normal circumstances, the best they can do is win 89% of the time. But under quantum circumstances, they can do better.
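
    That 89% figure — more precisely, 8/9 — can be verified by brute force over every deterministic classical strategy. The enumeration below is our own illustrative check, not part of the new proof:

```python
from itertools import product

# Every way Alice can fill a row (bits summing to an odd number) and every way
# Bob can fill a column (bits summing to an even number).
odd_rows = [r for r in product((0, 1), repeat=3) if sum(r) % 2 == 1]
even_cols = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]

best = 0.0
# A deterministic classical strategy fixes one answer per possible question in advance.
for alice in product(odd_rows, repeat=3):        # alice[i]: Alice's filling if asked row i
    for bob in product(even_cols, repeat=3):     # bob[j]: Bob's filling if asked column j
        wins = sum(alice[i][j] == bob[j][i] for i in range(3) for j in range(3))
        best = max(best, wins / 9)

print(best)  # 0.888... = 8/9, the classical limit quoted above
```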

    Imagine Alice and Bob split a pair of entangled particles. They perform measurements on their respective particles and use the results to dictate whether to write 1 or 0 in each box. Because the particles are entangled, the results of their measurements are going to be correlated, which means their answers will correlate as well — meaning they can win the game 100% of the time.

    Illustration: Lucy Reading-Ikkanda/Quanta Magazine

    So if you see two players winning the game at unexpectedly high rates, you can conclude that they are using something other than classical physics to their advantage. Such Bell-type experiments are now called “nonlocal” games, in reference to the separation between the players. Physicists actually perform them in laboratories.

    “People have run experiments over the years that really show this spooky thing is real,” said Yuen.

    As when analyzing any game, you might want to know how often players can win a nonlocal game, provided they play the best they can. For example, with solitaire, you can calculate how often someone playing perfectly is likely to win.

    But in 2016, William Slofstra proved that there’s no general algorithm for calculating the exact maximum winning probability for all nonlocal games. So researchers wondered: Could you at least approximate the maximum-winning percentage?

    Computer scientists have homed in on an answer using the two models describing entanglement. An algorithm that uses the tensor product model establishes a floor, or minimum value, on the approximate maximum-winning probability for all nonlocal games. Another algorithm, which uses the commuting operator model, establishes a ceiling.

    These algorithms produce more precise answers the longer they run. If Tsirelson’s prediction is true, and the two models really are equivalent, the floor and the ceiling should keep pinching closer together, narrowing in on a single value for the approximate maximum-winning percentage.

    But if Tsirelson’s prediction is false, and the two models are not equivalent, “the ceiling and the floor will forever stay separated,” Yuen said. There will be no way to calculate even an approximate winning percentage for nonlocal games.

    In their new work, the five researchers used this question — about whether the ceiling and floor converge and Tsirelson’s problem is true or false — to solve a separate question about when it’s possible to verify the answer to a computational problem.

    Entangled Assistance

    In the early 2000s, computer scientists began to wonder: How does it change the range of problems you can verify if you interrogate two provers that share entangled particles?

    Most assumed that entanglement worked against verification. After all, two suspects would have an easier time telling a consistent lie if they had some means of coordinating their answers.

    But over the last few years, computer scientists have realized that the opposite is true: By interrogating provers that share entangled particles, you can verify a much larger class of problems than you can without entanglement.

    “Entanglement is a way to generate correlations that you think might help them lie or cheat,” Vidick said. “But in fact you can use that to your advantage.”

    To understand how, you first need to grasp the almost otherworldly scale of the problems whose solutions you could verify through this interactive procedure.

    Imagine a graph—a collection of dots (vertices) connected by lines (edges). You might want to know whether it’s possible to color the vertices using three colors, so that no vertices connected by an edge have the same color. If you can, the graph is “three-colorable.”
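
    Checking a proposed three-coloring is straightforward when the graph is small; the sketch below (with a made-up five-vertex example) simply walks the edges. The trouble, described next, is that this kind of direct check stops being feasible once the graph becomes astronomically large.

```python
def is_valid_three_coloring(edges, coloring):
    """A claimed coloring checks out if at most three colors appear and
    no edge joins two vertices of the same color."""
    return (len(set(coloring.values())) <= 3 and
            all(coloring[u] != coloring[v] for u, v in edges))

# A five-vertex cycle: odd cycles cannot be two-colored, but three colors suffice.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = {0: "red", 1: "green", 2: "red", 3: "green", 4: "blue"}
print(is_valid_three_coloring(edges, coloring))  # True
```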

    If you hand a pair of entangled provers a very large graph, and they report back that it can be three-colored, you’ll wonder: Is there a way to verify their answer?

    For very big graphs, it would be impossible to check the work directly. So instead, you could ask each prover to tell you the color of one of two connected vertices. If they each report a different color, and they keep doing so every time you ask, you’ll gain confidence that the three-coloring really works.

    But even this interrogation strategy fails as graphs get really big—with more edges and vertices than there are atoms in the universe. Even the task of stating a specific question (“Tell me the color of XYZ vertex”) is more than you, the verifier, can manage: The amount of data required to name a specific vertex is more than you can hold in your working memory.

    But entanglement makes it possible for the provers to come up with the questions themselves.

    “The verifier doesn’t have to compute the questions. The verifier forces the provers to compute the questions for them,” Wright said.

    The verifier wants the provers to report the colors of connected vertices. If the vertices aren’t connected, then the answers to the questions won’t say anything about whether the graph is three-colored. In other words, the verifier wants the provers to ask correlated questions: One prover asks about vertex ABC and the other asks about vertex XYZ. The hope is that the two vertices are connected to each other, even though neither prover knows which vertex the other is thinking about. (Just as Alice and Bob hope to fill in the same number in the same square even though neither knows which row or column the other has been asked about.)

    If two provers were coming up with these questions completely on their own, there’d be no way to force them to select connected, or correlated, vertices in a way that would allow the verifier to validate their answers. But such correlation is exactly what entanglement enables.

    “We’re going to use entanglement to offload almost everything onto the provers. We make them select questions by themselves,” Vidick said.

    At the end of this procedure, the provers each report a color. The verifier checks whether they’re the same or not. If the graph really is three-colorable, the provers should never report the same color.

    “If there is a three-coloring, the provers will be able to convince you there is one,” Yuen said.

    As it turns out, this verification procedure is another example of a nonlocal game. The provers “win” if they convince you their solution is correct.

    In 2012, Vidick and Tsuyoshi Ito proved that it’s possible to play a wide variety of nonlocal games with entangled provers to verify answers to at least the same number of problems you can verify by interrogating two classical computers. That is, using entangled provers doesn’t work against verification. And last year, Natarajan and Wright proved that interacting with entangled provers actually expands the class of problems that can be verified.

    But computer scientists didn’t know the full range of problems that can be verified in this way. Until now.

    A Cascade of Consequences

    In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.

    “The verification capability of this type of model is really mind-boggling,” Yuen said.

    But the halting problem can’t be solved. And that fact is the spark that sets the final proof in motion.

    Imagine you hand a program to a pair of entangled provers. You ask them to tell you whether it will halt. You’re prepared to verify their answer through a kind of nonlocal game: The provers generate questions and “win” based on the coordination between their answers.

    If the program does in fact halt, the provers should be able to win this game 100 percent of the time—similar to how if a graph is actually three-colorable, entangled provers should never report the same color for two connected vertices. If it doesn’t halt, the provers should only win by chance—50 percent of the time.

    That means if someone asks you to determine the approximate maximum-winning probability for a specific instance of this nonlocal game, you will first need to solve the halting problem. And solving the halting problem is impossible. Which means that calculating the approximate maximum-winning probability for nonlocal games is undecidable, just like the halting problem.

    This in turn means that the answer to Tsirelson’s problem is no—the two models of entanglement are not equivalent. Because if they were, you could pinch the floor and the ceiling together to calculate an approximate maximum-winning probability.

    “There cannot be such an algorithm, so the two [models] must be different,” said David Pérez-García of the Complutense University of Madrid.

    The new paper proves that the class of problems that can be verified through interactions with entangled quantum provers, a class called MIP*, is exactly equal to the class of problems that are no harder than the halting problem, a class called RE. The title of the paper states it succinctly: “MIP* = RE.”

    In the course of proving that the two complexity classes are equal, the computer scientists proved that Tsirelson’s problem is false, which, due to previous work, meant that the Connes embedding conjecture is also false.

    For researchers in these fields, it was stunning that answers to such big problems would fall out from a seemingly unrelated proof in computer science.

    “If I see a paper that says MIP* = RE, I don’t think it has anything to do with my work,” said Navascués, who co-authored previous work tying Tsirelson’s problem and the Connes embedding conjecture together. “For me it was a complete surprise.”

    Quantum physicists and mathematicians are just beginning to digest the proof. Prior to the new work, mathematicians had wondered whether they could get away with approximating infinite-dimensional matrices by using large finite-dimensional ones instead. Now, because the Connes embedding conjecture is false, they know they can’t.

    “Their result implies that’s impossible,” said Slofstra.

    The computer scientists themselves did not aim to answer the Connes embedding conjecture, and as a result, they’re not in the best position to explain the implications of one of the problems they ended up solving.

    “Personally, I’m not a mathematician. I don’t understand the original formulation of the Connes embedding conjecture well,” said Natarajan.

    He and his co-authors anticipate that mathematicians will translate this new result into the language of their own field. In a blog post announcing the proof, Vidick wrote, “I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences.”

    Yet as other researchers run with the proof, the line of inquiry that prompted it is coming to a halt. For more than three decades, computer scientists have been trying to figure out just how far interactive verification will take them. They are now confronted with the answer, in the form of a long paper with a simple title and echoes of Turing.

    “There’s this long sequence of works just wondering how powerful” a verification procedure with two entangled quantum provers can be, Natarajan said. “Now we know how powerful it is. That story is at an end.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:00 pm on March 1, 2020 Permalink | Reply
    Tags: (TRGB)-Tip of the Red Giant Branch stars, A New Way to Gauge the Universe's Expansion Rate., Adam Riess-Johns Hopkins University, , , , , Trouble with the Hubble constant., Wendy Freedman-University of Chicago, WIRED   

    From WIRED: “Science Has a New Way to Gauge the Universe’s Expansion Rate” 

    From WIRED

    03.01.2020
    Natalie Wolchover

    Cosmologists want to know how fast the universe is growing, but their data doesn’t match predictions. Wendy Freedman thinks red giant stars can help.

    Wendy Freedman, a cosmologist at the University of Chicago, led the team that made the first measurement of the Hubble constant to within 10% accuracy. Courtesy of University of Chicago.

    Antares, seen at center, is a red supergiant star near the end of its life. Similar red-giant stars have complicated the debate over the Hubble constant. Illustration: Judy Schmidt.

    The big news in cosmology for several years has been the mounting evidence that the universe is expanding faster than expected. When cosmologists extrapolate data from the early universe to predict what the cosmos should be like now, they predict a relatively slow cosmic expansion rate. When they directly measure the speed at which astronomical objects are hurtling away from us, they find that space is expanding about 9% faster than the prediction. The discrepancy may mean that something big is missing from our understanding of the cosmos.

    The issue reached a crescendo over the past year. Last March, the main group measuring cosmic expansion released their updated analysis, once again arriving at an expansion rate that far outstrips expectations. Then in July, a new measurement [https://arxiv.org/abs/1907.04869v1] of cosmic expansion using objects called quasars, when combined with the other measurement, pushed past “five sigma,” a statistical level that physicists usually treat as their standard of proof of an unaccounted-for physical effect. In this case, cosmologists say there might be some extra cosmic ingredient, beyond dark matter, dark energy and everything else they already include in their equations, that speeds the universe up.

    But that’s if the measurements are correct. A new line of evidence, first announced last summer, suggests that the cosmic expansion rate may fall much closer to the rate predicted by early-universe measurements and the standard theory of cosmology.

    Wendy Freedman, a decorated cosmologist at the University of Chicago and Carnegie Observatories, measured the expansion rate, known as the Hubble constant, using stars that she considers cleaner probes of expansion than other objects. Using these “tip of the red giant branch” (TRGB) stars, she and her team arrived at a significantly lower Hubble rate than other observers.

    Although Freedman is known for her careful and innovative work, some researchers pushed back on her methods after she introduced the result last summer. They argued that her team used outdated data for part of their analysis and an unfamiliar calibration technique. The critics thought that if Freedman’s team used newer data, their Hubble value would increase and come in line with other astronomical probes.

    It did not. In a paper posted online on February 5 and accepted for publication in The Astrophysical Journal, Freedman’s team described their analysis of TRGB stars in detail, summarized their consistency checks, and responded to critiques. The new paper reports an even slower cosmic expansion rate than last summer’s result, a tad closer to the early-universe rate. The more up-to-date data that critics thought would increase Freedman’s Hubble value had the opposite effect. “It made it go down,” she said.

    The Trouble With Dust

    The question of whether the universe expands faster than expected first cropped up in 2013, when the Planck satellite precisely mapped ancient microwaves coming from all directions in the sky.

    CMB per ESA/Planck

    Cosmic Background Radiation per ESA/Planck

    The microwaves revealed a detailed snapshot of the early universe from which the Planck team could deduce the cosmos’s precise ingredients, like the amount of dark matter. Plugging those ingredients into Albert Einstein’s gravity equations allowed the scientists to calculate the expected expansion rate of space today, which Planck’s final, full analysis pegged at 67.4 kilometers per second per megaparsec, give or take 1%. That is, when we peer into space, we should see astronomical objects receding from us 67.4 kilometers per second faster with each megaparsec of distance, just as dots on an inflating balloon separate faster the farther apart they are.

    But Adam Riess, a cosmologist at Johns Hopkins University and the Nobel Prize–winning co-discoverer of dark energy, had for a few years been getting a higher value in direct measurements of the cosmic expansion rate.

    Adam Riess, Johns Hopkins University

    The trend continued; as of their latest analysis last March, Riess’s team pegged the Hubble constant at 74 kilometers per second per megaparsec, 9% higher than the 67.4 extrapolated from the early universe.

    Illustration: Quanta Magazine

    The catch is that directly measuring the Hubble constant is very tricky. To do so, astronomers like Riess and Freedman must first find and calibrate “standard candles”: astronomical objects that have a well-known distance and intrinsic brightness.

    Standard Candles to measure age and distance of the universe from supernovae. NASA

    With these values in hand, they can infer the distances to standard candles that are fainter and farther away. They then compare these distances with how fast the objects are moving, revealing the Hubble constant.

    Riess and his team use pulsating stars called cepheids as their standard candles. The stars’ distances can be measured with parallax and other methods, and they pulsate with a frequency that correlates with how intrinsically bright they are.

    Parallax method ESA

    This lets the astronomers gauge the relative distances to fainter cepheids in farther-away galaxies, which gives them the distances of Type Ia supernovas in those same galaxies — explosions that serve as brighter, though rarer, standard candles. These are used to measure the distances to hundreds of farther-away supernovas, whose recessional speed divided by their distance gives the Hubble constant.
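
    The arithmetic at the top of that ladder is simple: a standard candle's known intrinsic brightness and measured apparent brightness give its distance, its redshift gives a recession speed, and the ratio of the two is the Hubble constant. A rough sketch; the magnitudes and redshift below are made-up illustrative values, not survey data:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def distance_mpc(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5*log10(d_pc) - 5,
    which encodes the inverse-square dimming of light."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6  # parsecs -> megaparsecs

def hubble_constant(redshift, apparent_mag, absolute_mag):
    """H0 = recession speed / distance, valid at low redshift."""
    velocity_km_s = C_KM_S * redshift
    return velocity_km_s / distance_mpc(apparent_mag, absolute_mag)

# Illustrative numbers: a Type Ia supernova of absolute magnitude about -19.3,
# seen at apparent magnitude 15.85 and redshift 0.025.
print(round(hubble_constant(0.025, 15.85, -19.3), 1))  # roughly 70 km/s per megaparsec
```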

    Riess’s team’s Hubble value of 74 became more convincing last year when an independent measurement using quasars yielded the similar result of 73.3, a measurement based on objects called masers landed at 73.9, and an additional independent quasar measurement returned 74.2.

    But Freedman, who helped pioneer the cepheid method now used by Riess, has long worried about possible sources of error. Cepheids change as they age, which is not ideal for standard candles. Cepheids also tend to exist in dense stellar regions, which has two nefarious effects: First, those regions are often filled with dust, which blocks starlight and makes objects look farther than they are. And second, crowding can make them look brighter and thus closer than they are, potentially leading to overestimation of the Hubble constant. That’s why Freedman set out to use tip of the red giant branch stars.

    TRGBs are what stars like our sun briefly become before they die. As red giants, they gradually grow brighter until they reach a characteristic peak brightness caused by the sudden ignition of helium in their cores. These peaking red giants always reach nearly the same intrinsic brightness, which makes them good standard candles; moreover, as old stars, they inhabit the clean, sparse outskirts of galaxies, rather than dusty, crowded regions. “In terms of simplicity, tip of the red giant branch wins hands down,” said Barry Madore, Freedman’s husband and main collaborator, also of Chicago and Carnegie Observatories.

    First, Freedman, Madore and their team had to calibrate the TRGB stars, figuring out how bright they are at some known distance. Only then could they compare the brightness (and thereby deduce the distance) of TRGBs and supernovas farther away.

    For their standard candles, they chose the population of TRGB stars in the Large Magellanic Cloud, a nearby galaxy whose distance is extremely well known. The Large Magellanic Cloud is dusty, so the stars’ brightness can’t be directly observed. Instead, Freedman and her collaborators measured the intrinsic brightness of TRGBs in two other, essentially dust-free (but not as precisely located) places: a galaxy called IC 1613, and the Small Magellanic Cloud.

    TRGBs in these pristine places are like the sun when it’s high in the sky, whereas TRGBs in the Large Magellanic Cloud are like the sun near the horizon — reddened and dimmed by the dust in the atmosphere. (Dust makes objects look redder because it preferentially scatters blue light.) By comparing the colors of stars in dusty and clean places, the researchers could infer how much dust there is in the dusty region. They found that there’s more dust in the Large Magellanic Cloud than previously thought. That revealed how much the dust dims the stars there, and thus how bright they truly are — allowing the stars to be used as standard candles.
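
    The logic of that dust correction follows a standard pattern: the measured color excess (how much redder the dusty stars look than their dust-free counterparts) is converted into magnitudes of dimming through an extinction ratio, and the observed brightness is corrected by that amount. The sketch below is illustrative only; the ratio and numbers are assumptions, not the values Freedman's team derived.

```python
def dust_corrected_magnitude(m_observed, color_dusty, color_clean, extinction_ratio=1.5):
    """Correct an apparent magnitude for dust, given how much redder the dusty
    stars look than comparable dust-free stars."""
    color_excess = color_dusty - color_clean      # reddening caused by dust
    dimming = extinction_ratio * color_excess     # magnitudes of extinction in this band
    return m_observed - dimming                   # brightness with the dust removed

# Illustrative: stars observed at magnitude 14.6 that look 0.1 magnitude redder than
# dust-free stars of the same type would really be about 14.45 without the dust.
print(dust_corrected_magnitude(14.6, color_dusty=1.65, color_clean=1.55))
```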

    The team independently checked the relative distances of the Large and Small Magellanic Clouds and galaxy IC 1613 using other methods, and they performed a number of other consistency checks on their result. Their TRGB distance ladder yields a Hubble value of 69.6, well below the measurements using cepheids, quasars and masers and within shouting distance of the prediction from the early-universe data.

    “We run all of these tests, keep on getting the same answer,” Madore said. “And Adam [Riess] doesn’t like it.”

    The Mystery Endures

    Riess said that although he “appreciates being able to read more about” the team’s methods, he still thinks their TRGB calibration could be off. “Estimating the amount of dust that dims the tip of the red giant branch in the Large Magellanic Cloud is very difficult,” he said. One possible source of error, he said, is that the Small Magellanic Cloud has an extended shape, with TRGBs located at different distances that shouldn’t necessarily be averaged together. (Freedman says her team measured TRGBs only in the central portion of the cloud.)

    Riess emphasizes that the TRGB result must be weighed against several other independent measurements that get a higher Hubble value.

    Dan Scolnic, an astrophysicist at Duke University who collaborates with Riess on the cepheid measurements, also questioned Freedman’s calibration method, saying, “Getting to the bottom of this is going to be one of the most important things the [community] does in the next couple of years.”

    The controversy will get resolved as telescopes gather more data, including highly accurate direct measurements of distances to TRGB stars. Over the next few years the Gaia satellite should provide these observations.

    ESA/GAIA satellite

    Other clues may come even sooner. Freedman said her team has used yet another new method to make a not-yet-published Hubble measurement that agrees with their number from the TRGB stars. Although she wouldn’t go into details about the forthcoming result, she said, “At the moment we think the case is exceedingly strong” that the TRGB measurement is correct.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:51 am on January 5, 2020 Permalink | Reply
    Tags: Analysis of data from hundreds of supernovas—the stellar explosions that provided the first evidence for cosmic acceleration, , , , , , , WIRED   

    From WIRED: “Does Dark Energy Really Exist? Cosmologists Battle It Out” 


    From WIRED

    December 17, 2019
    Natalie Wolchover

    The supernova SN 2007af shines clearly near the lower-right edge of the spiral galaxy NGC 5584. ESO

    Dark energy, mysterious as it sounds, has become part of the furniture in cosmology. The evidence that this repulsive energy infuses space has stacked up since 1998. That was the year astronomers first discovered that the expansion of the universe has been speeding up over time, with dark energy acting as the accelerator. As space expands, new space arises, and with it more of this repulsive energy, causing space to expand even faster.

    Lambda-Cold Dark Matter, accelerated expansion of the universe, Big Bang-inflation (timeline of the universe). Credit: Alex Mittelmann, Coldcreation, 2010.

    Saul Perlmutter [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt and Adam Riess [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    Two decades later, multiple independent measurements agree that dark energy comprises about 70 percent of the universe’s contents. It is so baked into our current understanding of the cosmos that it came as a surprise when a recent paper published in the journal Astronomy & Astrophysics questioned whether it’s there at all.

    The four authors, including the Oxford physicist Subir Sarkar, performed their own analysis of data from hundreds of supernovas—the stellar explosions that provided the first evidence for cosmic acceleration, a discovery that earned three astronomers the 2011 Nobel Prize in Physics. When Sarkar and his colleagues looked at supernovas, they didn’t see a universe that’s accelerating uniformly in all directions due to dark energy. Rather, they say supernovas look the way they do because our region of the cosmos is accelerating in a particular direction—roughly toward the constellation Centaurus in the southern sky.

    Standard Candles to measure age and distance of the universe from supernovae NASA

    Outside experts almost immediately began picking the paper apart, finding apparent flaws in its methodology. Now, two cosmologists have formalized those arguments and others in a paper that was posted online on December 6 and submitted to The Astrophysical Journal. The authors, David Rubin and his student Jessica Heitlauf of the University of Hawaii, Manoa, detail four main problems with Sarkar and company’s data handling. “Is the expansion of the universe accelerating?” their paper title asks. “All signs still point to yes.”

    Outside researchers praised the thorough dissection. “The arguments by Rubin et al. are very convincing,” said Dragan Huterer, a cosmologist at the University of Michigan. “Some of them I was aware of upon looking at the original [Astronomy & Astrophysics paper], and others are new to me but make a lot of sense.”

    However, Sarkar and his co-authors—Jacques Colin and Roya Mohayaee of the Paris Institute of Astrophysics and Mohamed Rameez of the University of Copenhagen—don’t agree with the criticisms. Days after Rubin and Heitlauf’s paper appeared, they posted a rebuttal of the rebuttal.

    The cosmology community remains unmoved. Huterer said this latest response at times “misses the point” and attempts to debate statistical principles that are “not negotiable.” Dan Scolnic, a supernova cosmologist at Duke University, reaffirmed that “the evidence for dark energy from supernovas alone is significant and secure.”

    A Moving Shot

    The expansion of space stretches light, reddening its color. Supernovas appear more “redshifted” the farther away they are, because their light has to travel farther through expanding space. If space expanded at a constant rate, a supernova’s redshift would be directly proportional to its distance, and thus to its brightness.

    But in an accelerating universe filled with dark energy, space expanded less quickly in the past than it does now. This means a supernova’s light will have stretched less during its long journey to Earth, given how slowly space expanded during much of the time. A supernova located at a given distance away (indicated by its brightness) will appear significantly less redshifted than it would in a universe without dark energy. Indeed, researchers find that the redshift and brightness of supernovas scales in just this way.
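
    One way to see the size of the effect is to compare the distance to a supernova at a fixed redshift in a dark-energy universe and in a matter-only universe. The numerical sketch below uses the standard flat-universe formula for luminosity distance; the parameter values are illustrative.

```python
import math

C_KM_S = 299_792.458

def luminosity_distance_mpc(z, omega_m, h0=70.0, steps=10_000):
    """Flat-universe luminosity distance D_L = (1 + z) * (c / H0) * integral of
    dz' / E(z'), with E(z) = sqrt(omega_m * (1 + z)^3 + omega_lambda)."""
    omega_lambda = 1.0 - omega_m
    dz = z / steps
    integral = sum(
        dz / math.sqrt(omega_m * (1 + (i + 0.5) * dz) ** 3 + omega_lambda)
        for i in range(steps)
    )
    return (1 + z) * (C_KM_S / h0) * integral

# A supernova at redshift 0.5 sits farther away -- and so looks dimmer -- in a
# dark-energy universe than in a matter-only universe with the same H0.
print(round(luminosity_distance_mpc(0.5, omega_m=0.3)))  # about 2830 Mpc
print(round(luminosity_distance_mpc(0.5, omega_m=1.0)))  # about 2360 Mpc
```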

    Illustration: Dillon Brout

    In their recent paper, Sarkar and collaborators took an unconventional approach to the analysis. Normally, any study of supernova data has to account for Earth’s movement: As Earth orbits the sun, which orbits the galaxy, which orbits the local group of galaxies, we and our telescopes hurtle through space at around 600 kilometers per second. Our net motion is toward a dense region near Centaurus. Consequently, light coming from that direction is subject to the Doppler shift, which makes it look bluer than the light from the opposite side of the sky.

    It’s standard to correct for this motion and to transform supernova data into a stationary reference frame. But Sarkar and company did not. “If you don’t subtract that [motion], then it puts the same Doppler shift into the supernova data,” Rubin explained in an interview. “Our claim is that most of the effect is due to the solar system’s motion.”
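
    To first order, the correction Rubin describes is a small shift in each supernova's redshift that depends on the angle between the supernova and the direction of our motion. A rough sketch, using the roughly 600 km/s figure quoted above as an illustrative speed:

```python
import math

C_KM_S = 299_792.458
V_FRAME_KM_S = 600.0   # net motion quoted in the article; illustrative value

def frame_corrected_redshift(z_observed, angle_to_apex_deg):
    """First-order correction for the observer's own motion: light from the
    direction we are heading toward (small angle) arrives blueshifted, so its
    rest-frame redshift is slightly larger than the observed one."""
    beta = V_FRAME_KM_S / C_KM_S
    return z_observed + beta * math.cos(math.radians(angle_to_apex_deg))

# A supernova seen directly ahead of our motion at observed redshift 0.0200
# corresponds to a frame-corrected redshift of about 0.0220.
print(round(frame_corrected_redshift(0.0200, 0.0), 4))
```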

    Another problem with the paper, according to Rubin and Heitlauf, is that Sarkar and colleagues made a “plainly incorrect assumption”: They failed to account for the fact that cosmic dust absorbs more blue light than red.

    Because of this, a supernova in a relatively “clean,” dust-free region looks especially blue, since there’s less dust that would otherwise absorb its blue light. The lack of dust also means that it will appear brighter. Thus, the faraway supernovas we spot with our telescopes are disproportionately blue and bright. If you don’t control for the color-dependent effect of dust, you will infer less difference between the brightness of nearby supernovas (on average, dustier and redder) and faraway supernovas (on average, bluer and brighter)—and as a result, you will infer less cosmic acceleration.

    The combination of these and other unusual decisions allowed Sarkar’s group to model their supernova data with a “dipole” term, an acceleration that points in a single direction, and only a small, or possibly zero, “monopole” term describing the kind of uniform acceleration that signifies dark energy.

    This dipole model has two other problems, said Rubin and Heitlauf. First, the model includes a term that says how quickly the dipole acceleration drops to zero as you move away from Earth; Sarkar and company made this distance small, which means that their model isn’t tested by a large sampling of supernovas. And second, the model doesn’t satisfy a consistency check involving the relationship between the dipole and monopole terms in the equations.

    Not All the Same

    The day Rubin and Heitlauf’s paper appeared, Sarkar said by email, “We do not think any revisions need to be made to our analysis.” He and his team soon posted their rebuttal of the duo’s four points, mostly rehashing earlier justifications. They cited research by Natallia Karpenka, a cosmologist who has left academia for a career in finance, to support one of their choices, but they misconstrued her work, Rubin said. Four other cosmologists contacted by Quanta said the group’s response doesn’t change their view.

    Those who find the back-and-forth about data analysis hard to follow should note that the data from supernovas matches other evidence of cosmic acceleration. Over the years, dark energy has been inferred from the ancient light called the cosmic microwave background, fluctuations in the density of the universe called baryon acoustic oscillations, the gravitationally distorted shapes of galaxies, and the clustering of matter in the universe.

    Sarkar and colleagues ground their work in a respectable body of research on the “cosmological fitting problem.” Calculations of cosmological parameters like the density of dark energy (which is represented in Albert Einstein’s gravity equations by the Greek letter lambda) tend to treat the universe as smooth, averaging over the universe’s inhomogeneities, such as its galaxies and voids. The fitting problem asks whether this approximation might lead to incorrect inferences about the values of constants like lambda, or if it might even suggest the presence of a lambda that doesn’t exist.

    But the latest research on the question—including a major cosmological simulation published this summer—rejects that possibility. Inhomogeneities “could change lambda by 1 or 2 percent,” said Ruth Durrer of the University of Geneva, a co-author on that paper, “but could not get rid of it. It’s simply impossible.”

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4-meter Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Timeline of the Inflationary Universe WMAP

    The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

    According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time is used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:37 am on December 29, 2019 Permalink | Reply
    Tags: "Now Entering Orbit: Tiny Lego-like Modular Satellites", Athena a joint project between NOAA; NASA; and the Air Force’s Space and Missile Systems Center., , NovaWurks, satlets, The computing experiment Maestro led by David Barnhart of the University of Southern California’s Information Sciences Institute., WIRED   

    From WIRED: “Now Entering Orbit: Tiny Lego-like Modular Satellites” 


    From WIRED

    12.29.2019
    Sarah Scoles

    Space is getting closer, thanks to small, cheap “satlets” that network themselves to solve problems in flight.

    An earlier NovaWurks satellite, as seen from the International Space Station. This was the company's first test of 'satlets' in space. Courtesy of NovaWurks

    Just about a year ago, SpaceX sent the rocketry equivalent of a clown car to space: A rocket crowded with more than 60 small satellites. Inside one of them, Excite, were even more.

    eXCITe (with SeeMe on top) [DARPA]
    Source: https://space.skyrocket.de/doc_sdat/excite.htm, Gunter's Space Page

    It was actually a satellite made of other satellites, all clones of each other, all capable of joining together and working together. It was one of the first in-space tests of such a contraption—but in the coming years, this modular approach is likely to show up on more and more missions.

    Excite was flung into space courtesy of a company called NovaWurks, which makes “satlets.” The suffix—like that of “piglets”—implies littleness, and indeed these 14 satlets are smaller than a standard piece of paper and only a few inches thick. Even at that size, they supply everything a satellite needs—a way to communicate with Earth, a way to move in space, a way to process data, and a source of power. You just hook your camera, radiation sensor, or computer circuit in before launch and then send the whole package to space. Each satlet, which NovaWurks calls a HISat, can also physically join up with others, forming one larger unit that shares resources.

    On launch day, liftoff was picture-perfect. Once the rocket soared to its appointed height, Excite entered its orbit. All seemed good, and most of the attached instruments—as well as the spacecraft itself—performed pretty much as expected. But Excite wasn't able to send commands to some of the devices aboard: three of the eight payloads plugged into the satlets couldn't hear and obey their groundmasters.

    Nonetheless, this failure has been seen as an acceptable bump along a very compelling road. Plug-and-play satellites are like the Konmari method transposed to space: They cost less money, they take less time, and because they let engineers focus on instruments rather than logistics, they spark more joy. Organizations like NASA, the Air Force, and the National Reconnaissance Office are all realizing they like that type of joy, and are pumping out contracts and programs that provide this new technology with a ride to orbit. And NovaWurks was one of the first companies to actually take the idea to space.

    DARPA—the Department of Defense’s advanced R&D organization —got the modular party started early with a project called Phoenix. One of its goals, says program manager Todd Master, was to figure out whether it would be possible to combine small satellites into a larger one. Sort of like Legos, except rather than merely snapping them together, getting them to work together. In 2012, the agency started doing business with NovaWurks, which eventually became the prime contractor for that part of the project.

    The great promise of satlets is that they are agnostic about what instruments they support and about what function they fulfill. They can be mass-produced, which both slashes costs and dents the idea that each new instrument to be sent into orbit requires a whole new satellite. Instead, you can buy a satlet (or 15) that will provide everything your camera, radar device, radio detector, infrared sensor, or data processor will need. In theory, the set can also fix itself after launch by reallocating resources: A group of linked satlets can share functions among themselves and adjust their effort based on changing needs. If a battery in one gets a bad cell, for instance, its partners can help out.

    This approach makes sending stuff to space less risky, and potentially faster for developers, because they don’t need to build an entire satellite from scratch. Other companies plan to offer mass-produced satellite platforms. But few others can connect a set of them into one larger system, and make all the elements play together. With that particular pitch, NovaWurks won around 40 million Darpa dollars, and the partners jointly recruited parties to put payloads aboard.

    One of those was the computing experiment Maestro, led by David Barnhart of the University of Southern California’s Information Sciences Institute. Chips and processors in space systems have lagged, because of power limitations and the need to cope with radiation. “In the particular case of processors, the ones that were the most radiation-tolerant were also the slowest you could possibly imagine,” says Barnhart. His goal: To demonstrate that a processor with 49 cores, hardened against space radiation, could work.

    The upside of being part of the Excite launch, for Maestro, was simple: It was free. “The downside is everything is experimental,” he adds. Indeed, because of the communications glitch, Barnhart didn’t get any data back. But his team did learn that they could build both the payload and the software to make sure the cores are working in orbit.

    Another Excite payload that didn’t get to exercise its full potential was R3S, a NASA instrument that aimed to help understand how much radiation airline workers encounter. “They were never able to turn on R3S,” says Carrie Rhoades, Langley Research Center’s smallsat lead. But she, like Barnhart, doesn’t rue that result. “It was a high-risk project in the first place,” she says. “We should be taking those kind of risks.”

    The National Reconnaissance Office, which runs the US federal government's surveillance satellites, is taking a similar approach, playing around with small standardized systems that engineers can hook instruments into. Like NASA, the agency historically has launched hugely expensive satellites that sometimes keep doing their jobs for more than 20 years, meaning they may not have the latest-greatest stuff inside. Sending up only a few extremely costly satellites can tamp down on risk-taking, because there's no good way to fix a problem in a far-out orbit, or to change the design before launch.

    In response, and to take advantage of commercial technology, the NRO established a new “Greenlighting” program in 2017, to provide developers with a quick, cheap way to test technology in space. The NRO has created a standard interface, the size of a deck of cards, that people can stick their experiments into. Multiple interfaces can be stacked together, and experiments swapped in or out, before launch. The stack can distribute resources to multiple experiments, but unlike a HISat, it must hook into the body of an actual satellite.

    One of the first Greenlighting experiments deemed ready for space was a processor the size of a quarter that had been developed by the oil and gas industry. The idea was to see how something designed to survive the rigors of energy extraction could fare in another harsh environment. Greenlighting also can test subcomponents, such as materials that might end up in future full-scale satellites. In November of 2019, Greenlighting launched four experiments, and has others on the horizon.

    Meanwhile, NovaWurks’ glitches apparently haven’t dampened business. In September, the company was bought for an undisclosed sum in the seven digits by Saturn, a manufacturer that will use satlets to make communications satellites.

    NovaWurks’s satlets are also key to Athena, a joint project between NOAA, NASA, and the Air Force’s Space and Missile Systems Center. As part of climate change research, the effort will measure solar energy that Earth reflects and absorbs, gathered via a very small telescope attached to the satlet. Because the team only needs to develop the telescope itself, and not the vehicle to host it, they can work more quickly and easily than before. Athena will test technology that might later go on a larger, more complicated mission.

    That “quickly” is important not just from a tech side but also from a human side: Missions often take years and years of work on the ground before they are even scheduled to book a ride on a rocket. It may be a long time before any one engineer gets to work on something that’s going to space, and even longer before that project actually achieves liftoff. “Some of the younger engineers were a bit disenfranchised,” says Michelle Garn, Athena’s project manager. Standardizing the satellite infrastructure and keeping it small means engineers can get stuff space-ready in a few months—and take bigger risks.

    For NASA, that culture change toward embracing risk has been the most challenging part of accessorize-your-satlet sort of work, because it’s such a shift from the way the agency has operated in the past. But maybe NASA and other space places like the NRO are adjusting to the idea that it’s okay to have smaller ambitions sometimes, and that when you shrink your goals, it’s okay to risk screwing up, and even to actually screw up. Perhaps these agencies can soon accept the idea that a mission sometimes can be little more than a missionlet.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 4:32 pm on November 7, 2019 Permalink | Reply
    Tags: Asperger's lives on as a unifying label and a source of strength., “The diagnosis of Asperger’s enabled the creation of a large and very supportive community and allowed people to find relevant resources. Changes in DSM-5 jeopardized that.”, Girls were much less likely to receive an Asperger’s diagnosis than boys., Placing people “on the spectrum” equalizes access to resources including insurance coverage., Teen climate activist Greta Thunberg describes Asperger's as a superpower., The DSM-5 essentially made Asperger’s a non-diagnosis., The World Health Organization also is eliminating Asperger syndrome from its International Classification of Diseases., WIRED   

    From WIRED: “The Enduring Power of Asperger’s, Even as a Non-Diagnosis” 


    From WIRED

    11.07.2019
    Michele Cohen Marill

    Six years after it ceased to be an official diagnosis, Asperger’s lives on as a unifying label and a source of strength.

    The teen climate activist Greta Thunberg describes Asperger's as a superpower, in the right circumstances. Photograph: Spencer Platt/Getty Images

    Sixteen-year-old Swedish activist Greta Thunberg is the symbol of a climate change generation gap, a girl rebuking adults for their inaction in preventing a future apocalypse. Thunberg’s riveting speech at the UN’s Climate Action Summit has been viewed more than 2 million times on YouTube, and she was considered a viable contender for the Nobel Peace Prize.

    In a tweet, Thunberg explained what made her so fearless: “I have Aspergers and that means I’m sometimes a bit different from the norm. And—given the right circumstances—being different is a superpower. #aspiepower.”

    People with Asperger’s applaud the way she reframed a “disorder,” as it used to be called in the Diagnostic and Statistical Manual of Mental Disorders, into an asset. But Thunberg’s comments also fuel a lingering debate about whether Asperger’s even exists as a distinct condition—and if it doesn’t, why people are still so attached to the designation.

    Asperger syndrome, first coined in 1981, describes people who have problems with social interaction, repetitive behaviors, and an intense focus on singular interests. Sheldon Cooper, the theoretical physicist on the long-running TV show “The Big Bang Theory,” became an exaggerated prototype, a brilliant person who missed social cues and couldn’t grasp irony.

    His awkwardness spawned humorous predicaments, but in real life, people with Asperger’s can face more daunting challenges. It became a diagnosis in 1994, distinct from autistic disorder, but the lines were blurry even then. In 2013, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (known as DSM-5) eliminated Asperger’s and redefined the autism spectrum as encompassing level 1 (“requiring support”) to level 3 (“requiring very substantial support”).

    “Technically, the DSM-5 essentially made Asperger’s a non-diagnosis,” says Dania Jekel, executive director of the Asperger/Autism Network, which formed after Asperger’s first gained official status, from an outpouring of people seeking resources and a sense of community.

    The World Health Organization also is eliminating Asperger syndrome from its International Classification of Diseases. The ICD-11, which was adopted this year and will be implemented globally by 2022, instead calls it “autism spectrum disorder without disorder of intellectual development and with mild or no impairment of functional language.” Proponents of the change hope to reduce stereotypes. For example, girls were much less likely to receive an Asperger’s diagnosis than boys, and girls were more likely to be diagnosed at an older age—a disparity that points to bias. Meanwhile, placing people “on the spectrum” equalizes access to resources, including insurance coverage.

    Asperger syndrome, first coined in 1981, describes people who have problems with social interaction, repetitive behaviors, and an intense focus on singular interests. Sheldon Cooper, the theoretical physicist on the long-running TV show “The Big Bang Theory,” became an exaggerated prototype, a brilliant person who missed social cues and couldn’t grasp irony.

    His awkwardness spawned humorous predicaments, but in real life, people with Asperger’s can face more daunting challenges. It became a diagnosis in 1994, distinct from autistic disorder, but the lines were blurry even then. In 2013, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (known as DSM-5) eliminated Asperger’s and redefined the autism spectrum as encompassing level 1 (“requiring support”) to level 3 (“requiring very substantial support”).

    “Technically, the DSM-5 essentially made Asperger’s a non-diagnosis,” says Dania Jekel, executive director of the Asperger/Autism Network, which formed after Asperger’s first gained official status, from an outpouring of people seeking resources and a sense of community.

    The World Health Organization also is eliminating Asperger syndrome from its International Classification of Diseases. The ICD-11, which was adopted this year and will be implemented globally by 2022, instead calls it “autism spectrum disorder without disorder of intellectual development and with mild or no impairment of functional language.” Proponents of the change hope to reduce stereotypes. For example, girls were much less likely to receive an Asperger’s diagnosis than boys, and girls were more likely to be diagnosed at an older age—a disparity that points to bias. Meanwhile, placing people “on the spectrum” equalizes access to resources, including insurance coverage.Sixteen-year-old Swedish activist Greta Thunberg is the symbol of a climate change generation gap, a girl rebuking adults for their inaction in preventing a future apocalypse. Thunberg’s riveting speech at the UN’s Climate Action Summit has been viewed more than 2 million times on YouTube, and she was considered a viable contender for the Nobel Peace Prize.

    In a tweet, Thunberg explained what made her so fearless: “I have Aspergers and that means I’m sometimes a bit different from the norm. And—given the right circumstances—being different is a superpower. #aspiepower.”

    People with Asperger’s applaud the way she reframed a “disorder,” as it used to be called in the Diagnostic and Statistical Manual of Mental Disorders, into an asset. But Thunberg’s comments also fuel a lingering debate about whether Asperger’s even exists as a distinct condition—and if it doesn’t, why people are still so attached to the designation.

    Asperger syndrome, first coined in 1981, describes people who have problems with social interaction, repetitive behaviors, and an intense focus on singular interests. Sheldon Cooper, the theoretical physicist on the long-running TV show “The Big Bang Theory,” became an exaggerated prototype, a brilliant person who missed social cues and couldn’t grasp irony.

    His awkwardness spawned humorous predicaments, but in real life, people with Asperger’s can face more daunting challenges. It became a diagnosis in 1994, distinct from autistic disorder, but the lines were blurry even then. In 2013, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (known as DSM-5) eliminated Asperger’s and redefined the autism spectrum as encompassing level 1 (“requiring support”) to level 3 (“requiring very substantial support”).

    “Technically, the DSM-5 essentially made Asperger’s a non-diagnosis,” says Dania Jekel, executive director of the Asperger/Autism Network, which formed after Asperger’s first gained official status, from an outpouring of people seeking resources and a sense of community.

    The World Health Organization also is eliminating Asperger syndrome from its International Classification of Diseases. The ICD-11, which was adopted this year and will be implemented globally by 2022, instead calls it “autism spectrum disorder without disorder of intellectual development and with mild or no impairment of functional language.” Proponents of the change hope to reduce stereotypes. For example, girls were much less likely to receive an Asperger’s diagnosis than boys, and girls were more likely to be diagnosed at an older age—a disparity that points to bias. Meanwhile, placing people “on the spectrum” equalizes access to resources, including insurance coverage._________________

    ________________________________________________
    “The diagnosis of Asperger’s enabled the creation of a large and very supportive community and allowed people to find relevant resources. Changes in DSM-5 jeopardized that.”

    Dania Jekel
    ________________________________________________

    But Jekel worries that some people with Asperger’s-like attributes will return to the ambiguous space they once occupied—too well-functioning to be diagnosed on the autism spectrum, but still in need of significant support. “Twenty-two years ago, there was a whole group of people who were unidentified, had no resources, didn’t know each other,” she says. “The diagnosis of Asperger’s enabled the creation of a large and very supportive community and allowed people to find relevant resources. Changes in DSM-5 jeopardized that.”

    Erika Schwarz, for example, wasn’t diagnosed until she was 39. Asperger’s explained a lot about her struggles in the workplace and with personal relationships. It made her wonder how different her life might have been if she had known—and had help learning how to cope. “It does give you a space to have a bit of compassion for yourself,” she says.

    When she watches Thunberg on the world stage, she remembers herself as a young girl, intensely concerned about environmental degradation. “All the things I worried about as a kid, they’re validated,” says Schwarz, 50, who is now an environmental artist.

    Yet Thunberg’s rise to icon status has also stirred long-standing resentments about how people view the rungs of the spectrum. The levels in the current DSM definition of autism are based on support needs, which can be fluid. “I would put myself at all three levels, inconsistently,” says Terra Vance, founder and chief editor of the online publication the Aspergian. But the levels also can feel like a ranking: more impaired or less so.

    While #aspiepower endures on Twitter, so does #AllAutistics, a symbol of inclusivity and solidarity among people on the spectrum—even those who can’t speak or require help with daily functions. “Using the word ‘aspie’ doesn’t make you an aspie supremacist,” tweeted one person who used the hashtag #AllAutistics. “Thinking that ‘aspies’ are special shiny autistics who are functionally different from ‘severe’ autistics is aspie supremacy. Fight that. Always.”

    The use of the term Asperger’s is further complicated by the history of its namesake, Hans Asperger, an Austrian pediatrician who first defined autism in 1944 in its “profound” and “high-functioning” forms. Asperger worked at the University Pediatric Clinic in Vienna at a time when children with significant disabilities who were deemed a burden to the state were covertly “euthanized” in keeping with Nazi eugenic ideology.

    After the war, Asperger was viewed as having been a protector of children whom he considered to have potential despite their challenges, and he continued to have a distinguished career; he never was a member of the Nazi party. Yet extensive research unearthed evidence that Asperger sent at least some children to a clinic that was known as a center of “child euthanasia.”

    For some, that disturbing history is reason enough to erase the term Asperger’s. But its use endures beyond the shadow of its origins. With an Asperger’s diagnosis, people felt enormous relief at finally being understood, and many don’t want to give up that identity.

    Stephen Shore is an educator and author who identifies as autistic with the subtype of Asperger’s. That designation remains useful, he says, even to clinicians. Still, Shore doesn’t express strong feelings about the change in wording. He’s more concerned that people obtain the support they need and focus on the abilities they have. “What I find is that autistic people who are successful have found a way either by chance or design to use their highly focused interest and skill to become successful,” says Shore, who is on the board of Autism Speaks, a national advocacy and research organization.

    Even though it’s been six years since the DSM-5 did away with the Asperger’s diagnosis, the name still evokes a sense of belonging. In New York City, Aspies for Social Success has about 1,000 members for whom it organizes outings and support groups, including Aspie Raps (rap sessions) at the New York Public Library, followed by dinner at a restaurant. Paradoxically, people who would typically feel anxious at social events look forward to meeting other people on the spectrum.

    “What works is that we’re all communicating on the same wavelength,” says executive director Stephen Katz, who was diagnosed at age 50. “Some people describe it as having a different operating system than the rest of the population.”

    A change in the language isn’t going to disrupt that connection.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:58 am on October 27, 2019 Permalink | Reply
    Tags: "Physicists Get Close to Knowing the Mass of the Neutrino", , , , , , , The KATRIN experiment is working to “weigh the ghost"., WIRED   

    From WIRED: “Physicists Get Close to Knowing the Mass of the Neutrino” 


    From WIRED

    10.27.2019

    The KATRIN experiment is working to “weigh the ghost,” which could point to new laws of particle physics and reshape theories of cosmology.

    KATRIN experiment aims to measure the mass of the neutrino using a huge device called a spectrometer (interior shown). Karlsruhe Institute of Technology, Germany. Photograph: Forschungszentrum Karlsruhe

    The main spectrometer of the KATRIN experiment being transported to the Karlsruhe Research Center in Germany in 2006. Photograph: Forschungszentrum Karlsruhe

    Of all the known particles in the universe, only photons outnumber neutrinos. Despite their abundance, however, neutrinos are hard to catch and inspect, as they interact with matter only very weakly. About 1,000 trillion of the ghostly particles pass through your body every second—with nary a flinch from even a single atom.
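
    As a rough sanity check on that “about 1,000 trillion per second” figure, one can multiply an assumed solar neutrino flux by an assumed body cross-section. The numbers below are illustrative round values chosen for this sketch, not figures from the article.

        # Back-of-the-envelope check of the neutrino flux through a human body.
        # Both inputs are assumed, illustrative round numbers:
        solar_flux = 6e10           # solar neutrinos per cm^2 per second at Earth
        body_area = 1e4             # rough cross-sectional area of a body, in cm^2
        print(f"{solar_flux * body_area:.1e} per second")   # ~6e14, i.e. hundreds of trillions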

    “The fact that they’re ubiquitous, yet we don’t even know what they weigh, is kind of crazy,” said Deborah Harris, a physicist at the Fermi National Accelerator Laboratory near Chicago and York University in Toronto.


    Physicists have long tried to weigh the ghost. And in September, after 18 years of planning, building and calibrating, the Karlsruhe Tritium Neutrino (KATRIN) experiment in southwestern Germany announced its first results: It found that the neutrino can’t weigh more than 1.1 electron-volts (eV), or about one-five-hundred-thousandth the mass of the electron.
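
    The “one-five-hundred-thousandth” comparison follows directly from the electron’s rest mass of roughly 511,000 eV; a one-line check:

        # 1.1 eV compared with the electron's rest mass (~511,000 eV).
        print(511_000 / 1.1)   # ~465,000, i.e. roughly one five-hundred-thousandth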

    This initial estimate, from only one month’s worth of data, improves on previous measurements using similar techniques that placed the upper limit on the neutrino mass at 2 eV. As its data accrues, KATRIN aims to nail the actual mass rather than giving an upper bound.

    Why Mass Matters

    Mass is one of the most basic and important characteristics of fundamental particles. The neutrino is the only known particle whose mass remains a mystery. Measuring its mass would help point toward new laws of physics beyond the Standard Model, the remarkably successful yet incomplete description for how the universe’s known particles and forces interact. Its measured mass would also serve as a check on cosmologists’ theories for how the universe evolved.

    “Depending on what the mass of the neutrino turns out to be, it may lead to very exciting times in cosmology,” said Diana Parno, a physicist at Carnegie Mellon University and a member of the KATRIN team.

    Until about two decades ago, neutrinos—which were theoretically predicted in 1930 and discovered in 1956—were presumed to be massless. “When I was in grad school, my textbooks all said neutrinos didn’t have mass,” Harris said.

    That changed when, in a discovery that would win the 2015 Nobel Prize, physicists found that neutrinos could morph from one kind to another, oscillating between three “flavor” states: electron, muon and tau. These oscillations can only happen if neutrinos also have three possible mass states, where each flavor has distinct probabilities of being in each of the three mass states. The mass states travel through space differently, so by the time a neutrino goes from point A to point B, this mix of probabilities will have changed, and a detector could measure a different flavor.
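
    In the simplest textbook two-flavor approximation, the chance of a neutrino changing flavor after traveling a distance L with energy E is P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E), with the mass-squared splitting dm2 in eV^2, L in kilometers, and E in GeV. The sketch below uses that standard formula with illustrative, roughly atmospheric-scale parameters; the real three-flavor case mixes all three mass states.

        import math

        def oscillation_probability(delta_m2_eV2, theta, L_km, E_GeV):
            # Two-flavor vacuum oscillation probability (textbook approximation).
            return math.sin(2 * theta)**2 * math.sin(1.27 * delta_m2_eV2 * L_km / E_GeV)**2

        # Illustrative parameters only:
        print(oscillation_probability(delta_m2_eV2=2.5e-3, theta=0.78, L_km=500, E_GeV=1.0))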

    If there are three different mass states, then they can’t all be zero—thus, neutrinos have mass. According to recent neutrino oscillation data (which reveals the differences between the mass states rather than their actual values), if the lightest mass state is zero, the heaviest must be at least 0.0495 eV.
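
    That 0.0495 eV figure is simply the square root of the larger (“atmospheric”) mass-squared splitting measured by oscillation experiments, here taken to be about 2.45e-3 eV^2 (an assumed, approximate value):

        import math
        # If the lightest mass state were exactly zero, the heaviest would be set by
        # the larger mass-squared splitting alone.
        delta_m2_atm = 2.45e-3              # eV^2, approximate atmospheric splitting (assumed)
        print(math.sqrt(delta_m2_atm))      # ~0.0495 eV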

    Still, that’s so light compared to the mass of other particles that physicists aren’t sure how neutrinos get such tiny masses. Other particles in the Standard Model acquire mass by interacting with the Higgs field, a field of energy that fills all space and drags on massive particles. But for neutrinos, “the mass is so small, you need some additional theory to explain that,” Parno said.

    Figuring out how neutrinos acquire mass may resolve other, seemingly related mysteries, such as why there is more matter than antimatter in the universe. Competing theories for the mass-generating mechanism predict different values for the three mass states. While neutrino oscillation experiments have measured the differences between the mass states, experiments like KATRIN home in on a kind of average of the three. Combining the two types of measurements can reveal the value of each mass state, favoring certain theories of neutrino mass over others.
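
    The “kind of average” that beta-decay experiments such as KATRIN are sensitive to is usually written as an effective electron-neutrino mass, m_beta = sqrt(sum_i |U_ei|^2 * m_i^2): the three mass states weighted by how much each contributes to the electron flavor. The sketch below uses illustrative masses and mixing fractions, not measured values.

        import math

        def effective_mass_beta(masses_eV, electron_fractions):
            # m_beta = sqrt( sum_i |U_ei|^2 * m_i^2 ), the incoherent average probed in beta decay.
            return math.sqrt(sum(f * m**2 for f, m in zip(electron_fractions, masses_eV)))

        # Illustrative inputs: lightest state massless, the other two fixed by oscillation
        # splittings, and rough electron-flavor fractions for the three states.
        print(effective_mass_beta([0.0, 0.0086, 0.0502], [0.68, 0.30, 0.02]))   # ~0.009 eV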

    Cosmic Questions

    Neutrino mass is also of cosmic importance. Despite their minuscule mass, so many neutrinos were born during the Big Bang that their collective gravity influenced how all the matter in the universe clumped together into stars and galaxies. About a second after the Big Bang, neutrinos were flying around at almost light speed—so fast that they escaped the gravitational pull of other matter. But then they started to slow, which enabled them to help corral atoms, stars and galaxies. The point at which neutrinos began to slow down depends on their mass. Heavier neutrinos would have decelerated sooner and helped make the universe clumpier.
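
    A rough way to see the mass dependence: a relic neutrino stops behaving like radiation once its typical thermal momentum (about 3.15 times the neutrino temperature) drops below its mass. Taking today’s relic-neutrino temperature to be about 1.95 K, or 1.68e-4 eV (assumed standard values), heavier neutrinos cross that threshold at earlier times, that is, at higher redshift.

        # Rough estimate of the redshift at which a neutrino of mass m becomes non-relativistic.
        # Assumed: present neutrino temperature ~1.95 K ~ 1.68e-4 eV; threshold when 3.15*T ~ m.
        T_nu0_eV = 1.68e-4
        for m_eV in (0.05, 0.1, 0.3):
            z_nr = m_eV / (3.15 * T_nu0_eV) - 1
            print(m_eV, round(z_nr))    # heavier mass -> larger redshift -> slowed down earlier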

    By measuring the cosmic clumpiness, cosmologists can infer the neutrino’s mass. But this indirect method hinges on the assumption that models of the cosmos are correct, so if it gives a different answer than direct measurements of the neutrino mass, this might indicate that cosmological theories are wrong.

    So far, the indirect cosmological approach has been more sensitive than direct mass measurements by experiments like KATRIN. Recent cosmological data from the Planck satellite suggests that the sum of the three neutrino mass states can’t be greater than 0.12 eV, and in August, another analysis of cosmological observations [Physical Review Letters] found that the lightest mass must be less than 0.086 eV. These all fall well below KATRIN’s upper bound, so there’s no contradiction between the two approaches yet. But as KATRIN collects more data, discrepancies could arise.
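
    To see how a bound on the sum of the masses translates into a bound on the lightest state, one can fix the two mass-squared splittings from oscillation data and scan for the largest lightest mass still compatible with the sum. The sketch below assumes normal ordering and approximate splitting values; it is an illustration, not either paper’s analysis.

        import math
        # Assumed approximate splittings (eV^2), normal ordering:
        dm21_sq, dm31_sq = 7.4e-5, 2.45e-3

        def total_mass(m1):
            # Sum of the three mass states given the lightest one, m1.
            return m1 + math.sqrt(m1**2 + dm21_sq) + math.sqrt(m1**2 + dm31_sq)

        # Largest lightest-state mass compatible with a Planck-like bound of 0.12 eV on the sum:
        m1 = 0.0
        while total_mass(m1 + 1e-4) < 0.12:
            m1 += 1e-4
        print(round(m1, 3))   # roughly 0.03 eV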

    What’s Next

    The long-awaited KATRIN experiment weighs neutrinos by using tritium, a heavy isotope of hydrogen. When tritium undergoes beta decay, its nucleus emits an electron and an electron-flavored neutrino. By measuring the energy of the most energetic electrons, physicists can deduce the energy—and thus the mass (or really, a weighted average of the three contributing masses)—of the electron neutrino.
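
    In the simplest textbook description, ignoring final-state and instrumental effects, the decay rate near the spectrum’s endpoint E0 (about 18.6 keV for tritium) scales as (E0 - E) * sqrt((E0 - E)^2 - m^2), so a nonzero neutrino mass both pulls the endpoint down and steepens the cutoff. The sketch below compares a massless neutrino with a 1 eV one over the last few electron-volts; the numbers are illustrative, not KATRIN’s actual analysis.

        import math
        # Simplified shape of the tritium beta spectrum near its endpoint, for illustration only.
        E0 = 18_575.0                        # approximate endpoint energy in eV
        def rate(E, m_nu):
            eps = E0 - E                     # energy carried away by the neutrino
            if eps <= m_nu:
                return 0.0                   # not enough energy left to create the neutrino
            return eps * math.sqrt(eps**2 - m_nu**2)
        for E in (E0 - 5, E0 - 2, E0 - 1):
            print(E0 - E, rate(E, 0.0), rate(E, 1.0))   # a 1 eV mass suppresses the last couple of eV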

    If KATRIN finds a mass of around 0.2 or 0.3 eV, cosmologists will have a hard time reconciling their observations, said Marilena Loverde, a cosmologist at Stony Brook University. One possible explanation would be some new phenomenon that causes the cosmological influence of the neutrino’s mass to wane over time. For instance, maybe the neutrino decays into even lighter unknown particles, whose near-light speeds render them incapable of clumping matter together. Or maybe the mechanism that gives mass to neutrinos has changed over cosmic history.

    If, on the other hand, the neutrino mass is close to what cosmological observations predict, KATRIN won’t be sensitive enough to measure it. It can only weigh neutrinos down to 0.2 eV. If neutrinos are lighter than that, physicists will need more sensitive experiments to close in on its mass and resolve the particle physics and cosmology questions. Three potentially more sensitive projects—Project 8, Electron Capture on Holmium, and HOLMES—are already taking data with proof-of-concept instruments.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     