From WIRED: “The Enduring Power of Asperger’s, Even as a Non-Diagnosis”

Wired logo

From WIRED

11.07.2019
Michele Cohen Marill

Six years after it ceased to be an official diagnosis, Asperger’s lives on as a unifying label and a source of strength.

1
The teen climate activist Greta Thunberg describes Asperger’s as a superpower, in the right circumstances. Photograph: Spencer Platt/Getty Images

Sixteen-year-old Swedish activist Greta Thunberg is the symbol of a climate change generation gap, a girl rebuking adults for their inaction in preventing a future apocalypse. Thunberg’s riveting speech at the UN’s Climate Action Summit has been viewed more than 2 million times on YouTube, and she was considered a viable contender for the Nobel Peace Prize.

In a tweet, Thunberg explained what made her so fearless: “I have Aspergers and that means I’m sometimes a bit different from the norm. And—given the right circumstances—being different is a superpower. #aspiepower.”

People with Asperger’s applaud the way she reframed a “disorder,” as it used to be called in the Diagnostic and Statistical Manual of Mental Disorders, into an asset. But Thunberg’s comments also fuel a lingering debate about whether Asperger’s even exists as a distinct condition—and if it doesn’t, why people are still so attached to the designation.

Asperger syndrome, a term first coined in 1981, describes people who have problems with social interaction, repetitive behaviors, and an intense focus on singular interests. Sheldon Cooper, the theoretical physicist on the long-running TV show “The Big Bang Theory,” became an exaggerated prototype, a brilliant person who missed social cues and couldn’t grasp irony.

His awkwardness spawned humorous predicaments, but in real life, people with Asperger’s can face more daunting challenges. It became a diagnosis in 1994, distinct from autistic disorder, but the lines were blurry even then. In 2013, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (known as DSM-5) eliminated Asperger’s and redefined the autism spectrum as encompassing level 1 (“requiring support”) to level 3 (“requiring very substantial support”).

“Technically, the DSM-5 essentially made Asperger’s a non-diagnosis,” says Dania Jekel, executive director of the Asperger/Autism Network, which formed after Asperger’s first gained official status, from an outpouring of people seeking resources and a sense of community.

The World Health Organization also is eliminating Asperger syndrome from its International Classification of Diseases. The ICD-11, which was adopted this year and will be implemented globally by 2022, instead calls it “autism spectrum disorder without disorder of intellectual development and with mild or no impairment of functional language.” Proponents of the change hope to reduce stereotypes. For example, girls were much less likely to receive an Asperger’s diagnosis than boys, and girls were more likely to be diagnosed at an older age—a disparity that points to bias. Meanwhile, placing people “on the spectrum” equalizes access to resources, including insurance coverage.

________________________________________________
“The diagnosis of Asperger’s enabled the creation of a large and very supportive community and allowed people to find relevant resources. Changes in DSM-5 jeopardized that.”

Dania Jekel
________________________________________________

But Jekel worries that some people with Asperger’s-like attributes will return to the ambiguous space they once occupied—too well-functioning to be diagnosed on the autism spectrum, but still in need of significant support. “Twenty-two years ago, there was a whole group of people who were unidentified, had no resources, didn’t know each other,” she says. “The diagnosis of Asperger’s enabled the creation of a large and very supportive community and allowed people to find relevant resources. Changes in DSM-5 jeopardized that.”

Erika Schwarz, for example, wasn’t diagnosed until she was 39. Asperger’s explained a lot about her struggles in the workplace and with personal relationships. It made her wonder how different her life might have been if she had known—and had help learning how to cope. “It does give you a space to have a bit of compassion for yourself,” she says.

When she watches Thunberg on the world stage, she remembers herself as a young girl, intensely concerned about environmental degradation. “All the things I worried about as a kid, they’re validated,” says Schwarz, 50, who is now an environmental artist.

Yet Thunberg’s rise to icon status has also stirred long-standing resentments about how people view the rungs of the spectrum. The levels in the current DSM definition of autism are based on support needs, which can be fluid. “I would put myself at all three levels, inconsistently,” says Terra Vance, founder and chief editor of the online publication the Aspergian. But the levels also can feel like a ranking: more impaired or less so.

While #aspiepower endures on Twitter, so does #AllAutistics, a symbol of inclusivity and solidarity among people on the spectrum—even those who can’t speak or require help with daily functions. “Using the word ‘aspie’ doesn’t make you an aspie supremacist,” tweeted one person who used the hashtag #AllAutistics. “Thinking that ‘aspies’ are special shiny autistics who are functionally different from ‘severe’ autistics is aspie supremacy. Fight that. Always.”

The use of the term Asperger’s is further complicated by the history of its namesake, Hans Asperger, an Austrian pediatrician who first defined autism in 1944 in its “profound” and “high-functioning” forms. Asperger worked at the University Pediatric Clinic in Vienna, at a time when children with significant disabilities who were deemed a burden to the state were covertly “euthanized,” in keeping with Nazi eugenic ideology.

After the war, Asperger was viewed as having been a protector of children whom he considered to have potential despite their challenges, and he continued to have a distinguished career; he never was a member of the Nazi party. Yet extensive research unearthed evidence that Asperger sent at least some children to a clinic that was known as a center of “child euthanasia.”

For some, that disturbing history is reason enough to erase the term Asperger’s. But its use endures beyond the shadow of its origins. With an Asperger’s diagnosis, people felt enormous relief at finally being understood, and many don’t want to give up that identity.

Stephen Shore is an educator and author who identifies as autistic with the subtype of Asperger’s. That designation remains useful, he says, even to clinicians. Still, Shore doesn’t express strong feelings about the change in wording. He’s more concerned that people obtain the support they need and focus on the abilities they have. “What I find is that autistic people who are successful have found a way either by chance or design to use their highly focused interest and skill to become successful,” says Shore, who is on the board of Autism Speaks, a national advocacy and research organization.

Even though it’s been six years since the DSM-5 did away with the Asperger’s diagnosis, the name still evokes a sense of belonging. In New York City, Aspies for Social Success has about 1,000 members for whom it organizes outings and support groups, including Aspie Raps (rap sessions) at the New York Public Library, followed by dinner at a restaurant. Paradoxically, people who would typically feel anxious at social events look forward to meeting other people on the spectrum.

“What works is that we’re all communicating on the same wavelength,” says executive director Stephen Katz, who was diagnosed at age 50. “Some people describe it as having a different operating system than the rest of the population.”

A change in the language isn’t going to disrupt that connection.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#aspergers-lives-on-as-a-unifying-label-and-a-source-of-strength, #the-diagnosis-of-aspergers-enabled-the-creation-of-a-large-and-very-supportive-community-and-allowed-people-to-find-relevant-resources-changes-in-dsm-5-jeopardized-that, #girls-were-much-less-likely-to-receive-an-aspergers-diagnosis-than-boys, #placing-people-on-the-spectrum-equalizes-access-to-resources-including-insurance-coverage, #teen-climate-activist-greta-thunberg-describes-aspergers-as-a-superpower, #the-dsm-5-essentially-made-aspergers-a-non-diagnosis, #the-world-health-organization-also-is-eliminating-asperger-syndrome-from-its-international-classification-of-diseases, #wired

From WIRED: “Physicists Get Close to Knowing the Mass of the Neutrino”

Wired logo

From WIRED

10.27.2019

The KATRIN experiment is working to “weigh the ghost,” which could point to new laws of particle physics and reshape theories of cosmology.

KATRIN experiment aims to measure the mass of the neutrino using a huge device called a spectrometer (interior shown). Karlsruhe Institute of Technology, Germany

1
Photograph: Forschungszentrum Karlsruhe

2
The main spectrometer of the KATRIN experiment being transported to the Karlsruhe Research Center in Germany in 2006. Photograph: Forschungszentrum Karlsruhe

Of all the known particles in the universe, only photons outnumber neutrinos. Despite their abundance, however, neutrinos are hard to catch and inspect, as they interact with matter only very weakly. About 1,000 trillion of the ghostly particles pass through your body every second—with nary a flinch from even a single atom.

“The fact that they’re ubiquitous, yet we don’t even know what they weigh, is kind of crazy,” said Deborah Harris, a physicist at the Fermi National Accelerator Laboratory near Chicago and York University in Toronto.

Physicists have long tried to weigh the ghost. And in September, after 18 years of planning, building and calibrating, the Karlsruhe Tritium Neutrino (KATRIN) experiment in southwestern Germany announced its first results: It found that the neutrino can’t weigh more than 1.1 electron-volts (eV), or about one-five-hundred-thousandth the mass of the electron.

This initial estimate, from only one month’s worth of data, improves on previous measurements using similar techniques that placed the upper limit on the neutrino mass at 2 eV. As its data accrues, KATRIN aims to nail the actual mass rather than giving an upper bound.

Why Mass Matters

Mass is one of the most basic and important characteristics of fundamental particles. The neutrino is the only known particle whose mass remains a mystery. Measuring its mass would help point toward new laws of physics beyond the Standard Model, the remarkably successful yet incomplete description for how the universe’s known particles and forces interact. Its measured mass would also serve as a check on cosmologists’ theories for how the universe evolved.

“Depending on what the mass of the neutrino turns out to be, it may lead to very exciting times in cosmology,” said Diana Parno, a physicist at Carnegie Mellon University and a member of the KATRIN team.

Until about two decades ago, neutrinos—which were theoretically predicted in 1930 and discovered in 1956—were presumed to be massless. “When I was in grad school, my textbooks all said neutrinos didn’t have mass,” Harris said.

That changed when, in a discovery that would win the 2015 Nobel Prize, physicists found that neutrinos could morph from one kind to another, oscillating between three “flavor” states: electron, muon and tau. These oscillations can only happen if neutrinos also have three possible mass states, where each flavor has distinct probabilities of being in each of the three mass states. The mass states travel through space differently, so by the time a neutrino goes from point A to point B, this mix of probabilities will have changed, and a detector could measure a different flavor.
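
To make the mechanics concrete, the simplest two-flavor, vacuum version of the effect boils down to a single formula: the probability of changing flavor depends only on the mixing angle, the mass-squared difference, and the ratio of distance traveled to energy. The sketch below is illustrative only; the mixing angle, splitting, baseline, and energy are assumed round numbers, not values quoted in this article.

```python
import math

def oscillation_probability(theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, and E in GeV (the 1.27 absorbs hbar and c)."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative inputs: near-maximal mixing, an atmospheric-scale splitting,
# and a 1 GeV neutrino traveling a few hundred kilometers.
print(f"{oscillation_probability(math.pi / 4, 2.45e-3, 500, 1.0):.3f}")
```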

If there are three different mass states, then they can’t all be zero—thus, neutrinos have mass. According to recent neutrino oscillation data (which reveals the differences between the mass states rather than their actual values), if the lightest mass state is zero, the heaviest must be at least 0.0495 eV.
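
Here is that arithmetic as a minimal Python sketch. The mass-squared splittings are assumed, rounded values of the kind oscillation experiments report rather than numbers taken from this article, but plugging them in reproduces the 0.0495 eV floor quoted above.

```python
import math

# Assumed, rounded mass-squared splittings (eV^2); illustrative values only.
dm2_21 = 7.5e-5    # "solar" splitting
dm2_31 = 2.45e-3   # "atmospheric" splitting (normal ordering)

m1 = 0.0                         # assume the lightest state is massless
m2 = math.sqrt(dm2_21 + m1**2)   # ~0.0087 eV
m3 = math.sqrt(dm2_31 + m1**2)   # ~0.0495 eV: the floor quoted above

print(f"m1 = {m1:.4f} eV, m2 = {m2:.4f} eV, m3 = {m3:.4f} eV")
print(f"sum of the three masses = {m1 + m2 + m3:.4f} eV")  # compare with cosmology's ~0.12 eV bound
```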

Still, that’s so light compared to the mass of other particles that physicists aren’t sure how neutrinos get such tiny masses. Other particles in the Standard Model acquire mass by interacting with the Higgs field, a field of energy that fills all space and drags on massive particles. But for neutrinos, “the mass is so small, you need some additional theory to explain that,” Parno said.

Figuring out how neutrinos acquire mass may resolve other, seemingly related mysteries, such as why there is more matter than antimatter in the universe. Competing theories for the mass-generating mechanism predict different values for the three mass states. While neutrino oscillation experiments have measured the differences between the mass states, experiments like KATRIN home in on a kind of average of the three. Combining the two types of measurements can reveal the value of each mass state, favoring certain theories of neutrino mass over others.

Cosmic Questions

Neutrino mass is also of cosmic importance. Despite their minuscule mass, so many neutrinos were born during the Big Bang that their collective gravity influenced how all the matter in the universe clumped together into stars and galaxies. About a second after the Big Bang, neutrinos were flying around at almost light speed—so fast that they escaped the gravitational pull of other matter. But then they started to slow, which enabled them to help corral atoms, stars and galaxies. The point at which neutrinos began to slow down depends on their mass. Heavier neutrinos would have decelerated sooner and helped make the universe clumpier.

By measuring the cosmic clumpiness, cosmologists can infer the neutrino’s mass. But this indirect method hinges on the assumption that models of the cosmos are correct, so if it gives a different answer than direct measurements of the neutrino mass, this might indicate that cosmological theories are wrong.

So far, the indirect cosmological approach has been more sensitive than direct mass measurements by experiments like KATRIN. Recent cosmological data from the Planck satellite suggests that the sum of the three neutrino mass states can’t be greater than 0.12 eV, and in August, another analysis of cosmological observations [Physical Review Letters] found that the lightest mass must be less than 0.086 eV. These all fall well below KATRIN’s upper bound, so there’s no contradiction between the two approaches yet. But as KATRIN collects more data, discrepancies could arise.

What’s Next

The long-awaited KATRIN experiment weighs neutrinos by using tritium, a heavy isotope of hydrogen. When tritium undergoes beta decay, its nucleus emits an electron and an electron-flavored neutrino. By measuring the energy of the most energetic electrons, physicists can deduce the energy—and thus the mass (or really, a weighted average of the three contributing masses)—of the electron neutrino.
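
For readers who want to see what that weighted average looks like, here is a minimal sketch. The mixing weights and mass values below are assumed, rounded numbers (the masses reuse the minimal normal-ordering case from the earlier snippet); nothing here comes from KATRIN's own analysis.

```python
import math

# Assumed, rounded electron-row mixing weights |U_ei|^2 and illustrative
# mass states in eV (the minimal normal-ordering case from above).
U_e_sq = [0.68, 0.30, 0.02]
masses = [0.0, 0.0087, 0.0495]

# The "weighted average" a tritium endpoint measurement probes:
# m_beta = sqrt( sum_i |U_ei|^2 * m_i^2 )
m_beta = math.sqrt(sum(u * m**2 for u, m in zip(U_e_sq, masses)))
print(f"effective electron-neutrino mass ~ {m_beta:.4f} eV")  # far below KATRIN's ~0.2 eV reach
```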

If KATRIN finds a mass of around 0.2 or 0.3 eV, cosmologists will have a hard time reconciling their observations, said Marilena Loverde, a cosmologist at Stony Brook University. One possible explanation would be some new phenomenon that causes the cosmological influence of the neutrino’s mass to wane over time. For instance, maybe the neutrino decays into even lighter unknown particles, whose near-light speeds render them incapable of clumping matter together. Or maybe the mechanism that gives mass to neutrinos has changed over cosmic history.

If, on the other hand, the neutrino mass is close to what cosmological observations predict, KATRIN won’t be sensitive enough to measure it. It can only weigh neutrinos down to 0.2 eV. If neutrinos are lighter than that, physicists will need more sensitive experiments to close in on the neutrino’s mass and resolve the particle physics and cosmology questions. Three potentially more sensitive projects—Project 8, Electron Capture on Holmium, and HOLMES—are already taking data with proof-of-concept instruments.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#physicists-get-close-to-knowing-the-mass-of-the-neutrino, #astronomy, #astrophysics, #basic-research, #cosmology, #neutrinos, #particle-physics, #the-katrin-experiment-is-working-to-weigh-the-ghost, #wired

From WIRED: “Space Photos of the Week: Reading the Universe in Infrared”

Wired logo

From WIRED

Telescopes that see things in a different spectrum show us the hidden secrets of the stars.

The human eye can process light wavelengths in the range of 380 to 740 nanometers. However, there’s a whole swath of “light” that we are unable to see. Cue the fancy telescopes! This week we are going to look at photos of space that are filtered for the infrared—wavelengths from 700 nanometers to 1 millimeter. By filtering for infrared, scientists are able to peer through the visible stuff that gets in the way, like gas and dust and other material, to see heat, and in space there’s a lot of hot stuff. This is why NASA has telescopes like Spitzer that orbit the Earth looking at the universe in infrared, showing us stuff our puny eyes could never see on their own.

NASA/Spitzer Infrared Telescope

1
Here’s a space photo cool enough to make Andy Warhol proud: This four-part series shows the Whirlpool galaxy and its partner up above, a satellite galaxy called NGC 5195. This series serves as a good example of how different features can appear when cameras filter for different wavelengths of light. The far left image is taken in visible light, a remarkable scene even though the galaxy is more than 23 million light years from Earth. The second image adds a little extra: Visible light is shown in blue and green, and the bright red streaks are infrared—revealing new star activity and hot ionized material. Photograph: NASA/JPL-Caltech

2
This infrared image of the Orion nebula allows astronomers to see dust that’s aglow from star formation. The central light-blue region is the hottest part of the nebula, and as the byproducts of the star factory are ejected out, they cool off and appear red. Photograph: ESA/NASA/JPL-Caltech

3
Cygnus X is a ginormous star complex containing around 3 million solar masses and is also one of the largest known protostar factories. This image shows Cygnus X in infrared light, glowing hot. The bright white spots are where stars are forming, with the red tendrils showing the gas and dust being expelled after their births. Photograph: NASA Goddard

4
This may look like a scary pit of magma, but we’re in fact looking at the Whirlpool galaxy seen earlier. By filtering out visible light and showing only the near-infrared, researchers can see the skeletal structure of the center of the galaxy, made of smooth, bending dust lanes. This dust clumps around stars, so an image like this can give researchers a good idea of how much dust is lingering in a galaxy. Photograph: NASA Goddard

5
Talk about a butterfly effect: This space oddity is actually a busy stellar nursery called W40. The butterfly “wings” are large bubbles of hot interstellar gas blowing out from the violent births of these stars. Some stars in this region are so large they are 10 times the mass of our Sun. Photograph: NASA/JPL-Caltech

6
At the center of our Milky Way galaxy is the galactic core, glowing brightly with the many stars located there. Peering past all the gas and dust, NASA’s Spitzer Space Telescope reveals the red glow of hot ionized material. In addition to a wealth of stars, the center of our galaxy boasts a massive black hole, 4 million times the mass of our Sun. As stars pass by this behemoth, they get devoured and hot energy is spat out—and that radiance helps us know what’s cooking in this active area. Photograph: NASA, JPL-Caltech, Susan Stolovy (SSC/Caltech) et al.

Want to see things in a different light? Check out WIRED’s full collection of photos here.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#astronomy, #astrophysics, #basic-research, #cosmology, #nasa-spitzer, #reading-the-universe-in-infrared, #wired

From LANL via WIRED: “AI Helps Seismologists Predict Earthquakes”

LANL bloc

Los Alamos National Laboratory

via

Wired logo

From WIRED

Machine learning is bringing seismologists closer to an elusive goal: forecasting quakes well before they strike.

1
Remnants of a 2,000-year-old spruce forest on Neskowin Beach, Oregon — one of dozens of “ghost forests” along the Oregon and Washington coast. It’s thought that a mega-earthquake of the Cascadia subduction zone felled the trees, and that the stumps were then buried by tsunami debris. Photograph: Race Jones/Outlive Creative

In May of last year, after a 13-month slumber, the ground beneath Washington’s Puget Sound rumbled to life. The quake began more than 20 miles below the Olympic mountains and, over the course of a few weeks, drifted northwest, reaching Canada’s Vancouver Island. It then briefly reversed course, migrating back across the US border before going silent again. All told, the monthlong earthquake likely released enough energy to register as a magnitude 6. By the time it was done, the southern tip of Vancouver Island had been thrust a centimeter or so closer to the Pacific Ocean.

Because the quake was so spread out in time and space, however, it’s likely that no one felt it. These kinds of phantom earthquakes, which occur deeper underground than conventional, fast earthquakes, are known as “slow slips.” They occur roughly once a year in the Pacific Northwest, along a stretch of fault where the Juan de Fuca plate is slowly wedging itself beneath the North American plate. More than a dozen slow slips have been detected by the region’s sprawling network of seismic stations since 2003. And for the past year and a half, these events have been the focus of a new effort at earthquake prediction by the geophysicist Paul Johnson.

Johnson’s team is among a handful of groups that are using machine learning to try to demystify earthquake physics and tease out the warning signs of impending quakes. Two years ago, using pattern-finding algorithms similar to those behind recent advances in image and speech recognition and other forms of artificial intelligence, he and his collaborators successfully predicted temblors in a model laboratory system—a feat that has since been duplicated by researchers in Europe.

Now, in a paper posted this week on the scientific preprint site arxiv.org, Johnson and his team report that they’ve tested their algorithm on slow slip quakes in the Pacific Northwest. The paper has yet to undergo peer review, but outside experts say the results are tantalizing. According to Johnson, they indicate that the algorithm can predict the start of a slow slip earthquake to “within a few days—and possibly better.”

“This is an exciting development,” said Maarten de Hoop, a seismologist at Rice University who was not involved with the work. “For the first time, I think there’s a moment where we’re really making progress” toward earthquake prediction.

Mostafa Mousavi, a geophysicist at Stanford University, called the new results “interesting and motivating.” He, de Hoop, and others in the field stress that machine learning has a long way to go before it can reliably predict catastrophic earthquakes—and that some hurdles may be difficult, if not impossible, to surmount. Still, in a field where scientists have struggled for decades and seen few glimmers of hope, machine learning may be their best shot.

Sticks and Slips

The late seismologist Charles Richter, for whom the Richter magnitude scale is named, noted in 1977 that earthquake prediction can provide “a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers.” Today, many seismologists will tell you that they’ve seen their fair share of all three.

But there have also been reputable scientists who concocted theories that, in hindsight, seem woefully misguided, if not downright wacky. There was the University of Athens geophysicist Panayiotis Varotsos, who claimed he could detect impending earthquakes by measuring “seismic electric signals.” There was Brian Brady, the physicist from the US Bureau of Mines who in the early 1980s sounded successive false alarms in Peru, basing them on a tenuous notion that rock bursts in underground mines were telltale signs of coming quakes.

Paul Johnson is well aware of this checkered history. He knows that the mere phrase “earthquake prediction” is taboo in many quarters. He knows about the six Italian scientists who were convicted of manslaughter in 2012 for downplaying the chances of an earthquake near the central Italian town of L’Aquila, days before the region was devastated by a magnitude 6.3 temblor. (The convictions were later overturned.) He knows about the prominent seismologists who have forcefully declared that “earthquakes cannot be predicted.”

But Johnson also knows that earthquakes are physical processes, no different in that respect from the collapse of a dying star or the shifting of the winds. And though he stresses that his primary aim is to better understand fault physics, he hasn’t shied away from the prediction problem.

2
Paul Johnson, a geophysicist at Los Alamos National Laboratory, photographed in 2008 with a block of acrylic plastic, one of the materials his team uses to simulate earthquakes in the laboratory. Photograph: Los Alamos National Laboratory

More than a decade ago, Johnson began studying “laboratory earthquakes,” made with sliding blocks separated by thin layers of granular material. Like tectonic plates, the blocks don’t slide smoothly but in fits and starts: They’ll typically stick together for seconds at a time, held in place by friction, until the shear stress grows large enough that they suddenly slip. That slip—the laboratory version of an earthquake—releases the stress, and then the stick-slip cycle begins anew.

When Johnson and his colleagues recorded the acoustic signal emitted during those stick-slip cycles, they noticed sharp peaks just before each slip. Those precursor events were the laboratory equivalent of the seismic waves produced by foreshocks before an earthquake. But just as seismologists have struggled to translate foreshocks into forecasts of when the main quake will occur, Johnson and his colleagues couldn’t figure out how to turn the precursor events into reliable predictions of laboratory quakes. “We were sort of at a dead end,” Johnson recalled. “I couldn’t see any way to proceed.”

At a meeting a few years ago in Los Alamos, Johnson explained his dilemma to a group of theoreticians. They suggested he reanalyze his data using machine learning—an approach that was well known by then for its prowess at recognizing patterns in audio data.

Together, the scientists hatched a plan. They would take the roughly five minutes of audio recorded during each experimental run—encompassing 20 or so stick-slip cycles—and chop it up into many tiny segments. For each segment, the researchers calculated more than 80 statistical features, including the mean signal, the variation about that mean, and information about whether the segment contained a precursor event. Because the researchers were analyzing the data in hindsight, they also knew how much time had elapsed between each sound segment and the subsequent failure of the laboratory fault.

Armed with this training data, they used what’s known as a “random forest” machine learning algorithm to systematically look for combinations of features that were strongly associated with the amount of time left before failure. After seeing a couple of minutes’ worth of experimental data, the algorithm could begin to predict failure times based on the features of the acoustic emission alone.

Johnson and his co-workers chose to employ a random forest algorithm to predict the time before the next slip in part because—compared with neural networks and other popular machine learning algorithms—random forests are relatively easy to interpret. The algorithm essentially works like a decision tree in which each branch splits the data set according to some statistical feature. The tree thus preserves a record of which features the algorithm used to make its predictions—and the relative importance of each feature in helping the algorithm arrive at those predictions.
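
As a rough illustration of that kind of pipeline (not the Los Alamos team's actual code), the sketch below chops a signal into windows, computes a handful of simple statistics as a stand-in for their roughly 80 features, trains a scikit-learn random forest to predict time-to-failure, and then reads off the feature importances. The signal, labels, window length, and model settings are all placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

FEATURE_NAMES = ["mean", "variance", "peak", "p90"]

def window_features(signal, window):
    """Chop a 1-D acoustic record into fixed-length windows and compute a few
    simple statistics per window (a small stand-in for the ~80 features)."""
    n = len(signal) // window
    feats = []
    for i in range(n):
        seg = signal[i * window:(i + 1) * window]
        feats.append([
            seg.mean(),                      # mean signal level
            seg.var(),                       # variance about the mean
            np.abs(seg).max(),               # peak amplitude, a crude precursor proxy
            np.percentile(np.abs(seg), 90),  # high quantile of the amplitude
        ])
    return np.array(feats)

# Placeholder data: in the real experiment, `acoustic` is the recorded emission
# and `time_to_failure` is known in hindsight for each window.
rng = np.random.default_rng(0)
acoustic = rng.normal(size=200_000)
X = window_features(acoustic, window=1_000)
time_to_failure = rng.uniform(0.0, 10.0, size=len(X))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, time_to_failure)

# Random forests keep a record of how useful each feature was; in the
# Los Alamos experiments the variance dominated.
for name, importance in zip(FEATURE_NAMES, model.feature_importances_):
    print(f"{name:>8}: {importance:.3f}")
```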

3
A polarizing lens shows the buildup of stress as a model tectonic plate slides laterally along a fault line in an experiment at Los Alamos National Laboratory. Photograph: Los Alamos National Laboratory

When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake. Rather, it was the variance—a measure of how the signal fluctuates about the mean—and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.

The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events. The Los Alamos result suggested that everyone had been looking in the wrong place—that the key to prediction lay instead in the more subtle information broadcast during the relatively calm periods between the big seismic events.

To be sure, sliding blocks don’t begin to capture the chemical, thermal and morphological complexity of true geological faults. To show that machine learning could predict real earthquakes, Johnson needed to test it out on a real fault. What better place to do that, he figured, than in the Pacific Northwest?

Out of the Lab

Most if not all of the places on Earth that can experience a magnitude 9 earthquake are subduction zones, where one tectonic plate dives beneath another. A subduction zone just east of Japan was responsible for the Tohoku earthquake and the subsequent tsunami that devastated the country’s coastline in 2011. One day, the Cascadia subduction zone, where the Juan de Fuca plate dives beneath the North American plate, will similarly devastate Puget Sound, Vancouver Island and the surrounding Pacific Northwest.

Cascadia plate zones

Cascadia subduction zone

The Cascadia subduction zone stretches along roughly 1,000 kilometers of the Pacific coastline from Cape Mendocino in Northern California to Vancouver Island. The last time it ruptured, in January 1700, it begot a magnitude 9 temblor and a tsunami that reached the coast of Japan. Geological records suggest that throughout the Holocene, the fault has produced such megaquakes roughly once every half-millennium, give or take a few hundred years. Statistically speaking, the next big one is due any century now.

That’s one reason seismologists have paid such close attention to the region’s slow slip earthquakes. The slow slips in the lower reaches of a subduction-zone fault are thought to transmit small amounts of stress to the brittle crust above, where fast, catastrophic quakes occur. With each slow slip in the Puget Sound-Vancouver Island area, the chances of a Pacific Northwest megaquake ratchet up ever so slightly. Indeed, a slow slip was observed in Japan in the month leading up to the Tohoku quake.

For Johnson, however, there’s another reason to pay attention to slow slip earthquakes: They produce lots and lots of data. For comparison, there have been no major fast earthquakes on the stretch of fault between Puget Sound and Vancouver Island in the past 12 years. In the same time span, the fault has produced a dozen slow slips, each one recorded in a detailed seismic catalog.

That seismic catalog is the real-world counterpart to the acoustic recordings from Johnson’s laboratory earthquake experiment. Just as they did with the acoustic recordings, Johnson and his co-workers chopped the seismic data into small segments, characterizing each segment with a suite of statistical features. They then fed that training data, along with information about the timing of past slow slip events, to their machine learning algorithm.

After being trained on data from 2007 to 2013, the algorithm was able to make predictions about slow slips that occurred between 2013 and 2018, based on the data logged in the months before each event. The key feature was the seismic energy, a quantity closely related to the variance of the acoustic signal in the laboratory experiments. Like the variance, the seismic energy climbed in a characteristic fashion in the run-up to each slow slip.
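
The exact feature engineering for the Cascadia study isn't spelled out here, but the key quantity, an energy-like statistic computed window by window, can be sketched in a few lines. The synthetic trace and window length below are placeholders chosen only to show the ramp-up behavior described above.

```python
import numpy as np

def windowed_energy(trace, window):
    """Sum of squared amplitudes per window: an energy-like statistic
    analogous to the acoustic variance used in the lab study."""
    n = len(trace) // window
    segments = trace[:n * window].reshape(n, window)
    return (segments ** 2).sum(axis=1)

# Synthetic stand-in for a continuous record whose background level grows
# toward a pretend slow slip; real inputs would come from the region's
# seismic catalog, not this toy array.
rng = np.random.default_rng(1)
trace = rng.normal(scale=np.linspace(1.0, 3.0, 500_000))
energy = windowed_energy(trace, window=5_000)
print("first windows:", np.round(energy[:3]))
print("last windows: ", np.round(energy[-3:]))  # roughly 9x larger: the ramp-up
```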

The Cascadia forecasts weren’t quite as accurate as the ones for laboratory quakes. The correlation coefficients characterizing how well the predictions fit observations were substantially lower in the new results than they were in the laboratory study. Still, the algorithm was able to predict all but one of the five slow slips that occurred between 2013 and 2018, pinpointing the start times, Johnson says, to within a matter of days. (A slow slip that occurred in August 2019 wasn’t included in the study.)

For de Hoop, the big takeaway is that “machine learning techniques have given us a corridor, an entry into searching in data to look for things that we have never identified or seen before.” But he cautions that there’s more work to be done. “An important step has been taken—an extremely important step. But it is like a tiny little step in the right direction.”

Sobering Truths

The goal of earthquake forecasting has never been to predict slow slips. Rather, it’s to predict sudden, catastrophic quakes that pose danger to life and limb. For the machine learning approach, this presents a seeming paradox: The biggest earthquakes, the ones that seismologists would most like to be able to foretell, are also the rarest. How will a machine learning algorithm ever get enough training data to predict them with confidence?

The Los Alamos group is betting that their algorithms won’t actually need to train on catastrophic earthquakes to predict them. Recent studies suggest that the seismic patterns before small earthquakes are statistically similar to those of their larger counterparts, and on any given day, dozens of small earthquakes may occur on a single fault. A computer trained on thousands of those small temblors might be versatile enough to predict the big ones. Machine learning algorithms might also be able to train on computer simulations of fast earthquakes that could one day serve as proxies for real data.

But even so, scientists will confront this sobering truth: Although the physical processes that drive a fault to the brink of an earthquake may be predictable, the actual triggering of a quake—the growth of a small seismic disturbance into full-blown fault rupture—is believed by most scientists to contain at least an element of randomness. Assuming that’s so, no matter how well machines are trained, they may never be able to predict earthquakes as well as scientists predict other natural disasters.

“We don’t know what forecasting in regards to timing means yet,” Johnson said. “Would it be like a hurricane? No, I don’t think so.”

In the best-case scenario, predictions of big earthquakes will probably have time bounds of weeks, months or years. Such forecasts probably couldn’t be used, say, to coordinate a mass evacuation on the eve of a temblor. But they could increase public preparedness, help public officials target their efforts to retrofit unsafe buildings, and otherwise mitigate hazards of catastrophic earthquakes.

Johnson sees that as a goal worth striving for. Ever the realist, however, he knows it will take time. “I’m not saying we’re going to predict earthquakes in my lifetime,” he said, “but … we’re going to make a hell of a lot of progress.”

See the full article here.

Earthquake Alert

1

Earthquake Alert

The Earthquake Network project is a research effort that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves using their on-board accelerometers. When an earthquake is detected, a warning is issued to alert the population not yet reached by the damaging waves.

The project started on January 1, 2013, with the release of the Android application of the same name. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

Get the app in the Google Play store.

3
Smartphone network spatial distribution (green and red dots) on December 4, 2015

Meet The Quake-Catcher Network

QCN bloc

Quake-Catcher Network

The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

After almost eight years at Stanford, and a year at CalTech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
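
QCN's own trigger logic isn't detailed here, but a common generic way to pick strong new motions out of an accelerometer stream is a short-term/long-term average (STA/LTA) ratio. The sketch below uses that approach with illustrative window lengths and threshold; it is not QCN's implementation.

```python
import numpy as np

def sta_lta_triggers(accel, sta_len=50, lta_len=1000, threshold=4.0):
    """Flag samples where the short-term average amplitude jumps well above the
    long-term background. Window lengths and threshold are illustrative only."""
    a = np.abs(accel)
    csum = np.cumsum(np.insert(a, 0, 0.0))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len   # short-term moving average
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len   # long-term moving average
    n = min(len(sta), len(lta))                          # align both to the stream's tail
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    offset = len(accel) - n                              # convert back to sample indices
    return np.where(ratio > threshold)[0] + offset

# Toy accelerometer stream: quiet background with a burst of shaking at sample 15,000.
rng = np.random.default_rng(2)
stream = rng.normal(scale=0.01, size=20_000)
stream[15_000:15_500] += rng.normal(scale=0.5, size=500)
hits = sta_lta_triggers(stream)
print("first triggered sample:", hits[0] if len(hits) else "none")
```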

There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors:

1) By mounting them to the floor, they measure more reliable shaking than mobile devices.
2) These sensors typically have lower noise and better resolution of 3D motion.
3) Desktops are often left on and do not move.
4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance.
5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

BOINC, the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing, and citizen cyberscience.

Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The network links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

Below, the QCN Quake Catcher Network map
QCN Quake Catcher Network map

ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

Watch a video describing how ShakeAlert works in English or Spanish.

The primary project partners include:

United States Geological Survey
California Governor’s Office of Emergency Services (CalOES)
California Geological Survey
California Institute of Technology
University of California Berkeley
University of Washington
University of Oregon
Gordon and Betty Moore Foundation

The Earthquake Threat

Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

Part of the Solution

Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
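
As a back-of-the-envelope illustration, the warning time at a given site is roughly the S-wave travel time minus the P-wave travel time, minus however long detection and alert delivery take. The wave speeds and processing delay below are assumed, typical values rather than ShakeAlert parameters; negative results correspond to sites so close to the epicenter that the shaking arrives before the alert.

```python
# Typical crustal wave speeds and an assumed processing delay; illustrative
# numbers only, not ShakeAlert system parameters.
VP_KM_S = 6.5              # P-wave speed
VS_KM_S = 3.7              # S-wave speed (the damaging shaking)
PROCESSING_DELAY_S = 5.0   # assumed time to detect, characterize, and deliver the alert

def warning_time_s(distance_km):
    """Seconds of notice before strong shaking arrives at a site this far from
    the epicenter; negative means the shaking beats the alert."""
    return distance_km / VS_KM_S - distance_km / VP_KM_S - PROCESSING_DELAY_S

for d in (20, 50, 100, 200):
    print(f"{d:>4} km away: ~{warning_time_s(d):5.1f} s of warning")
```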

Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

System Goal

The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

Current Status

The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

In February 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California; Oregon and Washington joined in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic failover if a connection is lost.

This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

Authorities

The USGS will develop and operate the ShakeAlert system and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazards Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

For More Information

Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
rdegroot@usgs.gov
626-583-7225

Learn more about EEW Research

ShakeAlert Fact Sheet

ShakeAlert Implementation Plan

Please help promote STEM in your local schools.

Stem Education Coalition

Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

LANL campus

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

#ai-and-machine-learning, #earthquake-alert-network, #earthquakes, #lanl-los-alamos-national-lab, #qcn-quake-catcher-network, #shake-alert-system, #wired

From WIRED: “Are We All Wrong About Black Holes?”

Wired logo

From WIRED

09.08.2019
Brendan Z. Foster

1
Craig Callender, a philosopher of science at the University of California San Diego, argues that the connection between black holes and thermodynamics is less ironclad than assumed. Photograph: Peggy Peattie/Quanta Magazine

In the early 1970s, people studying general relativity, our modern theory of gravity, noticed rough similarities between the properties of black holes and the laws of thermodynamics. Stephen Hawking proved that the area of a black hole’s event horizon—the surface that marks its boundary—cannot decrease. That sounded suspiciously like the second law of thermodynamics, which says entropy—a measure of disorder—cannot decrease.

Yet at the time, Hawking and others emphasized that the laws of black holes only looked like thermodynamics on paper; they did not actually relate to thermodynamic concepts like temperature or entropy.

Then in quick succession, a pair of brilliant results—one by Hawking himself—suggested that the equations governing black holes were in fact actual expressions of the thermodynamic laws applied to black holes. In 1972, Jacob Bekenstein argued that a black hole’s surface area was proportional to its entropy [Physical Review D], and thus the second law similarity was a true identity. And in 1974, Hawking found that black holes appear to emit radiation [Nature]—what we now call Hawking radiation—and this radiation would have exactly the same “temperature” in the thermodynamic analogy.
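
For orientation, the two results are usually written as follows (standard textbook expressions, quoted here rather than taken from the article): Bekenstein’s entropy is proportional to the horizon area A, and Hawking’s temperature falls off with the black hole’s mass M.

```latex
S_{\mathrm{BH}} = \frac{k_B\, c^3 A}{4 G \hbar},
\qquad
T_{\mathrm{H}} = \frac{\hbar\, c^3}{8 \pi G M k_B}
```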

This connection gave physicists a tantalizing window into what many consider the biggest problem in theoretical physics—how to combine quantum mechanics, our theory of the very small, with general relativity. After all, thermodynamics comes from statistical mechanics, which describes the behavior of all the unseen atoms in a system. If a black hole is obeying thermodynamic laws, we can presume that a statistical description of all its fundamental, indivisible parts can be made. But in the case of a black hole, those parts aren’t atoms. They must be a kind of basic unit of gravity that makes up the fabric of space and time.

Modern researchers insist that any candidate for a theory of quantum gravity must explain how the laws of black hole thermodynamics arise from microscopic gravity, and in particular, why the entropy-to-area connection happens. And few question the truth of the connection between black hole thermodynamics and ordinary thermodynamics.

But what if the connection between the two really is little more than a rough analogy, with little physical reality? What would that mean for the past decades of work in string theory, loop quantum gravity, and beyond? Craig Callender, a philosopher of science at the University of California, San Diego, argues that the notorious laws of black hole thermodynamics may be nothing more than a useful analogy stretched too far [Phil Sci]. The interview has been condensed and edited for clarity.

Why did people ever think to connect black holes and thermodynamics?

Callender: In the early ’70s, people noticed a few similarities between the two. One is that both seem to possess an equilibrium-like state. I have a box of gas. It can be described by a small handful of parameters—say, pressure, volume, and temperature. Same thing with a black hole. It might be described with just its mass, angular momentum, and charge. Further details don’t matter to either system.

Nor does this state tell me what happened beforehand. I walk into a room and see a box of gas with stable values of pressure, volume and temperature. Did it just settle into that state, or did that happen last week, or perhaps a million years ago? Can’t tell. The black hole is similar. You can’t tell what type of matter fell in or when it collapsed.

The second feature is that Hawking proved that the area of black holes is always non-decreasing. That reminds one of the thermodynamic second law, that entropy always increases. So both systems seem to be heading toward simply described states.

Now grab a thermodynamics textbook, locate the laws, and see if you can find true statements when you replace the thermodynamic terms with black hole variables. In many cases you can, and the analogy improves.
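
The exercise Callender describes amounts to a substitution dictionary. In the standard correspondence (written here in units with G = c = ħ = k_B = 1, as a reference point rather than a quote from the interview), mass plays the role of energy, surface gravity κ plays the role of temperature, and horizon area plays the role of entropy, so the first laws line up term by term:

```latex
dE = T\,dS - P\,dV
\quad\longleftrightarrow\quad
dM = \frac{\kappa}{8\pi}\,dA + \Omega\,dJ + \Phi\,dQ,
\qquad
E \leftrightarrow M,\quad T \leftrightarrow \frac{\kappa}{2\pi},\quad S \leftrightarrow \frac{A}{4}
```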

Hawking then discovers Hawking radiation, which further improves the analogy. At that point, most physicists start claiming the analogy is so good that it’s more than an analogy—it’s an identity! That’s a super-strong and surprising claim. It says that black hole laws, most of which are features of the geometry of space-time, are somehow identical to the physical principles underlying the physics of steam engines.

Because the identity plays a huge role in quantum gravity, I want to reconsider this identity claim. Few in the foundations of physics have done so.

So what’s the statistical mechanics for black holes?

Well, that’s a good question. Why does ordinary thermodynamics hold? Well, we know that all these macroscopic thermodynamic systems are composed of particles. The laws of thermodynamics turn out to be descriptions of the most statistically likely configurations to happen from the microscopic point of view.

Why does black hole thermodynamics hold? Are the laws also the statistically most likely way for black holes to behave? Although there are speculations in this direction, so far we don’t have a solid microscopic understanding of black hole physics. Absent this, the identity claim seems even more surprising.

What led you to start thinking about the analogy?

Many people are worried about whether theoretical physics has become too speculative. There’s a lot of commentary about whether holography, the string landscape—all sorts of things—are tethered enough to experiment. I have similar concerns. So my former Ph.D. student John Dougherty and I thought, where did it all start?

To our mind a lot of it starts with this claimed identity between black holes and thermodynamics. When you look in the literature, you see people say, “The only evidence we have for quantum gravity, the only solid hint, is black hole thermodynamics.”

If that’s the main thing we’re bouncing off for quantum gravity, then we ought to examine it very carefully. If it turns out to be a poor clue, maybe it would be better to spread our bets a little wider, instead of going all in on this identity.

What problems do you see with treating a black hole as a thermodynamic system?

I see basically three. The first problem is: What is a black hole? People often think of black holes as just kind of a dark sphere, like in a Hollywood movie or something; they’re thinking of it like a star that collapsed. But a mathematical black hole, the basis of black hole thermodynamics, is not the material from the star that’s collapsed. That’s all gone into the singularity. The black hole is what’s left.

The black hole isn’t a solid thing at the center. The system is really the entire space-time.

Yes, it’s this global notion for which black hole thermodynamics was developed, in which case the system really is the whole space-time.

Here is another way to think about the worry. Suppose a star collapses and forms an event horizon. But now another star falls past this event horizon and it collapses, so it’s inside the first. You can’t think that each one has its own little horizon that is behaving thermodynamically. It’s only the one horizon.

Here’s another. The event horizon changes shape depending on what’s about to be thrown into it. It’s clairvoyant. Weird, but there is nothing spooky here so long as we remember that the event horizon is only defined globally. It’s not a locally observable quantity.

The picture is more counterintuitive than people usually think. To me, if the system is global, then it’s not at all like thermodynamics.

The second objection is: Black hole thermodynamics is really a pale shadow of thermodynamics. I was surprised to see the analogy wasn’t as thorough as I expected it to be. If you grab a thermodynamics textbook and start replacing claims with their black hole counterparts, you will not find the analogy goes that deep.


Craig Callender explains why the connection between black holes and thermodynamics is little more than an analogy.

For instance, the zeroth law of thermodynamics sets up the whole theory and a notion of equilibrium — the basic idea that the features of the system aren’t changing. And it says that if one system is in equilibrium with another — A with B, and B with C — then A must be in equilibrium with C. The foundation of thermodynamics is this equilibrium relation, which sets up the meaning of temperature.

The zeroth law for black holes is that the surface gravity of a black hole, which measures the gravitational acceleration, is a constant on the horizon. So that assumes temperature being constant is the zeroth law. That’s not really right. Here we see a pale shadow of the original zeroth law.

The counterpart of equilibrium is supposed to be “stationary,” a technical term that basically says the black hole is spinning at a constant rate. But there’s no sense in which one black hole can be “stationary with” another black hole. You can take any thermodynamic object and cut it in half and say one half is in equilibrium with the other half. But you can’t take a black hole and cut it in half. You can’t say that this half is stationary with the other half.

Here’s another way in which the analogy falls flat. Black hole entropy is given by the black hole area. Well, area is length squared, volume is length cubed. So what do we make of all those thermodynamic relations that include volume, like Boyle’s law? Is volume, which is length times area, really length times entropy? That would ruin the analogy. So we have to say that volume is not the counterpart of volume, which is surprising.
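
The scaling issue can be put in symbols (a standard Schwarzschild relation, added here only to make the dimensional point explicit): black hole entropy tracks an area, so any thermodynamic law written in terms of a volume has no obvious horizon-geometry counterpart.

```latex
S_{\mathrm{BH}} \propto A = 4\pi r_s^{2} = \frac{16\pi G^{2} M^{2}}{c^{4}}
\;\;(\text{an area, i.e. } [\text{length}]^{2}),
\qquad
V \sim [\text{length}]^{3} = \text{length} \times A
```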

The most famous connection between black holes and thermodynamics comes from the notion of entropy. For normal stuff, we think of entropy as a measure of the disorder of the underlying atoms. But in the 1970s, Jacob Bekenstein said that the surface area of a black hole’s event horizon is equivalent to entropy. What’s the basis of this?

This is my third concern. Bekenstein says, if I throw something into a black hole, the entropy vanishes. But this can’t happen, he thinks, according to the laws of thermodynamics, for entropy must always increase. So some sort of compensation must be paid when you throw things into a black hole.

Bekenstein notices a solution. When I throw something into the black hole, the mass goes up, and so does the area. If I identify the area of the black hole as the entropy, then I’ve found my compensation. There is a nice deal between the two—one goes down while the other one goes up—and it saves the second law.
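
Bekenstein’s compensation is usually stated as the generalized second law (standard form, included here for reference): the entropy outside the horizon plus the horizon’s area entropy never decreases.

```latex
\frac{d}{dt}\!\left( S_{\text{outside}} + \frac{k_B c^{3} A}{4 G \hbar} \right) \;\geq\; 0
```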

When I saw that I thought, aha, he’s thinking that not knowing about the system anymore means its entropy value has changed. I immediately saw that this is pretty objectionable, because it identifies entropy with uncertainty and our knowledge.

There’s a long debate in the foundations of statistical mechanics about whether entropy is a subjective notion or an objective notion. I’m firmly on the side of thinking it’s an objective notion. I think trees unobserved in a forest go to equilibrium regardless of what anyone knows about them or not, that the way heat flows has nothing to do with knowledge, and so on.

Chuck a steam engine behind the event horizon. We can’t know anything about it apart from its mass, but I claim it can still do as much work as before. If you don’t believe me, we can test this by having a physicist jump into the black hole and follow the steam engine! There is only need for compensation if you think that what you can no longer know about ceases to exist.

Do you think it’s possible to patch up black hole thermodynamics, or is it all hopeless?

My mind is open, but I have to admit that I’m deeply skeptical about it. My suspicion is that black hole “thermodynamics” is really an interesting set of relationships about information from the point of view of the exterior of the black hole. It’s all about forgetting information.

Because thermodynamics is more than information theory, I don’t think there’s a deep thermodynamic principle operating through the universe that causes black holes to behave the way they do, and I worry that physics is all in on it being a great hint for quantum gravity when it might not be.

Playing the role of the Socratic gadfly in the foundations of physics is sometimes important. In this case, looking back invites a bit of skepticism that may be useful going forward.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#albert-einsteins-theory-of-general-relativity, #astronomy, #astrophysics, #basic-research, #black-holes, #cosmology, #craig-callender, #hawking-radiation, #second-law-of-thermodynamics, #wired

From WIRED: “Space Photos of the Week: Tune Into Neptune”

Wired logo

From WIRED

08.31.2019
Shannon Stirone

Voyager 2 completed its historic flyby 30 years ago. No probe has been back since.

NASA/Voyager 2

2
We’re looking at Neptune’s south pole, just barely illuminated by the sun. Voyager 2 took this photo from 560,000 miles away, and even at that extreme distance, the camera on the spacecraft managed to pick up features in the atmosphere as small as 75 miles in diameter. One example: Look on the lower left at the edge of the curve, where you can see a bright white strip of clouds that appears to stretch upward into the shadow. NASA/JPL-Caltech.



Thirty years ago NASA’s Voyager 2 spacecraft flew past Neptune, completing its epic journey through the outer solar system. The eighth and outermost planet in our neighborhood, Neptune is considered one of the ice giants, along with Uranus. But that name is a misnomer since the planet is actually covered in gas, and whatever ice is below that is basically slushy.

When Voyager launched we had no idea what Uranus and Neptune looked like up close. The mission uncovered two worlds very unlike any other planets in our solar system, and we now know that both have rings, as well as robust storms and bizarre icy moons. And as scientists discover more exoplanets around other stars, many of them end up looking an awful lot like Neptune—which means that the Voyager 2 planetary data from long ago turns out to be a good model for other planets we might discover in the future.

3
Did you know Neptune has rings? Most large planets do. Voyager 2 snapped this photo in 1989 during its flyby, the first detailed image of those rings. Like those surrounding Jupiter and Uranus, Neptune’s rings are likely made of carbon-containing molecules that have been irradiated by the Sun and darkened as a result. NASA/JPL-Caltech

4
As Voyager flew by Neptune it kept turning its camera, capturing this beautiful image of a shadowed, crescent Neptune along with its moon Triton. Triton is dwarfed by the sheer size of Neptune, and the darkness of space around them and their shadow feels like a fitting ending to Voyager 2’s journey. NASA/JPL-Caltech

5
Triton from 25,000 miles away: This moon is one of the most interesting in the entire solar system. It’s covered with a snakeskin-textured terrain and even has dust devil-like plumes of nitrogen ice jutting out into space. Something else strange is happening on this desolate moon; the surface is pocked with circular depressions that don’t exist anywhere else in the solar system. Scientists suspect that the frozen substances on the surface could be sinking into the ground or melting away, but until we swing by there again, there’s no way to know for sure. NASA/JPL-Caltech

6
This image combines red and green filters on Voyager 2’s narrow angle camera to show off the true blue of Neptune’s rich atmosphere, composed mostly of helium, hydrogen, and methane. The methane in the upper atmosphere is responsible for absorbing all the red light from the Sun, which is why Neptune is such a deep azure. The winds in that atmosphere can move at speeds of 1,000 miles per hour, though, which keeps things mixed up: The dark oval storm in the north has since disappeared, and this is the only time it has been captured in a photo with the smaller storm below, nicknamed “Skeeter.” NASA/JPL-Caltech

7
The ultramarine blue planet almost glows from 4.4 million miles away. While Voyager 2’s mission to Neptune brought the astronomical community a whole new perspective on a far-off planet, it also introduced scientists to many more mysteries, which might not be solved for many decades. NASA/JPL-Caltech

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#astronomy, #astrophysics, #basic-research, #cosmology, #neptune, #voyager-2, #voyager-2-completed-its-historic-flyby-30-years-ago-no-probe-has-been-back-since, #wired

From WIRED: “Quantum Darwinism Could Explain What Makes Reality Real”

Wired logo

From WIRED

07.28.2019
Philip Ball

1
Contrary to popular belief, says physicist Adán Cabello, “quantum theory perfectly describes the emergence of the classical world.” Olena Shmahalo/Quanta Magazine

It’s not surprising that quantum physics has a reputation for being weird and counterintuitive. The world we’re living in sure doesn’t feel quantum mechanical. And until the 20th century, everyone assumed that the classical laws of physics devised by Isaac Newton and others—according to which objects have well-defined positions and properties at all times—would work at every scale. But Max Planck, Albert Einstein, Niels Bohr and their contemporaries discovered that down among atoms and subatomic particles, this concreteness dissolves into a soup of possibilities. An atom typically can’t be assigned a definite position, for example—we can merely calculate the probability of finding it in various places. The vexing question then becomes: How do quantum probabilities coalesce into the sharp focus of the classical world?

Physicists sometimes talk about this changeover as the “quantum-classical transition.” But in fact there’s no reason to think that the large and the small have fundamentally different rules, or that there’s a sudden switch between them. Over the past several decades, researchers have achieved a greater understanding of how quantum mechanics inevitably becomes classical mechanics through an interaction between a particle or other microscopic system and its surrounding environment.

One of the most remarkable ideas in this theoretical framework is that the definite properties of objects that we associate with classical physics—position and speed, say—are selected from a menu of quantum possibilities in a process loosely analogous to natural selection in evolution: The properties that survive are in some sense the “fittest.” As in natural selection, the survivors are those that make the most copies of themselves. This means that many independent observers can make measurements of a quantum system and agree on the outcome—a hallmark of classical behavior.

This idea, called quantum Darwinism (QD), explains a lot about why we experience the world the way we do rather than in the peculiar way it manifests at the scale of atoms and fundamental particles. Although aspects of the puzzle remain unresolved, QD helps heal the apparent rift between quantum and classical physics.

3
Chaoyang Lu (left) and Jian-Wei Pan of the University of Science and Technology of China in Hefei led a recent experiment that tested quantum Darwinism in an artificial environment made of interacting photons. Chaoyang Lu

Only recently, however, has quantum Darwinism been put to the experimental test. Three research groups, working independently in Italy, China and Germany, have looked for the telltale signature of the natural selection process by which information about a quantum system gets repeatedly imprinted on various controlled environments. These tests are rudimentary, and experts say there’s still much more to be done before we can feel sure that QD provides the right picture of how our concrete reality condenses from the multiple options that quantum mechanics offers. Yet so far, the theory checks out.

Survival of the Fittest

At the heart of quantum Darwinism is the slippery notion of measurement—the process of making an observation. In classical physics, what you see is simply how things are. You observe a tennis ball traveling at 200 kilometers per hour because that’s its speed. What more is there to say?

In quantum physics that’s no longer true. It’s not at all obvious what the formal mathematical procedures of quantum mechanics say about “how things are” in a quantum object; they’re just a prescription telling us what we might see if we make a measurement. Take, for example, the way a quantum particle can have a range of possible states, known as a “superposition.” This doesn’t really mean it is in several states at once; rather, it means that if we make a measurement we will see one of those outcomes. Before the measurement, the various superposed states interfere with one another in a wavelike manner, producing outcomes with higher or lower probabilities.
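
A toy calculation makes that last point concrete (a generic two-path example with made-up amplitudes, not tied to any experiment in the article): the probability of an outcome comes from squaring the summed amplitudes, and the cross term between superposed paths is the interference.

```python
import numpy as np

# Two indistinguishable paths to a detector, each with amplitude 1/2,
# so either path alone would give probability 1/4 (illustrative numbers).
phases = np.linspace(0.0, 2.0 * np.pi, 5)   # relative phase between the paths
a1 = 0.5 * np.ones_like(phases)             # amplitude via path 1
a2 = 0.5 * np.exp(1j * phases)              # amplitude via path 2

p_classical = np.abs(a1) ** 2 + np.abs(a2) ** 2   # distinguishable paths: probabilities just add
p_quantum = np.abs(a1 + a2) ** 2                  # superposed paths: swings between 0 and 1

for phase, pq, pc in zip(phases, p_quantum, p_classical):
    print(f"relative phase {phase:4.2f} rad -> quantum P = {pq:.2f}, classical sum = {pc:.2f}")
```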

But why can’t we see a quantum superposition? Why can’t all possibilities for the state of a particle survive right up to the human scale?

The answer often given is that superpositions are fragile, easily disrupted when a delicate quantum system is buffeted by its noisy environment. But that’s not quite right. When any two quantum objects interact, they get “entangled” with each other, entering a shared quantum state in which the possibilities for their properties are interdependent. So say an atom is put into a superposition of two possible states for the quantum property called spin: “up” and “down.” Now the atom is released into the air, where it collides with an air molecule and becomes entangled with it. The two are now in a joint superposition. If the atom is spin-up, then the air molecule might be pushed one way, while, if the atom is spin-down, the air molecule goes another way—and these two possibilities coexist. As the particles experience yet more collisions with other air molecules, the entanglement spreads, and the superposition initially specific to the atom becomes ever more diffuse. The atom’s superposed states no longer interfere coherently with one another because they are now entangled with other states in the surrounding environment—including, perhaps, some large measuring instrument. To that measuring device, it looks as though the atom’s superposition has vanished and been replaced by a menu of possible classical-like outcomes that no longer interfere with one another.
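
The loss of interference can be reproduced in a few lines (a toy model with assumed parameters, not a simulation of any real collision process): each “air molecule” qubit picks up a partial record of the system’s state, and the off-diagonal coherence of the system’s reduced density matrix shrinks as the product of the branch overlaps.

```python
import numpy as np

# Minimal decoherence sketch: a system qubit starts in (|0> + |1>)/sqrt(2); each
# environment qubit starts in |0> and gets rotated by an angle theta only if the
# system is |1>.  The off-diagonal element of the system's reduced density matrix
# is set by the overlap of the two environment branch states, so the superposition
# becomes harder to see on the system alone as more qubits interact with it.

theta = 0.8          # per-collision entangling strength (assumed)
ket0 = np.array([1.0, 0.0])
env_if_sys0 = ket0                                        # environment branch for system |0>
env_if_sys1 = np.array([np.cos(theta), np.sin(theta)])    # environment branch for system |1>

def coherence_after(n_env: int) -> float:
    """|rho_01| of the system's reduced density matrix after n_env interactions."""
    # Build the two environment branch states |E0>, |E1> as tensor products.
    E0, E1 = np.array([1.0]), np.array([1.0])
    for _ in range(n_env):
        E0 = np.kron(E0, env_if_sys0)
        E1 = np.kron(E1, env_if_sys1)
    # Joint state: (|0>|E0> + |1>|E1>)/sqrt(2); tracing out the environment leaves
    # rho_01 = <E1|E0>/2, so the coherence is set by the branch overlap.
    return abs(np.dot(E0, E1)) / 2

for n in (0, 1, 2, 4, 8, 12):
    print(f"{n:>2} environment qubits: |rho_01| = {coherence_after(n):.4f}")
```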

This process by which “quantumness” disappears into the environment is called decoherence. It’s a crucial part of the quantum-classical transition, explaining why quantum behavior becomes hard to see in large systems with many interacting particles. The process happens extremely fast. If a typical dust grain floating in the air were put into a quantum superposition of two different physical locations separated by about the width of the grain itself, collisions with air molecules would cause decoherence—making the superposition undetectable—in about 10⁻³¹ seconds. Even in a vacuum, light photons would trigger such decoherence very quickly: You couldn’t look at the grain without destroying its superposition.

Surprisingly, although decoherence is a straightforward consequence of quantum mechanics, it was only identified in the 1970s, by the late German physicist Heinz-Dieter Zeh. The Polish-American physicist Wojciech Zurek further developed the idea in the early 1980s and made it better known, and there is now good experimental support for it.

5
Wojciech Zurek, a theoretical physicist at Los Alamos National Laboratory in New Mexico, developed the quantum Darwinism theory in the 2000s to account for the emergence of objective, classical reality. Los Alamos National Laboratory

But to explain the emergence of objective, classical reality, it’s not enough to say that decoherence washes away quantum behavior and thereby makes it appear classical to an observer. Somehow, it’s possible for multiple observers to agree about the properties of quantum systems. Zurek, who works at Los Alamos National Laboratory in New Mexico, argues that two things must therefore be true.

First, quantum systems must have states that are especially robust in the face of disruptive decoherence by the environment. Zurek calls these “pointer states,” because they can be encoded in the possible states of a pointer on the dial of a measuring instrument. A particular location of a particle, for instance, or its speed, the value of its quantum spin, or its polarization direction can be registered as the position of a pointer on a measuring device. Zurek argues that classical behavior—the existence of well-defined, stable, objective properties—is possible only because pointer states of quantum objects exist.

What’s special mathematically about pointer states is that the decoherence-inducing interactions with the environment don’t scramble them: Either the pointer state is preserved, or it is simply transformed into a state that looks nearly identical. This implies that the environment doesn’t squash quantumness indiscriminately but selects some states while trashing others. A particle’s position is resilient to decoherence, for example. Superpositions of different locations, however, are not pointer states: Interactions with the environment decohere them into localized pointer states, so that only one can be observed. Zurek described this “environment-induced superselection” of pointer states in the 1980s [Physical Review D].
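
A common shorthand for this selection criterion (a textbook idealization, noted here for orientation rather than drawn from the article) is that pointer observables are the ones that commute with the system-environment coupling, so monitoring by the environment leaves them undisturbed:

```latex
\left[\, \hat{O}_{\text{pointer}},\ \hat{H}_{\text{int}} \,\right] = 0
```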

But there’s a second condition that a quantum property must meet to be observed. Although immunity to interaction with the environment assures the stability of a pointer state, we still have to get at the information about it somehow. We can do that only if it gets imprinted in the object’s environment. When you see an object, for example, that information is delivered to your retina by the photons scattering off it. They carry information to you in the form of a partial replica of certain aspects of the object, saying something about its position, shape and color. Lots of replicas are needed if many observers are to agree on a measured value—a hallmark of classicality. Thus, as Zurek argued in the 2000s, our ability to observe some property depends not only on whether it is selected as a pointer state, but also on how substantial a footprint it makes in the environment. The states that are best at creating replicas in the environment—the “fittest,” you might say—are the only ones accessible to measurement. That’s why Zurek calls the idea quantum Darwinism [Nature Physics].

It turns out that the same stability property that promotes environment-induced superselection of pointer states also promotes quantum Darwinian fitness, or the capacity to generate replicas. “The environment, through its monitoring efforts, decoheres systems,” Zurek said, “and the very same process that is responsible for decoherence should inscribe multiple copies of the information in the environment.”

Information Overload

It doesn’t matter, of course, whether information about a quantum system that gets imprinted in the environment is actually read out by a human observer; all that matters for classical behavior to emerge is that the information get there so that it could be read out in principle. “A system doesn’t have to be under study in any formal sense” to become classical, said Jess Riedel, a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a proponent of quantum Darwinism.


“QD putatively explains, or helps to explain, all of classicality, including everyday macroscopic objects that aren’t in a laboratory, or that existed before there were any humans.”

About a decade ago, while Riedel was working as a graduate student with Zurek, the two showed theoretically that information from some simple, idealized quantum systems is “copied prolifically into the environment,” Riedel said, “so that it’s necessary to access only a small amount of the environment to infer the value of the variables.” They calculated [Physical Review Letters] that a grain of dust one micrometer across, after being illuminated by the sun for just one microsecond, will have its location imprinted about 100 million times in the scattered photons.

It’s because of this redundancy that objective, classical-like properties exist at all. Ten observers can each measure the position of a dust grain and find that it’s in the same location, because each can access a distinct replica of the information. In this view, we can assign an objective “position” to the speck not because it “has” such a position (whatever that means) but because its position state can imprint many identical replicas in the environment, so that different observers can reach a consensus.

What’s more, you don’t have to monitor much of the environment to gather most of the available information—and you don’t gain significantly more by monitoring more than a fraction of the environment. “The information one can gather about the system quickly saturates,” Riedel said.
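
That saturation can be seen in a toy model (the parameters below are assumptions chosen for illustration, not the Riedel-Zurek calculation): for a system qubit that imprints an imperfect record of itself on N environment qubits, the mutual information between the system and a fragment of k environment qubits climbs quickly to the classical plateau of 1 bit, and only reaches the full 2 bits when the entire environment is read.

```python
import numpy as np

# Sketch of the quantum Darwinism "redundancy plateau".  A system qubit in
# (|0> + |1>)/sqrt(2) leaves an imperfect record of itself on each of N
# environment qubits.  For such a two-branch state, every reduced density matrix
# is an equal mixture of the two branch records on that subsystem, with
# eigenvalues (1 +/- |overlap|)/2, so the mutual information I(S:F) with a
# fragment F of k qubits can be computed from branch overlaps alone.

N = 12          # number of environment qubits (assumed)
c = 0.4         # per-qubit overlap of the two records; smaller means better copies

def binary_entropy(p: float) -> float:
    """Shannon entropy (in bits) of a two-outcome distribution {p, 1 - p}."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_from_overlap(overlap: float) -> float:
    """Von Neumann entropy of an equal mixture of two states with given overlap."""
    return binary_entropy((1 + abs(overlap)) / 2)

S_system = 1.0  # the system's two branches are orthogonal, so S(rho_S) = 1 bit
for k in range(N + 1):
    S_fragment = entropy_from_overlap(c ** k)        # fragment of k environment qubits
    S_joint = entropy_from_overlap(c ** (N - k))     # equals S(rest), since the global state is pure
    mutual_info = S_system + S_fragment - S_joint
    print(f"fragment of {k:>2} qubits: I(S:F) = {mutual_info:.3f} bits")
```

In this sketch a handful of qubits already carries essentially all the classical information about the system, and reading more of the environment adds almost nothing until the very end.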

This redundancy is the distinguishing feature of QD, explained Mauro Paternostro, a physicist at Queen’s University Belfast who was involved in one of the three new experiments. “It’s the property that characterizes the transition towards classicality,” he said.

Quantum Darwinism challenges a common myth about quantum mechanics, according to the theoretical physicist Adán Cabello of the University of Seville in Spain: namely, that the transition between the quantum and classical worlds is not understood and that measurement outcomes cannot be described by quantum theory. On the contrary, he said, “quantum theory perfectly describes the emergence of the classical world.”

Just how perfectly remains contentious, however. Some researchers think decoherence and QD provide a complete account of the quantum-classical transition. But although these ideas attempt to explain why superpositions vanish at large scales and why only concrete “classical” properties remain, there’s still the question of why measurements give unique outcomes. When a particular location of a particle is selected, what happens to the other possibilities inherent in its quantum description? Were they ever in any sense real? Researchers are compelled to adopt philosophical interpretations of quantum mechanics precisely because no one can figure out a way to answer that question experimentally.

Into the Lab

Quantum Darwinism looks fairly persuasive on paper. But until recently that was as far as it got. In the past year, three teams of researchers have independently put the theory to the experimental test by looking for its key feature: how a quantum system imprints replicas of itself on its environment.

The experiments depended on the ability to closely monitor what information about a quantum system gets imparted to its environment. That’s not feasible for, say, a dust grain floating among countless billions of air molecules. So two of the teams created a quantum object in a kind of “artificial environment” with only a few particles in it. Both experiments—one by Paternostro [Physical Review A] and collaborators at Sapienza University of Rome, and the other by the quantum-information expert Jian-Wei Pan [https://arxiv.org/abs/1808.07388] and co-authors at the University of Science and Technology of China—used a single photon as the quantum system, with a handful of other photons serving as the “environment” that interacts with it and broadcasts information about it.

Both teams passed laser photons through optical devices that could combine them into multiply entangled groups. They then interrogated the environment photons to see what information they encoded about the system photon’s pointer state—in this case its polarization (the orientation of its oscillating electromagnetic fields), one of the quantum properties able to pass through the filter of quantum Darwinian selection.

A key prediction of QD is the saturation effect: Pretty much all the information you can gather about the quantum system should be available if you monitor just a handful of surrounding particles. “Any small fraction of the interacting environment is enough to provide the maximal classical information about the observed system,” Pan said.

The two teams found precisely this. Measurements of just one of the environment photons revealed a lot of the available information about the system photon’s polarization, and measuring an increasing fraction of the environment photons provided diminishing returns. Even a single photon can act as an environment that introduces decoherence and selection, Pan explained, if it interacts strongly enough with the lone system photon. When interactions are weaker, a larger environment must be monitored.

6
Fedor Jelezko, director of the Institute for Quantum Optics at Ulm University in Germany. Ulm University

7
A team led by Jelezko probed the state of a nitrogen “defect” inside a synthetic diamond (shown mounted on the right) by monitoring surrounding carbon atoms. Their findings confirmed predictions of a theory known as quantum Darwinism.
Ulm University

The third experimental test of QD, led by the quantum-optical physicist Fedor Jelezko at Ulm University in Germany in collaboration with Zurek and others, used a very different system and environment, consisting of a lone nitrogen atom substituting for a carbon atom in the crystal lattice of a diamond—a so-called nitrogen-vacancy defect. Because the nitrogen atom has one more electron than carbon, this excess electron cannot pair up with those on neighboring carbon atoms to form a chemical bond. As a result, the nitrogen atom’s unpaired electron acts as a lone “spin,” which is like an arrow pointing up or down or, in general, in a superposition of both possible directions.

This spin can interact magnetically with those of the roughly 0.3 percent of carbon nuclei present in the diamond as the isotope carbon-13, which, unlike the more abundant carbon-12, also has spin. On average, each nitrogen-vacancy spin is strongly coupled to four carbon-13 spins within a distance of about 1 nanometer.

By controlling and monitoring the spins using lasers and radio-frequency pulses, the researchers could measure how a change in the nitrogen spin is registered by changes in the nuclear spins of the environment. As they reported in a preprint last September, they too observed the characteristic redundancy predicted by QD: The state of the nitrogen spin is “recorded” as multiple copies in the surroundings, and the information about the spin saturates quickly as more of the environment is considered.

Zurek says that because the photon experiments create copies in an artificial way that simulates an actual environment, they don’t incorporate a selection process that picks out “natural” pointer states resilient to decoherence. Rather, the researchers themselves impose the pointer states. In contrast, the diamond environment does elicit pointer states. “The diamond scheme also has problems, because of the size of the environment,” Zurek added, “but at least it is, well, natural.”

Generalizing Quantum Darwinism

So far, so good for quantum Darwinism. “All these studies see what is expected, at least approximately,” Zurek said.

Riedel says we could hardly expect otherwise, though: In his view, QD is really just the careful and systematic application of standard quantum mechanics to the interaction of a quantum system with its environment. Although this is virtually impossible to do in practice for most quantum measurements, if you can sufficiently simplify a measurement, the predictions are clear, he said: “QD is most like an internal self-consistency check on quantum theory itself.”

But although these studies seem consistent with QD, they can’t be taken as proof that it is the sole description for the emergence of classicality, or even that it’s wholly correct. For one thing, says Cabello, the three experiments offer only schematic versions of what a real environment consists of. What’s more, the experiments don’t cleanly rule out other ways to view the emergence of classicality. A theory called “spectrum broadcasting,” for example, developed by Pawel Horodecki at the Gdańsk University of Technology in Poland and collaborators, attempts to generalize QD. Spectrum broadcast theory (which has only been worked through for a few idealized cases) identifies those states of an entangled quantum system and environment that provide objective information that many observers can obtain without perturbing it. In other words, it aims to ensure not just that different observers can access replicas of the system in the environment, but that by doing so they don’t affect the other replicas. That too is a feature of genuinely “classical” measurements.

Horodecki and other theorists have also sought to embed QD in a theoretical framework that doesn’t demand any arbitrary division of the world into a system and its environment, but just considers how classical reality can emerge from interactions between various quantum systems. Paternostro says it might be challenging to find experimental methods capable of identifying the rather subtle distinctions between the predictions of these theories.

Still, researchers are trying, and the very attempt should refine our ability to probe the workings of the quantum realm. “The best argument for performing these experiments probably is that they are good exercise,” Riedel said. “Directly illustrating QD can require some very difficult measurements that will push the boundaries of existing laboratory techniques.” The only way we can find out what measurement really means, it seems, is by making better measurements.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#a-quantum-particle-can-have-a-range-of-possible-states-known-as-a-superposition, #quantum-classical-transition, #but-why-cant-we-see-a-quantum-superposition, #classical-mechanics, #darwin-survival-of-the-fittest, #many-independent-observers-can-make-measurements-of-a-quantum-system-and-agree-on-the-outcome-a-hallmark-of-classical-behavior, #quantum-darwinism, #quantum-entanglement, #quantum-physics, #the-definite-properties-of-objects-that-we-associate-with-classical-physics-position-and-speed-say-are-selected-from-a-menu-of-quantum-possibilities, #the-process-is-loosely-analogous-to-natural-selection-in-evolution, #the-vexing-question-then-becomes-how-do-quantum-probabilities-coalesce-into-the-sharp-focus-of-the-classical-world, #this-doesnt-really-mean-it-is-in-several-states-at-once-rather-it-means-that-if-we-make-a-measurement-we-will-see-one-of-those-outcomes, #this-process-by-which-quantumness-disappears-into-the-environment-is-called-decoherence, #wired