Tagged: Scientific American

  • richardmitnick 9:11 pm on July 2, 2021 Permalink | Reply
    Tags: "AI Designs Quantum Physics Experiments Beyond What Any Human Has Conceived", , MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons. How?, MELVIN was a machine-learning algorithm., , Scientific American, The algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s., When two photons interact they become entangled and both can only be mathematically described using a single shared quantum state.   

    From Scientific American : “AI Designs Quantum Physics Experiments Beyond What Any Human Has Conceived” 

    From Scientific American

    July 2, 2021
    Anil Ananthaswamy

    Credit: Getty Images.

    Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.

    “The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn and his colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.

    “When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto (CA). Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, Anton Zeilinger of the University of Vienna [Universität Wien] (AT) and their colleagues have refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.

    “It is amazing work,” says theoretical quantum physicist Renato Renner of the Institute for Theoretical Physics at the Swiss Federal Institute of Technology ETH Zürich [Eidgenössische Technische Hochschule Zürich] (CH), who reviewed a 2020 study about THESEUS by Krenn and Zeilinger but was not directly involved in these efforts.

    Krenn stumbled on this entire research program somewhat by accident when he and his colleagues were trying to figure out how to experimentally create quantum states of photons entangled in a very particular manner: When two photons interact, they become entangled, and both can only be mathematically described using a single shared quantum state. If you measure the state of one photon, the measurement instantly fixes the state of the other even if the two are kilometers apart (hence Einstein’s derisive comments on entanglement being “spooky”).

    In 1989 three physicists—Daniel Greenberger, the late Michael Horne and Zeilinger—described an entangled state that came to be known as “GHZ” (after their initials). It involved four photons, each of which could be in a quantum superposition of, say, two states, 0 and 1 (a quantum state called a qubit). In their paper, the GHZ state involved entangling four qubits such that the entire system was in a two-dimensional quantum superposition of states 0000 and 1111. If you measured one of the photons and found it in state 0, the superposition would collapse, and the other photons would also be in state 0. The same went for state 1. In the late 1990s Zeilinger and his colleagues experimentally observed GHZ states using three qubits for the first time.
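    In the standard bra-ket notation (spelled out here for readers who want it; the notation itself is not in the article), the four-qubit GHZ state is

    \[
    |\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0000\rangle + |1111\rangle\bigr),
    \]

    so measuring any one photon in state $|0\rangle$ projects the other three onto $|000\rangle$, and likewise for state $|1\rangle$.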

    Krenn and his colleagues were aiming for GHZ states of higher dimensions. They wanted to work with three photons, where each photon had a dimensionality of three, meaning it could be in a superposition of three states: 0, 1 and 2. This quantum state is called a qutrit. The entanglement the team was after was a three-dimensional GHZ state that was a superposition of states 000, 111 and 222. Such states are important ingredients for secure quantum communications and faster quantum computing. In late 2013 the researchers spent weeks designing experiments on blackboards and doing the calculations to see if their setups could generate the required quantum states. But each time they failed. “I thought, ‘This is absolutely insane. Why can’t we come up with a setup?’” Krenn says.
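    As a concrete illustration (a minimal numerical sketch of our own, not code from Krenn's group), the three-qutrit GHZ state the team was after can be written down directly as a vector in the 27-dimensional Hilbert space of three qutrits:

```python
import numpy as np

d = 3                                # qutrit dimension: states 0, 1 and 2
basis = np.eye(d)                    # |0>, |1>, |2> as unit vectors

def product_state(*labels):
    """Tensor product |abc> of single-qutrit basis states."""
    state = np.array([1.0])
    for k in labels:
        state = np.kron(state, basis[k])
    return state

# |GHZ_3> = (|000> + |111> + |222>) / sqrt(3)
ghz3 = (product_state(0, 0, 0) + product_state(1, 1, 1) + product_state(2, 2, 2)) / np.sqrt(3)

print(ghz3.shape)           # (27,) -- a vector in the 3**3-dimensional space
print(np.vdot(ghz3, ghz3))  # 1.0  -- properly normalized
```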

    To speed up the process, Krenn first wrote a computer program that took an experimental setup and calculated the output. Then he upgraded the program to allow it to incorporate in its calculations the same building blocks that experimenters use to create and manipulate photons on an optical bench: lasers, nonlinear crystals, beam splitters, phase shifters, holograms, and the like. The program searched through a large space of configurations by randomly mixing and matching the building blocks, performed the calculations and spat out the result. MELVIN was born. “Within a few hours, the program found a solution that we scientists—three experimentalists and one theorist—could not come up with for months,” Krenn says. “That was a crazy day. I could not believe that it happened.”
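    The structure of such a search can be conveyed with a deliberately tiny toy analogue (our own sketch; single-qubit matrices stand in for optical elements, and this has nothing to do with MELVIN's actual internals): try short sequences of building blocks and keep any sequence whose combined effect matches a target transformation.

```python
import itertools
import numpy as np

# Toy "toolbox": a few fixed single-qubit operations standing in for optical elements.
TOOLBOX = {
    "H": np.array([[1, 1], [1, -1]]) / np.sqrt(2),   # stand-in for a beam splitter
    "S": np.diag([1.0, 1.0j]),                       # stand-in for a phase shifter
    "X": np.array([[0, 1], [1, 0]]),                 # stand-in for a mode swap
}

TARGET = np.diag([1.0, -1.0])   # the transformation we want the "setup" to implement

def search(max_len=4):
    """Brute-force search over element sequences, shortest first."""
    for length in range(1, max_len + 1):
        for names in itertools.product(TOOLBOX, repeat=length):
            setup = np.eye(2)
            for name in names:
                setup = TOOLBOX[name] @ setup
            # accept if the setup matches the target up to an irrelevant global phase
            if abs(np.trace(TARGET.conj().T @ setup)) / 2 > 0.999:
                return names
    return None

print(search())   # finds a short sequence equivalent to the target, e.g. ('S', 'S')
```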

    Then he gave MELVIN more smarts. Anytime it found a setup that did something useful, MELVIN added that setup to its toolbox. “The algorithm remembers that and tries to reuse it for more complex solutions,” Krenn says.

    It was this more evolved MELVIN that left Krenn scratching his head in a Viennese café. He had set it running with an experimental toolbox that contained two crystals, each capable of generating a pair of photons entangled in three dimensions. Krenn’s naive expectation was that MELVIN would find configurations that combined these pairs of photons to create entangled states of at most nine dimensions. But “it actually found one solution, an extremely rare case, that has much higher entanglement than the rest of the states,” Krenn says.

    Eventually, he figured out that MELVIN had used a technique that multiple teams had developed nearly three decades ago. In 1991 one method was designed by Xin Yu Zou, Li Jun Wang and Leonard Mandel, all then at the University of Rochester (US). And in 1994 Zeilinger, then at the University of Innsbruck [Leopold-Franzens-Universität Innsbruck] (AT), and his colleagues came up with another. Conceptually, these experiments attempted something similar, but the configuration that Zeilinger and his colleagues devised is simpler to understand. It starts with one crystal that generates a pair of photons (A and B). The paths of these photons go right through another crystal, which can also generate two photons (C and D). The paths of photon A from the first crystal and of photon C from the second overlap exactly and lead to the same detector. If that detector clicks, it is impossible to tell whether the photon originated from the first or the second crystal. The same goes for photons B and D.

    A phase shifter is a device that effectively increases the path a photon travels as some fraction of its wavelength. If you were to introduce a phase shifter in one of the paths between the crystals and kept changing the amount of phase shift, you could cause constructive and destructive interference at the detectors. For example, each of the crystals could be generating, say, 1,000 pairs of photons per second. With constructive interference, the detectors would register 4,000 pairs of photons per second. And with destructive interference, they would detect none: the system as a whole would not create any photons even though individual crystals would be generating 1,000 pairs a second. “That is actually quite crazy, when you think about it,” Krenn says.
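    The factor of four comes from adding amplitudes rather than rates (a standard interference argument, sketched here for completeness rather than quoted from the article). If each crystal alone contributes an amplitude $\alpha$ for a detected pair, so that one crystal by itself gives a rate proportional to $|\alpha|^2$ (the 1,000 pairs per second above), then with the two emission paths made indistinguishable the detectors see

    \[
    \bigl|\alpha + \alpha e^{i\phi}\bigr|^{2} = 2|\alpha|^{2}\,(1 + \cos\phi),
    \]

    which equals $4|\alpha|^{2}$ (4,000 pairs per second) for constructive interference ($\phi = 0$) and zero for destructive interference ($\phi = \pi$).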

    MELVIN’s funky solution involved such overlapping paths. What had flummoxed Krenn was that the algorithm had only two crystals in its toolbox. And instead of using those crystals at the beginning of the experimental setup, it had wedged them inside an interferometer (a device that splits the path of, say, a photon into two and then recombines them). After much effort, he realized that the setup MELVIN had found was equivalent to one involving more than two crystals, each generating pairs of photons, such that their paths to the detectors overlapped. The configuration could be used to generate high-dimensional entangled states.

    Quantum physicist Nora Tischler, who was a Ph.D. student working with Zeilinger on an unrelated topic when MELVIN was being put through its paces, was paying attention to these developments. “It was kind of clear from the beginning [that such an] experiment wouldn’t exist if it hadn’t been discovered by an algorithm,” she says.

    Besides generating complex entangled states, the setup using more than two crystals with overlapping paths can be employed to perform a generalized form of Zeilinger’s 1994 quantum interference experiments with two crystals. Aephraim Steinberg, an experimentalist at the University of Toronto, who is a colleague of Krenn’s but has not worked on these projects, is impressed by what the AI found. “This is a generalization that (to my knowledge) no human dreamed up in the intervening decades and might never have done,” he says. “It’s a gorgeous first example of the kind of new explorations these thinking machines can take us on.”

    In one such generalized configuration with four crystals, each generating a pair of photons, and overlapping paths leading to four detectors, quantum interference can create situations where either all four detectors click (constructive interference) or none of them do so (destructive interference).

    But until recently, carrying out such an experiment remained a distant dream. Then, in a March preprint paper, a team led by Lan-Tian Feng of the University of Science and Technology [中国科学技术大学] (CN) at the Chinese Academy of Sciences [中国科学院] (CN), in collaboration with Krenn, reported that they had fabricated the entire setup on a single photonic chip and performed the experiment. The researchers collected data for more than 16 hours: a feat made possible because of the photonic chip’s incredible optical stability, something that would have been impossible to achieve in a larger-scale tabletop experiment. For starters, the setup would require a square meter’s worth of optical elements precisely aligned on an optical bench, Steinberg says. Besides, “a single optical element jittering or drifting by a thousandth of the diameter of a human hair during those 16 hours could be enough to wash out the effect,” he says.

    During their early attempts to simplify and generalize what MELVIN had found, Krenn and his colleagues realized that the solution resembled abstract mathematical forms called graphs, which contain vertices and edges and are used to depict pairwise relations between objects. For these quantum experiments, every path a photon takes is represented by a vertex. And a crystal, for example, is represented by an edge connecting two vertices. MELVIN first produced such a graph and then performed a mathematical operation on it. The operation, called “perfect matching,” involves generating an equivalent graph in which each vertex is connected to only one edge. This process makes calculating the final quantum state much easier, although it is still hard for humans to understand.
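    A minimal sketch of that graph picture (our own illustration, not the representation used in MELVIN's actual code): photon paths become vertices, each crystal becomes an edge joining the two paths of the photon pair it can emit, and the terms of the resulting state correspond to perfect matchings, i.e., sets of edges that cover every vertex exactly once.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate all edge subsets in which every vertex appears exactly once."""
    n_pairs = len(vertices) // 2
    for subset in combinations(edges, n_pairs):
        covered = [v for edge in subset for v in edge]
        if len(covered) == len(set(covered)) == len(vertices):
            yield subset

# Four photon paths (a, b, c, d) and four crystals, each pumping one pair of paths.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("c", "d"), ("a", "c"), ("b", "d")]

for matching in perfect_matchings(vertices, edges):
    print(matching)
# (('a', 'b'), ('c', 'd')) and (('a', 'c'), ('b', 'd')): each perfect matching is one
# way all the detectors can fire together, i.e. one term of the resulting entangled state.
```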

    That changed with MELVIN’s successor THESEUS, which generates much simpler graphs by winnowing the first complex graph representing a solution that it finds down to the bare minimum number of edges and vertices (such that any further deletion destroys the setup’s ability to generate the desired quantum states). Such graphs are simpler than MELVIN’s perfect matching graphs, so it is even easier to make sense of any AI-generated solution.

    Renner is particularly impressed by THESEUS’s human-interpretable outputs. “The solution is designed in such a way that the number of connections in the graph is minimized,” he says. “And that’s naturally a solution we can better understand than if you had a very complex graph.”

    Eric Cavalcanti of Griffith University (AU) is both impressed by the work and circumspect about it. “These machine-learning techniques represent an interesting development. For a human scientist looking at the data and interpreting it, some of the solutions may look like ‘creative’ new solutions. But at this stage, these algorithms are still far from a level where it could be said that they are having truly new ideas or coming up with new concepts,” he says. “On the other hand, I do think that one day they will get there. So these are baby steps—but we have to start somewhere.”

    Steinberg agrees. “For now, they are just amazing tools,” he says. “And like all the best tools, they’re already enabling us to do some things we probably wouldn’t have done without them.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 4:52 pm on July 2, 2021 Permalink | Reply
    Tags: "Dataome": everything from cave paintings and books to flash drives and cloud servers and the structures sustaining them., "The Origin of Technosignatures", By contrast the general search for other living systems-or biosignatures-really is all about eating; reproducing; and not to put too fine a point on it making waste., Our quest for technosignatures is actually in the end about the detection of extraterrestrial dataomes., Scientific American, Technosignatures are a consequence of dataomes just as biosignatures are a consequence of genomes., The arrival of a dataome on a world represents an origin event., The search for extraterrestrial intelligence stands out in the quest to find life elsewhere because it assumes that certain kinds of life will manipulate and exploit its environment with intention., The search for structured electromagnetic signals; intentional manipulation of matter and energy; alien megastructures; industrial pollution; nighttime lighting systems., Today the search for intention is represented by a still-coalescing field of cosmic “technosignatures.   

    From Scientific American : “The Origin of Technosignatures” 

    From Scientific American

    July 2, 2021
    Caleb A. Scharf


    The search for extraterrestrial intelligence stands out in the quest to find life elsewhere because it assumes that certain kinds of life will manipulate and exploit their environment with intention. And that intention may go far beyond just supporting essential survival and function. By contrast, the general search for other living systems, or biosignatures, really is all about eating, reproducing and, not to put too fine a point on it, making waste.

    The assumption of intention has a long history [Acta Astronautica]. Back in the late 1800s and early 1900s the American astronomer Percival Lowell convinced himself, and others, of “non-natural features” on the surface of Mars, and associated these with the efforts of an advanced but dying species to channel water from the polar regions. Around the same time, Nikola Tesla suggested the possibility of using wireless transmission to contact Mars, and even thought that he might have picked up repeating, structured signals from beyond the Earth. Nearly a century earlier, the great mathematician and physicist Carl Friedrich Gauss had also thought about active contact, and suggested carving up the Siberian tundra to make a geometric signal that could be seen by extraterrestrials.

    Today the search for intention is represented by a still-coalescing field of cosmic “technosignatures,” which encompasses the search for structured electromagnetic signals as well as a wide variety of other evidence of intentional manipulation of matter and energy—from alien megastructures to industrial pollution, or nighttime lighting systems on distant worlds.

    But there’s a puzzle that really comes ahead of all of this. We tend to automatically assume that technology in all of the forms known to us is a marker of “advanced” life and its intentions, but we seldom ask the fundamental question of why technology happens in the first place.

    I started thinking about this conundrum back in 2018, and it leads to a deeper way to quantify intelligent life, based on the external information that a species generates, utilizes, propagates and encodes in what we call technology—everything from cave paintings and books to flash drives and cloud servers and the structures sustaining them. To give this a label I called it the “dataome.” One consequence of this reframing of the nature of our world is that our quest for technosignatures is actually in the end about the detection of extraterrestrial dataomes.

    A critical aspect of this reframing is that a dataome may be much more like a living system than any kind of isolated, inert, synthetic system. This rather provocative (well, okay, very provocative) idea is one of the conclusions I draw in a much more detailed investigation in my new book, The Ascent of Information. Our informational world, our dataome, is best thought of as a symbiotic entity to us (and to life on Earth in general). It genuinely is another “ome,” not unlike the microbiomes that exist in an intimate and inextricable relationship with all multicellular life.

    As such, the arrival of a dataome on a world represents an origin event. Just as the origin of biological life is, we presume, represented by the successful encoding of self-propagating, evolving information in a substrate of organic molecules. A dataome is the successful encoding of self-propagating, evolving information into a different substrate, and with a seemingly different spatial and temporal distribution— routing much of its function through a biological system like us. And like other major origin events it involves the wholesale restructuring of the planetary environment, from the utilization of energy to fundamental chemical changes in atmospheres or oceans.

    In other words, I’d claim that technosignatures are a consequence of dataomes just as biosignatures are a consequence of genomes.

    That distinction may seem subtle, but it’s important. Many remotely observable biosignatures are a result of the inner chemistry of life; metabolic byproducts like oxygen or methane in planetary atmospheres for example. Others are consequences of how life harvests energy, such as the colors of pigments associated with photosynthesis. All of these signatures are deeply rooted in the genomes of life, and ultimately that’s how we understand their basis and likelihood, and how we disentangle these markers from challenging and incomplete astronomical measurements.

    Analogous to biosignatures, technosignatures must be rooted in the dataomes that coexist with biological life (or perhaps that had once coexisted with biological life). To understand the basis and likelihood of technosignatures, we therefore need to recognize and study the nature of dataomes.

    For example, a dataome and its biological symbiotes may exist in uneasy Darwinian balance, where the interests of each side are not always aligned, but coexistence provides a statistical advantage to each. This could be a key factor for evaluating observations about environmental compositions and energy transformations on other worlds. We ourselves are experiencing an increase in the carbon content of our atmosphere that can be associated with the exponential growth of our dataome, yet that compositional change is not good for preserving the conditions that our biological selves have thrived in.

    Projecting where our own dataome is taking us could provide clues to the scales and qualities of technosignatures elsewhere. If we only think about technosignatures as if they’re an arbitrary collection of phenomena rather than a consequence of something Darwinian in nature, it could be easy to miss what’s going on out there in the cosmos.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 1:30 pm on May 30, 2021 Permalink | Reply
    Tags: "Maybe Dark Matter Is More Than One Thing", , , , , , , , Scientific American   

    From Scientific American : “Maybe Dark Matter Is More Than One Thing” 

    May 30, 2021
    Avi Loeb

    The barred spiral galaxy NGC 1300. Credit: Getty Images.

    The label “dark matter” encapsulates our ignorance regarding the nature of most of the matter in the universe. It contributes five times more than ordinary matter to the cosmic mass budget. But we cannot see it. We infer its existence only indirectly through its gravitational influence on visible matter.

    The standard model of cosmology successfully explains the gravitational growth of present-day galaxies and their clustering as driven by primordial fluctuations in an ocean of invisible particles with initially small random motions.

    But this “cold dark matter” might actually be a mixture of different particles. It could be made of weakly interacting massive particles; hypothetical particles like axions; or even dark atoms that do not interact with ordinary matter or light. We have not detected any of these invisible particles yet, but we have measured the imprint of the fluctuations in their primordial spatial distribution as slight variations across the sky in the brightness of the cosmic microwave background [CMB], the relic radiation left over from the hot big bang.

    Many experiments are searching for the signatures of various types of dark matter, both on the sky and in laboratory experiments, including the Large Hadron Collider.

    This search has so far been unsuccessful. In addition to specific types of elementary particles, primordial black holes have been mostly ruled out as a dominant component of dark matter, with a limited open window in the range of asteroid masses waiting to be eliminated.

    In a 2005 paper [Physical Review D], I showed, with Matias Zaldarriaga, that cold dark matter particles could cluster gravitationally on scales down to an Earth mass. Evidence for such tiny clumps of dark matter has not been found yet; observers have only studied much bigger systems, namely galaxies like our own Milky Way, containing gas and stars as their inner core, which is surrounded by a halo of dark matter.

    As revealed by the pathbreaking work of Vera Rubin, the dynamics of gas and stars in galaxies indeed imply the existence of invisible mass in a halo that extends well outside the inner region where ordinary matter concentrates.

    Surprisingly, the need for dark matter in galaxies like the Milky Way appears only in the outer region where the acceleration drops below a universal value, which equals roughly the speed of light divided by the age of the universe. This is an unexpected fact within the standard dark matter interpretation. The fundamental flavor of a universal acceleration threshold raises the possibility that perhaps we are not missing invisible matter but rather witnessing a change in the effect of gravity on the dynamics of visible matter at low accelerations.
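    A quick back-of-the-envelope check of that statement (the numbers below are ours, not taken from the article):

```python
# Compare "speed of light divided by the age of the universe" with the
# empirical MOND acceleration scale a0 (values are approximate).
c = 2.998e8                     # speed of light, m/s
age = 13.8e9 * 3.156e7          # age of the universe, ~13.8 billion years in seconds
a_cosmic = c / age              # ~7e-10 m/s^2
a0_mond = 1.2e-10               # MOND acceleration scale, m/s^2

print(f"c / t_universe ~ {a_cosmic:.1e} m/s^2")
print(f"MOND a0        ~ {a0_mond:.1e} m/s^2 (ratio ~ {a_cosmic / a0_mond:.0f})")
```

    The two agree only to within a factor of about six, which is the sense in which the threshold "roughly" equals the speed of light divided by the age of the universe.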

    This was the idea pioneered by Moti Milgrom, who in 1983 proposed a phenomenological theory of “modified Newtonian dynamics” (MOND) to explain away the dark matter problem.

    Remarkably, his simple prescription for modified dynamics at low accelerations accounts for the nearly flat rotation curves in many galaxy halos extremely well, even after four decades of scrutiny. As expected in MOND, all existing data on Milky Way–size galaxies show a tight correlation between the circular speed in the outskirts of galaxies and the total amount of ordinary matter (also labeled baryonic matter), manifesting the so-called “baryonic Tully-Fisher relation” [MNRAS]. In a 1995 paper [The Astrophysical Journal], I showed with my first graduate student, Daniel Eisenstein, that the tightness of this relation is not trivially explained in the standard dark matter interpretation. Even if dark matter exists, MOND raises the fundamental question: why do the dark matter particles introduce a fundamental acceleration scale to the dynamics of galaxies? Is this an important hint about their nature?
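    In MOND's low-acceleration regime that tight relation follows in one line (a standard derivation, included here for context; $v$ is the flat rotation speed, $M_b$ the baryonic mass, $a_0$ the MOND acceleration scale and $G$ Newton's constant):

    \[
    \frac{v^{2}}{r} = \sqrt{a_0\, g_N} = \sqrt{\frac{a_0\, G M_b}{r^{2}}}
    \quad\Longrightarrow\quad
    v^{4} = G\, M_b\, a_0 ,
    \]

    so the fourth power of the outer circular speed tracks the baryonic mass alone, with the radius dropping out: the baryonic Tully-Fisher relation.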

    MOND faces challenges on scales larger than galaxies. More massive systems such as galaxy clusters— where Fritz Zwicky first posited dark matter’s existence and coined its name—show evidence for missing mass even though their acceleration tends to be above the threshold scale in MOND.
    _____________________________________________________________________________________

    Dark Matter Background
    Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky from http://palomarskies.blogspot.com.


    Coma cluster via NASA/ESA Hubble.


    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate that it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.
    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outer regions of galaxies rotate at the same speed as the regions closer to the center, whereas, if only the visible matter were present, the outskirts should rotate more slowly, just as the outer planets of the solar system orbit the sun more slowly than the inner ones. The only way to explain this is if the visible galaxy is merely the central part of some much larger structure, as if it were only the label at the center of a much bigger spinning disk, so that the rotation speed stays roughly constant from center to edge.
    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
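    To make that expectation quantitative (a standard textbook comparison added here for context, not part of the original post): if essentially all of a galaxy's mass $M$ sat in its bright central region, a star orbiting at radius $r$ outside that region would obey

    \[
    \frac{v^{2}}{r} = \frac{G M}{r^{2}}
    \quad\Longrightarrow\quad
    v(r) = \sqrt{\frac{G M}{r}} ,
    \]

    so the orbital speed should fall off as $1/\sqrt{r}$. The roughly constant $v(r)$ that Rubin measured instead requires the enclosed mass to grow as $M(<r) \propto r$, which is exactly what an extended halo of dark matter provides.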

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).


    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL).


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu.


    _____________________________________________________________________________________

    Moreover, the acoustic oscillations detected to exquisite precision in the brightness fluctuations of the cosmic microwave background imply the presence of a dominant component of matter that streams freely, in addition to the ordinary matter and radiation fluids that are tightly coupled by electromagnetic interactions.

    But what about the smallest scales? Together with my postdoc Mohammad Safarzadeh, I recently studied the latest data available from the Gaia survey of ultrafaint dwarf galaxies that are satellites of the Milky Way [Annual Reviews]. We showed that their behavior deviates from MOND’s expectations. Just like clusters of galaxies, dwarf galaxies appear to argue against the universality of MOND on all scales.

    Does the success of MOND on Milky Way scales and its failures on both smaller and larger scales offer new insights about the nature of dark matter? One possibility is that dark matter is strongly self-interacting and avoids galactic cores. With Neal Weiner, I showed in a 2011 paper that a dark sector interaction resembling the electric force between charged particles could facilitate the avoidance of galactic cores by dark matter, with a diminishing effect at the high collision speeds characteristic of galaxy clusters.

    Another possibility that I suggested with Julian Muñoz in a 2018 paper, was inspired by the EDGES experiment, which reported unexpected excess cooling of hydrogen atoms during the cosmic dawn.

    We showed that if some dark matter particles possess a small electric charge, they could scatter off ordinary matter and cool hydrogen atoms below expectations, as reported.

    Explaining one anomaly by the conjecture that a fraction of the dark matter particles are slightly electrically charged is far more speculative than explaining six anomalies by the conjecture that the interstellar object ‘Oumuamua is a thin film pushed by sunlight. Nevertheless, speculations on the nature of dark matter receive far more federal funding and mainstream legitimacy than the search for technosignatures of alien civilizations.

    More definitive clues are needed to figure out the nature of dark matter. Here’s hoping that the coming decades will bring a resolution to this cosmic mystery, with all pieces of the jigsaw puzzle falling into place. Alternatively, we might seek a smarter kid on the cosmic block who would whisper the answer in our direction. Although it might feel like cheating in an exam, we should keep in mind that there is no teacher in sight looking over our shoulders.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 7:53 pm on May 2, 2021 Permalink | Reply
    Tags: "The Fermilab Muon Measurement Might or Might Not Point to New Physics- But...", CERN LHCb Collaboration, Scientific American, The long lived Muon g-2 experiment- from CERN to Brookhaven to Fermilab.   

    From Scientific American: “The Fermilab Muon Measurement Might or Might Not Point to New Physics- But…” 

    From Scientific American

    May 2, 2021
    Robert P. Crease

    It was important either way, because the experiment that generated it was breathtakingly precise.

    On April 7, particle physicists all over the world were excited and energized by the announcement of a measurement of the behavior of muons—the heavier, unstable subatomic cousins of electrons—that differed significantly from the expected value. At the announcement by FNAL [Physical Review Letters], the participating physicists displayed a graph with two error bars, one for the theoretical prediction and the other for the experimental measurement.

    A century from now, looking back on this moment, will historians understand this excitement? They certainly won’t see a major turning point in the history of science. No puzzle was solved, no new particle or field was discovered, no paradigm shifted in our picture of nature. What happened on April 7 was just an announcement that the muon’s wobble—its value is called g-2—had been measured a little more precisely than before, and that the international high-energy physics community was therefore a little more confident that other particles and fields are out there yet to be discovered.
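    For reference (a definition the article takes for granted): the muon's magnetic strength is characterized by its gyromagnetic factor $g$, and what the experiments actually quote is the anomaly

    \[
    a_\mu \equiv \frac{g - 2}{2},
    \]

    the small departure from the value $g = 2$ that Dirac's equation predicts for a pointlike particle. That departure is generated by the cloud of virtual particles surrounding the muon, which is why it is sensitive to particles not yet discovered.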

    Nevertheless, historians of science will see this as a special moment, not because of the measurement but because of the measuring. The first results of the experiment at Fermilab were the outcome of a remarkable and perhaps even unprecedented set of interactions between an extraordinarily diverse set of scientific cultures that, over 60 years, evolved independently yet required each other.

    Early theoretical calculations of g-2 according to quantum electrodynamics received a jolt in 1966 when Cornell University (US) theorist Toichiro Kinoshita realized that his previous studies had prepared him well to work out its value. His first calculations were by hand, but they soon became too unwieldy to be performed that way, and he became dependent on computers and special software. To make the prediction ever more precise, he had to incorporate work by different groups of theorists who specialized in the vast and diverse panoply of interacting particles and forces that subtly influence the g-2 value. (Kinoshita is retired, and today the theoretical value is worked on by more than 100 physicists.) The result was a specific prediction, relying on the contributions of many theorists, with a minuscule error bar that made a clear experimental target.

    The initial experimental work on a g-2 measurement, which began at European Organization for Nuclear Research (Organisation européenne pour la recherche nucléaire)(CH) [CERN] in 1959, involved a multistep process. The experimenters used a particle accelerator to make unstable particles called pions, then channeled these into a flat magnet where the pions decayed into muons. The muons were forced to turn in circles, and the whirling muons were made to “walk” in steps down the magnet. The muons emerged from the other end of the magnet into a field-free region where their orientation could be measured, allowing the experimenters to infer their g-2.

    The next experiment, which started at CERN in 1966, used a more powerful accelerator to produce and inject larger numbers of pions into a five-meter-diameter storage ring with a magnetic gradient to contain the resulting hordes of muons. The third CERN experiment, which began operations in 1969, was a major leap forward. It used a much larger 14-meter-diameter storage ring and ran at a certain “magic” energy where the electric field would not affect the muon spin. This made it possible to have a uniform magnetic field, dramatically sharpening the sensitivity of the measurement. But with that enhanced sensitivity came new sources of precision-sabotaging instrumental noise; another set of methods had to be applied to reduce uncertainties in the magnets and to measure the magnetic field.
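    The "magic" energy can be stated precisely (this is the standard storage-ring expression, not spelled out in the article): in combined magnetic and electric fields the muon's anomalous spin-precession frequency is

    \[
    \vec{\omega}_a = -\frac{e}{m_\mu}\left[\,a_\mu \vec{B} \;-\; \left(a_\mu - \frac{1}{\gamma^{2}-1}\right)\frac{\vec{\beta}\times\vec{E}}{c}\,\right],
    \]

    and the electric-field term vanishes when $\gamma = \sqrt{1 + 1/a_\mu} \approx 29.3$, which corresponds to a muon momentum of about 3.09 GeV/c, the value used in the storage-ring experiments that followed.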

    The fourth generation of g-2 experiments—begun at DOE’s Brookhaven National Laboratory (US) in 1999—required even more years of laborious struggle to beat back sources of error and control various disruptive factors. Like the third CERN experiment, it used a storage ring, 3.1 giga-electron-volt muons, the magic energy, and a uniform field; but unlike the CERN experiment it had a higher flux, muon injection rather than pion injection, superconducting magnets, a trolley equipped with NMR probes that could be run around inside the vacuum chamber to check the magnetic field, and a kicker inside the storage ring.

    These and other features added to the experiment’s complexity and expense. The experiment involved 60 physicists from 11 institutes; it issued its g-2 value in 2004. In 2013, the Brookhaven g-2 storage ring was transported to Fermilab and given new life, rebuilt and operated with a host of ever-more subtle and sophisticated new tricks needed to further push the outer limits of precision.

    DOE’s Fermi National Accelerator Laboratory (US) G-2 magnet from DOE’s Brookhaven National Laboratory (US) finds a new home in the FNAL Muon G-2 experiment. The move by barge and truck.

    Ultimately, all those overlapping decades of work collectively produced the measurement announced this month, one with a tiny error bar that made it meaningful to compare with the theoretical prediction, which by then also had a narrow error bar.

    The late Francis Farley, the spokesperson for the very first g-2 experiment at CERN, once told me, “What the theorists do and what we experimenters do is completely different. They talk about Feynman diagrams, amplitudes, integrals, expansions and a whole lot of complex mathematics. We hook up an accelerator to beam lines and steering magnets to the device itself, which is stuffed with wires, thousands of cables, timing devices, sensors and such things. It’s two totally different worlds! But they come out with a number, we come out with a number, and these numbers agree to parts per million! It’s unbelievable! That is the most astonishing thing for me!”

    All the excitement sprang from the tiny but indisputable gap—2.5 parts per billion—between the two. If either bar had been wider, it would have blended into the other, and the measurement would not have indicated physics awaiting discovery. To make the experiment happen, the scientific community, and the government agencies providing the funding, had placed enormous trust in the international team of collaborators.

    What will amaze historians of science in the future, I think, will be that today’s scientists could produce that puny but revealing gap at all.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 7:16 pm on May 2, 2021 Permalink | Reply
    Tags: Large Hadron Collider’s LHCb detector, Many people would say supersymmetry is almost dead., Scientific American, Some solutions nevertheless exist that could miraculously fit both. One is the leptoquark—a hypothetical particle that could have the ability to transform a quark into either a muon or an electron., The data that the LHC has produced so far suggest that typical superpartners-if they exist-cannot weigh less than 1000 protons., The LHCb muon anomalies suffer from the same problem as the new muon-magnetism finding: various possible explanations exist but they are all “ad hoc”, There is one other major contender that might reconcile both the LHCb and Muon g – 2 discrepancies. It is a particle called the Z′ boson because of its similarity with the Z boson.

    From Scientific American: “Muon Results Throw Physicists’ Best Theories into Confusion” 

    From Scientific American

    April 29, 2021
    Davide Castelvecchi

    The Large Hadron Collider’s LHCb detector reported anomalies in the behavior of muons, two weeks before the FNAL Muon g – 2 experiment announced a puzzling finding about muon magnetism.

    Physicists should be ecstatic right now. Taken at face value, the surprisingly strong magnetism of the elementary particles called muons, revealed by an experiment this month [Nature], suggests that the established theory of fundamental particles is incomplete. If the discrepancy pans out, it would be the first time that the theory has failed to account for observations since its inception five decades ago—and there is nothing physicists love more than proving a theory wrong.

    The Muon g − 2 collaboration at the Fermi National Accelerator Laboratory (Fermilab) outside Chicago, Illinois, reported the latest measurements in a webcast on 7 April, and published them in Physical Review Letters. The results are “extremely encouraging” for those hoping to discover other particles, says Susan Gardner, a physicist at the University of Kentucky (US) in Lexington.

    But rather than pointing to a new and revolutionary theory, the result—announced on 7 April by the FNAL Muon g – 2 experiment near Chicago, Illinois—poses a riddle. It seems maddeningly hard to explain it in a way that is compatible with everything else physicists know about elementary particles. And additional anomalies in the muon’s behaviour, reported in March by a collider experiment [LHCb above], only make that task harder. The result is that researchers have to perform the theoretical-physics equivalent of a triple somersault to make an explanation work.

    Zombie models

    Take supersymmetry, or SUSY, a theory that many physicists once thought was the most promising for extending the current paradigm, the standard model of particle physics.

    Supersymmetry comes in many variants, but in general, it posits that every particle in the standard model has a yet-to-be-discovered heavier counterpart, called a superpartner. Superpartners could be among the ‘virtual particles’ that constantly pop in and out of the empty space surrounding the muon, a quantum effect that would help to explain why this particle’s magnetic field is stronger than expected.

    If so, these particles could solve two mysteries at once: muon magnetism and dark matter, the unseen stuff that, through its gravitational pull, seems to keep galaxies from flying apart.

    Until ten years ago, various lines of evidence had suggested that a superpartner weighing as much as a few hundred protons could constitute dark matter. Many expected that the collisions at the Large Hadron Collider (LHC) outside Geneva, Switzerland, would produce a plethora of these new particles, but so far none has materialized.

    “Many people would say supersymmetry is almost dead,” says Dominik Stöckinger, a theoretical physicist at the Dresden University of Technology [Technische Universität Dresden] (DE), who is a member of the Muon g – 2 collaboration. But he still sees it as a plausible way to explain his experiment’s findings. “If you look at it in comparison to any other ideas, it’s not worse than the others,” he says.

    The data that the LHC has produced so far suggest that typical superpartners-if they exist-cannot weigh less than 1,000 protons (the bounds can be higher depending on the type of superparticle and the flavour of supersymmetry theory).

    There is one way in which Muon g – 2 could resurrect supersymmetry and also provide evidence for dark matter, Stöckinger says. There could be not one superpartner, but two appearing in LHC collisions, both of roughly similar masses—say, around 550 and 500 protons. Collisions would create the more massive one, which would then rapidly decay into two particles: the lighter superpartner plus a run-of-the-mill, standard-model particle carrying away the 50 protons’ worth of mass difference.

    The LHC detectors are well-equipped to reveal this kind of decay as long as the ordinary particle—the one that carries away the mass difference between the two superpartners—is large enough. But a very light particle could escape unobserved. “This is well-known to be a blind spot for LHC,” says Michael Peskin, a theoretician at the DOE’s SLAC National Accelerator Laboratory (US) in Menlo Park, California at Stanford University (US).

    The trouble is that models that include two superpartners with similar masses also tend to predict that the Universe should contain a much larger amount of dark matter than astronomers observe. So an additional mechanism would be needed—one that can reduce the amount of predicted dark matter, Peskin explains. This adds complexity to the theory. For it to fit the observations, all its parts would have to work “just so”.

    Meanwhile, physicists have uncovered more hints that muons behave oddly. An experiment at the LHC, called LHCb, has found tentative evidence that muons occur significantly less often than electrons as the breakdown products of certain heavier particles called B mesons. According to the standard model, muons are supposed to be identical to electrons in every way except for their mass, which is 207 times larger. As a consequence, B mesons should produce electrons and muons at rates that are nearly equal.
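    Concretely (our own gloss on the measurement the article alludes to), LHCb tests this "lepton universality" with ratios of branching fractions such as

    \[
    R_K = \frac{\mathcal{B}\left(B^{+} \to K^{+}\mu^{+}\mu^{-}\right)}{\mathcal{B}\left(B^{+} \to K^{+}e^{+}e^{-}\right)},
    \]

    which the standard model predicts to be very close to 1. The March 2021 LHCb measurement came out below 1, i.e., with fewer muon pairs than electron pairs, at a significance of roughly three standard deviations.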

    The LHCb muon anomalies suffer from the same problem as the new muon-magnetism finding: various possible explanations exist but they are all “ad hoc”, says physicist Adam Falkowski, at the Paris-Saclay University [Université Paris-Saclay] (FR). “I’m quite appalled by this procession of zombie SUSY models dragged out of their graves,” says Falkowski.

    The task of explaining Muon g – 2’s results becomes even harder when researchers try to concoct a theory that fits both those findings and the LHCb results, physicists say. “Extremely few models could explain both simultaneously,” says Stöckinger. In particular, the supersymmetry model that explains Muon g – 2 and dark matter would do nothing for LHCb.

    Some solutions nevertheless exist that could miraculously fit both. One is the leptoquark—a hypothetical particle that could have the ability to transform a quark into either a muon or an electron (which are both examples of a lepton). Leptoquarks could resurrect an attempt made by physicists in the 1970s to achieve a ‘grand unification’ of particle physics, showing that its three fundamental forces—strong, weak and electromagnetic—are all aspects of the same force.

    Most of the grand-unification schemes of that era failed experimental tests, and the surviving leptoquark models have become more complicated—but they still have their fans. “Leptoquarks could solve another big mystery: why different families of particles have such different masses,” says Gino Isidori, a theoretician at the University of Zürich [Universität Zürich ] (CH) in Switzerland. One family is made of the lighter quarks—the constituents of protons and neutrons—and the electron. Another has heavier quarks and the muon, and a third family has even heavier counterparts.

    Apart from the leptoquark, there is one other major contender that might reconcile both the LHCb and Muon g – 2 discrepancies. It is a particle called the Z′ boson because of its similarity with the Z boson, which carries the ‘weak force’ responsible for nuclear decay. It, too, could help to solve the mystery of the three families, says Ben Allanach, a theorist at the University of Cambridge (UK). “We’re building models where some features come out very naturally, you can understand these hierarchies,” he says. He adds that both leptoquarks and the Z′ boson have an advantage: they still have not been completely ruled out by the LHC, but the machine should ultimately see them if they exist.

    The LHC is currently undergoing an upgrade, and it will start to smash protons together again in April 2022. The coming deluge of data could strengthen the muon anomalies and perhaps provide hints of the long-sought new particles (although a proposed electron–positron collider, primarily designed to study the Higgs boson, might be needed to address some of the LHC’s blind spots, Peskin says). Meanwhile, beginning next year, Muon g – 2 will release further measurements. Once it’s known more precisely, the size of the discrepancy between muon magnetism and theory could itself rule out some explanations and point to others.

    Unless, that is, the discrepancies disappear and the standard model wins again. A new calculation, reported this month, of the standard model’s prediction for muon magnetism gave a value much closer to the experimental result. So far, those who have bet against the standard model have always lost, which makes physicists cautious. “We are—maybe—at the beginning of a new era,” Stöckinger says.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 6:15 pm on May 2, 2021 Permalink | Reply
    Tags: "How Big Data Are Unlocking the Mysteries of Autism", Scientific American, Spark For Autism   

    From Scientific American: “How Big Data Are Unlocking the Mysteries of Autism” 

    From Scientific American

    April 30, 2021
    Wendy Chung

    Artist’s visualization of genomic data. Credit: Nobi Prizue/Getty Images.

    When I started my pediatric genetic practice over 20 years ago, I was frustrated by constantly having to tell families and patients that I couldn’t answer many of their questions about autism and what the future held for them. What were the causes of their child’s particular behavioral and medical challenges? Would their child talk? Have seizures? What I did know was that research was the key to unlocking the mysteries of a remarkably heterogeneous disorder that affects more than five million Americans and has no FDA-approved treatments. Now, thanks in large part to the impact of genetic research, those answers are starting to come into focus.

    Five years ago we launched SPARK (Simons Foundation Powering Autism Research for Knowledge) to harness the power of big data by engaging hundreds of thousands of individuals with autism and their family members to participate in research. The more people who participate, the deeper and richer these data sets become, catalyzing research that is expanding our knowledge of both biology and behavior to develop more precise approaches to medical and behavioral issues.

    SPARK is the world’s largest autism research study to date with over 250,000 participants, more than 100,000 of whom have provided DNA samples through the simple act of spitting in a tube. We have generated genomic data that have been de-identified and made available to qualified researchers. SPARK has itself been able to analyze 19,000 genes to find possible connections to autism; worked with 31 of the nation’s leading medical schools and autism research centers; and helped thousands of participating families enroll in nearly 100 additional autism research studies.

    Genetic research has taught us that what we commonly call autism is actually a spectrum of hundreds of conditions that vary widely among adults and children. Across this spectrum, individuals share core symptoms and challenges with social interaction, restricted interests and/or repetitive behaviors.

    We now know that genes play a central role in the causes of these “autisms,” which are the result of genetic changes in combination with other causes including prenatal factors. To date, research employing data science and machine learning has identified approximately 150 genes related to autism, but suggests there may be as many as 500 or more. Finding additional genes and commonalities among individuals who share similar genetic differences is crucial to advancing autism research and developing improved supports and treatments. Essentially, we will take a page from the playbook that oncologists use to treat certain types of cancer based upon their genetic signatures and apply targeted therapeutic strategies to help people with autism.

    But in order to get answers faster and be certain of these results, SPARK and our research partners need a huge sample size: “bigger data.” To ensure an accurate inventory of all the major genetic contributors, and learn if and how different genetic variants contribute to autistic behaviors, we need not only the largest but also the most diverse group of participants.

    The genetic, medical and behavioral data SPARK collects from people with autism and their families is rich in detail and can be leveraged by many different investigators. Access to rich data sets draws talented scientists to the field of autism science to develop new methods of finding patterns in the data, better predicting associated behavioral and medical issues, and, perhaps, identifying more effective supports and treatments.

    Genetic research is already providing answers and insights about prognosis. For example, one SPARK family’s genetic result is strongly associated with a lack of spoken language but an ability to understand language. Armed with this information, the medical team provided the child with an assistive communication device that decreased tantrums that arose from the child’s frustration at being unable to express himself. An adult who was diagnosed at age 11 with a form of autism that used to be known as Asperger’s syndrome recently learned that the cause of her autism is KMT2C-related syndrome, a rare genetic disorder caused by changes in the gene KMT2C.

    Some genetic syndromes associated with autism also confer cancer risks, so receiving these results is particularly important. We have returned genetic results to families with mutations in PTEN, which is associated with a higher risk of breast, thyroid, kidney and uterine cancer. A genetic diagnosis means that they can now be screened earlier and more frequently for specific cancers.

    In other cases, SPARK has identified genetic causes of autism that can be treated. Through whole exome sequencing, SPARK identified a case of phenylketonuria (PKU) that was missed during newborn screening. This inherited disorder causes a buildup of the amino acid phenylalanine in the blood, which can cause behavior and movement problems, seizures and developmental disabilities. With this knowledge, the family started their child on treatment with a specialized diet containing low levels of phenylalanine.

    Today, thanks to a growing community of families affected by autism who, literally, give a part of themselves to help understand the vast complexities of autism, I can tell about 10 percent of parents what genetic change caused their child’s autism.

    We know that big data, with each person representing their unique profile of someone impacted by autism, will lead to many of the answers we seek. Better genetic insights, gleaned through complex analysis of rich data, will help provide the means to support individuals—children and adults across the spectrum—through early intervention, assistive communication, tailored education and, someday, genetically-based treatments. We strive to enable every person with autism to be the best possible version of themselves.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 5:16 pm on May 2, 2021 Permalink | Reply
    Tags: "Adaptive Optics Branches Out", , Scientific American   

    From Scientific American: “Adaptive Optics Branches Out” 

    From Scientific American

    April 1, 2021
    Tony Travouillon
    Céline d’Orgeville
    Francis Bennet

    A tool built for astronomy finds new life combating space debris and enabling quantum encryption.

    1
    LASER LIGHT creates an artificial star to calibrate the adaptive optics system on the European Southern Observatory’s Very Large Telescope in Chile. Credit: Y. Beletsky (Las Cumbres Observatory (CL)) and European Southern Observatory (EU)

    For astronomers, it’s a magical moment: you’re staring at a monitor, and a blurry image of a cosmological object sharpens up, revealing new details. We call this “closing the loop,” a reference to the adaptive optics loop, a tool that enables telescopes to correct for haziness caused by turbulence in the atmosphere. Adaptive optics essentially untwinkles the stars, canceling out the air between us and space to turn a fuzzy image crisp.

    One night last year our team at the Australian National University (AU) was closing the loop on a new imaging system made to resolve the details of space debris. Sitting in the control room of our observatory on Mount Stromlo, overlooking the capital city of Canberra, we selected a weather satellite for this first test.

    It was an easy target: its large body and solar panels are unmistakable, offering a good way to test the performance of our system.

    For some of us, this was the first time we had used a telescope to observe something that was not a star, galaxy or other cosmic phenomenon. This satellite represents one of the thousands of human-made objects that circle our planet, a swarm of spacecraft—some active, most not—that pose a growing risk of overcrowding near-Earth orbits. Our test was part of an effort to build systems to tackle the problem of space debris and preserve these orbital passages for future use. It is one of several new ways that we are using adaptive optics, which has traditionally been used for astronomical observations, to accomplish different goals. After more than three decades of perfecting this technology, astronomers have realized that they can apply their expertise to any application that requires sending or receiving photons of light between space and the ground.

    The Fight against the Atmosphere

    The layer of gas between Earth and the rest of the cosmos keeps us alive, but it also constantly changes the path of any photon of light that travels through it. The culprit is atmospheric turbulence caused by the mixing of air of different temperatures. Light bends, or refracts, when it travels through different mediums, which is why a straw in a glass of water looks like it leans at a different angle under the water than above it—when the light bouncing off the straw moves from water into air, it changes course. The same thing happens when light travels through air of different temperatures. When light passes from warm air to cool air, it slows down and its path changes.

    This effect is why stars twinkle and why astronomers have such a hard time taking precise images of the sky. We quantify the impact of atmospheric turbulence with “the seeing,” a parameter that describes the angular size of the blurred spot of a star as seen through a ground-based telescope. The more turbulent the atmosphere, the worse the seeing. At a good site, on a high mountain with low turbulence, the seeing is typically between 0.5 and one arcsecond, meaning that any telescope will be limited to this range of resolution. The problem is that modern telescopes are capable of resolution significantly better than that. From a purely optical point of view, the resolution of a telescope is dictated by the “diffraction limit,” which is proportional to the wavelength of the light that is collected and inversely proportional to the diameter of the telescope collecting that light. The wavelengths we observe depend only on the chemical composition of our celestial targets, so those cannot be changed. The only way to build telescopes that can resolve smaller and smaller objects is therefore to increase their diameter. A telescope with a two-meter diameter mirror can, for example, resolve objects that are 0.05 arcsecond in optical wavelengths (the equivalent of resolving a large coin 100 kilometers away). But even a very good site with low seeing will degrade this resolution by a factor of 10.
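
    To make these numbers concrete, here is a minimal back-of-envelope check in Python (a sketch only: the Rayleigh 1.22 λ/D form of the diffraction limit and the 550-nanometer wavelength are standard illustrative choices, not figures taken from this article):

        import math

        ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # about 206,265 arcseconds per radian

        def diffraction_limit_arcsec(wavelength_m, diameter_m):
            """Rayleigh criterion: theta ~ 1.22 * lambda / D, converted from radians to arcseconds."""
            return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

        wavelength = 550e-9  # assumed visible-light wavelength in meters

        for diameter in (2.0, 8.0, 39.0):   # a 2-m telescope, an 8-m-class telescope, the 39-m ELT
            print(f"D = {diameter:4.1f} m  ->  {diffraction_limit_arcsec(wavelength, diameter):.3f} arcsec")

        # A ~3-cm coin seen from 100 km subtends roughly this angle:
        print(f"coin at 100 km: {0.03 / 100e3 * ARCSEC_PER_RAD:.3f} arcsec")

    Even good seeing of 0.5 to one arcsecond is roughly an order of magnitude worse than the 2-meter telescope's theoretical limit, which is exactly the gap adaptive optics is designed to close.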

    It is thus easy to see the attraction of putting telescopes in space, beyond the reach of the atmosphere. But there are still very good reasons to build telescopes on the ground. Space telescopes cannot be too large, because rockets can carry only so much weight. It is also difficult to send humans into space to service and upgrade them. The largest space telescope currently under construction is the James Webb Space Telescope, and its primary mirror is 6.5 meters wide.

    On the ground, the largest telescope mirrors are more than 10 meters wide; now being built, the Extremely Large Telescope will have a primary mirror that extends 39 meters.

    Ground-based telescopes can also be upgraded throughout their lifetimes, always receiving the latest generation of instrumentation. But to use these telescopes to the fullest, we must actively remove the effects of the atmosphere.

    The first adaptive optics concepts were proposed in the early 1950s and first used in the 1970s by the U.S. military, notably for satellite imaging from the ground. Astronomers had to wait until the 1990s to apply the technology in their observatories. Adaptive optics relies on three key components. The first is a wavefront sensor, a fast digital camera equipped with a set of optics to map out the distorted shape of the light waves heading toward the telescope. This sensor measures the distortion caused by the atmosphere in real time. Because measurements must keep up with fast changes in the atmosphere, it needs to make a new map several hundred to several thousand times per second. To get enough photons in such short exposures, the wavefront sensor requires a bright source of light above the atmosphere. The stars themselves are rarely bright enough for this purpose. But astronomers are a resourceful bunch—they simply create their own artificial stars by shining a laser upward.

    This reference light source—the laser guide star—is the second key component of the adaptive optics system. Our atmosphere has a layer of sodium atoms that is a few kilometers thick and located at an altitude of 90 kilometers, well above the turbulence causing the distortions. Scientists can excite these sodium atoms using a specially tuned laser. The sodium atoms in the upper atmosphere absorb bright orange laser light (the same color emitted by the sodium street lamps in many cities) and then reemit it, producing a glowing artificial star. With the laser attached to the side of the telescope and tracking its movements, this artificial star is always visible to the wavefront sensor.

    Now that we can continuously track the shape of the wavefront, we need to correct for its aberrations. This is the job of the third major component of the system: the deformable mirror. The mirror is made of a thin reflective membrane, under which sits a matrix of actuators, mechanisms that push and pull the membrane to shape the reflected light. Every time the wavefront sensor makes a measurement, it sends this information to the mirror, which deforms in a way that compensates for the distortions in the incoming light, effectively removing the aberrations caused by the atmosphere. The atmosphere changes so fast that these corrections must be made every millisecond or so. That is a major mechanical and computational challenge. The deformable mirror hardware must be capable of making thousands of motions every second, and it must be paired with a computer and wavefront sensor that can keep up with this speed. There are up to a few thousand actuators, each moving the deformable mirror surface by a few microns. Keeping up with this constant updating process in a self-correcting fashion is what we mean by “closing the loop.”
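
    The control logic of “closing the loop” can be illustrated with a toy simulation: an integrator controller repeatedly adds a fraction of the measured wavefront error to the deformable-mirror commands, and the residual error shrinks on every cycle. This is only a conceptual sketch with invented numbers (64 actuators, a loop gain of 0.5, a frozen random “atmosphere”), not code from any real adaptive optics system:

        import numpy as np

        rng = np.random.default_rng(0)

        n_actuators = 64    # hypothetical number of deformable-mirror actuators
        loop_gain = 0.5     # fraction of the measured error corrected on each cycle

        atmosphere = rng.normal(0.0, 1.0, n_actuators)  # distortion, held fixed here for simplicity
        mirror = np.zeros(n_actuators)                  # mirror commands, initially flat

        for step in range(8):
            residual = atmosphere - mirror        # what the wavefront sensor measures
            mirror += loop_gain * residual        # integrator update: nudge the mirror toward the error
            rms = np.sqrt(np.mean(residual**2))
            print(f"cycle {step}: RMS residual = {rms:.3f}")

    In a real system the “atmosphere” changes every millisecond or so, so this cycle runs hundreds to thousands of times per second and the residual never fully settles; the printout here simply shows the error halving on each pass.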

    Although the technique is difficult and complex, by now astronomers have largely mastered adaptive optics, and all major optical observatories are fitted with these systems. There are even specialized versions used for different types of observations. “Classical” adaptive optics uses only one guide star and one deformable mirror, which enable atmospheric turbulence correction over a rather limited patch of sky. More complex systems such as Multi Conjugate Adaptive Optics use multiple guide stars and multiple deformable mirrors to probe and correct for a larger volume of atmospheric turbulence above the telescope. This approach opens up atmosphere-corrected fields of view that are 10 to 20 times larger than what classical adaptive optics can achieve—but at a significantly higher price. In other situations—for example, when astronomers want to study an individual target, such as an exoplanet—the important factor is not field size but near-perfect image resolution. In this situation, an Extreme Adaptive Optics system uses faster and higher-resolution wavefront sensors and mirrors, usually coupled with a filter to block the light of the host star and enable imaging of the dim exoplanets orbiting it. We have now reached an age where it is not a stretch to expect any telescope to come with its own adaptive optics system, and we are beginning to expand the use of this technology beyond astronomy.

    1
    Credit: 5W Infographics.

    The Problem of Space Junk

    Ironically one of these new applications helped to inspire the early development of adaptive optics: the observation of objects in close orbit around our planet. This research area, commonly called space situational awareness, includes the observation and study of human-made objects (satellites) as well as natural objects (meteoroids). A legitimate fear is that the ever increasing number of spacecraft being launched will also increase the number of collisions between them, resulting in even more debris. The worst-case scenario is that a cascading effect will ensue, rendering certain orbits completely unusable. This catastrophic, yet rather likely, possibility is called the Kessler syndrome, after Donald J. Kessler, the NASA scientist who predicted it as early as 1978.

    About 34,000 human-made objects larger than 10 centimeters are now orbiting Earth; only about 10 percent are active satellites. Space junk is accumulating at the altitudes most heavily used for human activities in space, mainly in low-Earth orbit (some 300 to 2,000 kilometers above the ground) and geostationary orbit (around 36,000 kilometers). Although we can track the larger objects with radar, optical telescopes and laser-tracking stations, there are several hundred thousand pieces of debris in the one- to 10-centimeter range, as well as 100 million more pieces of debris that are smaller than a centimeter, whose positions are basically unknown.

    The collision scenes in the 2013 movie Gravity gruesomely illustrate what would happen if a large piece of junk were to collide with, for instance, the International Space Station. NASA reports that over the past 20 years the station has had to perform about one evasive maneuver a year to avoid space debris that is flying too close, and the trend is increasing, with three maneuvers made in 2020. Space junk has the power to significantly disturb our current way of life, which, often unbeknownst to us, largely relies on space technologies. Satellites are necessary for cell phones, television and the Internet, of course, but also global positioning, banking, Earth observations for weather predictions, emergency responses to natural disasters, transport and many other activities that are critical to our daily lives.

    A number of projects are aiming to clean up space, but these efforts are technologically difficult, politically complex and expensive. Meanwhile some scientists, including our team at the Australian National University, are working to develop mitigation strategies from the ground. Working from Earth is easier and more affordable and can rely on technologies that we already do well, such as adaptive optics.

    Various subtle differences exist between the way we use adaptive optics for astronomy and the way we apply it to space situational awareness. The speed of satellites depends on their distance from Earth. At an altitude of 400 kilometers above the ground, the International Space Station, for instance, is flying at the incredible pace of eight kilometers per second and completes a full orbit every hour and a half. This is much faster than the apparent motion of the sun and stars, which take a day to circle overhead due to Earth’s rotation. Because of this speed, when telescopes track satellites, the atmospheric turbulence appears to change much more rapidly, and adaptive optics systems have to make corrections 10 to 20 times faster than if they were tracking astronomical objects. We must also point the guide star laser beam slightly ahead of the satellite to probe the atmosphere where the satellite will be a few milliseconds later.
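
    The quoted speed and period follow from basic circular-orbit mechanics, v = sqrt(GM/r), as this quick check shows (standard textbook values for Earth's gravitational parameter and radius; not numbers taken from the article):

        import math

        MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter GM
        R_EARTH = 6.371e6           # m, mean radius of Earth

        altitude = 400e3                       # m, roughly the ISS altitude
        r = R_EARTH + altitude
        speed = math.sqrt(MU_EARTH / r)        # circular orbital speed
        period = 2 * math.pi * r / speed       # time for one full orbit

        print(f"speed  ~ {speed / 1000:.1f} km/s")     # ~7.7 km/s, close to eight kilometers per second
        print(f"period ~ {period / 60:.0f} minutes")   # ~92 minutes, about an hour and a half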

    Adaptive optics can be used to track and take images of satellites and debris in low-Earth orbit and to improve the tracking of objects in low, medium and geostationary orbits. One of the ways we track space objects is light detection and ranging, a technique more commonly known as LIDAR. We project a tracking laser (not to be confused with the guide star laser) into the sky to bounce off a satellite, and we measure the time it takes to come back to us to determine the spacecraft’s precise distance to Earth. In this case, the adaptive optics system preconditions the laser beam by intentionally distorting its light before it travels through the turbulent atmosphere. We calculate our distortions to counteract the effects of turbulence so that the laser beam is undisturbed after it exits the atmosphere.
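
    The ranging arithmetic at the heart of LIDAR is simple: distance is half the round-trip travel time multiplied by the speed of light. A minimal sketch (the round-trip time below is a made-up example, not a measured value):

        SPEED_OF_LIGHT = 299_792_458.0  # m/s

        def range_from_round_trip(round_trip_seconds):
            """Distance to a target from the measured time for a laser pulse to return."""
            return SPEED_OF_LIGHT * round_trip_seconds / 2.0

        # Example: a pulse that comes back after about 2.67 milliseconds
        print(f"{range_from_round_trip(2.67e-3) / 1000:.0f} km")   # ~400 km, a low-Earth-orbit distance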

    In addition to tracking space debris, we hope to be able to use this technique to push objects off course if they are heading for a collision. The small amount of pressure exerted when a photon of laser light reflects from the surface of debris could modify the orbit of an object with a large area-to-mass ratio. To be effective, we need adaptive optics to focus the laser beam precisely where we want it to go. This strategy would not reduce the amount of debris in orbit, but it could help prevent debris-on-debris collisions and possibly delay the onset of the catastrophic Kessler scenario. Eventually such systems could be employed around the globe to help manage the space environment.
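
    An order-of-magnitude estimate shows why the area-to-mass ratio matters: for a reflecting surface, radiation pressure is roughly (1 + reflectivity) × irradiance / c, so the resulting acceleration scales directly with the debris object's area-to-mass ratio. All numbers below (laser power, spot size, reflectivity, area-to-mass ratio) are invented purely for illustration:

        SPEED_OF_LIGHT = 299_792_458.0  # m/s

        def radiation_pressure_accel(irradiance_w_m2, area_to_mass_m2_per_kg, reflectivity=1.0):
            """Acceleration of a flat, normally illuminated object from photon pressure."""
            return (1.0 + reflectivity) * irradiance_w_m2 * area_to_mass_m2_per_kg / SPEED_OF_LIGHT

        # Hypothetical case: a 10-kW beam spread over a 10 m^2 spot at the debris,
        # hitting a light object with a high area-to-mass ratio of 0.1 m^2/kg.
        irradiance = 10e3 / 10.0
        accel = radiation_pressure_accel(irradiance, area_to_mass_m2_per_kg=0.1)
        print(f"acceleration ~ {accel:.1e} m/s^2")   # a tiny nudge, on the order of 1e-6 m/s^2

    Tiny as it is, applied over repeated passes such a nudge can be enough to shift a predicted close approach into a miss, which is the goal described above.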

    2
    IN FOCUS: Two images of the planet Neptune taken by the Very Large Telescope—one before the adaptive optics system is switched on and one after—show the difference the technology makes. Credit: P. Weilbacher (AIP) and ESO.

    Quantum Transmissions

    Space safety is not the only application that can benefit from adaptive optics. Encrypted communications are essential to many of the technological advances we have seen in recent decades. Tap-and-go payment systems from mobile phones and wristwatches, online banking and e-commerce all rely on high-speed secure communications technology. The encryption we use for these communications is based on hard-to-solve mathematical problems, and it works only because current computers cannot solve these problems fast enough to break the encryption. Quantum computers, which may soon have the ability to solve these problems faster than their classical counterparts, threaten traditional encryption. Cryptographers are constantly inventing new techniques to secure data, but no one has been able to achieve a completely secure encryption protocol. Quantum cryptography aims to change that.

    Quantum encryption relies on the nature of light and the laws of quantum physics. The backbone of any quantum-encryption system is a quantum “key.” Quantum sources can provide an endless supply of truly random numbers to create keys that are unbreakable, replacing classically derived keys that are made in a predictable and therefore decipherable way. These keys can be generated at a very high rate, and we need to use them only once, thereby providing a provably unbreakable cipher.
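
    The “use each key only once” idea is the classical one-time pad, which is provably unbreakable as long as the key is truly random, at least as long as the message, and never reused. Here is a minimal sketch; in a real quantum key distribution system the key bits would come from the quantum source described above, whereas this illustration simply uses the operating system's random generator as a stand-in:

        import secrets

        def one_time_pad(data: bytes, key: bytes) -> bytes:
            """XOR each byte with a key byte; the same operation both encrypts and decrypts."""
            if len(key) < len(data):
                raise ValueError("a one-time pad key must be at least as long as the message")
            return bytes(d ^ k for d, k in zip(data, key))

        message = b"meet at the observatory"
        key = secrets.token_bytes(len(message))      # stand-in for quantum-generated random bits

        ciphertext = one_time_pad(message, key)      # encrypt
        recovered = one_time_pad(ciphertext, key)    # decrypt with the same key
        assert recovered == message                  # the key must now be discarded, never reused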

    To send a quantum-encrypted communication over long distances without a fiber-optic connection, we must transmit laser light from an optical telescope on the ground to a receiving telescope on a satellite and back again. The problem with sending these signals is the same one we face when we use a laser to push a piece of space debris: the atmosphere changes the path of the transmission. But we can use the same adaptive optics technologies to send and receive these quantum signals, vastly increasing the amount of data we can transfer. This strategy may allow optical communication to compete with large radio-frequency satellite communication dishes, with the advantage of being quantum-compatible. There are other hurdles to implementing quantum communications—for instance, the need to store and route quantum information without disturbing the quantum state. But researchers are actively working on these challenges, and we hope to eventually create a global quantum-secure network. Adaptive optics is a critical part of working toward this dream.

    The Atmospheric Highway

    Suddenly a technology once reserved for studying the heavens may help us meet some of the great goals of the future—protecting the safety of space and communicating securely. These new applications will in turn push adaptive optics forward, to the benefit of astronomy as well.

    Traditionally adaptive optics was only viable for large observatories where the cost was justified by big performance gains. But space monitoring and communication strongly benefit from adaptive optics even on modest apertures. We find ourselves in a situation where all these communities can help one another. Undersubscribed telescopes could find a new life once equipped with adaptive optics, and space debris monitors are hungry for more telescope access to cover as many latitudes and longitudes as possible. For future observatories, astronomers are considering adding technical requirements to their telescopes and instruments to make them compatible with other space research applications such as space situational awareness and communication. Not only does this strengthen their science case; it also gives them access to new sources of funding, including private enterprises.

    We are entering a multidisciplinary age where the sky is a common resource. While we are sharpening images of the sky, we are blurring the lines between all the activities that use a telescope as their primary tool. Scientists and engineers building adaptive optics systems are now broadening their collaborative circles and putting themselves in the middle of this new dynamic.

    Adaptive optics is also increasingly being used without telescopes. An important and now rather mainstream use of adaptive optics is in medical imaging and ophthalmology, to correct for the aberrations introduced by imaging through living tissues and the eye. Other uses include optimum laser focusing for industrial laser tools and even antimissile military lasers. There has never been a more exciting time to explore the potential of adaptive optics in space and on Earth.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 3:00 pm on April 4, 2021 Permalink | Reply
    Tags: "When Did Life First Emerge in the Universe?", Scientific American   

    From Scientific American: “When Did Life First Emerge in the Universe?” 

    From Scientific American

    April 4, 2021
    Avi Loeb

    1
    Artist’s conception of GN-z11, the earliest known galaxy in the universe. Credit: Pablo Carlos Budassi Wikimedia (CC BY-SA 4.0).

    About 15 million years after the big bang, the entire universe had cooled to the point where the electromagnetic radiation left over from its hot beginning was at about room temperature. In a 2013 paper [International Journal of Astrobiology], I labeled this phase as the “habitable epoch of the early universe.” If we had lived at that time, we wouldn’t have needed the sun to keep us warm; that cosmic radiation background would have sufficed.
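
    The arithmetic behind that statement is the scaling of the radiation background's temperature with redshift, T(z) = T0 × (1 + z), where T0 ≈ 2.725 kelvins today. The specific redshifts below are a back-of-envelope illustration, not values quoted in the article:

        T_CMB_TODAY = 2.725  # kelvins, the present-day cosmic microwave background temperature

        def cmb_temperature(z):
            """Radiation background temperature at redshift z, assuming T = T0 * (1 + z)."""
            return T_CMB_TODAY * (1.0 + z)

        # Redshift range over which the background alone sits between water's freezing and boiling points
        z_freeze = 273.15 / T_CMB_TODAY - 1.0
        z_boil = 373.15 / T_CMB_TODAY - 1.0
        print(f"background between 0 and 100 C for z ~ {z_boil:.0f} down to {z_freeze:.0f}")

        # Around z ~ 110 the background is close to room temperature
        print(f"T at z = 110: {cmb_temperature(110):.0f} K")

    Those redshifts correspond to a cosmic age of roughly 10 to 17 million years, bracketing the 15-million-year figure above.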

    Did life start that early? Probably not. The hot, dense conditions in the first 20 minutes after the big bang produced only hydrogen and helium along with a tiny trace of lithium (one in 10 billion atoms) and a negligible abundance of heavier elements. But life as we know it requires water and organic compounds, whose existence had to wait until the first stars fused hydrogen and helium into oxygen and carbon in their interiors about 50 million years later. The initial bottleneck for life was not a suitable temperature, as it is today, but rather the production of the essential elements.

    Given the limited initial supply of heavy elements, how early did life actually start? Most stars in the universe formed billions of years before the sun. Based on the cosmic star formation history, I showed in collaboration with Rafael Batista and David Sloan [Journal of Cosmology and Astroparticle Physics] that life near sunlike stars most likely began over the most recent few billion years in cosmic history. In the future, however, it might continue to emerge on planets orbiting dwarf stars, like our nearest neighbor, Proxima Centauri, which will endure hundreds of times longer than the sun’s.

    Alpha Centauri, Beta Centauri and Proxima Centauri, 27 February 2012. Credit: Skatebiker.

    Ultimately, it would be desirable for humanity to relocate to a habitable planet around a dwarf star, such as Proxima Centauri b, where it could keep itself warm near a natural nuclear furnace for up to 10 trillion years into the future (stars are merely fusion reactors confined by gravity, with the benefit of being more stable and durable than the magnetically confined versions that we produce in our laboratories).

    As far as we know, water is the only liquid that can support the chemistry of life—but there is much we don’t know. Could alternative liquids have existed in the early universe as a result of warming by the cosmic radiation background alone? In a new paper with Manasvi Lingam [The Extended Habitable Epoch of the Universe for Liquids Other than Water] we show that ammonia, methanol and hydrogen sulfide could exist as liquids just after the first stars formed and that ethane and propane might be liquids somewhat later. The relevance of these substances to life is unknown, but they can be studied experimentally. If we ever succeed in creating synthetic life, as is being attempted in Jack Szostak’s laboratory at Harvard University (US), we could check whether life can emerge in liquids other than water.

    One way to determine how early life started in the cosmos is to examine whether it formed on planets around the oldest stars. Such stars are expected to be deficient in elements heavier than helium, which astrophysicists call “metals” (in our language, unlike that of most people, oxygen, for example, is considered a metal). Indeed, metal-poor stars have been discovered in the periphery of the Milky Way and have been recognized as potential members of the earliest generation of stars in the universe. These stars often exhibit an enhanced abundance of carbon, making them “carbon-enhanced metal-poor” (CEMP) stars. My former student Natalie Mashian and I suggested that planets around CEMP stars might be made mostly of carbon, so their surfaces could provide a rich foundation for nourishing early life.

    We could therefore search for planets that transit, or pass in front of, CEMP stars and show biosignatures in their atmospheric composition.

    Planet transit. NASA/Ames.

    This would allow us to determine observationally how far back in time life may have started in the cosmos, based on the ages of these stars. Similarly, we could estimate the age of interstellar technological equipment that we might discover floating near Earth (or which might have crashed on the moon), based on long-lived radioactive elements or the extent of scars from impacts of dust particles on its surface.

    A complementary strategy is to search for technological signals from early distant civilizations that harnessed enough energy to make them detectable across the vast cosmic scale. One possible signal would be a flash of light from a collimated light beam generated to propel light sails. Others could be associated with cosmic engineering projects, such as moving stars around. Communication signals are not expected to be detectable across the universe, because the signal travel time would require billions of years in each direction and no participant would be patient enough to engage in such a slow exchange of information.

    But life’s signatures will not last forever. The prospects for life in the distant future are gloomy. The dark and frigid conditions that will result from the accelerated expansion of the universe by dark energy will likely extinguish all forms of life 10 trillion years from now. Until then, we can cherish the temporary gifts that nature has blessed us with. Our actions will be a source of pride for our descendants if they sustain a civilization intelligent enough to endure for trillions of years. Here’s hoping that we will act wisely enough to be remembered favorably in their “big history” books.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 5:40 pm on February 24, 2021 Permalink | Reply
    Tags: "Mystery of Spinning Atomic Fragments Solved at Last", Atomic nuclei rich in protons and neutrons are unstable., How pieces of splitting nuclei get their spins., Jonathan Wilson cautions more work is needed to explain how exactly spin results after scission., , Scientific American, The scientists examined nuclei resulting from the fission of various unstable elemental isotopes: thorium-232; uranium-238; and californium-252., The scientists focused on the gamma rays released after nuclear fission which encoded information on the spin of the resulting fragments., This work could help scientists design better nuclear reactors in the future.   

    From Scientific American: “Mystery of Spinning Atomic Fragments Solved at Last” 


    From Scientific American

    February 24, 2021
    Charles Q. Choi

    1
    Credit: Getty Images.

    New experiments have answered the decades-old question of how pieces of splitting nuclei get their spins.

    For more than 40 years, a subatomic mystery has puzzled scientists: Why do the fragments of splitting atomic nuclei emerge spinning from the wreckage? Now researchers find these perplexing gyrations might be explained by an effect akin to what happens when you snap a rubber band.

    To get an idea why this whirling is baffling, imagine you have a tall stack of coins. It would be unsurprising if this unstable tower fell. However, after this stack collapsed, you likely would not expect all the coins to begin spinning as they hit the floor.

    Much like a tall stack of coins, atomic nuclei rich in protons and neutrons are unstable. Instead of collapsing, such heavy nuclei are prone to splitting, a reaction known as nuclear fission. The resulting shards come out spinning, which can prove especially bewildering when the nuclei that split were not spinning themselves. Just as you would not expect an object to start moving on its own without some force acting on it, a body beginning to spin in absence of an initiating torque would seem decidedly supernatural, in apparent violation of the law of conservation of angular momentum.

    This “makes it look like something was created from nothing,” says study lead author Jonathan Wilson, a nuclear physicist at Paris-Saclay University [Université Paris-Saclay](FR)‘s Irene Joliot-Curie Laboratory in Orsay, France. “Nature pulls a conjuring trick on us. We start with an object with no spin, and after splitting apart, both chunks are spinning. But, of course, angular momentum must still be conserved.”

    Previous research found that fission begins when the shape of a nucleus becomes unstable as a consequence of jostling between the protons; since they are positively charged, they naturally repel each other. As the nucleus elongates, the nascent fragments form a neck between them. When the nucleus ultimately disintegrates, these pieces move apart rapidly and the neck snaps quickly, a process known as scission.

    Over the decades, scientists have devised a dozen or so different theories for this spinning, Wilson says. One class of explanations suggests the spin arises before scission given the bending, wriggling, tilting and twisting of the particles making up the nucleus before the split, motions resulting from thermal excitations, quantum fluctuations or both. Another set of ideas posits that the spin occurs after scission consequent to forces such as repulsion between the protons in the fragments. However, “the results of the experiments looking into this all contradicted each other,” Wilson says.

    Now Wilson and his colleagues have conclusively determined that this spinning results after the split, findings they detailed online February 24 in Nature. “This is wonderful new data,” says nuclear physicist George Bertsch at the University of Washington at Seattle (US), who did not participate in this study. “It’s really an important advance in our understanding of nuclear fission.”

    In the new study, the scientists examined nuclei resulting from the fission of various unstable elemental isotopes: thorium-232, uranium-238 and californium-252. They focused on the gamma rays released after nuclear fission, which encoded information on the spin of the resulting fragments.

    If the spinning resulted from effects before scission, one would expect the fragments to have equal and opposite spins. But “this is not what we observe,” Wilson says. Instead, it appears that each fragment spins in a manner independent of its partner, a result that held true across all examined batches of nuclei regardless of the respective isotopes.

    The researchers suspect that when a nucleus lengthens and splits, its remnants start off somewhat resembling teardrops. These fragments each possess a quality akin to surface tension that drives them to reduce their surface area by adopting more stable spherical shapes, much as bubbles do, Wilson explains. The release of this energy causes the remnants to heat and spin, a bit like how stretching a rubber band to the point of snapping leads to a chaotic, elastic flailing of fragments.

    Wilson adds this scenario is complicated by the fact that each chunk of nuclear debris is not simply a uniform piece of rubber, but rather resembles a bag of buzzing bees, given how its particles are all moving and often colliding with each other. “They’re like two miniature swarms that part ways and start doing their own things,” he says.

    All in all, “these findings give big support to the idea that the shapes of nuclei at the point at which they’re coming apart is what determines their energy and the properties of the fragments,” Bertsch says. “This is important for directing the theory of fission to be more predictive and allow us to more confidently discuss how it can make elements.”

    One reason previous analyses of fissioning atoms did not deduce the origins of these gyrations, Wilson suggests, is that they did not have the benefit of modern, ultrahigh-resolution detectors and contemporary, computationally intensive data-analysis methods. Previous work also often focused more on exploring the exotic structures of “extreme” superheavy neutron-rich nuclei to see how standard nuclear theory could account for such distinctly unusual cases. Much of that prior work deliberately avoided collecting and analyzing the huge amount of extra data needed to investigate how the nuclear fragments spun, whereas this new study explicitly focused on analyzing such details, he explains. “For me, the most surprising thing about the measurement is that it could be done at all with such clear results,” Bertsch says.

    Wilson cautions that more work is needed to explain how exactly spin results after scission. “Our theory is simplistic, for sure,” he notes. “It can explain about 85 percent of the variations we see in spin as a function of mass, but a more sophisticated theory could certainly be able to make more accurate predictions. It’s a starting point; we’re not claiming anything more than that.” Other scientists at the European Commission’s Joint Research Center facility in Geel, Belgium, he adds, have now also confirmed the observations with a different technique, and those independent results should be published soon.

    These findings may not only solve a decades-long mystery but could help scientists design better nuclear reactors in the future. Specifically, they could help shed light on the nature of the gamma rays emitted by spinning nuclear fragments during fission, which can heat reactor cores and surrounding materials. Currently these heating effects are not fully understood, particularly how they vary between different types of nuclear-power systems.

    “There’s up to a 30 percent discrepancy between the models and the actual data about these heating effects,” Wilson says. “Our findings are just a part of the full picture one would want in simulating future reactors, but a full picture is necessary.”

    These studies of subatomic angular momentum could also help scientists figure out which superheavy elements and other exotic atomic nuclei they can synthesize to shed more light on the still-murky depths of nuclear structure. “About 7,000 nuclei can theoretically exist, but only 4,000 of those can be accessed in the laboratory,” Wilson says. “Understanding more about how spin gets generated in fission fragments can help us understand what nuclear states we can access.”

    Future research, for instance, could explore what might happen when nuclei are driven to fission when bombarded by light or charged particles. In such cases, Wilson says, the incoming energy might potentially lead to pre-scission influences on the spinning of the resulting fragments.

    “Even though fission was discovered 80 years ago, it’s so complex that we’re still seeing interesting results today,” Wilson says. “The story of fission is not complete—there are more experiments to do, for sure.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     