Tagged: Scientific American (US)

  • richardmitnick 12:51 pm on September 5, 2021 Permalink | Reply
    Tags: "Evolution" was contingency-driven in that how organisms evolved depended on chance factors and the environment that existed at that time., "Special creation": the theory that organisms were immutable and had been specially designed to fit the biological niches was the dominant idea., "When Lord Kelvin Nearly Killed Darwin's Theory", , “Natural selection” differed from other models of evolution by showing how diversity could arise naturally by a process of Malthusian population selection pressures., Charles Lyell's "Principles of Geology" cemented the idea that the Earth had been around for hundreds of millions of years., Darwin's major work “On the Origin of Species" was published in 1859., , Geologists and paleontologists started using the features of the Earth itself such as erosion; sedimentation; and the layers of strata and the fossils embedded in them to estimate its age., It was the discovery of radioactivity that decisively changed the picture because it led to an entirely new way of measuring the age of the Earth., Scientific American (US), The extremely long window of time opened up by the new geology allowed Darwin and his co-creator Alfred Russell Wallace to develop the central new insight of “natural selection”., The radical change in the status of human beings implied by their model-from being specially created in the image of God to just another accidental byproduct was an immediate source of controversy., The theory of "uniformitarianism"-that major geological features were caused by the very slow accumulation of tiny changes-gained ground., Various theories of evolution predated Darwin but whatever version one favored one thing was clear: it needed a very long time for its consequences to work itself out., When Kelvin died in 1907 at the age of 83 it was not clear if he had accepted that his estimates were no longer valid.   

    From Scientific American (US) : “When Lord Kelvin Nearly Killed Darwin’s Theory” 

    From Scientific American (US)

    September 5, 2021
    Mano Singham

    The eminent 19th century physicist argued—wrongly, it turned out—that Earth wasn’t old enough to have let natural selection play out.

    Lord Kelvin. Credit: Getty Images.

    The famous opening line “It was the best of times, it was the worst of times” of Charles Dickens’ 1859 novel A Tale of Two Cities referred to the period of the French Revolution. But he could equally well have been describing his contemporary Charles Darwin’s experience with his theory of evolution by natural selection. Darwin was born at the best of times, in 1809, when conditions were highly conducive for his theory to flourish, but he died in 1882 at the worst of times, because there was a real danger that it might soon be killed off. Darwin’s nemesis was the eminent physicist Lord Kelvin, and the weapon used against him was the age of the Earth.

    Various theories of evolution predated Darwin, but whatever version one favored, one thing was clear: it needed a very long time for its consequences to work themselves out. Precisely how long was hard to pin down, but it was believed to require tens or hundreds of millions of years. From 1650 on, the dominant theory about the age of the Earth, based on the work of Bishop Ussher, Isaac Newton, and many other scholars who used various textual sources, was that it was about 6,000 years.

    Theories of geology and biology had to accommodate themselves to this short timeframe. The geological theory known as catastrophism postulated that major features such as the Grand Canyon and the Himalayas had emerged as a result of sudden and violent upheavals. When it came to biological diversity, the dominant idea was special creation: that organisms were immutable and had been specially designed to fit the biological niches in which they found themselves.

    Such a short window of time would have made it impossible for anyone to credibly propose the emergence of new species by a process of slow evolution. But around 1785, ideas about the age of the Earth began to undergo a radical change as geologists and paleontologists started using the features of the Earth itself, such as erosion, sedimentation, and the layers of strata and the fossils embedded in them, to estimate its age. The theory of uniformitarianism, the idea that major geological features were caused by the very slow accumulation of tiny changes, gained ground, culminating in Charles Lyell’s epic three-volume work Principles of Geology in 1830, which cemented the idea that the Earth had been around for hundreds of millions of years and possibly much longer, so long that it seemed impossible to fix an actual age.

    Charles Darwin. Credit: Getty Images.

    Darwin was also a keen student of geology and was familiar with Lyell’s work (the two later became close friends); he had the first two volumes of Lyell’s book with him on his five-year voyage around the globe aboard HMS Beagle from 1831 to 1836, during which the ideas for his theory of evolution germinated as he observed the patterns of species in the various locations he visited. Darwin knew from his work with pigeons that even deliberately breeding for specific characteristics took a long time to produce them. But how much time was necessary? He felt that it required at least hundreds of millions of years. The work of Lyell and other geologists gave him the luxury of assuming that sufficient time existed for natural selection to do its work.

    Darwin also came of age at a time when the idea that species were immutable had begun to crumble. While Darwin was taught, and accepted, the still dominant special creation theory, he was also familiar with the general idea of evolution. His own grandfather Erasmus Darwin had published a book in 1794, Zoonomia, that explored proto-evolutionary ideas, and Jean-Baptiste Lamarck had published his own model of how evolution worked in 1802. The various models proposed for the mechanism of evolution, such as Lamarckian evolution, orthogenesis, and use-disuse, all implied some level of teleology: that there was a directionality inherent in the process.

    The extremely long window of time opened up by the new geology allowed Darwin and his co-creator Alfred Russel Wallace, working independently, to develop the central new insight of natural selection, which differed from other models of evolution by showing how diversity could arise naturally through Malthusian population selection pressures, without any kind of mystical agency directing the process toward any specific ends.

    According to their theory, evolution was contingency-driven: how organisms evolved depended on chance factors and the environment that existed at that time, and if we ran the clock again, we could get very different outcomes in which human beings as we know them might not appear. Their work was first presented in a joint paper in 1858. Darwin’s major work On the Origin of Species, published a year later, was a closely argued compilation of a massive amount of evidence that helped establish evolution as a fact.

    The radical change in the status of human beings implied by their model, from being specially created in the image of God to being just another accidental byproduct of the evolutionary process like all other species, was an immediate source of controversy because it challenged a key religious tenet that human beings were special. This was why natural selection aroused such opposition even while evolution itself was accepted. Many scientists of that time were religious and believed in theistic evolution, which held that a supernatural agency was guiding the process to produce the desired ends.

    While all models of evolution required very long times, natural selection required much longer times than any guided selection process. Hence the younger the Earth, the more likely it was that natural selection could not be the mechanism. It was physicists, led by the eminent Kelvin, himself a theistic evolutionist, who made the case for a young Earth, though it must be emphasized that ‘young’ at that time meant around 100 million years or less. Even religious scientists of that time had abandoned the idea of an Earth just 6,000 years old.

    Beginning around 1860, Kelvin and other physicists started estimating the ages of the Earth and Sun using the nebular hypothesis proposed around 1750 by Immanuel Kant and Pierre-Simon Laplace. This model treated the Earth and the Sun as starting out as rotating clouds of particles that coalesced under gravity to form molten balls, with the Earth subsequently solidifying and cooling. Kelvin used the laws of thermodynamics and other physics principles to arrive at estimates of 20 to 400 million years. By 1879 the upper limit had been lowered to about 100 million years for the Earth, with an even shorter upper limit of 20 million years for the Sun, much less than the 200 million years or so believed to be required for natural selection to work. Since physics was considered the most rigorous of the sciences at that time, things looked bad for natural selection.
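
    To give a flavor of the kind of calculation involved, here is a minimal back-of-the-envelope sketch of Kelvin's conductive-cooling argument. It is not taken from the article; the formula is the standard half-space cooling result, and all parameter values are assumptions of roughly the order Kelvin used.

```python
import math

# Treat the Earth as a half-space that starts uniformly molten and cools only by
# conduction.  The surface temperature gradient then decays as T0 / sqrt(pi*kappa*t),
# so today's measured geothermal gradient G implies an age of roughly
#   t = T0^2 / (pi * kappa * G^2).
T0 = 3900.0       # assumed initial temperature excess of molten rock over the surface, K (~7000 F)
kappa = 1.2e-6    # assumed thermal diffusivity of rock, m^2/s
G = 0.0365        # assumed present-day geothermal gradient, K/m (~1 F per 50 ft)

t_seconds = T0**2 / (math.pi * kappa * G**2)
t_years = t_seconds / (365.25 * 24 * 3600)
print(f"Estimated age of the Earth: {t_years / 1e6:.0f} million years")  # ~100 million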

    When Darwin died in 1882, he was mourned as a great scientist who had radically changed our understanding of how the vast diversity of organisms we see around us came about, dethroning the idea that they were immutable. But because of the shrinking age of the Earth, he died with a major cloud hanging over his mechanism of natural selection. Darwin’s final words on the topic, written in 1880 just two years before his death, expressed a plaintive hope that future developments might reconcile the needs of natural selection with physics calculations.

    “With respect to the lapse of time not having been sufficient since our planet was consolidated for the assumed amount of organic change, and this objection, as urged by [Lord Kelvin], is probably one of the gravest as yet advanced, I can only say, firstly that we do not know at what rate species change as measured in years, and secondly that many philosophers are not yet willing to admit that we know enough of the constitution of the universe and of the interior of our globe to speculate with safety on its past duration.”

    It turned out Darwin was prescient that improved knowledge of the interior of the Earth might change the calculations in his favor, but at the time he seemed to be grasping at straws. In fact, in the near term the problem got even worse, because Kelvin and others produced new calculations that reduced the age of the Earth even more, so that by 1895 the consensus physics view was that the age of the Earth lay in the range of 20 to 40 million years. Natural selection appeared to be doomed.

    But physicists were now encountering stiffer opposition from other disciplines. Geologists were adamant that their models based on the accumulating evidence on sedimentation and erosion, while not as rigorous as the physics models, were well enough established that they were confident of their lower limit of 100 million years. Paleontologists were also arguing that the fossil record was not consistent with the physicists’ shorter ages. Both groups argued that the physicists must have gone awry somewhere, even if they could not point out the specific flaws.

    Beginning in 1895, this impasse began to be broken when physicist John Perry, a former assistant of Kelvin’s, challenged the latter’s assumption that the Earth was a rigid and homogeneous body, saying there was little evidence to support it. By introducing inhomogeneity and convective flow in the Earth’s interior, he found that Kelvin’s estimates for the age of the Earth could change by as much as a factor of 100, shifting the upper limit into the billions of years. Other physicists also chimed in with similar upward shifts, and this encouraged geologists, paleontologists, and biologists to ignore the physicists’ arguments for a young Earth.

    It was the discovery of radioactivity that decisively changed the picture because it led to an entirely new way of measuring the age of the Earth, by allowing scientists to calculate the ages of rocks. Since the oldest rock that could be found set a lower limit for the age of the Earth, the race was on to find older and older rocks using this method and records fell rapidly, leading to ages of 141 million years by 1905, 1.64 billion years by 1911, 1.9 billion years by 1935, 3.35 billion years by 1947, and to 4.5 billion years by 1953, which is where the current consensus lies.
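
    For concreteness, here is a minimal sketch of the basic radiometric-dating relation behind those rock ages. The relation is standard; the isotope pair and the measured ratio below are illustrative assumptions, not values from the article.

```python
import math

# A rock that starts with P0 atoms of a radioactive parent keeps P = P0 * exp(-lam*t)
# of them after time t and holds D = P0 - P daughter atoms, so
#   t = ln(1 + D / P) / lam.
# Uranium-238 decaying to lead-206 is used here purely as an illustration.
half_life_years = 4.468e9                     # U-238 half-life
lam = math.log(2) / half_life_years           # decay constant, per year

daughter_to_parent = 0.92                     # assumed measured Pb-206 / U-238 ratio
age_years = math.log(1 + daughter_to_parent) / lam
print(f"Inferred rock age: {age_years / 1e9:.1f} billion years")  # ~4.2 billion
```

    The oldest such age found anywhere on (or off) the Earth sets the lower limit on the planet's age, which is why the records fell so quickly once the method was in hand.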

    When Kelvin died in 1907 at the age of 83 it was not clear if he had accepted that his estimates were no longer valid. But Darwin’s hope that Kelvin would be proven wrong, and that eventually it would be shown that sufficient time existed for natural selection to work, was realized, 30 years after his death. His theory now has all the time it needs.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American (US), the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 1:11 pm on July 13, 2021 Permalink | Reply
    Tags: "How to Tell if Extraterrestrial Visitors Are Friend or Foe", Scientific American (US)   

    From Scientific American (US) : “How to Tell if Extraterrestrial Visitors Are Friend or Foe” 

    From Scientific American (US)

    July 12, 2021
    Avi Loeb

    Credit: David Wall. Getty Images.

    Despite the naive storylines about interstellar travel in science fiction, biological creatures were not selected by Darwinian evolution to survive travel between stars. Such a trip would necessarily span many generations, since even at the speed of light, it would take tens of thousands of years to travel between stars in our galaxy’s disk and 10 times longer across its halo. If we ever encounter traces of aliens, therefore, it will likely be in the form of technology, not biology. Technological debris could have accumulated in interstellar space over the past billions of years, just as plastic bottles have accumulated on the surface of the ocean. The chance of detecting alien technological relics can be simply calculated from their number per unit volume near us rather than from the Drake equation, which applies strictly to communication signals from living civilizations.
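
    As a rough illustration of that number-density argument, here is a minimal sketch. The artifact density and survey radius are purely placeholder assumptions, not figures from the essay.

```python
import math

# If alien artifacts drift through interstellar space with number density n, the
# expected count inside a survey sphere of radius R is simply
#   N = n * (4/3) * pi * R^3  -- no Drake-equation factors required.
n_per_cubic_au = 1e-3      # assumed artifact density, objects per cubic AU
survey_radius_au = 5.0     # assumed radius out to which a sky survey could spot them, AU

expected = n_per_cubic_au * (4.0 / 3.0) * math.pi * survey_radius_au**3
print(f"Expected artifacts within {survey_radius_au:.0f} AU: {expected:.1f}")
```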

    On a recent podcast about my book Extraterrestrial, I was asked whether extraterrestrial intelligence should be expected to follow the rational underpinning of morality, as neatly formulated by the German philosopher Immanuel Kant. This would be of concern to us during an encounter. Based on human history, I expressed doubt that morality would garner a global commitment from all intelligent beings in the Milky Way.

    Instead, whatever code of conduct allows systems of alien technology to dominate the galaxy also makes such systems the most likely form in which we would first encounter extraterrestrials. Practically, this rule acts as a sort of Darwinian evolution by natural selection, favoring systems that can persevere over long times and distances, multiply quickly, and spread at the highest speed, with self-repair mechanisms that mitigate damage along their journey. Such systems could have reached the habitable zones around all stars within the Milky Way, including our sun, by now. Most stars formed billions of years before ours did, and technological equipment sent from habitable planets near them could have predated us by enough time to dominate the galaxy before we came to exist as a technological species.

    Our own artificial intelligence systems are likely to supersede many features of human intelligence within the coming decade. It is therefore reasonable to imagine AI systems connected to 3-D printers that would replicate themselves on planet surfaces and adapt to changing circumstances along their journey between planets through machine learning. They could hibernate during long journeys and turn on close to stars, using starlight to recharge their energy supply. With this in mind, it is conceivable that the flat thin structure that might have characterized the interstellar object ‘Oumuamua was meant to collect sunlight and recharge its batteries. The same dish could have also served as a receiver for communication signals from probes that were already deposited on habitable planets, like Earth or Mars.

    And speaking about such probes—if one or more of the unidentified aerial phenomena (UAP) discussed in the Pentagon report to Congress is potentially extraterrestrial in origin, then scientists have an obligation to decipher their purpose by collecting more data on their behavior. Owing to the long time-delay of any signals from their point of origin, these objects are likely to act autonomously. How could we tell whether an autonomous extraterrestrial AI system is a friend or a foe?

    Initial impressions can be misleading, as in the story of the Trojan Horse used by the Greeks to enter the city of Troy and win the Trojan War. Therefore, we should first study the behavior of alien probes to figure out what type of data they are seeking. Second, we should examine how they respond to our actions. And with no choice left, we should engage their attention in a way that would promote our interests.

    But most importantly, humanity should avoid sending mixed messages to these probes, because that would confuse our interpretation of their response. Any decision on how to act must be coordinated by an international organization such as the United Nations and policed consistently by all governments on Earth. In particular, it would be prudent to appoint a forum composed of our most accomplished experts in the areas of computing (to interpret the meaning of any signal we intercept), physics (to understand the physical characteristics of the systems with which we interact) and strategy (to coordinate the best policy for accomplishing our goals).

    Ultimately, we might need to employ our own AI in order to properly interpret the alien AI. The experience will be as humbling as relying on our kids to make sense of new content on the internet by admitting that their computer skills exceed ours. The quality of expertise and AI might be more important than physical strength or natural intelligence in determining the outcome of a technological battlefield.

    As the smartest species on Earth, we have so far had our fate under our own control. This may not hold true after an encounter with extraterrestrial AI systems. Hence, technological maturity takes on a sense of urgency for Darwinian survival in the global competition of Milky Way civilizations. Only by becoming sufficiently advanced can we overcome threats from alien technological equipment. Here’s hoping that in the galactic race, our AI systems will outsmart the aliens. Just as in the gunfights of the Wild West, the survivor might be the one who is first to draw a weapon without hesitation.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American (US), the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 12:54 pm on July 13, 2021 Permalink | Reply
    Tags: "Plasma Particle Accelerators Could Find New Physics", , Accelerators come in two shapes: circular (synchrotron) or linear (linac)., At the start of the 20th century scientists had little knowledge of the building blocks that form our physical world., , By the end of the century they had discovered not just all the elements that are the basis of all observed matter but a slew of even more fundamental particles that make up our cosmos., CERN CLIC collider, CERN is proposing a 100-kilometer-circumference electron-positron and proton-proton collider called the Future Circular Collider., , , , , International Linear Collider (ILC), , , , Plasma is often called the fourth state of matter., , Scientific American (US),   

    From Scientific American (US) : “Plasma Particle Accelerators Could Find New Physics” 

    From Scientific American (US)

    July 2021
    Chandrashekhar Joshi

    Credit: Peter and Maria Hoey.

    At the start of the 20th century scientists had little knowledge of the building blocks that form our physical world. By the end of the century they had discovered not just all the elements that are the basis of all observed matter but a slew of even more fundamental particles that make up our cosmos, our planet and ourselves. The tool responsible for this revolution was the particle accelerator.

    The pinnacle achievement of particle accelerators came in 2012, when the Large Hadron Collider (LHC) uncovered the long-sought Higgs boson particle.

    The LHC is a 27-kilometer accelerating ring that collides two beams of protons with seven trillion electron volts (TeV) of energy each at CERN near Geneva.

    It is the biggest, most complex and arguably the most expensive scientific device ever built. The Higgs boson was the latest piece in the reigning theory of particle physics called the Standard Model. Yet in the almost 10 years since that discovery, no additional particles have emerged from this machine or any other accelerator.

    Have we found all the particles there are to find? Doubtful. The Standard Model of particle physics does not account for dark matter—particles that are plentiful yet invisible in the universe. A popular extension of the Standard Model called supersymmetry predicts many more particles out there than the ones we know about.

    And physicists have other profound unanswered questions such as: Are there extra dimensions of space? And why is there a great matter-antimatter imbalance in the observable universe? To solve these riddles, we will likely need a particle collider more powerful than those we have today.

    Many scientists support a plan to build the International Linear Collider (ILC), a straight-line-shaped accelerator that will produce collision energies of 250 billion (giga) electron volts (GeV).

    Though not as powerful as the LHC, the ILC would collide electrons with their antimatter counterparts, positrons—both fundamental particles that are expected to produce much cleaner data than the proton-proton collisions in the LHC. Unfortunately, the design of the ILC calls for a facility about 20 kilometers long and is expected to cost more than $10 billion—a price so high that no country has so far committed to host it.

    In the meantime, there are plans to upgrade the energy of the LHC to 27 TeV in the existing tunnel by increasing the strength of the superconducting magnets used to bend the protons. Beyond that, CERN is proposing a 100-kilometer-circumference electron-positron and proton-proton collider called the Future Circular Collider.

    Such a machine could reach the unprecedented energy of 100 TeV in proton-proton collisions. Yet the cost of this project will likely match or surpass the ILC. Even if it is built, work on it cannot begin until the LHC stops operation after 2035.

    But these gargantuan and costly machines are not the only options. Since the 1980s physicists have been developing alternative concepts for colliders. Among them is one known as a plasma-based accelerator, which shows great promise for delivering a TeV-scale collider that may be more compact and much cheaper than machines based on the present technology.

    The Particle Zoo

    The story of particle accelerators began in 1897 at the Cavendish physics laboratory at the University of Cambridge (UK).

    There J. J. Thomson created the earliest version of a particle accelerator using a tabletop cathode-ray tube like the ones used in most television sets before flat screens. He discovered a negatively charged particle—the electron.

    Soon physicists identified the other two atomic ingredients—protons and neutrons—using radioactive particles as projectiles to bombard atoms. And in the 1930s came the first circular particle accelerator—a palm-size device invented by Ernest Lawrence called the cyclotron, which could accelerate protons to energies of about 80 kiloelectron volts.

    Ernest Lawrence’s first cyclotron, 1930. Credit: Alamy.

    Thereafter accelerator technology evolved rapidly, and scientists were able to increase the energy of accelerated charged particles to probe the atomic nucleus. These advances led to the discovery of a zoo of hundreds of subnuclear particles, launching the era of accelerator-based high-energy physics. As the energy of accelerator beams rapidly increased in the final quarter of the past century, the zoo particles were shown to be built from just 17 fundamental particles predicted by the Standard Model. All of these, except the Higgs boson, had been discovered in accelerator experiments by the late 1990s. The Higgs’s eventual appearance at the LHC made the Standard Model the crowning achievement of modern particle physics.

    Aside from being some of the most successful instruments of scientific discovery in history, accelerators have found a multitude of applications in medicine and in our daily lives. They are used in CT scanners, for x-rays of bones and for radiotherapy of malignant tumors. They are vital in food sterilization and for generating radioactive isotopes for myriad medical tests and treatments. They are the basis of x-ray free-electron lasers, which are being used by thousands of scientists and engineers to do cutting-edge research in physical, life and biological sciences.

    A scientist tests a prototype plasma accelerator at the Facility for Advanced Accelerator Experimental Tests (FACET) at the DOE’s SLAC National Accelerator Laboratory (US) in California. Credit: Brad Plummer and SLAC National Accelerator Laboratory.

    Accelerator Basics

    Accelerators come in two shapes: circular (synchrotron) or linear (linac). All are powered by radio waves or microwaves that can accelerate particles to near light speed. At the LHC, for instance, two proton beams running in opposite directions repeatedly pass through sections of so-called radio-frequency cavities spaced along the ring.

    Radio waves inside these cavities create electric fields that oscillate between positive and negative to ensure that the positively charged protons always feel a pull forward. This pull speeds up the protons and transfers energy to them. Once the particles have gained enough energy, magnetic lenses focus the proton beams to several very precise collision points along the ring. When they crash, they produce extremely high energy densities, leading to the birth of new, higher-mass particles.

    When charged particles are bent in a circle, however, they emit “synchrotron radiation.” For any given radius of the ring, this energy loss is far less for heavier particles such as protons, which is why the LHC is a proton collider. But for electrons the loss is too great, particularly as their energy increases, so future accelerators that aim to collide electrons and positrons must either be linear colliders or have very large radii that minimize the curvature and thus the radiation the electrons emit.
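
    To see the scale of that effect, here is a minimal sketch using the standard synchrotron-radiation formula for the energy lost per turn; the formula and the numbers (including the approximate LHC bending radius) are assumptions of ours, not figures from the article.

```python
import math

# Energy radiated per turn by an ultra-relativistic particle of energy E and mass m
# on a ring of bending radius rho scales as E^4 / (m^4 * rho), so at equal energy
# and radius an electron loses (m_p/m_e)^4 ~ 10^13 times more energy than a proton.
E_CHARGE = 1.602176634e-19    # elementary charge, C
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m

def loss_per_turn_joules(energy_gev, mass_gev, radius_m):
    """Energy lost to synchrotron radiation per turn, assuming beta ~ 1."""
    gamma = energy_gev / mass_gev
    return (E_CHARGE**2 / (3.0 * EPS0)) * gamma**4 / radius_m

radius = 2804.0               # approximate LHC bending radius, m
for name, mass_gev in [("proton  ", 0.938), ("electron", 0.000511)]:
    loss_gev = loss_per_turn_joules(7000.0, mass_gev, radius) / E_CHARGE / 1e9
    print(f"{name}: {loss_gev:.3g} GeV lost per turn at 7 TeV")
```

    The proton loses only a few kiloelectron volts per turn, while the hypothetical 7-TeV electron would radiate away more than its entire energy each turn, which is why a ring of that size can collide protons but not electrons at such energies.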

    The size of an accelerator complex for a given beam energy ultimately depends on how much radio-frequency power can be pumped into the accelerating structure before the structure suffers electrical breakdown. Traditional accelerators have used copper to build this accelerating structure, and the breakdown threshold has meant that the maximum energy that can be added per meter is between 20 million and 50 million electron volts (MeV). Accelerator scientists have experimented with new types of accelerating structures that work at higher frequencies, thereby increasing the electrical breakdown threshold. They have also been working on improving the strength of the accelerating fields within superconducting cavities that are now routinely used in both synchrotrons and linacs. These advances are important and will almost certainly be implemented before any paradigm-changing concepts disrupt the highly successful conventional accelerator technologies.

    Eventually other strategies may be necessary. In 1982 the U.S. Department of Energy’s program on high-energy physics started a modest initiative to investigate entirely new ways to accelerate charged particles. This program generated many ideas; three among them look particularly promising.

    The first is called two-beam acceleration. This scheme uses a relatively cheap but very high-charge electron pulse to create high-frequency radiation in a cavity and then transfers this radiation to a second cavity to accelerate a secondary electron pulse. This concept is being tested at CERN on a machine called the Compact Linear Collider (CLIC).

    Another idea is to collide muons, which are much heavier cousins to electrons. Their larger mass means they can be accelerated in a circle without losing as much energy to synchrotron radiation as electrons do. The downside is that muons are unstable particles, with a lifetime of two millionths of a second. They are produced during the decay of particles called pions, which themselves must be produced by colliding an intense proton beam with a special target. No one has ever built a muon accelerator, but there are die-hard proponents of the idea among accelerator scientists.

    Finally, there is plasma-based acceleration. The notion originated in the 1970s with John M. Dawson of the University of California-Los Angeles (US), who proposed using a plasma wake produced by an intense laser pulse or a bunch of electrons to accelerate a second bunch of particles 1,000 or even 10,000 times faster than conventional accelerators can. This concept came to be known as the plasma wakefield accelerator.

    It generated a lot of excitement by raising the prospect of miniaturizing these gigantic machines, much like the integrated circuit miniaturized electronics starting in the 1960s.

    The Fourth State of Matter

    Most people are familiar with three states of matter: solid, liquid and gas. Plasma is often called the fourth state of matter. Though relatively uncommon in our everyday experience, it is the most common state of matter in our universe. By some estimates more than 99 percent of all visible matter in the cosmos is in the plasma state—stars, for instance, are made of plasma. A plasma is basically an ionized gas with equal densities of electrons and ions. Scientists can easily form plasma in laboratories by passing electricity through a gas as in a common fluorescent tube.

    A plasma wakefield accelerator takes advantage of the kind of wake you can find trailing a motorboat or a jet plane. As a boat moves forward, it displaces water, which moves out behind the boat to form a wake. Similarly, a tightly focused but ultraintense laser pulse moving through a plasma at the speed of light can generate a relativistic wake (that is, a wake also propagating nearly at light speed) by exerting radiation pressure and displacing the plasma electrons out of its way. If, instead of a laser pulse, a high-energy, high-current electron bunch is sent through the plasma, the negative charge of these electrons can expel all the plasma electrons, which feel a repulsive force. The heavier plasma ions, which are positively charged, remain stationary. After the pulse passes by, the expelled electrons are attracted back toward the ions by the force between their negative and positive charges. The electrons move so quickly they overshoot the ions and then again feel a backward pull, setting up an oscillating wake. Because of the separation of the plasma electrons from the plasma ions, there is an electric field inside this wake.
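
    For a rough sense of scale, here is a minimal sketch, assuming a typical experimental plasma density (the article does not give a specific value), of the oscillation frequency and wavelength of such a wake.

```python
import math

# The expelled electrons oscillate at the plasma frequency
#   omega_p = sqrt(n0 * e^2 / (eps0 * m_e)),
# so a wake trailing a light-speed driver has wavelength lambda_p = 2*pi*c / omega_p.
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
C = 2.99792458e8             # speed of light, m/s

n0 = 1.0e17 * 1e6            # assumed plasma density: 1e17 per cm^3, converted to m^-3
omega_p = math.sqrt(n0 * E_CHARGE**2 / (EPS0 * M_E))
lambda_p = 2.0 * math.pi * C / omega_p
print(f"Plasma frequency: {omega_p / (2 * math.pi) / 1e12:.1f} THz")
print(f"Wake wavelength : {lambda_p * 1e6:.0f} micrometers")
```

    A wake only about a tenth of a millimeter long is part of why, as discussed further below, the trailing bunch must be injected with submicron accuracy.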

    If a second “trailing” electron bunch follows the first “drive” pulse, the electrons in this trailing bunch can gain energy from the wake in much the same way an electron bunch is accelerated by the radio-frequency wave in a conventional accelerator. If there are enough electrons in the trailing bunch, they can absorb sufficient energy from the wake to dampen the electric field. Then all the electrons in the trailing bunch see a constant accelerating field and gain energy at the same rate, thereby reducing the energy spread of the beam.

    The main advantage of a plasma accelerator over other schemes is that electric fields in a plasma wake can easily be 1,000 times stronger than those in traditional radio-frequency cavities. Plus, a very significant fraction of the energy that the driver beam transfers to the wake can be extracted by the trailing bunch. These effects make a plasma wakefield-based collider potentially both more compact and cheaper than conventional colliders.
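
    Here is a minimal sketch of where that factor of 1,000 comes from, using the standard cold wave-breaking-field scaling and an assumed plasma density; neither the scaling constant nor the density is taken from the article.

```python
import math

# Cold wave-breaking field of a plasma:  E0 [V/m] ~ 96 * sqrt(n0 [cm^-3]).
n0 = 1.0e17                              # assumed plasma density, electrons per cm^3
plasma_gradient = 96.0 * math.sqrt(n0)   # V/m, roughly 3e10 (30 GV/m)
rf_gradient = 30.0e6                     # V/m, mid-range of the 20-50 MV/m quoted earlier

print(f"Plasma wave-breaking field: {plasma_gradient / 1e9:.0f} GV/m")
print(f"Typical copper RF gradient: {rf_gradient / 1e6:.0f} MV/m")
print(f"Ratio: {plasma_gradient / rf_gradient:.0f}x")

# Length of one 0.5-TeV collider arm at each gradient (ignoring staging overhead):
for name, g in [("plasma", plasma_gradient), ("RF", rf_gradient)]:
    print(f"0.5-TeV arm at the {name} gradient: {0.5e12 / g / 1000:.2f} km")
```

    Tens of meters of plasma versus tens of kilometers of copper cavities is the size saving the rest of this article is chasing.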

    The Future of Plasma

    Both laser- and electron-driven plasma wakefield accelerators have made tremendous progress in the past two decades. My own team at U.C.L.A. has carried out prototype experiments with SLAC National Accelerator Laboratory physicists at their Facility for Advanced Accelerator Experimental Tests (FACET) in Menlo Park, Calif.

    We injected both drive and trailing electron bunches with an initial energy of 20 GeV and found that the trailing electrons gained up to 9 GeV after traveling through a 1.3-meter-long plasma. We also achieved a gain of 4 GeV in a positron bunch using just a one-meter-long plasma in a proof-of-concept experiment. Several other labs around the world have used laser-driven wakes to produce multi-GeV energy gains in electron bunches.

    Plasma accelerator scientists’ ultimate goal is to realize a linear accelerator that collides tightly focused electron and positron, or electron and electron, beams with a total energy exceeding 1 TeV. To accomplish this feat, we would likely need to connect around 50 individual plasma accelerator stages in series, with each stage adding an energy of 10 GeV.

    Yet aligning and synchronizing the drive and the trailing beams through so many plasma accelerator stages to collide with the desired accuracy presents a huge challenge. The typical radius of the wake is less than one millimeter, and scientists must inject the trailing electron bunch with submicron accuracy. They must synchronize timing between the drive pulse and the trailing beam to less than a hundredth of a trillionth of one second. Any misalignment would lead to a degradation of the beam quality and a loss of energy as well as charge caused by oscillation of the electrons about the plasma wake axis. This loss shows up in the form of hard x-ray emission, known as betatron emission, and places a finite limit on how much energy we can obtain from a plasma accelerator.

    Other technical hurdles also stand in the way of immediately turning this idea into a collider. For instance, the primary figure of merit for a particle collider is the luminosity—basically a measure of how many particles you can squeeze through a given space in a given time. The luminosity multiplied by the cross section—or the chances that two particles will collide—tells you how many collisions of a particular kind per second you are likely to observe at a given energy. The desired luminosity for a 1-TeV electron-positron linear collider is 10^34 cm^−2 s^−1. Achieving this luminosity would require the colliding beams to have an average power of 20 megawatts each—10^10 particles per bunch at a repetition rate of 10 kilohertz and a beam size at the collision point of tens of billionths of a meter. To illustrate how difficult this is, let us focus on the average power requirement. Even if you could transfer energy from the drive beam to the accelerating beam with 50 percent efficiency, 20 megawatts of power would be left behind in the two thin plasma columns. Ideally we could partially recover this power, but it is far from a straightforward task.
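
    As an order-of-magnitude check on those numbers, here is a minimal sketch using the textbook geometric-luminosity and beam-power relations; the spot sizes and the 0.5 TeV per beam are our assumptions, while the bunch charge and repetition rate come from the text above.

```python
import math

# Geometric luminosity of a linear collider:  L = f * N^2 / (4 * pi * sigma_x * sigma_y)
# Average beam power:                         P = f * N * E_beam
N = 1.0e10           # particles per bunch (from the text)
f = 1.0e4            # bunch collision rate, Hz (10 kHz, from the text)
sigma_x = 30e-7      # assumed horizontal spot size, cm (~30 nanometers)
sigma_y = 30e-7      # assumed vertical spot size, cm (~30 nanometers)
E_beam_eV = 0.5e12   # 0.5 TeV per beam for a 1-TeV collider

lum = f * N**2 / (4.0 * math.pi * sigma_x * sigma_y)   # cm^-2 s^-1
power_w = f * N * E_beam_eV * 1.602e-19                # watts
print(f"Luminosity ~ {lum:.1e} cm^-2 s^-1")            # close to the 1e34 target
print(f"Average beam power ~ {power_w / 1e6:.0f} MW")  # same order as the ~20 MW quoted
```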

    And although scientists have made substantial progress on the technology needed for the electron arm of a plasma-based linear collider, positron acceleration is still in its infancy. A decade of concerted basic science research will most likely be needed to bring positrons to the same point we have reached with electrons. Alternatively, we could collide electrons with electrons or even with protons, where one or both electron arms are based on a plasma wakefield accelerator. Another concept that scientists are exploring at CERN is modulating a many-centimeters-long proton bunch by sending it through a plasma column and using the accompanying plasma wake to accelerate an electron bunch.

    The future for plasma-based accelerators is uncertain but exciting. It seems possible that within a decade we could build 10-GeV plasma accelerators on a large tabletop for various scientific and commercial applications using existing laser and electron beam facilities. But this achievement would still put us a long way from realizing a plasma-based linear collider for new physics discoveries. Even though we have made spectacular experimental progress in plasma accelerator research, the beam parameters achieved to date are not yet what we would need for just the electron arm of a future electron-positron collider that operates at the energy frontier. Yet with the prospects for the International Linear Collider and the Future Circular Collider uncertain, our best bet may be to persist with perfecting an exotic technology that offers size and cost savings. Developing plasma technology is a scientific and engineering grand challenge for this century, and it offers researchers wonderful opportunities for taking risks, being creative, solving fascinating problems—and the tantalizing possibility of discovering new fundamental pieces of nature.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American (US), the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 11:27 am on July 4, 2021 Permalink | Reply
    Tags: "Black Holes; Quantum Entanglement; and the No-Go Theorem", Machine learning-AI, , , Scientific American (US)   

    From Scientific American (US) : “Black Holes; Quantum Entanglement; and the No-Go Theorem” 

    From Scientific American (US)

    July 4, 2021
    Zoë Holmes
    Andrew Sornborger

    Credit: Getty Images.

    Suppose someone—let’s call her Alice—has a book of secrets she wants to destroy so she tosses it into a handy black hole. Given that black holes are nature’s fastest scramblers, acting like giant garbage shredders, Alice’s secrets must be pretty safe, right?

    Now suppose her nemesis, Bob, has a quantum computer that’s entangled with the black hole. (In entangled quantum systems, actions performed on one particle similarly affect their entangled partners, regardless of distance or even if some disappear into a black hole.)

    A famous thought experiment by Patrick Hayden and John Preskill says Bob can observe a few particles of light that leak from the edges of a black hole. Then Bob can run those photons as qubits (the basic processing unit of quantum computing) through the gates of his quantum computer to reveal the particular physics that jumbled Alice’s text. From that, he can reconstruct the book.

    But not so fast.

    Our recent work on quantum machine learning suggests Alice’s book might be gone forever, after all.

    QUANTUM COMPUTERS TO STUDY QUANTUM MECHANICS

    Alice might never have the chance to hide her secrets in a black hole. Still, our new no-go theorem about information scrambling has real-world application to understanding random and chaotic systems in the rapidly expanding fields of quantum machine learning, quantum thermodynamics, and quantum information science.

    Richard Feynman, one of the great physicists of the 20th century, launched the field of quantum computing in a 1981 speech, when he proposed developing quantum computers as the natural platform to simulate quantum systems. They are notoriously difficult to study otherwise.

    Our team at DOE’s Los Alamos National Laboratory (US), along with other collaborators, has focused on studying algorithms for quantum computers and, in particular, machine-learning algorithms—what some like to call artificial intelligence. The research sheds light on what sorts of algorithms will do real work on existing noisy, intermediate-scale quantum computers and on unresolved questions in quantum mechanics at large.

    In particular, we have been studying the training of variational quantum algorithms. They set up a problem-solving landscape where the peaks represent the high-energy (undesirable) points of the system, or problem, and the valleys are the low-energy (desirable) values. To find the solution, the algorithm works its way through a mathematical landscape, examining its features one at a time. The answer lies in the deepest valley.
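
    As a toy illustration of that landscape picture (our own sketch, not the authors' code), here is a one-parameter variational "circuit" whose cost landscape is walked down into its lowest valley by gradient descent.

```python
import numpy as np

# A one-parameter "variational circuit" RY(theta) applied to |0>, with the cost
# defined as the energy <Z> = cos(theta).  Gradient descent finds the deepest
# valley of this one-dimensional landscape, which is the spirit of training a
# variational quantum algorithm.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])      # Pauli-Z plays the role of the Hamiltonian

def state(theta):
    """|psi(theta)> = RY(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def cost(theta):
    psi = state(theta)
    return float(psi @ Z @ psi)              # <psi|Z|psi>

theta, step_size = 0.3, 0.4                  # assumed starting point and learning rate
for _ in range(60):
    grad = (cost(theta + 1e-4) - cost(theta - 1e-4)) / 2e-4   # finite-difference gradient
    theta -= step_size * grad
print(f"theta = {theta:.3f}, cost = {cost(theta):.4f}")        # theta -> pi, cost -> -1
```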

    ENTANGLEMENT LEADS TO SCRAMBLING

    We wondered if we could apply quantum machine learning to understand scrambling. This quantum phenomenon happens when entanglement grows in a system made of many particles or atoms. Think of the initial conditions of this system as a kind of information—Alice’s book, for instance. As the entanglement among particles within the quantum system grows, the information spreads widely; this scrambling of information is key to understanding quantum chaos, quantum information science, random circuits and a range of other topics.

    A black hole is the ultimate scrambler. By exploring it with a variational quantum algorithm on a theoretical quantum computer entangled with the black hole, we could probe the scalability and applicability of quantum machine learning. We could also learn something new about quantum systems generally. Our idea was to use a variational quantum algorithm that would exploit the leaked photons to learn about the dynamics of the black hole. The approach would be an optimization procedure—again, searching through the mathematical landscape to find the lowest point.

    If we found it, we would reveal the dynamics inside the black hole. Bob could use that information to crack the scrambler’s code and reconstruct Alice’s book.

    Now here’s the rub. The Hayden-Preskill thought experiment assumes Bob can determine the black hole dynamics that are scrambling the information. Instead, we found that the very nature of scrambling prevents Bob from learning those dynamics.

    STALLED OUT ON A BARREN PLATEAU

    Here’s why: the algorithm stalled out on a barren plateau [Nature Communications], which, in machine learning, is as grim as it sounds. During machine-learning training, a barren plateau represents a problem-solving space that is entirely flat as far as the algorithm can see. In this featureless landscape, the algorithm can’t find the downward slope; there’s no clear path to the energy minimum. The algorithm just spins its wheels, unable to learn anything new. It fails to find the solution.
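
    Here is a toy numerical illustration of the concentration effect behind barren plateaus, using Haar-random states as an assumed stand-in for the output of deep random scrambling circuits; it is our sketch, not the paper's calculation.

```python
import numpy as np

# For Haar-random states, a local observable such as Z on the first qubit
# concentrates exponentially around zero as qubits are added, so the cost
# landscape looks flat to any optimizer.
rng = np.random.default_rng(0)

def random_state(n_qubits):
    """Sample an (approximately Haar-) random pure state on n_qubits qubits."""
    dim = 2 ** n_qubits
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def z_first_qubit(psi, n_qubits):
    """<psi| Z x I...I |psi>: +1 on the first half of the amplitudes, -1 on the rest."""
    half = 2 ** (n_qubits - 1)
    probs = np.abs(psi) ** 2
    return probs[:half].sum() - probs[half:].sum()

for n in range(2, 12, 2):
    samples = [z_first_qubit(random_state(n), n) for _ in range(500)]
    print(f"{n:2d} qubits: variance of <Z_1> = {np.var(samples):.1e}")   # shrinks ~ 1/2^n
```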

    Our resulting no-go theorem says that any quantum machine-learning strategy will encounter the dreaded barren plateau when applied to an unknown scrambling process.

    The good news is, most physical processes are not as complex as black holes, and we often will have prior knowledge of their dynamics, so the no-go theorem doesn’t condemn quantum machine learning. We just need to carefully pick the problems we apply it to. And we’re not likely to need quantum machine learning to peer inside a black hole to learn about Alice’s book—or anything else—anytime soon.

    So, Alice can rest assured that her secrets are safe, after all.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American (US), the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     