Tagged: Quanta Magazine

  • richardmitnick 10:08 am on January 31, 2023 Permalink | Reply
    Tags: "Astronomers Say They Have Spotted the Universe’s First Stars", Ginormous balls of hydrogen and helium sculpted from the universe’s primordial gas., Quanta Magazine

    From “Quanta Magazine” : “Astronomers Say They Have Spotted the Universe’s First Stars” 

    From “Quanta Magazine”

    1.30.23
    Jonathan O’Callaghan

    The largest stars in the present-day universe are a couple hundred times more massive than our sun. The first stars could have had as much as 100,000 times the sun’s mass. Credit: Merrill Sherman/Quanta Magazine.

    A group of astronomers poring over data from the James Webb Space Telescope (JWST) has glimpsed light from ionized helium in a distant galaxy, which could indicate the presence of the universe’s very first generation of stars.

    These long-sought, inaptly named “Population III” stars would have been ginormous balls of hydrogen and helium sculpted from the universe’s primordial gas. Theorists started imagining these first fireballs in the 1970s, hypothesizing that, after short lifetimes, they exploded as supernovas, forging heavier elements and spewing them into the cosmos. That star stuff later gave rise to Population II stars more abundant in heavy elements, then even richer Population I stars like our sun, as well as planets, asteroids, comets and eventually life itself.

    “We exist, therefore we know there must have been a first generation of stars,” said Rebecca Bowler, an astronomer at the University of Manchester in the United Kingdom.

    Now Xin Wang, an astronomer at the Chinese Academy of Sciences in Beijing, and his colleagues think they’ve found them. “It’s really surreal,” Wang said. Confirmation is still needed; the team’s paper has been submitted to Nature [below].

    Even if the researchers are wrong, a more convincing detection of the first stars may not be far off. JWST, which is transforming vast swaths of astronomy, is thought capable of peering far enough away in space and time to see them. Already, the gigantic floating telescope has detected distant galaxies whose unusual brightness suggests they may contain Population III stars. And other research groups vying to discover the stars with JWST are analyzing their own data now. “This is absolutely one of the hottest questions going,” said Mike Norman, a physicist at the University of California-San Diego who studies the stars in computer simulations.

    A definitive discovery would allow astronomers to start probing the stars’ size and appearance, when they existed, and how, in the primordial darkness, they suddenly lit up.

    “It’s really one of the most fundamental changes in the history of the universe,” Bowler said.

    Population III

    About 400,000 years after the Big Bang, electrons, protons and neutrons settled down enough to combine into hydrogen and helium atoms. As the temperature kept dropping, dark matter gradually clumped up, pulling the atoms with it. Inside the clumps, hydrogen and helium were squashed by gravity, condensing into enormous balls of gas until, once the balls were dense enough, nuclear fusion suddenly ignited in their centers. The first stars were born.

    The German astronomer Walter Baade categorized the stars in our galaxy into types I and II in 1944. The former includes our sun and other metal-rich stars; the latter contains older stars made of lighter elements. The idea of Population III stars entered the literature decades later. In a 1984 paper that raised their profile, the British astrophysicist Bernard Carr described the vital role this original breed of star may have played in the early universe. “Their heat or explosions could have reionized the universe,” Carr and his colleagues wrote, “… and their heavy-element yield could have produced a burst of pregalactic enrichment,” giving rise to later stars richer in heavier elements.

    Population III stars at the heavier end of the possible mass range, so-called supermassive stars, would have been relatively cool, red and bloated, with sizes that could encompass almost our entire solar system. Denser, more modestly sized variants would have shone blue hot, with surface temperatures of some 50,000 degrees Celsius, compared to just 5,500 degrees for our sun.

    In 2001, computer simulations led by Norman explained how such large stars could form [Science (below)]. In the present universe, clouds of gas fragment into lots of small stars. But the simulations showed that gas clouds in the early universe, being much hotter than modern clouds, couldn’t as easily condense and were therefore less efficient at star formation. Instead, entire clouds would collapse into a single, giant star.

    Their immense proportions meant the stars were short-lived, lasting a few million years at most. (More massive stars burn through their available fuel more quickly.) As such, Population III stars wouldn’t have lasted long in the history of the universe — perhaps a few hundred million years as the last pockets of primordial gas dissipated.
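
    As a rough guide to why heavier stars die sooner (a standard back-of-the-envelope scaling, not spelled out in the article), a star's lifetime goes roughly as its fuel supply divided by the rate at which it burns that fuel:

    t_\star \sim t_\odot \,\frac{M/M_\odot}{L/L_\odot}, \qquad L \propto M^{3.5} \;\Rightarrow\; t_\star \propto M^{-2.5},

    while for the very heaviest stars the luminosity saturates near the Eddington limit (L \propto M), so lifetimes bottom out at a few million years regardless of mass, consistent with the "few million years at most" quoted above.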

    There are many uncertainties. How massive did these stars really become? How late into the universe did they exist? And how abundant were they in the early universe? “They’re completely different stars to the stars in our own galaxy,” Bowler said. “They’re just such interesting objects.”

    Rebecca Bowler, an astronomer at the University of Manchester in the United Kingdom, studies the formation and evolution of galaxies in the early universe. Credit: Anthony Holloway/University of Manchester.

    Because they are so far away and existed so briefly, finding evidence for them has been a challenge. However, in 1999, astronomers at the University of Colorado, Boulder predicted that the stars should produce a telltale signature [The Astrophysical Journal (below)]: specific frequencies of light emitted by helium II, or helium atoms that are missing an electron, when each atom’s remaining electron moves between energy levels. “The helium emission is not actually originating from within the stars themselves,” explained James Trussler, an astronomer at the University of Manchester; rather, it was created when energetic photons from the stars’ hot surfaces plowed into gas surrounding the star.
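
    The article does not name a specific line, but the signature most often discussed for such searches is the He II λ1640 recombination line. As a rough illustration (a minimal sketch, not code from any of the studies), its wavelength follows from the hydrogen-like energy levels of a one-electron ion with nuclear charge Z = 2:

    # Wavelength of the He II n = 3 -> 2 transition (the "1640 angstrom" line),
    # from the one-electron energy levels E_n = -13.6 eV * Z^2 / n^2 with Z = 2.
    RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, in eV
    HC_EV_NM = 1239.84     # Planck constant times speed of light, in eV * nm

    def helium_ii_line_nm(n_upper: int, n_lower: int, z: int = 2) -> float:
        """Emitted wavelength, in nanometers, for a hydrogen-like ion of charge z."""
        delta_e = RYDBERG_EV * z**2 * (1.0 / n_lower**2 - 1.0 / n_upper**2)
        return HC_EV_NM / delta_e

    print(helium_ii_line_nm(3, 2))   # about 164 nm, i.e. 1640 angstroms, in the ultraviolet

    Powering that line requires photons above roughly 54 eV, energetic enough to fully ionize helium, which is why only extremely hot sources are expected to produce it strongly.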

    “It’s a relatively simple prediction,” said Daniel Schaerer of the University of Geneva, who expanded on the idea in 2002 [Astronomy & Astrophysics]. The hunt was on.

    Finding the First Stars

    In 2015, Schaerer and his colleagues thought they might have found something [The Astrophysical Journal (below)]. They detected a possible hint of a helium II signature in a distant, primitive galaxy that might have been linked to a group of Population III stars. Seen as it appeared 800 million years after the Big Bang, the galaxy looked as if it might contain the first evidence of the first stars in the universe.

    Later work led by Bowler disputed the findings [MNRAS (below)]. “We found evidence for oxygen emission from the source. That ruled out a pure Population III scenario,” she said. An independent group then failed to detect the helium II line seen by the initial team [PASJ]. “It wasn’t there,” Bowler said.

    Astronomers pinned their hopes on JWST [MNRAS], which launched in December 2021. The telescope, with its enormous mirror and unprecedented sensitivity to infrared light, can peer more easily into the early universe than any telescope before it. (Because light takes time to travel here, the telescope sees faint, faraway objects as they appeared long ago.) The telescope can also do spectroscopy, breaking up light into its component wavelengths, which allows it to look for the helium II hallmark of Population III stars.

    Wang’s team analyzed spectroscopy data for more than 2,000 of JWST’s targets. One is a distant galaxy seen as it appeared just 620 million years after the Big Bang. According to the researchers, the galaxy is split into two pieces. Their analysis showed that one half seems to have the key signature of helium II mixed with light from other elements, potentially pointing to a hybrid population of thousands of Population III and other stars. Spectroscopy of the second half of the galaxy has yet to be done, but its brightness hints at a more Population III-rich environment.

    “We are trying to apply for observing time for JWST in the next cycle to cover the entire galaxy,” Wang said, in order to “have a shot of confirming such objects.”

    The galaxy is a “head-scratcher,” according to Norman. If the helium II results stand up to scrutiny, he said, “one possibility is a cluster of Population III stars.” However, he’s unsure if Population III stars and later stars could mix together so readily.

    Daniel Whalen, an astrophysicist at the University of Portsmouth, was similarly cautious. “It definitely could be evidence of a mixture of Population III and Population II stars in one galaxy,” he said. However, although this would be “the first direct evidence” of the universe’s first stars, Whalen said, “it’s not clean evidence.” Other piping hot cosmic objects can produce a similar helium II signature, including scorching disks of material that swirl around black holes.

    Wang thinks his team can rule out a black hole as the source because they did not detect specific oxygen, nitrogen or ionized carbon signatures that would be expected in that case. However, the work still awaits peer review, and even then, follow-up observations will need to confirm its potential findings.

    Hot on the Trail

    Other groups using JWST are also hunting for the first stars.

    Besides looking for helium II, another search method, proposed by the astronomer Rogier Windhorst of Arizona State University and colleagues in 2018 [The Astrophysical Journal Supplement Series (below)], is to use the gravity of giant clusters of galaxies to see individual stars in the early universe. Using a massive object like a cluster to warp light and magnify more distant objects (a technique known as gravitational lensing) is a common way astronomers obtain views of distant galaxies. Windhorst believed that even individual Population III stars approaching the edge of a heavy cluster “could in principle undergo nearly infinite magnification” and pop into view, he said.

    Windhorst leads a JWST program that is attempting the technique. “I’m pretty confident that in a year or two we will have seen some,” he said. “We already have some candidates.” Similarly, Eros Vanzella, an astronomer at the National Institute for Astrophysics in Italy, is leading a program that’s studying a clump of 10 or 20 candidate Population III stars using gravitational lensing. “We are just playing with the data now,” he said.

    And there remains the tantalizing possibility that some of the unexpectedly bright galaxies already seen by JWST in the early universe could owe their brightness to massive Population III stars. “These are exactly the epochs where we expect the first stars are forming,” Vanzella said. “I hope … that in the next weeks or months, the first stars will be detected.”

    Astronomy & Astrophysics
    The Astrophysical Journal 1999
    See the above science paper for instructive material with images.
    Science 2001
    Nature
    The Astrophysical Journal 2015
    See the above science paper for instructive material with images.
    MNRAS 2017
    See the above science paper for instructive material with images.
    PASJ
    See the above science paper for instructive material with images.
    MNRAS 2022
    See the above science paper for instructive material with images.
    The Astrophysical Journal Supplement Series
    See the above science paper for instructive material with images.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:11 am on January 29, 2023 Permalink | Reply
    Tags: "How Quantum Physicists ‘Flipped Time’ (and How They Didn’t)", "Time’s arrow", Before being measured a particle acts more like a wave., Physicists have coaxed particles of light into undergoing opposite transformations simultaneously like a human turning into a werewolf as the werewolf turns into a human., Quanta Magazine, The essence of quantum strangeness, The perplexing phenomenon could lead to new kinds of quantum technology.

    From “Quanta Magazine” : “How Quantum Physicists ‘Flipped Time’ (and How They Didn’t)” 

    From “Quanta Magazine”

    1.27.23
    Charlie Wood


    The quantum time flip circuit is like a metronome swinging both ways at once. Kristina Armitage/Quanta Magazine.

    Physicists have coaxed particles of light into undergoing opposite transformations simultaneously, like a human turning into a werewolf as the werewolf turns into a human. In carefully engineered circuits, the photons act as if time were flowing in a quantum combination of forward and backward.

    “For the first time ever, we kind of have a time-traveling machine going in both directions,” said Sonja Franke-Arnold, a quantum physicist at the University of Glasgow in Scotland who was not involved in the research.

    Regrettably for science fiction fans, the devices have nothing in common with a 1982 DeLorean. Throughout the experiments, which were conducted by two independent teams in China and Austria, laboratory clocks continued to tick steadily forward. Only the photons flitting through the circuitry experienced temporal shenanigans. And even for the photons, researchers debate whether the flipping of “time’s arrow” is real or simulated.

    Either way, the perplexing phenomenon could lead to new kinds of quantum technology.

    “You could conceive of circuits in which your information could flow both ways,” said Giulia Rubino, a researcher at the University of Bristol.

    Anything Anytime All at Once

    Physicists first realized a decade ago that the strange rules of quantum mechanics topple commonsense notions of time.

    The essence of quantum strangeness is this: When you look for a particle, you’ll always detect it in a single, pointlike location. But before being measured, a particle acts more like a wave; it has a “wave function” that spreads out and ripples over multiple routes. In this undetermined state, a particle exists in a quantum blend of possible locations known as a superposition.

    In a paper published in 2013, Giulio Chiribella, a physicist now at the University of Hong Kong, and co-authors proposed a circuit that would put events into a superposition of temporal orders, going a step beyond the superposition of locations in space. Four years later, Rubino and her colleagues directly experimentally demonstrated the idea [Science Advances (below)]. They sent a photon down a superposition of two paths: one in which it experienced event A and then event B, and another where it experienced B then A. In some sense, each event seemed to cause the other, a phenomenon that came to be called “indefinite causality”.
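
    As a hedged illustration of that primitive (just the linear algebra, not the photonic setup used in the experiment), the "quantum switch" can be written as a control qubit that routes a target state through A-then-B in one branch and B-then-A in the other:

    import numpy as np

    # Toy quantum switch: a control prepared in (|0> + |1>)/sqrt(2) puts the target
    # through A-then-B and B-then-A in superposition. Because A and B here do not
    # commute, measuring the control in the +/- basis shows interference between
    # the two orderings.
    def rot(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    A = rot(np.pi / 3)                          # event A: a polarization rotation
    B = np.array([[1.0, 0.0], [0.0, -1.0]])     # event B: a flip (does not commute with A)
    psi = np.array([1.0, 0.0])                  # target photon state

    branch_ab = B @ A @ psi                     # control = 0: A happens first, then B
    branch_ba = A @ B @ psi                     # control = 1: B happens first, then A

    # Project the control onto |+> and |->; the two ordering amplitudes add or subtract.
    p_plus = 0.25 * np.linalg.norm(branch_ab + branch_ba) ** 2
    p_minus = 0.25 * np.linalg.norm(branch_ab - branch_ba) ** 2
    print(p_plus, p_minus)                      # the two probabilities sum to 1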

    Not content to mess merely with the order of events while time marched onward, Chiribella and a colleague, Zixuan Liu, next took aim at the marching direction, or arrow, of time itself. They sought a quantum apparatus in which time entered a superposition of flowing from the past to the future and vice versa — an indefinite arrow of time.

    To do this, Chiribella and Liu realized they needed a system that could undergo opposite changes, like a metronome whose arm can swing left or right. They imagined putting such a system in a superposition, akin to a musician simultaneously flicking a quantum metronome rightward and leftward. They described a scheme for setting up such a system in 2020.

    Optics wizards immediately started constructing dueling arrows of time in the lab. Last fall, two teams declared success.

    A Two-Timing Game

    Chiribella and Liu had devised a game at which only a quantum two-timer could excel. Playing the game with light involves firing photons through two crystal gadgets, A and B. Passing forward through a gadget rotates a photon’s polarization by an amount that depends on the gadget’s settings. Passing backward through the gadget rotates the polarization in precisely the opposite way.

    Before each round of the game, a referee secretly sets the gadgets in one of two ways: The path forward through A, then backward through B, will either shift a photon’s wave function relative to the time-reversed path (backward through A, then forward through B), or it won’t. The player must figure out which choice the referee made. After the player arranges the gadgets and other optical elements however they want, they send a photon through the maze, perhaps splitting it into a superposition of two paths using a half-silvered mirror. The photon ends up at one of two detectors. If the player has set up their maze in a sufficiently clever way, the click of the detector that has the photon will reveal the referee’s choice.

    When the player sets up the circuit so that the photon moves in only one direction through each gadget, then even if A and B are in an indefinite causal order, the detector’s click will match the secret gadget settings at most about 90% of the time. Only when the photon experiences a superposition that takes it forward and backward through both gadgets — a tactic dubbed the “quantum time flip” — can the player theoretically win every round.
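
    Here is a stripped-down numpy sketch of that winning strategy's algebra (a simplified stand-in under my own conventions, not the optical circuits used in the experiments): the polarization is acted on by the matrix for the forward-A-then-backward-B route in one branch and by its transpose, the time-reversed route, in the other. Whether the two routes agree or differ by a sign is the referee's hidden choice, and the interference reads it out every time.

    import numpy as np

    rng = np.random.default_rng(0)

    def rot(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    def play_round(shifted: bool) -> bool:
        """One round of a toy version of the guessing game.

        The referee hides a choice in how gadget B is related to gadget A: the
        forward-A-then-backward-B route either matches the time-reversed route
        (no relative shift) or picks up a minus sign relative to it (shifted).
        The player sends the photon through both routes in superposition and
        reads out which detector ("+" or "-") clicks.
        """
        U_A = rot(rng.uniform(0.0, 2.0 * np.pi))
        hidden = np.array([[0.0, -1.0], [1.0, 0.0]]) if shifted else np.diag([1.0, -1.0])
        U_B = U_A @ hidden                 # builds in the referee's secret relation

        psi = np.array([1.0, 0.0])         # input polarization
        branch_fwd = U_B.T @ U_A @ psi     # forward through A, then backward through B
        branch_rev = U_A.T @ U_B @ psi     # the time-reversed route
        p_minus = 0.25 * np.linalg.norm(branch_fwd - branch_rev) ** 2

        return (p_minus > 0.5) == shifted  # the "-" detector fires only in the shifted case

    print(all(play_round(bool(rng.integers(2))) for _ in range(1000)))  # True: 100% wins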

    Merrill Sherman/Quanta Magazine

    Last year, a team in Hefei, China, advised by Chiribella and one in Vienna advised by the physicist Časlav Brukner set up quantum time-flip circuits. Over 1 million rounds, the Vienna team guessed correctly 99.45% of the time. Chiribella’s group won 99.6% of its rounds. Both teams shattered the theoretical 90% limit, proving that their photons experienced a superposition of two opposing transformations and hence an indefinite arrow of time.

    Interpreting the Time Flip

    While the researchers have executed and named the quantum time flip, they’re not in perfect agreement regarding which words best capture what they’ve done.

    In Chiribella’s eyes, the experiments have simulated a flipping of time’s arrow. Actually flipping it would require arranging the fabric of space-time itself into a superposition of two geometries where time points in different directions. “Obviously, the experiment is not implementing the inversion of the arrow of time,” he said.

    Brukner, meanwhile, feels that the circuits take a modest step beyond simulation. He points out that the measurable properties of the photons change exactly as they would if they passed through a true superposition of two space-time geometries. And in the quantum world, there is no reality beyond what can be measured. “From the state itself, there is no difference between the simulation and the real thing,” he said.

    Granted, he admits, the circuit can time-flip only photons undergoing polarization changes; if space-time were truly in a superposition, dueling time directions would affect everything.

    Two-Arrow Circuits

    Whatever their philosophical inclinations, physicists hope that the ability to design quantum circuits that flow two ways at once might enable new devices for quantum computing, communication and metrology.

    “This allows you to do more things than just implementing the operations in one order or another,” said Cyril Branciard, a quantum information theorist at the Néel Institute in France.

    Some researchers speculate that the time-travel flavor of the quantum time flip might enable a future quantum “undo” function. Others anticipate that circuits operating in two directions at once could allow quantum machines to run more efficiently. “You could use this for games where you want to reduce the so-called query complexity,” Rubino said, referring to the number of steps it takes to carry out some task.

    Such practical applications are far from assured. While the time-flip circuits broke a theoretical performance limit in Chiribella and Liu’s guessing game, that was a highly contrived task dreamt up only to highlight their advantage over one-way circuits.

    But bizarre, seemingly niche quantum phenomena have a knack for proving useful. The eminent physicist Anton Zeilinger used to believe that quantum entanglement — a link between separated particles — wasn’t good for anything. Today, entanglement threads together nodes in nascent quantum networks and qubits in prototype quantum computers, and Zeilinger’s work on the phenomenon won him a share of the 2022 Nobel Prize in Physics. For the flippable nature of quantum time, Franke-Arnold said, “it’s very early days.”

    a paper published in 2013
    Science Advances 2017
    described a scheme 2022

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:47 pm on January 24, 2023 Permalink | Reply
    Tags: "Mathematicians Find an Infinity of Possible Black Hole Shapes", In three-dimensional space the surface of a black hole must be a sphere. But a new result shows that in higher dimensions an infinite number of configurations are possible., Quanta Magazine

    From “Quanta Magazine” : “Mathematicians Find an Infinity of Possible Black Hole Shapes” 

    From “Quanta Magazine”

    1.24.23
    Steve Nadis

    If we were to discover black holes with nonspherical shapes, it would be a sign that our universe has more than three dimensions of space. Kristina Armitage/Quanta Magazine.

    In three-dimensional space, the surface of a black hole must be a sphere. But a new result shows that in higher dimensions, an infinite number of configurations are possible.

    The cosmos seems to have a preference for things that are round. Planets and stars tend to be spheres because gravity pulls clouds of gas and dust toward the center of mass. The same holds for black holes — or, to be more precise, the event horizons of black holes — which must, according to theory, be spherically shaped in a universe with three dimensions of space and one of time.

    But do the same restrictions apply if our universe has higher dimensions, as is sometimes postulated — dimensions we cannot see but whose effects are still palpable? In those settings, are other black hole shapes possible?

    The answer to the latter question, mathematics tells us, is yes. Over the past two decades, researchers have found occasional exceptions to the rule that confines black holes to a spherical shape.

    Now a new paper [below] goes much further, showing in a sweeping mathematical proof that an infinite number of shapes are possible in dimensions five and above. The paper demonstrates that Albert Einstein’s equations of General Relativity can produce a great variety of exotic-looking, higher-dimensional black holes.

    The new work is purely theoretical. It does not tell us whether such black holes exist in nature. But if we were to somehow detect such oddly shaped black holes — perhaps as the microscopic products of collisions at a particle collider — “that would automatically show that our universe is higher-dimensional,” said Marcus Khuri, a geometer at Stony Brook University and co-author of the new work along with Jordan Rainone, a recent Stony Brook math Ph.D. “So it’s now a matter of waiting to see if our experiments can detect any.”

    Black Hole Doughnut

    As with so many stories about black holes, this one begins with Stephen Hawking — specifically, with his 1972 proof that the surface of a black hole, at a fixed moment in time, must be a two-dimensional sphere. (While a black hole is a three-dimensional object, its surface has just two spatial dimensions.)

    Little thought was given to extending Hawking’s theorem until the 1980s and ’90s, when enthusiasm grew for string theory — an idea that requires the existence of perhaps 10 or 11 dimensions. Physicists and mathematicians then started to give serious consideration to what these extra dimensions might imply for black hole topology.

    Black holes are some of the most perplexing predictions of Einstein’s equations — 10 linked nonlinear differential equations that are incredibly challenging to deal with. In general, they can only be explicitly solved under highly symmetrical, and hence simplified, circumstances.

    In 2002, three decades after Hawking’s result, the physicists Roberto Emparan and Harvey Reall — now at the University of Barcelona and the University of Cambridge, respectively — found a highly symmetrical black hole solution to the Einstein equations in five dimensions (four of space plus one of time). Emparan and Reall called this object a “black ring” [Physical Review Letters (below)] — a three-dimensional surface with the general contours of a doughnut.

    It’s difficult to picture a three-dimensional surface in a five-dimensional space, so let’s instead imagine an ordinary circle. For every point on that circle, we can substitute a two-dimensional sphere. The result of this combination of a circle and spheres is a three-dimensional object that might be thought of as a solid, lumpy doughnut.
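
    In the topologists' shorthand (standard background, not used in the article itself), the horizon shape just described, a circle with a two-dimensional sphere attached at every point, is the product

    \mathcal{H} \cong S^1 \times S^2,

    in contrast to the S^3 horizon of an ordinary spherical black hole in five dimensions.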

    In principle, such doughnutlike black holes could form if they were spinning at just the right speed. “If they spin too fast, they would break apart, and if they don’t spin fast enough, they would go back to being a ball,” Rainone said. “Emparan and Reall found a sweet spot: Their ring was spinning just fast enough to stay as a doughnut.”

    Learning about that result gave hope to Rainone, a topologist, who said, “Our universe would be a boring place if every planet, star and black hole resembled a ball.”

    A New Focus

    In 2006, the non-ball black hole universe really began to flower. That year, Greg Galloway of the University of Miami and Richard Schoen of Stanford University generalized Hawking’s theorem to describe all possible shapes that black holes could potentially assume in dimensions beyond four. Included among the allowable shapes: the familiar sphere, the previously demonstrated ring, and a broad class of objects called lens spaces.

    Lens spaces are a particular type of mathematical construction that has long been important in both geometry and topology. “Among all possible shapes the universe could throw at us in three dimensions,” Khuri said, “the sphere is the simplest, and lens spaces are the next-simplest case.”

    Khuri thinks of lens spaces as “folded-up spheres. You are taking a sphere and folding it up in a very complicated way.” To understand how this works, start with a simpler shape — a circle. Divide this circle into upper and lower halves. Then move every point in the bottom half of the circle to the point in the top half that’s diametrically opposite to it. That leaves us with just the upper semicircle and two antipodal points — one at each end of the semicircle. These must be glued to each other, creating a smaller circle with half the circumference of the original.

    Next, move to two dimensions, where things begin to get complicated. Start with a two-dimensional sphere — a hollow ball — and move every point on the bottom half up so that it’s touching the antipodal point on the top half. You’re left with just the top hemisphere. But the points along the equator also have to be “identified” (or attached) with one another, and because of all the crisscrossing required, the resulting surface will become extremely contorted.

    When mathematicians talk about lens spaces, they are usually referring to the three-dimensional variety. Again, let’s start with the simplest example, a solid globe that includes the surface and interior points. Run longitudinal lines down the globe from the north to the south pole. In this case, you have only two lines, which split the globe into two hemispheres (East and West, you might say). You can then identify points on one hemisphere with the antipodal points on the other.

    Merrill Sherman/Quanta Magazine.

    But you can also have many more longitudinal lines and many different ways of connecting the sectors that they define. Mathematicians keep track of these options in a lens space with the notation L(p, q), where p tells you the number of sectors the globe is divided into, while q tells you how those sectors are to be identified with one another. A lens space labeled L(2, 1) indicates two sectors (or hemispheres) with just one way to identify points, which is antipodally.

    If the globe is split into more sectors, there are more ways to knit them together. For example, in an L(4, 3) lens space, there are four sectors, and every upper sector is matched to its lower counterpart three sectors over: upper sector 1 goes to lower sector 4, upper sector 2 goes to lower sector 1, and so forth. “One can think of this [process] as twisting the top to find the correct place on the bottom to glue,” Khuri said. “The amount of twisting is determined by q.” As more twisting becomes necessary, the resulting shapes can get increasingly elaborate.
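
    For readers who want the formal version of this sector-and-twist picture (a standard definition, stated here as background rather than taken from the article), the three-dimensional lens space is a quotient of the three-sphere:

    L(p,q) = S^3/\mathbb{Z}_p, \qquad S^3 = \{(z_1, z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\}, \qquad (z_1, z_2) \sim \left(e^{2\pi i/p}\, z_1,\; e^{2\pi i q/p}\, z_2\right), \quad \gcd(p, q) = 1,

    where p counts the sectors and q encodes the twist used in the gluing.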

    “People sometimes ask me: How do I visualize these things?” said Hari Kunduri, a mathematical physicist at McMaster University. “The answer is, I don’t. We just treat these objects mathematically, which speaks to the power of abstraction. It allows you to work without drawing pictures.”

    All the Black Holes

    In 2014, Kunduri and James Lucietti of the University of Edinburgh proved the existence of a black hole of the L(2, 1) type in five dimensions.

    The Kunduri-Lucietti solution, which they refer to as a “black lens,” has a couple of important features. Their solution describes an “asymptotically flat” space-time, meaning that the curvature of space-time, which would be high in the vicinity of a black hole, approaches zero as one moves toward infinity. This characteristic helps ensure that the results are physically relevant. “It’s not so hard to make a black lens,” Kunduri noted. “The hard part is doing that and making space-time flat at infinity.”

    Just as rotation keeps Emparan and Reall’s black ring from collapsing on itself, the Kunduri-Lucietti black lens must spin as well. But Kunduri and Lucietti also used a “matter” field — in this case, a type of electric charge — to hold their lens together.

    In their December 2022 paper [above], Khuri and Rainone generalized the Kunduri-Lucietti result about as far as one can go. They first proved the existence in five dimensions of black holes with lens topology L(p, q), for any value of p and q greater than or equal to 1 — so long as p is greater than q, and p and q have no prime factors in common.
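
    The admissibility condition on p and q is just coprimality with p larger than q. A quick sketch (illustrative only; the variable names are mine) of which lens-space labels that allows for small p:

    from math import gcd

    # Lens-space labels L(p, q) allowed by the stated condition for small p:
    # q >= 1, p > q, and p and q share no prime factor (gcd = 1).
    admissible = [(p, q) for p in range(2, 8)
                         for q in range(1, p)
                         if gcd(p, q) == 1]
    print(admissible)
    # [(2, 1), (3, 1), (3, 2), (4, 1), (4, 3), (5, 1), (5, 2), (5, 3), (5, 4), ...]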

    Then they went further. They found that they could produce a black hole in the shape of any lens space — any values of p and q (satisfying the same stipulations), in any higher dimension — yielding an infinite number of possible black holes in an infinite number of dimensions. There is one caveat, Khuri pointed out: “When you go to dimensions above five, the lens space is just one piece of the total topology.” The black hole is even more complex than the already visually challenging lens space it contains.

    The Khuri-Rainone black holes can rotate but don’t have to. Their solution also pertains to an asymptotically flat space-time. However, Khuri and Rainone needed a somewhat different kind of matter field — one that consists of particles associated with higher dimensions — to preserve the shape of their black holes and prevent defects or irregularities that would compromise their result. The black lenses they constructed, like the black ring, have two independent rotational symmetries (in five dimensions) to make the Einstein equations easier to solve. “It is a simplifying assumption, but one that is not unreasonable,” Rainone said. “And without it, we don’t have a paper.”

    “It’s really nice and original work,” Kunduri said. “They showed that all the possibilities presented by Galloway and Schoen can be explicitly realized,” once the aforementioned rotational symmetries are taken into account.

    Galloway was particularly impressed by the strategy invented by Khuri and Rainone. To prove the existence of a five-dimensional black lens of a given p and q, they first embedded the black hole in a higher-dimensional space-time where its existence was easier to prove, in part because there is more room to move around in. Next, they contracted their space-time to five dimensions while keeping the desired topology intact. “It’s a beautiful idea,” Galloway said.

    The great thing about the procedure that Khuri and Rainone introduced, Kunduri said, “is that it’s very general, applying to all possibilities at once.”

    As for what’s next, Khuri has begun looking into whether lens black hole solutions can exist and remain stable in a vacuum without matter fields to support them. A 2021 paper [below] by Lucietti and Fred Tomlinson concluded that it’s not possible — that some kind of matter field is needed. Their argument, however, was not based on a mathematical proof but on computational evidence, “so it is still an open question,” Khuri said.

    Meanwhile, an even bigger mystery looms. “Are we really living in a higher-dimensional realm?” Khuri asked. Physicists have predicted that tiny black holes could someday be produced at the Large Hadron Collider or another even higher-energy particle accelerator. If an accelerator-produced black hole could be detected during its brief, fraction-of-a-second lifetime and observed to have nonspherical topology, Khuri said, that would be evidence that our universe has more than three dimensions of space and one of time.

    Such a finding could clear up another, somewhat more academic issue. “General Relativity,” Khuri said, “has traditionally been a four-dimensional theory.” In exploring ideas about black holes in dimensions five and above, “we are betting on the fact that general relativity is valid in higher dimensions. If any exotic [nonspherical] black holes are detected, that would tell us our bet was justified.”

    new paper
    2021 paper
    Physical Review Letters 2002

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:57 pm on January 21, 2023 Permalink | Reply
    Tags: "Standard Model of Cosmology Survives a Telescope’s Surprising Finds", Quanta Magazine

    From “Quanta Magazine” And The NASA/ESA/CSA James Webb Space Telescope: “Standard Model of Cosmology Survives a Telescope’s Surprising Finds” 

    From “Quanta Magazine”

    And


    The National Aeronautics and Space Administration/European Space Agency [La Agencia Espacial Europea] [Agence spatiale européenne][Europäische Weltraumorganisation](EU)/ Canadian Space Agency [Agence Spatiale Canadienne](CA) James Webb Infrared Space Telescope annotated, finally launched December 25, 2021, ten years late.

    The NASA/ESA/CSA James Webb Space Telescope

    1.20.23
    Rebecca Boyle

    Samuel Velasco/Quanta Magazine; Source: NASA.

    The Webb telescope has spotted galaxies surprisingly far away in space and deep in the past.

    These four, studied by a team called JADES (the JWST Advanced Deep Extragalactic Survey), are all seen as they appeared less than 500 million years after the Big Bang.

    Webb Spectra Reach New Milestone in Redshift Frontier

    The JWST Advanced Deep Extragalactic Survey (JADES) focused on the area in and around the Hubble Space Telescope’s Ultra Deep Field. Using Webb’s NIRCam instrument, scientists observed the field in nine different infrared wavelength ranges. From these images (shown at left), the team searched for faint galaxies that are visible in the infrared but whose spectra abruptly cut off at a critical wavelength known as the Lyman break. Webb’s NIRSpec instrument then yielded a precise measurement of each galaxy’s redshift (shown at right). Four of the galaxies studied are particularly special, as they were revealed to be at an unprecedentedly early epoch. These galaxies date back to less than 400 million years after the big bang, when the universe was only 2% of its current age.

    In the background image blue represents light at 1.15 microns (115W), green is 2.0 microns (200W), and red is 4.44 microns (444W). In the cutout images blue is a combination of 0.9 and 1.15 microns (090W+115W), green is 1.5 and 2.0 microns (150W+200W), and red is 2.0, 2.77, and 4.44 microns (200W+277W+444W).

    JADES is a collaboration of the JWST Near-Infrared Camera (NIRCam) [below] and Near-Infrared Spectrograph (NIRSpec) instrument [below] teams and will comprise about 800 hours of observing time, with full utilization of coordinated parallels. JADES includes 8-10 filters of NIRCam data over the field, in two tiers of depth. Even the medium depth approaches that of the deepest current data, but over a much wider field. The deep tier will likely set the standard for cycle 1 depth with JWST. Spectroscopy of thousands of galaxies will reveal emission lines down to the faintest limits of the high redshift galaxies as well as kinematics and abundances of intermediate redshift galaxies. Extremely deep parallel observations with the mid-infrared instrument at 7.7 and 12 microns will probe the older stars and hot dust of galaxies at cosmic noon and before. Prof. Daniel Eisenstein serves as the NIRCam Extragalactic Team Lead and proposal PI. The international collaboration of JADES has been actively preparing for the data set and is excited to see the long-anticipated promise of JWST come to fruition in cycle 1.

    This image taken by the James Webb Space Telescope highlights the region of study by the JWST Advanced Deep Extragalactic Survey (JADES). This area is in and around the Hubble Space Telescope’s Ultra Deep Field [below]. Scientists used Webb’s NIRCam instrument [below] to observe the field in nine different infrared wavelength ranges. From these images, the team searched for faint galaxies that are visible in the infrared but whose spectra abruptly cut off at a critical wavelength. They conducted additional observations (not shown here) with Webb’s NIRSpec instrument to measure each galaxy’s redshift and reveal the properties of the gas and stars in these galaxies.

    In this image blue represents light at 1.15 microns (115W), green is 2.0 microns (200W), and red is 4.44 microns (444W).

    Simulated JWST/NIRCam mosaic generated using JAGUAR and the NIRCam image simulator Guitarra (C. Willmer, in preparation), at the depth of the JADES Deep program.

    This image is focused on a region of 3’ by 1.5’, and is a composite of the F090W (blue), F115W (green), and F200W (red) filters. The insets show a 5” by 5’ region with multiple high-redshift galaxies, and a 1” by 1” region focused on a galaxy at z = 11.3. Image from Williams et al. (2018, ApJ Supp, 236, 33).

    The cracks in cosmology were supposed to take a while to appear. But when the James Webb Space Telescope (JWST) opened its lens last spring, extremely distant yet very bright galaxies immediately shone into the telescope’s field of view. “They were just so stupidly bright, and they just stood out,” said Rohan Naidu, an astronomer at the Massachusetts Institute of Technology.

    The galaxies’ apparent distances from Earth suggested that they formed much earlier in the history of the universe than anyone anticipated. (The farther away something is, the longer ago its light flared forth.) Doubts swirled, but in December, astronomers confirmed that some of the galaxies are indeed as distant, and therefore as primordial, as they seem. The earliest of those confirmed galaxies shed its light 330 million years after the Big Bang, making it the new record-holder for the earliest known structure in the universe. That galaxy was rather dim, but other candidates loosely pegged to the same time period were already shining bright, meaning they were potentially humongous.

    How could stars ignite inside superheated clouds of gas so soon after the Big Bang? How could they hastily weave themselves into such huge gravitationally bound structures? Finding such big, bright, early galaxies seems akin to finding a fossilized rabbit in Precambrian strata. “There are no big things at early times. It takes a while to get to big things,” said Mike Boylan-Kolchin, a theoretical physicist at the University of Texas-Austin.

    Astronomers began asking whether the profusion of early big things defies the current understanding of the cosmos. Some researchers and media outlets claimed that the telescope’s observations were breaking the standard model of cosmology — a well-tested set of equations called the lambda cold dark matter, or ΛCDM, model — thrillingly pointing to new cosmic ingredients or governing laws.

    It has since become clear, however, that the ΛCDM model is resilient. Instead of forcing researchers to rewrite the rules of cosmology, the JWST findings have astronomers rethinking how galaxies are made, especially in the cosmic beginning. The telescope has not yet broken cosmology, but that doesn’t mean the case of the too-early galaxies will turn out to be anything but epochal.

    Simpler Times

    To see why the detection of very early, bright galaxies is surprising, it helps to understand what cosmologists know — or think they know — about the universe.

    After the Big Bang, the infant universe began cooling off. Within a few million years, the roiling plasma that filled space settled down, and electrons, protons and neutrons combined into atoms, mostly neutral hydrogen. Things were quiet and dark for a period of uncertain duration known as the cosmic dark ages.

    Then something happened.

    Most of the material that flew apart after the Big Bang is made of something we can’t see, called dark matter. It has exerted a powerful influence over the cosmos, especially at first. In the standard picture, cold dark matter (a term that means invisible, slow-moving particles) was flung about the cosmos indiscriminately. In some areas its distribution was denser, and in these regions it began collapsing into clumps. Visible matter, meaning atoms, clustered around the clumps of dark matter. As the atoms cooled off as well, they eventually condensed, and the first stars were born. These new sources of radiation recharged the neutral hydrogen that filled the universe during the so-called “epoch of reionization”.

    Through gravity, larger and more complex structures grew, building a vast cosmic web of galaxies.

    Astronomers with the CEERS-Cosmic Evolution Early Release Science Survey, who are using the James Webb Space Telescope to study the early universe, look at a mosaic of images from the telescope in a visualization lab at the University of Texas-Austin.

    Meanwhile, everything kept flying apart. The astronomer Edwin Hubble figured out in the 1920s that the universe is expanding, and in the late 1990s, his namesake, the Hubble Space Telescope, found evidence that the expansion is accelerating.

    Nobel Prize in Physics for 2011 Expansion of the Universe

    4 October 2011

    The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics for 2011

    with one half to

    Saul Perlmutter
    The Supernova Cosmology Project
    The DOE’s Lawrence Berkeley National Laboratory and The University of California-Berkeley,

    and the other half jointly to

    Brian P. Schmidt
    The High-z Supernova Search Team, The Australian National University, Weston Creek, Australia.

    and

    Adam G. Riess

    The High-z Supernova Search Team, The Johns Hopkins University and The Space Telescope Science Institute, Baltimore, MD.

    Written in the stars

    “Some say the world will end in fire, some say in ice…” *

    What will be the final destiny of the Universe? Probably it will end in ice, if we are to believe this year’s Nobel Laureates in Physics. They have studied several dozen exploding stars, called supernovae, and discovered that the Universe is expanding at an ever-accelerating rate. The discovery came as a complete surprise even to the Laureates themselves.

    In 1998, cosmology was shaken at its foundations as two research teams presented their findings. Headed by Saul Perlmutter, one of the teams had set to work in 1988. Brian Schmidt headed another team, launched at the end of 1994, where Adam Riess was to play a crucial role.

    The research teams raced to map the Universe by locating the most distant supernovae. More sophisticated telescopes on the ground and in space, as well as more powerful computers and new digital imaging sensors (CCD, Nobel Prize in Physics in 2009), opened the possibility in the 1990s to add more pieces to the cosmological puzzle.

    The teams used a particular kind of supernova, called a Type Ia supernova. It is an explosion of an old compact star that is as heavy as the Sun but as small as the Earth. A single such supernova can emit as much light as a whole galaxy. All in all, the two research teams found over 50 distant supernovae whose light was weaker than expected – this was a sign that the expansion of the Universe was accelerating. The potential pitfalls had been numerous, and the scientists found reassurance in the fact that both groups had reached the same astonishing conclusion.

    For almost a century, the Universe has been known to be expanding as a consequence of the Big Bang about 14 billion years ago. However, the discovery that this expansion is accelerating is astounding. If the expansion continues to speed up, the Universe will end in ice.

    The acceleration is thought to be driven by dark energy, but what that dark energy is remains an enigma – perhaps the greatest in physics today. What is known is that dark energy constitutes about three quarters of the Universe. Therefore the findings of the 2011 Nobel Laureates in Physics have helped to unveil a Universe that to a large extent is unknown to science. And everything is possible again.

    *Robert Frost, Fire and Ice, 1920
    ______________________________________________________________________________

    Think of the universe as a loaf of raisin bread. It starts as a mixture of flour, water, yeast and raisins. When you combine these ingredients, the yeast begins respiring and the loaf begins to rise. The raisins within it — stand-ins for galaxies — stretch further apart from one another as the loaf expands.

    The Hubble telescope saw that the loaf is rising ever faster. The raisins are flying apart at a rate that defies their gravitational attraction. This acceleration appears to be driven by the repulsive energy of space itself — so-called dark energy, which is represented by the Greek letter Λ (pronounced “lambda”). Plug values for Λ, cold dark matter, and regular matter and radiation into the equations of Albert Einstein’s general theory of relativity, and you get a model of how the universe evolves. This “lambda cold dark matter” (ΛCDM) model matches almost all observations of the cosmos.
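
    Concretely, the expansion history in this model comes from the Friedmann equation, with the ingredients above entering as density fractions (the standard textbook form for a spatially flat universe, not written out in the article):

    H^2(a) = H_0^2 \left[ \Omega_r\, a^{-4} + \Omega_m\, a^{-3} + \Omega_\Lambda \right],

    where a is the scale factor of the rising loaf, H = \dot{a}/a is the expansion rate, \Omega_m lumps together cold dark matter and ordinary matter, \Omega_r is radiation, and \Omega_\Lambda is the dark energy term.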

    One way to test this picture is by looking at very distant galaxies — equivalent to looking back in time to the first few hundred million years after the tremendous clap that started it all. The cosmos was simpler then, its evolution easier to compare against predictions.

    Astronomers first tried to see the earliest structures of the universe using the Hubble telescope in 1995. Over 10 days, Hubble captured 342 exposures of an empty-looking patch of space in the Big Dipper.

    Astronomers were astonished by the abundance hiding in the inky dark: Hubble could see thousands of galaxies at different distances and stages of development, stretching back to much earlier times than anyone expected. Hubble would go on to find some exceedingly distant galaxies — in 2016, astronomers found its most distant one, called GN-z11, a faint smudge that they dated to 400 million years after the Big Bang.

    That was surprisingly early for a galaxy, but it did not cast doubt on the ΛCDM model in part because the galaxy is tiny, with just 1% of the Milky Way’s mass, and in part because it stood alone. Astronomers needed a more powerful telescope to see whether GN-z11 was an oddball or part of a larger population of puzzlingly early galaxies, which could help determine whether we are missing a crucial piece of the ΛCDM recipe.

    Unaccountably Distant

    That next-generation space telescope, named for former NASA leader James Webb, launched on Christmas Day 2021. As soon as JWST was calibrated, light from early galaxies dripped into its sensitive electronics. Astronomers published a flood of papers describing what they saw.

    Researchers use a version of the Doppler effect to gauge the distances of objects. This is similar to figuring out the location of an ambulance based on its siren: The siren sounds higher in pitch as it approaches and then lower as it recedes. The farther away a galaxy is, the faster it moves away from us, and so its light stretches to longer wavelengths and appears redder. The magnitude of this “redshift” is expressed as z, where a given value for z tells you how long an object’s light must have traveled to reach us.
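
    To make the z-to-time translation concrete, here is a minimal sketch using the astropy library's built-in Planck-parameter cosmology (the library calls are real; the redshifts are ones quoted in this article, and the exact numbers depend slightly on the assumed cosmological parameters):

    from astropy.cosmology import Planck18  # flat Lambda-CDM with Planck 2018 parameters

    # Translate a redshift z into a lookback time and a cosmic age at emission.
    for z in (12.4, 13.2):
        lookback = Planck18.lookback_time(z)   # how long ago the light set out
        age_then = Planck18.age(z)             # how old the universe was at emission
        print(f"z = {z}: light emitted {lookback:.2f} ago, "
              f"about {age_then.to('Myr'):.0f} after the Big Bang")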

    One of the first papers [The Astrophysical Journal Letters (below)] on JWST data came from Naidu, the MIT astronomer, and his colleagues, whose search algorithm flagged a galaxy that seemed inexplicably bright and unaccountably distant. Naidu dubbed it GLASS-z13, indicating its apparent distance at a redshift of 13 — further away than anything seen before. (The galaxy’s redshift was later revised down to 12.4, and it was renamed GLASS-z12.) Other astronomers working on the various sets of JWST observations were reporting redshift values from 11 to 20, including one galaxy called CEERS-1749 or CR2-z17-1, whose light appears to have left it 13.7 billion years ago, just 220 million years after the Big Bang — barely an eyeblink after the beginning of cosmic time.

    These putative detections suggested that the neat story known as ΛCDM might be incomplete. Somehow, galaxies grew huge right away. “In the early universe, you don’t expect to see massive galaxies. They haven’t had time to form that many stars, and they haven’t merged together,” said Chris Lovell, an astrophysicist at the University of Portsmouth in England. Indeed, in a study published in November [The Astrophysical Journal Letters (below)], researchers analyzed computer simulations of universes governed by the ΛCDM model and found that JWST’s early, bright galaxies were an order of magnitude heavier than the ones that formed concurrently in the simulations.

    Some astronomers and media outlets claimed that JWST was breaking cosmology, but not everyone was convinced. One problem is that ΛCDM’s predictions aren’t always clear-cut. While dark matter and dark energy are simple, visible matter has complex interactions and behaviors, and nobody knows exactly what went down in the first years after the Big Bang; those frenetic early times must be approximated in computer simulations. The other problem is that it’s hard to tell exactly how far away galaxies are.

    In the months since the first papers, the ages of some of the alleged high-redshift galaxies have been reconsidered. Some were demoted to later stages of cosmic evolution because of updated telescope calibrations. CEERS-1749 is found in a region of the sky containing a cluster of galaxies whose light was emitted 12.4 billion years ago, and Naidu says it’s possible the galaxy is actually part of this cluster — a nearer interloper that might be filled with dust that makes it appear more redshifted than it is. According to Naidu, CEERS-1749 is weird no matter how far away it is. “It would be a new type of galaxy that we did not know of: a very low-mass, tiny galaxy that has somehow built up a lot of dust in it, which is something we traditionally do not expect,” he said. “There might just be these new types of objects that are confounding our searches for the very distant galaxies.”

    The Lyman Break

    Everyone knew that the most definitive distance estimates would require JWST’s most powerful capability.

    JWST not only observes starlight through photometry, or measuring brightness, but also through spectroscopy, or measuring the light’s wavelengths. If a photometric observation is like a picture of a face in a crowd, then a spectroscopic observation is like a DNA test that can tell an individual’s family history. Naidu and others who found large early galaxies measured redshift using brightness-derived measurements — essentially looking at faces in the crowd using a really good camera. That method is far from airtight. (At a January meeting of the American Astronomical Society, astronomers quipped that maybe half of the early galaxies observed with photometry alone will turn out to be accurately measured.)

    But in early December, cosmologists announced that they had combined both methods for four galaxies. The JWST Advanced Deep Extragalactic Survey (JADES) team searched for galaxies whose infrared light spectrum abruptly cuts off at a critical wavelength known as the Lyman break. This break occurs because hydrogen floating in the space between galaxies absorbs light. Because of the continuing expansion of the universe — the ever-rising raisin loaf — the light of distant galaxies is shifted, so the wavelength of that abrupt break shifts too. When a galaxy’s light appears to drop off at longer wavelengths, it is more distant. JADES identified spectra with redshifts up to 13.2, meaning the galaxy’s light was emitted 13.4 billion years ago.
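
    A back-of-the-envelope version of that shift (my own numbers, not from the JADES analysis): at these redshifts the break effectively sits at the rest-frame Lyman-alpha wavelength near 1216 angstroms, because intergalactic hydrogen absorbs almost everything blueward of it, and the observed break simply scales with (1 + z):

    # Where the Lyman break lands for a source at redshift z.
    LYMAN_ALPHA_REST_ANGSTROM = 1215.67   # rest-frame wavelength of the break edge

    def observed_break_microns(z: float) -> float:
        """Observed wavelength of the Lyman break, in microns, for redshift z."""
        return LYMAN_ALPHA_REST_ANGSTROM * (1.0 + z) * 1e-4   # angstroms -> microns

    print(observed_break_microns(13.2))   # roughly 1.7 microns, well inside NIRCam's range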

    3
    Merrill Sherman/Quanta Magazine.

    As soon as the data was downlinked, JADES researchers began “freaking out” in a shared Slack group, according to Kevin Hainline, an astronomer at the University of Arizona. “It was like, ‘Oh my God, oh my God, we did it we did it we did it!’” he said. “These spectra are just the beginning of what I think is going to be astronomy-changing science.”

    Brant Robertson, a JADES astronomer at the University of California-Santa Cruz, says the findings show that the early universe changed rapidly in its first billion years, with galaxies evolving 10 times quicker than they do today. It’s similar to how “a hummingbird is a small creature,” he said, “but its heart beats so quickly that it is living kind of a different life than other creatures. The heartbeat of these galaxies is happening on a much more rapid timescale than something the size of the Milky Way.”

    But were their hearts beating too fast for ΛCDM to explain?

    Theoretical Possibilities

    As astronomers and the public gaped at JWST images, researchers started working behind the scenes to determine whether the galaxies blinking into our view really upend ΛCDM or just help nail down the numbers we should plug into its equations.

    One important yet poorly understood number concerns the masses of the earliest galaxies. Cosmologists try to determine their masses in order to tell whether they match ΛCDM’s predicted timeline of galaxy growth.

    A galaxy’s mass is derived from its brightness. But Megan Donahue, an astrophysicist at Michigan State University, says that at best, the relationship between mass and brightness is an educated guess, based on assumptions gleaned from known stars and well-studied galaxies.

    One key assumption is that stars always form within a certain statistical range of masses, called the initial mass function (IMF). This IMF parameter is crucial for gleaning a galaxy’s mass from measurements of its brightness, because hot, blue, heavy stars produce more light, while the majority of a galaxy’s mass is typically locked up in cool, red, small stars.
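
    As a rough illustration of why this assumption matters so much, the toy Monte Carlo sketch below (my own simplification, not anything the researchers ran) draws stellar masses from a power-law IMF and assigns each star a luminosity with the crude main-sequence scaling L ∝ m^3.5. Flattening the power-law slope, a "top-heavy" IMF, sharply raises the light produced per unit of stellar mass, which is exactly the lever that could make early galaxies look heavier than they are.

```python
# A hedged toy model: a Salpeter-like IMF versus a top-heavy one.
# The slopes, mass limits and L ~ m**3.5 scaling are textbook
# simplifications chosen for illustration, not JADES values.
import numpy as np

rng = np.random.default_rng(0)

def sample_imf(alpha, n, m_min=0.1, m_max=100.0):
    """Draw n stellar masses (in solar masses) from dN/dm proportional to m**-alpha."""
    u = rng.random(n)
    a, b = m_min**(1 - alpha), m_max**(1 - alpha)
    return (a + u * (b - a))**(1 / (1 - alpha))

for label, alpha in [("Salpeter-like, alpha = 2.35", 2.35),
                     ("top-heavy,     alpha = 1.80", 1.80)]:
    m = sample_imf(alpha, 1_000_000)
    lum = m**3.5                                  # crude mass-luminosity relation
    print(f"{label}: light per unit stellar mass = {lum.sum() / m.sum():9.1f} "
          f"(stars above 8 Msun supply {100 * lum[m > 8].sum() / lum.sum():.1f}% of the light)")
```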

    But it’s possible that the IMF was different in the early universe. If so, JWST’s early galaxies might not be as heavy as their brightness suggests; they might be bright but light. This possibility causes headaches, because changing this basic input to the ΛCDM model could give you almost any answer you want. Lovell says some astronomers consider fiddling with the IMF “the domain of the wicked.”

    “If we don’t understand the initial mass function, then understanding galaxies at high redshift is really a challenge,” said Wendy Freedman, an astrophysicist at the University of Chicago. Her team is working on observations and computer simulations that will help pin down the IMF in different environments.

    Over the course of the fall, many experts came to suspect that tweaks to the IMF and other factors could be enough to square the very ancient galaxies lighting upon JWST’s instruments with ΛCDM. “I think it’s actually more likely that we can accommodate these observations within the standard paradigm,” said Rachel Somerville, an astrophysicist at the Flatiron Institute (which, like Quanta Magazine, is funded by the Simons Foundation). In that case, she said, “what we learn is: How fast can [dark matter] halos collect the gas? How fast can we make the gas cool off and get dense, and make stars? Maybe that happens faster in the early universe; maybe the gas is denser; maybe somehow it is flowing in faster. I think we’re still learning about those processes.”

    Somerville also studies the possibility that black holes interfered with the baby cosmos. Astronomers have noticed [MNRAS (below)] a few glowing supermassive black holes at a redshift of 6 or 7, about a billion years after the Big Bang. It is hard to conceive of how, by that time, stars could have formed, died and then collapsed into black holes that ate everything surrounding them and began spewing radiation.

    But if there are black holes inside the putative early galaxies, that could explain why the galaxies seem so bright, even if they’re not actually very massive, Somerville said.

    Confirmation that ΛCDM can accommodate at least some of JWST’s early galaxies arrived the day before Christmas. Astronomers led by Benjamin Keller at the University of Memphis checked [The Astrophysical Journal Letters (below)] a handful of major supercomputer simulations of ΛCDM universes and found that the simulations could produce galaxies as heavy as the four that were spectroscopically studied by the JADES team. (These four are, notably, smaller and dimmer than other purported early galaxies such as GLASS-z12.) In the team’s analysis, all the simulations yielded galaxies the size of the JADES findings at a redshift of 10. One simulation could create such galaxies at a redshift of 13, the same as what JADES saw, and two others could build the galaxies at an even higher redshift. None of the JADES galaxies was in tension with the current ΛCDM paradigm, Keller and colleagues reported on the preprint server arxiv.org on December 24.

    Though they lack the heft to break the prevailing cosmological model, the JADES galaxies have other special characteristics. Hainline said their stars seem unpolluted by metals from previously exploded stars. This could mean they are Population III stars — the avidly sought first generation of stars to ever ignite — and that they may be contributing to the reionization of the universe. If this is true, then JWST has already peered back to the mysterious period when the universe was set on its present course.

    Extraordinary Evidence

    Spectroscopic confirmation of additional early galaxies could come this spring, depending on how JWST’s time allocation committee divvies things up. An observing campaign called WDEEP will specifically search for galaxies from less than 300 million years after the Big Bang. As researchers confirm more galaxies’ distances and get better at estimating their masses, they’ll help settle ΛCDM’s fate.

    Many other observations are already underway that could change the picture for ΛCDM. Freedman, who is studying the initial mass function, was up at 1 a.m. one night downloading JWST data on variable stars that she uses as “standard candles” for measuring distances and ages. Those measurements could help shake out another potential problem with ΛCDM, known as the Hubble tension. The problem is that the universe currently seems to be expanding faster than ΛCDM predicts for a 13.8-billion-year-old universe. Cosmologists have plenty of possible explanations. Perhaps, some cosmologists speculate, the density of the dark energy that’s accelerating the expansion of the universe is not constant, as in ΛCDM, but changes over time. Changing the expansion history of the universe might not only resolve the Hubble tension but also revise calculations of the age of the universe at a given redshift. JWST might be seeing an early galaxy as it appeared, say, 500 million years after the Big Bang rather than 300 million. Then even the heaviest putative early galaxies in JWST’s mirrors would have had plenty of time to coalesce, says Somerville.
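
    To get a feel for how the inferred age at a given redshift depends on the assumed expansion history, here is a small sketch using astropy's standard cosmology classes. The H0 = 73 variant below is purely illustrative; the 300-versus-500-million-year scenario described above would require larger changes to the expansion history, such as evolving dark energy, than a simple shift in the Hubble constant.

```python
# A minimal sketch, assuming astropy; the H0 = 73 cosmology is an
# illustrative stand-in, not a fitted model.
from astropy.cosmology import Planck18, FlatLambdaCDM

z = 13.2
variant = FlatLambdaCDM(H0=73.0, Om0=0.30)

for name, cosmo in [("Planck18 (H0 ~ 67.7)", Planck18),
                    ("illustrative H0 = 73", variant)]:
    print(f"{name}: universe was ~{cosmo.age(z).to('Myr').value:.0f} Myr old at z = {z}")
```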

    Astronomers run out of superlatives when they talk about JWST’s early galaxy results. They pepper their conversations with laughter, expletives and exclamations, even as they remind themselves of Carl Sagan’s adage, however overused, that extraordinary claims require extraordinary evidence. They can’t wait to get their hands on more images and spectra, which will help them hone or tweak their models. “Those are the best problems,” said Boylan-Kolchin, “because no matter what you get, the answer is interesting.”

    The Astrophysical Journal Letters 2022
    The Astrophysical Journal Letters 2022
    The Astrophysical Journal Letters 2022
    MNRAS 2017
    See the science papers for instructive material with images.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The NASA/ESA/CSA James Webb Space Telescope is a large infrared telescope with a 6.5-meter primary mirror. Webb was finally launched December 25, 2021, ten years late. Webb will be the premier observatory of the next decade, serving thousands of astronomers worldwide. It will study every phase in the history of our Universe, ranging from the first luminous glows after the Big Bang, to the formation of solar systems capable of supporting life on planets like Earth, to the evolution of our own Solar System.

    Webb is the world’s largest, most powerful, and most complex space science telescope ever built. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it.

    Webb was formerly known as the “Next Generation Space Telescope” (NGST); it was renamed in Sept. 2002 after a former NASA administrator, James Webb.

    Webb is an international collaboration between the National Aeronautics and Space Administration (NASA), the European Space Agency (ESA), and the Canadian Space Agency (CSA). The NASA Goddard Space Flight Center managed the development effort. The main industrial partner is Northrop Grumman; the Space Telescope Science Institute operates Webb.

    Several innovative technologies have been developed for Webb. These include a folding, segmented primary mirror, adjusted to shape after launch; ultra-lightweight beryllium optics; detectors able to record extremely weak signals; microshutters that enable programmable object selection for the spectrograph; and a cryocooler for cooling the mid-IR detectors to 7 K.

    There are four science instruments on Webb: The Near InfraRed Camera (NIRCam), The Near InfraRed Spectrograph (NIRSpec), The Mid-InfraRed Instrument (MIRI), and The Fine Guidance Sensor/Near InfraRed Imager and Slitless Spectrograph (FGS-NIRISS).

    Webb’s instruments are designed to work primarily in the infrared range of the electromagnetic spectrum, with some capability in the visible range. It will be sensitive to light from 0.6 to 28 micrometers in wavelength.
    National Aeronautics Space Agency Webb NIRCam.

    The European Space Agency [La Agencia Espacial Europea] [Agence spatiale européenne][Europäische Weltraumorganization](EU) Webb MIRI schematic.

    Webb has four main science themes: The End of the Dark Ages: First Light and Reionization, The Assembly of Galaxies, The Birth of Stars and Protoplanetary Systems, and Planetary Systems and the Origins of Life.

    Launch was December 25, 2021, ten years late, on an Ariane 5 rocket. The launch was from Arianespace’s ELA-3 launch complex at European Spaceport located near Kourou, French Guiana. Webb is located at the second Lagrange point, about a million miles from the Earth.


    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
    • Bhibuthi bhusan Patel 12:23 am on January 22, 2023 Permalink | Reply

      The gravity equal to dark matter comes from the rotation of galaxy, evolved super massive black hole at the center of galaxy and stars. Rotation is the change in position of a galaxy with others in time axis.


    • Bhibuthi bhusan Patel 5:35 am on January 22, 2023 Permalink | Reply

      The evolution of galaxy is due to rotation. The dynamic force of rotation is the gravity evolves super massive black hole at the center of galaxy and stars.


  • richardmitnick 3:06 pm on January 15, 2023 Permalink | Reply
    Tags: "New Algorithm Closes Quantum Supremacy Window", "Quantum error correction", , Quanta Magazine,   

    From “Quanta Magazine” : “New Algorithm Closes Quantum Supremacy Window” 

    From “Quanta Magazine”

    1.9.23
    Ben Brubaker

    1
    In random circuit sampling, researchers take quantum bits and randomly manipulate them. A new paper explores how errors in quantum computers can multiply to thwart these efforts. Credit: Kristina Armitage and Merrill Sherman/Quanta Magazine.

    In what specific cases do quantum computers surpass their classical counterparts? That’s a hard question to answer, in part because today’s quantum computers are finicky things, plagued with errors that can pile up and spoil their calculations.

    By one measure, of course, they’ve already done it. In 2019, physicists at Google announced that they used a 53-qubit machine to achieve quantum supremacy, a symbolic milestone marking the point at which a quantum computer does something beyond the reach of any practical classical algorithm.

    Similar demonstrations by physicists at the University of Science and Technology of China soon followed.

    But rather than focus on an experimental result for one particular machine, computer scientists want to know whether classical algorithms will be able to keep up as quantum computers get bigger and bigger. “The hope is that eventually the quantum side just completely pulls away until there’s no competition anymore,” said Scott Aaronson, a computer scientist at the University of Texas-Austin.

    That general question is still hard to answer, again in part because of those pesky errors. (Future quantum machines will compensate for their imperfections using a technique called “quantum error correction”, but that capability is still a ways off.) Is it possible to get the hoped-for runaway quantum advantage even with uncorrected errors?

    Most researchers suspected the answer was no, but they couldn’t prove it for all cases. Now, in a paper posted to the preprint server arxiv.org, a team of computer scientists has taken a major step toward a comprehensive proof that error correction is necessary for a lasting quantum advantage in random circuit sampling — the bespoke problem that Google used to show quantum supremacy. They did so by developing a classical algorithm that can simulate random circuit sampling experiments when errors are present.

    “It’s a beautiful theoretical result,” Aaronson said, while stressing that the new algorithm is not practically useful for simulating real experiments like Google’s.

    In random circuit sampling experiments, researchers start with an array of qubits, or quantum bits. They then randomly manipulate these qubits with operations called quantum gates. Some gates cause pairs of qubits to become entangled, meaning they share a quantum state and can’t be described separately. Repeated layers of gates bring the qubits into a more complicated entangled state.

    To learn about that quantum state, researchers then measure all the qubits in the array. This causes their collective quantum state to collapse to a random string of ordinary bits — 0s and 1s. The number of possible outcomes grows rapidly with the number of qubits in the array: With 53 qubits, as in Google’s experiment, it’s nearly 10 quadrillion. And not all strings are equally likely. Sampling from a random circuit means repeating such measurements many times to build up a picture of the probability distribution underlying the outcomes.
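
    The toy sketch below (my own stand-in for the real experiments, with the entire random circuit lumped into a single Haar-random unitary) runs that loop on a handful of simulated qubits: prepare a random entangled state, measure it repeatedly, and tally the very uneven bitstring statistics.

```python
# A hedged toy version of error-free random circuit sampling, assuming
# numpy and scipy; a real experiment applies many layers of gates rather
# than one big random unitary, but the sampling statistics are analogous.
import numpy as np
from scipy.stats import unitary_group

n_qubits = 5
dim = 2**n_qubits
rng = np.random.default_rng(7)

U = unitary_group.rvs(dim, random_state=7)   # the whole random circuit, as one unitary
state = U[:, 0]                              # applying U to the all-zeros state |00000>
probs = np.abs(state)**2
probs /= probs.sum()                         # guard against floating-point rounding

samples = rng.choice(dim, size=10_000, p=probs)   # repeated measurements = "sampling"
counts = np.bincount(samples, minlength=dim)

top = int(np.argmax(probs))
print(f"{dim} possible bitstrings for {n_qubits} qubits; with 53 qubits it would be {2**53:,}")
print(f"most likely outcome {top:0{n_qubits}b} appeared {counts[top]} times in 10,000 shots")
```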

    The question of quantum advantage is simply this: Is it hard to mimic that probability distribution with a classical algorithm that doesn’t use any entanglement?

    In 2019, researchers proved [Nature Physics (below)] that the answer is yes for error-free quantum circuits: It is indeed hard to classically simulate a random circuit sampling experiment when there are no errors.

    The researchers worked within the framework of computational complexity theory, which classifies the relative difficulty of different problems. In this field, researchers don't treat the number of qubits as a fixed number such as 53. "Think of it as n, which is some number that's going to increase," said Aram Harrow, a physicist at the Massachusetts Institute of Technology. "Then you want to ask: Are we doing things where the effort is exponential in n or polynomial in n?" This is the preferred way to classify an algorithm's runtime — when n grows large enough, an algorithm that's exponential in n lags far behind any algorithm that's polynomial in n. When theorists speak of a problem that's hard for classical computers but easy for quantum computers, they're referring to this distinction: The best classical algorithm takes exponential time, while a quantum computer can solve the problem in polynomial time.
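
    A two-line comparison (mine, with n cubed standing in for an arbitrary polynomial) shows how quickly the two kinds of scaling diverge:

```python
# Polynomial versus exponential growth in the number of qubits n.
for n in (10, 30, 60):
    print(f"n = {n:2d}:   n**3 = {n**3:>9,}    2**n = {2**n:,}")
```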

    Yet that 2019 paper ignored the effects of errors caused by imperfect gates. This left open the case of a quantum advantage for random circuit sampling without error correction.

    If you imagine continually increasing the number of qubits as complexity theorists do, and you also want to account for errors, you need to decide whether you’re also going to keep adding more layers of gates — increasing the circuit depth, as researchers say. Suppose you keep the circuit depth constant at, say, a relatively shallow three layers, as you increase the number of qubits. You won’t get much entanglement, and the output will still be amenable to classical simulation. On the other hand, if you increase the circuit depth to keep up with the growing number of qubits, the cumulative effects of gate errors will wash out the entanglement, and the output will again become easy to simulate classically.

    But in between lies a Goldilocks zone. Before the new paper, it was still a possibility that quantum advantage could survive here, even as the number of qubits increased. In this intermediate-depth case, you increase the circuit depth extremely slowly as the number of qubits grows: Even though the output will steadily get degraded by errors, it might still be hard to simulate classically at each step.

    The new paper closes this loophole. The authors derived a classical algorithm for simulating random circuit sampling and proved that its runtime is a polynomial function of the time required to run the corresponding quantum experiment. The result forges a tight theoretical connection between the speed of classical and quantum approaches to random circuit sampling.

    The new algorithm works for a major class of intermediate-depth circuits, but its underlying assumptions break down for certain shallower ones, leaving a small gap where efficient classical simulation methods are unknown. But few researchers are holding out hope that random circuit sampling will prove hard to simulate classically in this remaining slim window. “I give it pretty small odds,” said Bill Fefferman, a computer scientist at the University of Chicago and one of the authors of the 2019 theory paper.

    The result suggests that random circuit sampling won’t yield a quantum advantage by the rigorous standards of computational complexity theory. At the same time, it illustrates the fact that polynomial algorithms, which complexity theorists indiscriminately call efficient, aren’t necessarily fast in practice. The new classical algorithm gets progressively slower as the error rate decreases, and at the low error rates achieved in quantum supremacy experiments, it’s far too slow to be practical. With no errors it breaks down altogether, so this result doesn’t contradict anything researchers knew about how hard it is to classically simulate random circuit sampling in the ideal, error-free case. Sergio Boixo, the physicist leading Google’s quantum supremacy research, says he regards the paper “more as a nice confirmation of random circuit sampling than anything else.”

    On one point, all researchers agree: The new algorithm underscores how crucial quantum error correction will be to the long-term success of quantum computing. “That’s the solution, at the end of the day,” Fefferman said.

    Science paper:
    Nature Physics

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:11 pm on January 6, 2023 Permalink | Reply
    Tags: "Inside Ancient Asteroids Gamma Rays Made Building Blocks of Life", , , , , , , , Meteorites could have contributed to the origin of life on Earth., Quanta Magazine   

    From “Quanta Magazine” : “Inside Ancient Asteroids Gamma Rays Made Building Blocks of Life” 

    From “Quanta Magazine”

    1.4.23
    John Rennie-Deputy Editor
    Allison Parshall-Writing Intern

    1
    Credit: Kristina Armitage/Quanta Magazine.

    In 2021, the Hayabusa2 space mission successfully delivered a morsel of the asteroid 162173 Ryugu to Earth — five grams of the oldest, most pristine matter left over from the solar system’s formation 4.5 billion years ago.

    Last spring, scientists revealed that the chemical composition of the asteroid includes 10 amino acids, the building blocks of proteins. The discovery added to the evidence that the primordial soup from which life on Earth arose may have been seasoned with amino acids from pieces of asteroids.

    But where did these amino acids come from? The amino acids flowing through our ecosystems are products of cellular metabolism, mostly in plants. What nonbiological mechanism could have put them in meteorites and asteroids?

    Scientists have thought of several ways, and recent work [ACS Central Science (below)] by researchers in Japan points to a significant new one: a mechanism that uses gamma rays to forge amino acids. Their discovery makes it seem even more likely that meteorites could have contributed to the origin of life on Earth.

    Despite their cachet as an essential part of life’s chemistry, amino acids are simple molecules that can be cooked up artlessly from carbon, oxygen and nitrogen compounds if there’s sufficient energy. Seventy years ago, famous experiments by Stanley Miller and Harold Urey proved that an electrical discharge in a gaseous mixture of methane, ammonia and hydrogen (which at the time was incorrectly thought to mimic Earth’s early atmosphere) was all it took to make a mixture of organic compounds that included amino acids. Later laboratory work suggested that amino acids could also potentially form in sediments near hydrothermal vents on the seafloor, and a discovery in 2018 [Nature (below)] confirmed that this does sometimes occur.

    The possibility that the original amino acids might have come from space began to catch on after 1969, when two large meteorites — the Murchison meteorite in southeastern Australia and the Allende meteorite in Mexico — were recovered promptly after their impacts.

    2
    Murchison meteorite at the National Museum of Natural History (Washington)

    3
    A 520-gram individual from the Allende meteorite shower. Allende is a carbonaceous chondrite (CV3) that fell in Mexico on February 8, 1969.

    Both were carbonaceous chondrites, a rare class of meteorites resembling Ryugu that scientists think accreted from smaller icy bodies after the solar system first formed. Both also contained small but significant amounts of amino acids, although scientists couldn’t rule out the possibility that the amino acids were contaminants or byproducts of their impact.

    Still, space scientists knew that the icy dust bodies that formed carbonaceous chondrites were likely to contain water, ammonia and small carbon molecules like aldehydes and methanol, so the elemental constituents of amino acids would have been present. They needed only a source of energy to facilitate the reaction. Experimental work suggested that ultraviolet radiation from supernovas could have been strong enough to do it. Collisions between the dust bodies could also have heated them enough to produce a similar effect.

    “We know a lot of ways to make amino acids abiologically,” said Scott Sandford, a laboratory astrophysicist at NASA’s Ames Research Center. “And there’s no reason to expect that they didn’t all happen.”

    Now a team of researchers at Yokohama National University in Japan, led by the chemists Yoko Kebukawa and Kensei Kobayashi, has shown that gamma rays could also have produced the amino acids in chondrites. In their new work, they showed that gamma rays from radioactive elements in the chondrites — most probably aluminum-26 — could convert the carbon, nitrogen and oxygen compounds into amino acids.

    Of course, gamma rays can destroy organic compounds as easily as they can make them. But in the Japanese team’s experiments, “the enhancement of amino acid production by the radioisotopes was more effective than decomposition,” Kebukawa said, so the gamma rays produced more amino acids than they destroyed. From the rates of production observed in their experiments, the researchers calculated very roughly that gamma rays could have raised the concentration of amino acids in a carbonaceous chondrite asteroid to the levels seen in the Murchison meteorite in as little as 1,000 years or as many as 100,000.

    Since gamma rays, unlike ultraviolet light, can penetrate deep into the interior of an asteroid or meteorite, this mechanism could have extra relevance to origin-of-life scenarios. “It opens up a whole new environment in which amino acids can be made,” Sandford said. If meteorites are big enough, “the middle part of them could survive atmospheric entry even if the outside ablates away,” he explained. “So you’re not only making [amino acids] but you’re making them on the path to get to a planet.”

    3
    The meteorites called carbonaceous chondrites, such as the one at left, accreted from smaller icy bodies that contained mixtures of compounds rich in carbon, nitrogen and oxygen. Their conglomerated structure is visible in a magnified cross section. Credits: Susan E. Degginger/Alamy Stock Photo (left); Laurence Garvie/ Buseck Center for Meteorite Studies, Arizona State University.

    One requirement of the new mechanism is that small amounts of liquid water must be present to support the reactions. That might seem like a significant limitation — “I can easily imagine that people think liquid water hardly exists in space environments,” Kebukawa said. But carbonaceous chondrite meteorites are full of minerals such as hydrated silicates and carbonates that only form in the presence of water, she explained, and tiny amounts of water have even been found trapped inside some of the mineral grains in chondrites.

    From such mineralogical evidence, said Vassilissa Vinogradoff, an astrochemist at Aix-Marseille University in France, scientists know that young asteroids held significant amounts of liquid water. “The aqueous alteration phase of these bodies, which is when the amino acids in question would have had a chance to form, was a period of about a million years,” she said — more than long enough to produce the quantities of amino acids observed in meteorites.

    Sandford notes that in experiments he and other researchers have conducted, irradiation of icy mixtures like those in the primordial interstellar molecular clouds can give rise to thousands of compounds relevant to life, including sugars and nucleobases, “and amino acids are virtually always there in the mix. So the universe seems to be kind of hard-wired to make amino acids.”

    Vinogradoff echoed that view and said that the diversity of organic compounds that can be present in meteorites is now known to be vast. “The question has pivoted more to be: Why are these molecules the ones that have proved important for life on Earth?” she said. Why, for example, does terrestrial life use only 20 of the scores of amino acids that can be produced — and why does it almost exclusively use the “left-handed” structures of those molecules when the mirror-image “right-handed” structures naturally form in equal abundance? Those may be the mysteries that dominate chemical studies of life’s earliest origins in the future.

    Science papers:
    ACS Central Science
    See the above science paper for instructive material with images.
    Nature

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:45 pm on January 3, 2023 Permalink | Reply
    Tags: "Google Engineer Long Out of Math Cracks Devilish Problem About Sets", A family of sets is “union-closed” if the combination of any two sets in the family equals an existing set in the family., , Quanta Magazine, Set Theory, The "union-closed conjecture"   

    From “Quanta Magazine” : “Google Engineer Long Out of Math Cracks Devilish Problem About Sets” 

    From “Quanta Magazine”

    1.3.23
    Kevin Hartnett

    1
    A family of sets is “union-closed” if the combination of any two sets in the family equals an existing set in the family. Credit: Kristina Armitage/Quanta Magazine.

    In mid-October, Justin Gilmer flew from California to New York to attend a friend’s wedding. While on the East Coast he visited his former adviser, Michael Saks, a mathematician at Rutgers University, where Gilmer had received his doctorate seven years earlier.

    Saks and Gilmer caught up over lunch, but they didn’t talk about math. In fact, Gilmer had not thought seriously about math since finishing at Rutgers in 2015. That was when he’d decided he didn’t want a career in academia and instead started to teach himself to program. As he and Saks ate, Gilmer told his old mentor about his job at Google, where he works on machine learning and artificial intelligence.

    It was sunny the day Gilmer visited Rutgers. As he walked around, he recalled how in 2013 he’d spent the better part of a year walking those same campus paths, thinking about a problem called the “union-closed conjecture”. It had been a fixation, though a fruitless one: For all his effort, Gilmer had only succeeded in teaching himself why the simple-seeming problem about sets of numbers was so difficult to solve.

    “I think a lot of people think about the problem until they become satisfied that they understand why it’s hard. I probably spent more time on it than most people,” Gilmer said.

    Following his October visit, something unexpected happened: He got a new idea. Gilmer began to think about ways to apply techniques from information theory to solve the union-closed conjecture. He pursued the idea for a month, at every turn expecting it to fail. But instead, the path to a proof kept opening up. Finally, on November 16 he posted a first-of-its-kind result [below] that gets mathematicians much of the way toward proving the full conjecture.

    The paper set off a flurry of follow-up work. Mathematicians at the University of Oxford, the Massachusetts Institute of Technology and the Institute for Advanced Study, among other institutions, quickly built on Gilmer’s novel methods. But before they did, they asked a question of their own: Just who is this guy?

    Half Full

    The union-closed conjecture is about collections of numbers called sets, such as {1, 2} and {2, 3, 4}. You can perform operations on sets, including taking their union, which means combining them. For example, the union of {1, 2} and {2, 3, 4} is {1, 2, 3, 4}.

    A collection, or family, of sets is considered “union-closed” if the union of any two sets in the family equals an existing set in the family. For example, consider this family of four sets:

    {1}, {1, 2}, {2, 3, 4}, {1, 2, 3, 4}.

    Combine any pair and you get a set that’s already in the family, making the family union-closed.
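
    The same check takes only a few lines of code (my own snippet, using Python's built-in set union):

```python
# Verify that the four-set family above is union-closed.
from itertools import combinations

family = {frozenset({1}), frozenset({1, 2}),
          frozenset({2, 3, 4}), frozenset({1, 2, 3, 4})}

print(all(a | b in family for a, b in combinations(family, 2)))  # True
```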

    Mathematicians chatted about versions of the union-closed conjecture as far back as the 1960s, but it received its first formal statement in a 1979 paper by Péter Frankl [below], a Hungarian mathematician who emigrated to Japan in the 1980s and who counts street performing among his pursuits.

    Frankl conjectured that if a family of sets is union-closed, it must have at least one element (or number) that appears in at least half the sets. It was a natural threshold for two reasons.

    First, there are readily available examples of union-closed families in which all elements appear in exactly 50% of the sets. Like all the different sets you can make from the numbers 1 to 10, for instance. There are 1,024 such sets, which form a union-closed family, and each of the 10 elements appears in 512 of them. And second, at the time Frankl made the conjecture no one had ever produced an example of a union-closed family in which the conjecture didn’t hold.

    So 50% seemed like the right prediction.

    That didn’t mean it was easy to prove. In the years since Frankl’s paper, there have been few results. Prior to Gilmer’s work, those papers only managed to establish thresholds that varied with the number of sets in the family (as opposed to being the same 50% threshold for set families of all sizes).

    “It feels like it should be easy, and it’s similar to a lot of problems that are easy, but it has resisted attacks,” said Will Sawin of Columbia University.

    The lack of progress reflected both the tricky nature of the problem and the fact that many mathematicians preferred not to think about it; they worried they’d lose years of their careers chasing a beguiling problem that was impossible to solve. Gilmer remembers a day in 2013 when he went to Saks’ office and brought up the union-closed conjecture. His adviser — who in the past had wrestled with the problem himself — nearly threw him out of the room.

    “Mike said, ‘Justin, you’re going to get me thinking about this problem again and I don’t want to do that,’” said Gilmer.

    An Insight of Uncertainty

    Following his visit to Rutgers, Gilmer rolled the problem around in his mind, trying to understand why it was so hard. He prompted himself with a basic fact: If you have a family of 100 sets, there are 4,950 different ways of choosing two and taking their union. Then he asked himself: How is it possible that 4,950 different unions map back onto just 100 sets if no element appears in those unions with at least some frequency?

    Even at that point he was on his way to a proof, though he didn’t know it yet. Techniques from information theory, which provides a rigorous way of thinking about what to expect when you pull a pair of objects at random, would take him there.

    Information theory developed in the first half of the 20th century, most famously with Claude Shannon’s 1948 paper [below], A Mathematical Theory of Communication. The paper provided a precise way of calculating the amount of information needed to send a message, based on the amount of uncertainty around what exactly the message would say. This link — between information and uncertainty — was Shannon’s remarkable, fundamental insight.

    To take a toy example, imagine I flip a coin five times and send the resulting sequence to you. If it’s a normal coin, it takes five bits of information to transmit. But if it’s a loaded coin — say, 99% likely to land on heads — it takes a lot less. For example, we could agree ahead of time that I’ll send you a 1 (a single bit of information) if the loaded coin lands heads all five times, which it’s very likely to do. There’s more surprise in the outcome of a fair coin flip than there is with a biased one, and therefore more information.
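
    The numbers behind that coin example drop out of the standard binary-entropy formula (this quick check is mine, not Shannon's):

```python
# Shannon entropy, in bits, of five coin flips: fair versus heavily loaded.
import math

def entropy_bits(p):
    """Entropy of one flip with heads-probability p."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(f"fair coin:   {5 * entropy_bits(0.50):.2f} bits for five flips")   # 5.00
print(f"loaded coin: {5 * entropy_bits(0.99):.2f} bits for five flips")   # ~0.40
```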

    The same thinking applies to the information contained in sets of numbers. If I have a family of union-closed sets — say the 1,024 sets made from the numbers 1 to 10 — I could pick two sets at random. Then I could communicate the elements of each set to you. The amount of information it takes to send that message reflects the amount of uncertainty around what those elements are: There’s a 50% chance, for example, that the first element in the first set is a 1 (because 1 appears in half the sets in the family), just as there’s a 50% chance the first result in a sequence of fair coin flips is heads.

    Information theory appears often in combinatorics, an area of mathematics concerned with counting objects, which is what Gilmer had studied as a graduate student. But as he flew back home to California, he worried that the way he thought to connect information theory to the union-closed conjecture was the naïve insight of an amateur: Surely working mathematicians had come across this shiny object before and recognized it as fool’s gold.

    “To be honest, I’m a little surprised no one thought of this before,” said Gilmer. “But maybe I shouldn’t be surprised, because I myself had thought about it for a year, and I knew information theory.”

    More Likely Than Not

    Gilmer worked on the problem at night, after finishing his work at Google, and on weekends throughout the second half of October and early November. He was encouraged by ideas that a group of mathematicians had explored years earlier in an open collaboration on the blog of a prominent mathematician named Tim Gowers. He also worked with a textbook by his side so he could look up formulas he’d forgotten.

    “You would think someone who comes up with a great result shouldn’t have to consult Chapter 2 of Elements of Information Theory, but I did,” Gilmer said.

    Gilmer’s strategy was to imagine a union-closed family in which no element appeared in even 1% of all the sets — a counterexample that, if it really existed, would falsify Frankl’s conjecture.

    Let’s say you choose two sets, A and B, from this family at random and consider the elements that could be in those sets, one at a time. Now ask: What are the odds that set A contains the number 1? And set B? Since every element has a little less than a 1% chance of appearing in any given set, you wouldn’t expect either A or B to contain 1. Which means there’s little surprise — and little information gained — if you learn that neither in fact does.

    Next, think about the chance that the union of A and B contains 1. It’s still unlikely, but it’s more likely than the odds that it appears in either of the individual sets. It’s the sum of the likelihood it appears in A and the likelihood it appears in B minus the likelihood it appears in both. So, maybe just under 2%.

    This is still low, but it’s closer to a 50-50 proposition. That means it takes more information to share the result. In other words, if there’s a union-closed family in which no element appears in at least 1% of all the sets, there’s more information in the union of two sets than in either of the sets themselves.
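
    Here is a rough numerical version of that step (my arithmetic, not Gilmer's proof, and it treats the two random draws as independent purely for illustration): the entropy of the question "does the union contain 1?" exceeds the entropy of the same question asked of a single set.

```python
# Per-element entropy when each element appears in only 1% of the sets.
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.01                      # chance a given element is in one randomly chosen set
p_union = 1 - (1 - p)**2      # ~0.0199, assuming the two draws behave independently

print(f"one set:    {h(p):.4f} bits per element")        # ~0.081
print(f"the union:  {h(p_union):.4f} bits per element")  # ~0.141, i.e. larger
```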

    “The idea of revealing things element by element and looking at the amount of information you learn is extremely clever. That’s the main idea of the proof,” said Ryan Alweiss of Princeton University.

    At this point Gilmer was starting to close in on Frankl’s conjecture. That’s because it’s easy to demonstrate that in a union-closed family, the union of two sets necessarily contains less information than the sets themselves — not more.

    To see why, think about that union-closed family containing the 1,024 different sets you can make from the numbers 1 to 10. If you pick two of those sets at random, on average you’ll end up with sets containing five elements. (Of those 1,024 sets, 252 contain five elements, making that the most common set size.) You’re also likely to end up with a union containing about seven elements. But there are only 120 different ways of making sets containing seven elements.

    The point is, there’s more uncertainty about the contents of two randomly chosen sets than there is about their union. The union skews to larger sets with more elements, for which there are fewer possibilities. When you take the union of two sets in a union-closed family, you kind of know what you’re going to get — like when you flip a biased coin — which means the union contains less information than the sets it’s composed of.

    With that, Gilmer had a proof. He knew if no element appears in even 1% of the sets, the union is forced to contain more information. But the union must contain less information. Therefore there must be at least one element that appears in at least 1% of the sets.

    The Push to 50

    When Gilmer posted his proof on November 16, he included a note that he thought it was possible to use his method to get even closer to a proof of the full conjecture, potentially raising the threshold to 38%.

    Five days later, three different groups of mathematicians posted papers within hours of each other that built on Gilmer’s work to do just that. Additional papers followed [below], but the initial burst seems to have taken Gilmer’s methods as far as they will go; getting to 50% will likely take additional new ideas.

    Still, for some of the authors of the follow-up papers, getting to 38% was relatively straightforward, and they wondered why Gilmer didn’t just do it himself. The simplest explanation turned out to be the correct one: After more than a half-decade out of math, Gilmer just didn’t know how to do some of the technical analytic work required to pull it off.

    “I was a bit rusty, and to be honest, I was stuck,” Gilmer said. “But I was eager to see where the community would take it.”

    Yet Gilmer thinks the same circumstances that left him out of practice probably made his proof possible in the first place.

    “It’s the only way I can explain why I thought about the problem for a year in graduate school and made no progress, I left math for six years, then returned to the problem and made this breakthrough,” he said. “I don’t know how to explain it other than being in machine learning biased my thinking.”

    Science papers:
    Additional papers followed
    three different groups of mathematicians posted papers
    a first-of-its-kind result
    1979 paper by Péter Frankl
    Claude Shannon’s 1948 paper

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:02 pm on December 27, 2022 Permalink | Reply
    Tags: "Cosmic Map of Ultrahigh-Energy Particles Points to Long-Hidden Treasures", “Anisotropy”: the property of a material which allows it to change or assume different properties in different directions as opposed to “isotropy” - uniformity in all orientations., , Certain candidate objects sit at the right locations., Charged particles from space bang into air molecules in the sky triggering particle showers that rain down to the ground., Cosmic rays are produced by high-energy astrophysics sources., More clues have arrived in the form of super-energetic neutrinos., , Quanta Magazine, Starburst galaxies and active galactic nuclei and tidal disruption events have emerged as top candidates for the dominant source of ultrahigh-energy cosmic rays., , The Pierre Auger Observatory in Argentina- the world’s largest cosmic-ray observatory., , The Telescope Array in Utah, To know what’s making ultrahigh-energy cosmic rays step one is to see where they’re coming from.   

    From “Quanta Magazine” : “Cosmic Map of Ultrahigh-Energy Particles Points to Long-Hidden Treasures” 

    From “Quanta Magazine”

    April 27, 2021 [Just found from another Quanta article]
    Natalie Wolchover

    1
    Starburst galaxies, active galactic nuclei and tidal disruption events (from left) have emerged as top candidates for the dominant source of ultrahigh-energy cosmic rays. Daniel Chang for Quanta Magazine.

    In the 1930s, the French physicist Pierre Auger placed Geiger counters along a ridge in the Alps and observed that they would sometimes spontaneously click at the same time, even when they were up to 300 meters apart. He knew that the coincident clicks came from cosmic rays, charged particles from space that bang into air molecules in the sky, triggering particle showers that rain down to the ground.

    But Auger realized that for cosmic rays to trigger the kind of enormous showers he was seeing, they must carry fantastical amounts of energy — so much that, he wrote in 1939 [Reviews of Modern Physics (below)], “it is actually impossible to imagine a single process able to give to a particle such an energy.”

    Upon constructing larger arrays of Geiger counters and other kinds of detectors, physicists learned that cosmic rays reach energies at least 100,000 times higher than Auger supposed.

    A cosmic ray is just an atomic nucleus — a proton or a cluster of protons and neutrons. Yet the rare ones known as “ultrahigh-energy” cosmic rays have as much energy as professionally served tennis balls. They’re millions of times more energetic than the protons that hurtle around the circular tunnel of the Large Hadron Collider in Europe at 99.9999991% of the speed of light.

    In fact, the most energetic cosmic ray ever detected, nicknamed the “Oh-My-God particle,” struck the sky in 1991 going something like 99.99999999999999999999951% of the speed of light, giving it roughly the energy of a bowling ball dropped from shoulder height onto a toe. “You would have to build a collider as large as the orbit of the planet Mercury to accelerate protons to the energies we see,” said Ralph Engel, an astrophysicist at the Karlsruhe Institute of Technology (DE) and the co-leader of the world’s largest cosmic-ray observatory, the Pierre Auger Observatory in Argentina.
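
    The comparisons in this passage are back-of-the-envelope energy conversions; here is one version of the arithmetic, with the ball masses, serve speed and drop height chosen as illustrative round numbers rather than figures from the article.

```python
# Rough unit conversions for the "Oh-My-God" particle (~3.2e20 eV).
eV = 1.602176634e-19                   # joules per electron volt
omg_particle = 3.2e20 * eV             # ~51 J carried by a single nucleus

tennis_ball = 0.5 * 0.057 * 55**2      # 57 g ball served at ~55 m/s (about 200 km/h)
bowling_ball = 7.0 * 9.81 * 1.4        # 7 kg ball dropped from ~1.4 m

print(f"Oh-My-God particle:   ~{omg_particle:.0f} J")
print(f"served tennis ball:   ~{tennis_ball:.0f} J")
print(f"dropped bowling ball: ~{bowling_ball:.0f} J")
# All three land in the same ballpark: tens of joules, here packed into one nucleus.
```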

    The question is: What’s out there in space doing the accelerating?

    Supernova explosions are now thought to be capable of producing the astonishingly energetic cosmic rays that Auger first observed 82 years ago. Yet supernovas can’t possibly yield the far more astonishing particles that have been seen since. The origins of these ultrahigh-energy cosmic rays remain uncertain. But a series of recent advances has significantly narrowed the search.

    In 2017, the Auger Observatory announced a major discovery [Science (below)]. With its 1,600 particle detectors and 27 telescopes dotting a patch of Argentinian prairie the size of Rhode Island, the observatory had recorded the air showers of hundreds of thousands of ultrahigh-energy cosmic rays over the previous 13 years. The team reported that 6% more of the rays come from one half of the sky than the other — the first pattern ever definitively detected in the arrival directions of cosmic rays.

    Recently, three theorists at New York University offered an elegant explanation for the imbalance that experts see as highly convincing. The new paper [The Astrophysical Journal Letters (below)], by Chen Ding, Noémie Globus and Glennys Farrar, implies that ultrapowerful cosmic-ray accelerators are ubiquitous, cosmically speaking, rather than rare.

    The Auger Observatory and The Telescope Array in Utah have also detected smaller, subtler cosmic ray “hot spots” in the sky — presumably the locations of nearby sources.

    Certain candidate objects sit at the right locations.

    More clues have arrived in the form of super-energetic neutrinos, which are produced by ultrahigh-energy cosmic rays. Collectively, the recent discoveries have focused the search for the universe’s ultrapowerful accelerators on three main contenders. Now, theorists are busy modeling these astrophysical objects to see whether they’re indeed capable of flinging fast-enough particles toward us, and if so, how.

    These speculations are brand new and unconstrained by any data. “If you go to high energies, things are really unexplored,” Engel said. “You really go somewhere where everything is blank.”

    A Fine Imbalance

    To know what’s making ultrahigh-energy cosmic rays step one is to see where they’re coming from. The trouble is that, because the particles are electrically charged, they don’t travel here in straight lines; their paths bend as they pass through magnetic fields.

    Moreover, the ultrahigh-energy particles are rare, striking each square kilometer of Earth’s sky only about once per year. Identifying any pattern in their arrival directions requires teasing out subtle statistical imbalances from a huge data set.

    No one knew how much data would be needed before patterns would emerge. Physicists spent decades building ever-larger arrays of detectors without seeing even a hint of a pattern. Then in the early 1990s, the Scottish astrophysicist Alan Watson and the American physicist Jim Cronin decided to go really big. They embarked on what would become the 3,000-square-kilometer Auger Observatory [displayed above].

    Finally, that was enough. When the Auger team reported in Science in 2017 [above] that it had detected a 6% imbalance between two halves of the sky — where an excess of particles from one particular direction in the sky smoothly transitioned into a deficit centered in the opposite direction — “that was fantastically exciting,” said Watson. “I’ve worked in this field for a very, very long time” — since the 1960s — “and this is the first time we’ve had an ‘anisotropy’” [a dependence on direction, as opposed to “isotropy,” uniformity in all orientations].

    2
    Samuel Velasco/Quanta Magazine; Source: arxiv.org/pdf/2101.04564

    But the data was also puzzling. The direction of the cosmic-ray excess was nowhere near the center of the Milky Way galaxy, supporting the long-standing hypothesis that ultrahigh-energy cosmic rays come from outside the galaxy. But it was nowhere near anything. It didn’t correspond to the location of some powerful astrophysical object like a supermassive black hole in a neighboring galaxy. It wasn’t the Virgo cluster, the dense nearby concentration of galaxies. It was just a dull, dark spot near the constellation Canis Major.

    Noémie Globus, then a postdoc at the Hebrew University of Jerusalem, immediately saw a way to explain the pattern. She began by making a simplification: that every bit of matter in the universe has equal probability of producing some small number of ultrahigh-energy cosmic rays. She then mapped out how those cosmic rays would bend slightly as they emanate from nearby galaxies, galaxy groups and clusters — collectively known as the large-scale structure of the cosmos — and travel here through the weak magnetic fields of intergalactic space. Naturally, her pretend map was just a blurry picture of the large-scale structure itself, with the highest concentration of cosmic rays coming from Virgo.

    Her cosmic-ray excess wasn’t in the right spot to explain Auger’s data, but she thought she knew why: because she hadn’t adequately accounted for the magnetic field of the Milky Way. In 2019, Globus moved to NYU to work with the astrophysicist Glennys Farrar, whose 2012 model [The Astrophysical Journal (below)] of the Milky Way’s magnetic field, developed with her then-graduate student Ronnie Jansson, remains state of the art. Although no one yet understands why the galaxy’s magnetic field is shaped the way it is, Farrar and Jansson inferred its geometry from 40,000 measurements of polarized light. They ascertained that magnetic field lines arc both clockwise and counterclockwise along the spiral arms of the galaxy and emanate vertically from the galactic disk, twisting as they rise.

    Farrar’s graduate student Chen Ding wrote code that refined Globus’ map of ultrahigh-energy cosmic rays coming from the large-scale structure, then passed this input through the distorting lens of the galactic magnetic field as modeled by Farrar and Jansson. “And lo and behold we get this remarkable agreement with the observations,” Farrar said.

    Virgo-originating cosmic rays bend around in the galaxy’s twisting field lines so that they strike us from the direction of Canis Major, where Auger sees the center of its excess. The researchers analyzed how the resulting pattern would change for cosmic rays of different energies. They consistently found a close match with different subsets of Auger’s data.

    The researchers’ “continuous model” of the origins of ultrahigh-energy cosmic rays is a simplification — every piece of matter does not emit ultrahigh-energy cosmic rays. But its striking success reveals that the actual sources of the rays are abundant and spread evenly throughout all matter, tracing the large-scale structure. The study, which will appear in The Astrophysical Journal Letters, has garnered widespread praise. “This is really a fantastic step,” Watson said.

    Immediately, certain stocks have risen: in particular, three types of candidate objects that thread the needle of being relatively common in the cosmos yet potentially special enough to yield Oh-My-God particles.

    Icarus Stars

    In 2008, Farrar and a co-author proposed [below] that cataclysms called tidal disruption events (TDEs) might be the source of ultrahigh-energy cosmic rays.

    A TDE occurs when a star pulls an Icarus and gets too close to a supermassive black hole. The star’s front feels so much more gravity than its back that the star gets ripped to smithereens and swirls into the abyss. The swirling lasts about a year. While it lasts, two jets of material — the subatomic shreds of the disrupted star — shoot out from the black hole in opposite directions. Shock waves and magnetic fields in these beams might then conspire to accelerate nuclei to ultrahigh energies before slingshotting them into space.

    Tidal disruption events occur roughly once every 100,000 years in every galaxy, which is the cosmological equivalent of happening everywhere all the time. Since galaxies trace the matter distribution, TDEs could explain the success of Ding, Globus and Farrar’s continuous model.

    Moreover, the relatively brief flash of a TDE solves other puzzles. By the time a TDE’s cosmic ray reaches us, the TDE will have been dark for thousands of years. Other cosmic rays from the same TDE might take separate bent paths; some might not arrive for centuries. The transient nature of a TDE could explain why there seems to be so little pattern to cosmic rays’ arrival directions, with no strong correlations with the positions of known objects. “I’m inclined now to believe they are transients, mostly,” Farrar said of the rays’ origins.

    The TDE hypothesis got another boost recently, from an observation reported in Nature Astronomy [below] in February.

    Robert Stein, one of the paper’s authors, was operating a telescope in California called the Zwicky Transient Facility in October 2019 when an alert came in from the IceCube neutrino observatory in Antarctica.


    __________________________________________________
    U Wisconsin IceCube neutrino observatory

    IceCube employs more than 5,000 detectors lowered on 86 strings frozen into the Antarctic ice. Credit: NSF/B. Gudbjartsson, IceCube Collaboration.
    __________________________________________________
    IceCube had spotted a particularly energetic neutrino. High-energy neutrinos are produced when even-higher-energy cosmic rays scatter off light or matter in the environment where they’re created. Luckily, the neutrinos, being neutral, travel to us in straight lines, so they point directly back to the source of their parent cosmic ray.

    Stein swiveled the telescope in the arrival direction of IceCube’s neutrino. “We immediately saw there was a tidal disruption event from the position that the neutrino had arrived from,” he said.

    The correspondence makes it more likely that TDEs are at least one source of ultrahigh-energy cosmic rays. However, the neutrino’s energy was probably too low to prove that TDEs produce the very highest-energy rays. Some researchers strongly question whether these transients can accelerate nuclei to the extreme end of the observed energy spectrum; theorists are still exploring how the events might accelerate particles in the first place.

    Meanwhile, other facts have turned some researchers’ attention elsewhere.

    Starburst Superwinds

    Cosmic-ray observatories such as Auger and the Telescope Array have also found a few hot spots — small, subtle concentrations in the arrival directions of the very highest-energy cosmic rays. In 2018, Auger published [below] the results of a comparison of its hot spots to the locations of astrophysical objects within a few hundred million light-years of here. (Cosmic rays from farther away would lose too much energy in mid-journey collisions.)

    In the cross-correlation contest, no type of object performed exceptionally well — understandably, given the deflection cosmic rays experience. But the strongest correlation surprised many experts: About 10% of the rays came from within 13 degrees of the directions of so-called “starburst galaxies.” “They were not on my plate originally,” said Michael Unger of the Karlsruhe Institute of Technology, a member of the Auger team.

    4

    No one was more thrilled than Luis Anchordoqui, an astrophysicist at Lehman College of the City University of New York, who proposed [Physical Review D (below)] starburst galaxies as the origin of ultrahigh-energy cosmic rays in 1999. “I can be kind of biased on these things because I was the one proposing the model that now the data is pointing to,” he said.

    Starburst galaxies constantly manufacture a lot of huge stars. The massive stars live fast and die young in supernova explosions, and Anchordoqui argues [Physical Review D (below)] that the “superwind” formed by the collective shock waves of all the supernovas is what accelerates cosmic rays to the mind-boggling speeds that we detect.

    Not everyone is sure that this mechanism would work. “The question is: How fast are those shocks?” said Frank Rieger, an astrophysicist at Heidelberg University. “Should I expect those to go to the highest energies? At the moment I am doubtful about it.”

    Other researchers argue that objects inside starburst galaxies might be acting as cosmic-ray accelerators, and that the cross-correlation study is simply picking up on an abundance of these other objects. “As a person who thinks of transient events as a natural source, those are very enriched in starburst galaxies, so I have no trouble,” said Farrar.

    Active Galaxies

    In the cross-correlation study, another kind of object performed almost but not quite as well as starburst galaxies: objects called active galactic nuclei, or AGNs.

    AGNs are the white-hot centers of “active” galaxies, in which plasma engulfs the central supermassive black hole. The black hole sucks the plasma in while shooting out enormous, long-lasting jets.

    The high-power members of an especially bright subset called “radio-loud” AGNs are the most luminous persistent objects in the universe, so they’ve long been leading candidates for the source of ultrahigh-energy cosmic rays.

    However, these powerful radio-loud AGNs are too rare in the cosmos to pass the Ding, Globus and Farrar test: They couldn’t possibly be tracers for the large-scale structure. In fact, within our cosmic neighborhood, there are almost none. “They’re nice sources but not in our backyard,” Rieger said.

    Less powerful radio-loud AGNs are much more common and could potentially resemble the continuous model. Centaurus A, for instance, the nearest radio-loud AGN, sits right at the Auger Observatory’s most prominent hot spot. (So does a starburst galaxy.)

    For a long time Rieger and other specialists seriously struggled to get low-power AGNs to accelerate protons to Oh-My-God-particle levels. But a recent finding has brought them “back in the game,” he said.

    Astrophysicists have long known that about 90% of all cosmic rays are protons (that is, hydrogen nuclei); another 9% are helium nuclei. The rays can be heavier nuclei such as oxygen or even iron, but experts long assumed that these would get ripped apart by the violent processes needed to accelerate ultrahigh-energy cosmic rays.

    Then, in surprising findings [below] in the early 2010s, Auger Observatory scientists inferred from the shapes of the air showers that ultrahigh-energy rays are mostly middleweight nuclei, such as carbon, nitrogen and silicon. These nuclei will achieve the same energy as protons while traveling at lower speeds. And that, in turn, makes it easier to imagine how any of the candidate cosmic accelerators might work.
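
    A standard back-of-the-envelope way to see why heavier nuclei help is the Hillas estimate: the maximum energy a magnetized region of size R and field strength B can impart grows in proportion to the particle’s electric charge Z. The sketch below uses invented source parameters chosen only for illustration, not values from any of the models above.

```python
# Hillas-style estimate of maximum cosmic-ray energy (illustrative parameters).
def hillas_energy_eev(z, b_microgauss, r_kpc, beta=1.0):
    """Maximum energy in EeV: E ~ 0.9 * Z * beta * (B / microgauss) * (R / kpc)."""
    return 0.9 * z * beta * b_microgauss * r_kpc

# Hypothetical accelerating region: ~100 microgauss field over ~0.5 kiloparsecs.
for name, z in [("proton", 1), ("nitrogen", 7), ("iron", 26)]:
    print(f"{name:8s} ~ {hillas_energy_eev(z, 100.0, 0.5):.0f} EeV")
```

    In the same hypothetical accelerator, a nitrogen nucleus reaches seven times the energy of a proton and iron twenty-six times, which is why a middleweight or heavy composition eases the acceleration problem.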

    For example, Rieger has identified a mechanism [Proceedings of Science (below)] that would allow low-power AGNs to accelerate heavier cosmic rays to ultrahigh energies: A particle could drift from side to side in an AGN’s jet, getting kicked each time it reenters the fastest part of the flow. “In that case they find they can do that with the low-power radio sources,” Rieger said. “Those would be much more in our backyard.”

    Another paper [Scientific Reports (below)] explored whether tidal disruption events would naturally produce middleweight nuclei. “The answer is that it could happen if the stars that are disrupted are white dwarfs,” said Cecilia Lunardini, an astrophysicist at Arizona State University who co-authored the paper. “White dwarfs have this sort of composition — carbon, nitrogen.” Of course, TDEs can happen to any “unfortunate star,” Lunardini said. “But there are lots of white dwarfs, so I don’t see this as something very contrived.”

    Researchers continue to explore the implications of the highest-energy cosmic rays being on the heavy side. But they can agree that it makes the problem of how to accelerate them easier. “The heavy composition towards higher energy relaxes things much more,” Rieger said.

    Primary Source

    As the short list of candidate accelerators crystallizes, the search for the right answer will continue to be led by new observations. Everyone is excited for AugerPrime, an upgraded observatory; starting later this year, it will identify the composition of each individual cosmic ray event, rather than estimating the overall composition. That way, researchers can isolate the protons, which deflect the least on their way to Earth, and look back at their arrival directions to identify individual sources. (These sources would presumably produce the heavier nuclei as well.)

    Many experts suspect that a mix of sources might contribute to the ultrahigh-energy cosmic-ray spectrum. But they generally expect one source type to dominate, and only one to reach the extreme end of the spectrum. “My money is on that it’s only one,” said Unger.

    Reviews of Modern Physics 1939
    Physical Review D 1999
    Farrar and a co-author proposed 2008
    Science 2017
    Physical Review D 2020
    Nature Astronomy 2021
    Scientific Reports 2019
    The Astrophysical Journal Letters 2021
    The Astrophysical Journal 2012 [Draft version October 29, 2018]
    Auger published 2018
    surprising findings 2019
    Proceedings of Science 2019
    See the above science papers for instructive material with images.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 8:19 pm on December 27, 2022 Permalink | Reply
    Tags: "Inside the Proton the ‘Most Complicated Thing You Could Possibly Imagine’", "QCD": Quantum chromodynamics-the theory governing the interactions of quarks and gluons in protons and neutrons.., After SLAC’s discovery-which won the Nobel Prize in Physics in 1990-scrutiny of the proton intensified., , By using higher-energy electrons physicists can ferret out finer features of the target proton., Deep inelastic scattering, Higher-energy colliders also produce a wider array of collision outcomes letting researchers choose different subsets of the outgoing electrons to analyze., More than a century after Ernest Rutherford discovered the positively charged particle at the heart of every atom physicists are still struggling to fully understand the proton., , , Physicists have carried out hundreds of scattering experiments to date., Physicists infer various aspects of the object’s interior by adjusting how forcefully they bombard it and by choosing which scattered particles they collect in the aftermath., Proof that the proton contains multitudes came from The Stanford Linear Accelerator Center (SLAC) in 1967., Quanta Magazine, , Researchers recently discovered that the proton sometimes includes a charm quark and charm antiquark-colossal particles that are each heavier than the proton itself., The proton is a quantum mechanical object that exists as a haze of probabilities until an experiment forces it to take a concrete form., The proton is the most complicated thing that you could possibly imagine., The proton's forms differ drastically depending on how researchers set up their experiment.   

    From “Quanta Magazine” : “Inside the Proton the ‘Most Complicated Thing You Could Possibly Imagine’” 

    From “Quanta Magazine”

    10.19.22 [A year end retrospective.]
    Charlie Wood

    1
    Researchers recently discovered that the proton sometimes includes a charm quark and charm antiquark, colossal particles that are each heavier than the proton itself. Samuel Velasco/Quanta Magazine.

    More than a century after Ernest Rutherford discovered the positively charged particle at the heart of every atom, physicists are still struggling to fully understand the proton.

    High school physics teachers describe them as featureless balls with one unit each of positive electric charge — the perfect foils for the negatively charged electrons that buzz around them. College students learn that the ball is actually a bundle of three elementary particles called quarks. But decades of research have revealed a deeper truth, one that’s too bizarre to fully capture with words or images.

    “This is the most complicated thing that you could possibly imagine,” said Mike Williams, a physicist at the Massachusetts Institute of Technology. “In fact, you can’t even imagine how complicated it is.”

    The proton is a quantum mechanical object that exists as a haze of probabilities until an experiment forces it to take a concrete form. And its forms differ drastically depending on how researchers set up their experiment. Connecting the particle’s many faces has been the work of generations. “We’re kind of just starting to understand this system in a complete way,” said Richard Milner, a nuclear physicist at The Massachusetts Institute of Technology.

    As the pursuit continues, the proton’s secrets keep tumbling out. Most recently, a monumental data analysis published in August [Nature (below)] found that the proton contains traces of particles called charm quarks that are heavier than the proton itself.

    The proton “has been humbling to humans,” Williams said. “Every time you think you kind of have a handle on it, it throws you some curveballs.”

    Recently, Milner, together with Rolf Ent at The DOE’s Thomas Jefferson National Accelerator Facility, Massachusetts Institute of Technology filmmakers Chris Boebel and Joe McMaster, and animator James LaPlante, set out to transform a set of arcane plots that compile the results of hundreds of experiments into a series of animations of the shape-shifting proton. We have incorporated their animations into our own attempt to unveil its secrets.

    Cracking Open the Proton

    Proof that the proton contains multitudes came from The Stanford Linear Accelerator Center (SLAC) in 1967. In earlier experiments, researchers had pelted it with electrons and watched them ricochet off like billiard balls. But SLAC could hurl electrons more forcefully, and researchers saw that they bounced back differently. The electrons were hitting the proton hard enough to shatter it — a process called deep inelastic scattering — and were rebounding from point-like shards of the proton called quarks. “That was the first evidence that quarks actually exist,” said Xiaochao Zheng, a physicist at the University of Virginia.

    After SLAC’s discovery, which won the Nobel Prize in Physics in 1990, scrutiny of the proton intensified. Physicists have carried out hundreds of scattering experiments to date. They infer various aspects of the object’s interior by adjusting how forcefully they bombard it and by choosing which scattered particles they collect in the aftermath.

    2

    By using higher-energy electrons, physicists can ferret out finer features of the target proton. In this way, the electron energy sets the maximum resolving power of a deep inelastic scattering experiment. More powerful particle colliders offer a sharper view of the proton.

    Higher-energy colliders also produce a wider array of collision outcomes, letting researchers choose different subsets of the outgoing electrons to analyze. This flexibility has proved key to understanding quarks, which careen about inside the proton with different amounts of momentum.

    By measuring the energy and trajectory of each scattered electron, researchers can tell if it has glanced off a quark carrying a large chunk of the proton’s total momentum or just a smidgen. Through repeated collisions, they can take something like a census — determining whether the proton’s momentum is mostly bound up in a few quarks, or distributed over many.
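
    For readers who want to see the bookkeeping, here is a minimal sketch of that inference using the standard fixed-target deep-inelastic-scattering formulas. The beam energy, scattered energy and angle below are hypothetical numbers chosen only to illustrate the calculation.

```python
# Leading-order parton-model kinematics for deep inelastic scattering (illustrative).
import math

M_PROTON = 0.938    # proton mass, in GeV

def bjorken_x(beam_energy, scattered_energy, angle_deg):
    """Fraction of the proton's momentum carried by the struck quark."""
    theta = math.radians(angle_deg)
    q2 = 4.0 * beam_energy * scattered_energy * math.sin(theta / 2.0) ** 2
    nu = beam_energy - scattered_energy      # energy transferred to the proton
    return q2 / (2.0 * M_PROTON * nu)

# e.g. a 20 GeV electron scattering to 8 GeV at 15 degrees:
print(round(bjorken_x(20.0, 8.0, 15.0), 2))
```

    The printed value, roughly 0.5 in this example, is the fraction of the proton’s momentum carried by the struck quark; repeating the measurement over many collisions builds up the census described above.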

    3

    Even SLAC’s proton-splitting collisions were gentle by today’s standards. In those scattering events, electrons often shot out in ways suggesting that they had crashed into quarks carrying a third of the proton’s total momentum. The finding matched a theory from Murray Gell-Mann and George Zweig, who in 1964 posited that a proton consists of three quarks.

    Gell-Mann and Zweig’s “quark model” remains an elegant way to imagine the proton. It has two “up” quarks with electric charges of +2/3 each and one “down” quark with a charge of −1/3, for a total proton charge of +1.


    Three quarks careen about in this data-driven animation. Credit: MIT/Jefferson Lab/Sputnik Animation.

    But the quark model is an oversimplification that has serious shortcomings.

    It fails, for instance, when it comes to a proton’s spin, a quantum property analogous to angular momentum. The proton has half a unit of spin, as do each of its up and down quarks. Physicists initially supposed that — in a calculation echoing the simple charge arithmetic — the half-units of the two up quarks minus that of the down quark must equal half a unit for the proton as a whole. But in 1988, the European Muon Collaboration reported [Physics Letters B (below)] that the quark spins add up to far less than one-half. Similarly, the masses of two up quarks and one down quark only comprise about 1% of the proton’s total mass. These deficits drove home a point physicists were already coming to appreciate: The proton is much more than three quarks.
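
    The arithmetic behind those statements is simple to check. The snippet below redoes the naive quark-model sums for charge, spin and mass, using exact fractions for the first two and rough textbook current-quark masses (a few MeV each) for the third; none of these numbers come from the experiments discussed here.

```python
# Naive quark-model bookkeeping: charge, spin and mass (textbook values).
from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)      # quark electric charges
print(up + up + down)                           # proton charge: 1
print(up + down + down)                         # neutron charge: 0

half = Fraction(1, 2)
print(half + half - half)                       # naive proton spin: 1/2

m_up, m_down, m_proton = 2.2, 4.7, 938.3        # approximate masses, in MeV
print(f"{(2 * m_up + m_down) / m_proton:.1%}")  # quark masses: ~1% of the proton
```

    The charge and spin sums work out on paper, but experiment finds the quark spins contribute far less than the naive tally, and the quark masses cover only about 1% of the proton’s mass, which is exactly the shortfall described above.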

    Much More Than Three Quarks

    The Hadron-Electron Ring Accelerator (HERA), which operated in Hamburg, Germany, from 1992 to 2007, slammed electrons into protons roughly a thousand times more forcefully than SLAC had.

    In HERA experiments, physicists could select electrons that had bounced off of extremely low-momentum quarks, including ones carrying as little as 0.005% of the proton’s total momentum. And detect them they did: HERA’s electrons rebounded from a maelstrom of low-momentum quarks and their antimatter counterparts, antiquarks.


    Many quarks and antiquarks seethe in a roiling particle “sea.” Credit: MIT/Jefferson Lab/Sputnik Animation.

    The results confirmed a sophisticated and outlandish theory that had by then replaced Gell-Mann and Zweig’s quark model. Developed in the 1970s, it was a quantum theory of the “strong interaction” that acts between quarks. The theory describes quarks as being roped together by force-carrying particles called gluons. Each quark and each gluon has one of three types of “colour” charge, labeled red, green and blue; these colour-charged particles naturally tug on each other and form a group — such as a proton — whose colours add up to a neutral white. The colourful theory became known as quantum chromodynamics, or “QCD”.

    According to QCD, gluons can pick up momentary spikes of energy. With this energy, a gluon splits into a quark and an antiquark — each carrying just a tiny bit of momentum — before the pair annihilates and disappears. It’s this “sea” of transient gluons, quarks and antiquarks that HERA, with its greater sensitivity to lower-momentum particles, detected firsthand.

    HERA also picked up hints of what the proton would look like in more powerful colliders. As physicists adjusted HERA to look for lower-momentum quarks, these quarks — which come from gluons — showed up in greater and greater numbers. The results suggested that in even higher-energy collisions, the proton would appear as a cloud made up almost entirely of gluons.


    Gluons abound in a cloud-like form. Credit: MIT/Jefferson Lab/Sputnik Animation.

    The gluon dandelion is exactly what QCD predicts. “The HERA data are direct experimental proof that QCD describes nature,” Milner said.

    But the young theory’s victory came with a bitter pill: While QCD beautifully described the dance of short-lived quarks and gluons revealed by HERA’s extreme collisions, the theory is useless for understanding the three long-lasting quarks seen in SLAC’s gentle bombardment.

    QCD’s predictions are easy to understand only when the strong force is relatively weak. And the strong force weakens only when quarks are extremely close together, as they are in short-lived quark-antiquark pairs. Frank Wilczek, David Gross and David Politzer identified this defining feature of QCD in 1973, winning the Nobel Prize for it 31 years later.

    But for gentler collisions like SLAC’s, where the proton acts like three quarks that mutually keep their distance, these quarks pull on each other strongly enough that QCD calculations become impossible. Thus, the task of further demystifying the three-quark view of the proton has fallen largely to experimentalists. (Researchers who run “digital experiments,” in which QCD predictions are simulated on supercomputers, have also made key contributions.) And it’s in this low-resolution picture that physicists keep finding surprises.

    A Charming New View

    Recently, a team led by Juan Rojo of the National Institute for Subatomic Physics in the Netherlands and VU University Amsterdam analyzed more than 5,000 proton snapshots taken over the last 50 years, using machine learning to infer the motions of quarks and gluons inside the proton in a way that sidesteps theoretical guesswork.

    The new scrutiny picked up a background blur in the images that had escaped past researchers. In relatively soft collisions just barely breaking the proton open, most of the momentum was locked up in the usual three quarks: two ups and a down. But a small amount of momentum appeared to come from a “charm” quark and charm antiquark — colossal elementary particles that each outweigh the entire proton by more than one-third.


    The proton sometimes acts like a “molecule” of five quarks.

    Short-lived charms frequently show up in the “quark sea” view of the proton (gluons can split into any of six different quark types if they have enough energy). But the results from Rojo and colleagues suggest that the charms have a more permanent presence, making them detectable in gentler collisions. In these collisions, the proton appears as a quantum mixture, or superposition, of multiple states: An electron usually encounters the three lightweight quarks. But it will occasionally encounter a rarer “molecule” of five quarks, such as an up, down and charm quark grouped on one side and an up quark and charm antiquark on the other.

    Such subtle details about the proton’s makeup could prove consequential. At the Large Hadron Collider, physicists search for new elementary particles by bashing high-speed protons together and seeing what pops out; to understand the results, researchers need to know what’s in a proton to begin with.

    The occasional apparition of giant charm quarks would throw off the odds [below] of making more exotic particles.

    And when protons called cosmic rays hurtle here from outer space and slam into protons in Earth’s atmosphere, charm quarks popping up at the right moments would shower Earth with extra-energetic neutrinos [SciPost Physics (below)], researchers calculated in 2021. These could confound observers searching for high-energy neutrinos coming from across the cosmos.

    Rojo’s collaboration plans to continue exploring the proton by searching for an imbalance between charm quarks and antiquarks. And heavier constituents, such as the top quark, could make even rarer and harder-to-detect appearances.

    Next-generation experiments will seek still more unknown features.




    Physicists at Brookhaven National Laboratory hope to fire up the Electron-Ion Collider in the 2030s and pick up where HERA left off, taking higher-resolution snapshots that will enable the first 3D reconstructions of the proton.

    The EIC will also use spinning electrons to create detailed maps of the spins of the internal quarks and gluons, just as SLAC and HERA mapped out their momenta. This should help researchers to finally pin down the origin of the proton’s spin, and to address other fundamental questions about the baffling particle that makes up most of our everyday world.

    Correction: October 20, 2022
    A previous version of the article erroneously implied that lower-momentum quarks live shorter lives than higher-momentum quarks in the quark sea. The text has been updated to clarify that all these quarks are lower-momentum and shorter-lived than those in the three quark-picture.

    Science papers:
    Nature 2022
    Physics Letters B 1988
    throw off the odds 2016
    SciPost Physics 2021
    See the above science papers for instructive material with images.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:34 pm on December 27, 2022 Permalink | Reply
    Tags: "A Deepening Crisis Forces Physicists to Rethink Structure of Nature’s Laws", , A candidate for the fundamental theory of gravity and everything else string theory holds that all particles are-close up-little vibrating strings., , Cosmologists observe that space’s expansion is accelerating only slowly indicating that the cosmological constant is small., , Crisis in Particle Physics forces a rethink of what is natural., For several years the particle physicists who study nature’s fundamental building blocks have been in a textbook "Kuhnian" crisis., If space were infused with a Planckian density of energy the universe would have ripped itself apart moments after the Big Bang. But this hasn’t happened., In "The Structure of Scientific Revolutions" the philosopher of science Thomas Kuhn observed that scientists spend long periods taking small steps., Many particle physicists migrated to other research areas., Modular invariance, , , Quanta Magazine, , The cosmological constant problem seemed potentially related to mysterious quantum aspects of gravity since the energy of space is detected solely through its gravitational effect., The crisis became undeniable in 2016 when the Large Hadron Collider in Geneva still hadn’t conjured up any of the new elementary particles that theorists had been expecting for decades., The energy of space is detected solely through its gravitational effect., The fabric of space even when devoid of matter seems as if it should sizzle with energy., The search for supersymmetric partner particles began at the Large Electron-Positron Collider in the 1990s., When particle physicists add up all the presumptive contributions to the energy of space they find that as with the Higgs mass injections of energy coming from Planck-scale phenomena should blow it up, Within a few years physicists found a tidy solution: supersymmetry-a hypothesized doubling of nature’s elementary particles.   

    From “Quanta Magazine” : “A Deepening Crisis Forces Physicists to Rethink Structure of Nature’s Laws” 

    From “Quanta Magazine”

    3.1.22 [Just now in social media.]
    Natalie Wolchover

    For three decades, researchers hunted in vain for new elementary particles that would have explained why nature looks the way it does. As physicists confront that failure, they’re reexamining a longstanding assumption: that big stuff consists of smaller stuff.

    1
    Emily Buder/Quanta Magazine; Kristina Armitage and Rui Braz for Quanta Magazine.

    Crisis in Particle Physics forces a rethink of what is natural.

    In The Structure of Scientific Revolutions, the philosopher of science Thomas Kuhn observed that scientists spend long periods taking small steps. They pose and solve puzzles while collectively interpreting all data within a fixed worldview or theoretical framework, which Kuhn called a paradigm. Sooner or later, though, facts crop up that clash with the reigning paradigm. Crisis ensues. The scientists wring their hands, reexamine their assumptions and eventually make a revolutionary shift to a new paradigm, a radically different and truer understanding of nature. Then incremental progress resumes.

    For several years the particle physicists who study nature’s fundamental building blocks have been in a textbook Kuhnian crisis.

    The crisis became undeniable in 2016, when, despite a major upgrade, the Large Hadron Collider in Geneva still hadn’t conjured up any of the new elementary particles that theorists had been expecting for decades. The swarm of additional particles would have solved a major puzzle about an already known one, the famed Higgs boson.


    _______________________________________________
    Higgs


    _______________________________________________

    The hierarchy problem, as the puzzle is called, asks why the Higgs boson is so lightweight — a hundred million billion times less massive than the highest energy scales that exist in nature. The Higgs mass seems unnaturally dialed down relative to these higher energies, as if huge numbers in the underlying equation that determines its value all miraculously cancel out.

    The extra particles would have explained the tiny Higgs mass, restoring what physicists call “naturalness” to their equations. But after the LHC became the third and biggest collider to search in vain for them, it seemed that the very logic about what’s natural in nature might be wrong. “We are confronted with the need to reconsider the guiding principles that have been used for decades to address the most fundamental questions about the physical world,” Gian Giudice [below], head of the theory division at CERN, the lab that houses the LHC, wrote in 2017.

    At first, the community despaired. “You could feel the pessimism,” said Isabel Garcia Garcia, a particle theorist at the Kavli Institute for Theoretical Physics at The University of California-Santa Barbara, who was a graduate student at the time. Not only had the $10 billion proton smasher failed to answer a 40-year-old question, but the very beliefs and strategies that had long guided particle physics could no longer be trusted. People wondered more loudly than before whether the universe is simply unnatural, the product of fine-tuned mathematical cancellations. Perhaps there’s a multiverse of universes, all with randomly dialed Higgs masses and other parameters, and we find ourselves here only because our universe’s peculiar properties foster the formation of atoms, stars and planets and therefore life. This “anthropic argument,” though possibly right, is frustratingly untestable.

    Many particle physicists migrated to other research areas, “where the puzzle hasn’t gotten as hard as the hierarchy problem,” said Nathaniel Craig, a theoretical physicist at The University of California-Santa Barbara.

    Some of those who remained set to work scrutinizing decades-old assumptions. They started thinking anew about the striking features of nature that seem unnaturally fine-tuned — both the Higgs boson’s small mass, and a seemingly unrelated case, one that concerns the unnaturally low energy of space itself. “The really fundamental problems are problems of naturalness,” Garcia Garcia said.

    Their introspection is bearing fruit. Researchers are increasingly zeroing in on what they see as a weakness in the conventional reasoning about naturalness. It rests on a seemingly benign assumption, one that has been baked into scientific outlooks since ancient Greece: Big stuff consists of smaller, more fundamental stuff — an idea known as reductionism. “The reductionist paradigm … is hard-wired into the naturalness problems,” said Nima Arkani-Hamed, a theorist at the Institute for Advanced Study.

    Now a growing number of particle physicists think naturalness problems and the null results at the Large Hadron Collider might be tied to reductionism’s breakdown. “Could it be that this changes the rules of the game?” Arkani-Hamed said. In a slew of recent papers, researchers have thrown reductionism to the wind. They’re exploring novel ways in which big and small distance scales might conspire, producing values of parameters that look unnaturally fine-tuned from a reductionist perspective.

    “Some people call it a crisis. That has a pessimistic vibe associated to it and I don’t feel that way about it,” said Garcia Garcia. “It’s a time where I feel like we are on to something profound.”

    What Naturalness Is

    The Large Hadron Collider did make one critical discovery: In 2012, it finally struck upon the Higgs boson [above], the keystone of the 50-year-old set of equations known as the Standard Model [above] of Particle Physics, which describes the 17 known elementary particles.

    The discovery of the Higgs confirmed a riveting story that’s written in the Standard Model equations. Moments after the Big Bang, an entity that permeates space called the Higgs field suddenly became infused with energy. This Higgs field crackles with Higgs bosons, particles that possess mass because of the field’s energy. As electrons, quarks and other particles move through space, they interact with Higgs bosons, and in this way they acquire mass as well.

    After the Standard Model was completed in 1975, its architects almost immediately noticed a problem [Physics Letters B (below)].

    When the Higgs gives other particles mass, they give it right back; the particle masses shake out together. Physicists can write an equation for the Higgs boson’s mass that includes terms from each particle it interacts with. All the massive Standard Model particles contribute terms to the equation, but these aren’t the only contributions. The Higgs should also mathematically mingle with heavier particles, up to and including phenomena at the Planck scale, an energy level associated with the quantum nature of gravity, black holes and the Big Bang. Planck-scale phenomena should contribute terms to the Higgs mass that are huge — roughly a hundred million billion times larger than the actual Higgs mass. Naively, you would expect the Higgs boson to be as heavy as they are, thereby beefing up other elementary particles as well. Particles would be too heavy to form atoms, and the universe would be empty.

    For the Higgs to depend on enormous energies yet end up so light, you have to assume that some of the Planckian contributions to its mass are negative while others are positive, and that they’re all dialed to just the right amounts to exactly cancel out. Unless there’s some reason for this cancellation, it seems ludicrous — about as unlikely as air currents and table vibrations counteracting each other to keep a pencil balanced on its tip. This kind of fine-tuned cancellation physicists deem “unnatural.”
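
    To put a number on how delicate that balancing act is, here is a toy illustration (not a real quantum field theory calculation) comparing the observed Higgs mass with corrections of the size the Planck scale would naively supply:

```python
# Toy measure of the fine-tuning in the Higgs mass (order-of-magnitude only).
planck_energy = 1.0e19    # GeV, the rough Planck scale
higgs_mass = 125.0        # GeV, the measured Higgs boson mass

# Corrections enter the Higgs mass-squared at the cutoff scale squared,
# so opposite-sign terms must cancel to this precision:
required = higgs_mass ** 2 / planck_energy ** 2
print(f"cancellation required to ~1 part in {1 / required:.1e}")
```

    A cancellation to roughly one part in 10^34 is the pencil balanced on its tip.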

    Within a few years, physicists found a tidy solution: supersymmetry, a hypothesized doubling of nature’s elementary particles.

    Supersymmetry says that every boson (one of two types of particle) has a partner fermion (the other type), and vice versa. Bosons and fermions contribute positive and negative terms to the Higgs mass, respectively. So if these terms always come in pairs, they’ll always cancel.

    The search for supersymmetric partner particles began at the Large Electron-Positron Collider in the 1990s.

    Researchers assumed the particles were just a tad heavier than their Standard Model partners, requiring more raw energy to materialize, so they accelerated particles to nearly light speed, smashed them together, and looked for heavy apparitions among the debris.

    Meanwhile, another naturalness problem surfaced.

    3
    Merrill Sherman for Quanta Magazine.

    The fabric of space, even when devoid of matter, seems as if it should sizzle with energy — the net activity of all the quantum fields coursing through it. When particle physicists add up all the presumptive contributions to the energy of space, they find that, as with the Higgs mass, injections of energy coming from Planck-scale phenomena should blow it up. Albert Einstein showed that the energy of space, which he dubbed the cosmological constant, has a gravitationally repulsive effect; it causes space to expand faster and faster. If space were infused with a Planckian density of energy, the universe would have ripped itself apart moments after the Big Bang. But this hasn’t happened.

    Instead, cosmologists observe that space’s expansion is accelerating only slowly, indicating that the cosmological constant is small. Measurements in 1998 pegged its value as a million million million million million times lower than the Planck energy. Again, it seems all those enormous energy injections and extractions in the equation for the cosmological constant perfectly cancel out, leaving space eerily placid.

    Both of these big naturalness problems were evident by the late 1970s, but for decades, physicists treated them as unrelated. “This was in the phase where people were schizophrenic about this,” said Arkani-Hamed. The cosmological constant problem seemed potentially related to mysterious, quantum aspects of gravity, since the energy of space is detected solely through its gravitational effect. The hierarchy problem looked more like a “dirty-little-details problem,” Arkani-Hamed said — the kind of issue that, like two or three other problems of the past, would ultimately reveal a few missing puzzle pieces. “The sickness of the Higgs,” as Giudice called its unnatural lightness, was nothing a few supersymmetry particles at the LHC couldn’t cure.

    In hindsight, the two naturalness problems seem more like symptoms of a deeper issue.

    “It’s useful to think about how these problems come about,” said Garcia Garcia in a Zoom call from Santa Barbara this winter. “The hierarchy problem and the cosmological constant problem are problems that arise in part because of the tools we’re using to try to answer questions — the way we’re trying to understand certain features of our universe.”

    Reductionism Made Precise

    Physicists come by their funny way of tallying contributions to the Higgs mass and cosmological constant honestly. The calculation method reflects the strange nesting-doll structure of the natural world.

    Zoom in on something, and you’ll discover that it’s actually a lot of smaller things. What looks from afar like a galaxy is really a collection of stars; each star is many atoms; an atom further dissolves into hierarchical layers of subatomic parts. Moreover, as you zoom in to shorter distance scales, you see heavier and more energetic elementary particles and phenomena — a profound link between high energies and short distances that explains why a high-energy particle collider acts like a microscope on the universe. The connection between high energies and short distances has many avatars throughout physics. For instance, quantum mechanics says every particle is also a wave; the more massive the particle, the shorter its associated wavelength. Another way to think about it is that energy has to cram together more densely to form smaller objects. Physicists refer to low-energy, long-distance physics as “the IR,” and high-energy, short-distance physics as “the UV,” drawing an analogy with infrared and ultraviolet wavelengths of light.
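
    A quick numerical illustration of that energy-distance link, using nothing but textbook particle masses and the standard reduced Compton wavelength, hbar/(mc):

```python
# Heavier particles correspond to shorter distances (textbook constants).
HBAR_C_GEV_FM = 0.1973    # hbar * c, in GeV * femtometers

def reduced_compton_wavelength_fm(mass_gev):
    return HBAR_C_GEV_FM / mass_gev

for name, mass in [("electron", 0.000511), ("proton", 0.938), ("Higgs boson", 125.0)]:
    print(f"{name:12s} {reduced_compton_wavelength_fm(mass):.2e} fm")
```

    The heavier the particle, the shorter the distance scale it is tied to, which is why each jump to a higher-energy collider opens a window on smaller structures.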

    In the 1960s and ’70s, the particle physics titans Kenneth Wilson and Steven Weinberg put their finger on what’s so remarkable about nature’s hierarchical structure: It allows us to describe goings-on at some big, IR scale of interest without knowing what’s “really” happening at more microscopic, UV scales. You can, for instance, model water with a hydrodynamic equation that treats it as a smooth fluid, glossing over the complicated dynamics of its H2O molecules. The hydrodynamic equation includes a term representing water’s viscosity — a single number, which can be measured at IR scales, that summarizes all those molecular interactions happening in the UV. Physicists say IR and UV scales “decouple,” which lets them effectively describe aspects of the world without knowing what’s going on deep down at the Planck scale — the ultimate UV scale, corresponding to a billionth of a trillionth of a trillionth of a centimeter, or 10 billion billion gigaelectron-volts (GeV) of energy, where the very fabric of space-time probably dissolves into something else.

    “We can do physics because we can remain ignorant about what happens at short distances,” said Riccardo Rattazzi, a theoretical physicist at the Swiss Federal Institute of Technology Lausanne.

    Wilson and Weinberg separately developed pieces of the framework that particle physicists use to model different levels of our nesting-doll world: effective field theory. It’s in the context of EFT that naturalness problems arise.

    An EFT models a system — a bundle of protons and neutrons, say — over a certain range of scales. Zoom in on protons and neutrons for a while and they will keep looking like protons and neutrons; you can describe their dynamics over that range with “chiral effective field theory.” But then an EFT will reach its “UV cutoff,” a short-distance, high-energy scale at which the EFT stops being an effective description of the system. At a cutoff of 1 GeV, for example, chiral effective field theory stops working, because protons and neutrons stop behaving like single particles and instead act like trios of quarks. A different theory kicks in.

    Importantly, an EFT breaks down at its UV cutoff for a reason. The cutoff is where new, higher-energy particles or phenomena that aren’t included in that theory must be found.

    In its range of operation, an EFT accounts for UV physics below the cutoff by adding “corrections” representing these unknown effects. It’s just like how a fluid equation has a viscosity term to capture the net effect of short-distance molecular collisions. Physicists don’t need to know what actual physics lies at the cutoff to write these corrections; they just use the cutoff scale as a ballpark estimate of the size of the effects.

    Typically when you’re calculating something at an IR scale of interest, the UV corrections are small, proportional to the (relatively smaller) length scale associated with the cutoff. The situation changes, though, when you’re using EFT to calculate a parameter like the Higgs mass or the cosmological constant — something that has units of mass or energy. Then the UV corrections to the parameter are big, because (to have the right units) the corrections are proportional to the energy — rather than the length — associated with the cutoff. And while the length is small, the energy is high. Such parameters are said to be “UV-sensitive.”

    The concept of naturalness emerged in the 1970s along with effective field theory itself, as a strategy for identifying where an EFT must cut off, and where, therefore, new physics must lie. The logic goes like this: If a mass or energy parameter has a high cutoff, its value should naturally be large, pushed higher by all the UV corrections. Therefore, if the parameter is small, the cutoff energy must be low.

    Some commentators have dismissed naturalness as a mere aesthetic preference. But others point to when the strategy revealed precise, hidden truths about nature. “The logic works,” said Craig, a leader of recent efforts to rethink that logic. Naturalness problems “have always been a signpost of where the picture changes and new things should appear.”

    What Naturalness Can Do

    In 1974, a few years before the term “naturalness” was even coined, Mary K. Gaillard and Ben Lee made spectacular use of the strategy to predict the mass [Physical Review D (below)] of a then-hypothetical particle called the charm quark. “The success of her prediction and its relevance to the hierarchy problem are wildly underappreciated in our field,” Craig said.

    That summer of ’74, Gaillard and Lee were puzzling over the difference between the masses of two kaon particles — composites of quarks. The measured difference was small. But when they tried to calculate this mass difference with an EFT equation, they saw that its value was at risk of blowing up. Because the kaon mass difference has units of mass, it’s UV-sensitive, receiving high-energy corrections coming from the unknown physics at the cutoff. The theory’s cutoff wasn’t known, but physicists at the time reasoned that it couldn’t be very high, or else the resulting kaon mass difference would seem curiously small relative to the corrections — unnatural, as physicists now say. Gaillard and Lee inferred their EFT’s low cutoff scale, the place where new physics should reveal itself. They argued that a recently proposed quark called the charm quark must be found with a mass of no more than 1.5 GeV.

    The charm quark showed up three months later, weighing 1.2 GeV. The discovery ushered in a renaissance of understanding known as the “November revolution” that quickly led to the completion of the Standard Model. In a recent video call, Gaillard, now 82, recalled that she was in Europe visiting CERN when the news broke. Lee sent her a telegram: CHARM HAS BEEN FOUND.

    Such triumphs led many physicists to feel certain that the hierarchy problem, too, should herald new particles not much heavier than those of the Standard Model. If the Standard Model’s cutoff were up near the Planck scale (where researchers know for sure that the Standard Model fails, since it doesn’t account for quantum gravity), then the UV corrections to the Higgs mass would be huge — making its lightness unnatural. A cutoff not far above the mass of the Higgs boson itself would make the Higgs about as heavy as the corrections coming from the cutoff, and everything would look natural. “That option has been the starting point of the work that has been done in trying to address the hierarchy problem in the last 40 years,” said Garcia Garcia. “People came up with great ideas, like supersymmetry, compositeness [of the Higgs], that we haven’t seen realized in nature.”

    Garcia Garcia was a few years into her particle physics doctorate at The University of Oxford in 2016 when it became clear to her that a reckoning was in order. “That’s when I became more interested in this missing component that we don’t normally incorporate when we discuss these problems, which is gravity — this realization that there’s more to quantum gravity than we can tell from effective field theory.”

    Gravity Mixes Everything Up

    Theorists learned in the 1980s that gravity doesn’t play by the usual reductionist rules. If you bash two particles together hard enough, their energies become so concentrated at the collision point that they’ll form a black hole — a region of such extreme gravity that nothing can escape. Bash particles together even harder, and they’ll form a bigger black hole. More energy no longer lets you see shorter distances — quite the opposite. The harder you bash, the bigger the resulting invisible region is. Black holes and the quantum gravity theory that describes their interiors completely reverse the usual relationship between high energies and short distances. “Gravity is anti-reductionist,” said Sergei Dubovsky, a physicist at New York University.

    Quantum gravity seems to toy with nature’s architecture, making a mockery of the neat system of nested scales that EFT-wielding physicists have grown accustomed to. Craig, like Garcia Garcia, began to think about the implications of gravity soon after the LHC’s search came up empty. In trying to brainstorm new solutions to the hierarchy problem, Craig reread a 2008 essay about naturalness by Giudice, the CERN theorist. He started wondering what Giudice meant when he wrote that the solution to the cosmological constant problem might involve “some complicated interplay between infrared and ultraviolet effects.” If the IR and the UV have complicated interplay, that would defy the usual decoupling that allows effective field theory to work. “I just Googled things like ‘UV-IR mixing,’” Craig said, which led him to some intriguing papers from 1999, “and off I went.”

    UV-IR mixing potentially resolves naturalness problems by breaking EFT’s reductionist scheme. In EFT, naturalness problems arise when quantities like the Higgs mass and the cosmological constant are UV-sensitive, yet somehow don’t blow up, as if there’s a conspiracy between all the UV physics that nullifies their effect on the IR. “In the logic of effective field theory, we discard that possibility,” Craig explained. Reductionism tells us that IR physics emerges from UV physics — that water’s viscosity comes from its molecular dynamics, protons get their properties from their inner quarks, and explanations reveal themselves as you zoom in — never the reverse. The UV isn’t influenced or explained by the IR, “so [UV effects] can’t have a conspiracy to make things work out for the Higgs at a very different scale.”

    The question Craig now asks is: “Could that logic of effective field theory break down?” Perhaps explanations really can flow both ways between the UV and the IR. “That’s not totally pie in the sky, because we know that gravity does that,” he said. “Gravity violates the normal EFT reasoning because it mixes physics at all length scales — short distances, long distances. Because it does that, it gives you this way out.”

    How UV-IR Mixing Might Save Naturalness

    Several new studies of UV-IR mixing and how it might solve naturalness problems refer back to two papers that appeared in 1999. “There is a growth of interest in these more exotic, non-EFT-like solutions to these problems,” said Patrick Draper, a professor at the University of Illinois-Urbana-Champaign whose recent work picks up where one of the 1999 papers left off.

    Draper and his colleagues study the CKN bound [Physical Review Letters (below)], named for the authors of the ’99 paper, Andrew Cohen, David B. Kaplan and Ann Nelson. The authors thought about how, if you put particles in a box and heat it up, you can only increase the energy of the particles so much before the box collapses into a black hole. They calculated that the number of high-energy particle states you can fit in the box before it collapses is proportional to the box’s surface area raised to the three-fourths power, not the box’s volume as you might think. They realized that this represented a strange UV-IR relationship. The size of the box, which sets the IR scale, severely limits the number of high-energy particle states within the box — the UV scale.

    They then realized that if their same bound applies to our entire universe, it resolves the cosmological constant problem. In this scenario, the observable universe is like a very large box. And the number of high-energy particle states it can contain is proportional to the observable universe’s surface area to the three-fourths power, not the universe’s (much larger) volume.

    That means the usual EFT calculation of the cosmological constant is too naive. That calculation tells the story that high-energy phenomena should appear when you zoom in on the fabric of space, and this should blow up the energy of space. But the CKN bound implies that there may be far, far less high-energy activity than the EFT calculation assumes — meaning precious few high-energy states available for particles to occupy. Cohen, Kaplan and Nelson did a simple calculation showing that, for a box the size of our universe, their bound predicts more or less exactly the tiny value for the cosmological constant that’s observed.

    Their calculation implies that big and small scales might correlate with each other in a way that becomes apparent when you look at an IR property of the whole universe, such as the cosmological constant.

    Draper and Nikita Blinov confirmed in another crude calculation last year that the CKN bound predicts the observed cosmological constant; they also showed [below] that it does so without ruining the many successes of EFT in smaller-scale experiments.

    The CKN bound doesn’t tell you why the UV and IR are correlated — why, that is, the size of the box (the IR) severely limits the number of high-energy states within the box (the UV). For that, you probably need to know quantum gravity.

    Other researchers have looked for answers in a specific theory of quantum gravity: string theory. Last summer, the string theorists Steven Abel and Keith Dienes showed how UV-IR mixing in string theory might address both the hierarchy and cosmological constant problems.

    A candidate for the fundamental theory of gravity and everything else, string theory holds that all particles are, close up, little vibrating strings. Standard Model particles like photons and electrons are low-energy vibration modes of the fundamental string. But the string can wiggle more energetically as well, giving rise to an infinite spectrum of string states with ever-higher energies. The hierarchy problem, in this context, asks why corrections from these string states don’t inflate the Higgs, if there’s nothing like supersymmetry to protect it.


    Video: The Standard Model of particle physics is the most successful scientific theory of all time. In this explainer, Cambridge University physicist David Tong recreates the model, piece by piece, to provide some intuition for how the fundamental building blocks of our universe fit together. Credit: Emily Buder/Quanta Magazine; Adrian Vasquez de Velasco, Kristina Armitage and Rui Braz for Quanta Magazine

    Dienes and Abel calculated that, because of a different symmetry of string theory called modular invariance, corrections from string states at all energies in the infinite spectrum from IR to UV will be correlated in just the right way to cancel out, keeping both the Higgs mass and the cosmological constant small. The researchers noted that this conspiracy between low- and high-energy string states doesn’t explain why the Higgs mass and the Planck energy are so widely separated to begin with, only that such a separation is stable. Still, in Craig’s opinion, “it’s a really good idea.”

    The new models represent a growing grab bag of UV-IR mixing ideas. Craig’s angle of attack traces back to the other 1999 paper, by the prominent theorist Nathan Seiberg of the Institute for Advanced Study and two co-authors. They studied situations where there’s a background magnetic field filling space. To get the gist of how UV-IR mixing arises here, imagine a pair of oppositely charged particles attached by a spring and flying through space, perpendicular to the magnetic field. As you crank up the field’s energy, the charged particles accelerate apart, stretching the spring. In this toy scenario, higher energies correspond to longer distances.

    Seiberg and company found that the UV corrections in this situation have peculiar features that illustrate how the reductionist arrow can be spun round, so that the IR affects what happens in the UV. The model isn’t realistic, because the real universe doesn’t have a magnetic field imposing a background directionality. Still, Craig has been exploring whether anything like it could work as a solution to the hierarchy problem.

    Craig, Garcia Garcia and Seth Koren have also jointly studied how an argument about quantum gravity called the weak gravity conjecture, if true, might impose consistency conditions that naturally require a huge separation between the Higgs mass and the Planck scale.

    Dubovsky, at NYU, has mulled over these issues since at least 2013, when it was already clear that supersymmetry particles were very tardy to the LHC party. That year, he and two collaborators discovered a new kind of quantum gravity model that solves the hierarchy problem; in the model, the reductionist arrow points to both the UV and the IR from an intermediate scale. Intriguing as this was, the model only worked in two-dimensional space, and Dubovsky had no clue how to generalize it. He turned to other problems. Then last year, he encountered UV-IR mixing again: He found that a naturalness problem that arises in studies of colliding black holes is resolved by a “hidden” symmetry that links low- and high-frequency deformations of the shape of the black holes.

    Like other researchers, Dubovsky doesn’t seem to think any of the specific models discovered so far have the obvious makings of a Kuhnian revolution. Some think the whole UV-IR mixing concept lacks promise. “There is currently no sign of a breakdown of EFT,” said David E. Kaplan, a theoretical physicist at Johns Hopkins University (no relation to the author of the CKN paper). “I think there is no there there.” To convince everyone, the idea will need experimental evidence, but so far, the existing UV-IR mixing models are woefully short on testable predictions; they typically aim to explain why we haven’t seen new particles beyond the Standard Model, rather than predicting that we should. But there’s always hope of future predictions and discoveries in cosmology, if not from colliders.

    Taken together, the new UV-IR mixing models illustrate the myopia of the old paradigm — one based solely on reductionism and effective field theory — and that may be a start.

    “Just the fact that you lose reductionism when you go to the Planck scale, so that gravity is anti-reductionist,” Dubovsky said, “I think it would be, in some sense, unfortunate if this fact doesn’t have deep implications for things which we observe.”

    Science papers presented in chronological order:
    Physical Review D 1974
    Physics Letters B 1979
    Physical Review Letters 1999
    Noncommutative Perturbative Dynamics 1999
    2008 essay about naturalness by Giudice 2008
    Natural Tuning: Towards A Proof of Concept 2013
    Gian Giudice 2017
    The Weak Scale from Weak Gravity 2019
    whether anything like it could work 2019
    Densities of States and the CKN Bound 2021
    Physical Review Letters 2021
    How UV-IR mixing in string theory 2021

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     