Tagged: Quanta Magazine

  • richardmitnick 9:13 am on October 5, 2022
    Tags: "Molecule-Building Innovators Win Nobel Prize in Chemistry", Quanta Magazine

    From “Quanta Magazine” : “Molecule-Building Innovators Win Nobel Prize in Chemistry” 


    Yasemin Saplakoglu

    Carolyn Bertozzi, Morten Meldal and K. Barry Sharpless have been awarded the 2022 Nobel Prize in Chemistry for the development of click chemistry and bioorthogonal chemistry. Credit: (left): Roy Kaltschmidt, LBNL; Jens Christian Navarro Poulsen, University of Copenhagen; Bengt Oberger

    Carolyn Bertozzi, Morten Meldal and K. Barry Sharpless have been awarded the 2022 Nobel Prize in Chemistry for the development of click chemistry and bioorthogonal chemistry. Click chemistry revolutionized the options available to chemists for creating the molecules they desired. Bioorthogonal chemistry made it possible to monitor the chemical processes going on inside living cells without harming them.

    “It’s all about snapping molecules together,” said Johan Aqvist, chair of the Nobel Committee for Chemistry, during the announcement. Imagine, he told the audience, that you could attach small chemical buckles to a bunch of different types of molecular building blocks and then link these buckles together to produce complex molecules. That idea, put forth by Barry Sharpless of Scripps Research about 20 years ago, later became reality when he and Morten Meldal of the University of Copenhagen independently found the first perfect candidates for the job. Their buckles easily snapped together and wouldn’t link onto anything they shouldn’t.

    Then, in 2003, Carolyn Bertozzi put forth the idea of using click chemistry in biological systems without interfering with the systems themselves. Bertozzi called this “bioorthogonal” chemistry in a paper [PNAS (below)] she and her colleagues published that year. It has since become a widely adopted term in the field [National Library of Medicine].

    The ability to perform complex reactions in living systems without interfering with natural biological reactions made it possible to study diseases inside cells, or even inside complex organisms [American Chemical Society] such as zebrafish, rather than in laboratory dishes. It has already helped scientists understand an important protein processing reaction called glycosylation, helped to develop molecular imaging molecules that could detect disease in living organisms and opened up the possibility of selectively delivering drugs to particular tissues in the body [Angewandte Chemie].

    These findings have “led to a revolution in how chemists think about linking molecules together and how to do it in living cells,” Aqvist said.

    Today’s announcement marks the second time that Sharpless has won a Nobel Prize in Chemistry. In 2001, he shared the prize with William Knowles and Ryoji Noyori for their development of catalytic asymmetric synthesis.

    What is click chemistry?

    Sharpless spent much of the 1990s considering the need to find less cumbersome ways to synthesize complex molecules. His thinking culminated in a 2001 paper in Angewandte Chemie in which he and his coauthors proposed the term “click chemistry” to refer to any reaction that links together molecular building blocks in an efficient and quick manner. Shortly after the publication of the paper, Meldal and Sharpless independently discovered the first click-chemistry reaction: a highly useful one called the copper-catalyzed azide-alkyne cycloaddition.

    On one side of the reaction is an azide, a molecule that has three nitrogen atoms in a row. On the other side of the reaction is an alkyne, a molecule in which two carbon atoms are bonded together with a triple bond. By themselves, these two building blocks aren’t very reactive: Mixed together, they are slow to react and yield a mixture of products. But Meldal and Sharpless separately realized that if they added a bit of copper to the mix, the reaction accelerated dramatically and led primarily to a stable product known as a triazole.

    By strategically adding azide and alkyne “tags” to molecules, chemists can use this copper-catalyzed reaction to link them precisely into much larger molecules with specific structures.
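    The logic of this tagging scheme can be sketched as a toy model: handles that pair only with their complement, and only when the catalyst is present. The code below is a software analogy, not real chemistry; the names and data structures are invented for illustration.

```python
# Toy model of click selectivity: a software analogy, not real chemistry.
# Building blocks carry reactive "handles"; a link forms only between the
# complementary azide/alkyne pair, and only with the copper catalyst present.

def click(block_a, block_b, catalyst=None):
    """Link two building blocks if their handles form a click pair."""
    handles = {block_a["handle"], block_b["handle"]}
    if handles == {"azide", "alkyne"} and catalyst == "Cu":
        # The azide and alkyne handles fuse into a stable triazole linkage.
        return {"linked": (block_a["name"], block_b["name"]), "via": "triazole"}
    return None  # wrong handles, or no catalyst: no fast, clean reaction

sugar = {"name": "modified sugar", "handle": "azide"}
dye = {"name": "fluorescent probe", "handle": "alkyne"}
protein = {"name": "bystander protein", "handle": "amine"}

print(click(sugar, dye))            # None: without copper the reaction stalls
print(click(sugar, protein, "Cu"))  # None: the buckles ignore other groups
print(click(sugar, dye, "Cu"))      # the tagged pair snaps together
```

    The point of the sketch is the selectivity: the azide handle ignores everything except an alkyne, and nothing links until the catalyst is supplied.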

    The copper-catalyzed reaction immediately gained “enormous interest” across chemistry and related fields, said Olof Ramström of the Nobel Committee for Chemistry during the announcement. Although other click chemistry reactions have been found, “this particular reaction has almost become synonymous with the click chemistry concept and is also often called the click reaction,” Ramström said. “You can say that it’s still the crown jewel of click reactions.”

    What is bioorthogonal chemistry?

    In 2003, Bertozzi coined the term “bioorthogonal chemistry” for any kind of chemical reaction that could occur within a living system without interfering with or harming it. It’s click chemistry that can be applied to living organisms.

    The seeds for this idea sprouted in the 1990s, when Bertozzi began to study a particular glycan, or complex sugar found on the surface of cells. Conducting research on this glycan wasn’t easy with the chemical techniques available to her at the time. But after hearing another scientist give a seminar on coaxing cells to produce an unnatural sugar molecule, Bertozzi was inspired to wonder whether she could do something similar to map the glycans on cells. That’s when her work into bioorthogonal chemistry began.

    How is bioorthogonal chemistry used to study living systems?

    Bertozzi came up with a simple way to track glycans on a cell. First, she grew cells near a modified sugar that was linked with an azide. The cells took up this outside molecule and incorporated it into glycans on their surface. Then Bertozzi added to the mixture an alkyne that had a fluorescent molecule attached to it. The alkyne underwent a click reaction with the modified sugar and attached the fluorescent molecule to it. With that simple reaction, the glycans glowed green, and that allowed Bertozzi to track their movements across cell membranes under a microscope.

    Today, Bertozzi, who is now a professor at Stanford University, tracks glycans found on the surface of tumor cells. This work enabled her to discover that certain glycans protect tumor cells from the body’s immune system. Her findings have opened up avenues for cancer immunotherapy, with many researchers working to find “clickable” antibodies to target different types of tumors. Bertozzi and her team are also working on this problem and have created a new drug, now in clinical trials, that targets and destroys glycans on the surface of tumor cells.

    What are other applications for click chemistry and bioorthogonal chemistry?

    Making it possible to track the movements of molecules through and across cells is just one of many types of applications for click chemistry and bioorthogonal chemistry.

    A major advantage of the techniques is that they don’t introduce unwanted byproducts into reaction mixtures — a clean efficiency that allows scientists to carefully craft complex molecules for a variety of purposes.

    Click chemistry has made possible massive strides in drug development, DNA sequencing, the synthesis of “smart” materials and almost any other application in which chemists need to simply connect pairs of building blocks, Ramström said. Researchers can now easily add functionality to a wide range of materials, for example by clicking in chemical extensions that can conduct electricity or capture sunlight.

    Bioorthogonal reactions are used widely to investigate vital processes in cells, and those applications have had an enormous impact throughout the fields of biology and biochemistry. Researchers can probe how biomolecules interact within cells, and they can image living cells without disturbing them. In studies of disease, bioorthogonal reactions are useful for studying not just the cells of patients but also those of pathogens: The proteins in bacterial cell walls can be labeled to follow their movements through the body. Researchers are also starting to develop engineered antibodies that can click onto their tumor targets to deliver cancer-killing therapeutics more precisely.

    “These very important accomplishments and these really fantastic discoveries from our three laureates have really made an enormous impact on chemistry and on science in general,” Ramström said. “For that, it’s really been to the greatest benefit of humankind.”

    Science papers:
    National Library of Medicine
    American Chemical Society
    Angewandte Chemie
    Angewandte Chemie
    See the science papers for instructive imagery.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by The Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 9:11 am on September 28, 2022
    Tags: "A Wheel Made of ‘Odd Matter’ Spontaneously Rolls Uphill", Biorobotics, Physicists have solved a key problem of robotic locomotion by revising the usual rules of interaction between simple component parts., Quanta Magazine

    From “Quanta Magazine” Via “WIRED”: “A Wheel Made of ‘Odd Matter’ Spontaneously Rolls Uphill” 




    Ben Brubaker

    Physicists have solved a key problem of robotic locomotion by revising the usual rules of interaction between simple component parts.

    In cycling through a sequence of shapes, an odd wheel propels itself up steep and bumpy terrain. Illustration: Samuel Velasco/Quanta Magazine

    In a physics lab in Amsterdam, there’s a wheel that can spontaneously roll uphill by wiggling.

    This “odd wheel” looks simple: just six small motors linked together by plastic arms and rubber bands to form a ring about 6 inches in diameter. When the motors are powered on, it starts writhing, executing complicated squashing and stretching motions and occasionally flinging itself into the air, all the while slowly making its way up a bumpy foam ramp.

    “I find it very playful,” said Ricard Alert, a biophysicist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, who was not involved in making the wheel. “I liked it a lot.”

    The odd wheel’s unorthodox mode of travel exemplifies a recent trend: Physicists are finding ways to get useful collective behavior to spontaneously emerge in robots assembled from simple parts that obey simple rules. “I’ve been calling it robophysics,” said Daniel Goldman, a physicist at the Georgia Institute of Technology.

    The problem of locomotion—one of the most elementary behaviors of living things—has long preoccupied biologists and engineers alike. When animals encounter obstacles and rugged terrain, they instinctively take these challenges in stride, but how they do this is not so simple. Engineers have struggled to build robots that won’t collapse or lurch forward when navigating real-world environments, and they can’t possibly program a robot to anticipate all the challenges it might encounter.

    The odd wheel, developed by the physicists Corentin Coulais of the University of Amsterdam and Vincenzo Vitelli of the University of Chicago, together with their collaborators, and described in a recent preprint, embodies a very different approach to locomotion. The wheel’s uphill movement emerges from simple oscillatory motion in each of its component parts. Although these parts know nothing about the environment, the wheel as a whole automatically adjusts its wiggling motion to compensate for uneven terrain.

    Energy generated during each cyclical oscillation of the odd wheel allows it to push off against the ground and roll upward and over obstacles. (Another version of the wheel with only six motors was studied in a recent paper.) Video: Corentin Coulais.

    The physicists also created an “odd ball” that always bounces to one side and an “odd wall” that controls where it absorbs energy from an impact. The objects all stem from the same equation describing an asymmetric relationship between stretching and squashing motions that the researchers identified two years ago.

    “These are indeed behaviors you would not expect,” said Auke Ijspeert, a bioroboticist at the Swiss Federal Institute of Technology Lausanne. Coulais and Vitelli declined to comment while their latest paper is under peer review.

    In addition to guiding the design of more robust robots, the new research may prompt insights into the physics of living systems and inspire the development of novel materials.

    Odd Matter

    The odd wheel grew out of Coulais and Vitelli’s past work on the physics of “active matter”—an umbrella term for systems whose constituent parts consume energy from the environment, such as swarms of bacteria, flocks of birds and certain artificial materials. The energy supply engenders rich behavior, but it also leads to instabilities that make active matter difficult to control.

    Physicists have historically focused on systems that conserve energy, which must obey principles of reciprocity: If there’s a way for such a system to gain energy by moving from A to B, any process that takes the system from B back to A must cost an equal amount of energy. But with a constant influx of energy from within, this constraint no longer applies.

    In a 2020 paper in Nature Physics [below], Vitelli and several collaborators began to investigate active solids with nonreciprocal mechanical properties. They developed a theoretical framework in which nonreciprocity manifested in the relationships between different kinds of stretching and squashing motions. “That to me was just a beautiful mathematical framework,” said Nikta Fakhri, a biophysicist at the Massachusetts Institute of Technology.

    Suppose you squash one side of a solid, causing it to bulge outward in a perpendicular direction. You can also stretch and squash it along an axis rotated by 45 degrees, distorting it into a diamond shape. In an ordinary, passive solid, these two modes are independent; deforming the solid in one direction does not deform it along either diagonal.

    In an active solid, the researchers showed that the two modes can instead have a nonreciprocal coupling: Squashing the solid in one direction will also squash it along the axis rotated by 45 degrees, but squashing along this diagonal will stretch it, not squash it, along the original axis. Mathematically, the number describing the coupling between these two modes is positive going one way and negative going the other way. Because of the sign difference, the physicists call the phenomenon “odd elasticity.”

    In an odd elastic solid, undoing a deformation isn’t as simple as reversing the stretching and squashing motions that produced it; instead, the cycle of deformations that returns the solid to its starting configuration can leave it with some excess energy. This has striking consequences, such as enabling uphill locomotion of the odd wheel.
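    The consequence of that sign difference can be checked numerically: in a two-mode linear model, a closed loop of deformations does zero net work when the coupling is symmetric, but transfers energy proportional to the odd coupling when it is antisymmetric. This is a minimal sketch with an illustrative stiffness matrix, not the model from the paper.

```python
import math

def work_per_cycle(ka, n_steps=200_000):
    """Work input around a closed circular loop in the plane of the two
    deformation modes, for stiffness K = [[1, ka], [-ka, 1]]; a nonzero
    ka is the antisymmetric, "odd" part of the coupling."""
    work = 0.0
    for i in range(n_steps):
        t0 = 2 * math.pi * i / n_steps
        t1 = 2 * math.pi * (i + 1) / n_steps
        e1, e2 = math.cos(t0), math.sin(t0)            # current deformation
        de1, de2 = math.cos(t1) - e1, math.sin(t1) - e2
        s1 = e1 + ka * e2                              # stress response
        s2 = -ka * e1 + e2
        work += s1 * de1 + s2 * de2
    return work

passive = work_per_cycle(ka=0.0)  # symmetric coupling: zero net work
odd = work_per_cycle(ka=0.5)      # odd coupling: |work| = 2*pi*ka per cycle
print(f"passive: {passive:.5f}, odd: {odd:.5f}")
```

    Reversing the direction of the loop flips the sign of the transferred energy, which is the asymmetry that singles out a preferred sense of motion.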

    Meanwhile Coulais, an experimentalist, was studying [Nature Communications (below)] nonreciprocity in robotic active matter consisting of a chain of simple modules, each outfitted with a motor, sensor and microcontroller. With these sensing and control capabilities, Coulais could use feedback loops to program each module to respond nonreciprocally to the movements of its neighbors.

    Fig. 1: Asymmetric and unidirectionally amplified waves in a nonreciprocal mass-and-spring model.

    Fig. 2: Robotic metamaterial with nonreciprocal interactions: 10 unit cells connected by soft elastic beams, each a minimal robot with an angular sensor, a coreless DC motor and a microcontroller, communicating with its right neighbor to implement a feedback loop with parameter α.

    More instructive images are available in the science paper.

    The two physicists, former colleagues at Leiden University in the Netherlands, then teamed up to develop robotic active matter that would embody the mathematics of odd elasticity.

    Uncommon Oscillations

    Ordinary elasticity—the springiness of matter—is a bulk property that emerges from springlike interactions between matter’s microscopic constituents. Coulais and Vitelli sought to put an odd twist on the elastic interactions between robotic modules.

    In their new design, each module consisted of a motor controlling the rotation of two plastic arms, with rubber bands supplying springiness by pulling back on the arms. The researchers started with a pair of modules sharing an arm. Sensors and controllers on the modules implemented a nonreciprocal feedback loop: A clockwise turn of the first one’s motor would generate a clockwise torque on the second one’s motor, but a clockwise rotation of the second motor would induce a counterclockwise torque on the first.

    This arrangement is inherently unstable. Left undisturbed, the modules will sit still forever, but even the slightest nudge will give rise to an unending tug of war: Whichever way a motor turns, its interaction with the other motor pushes it back in the opposite direction. If the coupling between the modules is strong enough, the arms will start oscillating back and forth with increasing amplitude.

    On a 2D plot with axes representing the two motor angles, these growing oscillations will appear as an outward spiral, gaining energy on each cycle like a runner descending an Escher staircase and picking up speed with each lap. But the motors can only put out so much torque, and energy is lost to friction, so the amplitude of the oscillations eventually tops out. On the 2D plot of motor angles, the spiraling trajectory converges to a circle, then keeps retracing its path exactly. Physicists call this self-sustained, constant-amplitude oscillation a limit cycle.

    The modules’ limit-cycle oscillations represent a victory of stable, regular motion over the chaos that so often plagues complex systems. Consider the chaotic “double pendulum,” which consists of one pendulum hanging from another: Small changes in its initial conditions soon lead to totally different trajectories through space. Limit cycles are the opposite phenomenon: Different initial conditions ultimately yield the same trajectory. In the case of Coulais and Vitelli’s odd modules, regardless of which arm was initially nudged and in which direction, the system eventually exhibits the same steady-state oscillations.

    This key feature makes limit-cycle oscillations more special than, say, the familiar cyclical motion of a (single) pendulum. On a 2D plot of a pendulum’s position and velocity, its oscillations appear as orbits around a closed loop, but if you start the pendulum swinging at different speeds, it’ll trace a larger or smaller circle. Limit-cycle oscillations are much more robust: Many trajectories that start out different converge on exactly the same orbit, and if the system is nudged away from this orbit, it’ll get pulled back in.

    These limit-cycle oscillations offered the researchers a way to tame the unruly dynamics of active matter and put it to work.
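    The pull of a limit cycle is easy to demonstrate in a minimal simulation. The system below is a generic oscillator with an attracting cycle of radius 1 (a textbook normal form, not the robots' actual dynamics); trajectories started far inside and far outside the cycle both settle onto the same orbit.

```python
import math

def final_radius(x, y, t_end=30.0, dt=1e-3):
    """Euler-integrate dx/dt = x - y - x*(x^2 + y^2),
                       dy/dt = x + y - y*(x^2 + y^2),
    which has an attracting limit cycle of radius 1."""
    for _ in range(int(t_end / dt)):
        r2 = x * x + y * y
        x, y = x + dt * (x - y - x * r2), y + dt * (x + y - y * r2)
    return math.hypot(x, y)

# Trajectories started well inside and well outside the cycle both
# converge onto the same orbit.
r_small = final_radius(0.1, 0.0)
r_large = final_radius(2.0, 0.0)
print(f"final radii: {r_small:.3f}, {r_large:.3f}")
```

    A single frictionless pendulum would keep whatever amplitude it started with; here, by contrast, the starting amplitude is forgotten.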

    Behind the Wheel

    Now that Coulais and Vitelli had engineered the building blocks of odd matter, it was time to assemble them. Many modules connected in the right way would resemble the odd elastic solid Vitelli had initially envisioned. What would happen if these modules were linked together with shared arms to form a wheel?

    When the team supplied power to the motors, the loop began to oscillate, interweaving stretching and squashing with similar motions angled at 45 degrees. It switched back and forth between the two modes of self-deformation in Vitelli’s theory of odd elasticity. The limit-cycle oscillations of adjacent motors generated a limit cycle in the collective motion of the wheel as a whole. The oddness of the motors’ coupling singled out a direction for the wheel’s locomotion, much as an Escher staircase breaks the symmetry between clockwise and counterclockwise laps—it’s all downhill one way and all uphill the other way. The energy generated during each limit cycle allowed the wheel to push off against the ground and roll upward.

    Odd interactions between adjacent robotic modules can also be utilized to construct an odd wall. Courtesy of Corentin Coulais.

    It’s hard to pin down why the wheel’s uphill locomotion is so robust, precisely because its limit cycle is an emergent phenomenon, not seen when you scrutinize any individual module. Nick Gravish, a roboticist at the University of California-San Diego, suspects that the limit-cycle oscillations of each pair of motors greatly restrict the possible collective motions of the wheel. He noted that the emergence of collective motion from low-level oscillations has parallels in biology: “Animals are lots of interconnected oscillatory components that have to work together.”

    Coulais and Vitelli also explored the effects of odd couplings on collisions. They showed that an odd ball—a projectile assembled from odd modules—would always bounce off in a specific direction when launched without any spin, while an odd wall could control the direction in which it absorbed energy from a projectile. These functions could prove useful in the design of new active materials, said Denis Bartolo, a physicist at the École Normale Supérieure in Lyon, France, adding that “the next huge step to be made would be to find a way to self-assemble these machines.”


    Before the recent experiments, it wasn’t obvious that odd interactions would give rise to locomotion. Each motor responds only to its neighbors, and yet the wheel moves forward. This absence of top-down control is especially intriguing to biologists seeking to understand how swarms cooperate without designated leaders, and how primitive animals without nervous systems seek out food.

    The emergent locomotion of the odd wheel is appealing to researchers largely because the wheel’s building blocks are so simple. “You can just be lost in the complexity of living systems,” said Alert. He pointed to a famous quote from Richard Feynman: “What I cannot create, I do not understand.”

    Coulais and Vitelli developed their odd modules without mimicking any specific living system, so it’s an open question whether biology has made use of the same emergent dynamics. M. Cristina Marchetti, a theoretical physicist at the University of California-Santa Barbara, called the result “very interesting,” and said the next step to understanding its possible role in biology is to see how well the behavior persists in a noisy environment like that of a living cell.

    But whereas evolution often finds good solutions to problems, it can miss opportunities. The odd wheel might be a true novelty. Bartolo noted that, in the design of robots, machines and materials, bioinspiration has its limits: “If you tried to make a plane using beating wings, you would still be walking or swimming from Normandy to New York.”

    Science papers:
    Nature Physics
    Nature Communications

    See the full article here.



  • richardmitnick 10:46 am on September 17, 2022
    Tags: "Chaos Researchers Can Now Predict Perilous Points of No Return", Certain complex systems can undergo “tipping point” transitions suddenly changing their behavior dramatically and perhaps irreversibly., Quanta Magazine

    From “Quanta Magazine” : “Chaos Researchers Can Now Predict Perilous Points of No Return” 


    Ben Brubaker

    Scientists have struggled to predict if or when the influx of fresh water from melting ice sheets could cause a tipping point, bringing North Atlantic ocean currents to an abrupt halt. Credit: Pro_Studio/Shutterstock.

    Predicting complex systems like the weather is famously difficult. But at least the weather’s governing equations don’t change from one day to the next. In contrast, certain complex systems can undergo “tipping point” transitions, suddenly changing their behavior dramatically and perhaps irreversibly, with little warning and potentially catastrophic consequences.

    On long enough timescales, most real-world systems are like this. Consider the Gulf Stream in the North Atlantic, which transports warm equatorial water northward as part of an oceanic conveyor belt that helps regulate Earth’s climate. The equations that describe these circulating currents are slowly changing due to the influx of fresh water from melting ice sheets. So far the circulation has slowed gradually, but decades from now it may abruptly grind to a halt.

    “Suppose everything is OK now,” said Ying-Cheng Lai, a physicist at Arizona State University. “How do you tell that it’s not going to be OK in the future?”

    In a series of recent papers [below], researchers have shown that machine learning algorithms can predict tipping-point transitions in archetypal examples of such “nonstationary” systems, as well as features of their behavior after they’ve tipped. The surprisingly powerful new techniques could one day find applications in climate science, ecology, epidemiology and many other fields.

    A surge of interest in the problem began four years ago with groundbreaking results [Physical Review Letters (below)] from the group of Edward Ott, a leading chaos researcher at the University of Maryland. Ott’s team found that a type of machine learning algorithm called a recurrent neural network could predict the evolution of stationary chaotic systems (which don’t have tipping points) stunningly far into the future. The network relied only on records of the chaotic system’s past behavior — it had no information about the underlying equations.

    The network’s learning approach differed from that of deep neural networks, which feed data through a tall stack of layers of artificial neurons for tasks like speech recognition and natural language processing. All neural networks learn by adjusting the strength of the connections between their neurons in response to training data. Ott and his collaborators used a less computationally expensive training method called reservoir computing, which adjusts only a few connections in a single layer of artificial neurons. Despite its simplicity, reservoir computing seems suited to the task of predicting chaotic evolution.

    Impressive as the 2018 results were, researchers suspected that machine learning’s data-driven approach wouldn’t be able to predict tipping-point transitions in nonstationary systems or infer how these systems would behave afterward. A neural network trains on past data about an evolving system, but “what’s happening in the future is evolving by different rules,” said Ott. It’s like trying to predict the outcome of a baseball game only to find that it’s morphed into a cricket match.

    And yet, in the past two years, Ott’s group and several others have shown that reservoir computing works unexpectedly well for these systems too.

    In a 2021 paper [Physical Review Research (below)], Lai and collaborators gave their reservoir computing algorithm access to the slowly drifting value of a parameter that would eventually send a model system over a tipping point — but they provided no other information about the system’s governing equations. This situation pertains to a number of real-world scenarios: We know how the carbon dioxide concentration in the atmosphere is rising, for instance, but we don’t know all the ways that this variable will influence the climate. The team found that a neural network trained on past data could predict the value at which the system would eventually become unstable. Ott’s group published related results last year [Chaos (below)].
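    The flavor of such experiments can be conveyed with a textbook toy rather than the papers' models: a system governed by dx/dt = p + x², where the parameter p drifts slowly upward. A stable equilibrium at x = -sqrt(-p) exists only while p is negative; the simulation below tracks the state until it tips.

```python
# A minimal "nonstationary" system with a saddle-node tipping point:
# dx/dt = p + x^2, with the parameter p drifting slowly upward.
# The state tracks the stable branch x = -sqrt(-p) while p < 0, then tips.

def find_tipping(p0=-1.0, drift=1e-3, dt=0.01, threshold=5.0):
    """Return the value of p at which the state escapes past the threshold."""
    x, p = -1.0, p0  # start on the stable branch (at p = -1, x = -1)
    while x < threshold:
        x += dt * (p + x * x)
        p += dt * drift
        if p > 1.0:  # safety stop; should never be reached
            return None
    return p

p_tip = find_tipping()
print(f"state escaped when p = {p_tip:.3f} (the bifurcation is at p = 0)")
```

    The escape happens slightly after p crosses zero: with a slow drift, the system lingers near the ghost of the vanished equilibrium before blowing up, which is one reason tipping points are hard to anticipate from the state alone.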

    Surprisingly, the network performed best when trained on noisy data. Noise is ubiquitous in real-world systems, but it ordinarily hinders prediction. Here it helped, apparently by exposing the algorithm to a wider range of the system’s possible behavior. To take advantage of this counterintuitive result, Ott and his graduate student Dhruvit Patel tweaked their reservoir computing procedure to enable the neural network to recognize noise as well as the system’s average behavior. “That’s going to be important for any approach that’s trying to extrapolate” the behavior of nonstationary systems, said Michael Graham, a fluid dynamicist at the University of Wisconsin, Madison.

    Patel and Ott also considered a class of tipping points that mark an especially stark change in behavior.

    Suppose the state of a system is plotted as a point moving around in an abstract space of all its possible states. Systems that undergo regular cycles would trace out a repeating orbit in the space, while chaotic evolution would look like a tangled mess. A tipping point might cause an orbit to spiral out of control but remain in the same part of the plot, or it might cause initially chaotic motion to spill out into a larger region. In these cases a neural network may find hints of the system’s fate encoded in its past exploration of relevant regions of the state space.

    More challenging are transitions in which a system is suddenly expelled from one region and its later evolution unfolds in a distant region. “Not only are the dynamics changing, but now you’re wandering into territory you’ve never ever seen,” explained Patel. Such transitions are typically “hysteretic,” meaning they’re not easily reversed — even if, say, a slowly increasing parameter that caused the transition is nudged down again. This kind of hysteresis is common: Kill one too many top predators in an ecosystem, for instance, and the altered dynamics might cause the prey population to suddenly explode; add a predator back again and the prey population stays elevated.
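    A standard toy model of such a hysteretic transition (again illustrative, not drawn from the papers) is dx/dt = r + x - x³, which has two stable branches over a range of r. Sweeping r slowly up and then back down, the state jumps between branches at different parameter values, so the loop does not retrace itself.

```python
# dx/dt = r + x - x**3 has two stable branches for |r| < 2/(3*sqrt(3)) ~ 0.385;
# sweeping r up and then down makes the state jump at *different* r values.

def settle(x, r, dt=0.02, steps=5000):
    """Relax x toward a stable equilibrium of dx/dt = r + x - x**3."""
    for _ in range(steps):
        x += dt * (r + x - x**3)
    return x

def sweep(r_values, x0):
    """Quasi-statically sweep r, letting the state settle at each value."""
    xs, x = [], x0
    for r in r_values:
        x = settle(x, r)
        xs.append(x)
    return xs

ups = [i / 100 for i in range(-100, 101)]  # r from -1.00 up to 1.00
downs = list(reversed(ups))                # and back down again
x_up = sweep(ups, x0=-1.0)
x_down = sweep(downs, x0=x_up[-1])

def crossing(rs, xs, sign):
    """First r at which the state has switched to the branch of given sign."""
    for r, x in zip(rs, xs):
        if x * sign > 0:
            return r
    return None

r_up = crossing(ups, x_up, +1)        # jump up, near r = +0.385
r_down = crossing(downs, x_down, -1)  # jump back, near r = -0.385, not r_up
print(f"jumps at r = {r_up} (up sweep) and r = {r_down} (down sweep)")
```

    The gap between the two jump values is the hysteresis: nudging the parameter back past the point where the system tipped does not undo the transition.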

    When trained on data from a system exhibiting a hysteretic transition, Patel and Ott’s reservoir computing algorithm was able to predict an imminent tipping point, but it got the timing wrong and failed to predict the system’s subsequent behavior. The researchers then tried a hybrid approach combining machine learning and conventional knowledge-based modeling of the system. They found that the hybrid algorithm exceeded the sum of its parts: It could predict statistical properties of future behavior even when the knowledge-based model had incorrect parameter values and therefore failed on its own.

    Soon Hoe Lim, a machine learning researcher at the Nordic Institute for Theoretical Physics in Stockholm who has studied the short-term behavior of nonstationary systems [Chaos 2020 (below)], hopes the recent work will “serve as a catalyst for further studies,” including comparisons between the performance of reservoir computing and that of deep learning algorithms. If reservoir computing can hold its own against more resource-intensive methods, that would bode well for the prospect of studying tipping points in large, complex systems like ecosystems and Earth’s climate.

    “There’s a lot to do in this field,” Ott said. “It’s really wide open.”

    Science papers:
    Physical Review Letters
    Chaos 2021
    Physical Review Research 2021
    Chaos 2020

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 10:49 pm on September 8, 2022
    Tags: "A Black Hole’s Orbiting Ring of Light Could Encrypt Its Inner Secrets", Quanta Magazine

    From “Quanta Magazine” : “A Black Hole’s Orbiting Ring of Light Could Encrypt Its Inner Secrets” 

    From “Quanta Magazine”

    Thomas Lewton

    The photon ring, which glows orange in this visualization of light flowing around a black hole, contains a succession of images of the entire universe. Credit: Olena Shmahalo for Quanta Magazine; Source: Jeremy Schnittman/ NASA’s Goddard Space Flight Center.

    When photons hurtle toward a black hole, most are either sucked into its depths, never to return, or gently deflected away. A rare few, however, skirt the hole, making a series of abrupt U-turns. Some of these photons keep circling the black hole practically forever.

    Described by astrophysicists as a “cosmic movie camera” and an “infinite light trap,” the resulting ring of orbiting photons is among the weirdest phenomena in nature. If you detect the photons, “you’re going to see every object in the universe infinitely many times,” said Sam Gralla, a physicist at the University of Arizona.

    But unlike the iconic event horizon of a black hole — the boundary within which gravity is so strong that nothing can escape — the photon ring, which orbits the hole farther away, has never received much attention from theorists. It makes sense that researchers have been preoccupied with the event horizon, since it marks the edge of their knowledge about the universe. Throughout most of the cosmos, gravity follows the curves in space and time described by Albert Einstein’s General Theory of Relativity. But spacetime warps so much inside black holes that general relativity breaks down there. Theorists seeking a truer, quantum description of gravity have therefore looked to the horizon for answers.

    “I had taken the view that the event horizon was what we needed to understand,” said Andrew Strominger, a leading black hole and quantum gravity theorist at Harvard University. “And I thought of the photon ring as some sort of technical, complicated thing which didn’t have any deep significance.”

    Now Strominger is making his own U-turn and trying to convince other theorists to join him. “We’re exploring, excitedly, the possibility that the photon ring is the thing that you have to understand to unlock the secrets of Kerr black holes,” he said, referring to the kind of spinning black holes created when stars die and gravitationally collapse. (The photon ring forms concurrently.)

    In a paper posted online in May and recently accepted for publication in Classical and Quantum Gravity, Strominger and his collaborators revealed that the photon ring around a spinning black hole has an unexpected kind of symmetry — a way that it can be transformed and still stay the same. The symmetry suggests that the ring may encode information about the hole’s quantum structure. “This symmetry smells like something to do with the central problem of understanding the quantum dynamics of black holes,” he said. The discovery has led researchers to debate whether the photon ring might even be part of a black hole’s “holographic dual” — a quantum system that’s exactly equivalent to the black hole itself, and which the black hole can be thought of as emerging out of like a hologram.

    “It opens up a very interesting avenue for understanding the holography of these [black hole] geometries,” said Alex Maloney, a theorist at McGill University in Canada who was not involved in the research. “The new symmetry organizes the structure of black holes far from the event horizon, and I think that’s very exciting.”

    Photons that make a single U-turn around a black hole before flying away from it create an image of a ring, labeled n = 1 in the video. Photons that redirect twice before flying away from the hole form an image of a thinner ring within the first ring, labeled n = 2 in the video, and so on.
    Credit: Harvard-Smithsonian Center for Astrophysics.

    Much more theoretical study is needed before researchers can say for sure whether, or in what way, the photon ring encodes a black hole’s inner contents. But at the very least, theorists say the new paper has detailed a precise test for any quantum system claiming to be the black hole’s holographic dual. “It’s a target for a holographic description,” said Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey, one of the original architects of holography.

    Hiding in the Photon Ring

    Part of the excitement about the photon ring is that, unlike the event horizon, it’s actually visible. In fact, Strominger’s U-turn toward these rings happened because of a photograph: the first-ever image of a black hole.

    When the Event Horizon Telescope (EHT) unveiled it in 2019, “I cried,” he said. “It’s amazingly beautiful.”

    Elation soon spiraled into confusion. The black hole in the image had a thick ring of light around it, but physicists on the EHT team didn’t know whether this light was the product of the hole’s chaotic surrounding environment, or if it included the black hole’s photon ring. They went to Strominger and his theorist colleagues for help interpreting the image. Together, they browsed the huge databank of computer simulations that the EHT team was using to disentangle the physical processes that produce light around black holes. In these simulated images, they could see the thin, bright ring embedded in the larger, fuzzier orange doughnut of light.

    “When you look at all the simulations, you can’t miss it,” said Shahar Hadar of the University of Haifa in Israel, who collaborated with Strominger and the EHT physicists on the research while at Harvard. The formation of the photon ring seems to be a “universal effect” that happens around all black holes, Hadar said.

    Unlike the maelstrom of energetic colliding particles and fields that surrounds black holes, the theorists determined, the sharp line of the photon ring carries direct information about the black hole’s properties, including its mass and amount of spin. “It’s definitely the most beautiful and compelling way to really see the black hole,” said Strominger.

    The collaboration of astronomers, simulators and theorists found that the EHT’s actual photograph, which shows the black hole at the center of the nearby galaxy Messier 87, isn’t sharp enough to resolve the photon ring, although it isn’t far off. They argued in a 2020 paper [Science Advances (below)] that future, higher-resolution telescopes should easily see photon rings. (A new paper [The Astrophysical Journal (below)] claims to have found the ring in the EHT’s 2019 image by applying an algorithm to remove layers from the original data, but the claim has been met with skepticism.)

    Still, having stared at photon rings for so long in the simulations, Strominger and his colleagues began to wonder if their form hinted at an even deeper meaning.

    A Surprising Symmetry

    Photons that make a single U-turn around a black hole and then zip toward Earth would appear to us as a single ring of light. Photons that make two U-turns around the hole appear as a fainter, thinner subring within the first ring. And photons that make three U-turns appear as a subring within that subring, and so on, creating nested rings, each fainter and thinner than the last.

    Light from the inner subrings has made more orbits and was therefore emitted earlier than the light from the outer subrings, resulting in a series of time-delayed snapshots of the surrounding universe. “Together, the set of subrings are akin to the frames of a movie, capturing the history of the visible universe as seen from the black hole,” the collaboration wrote in the 2020 paper.

    Strominger said that when he and his collaborators looked at the EHT pictures, “we were like: ‘Hey, there’s an infinite number of copies of the universe right there at that screen? Couldn’t that be where the holographic dual lives?’”

    The researchers realized that the ring’s concentric structure is suggestive of a group of symmetries called conformal symmetry. A system that has conformal symmetry exhibits “scale invariance,” meaning it looks the same when you zoom in or out. In this case, each photon subring is an exact, demagnified copy of the previous subring. Moreover, a conformally symmetric system stays the same when translated forward or backward in time and when all spatial coordinates are inverted, shifted and then inverted again.
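
    This self-similarity can be stated quantitatively for the simplest, non-spinning (Schwarzschild) case — a standard result in the photon-ring literature, quoted here for orientation; for a spinning Kerr hole the exponent depends on the spin and the viewing angle:

```latex
% Successive subrings approach the critical curve geometrically:
% each is thinner and fainter than the last by a fixed factor.
w_{n+1} \;\approx\; e^{-\gamma}\, w_{n},
\qquad \gamma_{\text{Schw}} = \pi,
\qquad e^{\pi} \approx 23,
```

    so each subring of a non-spinning black hole is roughly 23 times thinner and dimmer than the one before it — a geometric demagnification that is precisely the scale invariance of conformal symmetry.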

    Strominger encountered conformal symmetry in the 1990s when it turned up in a special kind of five-dimensional black hole he was studying. By precisely understanding the details of this symmetry, he and Cumrun Vafa found a novel way to connect general relativity to the quantum world, at least inside these extreme kinds of black holes. They imagined cutting out the black hole and replacing its event horizon with what they called a holographic plate, a surface containing a quantum system of particles that respect conformal symmetry. They showed that the system’s properties correspond to properties of the black hole, as if the black hole is a higher-dimensional hologram of the conformal quantum system. In this way, they built a bridge between the description of a black hole according to general relativity and its quantum mechanical description.

    In 1997, Maldacena extended this same holographic principle to an entire toy universe. He discovered a “universe in a bottle,” in which a conformally symmetric quantum system living on the bottle’s surface exactly mapped onto properties of space-time and gravity in the bottle’s interior. It was as if the interior was a “universe” that projected from its lower-dimensional surface like a hologram.

    The discovery led many theorists to believe that the real universe is a hologram. The hitch is that Maldacena’s universe in a bottle differs from our own. It’s filled with a type of space-time that’s negatively curved, which gives it a surface-like outer boundary. Our universe is thought to be flat, and theorists have little idea what the holographic dual of flat space-time looks like. “We need to get back to the real world, while taking inspiration from what we learned from these hypothetical worlds,” Strominger said.

    And so the group decided to study a realistic spinning black hole sitting in flat space-time, like those photographed by the Event Horizon Telescope. “The first questions to ask are: Where does the holographic dual live? And what are the symmetries?” said Hadar.

    Searching for the Holographic Dual

    Historically, conformal symmetry has proved a trustworthy guide in the search for quantum systems that holographically map onto systems with gravity. “Saying conformal symmetry and black hole in the same sentence to a quantum gravity theorist is like waving red meat in front of a dog,” said Strominger.

    Starting from the description of spinning black holes in general relativity, called the Kerr metric, the group began to look for hints of conformal symmetry. They imagined hitting the black hole with a hammer to make it ring like a bell. These slowly fading vibrations are like the gravitational waves created when, say, two black holes collide. The black hole will ring with some resonant frequencies that depend on the shape of space-time (that is, on the Kerr metric) just as the ringing tones of a bell depend on its shape.

    Figuring out the exact pattern of vibrations is unfeasible because the Kerr metric is so complicated. So the team approximated the pattern by only considering high-frequency vibrations, which result from hitting the black hole very hard. They noticed a relationship between the pattern of waves at these high energies and the structure of the black hole’s photon rings. The pattern “turns out to be completely governed by the photon ring,” said Alex Lupsasca of the Vanderbilt Initiative for Gravity, Waves and Fluids in Tennessee, who co-authored the new paper with Strominger, Hadar and Daniel Kapec of Harvard.
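
    The correspondence the team exploited has a well-known precursor in the non-spinning case, quoted here for orientation (the new work concerns its far harder generalization to spinning, Kerr black holes): in the high-frequency, or eikonal, limit, a black hole's ringing tones are set entirely by the unstable circular photon orbit,

```latex
\omega_{\ell n} \;\approx\; \Omega_{c}\,\ell \;-\; i\left(n+\tfrac{1}{2}\right)\lambda,
\qquad \Omega_{c} = \lambda = \frac{c^{3}}{3\sqrt{3}\,GM}
\quad \text{(Schwarzschild)},
```

    where the real part — the pitch of each tone — is the orbital frequency of photons on the ring, and the imaginary part — how fast each tone fades — is the rate at which nearby photon orbits peel away from it.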

    A pivotal moment came in the summer of 2020 during the Covid-19 pandemic. Blackboards and benches were set up on the grass outside Harvard’s Jefferson physics lab, and the researchers could finally meet up in person. They worked out that, like the conformal symmetry which relates each photon ring to the next subring, the successive tones of a ringing black hole are related to each other by conformal symmetry. This relationship between the photon rings and the black hole vibrations could be a “harbinger” of holography, said Strominger.

    Another clue that the photon ring may hold special significance comes from the counterintuitive way the ring relates to the black hole’s geometry. “It’s very, very weird,” Hadar said. “As you move along different points on the photon ring, you are actually probing different radii” or depths into the black hole.

    These findings imply to Strominger that the photon ring, rather than the event horizon, is a “natural candidate” for part of the holographic plate of a spinning black hole.

    If so, there may be a new way to picture what happens to information about objects that fall into black holes — a long-standing mystery known as the black hole information paradox. Recent calculations indicate that this information is somehow preserved by the universe as a black hole slowly evaporates. Strominger now speculates that the information might be stored in the holographic plate. “Perhaps information doesn’t really fall into the black hole, but it sort of stays in a cloud around outside the black hole, which probably extends to the photon ring,” he said. “But we don’t understand how it’s coded in there, or exactly how that works.”

    A Call to Theorists

    Strominger and company’s hunch that the holographic dual lives in or around the photon ring has been met with skepticism by some quantum gravity theorists, who see it as too bold an extrapolation from the ring’s conformal symmetry. “Where the holographic dual lives is a much deeper question than: What is the symmetry?” said Daniel Harlow, a quantum gravity and black hole theorist at the Massachusetts Institute of Technology. Although he is in favor of further research on the issue, Harlow stresses that a convincing holographic duality, in this case, must show how the properties of the photon ring, such as individual photons’ orbits and frequencies, mathematically map onto the fine-grained quantum details of the black hole.

    Nevertheless, several experts said that the new research offers a useful needle that any proposed holographic dual must thread: The dual must be able to encode the unusual vibration pattern of a spinning black hole after it has been struck like a bell. “Demanding the quantum system that describes the black hole reproduces all of that complexity is an incredibly powerful constraint — and one that we’ve never tried to exploit before,” said Strominger. Eva Silverstein, a theoretical physicist at Stanford University, said, “It seems like a very nice piece of theoretical data for people to try to reproduce when attempting a holographic dual description.”

    Maldacena agreed, saying, “One would like to understand how to incorporate this into a holographic dual. So it will probably stimulate some research in that direction.”

    Maloney suspects that the newfound symmetry of the photon ring will spur interest among both theorists and observers. If hoped-for upgrades to the Event Horizon Telescope get funded, it could start to detect photon rings within a few years.

    Future measurements of these rings won’t directly test holography, though — rather, the data will allow extreme tests of general relativity near black holes. It’s up to theorists to determine with pen-and-paper calculations if the structure of the infinite light traps around black holes can mathematically encrypt the secrets within.

    Science papers:
    Classical and Quantum Gravity
    Science Advances
    The Astrophysical Journal

    See the full article here.



  • richardmitnick 4:13 pm on August 23, 2022
    Tags: "What Drives Galaxies? The Milky Way’s Black Hole May Be the Key", Quanta Magazine

    From “Quanta Magazine” : “What Drives Galaxies? The Milky Way’s Black Hole May Be the Key” 

    From “Quanta Magazine”

    Thomas Lewton

    Olena Shmahalo for Quanta Magazine

    On May 12, at nine simultaneous press conferences around the world, astrophysicists revealed the first image of the black hole at the heart of the Milky Way.

    Event Horizon Telescope Array

    The locations of the radio dishes that will be part of the Event Horizon Telescope array. Image credit: Event Horizon Telescope, via the University of Arizona.

    About the Event Horizon Telescope (EHT)

    The EHT consortium consists of 13 stakeholder institutes: The Academia Sinica Institute of Astronomy & Astrophysics [中央研究院天文及天文物理研究所](TW), The University of Arizona, The University of Chicago, The East Asian Observatory, Goethe University Frankfurt [Goethe-Universität](DE), Institut de Radioastronomie Millimétrique, Large Millimeter Telescope, The MPG Institute for Radio Astronomy [MPG Institut für Radioastronomie](DE), MIT Haystack Observatory, The National Astronomical Observatory of Japan [国立天文台](JP), The Perimeter Institute for Theoretical Physics (CA), Radboud University [Radboud Universiteit](NL) and The Center for Astrophysics | Harvard & Smithsonian.

    At first, awesome though it was, the painstakingly produced image of the ring of light around our galaxy’s central pit of darkness seemed to merely prove what experts already expected: The Milky Way’s supermassive black hole exists, it is spinning, and it obeys Albert Einstein’s General Theory of Relativity.

    And yet, on closer inspection, things don’t quite stack up.

    From the brightness of the bagel of light, researchers have estimated how quickly matter is falling onto Sgr A* — the name given to the Milky Way’s central black hole. The answer is: not quickly at all. “It’s clogged up to a little trickle,” said Priya Natarajan, a cosmologist at Yale University, comparing the galaxy to a broken shower head. Somehow only a thousandth of the matter that’s flowing into the Milky Way from the surrounding intergalactic medium makes it all the way down and into the hole. “That’s revealing a huge problem,” Natarajan said. “Where is this gas going? What is happening to the flow? It’s very clear that our understanding of black hole growth is suspect.”

    Over the past quarter century, astrophysicists have come to recognize what a tight-knit, dynamic relationship exists between many galaxies and the black holes at their centers.

    “There’s been a really huge transition in the field,” said Ramesh Narayan, a theoretical astrophysicist at Harvard University. “The surprise was that black holes are important as shapers and controllers of how galaxies evolve.”

    These giant holes — concentrations of matter so dense that gravity prevents even light from escaping — are like the engines of galaxies, but researchers are only beginning to understand how they operate. Gravity draws dust and gas inward to the galactic center, where it forms a swirling accretion disk around the supermassive black hole, heating up and turning into white-hot plasma. Then, when the black hole engulfs this matter (either in dribs and drabs or in sudden bursts), energy is spat back out into the galaxy in a feedback process. “When you grow a black hole, you are producing energy and dumping it into the surroundings more efficiently than through any other process we know of in nature,” said Eliot Quataert, a theoretical astrophysicist at Princeton University. This feedback affects star formation rates and gas flow patterns throughout the galaxy.
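
    Quataert's efficiency claim can be made quantitative with the textbook accretion-luminosity estimate (standard values, not figures from this article):

```latex
L \;\approx\; \eta\,\dot{M}c^{2},
\qquad \eta \approx 0.06 \;\text{(non-spinning)}
\;\;\to\;\; \eta \approx 0.4 \;\text{(maximal spin)},
```

    compared with an efficiency of about 0.007 for hydrogen fusion in stars — so accretion onto a black hole converts rest mass into energy roughly ten to fifty times more efficiently than the nuclear reactions that power stars.
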

    But researchers have only vague ideas about supermassive black holes’ “active” episodes, which turn them into so-called active galactic nuclei (AGNs). “What is the triggering mechanism? What is the off switch? These are the fundamental questions that we’re still trying to get at,” said Kirsten Hall of the Harvard-Smithsonian Center for Astrophysics.

    Stellar feedback, which occurs when a star explodes as a supernova, is known to have effects similar to those of AGN feedback, but on a smaller scale. These stellar engines are easily big enough to regulate small “dwarf” galaxies, whereas only the giant engines of supermassive black holes can dominate the evolution of the largest “elliptical” galaxies.

    Size-wise, the Milky Way, a typical spiral galaxy, sits in the middle. With few obvious signs of activity at its center, our galaxy was long thought to be dominated by stellar feedback. But several recent observations suggest that AGN feedback shapes it as well. By studying the details of the interplay between these feedback mechanisms in our home galaxy — and grappling with puzzles like the current dimness of SGR A* — astrophysicists hope to figure out how galaxies and black holes co-evolve in general. The Milky Way “is becoming the most powerful astrophysical laboratory,” said Natarajan. By serving as a microcosm, it “may hold the key.”

    Galactic Engines

    By the late 1990s, astronomers generally accepted the presence of black holes in galaxies’ centers. By then they could see close enough to these invisible objects to deduce their mass from the movements of stars around them. A strange correlation emerged: The more massive a galaxy is, the heavier its central black hole. “This was particularly tight, and it was totally revolutionary. Somehow the black hole is talking to the galaxy,” said Tiziana Di Matteo, an astrophysicist at Carnegie Mellon University.

    The correlation is surprising when you consider that the black hole — big as it is — is a scant fraction of the galaxy’s size. (Sgr A* weighs roughly 4 million suns, for instance, while the Milky Way measures some 1.5 trillion solar masses.) Because of this, the black hole’s gravity only pulls with any strength on the innermost region of the galaxy.
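
    The numbers are worth spelling out (the velocity dispersion below is a representative assumption, roughly right for the inner Milky Way, not a figure from the article):

```latex
\frac{M_{\rm BH}}{M_{\rm gal}}
\;\approx\; \frac{4\times10^{6}\,M_{\odot}}{1.5\times10^{12}\,M_{\odot}}
\;\approx\; 3\times10^{-6},
\qquad
r_{\rm infl} \;=\; \frac{G M_{\rm BH}}{\sigma^{2}}
\;\sim\; 2\ \text{pc}
\quad (\sigma \approx 100\ \text{km/s}),
```

    a gravitational sphere of influence of only a couple of parsecs, in a galaxy whose disk spans tens of thousands of parsecs.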

    To Martin Rees, the United Kingdom’s Astronomer Royal, AGN feedback offered a natural way to connect the relatively tiny black hole to the galaxy at large. Two decades earlier, in the 1970s, Rees correctly hypothesized that supermassive black holes power the luminous jets observed in some far-off, brightly glowing galaxies called quasars. He even proposed, along with Donald Lynden-Bell, that a black hole would explain why the Milky Way’s center glows. Could these be signs of a general phenomenon that governs the size of supermassive black holes everywhere?

    The idea was that the more matter a black hole swallows, the brighter it gets, and the increased energy and momentum blows gas outward. Eventually, the outward pressure stops gas from falling into the black hole. “That will terminate the growth. In a hand-wavy way, that was the reasoning,” said Rees. Or, in Di Matteo’s words, “the black hole eats and then swallows.” A very big galaxy puts more weight on the central black hole, making it harder to blow gas outward, and so the black hole grows bigger before it swallows.
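
    A back-of-the-envelope version of this self-regulation argument runs as follows — the standard momentum-driven estimate found in textbooks, not a reconstruction of the original models; here f_g is the galaxy's gas fraction, σ its velocity dispersion, σ_T the Thomson cross section, and m_p the proton mass. Balance the outward thrust of radiation at the Eddington luminosity against the weight of a gas shell in an isothermal galaxy:

```latex
\underbrace{\frac{L_{\rm Edd}}{c}
  = \frac{4\pi G M_{\rm BH}\, m_{p}}{\sigma_{T}}}_{\text{outward thrust of radiation}}
\;=\;
\underbrace{\frac{4 f_{g}\,\sigma^{4}}{G}}_{\text{weight of the gas shell}}
\;\;\Longrightarrow\;\;
M_{\rm BH} \;=\; \frac{f_{g}\,\sigma_{T}}{\pi G^{2} m_{p}}\;\sigma^{4}.
```

    The steep σ⁴ scaling captures the hand-wavy reasoning quantitatively: a heavier galaxy (larger σ) holds its gas more tightly, so the black hole must grow larger before its output can blow the gas away and shut off its own food supply.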

    Yet few astrophysicists were convinced that the energy of infalling matter could be ejected in such a dramatic way. “When I was doing my thesis, we were all obsessed with black holes as a point of no return — just gas going in,” said Natarajan, who helped develop the first AGN feedback models as Rees’ graduate student. “Everyone had to do it very cautiously and gingerly as it was so radical.”

    Confirmation of the feedback idea came a few years later, from computer simulations developed by Di Matteo and the astrophysicists Volker Springel and Lars Hernquist. “We wanted to reproduce the amazing zoo of galaxies that we see in the real universe,” Di Matteo said. They knew the basic picture: Galaxies start out small and dense in the early universe. Wind the clock forward and gravity smashes these dwarfs together in a blaze of spectacular mergers, forming rings, whirlpools, cigars and every shape in between. Galaxies grow in size and variety until, after enough collisions, they become big and smooth. “It ends up in a blob,” said Di Matteo. In the simulations, she and her colleagues could re-create these large featureless blobs, called elliptical galaxies, by merging spiral galaxies many times. But there was a problem.

    While spiral galaxies like the Milky Way have many young stars that glow blue, giant elliptical galaxies only contain very old stars that glow red. “They are red and dead,” said Springel, of the Max Planck Institute for Astrophysics in Garching, Germany. But every time the team ran their simulation, it spat out ellipticals that glowed blue. Whatever was switching off star formation hadn’t been captured in their computer model.

    Then, Springel said, “we had the idea to augment our galaxy mergers with supermassive black holes in the center. We let these black holes swallow gas and release energy until the whole thing flew apart, like a pressure cooker pot. Suddenly, the elliptical galaxy would stop star formation and would become red and dead.”

    “My jaw dropped,” he added. “We did not expect [the effect] to be so extreme.”

    By reproducing red-and-dead ellipticals, the simulation bolstered the black hole feedback theories of Rees and Natarajan. A black hole, despite its relatively tiny size, can talk to the galaxy as a whole through feedback. Over the last two decades, the computer models have been refined and expanded to simulate large swaths of the cosmos, and they broadly match the eclectic galaxy zoo we see around us. These simulations also show that ejected energy from black holes fills the space between galaxies with hot gas that otherwise should have already cooled and turned into stars. “People are convinced by now that supermassive black holes are very plausible engines,” said Springel. “No one has come up with a successful model without black holes.”

    Mysteries of Feedback

    Yet the computer simulations are still surprisingly blunt.

    As matter creeps inward to the accretion disk around a black hole, friction causes energy to be pushed back out; the amount of energy lost this way is something the coders put into their simulations by hand through trial and error. It’s a sign that the details are still elusive. “There’s a possibility that in some instances we’re getting the right answer for the wrong reason,” said Quataert. “Maybe we’re not capturing what is actually the most important thing about how black holes grow and how they dump energy into their surroundings.”

    The truth is that astrophysicists don’t really know how AGN feedback works. “We know how important it is. But it’s escaping us exactly what causes this feedback,” said Di Matteo. “The key, key problem is that we don’t understand feedback deeply, physically.”

    They know that some energy is emitted as radiation, which gives the centers of active galaxies their characteristic bright glow. Strong magnetic fields cause matter to fly out from the accretion disk too, either as diffuse galactic winds or in powerful narrow jets. The mechanism by which black holes are thought to launch jets, called the Blandford-Znajek process, was identified in the 1970s, but what determines the beam’s power, and how much of its energy gets absorbed by the galaxy, is “still an open unsolved problem,” said Narayan. The galactic wind, which emanates spherically from the accretion disk and so tends to interact more directly with the galaxy than the narrow jets, is even more mysterious. “The billion-dollar question is: How is the energy coupling to the gas?” said Springel.
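
    For the jets, a commonly quoted form of the Blandford-Znajek power is (the prefactor κ depends on the magnetic field geometry, so treat the normalization as approximate; Φ is the magnetic flux threading the horizon and Ω_H the hole's angular velocity):

```latex
P_{\rm BZ} \;\approx\; \frac{\kappa}{4\pi c}\,\Phi^{2}\,\Omega_{H}^{2},
\qquad \kappa \sim 0.05,
```

    so a jet's power grows with both the magnetic flux the accretion flow delivers to the hole and the hole's spin — but, as Narayan notes, how much of that power the surrounding galaxy actually absorbs remains an open problem.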

    Jets emerging from the black hole in the center of the galaxy Cygnus A create massive interstellar blobs, visible here in radio waves. Credit: NRAO/AUI/NSF.

    One sign that there’s still a problem is that the black holes in state-of-the-art cosmological simulations end up smaller [MNRAS (below)] than the observed sizes of real supermassive black holes in some systems. To switch off star formation and create red-and-dead galaxies, the simulations need black holes to eject so much energy that they choke off the inward flux of matter, so that the black holes stop growing. “The feedback in the simulations is too aggressive; it stunts the growth prematurely,” Natarajan said.

    The Milky Way exemplifies the opposite problem: Simulations typically predict that a galaxy of its size should have a black hole between three and 10 times bigger than Sagittarius A* is.

    By taking a closer look at the Milky Way and nearby galaxies, researchers hope we can begin to unravel precisely how AGN feedback works.

    Milky Way Ecosystem

    In December 2020, researchers with the eROSITA X-ray telescope reported that they had spotted a pair of bubbles stretching tens of thousands of light-years above and below the Milky Way.

    The vast bubbles of X-rays resembled equally baffling bubbles of gamma rays that, 10 years earlier, the Fermi Gamma-ray Space Telescope detected emanating from the galaxy.

    Two origin theories of the Fermi bubbles were still being hotly debated. Some astrophysicists suggested that they were a relic of a jet that shot out of Sgr A* millions of years ago. Others thought the bubbles were the accumulated energy of many stars exploding near the galactic center — a kind of stellar feedback.

    When Hsiang-Yi Karen Yang of National Tsing Hua University in Taiwan saw the image of the eROSITA X-ray bubbles, she “started jumping up and down.” It was clear to Yang that the X-rays could have a common origin with the gamma rays if both were generated by the same AGN jet. (The X-rays would come from shocked gas in the Milky Way rather than from the jet itself.) Along with co-authors Ellen Zweibel and Mateusz Ruszkowski, she set about building a computer model. The results, published in Nature Astronomy [below] this past spring, not only replicate the shape of the observed bubbles and a bright shock front, but predict that they formed over the course of 2.6 million years (expanding outward from a jet that was active for 100,000 years) — far too quickly to be explained by stellar feedback.

    The finding suggests that AGN feedback may be far more important in run-of-the-mill disk galaxies like the Milky Way than researchers used to think. The picture that’s emerging is akin to that of an ecosystem, Yang said, where AGN and stellar feedback are intertwined with the diffuse, hot gas that surrounds galaxies, called the circumgalactic medium. Different effects and flow patterns will dominate in different galaxy types and at different times.

    A case study of the Milky Way’s past and present could unveil the interplay of these processes. Europe’s Gaia space telescope, for example, has mapped the precise positions and movements of millions of the Milky Way’s stars, allowing astrophysicists to retrace the history of its mergers with smaller galaxies.

    Such merger events have been hypothesized to activate supermassive black holes by shaking matter into them, causing them to suddenly brighten and even launch jets. “There’s a big debate in the field as to whether or not mergers are important,” said Quataert. The Gaia star data suggests that the Milky Way did not undergo a merger at the time that the Fermi and eROSITA bubbles formed, disfavoring mergers as the triggers of the AGN jet.

    The Gaia spacecraft’s measurements of the positions and velocities of millions of stars and other objects in and around the Milky Way have allowed astronomers to unravel the history of the galaxy’s mergers with smaller galaxies. These mergers left traces in the form of streams of stars. Credit: S. Payne-Wardenaar / K. Malhan, MPIA.

    Alternatively, blobs of gas may just happen to collide with the black hole and activate it. It might chaotically switch between eating, belching out energy as jets and galactic winds, and pausing.

    The Event Horizon Telescope’s recent image of Sagittarius A* [above], which reveals its current trickle of infalling matter, presents a new puzzle to solve. Astrophysicists already knew that not all of the gas that is drawn into a galaxy will make it to the black hole horizon, since galactic winds push outward against this accretion flow. But the strength of the winds required to explain such an extremely tapered flow is unrealistic. “When I do simulations, I don’t see a huge wind,” said Narayan. “It’s not the kind of wind you need for a complete explanation of what’s going on.”

    Nested Simulations

    Part of the challenge in understanding how galaxies work is the huge difference between the length scales at play in stars and black holes and the scales of entire galaxies and their surroundings. When simulating a physical process on a computer, researchers pick a scale and include relevant effects at that scale. But in galaxies, big and small effects interact.

    “The black hole is truly tiny, compared to the big galaxy, and you cannot put them all in one single humongous simulation,” said Narayan. “Each regime needs information from the other guy, but doesn’t know how to make the connection.”
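    A short calculation conveys just how extreme that mismatch is. The Sgr A* mass and galaxy diameter below are standard textbook values, not figures from the article.

    ```python
    # Illustrating the scale gap Narayan describes, using standard values:
    # Sgr A* weighs about 4 million suns, and the Milky Way's stellar disk
    # spans roughly 100,000 light-years.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8          # speed of light, m/s
    M_SUN = 1.989e30     # solar mass, kg
    LY_M = 9.4607e15     # meters per light-year

    m_bh = 4.0e6 * M_SUN                 # mass of Sgr A*
    r_horizon = 2 * G * m_bh / C**2      # Schwarzschild radius, m
    d_galaxy = 100_000 * LY_M            # Milky Way diameter, m

    print(f"event horizon radius ≈ {r_horizon:.1e} m")
    print(f"galaxy diameter      ≈ {d_galaxy:.1e} m")
    print(f"scale gap: ~{d_galaxy / r_horizon:.0e}x")
    ```

    The two regimes are separated by some eleven orders of magnitude, which is why no single simulation can resolve both at once.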

    To bridge this gap, Narayan, Natarajan and colleagues are launching a project that will use nested simulations to build a coherent model of how gas flows through the Milky Way and the nearby active galaxy Messier 87. “You allow information to come from the galaxy to tell the black hole what to do, and then you allow the information from the black hole to go back and tell the galaxy what to do,” Narayan said. “It’s a loop that goes round and round and round.”
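    The loop Narayan describes can be caricatured in a few lines of code. Everything here is a toy illustration, not the group's actual model: the two "simulations" are stand-in formulas, and the function names and coupling constants are invented for the sketch.

    ```python
    # Toy sketch of a nested-simulation feedback loop (all names and
    # formulas here are hypothetical stand-ins for real simulations).

    def galaxy_scale_step(feedback_power: float) -> float:
        """Stand-in galaxy-scale model: given last cycle's feedback power,
        return the gas inflow rate delivered toward the black hole."""
        return max(0.0, 1.0 - 0.5 * feedback_power)  # feedback suppresses inflow

    def black_hole_scale_step(gas_supplied: float) -> float:
        """Stand-in black-hole-scale model: return the fraction of the
        supplied gas actually accreted (the rest is blown back out)."""
        return 0.1 * gas_supplied  # black holes are 'fussy' eaters

    feedback = 0.0
    for cycle in range(50):            # "a loop that goes round and round"
        inflow = galaxy_scale_step(feedback)
        accreted = black_hole_scale_step(inflow)
        feedback = accreted            # output informs the next pass

    print(f"converged accretion fraction ≈ {accreted:.3f}")
    ```

    The point is the structure: each scale's output becomes the other scale's input, and the loop repeats until the exchanged quantities stop changing — which is what letting the simulation "self-consistently decide how much gas should reach the black hole" means in practice.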

    The simulations should help clarify the flow pattern of the diffuse gas in and around galaxies. (Further observations of the circumgalactic medium by the James Webb Space Telescope will help as well.) “That’s a critical part of this whole ecosystem,” Quataert said. “How do you get the gas down to the black hole to drive all the energy that goes back out?”

    Crucially, in the new scheme, all inputs and outputs between simulations of different scales must be consistent, leaving fewer dials to twiddle. “If the simulation is set up properly, it will self-consistently decide how much gas should reach the black hole,” Narayan said. “We can look into it and ask: Why did it not eat all the gas? Why was it so fussy, taking so little of the available gas?” The group hopes to create a series of snapshots of the galaxies during different phases of their evolution.

    For now, much about these galactic ecosystems is still a hunch. “It’s really a new era, where people are starting to think about these overlapping scenarios,” said Yang. “I don’t have a clear answer, but I hope I will in a few years.”

    Science papers:
    Nature Astronomy



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 2:09 pm on August 19, 2022
    Tags: "Physics Duo Finds Magic in Two Dimensions", Quanta Magazine

    From “Quanta Magazine” : “Physics Duo Finds Magic in Two Dimensions” 

    From “Quanta Magazine”

    Charlie Wood

    Of his partnership with Jie Shan (left), Kin Fai Mak said, “One plus one is more than two.” Credit: Sasha Maslov and Olena Shmahalo for Quanta Magazine

    Molybdenite, even to the trained eye, looks almost identical to graphite: a lustrous silvery crystal. It acts similarly too, sloughing off flakes in a way that would make for a good pencil filling. But to an electron, the two grids of atoms form different worlds. The distinction first entered the scientific record 244 years ago. Carl Scheele, a Swedish chemist renowned for his discovery of oxygen, plunged each mineral into assorted acids and watched the lurid clouds of gas that billowed forth. Scheele, who eventually paid for this approach with his life, dying of suspected heavy metal poisoning at 43, concluded that molybdenite was a new substance. Describing it in a letter to the Royal Swedish Academy of Science in 1778, he wrote, “I refer here not to the commonly known graphite that one can acquire from the apothecary. This transition metal seems to be unknown.”

    With its tendency to flake into powdery fragments, molybdenite became a popular lubricant in the 20th century. It helped skis glide farther through the snow and smoothed the exit of bullets from rifle barrels in Vietnam.

    Today that same flakiness is fueling a physics revolution.

    The breakthroughs started with graphite and Scotch tape. Researchers discovered by chance in 2004 that they could use tape to peel off flakes of graphite just one atom thick. These crystalline sheets, each a flat array of carbon atoms, had astonishing properties that were radically different from those of the three-dimensional crystals they came from. Graphene (as its discoverers dubbed it) was a whole new category of substance — a 2D material. Its discovery transformed condensed matter physics, the branch of physics that seeks to understand the many forms and behaviors of matter. Nearly half of all physicists are condensed matter physicists; it’s the subfield that brought us computer chips, lasers, LED bulbs, MRI machines, solar panels, and all manner of modern technological marvels. After graphene’s discovery, thousands of condensed matter physicists started studying the new material, hoping it would undergird future technologies.

    The mineral molybdenite is often crushed into powdery fragments and used as an industrial lubricant. But physicists have discovered that 2D sheets of the hexagonal crystal conjure novel electron behaviors. Credit: Harold Moritz.

    Graphene’s discoverers received the Nobel Prize in Physics in 2010. That same year, two young physicists at Columbia University, Jie Shan and Kin Fai Mak, saw signs that flakes of molybdenite might be even more magical than graphene. The lesser-known mineral has properties that make it tough to study — too tough for many labs — but it captivated Shan and Mak. The tenacious duo devoted nearly a decade to wrangling 2D molybdenite (or molybdenum disulfide, as the lab-grown version of the crystal is called) and a family of closely related 2D crystals.

    Now their effort is paying off. Shan and Mak, who are now married and run a joint research group at Cornell University, have shown that 2D crystals of molybdenum disulfide and its relatives can give rise to an enormous variety of exotic quantum phenomena. “It’s a crazy playground,” said James Hone, a researcher at Columbia who supplies the Cornell lab with high-quality crystals. “You can do all of modern condensed matter physics in one material system.”

    Shan and Mak’s group has captured electrons behaving in unprecedented ways in these flat crystals. They’ve coaxed the particles to merge into a quantum fluid and freeze into an assortment of icelike structures. They’ve learned to assemble grids of gigantic artificial atoms that are now serving as test beds for fundamental theories of matter. Since opening their Cornell lab in 2018, the master electron tamers have published an eye-popping eight papers in Nature, the most prestigious journal in science, as well as a slew of further papers. Theorists say the couple is expanding the understanding of what throngs of electrons are capable of.

    Their research “is deeply impressive in many aspects,” said Philip Kim, a prominent condensed matter physicist at Harvard University. “It is, I would say, sensational.”

    Rise of 2D Materials

    A material’s attributes generally reflect what its electrons are doing. In conductors such as metals, for instance, electrons sail between atoms with ease, carrying electricity. In insulators like wood and glass, electrons stay put. Semiconductors like silicon fall in between: Their electrons can be forced to move with an influx of energy, making them ideal for switching currents on and off — the job of a transistor. Over the last 50 years, besides those three basic electron behaviors, condensed matter physicists have seen the lightweight charged particles behaving in many more exotic ways.

    One of the more dramatic surprises came in 1986, when two IBM researchers, Georg Bednorz and Alex Müller, detected [Zeitschrift für Physik B Condensed Matter (below)] a current of electrons moving through a copper oxide (“cuprate”) crystal without any resistance whatsoever.

    In 1986, Georg Bednorz (left) and Alex Müller stumbled on a new family of copper-based materials called cuprates that could superconduct in far warmer temperatures than normal metals. Courtesy of IBM Research.

    This superconductivity — the ability of electricity to flow with perfect efficiency — had been seen before, but only for well-understood reasons in materials cooled to within a few degrees of absolute zero. This time, Bednorz and Müller observed a mysterious form of the phenomenon that persisted at a record-breaking 35 kelvins (that is, 35 degrees above absolute zero). Scientists soon discovered other cuprates that superconduct above 100 kelvins. A dream was born that remains perhaps the number one goal of condensed matter physics today: finding or engineering a substance that can superconduct electricity in our hot, roughly 300-kelvin world, enabling lossless power lines, levitating vehicles and other hyper-efficient devices that would significantly reduce humanity’s energy needs.

    The key to superconductivity is to coax electrons, which normally repel one another, to pair up and form entities known as bosons. Bosons can then collectively meld into a frictionless quantum fluid. Attractive forces that create bosons, such as atomic vibrations, can normally overcome electrons’ repulsion only at cryogenic temperatures or high pressures. But the need for these extreme conditions has prevented superconductivity from finding its way into everyday devices. The discovery of cuprates raised hopes that the right atomic lattice could “glue” electrons together so firmly that they’d stay stuck even at room temperature.

    Going on 40 years after Bednorz and Müller’s finding, theorists still aren’t completely sure how the glue in cuprates works, much less how to tweak the materials to strengthen it. Thus, much research in condensed matter physics is a trial-and-error hunt for crystals that can keep their electrons paired or shepherd electrons in other wondrous ways. “Condensed matter is a branch of physics that allows for serendipities,” said Kim. Such was the 2004 discovery of 2D materials.

    Merrill Sherman/Quanta Magazine

    Andre Geim and Konstantin Novoselov, working with graphite at the University of Manchester in the United Kingdom, discovered [Nature Materials (below)] a shocking consequence of the material’s flakiness. A graphite crystal contains carbon atoms arranged into loosely bound sheets of hexagons. Theorists had long predicted that without the stabilizing influence of the stack, heat-induced vibrations would break up a one-layer sheet. But Geim and Novoselov found that they could peel off stable, atomically thin sheets with little more than Scotch tape and persistence. Graphene was the first truly flat material — a plane on which electrons can slide around but not up and down.

    Hone, the Columbia physicist, discovered that the world’s thinnest material is somehow also the strongest [Science (below)]. It was a remarkable upset for a material that theorists thought wouldn’t hang together at all.

    What most intrigued physicists about graphene was how the carbon flatland transformed electrons: Nothing could slow them down. Electrons often get tripped up by the lattice of atoms through which they move, acting heavier than their textbook mass (an insulator’s immobile electrons act as if they have infinite mass). Graphene’s flat lattice, however, let electrons whiz around at a million meters per second — only a few hundred times slower than the speed of light. At that constant, blistering speed, the electrons flew as if they had no mass at all, blessing graphene with extreme (though not super) conductivity.
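    The "few hundred times slower" figure is easy to verify; the million-meters-per-second electron speed below is the approximate value quoted in the article.

    ```python
    # Check the claim that graphene's electrons, moving at roughly a million
    # meters per second, are "a few hundred times slower" than light.
    c = 299_792_458.0     # speed of light in vacuum, m/s
    v_electron = 1.0e6    # approximate electron speed in graphene, m/s

    ratio = c / v_electron
    print(f"light outpaces graphene's electrons by a factor of ~{ratio:.0f}")
    ```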

    A whole field sprang up around the wonder material. Researchers also began to think more broadly. Could 2D flakes of other substances harbor superpowers of their own? Hone was among those who branched out. In 2009, he measured some mechanical properties of graphite’s doppelgänger, molybdenum disulfide, then passed the crystal off to two optical specialists in the Columbia lab of Tony Heinz. It was a casual move that would change the careers of everyone involved.

    The molybdenum disulfide sample landed in the hands of Jie Shan, a visiting professor early in her career, and Kin Fai Mak, a graduate student. The young duo was studying how graphene interacts with light, but they had already started daydreaming about other materials. Graphene’s speedy electrons make it a fantastic conductor, but what they wanted was a 2D semiconductor — a material whose flow of electrons they could turn on and off, and which could therefore serve as a transistor.

    Molybdenum disulfide was known to be a semiconductor. And Shan and Mak soon found out that, like graphite, it gained additional powers in 2D. When they pointed a laser at 3D crystals of “moly disulfide” (as they affectionately call it), the crystals stayed dark. But when Shan and Mak ripped off layers with Scotch tape, hit them with a laser, and examined them under a microscope, they saw the 2D sheets shining brightly.

    Research from other groups would later confirm that well-made sheets of a closely related material reflect every last photon that hits them. “That’s kind of mind-boggling,” Mak said recently, when I met him and Shan in their shared office at Cornell. “You just have a single sheet of atoms, and it can reflect 100% of the light like a perfect mirror.” They realized that this property might lead to spectacular optical devices.

    Independently, Feng Wang, a physicist at the University of California, Berkeley, made the same discovery. A 2D material that was highly reflective and a semiconductor to boot caught the community’s attention. Both groups published their findings in 2010 [Nano Letters (below)]; the papers have since received more than 16,000 citations between them. “Everybody with lasers started getting very interested in 2D materials,” Hone said.

    By identifying moly disulfide as a second 2D wonder material, the two groups had made landfall on a whole continent of 2D materials. Moly disulfide belongs to a family of substances known as transition metal dichalcogenides (TMDs), in which atoms from the metallic middle region of the periodic table, such as molybdenum, link up with pairs of chalcogen atoms, such as sulfur. Moly disulfide is the only naturally occurring TMD, but there are dozens more [Nature Reviews Materials (below)] that researchers can whip up in labs — tungsten disulfide, molybdenum ditelluride and so on. Most form weakly bound sheets, making them susceptible to the business side of a piece of tape.

    The initial wave of excitement soon ebbed, however, as researchers struggled to get TMDs to do more than shine. Wang’s group, for one, fell back on graphene after finding that they couldn’t easily attach metal electrodes to moly disulfide. “That has been the stumbling block for our group for quite a few years,” he said. “Even now we are not very good at making contact.” It seemed that the main advantage of TMDs over graphene was also their biggest weakness: To study a material’s electronic properties, researchers must often push electrons into it and measure the resistance of the resulting current. But because semiconductors are poor conductors, it’s hard to get electrons in or out.

    Mak and Shan initially felt ambivalent. “It was really unclear whether we should keep working on graphene or start working on this new material,” Mak said. “But since we found it has this nice property, we continued to do a few more experiments.”

    As they worked, the two researchers became increasingly enchanted by moly disulfide, and by each other. Initially, their contact was professional, limited largely to research-focused emails. “Fai was often asking, ‘Where is that piece of equipment? Where did you put that?’” Shan said. But eventually their relationship, incubated by long hours and catalyzed by experimental success, turned romantic. “We just saw each other too often, literally in the same lab working on the same project,” Mak said. “The project working very well also made us happy.”

    All Physics All the Time

    It would take a partnership between two devoted physicists with iron discipline to bring the troublesome TMDs to heel.

    Academics always came easily to Shan. Growing up in the 1970s in the coastal province of Zhejiang, she was a star student, excelling in math, science and language and earning a coveted spot at the University of Science and Technology of China in Hefei. There, she qualified for a selective cultural exchange program between China and the Soviet Union, and she jumped at the chance to study Russian and physics at Moscow State University. “When you’re a teen, you’re eager to explore the world,” she said. “I didn’t hesitate.”

    Right away, she saw more of the world than she had bargained for. Visa troubles delayed her arrival in Russia by a few months, and she lost her seat in the language program. The authorities found her another course, and shortly after landing in Moscow she boarded a train and traveled 5,000 kilometers east. Three days later she arrived in the city of Irkutsk in the middle of Siberia at the onset of winter. “The advice I got was, ‘Never, ever touch anything without gloves,’” lest she get stuck, she said.

    Shan kept her gloves on, learned Russian in a single semester, and came to appreciate the stark beauty of the wintry landscape. When the course ended and the snow melted, she returned to the capital to begin her physics degree, arriving in Moscow in the spring of 1990, in the midst of the breakup of the Soviet Union.

    Those were chaotic years. Shan saw tanks rolling through the streets near the university as Communists tried to regain control of the government. On another occasion, just after a final exam, fighting broke out. “We could hear gunfire, and we were told to turn off the lights in the dorm,” she said. Everything, from food to toilet paper, was rationed through a coupon system. Nevertheless, Shan felt inspired by the resilience of her professors, who continued with their research despite the turmoil. “The conditions were tough, but many of the scientists had this kind of an attitude. They truly love what they do, despite what’s going on,” she said.

    As the world order collapsed, Shan distinguished herself, publishing a theoretical optics paper that caught Heinz’s eye at Columbia. He encouraged her to apply, and she relocated to New York, where she occasionally helped other international students get their footing in a foreign country. She recruited Wang to work in Heinz’s lab, for instance, and shared experimental tips. “She taught me how to be patient,” he said, and “how to not get frustrated with the laser.”

    Most researchers take a postdoctoral position after earning their Ph.D., but Shan joined Case Western Reserve University directly as an associate professor in 2001. Several years later, on a sabbatical, she returned to Heinz’s lab at Columbia. For once, her timing was fortuitous. She started collaborating with a charming and bright-eyed graduate student in Heinz’s group, Kin Fai Mak.

    Mak had followed a different, less tumultuous path to New York City. Growing up in Hong Kong, he struggled in school, as little besides physics made sense to him. “It was the only thing I like and was actually good at, so I picked physics,” he said.

    His undergraduate research at Hong Kong University of Science and Technology stood out, and Heinz recruited him to join Columbia’s booming condensed matter physics program. There, he threw himself into research, spending almost all his waking hours in the lab except for the occasional game of intramural soccer. Andrea Young, a fellow grad student (now a professor at the University of California, Santa Barbara), shared an apartment with Mak on West 113th Street. “I was lucky if I could catch him at 2 o’clock in the morning to cook some pasta and talk about physics. It was all physics all the time,” Young said.

    But the good times didn’t last. Shortly after an excursion to the Amazon rainforest in Colombia with Young, Mak fell ill. His doctors weren’t sure what to make of his puzzling test results, and he got sicker. A lucky coincidence saved his life. Young described the situation to his father, a medical researcher, who immediately recognized the signs of aplastic anemia — an unusual blood condition that happened to be the subject of his own research. “It’s actually really rare to get this disease, first of all,” Mak said. “And even rarer to get a disease in which your roommate’s father is an expert.”

    Young’s father helped Mak enroll in experimental treatments. He spent much of his final year of graduate school in the hospital and came close to death several times. Throughout the ordeal, Mak’s ardor for physics drove him to keep working. “He was writing PRL letters from his hospital bed,” Young said, referring to the journal Physical Review Letters. “Despite all of this, he was one of the most productive students ever,” Heinz said. “It was something of a miracle.”

    Further treatments eventually helped Mak make a full recovery. Young, himself a well-known experimentalist, would later quip about his interventions, “Among friends I call it my greatest contribution to physics.”

    Into the 2D Wilderness

    Mak moved on to Cornell as a postdoctoral researcher in 2012, by which time Shan had already returned to Case Western. They pursued individual projects with graphene and other materials, but they also continued to unlock further secrets of the TMDs together.

    At Cornell, Mak learned the art of electron transport measurements — the other main way of divining the movement of electrons, besides optics. This expertise made him and Shan a double threat in a field where researchers typically specialize in one type or the other. “Whenever I meet Fai and Jie I complain, ‘It’s unfair you guys do transport,’” Kim said. “What am I supposed to do?”

    The more the duo learned about TMDs, the more intriguing the materials became. Researchers typically focus on one of two properties of electrons: their charge and spin (or intrinsic angular momentum). Controlling the flow of electric charge is the foundation of modern electronics. And flipping electrons’ spin could lead to “spintronics” devices that pack more information into smaller spaces. In 2014, Mak helped discover [Science (below)] that electrons in 2D moly disulfide can acquire a special, third property: These electrons must move with specific amounts of momentum, a controllable attribute known as “valley” that researchers speculate might spawn yet a third field of “valleytronics” technology.

    That same year, Mak and Shan identified another striking feature of TMDs. Electrons are not the only entities that move through a crystal; physicists also track “holes,” the vacancies created when electrons hop elsewhere. These holes can roam a material like real positively charged particles. The positive hole attracts a negative electron to form a fleeting partnership, known as an exciton, in the moment before the electron plugs the hole. Shan and Mak measured the attraction [Physical Review Letters (below)] between electrons and holes in 2D tungsten diselenide and found it hundreds of times stronger than in a typical 3D semiconductor. The finding hinted that excitons in TMDs could be especially robust, and that in general electrons were more likely to do all sorts of weird things.


    The couple secured positions together at Pennsylvania State University and started a lab there. Finally convinced that TMDs were worth betting their careers on, they made the materials the focus of their new group. They also got married.

    Meanwhile, Hone’s team at Columbia saw graphene’s properties get even more extreme when they placed it on top of a high-quality insulator, boron nitride. It was an early example of one of the most novel aspects of 2D materials: their stackability.

    Put one 2D material on top of another, and the layers will sit a fraction of a nanometer apart — no distance at all from the perspective of their electrons. As a result, stacked sheets effectively merge into one substance. “It’s not just two materials together,” Wang said. “You really create a new material.”

    Whereas graphene consists exclusively of carbon atoms, the diverse family of TMD lattices brings dozens of additional elements into the stacking game. Each TMD has its own intrinsic abilities. Some are magnetic; others superconduct. Researchers looked forward to mixing and matching them to fashion materials with their combined powers.

    But when Hone’s group placed moly disulfide on an insulator, the properties of the stack showed lackluster gains compared to what they had seen in graphene. Eventually they realized that they hadn’t checked the quality of the TMD crystals. When they had some colleagues stick their moly disulfide under a microscope capable of resolving individual atoms, they were stunned. Some atoms sat in the wrong place, while others had gone missing entirely. As many as 1 in 100 lattice sites had some problem, impeding the lattice’s ability to direct electrons. Graphene, in comparison, was the image of perfection, with roughly one defect per million atoms. “We finally realized that the stuff we’d been buying was complete garbage,” Hone said.

    Around 2016, he decided to go into the business of growing research-grade TMDs. He recruited a postdoc, Daniel Rhodes, with experience growing crystals by melting powders of raw materials at extremely high temperatures and then cooling them at a glacial pace. “It’s like growing rock candy from sugar in water,” Hone explained. The new process took a month, compared to a few days for commercial methods. But it produced TMD crystals hundreds to thousands of times better than the ones for sale in chemical catalogs.

    Before Shan and Mak could take advantage of Hone’s increasingly pristine crystals, they faced the unglamorous task of figuring out how to work with microscopic flakes that don’t like to accept electrons. To pump in electrons (the basis of the transport technique Mak had picked up as a postdoc), the couple obsessed over countless details: which type of metal to use for the electrode, how far from the TMD to place it, even which chemicals to use to clean the contacts. Trying out the endless ways of setting up electrodes was slow and laborious — “a time-consuming process of refining this or refining that bit by bit,” Mak said.

    They also spent years figuring out how to lift and stack the microscopic flakes, which measure just tenths of millionths of a meter across. With this ability, plus Hone’s crystals and improved electrical contacts, everything came together in 2018. The couple moved to Ithaca, New York, to take new positions at Cornell, and a cascade of pioneering results came spilling out of their lab.

    Breakthroughs at Cornell

    “Today, everything is hard to pick up for some reason,” said Zhengchao Xia, a graduate student in Mak and Shan’s group, as the dark silhouette of a boron nitride flake threatened to peel off and fall back to the silicon surface below. The Madagascar-shaped sheet clung feebly to a hunk of graphite resembling Saudi Arabia, much as paper might cling to the crackling surface of a recently rubbed balloon. The graphite, in turn, was stuck to a gooey dewdrop of plastic attached to a glass slide. Xia used a computer interface to direct a motorized stand gripping the slide. Like an arcade-goer might maneuver a claw machine with a joystick, she gingerly lifted the stack into the air at a rate of one-fifth of a millionth of a meter per mouse click, staring intently at the computer monitor to see if she had successfully nabbed the boron nitride flake.

    She had. With a few more clicks the two-layer stack came free, and Xia moved swiftly but deliberately to deposit the flakes onto a third material embedded with sprawling metal electrodes. With a few more clicks she heated the surface, melting the slide’s plastic adhesive before either of us could sneeze the microscopic device away.

    “I always have this nightmare that it just disappears,” she said.

    Zhengchao Xia, a graduate student in Mak and Shan’s group, uses a motorized positioning stage to stack layers of material into a new 2D device.

    From start to finish, it had taken Xia more than an hour to assemble the bottom half of a simple device — the equivalent of an open-faced PB&J. She showed me another stack she had recently put together and rattled off a few of the ingredients, which included the TMDs tungsten diselenide and moly ditelluride. One of dozens of microscopic sandwiches she has constructed and studied over the last year, this Dagwood of a device had a whopping 10 layers and took several hours to assemble.

    This stacking of 2D materials, which is also done in labs at Columbia, the Massachusetts Institute of Technology, Berkeley, Harvard and other institutions, represents the realization of a long-held dream of condensed matter physicists. No longer are researchers restricted to materials found in the ground or grown slowly in a lab. Now they can play with the atomic equivalent of Lego bricks, snapping together sheets to build bespoke structures with desired properties. When it comes to assembling TMD structures, few have gone as far as the Cornell group.

    Mak and Shan’s first major discovery at Cornell concerned excitons, the strongly bound electron-hole pairs they had seen in TMDs back in 2014. Excitons intrigue physicists because these “quasiparticles” may offer a roundabout way to achieve a perennial goal of condensed matter physics: room-temperature superconductivity.

    Excitons play by the same funky rules as Cooper pairs of electrons; these electron-hole pairs, too, become bosons, which lets them “condense” into a shared quantum state known as a Bose-Einstein condensate. This coherent horde of quasiparticles can display quantum traits such as superfluidity, the ability to flow with no resistance. (When a superfluid carries electric current, it superconducts.)

    But unlike repulsive electrons, electrons and holes love to couple up. Researchers say this potentially makes their glue stronger. The challenges to exciton-based superconductivity lie in keeping the electron from filling the hole, and getting the electrically neutral pairs to flow in a current — all in as warm a room as possible. So far, Mak and Shan have solved the first problem and have a plan to tackle the second.

    Clouds of atoms can be coaxed into forming condensates by chilling them to a hair above absolute zero with powerful lasers. But theorists have long suspected that condensates of excitons could form at higher temperatures. The Cornell group made this idea a reality with their stackable TMDs. Using a two-layer sandwich, they put extra electrons in the top layer and removed electrons from the bottom, leaving holes. The electrons and holes paired up, making excitons that are long-lived because the electrons have trouble jumping to the opposite layer to neutralize their partners. In October 2019, the group reported signs [Nature (below)] of an exciton condensate at a balmy 100 kelvins. In this setup, the excitons persisted for tens of nanoseconds, a lifetime for this type of quasiparticle. In the fall of 2021 [Nature (below)], the group described an improved apparatus where excitons seem to last for milliseconds, which Mak called “practically forever.”

    Researchers rip Scotch tape off a 3D crystal to create 2D sheets (left). They then stack these layers and attach electrodes. A microscopic image of one such device appears on a computer monitor (right).
    Sasha Maslov for Quanta Magazine

    The team is now pursuing a scheme [Nature Physics (below)] concocted by theorists in 2008 for creating an exciton current. Allan MacDonald, a prominent condensed matter theorist at the University of Texas, Austin, and his graduate student Jung-Jung Su proposed making neutral excitons flow by applying an electric field oriented in a way that encourages both electrons and holes to move in the same direction. To pull it off in the lab, the Cornell group must once again grapple with their perennial enemy, electrical contacts. In this case, they have to attach multiple sets of electrodes to the TMD layers, some to manufacture the excitons and others to move them.

    Shan and Mak believe they are on track to get excitons flowing at up to 100 kelvins soon. That’s a frigid room for a person (−173 degrees Celsius or −280 degrees Fahrenheit), but it’s a huge leap from the nanokelvin conditions that most bosonic condensates need.

    “That will be by itself a nice achievement,” Mak said with a sly smile, “to warm up the temperature by a billion times.”
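    The figures in this passage are easy to verify. Here is a minimal check, taking the usual ~100-nanokelvin scale of laser-cooled atomic condensates as the comparison point (an assumed benchmark; the article does not give an exact figure):

    ```python
    # Check the temperature figures quoted above.
    def kelvin_to_celsius(k):
        return k - 273.15

    def kelvin_to_fahrenheit(k):
        return (k - 273.15) * 9 / 5 + 32

    print(kelvin_to_celsius(100))      # ~ -173.15, quoted as -173 degrees Celsius
    print(kelvin_to_fahrenheit(100))   # ~ -279.67, quoted as -280 degrees Fahrenheit

    # Laser-cooled atomic condensates typically form around ~100 nanokelvin
    # (assumed benchmark), so 100 K is warmer by roughly a factor of:
    print(100 / 100e-9)                # ~ 1e9, i.e. "a billion times"
    ```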

    Magical Moiré Materials

    In 2018, while the Cornell lab ramped up their TMD experiments, another graphene surprise launched a second 2D materials revolution. Pablo Jarillo-Herrero, a researcher at MIT and another Columbia alum, announced that twisting one layer of graphene with respect to the layer below created a magical new 2D material. The secret was to drop the upper layer such that its hexagons landed with a slight “twist,” so that they were rotated exactly 1.1 degrees against the hexagons below. This angle misalignment causes an offset between atoms that grows and shrinks as you move across a material, generating a repeating pattern of large “supercells” known as a moiré superlattice. MacDonald and a colleague had calculated in 2011 [PNAS (below)] that at the “magic angle” of 1.1 degrees, the unique crystal structure of the superlattice would compel graphene’s electrons to slow and sense the repulsion of their neighbors.

    Merrill Sherman/Quanta Magazine
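    The supercell size set by the magic angle follows from simple trigonometry, not spelled out in the article: two identical lattices rotated by a small angle θ produce a moiré pattern with period λ = a / (2 sin(θ/2)). The graphene lattice constant below is an assumed input.

    ```python
    import math

    # Moire period of two identical hexagonal lattices rotated by a small angle,
    # using the standard geometric relation lambda = a / (2 sin(theta / 2)).
    def moire_period(a_nm, theta_deg):
        return a_nm / (2 * math.sin(math.radians(theta_deg) / 2))

    a_graphene = 0.246                     # nm, graphene lattice constant (assumed)
    print(moire_period(a_graphene, 1.1))   # ~ 12.8 nm supercells at the magic angle
    ```

    Note how sensitive the period is to the angle: doubling the twist roughly halves the supercell, which is one reason hitting exactly 1.1 degrees matters.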

    When electrons become aware of each other, weird things happen. In normal insulators, conductors and semiconductors, electrons are thought to interact only with the lattice of atoms; they race around too quickly to notice each other. But slowed to a crawl, electrons can jostle each other and collectively assume an assortment of exotic quantum states. Jarillo-Herrero’s experiments demonstrated that, for poorly understood reasons, this electron-to-electron communication in twisted, magic-angle graphene gives rise to an especially strong form of superconductivity.

    The graphene moiré superlattice also introduced researchers to a radical new way of controlling electrons. In the superlattice, electrons become oblivious to the individual atoms and experience the supercells themselves as if they were giant atoms. This makes it easy to populate the supercells with enough electrons to form collective quantum states. Using an electric field to dial up or down the average number of electrons per supercell, Jarillo-Herrero’s group was able to make their twisted bilayer graphene device serve as a superconductor, act as an insulator, or display a raft of other, stranger electron behaviors.

    Physicists around the world rushed into the nascent field of “twistronics.” But many have found that twisting is tough. Atoms have no reason to fall neatly into the “magic” 1.1-degree misalignment, so sheets wrinkle in ways that completely change their properties. Xia, the Cornell graduate student, said she has a bunch of friends at other universities working with twisted devices. Creating a working device typically takes them dozens of tries. And even then, each device behaves differently, so specific experiments are almost impossible to repeat.

    TMDs present a far easier way to create moiré superlattices. Because different TMDs have hexagonal lattices of different sizes, stacking a lattice of slightly larger hexagons over a smaller lattice creates a moiré pattern just the way angle misalignment does. In this case, because there is no rotation between the layers, the stack is more likely to snap into place and stay still. When Xia sets out to create a TMD moiré device, she said, she generally succeeds four times out of five.

    TMD moiré materials make ideal playgrounds for exploring electron interactions. Because the materials are semiconductors, their electrons get heavy as they slog through the materials, unlike the frenetic electrons in graphene. And the gigantic moiré cells slow them down further: Whereas electrons often move between atoms by “tunneling,” a quantum mechanical behavior akin to teleportation, tunneling rarely happens in a moiré lattice, since supercells sit roughly 100 times further apart than the atoms inside them. The distance helps the electrons settle down and gives them a chance to know their neighbors.
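    The "roughly 100 times" spacing follows from a standard geometric estimate (the numbers below are illustrative, not from the article): for aligned layers whose lattice constants differ by a small fraction δ, the moiré period is approximately a/δ, so a 1% mismatch spaces supercells about 100 atomic spacings apart.

    ```python
    # Moire period from lattice mismatch alone (no rotation): lambda ~ a / delta,
    # where delta is the fractional difference between the two lattice constants.
    # All numbers here are illustrative, not taken from the article.
    def mismatch_moire_period(a, delta):
        return a / delta

    a = 0.32                       # nm, a typical TMD lattice constant (assumed)
    delta = 0.01                   # 1% mismatch between the two layers
    lam = mismatch_moire_period(a, delta)
    print(lam, lam / a)            # ~32 nm supercells, ~100x the atomic spacing
    ```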

    Shan and Mak’s friendly rival, Feng Wang, was one of the first to recognize the potential of TMD moiré superlattices. Back-of-the-envelope calculations suggested that these materials should give rise to one of the simplest ways electrons can organize — a state known as a Wigner crystal, where mutual repulsion locks lethargic electrons into place. Wang’s team saw signs of such states [Nature (below)] in 2020 and published the first image [Nature (below)] of electrons holding each other at arm’s length in Nature in 2021. By then, word of Wang’s TMD moiré activities had already spread through the tightknit 2D physics community, and the Cornell TMD factory was churning out TMD moiré devices of their own. Shan and Mak also reported evidence for Wigner crystals in TMD superlattices in 2020 and discovered within months that electrons in their devices could crystallize in almost two dozen different Wigner crystal patterns [Nature (below)].

    At the same time, the Cornell group was also crafting TMD moiré materials into a power tool. MacDonald and collaborators had predicted [Physical Review Letters (below)] in 2018 that these devices have the right combination of technical features to make them perfectly represent one of the most important toy models in condensed matter physics. The Hubbard model, as it’s called, is a theorized system used to understand a wide variety of electron behaviors. Independently proposed [Nature Physics (below)] by Martin Gutzwiller, Junjiro Kanamori and John Hubbard in 1963, the model is physicists’ best attempt to strip the practically infinite variety of crystalline lattices down to their most essential features. Picture a grid of atoms hosting electrons. The Hubbard model assumes that each electron feels two competing forces: It wants to move by tunneling to neighboring atoms, but it’s also repulsed by its neighbors, which makes it want to stay where it is. Different behaviors arise depending on which desire is strongest. The only problem with the Hubbard model is that in all but the simplest case — a 1D string of atoms — it is mathematically unsolvable.
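    The two competing desires in the Hubbard model are conventionally written as a hopping amplitude t and an on-site repulsion U. The unsolvability sets in for large lattices, but the smallest case — two sites sharing two opposite-spin electrons — can be diagonalized exactly. A minimal sketch (basis conventions and parameter values are mine, not from the article):

    ```python
    import numpy as np

    # Exact diagonalization of the smallest Hubbard problem: two sites, two
    # electrons with opposite spins. t is the hopping amplitude (the urge to
    # move); U is the on-site repulsion (the urge to stay put and apart).
    t, U = 1.0, 4.0

    # Spin-singlet basis: both electrons on site 1, both on site 2,
    # and one electron on each site.
    H = np.array([[U,               0.0,             -np.sqrt(2) * t],
                  [0.0,             U,               -np.sqrt(2) * t],
                  [-np.sqrt(2) * t, -np.sqrt(2) * t,  0.0]])

    ground = np.linalg.eigvalsh(H)[0]              # eigenvalues come out ascending
    exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2    # known closed-form answer
    print(ground, exact)
    ```

    Cranking U up pins one electron per site (an insulator); cranking t up lets them roam (a metal) — the same dial the Cornell simulator turns electrically.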

    According to MacDonald and colleagues, TMD moiré materials could act as “simulators” of the Hubbard model, potentially solving some of the field’s deepest mysteries, such as the nature of the glue that binds electrons into superconducting pairs in cuprates. Instead of struggling with an impossible equation, researchers could set electrons loose in a TMD sandwich and see what they did. “We can write down this model, but it’s very difficult to answer lots of important questions,” MacDonald said. “Now we can do it just by doing an experiment. That’s really groundbreaking.”

    Merrill Sherman/Quanta Magazine

    To build their Hubbard model simulator, Shan and Mak stacked layers of tungsten diselenide and tungsten sulfide to create a moiré superlattice, and they attached electrodes to dial up or down an electric field passing through the TMD sandwich. The electric field controlled how many electrons would fill each supercell. Since the cells act like giant atoms, going from one electron to two electrons per supercell was like transforming a lattice of hydrogen atoms into a lattice of helium atoms. In their initial Hubbard model publication in Nature in March 2020 [Nature (below)], they reported simulating atoms with up to two electrons; today, they can go up to eight. In some sense, they had realized the ancient aim of turning lead into gold. “It’s like tuning chemistry,” Mak said, “going through the periodic table.” In principle, they can even conjure up a grid of fictitious atoms with, say, 1.38 electrons each.

    Next, the group looked to the hearts of the artificial atoms. With more electrodes, they could control the supercells’ “potential” by making changes akin to adding positive protons to the centers of the giant synthetic atoms. The more charge a nucleus has, the harder it is for electrons to tunnel away, so this electric field let them raise and lower the hopping tendency.

    Mak and Shan’s control of the giant atoms — and therefore the Hubbard model — was complete. The TMD moiré system lets them summon a grid of ersatz atoms, even ones that don’t exist in nature, and smoothly transform them as they wish. It’s a power that, even to other researchers in the field, borders on magical. “If I were to single out their most exciting and impressive effort, that’s the one,” Kim said.

    The Cornell group quickly used their designer atoms to settle a 70-year-old debate. The question was: What if you could take an insulator and tweak its atoms to turn it into a conducting metal? Would the changeover happen gradually or abruptly?

    With their moiré alchemy, Shan and Mak carried out the thought experiment in their lab. First they simulated heavy atoms, which trapped electrons so that the TMD superlattice acted like an insulator. Then they shrank the atoms, weakening the trap until electrons became able to hop to freedom, letting the superlattice become a conducting metal. By observing a gradually falling electrical resistance as the superlattice acted increasingly like a metal, they showed that the transition is not abrupt. This finding, which they announced in Nature [Nature (below)] last year, opens up the possibility that the superlattice’s electrons may be able to achieve a long-sought type of fluidity known as a quantum spin liquid. “That may be the most interesting problem one can tackle,” Mak said.

    Almost at the same time, the couple lucked into what some physicists consider their most significant discovery yet. “It was actually a total accident,” Mak said. “Nobody expected it.”

    When they started their Hubbard simulator research, the researchers used TMD sandwiches in which the hexagons on the two layers are aligned, with transition metals atop transition metals and chalcogenides atop chalcogenides. (That’s when they discovered the gradual insulator-to-metal transition.) Then, serendipitously, they happened to repeat the experiment with devices in which the top layer had been stacked backward.

    As before, the resistance started falling as electrons began to hop. But then it plunged abruptly, going so low that the researchers wondered if the moiré had begun to superconduct. Exploring further, though, they measured a rare pattern of resistance [Nature (below)] known as the quantum anomalous Hall effect — proof that something even weirder was going on. The effect indicated that the crystal structure of the device was compelling electrons along the edge of the material to act differently from those in the center. In the middle of the device, electrons were trapped in an insulating state. But around the perimeter, they flowed in one direction — explaining the super-low resistance. By accident, the researchers had created an extremely unusual and fragile type of matter known as a Chern insulator.

    Merrill Sherman/Quanta Magazine

    The quantum anomalous Hall effect, first observed in 2013 [Science (below)], usually falls apart if the temperature rises above a few hundredths of a kelvin. In 2019, Young’s group in Santa Barbara had seen it in a one-off twisted graphene sandwich at around 5 kelvins. Now Shan and Mak had achieved the effect at nearly the same temperature, but in a no-twist TMD device that anyone can re-create. “Ours was a higher temperature, but I’ll take theirs any day because they can do it 10 times in a row,” Young said. That means you can understand it “and use it to actually do something.”

    Mak and Shan believe that, with some fiddling, they can use TMD moiré materials to build Chern insulators that survive to 50 or 100 kelvin. If they’re successful, the work could lead to another way to get current flowing with no resistance — at least for tiny “nanowires,” which they may even be able to switch on and off at specific places within a device.

    Exploration in Flatland

    Even as the landmark results pile up, the couple shows no signs of slowing down. On the day I visited, Mak looked on as students tinkered with a towering dilution refrigerator that would let them chill their devices to temperatures a thousand times colder than what they’ve worked with so far. There’s been so much physics to discover at “warmer” conditions that the group hasn’t had a chance to thoroughly search the deeper cryogenic realm for signs of superconductivity. If the super fridge lets the TMDs superconduct, that will answer yet another question, showing that a form of magnetism intrinsic to cuprates (but absent from TMDs) is not an essential ingredient of the electron-binding glue. “That’s like killing one of the important components that theorists really wanted to kill for a long time,” Mak said.

    He and Shan and their group haven’t even begun to experiment with some of the funkier TMDs. After spending years inventing the equipment needed to move around the continent of 2D materials, they’re finally gearing up to venture beyond the moly disulfide beachhead they landed on back in 2010.

    The two researchers attribute their success to a culture of cooperation that they absorbed at Columbia. The initial collaboration with Hone that introduced them to moly disulfide, they say, was just one of the many opportunities they enjoyed because they were free to follow their curiosity. “We didn’t have to discuss” their plans with Heinz, the head of their lab, Shan said. “We talked to people from other groups. We did the experiments. We even wrapped things up.”

    Today they foster a similarly relaxed environment at Cornell, where they oversee a couple dozen postdocs, visiting researchers and students, all of whom are largely free to do their own thing. “Students are very smart and have good ideas,” Mak said. “Sometimes you don’t want to interfere.”

    Their marriage also makes their lab unique. The two have learned to lean into their personal strengths. Besides an abundance of creativity as an experimentalist, Shan possesses a careful discipline that makes her a good manager; as the three of us talked, she frequently nudged “Professor Fai” back on track when his enthusiasm for physics pushed him too deep into technicalities. Mak, for his part, enjoys toiling alongside the early-career researchers, both inside and outside the lab. He recently started rock climbing with the group. “It seems like their lab is their family,” said Young. Shan and Mak told me they achieve more together than they could alone. “One plus one is more than two,” Mak said.

    The devices they’re building may also stack up to be more than the sum of their parts. As researchers join TMD sheets together to create excitons and moiré superlattices, they speculate about how the new ways of domesticating electrons might supercharge technology. Even if pocket-ready superconductivity remains elusive, Bose-Einstein condensates could lead to ultra-sensitive quantum sensors, and better control of Chern-like insulators could enable powerful quantum computers. And those are just the obvious ideas. Incremental improvements in materials science often add up to radical applications few saw coming. The researchers who developed the transistor, for instance, would have struggled to predict smartphones powered by billions of microscopic switches stuffed into a chip the size of a fingernail. And the scientists who endeavored to fashion glass fibers that could carry light across their lab bench could not have foreseen that 10,000-kilometer undersea optical fibers would someday link continents. Two-dimensional materials may evolve in similarly unpredictable directions. “A really new materials platform generates its own applications as opposed to displacing existing materials,” said Heinz.

    While driving me to the Ithaca bus stop, Shan and Mak told me about a recent (and rare) vacation they took to Banff, Canada, where they once again displayed their knack for stumbling onto surprises through a blend of effort and luck. They had spent days trying — in vain — to spot a bear. Then, at the end of the trip, on their way to the airport, they stopped to stretch their legs at a botanical reserve and found themselves face to face with a black bear.

    Similarly, with condensed matter physics, their approach is to wander around together in a new landscape and see what shows up. “We don’t have much theoretical guidance, but we just fool around and play with experiments,” Mak said. “It can fail, but sometimes you can bump into something very unexpected.”

    Science papers:
    Zeitschrift für Physik B Condensed Matter
    Nature Materials
    Nano Letters
    Nature Reviews Materials
    Physical Review Letters
    Nature Physics
    Physical Review Letters
    Nature Physics

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by The Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 3:02 pm on August 12, 2022 Permalink | Reply
    Tags: "How the Physics of Nothing Underlies Everything", A gravitational vacuum lacks any matter or energy capable of bending space., , Aristotle asserted that nature abhors a vacuum., , , Quanta Magazine, The electromagnetic vacuum is the absence of a medium that can slow down light., The specific variety of nothing depends on what sort of something physicists intend to describe., The vacuum has become a bedrock concept in physics - the foundation of any theory of something.   

    From “Quanta Magazine” : “How the Physics of Nothing Underlies Everything” 

    From “Quanta Magazine”

    Charlie Wood

    An instability in the vacuum of space could suddenly spawn a rapidly expanding bubble with no interior — true nothingness. Credit: Merrill Sherman/Quanta Magazine

    Millennia ago, Aristotle asserted that nature abhors a vacuum, reasoning that objects would fly through truly empty space at impossible speeds. In 1277, the French bishop Etienne Tempier shot back, declaring that God could do anything, even create a vacuum.

    Then a mere scientist pulled it off. Otto von Guericke invented a pump to suck the air from within a hollow copper sphere, establishing perhaps the first high-quality vacuum on Earth. In a theatrical demonstration in 1654, he showed that not even two teams of horses straining to rip apart the watermelon-size ball could overcome the suction of nothing.

    A 1672 book about the vacuum by the German scientist Otto von Guericke depicts a demonstration he gave for Emperor Ferdinand III, in which teams of horses tried unsuccessfully to pull apart the halves of an evacuated copper sphere.
    Credit: Royal Astronomical Society/Science Source.

    Since then, the vacuum has become a bedrock concept in physics – the foundation of any theory of something. Von Guericke’s vacuum was an absence of air. The electromagnetic vacuum is the absence of a medium that can slow down light. And a gravitational vacuum lacks any matter or energy capable of bending space. In each case the specific variety of nothing depends on what sort of something physicists intend to describe. “Sometimes, it’s the way we define a theory,” said Patrick Draper, a theoretical physicist at the University of Illinois.

    As modern physicists have grappled with more sophisticated candidates for the ultimate theory of nature, they have encountered a growing multitude of types of nothing. Each has its own behavior, as if it’s a different phase of a substance. Increasingly, it seems that the key to understanding the origin and fate of the universe may be a careful accounting of these proliferating varieties of absence.

    “We’re learning there’s a lot more to learn about nothing than we thought,” said Isabel Garcia Garcia, a particle physicist at the Kavli Institute for Theoretical Physics in California. “How much more are we missing?”

    So far, such studies have led to a dramatic conclusion: Our universe may sit on a platform of shoddy construction, a “metastable” vacuum that is doomed — in the distant future — to transform into another sort of nothing, destroying everything in the process.

    Quantum Nothingness

    Nothing started to seem like something in the 20th century, as physicists came to view reality as a collection of fields: objects that fill space with a value at each point (the electric field, for instance, tells you how much force an electron will feel in different places). In classical physics, a field’s value can be zero everywhere so that it has no influence and contains no energy. “Classically, the vacuum is boring,” said Daniel Harlow, a theoretical physicist at the Massachusetts Institute of Technology. “Nothing is happening.”

    But physicists learned that the universe’s fields are quantum, not classical, which means they are inherently uncertain. You’ll never catch a quantum field with exactly zero energy. Harlow likens a quantum field to an array of pendulums — one at each point in space — whose angles represent the field’s values. Each pendulum hangs nearly straight down but jitters back and forth.
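    In the standard textbook treatment (not spelled out in the article), each "pendulum" is a quantum harmonic oscillator, and its energy can never reach zero:

    ```latex
    % Energy levels of a quantum harmonic oscillator (one "pendulum" of frequency \omega):
    E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
    % Even the lowest level carries the zero-point energy
    E_0 = \tfrac{1}{2}\hbar\omega > 0,
    % so no pendulum can hang perfectly still: the field always jitters.
    ```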

    Left alone, a quantum field will stay in its minimum-energy configuration, known as its “true vacuum” or “ground state.” (Elementary particles are ripples in these fields.) “When we talk about the vacuum of a system, we have in mind in some loose way the preferred state of the system,” said Garcia Garcia.

    Most of the quantum fields that fill our universe have one, and only one, preferred state, in which they’ll remain for eternity. Most, but not all.

    True and False Vacuums

    In the 1970s, physicists came to appreciate the significance of a different class of quantum fields whose values prefer not to be zero, even on average. Such a “scalar field” is like a collection of pendulums all hovering at, say, a 10-degree angle. This configuration can be the ground state: The pendulums prefer that angle and are stable.

    In 2012, experimentalists at the Large Hadron Collider proved that a scalar field known as the Higgs field permeates the universe.


    At first, in the hot, early universe, its pendulums pointed down. But as the cosmos cooled, the Higgs field changed state, much as water can freeze into ice, and its pendulums all rose to the same angle. (This nonzero Higgs value is what gives many elementary particles the property known as mass.)

    With scalar fields around, the stability of the vacuum is not necessarily absolute. A field’s pendulums might have multiple semi-stable angles and a proclivity for switching from one configuration to another. Theorists aren’t certain whether the Higgs field, for instance, has found its absolute favorite configuration — the true vacuum. Some have argued [Frontiers in Astronomy and Space Sciences (below)] that the field’s current state, despite having persisted for 13.8 billion years, is only temporarily stable, or “metastable.”

    If so, the good times won’t last forever. In the 1980s, the physicists Sidney Coleman and Frank De Luccia described how a false vacuum [Physical Review D (below)] of a scalar field could “decay.” At any moment, if enough pendulums in some location jitter their way into a more favorable angle, they’ll drag their neighbors to meet them, and a bubble of true vacuum will fly outward at nearly light speed. It will rewrite physics as it goes, busting up the atoms and molecules in its path. (Don’t panic. Even if our vacuum is only metastable, given its staying power so far, it will probably last for billions of years more.)

    In the potential mutability of the Higgs field, physicists identified the first of a practically infinite number of ways that nothingness could kill us all.

    More Problems, More Vacuums

    As physicists have attempted to fit nature’s confirmed laws into a larger set (filling in giant gaps in our understanding in the process), they have cooked up candidate theories of nature with additional fields and other ingredients.

    When fields pile up, they interact, influencing each other’s pendulums and establishing new mutual configurations in which they like to get stuck. Physicists visualize these vacuums as valleys in a rolling “energy landscape.” Different pendulum angles correspond to different amounts of energy, or altitudes in the energy landscape, and a field seeks to lower its energy just as a stone seeks to roll downhill. The deepest valley is the ground state, but the stone could come to rest — for a time, anyway — in a higher valley.
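    A toy version of this energy landscape, assuming a single scalar field and made-up parameters, is a tilted double-well potential: the shallower valley is a metastable "false vacuum," the deeper one the true vacuum.

    ```python
    import numpy as np

    # Toy energy landscape: a tilted double-well potential for one scalar field.
    # The tilt (eps) makes one valley deeper (the true vacuum) than the other
    # (a false, metastable vacuum). All parameters are illustrative.
    def V(phi, lam=1.0, v=1.0, eps=0.3):
        return lam * (phi**2 - v**2)**2 + eps * phi

    phi = np.linspace(-2.0, 2.0, 400_001)
    vals = V(phi)

    # A local minimum is where the slope changes from negative to positive.
    slope = np.diff(vals)
    minima = phi[1:-1][(slope[:-1] < 0) & (slope[1:] > 0)]

    true_vac, false_vac = sorted(minima, key=V)
    print(true_vac, false_vac)     # deep valley vs. shallower, metastable valley
    ```

    A field stuck in the shallower valley sits still until a large enough jitter carries it over (or through) the hump — the bubble nucleation Coleman and De Luccia described.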

    A couple of decades ago, the landscape exploded in scale. The physicists Joseph Polchinski and Raphael Bousso were studying certain aspects of string theory, the leading mathematical framework for describing gravity’s quantum side. String theory works only if the universe has some 10 dimensions, with the extra ones curled up into shapes too tiny to detect. Polchinski and Bousso calculated in 2000 [Journal of High Energy Physics (below)] that such extra dimensions could fold up in a tremendous number of ways. Each way of folding would form a distinct vacuum with its own physical laws.

    The discovery that string theory allows nearly countless vacuums jibed with another discovery from nearly two decades earlier.

    Cosmologists in the early 1980s developed a hypothesis known as cosmic inflation that has become the leading theory of the universe’s birth. The theory holds that the universe began with a quick burst of exponential expansion, which handily explains the universe’s smoothness and hugeness.


    In physical cosmology, cosmic inflation (or cosmological inflation) is a theory of exponential expansion of space in the early universe. The inflationary epoch lasted from 10^−36 seconds after the conjectured Big Bang singularity to some time between 10^−33 and 10^−32 seconds after the singularity. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).

    Inflation theory was developed in the late 1970s and early 80s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at the Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at the Lebedev Physical Institute. Starobinsky, Guth and Linde won the 2014 Kavli Prize “for pioneering the theory of cosmic inflation.” The theory explains the origin of the large-scale structure of the cosmos: quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the universe. Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.

    The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.

    In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize “for development of the concept of inflation in cosmology”. In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.

    Alan Guth, from M.I.T., who first proposed Cosmic Inflation.

    Lambda Cold Dark Matter accelerated expansion of the universe. Credit: Alex Mittelmann.

    Alan Guth’s original notes on inflation.

    But inflation’s successes come at a price.

    Cosmologists found that once cosmic inflation started, it would tend to continue. Most of the vacuum would violently explode outward forever. Only finite regions of space would stop inflating, becoming bubbles of relative stability separated from one another by inflating space in between. Inflationary cosmologists believe we call one of these bubbles home.

    A Multiverse of Vacuums

    To some, the notion that we live in a multiverse — an endless landscape of vacuum bubbles — is disturbing. It makes the nature of any one vacuum (such as ours) seem random and unpredictable, curbing our ability to understand our universe. Polchinski, who died in 2018, told the physicist and author Sabine Hossenfelder that discovering string theory’s landscape of vacuums initially made him so miserable it led him to seek therapy. If string theory predicts every imaginable variety of nothing, has it predicted anything?

    To others, the plethora of vacuums is not a problem; “in fact, it’s a virtue,” said Andrei Linde, a prominent cosmologist at Stanford University and one of the developers of cosmic inflation. That’s because the multiverse potentially solves a great mystery: the ultra-low energy of our particular vacuum.

    When theorists naïvely estimate the collective jittering of all the universe’s quantum fields, the energy is huge — enough to rapidly accelerate the expansion of space and, in short order, rip the cosmos apart. But the observed acceleration of space is extremely mild in comparison, suggesting that much of the collective jittering cancels out and our vacuum has an extraordinarily low positive value for its energy.
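    The size of that mismatch can be checked with back-of-envelope arithmetic. The sketch below is only an illustration using standard constant values, not a calculation from the article: cutting the quantum jitters off at the Planck scale yields a vacuum energy density roughly 120 orders of magnitude larger than the dark-energy density we actually observe.

```python
import math

# CODATA values for the physical constants.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

# Naive estimate: one Planck energy per Planck volume.
# E_Planck / l_Planck^3 simplifies to c^7 / (hbar * G^2).
rho_naive = c**7 / (hbar * G**2)   # ~5e113 J/m^3

# Observed dark-energy density (rounded Planck-mission value).
rho_observed = 5.3e-10             # J/m^3

# The gap between naive theory and observation, in orders of magnitude.
gap = math.log10(rho_naive / rho_observed)
```

    Running this gives a gap of roughly 123 orders of magnitude, which is why the observed acceleration of space is described as extremely mild in comparison.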

    In a solitary universe, the tiny energy of the one and only vacuum looks like a profound puzzle. But in a multiverse, it’s just dumb luck. If different bubbles of space have different energies and expand at different rates, galaxies and planets will form only in the most lethargic bubbles. Our calm vacuum, then, is no more mysterious than the Goldilocks orbit of our planet: We find ourselves here because most everywhere else is inhospitable to life.

    Love it or hate it, the multiverse hypothesis as currently understood has a problem. Despite string theory’s seemingly infinite menu of vacuums, so far no one has found a specific folding of tiny extra dimensions that corresponds to a vacuum like ours, with its barely positive energy. String theory seems to yield negative-energy vacuums much more easily.

    Perhaps string theory is untrue, or the flaw could lie with researchers’ immature understanding of it. Physicists may not have hit on the right way to handle positive vacuum energy within string theory. “That’s perfectly possible,” said Nathan Seiberg, a physicist at the Institute for Advanced Study in Princeton, New Jersey. “This is a hot topic.”

    Or our vacuum could just be inherently sketchy. “The prevailing view is that [positively energized] space is not stable,” Seiberg said. “It could decay to something else, so that could be one of the reasons why it is so hard to understand the physics of it.”

    These researchers suspect that our vacuum is not one of reality’s preferred states, and that it will someday jitter itself into a deeper, more stable valley. In doing so, our vacuum could lose the field that generates electrons or pick up a new palette of particles. The tightly folded dimensions could come unfurled. Or the vacuum could even give up on existence entirely.

    “That’s another one of the options,” Harlow said. “A true nothing.”

    The End of the Vacuum

    The physicist Edward Witten first discovered the “bubble of nothing” in 1982 [Nuclear Physics B (below)]. While studying a vacuum with one extra dimension curled up into a tiny circle at each point, he found that quantum jitters inevitably jiggled the extra dimension, sometimes shrinking the circle to a point. As the dimension vanished into nothingness, Witten found, it took everything else with it. The instability would spawn a rapidly expanding bubble with no interior, its mirrorlike surface marking the end of space-time itself.

    This instability of tiny dimensions has long plagued string theory, and various ingredients have been devised to stiffen them. In December, Garcia Garcia, together with Draper and Benjamin Lillard of Illinois, calculated the lifetime of a vacuum with a single extra curled-up dimension. They considered various stabilizing bells and whistles, but they found that most mechanisms failed to stop the bubbles. Their conclusions [Physical Review D (below)] aligned with Witten’s: When the size of the extra dimension fell below a certain threshold, the vacuum collapsed at once. A similar calculation — one extended to more sophisticated models — could rule out vacuums in string theory with dimensions below that size.

    With a large enough hidden dimension, however, the vacuum could survive for many billions of years. This means that theories producing bubbles of nothing could plausibly match our universe. If so, Aristotle may have been more right than he knew. Nature may not be a big fan of the vacuum. In the extremely long run, it may prefer nothing at all.

    Science papers:
    Frontiers in Astronomy and Space Sciences
    Physical Review D
    Journal of High Energy Physics
    Nuclear Physics B
    Physical Review D

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 8:51 am on August 8, 2022 Permalink | Reply
    Tags: "Hidden Chaos Found to Lurk in Ecosystems", , , , , Quanta Magazine   

    From “Quanta Magazine” : “Hidden Chaos Found to Lurk in Ecosystems” 

    From “Quanta Magazine”

    July 27, 2022
    Joanna Thompson

    The graphical tool called a logistic diagram showed ecologists in the 1970s that chaos could creep into the population fluctuations of species.

    Logistic diagram. Credit: Geoff Boeing.

    But for decades, data showed little evidence of true chaos in ecosystem dynamics. Credit: Kristina Armitage for Quanta Magazine; Source: Adobe Stock

    Physical scientists seem to find the phenomenon of chaos everywhere: in the orbits of planets, in weather systems, in a river’s swirling eddies. For nearly three decades, ecologists considered chaos in the living world to be surprisingly rare by comparison. A new analysis [Nature Ecology & Evolution (below)], however, reveals that chaos is far more prevalent in ecosystems than researchers thought.

    Tanya Rogers was looking back through the scientific literature for recent studies on chaos in ecosystems when she discovered something unexpected: No one had published a quantitative analysis of it in over 25 years. “It was kind of surprising,” said Rogers, a research ecologist at the University of California-Santa Cruz and the new study’s first author. “Like, ‘I can’t believe no one’s done this.’”

    So she decided to do it herself. Analyzing more than 170 sets of time-dependent ecosystem data, Rogers and her colleagues found that chaos was present in a third of them — nearly three times more than the estimates in previous studies. What’s more, they discovered that certain groups of organisms, like plankton, insects and algae, were far more prone to chaos than larger organisms like wolves and birds.

    “That really wasn’t in the literature at all,” said Stephan Munch, an evolutionary ecologist at Santa Cruz and a co-author of the study. Their results suggest that to protect vulnerable species, it is both possible and necessary to build more complex population models as guides for conservation policies.

    When ecology was first recognized as a formal science in the 19th century, the prevailing assumption was that nature follows simple, easily understood rules, like a mechanical clock driven by interlocking gears. If scientists could measure the right variables, they could predict the outcome: More rain, for example, would mean a better apple harvest.

    In reality, because of chaos, “the world is a lot more whack-a-mole,” said George Sugihara, a quantitative ecologist at the Scripps Institution of Oceanography at the University of California-San Diego who was not involved in the new research. Chaos reflects how a system’s predictability degrades over time. A system is said to be stable if it changes very little over a long timescale, and random if its fluctuations are unpredictable. But a chaotic system — one ruled by nonlinear responses to events — may be predictable over short periods but is subject to increasingly dramatic shifts the further out you go.

    “We often give the weather as an example of a chaotic system,” said Rogers. A summer breeze over the open ocean probably won’t impact tomorrow’s forecast, but under just the right conditions, it could theoretically send a hurricane plowing into the Caribbean in a few weeks.

    Ecologists began flirting with the concept of chaos in the 1970s, when the mathematical biologist Robert May developed a revolutionary tool called the logistic map.

    This branching diagram (known as a bifurcation diagram because of its forking appearance) shows how chaos creeps into simple models of population growth and other systems over time. Since the survival of organisms is affected so much by chaotic forces like the weather, ecologists assumed that species populations in nature would also often rise and fall chaotically. The logistic map quickly became ubiquitous in the field as theoretical ecologists sought to explain population fluctuations in organisms like salmon and the algae that cause red tides.
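    The behavior the logistic map revealed is easy to reproduce. The sketch below iterates May’s equation x_{n+1} = r·x_n·(1 − x_n); the growth rates and starting values are illustrative choices, not figures from the studies discussed here.

```python
def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map x -> r * x * (1 - x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# At a low growth rate (r = 2.5) the population settles onto the
# stable equilibrium 1 - 1/r = 0.6, no matter where it starts.
stable = logistic_trajectory(2.5, 0.1, 200)

# At r = 3.9 the map is chaotic: two populations that begin almost
# identically soon follow completely different histories.
a = logistic_trajectory(3.9, 0.200000000, 100)
b = logistic_trajectory(3.9, 0.200000001, 100)
divergence = max(abs(x - y) for x, y in zip(a, b))
```

    The hallmark of chaos is that `divergence` grows to order one even though the two starting populations differ by only a billionth — the short-term-predictable, long-term-unpredictable behavior described above.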

    Populations of the microscopic algae called diatoms (top) sometimes explode into massive swirling blooms in the ocean that can be seen from space, as in this photograph of the Chukchi Sea between Siberia and Alaska taken by Landsat 8 in June 2018 (bottom). Credits:(from top) M.I. Walker/Science Source; NASA/U.S. Geological Survey/Norman Kuring/Kathryn Hansen.

    By the early ’90s, ecologists had amassed enough time-series data sets on species populations and enough computing power to test these ideas. There was just one problem: The chaos didn’t seem to be there. Only about 10% of the examined populations seemed to change chaotically; the rest either cycled stably or fluctuated randomly. Theories of ecosystem chaos fell out of scientific fashion by the mid-1990s.

    The new results from Rogers, Munch and their Santa Cruz mathematician colleague Bethany Johnson, however, suggest that the older work missed where the chaos was hiding. To detect chaos, the earlier studies used models with a single dimension — the population size of one species over time. They didn’t consider corresponding changes in messy real-world factors like temperature, sunlight, rainfall and interactions with other species that might affect populations. Their one-dimensional models captured how the populations changed, but not why they changed.

    But Rogers and Munch “went looking for [chaos] in a more sensible way,” said Aaron King, a professor of ecology and evolutionary biology at the University of Michigan who was not involved in the study. Using three different complex algorithms, they analyzed 172 time series of different organisms’ populations as models with as many as six dimensions rather than just one, leaving room for the potential influence of unspecified environmental factors. In this way, they could check whether unnoticed chaotic patterns might be embedded within the one-dimensional representation of the population shifts. For example, more rainfall might be chaotically linked to population increases or decreases, but only after a delay of several years.
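    The idea of searching for chaos in more dimensions than the raw data provides can be sketched with time-delay embedding, a classic reconstruction trick from nonlinear dynamics (Takens’ theorem). The study’s three algorithms are more sophisticated; this toy function only shows the core move of turning one measured variable into a multidimensional state.

```python
def delay_embed(series, dim, lag=1):
    """Rebuild a dim-dimensional state space from a 1-D time series by
    pairing each value with its lagged successors:
    (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    n = len(series) - (dim - 1) * lag
    return [tuple(series[t + k * lag] for k in range(dim))
            for t in range(n)]

# A single population record (made-up numbers) becomes a cloud of 3-D
# points whose geometry can expose chaos hidden in the 1-D view.
population = [0.2, 0.62, 0.92, 0.28, 0.79, 0.65, 0.89, 0.39]
states = delay_embed(population, dim=3)
```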

    In the population data for about 34% of the species, Rogers, Johnson and Munch discovered, the signatures of nonlinear interactions were indeed present, which was significantly more chaos than was previously detected. In most of those data sets, the population changes for the species did not appear chaotic at first, but the relationship of the numbers to underlying factors was. They could not say precisely which environmental factors were responsible for the chaos, but whatever they were, their fingerprints were on the data.

    The researchers also uncovered an inverse relationship between an organism’s body size and how chaotic its population dynamics tend to be. This may be due to differences in generation time: small organisms that breed more often are also buffeted by outside variables more often. For example, populations of diatoms, with generations of around 15 hours, show much more chaos than packs of wolves, with generations of almost five years.
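    Rough arithmetic shows how lopsided that comparison is. Using the article’s illustrative generation times (and nothing else from the study), a decade of monitoring spans thousands of diatom generations but only two wolf generations, giving chaos far more opportunity to surface in the diatom data.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def generations_per_decade(generation_time_years):
    """How many generations elapse in ten years of observation."""
    return 10 / generation_time_years

# Diatoms: roughly one generation every 15 hours.
diatom_gens = generations_per_decade(15 / HOURS_PER_YEAR)  # ~5,840 generations

# Wolves: roughly one generation every 5 years.
wolf_gens = generations_per_decade(5)                      # 2 generations
```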

    However, that doesn’t necessarily mean that wolf populations are inherently stable. “One possibility is that we’re not seeing chaos there because we just don’t have enough data to go back over a long enough period of time to see it,” said Munch. In fact, he and Rogers suspect that because of the constraints of their data, their models might be underestimating how much underlying chaos is present in ecosystems.

    Sugihara thinks that the new results might be important for conservation. Improved models with the right element of chaos could do a better job of forecasting toxic algal blooms, for example, or tracking fishery populations to prevent overfishing. Considering chaos could also help researchers and conservation managers to understand how far out it’s possible to meaningfully predict population size. “I do think that it’s useful for the issue to be in people’s minds,” he said.

    However, he and King both caution against placing too much faith in these chaos-conscious models. “The classical concept of chaos is fundamentally a stationary concept,” King said: It is built on the assumption that chaotic fluctuations represent a departure from some predictable, stable norm. But as climate change progresses, most real-world ecosystems are becoming increasingly unstable even in the short term. Even taking many dimensions into account, scientists will have to be conscious of this ever-shifting baseline.

    Still, taking chaos into consideration is an important step toward more accurate modeling. “I think this is really exciting,” said Munch. “It just runs counter to the way we currently think about ecological dynamics.”

    Science paper:
    Nature Ecology & Evolution

    See the full article here.



  • richardmitnick 11:02 am on August 4, 2022 Permalink | Reply
    Tags: "At Long Last Mathematical Proof That Black Holes Are Stable", "Kerr black holes", , , , , , , , Quanta Magazine   

    From Columbia University And Princeton University Via “Quanta Magazine” : “At Long Last Mathematical Proof That Black Holes Are Stable” 


    From Columbia University


    Princeton University


    From “Quanta Magazine”

    Steve Nadis

    Mehau Kulyk / Science Source.

    In 1963, the mathematician Roy Kerr found a solution to Albert Einstein’s equations that precisely described the spacetime outside what we now call a rotating black hole. (The term wouldn’t be coined for a few more years.) In the nearly six decades since his achievement, researchers have tried to show that these so-called Kerr black holes are stable. What that means, explained Jérémie Szeftel, a mathematician at Sorbonne University, “is that if I start with something that looks like a Kerr black hole and give it a little bump” — by throwing some gravitational waves at it, for instance — “what you expect, far into the future, is that everything will settle down, and it will once again look exactly like a Kerr solution.”

    The opposite situation — a mathematical instability — “would have posed a deep conundrum to theoretical physicists and would have suggested the need to modify, at some fundamental level, Einstein’s theory of gravitation,” said Thibault Damour, a physicist at the Institute of Advanced Scientific Studies in France.

    In a 912-page paper posted online on May 30 [below], Szeftel, Elena Giorgi of Columbia University and Sergiu Klainerman of Princeton University have proved that slowly rotating Kerr black holes are indeed stable. The work is the product of a multiyear effort. The entire proof — consisting of the new work, an 800-page paper by Klainerman and Szeftel from 2021 [below], plus three background papers that established various mathematical tools — totals roughly 2,100 pages in all.

    The new result “does indeed constitute a milestone in the mathematical development of general relativity,” said Demetrios Christodoulou, a mathematician at ETH Zürich.

    Shing-Tung Yau, an emeritus professor at Harvard University who recently moved to Tsinghua University, was similarly laudatory, calling the proof “the first major breakthrough” in this area of General Relativity since the early 1990s. “It is a very tough problem,” he said. He did stress, however, that the new paper has not yet undergone peer review. But he called the 2021 paper, which has been approved for publication, both “complete and exciting.”

    One reason the question of stability has remained open for so long is that most explicit solutions to Einstein’s equations, such as the one found by Kerr, are stationary, Giorgi said. “These formulas apply to black holes that are just sitting there and never change; those aren’t the black holes we see in nature.” To assess stability, researchers need to subject black holes to minor disturbances and then see what happens to the solutions that describe these objects as time moves forward.

    For example, imagine sound waves hitting a wineglass. Almost always, the waves shake the glass a little bit, and then the system settles down. But if someone sings loudly enough and at a pitch that exactly matches the glass’s resonant frequency, the glass could shatter. Giorgi, Klainerman and Szeftel wondered whether a similar resonance-type phenomenon could happen when a black hole is struck by gravitational waves.

    They considered several possible outcomes. A gravitational wave might, for instance, cross the event horizon of a Kerr black hole and enter the interior. The black hole’s mass and rotation could be slightly altered, but the object would still be a black hole characterized by Kerr’s equations. Or the gravitational waves could swirl around the black hole before dissipating in the same way that most sound waves dissipate after encountering a wineglass.

    Or they could combine to create havoc or, as Giorgi put it, “God knows what.” The gravitational waves might congregate outside a black hole’s event horizon and concentrate their energy to such an extent that a separate singularity would form. The space-time outside the black hole would then be so severely distorted that the Kerr solution would no longer prevail. This would be a dramatic sign of instability.

    The three mathematicians relied on a strategy — called proof by contradiction — that had been previously employed in related work. The argument goes roughly like this: First, the researchers assume the opposite of what they’re trying to prove, namely that the solution does not exist forever — that there is, instead, a maximum time after which the Kerr solution breaks down. They then use some “mathematical trickery,” said Giorgi — an analysis of partial differential equations, which lie at the heart of general relativity — to extend the solution beyond the purported maximum time. In other words, they show that no matter what value is chosen for the maximum time, it can always be extended. Their initial assumption is thus contradicted, implying that the conjecture itself must be true.
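    Schematically, the continuation argument runs as follows. This is a cartoon of the logic, not the structure of the actual proof:

```latex
% Assume, for contradiction, that closeness to Kerr fails after a finite time:
T^{*} := \sup\{\, T : \text{the perturbed solution stays close to Kerr on } [0,T] \,\} < \infty.
% A priori estimates from the PDE analysis show the solution is still
% under control at T^{*}, so it extends a bit further:
\text{the solution stays close to Kerr on } [0,\, T^{*} + \delta] \quad \text{for some } \delta > 0,
% contradicting the maximality of T^{*}. Hence
T^{*} = \infty: \ \text{the solution remains close to Kerr forever.}
```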

    Klainerman emphasized that he and his colleagues have built on the work of others. “There have been four serious attempts,” he said, “and we happen to be the lucky ones.” He considers the latest paper a collective achievement, and he’d like the new contribution to be viewed as “a triumph for the whole field.”

    So far, stability has only been proved for slowly rotating black holes — where the ratio of the black hole’s angular momentum to its mass is much less than 1. It has not yet been demonstrated that rapidly rotating black holes are also stable. In addition, the researchers did not determine precisely how small the ratio of angular momentum to mass has to be in order to ensure stability.

    Given that only one step in their long proof rests on the assumption of low angular momentum, Klainerman said he would “not be surprised at all if, by the end of the decade, we will have a full resolution of the Kerr [stability] conjecture.”

    Giorgi is not quite so sanguine. “It is true that the assumption applies to just one case, but it is a very important case.” Getting past that restriction will require quite a bit of work, she said; she is not sure who will take it on or when they might succeed.

    Looming beyond this problem is a much bigger one called the final state conjecture, which basically holds that if we wait long enough, the universe will evolve into a finite number of Kerr black holes moving away from each other. The final state conjecture depends on Kerr stability and on other sub-conjectures that are extremely challenging in themselves. “We have absolutely no idea how to prove this,” Giorgi admitted. To some, that statement might sound pessimistic. Yet it also illustrates an essential truth about Kerr black holes: They are destined to command the attention of mathematicians for years, if not decades, to come.

    Science papers:
    https://arxiv.org/pdf/2205.14808.pdf [2022]
    https://arxiv.org/pdf/2104.11857.pdf [2021]

    See the full article here.



    About Princeton: Overview

    Princeton University is a private Ivy League research university in Princeton, New Jersey (US). Founded in 1746 in Elizabeth as the College of New Jersey, Princeton is the fourth-oldest institution of higher education in the United States and one of the nine colonial colleges chartered before the American Revolution. The institution moved to Newark in 1747, then to the current site nine years later. It was renamed Princeton University in 1896.

    Princeton provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences, and engineering. It offers professional degrees through the Princeton School of Public and International Affairs, the School of Engineering and Applied Science, the School of Architecture and the Bendheim Center for Finance. The university also manages the DOE’s Princeton Plasma Physics Laboratory. Princeton has the largest endowment per student in the United States.

    As of October 2020, 69 Nobel laureates, 15 Fields Medalists and 14 Turing Award laureates have been affiliated with Princeton University as alumni, faculty members or researchers. In addition, Princeton has been associated with 21 National Medal of Science winners, 5 Abel Prize winners, 5 National Humanities Medal recipients, 215 Rhodes Scholars, 139 Gates Cambridge Scholars and 137 Marshall Scholars. Two U.S. Presidents, twelve U.S. Supreme Court Justices (three of whom currently serve on the court) and numerous living billionaires and foreign heads of state are all counted among Princeton’s alumni body. Princeton has also graduated many prominent members of the U.S. Congress and the U.S. Cabinet, including eight Secretaries of State, three Secretaries of Defense and the current Chairman of the Joint Chiefs of Staff.

    Princeton University, founded as the College of New Jersey, was considered the successor of the “Log College” founded by the Reverend William Tennent at Neshaminy, PA in about 1726. New Light Presbyterians founded the College of New Jersey in 1746 in Elizabeth, New Jersey. Its purpose was to train ministers. The college was the educational and religious capital of Scottish Presbyterian America. Unlike Harvard University, which was originally “intensely English” with graduates taking the side of the crown during the American Revolution, Princeton was founded to meet the religious needs of the period and many of its graduates took the American side in the war. In 1754, trustees of the College of New Jersey suggested that, in recognition of Governor Jonathan Belcher’s interest, Princeton should be named as Belcher College. Belcher replied: “What a name that would be!” In 1756, the college moved its campus to Princeton, New Jersey. Its home in Princeton was Nassau Hall, named for the royal House of Orange-Nassau of William III of England.

    Following the untimely deaths of Princeton’s first five presidents, John Witherspoon became president in 1768 and remained in that post until his death in 1794. During his presidency, Witherspoon shifted the college’s focus from training ministers to preparing a new generation for secular leadership in the new American nation. To this end, he tightened academic standards and solicited investment in the college. Witherspoon’s presidency constituted a long period of stability for the college, interrupted by the American Revolution and particularly the Battle of Princeton, during which British soldiers briefly occupied Nassau Hall; American forces, led by George Washington, fired cannon on the building to rout them from it.

    In 1812, the eighth president of the College of New Jersey, Ashbel Green (1812–23), helped establish the Princeton Theological Seminary next door. The plan to extend the theological curriculum met with “enthusiastic approval on the part of the authorities at the College of New Jersey.” Today, Princeton University and Princeton Theological Seminary maintain separate institutions with ties that include services such as cross-registration and mutual library access.

    Before the construction of Stanhope Hall in 1803, Nassau Hall was the college’s sole building. The cornerstone of the building was laid on September 17, 1754. During the summer of 1783, the Continental Congress met in Nassau Hall, making Princeton the country’s capital for four months. Over the centuries and through two redesigns following major fires (1802 and 1855), Nassau Hall’s role shifted from an all-purpose building, comprising office, dormitory, library, and classroom space; to classroom space exclusively; to its present role as the administrative center of the University. The class of 1879 donated twin lion sculptures that flanked the entrance until 1911, when that same class replaced them with tigers. Nassau Hall’s bell rang after the hall’s construction; however, the fire of 1802 melted it. The bell was then recast and melted again in the fire of 1855.

    James McCosh became the college’s president in 1868 and lifted the institution out of a low period that had been brought about by the American Civil War. During his two decades of service, he overhauled the curriculum, oversaw an expansion of inquiry into the sciences, and supervised the addition of a number of buildings in the High Victorian Gothic style to the campus. McCosh Hall is named in his honor.

    In 1879, the first thesis for a Doctor of Philosophy (Ph.D.) was submitted by James F. Williamson, Class of 1877.

    In 1896, the college officially changed its name from the College of New Jersey to Princeton University to honor the town in which it resides. During this year, the college also underwent large expansion and officially became a university. In 1900, the Graduate School was established.

    In 1902, Woodrow Wilson, graduate of the Class of 1879, was elected the 13th president of the university. Under Wilson, Princeton introduced the preceptorial system in 1905, a then-unique concept in the United States that augmented the standard lecture method of teaching with a more personal form in which small groups of students, or precepts, could interact with a single instructor, or preceptor, in their field of interest.

    In 1906, the reservoir Carnegie Lake was created by Andrew Carnegie. A collection of historical photographs of the building of the lake is housed at the Seeley G. Mudd Manuscript Library on Princeton’s campus. On October 2, 1913, the Princeton University Graduate College was dedicated. In 1919 the School of Architecture was established. In 1933, Albert Einstein became a lifetime member of the Institute for Advanced Study with an office on the Princeton campus. While always independent of the university, the Institute for Advanced Study occupied offices in Jones Hall for 6 years, from its opening in 1933, until its own campus was finished and opened in 1939.


    In 1969, Princeton University first admitted women as undergraduates. In 1887, the university actually maintained and staffed a sister college, Evelyn College for Women, in the town of Princeton on Evelyn and Nassau streets. It was closed after roughly a decade of operation. After abortive discussions with Sarah Lawrence College to relocate the women’s college to Princeton and merge it with the University in 1967, the administration decided to admit women and turned to the issue of transforming the school’s operations and facilities into a female-friendly campus. The administration had barely finished these plans in April 1969 when the admissions office began mailing out its acceptance letters. Its five-year coeducation plan provided $7.8 million for the development of new facilities that would eventually house and educate 650 women students at Princeton by 1974. Ultimately, 148 women, consisting of 100 freshmen and transfer students of other years, entered Princeton on September 6, 1969 amidst much media attention. Princeton enrolled its first female graduate student, Sabra Follett Meservey, as a PhD candidate in Turkish history in 1961. A handful of undergraduate women had studied at Princeton from 1963 on, spending their junior year there to study “critical languages” in which Princeton’s offerings surpassed those of their home institutions. They were considered regular students for their year on campus, but were not candidates for a Princeton degree.

    As a result of a 1979 lawsuit by Sally Frank, Princeton’s eating clubs were required to go coeducational in 1991, after Tiger Inn’s appeal to the U.S. Supreme Court was denied. In 1987, the university changed the gendered lyrics of “Old Nassau” to reflect the school’s co-educational student body. From 2009 to 2011, Princeton professor Nannerl O. Keohane chaired a committee on undergraduate women’s leadership at the university, appointed by President Shirley M. Tilghman.

    The main campus sits on about 500 acres (2.0 km^2) in Princeton. In 2011, the main campus was named by Travel+Leisure as one of the most beautiful in the United States. The James Forrestal Campus is split between nearby Plainsboro and South Brunswick. The University also owns some property in West Windsor Township. The campuses are situated about one hour from both New York City and Philadelphia.

    The first building on campus was Nassau Hall, completed in 1756 and situated on the northern edge of campus facing Nassau Street. The campus expanded steadily around Nassau Hall during the early and middle 19th century. The McCosh presidency (1868–88) saw the construction of a number of buildings in the High Victorian Gothic and Romanesque Revival styles; many of them are now gone, leaving the remaining few to appear out of place. At the end of the 19th century much of Princeton’s architecture was designed by the firm of Cope and Stewardson (the same architects who designed large parts of Washington University in St. Louis and the University of Pennsylvania), resulting in the Collegiate Gothic style for which the campus is known today. Implemented initially by William Appleton Potter and later enforced by the University’s supervising architect, Ralph Adams Cram, the Collegiate Gothic style remained the standard for all new building on the Princeton campus through 1960. A flurry of construction in the 1960s produced a number of new buildings on the south side of the main campus, many of which have been poorly received. Several prominent architects have contributed more recent additions, including Frank Gehry (Lewis Library), I. M. Pei (Spelman Halls), Demetri Porphyrios (Whitman College, a Collegiate Gothic project), Robert Venturi and Denise Scott Brown (Frist Campus Center, among several others), and Rafael Viñoly (Carl Icahn Laboratory).

    A group of 20th-century sculptures scattered throughout the campus forms the Putnam Collection of Sculpture. It includes works by Alexander Calder (Five Disks: One Empty), Jacob Epstein (Albert Einstein), Henry Moore (Oval with Points), Isamu Noguchi (White Sun), and Pablo Picasso (Head of a Woman). Richard Serra’s The Hedgehog and The Fox is located between Peyton and Fine halls next to Princeton Stadium and the Lewis Library.

    At the southern edge of the campus is Carnegie Lake, an artificial lake named for Andrew Carnegie. Carnegie financed the lake’s construction in 1906 at the behest of a friend who was a Princeton alumnus. Carnegie hoped the opportunity to take up rowing would inspire Princeton students to forsake football, which he considered “not gentlemanly.” The Shea Rowing Center on the lake’s shore continues to serve as the headquarters for Princeton rowing.

    Cannon Green

    Buried in the ground at the center of the lawn south of Nassau Hall is the “Big Cannon,” which was left in Princeton by British troops as they fled following the Battle of Princeton. It remained in Princeton until the War of 1812, when it was taken to New Brunswick. In 1836 the cannon was returned to Princeton and placed at the eastern end of town. It was removed to the campus under cover of night by Princeton students in 1838 and buried in its current location in 1840.

    A second “Little Cannon” is buried in the lawn in front of nearby Whig Hall. This cannon, which may also have been captured in the Battle of Princeton, was stolen by students of Rutgers University in 1875. The theft ignited the Rutgers-Princeton Cannon War. A compromise between the presidents of Princeton and Rutgers ended the war and forced the return of the Little Cannon to Princeton. The protruding cannons are occasionally painted scarlet by Rutgers students who continue the traditional dispute.

    In years when the Princeton football team beats the teams of both Harvard University and Yale University in the same season, Princeton celebrates with a bonfire on Cannon Green. This occurred in 2012, ending a five-year drought. The next bonfire happened on November 24, 2013, and was broadcast live over the Internet.


    Princeton’s grounds were designed by Beatrix Farrand between 1912 and 1943. Her contributions were most recently recognized with the naming of a courtyard for her. Subsequent changes to the landscape were introduced by Quennell Rothschild & Partners in 2000. In 2005, Michael Van Valkenburgh was hired as the new consulting landscape architect for the campus. Lynden B. Miller was invited to work with him as Princeton’s consulting gardening architect, focusing on the 17 gardens that are distributed throughout the campus.


    Nassau Hall

    Nassau Hall is the oldest building on campus. Begun in 1754 and completed in 1756, it was the first seat of the New Jersey Legislature in 1776, was involved in the Battle of Princeton in 1777, and was the seat of the Congress of the Confederation (and thus the capital of the United States) from June 30, 1783, to November 4, 1783. It now houses the office of the university president and other administrative offices, and remains the symbolic center of the campus. The front entrance is flanked by two bronze tigers, a gift of the Princeton Class of 1879. Commencement is held on the front lawn of Nassau Hall in good weather. In 1966, Nassau Hall was added to the National Register of Historic Places.

    Residential colleges

    Princeton has six undergraduate residential colleges, each housing approximately 500 students: freshmen, sophomores, some juniors and seniors, and a handful of junior and senior resident advisers. Each college consists of a set of dormitories, a dining hall, a variety of other amenities — such as study spaces, libraries, performance spaces, and darkrooms — and a collection of administrators and associated faculty. Two colleges, First College and Forbes College (formerly Woodrow Wilson College and Princeton Inn College, respectively), date to the 1970s; three others, Rockefeller, Mathey, and Butler Colleges, were created in 1983 following the Committee on Undergraduate Residential Life (CURL) report, which suggested the institution of residential colleges as a solution to an allegedly fragmented campus social life. The construction of Whitman College, the university’s sixth residential college, was completed in 2007.

    Rockefeller and Mathey are located in the northwest corner of the campus; Princeton brochures often feature their Collegiate Gothic architecture. Like most of Princeton’s Gothic buildings, they predate the residential college system and were fashioned into colleges from individual dormitories.

    First and Butler, located south of the center of the campus, were built in the 1960s. First served as an early experiment in the establishment of the residential college system. Butler, like Rockefeller and Mathey, consisted of a collection of ordinary dorms (called the “New New Quad”) before the addition of a dining hall made it a residential college. Widely disliked for their edgy modernist design, including “waffle ceilings,” the dormitories on the Butler Quad were demolished in 2007. Butler has since reopened as a four-year residential college, housing both underclassmen and upperclassmen.

    Forbes is located on the site of the historic Princeton Inn, a gracious hotel overlooking the Princeton golf course. The Princeton Inn, originally constructed in 1924, played regular host to important symposia and gatherings of renowned scholars from both the university and the nearby Institute for Advanced Study for many years. Forbes currently houses nearly 500 undergraduates in its residential halls.

    In 2003, Princeton broke ground for a sixth college named Whitman College after its principal sponsor, Meg Whitman, who graduated from Princeton in 1977. The new dormitories were constructed in the Collegiate Gothic architectural style and were designed by architect Demetri Porphyrios. Construction finished in 2007, and Whitman College was inaugurated as Princeton’s sixth residential college that same year.

    The precursor of the present college system in America was originally proposed by university president Woodrow Wilson in the early 20th century. The collegiate system, however, had already existed in Britain for centuries at the University of Cambridge and the University of Oxford. Wilson’s model was much closer to Yale University’s present system, which features four-year colleges. Lacking the support of the trustees, the plan languished until 1968. That year, Wilson College was established to cap a series of alternatives to the eating clubs. Fierce debates raged before the present residential college system emerged. The plan was first attempted at Yale, but the administration was initially uninterested; an exasperated alumnus, Edward Harkness, finally paid to have the college system implemented at Harvard in the 1920s, leading to the oft-quoted aphorism that the college system is a Princeton idea that was executed at Harvard with funding from Yale.

    Princeton has one graduate residential college, known simply as the Graduate College, located beyond Forbes College at the outskirts of campus. The far-flung location of the GC was the spoil of a squabble between Woodrow Wilson and then-Graduate School Dean Andrew Fleming West. Wilson preferred a central location for the college; West wanted the graduate students as far as possible from the campus. Ultimately, West prevailed. The Graduate College is composed of a large Collegiate Gothic section crowned by Cleveland Tower, a local landmark that also houses a world-class carillon. The attached New Graduate College provides a modern contrast in architectural style.

    McCarter Theatre

    The Tony Award-winning McCarter Theatre was built by the Princeton Triangle Club, a student performance group, using club profits and a gift from Princeton University alumnus Thomas McCarter. Today, the Triangle Club performs its annual freshman revue, fall show, and Reunions performances in McCarter. McCarter is also recognized as one of the leading regional theaters in the United States.

    Art Museum

    The Princeton University Art Museum was established in 1882 to give students direct, intimate, and sustained access to original works of art that complement and enrich instruction and research at the university. This continues to be a primary function, along with serving as a community resource and a destination for national and international visitors.

    Numbering over 92,000 objects, the collections range from ancient to contemporary art and concentrate geographically on the Mediterranean regions, Western Europe, China, the United States, and Latin America. There is a collection of Greek and Roman antiquities, including ceramics, marbles, bronzes, and Roman mosaics from faculty excavations in Antioch. Medieval Europe is represented by sculpture, metalwork, and stained glass. The collection of Western European paintings includes examples from the early Renaissance through the 19th century, with masterpieces by Monet, Cézanne, and Van Gogh, and features a growing collection of 20th-century and contemporary art, including iconic paintings such as Andy Warhol’s Blue Marilyn.

    One of the best features of the museum is its collection of Chinese art, with important holdings in bronzes, tomb figurines, painting, and calligraphy. Its collection of pre-Columbian art includes examples of Mayan art, and is commonly considered to be the most important collection of pre-Columbian art outside of Latin America. The museum has collections of old master prints and drawings and a comprehensive collection of over 27,000 original photographs. African art and Northwest Coast Indian art are also represented. The Museum also oversees the outdoor Putnam Collection of Sculpture.

    University Chapel

    The Princeton University Chapel is located on the north side of campus, near Nassau Street. It was built between 1924 and 1928, at a cost of $2.3 million [approximately $34.2 million in 2020 dollars]. Ralph Adams Cram, the University’s supervising architect, designed the chapel, which he viewed as the crown jewel for the Collegiate Gothic motif he had championed for the campus. At the time of its construction, it was the second largest university chapel in the world, after King’s College Chapel, Cambridge. It underwent a two-year, $10 million restoration campaign between 2000 and 2002.

    Measured on the exterior, the chapel is 277 feet (84 m) long, 76 feet (23 m) wide at its transepts, and 121 feet (37 m) high. The exterior is Pennsylvania sandstone, with Indiana limestone used for the trim. The interior is mostly limestone and Aquia Creek sandstone. The design evokes an English church of the Middle Ages. The extensive iconography, in stained glass, stonework, and wood carvings, has the common theme of connecting religion and scholarship.

    The Chapel seats almost 2,000. It hosts weekly ecumenical Christian services, daily Roman Catholic mass, and several annual special events.

    Murray-Dodge Hall

    Murray-Dodge Hall houses the Office of Religious Life (ORL), the Murray Dodge Theater, the Murray-Dodge Café, the Muslim Prayer Room and the Interfaith Prayer Room. The ORL houses the office of the Dean of Religious Life, Alison Boden, and a number of university chaplains, including the country’s first Hindu chaplain, Vineet Chander; and one of the country’s first Muslim chaplains, Sohaib Sultan.


    Published in 2008, Princeton’s Sustainability Plan highlights three priority areas for the University’s Office of Sustainability: reduction of greenhouse gas emissions; conservation of resources; and research, education, and civic engagement. Princeton has committed to reducing its carbon dioxide emissions to 1990 levels by 2020 without the purchase of offsets. The University published its first Sustainability Progress Report in November 2009. The University has adopted a green purchasing policy and recycling program that focuses on paper products, construction materials, lightbulbs, furniture, and electronics. Its dining halls have set a goal to purchase 75% sustainable food products by 2015. The student organization “Greening Princeton” seeks to encourage the University administration to adopt environmentally friendly policies on campus.


    The Trustees of Princeton University, a 40-member board, is responsible for the overall direction of the University. It approves the operating and capital budgets, supervises the investment of the University’s endowment and oversees campus real estate and long-range physical planning. The trustees also exercise prior review and approval concerning changes in major policies, such as those in instructional programs and admission, as well as tuition and fees and the hiring of faculty members.

    With an endowment of $26.1 billion, Princeton University is among the wealthiest universities in the world. Ranked in 2010 as the third largest endowment in the United States, the university had the greatest per-student endowment in the world (over $2 million for undergraduates) in 2011. Such a significant endowment is sustained through the continued donations of its alumni and is maintained by investment advisers. Some of Princeton’s wealth is invested in its art museum, which features works by Claude Monet, Vincent van Gogh, Jackson Pollock, and Andy Warhol among other prominent artists.


    Undergraduates fulfill general education requirements, choose among a wide variety of elective courses, and pursue departmental concentrations and interdisciplinary certificate programs. Required independent work is a hallmark of undergraduate education at Princeton. Students graduate with either the Bachelor of Arts (A.B.) or the Bachelor of Science in Engineering (B.S.E.).

    The graduate school offers advanced degrees spanning the humanities, social sciences, natural sciences, and engineering. Doctoral education, available in most disciplines, emphasizes original and independent scholarship, whereas the master’s degree programs in architecture, engineering, finance, and public affairs and public policy prepare candidates for careers in public life and professional practice.

    The university has ties with the Institute for Advanced Study, Princeton Theological Seminary, and the Westminster Choir College of Rider University.


    Undergraduate courses in the humanities are traditionally either seminars or lectures held two or three times a week, with an additional discussion seminar called a “precept.” To graduate, all A.B. candidates must complete a senior thesis and, in most departments, one or two extensive pieces of independent research known as “junior papers.” Juniors in some departments, including architecture and the creative arts, complete independent projects that differ from written research papers. A.B. candidates must also fulfill a three- or four-semester foreign language requirement and distribution requirements (which include, for example, classes in ethics, literature and the arts, and historical analysis), with a total of 31 classes. B.S.E. candidates follow a parallel track with an emphasis on a rigorous science and math curriculum, a computer science requirement, and at least two semesters of independent research, including an optional senior thesis. All B.S.E. students must complete at least 36 classes. A.B. candidates typically have more freedom in course selection than B.S.E. candidates because they have fewer required classes. Nonetheless, in the spirit of a liberal arts education, both enjoy a comparatively high degree of latitude in creating a self-structured curriculum.

    Undergraduates agree to adhere to an academic integrity policy called the Honor Code, established in 1893. Under the Honor Code, faculty do not proctor examinations; instead, the students proctor one another and must report any suspected violation to an Honor Committee made up of undergraduates. The Committee investigates reported violations and holds a hearing if it is warranted. An acquittal at such a hearing results in the destruction of all records of the hearing; a conviction results in the student’s suspension or expulsion. The signed pledge required by the Honor Code is so integral to students’ academic experience that the Princeton Triangle Club performs a song about it each fall. Out-of-class exercises fall under the jurisdiction of the Faculty-Student Committee on Discipline. Undergraduates are expected to sign a pledge on their written work affirming that they have not plagiarized the work.


    The Graduate School has about 2,600 students in 42 academic departments and programs in the social sciences, engineering, natural sciences, and humanities, including the Department of Psychology, the Department of History, and the Department of Economics.

    In 2017–2018, the Graduate School received nearly 11,000 applications for admission and accepted around 1,000 applicants. The University also awarded 319 Ph.D. degrees and 170 final master’s degrees. Princeton has no medical school, law school, business school, or school of education. (A short-lived Princeton Law School folded in 1852.) It offers professional graduate degrees in architecture, engineering, finance, and public policy, the last through the Princeton School of Public and International Affairs, which was founded in 1930 as the School of Public and International Affairs, renamed in 1948 after university president (and U.S. president) Woodrow Wilson, and renamed again in 2020.


    The Princeton University Library system houses over eleven million holdings, including seven million bound volumes. The main university library, Firestone Library, which houses almost four million volumes, is one of the largest university libraries in the world. Additionally, it is among the largest “open stack” libraries in existence. Its collections include the autographed manuscript of F. Scott Fitzgerald’s The Great Gatsby and George F. Kennan’s Long Telegram. In addition to Firestone Library, specialized libraries exist for architecture, art and archaeology, East Asian studies, engineering, music, public and international affairs, public policy and university archives, and the sciences. In an effort to expand access, these libraries also subscribe to thousands of electronic resources.


    High Meadows Environmental Institute

    The High Meadows Environmental Institute is an “interdisciplinary center of environmental research, education, and outreach” at the university. The institute was started in 1994. About 90 faculty members at Princeton University are affiliated with it.

    The High Meadows Environmental Institute has the following research centers:

    Carbon Mitigation Initiative (CMI): This is a 15-year-long partnership between the institute (then known as the Princeton Environmental Institute) and British Petroleum with the goal of finding solutions to problems related to climate change. The Stabilization Wedge Game was created as part of this initiative.
    Center for BioComplexity (CBC)
    Cooperative Institute for Climate Science (CICS): This is a collaboration with the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory.
    Energy Systems Analysis Group
    Grand Challenges

    Princeton Plasma Physics Laboratory

    The DOE’s Princeton Plasma Physics Laboratory was founded in 1951 as Project Matterhorn, a top-secret Cold War project aimed at achieving controlled nuclear fusion. Princeton astrophysics professor Lyman Spitzer became the first director of the project and remained director until the lab’s declassification in 1961, when it received its current name.

    PPPL currently houses approximately half of the graduate astrophysics department, the Princeton Program in Plasma Physics. The lab is also home to the Harold P. Furth Plasma Physics Library, which contains all declassified Project Matterhorn documents, including the first design sketch of a stellarator by Lyman Spitzer.

    Princeton is one of five US universities to host and operate a Department of Energy national laboratory.

    Student life and culture

    University housing is guaranteed to all undergraduates for all four years. More than 98% of students live on campus in dormitories. Freshmen and sophomores must live in residential colleges, while juniors and seniors typically live in designated upperclassman dormitories. The actual dormitories are comparable, but only residential colleges have dining halls. Nonetheless, any undergraduate may purchase a meal plan and eat in a residential college dining hall. Recently, upperclassmen have been given the option of remaining in their college for all four years. Juniors and seniors also have the option of living off-campus, but high rent in the Princeton area encourages almost all students to live in university housing. Undergraduate social life revolves around the residential colleges and a number of coeducational eating clubs, which students may choose to join in the spring of their sophomore year. Eating clubs, which are not officially affiliated with the university, serve as dining halls and communal spaces for their members and also host social events throughout the academic year.

    Princeton’s six residential colleges host a variety of social events and activities, guest speakers, and trips. The residential colleges also sponsor trips to New York for undergraduates to see ballets, operas, Broadway shows, sports events, and other activities. The eating clubs, located on Prospect Avenue, are co-ed organizations for upperclassmen. Most upperclassmen eat their meals at one of the eleven eating clubs. Additionally, the clubs serve as evening and weekend social venues for members and guests. The eleven clubs are Cannon, Cap and Gown, Charter, Cloister, Colonial, Cottage, Ivy, Quadrangle, Terrace, Tiger, and Tower.

    Princeton hosts two Model United Nations conferences, PMUNC in the fall for high school students and PDI in the spring for college students. It also hosts the Princeton Invitational Speech and Debate tournament each year at the end of November. Princeton also runs Princeton Model Congress, an event that is held once a year in mid-November. The four-day conference has high school students from around the country as participants.

    Although the school’s admissions policy is need-blind, Princeton, based on the proportion of students who receive Pell Grants, was ranked as a school with little economic diversity among all national universities ranked by U.S. News & World Report. While Pell figures are widely used as a gauge of the number of low-income undergraduates on a given campus, the rankings article cautions “the proportion of students on Pell Grants isn’t a perfect measure of an institution’s efforts to achieve economic diversity,” but goes on to say that “still, many experts say that Pell figures are the best available gauge of how many low-income undergrads there are on a given campus.”

    TigerTrends is a student-run fashion, arts, and lifestyle magazine based at the university.


    Princeton has made significant progress in expanding the diversity of its student body in recent years. The 2019 freshman class was one of the most diverse in the school’s history, with 61% of students identifying as students of color. Undergraduate and master’s students were 51% male and 49% female for the 2018–19 academic year.

    The median family income of Princeton students is $186,100, with 57% of students coming from the top 10% highest-earning families and 14% from the bottom 60%.

    In 1999, 10% of the student body was Jewish, a percentage lower than those at other Ivy League schools. Sixteen percent of the student body was Jewish in 1985; the number decreased by 40% from 1985 to 1999. This decline prompted The Daily Princetonian to write a series of articles on the decline and its reasons. Caroline C. Pam of The New York Observer wrote that Princeton was “long dogged by a reputation for anti-Semitism” and that this history as well as Princeton’s elite status caused the university and its community to feel sensitivity towards the decrease of Jewish students. At the time many Jewish students at Princeton dated Jewish students at the University of Pennsylvania in Philadelphia because they perceived Princeton as an environment where it was difficult to find romantic prospects; Pam stated that there was a theory that the dating issues were a cause of the decline in Jewish students.

    In 1981, African Americans made up less than 10% of the student body at Princeton University. Bruce M. Wright was admitted to the university in 1936 as its first African American student; however, his admission was a mistake, and when he arrived on campus he was asked to leave. Three years later, Wright asked the dean for an explanation of his dismissal, and the dean suggested to him that “a member of your race might feel very much alone” at Princeton University.


    Princeton enjoys a wide variety of campus traditions, some of which, like the Clapper Theft and Nude Olympics, have faded into history:

    Arch Sings – Late-night concerts that feature one or several of Princeton’s undergraduate a cappella groups, such as the Princeton Nassoons, Princeton Tigertones, Princeton Footnotes, Princeton Roaring 20, and The Princeton Wildcats. The free concerts take place in one of the larger arches on campus. Most are held in Blair Arch or Class of 1879 Arch.

    Bonfire – Ceremonial bonfire that takes place in Cannon Green behind Nassau Hall. It is held only if Princeton beats both Harvard University and Yale University at football in the same season. The most recent bonfire was lighted on November 18, 2018.

    Bicker – Selection process for new members that is employed by selective eating clubs. Prospective members, or bickerees, are required to perform a variety of activities at the request of current members.

    Cane Spree – An athletic competition between freshmen and sophomores that is held in the fall. The event centers on cane wrestling, where a freshman and a sophomore will grapple for control of a cane. This commemorates a time in the 1870s when sophomores, angry with the freshmen who strutted around with fancy canes, stole all of the canes from the freshmen, hitting them with their own canes in the process.

    The Clapper or Clapper Theft – The act of climbing to the top of Nassau Hall to steal the bell clapper, which rings to signal the start of classes on the first day of the school year. For safety reasons, the clapper has been removed permanently.

    Class Jackets (Beer Jackets) – Each graduating class designs a Class Jacket that features its class year. The artwork is almost invariably dominated by the school colors and tiger motifs.

    Communiversity – An annual street fair with performances, arts and crafts, and other activities that attempts to foster interaction between the university community and the residents of Princeton.

    Dean’s Date – The Tuesday at the end of each semester when all written work is due. This day signals the end of reading period and the beginning of final examinations. Traditionally, undergraduates gather outside McCosh Hall before the 5:00 PM deadline to cheer on fellow students who have left their work to the very last minute.

    FitzRandolph Gates – At the end of Princeton’s graduation ceremony, the new graduates process out through the main gate of the university as a symbol of the fact that they are leaving college. According to tradition, anyone who exits campus through the FitzRandolph Gates before his or her own graduation date will not graduate.

    Holder Howl – At midnight before Dean’s Date, students from Holder Hall and elsewhere gather in the Holder courtyard and take part in a minute-long communal primal scream, venting the frustrations of studying with impromptu late-night noise-making.

    Houseparties – Formal parties that are held simultaneously by all of the eating clubs at the end of the spring term.

    Ivy stones – Class memorial stones placed on the exterior walls of academic buildings around the campus.

    Lawnparties – Parties that feature live bands that are held simultaneously by all of the eating clubs at the start of classes and at the conclusion of the academic year.

    Princeton Locomotive – Traditional cheer in use since the 1890s. It is commonly heard at Opening Exercises in the fall as alumni and current students welcome the freshman class, as well as the P-rade in the spring at Princeton Reunions. The cheer starts slowly and picks up speed, and includes the sounds heard at a fireworks show.

    Hip! Hip!
    Rah, Rah, Rah,
    Tiger, Tiger, Tiger,
    Sis, Sis, Sis,
    Boom, Boom, Boom, Ah!
    Princeton! Princeton! Princeton!

    Or if a class is being celebrated, the last line consists of the class year repeated three times, e.g. “Eighty-eight! Eighty-eight! Eighty-eight!”

    Newman’s Day – Students attempt to drink 24 beers in the 24 hours of April 24. According to The New York Times, “the day got its name from an apocryphal quote attributed to Paul Newman: ’24 beers in a case, 24 hours in a day. Coincidence? I think not.'” Newman had spoken out against the tradition, however.

    Nude Olympics – Annual nude and partially nude frolic in Holder Courtyard that takes place during the first snow of the winter. Started in the early 1970s, the Nude Olympics went co-educational in 1979 and gained much notoriety with the American press. For safety reasons, the administration banned the Olympics in 2000 to the chagrin of students.

    Prospect 11 – The act of drinking a beer at all 11 eating clubs in a single night.

    P-rade – Traditional parade of alumni and their families, who process through campus by class year during Reunions.

    Reunions – Massive annual gathering of alumni held the weekend before graduation.


    Princeton supports organized athletics at three levels: varsity intercollegiate, club intercollegiate, and intramural. It also provides “a variety of physical education and recreational programs” for members of the Princeton community. According to the athletics program’s mission statement, Princeton aims for its students who participate in athletics to be “‘student athletes’ in the fullest sense of the phrase.” Most undergraduates participate in athletics at some level.

    Princeton’s colors are orange and black. The school’s athletes are known as Tigers, and the mascot is a tiger. The Princeton administration considered naming the mascot in 2007, but the effort was dropped in the face of alumni opposition.


    Princeton is an NCAA Division I school. Its athletic conference is the Ivy League. Princeton hosts 38 men’s and women’s varsity sports. The largest varsity sport is rowing, with almost 150 athletes.

    Princeton’s football team has a long and storied history. Princeton played against Rutgers University in the first intercollegiate football game in the U.S. on November 6, 1869. Rutgers won the game 6–4, under rules similar to modern rugby. Today Princeton is a member of the Football Championship Subdivision of NCAA Division I. As of the end of the 2010 season, Princeton had won 26 national football championships, more than any other school.

    Club and intramural

    In addition to varsity sports, Princeton hosts about 35 club sports teams. Princeton’s rugby team is organized as a club sport. Princeton’s sailing team is also a club sport, though it competes at the varsity level in the MAISA conference of the Inter-Collegiate Sailing Association.

    Each year, nearly 300 teams participate in intramural sports at Princeton. Intramurals are open to members of Princeton’s faculty, staff, and students, though a team representing a residential college or eating club must consist only of members of that college or club. Several leagues with differing levels of competitiveness are available.


    Notable among the songs commonly played and sung at events such as commencement, convocation, and athletic games is “Princeton Cannon Song,” the Princeton University fight song.

    Bob Dylan wrote “Day of the Locusts” for his 1970 album New Morning about his experience of receiving an honorary doctorate from the university. The song alludes to the negative experience he had there and mentions the Brood X cicada infestation Princeton experienced in June 1970.

    “Old Nassau”

    Old Nassau has been Princeton University’s anthem since 1859. Its words were written that year by a freshman, Harlan Page Peck, and published in the March issue of the Nassau Literary Review (the oldest student publication at Princeton and also the second oldest undergraduate literary magazine in the country). The words and music appeared together for the first time in Songs of Old Nassau, published in April 1859. Before the Langlotz tune was written, the song was sung to Auld Lang Syne’s melody, which also fits.

    However, Old Nassau does not refer only to the university’s anthem. It can also refer to Nassau Hall, the building that was built in 1756 and named after William III of the House of Orange-Nassau. When built, it was the largest college building in North America. It served briefly as the capital of the United States when the Continental Congress convened there in the summer of 1783. By metonymy, the term can refer to the university as a whole. Finally, it can also refer to a chemical reaction dubbed the “Old Nassau reaction” because the solution turns orange and then black.
    Princeton Shield

    Columbia U Campus

    Columbia University was founded in 1754 as King’s College by royal charter of King George II of Great Britain. It is the oldest institution of higher learning in the state of New York and the fifth oldest in the United States.

    University Mission Statement

    Columbia University is one of the world’s most important centers of research and at the same time a distinctive and distinguished learning environment for undergraduates and graduate students in many scholarly and professional fields. The University recognizes the importance of its location in New York City and seeks to link its research and teaching to the vast resources of a great metropolis. It seeks to attract a diverse and international faculty and student body, to support research and teaching on global issues, and to create academic relationships with many countries and regions. It expects all areas of the University to advance knowledge and learning at the highest level and to convey the products of its efforts to the world.

    Columbia University is a private Ivy League research university in New York City. Established in 1754 on the grounds of Trinity Church in Manhattan, Columbia is the oldest institution of higher education in New York and the fifth-oldest institution of higher learning in the United States. It is one of nine colonial colleges founded prior to the Declaration of Independence, seven of which belong to the Ivy League. Columbia is ranked among the top universities in the world by major education publications.

    Columbia was established as King’s College by royal charter from King George II of Great Britain in reaction to the founding of Princeton College. It was renamed Columbia College in 1784 following the American Revolution, and in 1787 was placed under a private board of trustees headed by former students Alexander Hamilton and John Jay. In 1896, the campus was moved to its current location in Morningside Heights and renamed Columbia University.

    Columbia scientists and scholars have played an important role in scientific breakthroughs including brain-computer interface; the laser and maser; nuclear magnetic resonance; the first nuclear pile; the first nuclear fission reaction in the Americas; the first evidence for plate tectonics and continental drift; and much of the initial research and planning for the Manhattan Project during World War II. Columbia is organized into twenty schools, including four undergraduate schools and 15 graduate schools. The university’s research efforts include the Lamont–Doherty Earth Observatory, the Goddard Institute for Space Studies, and accelerator laboratories with major technology firms such as IBM. Columbia is a founding member of the Association of American Universities and was the first school in the United States to grant the M.D. degree. With over 14 million volumes, Columbia University Library is the third largest private research library in the United States.

    The university’s endowment stands at $11.26 billion in 2020, among the largest of any academic institution. As of October 2020, Columbia’s alumni, faculty, and staff have included: five Founding Fathers of the United States—among them a co-author of the United States Constitution and a co-author of the Declaration of Independence; three U.S. presidents; 29 foreign heads of state; ten justices of the United States Supreme Court, one of whom currently serves; 96 Nobel laureates; five Fields Medalists; 122 National Academy of Sciences members; 53 living billionaires; eleven Olympic medalists; 33 Academy Award winners; and 125 Pulitzer Prize recipients.

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 2:17 pm on August 1, 2022 Permalink | Reply
    Tags: "Antipodal duality": A hidden connection between two different phenomena that couldn’t be explained by our current understanding of physics, "Particle Physicists Puzzle Over a New Duality", "Scattering amplitudes": the probabilities of possible outcomes of particle collisions, Antipodal duality has been confirmed for high-precision calculations involving 93 million terms., , , , Quanta Magazine   

    From “Quanta Magazine” : “Particle Physicists Puzzle Over a New Duality” 

    From “Quanta Magazine”

    Katie McCormick

    The new “antipodal duality” inverts the terms used to calculate one particle scattering process to get the terms for another, in a way that’s similar to inverting the coordinates of points on a sphere. Kristina Armitage for Quanta Magazine.

    Last year, the particle physicist Lance Dixon was preparing a lecture when he noticed a striking similarity between two formulas that he planned to include in his slides.

    The formulas, called “scattering amplitudes”, give the probabilities of possible outcomes of particle collisions. One of the scattering amplitudes represented the probability of two gluon particles colliding and producing four gluons; the other gave the probability of two gluons colliding to produce a gluon and a Higgs particle.

    “I was getting a little confused because they looked kind of similar,” said Dixon, who is a professor at Stanford University, “and then I realized that the numbers were basically the same — it’s just that the [order] had gotten reversed.”

    He shared his observation with his collaborators over Zoom. Knowing of no reason the two scattering amplitudes should correspond, the group thought perhaps it was a coincidence. They started calculating the two amplitudes at progressively higher levels of precision (the greater the precision, the more terms they had to compare). By the end of the call, having calculated thousands of terms that kept agreeing, the physicists were pretty certain they were dealing with a new duality — a hidden connection between two different phenomena that couldn’t be explained by our current understanding of physics.

    Now, the antipodal duality, as the researchers are calling it, has been confirmed for high-precision calculations involving 93 million terms. While this duality arises in a simplified theory of gluons and other particles that does not quite describe our universe, there are clues that a similar duality might hold in the real world. Researchers hope that investigating the strange finding could help them make new connections between seemingly unrelated aspects of particle physics.

    “This is a magnificent discovery because it is totally unexpected,” said Anastasia Volovich, a particle physicist at Brown University, “and there is still no explanation of why it should be true.”

    The DNA of Particle Scattering

    Dixon and his team discovered the antipodal duality by using a special “code” to compute scattering amplitudes more efficiently than they could with traditional methods. Typically, to figure out the probability of two high-energy gluons scattering to produce four lower-energy gluons, for example, you must consider all the possible pathways that might yield this outcome. You know the beginning and the end of the story (two gluons become four), but you also need to know the middle — including all the particles that can temporarily pop in and out of existence, thanks to quantum uncertainty. Traditionally, you must add up the probability of each possible middle event, taking them one at a time.

    In 2010, these cumbersome calculations were circumvented by four researchers, including Volovich, who found a shortcut [Physical Review Letters (below)]. They realized that many of the complicated expressions in an amplitude calculation could be eliminated by reorganizing everything into a new structure. The six basic elements of the new structure, called “letters,” are variables representing combinations of each particle’s energy and momentum. The six letters make up words, and the words combine to form terms in each scattering amplitude.

    Dixon compares this new scheme to the genetic code, in which four chemical building blocks combine to form the genes in a strand of DNA. Like the genetic code, the “DNA of particle scattering,” as he calls it, has rules about which combinations of words are allowed. Some of these rules follow from known physical or mathematical principles, but others seem arbitrary. The only way to discover some of the rules is by looking for hidden patterns in the lengthy calculations.

    Once found, these inscrutable rules have helped particle physicists calculate scattering amplitudes at much higher levels of precision than they could achieve with the traditional approach. The restructuring also allowed Dixon and his collaborators to spot the hidden connection between the two seemingly unrelated scattering amplitudes.

    Antipode Map

    At the heart of the duality is the “antipode map.” In geometry, an antipode map takes a point on a sphere and inverts the coordinates, sending you straight through the sphere’s center to a point on the other side. It’s the mathematical equivalent of digging a hole from Chile to China.
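    In coordinates, the geometric antipode map is simply negation: it sends a point (x, y, z) on the sphere to (−x, −y, −z) on the opposite side. A minimal Python sketch of this standard construction:

    ```python
    import math

    def antipode(point):
        """Send a point on the sphere to the diametrically opposite
        point by negating every coordinate."""
        return tuple(-x for x in point)

    # A point on the unit sphere (x^2 + y^2 + z^2 = 1).
    p = (0.6, 0.0, 0.8)
    q = antipode(p)

    # The antipode still lies on the sphere: the radius is unchanged.
    print(math.isclose(sum(x * x for x in q), 1.0))  # True
    # Applying the map twice returns the original point (it is an involution).
    print(antipode(q) == p)  # True
    ```

    The involution property is what makes the map a true duality: going "through the center" twice lands you back where you started.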

    In scattering amplitudes, the antipode map that Dixon found is a bit more abstract. It inverts the order of the letters used to calculate the amplitude. Apply this antipode map to all the terms in the scattering amplitude for two gluons becoming four, and (after a simple change of variables) this yields the amplitude for two gluons becoming one gluon plus a Higgs.
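    As a loose illustration of that letter-reversal (the letter names below are hypothetical placeholders, not the actual kinematic variables in the amplitudes), the map on each "word" might be sketched as:

    ```python
    def antipode_word(word):
        """Reverse the order of 'letters' in a word, mimicking how the
        antipodal duality inverts letter order in an amplitude's terms.
        Letter names here are illustrative placeholders only."""
        return word[::-1]

    # A toy amplitude: a list of words, each word a tuple of letters.
    toy_amplitude = [("a", "b", "c"), ("b", "d", "a", "e")]
    mapped = [antipode_word(w) for w in toy_amplitude]
    print(mapped)  # [('c', 'b', 'a'), ('e', 'a', 'd', 'b')]
    ```

    Like the geometric antipode, this reversal is an involution: applying it twice recovers the original word, while the real duality additionally requires the change of variables mentioned above.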

    In Dixon’s DNA analogy, the duality is like reading a genetic sequence backward and realizing that it encodes a totally new protein unrelated to the one encoded by the original sequence.

    “We all used to be convinced that the antipode map was useless. … It didn’t seem to have any physical significance, or to do anything meaningful,” said Matt von Hippel, an amplitude specialist at the Niels Bohr Institute in Copenhagen who wasn’t involved in the research. “And now there’s this totally inexplicable duality using it, which is pretty wild.”

    Not Quite Our World

    There are now two big questions. First, why does the duality exist? And second, will a similar connection be found to hold in the real world?

    The 17 known elementary particles that comprise our world abide by a set of equations called the Standard Model of particle physics. According to the Standard Model, two gluons, the massless particles that glue together atomic nuclei, easily interact with each other to double their own number, becoming four gluons. However, to produce one gluon and one Higgs particle, colliding gluons must first morph into a quark and an antiquark; these then transform into a gluon and a Higgs via a different force than the one governing gluons’ mutual interactions.

    These two scattering processes are so different, with one involving an entirely different sector of the Standard Model, that a duality between them would be very surprising.

    But the antipodal duality is also unexpected even in the simplified model of particle physics that Dixon and his colleagues were studying. Their toy model governs fictional gluons with extra symmetries, which enable more precise calculations of scattering amplitudes. The duality links a scattering process involving these gluons and one that requires an external interaction with particles described by a different theory.

    Dixon thinks he has a very tenuous clue about where the duality comes from.

    Recall those inexplicable rules found by Volovich and her colleagues that dictate which combinations of words are allowed in a scattering amplitude. Some of the rules seem to arbitrarily restrict which letters can appear next to each other in the two-gluon-to-gluon-plus-Higgs amplitude. But map those rules over to the other side of the duality, and they transform into a set of well-established rules [Physical Review D (below)] that ensure causality — guaranteeing that the interactions between incoming particles occur before the outgoing particles appear.

    For Dixon, this is a tiny hint at a deeper physical connection between the two amplitudes, and a reason to think something similar might hold in the Standard Model. “But it’s pretty weak,” he said. “It’s, like, secondhand information.”

    Other dualities between disparate physical phenomena have already been found. The AdS-CFT correspondence, for example, in which a theoretical world without gravity is dual to a world with gravity, has fueled thousands of research papers since its 1997 discovery. But this duality, too, only exists for a gravitational world with a warped geometry unlike that of the actual universe. Still, for many physicists, the fact that multiple dualities almost hold in our world hints that they could be scratching the surface of an all-encompassing theoretical structure in which these surprising connections are manifest. “I think they’re all part of the story,” said Dixon.

    Science papers:
    Physical Review Letters

    Physical Review D

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

