Tagged: Quanta Magazine (US)

  • richardmitnick 3:23 pm on September 7, 2021
    Tags: "One Lab’s Quest to Build Space-Time Out of Quantum Particles", AdS: anti-de Sitter space, , CFT: conformal field theory, Pairs of atoms could be entangled together and then each pair would itself be entangled with another pair and so on forming a kind of tree., Physicists have been suggesting for over a decade that gravity — and even space-time itself — may emerge from a strange quantum connection called entanglement., , Quanta Magazine (US), , , , Testing quantum gravity without black holes or galaxy-size particle accelerators., The Standard Model despite its success is clearly incomplete.   

    From Quanta Magazine (US) and Stanford University (US): “One Lab’s Quest to Build Space-Time Out of Quantum Particles”

    From Quanta Magazine (US)

    and

    Stanford University (US)

    September 7, 2021
    Adam Becker

    Quantum particles entangled in a “tree-like” structure correspond to various configurations of space-time.
    Credit: Olena Shmahalo & Samuel Velasco/Quanta Magazine; Photo: Felix Mittermeier.

    The prospects for directly testing a theory of quantum gravity are poor, to put it mildly. To probe the ultra-tiny Planck scale, where quantum gravitational effects appear, you would need a particle accelerator as big as the Milky Way galaxy. Likewise, black holes hold singularities that are governed by quantum gravity, but no black holes are particularly close by — and even if they were, we could never hope to see what’s inside. Quantum gravity was also at work in the first moments of the Big Bang, but direct signals from that era are long gone, leaving us to decipher subtle clues that first appeared hundreds of thousands of years later.

    But in a small lab just outside Palo Alto, the Stanford University professor Monika Schleier-Smith and her team are trying a different way to test quantum gravity without black holes or galaxy-size particle accelerators. Physicists have been suggesting for over a decade that gravity — and even space-time itself — may emerge from a strange quantum connection called entanglement. Schleier-Smith and her collaborators are reverse-engineering the process. By engineering highly entangled quantum systems in a tabletop experiment, Schleier-Smith hopes to produce something that looks and acts like the warped space-time predicted by Albert Einstein’s theory of general relativity.

    In a paper posted in June, her team announced their first experimental step along this route: a system of atoms trapped by light, with connections made to order, finely controlled with magnetic fields. When tuned in the right way, the long-distance correlations in this system describe a treelike geometry, similar to ones seen in simple models of emergent space-time. Schleier-Smith and her colleagues hope to build on this work to create analogues to more complex geometries, including those of black holes. In the absence of new data from particle physics or cosmology — a state of affairs that could continue indefinitely — this could be the most promising route for putting the latest ideas about quantum gravity to the test.

    The Perils of Perfect Predictions

    For five decades, the prevailing theory of particle physics, the Standard Model, has met with almost nothing but success — to the endless frustration of particle physicists.

    The problem lies in the fact that the Standard Model, despite its success, is clearly incomplete. It doesn’t include gravity, despite the long search for a theory of quantum gravity to replace general relativity. Nor can it explain dark matter or dark energy, which together account for 95% of all the stuff in the universe. (The Standard Model also has trouble with the fact that neutrinos have mass — the sole particle physics phenomenon it has failed to predict.)

    Moreover, the Standard Model itself dictates that beyond a certain threshold of high energy — one closely related to the Planck scale — it almost certainly fails.

    Monika Schleier-Smith’s lab at Stanford is a dense maze of cables and optical equipment. “But at the end of the day,” she said, “you can make a system that is clean and controlled.”
    Credit: Dawn Harmer/DOE’s SLAC National Accelerator Laboratory (US).

    Physicists are desperate for puzzling experimental data that might help to guide them as they build the Standard Model’s replacement. String theory, still the leading candidate to replace the Standard Model, has often been accused of being untestable. But one of the strangest features of string theory suggests a way to test some ideas about quantum gravity that don’t require impractical feats of galactic architecture.

    String theory is filled with dualities — relations between different physical systems that share the same mathematical structure. Perhaps the most surprising and consequential of these dualities is a connection between a type of quantum theory in four dimensions without gravity, known as a conformal field theory (CFT), and a particular kind of five-dimensional space-time with gravity, known as an anti-de Sitter (AdS) space. This AdS/CFT correspondence, as it’s known, was first discovered in 1997 by the physicist Juan Maldacena, now at the Institute for Advanced Study (US).

    Because the CFT has one fewer dimension than the AdS space, the former can be thought of as lying on the surface of the latter, like the two-dimensional skin of a three-dimensional apple. Yet the quantum theory on the surface still fully captures all the features of the volume inside — as if you could tell everything about the interior of an apple just by looking at its skin. This is an example of what physicists call holography: a lower-dimensional space giving rise to a higher-dimensional space, like a flat hologram producing a 3D image.

    In the AdS/CFT correspondence, the interior or “bulk” space emerges from relationships between the quantum components on the surface. Specifically, the geometry of the bulk space is built from entanglement, the “spooky” quantum connections that infamously troubled Einstein. Neighboring regions of the bulk correspond to highly entangled portions of the surface. Distant regions of the bulk correspond to less entangled parts of the surface. If the surface has a simple and orderly set of entanglement relations, the corresponding bulk space will be empty. If the surface is chaotic, with all its parts entangled with all the others, the bulk will form a black hole.

    The AdS/CFT correspondence is a deep and fruitful insight into the connections between quantum physics and general relativity. But it doesn’t actually describe the world we live in. Our universe isn’t a five-dimensional anti-de Sitter space — it’s an expanding four-dimensional space with a “flat” geometry.

    So over the past few years, researchers have proposed another approach. Rather than starting from the bulk — our own universe — and looking for the kind of quantum entanglement pattern that could produce it, we can go the other way. Perhaps experimenters could build systems with interesting entanglements — like the CFT on the surface — and search for any analogues to space-time geometry and gravity that emerge.

    That’s easier said than done. It’s not yet possible to build a system like any of the strongly interacting quantum systems known to have gravitational duals. But theorists have only mapped out a small fraction of possible systems — many others are too complex to study theoretically with existing mathematical tools. To see if any of those systems actually yield some kind of space-time geometry, the only option is to physically construct them in the lab and see if they also have a gravitational dual. “These experimental constructions might help us discover such systems,” said Maldacena. “There might be simpler systems than the ones we know about.” So quantum gravity theorists have turned to experts in building and controlling entanglement in quantum systems, like Schleier-Smith and her team.

    Quantum Gravity Meets Cold Atoms

    “There’s something really just elegant about the theory of quantum mechanics that I’ve always loved,” said Schleier-Smith. “If you go into the lab, you’ll see there’s cables all over the place and all kinds of electronics we had to build and vacuum systems and messy-looking hardware. But at the end of the day, you can make a system that is clean and controlled in such a way that it does nicely map onto this sort of elegant theory that you can write down on paper.”

    This messy elegance has been a hallmark of Schleier-Smith’s work since her graduate days at The Massachusetts Institute of Technology (US), where she used light to coax collections of atoms into particular entangled states and demonstrated how to use these quantum systems to build more precise atomic clocks. After MIT, she spent a few years at the MPG Institute for Quantum Optics [MPG Institut für Quantenoptik](DE) in Garching, Germany, before landing at Stanford in 2013. A couple of years later, Brian Swingle, a theoretical physicist then at Stanford working on string theory, quantum gravity and other related subjects, reached out to her with an unusual question. “I wrote her an email saying, basically, ‘Can you reverse time in your lab?’” said Swingle. “And she said yes. And so we started talking.”

    Swingle wanted to reverse time in order to study black holes and a quantum phenomenon known as scrambling. In quantum scrambling, information about a quantum system’s state is rapidly dispersed across a larger system, making it very hard to recover the original information. “Black holes are very good scramblers of information,” said Swingle. “They hide information very well.” When an object is dropped into a black hole, information about that object is rapidly hidden from the rest of the universe. Understanding how black holes obscure information about the objects that fall into them — and whether that information is merely hidden or actually destroyed — has been a major focus of theoretical physics since the 1970s.

    In the AdS/CFT correspondence, a black hole in the bulk corresponds to a dense web of entanglement at the surface that scrambles incoming information very quickly. Swingle wanted to know what a fast-scrambling quantum system would look like in the lab, and he realized that in order to confirm scrambling was taking place as rapidly as possible, researchers would need to tightly control the quantum system in question, with the ability to perfectly reverse all interactions. “The sort of obvious way to do it required the ability to effectively fast forward and rewind the system,” said Swingle. “And that’s not something you can do in an everyday kind of experiment.” But Swingle knew Schleier-Smith’s lab might be able to control the entanglement between atoms carefully enough to perfectly reverse all their interactions, as if time were running backward. “If you have this nice, isolated, well-controlled, highly engineered quantum many-body system, then maybe you have a chance,” he said.

    So Swingle reached out to Schleier-Smith and told her what he wanted to do. “He explained to me this conjecture that this process of scrambling — that there’s a fundamental speed limit to how fast that can happen,” said Schleier-Smith. “And that if you could build a quantum system in the lab that scrambles at this fundamental speed limit, then maybe that would be some kind of an analogue of a black hole.” Their conversations continued, and in 2016, Swingle and Schleier-Smith co-authored a paper, along with Patrick Hayden, another theorist at Stanford, and Gregory Bentsen, one of Schleier-Smith’s graduate students at the time, outlining a feasible method for creating and probing fast quantum scrambling in the lab.

    That work left Schleier-Smith contemplating other quantum gravitational questions that her lab could investigate. “That made me think … maybe these are actually good platforms for being able to realize some toy models of quantum gravity that are hard to realize by other means,” she said. She started to consider a setup where pairs of atoms could be entangled together, and then each pair would itself be entangled with another pair, and so on, forming a kind of tree. “It seemed kind of far-fetched to actually do it, but at least I could sort of imagine on paper how you would design a system where you can do that,” she said. But she wasn’t sure if this actually corresponded to any known model of quantum gravity.
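    As a rough illustration of that “tree of pairs” structure (a toy sketch only, not the actual experimental scheme), one can picture atoms grouped into pairs, the pairs grouped into pairs of pairs, and so on up to a single root:

```python
# Toy sketch of the tree-like pairing structure: atoms are paired, the pairs
# are paired with other pairs, and so on. Purely illustrative; it assumes the
# number of atoms is a power of two.

def build_pairing_tree(leaves):
    """Return the successive layers of pairings, from atoms up to the root."""
    level = list(leaves)
    layers = [level]
    while len(level) > 1:
        level = [(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        layers.append(level)
    return layers

for depth, layer in enumerate(build_pairing_tree([f"atom{i}" for i in range(8)])):
    print(f"layer {depth}: {layer}")
```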

    A view of the vacuum chamber at the center of the experiment. This view, taken several years ago, is no longer possible, as too many elements have since been placed around the apparatus.
    Inside the control room, where researchers control the experiment and analyze the data. Credit: Khoi Huynh. Courtesy of Monika Schleier-Smith.

    Intense and affable, Schleier-Smith has an infectious enthusiasm for her work, as her student Bentsen discovered. He had started his doctoral work at Stanford in theoretical physics, but Schleier-Smith managed to pull him into her group anyhow. “I sort of convinced him to do experiments,” she recalled, “but he maintained an interest in theory as well, and liked to chat with theorists around the department.” She discussed her new idea with Bentsen, who discussed it with Sean Hartnoll, another theorist at Stanford. Hartnoll in turn played matchmaker, connecting Schleier-Smith and Bentsen with Steven Gubser, a theorist at Princeton University (US). (Gubser later died in a rock-climbing accident.)

    At the time, Gubser was working on a twist on the AdS/CFT correspondence. Rather than using the familiar kind of numbers that physicists generally use, he was using a set of alternative number systems known as the p-adic numbers. The key distinction between the p-adics and ordinary “real” numbers is the way the size of a number is defined. In the p-adics, a number’s size is determined by its prime factors. There’s a p-adic number system for each prime number: the 2-adics, the 3-adics, the 5-adics, and so on. In each p-adic number system, the more factors a number has that are multiples of p, the smaller that number is. So, for example, in the 2-adics, 44 is much closer to 0 than it is to 45, because 44 has two factors that are multiples of 2, while 45 doesn’t have any. But in the 3-adics, it’s the reverse; 45 is closer to 0 than to 44, because 45 has two factors that are multiples of 3. Each p-adic number system can also be represented as a kind of tree, with each branch containing numbers that all have the same number of factors that are multiples of p.
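    To make the comparison concrete, here is a minimal sketch (my own, for illustration) of the p-adic “size” of an integer: count how many times the prime p divides it, and assign the norm p raised to minus that count. A smaller norm means closer to zero.

```python
# Minimal sketch of p-adic size: |n|_p = p**(-v), where v counts how many
# times the prime p divides n. A smaller norm means "closer to zero".

def p_adic_valuation(n, p):
    """Number of times the prime p divides the nonzero integer n."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_norm(n, p):
    return p ** (-p_adic_valuation(n, p))

print(p_adic_norm(44, 2), p_adic_norm(45, 2))  # 0.25 vs 1: 44 is 2-adically smaller
print(p_adic_norm(44, 3), p_adic_norm(45, 3))  # 1 vs ~0.11: 45 is 3-adically smaller
```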

    In p-adic geometry, different branches share the same number of factors that are multiples of p.
    Credit: Samuel Velasco/Quanta Magazine.

    Using the p-adics, Gubser and others had discovered a remarkable fact about the AdS/CFT correspondence. If you rewrite the surface theory using the p-adic numbers rather than the reals, the bulk is replaced with a kind of infinite tree. Specifically, it’s a tree with infinite branches packed into a finite space, resembling the structure of the p-adic numbers themselves. The p-adics, Gubser wrote, are “naturally holographic.”

    “The structure of p-adic numbers that [Gubser] told me about reminded me of the way Monika’s atoms interacted with each other,” said Hartnoll, “so I put them in touch.” Gubser co-authored a paper in 2019 with Schleier-Smith, Bentsen and others. In the paper, the team described how to get something resembling the p-adic tree to emerge from entangled atoms in an actual lab. With the plan in hand, Schleier-Smith and her team got to work.

    Building Space-Time in the Lab

    Schleier-Smith’s lab at Stanford is a dense forest of mirrors, lenses and fiber-optic cables that surround a vacuum chamber at the center of the room. In that vacuum chamber, 18 tiny collections of rubidium atoms — about 10,000 to a group — are arranged in a line and cooled to phenomenally low temperatures, a fraction of a degree above absolute zero. A specially tuned laser and a magnetic field that increases from one end of the chamber to the other allow the experimenters to choose which groups of atoms become correlated with each other.

    Using this lab setup, Schleier-Smith and her research group were able to get the two groups of atoms at the ends of the line just as correlated as neighboring groups were in the middle of the line, connecting the ends and turning the line into a circle of correlations. They then coaxed the collection of atoms into a treelike structure. All of this was accomplished without moving the atoms at all — the correlation “geometry” was wholly disconnected from the actual spatial geometry of the atoms.

    While the tree structure formed by the interacting atoms in Schleier-Smith’s lab isn’t a full-blown realization of p-adic AdS/CFT, it’s “a first step towards holography in the laboratory,” said Hayden. Maldacena, the originator of the AdS/CFT correspondence, agrees: “I’m very excited about this,” he said. “Our subject has been always very theoretical, and so this contact with experiment will probably raise more questions.”

    Hayden sees this as the way of the future. “Instead of trying to understand the emergence of space-time in our universe, let’s actually just make toy universes in the lab and study the emergence of space-time there,” he said. “And that sounds like a crazy thing to do, right? Like kind of mad-scientist kind of crazy, right? But I think it really is likely to be easier to do that than to directly test quantum gravity.”

    Schleier-Smith is also optimistic about the future. “We’re still at the stage of getting more and more control, characterizing the quantum states that we have. But … I would love to get to that point where we don’t know what will happen,” she said. “And maybe we measure the correlations in the system, and we learn that there’s a geometric description, some holographic description that we didn’t know was there. That would be cool.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford University campus
    Stanford University (US)

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University, officially Leland Stanford Junior University, is a private research university located in Stanford, California. Stanford was founded in 1885 by Leland and Jane Stanford in memory of their only child, Leland Stanford Jr., who had died of typhoid fever at age 15 the previous year. Stanford is consistently ranked as among the most prestigious and top universities in the world by major education publications. It is also one of the top fundraising institutions in the country, becoming the first school to raise more than a billion dollars in a year.

    Leland Stanford was a U.S. senator and former governor of California who made his fortune as a railroad tycoon. The school admitted its first students on October 1, 1891, as a coeducational and non-denominational institution. Stanford University struggled financially after the death of Leland Stanford in 1893 and again after much of the campus was damaged by the 1906 San Francisco earthquake. Following World War II, provost Frederick Terman supported faculty and graduates’ entrepreneurialism to build self-sufficient local industry in what would later be known as Silicon Valley.

    The university is organized around seven schools: three schools consisting of 40 academic departments at the undergraduate level as well as four professional schools that focus on graduate programs in law, medicine, education, and business. All schools are on the same campus. Students compete in 36 varsity sports, and the university is one of two private institutions in the Division I FBS Pac-12 Conference. It has gained 126 NCAA team championships, and Stanford has won the NACDA Directors’ Cup for 24 consecutive years, beginning in 1994–1995. In addition, Stanford students and alumni have won 270 Olympic medals including 139 gold medals.

    As of October 2020, 84 Nobel laureates, 28 Turing Award laureates, and eight Fields Medalists have been affiliated with Stanford as students, alumni, faculty, or staff. In addition, Stanford is particularly noted for its entrepreneurship and is one of the most successful universities in attracting funding for start-ups. Stanford alumni have founded numerous companies, which combined produce more than $2.7 trillion in annual revenue, roughly equivalent to the 7th largest economy in the world (as of 2020). Stanford is the alma mater of one president of the United States (Herbert Hoover), 74 living billionaires, and 17 astronauts. It is also one of the leading producers of Fulbright Scholars, Marshall Scholars, Rhodes Scholars, and members of the United States Congress.

    Stanford University was founded in 1885 by Leland and Jane Stanford and dedicated to the memory of Leland Stanford Jr., their only child. The institution opened in 1891 on the Stanfords’ former Palo Alto farm.

    Jane and Leland Stanford modeled their university after the great eastern universities, most specifically Cornell University. When Stanford opened in 1891 it was called the “Cornell of the West,” largely because so many of its faculty were former Cornell affiliates (professors, alumni, or both), including its first president, David Starr Jordan, and second president, John Casper Branner. Both Cornell and Stanford were among the first universities to make higher education accessible, nonsectarian, and open to women as well as to men. Cornell is credited as one of the first American universities to adopt this radical departure from traditional education, and Stanford became an early adopter as well.

    The campus was damaged by earthquakes in both 1906 and 1989 but was rebuilt each time. In 1919, The Hoover Institution on War, Revolution and Peace was started by Herbert Hoover to preserve artifacts related to World War I. The Stanford Medical Center, completed in 1959, is a teaching hospital with over 800 beds. The DOE’s SLAC National Accelerator Laboratory (US) (originally named the Stanford Linear Accelerator Center), established in 1962, performs research in particle physics.

    Land

    Most of Stanford is on an 8,180-acre (12.8 sq mi; 33.1 km^2) campus, one of the largest in the United States. It is located on the San Francisco Peninsula, in the northwest part of the Santa Clara Valley (Silicon Valley) approximately 37 miles (60 km) southeast of San Francisco and approximately 20 miles (30 km) northwest of San Jose. In 2008, 60% of this land remained undeveloped.

    Stanford’s main campus includes a census-designated place within unincorporated Santa Clara County, although some of the university land (such as the Stanford Shopping Center and the Stanford Research Park) is within the city limits of Palo Alto. The campus also includes much land in unincorporated San Mateo County (including the SLAC National Accelerator Laboratory and the Jasper Ridge Biological Preserve), as well as in the city limits of Menlo Park (Stanford Hills neighborhood), Woodside, and Portola Valley.

    Non-central campus

    Stanford currently operates in various locations outside of its central campus.

    On the founding grant:

    Jasper Ridge Biological Preserve is a 1,200-acre (490 ha) natural reserve south of the central campus owned by the university and used by wildlife biologists for research.
    SLAC National Accelerator Laboratory is a facility west of the central campus operated by the university for the Department of Energy. It contains the longest linear particle accelerator in the world, 2 miles (3.2 km) on 426 acres (172 ha) of land.
    Golf course and a seasonal lake: The university also has its own golf course and a seasonal lake (Lake Lagunita, actually an irrigation reservoir), both home to the vulnerable California tiger salamander. As of 2012 Lake Lagunita was often dry and the university had no plans to artificially fill it.

    Off the founding grant:

    Hopkins Marine Station, in Pacific Grove, California, is a marine biology research center owned by the university since 1892.
    Study abroad locations: unlike typical study abroad programs, Stanford itself operates in several locations around the world; thus, each location has Stanford faculty-in-residence and staff in addition to students, creating a “mini-Stanford”.

    Redwood City campus for many of the university’s administrative offices located in Redwood City, California, a few miles north of the main campus. In 2005, the university purchased a small, 35-acre (14 ha) campus in Midpoint Technology Park intended for staff offices; development was delayed by The Great Recession. In 2015 the university announced a development plan and the Redwood City campus opened in March 2019.

    The Bass Center in Washington, DC provides a base, including housing, for the Stanford in Washington program for undergraduates. It includes a small art gallery open to the public.

    China: The Stanford Center at Peking University, housed in the Lee Jung Sen Building, is a small center for researchers and students run in collaboration with Peking University [北京大学] (CN) and the Kavli Institute for Astronomy and Astrophysics at Peking University (KIAA-PKU).

    Administration and organization

    Stanford is a private, non-profit university that is administered as a corporate trust governed by a privately appointed board of trustees with a maximum membership of 38. Trustees serve five-year terms (not more than two consecutive terms) and meet five times annually. A new trustee is chosen by the current trustees by ballot. The Stanford trustees also oversee the Stanford Research Park, the Stanford Shopping Center, the Cantor Center for Visual Arts, Stanford University Medical Center, and many associated medical facilities (including the Lucile Packard Children’s Hospital).

    The board appoints a president to serve as the chief executive officer of the university, to prescribe the duties of professors and course of study, to manage financial and business affairs, and to appoint nine vice presidents. The provost is the chief academic and budget officer, to whom the deans of each of the seven schools report. Persis Drell became the 13th provost in February 2017.

    As of 2018, the university was organized into seven academic schools. The schools of Humanities and Sciences (27 departments), Engineering (nine departments), and Earth, Energy & Environmental Sciences (four departments) have both graduate and undergraduate programs while the Schools of Law, Medicine, Education and Business have graduate programs only. The powers and authority of the faculty are vested in the Academic Council, which is made up of tenure and non-tenure line faculty, research faculty, senior fellows in some policy centers and institutes, the president of the university, and some other academic administrators, but most matters are handled by the Faculty Senate, made up of 55 elected representatives of the faculty.

    The Associated Students of Stanford University (ASSU) is the student government for Stanford and all registered students are members. Its elected leadership consists of the Undergraduate Senate elected by the undergraduate students, the Graduate Student Council elected by the graduate students, and the President and Vice President elected as a ticket by the entire student body.

    Stanford is the beneficiary of a special clause in the California Constitution, which explicitly exempts Stanford property from taxation so long as the property is used for educational purposes.

    Endowment and donations

    The university’s endowment, managed by the Stanford Management Company, was valued at $27.7 billion as of August 31, 2019. Payouts from the Stanford endowment covered approximately 21.8% of university expenses in the 2019 fiscal year. In the 2018 NACUBO-TIAA survey of colleges and universities in the United States and Canada, only Harvard University(US), the University of Texas System(US), and Yale University(US) had larger endowments than Stanford.

    In 2006, President John L. Hennessy launched a five-year campaign called the Stanford Challenge, which reached its $4.3 billion fundraising goal in 2009, two years ahead of time, but continued fundraising for the duration of the campaign. It concluded on December 31, 2011, having raised a total of $6.23 billion and breaking the previous campaign fundraising record of $3.88 billion held by Yale. Specifically, the campaign raised $253.7 million for undergraduate financial aid, as well as $2.33 billion for its initiative in “Seeking Solutions” to global problems, $1.61 billion for “Educating Leaders” by improving K-12 education, and $2.11 billion for “Foundation of Excellence” aimed at providing academic support for Stanford students and faculty. Funds supported 366 new fellowships for graduate students, 139 new endowed chairs for faculty, and 38 new or renovated buildings. The new funding also enabled the construction of a facility for stem cell research; a new campus for the business school; an expansion of the law school; a new Engineering Quad; a new art and art history building; an on-campus concert hall; a new art museum; and a planned expansion of the medical school, among other things. In 2012, the university raised $1.035 billion, becoming the first school to raise more than a billion dollars in a year.

    Research centers and institutes

    DOE’s SLAC National Accelerator Laboratory(US)
    Stanford Research Institute, a center of innovation to support economic development in the region.
    Hoover Institution, a conservative American public policy institution and research institution that promotes personal and economic liberty, free enterprise, and limited government.
    Hasso Plattner Institute of Design, a multidisciplinary design school in cooperation with the Hasso Plattner Institute of University of Potsdam [Universität Potsdam] (DE) that integrates product design, engineering, and business management education.
    Martin Luther King Jr. Research and Education Institute, which grew out of and still contains the Martin Luther King Jr. Papers Project.
    John S. Knight Fellowship for Professional Journalists
    Center for Ocean Solutions
    Together with UC Berkeley(US) and UC San Francisco(US), Stanford is part of the Biohub, a new medical science research center founded in 2016 by a $600 million commitment from Facebook CEO and founder Mark Zuckerberg and pediatrician Priscilla Chan.

    Discoveries and innovation

    Natural sciences

    Biological synthesis of deoxyribonucleic acid (DNA) – Arthur Kornberg synthesized DNA material and won the Nobel Prize in Physiology or Medicine 1959 for his work at Stanford.
    First Transgenic organism – Stanley Cohen and Herbert Boyer were the first scientists to transplant genes from one living organism to another, a fundamental discovery for genetic engineering. Thousands of products have been developed on the basis of their work, including human growth hormone and hepatitis B vaccine.
    Laser – Arthur Leonard Schawlow shared the 1981 Nobel Prize in Physics with Nicolaas Bloembergen and Kai Siegbahn for his work on lasers.
    Nuclear magnetic resonance – Felix Bloch developed new methods for nuclear magnetic precision measurements, which are the underlying principles of the MRI.

    Computer and applied sciences

    ARPANET – Stanford Research Institute, formerly part of Stanford but on a separate campus, was the site of one of the four original ARPANET nodes.

    Internet—Stanford was the site where the original design of the Internet was undertaken. Vint Cerf led a research group to elaborate the design of the Transmission Control Protocol (TCP/IP) that he originally co-created with Robert E. Kahn (Bob Kahn) in 1973 and which formed the basis for the architecture of the Internet.

    Frequency modulation synthesis – John Chowning of the Music department invented the FM music synthesis algorithm in 1967, and Stanford later licensed it to Yamaha Corporation.

    Google – Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford. They were working on the Stanford Digital Library Project (SDLP). The SDLP’s goal was “to develop the enabling technologies for a single, integrated and universal digital library” and it was funded through the National Science Foundation, among other federal agencies.

    Klystron tube – invented by the brothers Russell and Sigurd Varian at Stanford. Their prototype was completed and demonstrated successfully on August 30, 1937. Upon publication in 1939, news of the klystron immediately influenced the work of U.S. and UK researchers working on radar equipment.

    RISC – ARPA-funded VLSI project of microprocessor design. Stanford and UC Berkeley are most associated with the popularization of this concept. The Stanford MIPS would go on to be commercialized as the successful MIPS architecture, while Berkeley RISC gave its name to the entire concept, commercialized as the SPARC. Another success from this era was IBM’s effort that eventually led to the IBM POWER instruction set architecture, PowerPC, and Power ISA. As these projects matured, a wide variety of similar designs flourished in the late 1980s and especially the early 1990s, representing a major force in the Unix workstation market as well as embedded processors in laser printers, routers and similar products.
    SUN workstation – Andy Bechtolsheim designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation, which led to Sun Microsystems.

    Businesses and entrepreneurship

    Stanford is one of the most successful universities in creating companies and licensing its inventions to existing companies; it is often held up as a model for technology transfer. Stanford’s Office of Technology Licensing is responsible for commercializing university research, intellectual property, and university-developed projects.

    The university is described as having a strong venture culture in which students are encouraged, and often funded, to launch their own companies.

    Companies founded by Stanford alumni generate more than $2.7 trillion in annual revenue, equivalent to the 10th-largest economy in the world.

    Some companies closely associated with Stanford and their connections include:

    Hewlett-Packard, 1939, co-founders William R. Hewlett (B.S, PhD) and David Packard (M.S).
    Silicon Graphics, 1981, co-founders James H. Clark (Associate Professor) and several of his grad students.
    Sun Microsystems, 1982, co-founders Vinod Khosla (M.B.A), Andy Bechtolsheim (PhD) and Scott McNealy (M.B.A).
    Cisco, 1984, founders Leonard Bosack (M.S) and Sandy Lerner (M.S) who were in charge of Stanford Computer Science and Graduate School of Business computer operations groups respectively when the hardware was developed.
    Yahoo!, 1994, co-founders Jerry Yang (B.S, M.S) and David Filo (M.S).
    Google, 1998, co-founders Larry Page (M.S) and Sergey Brin (M.S).
    LinkedIn, 2002, co-founders Reid Hoffman (B.S), Konstantin Guericke (B.S, M.S), Eric Lee (B.S), and Alan Liu (B.S).
    Instagram, 2010, co-founders Kevin Systrom (B.S) and Mike Krieger (B.S).
    Snapchat, 2011, co-founders Evan Spiegel and Bobby Murphy (B.S).
    Coursera, 2012, co-founders Andrew Ng (Associate Professor) and Daphne Koller (Professor, PhD).

    Student body

    Stanford enrolled 6,996 undergraduate and 10,253 graduate students as of the 2019–2020 school year. Women comprised 50.4% of undergraduates and 41.5% of graduate students. In the same academic year, the freshman retention rate was 99%.

    Stanford awarded 1,819 undergraduate degrees, 2,393 master’s degrees, 770 doctoral degrees, and 3270 professional degrees in the 2018–2019 school year. The four-year graduation rate for the class of 2017 cohort was 72.9%, and the six-year rate was 94.4%. The relatively low four-year graduation rate is a function of the university’s coterminal degree (or “coterm”) program, which allows students to earn a master’s degree as a 1-to-2-year extension of their undergraduate program.

    As of 2010, fifteen percent of undergraduates were first-generation students.

    Athletics

    As of 2016 Stanford had 16 male varsity sports and 20 female varsity sports, 19 club sports and about 27 intramural sports. In 1930, following a unanimous vote by the Executive Committee for the Associated Students, the athletic department adopted the mascot “Indian.” The Indian symbol and name were dropped by President Richard Lyman in 1972, after objections from Native American students and a vote by the student senate. The sports teams are now officially referred to as the “Stanford Cardinal,” referring to the deep red color, not the cardinal bird. Stanford is a member of the Pac-12 Conference in most sports, the Mountain Pacific Sports Federation in several other sports, and the America East Conference in field hockey, and it competes at the intercollegiate level in the NCAA’s Division I FBS.

    Its traditional sports rival is the University of California, Berkeley, the neighbor to the north in the East Bay. The winner of the annual “Big Game” between the Cal and Cardinal football teams gains custody of the Stanford Axe.

    Stanford has had at least one NCAA team champion every year since the 1976–77 school year and has earned 126 NCAA national team titles since its establishment, the most among universities, and Stanford has won 522 individual national championships, the most by any university. Stanford has won the award for the top-ranked Division 1 athletic program—the NACDA Directors’ Cup, formerly known as the Sears Cup—annually for the past twenty-four straight years. Stanford athletes have won medals in every Olympic Games since 1912, winning 270 Olympic medals total, 139 of them gold. In the 2008 Summer Olympics, and 2016 Summer Olympics, Stanford won more Olympic medals than any other university in the United States. Stanford athletes won 16 medals at the 2012 Summer Olympics (12 gold, two silver and two bronze), and 27 medals at the 2016 Summer Olympics.

    Traditions

    The unofficial motto of Stanford, selected by President Jordan, is Die Luft der Freiheit weht. Translated from the German language, this quotation from Ulrich von Hutten means, “The wind of freedom blows.” The motto was controversial during World War I, when anything in German was suspect; at that time the university disavowed that this motto was official.
    Hail, Stanford, Hail! is the Stanford Hymn sometimes sung at ceremonies or adapted by the various University singing groups. It was written in 1892 by mechanical engineering professor Albert W. Smith and his wife, Mary Roberts Smith (in 1896 she earned the first Stanford doctorate in Economics and later became associate professor of Sociology), but was not officially adopted until after a performance on campus in March 1902 by the Mormon Tabernacle Choir.
    “Uncommon Man/Uncommon Woman”: Stanford does not award honorary degrees, but in 1953 the degree of “Uncommon Man/Uncommon Woman” was created to recognize individuals who give rare and extraordinary service to the University. Technically, this degree is awarded by the Stanford Associates, a voluntary group that is part of the university’s alumni association. As Stanford’s highest honor, it is not conferred at prescribed intervals, but only when appropriate to recognize extraordinary service. Recipients include Herbert Hoover, Bill Hewlett, Dave Packard, Lucile Packard, and John Gardner.
    Big Game events: The events in the week leading up to the Big Game vs. UC Berkeley, including Gaieties (a musical written, composed, produced, and performed by the students of Ram’s Head Theatrical Society).
    “Viennese Ball”: a formal ball with waltzes that was initially started in the 1970s by students returning from the now-closed Stanford in Vienna overseas program. It is now open to all students.
    “Full Moon on the Quad”: An annual event at Main Quad, where students gather to kiss one another starting at midnight. Typically organized by the Junior class cabinet, the festivities include live entertainment, such as music and dance performances.
    “Band Run”: An annual festivity at the beginning of the school year, where the band picks up freshmen from dorms across campus while stopping to perform at each location, culminating in a finale performance at Main Quad.
    “Mausoleum Party”: An annual Halloween Party at the Stanford Mausoleum, the final resting place of Leland Stanford Jr. and his parents. A 20-year tradition, the “Mausoleum Party” was on hiatus from 2002 to 2005 due to a lack of funding, but was revived in 2006. In 2008, it was hosted in Old Union rather than at the actual Mausoleum, because rain prohibited generators from being rented. In 2009, after fundraising efforts by the Junior Class Presidents and the ASSU Executive, the event was able to return to the Mausoleum despite facing budget cuts earlier in the year.
    Former campus traditions include the “Big Game bonfire” on Lake Lagunita (a seasonal lake usually dry in the fall), which was formally ended in 1997 because of the presence of endangered salamanders in the lake bed.

    Award laureates and scholars

    Stanford’s current community of scholars includes:

    19 Nobel Prize laureates (as of October 2020, 85 affiliates in total)
    171 members of the National Academy of Sciences
    109 members of National Academy of Engineering
    76 members of National Academy of Medicine
    288 members of the American Academy of Arts and Sciences
    19 recipients of the National Medal of Science
    1 recipient of the National Medal of Technology
    4 recipients of the National Humanities Medal
    49 members of American Philosophical Society
    56 fellows of the American Physical Society (since 1995)
    4 Pulitzer Prize winners
    31 MacArthur Fellows
    4 Wolf Foundation Prize winners
    2 ACL Lifetime Achievement Award winners
    14 AAAI fellows
    2 Presidential Medal of Freedom winners

    Stanford University Seal

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:03 pm on August 24, 2021
    Tags: "This Physicist Discovered an Escape From Hawking’s Black Hole Paradox", , In 1974 Stephen Hawking calculated that black holes’ secrets die with them., Quanta Magazine (US), ,   

    From Quanta Magazine (US): “This Physicist Discovered an Escape From Hawking’s Black Hole Paradox”

    From Quanta Magazine (US)

    August 23, 2021
    Natalie Wolchover

    Netta Engelhardt puzzles over the fates of black holes in her office at the Massachusetts Institute of Technology. Credit: Tira Khan for Quanta Magazine.

    In 1974 Stephen Hawking calculated that black holes’ secrets die with them. Random quantum jitter on the spherical outer boundary, or “event horizon,” of a black hole will cause the hole to radiate particles and slowly shrink to nothing. Any record of the star whose violent contraction formed the black hole — and whatever else got swallowed up after — then seemed to be permanently lost.

    Hawking’s calculation posed a paradox — the infamous “black hole information paradox” — that has motivated research in fundamental physics ever since. On the one hand, quantum mechanics, the rulebook for particles, says that information about particles’ past states gets carried forward as they evolve — a bedrock principle called “unitarity.” But black holes take their cues from general relativity, the theory that space and time form a bendy fabric and gravity is the fabric’s curves. Hawking had tried to apply quantum mechanics to particles near a black hole’s periphery, and saw unitarity break down.

    So do evaporating black holes really destroy information, meaning unitarity is not a true principle of nature? Or does information escape as a black hole evaporates? Solving the information paradox quickly came to be seen as a route to discovering the true, quantum theory of gravity, which general relativity approximates well everywhere except black holes.

    In the past two years, a network of quantum gravity theorists, mostly millennials, has made enormous progress on Hawking’s paradox. One of the leading researchers is Netta Engelhardt, a 32-year-old theoretical physicist at The Massachusetts Institute of Technology (US). She and her colleagues have completed a new calculation that corrects Hawking’s 1974 formula; theirs indicates that information does, in fact, escape black holes via their radiation. She and Aron Wall identified an invisible surface that lies inside a black hole’s event horizon, called the “quantum extremal surface.” In 2019, Engelhardt and others showed that this surface seems to encode the amount of information that has radiated away from the black hole, evolving over the hole’s lifetime exactly as expected if information escapes.

    Engelhardt received a 2021 New Horizons in Physics Prize “for calculating the quantum information content of a black hole and its radiation.” Ahmed Almheiri of The Institute for Advanced Study (US), a frequent collaborator, noted her “deeply rooted intuition for the intricate workings of gravity,” particularly in the discovery of quantum extremal surfaces.

    Engelhardt set her sights on quantum gravity when she was 9 years old. She moved to Boston from Israel that year with her family, and, not knowing any English, read every book in Hebrew she could find in her house. The last was Hawking’s A Brief History of Time. “What that book did for me was trigger a desire to understand the fundamental building blocks of the universe,” she said. “From then on, I was sort of finding my own way, watching different popular science videos and asking questions of anybody who might have the answers, and narrowing down what I wanted to work on.” She ultimately found her way to Hawking’s paradox.

    When Quanta Magazine caught up with Engelhardt in a recent video call, she emphasized that the full solution to the paradox — and the quantum theory of gravity — is a work in progress. We discussed that progress, which centrally involves the concept of entropy, and the search for a “reverse algorithm” that would allow someone to reconstruct a black hole’s past. The conversation has been condensed and edited for clarity.

    Would you say you and your colleagues have solved the black hole information paradox?

    Not yet. We’ve made a lot of progress toward a resolution. That’s part of what makes the field so exciting; we’re moving forward — and we’re not doing it so slowly, either — but there’s still a lot that we have to uncover and understand.

    Could you summarize what you’ve figured out so far?

    Certainly. Along the way there have been a number of very important developments. One I will mention is a 1993 paper by Don Page [Physical Review Letters]. Page said, suppose that information is conserved. Then the entropy of everything outside of a black hole starts out at some value, increases, then has to go back down to the original value once the black hole has evaporated altogether. Whereas Hawking’s calculation predicts that the entropy increases, and once the black hole is evaporated completely, it just plateaus at some value and that’s it.
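    A common toy way to picture the two curves Engelhardt describes (this is a cartoon of my own, not the actual quantum-gravity calculation) is to let the radiation entropy grow steadily over the black hole’s lifetime and cap it by the shrinking entropy of the remaining black hole; the Page curve is then the smaller of the two at each moment, so it rises and falls back to zero.

```python
# Cartoon of the two entropy curves: Hawking's radiation entropy grows
# monotonically, while the Page curve is capped by the entropy of the
# shrinking black hole, so it rises and then returns to zero. The linear
# growth and the units here are illustrative assumptions only.

def toy_entropy_curves(steps=10, s_initial=100.0):
    hawking, page = [], []
    for i in range(steps + 1):
        t = i / steps                        # fraction of the lifetime elapsed
        s_radiation = s_initial * t          # Hawking: keeps growing
        s_black_hole = s_initial * (1 - t)   # what the hole can still "hide"
        hawking.append(s_radiation)
        page.append(min(s_radiation, s_black_hole))
    return hawking, page

hawking, page = toy_entropy_curves()
print("Hawking:", hawking)
print("Page:   ", page)
```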

    Credit: Samuel Velasco/Quanta Magazine.

    So the question became: Which entropy curve is right? Normally, entropy is the number of possible indistinguishable configurations of a system. What’s the best way to understand entropy in this black hole context?

    You could think of this entropy as ignorance of the state of affairs in the black hole interior. The more possibilities there are for what could be going on in the black hole interior, the more ignorant you will be about which configuration the system is in. So this entropy measures ignorance.

    Page’s discovery was that if you assume that the evolution of the universe doesn’t lose information, then, if you start out with zero ignorance about the universe before a black hole forms, eventually you’re going to end up with zero ignorance once the black hole is gone, since all the information that went in has come back out. That’s in conflict with what Hawking derived, which was that eventually you end up with ignorance.

    You characterize Page’s insight and all other work on the information paradox prior to 2019 as “understanding the problem better.” What happened in 2019?

    The activity that started in 2019 is the steps towards actually resolving the problem. The two papers that kicked this off were work by myself, Ahmed Almheiri, Don Marolf and Henry Maxfield and, in parallel, the second paper, which came out at the same time, by Geoff Penington. We submitted our papers on the same day and coordinated because we knew we were both onto the same thing.

    The idea was to calculate the entropy in a different way. This is where Don Page’s calculation was very important for us. If we use Hawking’s method and his assumptions, we get a formula for the entropy which is not consistent with unitarity. Now we want to understand how we could possibly do a calculation that would give us the curve of the entropy that Page proposed, which goes up then comes back down.

    And for this we relied on a proposal that Aron Wall and I gave in 2014: the quantum extremal surface proposal, which essentially states that the so-called quantum-corrected area of a certain surface inside the black hole is what computes the entropy. We said, maybe that’s a way to do the quantum gravity calculation that gives us a unitary result. And I will say: It was kind of a shot in the dark.

    When did you realize that it worked?

    This entire time is a bit of a daze in my mind, it was so exciting; I think I slept maybe two hours a night for weeks. The calculation came together over a period of three weeks, I want to say. I was at Princeton University (US) at the time. We just had a meeting on campus. I have a very distinct memory of driving home, and I was thinking to myself, wow, this could be it.

    The crux of the matter was, there’s more than one quantum extremal surface in the problem. There’s one quantum extremal surface that gives you the wrong answer — the Hawking answer. To correctly calculate the entropy, you have to pick the right one, and the right one is always the one with the smallest quantum-corrected area. And so what was really exciting — I think the moment we realized this might really actually work out — is when we found that exactly at the time when the entropy curve needs to “turn over” [go from increasing to decreasing], there’s a jump. At that time, the quantum extremal surface with the smallest quantum-corrected area goes from being the surface that would give you Hawking’s answer to a new and unexpected one. And that one reproduces the Page curve.

    What are these quantum extremal surfaces, exactly?

    Let me try to intuit a little bit what a classical, non-quantum extremal surface feels like. Let me begin with just a sphere. Imagine that you place a light bulb inside of it, and you follow the light rays as they move outward through the sphere. As the light rays get farther and farther away from the light bulb, the area of the spheres that they pass through will be getting larger and larger. We say that the cross-sectional area of the light rays is getting larger.

    That’s an intuition that works really well in approximately flat space where we live. But when you consider very curved space-time like you find inside a black hole, what can happen is that even though you’re firing your light rays outwards from the light bulb, and you’re looking at spheres that are progressively farther away from the bulb, the cross-sectional area is actually shrinking. And this is because space-time is very violently curved. It’s something that we call focusing of light rays, and it’s a very fundamental concept in gravity and general relativity.

    The extremal surface straddles this line between the very violent situation where the area is decreasing, and a normal situation where the area increases. The area of the surface is neither increasing nor decreasing, and so intuitively you can think of an extremal surface as kind of lying right at the cusp of where you’d expect strong curvature to start kicking in. A quantum extremal surface is the same idea, but instead of area, now you’re looking at quantum-corrected area. This is a sum of area and entropy, which is neither increasing nor decreasing.
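    In symbols (a standard way this proposal is often written; the notation here is added for illustration rather than taken from the interview), the “quantum-corrected area” of a surface X is the generalized entropy, and the entropy is computed from the quantum extremal surface with the smallest such value:

```latex
% Generalized entropy ("quantum-corrected area") of a surface X:
% its area in Planck units plus the entropy of the quantum fields outside it.
S_{\mathrm{gen}}(X) = \frac{\mathrm{Area}(X)}{4 G_N \hbar} + S_{\mathrm{out}}(X)

% Quantum extremal surfaces extremize S_gen; when several exist,
% the entropy is given by the one with the smallest extremal value.
S = \min_{X}\, \operatorname*{ext}_{X}\left[ \frac{\mathrm{Area}(X)}{4 G_N \hbar} + S_{\mathrm{out}}(X) \right]
```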

    What does the quantum extremal surface mean? What’s the difference between things that are inside versus outside?

    Recall that when the Page curve turns over, we expect that our ignorance of what the black hole contains starts to decrease, as we have access to more and more of its radiation. So the radiation emitted by the hole must start to “learn” about the black hole interior.

    It’s the quantum extremal surface that divides the space-time in two: Everything inside the surface, the radiation can already decode. Everything outside of it is what remains hidden in the black hole system, what’s not contained in the information of the radiation. As the black hole emits more radiation, the quantum extremal surface moves outwards and encompasses an ever-larger volume of the black hole interior. By the time that black hole evaporates altogether, the radiation has to be able to decode everything that way.

    Now that we have an explicit calculation that gives us a unitary answer, that gives us so many tools to start asking questions that we could never ask before, like where does this formula come from, what does it mean about what type of theory quantum gravity is? Also, what is the mechanism in quantum gravity that restores unitarity? It has something to do with the quantum extremal surface formula.

    Most of the justification for the quantum extremal surface formula comes from studying black holes in “Anti-de Sitter” (AdS) space — saddle-shaped space with an outer boundary. Whereas our universe has approximately flat space, and no boundary. Why should we think that these calculations apply to our universe?

    First, we can’t really get around the fact that our universe contains both quantum mechanics and gravity. It contains black holes. So our understanding of the universe is going to be incomplete until we have a description of what happens inside a black hole. The information problem is such a difficult problem to solve that any progress — whether it’s in a toy model or not — is making progress towards understanding phenomena that happen in our universe.

    Now at a more technical level, quantum extremal surfaces can be computed in different kinds of space-times, including flat space like in our universe. And in fact there already have been papers written on the behavior of quantum extremal surfaces within different kinds of space-times and what types of entropy curves they would give rise to.

    We have a very firm interpretation of the quantum extremal surface in AdS space. We can extrapolate and say that in flat space there exists some interpretation of the quantum extremal surface which is analogous, and I think that’s probably true. It has many nice properties; it looks like it’s the right thing. We get really interesting behavior and we expect to get unitarity as well, and so, yes, we do expect that this phenomenon does translate, although the interpretation is going to be harder.

    You said at the beginning of our conversation that we don’t know the solution to the information paradox yet. Can you explain what a solution looks like?

    A full resolution of the information paradox would have to tell us exactly how the black hole information comes out. If I’m an observer that’s sitting outside of a black hole and I have extremely sophisticated technology and all the time in the world — a quantum computer taking incredibly sophisticated measurements, all the radiation of that black hole — what does it take for me to actually decode the radiation to reconstruct, for instance, the star that collapsed and formed the black hole? What process do I need to put my quantum computer through? We need to answer that question.

    So you want to find the reverse algorithm that unscrambles the information in the radiation. What’s the connection between that algorithm and quantum gravity?

    This algorithm that decodes the Hawking radiation is coming from the process in which quantum gravity encodes the radiation as it evaporates at the black hole horizon. The emergence of the black hole interior from quantum gravity and the dynamics of the black hole interior, the experience of an object that falls into the black hole — all of that is encoded in this reverse algorithm that quantum gravity has to spit out. All of those are tied up in the question of “how does the information get encoded in the Hawking radiation?”

    You’ve lately been writing papers about something called “a python’s lunch”. What’s that?

    It’s one thing to ask how can you decode the Hawking radiation; you also might ask, how complex is the task of decoding the Hawking radiation. And, as it turns out, extremely complex. So maybe the difference between Hawking’s calculation and the quantum extremal surface calculation that gives unitarity is that Hawking’s calculation is just dropping the high-complexity operations.

    It’s important to understand the complexity geometrically. And in 2019 there was a paper by some of my colleagues that proposed that whenever you have more than one quantum extremal surface, the one that would be wrong for the entropy can be used to calculate the complexity of decoding the black hole radiation. The two quantum extremal surfaces can be thought of as sort of constrictions in the space-time geometry, and those of us who have read Le Petit Prince see an elephant inside a python, and so it has become known as a python’s lunch.

    We proposed that multiple quantum extremal surfaces are the exclusive source of high complexity. And these two papers that you’re referring to are essentially an argument for this “strong python’s lunch” proposal. That is very insightful for us because it identifies the part of the geometry that Hawking’s calculation knows about and the part of the geometry that Hawking’s calculation doesn’t know about. It’s working towards putting his and our calculations in the same language so that we know why one is right and the other is wrong.

    Where would you say we currently stand in our effort to understand the quantum nature of gravity?

    I like to think of this as a puzzle, where we have all the edge pieces and we’re missing the center. We have many different insights about quantum gravity. There are many ways in which people are trying to understand it. Some by constraining it: What are things that it can’t do? Some by trying to construct aspects of it: things that it must do. My personal preferred approach is more to do with the information paradox, because it’s so pivotal; it’s such an acute problem. It’s clearly telling us: Here’s where you messed up. And to me that says, here’s a place where we can begin to fix our pillars, one of which must be wrong, of our understanding of quantum gravity.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:25 pm on August 20, 2021 Permalink | Reply
    Tags: "How Big Data Carried Graph Theory Into New Dimensions", , Hypergraph, Markov chain, , New kinds of network models that can find complex structures and signals in the noise of big data., Quanta Magazine (US), Simplicial complexes, Tensors,   

    From Quanta Magazine (US) : “How Big Data Carried Graph Theory Into New Dimensions” 

    From Quanta Magazine (US)

    August 19, 2021
    Stephen Ornes

    Mike Hughes for Quanta Magazine.

    The mathematical language for talking about connections, which usually depends on networks — vertices (dots) and edges (lines connecting them) — has been an invaluable way to model real-world phenomena since at least the 18th century. But a few decades ago, the emergence of giant data sets forced researchers to expand their toolboxes and, at the same time, gave them sprawling sandboxes in which to apply new mathematical insights. Since then, said Josh Grochow, a computer scientist at the University of Colorado-Boulder (US), there’s been an exciting period of rapid growth as researchers have developed new kinds of network models that can find complex structures and signals in the noise of big data.

    Grochow is among a growing chorus of researchers who point out that when it comes to finding connections in big data, graph theory has its limits. A graph represents every relationship as a dyad, or pairwise interaction. However, many complex systems can’t be represented by binary connections alone. Recent progress in the field shows how to move forward.

    Consider trying to forge a network model of parenting. Clearly, each parent has a connection to a child, but the parenting relationship isn’t just the sum of the two links, as graph theory might model it. The same goes for trying to model a phenomenon like peer pressure.

    “There are many intuitive models. The peer pressure effect on social dynamics is only captured if you already have groups in your data,” said Leonie Neuhäuser of RWTH Aachen University [Rheinisch-Westfälische Technische Hochschule] (DE). But binary networks don’t capture group influences.

    Mathematicians and computer scientists use the term “higher-order interactions” to describe these complex ways that group dynamics, rather than binary links, can influence individual behaviors. These mathematical phenomena appear in everything from entanglement interactions in quantum mechanics to the trajectory of a disease spreading through a population. If a pharmacologist wanted to model drug interactions [npj Systems Biology and Applications], for example, graph theory might show how two drugs respond to each other — but what about three? Or four?

    While the tools for exploring these interactions are not new, it’s only in recent years that high-dimensional data sets have become an engine for discovery, giving mathematicians and network theorists new ideas. These efforts have yielded interesting results about the limits of graphs and the possibilities of scaling up.

    “Now we know that the network is just the shadow of the thing,” Grochow said. If a data set has a complex underlying structure, then modeling it as a graph may reveal only a limited projection of the whole story.

    “We’ve realized that the data structures we’ve used to study things, from a mathematical perspective, aren’t quite fitting what we’re seeing in the data,” said the mathematician Emilie Purvine of the DOE’s Pacific Northwest National Laboratory (US).

    Which is why mathematicians, computer scientists and other researchers are increasingly focusing on ways to generalize graph theory — in its many guises — to explore higher-order phenomena. The last few years have brought a torrent of proposed ways to characterize these interactions, and to mathematically verify them in high-dimensional data sets.

    For Purvine, the mathematical exploration of higher-order interactions is like the mapping of new dimensions. “Think about a graph as a foundation on a two-dimensional plot of land,” she said. The three-dimensional buildings that can go on top could vary significantly. “When you’re down at ground level, they look the same, but what you construct on top is different.”

    Enter the Hypergraph

    The search for those higher-dimensional structures is where the math turns especially murky — and interesting. The higher-order analogue of a graph, for example, is called a hypergraph, and instead of edges, it has “hyperedges.” These can connect multiple nodes, which means they can represent multi-way (or multilinear) relationships. Instead of a line, a hyperedge might be seen as a surface, like a tarp staked in three or more places.

    Which is fine, but there’s still a lot we don’t know about how these structures relate to their conventional counterparts. Mathematicians are currently learning which rules of graph theory also apply for higher-order interactions, suggesting new areas of exploration.

    To illustrate the kinds of relationship that a hypergraph can tease out of a big data set — and an ordinary graph can’t — Purvine points to a simple example close to home, the world of scientific publication. Imagine two data sets, each containing papers co-authored by up to three mathematicians; for simplicity, let’s name them A, B and C. One data set contains six papers, with two papers by each of the three distinct pairs (AB, AC and BC). The other contains only two papers total, each co-authored by all three mathematicians (ABC).

    A graph representation of co-authorship, taken from either data set, might look like a triangle, showing that each mathematician (three nodes) had collaborated with the other two (three links). If your only question was who had collaborated with whom, then you wouldn’t need a hypergraph, Purvine said.

    But if you did have a hypergraph, you could also answer questions about less obvious structures. A hypergraph of the first set (with six papers), for example, could include hyperedges showing that each mathematician contributed to four papers. A comparison of hypergraphs from the two sets would show that the papers’ authors differed in the first set but were the same in the second.
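
    As a concrete (and purely illustrative) sketch in Python — the author lists below are toy data, not a real bibliography — the pairwise projections of the two data sets come out identical, while their hypergraphs do not:

    # Two toy data sets of co-authored papers among mathematicians A, B and C.
    papers_pairs = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"), ("B", "C"), ("B", "C")]
    papers_trios = [("A", "B", "C"), ("A", "B", "C")]

    def graph_edges(papers):
        """Pairwise (graph) projection: who has ever co-authored with whom."""
        edges = set()
        for authors in papers:
            for i, u in enumerate(authors):
                for v in authors[i + 1:]:
                    edges.add(frozenset((u, v)))
        return edges

    def hyperedges(papers):
        """Hypergraph view: each paper's full author list is one hyperedge (with multiplicity)."""
        return sorted(tuple(sorted(p)) for p in papers)

    # The graph projections are identical -- a triangle on A, B, C ...
    print(graph_edges(papers_pairs) == graph_edges(papers_trios))  # True
    # ... but the hypergraphs are not: the group structure differs.
    print(hyperedges(papers_pairs))
    print(hyperedges(papers_trios))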

    Hypergraphs in the Wild

    Such higher-order methods have already proved useful in applied research, such as when ecologists showed how the reintroduction of wolves to Yellowstone National Park in the 1990s triggered changes in biodiversity and in the structure of the food chain. And in one recent paper, Purvine and her colleagues analyzed a database of biological responses to viral infections, using hypergraphs to identify the most critical genes involved. They also showed how those interactions would have been missed by the usual pairwise analysis afforded by graph theory.

    “That’s the kind of power we’re seeing from hypergraphs, to go above and beyond graphs,” said Purvine.

    However, generalizing from graphs to hypergraphs quickly gets complicated. One way to illustrate this is to consider the canonical cut problem from graph theory, which asks: Given two distinct nodes on a graph, what’s the minimum number of edges you can cut to completely sever all connections between the two? Many algorithms can readily find the optimal number of cuts for a given graph.
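
    For the graph version, a few lines of Python with the networkx library make the idea concrete (the little two-cluster graph here is an invented example):

    import networkx as nx

    # A small graph: two triangles joined by a couple of bridging edges.
    G = nx.Graph()
    G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),     # left triangle
                      ("x", "y"), ("y", "z"), ("x", "z"),     # right triangle
                      ("c", "x"), ("b", "y")])                # two bridges between the clusters

    # A minimum set of edges whose removal severs every path between "a" and "z".
    cut = nx.minimum_edge_cut(G, "a", "z")
    print(len(cut), cut)   # 2, plus one particular set of edges achieving the disconnection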

    But what about cutting a hypergraph? “There are lots of ways of generalizing this notion of a cut to a hypergraph,” said Austin Benson, a mathematician at Cornell University (US). But there’s no one clear solution, he said, because a hyperedge could be severed various ways, creating new groups of nodes.

    Together with two colleagues, Benson recently tried to formalize all the different ways of splitting up a hypergraph. What they found hinted at a variety of computational complexities: For some situations, the problem was readily solved in polynomial time, which basically means a computer could crunch through solutions in a reasonable time. But for others, the problem was basically unsolvable — it was impossible to know for certain whether a solution existed at all.

    “There are still many open questions there,” Benson said. “Some of these impossibility results are interesting because you can’t possibly reduce them to graphs. And on the theory side, if you haven’t reduced it to something you could have found with a graph, it’s showing you that there is something new there.”

    The Mathematical Sandwich

    But the hypergraph isn’t the only way to explore higher-order interactions. Topology — the mathematical study of geometric properties that don’t change when you stretch, compress or otherwise transform objects — offers a more visual approach. When a topologist studies a network, they look for shapes and surfaces and dimensions. They might note that the edge connecting two nodes is one-dimensional and ask about the properties of one-dimensional objects in different networks. Or they might see the two-dimensional triangular surface formed by connecting three nodes and ask similar questions.

    Topologists call these structures simplicial complexes [European Journal of Physics]. These are, effectively, hypergraphs viewed through the framework of topology. Neural networks, which fall into the general category of machine learning, offer a telling example. They’re driven by algorithms designed to mimic how our brains’ neurons process information. Graph neural networks (GNNs), which model connections between things as pairwise connections, excel at inferring data that’s missing from large data sets, but as in other applications, they could miss interactions that only arise from groups of three or more. In recent years, computer scientists have developed simplicial neural networks, which use higher-order complexes to generalize the approach of GNNs to find these effects.

    Simplicial complexes connect topology to graph theory, and, like hypergraphs, they raise compelling mathematical questions that will drive future investigations. For example, in topology, special kinds of subsets of simplicial complexes are also themselves simplicial complexes and therefore have the same properties. If the same held true for a hypergraph, the subsets would include all the hyperedges within — including all the embedded two-way edges.

    But that’s not always the case. “What we’re seeing now is that data falls into this middle ground where not every hyperedge, not every complex interaction, is the same size as every other one,” Purvine said. “You can have a three-way interaction, but not the pairwise interactions.” Big data sets have shown clearly that the group influence often far outstrips the influence of an individual, whether in biological signaling networks or in social behaviors like peer pressure.

    Purvine describes data as filling the middle of a kind of mathematical sandwich, bound on top by these ideas from topology, and underneath by the limitations of graphs. Network theorists are now challenged to find the new rules for higher-order interactions. And for mathematicians, she said, “there’s room to play.”

    Random Walks and Matrices

    That sense of creative “play” extends to other tools as well. There are all sorts of beautiful connections between graphs and other tools for describing data, said Benson. “But as soon as you move to the higher-order setting, these connections are harder to come by.”

    That’s especially clear when you try to consider a higher-dimensional version of a Markov chain, he said. A Markov chain describes a multistage process in which the next stage depends only on an element’s current position; researchers have used Markov models to describe how things like information, energy and even money flow through a system. Perhaps the best-known example of a Markov chain is a random walk, which describes a path where each step is determined randomly from the one before it. A random walk also has a natural graph description: any walk along a graph can be shown as a sequence moving from node to node along links.
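
    A minimal Python sketch shows the Markov property in action on a graph — the toy adjacency list below is purely illustrative:

    import random

    # Adjacency list of a small graph; a random walk on it is a Markov chain
    # because the next node depends only on the current one.
    graph = {
        "a": ["b", "c"],
        "b": ["a", "c", "d"],
        "c": ["a", "b"],
        "d": ["b"],
    }

    def random_walk(start, steps, seed=0):
        rng = random.Random(seed)
        node, path = start, [start]
        for _ in range(steps):
            node = rng.choice(graph[node])   # only the current node matters
            path.append(node)
        return path

    print(random_walk("a", 10))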

    But how to scale up something as simple as a walk? Researchers turn to higher-order Markov chains, which instead of depending only on current position can consider many of the previous states. This approach proved useful for modeling systems like web browsing behavior and airport traffic flows. Benson has ideas for other ways to extend it: He and his colleagues recently described [SIAM Review] a new model for stochastic, or random, processes that combines higher-order Markov chains with another tool called tensors. They tested it against a data set of taxi rides in New York City to see how well it could predict trajectories. The results were mixed: Their model predicted the movement of cabs better than a usual Markov chain, but neither model was very reliable.

    Tensors themselves represent yet another tool for studying higher-order interactions that has come into its own in recent years. To understand tensors, first think of matrices, which organize data into an array of rows and columns. Now imagine matrices made of matrices, or matrices that have not only rows and columns, but also depth or other dimensions of data. These are tensors. If every matrix corresponded to a musical duet, then tensors would include all possible configurations of instruments.
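
    In code, the step from matrices to tensors is just the step from two array axes to three or more. The numpy sketch below (borrowing the duet/trio analogy, with made-up entries) also shows how summing out an axis collapses a tensor to its pairwise “shadow”:

    import numpy as np

    # A matrix records pairwise data: one axis for rows, one for columns.
    duets = np.zeros((3, 3))            # e.g. counts of duets between instruments i and j

    # A tensor adds further axes: here a third instrument slot for trios.
    trios = np.zeros((3, 3, 3))         # entry [i, j, k] counts a particular three-way grouping
    trios[0, 1, 2] = 1.0

    print(duets.ndim, trios.ndim)       # 2 3
    # Summing over one axis projects the tensor down to a matrix -- the pairwise
    # "shadow" of the higher-order data, which is all an ordinary graph model would see.
    print(trios.sum(axis=2))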

    Tensors are nothing new to physicists, who have long used them to describe, for example, the different possible quantum states of a particle, but network theorists adopted this tool to expand on the power of matrices in high-dimensional data sets. And mathematicians are using them to crack open new classes of problems. Grochow uses tensors to study the isomorphism problem, which essentially asks how you know whether two objects are, in some way, the same. His recent work with Youming Qiao has produced a new way to identify complex problems that might be difficult or impossible to solve.

    How to Hypergraph Responsibly

    Benson’s inconclusive taxi model raises a pervasive question: When do researchers actually need tools like hypergraphs? In many cases, under the right conditions, a hypergraph will deliver the exact same type of predictions and analyses as a graph. “If something is already encapsulated in the network, is it really necessary to model the system [as higher-order]?” asked Michael Schaub of RWTH Aachen University.

    It depends on the data set, he said. “A graph is a good abstraction for a social network, but social networks are so much more. With higher-order systems, there are more ways to model.” Graph theory may show how individuals are connected, for example, but not capture the ways in which clusters of friends on social media influence each other’s behavior.

    The same higher-order interactions won’t emerge in every data set, so new theories are, curiously, driven by the data — which challenges the underlying logical sense that drew Purvine to the field in the first place. “What I love about math is that it’s based in logic and if you follow the right direction, you get to the right answer. But sometimes, when you’re defining whole new areas of math, there’s this subjectivity of what is the right way of doing it,” she said. “And if you don’t recognize that there are multiple ways of doing it, you can maybe drive the community in the wrong direction.”

    Ultimately, Grochow said, these tools represent a kind of freedom, not just allowing researchers to better understand their data, but allowing mathematicians and computer scientists to explore new worlds of possibilities. “There’s endless stuff to explore. It’s interesting and beautiful, and a source of a lot of great questions.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 11:54 am on July 30, 2021 Permalink | Reply
    Tags: "Eternal Change for No Energy-A Time Crystal Finally Made Real", A time crystal is an object whose parts move in a regular repeating cycle sustaining this constant change without burning any energy., , , Evading the second law of thermodynamics., Floquet time crystal, Google Sycamore quantum computer, , Quanta Magazine (US), , Researchers at Google in collaboration with physicists at Stanford and Princeton and other universities say that they have used Google’s quantum computer to demonstrate a genuine “time crystal” , Researchers have raced to create a time crystal over the past five years but previous demos successful on their own terms have failed to satisfy the criteria needed to establish its existence., The time crystal is a new category of phases of matter expanding the definition of what a phase is., Time crystals are also the first objects to spontaneously break “time-translation symmetry.”   

    From Quanta Magazine (US) : “Eternal Change for No Energy-A Time Crystal Finally Made Real” 

    From Quanta Magazine (US)

    A time crystal flips back and forth between two states without burning energy.
    Maylee for Quanta Magazine.

    In a preprint posted online Thursday night, researchers at Google in collaboration with physicists at Stanford University (US), Princeton University (US) and other universities say that they have used Google’s quantum computer to demonstrate a genuine “time crystal” for the first time.

    A novel phase of matter that physicists have strived to realize for many years, a time crystal is an object whose parts move in a regular repeating cycle sustaining this constant change without burning any energy.

    “The consequence is amazing: You evade the second law of thermodynamics,” said co-author Roderich Moessner, director of the MPG Institute for the Physics of Complex Systems [MPG Institut für Physik komplexer Systeme] (DE) in Dresden, Germany. That’s the law that says disorder always increases.

    Time crystals are also the first objects to spontaneously break “time-translation symmetry,” the usual rule that a stable object will remain the same throughout time. A time crystal is both stable and ever-changing, with special moments that come at periodic intervals in time.

    The time crystal is a new category of phases of matter expanding the definition of what a phase is. All other known phases, like water or ice, are in thermal equilibrium: Their constituent atoms have settled into the state with the lowest energy permitted by the ambient temperature, and their properties don’t change with time. The time crystal is the first “out-of-equilibrium” phase: It has order and perfect stability despite being in an excited and evolving state.

    “This is just this completely new and exciting space that we’re working in now,” said Vedika Khemani, a condensed matter physicist now at Stanford who co-discovered the novel phase while she was a graduate student and co-authored the new paper.

    Khemani, Moessner, Shivaji Sondhi of Princeton and Achilleas Lazarides of Loughborough University (UK) discovered the possibility of the phase and described its key properties in 2015; a rival group of physicists led by Chetan Nayak of Microsoft Station Q and the University of California-Santa Barbara (US) identified it as a time crystal soon after.

    Researchers have raced to create a time crystal over the past five years, but previous demos, though successful on their own terms, have failed to satisfy all the criteria needed to establish the time crystal’s existence. “There are good reasons to think that none of those experiments completely succeeded, and a quantum computer like [Google’s] would be particularly well placed to do much better than those earlier experiments,” said John Chalker, a condensed matter physicist at the University of Oxford (UK) who wasn’t involved in the new work.

    Google’s quantum computing team made headlines in 2019 when they performed the first-ever computation that ordinary computers weren’t thought to be able to do in a practical amount of time. Yet that task was contrived to show a speedup and was of no inherent interest. The new time crystal demo marks one of the first times a quantum computer has found gainful employment.

    “It’s a fantastic use of [Google’s] processor,” Nayak said.

    With today’s preprint, which has been submitted for publication, and other recent results, researchers have fulfilled the original hope for quantum computers. In his 1982 paper proposing the devices, the physicist Richard Feynman argued that they could be used to simulate the particles of any imaginable quantum system.

    A time crystal exemplifies that vision. It’s a quantum object that nature itself probably never creates, given its complex combination of delicate ingredients. Imaginations conjured the recipe, stirred by nature’s most baffling laws.

    An Impossible Idea, Resurrected

    The original notion of a time crystal had a fatal flaw.

    The Nobel Prize-winning physicist Frank Wilczek conceived the idea in 2012, while teaching a class about ordinary (spatial) crystals. “If you think about crystals in space, it’s very natural also to think about the classification of crystalline behavior in time,” he told this magazine not long after.

    Consider a diamond, a crystalline phase of a clump of carbon atoms. The clump is governed by the same equations everywhere in space, yet it takes a form that has periodic spatial variations, with atoms positioned at lattice points. Physicists say that it “spontaneously breaks space-translation symmetry.” Only minimum-energy equilibrium states spontaneously break spatial symmetries in this way.

    Wilczek envisioned a multi-part object in equilibrium, much like a diamond. But this object breaks time-translation symmetry: It undergoes periodic motion, returning to its initial configuration at regular intervals.

    Wilczek’s proposed time crystal was profoundly different from, say, a wall clock — an object that also undergoes periodic motion. Clock hands burn energy and stop when the battery runs out. A Wilczekian time crystal requires no input and continues indefinitely, since the system is in its ultra-stable equilibrium state.

    If it sounds implausible, it is: After much thrill and controversy, a 2014 proof showed that Wilczek’s prescription fails, like all other perpetual-motion machines conceived throughout history.

    That year, researchers at Princeton were thinking about something else. Khemani and her doctoral adviser, Sondhi, were studying many-body localization, an extension of Anderson localization, the Nobel Prize-winning 1958 discovery that an electron can get stuck in place, as if in a crevice in a rugged landscape.

    An electron is best pictured as a wave, whose height in different places gives the probability of detecting the particle there. The wave naturally spreads out over time. But Philip Anderson discovered that randomness — such as the presence of random defects in a crystal lattice — can cause the electron’s wave to break up, destructively interfere with itself, and cancel out everywhere except in a small region. The particle localizes.

    People thought for decades that interactions between multiple particles would destroy the interference effect. But in 2005, three physicists at Princeton and Columbia University (US) showed that a one-dimensional chain of quantum particles can experience many-body localization; that is, they all get stuck in a fixed state. This phenomenon would become the first ingredient of the time crystal.

    Imagine a row of particles, each with a magnetic orientation (or “spin”) that points up, down, or some probability of both directions. Imagine that the first four spins initially point up, down, down and up. The spins will quantum mechanically fluctuate and quickly align, if they can. But random interference between them can cause the row of particles to get stuck in their particular configuration, unable to rearrange or settle into thermal equilibrium. They’ll point up, down, down and up indefinitely.

    Sondhi and a collaborator had discovered that many-body localized systems can exhibit a special kind of order, which would become the second key ingredient of a time crystal: If you flip all the spins in the system (yielding down, up, up and down in our example), you get another stable, many-body localized state.

    Samuel Velasco/Quanta Magazine.

    In the fall of 2014, Khemani joined Sondhi on sabbatical at the Max Planck Institute in Dresden. There, Moessner and Lazarides specialized in so-called Floquet systems: periodically driven systems, such as a crystal that’s being stimulated with a laser of a certain frequency. The laser’s intensity, and thus the strength of its effect on the system, periodically varies.

    Moessner, Lazarides, Sondhi and Khemani studied what happens when a many-body localized system is periodically driven in this way. They found in calculations and simulations that when you tickle a localized chain of spins with a laser in a particular way, they’ll flip back and forth, moving between two different many-body localized states in a repeating cycle forever without absorbing any net energy from the laser.

    They called their discovery a pi spin-glass phase (where the angle pi signifies a 180-degree flip). The group reported the concept of this new phase of matter — the first many-body, out-of-equilibrium phase ever identified — in a 2015 preprint, but the words “time crystal” didn’t appear anywhere in it. The authors added the term in an updated version, published in Physical Review Letters in June 2016, thanking a reviewer in the acknowledgments for making the connection between their pi spin-glass phase and time crystals.

    Something else happened between the preprint’s appearance and its publication: Nayak, who is a former graduate student of Wilczek’s, and collaborators Dominic Else and Bela Bauer put out a preprint in March 2016 proposing the existence of objects called Floquet time crystals. They pointed to Khemani and company’s pi spin-glass phase as an example.

    A Floquet time crystal exhibits the kind of behavior envisioned by Wilczek, but only while being periodically driven by an external energy source. This kind of time crystal circumvents the failure of Wilczek’s original idea by never professing to be in thermal equilibrium. Because it’s a many-body localized system, its spins or other parts are unable to settle into equilibrium; they’re stuck where they are. But the system doesn’t heat up either, despite being pumped by a laser or other driver. Instead, it cycles back and forth indefinitely between localized states.

    Already, the laser will have broken the symmetry between all moments in time for the row of spins, imposing instead “discrete time-translation symmetry” — that is, identical conditions only after each periodic cycle of the laser. But then, through its back-and-forth flips, the row of spins further breaks the discrete time-translation symmetry imposed by the laser, since its own periodic cycles are multiples of the laser’s.

    Khemani and co-authors had characterized this phase in detail, but Nayak’s group couched it in the language of time, symmetry and spontaneous symmetry-breaking — all fundamental concepts in physics. As well as offering sexier terminology, they provided new facets of understanding, and they slightly generalized the notion of a Floquet time crystal beyond the pi spin-glass phase (noting that a certain symmetry it has isn’t needed). Their paper was published in Physical Review Letters in August 2016, two months after Khemani and company published the theoretical discovery of the first example of the phase.

    Both groups claim to have discovered the idea. Since then, the rival researchers and others have raced to create a time crystal in reality.

    The Perfect Platform

    Nayak’s crew teamed up with Chris Monroe at the University of Maryland (US), who uses electromagnetic fields to trap and control ions. Last month, the group reported in Science that they’d turned the trapped ions into an approximate, or “prethermal,” time crystal. Its cyclical variations (in this case, ions jumping between two states) are practically indistinguishable from those of a genuine time crystal. But unlike a diamond, this prethermal time crystal is not forever; if the experiment ran for long enough, the system would gradually equilibrate and the cyclical behavior would break down.

    Khemani, Sondhi, Moessner and collaborators hitched their wagon elsewhere. In 2019, Google announced that its Sycamore quantum computer had completed a task in 200 seconds that would take a conventional computer 10,000 years. (Other researchers would later describe a way to greatly speed up the ordinary computer’s calculation.) In reading the announcement paper, Moessner said, he and his colleagues realized that “the Sycamore processor contains as its fundamental building blocks exactly the things we need to realize the Floquet time crystal.”

    Serendipitously, Sycamore’s developers were also looking for something to do with their machine, which is too error-prone to run the cryptography and search algorithms designed for full-fledged quantum computers. When Khemani and colleagues reached out to Kostya Kechedzhi, a theorist at Google, he and his team quickly agreed to collaborate on the time crystal project. “My work, not only with discrete time crystals but other projects, is to try and use our processor as a scientific tool to study new physics or chemistry,” Kechedzhi said.


    Video: Quantum computers aren’t the next generation of supercomputers — they’re something else entirely. Before we can even begin to talk about their potential applications, we need to understand the fundamental physics that drives the theory of quantum computing. Credit: Emily Buder/Quanta Magazine; Chris FitzGerald and DVDP for Quanta Magazine.

    Quantum computers consist of “qubits” — essentially controllable quantum particles, each of which can maintain two possible states, labeled 0 and 1, at the same time. When qubits interact, they can collectively juggle an exponential number of simultaneous possibilities, enabling computing advantages.

    Google’s qubits consist of superconducting aluminum strips. Each has two possible energy states, which can be programmed to represent spins pointing up or down. For the demo, Kechedzhi and collaborators used a chip with 20 qubits to serve as the time crystal.

    Perhaps the main advantage of the machine over its competitors is its ability to tune the strengths of interactions between its qubits. This tunability is key to why the system could become a time crystal: The programmers could randomize the qubits’ interaction strengths, and this randomness created destructive interference between them that allowed the row of spins to achieve many-body localization. The qubits could lock into a set pattern of orientations rather than aligning.

    The researchers gave the spins arbitrary initial configurations, such as: up, down, down, up, and so on. Pumping the system with microwaves flipped up-pointing spins to down and vice versa. By running tens of thousands of demos for each initial configuration and measuring the states of the qubits after different amounts of time in each run, the researchers could observe that the system of spins was flipping back and forth between two many-body localized states.

    The hallmark of a phase is extreme stability. Ice stays as ice even if the temperature fluctuates. Indeed, the researchers found that microwave pulses only had to flip spins somewhere in the ballpark of 180 degrees, but not exactly that much, for the spins to return to their exact initial orientation after two pulses, like little boats righting themselves. Furthermore, the spins never absorbed or dissipated net energy from the microwave laser, leaving the disorder of the system unchanged.
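
    To give a flavor of the kind of periodically driven dynamics being described — the following is a tiny numerical toy on four simulated spins, not Google’s 20-qubit experiment or its actual pulse sequence — one can iterate a drive period of imperfect flips plus disordered Ising couplings and print one spin’s orientation after each period:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4                                    # toy chain of 4 spins (the real demo used 20 qubits)

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def embed(op, site):
        """Place a single-spin operator at `site` in the n-spin system."""
        out = np.array([[1.0 + 0j]])
        for i in range(n):
            out = np.kron(out, op if i == site else I2)
        return out

    # Step 1 of each drive period: an imperfect global flip, deliberately short of 180 degrees.
    theta = 0.95 * np.pi
    pulse = np.eye(2**n, dtype=complex)
    for i in range(n):
        pulse = (np.cos(theta / 2) * np.eye(2**n) - 1j * np.sin(theta / 2) * embed(X, i)) @ pulse

    # Step 2: random Ising couplings and fields -- the disorder needed for many-body localization.
    # This part of the evolution is diagonal in the up/down basis, so exponentiating it is easy.
    h_diag = np.zeros(2**n)
    for i in range(n - 1):
        h_diag += rng.uniform(0.5, 1.5) * np.real(np.diag(embed(Z, i) @ embed(Z, i + 1)))
    for i in range(n):
        h_diag += rng.uniform(-0.5, 0.5) * np.real(np.diag(embed(Z, i)))
    interactions = np.diag(np.exp(-1j * h_diag))

    floquet = interactions @ pulse           # one full drive period

    # Start from up, down, down, up and print spin 0's orientation after each period.
    state = np.zeros(2**n, dtype=complex)
    state[0b0110] = 1.0                      # site 0 is the leftmost bit: up, down, down, up
    Z0 = embed(Z, 0)
    for period in range(12):
        print(period, round(float(np.real(state.conj() @ (Z0 @ state))), 3))
        state = floquet @ state

    Qualitatively, the orientation is expected to alternate in sign from one period to the next while the disorder protects it, and to drift if the couplings are switched off — a cartoon of the period-doubling signature, not a substitute for the paper’s analysis.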

    It’s unclear whether a Floquet time crystal might have practical use. But its stability seems promising to Moessner. “Something that’s as stable as this is unusual, and special things become useful,” he said.

    Or the state might be merely conceptually useful. It’s the first and simplest example of an out-of-equilibrium phase, but the researchers suspect that more such phases are physically possible.

    Nayak argues that time crystals illuminate something profound about the nature of time. Normally in physics, he said, “however much you try to treat [time] as being just another dimension, it is always kind of an outlier.” Einstein made the best attempt at unification, weaving 3D space together with time into a four-dimensional fabric: space-time. But even in his theory, unidirectional time is unique. With time crystals, Nayak said, “this is the first case that I know of where all of a sudden time is just one of the gang.”

    Chalker argues, though, that time remains an outlier. Wilczek’s time crystal would have been a true unification of time and space, he said. Spatial crystals are in equilibrium, and relatedly, they break continuous space-translation symmetry. The discovery that, in the case of time, only discrete time-translation symmetry may be broken by time crystals puts a new angle on the distinction between time and space.

    These discussions will continue, driven by the possibility of exploration on quantum computers. Condensed matter physicists used to concern themselves with the phases of the natural world. “The focus moved from studying what nature gives us,” Chalker said, to dreaming up exotic forms of matter that quantum mechanics allows.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 11:45 am on July 23, 2021 Permalink | Reply
    Tags: "New Shape Opens ‘Wormhole’ Between Numbers and Geometry", A revitalized geometric object called the Fargues-Fontaine curve., , Beginning in the early 1980s Vladimir Drinfeld and later Alexander Beilinson proposed that there should be a way to interpret Langlands’ conjectures in geometric terms., By the 1500s mathematicians had discovered tidy formulas for calculating the roots of polynomials whose highest powers are 2 3 or 4., For decades the geometric Langlands program remained at a distance from the original one., Galois proposed studying the symmetries between roots which he encoded in a new mathematical object eventually called a Galois group., In 1832 the young mathematician Évariste Galois discovered the search was fruitless., Langlands proposed that there should be a way of matching every Galois group with an object called an automorphic form., Laurent Fargues and Peter Scholze have found a new more powerful way of connecting number theory and geometry as part of the sweeping Langlands program., Quanta Magazine (US), The final result doesn’t so much bridge numbers and geometry as collapse the ground between them., The Langlands program began in 1967 when its namesake Robert Langlands wrote a letter to a famed mathematician named André Weil., The Langlands program is a network of conjectures that touch upon almost every area of pure mathematics., The Langlands program is a sprawling research vision that begins with a simple concern: finding solutions to polynomial equations., The long-running “Langlands program” which seeks to link disparate branches of mathematics — like calculus and geometry — to answer some of the most fundamental questions about numbers., The new work from Scholze and Fargues finally fulfills the hopes pinned on the geometric Langlands program., The work fashions a new geometric object that fulfills a bold once fanciful dream about the relationship between geometry and numbers., They searched for ways to identify the roots of polynomials with variables raised to the power of 5 and beyond., Throughout the 20th century mathematicians devised new ways of studying Galois groups. One main strategy involved creating a dictionary translating between the groups and other objects., You can graph polynomials. You can’t graph a number.   

    From Quanta Magazine (US) : “New Shape Opens ‘Wormhole’ Between Numbers and Geometry” 

    From Quanta Magazine (US)

    July 19, 2021
    Kevin Hartnett

    Laurent Fargues and Peter Scholze have found a new, more powerful way of connecting number theory and geometry as part of the sweeping Langlands program.

    Matteo Bassini for Quanta Magazine; spots by Olena Shmahalo/Quanta Magazine.

    The grandest project in mathematics has received a rare gift, in the form of a mammoth 350-page paper posted in February that will change the way researchers around the world investigate some of the field’s deepest questions. The work fashions a new geometric object that fulfills a bold, once fanciful dream about the relationship between geometry and numbers.

    “This truly opens up a tremendous amount of possibilities. Their methods and constructions are so new they’re just waiting to be explored,” said Tasho Kaletha of the University of Michigan (US).

    The work is a collaboration between Laurent Fargues of the Mathematics Institute of Jussieu–Paris Rive Gauche [Institut de Mathématiques de Jussieu-Paris Rive Gauche](FR) in Paris and Peter Scholze of the Rhenish Friedrich Wilhelm University of Bonn[Rheinische Friedrich-Wilhelms-Universität Bonn](DE). It opens a new front in the long-running “Langlands program” which seeks to link disparate branches of mathematics — like calculus and geometry — to answer some of the most fundamental questions about numbers.

    Their paper realizes that vision, giving mathematicians an entirely new way of thinking about questions that have inspired and confounded them for centuries.

    At the center of Fargues and Scholze’s work is a revitalized geometric object called the Fargues-Fontaine curve. It was first developed around 2010 by Fargues and Jean-Marc Fontaine, who was a professor at Paris-Sud University until he died of cancer in 2019. After a decade, the curve is only now achieving its highest form.

    “Back then they knew the Fargues-Fontaine curve was something interesting and important, but they didn’t understand in which ways,” said Eva Viehmann of the Technical University of Munich [Technische Universität München] (DE).

    The curve might have remained confined to the technical corner of mathematics where it was invented, but in 2014 events involving Fargues and Scholze propelled it to the center of the field. Over the next seven years they worked out the foundational details needed to adapt Fargues’ curve to Scholze’s theory. The final result doesn’t so much bridge numbers and geometry as collapse the ground between them.

    “It’s some kind of wormhole between two different worlds,” said Scholze. “They really just become the same thing somehow through a different lens.”


    Root Harvest

    The Langlands program is a sprawling research vision that begins with a simple concern: finding solutions to polynomial equations like x^2 − 2 = 0 and x^4 − 10x^2 + 22 = 0.
    Solving them means finding the “roots” of the polynomial — the values of x that make the polynomial equal zero (x = ±√2 for the first example, and x = ±√(5 ± √3) for the second).
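
    A few lines of Python confirm that arithmetic numerically (nothing deeper than a sanity check, using numpy):

    import numpy as np

    # Coefficients are listed from the highest power down to the constant term.
    print(np.roots([1, 0, -2]))          # roots of x^2 - 2: approximately +/-1.414, i.e. ±√2
    print(np.roots([1, 0, -10, 0, 22]))  # roots of x^4 - 10x^2 + 22: approximately ±2.594 and ±1.808
    print(np.sqrt(5 + np.sqrt(3)), np.sqrt(5 - np.sqrt(3)))  # matches ±√(5 ± √3)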

    By the 1500s mathematicians had discovered tidy formulas for calculating the roots of polynomials whose highest powers are 2, 3 or 4. They then searched for ways to identify the roots of polynomials with variables raised to the power of 5 and beyond. But in 1832 the young mathematician Évariste Galois discovered the search was fruitless, proving that there are no general methods for calculating the roots of higher-power polynomials.

    Galois didn’t stop there, though. In the months before his death in a duel in 1832 at age 20, Galois laid out a new theory of polynomial solutions. Rather than calculating roots exactly — which can’t be done in most cases — he proposed studying the symmetries between roots which he encoded in a new mathematical object eventually called a Galois group.

    In the example x^2 − 2, instead of making the roots explicit, the Galois group emphasizes that the two roots (whatever they are) are mirror images of each other as far as the laws of algebra are concerned.
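
    A small Python sketch — an illustration of the idea, not a Galois-group computation — makes that mirror symmetry concrete: swapping √2 for −√2 in numbers of the form a + b√2 respects multiplication, so the laws of algebra cannot tell the two roots apart.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class QSqrt2:
        """Numbers of the form a + b*sqrt(2), with integer a and b for simplicity."""
        a: int
        b: int

        def __mul__(self, other):
            # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
            return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                          self.a * other.b + self.b * other.a)

    def conjugate(z: QSqrt2) -> QSqrt2:
        """The nontrivial symmetry: swap sqrt(2) with -sqrt(2)."""
        return QSqrt2(z.a, -z.b)

    u = QSqrt2(1, 3)   # 1 + 3√2
    v = QSqrt2(2, -1)  # 2 - √2
    # The symmetry respects the algebra: conjugating a product equals the product of conjugates.
    print(conjugate(u * v) == conjugate(u) * conjugate(v))  # True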

    “Mathematicians had to step away from formulas because usually there were no formulas,” said Brian Conrad of Stanford University (US). “Computing a Galois group is some measure of computing the relations among the roots.”

    Throughout the 20th century mathematicians devised new ways of studying Galois groups. One main strategy involved creating a dictionary translating between the groups and other objects — often functions coming from calculus — and investigating those as a proxy for working with Galois groups directly. This is the basic premise of the Langlands program, which is a broad vision for investigating Galois groups — and really polynomials — through these types of translations.

    The Langlands program began in 1967 when its namesake, Robert Langlands, wrote a letter to a famed mathematician named André Weil. Langlands proposed that there should be a way of matching every Galois group with an object called an automorphic form. While Galois groups arise in algebra (reflecting the way you use algebra to solve equations), automorphic forms come from a very different branch of mathematics called analysis, which is an enhanced form of calculus. Mathematical advances from the first half of the 20th century had identified enough similarities between the two to make Langlands suspect a more thorough link.

    “It’s remarkable that these objects of a very different nature somehow communicate with each other,” said Ana Caraiani of Imperial College London (UK).

    If mathematicians could prove what came to be called the Langlands correspondence, they could confidently investigate all polynomials using the powerful tools of calculus. The conjectured relationship is so fundamental that its solution may also touch on many of the biggest open problems in number theory, including three of the million-dollar Millennium Prize problems: the Riemann hypothesis, the BSD conjecture and the Hodge conjecture.

    Given the stakes, generations of mathematicians have been motivated to join the effort, developing Langlands’ initial conjectures into what is almost certainly the largest, most expansive project in the field today.

    “The Langlands program is a network of conjectures that touch upon almost every area of pure mathematics,” said Caraiani.

    Numbers From Shapes

    Beginning in the early 1980s Vladimir Drinfeld and later Alexander Beilinson proposed that there should be a way to interpret Langlands’ conjectures in geometric terms. The translation between numbers and geometry is often difficult, but when it works it can crack problems wide open.

    To take just one example, a basic question about a number is whether it has a repeated prime factor. The number 12 does: It factors into 2 × 2 × 3, with the 2 occurring twice. The number 15 does not (it factors into 3 × 5).

    In general, there’s no quick way of knowing whether a number has a repeated factor. But there is an analogous geometric problem which is much easier.

    Polynomials have many of the same properties as numbers: You can add, subtract, multiply and divide them. There’s even a notion of what it means for a polynomial to be “prime.” But unlike numbers, polynomials have a clear geometric guise. You can graph their solutions and study the graphs to gain insights about them.

    For instance, if the graph is tangent to the x-axis at any point, you can deduce that the polynomial has a repeated factor (indicated at exactly the point of tangency). It’s just one example of how a murky arithmetic question acquires a visual meaning once converted into its analogue for polynomials.
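
    A short sympy check illustrates both sides of the analogy (the particular numbers and polynomials are just examples): a number has a repeated prime factor when some exponent in its factorization exceeds 1, and a polynomial has a repeated factor exactly when it shares a nonconstant factor with its derivative — the algebraic counterpart of the graph being tangent to the x-axis.

    import sympy as sp

    x = sp.symbols('x')

    # Numbers: 12 = 2^2 * 3 has a repeated prime factor; 15 = 3 * 5 does not.
    print(any(e > 1 for e in sp.factorint(12).values()))   # True
    print(any(e > 1 for e in sp.factorint(15).values()))   # False

    def has_repeated_factor(f):
        """A polynomial has a repeated factor iff gcd(f, f') is nonconstant."""
        return sp.degree(sp.gcd(f, sp.diff(f, x)), x) > 0

    print(has_repeated_factor((x - 1)**2 * (x + 2)))  # True: the graph is tangent to the x-axis at x = 1
    print(has_repeated_factor(x**2 - 2))              # False: two distinct roots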

    “You can graph polynomials. You can’t graph a number. And when you graph a [polynomial] it gives you ideas,” said Conrad. “With a number you just have the number.”

    The “geometric” Langlands program, as it came to be called, aimed to find geometric objects with properties that could stand in for the Galois groups and automorphic forms in Langlands’ conjectures. Proving an analogous correspondence in this new setting by using geometric tools could give mathematicians more confidence in the original Langlands conjectures and perhaps suggest useful ways of thinking about them. It was a nice vision, but also a somewhat airy one — a bit like saying you could cross the universe if you only had a time machine.

    “Making geometric objects that serve a similar role in the setting of numbers is a much more difficult thing to do,” said Conrad.

    So for decades the geometric Langlands program remained at a distance from the original one. The two were animated by the same goal, but they involved such fundamentally different objects that there was no real way to make them talk to each other.

    “The arithmetic people sort of looked bemused by [the geometric Langlands program]. They said it’s fine and good, but completely unrelated to our concern,” said Kaletha.

    The new work from Scholze and Fargues finally fulfills the hopes pinned on the geometric Langlands program — by finding the first shape whose properties communicate directly with Langlands’ original concerns.

    Scholze’s Tour de Force

    In September 2014, Scholze was teaching a special course at the University of California-Berkeley (US). Despite being only 26, he was already a legend in the mathematics world. Two years earlier he had completed his dissertation, in which he articulated a new geometric theory based on objects he’d invented called perfectoid spaces. He then used this framework to solve part of a problem in number theory called the weight-monodromy conjecture.

    But more important than the particular result was the sense of possibility surrounding it — there was no telling how many other questions in mathematics might yield to this incisive new perspective.

    The topic of Scholze’s course was an even more expansive version of his theory of perfectoid spaces. Mathematicians filled the seats in the small seminar room, lined up along the walls and spilled out into the hallway to hear him talk.

    “Everyone wanted to be there because we knew this was revolutionary stuff,” said David Ben-Zvi of the University of Texas-Austin (US).

    Scholze’s theory was based on special number systems called the p-adics. The “p” in p-adic stands for “prime,” as in prime numbers. For each prime, there is a unique p-adic number system: the 2-adics, the 3-adics, the 5-adics and so on. P-adic numbers have been a central tool in mathematics for over a century. They’re useful as more manageable number systems in which to investigate questions that occur back in the rational numbers (numbers that can be written as a ratio of positive or negative whole numbers), which are unwieldy by comparison.

    The virtue of p-adic numbers is that they’re each based on just one single prime. This makes them more straightforward, with more obvious structure, than the rationals, which have an infinitude of primes with no obvious pattern among them. Mathematicians often try to understand basic questions about numbers in the p-adics first, and then take those lessons back to their investigation of the rationals.
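
    One concrete way to see the “one prime at a time” idea — a small illustrative sketch, not part of Scholze’s machinery — is the p-adic valuation and the absolute value built from it, under which a rational number counts as “small” when it is highly divisible by p:

    from fractions import Fraction

    def vp(x: Fraction, p: int) -> int:
        """p-adic valuation: the net power of p dividing x (negative if p divides the denominator)."""
        if x == 0:
            raise ValueError("the valuation of 0 is +infinity")
        v, num, den = 0, x.numerator, x.denominator
        while num % p == 0:
            num //= p
            v += 1
        while den % p == 0:
            den //= p
            v -= 1
        return v

    def abs_p(x: Fraction, p: int) -> Fraction:
        """p-adic absolute value |x|_p = p^(-v_p(x))."""
        v = vp(x, p)
        return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

    print(vp(Fraction(12), 2), abs_p(Fraction(12), 2))      # 12 = 2^2 * 3, so v_2 = 2 and |12|_2 = 1/4
    print(vp(Fraction(3, 8), 2), abs_p(Fraction(3, 8), 2))  # v_2(3/8) = -3 and |3/8|_2 = 8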

    “The p-adic numbers are a small window into the rational numbers,” said Kaletha.

    All number systems have a geometric form — the real numbers, for instance, take the form of a line. Scholze’s perfectoid spaces gave a new and more useful geometric form to the p-adic numbers. This enhanced geometry made the p-adics, as seen through his perfectoid spaces, an even more effective way to probe basic number-theoretic phenomena, like questions about the solutions of polynomial equations.

    “He reimagined the p-adic world and made it into geometry,” said Ben-Zvi. “Because they’re so fundamental, this leads to lots and lots of successes.”

    In his Berkeley course, Scholze presented a more general version of his theory of perfectoid spaces, built on even newer objects he’d devised called diamonds. The theory promised to further enlarge the uses of the p-adic numbers. Yet at the time Scholze began teaching, he had not even finished working it out.

    “He was giving the course as he was developing the theory. He was coming up with ideas in the evening and presenting them fresh out of his mind in the morning,” said Kaletha.

    It was a virtuosic display, and one of the people in the room to hear it was Laurent Fargues.

    Have Curve, Will Travel

    At the same time Scholze was giving his lectures, Fargues was attending a special semester at the Mathematical Sciences Research Institute just up the hill from the Berkeley campus. He had thought a lot about the p-adic numbers, too. For the past decade he’d worked with Jean-Marc Fontaine in an area of math called p-adic Hodge theory, which focuses on basic arithmetic questions about these numbers. During that time, he and Fontaine had come up with a new geometric object of their own. It was a curve — the Fargues-Fontaine curve — whose points each represented a version of an important object called a p-adic ring.

    As originally conceived, it was a narrowly useful tool in a technical part of mathematics, not something likely to shake up the entire field.

    “It’s an organizing principle in p-adic Hodge theory, that’s how I think of it. It was impossible for me to keep track of all these rings before this curve came up,” said Caraiani.

    But as Fargues sat listening to Scholze, he envisioned an even greater role for the curve in mathematics. The never-realized goal of the geometric Langlands program was to find a geometric object that encoded answers to questions in number theory. Fargues perceived how his curve, merged with Scholze’s p-adic geometry, could serve exactly that role. Around mid-semester he pulled Scholze aside and shared his nascent plan. Scholze was skeptical.

    “He mentioned this idea to me over a coffee break at MSRI,” said Scholze. “It was not a very long conversation. At first I thought it couldn’t be good.”

    But they had more conversations, and Scholze soon realized the approach might work after all. On December 5, as the semester wound down, Fargues gave a lecture at MSRI in which he introduced a new vision for the geometric Langlands program. He proposed that it should be possible to redefine the Fargues-Fontaine curve in terms of Scholze’s p-adic geometry, and then use that redefined object to prove a version of the Langlands correspondence. Fargues’ proposal was a final, unexpected turn in what had already been a thrilling season of mathematics.

    “It was like this grand finale of this semester. I remember just being in shock,” said Ben-Zvi.

    A Local Correspondence

    The original Langlands conjectures are about matching representations of the Galois groups of the rational numbers with automorphic forms. The p-adics are a different number system, and there is a version of the Langlands conjectures there, too. (Both are still separate from the geometric Langlands program.) It also involves a kind of matching, though in this case it’s between representations of the Galois group of the p-adic numbers and representations of p-adic groups.

    While their objects are different, the spirit of the two conjectures is the same: to study solutions to polynomials — in terms of rational numbers in one case and p-adic numbers in the other — by relating two seemingly unrelated kinds of objects. Mathematicians refer to the Langlands conjecture for rational numbers as the “global” Langlands correspondence, because the rationals contain all the primes, and the version for p-adics as the “local” Langlands correspondence, since p-adic number systems deal with one prime at a time.
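    To make “one prime at a time” concrete, here is a minimal sketch in plain Python (the function names and the number 250 are ours, purely for illustration, and not taken from any of the papers discussed here). Each prime p carries its own notion of size: an integer counts as p-adically “small” when it is divisible by a high power of p, so the same number can look tiny at one prime and ordinary at another.

    def p_adic_valuation(n: int, p: int) -> int:
        """Count how many times the prime p divides the nonzero integer n."""
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v

    def p_adic_abs(n: int, p: int) -> float:
        """The p-adic absolute value: highly divisible by p means 'small'."""
        return float(p) ** (-p_adic_valuation(n, p))

    # The same integer, measured prime by prime:
    print(p_adic_abs(250, 5))   # 250 = 2 * 5**3, so |250|_5 = 5**-3 = 0.008
    print(p_adic_abs(250, 2))   # |250|_2 = 2**-1 = 0.5
    print(p_adic_abs(250, 3))   # 3 does not divide 250, so |250|_3 = 1.0

    Each prime gives a different local picture of the same number, and the local Langlands correspondence lives inside one of those local pictures at a time.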

    In his December lecture at MSRI, Fargues proposed proving the local Langlands conjecture using the geometry of the Fargues-Fontaine curve. But because he and Fontaine had developed the curve for a completely different and more limited task, their original definition lacked the structure these enlarged plans demanded. The curve needed to be recast in a more powerful geometry that could supply that structure and complexity.

    The situation was similar to how you could arrive at a three-sided shape that’s independent of any particular geometric theory, but if you combine that shape with the theory of Euclidean geometry, suddenly it takes on a richer life: You get trigonometry, the Pythagorean theorem and well-defined notions of symmetry. It becomes a fully fledged triangle.

    “[Fargues] was taking the idea of the curve and using the powerful geometry that Scholze developed to flesh out that idea,” said Kaletha. “That allows you to formally state the beautiful properties of the curve.”

    Fargues’ strategy came to be known as the “geometrization of the local Langlands correspondence.” But at the time he proposed it, existing mathematics didn’t have the tools he needed to carry it out, and new geometric theories don’t come along every day. Luckily, history was on his side.

    “[Fargues’ conjecture] was a bold idea because Fargues needed geometry that didn’t exist. But as it turned out Scholze at that very moment was developing it,” said Kaletha.

    Foundation Building

    Following their time together in Berkeley, Fargues and Scholze spent the next seven years establishing a geometric theory that would allow them to reconstruct the Fargues-Fontaine curve in a form suitable for their plans.

    “In 2014 it was basically already clear what the picture should be and how everything should fit together. It was just that everything was completely ill-defined. There were no foundations in place to talk about any of this,” said Scholze.

    The work took place in several stages. In 2017 Scholze completed a paper called Étale Cohomology of Diamonds, which formalized many of the most important ideas he had introduced during his Berkeley lectures. He combined that paper with another massive work that he and co-author Dustin Clausen of the University of Copenhagen [Københavns Universitet](DK) released as a series of lectures in 2020. That material — all 352 pages of it — was needed to establish a foundation for a few particular points that had come up in Scholze’s work on diamonds.

    “Scholze had to come up with a whole other theory which was just there to take care of certain technical issues that came up on the last three pages of his [2017] paper,” said Kaletha.

    Altogether, these and other papers allowed Fargues and Scholze to devise an entirely new way of defining a geometric object. Imagine that you start with an unorganized collection of points — a “cloud of dust,” in Scholze’s words — that you want to glue together in just the right way to assemble the object you’re looking for. The theory Fargues and Scholze developed provides exact mathematical directions for performing that gluing and certifies that, in the end, you will get the Fargues-Fontaine curve. And this time, it’s defined in just the right way for the task at hand — addressing the local Langlands correspondence.

    “That’s technically the only way we can get our hands on it,” said Scholze. “You have to rebuild a lot of foundations of geometry in this kind of framework, and it was very surprising to me that it is possible.”

    After they’d defined the Fargues-Fontaine curve, Fargues and Scholze embarked on the next stage of their journey: equipping it with the features necessary to prove a correspondence between representations of Galois groups and representations of p-adic groups.

    To understand these features, let’s first consider a simpler geometric object, like a circle. At every point on the circle it’s possible to position a line that’s tangent to the shape at exactly that point. Every point has a unique tangent line. You can collect all those many lines together into an auxiliary geometric object, called the tangent bundle, that’s associated to the underlying geometric object, the circle.
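    In standard textbook notation (not notation from Fargues and Scholze’s paper), the construction for the circle can be written out explicitly:

    \[
    S^1 = \{(\cos\theta, \sin\theta) : 0 \le \theta < 2\pi\}, \qquad
    T_\theta S^1 = \{(\cos\theta, \sin\theta) + t(-\sin\theta, \cos\theta) : t \in \mathbb{R}\}.
    \]

    Gathering one tangent line for every angle gives the tangent bundle

    \[
    TS^1 = \{(\theta, t) : 0 \le \theta < 2\pi,\ t \in \mathbb{R}\} \cong S^1 \times \mathbb{R},
    \]

    an infinite cylinder sitting over the circle, with each point of the circle carrying its own copy of a line.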

    In their new work, Fargues and Scholze do something similar for the Fargues-Fontaine curve. But instead of tangent lines and bundles, they define ways of constructing many more complicated geometric objects. One example, a type of object called a sheaf, can be associated naturally to points on the Fargues-Fontaine curve the way tangent lines can be associated to points on a circle.

    Sheaves were developed in the 1950s by Alexander Grothendieck, and they keep track of how algebraic and geometric features of the underlying geometric object interact with each other. For decades, mathematicians have suspected they might be the best objects to focus on in the geometric Langlands program.

    “You reinterpret the theory of representations of Galois groups in terms of sheaves,” said Conrad.

    There are local and global versions of the geometric Langlands program, just as there are for the original one. Questions about sheaves relate to the global geometric program, which Fargues suspected could connect to the local Langlands correspondence. The issue was that mathematicians didn’t have the right kinds of sheaves defined on the right kind of geometric object to carry the day. Now Fargues and Scholze have provided them, via the Fargues-Fontaine curve.

    The End of the Beginning

    Specifically, they came up with two different kinds: Coherent sheaves correspond to representations of p-adic groups, and étale sheaves to representations of Galois groups. In their new paper, Fargues and Scholze prove that there’s always a way to match a coherent sheaf with an étale sheaf, and as a result there’s always a way to match a representation of a p-adic group with a representation of a Galois group.

    In this way, they finally proved one direction of the local Langlands correspondence. But the other direction remains an open question.

    “It gives you one direction, how to go from a representation of a p-adic group to a representation of a Galois group, but doesn’t tell you how to go back,” said Scholze.

    The work is one of the biggest advances so far on the Langlands program — often mentioned in the same breath as 2018 work by Vincent Lafforgue of the Fourier Institute [Institut Fourier](FR) on a different aspect of the Langlands correspondence. It’s also the most tangible evidence yet that earlier mathematicians weren’t foolish to attempt the Langlands program by geometric means.

    “These things are a great vindication for the work people were doing in geometric Langlands for decades,” said Ben-Zvi.

    For mathematics as a whole, there’s a sense of awe and possibility in the reception of the new work: awe at the way the theory of p-adic geometry Scholze has been building since graduate school manifests in the Fargues-Fontaine curve, and possibility because that curve opens entirely new and unexplored dimensions of the Langlands program.

    “It’s really changed everything. These last five or eight years, they have really changed the whole field,” said Viehmann.

    The clear next step is to nail down both sides of the local Langlands correspondence — to prove that it’s a two-way street, rather than the one-way road Fargues and Scholze have paved so far.

    Beyond that, there’s the global Langlands correspondence itself. There’s no obvious way to translate Fargues and Scholze’s geometry of the p-adic numbers into corresponding constructions for the rational numbers. But it’s also impossible to look at this new work and not wonder if there might be a way.

    “It’s a direction I’m really hoping to head into,” Scholze said.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:13 pm on July 20, 2021 Permalink | Reply
    Tags: "A Video Tour of the Standard Model", , , , Quanta Magazine (US), ,   

    From Quanta Magazine (US) via Symmetry: “A Video Tour of the Standard Model” 

    From Quanta Magazine

    via

    Symmetry Mag

    Symmetry

    July 16, 2021
    Kevin Hartnett

    1
    Standard Model of Particle Physics. Credit: Quanta Magazine.


    The Standard Model: The Most Successful Scientific Theory Ever.
    Video: The Standard Model of particle physics is the most successful scientific theory of all time. In this explainer, Cambridge University physicist David Tong recreates the model, piece by piece, to provide some intuition for how the fundamental building blocks of our universe fit together.
    Emily Buder/Quanta Magazine.
    Kristina Armitage and Rui Braz for Quanta Magazine.

    Recently, Quanta has explored the collaboration between physics and mathematics on one of the most important ideas in science: quantum field theory. The basic objects of a quantum field theory are quantum fields, which spread across the universe and, through their fluctuations, give rise to the most fundamental phenomena in the physical world. We’ve emphasized the unfinished business in both physics and mathematics — the ways in which physicists still don’t fully understand a theory they wield so effectively, and the grand rewards that await mathematicians if they can provide a full description of what quantum field theory actually is.

    This incompleteness, however, does not mean the work has been unsatisfying so far.

    For our final entry in this “Math Meets QFT” series, we’re exploring the most prominent quantum field theory of them all: the Standard Model. As the University of Cambridge (UK) physicist David Tong puts it in the accompanying video, it’s “the most successful scientific theory of all time” despite being saddled with a “rubbish name.”

    The Standard Model describes physics in the three spatial dimensions and one time dimension of our universe. It captures the interplay between a dozen quantum fields representing fundamental particles and a handful of additional fields representing forces. The Standard Model ties them all together into a single equation that scientists have confirmed countless times, often with astonishing accuracy. In the video, Professor Tong walks us through that equation term by term, introducing us to all the pieces of the theory and how they fit together. The Standard Model is complicated, but it is easier to work with than many other quantum field theories. That’s because sometimes the fields of the Standard Model interact with each other quite feebly, as writer Charlie Wood described in the second piece in our series.

    From Quanta Magazine : “Mathematicians Prove 2D Version of Quantum Gravity Really Works”
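    The equation Tong walks through is usually quoted in a heavily compressed, schematic form — the version below is the standard shorthand seen on posters and coffee mugs, where each symbol stands in for whole families of terms rather than the fully expanded Lagrangian:

    \[
    \mathcal{L}_{\mathrm{SM}} =
    -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
    + i\,\bar{\psi}\gamma^{\mu} D_{\mu}\psi
    + |D_{\mu}\phi|^{2} - V(\phi)
    + \left(\bar{\psi}_{i}\, y_{ij}\, \phi\, \psi_{j} + \mathrm{h.c.}\right).
    \]

    Read left to right: the force fields (gluons, the W and Z bosons, the photon), the matter fields (quarks and leptons) together with how the forces act on them, the Higgs field and its potential, and finally the Yukawa couplings through which the Higgs gives the matter particles their masses.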

    The Standard Model has been a boon for physics, but it’s also had a bit of a hangover effect. It’s been extraordinarily effective at explaining experiments we can do here on Earth, but it can’t account for several major features of the wider universe, including the action of gravity at short distances and the presence of dark matter and dark energy. Physicists would like to move beyond the Standard Model to an even more encompassing physical theory. But, as the physicist Davide Gaiotto put it in the first piece in our series, the glow of the Standard Model is so strong that it’s hard to see beyond it.

    From Quanta Magazine : “The Mystery at the Heart of Physics That Only Math Can Solve”

    And that, maybe, is where math comes in. Mathematicians will have to develop a fresh perspective on quantum field theory if they want to understand it in a self-consistent and rigorous way. There’s reason to hope that this new vantage will resolve many of the biggest open questions in physics.

    The process of bringing QFT into math may take some time — maybe even centuries, as the physicist Nathan Seiberg speculated in the third piece in our series — but it’s also already well underway. By now, math and quantum field theory have indisputably met. It remains to be seen what happens as they really get to know each other.

    From Quanta Magazine : “Nathan Seiberg on How Math Might Complete the Ultimate Physics Theory”

    See the full article here .



     
  • richardmitnick 9:28 am on July 18, 2021 Permalink | Reply
    Tags: "Neurons Unexpectedly Encode Information in the Timing of Their Firing", , Artificial Intelligence researchers typically have to train artificial neural networks on hundreds or thousands of examples of a pattern or concept before the synapse strengthens., , , , , Information seems to be encoded through the strengthening of synapses only when two neurons fire within tens of milliseconds of each other., It’s really important not just how many [neuron activations] occur but when exactly they occur., Phase precession: a relationship between the continuous rhythm of a brain wave and the specific moments that neurons in that brain area activate., , Place cells: each of which is tuned to a specific region or “place field.”, Quanta Magazine (US), The closer you get to the center of a place field the faster the corresponding place cell fires., The pattern of phase precession was elusive in humans until now., There are other theories about our rapid learning abilities. And researchers stressed that it’s difficult to draw conclusions about any widespread role for phase precession., These studies suggest that phase precession allows the brain to link sequences of times; images; and events in the same way as it does spatial positions.   

    From Quanta Magazine : “Neurons Unexpectedly Encode Information in the Timing of Their Firing” 

    From Quanta Magazine

    July 7, 2021
    Elena Renken

    1
    Samuel Velasco/Quanta Magazine.

    For decades, neuroscientists have treated the brain somewhat like a Geiger counter: The rate at which neurons fire is taken as a measure of activity, just as a Geiger counter’s click rate indicates the strength of radiation. But new research suggests the brain may be more like a musical instrument. When you play the piano, how often you hit the keys matters, but the precise timing of the notes is also essential to the melody.

    “It’s really important not just how many [neuron activations] occur but when exactly they occur,” said Joshua Jacobs, a neuroscientist and biomedical engineer at Columbia University (US) who reported new evidence for this claim last month in Cell.

    For the first time, Jacobs and two coauthors spied neurons in the human brain encoding spatial information through the timing, rather than rate, of their firing. This temporal firing phenomenon is well documented in certain brain areas of rats, but the new study and others suggest it might be far more widespread in mammalian brains. “The more we look for it, the more we see it,” Jacobs said.

    Some researchers think the discovery might help solve a major mystery: how brains can learn so quickly.

    The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons.

    Researchers began noticing phase precession decades ago among the neurons in rat brains that encode information about spatial position. Human brains and rat brains both contain these so-called place cells, each of which is tuned to a specific region, or “place field.” Our brains seem to scale these place fields to cover our current surroundings, whether that’s miles of freeway or the rooms of one’s home, said Kamran Diba, a neuroscientist at the University of Michigan (US). The closer you get to the center of a place field, the faster the corresponding place cell fires. As you leave one place field and enter another, the firing of the first place cell peters out, while that of the second picks up.

    But along with rate, there’s also timing: As the rat passes through a place field, the associated place cell fires earlier and earlier with respect to the cycle of the background theta wave. As the rat crosses from one place field into another, the very early firing of the first place cell occurs close in time with the late firing of the next place cell. Their near-coincident firings cause the synapse, or connection, between them to strengthen, and this coupling of the place cells ingrains the rat’s trajectory into the brain. (Information seems to be encoded through the strengthening of synapses only when two neurons fire within tens of milliseconds of each other.)
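    The timing relationship is easier to see in a toy simulation. The sketch below is an illustration only — invented parameters and a deliberately simple model, not the analysis from the Cell paper: an 8 Hz theta wave acts as the clock, and a place cell’s preferred firing phase slides from late in the cycle to early as the rat crosses its place field.

    import numpy as np

    # Toy model of phase precession (all parameters invented for illustration).
    theta_freq = 8.0                      # Hz, background theta rhythm
    run_speed = 0.25                      # m/s, constant running speed
    field_start, field_end = 0.2, 0.6     # the place field spans 0.4 m of track

    t = np.arange(0.0, 3.0, 0.001)        # 3 seconds of simulated time
    position = run_speed * t              # position along the track, in meters
    theta_phase = (2 * np.pi * theta_freq * t) % (2 * np.pi)

    # How far through the place field the rat is (0 to 1), and whether it is inside.
    depth = np.clip((position - field_start) / (field_end - field_start), 0.0, 1.0)
    in_field = (position >= field_start) & (position <= field_end)

    # The cell's preferred phase precesses from ~360 degrees (late) toward 0 (early).
    preferred_phase = 2 * np.pi * (1.0 - depth)

    # Fire a spike each time the theta clock sweeps past the current preferred phase.
    phase_diff = np.angle(np.exp(1j * (theta_phase - preferred_phase)))
    crossings = in_field[:-1] & (np.diff(np.signbit(phase_diff).astype(int)) == -1)
    spike_times = t[1:][crossings]

    for st in spike_times:
        spike_phase = np.degrees((2 * np.pi * theta_freq * st) % (2 * np.pi))
        print(f"spike at t = {st:.3f} s, theta phase = {spike_phase:5.1f} degrees")
    # The printed phases drift steadily earlier in each theta cycle: phase precession.

    Each pass through the field produces a train of spikes whose theta phases march steadily earlier — the signature the Columbia team then looked for in the human recordings.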

    Phase precession is obvious in rats. “It’s so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” Qasim said. Scientists had also identified phase precession in the spatial processing of bats and marmosets, but the pattern was elusive in humans until now.

    Monitoring individual neurons is too invasive to do on the average human study participant, but the Columbia team took advantage of data collected years ago from 13 epilepsy patients who had already had electrodes implanted to map the electrical signals of their seizures. The electrodes recorded the firings of individual neurons while patients steered their way through a virtual-reality simulation using a joystick. As the patients maneuvered themselves around, the researchers identified phase precession in 12% of the neurons they were monitoring.

    Pulling out these signals required sophisticated statistical analysis, because humans exhibit a more complicated pattern of overlapping brain waves than rodents do — and because less of our neural activity is devoted to navigation. But the researchers could say definitively that phase precession is there.

    Other research suggests that phase precession may be crucial beyond navigation. In animals, the phenomenon has been tied to non-spatial perceptions, including the processing of sounds and smells. And in humans, research co-authored by Jacobs last year found phase precession in time-sensitive brain cells [PNAS]. A not-yet-peer-reviewed preprint [bioRxiv] by cognitive scientists in France and the Netherlands indicated that processing serial images involved phase precession, too. Finally, in Jacobs’ new study, it was found not just in literal navigation, but also as the humans progressed toward abstract goals in the simulation.

    These studies suggest that phase precession allows the brain to link sequences of times, images and events in the same way as it does spatial positions. “Finding that first evidence really opens the door for it to be some sort of universal coding mechanism in the brain — across mammalian species, possibly,” Qasim said. “You might be missing a whole lot of information coding if you’re not tracking the relative timing of neural activity.”

    Neuroscientists are, in fact, on the lookout for a new kind of coding in the brain to answer the longstanding question: How does the brain encode information so quickly? It’s understood that patterns in external data become ingrained in the firing patterns of the network through the strengthening and weakening of synaptic connections. But artificial intelligence researchers typically have to train artificial neural networks on hundreds or thousands of examples of a pattern or concept before the synapse strengths adjust enough for the network to learn the pattern. Mysteriously, humans can typically learn from just one or a handful of examples.

    Phase precession could play a role in that disparity. One hint of this comes from a study [Journal of Neuroscience] by Johns Hopkins University (US) researchers who found that phase precession showed up in rats learning an unfamiliar track — on their first lap. “As soon as you’re learning something, this pattern for learning sequences is already in place,” Qasim added. “That might facilitate very rapid learning of sequences.”

    Phase precession organizes the timing so that learning happens more often than it could otherwise. It arranges for neurons activated by related information to fire in quick-enough succession for the synapse between them to strengthen. “It would point to this notion that the brain is basically computing faster than you would imagine from rate coding alone,” Diba said.

    There are other theories about our rapid learning abilities. And researchers stressed that it’s difficult to draw conclusions about any widespread role for phase precession in the brain from the limited studies so far.

    Still, a thorough search for the phenomenon may be in order. Bradley Lega, a neurologist at the University of Texas Southwestern Medical Center (US), said, “There’s a lot of problems that phase precession can solve.”

    See the full article here .



     