Tagged: Mathematics

  • richardmitnick 1:52 pm on June 18, 2021 Permalink | Reply
    Tags: "Mathematicians Prove 2D Version of Quantum Gravity Really Works", A trilogy of landmark publications, , “Liouville field”- see the description in the full blog post., , DOZZ formula: a finding of Harald Dorn; Hans-Jörg Otto; Alexeif Zamolodchikov; Alexander Zamolodchikov, Fields are central to quantum physics too; however the situation here is more complicated due to the deep randomness of quantum theory., In classical physics for example a single field tells you everything about how a force pushes objects around., In physics today the main actors in the most successful theories are fields., Mathematics, , QFT: Quantum Field Theory-a model of how one or more quantum fields each with their infinite variations act and interact., ,   

    From Quanta Magazine : “Mathematicians Prove 2D Version of Quantum Gravity Really Works” 

    From Quanta Magazine

    June 17, 2021
    Charlie Wood

    In three towering papers, a team of mathematicians has worked out the details of Liouville quantum field theory, a two-dimensional model of quantum gravity.

    Credit: Olena Shmahalo/Quanta Magazine.

    Alexander Polyakov, a theoretical physicist now at Princeton University (US), caught a glimpse of the future of quantum theory in 1981. A range of mysteries, from the wiggling of strings to the binding of quarks into protons, demanded a new mathematical tool whose silhouette he could just make out.

    “There are methods and formulae in science which serve as master keys to many apparently different problems,” he wrote in the introduction to a now famous four-page letter in Physics Letters B. “At the present time we have to develop an art of handling sums over random surfaces.”

    Polyakov’s proposal proved powerful. In his paper he sketched out a formula that roughly described how to calculate averages of a wildly chaotic type of surface, the “Liouville field.” His work brought physicists into a new mathematical arena, one essential for unlocking the behavior of theoretical objects called strings and building a simplified model of quantum gravity.

    Years of toil would lead Polyakov to breakthrough solutions for other theories in physics, but he never fully understood the mathematics behind the Liouville field.

    Over the last seven years, however, a group of mathematicians has done what many researchers thought impossible. In a trilogy of landmark publications, they have recast Polyakov’s formula using fully rigorous mathematical language and proved that the Liouville field flawlessly models the phenomena Polyakov thought it would.

    Vincent Vargas of the French National Centre for Scientific Research [Centre national de la recherche scientifique (CNRS)] (FR) and his collaborators have achieved a rare feat: a strongly interacting quantum field theory perfectly described by a brief mathematical formula.

    “It took us 40 years in math to make sense of four pages,” said Vincent Vargas, a mathematician at the French National Centre for Scientific Research and co-author of the research with Rémi Rhodes of Aix-Marseille University [Aix-Marseille Université] (FR), Antti Kupiainen of the University of Helsinki [Helsingin yliopisto; Helsingfors universitet] (FI), François David of the French National Centre for Scientific Research [Centre national de la recherche scientifique (CNRS)] (FR), and Colin Guillarmou of Paris-Saclay University [Université Paris-Saclay] (FR).

    The three papers forge a bridge between the pristine world of mathematics and the messy reality of physics — and they do so by breaking new ground in the mathematical field of probability theory. The work also touches on philosophical questions regarding the objects that take center stage in the leading theories of fundamental physics: quantum fields.

    “This is a masterpiece in mathematical physics,” said Xin Sun, a mathematician at the University of Pennsylvania (US).

    Infinite Fields

    In physics today the main actors in the most successful theories are fields — objects that fill space, taking on different values from place to place.

    In classical physics for example a single field tells you everything about how a force pushes objects around. Take Earth’s magnetic field: The twitches of a compass needle reveal the field’s influence (its strength and direction) at every point on the planet.

    Fields are central to quantum physics too; however the situation here is more complicated due to the deep randomness of quantum theory. From the quantum perspective, Earth doesn’t generate one magnetic field, but rather an infinite number of different ones. Some look almost like the field we observe in classical physics, but others are wildly different.

    But physicists still want to make predictions — predictions that ideally match, in this case, what a mountaineer reads on a compass. Assimilating the infinite forms of a quantum field into a single prediction is the formidable task of a “quantum field theory,” or QFT. This is a model of how one or more quantum fields each with their infinite variations act and interact.

    Driven by immense experimental support, QFTs have become the basic language of particle physics. The Standard Model is one such QFT, depicting fundamental particles like electrons as fuzzy bumps that emerge from an infinitude of electron fields. It has passed every experimental test to date (although various groups may be on the verge of finding the first holes).

    Physicists play with many different QFTs. Some, like the Standard Model, aspire to model real particles moving through the four dimensions of our universe (three spatial dimensions plus one dimension of time). Others describe exotic particles in strange universes, from two-dimensional flatlands to six-dimensional uber-worlds. Their connection to reality is remote, but physicists study them in the hopes of gaining insights they can carry back into our own world.

    Polyakov’s Liouville field theory is one such example.


    Gravity’s Field

    The Liouville field, which is based on an equation from complex analysis developed in the 1800s by the French mathematician Joseph Liouville, describes a completely random two-dimensional surface — that is, a surface, like Earth’s crust, but one in which the height of every point is chosen randomly. Such a planet would erupt with mountain ranges of infinitely tall peaks, each assigned by rolling a die with infinite faces.

    Such an object might not seem like an informative model for physics, but randomness is not devoid of patterns. The bell curve, for example, tells you how likely you are to randomly pass a seven-foot basketball player on the street. Similarly, bulbous clouds and crinkly coastlines follow random patterns, but it’s nevertheless possible to discern consistent relationships between their large-scale and small-scale features.
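    As a quick numerical illustration of what the bell curve delivers, the toy calculation below estimates the chance that a randomly chosen adult stands seven feet tall. The mean and standard deviation are assumed values chosen only for this sketch, not figures from the article.

```python
# Toy bell-curve calculation; the height statistics below are assumptions.
from math import erf, sqrt

mean, sd = 175.0, 7.5          # assumed adult height distribution, in cm
seven_feet_cm = 213.4

# Probability of exceeding 7 ft under a normal (bell-curve) model.
z = (seven_feet_cm - mean) / sd
p_tail = 0.5 * (1 - erf(z / sqrt(2)))
print(p_tail)                  # roughly one chance in a few million
```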

    Liouville theory can be used to identify patterns in the endless landscape of all possible random, jagged surfaces. Polyakov realized this chaotic topography was essential for modeling strings, which trace out surfaces as they move. The theory has also been applied to describe quantum gravity in a two-dimensional world. Einstein defined gravity as space-time’s curvature, but translating his description into the language of quantum field theory creates an infinite number of space-times — much as the Earth produces an infinite collection of magnetic fields. Liouville theory packages all those surfaces together into one object. It gives physicists the tools to measure the curvature — and hence, gravitation — at every location on a random 2D surface.

    “Quantum gravity basically means random geometry, because quantum means random and gravity means geometry,” said Sun.

    Polyakov’s first step in exploring the world of random surfaces was to write down an expression defining the odds of finding a particular spiky planet, much as the bell curve defines the odds of meeting someone of a particular height. But his formula did not lead to useful numerical predictions.

    To solve a quantum field theory is to be able to use the field to predict observations. In practice, this means calculating a field’s “correlation functions,” which capture the field’s behavior by describing the extent to which a measurement of the field at one point relates, or correlates, to a measurement at another point. Calculating correlation functions in the photon field, for instance, can give you the textbook laws of quantum electromagnetism.

    Polyakov was after something more abstract: the essence of random surfaces, similar to the statistical relationships that make a cloud a cloud or a coastline a coastline. He needed the correlations between the haphazard heights of the Liouville field. Over the decades he tried two different ways of calculating them. He started with a technique called the Feynman path integral and ended up developing a workaround known as the bootstrap. Both methods came up short in different ways, until the mathematicians behind the new work united them in a more precise formulation.

    Add ’Em Up

    You might imagine that accounting for the infinitely many forms a quantum field can take is next to impossible. And you would be right. In the 1940s Richard Feynman, a quantum physics pioneer, developed one prescription for dealing with this bewildering situation, but the method proved severely limited.

    Take, again, Earth’s magnetic field. Your goal is to use quantum field theory to predict what you’ll observe when you take a compass reading at a particular location. To do this, Feynman proposed summing all the field’s forms together. He argued that your reading will represent some average of all the field’s possible forms. The procedure for adding up these infinite field configurations with the proper weighting is known as the Feynman path integral.
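    As a cartoon of Feynman's prescription (a toy illustration only, not a real quantum field theory calculation), one can draw many random configurations of a small lattice "field", weight each one appropriately, and average an observable. The action, lattice size and observable below are arbitrary choices made for the sketch.

```python
# Toy "sum over configurations": importance-sample a one-dimensional lattice
# field and compute a weighted average, in the spirit of a path integral.
# Everything here (action, lattice size, observable) is an illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
sites = 10

def action(fields):
    # assumed toy action: a gradient (stiffness) term plus a mass term
    grad = np.diff(fields, axis=1)
    return 0.5 * np.sum(grad ** 2, axis=1) + 0.5 * np.sum(fields ** 2, axis=1)

configs = rng.normal(size=(100_000, sites))      # random field configurations

# Importance weights: target weight exp(-action) divided by the Gaussian
# proposal density (up to a constant), so the weighted average is consistent.
log_w = -action(configs) + 0.5 * np.sum(configs ** 2, axis=1)
w = np.exp(log_w - log_w.max())

observable = configs[:, 2] * configs[:, 7]       # a simple two-point correlator
print(np.sum(w * observable) / np.sum(w))
```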

    It’s an elegant idea that yields concrete answers only for select quantum fields. No known mathematical procedure can meaningfully average an infinite number of objects covering an infinite expanse of space in general. The path integral is more of a physics philosophy than an exact mathematical recipe. Mathematicians question its very existence as a valid operation and are bothered by the way physicists rely on it.

    “I’m disturbed as a mathematician by something which is not defined,” said Eveliina Peltola, a mathematician at the University of Bonn [Rheinische Friedrich-Wilhelms-Universität Bonn] (DE).

    Physicists can harness Feynman’s path integral to calculate exact correlation functions for only the most boring of fields — free fields, which do not interact with other fields or even with themselves. Otherwise, they have to fudge it, pretending the fields are free and adding in mild interactions, or “perturbations.” This procedure, known as perturbation theory, gets them correlation functions for most of the fields in the Standard Model, because nature’s forces happen to be quite feeble.

    But it didn’t work for Polyakov. Although he initially speculated that the Liouville field might be amenable to the standard hack of adding mild perturbations, he found that it interacted with itself too strongly. Compared to a free field, the Liouville field seemed mathematically inscrutable, and its correlation functions appeared unattainable.

    Up by the Bootstraps

    Polyakov soon began looking for a workaround. In 1984, he teamed up with Alexander Belavin and Alexander Zamolodchikov to develop a technique called the bootstrap — a mathematical ladder that gradually leads to a field’s correlation functions.

    To start climbing the ladder, you need a function which expresses the correlations between measurements at a mere three points in the field. This “three-point correlation function,” plus some additional information about the energies a particle of the field can take, forms the bottom rung of the bootstrap ladder.

    From there you climb one point at a time: Use the three-point function to construct the four-point function, use the four-point function to construct the five-point function, and so on. But the procedure generates conflicting results if you start with the wrong three-point correlation function in the first rung.

    Polyakov, Belavin and Zamolodchikov used the bootstrap to successfully solve a variety of simple QFT theories, but just as with the Feynman path integral, they couldn’t make it work for the Liouville field.

    Then in the 1990s two pairs of physicists — Harald Dorn and Hans-Jörg Otto, and Zamolodchikov and his brother Alexei — managed to hit on the three-point correlation function that made it possible to scale the ladder, completely solving the Liouville field (and its simple description of quantum gravity). Their result, known by their initials as the DOZZ formula, let physicists make any prediction involving the Liouville field. But even the authors knew they had arrived at it partially by chance, not through sound mathematics.

    “They were these kind of geniuses who guessed formulas,” said Vargas.

    Educated guesses are useful in physics, but they don’t satisfy mathematicians, who afterward wanted to know where the DOZZ formula came from. The equation that solved the Liouville field should have come from some description of the field itself, even if no one had the faintest idea how to get it.

    “It looked to me like science fiction,” said Kupiainen. “This is never going to be proven by anybody.”

    Taming Wild Surfaces

    In the early 2010s, Vargas and Kupiainen joined forces with the probability theorist Rémi Rhodes and the physicist François David. Their goal was to tie up the mathematical loose ends of the Liouville field — to formalize the Feynman path integral that Polyakov had abandoned and, just maybe, demystify the DOZZ formula.

    As they began, they realized that a French mathematician named Jean-Pierre Kahane had discovered, decades earlier, what would turn out to be the key to Polyakov’s master theory.

    “In some sense it’s completely crazy that Liouville was not defined before us,” Vargas said. “All the ingredients were there.”

    The insight led to three milestone papers in mathematical physics completed between 2014 and 2020.


    They first polished off the path integral, which had failed Polyakov because the Liouville field interacts strongly with itself, making it incompatible with Feynman’s perturbative tools. So instead, the mathematicians used Kahane’s ideas to recast the wild Liouville field as a somewhat milder random object known as the Gaussian free field. The peaks in the Gaussian free field don’t fluctuate to the same random extremes as the peaks in the Liouville field, making it possible for the mathematicians to calculate averages and other statistical measures in sensible ways.
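    To get a feel for these objects, here is a minimal numerical sketch (an illustration only, not the construction in the papers): it draws an approximate sample of a two-dimensional Gaussian free field on a periodic grid, then exponentiates it with the variance renormalization that Kahane's theory of Gaussian multiplicative chaos makes rigorous. The grid size and the coupling gamma are arbitrary illustrative choices.

```python
import numpy as np

N = 256            # grid points per side (illustrative choice)
gamma = 1.0        # Liouville coupling; the rigorous construction needs gamma < 2

# The Gaussian free field has spectral density ~ 1/|k|^2, so shape complex
# white noise by 1/|k| in Fourier space and transform back.
kx = np.fft.fftfreq(N)[:, None]
ky = np.fft.fftfreq(N)[None, :]
k = np.sqrt(kx**2 + ky**2)

noise = np.random.normal(size=(N, N)) + 1j * np.random.normal(size=(N, N))
spectrum = np.zeros((N, N), dtype=complex)
spectrum[k > 0] = noise[k > 0] / k[k > 0]     # drop the zero mode
h = np.real(np.fft.ifft2(spectrum)) * N       # approximate GFF sample (rough scaling)

# Gaussian multiplicative chaos: exponentiate and renormalize by the variance.
# This is the step that Kahane's theory makes mathematically meaningful.
density = np.exp(gamma * h - 0.5 * gamma**2 * np.var(h))
area_measure = density / density.sum()        # a sample "random surface" area measure

print("field std:", round(h.std(), 3), "heaviest cell weight:", area_measure.max())
```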

    “Somehow it’s all just using the Gaussian free field,” Peltola said. “From that they can construct everything in the theory.”

    In 2014, they unveiled their result: a new and improved version of the path integral Polyakov had written down in 1981, but fully defined in terms of the trusted Gaussian free field. It’s a rare instance in which Feynman’s path integral philosophy has found a solid mathematical execution.

    “Path integrals can exist, do exist,” said Jörg Teschner, a physicist at the German Electron Synchrotron.

    With a rigorously defined path integral in hand, the researchers then tried to see if they could use it to get answers from the Liouville field and to derive its correlation functions. The target was the mythical DOZZ formula — but the gulf between it and the path integral seemed vast.

    “We’d write in our papers, just for propaganda reasons, that we want to understand the DOZZ formula,” said Kupiainen.

    The team spent years prodding their probabilistic path integral, confirming that it truly had all the features needed to make the bootstrap work. As they did so, they built on earlier work by Teschner. Eventually, Vargas, Kupiainen and Rhodes succeeded with a paper posted in 2017 [Annals of Mathematics] and another in October 2020, with Colin Guillarmou. They derived DOZZ and other correlation functions from the path integral and showed that these formulas perfectly matched the equations physicists had reached using the bootstrap.

    “Now we’re done,” Vargas said. “Both objects are the same.”

    The work explains the origins of the DOZZ formula and connects the bootstrap procedure — which mathematicians had considered sketchy — with verified mathematical objects. Altogether, it resolves the final mysteries of the Liouville field.

    “It’s somehow the end of an era,” said Peltola. “But I hope it’s also the beginning of some new, interesting things.”

    New Hope for QFTs

    Vargas and his collaborators now have a unicorn on their hands, a strongly interacting QFT perfectly described in a nonperturbative way by a brief mathematical formula that also makes numerical predictions.

    Now the literal million-dollar question is: How far can these probabilistic methods go? Can they generate tidy formulas for all QFTs? Vargas is quick to dash such hopes, insisting that their tools are specific to the two-dimensional environment of Liouville theory. In higher dimensions, even free fields are too irregular, so he doubts the group’s methods will ever be able to handle the quantum behavior of gravitational fields in our universe.

    But the fresh minting of Polyakov’s “master key” will open other doors. Its effects are already being felt in probability theory, where mathematicians can now wield previously dodgy physics formulas with impunity. Emboldened by the Liouville work, Sun and his collaborators have already imported equations from physics to solve two problems regarding random curves.

    Physicists await tangible benefits too, further down the road. The rigorous construction of the Liouville field could inspire mathematicians to try their hand at proving features of other seemingly intractable QFTs — not just toy theories of gravity but descriptions of real particles and forces that bear directly on the deepest physical secrets of reality.

    “[Mathematicians] will do things that we can’t even imagine,” said Davide Gaiotto, a theoretical physicist at the Perimeter Institute for Theoretical Physics (CA).

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 9:16 pm on April 20, 2021 Permalink | Reply
    Tags: "Looking at the stars or falling by the wayside? How astronomy is failing female scientists", , , , , , , , Mathematics, ,   

    From phys.org : “Looking at the stars or falling by the wayside? How astronomy and all of the Physical Sciences are failing female scientists” 

    From phys.org

    April 20, 2021
    Lisa Kewley

    Women astronomers get disproportionately less telescope time than their male colleagues. Credit: Wikimedia Commons, CC BY-SA.

    “It will take until at least 2080 before women make up just one-third of Australia’s professional astronomers, unless there is a significant boost to how we nurture female researchers’ careers.

    Over the past decade, astronomy has been rightly recognized as leading the push towards gender equity in the sciences. But my new modeling, published today in Nature Astronomy, shows it is not working fast enough.

    The Australian Academy of Science’s decadal plan for astronomy in Australia proposes women should comprise one-third of the senior workforce by 2025.

    It’s a worthy, if modest, target. However, with new data from the academy’s Science in Australia Gender Equity (SAGE) program, I have modeled the effects of current hiring rates and practices and arrived at a depressing, if perhaps not surprising, conclusion. Without a change to the current mechanisms, it will take at least 60 years to reach that 30% level.

    However, the modeling also suggests that the introduction of ambitious, affirmative hiring programs aimed at recruiting and retaining talented women astronomers could see the target reached in just over a decade—and then growing to 50% in a quarter of a century.

    How did we get here?

    Before looking at how that might be done, it’s worth examining how the gender imbalance in physics arose in the first place. To put it bluntly: how did we get to a situation in which 40% of astronomy Ph.D.s are awarded to women, yet they occupy fewer than 20% of senior positions?

    On a broad level, the answer is simple: my analysis shows women depart astronomy at two to three times the rate of men. In Australia, from postdoc status to assistant professor level, 62% of women leave the field, compared with just 17% of men. Between assistant professor and full professor level, 47% of women leave; the male departure rate is about half that. Women’s departure rates are similar in US astronomy [“The Leaky Pipeline for Postdocs: A study of the time between receiving a PhD and securing a faculty job for male and female astronomers”].
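    A back-of-the-envelope check of those numbers (this is simple arithmetic with the rates quoted above, not the workforce model published in Nature Astronomy; the "about half" male rate at the second stage is taken to be 23.5%):

```python
# Start with women holding 40% of astronomy PhDs and apply the quoted
# stage-by-stage departure rates to see the resulting senior fraction.
phd_women, phd_men = 0.40, 0.60

# Postdoc -> assistant professor: 62% of women leave vs 17% of men.
w = phd_women * (1 - 0.62)
m = phd_men * (1 - 0.17)

# Assistant -> full professor: 47% of women leave; assume the male rate is half that.
w *= (1 - 0.47)
m *= (1 - 0.235)

print(f"women among senior astronomers: {w / (w + m):.0%}")  # about 17%, i.e. under 20%
```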

    The next question is: why?

    Many women leave out of sheer disillusionment. Women in physics and astronomy say their careers progress more slowly than those of male colleagues, and that the culture is not welcoming.

    They receive fewer career resources and opportunities. Randomized double-blind trials and broad research studies in astronomy and across the sciences show implicit bias, which means more men are published, cited, invited to speak at conferences, and given telescope time.

    It’s hard to build a solid research-based body of work when one’s access to tools and recognition is disproportionately limited.

    The loyalty problem

    There is another factor that sometimes contributes to the loss of women astronomers: loyalty. In situations where a woman’s male partner is offered a new job in another town or city, the woman more frequently gives up her work to facilitate the move.

    Encouraging universities or research institutes to help partners find suitable work nearby is thus one of the strategies I (and others) have suggested to help recruit women astrophysicists.

    But the bigger task at hand requires institutions to identify, tackle and overcome inherent bias—a legacy of a conservative academic tradition that, research shows, is weighted towards men.

    A key mechanism to achieve this was introduced in 2014 by the Astronomical Society of Australia. It devised a voluntary rating and assessment system known as the Pleiades Awards, which rewards institutions for taking concrete actions to advance the careers of women and close the gender gap.

    Initiatives include longer-term postdoctoral positions with part-time options, support for returning to astronomy research after career breaks, increasing the fraction of permanent positions relative to fixed-term contracts, offering women-only permanent positions, recruitment of women directly to professorial levels, and mentoring of women for promotion to the highest levels.

    Most if not all Australian organizations that employ astronomers have signed up to the Pleiades Awards, and are showing genuine commitment to change.

    So why is progress still so slow?

    Seven years on, we would expect to have seen an increase in women recruited to, and retained in, senior positions.

    And we are, but the effect is far from uniform. My own organization, the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), is on track for a 50:50 ratio of women to men working at senior levels by the end of this year.

    The University of Sydney School of Physics – Faculty of Science (AU) has made nine senior appointments over the past three years, seven of them women.

    But these examples are outliers. At many institutions, inequitable hiring ratios and high departure rates persist despite a large pool of women astronomers at postdoc levels and the positive encouragement of the Pleiades Awards.

    Using these results and my new workforce models, I have shown that the current targets of 33% or 50% women at all levels are unattainable if the status quo remains.

    How to move forward

    I propose a raft of affirmative measures to increase the presence of women at all senior levels in Australian astronomy—and keep them there.

    These include creating multiple women-only roles, creating prestigious senior positions for women, and hiring into multiple positions for men and women to avoid perceptions of tokenism. Improved workplace flexibility is crucial to allowing female researchers to develop their careers while balancing other responsibilities.

    Australia is far from unique when it comes to dealing with gender disparities in astronomy. Broadly similar situations persist in China, the United States and Europe. An April 2019 paper [Nature Astronomy] outlined similar discrimination experienced by women astronomers in Europe.

    Australia, however, is well placed to play a leading role in correcting the imbalance. With the right action, it wouldn’t take long to make our approach to gender equity as world-leading as our research.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Science X in 100 words
    Science X™ is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004 (Physorg.com), Science X’s readership has grown steadily to include 5 million scientists, researchers, and engineers every month. Science X publishes approximately 200 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Science X community members enjoy access to many personalized features such as social networking, a personal home page set-up, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.
    Phys.org’s readership includes 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

     
  • richardmitnick 1:05 pm on April 14, 2021 Permalink | Reply
    Tags: "NOVEL THEORY ADDRESSES CENTURIES-OLD PHYSICS PROBLEM NOVEL THEORY ADDRESSES CENTURIES-OLD PHYSICS PROBLEM", , , , Mathematics, Mutual gravitational attraction, , The Hebrew University of Jerusalem [הַאוּנִיבֶרְסִיטָה הַעִבְרִית בְּיְרוּשָׁלַיִם ]   

    From The Hebrew University of Jerusalem [הַאוּנִיבֶרְסִיטָה הַעִבְרִית בְּיְרוּשָׁלַיִם] (IL) : “NOVEL THEORY ADDRESSES CENTURIES-OLD PHYSICS PROBLEM”


    From The Hebrew University of Jerusalem [הַאוּנִיבֶרְסִיטָה הַעִבְרִית בְּיְרוּשָׁלַיִם‎] (IL)

    12 April 2021

    Hebrew University of Jerusalem Researcher introduces a new approach to the “three-body problem”; predicts its outcome statistics.


    The “three-body problem,” the term coined for predicting the motion of three gravitating bodies in space, is essential for understanding a variety of astrophysical processes as well as a large class of mechanical problems, and has occupied some of the world’s best physicists, astronomers and mathematicians for over three centuries. Their attempts have led to the discovery of several important fields of science; yet its solution remained a mystery.

    At the end of the 17th century, Sir Isaac Newton succeeded in explaining the motion of the planets around the sun through a law of universal gravitation. He also sought to explain the motion of the moon. Since both the earth and the sun determine the motion of the moon, Newton became interested in the problem of predicting the motion of three bodies moving in space under the influence of their mutual gravitational attraction (see attached illustration), a problem that later became known as “the three-body problem”. However, unlike the two-body problem, Newton was unable to obtain a general mathematical solution for it. Indeed, the three-body problem proved easy to define, yet difficult to solve.

    New research, led by Prof. Barak Kol of the Racah Institute of Physics at the Hebrew University, adds a step to this scientific journey that began with Newton, touching on the limits of scientific prediction, and the role of chaos in it.

    The theoretical study presents a novel and exact reduction of the problem, enabled by a re-examination of the basic concepts that underlie previous theories. It allows for a precise prediction of the probability for each of the three bodies to escape the system.

    Following Newton and two centuries of fruitful research in the field, including by Euler, Lagrange and Jacobi, by the late 19th century the mathematician Poincaré discovered that the problem exhibits extreme sensitivity to the bodies’ initial positions and velocities. This sensitivity, which later became known as chaos, has far-reaching implications – it indicates that there is no closed-form deterministic solution to the three-body problem.

    In the 20th century, the development of computers made it possible to re-examine the problem with the help of computerized simulations of the bodies’ motion. The simulations showed that under some general assumptions, a three-body system experiences periods of chaotic, or random, motion alternating with periods of regular motion, until finally the system disintegrates into a pair of bodies orbiting their common center of mass and a third one moving away, or escaping, from them.

    The chaotic nature implies that not only is a closed-form solution impossible, but also computer simulations cannot provide specific and reliable long-term predictions. However, the availability of large sets of simulations led in 1976 to the idea of seeking a statistical prediction of the system, and in particular, predicting the escape probability of each of the three bodies. In this sense, the original goal, to find a deterministic solution, was found to be wrong, and it was recognized that the right goal is to find a statistical solution.

    Determining the statistical solution has proven to be no easy task due to three features of this problem: the system presents chaotic motion that alternates with regular motion; it is unbounded; and it is susceptible to disintegration. A year ago, Dr. Nicholas Stone of the Racah Institute of Physics at the Hebrew University and his colleagues used a new method of calculation and, for the first time, achieved a closed mathematical expression for the statistical solution. However, this method, like all its predecessor statistical approaches, rests on certain assumptions. Inspired by these results, Kol initiated a re-examination of these assumptions.

    In order to understand the novelty of the new approach, it is necessary to discuss the notion of “phase space” that underlies all statistical theories in physics. A phase space is nothing but the space of all positions and velocities of the particles that compose a system. For instance, the phase space of a single particle allowed to move on a meter-long track with a velocity of at most two meters per second is a rectangle whose width is 1 meter and whose length is four meters per second (since the velocity can be directed either to the left or to the right).

    Normally, physicists identify the probability of an event of interest with its associated phase space volume (phase volume, in short). For instance, the probability for the particle to be found in the left half of the track is associated with the volume of the left half of the phase space rectangle, which is one half of the total volume.
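    In code, that worked example amounts to nothing more than a ratio of rectangle areas (the numbers are the ones quoted above):

```python
# Phase-space volume of the toy example: a 1 m track, speeds up to 2 m/s.
track_length = 1.0                    # metres
velocity_span = 4.0                   # m/s, from -2 to +2
total_phase_volume = track_length * velocity_span

left_half_volume = (track_length / 2) * velocity_span
print(left_half_volume / total_phase_volume)   # 0.5, one half of the total volume
```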

    The three-body problem is unbounded and the gravitational force is unlimited in range. This suggests infinite phase space volumes, which would imply infinite probabilities. In order to overcome this and related issues, all previous approaches postulated a “strong interaction region” and ignored phase volumes outside of it, such that phase volumes become finite. However, since the gravitational force decreases with distance, but never disappears, this is not an accurate theory, and it introduces a certain arbitrariness into the model.

    The new study, recently published in the scientific journal Celestial Mechanics and Dynamical Astronomy, focuses on the outgoing flux of phase volume, rather than the phase volume itself. For instance, consider a volume of gas within a container, a marked gas molecule moving within it, and a container wall with two small holes. In this case, the probability that the molecule eventually exits through a given hole is proportional to the flux of the surrounding gas through that hole.

    Since the flux is finite even when the volume is infinite, this flux-based approach avoids the artificial problem of infinite probabilities, without ever introducing the artificial strong interaction region.

    In order to treat the mix between chaotic and regular motion, the flux-based theory further introduces an unknown quantity, the emissivity. In this way, the statistical prediction factorizes exactly into a closed-form expression multiplied by the emissivity, which is presumably simpler and is left for future study.

    The flux-based theory predicts the escape probabilities of each body, under the assumption that the emissivity can be averaged out and ignored. The predictions are different from all previous frameworks, and Prof. Kol emphasizes that “tests by millions of computer simulations show strong agreement between theory and simulation.” The simulations were carried out in collaboration with Viraj Manwadkar from the University of Chicago, Alessandro Trani from the Okinawa Institute in Japan, and Nathan Leigh from the University of Concepción in Chile. This agreement proves that understanding the system requires a paradigm shift and that the new conceptual basis describes the system well.

    It turns out, then, that even for the foundations of such an old problem, innovation is possible.

    The implications of this study are wide-ranging, and it is expected to influence both the solution of a variety of astrophysical problems and the understanding of an entire class of problems in mechanics. In astrophysics, it may have application to the mechanism that creates pairs of compact bodies that are the source of gravitational waves, as well as to deepening the understanding of the dynamics within star clusters. In mechanics, the three-body problem is a prototype for a variety of chaotic problems, so progress in it is likely to reflect on additional problems in this important class.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


    The Hebrew University of Jerusalem (IL), founded in 1918 and opened officially in 1925, is Israel’s premier university as well as its leading research institution. The Hebrew University is ranked internationally among the 100 leading universities in the world and first among Israeli universities.
    The recognition the Hebrew University has attained confirms its reputation for excellence and its leading role in the scientific community. It stresses excellence and offers a wide array of study opportunities in the humanities, social sciences, exact sciences and medicine. The university encourages multi-disciplinary activities in Israel and overseas and serves as a bridge between academic research and its social and industrial applications.

    The Hebrew University has set as its goals the training of public, scientific, educational and professional leadership; the preservation of and research into Jewish, cultural, spiritual and intellectual traditions; and the expansion of the boundaries of knowledge for the benefit of all humanity.

     
  • richardmitnick 4:25 pm on January 27, 2021 Permalink | Reply
    Tags: "How heavy is Dark Matter? Scientists radically narrow the potential mass range for the first time", , , Dark Matter cannot be either ‘ultra-light’ or ‘super-heavy’., Gravity acts on Dark Matter just as it acts on the visible universe., If it turns out that the mass of Dark Matter is outside of the range predicted by the Sussex team then it will also prove that an additional force acts on Dark Matter., Mathematics, , , U Sussex (UK)   

    From U Sussex (UK): “How heavy is Dark Matter? Scientists radically narrow the potential mass range for the first time” 

    From U Sussex (UK)

    27 January 2021
    Anna Ford

    Credit: Greg Rakozy on Unsplash.

    Scientists have calculated the mass range for Dark Matter – and it’s tighter than the science world thought.

    Their findings – due to be published in Physics Letters B in March – radically narrow the range of potential masses for Dark Matter particles, and help to focus the search for future Dark Matter hunters. The University of Sussex researchers used the established fact that gravity acts on Dark Matter just as it acts on the visible universe to work out the lower and upper limits of Dark Matter’s mass.

    The results show that Dark Matter cannot be either ‘ultra-light’ or ‘super-heavy’, as some have theorised, unless an as-yet undiscovered force also acts upon it.

    The team used the assumption that the only force acting on Dark Matter is gravity, and calculated that Dark Matter particles must have a mass between 10⁻³ eV and 10⁷ eV. That’s a much tighter range than the 10⁻²⁴ eV – 10¹⁹ GeV spectrum which is generally theorised.

    What makes the discovery even more significant is that if it turns out that the mass of Dark Matter is outside of the range predicted by the Sussex team, then it will also prove that an additional force – as well as gravity – acts on Dark Matter.

    Professor Xavier Calmet from the School of Mathematical and Physical Sciences at the University of Sussex, said:

    “This is the first time that anyone has thought to use what we know about quantum gravity as a way to calculate the mass range for Dark Matter. We were surprised when we realised no-one had done it before – as were the fellow scientists reviewing our paper.

    “What we’ve done shows that Dark Matter cannot be either ‘ultra-light’ or ‘super-heavy’ as some theorise – unless there is an as-yet unknown additional force acting on it. This piece of research helps physicists in two ways: it focuses the search area for Dark Matter, and it will potentially also help reveal whether or not there is a mysterious unknown additional force in the universe.”

    Folkert Kuipers, a PhD student working with Professor Calmet, at the University of Sussex, said:

    “As a PhD student, it’s great to be able to work on research as exciting and impactful as this. Our findings are very good news for experimentalists as it will help them to get closer to discovering the true nature of Dark Matter.”

    The visible universe – such as ourselves, the planets and stars – accounts for 25 per cent of all mass in the universe. The remaining 75 per cent is comprised of Dark Matter.

    It is known that gravity acts on Dark Matter because that’s what accounts for the shape of galaxies.

    Dark Matter Background
    Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky from http://palomarskies.blogspot.com.


    Coma cluster via NASA/ESA Hubble.


    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate that it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.
    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outer regions of galaxies rotate just as fast as the inner regions, whereas, if only the visible matter were present, the outskirts should rotate more slowly, much as the outer planets of the Solar System orbit the Sun more slowly than the inner ones. Instead, a galaxy keeps a consistent rotation speed from center to edge, a bit like the label and the rim of a vinyl LP completing each turn together. The only way to explain this is if the visible galaxy sits inside some much larger structure of unseen mass, as if it were only the label on the LP, so to speak.
    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.

    Astronomer Vera Rubin, who worked on Dark Matter, at the Lowell Observatory in 1965 (The Carnegie Institution for Science).


    Vera Rubin, who worked on Dark Matter, measuring spectra (Emilio Segrè Visual Archives, AIP, SPL).


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Sussex (UK) is a leading research-intensive university near Brighton. We have both an international and local outlook, with staff and students from more than 100 countries and frequent engagement in community activities and services.

     
  • richardmitnick 2:55 pm on January 26, 2021 Permalink | Reply
    Tags: "Connected Moments" for Quantum Computing, , , , Mathematics,   

    From DOE’s Pacific Northwest National Laboratory: “Connected Moments for Quantum Computing” 

    From DOE’s Pacific Northwest National Laboratory

    January 12, 2021 [Just now in social media.]

    Karyn Hede
    karyn.hede@pnnl.gov

    Math shortcut shaves time and cost of quantum calculations while maintaining accuracy.


    Quantum computers are exciting in part because they are being designed to show how the world is held together. This invisible “glue” is made of impossibly tiny particles and energy. And like all glue, it’s kind of messy.

    Once the formula for the glue is known, it can be used to hold molecules together in useful structures. And these new kinds of materials and chemicals may one day fuel our vehicles and warm our homes.

    But before all that, we need math. That’s where theoretical chemists Bo Peng and Karol Kowalski have excelled. The Pacific Northwest National Laboratory duo are teaching today’s computers to do the math that will reveal the universe’s subatomic glue, once full-scale quantum computing becomes feasible.

    The connected moments mathematical method is helping researchers understand the universal energy glue that binds molecules together. Credit: Nathan Johnson/Pacific Northwest National Laboratory.

    The team recently showed that they could use a mathematical tool called “connected moments” to greatly reduce the time and calculation costs of conducting one kind of quantum calculation. Using what’s called a quantum simulator, the team showed that they could accurately model simple molecules. This feat, which mathematically describes the energy glue holding together molecules, garnered “editor’s pick” in the Journal of Chemical Physics, signifying its scientific importance.

    “We showed that we can use this approach to reduce the complexity of quantum calculations needed to model a chemical system, while also reducing errors,” said Peng. “We see this as a compromise that will allow us to get from what we can do right now with a quantum computer to what will be possible in the near future.”

    Connected moments

    The research team applied a mathematical concept that was first described 40 years ago. They were attracted to the connected moments method because of its ability to accurately reconstruct the total energy of a molecular system using much less time and many fewer cycles of calculations. This is important because today’s quantum computers are prone to error. The more quantum circuits needed for a calculation, the more opportunity for error to creep in. By using fewer of these fragile quantum circuits, they reduced the error rate of the whole calculation, while maintaining an accurate result.
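    For orientation, here is a minimal classical sketch of a lowest-order connected-moments energy estimate (the classic CMX(2) formula from the older literature; it is not claimed to be the exact functional, Hamiltonian or quantum measurement procedure used in the PNNL paper, and the 2×2 matrix and trial state are made-up inputs):

```python
# Classical toy version of a connected-moments energy estimate (CMX(2)).
import numpy as np
from math import comb

H = np.array([[-1.0, 0.2],
              [ 0.2, 0.5]])           # made-up 2x2 "Hamiltonian"
phi = np.array([1.0, 0.0])            # made-up trial state

# Raw moments M_n = <phi| H^n |phi>; on quantum hardware these are measured.
M = [phi @ np.linalg.matrix_power(H, n) @ phi for n in range(4)]

# Connected moments I_n via the standard recursion (I_1 = M_1, and so on).
I = [None, M[1]]
for n in range(1, 3):
    I.append(M[n + 1] - sum(comb(n, k) * I[k + 1] * M[n - k] for k in range(n)))

E_cmx2 = I[1] - I[2] ** 2 / I[3]      # lowest-order connected-moments estimate
print("CMX(2) estimate:", E_cmx2)
print("exact ground state:", np.linalg.eigvalsh(H)[0])
```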

    “The design of this algorithm allows us to do the equivalent of a full-scale quantum calculation with modest resources,” said Kowalski.

    Time-saving method applies to chemistry and materials science.

    In the study, the team established the reliability of the connected moments method for accurately describing the energy in both a simple molecule of hydrogen and a simple metal impurity. Using relatively simple models allowed the team to compare its method with existing full-scale computing models known to be correct and accurate.

    “This study demonstrated that the connected moments method can advance the accuracy and affordability of electronic structure methods,” said Kowalski. “We are already working on extending the work to larger systems, and integrating it with emerging quantum computing frameworks.”

    By studying both a chemical system and a material system the researchers showed the versatility of the approach for describing the total energy in both systems. The preparation of this so-called “initial state” is a steppingstone to studying more complex interactions between molecules—how the energy shifts around to keep molecules glued together.

    Bridge to quantum computing

    The published study [The Journal of Chemical Physics] used IBM’s QISKIT quantum computing software, but work is already under way to extend its use with other quantum computing platforms. Specifically, the research team is working to extend the work to support XACC, an infrastructure developed at Oak Ridge National Laboratory. The XACC software will allow the scientists to take advantage of the fastest, most accurate world-class computers as a quantum–classical computing hybrid.

    This discovery will now be incorporated into research to be performed in the Quantum Science Center, a U.S. Department of Energy Office of Science (DOE-SC)-supported initiative.

    “This work was conducted with a very small system of four qubits, but we hope to extend to a 12-qubit system in the near term, with an ultimate goal of a 50-qubit system within three to five years,” said Peng.

    At that point, the messy glue of the universe may be easier to apply.

    The research was supported by the DOE-SC Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    DOE’s Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.

     
  • richardmitnick 5:22 pm on January 21, 2021 Permalink | Reply
    Tags: "Pioneering new technique could revolutionize super-resolution imaging systems", A new technique called Repeat DNA-Paint which is capable of supressing background noise and nonspecific signals., , , , DNA-PAINT (Point Accumulation for Imaging in Nanoscale Topography) is noisy and has nonspecific signals., Mathematics, , Molecules in a cell are labelled with marker molecules that are attached to single DNA strands., , Optics and imaging, ,   

    From University of Exeter (UK): “Pioneering new technique could revolutionize super-resolution imaging systems” 

    From University of Exeter (UK)

    21 January 2021

    Credit: Pixabay/CC0 Public Domain

    Scientists have developed a pioneering new technique that could revolutionise the accuracy, precision and clarity of super-resolution imaging systems.

    A team of scientists, led by Dr Christian Soeller from the University of Exeter’s Living Systems Institute, which champions interdisciplinary research and is a hub for new high-resolution measurement techniques, has developed a new way to improve the very fine, molecular imaging of biological samples.

    The new method builds upon the success of an existing super-resolution imaging technique called DNA-PAINT (Point Accumulation for Imaging in Nanoscale Topography) – where molecules in a cell are labelled with marker molecules that are attached to single DNA strands.

    Matching DNA strands are then also labelled with a fluorescent chemical compound and introduced in solution – when they bind the marker molecules, it creates a ‘blinking effect’ that makes imaging possible.

    However, DNA-PAINT has a number of drawbacks in its current form, which limit the applicability and performance of the technology when imaging biological cells and tissues.

    In response, the research team have developed a new technique, called Repeat DNA-PAINT, which is capable of suppressing background noise and nonspecific signals, as well as decreasing the time taken for the sampling process.

    Crucially, using Repeat DNA-PAINT is straightforward and does not carry any known drawbacks; it is routinely applicable, consolidating the role of DNA-PAINT as one of the most robust and versatile molecular resolution imaging methods.

    The study is published in Nature Communications on 21st January 2021.

    Dr Soeller, lead author of the study and who is a biophysicist at the Living Systems Institute said: “We can now see molecular detail with light microscopy in a way that a few years ago seemed out of reach. This allows us to directly see how molecules orchestrate the intricate biological functions that enable life in both health and disease”.

    The research was enabled by colleagues from physics, biology, medicine, mathematics and chemistry working together across traditional discipline boundaries. Dr Lorenzo Di Michele, co-author from Imperial College London said: “This work is a clear example of how quantitative biophysical techniques and concepts can really improve our ability to study biological systems”.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Exeter (UK) is a public research university in Exeter, Devon, South West England, United Kingdom. It was founded and received its royal charter in 1955, although its predecessor institutions, St Luke’s College, Exeter School of Science, Exeter School of Art, and the Camborne School of Mines were established in 1838, 1855, 1863, and 1888 respectively. In post-nominals, the University of Exeter is abbreviated as Exon. (from the Latin Exoniensis), and is the suffix given to honorary and academic degrees from the university.

    The university has four campuses: Streatham and St Luke’s (both of which are in Exeter); and Truro and Penryn (both of which are in Cornwall). The university is primarily located in the city of Exeter, Devon, where it is the principal higher education institution. Streatham is the largest campus containing many of the university’s administrative buildings. The Penryn campus is maintained in conjunction with Falmouth University under the Combined Universities in Cornwall (CUC) initiative. The Exeter Streatham Campus Library holds more than 1.2 million physical library resources, including historical journals and special collections. The annual income of the institution for 2017–18 was £415.5 million of which £76.1 million was from research grants and contracts, with an expenditure of £414.2 million.

    Exeter is a member of the Russell Group of research-intensive UK universities and is also a member of Universities UK, the European University Association, and the Association of Commonwealth Universities and an accredited institution of the Association of MBAs (AMBA).

     
  • richardmitnick 4:43 pm on January 19, 2021 Permalink | Reply
    Tags: Biomarkers left behind by tiny single-cell organisms called archaea in the distant past., Mathematics, TEX86

    From ARC Centres of Excellence for Gravitational Wave Discovery OzGrav (AU) via phys.org: “Using 100-million-year-old fossils and gravitational-wave science to predict Earth’s future climate” 


    From ARC Centres of Excellence for Gravitational Wave Discovery OzGrav (AU)

    via

    phys.org

    January 19, 2021

    A group of international scientists, including an Australian astrophysicist, has used findings from gravitational wave astronomy (used to find black holes in space) to study ancient marine fossils as a predictor of climate change.

    The research, published in the journal Climate of the Past, is a unique collaboration between palaeontologists, astrophysicists and mathematicians seeking to improve the accuracy of a palaeo-thermometer, which can use fossil evidence of climate change to predict what is likely to happen to the Earth in coming decades.

    Professor Ilya Mandel, from the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), and colleagues studied biomarkers left behind by tiny single-cell organisms called archaea in the distant past, including the Cretaceous period and the Eocene.

    Marine archaea in our modern oceans produce compounds called Glycerol Dialkyl Glycerol Tetraethers (GDGTs). The ratios of different types of GDGTs they produce depend on the local sea temperature at the site of formation.

    When preserved in ancient marine sediments, the measured abundances of GDGTs have the potential to provide a geological record of long-term planetary surface temperatures.

    To date, scientists have combined GDGT concentrations into a single parameter called TEX86, which can be used to make rough estimates of the surface temperature. However, this estimate is not very accurate when the values of TEX86 from recent sediments are compared to modern sea surface temperatures.

    Image of archaea. Credit: Steve Gschmeissner/Science Photo Library

    “After several decades of study, the best available models are only able to measure temperature from GDGT concentrations with an accuracy of around 6 degrees Celsius,” Professor Mandel said. Therefore, this approach cannot be relied on for high-precision measurements of ancient climates.

    Professor Mandel and his colleagues at the University of Birmingham in the UK have applied modern machine-learning tools—originally used in the context of gravitational-wave astrophysics to create predictive models of merging black holes and neutron stars—to improve temperature estimation based on GDGT measurements. This enabled them to take all observations into account for the first time rather than relying on one particular combination, TEX86. This produced a far more accurate palaeo-thermometer. Using these tools, the team extracted temperature from GDGT concentrations with an accuracy of just 3.6 degrees—a significant improvement, nearly twice the accuracy of previous models.
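    As a loose illustration of the idea (not the calibration, data or code behind the Climate of the Past paper), one can regress temperature on the full set of GDGT relative abundances rather than on the single TEX86 ratio; a Gaussian process is one natural choice of flexible regression model. The "measurements" below are synthetic stand-ins invented for this sketch.

```python
# Sketch: calibrate a Gaussian-process palaeothermometer on synthetic GDGT data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 200
gdgt = rng.dirichlet(alpha=[2, 2, 2, 2, 2], size=n)       # fake relative abundances
sst = 30 * gdgt[:, 1] + 15 * gdgt[:, 3] + rng.normal(0, 1.0, n)   # fake temperatures

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(gdgt[:100], sst[:100])                              # calibrate on half the data
pred, sigma = gp.predict(gdgt[100:], return_std=True)      # predict with uncertainty

rms = np.sqrt(np.mean((pred - sst[100:]) ** 2))
print("RMS error on the synthetic test set:", round(float(rms), 2))
```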

    According to Professor Mandel, determining how much the Earth will warm in coming decades relies on modelling, “so it is critically important to calibrate those models by utilizing literally hundreds of millions of years of climate history to predict what might happen to the Earth in the future,” he said.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    OzGrav (AU)


    ARC Centres of Excellence for Gravitational Wave Discovery OzGrav (AU)
    A new window of discovery.
    A new age of gravitational wave astronomy.
    One hundred years ago, Albert Einstein produced one of the greatest intellectual achievements in physics, the theory of general relativity. In general relativity, spacetime is dynamic. It can be warped into a black hole. Accelerating masses create ripples in spacetime known as gravitational waves (GWs) that carry energy away from the source. Recent advances in detector sensitivity led to the first direct detection of gravitational waves in 2015. This was a landmark achievement in human discovery and heralded the birth of the new field of gravitational wave astronomy. It was followed in 2017 by the first observations of the collision of two neutron stars. The accompanying explosion was subsequently seen in follow-up observations by telescopes across the globe, and ushered in a new era of multi-messenger astronomy.

    The mission of the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav) is to capitalise on the historic first detections of gravitational waves to understand the extreme physics of black holes and warped spacetime, and to inspire the next generation of Australian scientists and engineers through this new window on the Universe.

    OzGrav is funded by the Australian Government through the Australian Research Council Centres of Excellence funding scheme, and is a partnership between Swinburne University (host of OzGrav headquarters), the Australian National University, Monash University, University of Adelaide, University of Melbourne, and University of Western Australia, along with other collaborating organisations in Australia and overseas.

    ________________________________________________________

    The objectives for the ARC Centres of Excellence are to:

    undertake highly innovative and potentially transformational research that aims to achieve international standing in the fields of research envisaged and leads to a significant advancement of capabilities and knowledge

    link existing Australian research strengths and build critical mass with new capacity for interdisciplinary, collaborative approaches to address the most challenging and significant research problems

    develop relationships and build new networks with major national and international centres and research programs to help strengthen research, achieve global competitiveness and gain recognition for Australian research

    build Australia’s human capacity in a range of research areas by attracting and retaining, from within Australia and abroad, researchers of high international standing as well as the most promising research students

    provide high-quality postgraduate and postdoctoral training environments for the next generation of researchers

    offer Australian researchers opportunities to work on large-scale problems over long periods of time

    establish Centres that have an impact on the wider community through interaction with higher education institutes, governments, industry and the private and non-profit sectors.

     
  • richardmitnick 11:30 am on January 8, 2021 Permalink | Reply
    Tags: "How I Learned to Love and Fear the Riemann Hypothesis", , Mathematics,   

    From Quanta Magazine: “How I Learned to Love and Fear the Riemann Hypothesis” 

    From Quanta Magazine

    January 4, 2021
    Alex Kontorovich


    The Riemann Hypothesis, Explained

    I first heard of the Riemann hypothesis — arguably the most important and notorious unsolved problem in all of mathematics — from the late, great Eli Stein, a world-renowned mathematician at Princeton University. I was very fortunate that Professor Stein decided to reimagine the undergraduate analysis sequence during my sophomore year of college, in the spring of 2000. He did this by writing a rich, broad set of now-famous books on the subject (co-authored with then-graduate student Rami Shakarchi).

    In mathematics, analysis deals with the ideas of calculus in a rigorous, axiomatic way. Stein wrote four books in the series. The first was on Fourier analysis (the art and science of decomposing arbitrary signals into combinations of simple harmonic waves). The next was on complex analysis (which treats functions that have complex numbers as both inputs and outputs), followed by real analysis (which develops, among other things, a rigorous way to measure sizes of sets) and finally functional analysis (which deals broadly with functions of functions). These are core subjects containing foundational knowledge for any working mathematician.

    In Stein’s class, my fellow students and I were the guinea pigs on whom the material for his books was to be rehearsed. We had front-row seats as Eli (as I later came to call him) showcased his beloved subject’s far-reaching consequences: Look at how amazing analysis is, he would say. You can even use it to resolve problems in the distant world of number theory! Indeed, his book on Fourier analysis builds up to a proof of Dirichlet’s theorem on primes in arithmetic progressions, which says, for example, that infinitely many primes leave a remainder of 6 when divided by 35 (since 6 and 35 have no prime factors in common). And his course on complex analysis included a proof of the prime number theorem, which gives an asymptotic estimate for the number of primes below a growing bound. Moreover, I learned that if the Riemann hypothesis is true, we’ll get a much stronger prime number theorem than the one known today. To see why that is and for a closer look under the hood of this famous math problem, please watch the accompanying video at the top of this page.
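
    Both statements are easy to poke at numerically. The short computation below is my own illustration (using sympy’s prime generator, an assumption of convenience): it counts primes below one million that leave a remainder of 6 when divided by 35, and compares the prime-counting function with the x/ln x estimate that the prime number theorem makes rigorous.

```python
# Small numerical check of the two theorems mentioned above: Dirichlet
# guarantees infinitely many primes ≡ 6 (mod 35), and the prime number
# theorem says pi(x) is asymptotic to x / ln(x).
import math
from sympy import primerange

N = 1_000_000
primes = list(primerange(2, N))
print("pi(N)             =", len(primes))
print("N / ln(N)         ≈", round(N / math.log(N)))
print("primes ≡ 6 mod 35 :", sum(1 for p in primes if p % 35 == 6))
```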

    Despite Eli’s proselytizing to us on the wide-ranging power of analysis, I learned the opposite lesson: Look at how amazing number theory is — you can even use fields as far away as analysis to prove the things you want! Stein’s class helped set me on the path to becoming a number theorist. But as I came to understand more about the Riemann hypothesis over the years, I learned not to make it a focus of my research. It’s just too hard to make progress.

    After Princeton, I went off to graduate school at Columbia University. It was an exciting time to be working in number theory. In 2003, Dan Goldston and Cem Yıldırım announced a spectacular new result about gaps in the primes, only to withdraw the claim soon after. (As Goldston wrote years later on accepting the prestigious Cole Prize for these ideas: “While mathematicians often do not have much humility, we all have lots of experience with humiliation.”) Nevertheless, the ideas became an important ingredient in the Green-Tao theorem, which shows that the set of prime numbers contains arithmetic progressions of any given length. Then, working with János Pintz, Goldston and Yıldırım salvaged enough of their method to prove, in their breakthrough GPY theorem in 2005, that primes will infinitely often have gaps which are arbitrarily small when compared to the average gap. Moreover, if you could improve their result by any amount at all, you would prove that primes infinitely often differ by some bounded constant. And this would be a huge leap toward solving the notoriously difficult twin primes conjecture, which predicts that there are infinitely many pairs of primes that differ by 2.
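
    For a concrete sense of what “small compared to the average gap” means, the toy computation below (my own empirical look, not the GPY argument) normalizes each prime gap by ln p, which is roughly the average gap near p; GPY says this ratio dips arbitrarily close to zero infinitely often.

```python
# Toy empirical look (not the GPY argument): normalize each prime gap by
# ln(p), roughly the average gap near p, and see how small the ratio gets.
import math
from sympy import primerange

primes = list(primerange(2, 1_000_000))
normalized = [(q - p) / math.log(p) for p, q in zip(primes, primes[1:])]
print("smallest normalized gap below 10^6:", round(min(normalized), 3))
print("mean normalized gap:               ", round(sum(normalized) / len(normalized), 3))
```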

    A meeting on how to extend the GPY method was immediately organized at the American Institute of Mathematics in San Jose, California. As a bright-eyed and bushy-tailed grad student, I felt extraordinarily lucky to be there among the world’s top experts. By the end of the week, the experts agreed that it was basically impossible to improve the GPY method to get bounded prime gaps. Fortunately, Yitang Zhang did not attend this meeting. Almost a decade later, after years of incredibly hard work in relative isolation, he found a way around the impasse and proved the experts wrong. I guess the moral of my story is that when people organize meetings on how not to solve the Riemann hypothesis (as they do from time to time), don’t go!

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:41 am on January 3, 2021 Permalink | Reply
    Tags: , , , Mathematics, , , , ,   

    From Quanta Magazine: “Landmark Computer Science Proof Cascades Through Physics and Math” 

    From Quanta Magazine

    March 4, 2020 [From Year End Wrap-Up.]
    Kevin Hartnett

    Computer scientists established a new boundary on computationally verifiable knowledge. In doing so, they solved major open problems in quantum mechanics and pure mathematics.

    1
    A new proof in computer science also has implications for researchers in quantum mechanics and pure math. DVDP for Quanta Magazine.

    In 1935, Albert Einstein, working with Boris Podolsky and Nathan Rosen, grappled with a possibility revealed by the new laws of quantum physics: that two particles could be entangled, or correlated, even across vast distances.

    The very next year, Alan Turing formulated the first general theory of computing and proved that there exists a problem that computers will never be able to solve.

    These two ideas revolutionized their respective disciplines. They also seemed to have nothing to do with each other. But now a landmark proof has combined them while solving a raft of open problems in computer science, physics and mathematics.

    The new proof establishes that quantum computers that calculate with entangled quantum bits or qubits, rather than classical 1s and 0s, can theoretically be used to verify answers to an incredibly vast set of problems. The correspondence between entanglement and computing came as a jolt to many researchers.

    “It was a complete surprise,” said Miguel Navascués, who studies quantum physics at the Institute for Quantum Optics and Quantum Information in Vienna.

    The proof’s co-authors set out to determine the limits of an approach to verifying answers to computational problems. That approach involves entanglement. By finding that limit the researchers ended up settling two other questions almost as a byproduct: Tsirelson’s problem in physics, about how to mathematically model entanglement, and a related problem in pure mathematics called the Connes embedding conjecture.

    In the end, the results cascaded like dominoes.

    “The ideas all came from the same time. It’s neat that they come back together again in this dramatic way,” said Henry Yuen of the University of Toronto and an author of the proof, along with Zhengfeng Ji of the University of Technology Sydney, Anand Natarajan and Thomas Vidick of the California Institute of Technology, and John Wright of the University of Texas, Austin. The five researchers are all computer scientists.

    Undecidable Problems

    Turing defined a basic framework for thinking about computation before computers really existed. In nearly the same breath, he showed that there was a certain problem computers were provably incapable of addressing. It has to do with whether a program ever stops.

    Typically, computer programs receive inputs and produce outputs. But sometimes they get stuck in infinite loops and spin their wheels forever. When that happens at home, there’s only one thing left to do.

    “You have to manually kill the program. Just cut it off,” Yuen said.

    Turing proved that there’s no all-purpose algorithm that can determine whether a computer program will halt or run forever. You have to run the program to find out.
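
    Turing’s argument is short enough to sketch in code. Assume, for contradiction, that a perfect halts() routine existed; the program below then behaves inconsistently when fed to itself, so no such routine can exist. This is the standard diagonalization sketch, with the oracle left unimplemented because it cannot be implemented.

```python
# The standard diagonalization sketch. Assume, for contradiction, that a
# perfect halting oracle `halts(program, data)` existed.
def halts(program, data):
    """Hypothetical: returns True iff program(data) eventually halts."""
    raise NotImplementedError("no such function can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:          # predicted to halt -> loop forever
            pass
    return "halted"          # predicted to loop -> halt immediately

# paradox(paradox) halts exactly when the oracle says it doesn't, a
# contradiction either way, so `halts` cannot be implemented.
```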

    2
    The computer scientists Henry Yuen, Thomas Vidick, Zhengfeng Ji, Anand Natarajan and John Wright co-authored a proof about verifying answers to computational problems and ended up solving major problems in math and quantum physics.
    Credits:(Yuen) Andrea Lao; (Vidick) Courtesy of Caltech; (Ji) Anna Zhu; (Natarajan) David Sella; (Wright) Soya Park.

    “You’ve waited a million years and a program hasn’t halted. Do you just need to wait 2 million years? There’s no way of telling,” said William Slofstra, a mathematician at the University of Waterloo.

    In technical terms, Turing proved that this halting problem is undecidable — even the most powerful computer imaginable couldn’t solve it.

    After Turing, computer scientists began to classify other problems by their difficulty. Harder problems require more computational resources to solve — more running time, more memory. This is the study of computational complexity.

    Ultimately, every problem presents two big questions: “How hard is it to solve?” and “How hard is it to verify that an answer is correct?”

    Interrogate to Verify

    When problems are relatively simple, you can check the answer yourself. But when they get more complicated, even checking an answer can be an overwhelming task. However, in 1985 computer scientists realized it’s possible to develop confidence that an answer is correct even when you can’t confirm it yourself.

    The method follows the logic of a police interrogation.

    If a suspect tells an elaborate story, maybe you can’t go out into the world to confirm every detail. But by asking the right questions, you can catch your suspect in a lie or develop confidence that the story checks out.

    In computer science terms, the two parties in an interrogation are a powerful computer that proposes a solution to a problem — known as the prover — and a less powerful computer that wants to ask the prover questions to determine whether the answer is correct. This second computer is called the verifier.

    To take a simple example, imagine you’re colorblind and someone else — the prover — claims two marbles are different colors. You can’t check this claim by yourself, but through clever interrogation you can still determine whether it’s true.

    Put the two marbles behind your back and mix them up. Then ask the prover to tell you which is which. If they really are different colors, the prover should answer the question correctly every time. If the marbles are actually the same color — meaning they look identical — the prover will guess wrong half the time.

    “If I see you succeed a lot more than half the time, I’m pretty sure they’re not” the same color, Vidick said.

    By asking a prover questions, you can verify solutions to a wider class of problems than you can on your own.
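
    A quick simulation makes the marble protocol concrete. A prover facing two genuinely identical marbles can only guess, so the chance of surviving n rounds by luck is (1/2)^n; the simulation below just checks that arithmetic.

```python
# Simulation of the marble interrogation: a blind prover must guess whether
# the verifier swapped the marbles, so surviving n rounds has probability 2^-n.
import random

def lucky_survival(n_rounds):
    for _ in range(n_rounds):
        shuffle = random.choice(("swapped", "not swapped"))   # verifier's secret
        guess = random.choice(("swapped", "not swapped"))     # blind prover guesses
        if guess != shuffle:
            return False
    return True

trials = 100_000
for n in (1, 5, 10):
    rate = sum(lucky_survival(n) for _ in range(trials)) / trials
    print(f"{n:2d} rounds: lying prover survives {rate:.4f} (expected {0.5**n:.4f})")
```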

    In 1988, computer scientists considered what happens when two provers propose solutions to the same problem. After all, if you have two suspects to interrogate, it’s even easier to solve a crime, or verify a solution, since you can play them against each other.

    “It gives more leverage to the verifier. You interrogate, ask related questions, cross-check the answers,” Vidick said. If the suspects are telling the truth, their responses should align most of the time. If they’re lying, the answers will conflict more often.

    Similarly, researchers showed that by interrogating two provers separately about their answers, you can quickly verify solutions to an even larger class of problems than you can when you only have one prover to interrogate.

    Computational complexity may seem entirely theoretical, but it’s also closely connected to the real world. The resources that computers need to solve and verify problems — time and memory — are fundamentally physical. For this reason, new discoveries in physics can change computational complexity.

    “If you choose a different set of physics, like quantum rather than classical, you get a different complexity theory out of it,” Natarajan said.

    The new proof is the end result of 21st-century computer scientists confronting one of the strangest ideas of 20th-century physics: entanglement.

    The Connes Embedding Conjecture

    When two particles are entangled, they don’t actually affect each other — they have no causal relationship. Einstein and his co-authors elaborated on this idea in their 1935 paper. Afterward, physicists and mathematicians tried to come up with a mathematical way of describing what entanglement really meant.

    Yet the effort came out a little muddled. Scientists came up with two different mathematical models for entanglement — and it wasn’t clear that they were equivalent to each other.

    In a roundabout way, this potential dissonance ended up producing an important problem in pure mathematics called the Connes embedding conjecture. Eventually, it also served as a fissure that the five computer scientists took advantage of in their new proof.

    The first way of modeling entanglement was to think of the particles as spatially isolated from each other. One is on Earth, say, and the other is on Mars; the distance between them is what prevents causality. This is called the tensor product model.

    But in some situations, it’s not entirely obvious when two things are causally separate from each other. So mathematicians came up with a second, more general way of describing causal independence.

    When the order in which you perform two operations doesn’t affect the outcome, the operations “commute”: 3 x 2 is the same as 2 x 3. In this second model, particles are entangled when their properties are correlated but the order in which you perform your measurements doesn’t matter: Measure particle A to predict the momentum of particle B or vice versa. Either way, you get the same answer. This is called the commuting operator model of entanglement.
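
    A tiny numerical example shows the distinction being drawn here. Numbers always commute, but the matrices used to model quantum measurements generally do not, unless they act on different particles via a tensor (Kronecker) product; the Pauli matrices below are standard, and their use here is just to exhibit a nonzero commutator.

```python
# Numbers commute (3 * 2 == 2 * 3); measurement operators, modeled as
# matrices, generally do not, unless they act on different particles.
import numpy as np

X = np.array([[0, 1], [1, 0]])    # Pauli X
Z = np.array([[1, 0], [0, -1]])   # Pauli Z

print("X and Z commute?", np.array_equal(X @ Z, Z @ X))   # False

A = np.kron(X, np.eye(2))         # acts only on particle 1
B = np.kron(np.eye(2), Z)         # acts only on particle 2
print("A and B commute?", np.array_equal(A @ B, B @ A))   # True
```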

    Both descriptions of entanglement use arrays of numbers organized into rows and columns called matrices. The tensor product model uses matrices with a finite number of rows and columns. The commuting operator model uses a more general object that functions like a matrix with an infinite number of rows and columns.

    Over time, mathematicians began to study these matrices as objects of interest in their own right, completely apart from any connection to the physical world. As part of this work, a mathematician named Alain Connes conjectured in 1976 that it should be possible to approximate many infinite-dimensional matrices with finite-dimensional ones. This is one implication of the Connes embedding conjecture.

    The following decade a physicist named Boris Tsirelson posed a version of the problem that grounded it in physics once more. Tsirelson conjectured that the tensor product and commuting operator models of entanglement were roughly equivalent. This makes sense, since they’re theoretically two different ways of describing the same physical phenomenon. Subsequent work showed that because of the connection between matrices and the physical models that use them, the Connes embedding conjecture and Tsirelson’s problem imply each other: Solve one, and you solve the other.

    Yet the solution to both problems ended up coming from a third place altogether.

    Game Show Physics

    In the 1960s, a physicist named John Bell came up with a test for determining whether entanglement was a real physical phenomenon, rather than just a theoretical notion. The test involved a kind of game whose outcome reveals whether something more than ordinary, non-quantum physics is at work.

    Computer scientists would later realize that this test about entanglement could also be used as a tool for verifying answers to very complicated problems.

    But first, to see how the games work, let’s imagine two players, Alice and Bob, and a 3-by-3 grid. A referee assigns Alice a row and tells her to enter a 0 or a 1 in each box so that the digits sum to an odd number. Bob gets a column and has to fill it out so that it sums to an even number. They win if they put the same number in the one square where her row and his column overlap. They’re not allowed to communicate.

    Under normal circumstances, the best they can do is win eight rounds out of nine, or about 89% of the time. But under quantum circumstances, they can do better.

    Imagine Alice and Bob split a pair of entangled particles. They perform measurements on their respective particles and use the results to dictate whether to write 1 or 0 in each box. Because the particles are entangled, the results of their measurements are going to be correlated, which means their answers will correlate as well — meaning they can win the game 100% of the time.
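
    The classical 8/9 limit can be checked by brute force. In the sketch below (my own encoding of the game), a deterministic strategy for Alice assigns an odd-parity triple to each row and one for Bob assigns an even-parity triple to each column; scoring every pair of strategies over the nine equally likely questions shows the best classical players win at most eight of them, and shared randomness cannot beat the best deterministic pair.

```python
# Brute-force check (my own encoding of the game above) that classical players
# win at most 8 of the 9 possible (row, column) questions.
from itertools import product

odd = [t for t in product((0, 1), repeat=3) if sum(t) % 2 == 1]
even = [t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0]

best = 0
for alice in product(odd, repeat=3):        # alice[r][c]: Alice's entry in column c when asked row r
    for bob in product(even, repeat=3):     # bob[c][r]: Bob's entry in row r when asked column c
        wins = sum(alice[r][c] == bob[c][r] for r in range(3) for c in range(3))
        best = max(best, wins)

print(f"best classical strategy wins {best}/9 questions")   # prints 8/9
```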

    3
    Credit: Lucy Reading-Ikkanda/Quanta Magazine.

    So if you see two players winning the game at unexpectedly high rates, you can conclude that they are using something other than classical physics to their advantage. Such Bell-type experiments are now called “nonlocal” games, in reference to the separation between the players. Physicists actually perform them in laboratories.

    “People have run experiments over the years that really show this spooky thing is real,” said Yuen.

    As when analyzing any game, you might want to know how often players can win a nonlocal game, provided they play the best they can. For example, with solitaire, you can calculate how often someone playing perfectly is likely to win.

    But in 2016, William Slofstra proved that there’s no general algorithm for calculating the exact maximum winning probability for all nonlocal games. So researchers wondered: Could you at least approximate the maximum-winning percentage?

    Computer scientists have homed in on an answer using the two models describing entanglement. An algorithm that uses the tensor product model establishes a floor, or minimum value, on the approximate maximum-winning probability for all nonlocal games. Another algorithm, which uses the commuting operator model, establishes a ceiling.

    These algorithms produce more precise answers the longer they run. If Tsirelson’s prediction is true, and the two models really are equivalent, the floor and the ceiling should keep pinching closer together, narrowing in on a single value for the approximate maximum-winning percentage.

    But if Tsirelson’s prediction is false, and the two models are not equivalent, “the ceiling and the floor will forever stay separated,” Yuen said. There will be no way to calculate even an approximate winning percentage for nonlocal games.
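
    The logic of the floor-and-ceiling argument can be written as a schematic decision procedure. The two bound routines passed in below are hypothetical placeholders (one stands for the tensor-product search that raises the floor, the other for the commuting-operator relaxation that lowers the ceiling); if Tsirelson’s prediction held, the loop would always terminate. This is a paraphrase of the idea, not code from the paper.

```python
# Schematic of the floor/ceiling argument (not code from the paper). The two
# routines are hypothetical placeholders: lower_bound(game, level) raises the
# floor, upper_bound(game, level) lowers the ceiling.
def game_value_exceeds(game, threshold, lower_bound, upper_bound):
    level = 1
    while True:
        if lower_bound(game, level) > threshold:
            return True            # entangled players can beat the threshold
        if upper_bound(game, level) < threshold:
            return False           # even the most general model cannot reach it
        level += 1                 # refine both bounds and try again
```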

    In their new work, the five researchers used this question — about whether the ceiling and floor converge and Tsirelson’s problem is true or false — to solve a separate question about when it’s possible to verify the answer to a computational problem.

    Entangled Assistance

    In the early 2000s, computer scientists began to wonder: How does it change the range of problems you can verify if you interrogate two provers that share entangled particles?

    Most assumed that entanglement worked against verification. After all, two suspects would have an easier time telling a consistent lie if they had some means of coordinating their answers.

    But over the last few years, computer scientists have realized that the opposite is true: By interrogating provers that share entangled particles, you can verify a much larger class of problems than you can without entanglement.

    “Entanglement is a way to generate correlations that you think might help them lie or cheat,” Vidick said. “But in fact you can use that to your advantage.”

    To understand how, you first need to grasp the almost otherworldly scale of the problems whose solutions you could verify through this interactive procedure.

    Imagine a graph — a collection of dots (vertices) connected by lines (edges). You might want to know whether it’s possible to color the vertices using three colors, so that no vertices connected by an edge have the same color. If you can, the graph is “three-colorable.”

    If you hand a pair of entangled provers a very large graph, and they report back that it can be three-colored, you’ll wonder: Is there a way to verify their answer?

    For very big graphs, it would be impossible to check the work directly. So instead, you could ask each prover to tell you the color of one of two connected vertices. If they each report a different color, and they keep doing so every time you ask, you’ll gain confidence that the three-coloring really works.
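
    A toy version of that interrogation is easy to simulate, assuming honest provers who share a fixed 3-coloring (the entanglement trick that lets provers pick their own questions is beyond this sketch): the verifier samples an edge, asks each prover for the color of one endpoint, and accepts only if the colors differ.

```python
# Toy edge-checking interrogation with honest provers sharing a 3-coloring.
import random

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]          # a small 3-colorable graph
coloring = {0: "red", 1: "green", 2: "blue", 3: "green"}

def one_round():
    u, v = random.choice(edges)          # verifier picks a random edge
    return coloring[u] != coloring[v]    # accept only if the reported colors differ

rounds = 10_000
print("accepted:", sum(one_round() for _ in range(rounds)), "of", rounds)
```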

    But even this interrogation strategy fails as graphs get really big — with more edges and vertices than there are atoms in the universe. Even the task of stating a specific question (“Tell me the color of XYZ vertex”) is more than you, the verifier, can manage: The amount of data required to name a specific vertex is more than you can hold in your working memory.

    But entanglement makes it possible for the provers to come up with the questions themselves.

    “The verifier doesn’t have to compute the questions. The verifier forces the provers to compute the questions for them,” Wright said.

    The verifier wants the provers to report the colors of connected vertices. If the vertices aren’t connected, then the answers to the questions won’t say anything about whether the graph is three-colored. In other words, the verifier wants the provers to ask correlated questions: One prover asks about vertex ABC and the other asks about vertex XYZ. The hope is that the two vertices are connected to each other, even though neither prover knows which vertex the other is thinking about. (Just as Alice and Bob hope to fill in the same number in the same square even though neither knows which row or column the other has been asked about.)

    If two provers were coming up with these questions completely on their own, there’d be no way to force them to select connected, or correlated, vertices in a way that would allow the verifier to validate their answers. But such correlation is exactly what entanglement enables.

    “We’re going to use entanglement to offload almost everything onto the provers. We make them select questions by themselves,” Vidick said.

    At the end of this procedure, the provers each report a color. The verifier checks whether they’re the same or not. If the graph really is three-colorable, the provers should never report the same color.

    “If there is a three-coloring, the provers will be able to convince you there is one,” Yuen said.

    As it turns out, this verification procedure is another example of a nonlocal game. The provers “win” if they convince you their solution is correct.

    In 2012, Vidick and Tsuyoshi Ito proved that it’s possible to play a wide variety of nonlocal games with entangled provers to verify answers to at least the same number of problems you can verify by interrogating two classical computers. That is, using entangled provers doesn’t work against verification. And last year, Natarajan and Wright proved that interacting with entangled provers actually expands the class of problems that can be verified.

    But computer scientists didn’t know the full range of problems that can be verified in this way. Until now.

    A Cascade of Consequences

    In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.

    “The verification capability of this type of model is really mind-boggling,” Yuen said.

    But the halting problem can’t be solved. And that fact is the spark that sets the final proof in motion.

    Imagine you hand a program to a pair of entangled provers. You ask them to tell you whether it will halt. You’re prepared to verify their answer through a kind of nonlocal game: The provers generate questions and “win” based on the coordination between their answers.

    If the program does in fact halt, the provers should be able to win this game 100% of the time — similar to how if a graph is actually three-colorable, entangled provers should never report the same color for two connected vertices. If it doesn’t halt, the provers should only win by chance — 50% of the time.

    That means if someone asks you to determine the approximate maximum-winning probability for a specific instance of this nonlocal game, you will first need to solve the halting problem. And solving the halting problem is impossible. Which means that calculating the approximate maximum-winning probability for nonlocal games is undecidable, just like the halting problem.
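
    Written schematically, the reduction looks like the sketch below (my paraphrase, not the paper’s construction). Both helpers are hypothetical placeholders: encode_as_game builds the nonlocal game whose value is 1 if the program halts and at most 1/2 if it does not, and approx_value is the assumed approximation routine whose existence leads to the contradiction.

```python
# Schematic of the reduction just described; both helpers are hypothetical.
def decide_halting(program, encode_as_game, approx_value):
    game = encode_as_game(program)
    # value is 1 if the program halts, at most 1/2 otherwise, so an
    # approximation within 0.2 separates the two cases.
    return approx_value(game, tolerance=0.2) > 0.75
```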

    This in turn means that the answer to Tsirelson’s problem is no — the two models of entanglement are not equivalent. Because if they were, you could pinch the floor and the ceiling together to calculate an approximate maximum-winning probability.

    “There cannot be such an algorithm, so the two [models] must be different,” said David Pérez-García of the Complutense University of Madrid.

    The new paper proves that the class of problems that can be verified through interactions with entangled quantum provers, a class called MIP*, is exactly equal to the class of problems that are no harder than the halting problem, a class called RE. The title of the paper states it succinctly: “MIP* = RE.”

    In the course of proving that the two complexity classes are equal, the computer scientists proved that Tsirelson’s problem is false, which, due to previous work, meant that the Connes embedding conjecture is also false.

    For researchers in these fields, it was stunning that answers to such big problems would fall out from a seemingly unrelated proof in computer science.

    “If I see a paper that says MIP* = RE, I don’t think it has anything to do with my work,” said Navascués, who co-authored previous work tying Tsirelson’s problem and the Connes embedding conjecture together. “For me it was a complete surprise.”

    Quantum physicists and mathematicians are just beginning to digest the proof. Prior to the new work, mathematicians had wondered whether they could get away with approximating infinite-dimensional matrices by using large finite-dimensional ones instead. Now, because the Connes embedding conjecture is false, they know they can’t.

    “Their result implies that’s impossible,” said Slofstra.

    The computer scientists themselves did not aim to answer the Connes embedding conjecture, and as a result, they’re not in the best position to explain the implications of one of the problems they ended up solving.

    “Personally, I’m not a mathematician. I don’t understand the original formulation of the Connes embedding conjecture well,” said Natarajan.

    He and his co-authors anticipate that mathematicians will translate this new result into the language of their own field. In a blog post announcing the proof, Vidick wrote, “I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences.”

    Yet as other researchers run with the proof, the line of inquiry that prompted it is coming to a halt. For more than three decades, computer scientists have been trying to figure out just how far interactive verification will take them. They are now confronted with the answer, in the form of a long paper with a simple title and echoes of Turing.

    “There’s this long sequence of works just wondering how powerful” a verification procedure with two entangled quantum provers can be, Natarajan said. “Now we know how powerful it is. That story is at an end.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 9:14 am on November 13, 2020 Permalink | Reply
    Tags: "UMass Dartmouth professors to use fastest supercomputer in the nation for research", , Mathematics, , , , ,   

    From UMass Dartmouth: “UMass Dartmouth professors to use fastest supercomputer in the nation for research” 

    From UMass Dartmouth

    November 12, 2020
    Ryan Merrill
    508-910-6884
    rmerrill1@umassd.edu

    Professor Sigal Gottlieb and Professor Gaurav Khanna awarded opportunity to Oak Ridge National Lab’s Summit supercomputer.

    ORNL IBM AC922 SUMMIT supercomputer, was No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy.


    Oak Ridge National Lab’s Summit supercomputer is the fastest in America and Professor Sigal Gottlieb (Mathematics) and Professor Gaurav Khanna (Physics) are getting a chance to test its power.

    The system, built by IBM, can perform 200 quadrillion calculations in one second. Funded by the U.S. Department of Energy, the Summit supercomputer consists of 9,216 POWER9 processors and 27,648 Nvidia Tesla graphics processing units, and consumes 13 MW of power.

    Gottlieb and Khanna, alongside their colleague Zachary Grant of Oak Ridge National Lab, were awarded 880,000 core-hours of supercomputing time on Summit. They received the maximum Directors’ Discretionary allocation, equivalent to $132,200 of funding according to the Department of Energy. Their research project, titled “Mixed-Precision WENO Method for Hyperbolic PDE Solutions,” involves implementing and evaluating different computational methods for black hole simulations.
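
    The article does not describe their scheme, so the snippet below only illustrates the ingredients named in the project title: a standard fifth-order WENO reconstruction evaluated in double and single precision, showing the kind of trade-off a mixed-precision method juggles. The stencil coefficients and smoothness indicators are the usual WENO5 (Jiang–Shu) ones; the test data and the precision comparison are my own illustration, not the authors’ code.

```python
# Illustration only (not the authors' code): a fifth-order WENO reconstruction
# of an interface value, evaluated in float64 and float32.
import numpy as np

def weno5_left(stencil, dtype):
    """Reconstruct f at the right interface of the middle cell from 5 values."""
    fm2, fm1, f0, fp1, fp2 = np.asarray(stencil, dtype=dtype)
    eps = dtype(1e-6)
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
    alphas = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = alphas / alphas.sum()
    # candidate stencil reconstructions
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6
    p2 = (2*f0 + 5*fp1 - fp2) / 6
    return w[0]*p0 + w[1]*p1 + w[2]*p2

values = np.sin(2 * np.pi * np.linspace(0.0, 0.4, 5))
print("float64:", weno5_left(values, np.float64))
print("float32:", weno5_left(values, np.float32))
```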

    Their proposal for supercomputing time was successful in part due to excellent preliminary results generated using UMass Dartmouth’s own C.A.R.N.i.E supercomputer and MIT’s Satori supercomputer, which Khanna had access to through UMass Dartmouth’s membership in the Massachusetts Green High Performance Computing Consortium (MGHPCC). The Satori supercomputer is similar in design to Summit, but almost two orders of magnitude smaller.

    Gottlieb and Khanna are the Co-Directors for UMass Dartmouth’s Center for Scientific Computing & Visualization Research and Grant was a former student of Gottlieb’s in the Engineering & Applied Sciences Ph.D. program.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Mission Statement

    UMass Dartmouth distinguishes itself as a vibrant, public research university dedicated to engaged learning and innovative research resulting in personal and lifelong student success. The University serves as an intellectual catalyst for economic, social, and cultural transformation on a global, national, and regional scale.
    Vision Statement

    UMass Dartmouth will be a globally recognized premier research university committed to inclusion, access, advancement of knowledge, student success, and community engagement.

    The University of Massachusetts Dartmouth (UMass Dartmouth or UMassD) is one of five campuses and operating subdivisions of the University of Massachusetts. It is located in North Dartmouth, Massachusetts, United States, in the center of the South Coast region, between the cities of New Bedford to the east and Fall River to the west. Formerly Southeastern Massachusetts University, it was merged into the University of Massachusetts system in 1991.

    The campus has an overall student body of 8,647 students (school year 2016–2017), including 6,999 undergraduates and 1,648 graduate/law students. As of the 2017 academic year, UMass Dartmouth has 399 full-time faculty on staff. For the fourth consecutive year, UMass Dartmouth received a top-20 national ranking from the President’s Higher Education Community Service Honor Roll for its civic engagement.

    The university also includes the University of Massachusetts School of Law, created after the trustees of the state’s university system voted in 2004 to purchase the nearby Southern New England School of Law (SNESL), a private institution that was accredited regionally but not by the American Bar Association (ABA).
    UMass School of Law at Dartmouth opened its doors in September 2010, accepting all current SNESL students with a C or better average as transfer students, and achieved (provisional) ABA accreditation in June 2012. The law school achieved full accreditation in December 2016.

    In 2011, UMass Dartmouth became the first university in the world to have a sustainability report that met the top level of the world’s most comprehensive, credible, and widely used standard (the GRI’s G3.1 standard). In 2013, UMass Dartmouth became the first university in the world whose annual sustainability report achieved an A+ application level according to the Global Reporting Initiative G3.1 standard (by having the sources of data used in its annual sustainability report verified by an independent third party).

     