Tagged: Mathematics

  • richardmitnick 10:58 am on November 9, 2020
    Tags: Mathematics

    From Harvard Gazette: “Digging into the history of the cosmos” Cora Dvorkin 

    Harvard University

    From Harvard Gazette

    A sense of discovery and adventure has come to define much of Cora Dvorkin’s work as an associate professor in the Department of Physics. Credit: Stephanie Mitchell/Harvard Staff Photographer.

    But lab members say the award-winning cosmologist is equally invested in their futures.

    Cora Dvorkin’s fascination with math and the cosmos started with her father, a family friend, and famed theoretical physicist Stephen Hawking.

    Drawn to math at an early age, Dvorkin remembers long discussions with her father and his friend about abstract mathematical concepts like the origins of infinity and zero, and she was 10 years old when first handed Hawking’s A Brief History of Time. It didn’t take long for a young Dvorkin, growing up in Buenos Aires, to become enthralled with the kinds of connections Hawking was making.

    “I realized that I could access the kind of questions that I was interested in with the tool of mathematics,” Dvorkin said. “I had fun when my mind went out [in search of big answers] and then it came back, and I realized I was physically at this place, but I was flying somewhere else.”

    That sense of discovery and adventure has come to define much of her work as an associate professor in the FAS’ Department of Physics. There, the theoretical cosmologist uses advanced algorithms and machine learning to analyze data from satellites and telescopes all over the world to study the origins and composition of the early universe. Her lab’s main goal is to understand the nature of one of the universe’s most important and puzzling features: dark matter.

    “We use our computers to simulate the universe and to do our calculations,” said Dvorkin, who came to Harvard in 2014 as a fellow for the Institute for Theory and Computation at the Center for Astrophysics | Harvard & Smithsonian. “The data that we use are either from the cosmic microwave background [CMB], which is the afterglow from the Big Bang, or data from what is known as the large-scale structure of the universe, such as galaxy surveys or gravitational lensing, which is the light coming towards us [from distant galaxies] that’s deflected [and distorted] because of massive structures along the way.”

    Gravitational Lensing

    Gravitational Lensing NASA/ESA.

    CMB per ESA/Planck

    Laniakea supercluster. From Nature: “The Laniakea supercluster of galaxies,” R. Brent Tully, Hélène Courtois, Yehuda Hoffman & Daniel Pomarède, http://www.nature.com/nature/journal/v513/n7516/full/nature13674.html. The Milky Way is the red dot.

    Much of this large-scale structure is shaped by what’s known as dark matter. Scientists believe dark matter is the glue holding galaxies together and the organizing force giving the universe its overall structure. It comprises around 80 percent of all mass.

    Dark Matter Background
    Fritz Zwicky discovered dark matter in the 1930s while observing the motion of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did much of the later work establishing dark matter.

    Fritz Zwicky from http://palomarskies.blogspot.com.

    Coma cluster via NASA/ESA Hubble.

    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate that the cluster has a mass 500 times greater than Edwin Hubble had previously calculated. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that stars in the outskirts of galaxies orbit just as fast as those near the center, whereas, if the visible matter were all there is, the outskirts should orbit more slowly, the way the outer planets of the solar system move more slowly than the inner ones. But we do not see that. The only way to explain it is if the visible galaxy is just the central part of some much larger structure, as if it were only the label on a vinyl LP, so to speak, with the mass of the full disk keeping the orbital speed consistent from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
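    The discrepancy Rubin measured can be sketched numerically. The snippet below is a toy illustration, with a made-up luminous mass rather than any real galaxy’s data: it computes the Keplerian orbital speed that visible matter alone would predict, which falls off with radius, while Rubin’s measured rotation curves stay roughly flat.

```python
import math

# Toy comparison with a Keplerian rotation curve (visible mass only).
# The enclosed mass below is a hypothetical round number, not a fit.
G = 4.30091e-6   # gravitational constant in kpc * (km/s)^2 / Msun
M_VISIBLE = 1e11  # assumed luminous mass, in solar masses

def keplerian_speed(r_kpc):
    """Circular orbital speed if all the mass sat inside radius r."""
    return math.sqrt(G * M_VISIBLE / r_kpc)

for r in (2, 5, 10, 20, 40):
    print(f"r = {r:2d} kpc: Keplerian v = {keplerian_speed(r):6.1f} km/s")

# The predicted speed falls off as 1/sqrt(r). Observed curves instead
# stay flat out to large radii, implying unseen mass in an extended halo.
```

    Doubling the radius should cut the speed by a factor of sqrt(2); the fact that it doesn’t, in real galaxies, is the rotation-curve evidence for dark matter.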

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).

    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL).

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu.

    The Vera C. Rubin Observatory, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    LSST Data Journey, Illustration by Sandbox Studio, Chicago with Ana Kova.

    Catching a glimpse of it is exceedingly difficult, however. Dark matter doesn’t emit, reflect, or absorb light, making it essentially invisible to current instruments. Researchers instead infer things about dark matter through what its powerful gravity allows it to do: bend and focus the light around it, a phenomenon called gravitational lensing.

    In recent years, Dvorkin’s lab has been a leader in finding new approaches to learning about dark matter. One study published last year [Physical Review D], for instance, involved using a novel machine-learning method to detect what are known as subhalos, small clumps of dark matter that live within the larger halos of dark matter holding galaxies together. The halos basically create pockets where certain stars are confined. While they can’t be seen, these subhalos can be traced by analyzing the light distortion from the lensing effect. The problem is that the analysis is often expensive and can take weeks.

    “Most of the time you get no detections, so what I have been working on with a graduate student and now with a postdoc is if we can automate a procedure like direct detection, for example, using convolutional neural networks, making this process of detecting subhalos much faster,” Dvorkin said.

    The lab showed that its machine-learning strategy can reduce the analysis from the few weeks required by traditional methods to a few seconds.
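    The core operation behind a convolutional neural network can be seen in a few lines. The sketch below is not the Dvorkin lab’s pipeline (which trains CNNs on simulated lensing images); it is a minimal illustration of why convolution suits this task: a small filter slides across an image and responds strongly wherever a compact clump sits, here a hypothetical “subhalo” blob injected into noise.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, size=(32, 32))  # toy noisy lensing residual
image[10:13, 20:23] += 1.0                   # injected "subhalo" blob

kernel = np.ones((3, 3)) / 9.0               # simple 3x3 blob-matching filter

def convolve2d(img, k):
    """Valid-mode 2D cross-correlation, the core op of a conv layer."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = convolve2d(image, kernel)
peak = np.unravel_index(np.argmax(response), response.shape)
print("strongest response at", peak)  # lands on the injected blob
```

    A trained network stacks many such filters, learned from simulations, which is what makes the scan over candidate subhalo positions fast enough to run in seconds.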

    Other dark matter research involves looking at the early universe, which has included using cosmic microwave background observations to study the structure of dark matter and pioneering a method for investigating the shape of inflation known as “Generalized Slow Roll.” Along with colleagues at Harvard, MIT, and other universities, Dvorkin helped launch a new National Science Foundation institute for artificial intelligence, where she’ll apply some of her methods for detecting dark matter.

    Her current and past work has turned heads. Dvorkin received the Department of Energy Early Career award in 2019. She snagged the Scientist of the Year award, given by the student interns and faculty at The Harvard Foundation for Intercultural and Race Relations, in 2018 for her contributions to physics, cosmology and STEM education. Dvorkin held a Radcliffe Institute Fellowship from 2018 to 2019. And in 2012, she was given the Martin and Beate Block Award, an international prize given out annually to a promising young physicist by the Aspen Center for Physics.

    Professor of astronomy and physics Douglas Finkbeiner considers himself among Dvorkin’s fans — not only because her stellar work has led to a good-looking trophy case but also because of how she champions her collaborators, especially future scientists.

    “Cora is not just a builder of theories, but a builder of people,” he said. “It has been a joy to watch her students [and research associates] grow and mature into top-notch scientists.”

    The Dvorkin Group comprises 11 members, including seven graduate students and one undergrad.

    “We’ve got a really big group in comparison to any other research groups that I have been a part of,” said Bryan Ostdiek, one of the lab’s three postdoctoral fellows. “This makes everything very lively” and collaborative on projects, he said. It was especially evident before the pandemic, but still happens now through Zoom and Slack messaging.

    And that’s just the way Dvorkin likes it.

    “I still remember the time when I was a graduate student,” Dvorkin said. “I benefited a lot from discussions with my adviser, but I also benefited from discussions with other group members. I have tried to give postdocs the opportunity to work with students because at some point they will be applying for faculty jobs.”

    When it comes to projects, lab members say Dvorkin is as hands-on as they need her to be, but that she also gives them the freedom to evaluate data or come up with their own ideas for research.

    Ana Diaz Rivero, A.M. ’18, a physics Ph.D. candidate at the Graduate School of Arts and Sciences, says she’s been able to get early experience authoring scientific papers through her work at the lab, including in leading journals like The Astrophysical Journal and Physical Review D. She’s also been invited to give a number of talks, including an upcoming one at the Max Planck Institute for Astrophysics.

    Rivero says she’s been working with Dvorkin since the start of her graduate experience at Harvard in 2016.

    “I got accepted into Harvard, and on the day of my acceptance she sent me an email saying congrats on getting into Harvard, and we set up a time to talk,” Rivero said. The pair had met at Columbia University at a talk Dvorkin was giving. “When I came to visit at Open House, I spoke to her, and I really liked her, and I told her what ideas I had, and she was super supportive of me working on them in her group. So, on Day One of Harvard, I started out on a research project with her, and we’ve written a lot of papers together since.”

    Outreach like that is important to Dvorkin, especially to increase inclusion and diversity in the field. It’s why in the past she’s given talks at the Harvard Foundation’s annual Albert Einstein Science Conference: Advancing Minorities and Women in Science, Technology, Engineering and Mathematics and why, more recently, she’s been in contact with the National Society of Black Physicists.

    “I’m very concerned about these topics, and I’m trying my best to do whatever I can to fight this problem,” Dvorkin said.

    Reasons like this are why the group’s youngest lab member says Dvorkin serves not only as an excellent mentor but as a role model for female scientists like herself.

    “In general, there aren’t a lot of women in physics and, in particular, there aren’t a lot of women in theoretical physics, so I really, really appreciate having her as a mentor,” said Maya Burhanpurkar ’22, a Harvard undergrad studying physics and computer science. “It shows me what’s possible as a woman in the field.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard University is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

  • richardmitnick 3:07 pm on September 6, 2020
    Tags: "Mathematicians Report New Discovery About the Dodecahedron", Jayadev Athreya; David Aulicino; and Patrick Hooper have shown that an infinite number of such paths do in fact exist on the dodecahedron., Mathematicians have spent over 2000 years dissecting the structure of the five Platonic solids., Mathematics, The five Platonic solids-the tetrahedron; cube; octahedron; icosahedron; and dodecahedron., The researchers figured out how to classify all the straight paths from one corner back to itself that avoid other corners., The solution required modern techniques and computer algorithms., Translation surfaces

    From Quanta Magazine: “Mathematicians Report New Discovery About the Dodecahedron” 

    From Quanta Magazine

    August 31, 2020
    Erica Klarreich

    Three mathematicians have resolved a fundamental question about straight paths on the 12-sided Platonic solid.

    Samuel Velasco/Quanta Magazine.

    Even though mathematicians have spent over 2,000 years dissecting the structure of the five Platonic solids — the tetrahedron, cube, octahedron, icosahedron and dodecahedron — there’s still a lot we don’t know about them.

    Now, a trio of mathematicians has resolved one of the most basic questions about the dodecahedron.

    Suppose you stand at one of the corners of a Platonic solid. Is there some straight path you could take that would eventually return you to your starting point without passing through any of the other corners? For the four Platonic solids built out of squares or equilateral triangles — the cube, tetrahedron, octahedron and icosahedron — mathematicians recently figured out that the answer is no. Any straight path starting from a corner will either hit another corner or wind around forever without returning home. But with the dodecahedron, which is formed from 12 pentagons, mathematicians didn’t know what to expect.

    Now Jayadev Athreya, David Aulicino and Patrick Hooper have shown that an infinite number of such paths do in fact exist on the dodecahedron. Their paper, published in May in Experimental Mathematics, shows that these paths can be divided into 31 natural families.

    The solution required modern techniques and computer algorithms. “Twenty years ago, [this question] was absolutely out of reach; 10 years ago it would require an enormous effort of writing all necessary software, so only now all the factors came together,” wrote Anton Zorich, of the Institute of Mathematics of Jussieu in Paris, in an email.

    The project began in 2016 when Athreya, of the University of Washington, and Aulicino, of Brooklyn College, started playing with a collection of card stock cutouts that fold up into the Platonic solids. As they built the different solids, it occurred to Aulicino that a body of recent research on flat geometry might be just what they’d need to understand straight paths on the dodecahedron. “We were literally putting these things together,” Athreya said. “So it was kind of idle exploration meets an opportunity.”

    Together with Hooper, of the City College of New York, the researchers figured out how to classify all the straight paths from one corner back to itself that avoid other corners.

    Their analysis is “an elegant solution,” said Howard Masur of the University of Chicago. “It’s one of these things where I can say, without any hesitation, ‘Goodness, oh, I wish I had done that!’”

    Hidden Symmetries

    Although mathematicians have speculated about straight paths on the dodecahedron for more than a century, there’s been a resurgence of interest in the subject in recent years following gains in understanding “translation surfaces.” These are surfaces formed by gluing together parallel sides of a polygon, and they’ve proved useful for studying a wide range of topics involving straight paths on shapes with corners, from billiard table trajectories to the question of when a single light source can illuminate an entire mirrored room.

    In all these problems, the basic idea is to unroll your shape in a way that makes the paths you are studying simpler. So to understand straight paths on a Platonic solid, you could start by cutting open enough edges to make the solid lie flat, forming what mathematicians call a net. One net for the cube, for example, is a T shape made of six squares.

    A paper dodecahedron constructed in 2018 by David Aulicino and Jayadev Athreya to show that straight paths from a vertex back to itself while avoiding other vertices are in fact possible. Credit: Patrick Hooper.

    Imagine that we’ve flattened out the dodecahedron, and now we’re walking along this flat shape in some chosen direction. Eventually we’ll hit the edge of the net, at which point our path will hop to a different pentagon (whichever one was glued to our current pentagon before we cut open the dodecahedron). Whenever the path hops, it also rotates by some multiple of 36 degrees.

    To avoid all this hopping and rotating, when we hit an edge of the net we could instead glue on a new, rotated copy of the net and continue straight into it. We’ve added some redundancy: Now we have two different pentagons representing each pentagon on the original dodecahedron. So we’ve made our world more complicated — but our path has gotten simpler. We can keep adding a new net each time we need to expand beyond the edge of our world.

    By the time our path has traveled through 10 nets, we’ve rotated our original net through every possible multiple of 36 degrees, and the next net we add will have the same orientation as the one we started with. That means this 11th net is related to the original one by a simple shift — what mathematicians call a translation. Instead of gluing on an 11th net, we could simply glue the edge of the 10th net to the corresponding parallel edge in the original net. Our shape will no longer lie flat on the table, but mathematicians think of it as still “remembering” the flat geometry from its previous incarnation — so, for instance, paths are considered straight if they were straight in the unglued shape. After we do all such possible gluings of corresponding parallel edges, we end up with what is called a translation surface.
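    The counting step here is just modular arithmetic: each hop rotates the net by a multiple of 36 degrees, and since 10 × 36 = 360, only ten distinct orientations are possible before they repeat. A few lines of Python confirm it:

```python
# Each hop between pentagons rotates the net by some multiple of 36 degrees.
# Working modulo a full 360-degree turn, only ten distinct orientations
# exist, so an 11th copy of the net repeats the first one's orientation
# exactly: the two differ by a pure translation.
orientations = sorted({(k * 36) % 360 for k in range(1000)})
print(len(orientations))  # 10
print(orientations)       # [0, 36, 72, ..., 324]
```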

    The resulting surface is a highly redundant representation of the dodecahedron, with 10 copies of each pentagon. And it’s massively more complicated: It glues up into a shape like a doughnut with 81 holes. Nevertheless, this complicated shape allowed the three researchers to access the rich theory of translation surfaces.

    To tackle this giant surface, the mathematicians rolled up their sleeves — figuratively and literally. After working on the problem for a few months, they realized that the 81-holed doughnut surface forms a redundant representation not just of the dodecahedron but also of one of the most studied translation surfaces. Called the double pentagon, it is made by attaching two pentagons along a single edge and then gluing together parallel sides to create a two-holed doughnut with a rich collection of symmetries.

    This shape also happened to be tattooed on Athreya’s arm. “The double pentagon was something that I already knew and loved,” said Athreya, who got the tattoo a year before he and Aulicino started thinking about the dodecahedron.

    Athreya’s right arm bears a tattoo of his favorite translation surface — a double pentagon. Credit: Radhika Govindrajan.

    Because the double pentagon and the dodecahedron are geometric cousins, the former’s high degree of symmetry can elucidate the structure of the latter. It’s an “amazing hidden symmetry,” said Alex Eskin of the University of Chicago (who was Athreya’s doctoral adviser about 15 years ago). “The fact that the dodecahedron has this hidden symmetry group is, I think, quite remarkable.”

    Video: In this Numberphile episode, Jayadev Athreya explains how he and his colleagues solved a longstanding problem about straight paths on a dodecahedron.

    The relationship between these surfaces meant that the researchers could tap into an algorithm for analyzing highly symmetric translation surfaces developed by Myriam Finster of the Karlsruhe Institute of Technology in Germany. By adapting Finster’s algorithm, the researchers were able to identify all the straight paths on the dodecahedron from a corner to itself, and to classify these paths via the dodecahedron’s hidden symmetries.

    The analysis was “one of the most fun projects I’ve gotten to work on in my whole career,” Athreya said. “It’s important to keep playing with things.”

    The new result shows that even objects that have been studied for thousands of years can still hold secrets, Eskin said. “I think even for [the three mathematicians], it was very, very surprising to say something new about the dodecahedron.”

    See the full article here.



    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 7:43 am on August 24, 2020
    Tags: "Computer Search Settles 90-Year-Old Math Problem", As of this past fall the question remained unresolved only for seven-dimensional space., In 1940 Oskar Perron proved that the conjecture is true for spaces in dimensions one through six., Keller’s conjecture, Mathematics, Posed 90 years ago by Ott-Heinrich Keller is a problem about covering spaces with identical tiles., The answer comes packaged with a long proof explaining why it’s right., The authors of the new work (cited) solved the problem using 40 computers., The Mysterious Seventh Dimension

    From Quanta Magazine: “Computer Search Settles 90-Year-Old Math Problem” 

    From Quanta Magazine

    August 19, 2020
    Kevin Hartnett

    By translating Keller’s conjecture into a computer-friendly search for a type of graph, researchers have finally resolved a problem about covering spaces with tiles.

    Olena Shmahalo/Quanta Magazine.

    A team of mathematicians has finally finished off Keller’s conjecture, but not by working it out themselves. Instead, they taught a fleet of computers to do it for them.

    Keller’s conjecture, posed 90 years ago by Ott-Heinrich Keller, is a problem about covering spaces with identical tiles. It asserts that if you cover a two-dimensional space with two-dimensional square tiles, at least two of the tiles must share an edge. It makes the same prediction for spaces of every dimension — that in covering, say, 12-dimensional space using 12-dimensional “square” tiles, you will end up with at least two tiles that abut each other exactly.

    Over the years, mathematicians have chipped away at the conjecture, proving it true for some dimensions and false for others. As of this past fall the question remained unresolved only for seven-dimensional space.

    But a new computer-generated proof has finally resolved the problem. The proof, posted online last October [The Resolution of Keller’s Conjecture], is the latest example of how human ingenuity, combined with raw computing power, can answer some of the most vexing problems in mathematics.

    The authors of the new work — Joshua Brakensiek of Stanford University, Marijn Heule and John Mackey of Carnegie Mellon University, and David Narváez of the Rochester Institute of Technology — solved the problem using 40 computers. After a mere 30 minutes, the machines produced a one-word answer: Yes, the conjecture is true in seven dimensions. And we don’t have to take their conclusion on faith.

    The answer comes packaged with a long proof explaining why it’s right. The argument is too sprawling to be understood by human beings, but it can be verified by a separate computer program as correct.

    In other words, even if we don’t know what the computers did to solve Keller’s conjecture, we can assure ourselves they did it correctly.

    The Mysterious Seventh Dimension

    It’s easy to see that Keller’s conjecture is true in two-dimensional space. Take a piece of paper and try to cover it with equal-sized squares, with no gaps between the squares and no overlapping. You won’t get far before you realize that at least two of the squares need to share an edge. If you have blocks lying around, it’s similarly easy to see that the conjecture is true in three-dimensional space. In 1930, Keller conjectured that this relationship holds for corresponding spaces and tiles of any dimension.

    Early results supported Keller’s prediction. In 1940, Oskar Perron proved that the conjecture is true for spaces in dimensions one through six. But more than 50 years later, a new generation of mathematicians found the first counterexample to the conjecture: Jeffrey Lagarias and Peter Shor proved that the conjecture is false in dimension 10 in 1992.

    Samuel Velasco/Quanta Magazine; source: https://www.cs.cmu.edu/~mheule/Keller/

    A simple argument shows that once the conjecture is false in one dimension, it’s necessarily false in all higher dimensions. So after Lagarias and Shor, the only unsettled dimensions were seven, eight and nine. In 2002, Mackey proved Keller’s conjecture false in dimension eight (and therefore also in dimension nine).

    That left just dimension seven open — it was either the highest dimension where the conjecture holds or the lowest dimension where it fails.

    “Nobody knows exactly what’s going on there,” said Heule.

    Connect the Dots

    As mathematicians chipped away at the problem over the decades, their methods changed. Perron worked out the first six dimensions with pencil and paper, but by the 1990s, researchers had learned how to translate Keller’s conjecture into a completely different form — one that allowed them to apply computers to the problem.

    The original formulation of Keller’s conjecture is about smooth, continuous space. Within that space, there are infinitely many ways of placing infinitely many tiles. But computers aren’t good at solving problems involving infinite options — to work their magic they need some kind of discrete, finite object to think about.

    In 1990, Keresztély Corrádi and Sándor Szabó came up with just such a discrete object. They proved that you can ask questions about this object that are equivalent to Keller’s conjecture — so that if you prove something about these objects, you necessarily prove Keller’s conjecture as well. This effectively reduced a question about infinity to an easier problem about the arithmetic of a few numbers.

    Here’s how it works.

    Say you want to solve Keller’s conjecture in dimension two. Corrádi and Szabó came up with a method for doing this by building what they called a Keller graph.

    To start, imagine 16 dice on a table, each positioned so that the face with two dots is facing up. (The fact that it’s two dots reflects the fact that you’re addressing the conjecture for dimension two; we’ll see why it’s 16 dice in a moment.) Now color each dot using any of four colors: red, green, white or black.

    The positions of dots on a single die are not interchangeable: Think of one position as representing an x-coordinate and the other as representing a y-coordinate. Once the dice are colored, we’ll start drawing lines, or edges, between pairs of dice if two conditions hold: The dice have dots in one position that are different colors, and in the other position they have dots whose colors are not only different but paired, with red and green forming one pair and black and white the other.

    Samuel Velasco/Quanta Magazine; source: https://www.cs.cmu.edu/~mheule/Keller/

    So, for example, if one die has two red dots and the other has two black dots, they’re not connected: While they meet the criteria for one position (different colors), they don’t meet the criteria for the other (paired colors). However, if one die is colored red-black and the other is colored green-green they are connected, because they have paired colors in one position (red-green) and different colors in the other (black-green).

    There are 16 possible ways of using four colors to color two dots (that’s why we’re working with 16 dice). Array all 16 possibilities in front of you. Connect all pairs of dice that fit the rule. Now for the crucial question: Can you find four dice that are all connected to each other?

    Such a fully connected subset of dice is called a clique. If you can find one, you’ve proved Keller’s conjecture false in dimension two. But you can’t, because no such clique exists. The fact that there’s no clique of four dice means Keller’s conjecture is true in dimension two.
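    The construction is small enough to check by brute force. The sketch below is my own encoding of the dice picture, not the authors’ code: it builds all 16 two-dot colorings, applies the edge rule from the article, and searches every set of four dice for a clique.

```python
from itertools import combinations, product

# Dimension-2 Keller graph, following the article's dice picture.
# Red/green form one color pair, black/white the other.
COLORS = ["red", "green", "black", "white"]
PAIRS = {frozenset({"red", "green"}), frozenset({"black", "white"})}

dice = list(product(COLORS, repeat=2))  # all 16 two-dot colorings

def connected(d1, d2):
    """Edge rule: in one position the two colors form a pair,
    and in the other position they are (at least) different."""
    for i, j in ((0, 1), (1, 0)):
        if frozenset({d1[i], d2[i]}) in PAIRS and d1[j] != d2[j]:
            return True
    return False

# A clique of size 4 = 2^2 would disprove the conjecture in dimension two.
cliques = [
    quad for quad in combinations(dice, 4)
    if all(connected(a, b) for a, b in combinations(quad, 2))
]
print("4-cliques found:", len(cliques))  # 0 => conjecture holds for n = 2
```

    Running this finds no four-die clique, matching Perron’s proof that the conjecture holds in dimension two.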

    The dice are not literally the tiles at issue in Keller’s conjecture, but you can think of each die as representing a tile. Think of the colors assigned to the dots as coordinates which situate the dice in space. And think of the existence of an edge as a description of how two dice are positioned relative to each other.

    If two dice have the exact same colors, they represent tiles that are in the exact same position in space. If they have no colors in common and no paired colors (one die is black-white and the other is green-red), they represent tiles that would partially overlap — which, remember, is not allowed in the tiling. If the two dice have one set of paired colors and one set of the same color (one is red-black and the other is green-black) they represent tiles that share a face.

    Finally, and most importantly, if they have one set of paired colors and another set of colors that are merely different — that is, if they’re connected by an edge — it means the dice represent tiles that are touching each other, but shifted off each other slightly, so that their faces don’t exactly align. This is the condition you really want to investigate. Dice that are connected by an edge represent tiles that are connected without sharing a face — exactly the kind of tiling arrangement needed to disprove Keller’s conjecture.

    “They need to touch each other, but they can’t fully touch each other,” Heule said.

    Samuel Velasco/Quanta Magazine

    Scaling Up

    Thirty years ago, Corrádi and Szabó proved that mathematicians can use this procedure to address Keller’s conjecture in any dimension by adjusting the parameters of the experiment. To prove Keller’s conjecture in three dimensions you might use 216 dice with three dots on a face, and maybe three pairs of colors (though there’s flexibility on this point). Then you’d look for eight dice (2³) among them that are fully connected to each other using the same two conditions we used before.

    As a general rule, to prove Keller’s conjecture in dimension n, you use dice with n dots and try to find a clique of size 2ⁿ. You can think of this clique as representing a kind of “super tile” (made up of 2ⁿ smaller tiles) that could cover the entire n-dimensional space.

    So if you can find this super tile (that itself contains no face-sharing tiles), you can use translated, or shifted, copies of it to cover the entire space with tiles that don’t share a face, thus disproving Keller’s conjecture.

    “If you succeed, you can cover the whole space by translation. The block with no common face will extend to the whole tiling,” said Lagarias, who is now at the University of Michigan.

    Mackey disproved Keller’s conjecture in dimension eight by finding a clique of 256 dice (2^8), so answering Keller’s conjecture for dimension seven required looking for a clique of 128 dice (2^7). Find that clique, and you’ve proved Keller’s conjecture false in dimension seven. Prove that such a clique can’t exist, on the other hand, and you’ve proved the conjecture true.
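In low dimensions the same search can be run directly and comes up empty, consistent with Keller's conjecture being true there. A brute-force sketch for dimension two (16 dice, clique target 2^2 = 4), again using the assumed 0-3 color encoding rather than the researchers' actual code:

```python
# Brute-force search for a 4-clique in the dimension-2 Keller graph.
# Dice are pairs of colors 0-3; two dice are connected by an edge when they
# differ in at least two dots and at least one dot shows paired colors
# (|difference| == 2). Illustrative sketch only.
from itertools import combinations, product

def connected(u, v):
    diffs = [i for i in range(len(u)) if u[i] != v[i]]
    has_pair = any(abs(u[i] - v[i]) == 2 for i in diffs)
    return len(diffs) >= 2 and has_pair

dice = list(product(range(4), repeat=2))   # all 16 two-dot dice
cliques = [c for c in combinations(dice, 4)
           if all(connected(u, v) for u, v in combinations(c, 2))]
print(len(cliques))   # 0: no clique of size 4, so no counterexample in 2D
```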

    Unfortunately, finding a clique of 128 dice is a particularly thorny problem. In previous work, researchers could use the fact that dimensions eight and 10 can be “factored,” in a sense, into lower-dimensional spaces that are easier to work with. No such luck here.

    “Dimension seven is bad because it’s prime, which meant that you couldn’t split it into lower-dimensional things,” Lagarias said. “So there was no choice but to deal with the full combinatorics of these graphs.”

    Seeking out a clique of size 128 may be a difficult task for the unassisted human brain, but it’s exactly the kind of question a computer is good at answering — especially if you give it a little help.

    The Language of Logic

    To turn the search for cliques into a problem that computers can grapple with, you need a representation of the problem that uses propositional logic. It’s a type of logical reasoning that incorporates a set of constraints.

    Let’s say you and two friends are planning a party. The three of you are trying to put together the guest list, but you have somewhat competing interests. Maybe you want to either invite Avery or exclude Kemba. One of your co-planners wants to invite Kemba or Brad or both of them. Your other co-planner, with an ax to grind, wants to leave off Avery or Brad or both of them. Given these constraints, you could ask: Is there a guest list that satisfies all three party planners?

    In computer science terms, this type of question is known as a satisfiability problem. You solve it by describing it in what’s called a propositional formula that in this case looks like this, where the letters A, K and B stand for the potential guests: (A OR NOT K) AND (K OR B) AND (NOT A OR NOT B).

    The computer evaluates this formula by plugging in either 0 or 1 for each variable. A 0 means the variable is false, or turned off, and a 1 means it’s true, or turned on. So if you put in a 0 for “A” it means Avery is not invited, while a 1 means she is. There are lots of ways of assigning 1s and 0s to this simple formula — or building the guest list — and it’s possible that after running through them the computer will conclude it’s not possible to satisfy all the competing demands. In this case, though, there are two ways of assigning 1s and 0s that work for everyone: A = 1, K = 1, B = 0 (meaning inviting Avery and Kemba) and A = 0, K = 0, B = 1 (meaning inviting just Brad).
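Because the formula is tiny, the brute-force search a computer performs can be written out directly; the script below tries all eight assignments:

```python
# Enumerate every 0/1 assignment to A, K, B and keep the ones that satisfy
# (A OR NOT K) AND (K OR B) AND (NOT A OR NOT B).
from itertools import product

satisfying = [(A, K, B) for A, K, B in product([0, 1], repeat=3)
              if (A or not K) and (K or B) and (not A or not B)]
print(satisfying)   # [(0, 0, 1), (1, 1, 0)] -- the two workable guest lists
```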

    A computer program that solves propositional logic statements like this is called a SAT solver, where “SAT” stands for “satisfiability.” It explores every combination of variables and produces a one-word answer: Either YES, there is a way to satisfy the formula, or NO, there’s not.

    “You just decide whether each variable is true or false in a way to make the whole formula true, and if you can do it the formula is satisfiable, and if you can’t the formula is unsatisfiable,” said Thomas Hales of the University of Pittsburgh.

    The question of whether it’s possible to find a clique of size 128 is a similar kind of problem. It can also be written as a propositional formula and plugged into a SAT solver. Start with a large number of dice with seven dots apiece and six possible colors. Can you color the dots such that 128 dice can be connected to each other according to the specified rules? In other words, is there a way of assigning colors that makes the clique possible?

    The propositional formula that captures this question about cliques is quite long, containing 39,000 different variables. Each can be assigned one of two values (0 or 1). As a result, the number of possible assignments of the variables, or ways of arranging colors on the dice, is 2^39,000 — a very, very big number.

    To answer Keller’s conjecture for dimension seven, a computer would have to check every one of those combinations — either ruling them all out (meaning no clique of size 128 exists, and Keller is true in dimension seven) or finding just one that works (meaning Keller is false).
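The shape of that reduction — clique-finding rephrased as a satisfiability question — can be seen at toy scale. Below, one boolean per vertex says "this vertex is in the clique," every non-edge contributes a clause forbidding both endpoints, and brute force stands in for a real SAT solver. This is an illustration of the general idea, not the paper's far larger encoding over dot colorings:

```python
# Does graph G contain a clique of size k? One variable per vertex (1 = in
# the clique); for every NON-edge (u, v) add the clause NOT(u AND v); then
# look for a satisfying assignment with at least k variables set to 1.
from itertools import combinations, product

def has_clique(n, edges, k):
    edge_set = {frozenset(e) for e in edges}
    non_edges = [e for e in combinations(range(n), 2)
                 if frozenset(e) not in edge_set]
    for assign in product([0, 1], repeat=n):           # brute-force "solver"
        clauses_ok = all(not (assign[u] and assign[v]) for u, v in non_edges)
        if clauses_ok and sum(assign) >= k:
            return True
    return False

print(has_clique(3, [(0, 1), (1, 2), (0, 2)], 3))  # True: a triangle
print(has_clique(4, [(0, 1), (1, 2), (2, 3)], 3))  # False: a path, no triangle
```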

    “If you had a naive computer check all possible [configurations], it would be this 324-digit number of cases,” Mackey said. It would take the world’s fastest computers until the end of time before they’d exhausted all the possibilities.

    But the authors of the new work figured out how computers could arrive at a definitive conclusion without actually having to check every possibility. Efficiency is the key.

    Hidden Efficiencies

    Mackey recalls the day when, in his eyes, the project really came together. He was standing in front of a blackboard in his office at Carnegie Mellon University discussing the problem with two of his co-authors, Heule and Brakensiek, when Heule suggested a way of structuring the search so that it could be completed in a reasonable amount of time.

    “There was real intellectual genius at work there in my office that day,” Mackey said. “It was like watching Wayne Gretzky, like watching LeBron James in the NBA Finals. I have goose bumps right now [just thinking about it].”

    There are many ways you might grease the search for a particular Keller graph. Imagine that you have many dice on a table and you’re trying to arrange 128 of them in a way that satisfies the rules of a Keller graph. Maybe you arrange 12 of them correctly, but you can’t find a way to add the next die. At that point, you can rule out all the configurations of 128 dice that involve that unworkable starting configuration of 12 tiles.

    “If you know the first five things you’ve assigned don’t fit together, you don’t have to look at any of the other variables, and that generally cuts the search down a whole lot,” said Shor, who is now at the Massachusetts Institute of Technology.
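The pruning Shor describes is ordinary backtracking: extend a partial clique one vertex at a time, and the moment a vertex conflicts with the partial solution, discard every arrangement built on it without ever enumerating them. A generic sketch of the technique, not the solver's actual algorithm:

```python
# Backtracking clique search with pruning: once a vertex is added, only
# vertices still compatible with everything chosen so far remain candidates;
# incompatible branches are cut off wholesale.
def largest_clique(n, adj):
    """adj[u][v] is True when vertices u and v are connected."""
    best = []
    def extend(partial, candidates):
        nonlocal best
        if len(partial) > len(best):
            best = list(partial)
        for i, v in enumerate(candidates):
            # Prune: keep only later candidates adjacent to v.
            extend(partial + [v],
                   [w for w in candidates[i + 1:] if adj[v][w]])
    extend([], list(range(n)))
    return best

triangle = [[False, True, True], [True, False, True], [True, True, False]]
print(largest_clique(3, triangle))   # [0, 1, 2]
```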

    Another form of efficiency involves symmetry. When objects are symmetric, we think of them as being in some sense the same. This sameness allows you to understand an entire object just by studying a portion of it: Glimpse half a human face and you can reconstruct the whole visage.

    Similar shortcuts work for Keller graphs. Imagine, again, that you’re arranging dice on a table. Maybe you start at the center of the table and build out a configuration to the left. You lay four dice, then hit a roadblock. Now you’ve ruled out one starting configuration — and all configurations based on it. But you can also rule out the mirror image of that starting configuration — the arrangement of dice you get when you position the dice the same way, but building out to the right instead.

    “If you can find a way of doing satisfiability problems that takes into account the symmetries in an intelligent way, then you’ve made the problem much easier,” said Hales.

    The four collaborators took advantage of these kinds of search efficiencies in a new way — in particular, they automated considerations about symmetries, where previous work had relied on mathematicians working practically by hand to deal with them.

    They ultimately streamlined the search for a clique of size 128 so that instead of checking 2^39,000 configurations, their SAT solver only had to search about 1 billion (2^30). This turned a search that might have taken eons into a morning chore. Finally, after just half an hour of computations, they had an answer.

    “The computers said no, so we know the conjecture does hold,” said Heule. There is no way of coloring 128 dice so that they’re all connected to each other, so Keller’s conjecture is true in dimension seven: Any arrangement of tiles that covers the space inevitably includes at least two tiles that share a face.

    The computers actually delivered a lot more than a one-word answer. They supported it with a long proof — 200 gigabytes in size — justifying their conclusion.

    The proof is much more than a readout of all the configurations of variables the computers checked. It’s a logical argument which establishes that the desired clique couldn’t possibly exist. The four researchers fed the Keller proof into a formal proof checker — a computer program that traced the logic of the argument — and confirmed it works.

    “You don’t just go through all the cases and not find anything, you go through all the cases and you’re able to write a proof that this thing doesn’t exist,” Mackey said. “You’re able to write a proof of unsatisfiability.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 4:40 pm on February 18, 2020 Permalink | Reply
    Tags: Mathematics, MIP (multiprover interactive proof)

    From Science News: “How a quantum technique highlights math’s mysterious link to physics” 

    From Science News

    February 17, 2020
    Tom Siegfried

    Verifying proofs to very hard math problems is possible with infinite quantum entanglement.

    A technique that relies on quantum entanglement (illustrated) expands the realm of mathematical problems for which the solution could (in theory) be verified. inkoly/iStock/Getty Images Plus.

    It has long been a mystery why pure math can reveal so much about the nature of the physical world.

    Antimatter was discovered in Paul Dirac’s equations before being detected in cosmic rays. Quarks appeared in symbols sketched out on a napkin by Murray Gell-Mann several years before they were confirmed experimentally. Einstein’s equations for gravity suggested the universe was expanding a decade before Edwin Hubble provided the proof. Einstein’s math also predicted gravitational waves a full century before behemoth apparatuses detected those waves (which were produced by collisions of black holes — also first inferred from Einstein’s math).

    Nobel laureate physicist Eugene Wigner alluded to math’s mysterious power as the “unreasonable effectiveness of mathematics in the natural sciences.” Somehow, Wigner said, math devised to explain known phenomena contains clues to phenomena not yet experienced — the math gives more out than was put in. “The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and … there is no rational explanation for it,” Wigner wrote in 1960.

    But maybe there’s a new clue to what that explanation might be. Perhaps math’s peculiar power to describe the physical world has something to do with the fact that the physical world also has something to say about mathematics.

    At least that’s a conceivable implication of a new paper that has startled the interrelated worlds of math, computer science and quantum physics.

    In an enormously complicated 165-page paper, computer scientist Zhengfeng Ji and colleagues present a result that penetrates to the heart of deep questions about math, computing and their connection to reality. It’s about a procedure for verifying the solutions to very complex mathematical propositions, even some that are believed to be impossible to solve. In essence, the new finding boils down to demonstrating a vast gulf between infinite and almost infinite, with huge implications for certain high-profile math problems. Seeing into that gulf, it turns out, requires the mysterious power of quantum physics.

    Everybody involved has long known that some math problems are too hard to solve (at least without unlimited time), but a proposed solution could be rather easily verified. Suppose someone claims to have the answer to such a very hard problem. Their proof is much too long to check line by line. Can you verify the answer merely by asking that person (the “prover”) some questions? Sometimes, yes. But for very complicated proofs, probably not. If there are two provers, though, both in possession of the proof, asking each of them some questions might allow you to verify that the proof is correct (at least with very high probability). There’s a catch, though — the provers must be kept separate, so they can’t communicate and therefore collude on how to answer your questions. (This approach is called MIP, for multiprover interactive proof.)

    Verifying a proof without actually seeing it is not that strange a concept. Many examples exist for how a prover can convince you that they know the answer to a problem without actually telling you the answer. A standard method for coding secret messages, for example, relies on using a very large number (perhaps hundreds of digits long) to encode the message. It can be decoded only by someone who knows the prime factors that, when multiplied together, produce the very large number. It’s impossible to figure out those prime numbers (within the lifetime of the universe) even with an army of supercomputers. So if someone can decode your message, they’ve proved to you that they know the primes, without needing to tell you what they are.
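The asymmetry is easy to demonstrate at toy scale: multiplying two primes is a single operation, while recovering them by trial division blows up as the primes grow. The numbers below are tiny stand-ins for the hundreds-of-digits primes used in practice:

```python
# Easy direction: multiply two primes. Hard direction: recover them.
def trial_factor(n):
    """Find the smallest prime factor of n by trial division --
    fine for toy numbers, hopeless for numbers hundreds of digits long."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

p, q = 1009, 2003        # tiny stand-ins for cryptographic primes
n = p * q                # the easy direction: one multiplication
print(trial_factor(n))   # (1009, 2003), found only by grinding through divisors
```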

    Someday, though, calculating those primes might be feasible, with a future-generation quantum computer. Today’s quantum computers are relatively rudimentary, but in principle, an advanced model could crack codes by calculating the prime factors for enormously big numbers.

    That power stems, at least in part, from the weird phenomenon known as quantum entanglement. And it turns out that, similarly, quantum entanglement boosts the power of MIP provers. By sharing an infinite amount of quantum entanglement, MIP provers can verify vastly more complicated proofs than nonquantum MIP provers.

    It is obligatory to say that entanglement is what Einstein called “spooky action at a distance.” But it’s not action at a distance, and it just seems spooky. Quantum particles (say photons, particles of light) from a common origin (say, both spit out by a single atom) share a quantum connection that links the results of certain measurements made on the particles even if they are far apart. It may be mysterious, but it’s not magic. It’s physics.

    Say two provers share a supply of entangled photon pairs. They can convince a verifier that they have a valid proof for some problems. But for a large category of extremely complicated problems, this method works only if the supply of such entangled particles is infinite. A large amount of entanglement is not enough. It has to be literally unlimited. A huge but finite amount of entanglement can’t even approximate the power of an infinite amount of entanglement.

    As Emily Conover explains in her report for Science News, this discovery proves false a couple of widely believed mathematical conjectures. One, known as Tsirelson’s problem, specifically suggested that a sufficient amount of entanglement could approximate what you could do with an infinite amount. Tsirelson’s problem was mathematically equivalent to another open problem, known as Connes’ embedding conjecture, which has to do with the algebra of operators, the kinds of mathematical expressions that are used in quantum mechanics to represent quantities that can be observed.

    Refuting the Connes conjecture, and showing that MIP plus entanglement could be used to verify immensely complicated proofs, stunned many in the mathematical community. (One expert, upon hearing the news, compared his feces to bricks.) But the new work isn’t likely to make any immediate impact in the everyday world. For one thing, all-knowing provers do not exist, and if they did they would probably have to be future super-AI quantum computers with unlimited computing capability (not to mention an unfathomable supply of energy). Nobody knows how to do that in even Star Trek’s century.

    Still, pursuit of this discovery quite possibly will turn up deeper implications for math, computer science and quantum physics.

    It probably won’t shed any light on controversies over the best way to interpret quantum mechanics, as computer science theorist Scott Aaronson notes in his blog about the new finding. But perhaps it could provide some sort of clues regarding the nature of infinity. That might be good for something, perhaps illuminating whether infinity plays a meaningful role in reality or is a mere mathematical idealization.

    On another level, the new work raises an interesting point about the relationship between math and the physical world. The existence of quantum entanglement, a (surprising) physical phenomenon, somehow allows mathematicians to solve problems that seem to be strictly mathematical. Wondering why physics helps out math might be just as entertaining as contemplating math’s unreasonable effectiveness in helping out physics. Maybe even one will someday explain the other.

    See the full article here.



  • richardmitnick 12:14 pm on February 12, 2020 Permalink | Reply
    Tags: Atom or noise?, Mathematics, Stanford’s Department of Bioengineering

    From SLAC National Accelerator Lab: “Atom or noise? New method helps cryo-EM researchers tell the difference” 

    From SLAC National Accelerator Lab

    February 11, 2020
    Nathan Collins

    Cryogenic electron microscopy can in principle make out individual atoms in a molecule, but distinguishing the crisp from the blurry parts of an image can be a challenge. A new mathematical method may help.

    Cryogenic electron microscopy, or cryo-EM, has reached the point where researchers could in principle image individual atoms in a 3D reconstruction of a molecule – but just because they could see those details doesn’t always mean they do. Now, researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have proposed a new way to quantify how accurate such reconstructions are and, in the process, how confident they can be in their molecular interpretations. The study was published February 10 in Nature Methods.

    Cryo-EM works by freezing biological molecules, which can contain thousands of atoms, so they can be imaged under an electron microscope. By aligning and combining many two-dimensional images, researchers can compute three-dimensional maps of an entire molecule, and this technique has been used to study everything from battery failure to the way viruses invade cells. However, an issue that has been hard to solve is how to accurately assess the true level of detail, or resolution, at every point in such maps and in turn determine which atomic features are truly visible.

    A cryo-EM map of the molecule apoferritin (left) and a detail of the map showing the atomic model researchers use to construct Q-scores. (Image courtesy Greg Pintilie)

    Wah Chiu, a professor at SLAC and Stanford, Grigore Pintilie, a computational scientist in Chiu’s group, and colleagues devised the new measures, known as Q-scores, to address that issue. To compute Q-scores, scientists start by building and adjusting an atomic model until it best matches the corresponding cryo-EM derived 3D map. Then, they compare the map to an idealized version in which each atom is well-resolved, revealing to what degree the map truly resolves the atoms in the atomic model.
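That comparison can be caricatured in one dimension: sample the map's density profile around an atom, compare it with an idealized well-resolved profile, and report how strongly the two correlate. The sketch below is only an illustration of the idea; the published Q-score is computed in 3D against a calibrated reference profile, and all numbers here are made up.

```python
# Toy 1-D Q-score-like comparison: correlate map values around an atom
# with an idealized (sharply resolved) Gaussian profile. Profile widths
# and sample points are invented for illustration.
import math

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

radii = [0.2 * i for i in range(11)]               # distances from the atom
ideal = [math.exp(-r * r / 0.5) for r in radii]    # crisp reference profile
blurry = [math.exp(-r * r / 2.0) for r in radii]   # poorly resolved atom

print(round(correlation(ideal, ideal), 3))   # 1.0 -- perfectly resolved
print(correlation(ideal, blurry) < 1.0)      # True -- blurrier maps score lower
```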

    The researchers validated their approach on large molecules, including a protein called apoferritin that they studied in the Stanford-SLAC Cryo-EM Facilities. Kaiming Zhang, another research scientist in Chiu’s group, produced 3D maps close to the highest resolution reached to date – up to 1.75 angstrom, less than a fifth of a nanometer. Using such maps, they showed how Q-scores varied in predictable ways based on overall resolution and on which parts of a molecule they were studying. Pintilie and Chiu say they hope Q-scores will help biologists and others using cryo-EM better understand and interpret the 3D maps and resulting atomic models.

    The study was performed in collaboration with researchers from Stanford’s Department of Bioengineering. Molecular graphics and analysis were performed using the University of California, San Francisco’s Chimera software package. The project was funded by the National Institutes of Health.

    See the full article here.



    SLAC/LCLS II projected view

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

    SSRL and LCLS are DOE Office of Science user facilities.

  • richardmitnick 12:57 pm on February 10, 2020 Permalink | Reply
    Tags: "Where math meets physics", Mathematics

    From Penn Today: “Where math meets physics” 

    From Penn Today

    February 7, 2020
    Erica K. Brockmeier
    Eric Sucar, Photographer

    Collaborations between physicists and mathematicians at Penn showcase the importance of research that crosses the traditional boundaries that separate fields of science.

    Penn is home to an active and flourishing collaboration between physicists and mathematicians. Advances in the fields of geometry, string theory, and particle physics have been made possible by teams of researchers, like physicist Burt Ovrut, who speak different “languages,” embrace new research cultures, and understand the power of tackling problems through an interdisciplinary approach.

    In the scientific community, “interdisciplinary” can feel like an overused, modern-day buzzword. But uniting different academic disciplines is far from a new concept. Math, chemistry, physics, and biology were grouped together for many years under the umbrella of “natural philosophy,” and it was only as knowledge grew and specialization became necessary that they split into the distinct disciplines we know today.

    With many complex scientific questions still in need of answers, working across multiple fields is now seen as an essential part of research. At Penn, long-running collaborations between the physics and astronomy and the math departments showcase the importance of interdisciplinary research that crosses traditional boundaries. Advances in geometry, string theory, and particle physics, for example, have been made possible by teams of researchers who speak different “languages,” embrace new research cultures, and understand the power of tackling problems through an interdisciplinary approach.

    A tale of two disciplines

    Math and physics are two closely connected fields. For physicists, math is a tool used to answer questions. For example, Newton invented calculus to help describe motion. For mathematicians, physics can be a source of inspiration, with theoretical concepts such as general relativity and quantum theory providing an impetus for mathematicians to develop new tools.

    But despite their close connections, physics and math research relies on distinct methods. As the systematic study of how matter behaves, physics encompasses the study of both the great and the small, from galaxies and planets to atoms and particles. Questions are addressed using combinations of theories, experiments, models, and observations to either support or refute new ideas about the nature of the universe.

    In contrast, math is focused on abstract topics such as quantity (number theory), structure (algebra), and space (geometry). Mathematicians look for patterns and develop new ideas and theories using pure logic and mathematical reasoning. Instead of experiments or observations, mathematicians use proofs to support their ideas.

    While physicists rely heavily on math for calculations in their work, they don’t work towards a fundamental understanding of abstract mathematical ideas in the way that mathematicians do. Physicists “want answers, and the way they get answers is by doing computations,” says mathematician Tony Pantev. “But in mathematics, the computations are just a decoration on top of the cake. You have to understand everything completely, then you do a computation.”

    This fundamental difference leads researchers in both fields to use the analogy of language, highlighting a need to “translate” ideas in order to make progress and understand one another. “We are dealing with how to formulate physics questions so it can be seen as a mathematics problem,” says physicist Mirjam Cvetič. “That’s typically the hardest part.”

    Kamien works on physics problems that have a strong connection to geometry and topology and encourages his students to understand problems as mathematicians do. “Understanding things for the sake of understanding them is worthwhile, and connecting them to things that other people know is also worthwhile,” he says.

    “A physicist comes to us, asks, ‘How do you prove that this is true?’ and we immediately show them it’s false,” says mathematician Ron Donagi. “But we keep talking, and the trick is not to do what they say to do but what they mean, a translation of the problem.”

    In addition to differences in methodology and language, math and physics also have different research cultures. In physics, papers might involve dozens of co-authors and institutions, with researchers publishing work several times per year. In contrast, mathematicians might work on a single problem that takes years to complete with a small number of collaborators. “Sometimes, physics papers are essentially, ‘We discovered this thing, isn’t that cool,’” says physicist Randy Kamien. “But math is never like that. Everything is about understanding things for the sake of understanding them. Culturally, it’s very different.”

    Mind the gap

    When asked how mathematicians and physicists can bridge these fundamental gaps and successfully work together, many researchers refer to a commonly cited example that also has a connection to Penn. In the 1950s, Eugenio Calabi, now professor emeritus, conjectured the existence of a six-dimensional manifold, a topological space arranged in a way that allows complex structures to be described and understood more simply. After the manifold’s existence was proven in 1978 by Shing-Tung Yau, this new finding was poised to become a fundamental component of a new idea in particle physics: string theory.

    Proposed in the 1970s as a candidate framework for a “theory of everything,” it describes matter as being made of one-dimensional vibrating strings that form elementary particles, like electrons and neutrinos, as well as forces, like gravity and electromagnetism. The challenge, however, is that string theory requires a 10-dimensional universe, so physicists turned to the Calabi-Yau manifolds as a place to house the “extra” dimensions.

    Because the structure is so complex and only recently proven by mathematicians, it wasn’t simple to directly implement into a physics framework, even though physicists use math all of the time in their work. Physicists “use differential geometry, but that’s been known for a long time,” says physicist Burt Ovrut. “When all of a sudden string theory launches, who the heck knows what a Calabi-Yau manifold is?”

    Through the combined efforts of Ed Witten, a physicist with strong mathematical knowledge, and mathematician Michael Atiyah, researchers found a way to apply Calabi-Yau manifolds in string theory. It was the ability of Witten to help translate ideas between the two fields that many researchers say was instrumental in successfully applying brand-new ideas from mathematics into up-and-coming theories from physics.

    At Penn, mathematicians, including Donagi, Pantev, and Antonella Grassi, and physicists Cvetič, Kamien, Ovrut, and Jonathan Heckman have also recognized the importance of speaking a common language as they work across the two fields. They credit Penn as being a place that’s particularly adept at fostering connections and bridging cultural, linguistic, and methodological gaps, and they attribute their success to time spent listening to new ideas and developing ways to “translate” between languages.

    For Donagi, it was a chance encounter with Witten in the mid 1990s that led the mathematician to his first collaboration with a researcher outside of pure math. He enjoyed working with Witten so much that he reached out to Penn physicists Cvetič and Ovrut to start a “local” crossover collaboration. “I’ve been hooked since then, and I’ve been talking as much to physicists as to other mathematicians,” Donagi says.

    During the mid-2000s, Donagi and Ovrut co-led a math and physics program with Pantev and Grassi that was supported by the U.S. Department of Energy. It marked Penn’s first official math and physics crossover collaboration. As Ovrut explains, the work focused on a specific kind of string theory and required extremely close interaction between physics and math researchers. “It was at the very edge of mathematics and algebraic geometry, so I couldn’t do this myself, and the mathematicians were very interested in these things.”

    Cvetič, a longtime collaborator with Donagi and Grassi, says that Penn’s mathematicians have the expertise they need to help answer important questions in physics and that their collaborations at the interface of string theory and algebraic geometry are “extremely fruitful and productive.”

    “I think it’s been incredibly productive and helpful for both our groups,” Donagi says. “We’ve been doing this for longer than anyone else, and we have a really good strong connection between the groups. They’ve almost become one group.”

    “What facilitates this type of research is that we can talk to the physicists,” says Pantev, who has worked for many years with Cvetič and Donagi. “When we go talk to them, they know how to speak our language, and they can explain the questions they are struggling with in a way that we can understand and approach them.”

    And in terms of embracing cultural differences, physicists like Kamien, who works on problems with a strong connection to geometry and topology, encourages his group members to try to understand math the way mathematicians do instead of only seeing it as a tool for their work. “We’ve tried to absorb not just their language but their culture, how they understand things, how sometimes understanding a problem more deeply is better,” he says.

    Crossing paths

    Craig Lawrie and Ling Lin, a current and former postdoc working with Cvetič and Heckman, know firsthand about both the challenges and opportunities of working on a problem that combines both cutting-edge math and physics. Physicists like Lawrie and Lin, who work in M-theory and F-theory, are trying to figure out what types of particles different geometric structures can create while also removing the “extra” six dimensions.

    Adding extra symmetries makes string theory problems easier to work with and allows researchers to ask questions about the properties of geometric structures and how they correspond to real-world physics. Building off previous work by Heckman, Lawrie and Lin were able to extract physical features from known geometries in five-dimensional systems to see if those particles overlapped with standard model particles. Using their knowledge of both physics and math, the researchers showed that geometries in different dimensions are all related mathematically, which means they can study particles in different dimensions more easily.

    Using their physics intuition, Lawrie and Lin were able to apply their knowledge of math to make new discoveries that wouldn’t have been possible if the two fields were used in isolation. “What we found seems to suggest that theories in five dimensions come from theories in six dimensions,” explains Lin. “That is something that mathematicians, if they didn’t know about string theory or physics, would not think about.”

    Lawrie adds that being able to work directly with mathematicians is also helpful in their field since understanding new math research can be a challenge, even for theoretical physics researchers. “As physicists, we can have a long discussion where we use a lot of intuition, but if you talk to a mathematician they will say, ‘Wait, precisely what do you mean by that?’ and then you have to pull out your important assumptions,” says Lawrie. “It’s also good for clarifying our own thought process.”

    Rodrigo Barbosa also knows what it’s like to work across fields, in his case coming from math to physics. While studying a seven-dimensional manifold as part of his Ph.D. program, Barbosa connected at a conference with Lawrie over their shared research interests. They then combined their experience in a successful interdisciplinary collaboration [Physical Review D]. The work, motivated by Barbosa’s Ph.D. research in math, included both junior and senior faculty as well as postdocs and graduate students from physics.

    While Barbosa says that the work was challenging, especially being the only mathematician in the group, he also found it rewarding. He enjoyed being able to provide mathematical explanations for certain difficult concepts and relished the rare opportunity to work so closely with researchers outside of his field while still in graduate school. “I’m very grateful that I did my Ph.D. at Penn because it’s really one of a handful of places where this could have happened,” he says.

    The next generation

    Faculty in both departments see the next generation of students and postdocs as “ambidextrous,” having fundamental skills, knowledge, and intuition from both math and physics. “Young people are extremely sophisticated and open minded,” says Pantev. “In the old days, it was very hard to get into physics-related research if you were a mathematician because the thinking is completely different. Now, young people are equally versed in both modes of thinking, so it’s easy for them to make progress.”

    Heckman joined the physics faculty in 2017 and is already active in a number of collaborations with the math department. “What makes this place so great is that we’re talking a common language,” he says. “Although Ron says we sometimes speak with an accent.”

    Heckman is also a member of this new ambidextrous generation of researchers, and in his two years at Penn he has co-authored several papers and started new projects with mathematicians. He says that researchers who want to be successful in the future need to be able to balance the needs of both fields. “Some students act more like mathematicians, and I have to guide them to act more like physicists, and others have more physical intuition but they have to pick up the math,” he says.

    It’s a balance that requires a blend of flexibility and precision, and is one that will be a continuing challenge as topics become increasingly complex and new observations are made from physics experiments. “Mathematicians want to make everything well-defined and rigorous. From a physics perspective, sometimes you want to get an answer that doesn’t need to be well-defined, so you need to make a compromise,” says Lin.

    This compromise is something that’s attracted Barbosa to working more with physicists, adding that the two fields are complementary. “Problems have become so difficult that you need input from all possible directions. Physics works by finding examples and describing solutions, while in math you try to see how general these equations are and how things fit together,” Barbosa says. He also enjoys that physics provides him with a way to make progress on answering questions more quickly than in pure math, where problems can take years to solve.

    The future of crossing over

    The future of interdisciplinary research will depend a lot on the next generation, but Penn is well positioned to continue leading these efforts thanks to the proximity of the two departments, shared grants, cross-listed courses, and students and postdocs that actively work on problems across fields. “There is this constant osmosis of basic knowledge that builds up students who are literate and comfortable with sophisticated language,” says Pantev. “I think we are ahead of the curve, and I think we’ll stay ahead of the curve.”

    It’s something that many at Penn agree is a unique feature of their two departments. “It’s very rare to have such close relationships between mathematicians who really listen to what we say,” says Ovrut. “Penn should be proud of itself for having that kind of synergy. It is not something you see every day.”

    Ovrut was one of the co-leads, along with Donagi, of the incredibly successful joint math and physics program, the first official collaboration between the two departments at Penn.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Penn campus

    Academic life at Penn is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

  • richardmitnick 11:46 am on February 9, 2020 Permalink | Reply
    Tags: Chenyang Xu applies the techniques of abstract algebra to study concrete but complex geometric objects., Mathematics,   

    From MIT News: “New theories at the intersection of algebra and geometry” 

    MIT News

    From MIT News

    February 8, 2020
    Jonathan Mingle

    Chenyang Xu. Image: M. Scott Brauer

    Professor Chenyang Xu applies the techniques of abstract algebra to study concrete but complex geometric objects.

    As a self-described “classical type of mathematician,” Chenyang Xu eschews software for paper and pen, chalk and chalkboard. Walk by his office, and you might simply see him pacing about, deep in concentration.

    Walking — across campus to get a cup of coffee, or from his apartment to his office — is an essential part of his process.

    “The way I think about math, I do a lot of picturing in my brain,” he says. “If I need a more clear picture, I might draw something and do some calculations. And when I walk I think of these pictures.”

    Those paces sometimes lead him to colleagues’ offices. “There are so many great minds here, and I interact with my colleagues in the department a lot,” says Xu, a recently tenured professor of mathematics at MIT.

    Xu’s specialty is algebraic geometry, which applies the problem-solving methods of abstract algebra to the complex but concrete shapes, surfaces, spaces, and curves of geometry. His primary objects of study are algebraic varieties — geometric manifestations of sets of solutions of systems of polynomial equations. As he walks and talks with colleagues, Xu focuses on ways of classifying these algebraic varieties in higher dimensions, using the techniques of birational geometry.
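    For readers new to the term, a tiny example (an illustration, not drawn from Xu’s research) shows what “the set of solutions of a polynomial equation” looks like in practice: the unit circle is the variety cut out by x² + y² − 1 = 0, and a classical parametrization produces rational points on it using only algebra. The function names below are made up for the sketch.

    ```python
    from fractions import Fraction

    def on_circle(x, y):
        """The unit circle is the algebraic variety cut out by the single
        polynomial equation x^2 + y^2 - 1 = 0."""
        return x * x + y * y - 1 == 0

    def rational_point(t):
        """Classical parametrization t -> ((1-t^2)/(1+t^2), 2t/(1+t^2)),
        which lands on the variety for every rational t."""
        t = Fraction(t)
        return (1 - t * t) / (1 + t * t), 2 * t / (1 + t * t)

    for t in (0, 1, Fraction(1, 2), 7):
        x, y = rational_point(t)
        print((x, y), on_circle(x, y))  # every generated point is on the variety
    ```

    This is the flavor of algebraic geometry in miniature: an algebraic formula answers a geometric question (which points lie on the curve) exactly, with no approximation.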

    “I like to talk with other mathematicians working in my subject,” Xu says. “We discuss a bit, then go back to think for ourselves, encounter new difficulties, then discuss again. Most of my papers are basically collaborations.”

    Such a collaboration helped take Xu’s research in a new direction: developing the theory of K-stability of Fano varieties. Eight years ago, he devoted some thought to a certain subject in his field known as K-stability, which he describes as “an algebraic definition invented for differential geometry studies.”

    “I tried to develop an algebraic theory based on this K-stability as a background intuition, using algebraic geometry tools.” After a few years’ “gap,” he eventually came back to it because of conversations with his collaborator Chi Li, a professor of mathematics at Purdue University.

    “He had more of a differential geometry background and translated that concept into algebraic geometry,” says Xu. “That’s when I realized this was important to study. Since then, we have done more than we expected four or five years ago.”

    Together they published a highly cited 2014 paper in the Annals of Mathematics on the “K-stability of Fano varieties,” which put forward an entirely new theory in the field of birational algebraic geometry.

    It was representative of his approach to mathematics, which involves advancing new theories before tackling specific problems.

    “In my subject there are questions that everybody is trying to solve, that have been open for 40 years,” Xu says. “I have those kinds of problems in my mind. My way of doing math is to go after the theory. Instead of working on one problem with techniques, we have to first develop the theory. We then see something in a new light. Every time I find some new theory, I test it on old classical problems to see if it works or not.”

    The beauty of math

    Growing up near Chengdu, in China’s Sichuan Province, Xu enjoyed math from a young age. “I attended some math Olympiads, and I did okay, but I wasn’t the gold medal winner,” he says with a laugh.

    He was talented enough, however, to earn bachelor’s and master’s degrees at Peking University, as a part of the premier math program in China.

    “After I got into college, I started to learn more advanced mathematics, and I found it very beautiful and very deep,” he says. “To me, a big chunk of mathematics is art more than science.”

    Toward the end of his time at Peking, he concentrated increasingly on algebraic geometry. “I just like geometry a lot and wanted to study some subject related to geometry,” he says. “I found that I’m good at the techniques of algebra. So using those techniques to study geometry fit me very well.”

    Xu then pursued a PhD at Princeton University, where his advisor, János Kollár, a leading algebraic geometer, had a “huge influence” on him.

    “What I learned from him, aside from many techniques, of course, was more about what I could call ‘taste,’” says Xu. “What questions are important in mathematics? In general, graduate students or postdocs in the early stages of their career need some role model to follow. Doing math is a complicated thing, and at some point there are choices they need to make,” he says, that require balancing how difficult or interesting a particular problem might be with more practical concerns about its tractability.

    In addition to Kollár’s mentorship, the unfamiliarity of his new surroundings also aided his research.

    “I had never been outside China before that point, so there was a bit of culture shock,” he recalls. “I didn’t know much about U.S. culture at the time. But in some sense that made me even more concentrated on my work.”

    After Xu received his doctorate in 2008, he spent three years as a postdoc and C.L.E. Moore Instructor at MIT. He then spent about six years as a professor at the Beijing International Center for Mathematical Research and returned to MIT as a full professor of mathematics in 2018.

    Throughout those years, Xu demonstrated a talent for finding important questions to pursue, becoming a leading thinker in his field and making a series of major advances in algebraic birational geometry.

    In 2017, Xu won the inaugural Future Science Prize in Mathematics and Computer Science for his “fundamental contributions” to the field of birational geometry. Some of that field’s real-world applications include coding and robotics. For example, birational geometry techniques are used to help robots “see” by grouping a series of two-dimensional pictures together into something approximating a field of vision to navigate our three-dimensional world.

    Xu’s work to advance the minimal model program (MMP) — a key theory in birational geometry that was first articulated in the early 1980s — and apply it to algebraic varieties won him the 2019 New Horizons Prize for early-career achievement in mathematics. He has since proved a series of conjectures related to the MMP, expanding it to previously untested varieties under certain conditions.

    The theory of algebraic K-stability that he developed has proven to be fertile ground for new discoveries. “I’m still working on this topic, and it’s a particularly interesting question to me,” he says.

    Xu has been making progress on proving other key conjectures related to K-stability rooted in the minimal model program. Recently, he drew on that prior work to prove the existence of moduli space for Fano algebraic varieties. Now he’s hard at work developing a solution for a specific property of that moduli space: its “compactness.”

    “To solve that problem it will be very important,” he says. “I hope we can still solve the last piece of it. I’m pretty sure that would be my best work to date.”

    See the full article here.

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 1:09 pm on January 22, 2020 Permalink | Reply
    Tags: "Brewing a better espresso with a shot of maths", , Mathematics,   

    From University of Portsmouth: “Brewing a better espresso with a shot of maths” 

    From University of Portsmouth

    22 January 2020

    Read how Dr Jamie Foster’s number-crunching has uncovered the secret to espresso perfection.

    Dr Jamie Foster

    Mathematicians, physicists and materials experts might not spring to mind as the first people to consult about whether you are brewing your coffee right.

    But a team of such researchers including Dr Jamie Foster, a mathematician at the University of Portsmouth’s School of Mathematics and Physics, are challenging common espresso wisdom.

    They have found that fewer coffee beans, ground more coarsely, are the key to a drink that is cheaper to make, more consistent from shot to shot, and just as strong.

    The study is published in the journal Matter.

    Dr Foster and colleagues set out to understand why two shots of espresso, made in seemingly the same way, can sometimes taste rather different.


    They began by creating a new mathematical theory to describe extraction from a single grain, many millions of which comprise a coffee ‘bed’ which you would find in the basket of an espresso machine.

    Dr Foster said: “In order to solve the equations on a realistic coffee bed you would need an army of super computers, so we needed to find a way of simplifying the equations.

    “The hard mathematical work was in making these simplifications systematically, in such a way that none of the important detail was lost.

    “The conventional wisdom is that if you want a stronger cup of coffee, you should grind your coffee finer. This makes sense because finer grounds mean that more surface area of the coffee bean is exposed to water, which should mean a stronger coffee.”
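    That surface-area intuition can be sketched with a back-of-envelope model (not the model in the paper): treat every ground as a sphere of equal radius, fix the dose of coffee, and total exposed area comes out inversely proportional to the grain radius. The density value below is a rough placeholder, not a measured figure from the study.

    ```python
    import math

    def total_surface_area(mass_g, radius_mm, density_g_per_mm3=0.0011):
        """Model each ground as a sphere and compute the total water-exposed
        surface area for a fixed dose of coffee.  Algebraically this reduces
        to 3 * mass / (density * radius): halving the radius doubles the area."""
        grain_volume = (4 / 3) * math.pi * radius_mm ** 3
        n_grains = mass_g / (density_g_per_mm3 * grain_volume)
        return n_grains * 4 * math.pi * radius_mm ** 2

    # An 18 g dose at three grind sizes, coarse to fine (radii in mm):
    for r in (0.4, 0.2, 0.1):
        print(r, round(total_surface_area(18, r)), "mm^2")
    ```

    The model captures why finer grinds should extract more, but, as the researchers found, it says nothing about water actually reaching all that surface, which is where clogging enters.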

    When the researchers began to look at this in detail, it turned out to be not so simple. They found coffee was more reliable from cup to cup when using fewer beans ground coarsely.

    “When beans were ground finely, the particles were so small that in some regions of the bed they clogged up the space where the water should be flowing,” Dr Foster said.

    “These clogged sections of the bed are wasted because the water cannot flow through them and access that tasty coffee that you want in your cup. If we grind a bit coarser, we can access the whole bed and have a more efficient extraction.

    “It’s also cheaper, because when the grind setting is changed, we can use fewer beans and be kinder to the environment.

    “Once we found a way to make shots efficiently, we realised that as well as making coffee shots that stayed reliably the same, we were using less coffee.”

    The new recipes were trialled in a small US coffee shop over one year, and the shop reported saving thousands of dollars. Estimates indicate that scaling this up to the whole US coffee market could save over US$1.1 billion per year.

    Previous studies have looked at drip filter coffee. This is the first time mathematicians have used theoretical modelling to study the science of the perfect espresso – a more complicated process due to the additional pressure.

    See the full article here.

    The University of Portsmouth is a public university in the city of Portsmouth, Hampshire, England. The history of the university dates back to 1908, when the Park building opened as a Municipal college and public library. It was previously known as Portsmouth Polytechnic until 1992, when it was granted university status through the Further and Higher Education Act 1992. It is ranked among the Top 100 universities under 50 in the world.

    We’re a New Breed of University
    We’re proud to be a breath of fresh air in the academic world – a place where everyone gets the support they need to achieve their best.
    We’re always discovering. Through the work we do, we engage with our community and world beyond our hometown. We don’t fit the mould, we break it.
    We educate and transform the lives of our students and the people around us. We recruit students for their promise and potential and for where they want to go.
    We stand out, not just in the UK but in the world, in innovation and research, with excellence in areas from cosmology and forensics to cyber security, epigenetics and brain tumour research.
    Just as the world keeps moving, so do we. We’re closely involved with our local community and we take our ideas out into the global marketplace. We partner with business, industry and government to help improve, navigate and set the course for a better future.
    Since the first day we opened our doors, our story has been about looking forward. We’re interested in the future, and here to help you shape it.

    The university offers a range of disciplines, from Pharmacy, International Relations and Politics, to Mechanical Engineering, Paleontology, Criminology, and Criminal Justice, among others. The Guardian University Guide 2018 ranked its Sports Science number one in England, while Criminology, English, Social Work, Graphic Design and Fashion and Textiles courses are all in the top 10 across all universities in the UK. Furthermore, 89% of its research in Physics and 90% of its research in Allied Health Professions (e.g., Dentistry, Nursing and Pharmacy) were rated as world-leading or internationally excellent in the most recent Research Excellence Framework (REF2014).

    The University is a member of the University Alliance and The Channel Islands Universities Consortium. Alumni include Tim Peake, Grayson Perry, Simon Armitage and Ben Fogle.

    Portsmouth was named the UK’s most affordable city for students in the Natwest Student Living Index 2016. On Friday 4 May 2018, the University of Portsmouth was revealed as the main shirt sponsor of Portsmouth F.C. for the 2018–19, 2019–20 and 2020–21 seasons.

  • richardmitnick 11:18 am on October 30, 2019 Permalink | Reply
    Tags: "Nature can help solve optimization problems", , , , Mathematics,   

    From MIT News: “Nature can help solve optimization problems” 

    MIT News

    From MIT News

    October 28, 2019
    Kylie Foy | Lincoln Laboratory

    An analog circuit solves combinatorial optimization problems by using oscillators’ natural tendency to synchronize. The technology could scale up to solve these problems faster than digital computers. Image: Bryan Mastergeorge

    A low-cost analog circuit based on synchronizing oscillators could scale up quickly and cheaply to beat out digital computers.

    Today’s best digital computers still struggle to solve, in a practical time frame, a certain class of problem: combinatorial optimization problems, or those that involve combing through large sets of possibilities to find the best solution. Quantum computers hold potential to take on these problems, but scaling up the number of quantum bits in these systems remains a hurdle.

    Now, MIT Lincoln Laboratory researchers have demonstrated an alternative, analog-based way to accelerate the computing of these problems. “Our computer works by ‘computing with physics’ and uses nature itself to help solve these tough optimization problems,” says Jeffrey Chou, co-lead author of a paper about this work published in Nature’s Scientific Reports. “It’s made of standard electronic components, allowing us to scale our computer quickly and cheaply by leveraging the existing microchip industry.”

    Perhaps the most well-known combinatorial optimization problem is that of the traveling salesperson. The problem asks for the shortest route a salesperson can take through a number of cities, starting and ending at the same one. It may seem simple with only a few cities, but the problem becomes exponentially difficult to solve as the number of cities grows, bogging down even the best supercomputers. Yet optimization problems need to be solved in the real world daily; the solutions are used to schedule shifts, minimize financial risk, discover drugs, plan shipments, reduce interference on wireless networks, and much more.
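    A minimal brute-force sketch (with made-up distances, not from the article) shows both the problem and why exhaustive search fails: the number of routes grows factorially with the number of cities.

    ```python
    from itertools import permutations
    import math

    def shortest_tour(dist):
        """Brute-force traveling salesperson: try every route that starts
        and ends at city 0 and keep the shortest one found."""
        n = len(dist)
        best_len, best_route = float("inf"), None
        for middle in permutations(range(1, n)):
            route = (0,) + middle + (0,)
            length = sum(dist[route[i]][route[i + 1]] for i in range(n))
            if length < best_len:
                best_len, best_route = length, route
        return best_len, best_route

    # A made-up symmetric distance matrix for four cities.
    dist = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]
    print(shortest_tour(dist))  # (18, (0, 1, 3, 2, 0))

    # The search space is (n-1)! directed routes, which is why brute
    # force bogs down as the number of cities grows:
    for n in (5, 10, 15):
        print(n, math.factorial(n - 1))
    ```

    Four cities means only six routes; fifteen cities already means over 87 billion, which is the wall that motivates special-purpose hardware.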

    “It has been known for a very long time that digital computers are fundamentally bad at solving these types of problems,” says Suraj Bramhavar, also a co-lead author. “Many of the algorithms that have been devised to find solutions have to trade off solution quality for time. Finding the absolute optimum solution winds up taking an unreasonably long time when the problem sizes grow.” Finding better solutions and doing so in dramatically less time could save industries billions of dollars. Thus, researchers have been searching for new ways to build systems designed specifically for optimization.

    Finding the beat

    Nature likes to optimize energy, or achieve goals in the most efficient and distributed manner. This principle can be witnessed in the synchrony of nature, like heart cells beating together or schools of fish moving as one. Similarly, if you set two pendulum clocks on the same surface, no matter when the individual pendula are set into motion, they will eventually be lulled into a synchronized rhythm, reaching their apex at the same time but moving in opposite directions (or out of phase). This phenomenon was first observed in 1665 by the Dutch scientist Christiaan Huygens. These clocks are an example of coupled oscillators, set up in such a way that energy can be transferred between them.
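    Huygens-style synchronization can be sketched with the Kuramoto model of coupled phase oscillators. This toy simulation (an illustration of the phenomenon, not the laboratory's circuit) shows attractive coupling pulling two oscillators into phase and repulsive coupling pushing them out of phase, like the clocks' apexes in opposite directions.

    ```python
    import math

    def kuramoto_pair(k, theta0=(0.0, 2.0), omega=1.0, dt=0.01, steps=5000):
        """Euler-integrate two coupled phase oscillators (Kuramoto model):
        d(theta_i)/dt = omega + k * sin(theta_j - theta_i).
        Returns the final phase difference folded into [0, pi]."""
        a, b = theta0
        for _ in range(steps):
            da = omega + k * math.sin(b - a)
            db = omega + k * math.sin(a - b)
            a, b = a + dt * da, b + dt * db
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

    print(kuramoto_pair(+0.5))  # attractive coupling: difference near 0 (in phase)
    print(kuramoto_pair(-0.5))  # repulsive coupling: difference near pi (anti-phase)
    ```

    Whatever the starting phases, energy transfer through the coupling drags the pair into one of these locked states, which is the behavior the Lincoln Laboratory system exploits.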

    “We’ve essentially built an electronic, programmable version of this [clock setup] using coupled nonlinear oscillators,” Chou says, showing a YouTube video of metronomes displaying a similar phenomenon. “The idea is that if you set up a system that encodes your problem’s energy landscape, then the system will naturally try to minimize the energy by synchronizing, and in doing so, will settle on the best solution. We can then read out this solution.”

    The laboratory’s prototype is a type of Ising machine, a computer based on a model in physics that describes a network of magnets, each of which has a magnetic “spin” orientation that can point only up or down. Each spin’s final orientation depends on its interaction with every other spin. The individual spin-to-spin interactions are defined with a specific coupling weight, which denotes the strength of their connection. The goal of an Ising machine is to find, given a specific coupling strength network, the correct configuration of each spin, up or down, that minimizes the overall system energy.

    But how does an Ising machine solve an optimization problem? It turns out that optimization problems can be mapped directly onto the Ising model, so that a set of spins with certain coupling weights can represent each city and the distances between them in the traveling salesperson problem. Thus, finding the lowest-energy configuration of spins in the Ising model translates directly into the solution for the salesperson’s shortest route. However, solving this problem by individually checking each of the possible configurations becomes prohibitively difficult when the problems grow to even modest sizes.
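    A small sketch (with toy coupling weights, not values from the paper) makes the mapping concrete: score every up/down configuration against a coupling matrix and keep the one with minimum energy. This is exactly the exhaustive 2^n search that becomes hopeless at scale.

    ```python
    from itertools import product

    def ising_energy(spins, j):
        """Ising energy: E = -sum over pairs i<j of J_ij * s_i * s_j."""
        n = len(spins)
        return -sum(j[a][b] * spins[a] * spins[b]
                    for a in range(n) for b in range(a + 1, n))

    def ground_state(j):
        """Exhaustively score all 2^n up/down configurations -- the brute-force
        search that an Ising machine replaces with physical relaxation."""
        n = len(j)
        return min(product((-1, +1), repeat=n), key=lambda s: ising_energy(s, j))

    # Made-up coupling weights for three spins: one antiferromagnetic link
    # (negative weight) and two ferromagnetic ones.
    J = [
        [0, -1, 1],
        [-1, 0, 1],
        [1,  1, 0],
    ]
    best = ground_state(J)
    print(best, ising_energy(best, J))  # minimum energy is -1 for this J
    ```

    Encoding a real problem amounts to choosing the J matrix so that low energy corresponds to a good solution; the hard part is finding the minimum without enumerating every configuration.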

    In recent years, there have been efforts to build quantum machines that map to the Ising model, the most notable of which is one from the Canadian company D-Wave Systems. These machines may offer an efficient way to search the large solution space and find the correct answer, although they operate at cryogenic temperatures.

    The laboratory’s system runs a similar search, but does so using simple electronic oscillators. Each oscillator represents a spin in the Ising model, and similarly takes on a binarized phase, where oscillators that are synchronized, or in phase, represent the “spin up” configuration and those that are out of phase represent the “spin down” configuration. To set the system up to solve an optimization problem, the problem is first mapped to the Ising model, translating it into programmable coupling weights connecting each oscillator.

    With the coupling weights programmed, the oscillators are allowed to run, like the pendulum arm of each clock being released. The system then naturally relaxes to its overall minimum energy state. Electronically reading out each oscillator’s final phase, representing “spin up” or “spin down,” presents the answer to the posed question. When the system ran against more than 2,000 random optimization problems, it came to the correct solution 98 percent of the time.
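    The relaxation described above can be caricatured in software (a toy gradient-flow model, not the laboratory's analog circuit): phases evolve under Kuramoto-style coupling weighted by J, a slowly ramped second-harmonic term pins each phase to 0 or pi, and the sign of cos(phase) reads out each spin. All parameter values here are illustrative.

    ```python
    import math
    import random

    def oscillator_ising(j, k=1.0, dt=0.01, steps=20000, seed=0):
        """Toy oscillator Ising machine.  Phases relax under coupling weighted
        by J; a second-harmonic term ramped from 0 binarizes each phase toward
        0 or pi, and sign(cos(phase)) is the spin readout."""
        rng = random.Random(seed)
        n = len(j)
        theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
        for step in range(steps):
            k_sync = 2.0 * step / steps  # ramp the binarizing term from 0 to 2
            dtheta = [
                -k * sum(j[i][m] * math.sin(theta[i] - theta[m]) for m in range(n))
                - k_sync * math.sin(2 * theta[i])
                for i in range(n)
            ]
            theta = [t + dt * d for t, d in zip(theta, dtheta)]
        return [1 if math.cos(t) > 0 else -1 for t in theta]

    # Ferromagnetic triangle: every coupling positive, so the minimum-energy
    # configuration has all spins aligned.
    J = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
    print(oscillator_ising(J))  # expect three identical spins
    ```

    Nothing here enumerates configurations: the dynamics simply flow downhill in the system's energy, which is the sense in which the hardware "computes with physics."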

    Previously, researchers at Stanford University demonstrated an Ising machine [Science] that uses lasers and electronics to solve optimization problems. That work revealed the potential for a significant speedup over digital computing although, according to Chou, the system may be difficult and costly to scale to larger sizes. The goal of finding a simpler alternative ignited the laboratory’s research.

    Scaling up

    The individual oscillator circuit the team used in their demonstration is similar to circuitry found inside cellphones or Wi-Fi routers. One addition they’ve made is a crossbar architecture that allows all of the oscillators in the circuit to be directly coupled to each other. “We have found an architecture that is both scalable to manufacture and can enable full connectivity to thousands of oscillators,” Chou says. A fully connected system allows it to easily be mapped to a wide variety of optimization problems.

    “This work from Lincoln Laboratory makes innovative use of a crossbar architecture in its construction of an analog-electronic Ising machine,” says Peter McMahon, an assistant professor of applied and engineering physics at Cornell University who was not involved in this research. “It will be interesting to see how future developments of this architecture and platform perform.”

    The laboratory’s prototype Ising machine uses four oscillators. The team is now working out a plan to scale the prototype to larger numbers of oscillators, or “nodes,” and fabricate it on a printed circuit board. “If we can get to, say, 500 nodes, there is a chance we can start to compete with existing computers, and at 1,000 nodes we might be able to beat them,” Bramhavar says.

    The team sees a clear path forward to scaling up because the technology is based on standard electronic components. It’s also extremely cheap. All the parts for their prototype can be found in a typical undergraduate electrical engineering lab and were bought online for about $20.

    “What excites me is the simplicity,” Bramhavar adds. “Quantum computers are expected to demonstrate amazing performance, but the scientific and engineering challenges required to scale them up are quite hard. Demonstrating even a small fraction of the performance gains envisioned with quantum computers, but doing so using hardware from the existing electronics industry, would be a huge leap forward. Exploiting the natural behavior of these circuits to solve real problems presents a very compelling alternative for what the next era of computing could be.”

    See the full article here.
