Tagged: Harvard Gazette

  • richardmitnick 12:14 pm on October 30, 2019
    Tags: "Riding the quantum computing ‘wave’", Google and "quantum supremacy", Harvard and MIT already built a quantum machine of similar power to Google’s and used it to solve scientific problems, Harvard Gazette, Harvard Quantum Initiative, Professor of Physics Mikhail Lukin

    From Harvard Gazette: “Riding the quantum computing ‘wave’” 

    Harvard University


    From Harvard Gazette

    October 29, 2019
    Alvin Powell

    An artist’s drawing of Google’s quantum computer chip, called Sycamore, and its surrounding hardware. Forest Stearns/Google AI Quantum Artist in Residence.

    Harvard Quantum Initiative Co-Director Lukin on ‘quantum supremacy’ and Google’s announcement of its achievement.

    The computing world was abuzz last week after Google scientists announced they’d passed a key threshold in which an experimental quantum computer solved a problem in just minutes that a classical computer would take years — 10,000 by Google’s count — to solve. The pronouncement, later disputed by scientists at rival IBM, was widely hailed as proof that quantum computers, which use the mysterious properties of matter at extremely small scales to greatly advance processing power, can — as theorized — exhibit “quantum supremacy” and vastly outperform even the world’s most powerful classical computers.

    At Harvard, George Vasmer Leverett Professor of Physics Mikhail Lukin watched the announcement with interest, in part because he — together with collaborators from Harvard and MIT — already built a quantum machine of similar power to Google’s and used it to solve scientific problems. Lukin, who co-directs the Harvard Quantum Initiative, spoke with the Gazette about the week’s quantum computing news.

    Q&A
    Mikhail Lukin

    GAZETTE: What is “quantum supremacy” and why is it important in talking about quantum computing?

    LUKIN: Let me first describe quantum computers. They constitute a new approach to processing information, one that makes use of the laws of quantum mechanics, the discipline of physics that describes the behavior of particles at the microscopic level of atoms and nuclei.

    GAZETTE: Does it take advantage of differences in behavior at those very tiny scales from what we might expect in the macro world?

    LUKIN: It takes advantage of differences in behavior and in particular one very strange feature of the quantum world. That is that objects can be in several different states — in several different places — at once. That sounds very bizarre, but in the quantum world, an object can be in your office and in my office at the same time. Even though it sounds strange, this idea of quantum “superposition” has been confirmed and it’s been routinely studied in experiments involving microscopic objects like single atoms, for example, over the past century.

    GAZETTE: Now, this is not a new idea, or even — at this point — a revolutionary idea because it’s actually used in certain applications that people see every day, right?

    LUKIN: Yes. Even certain technologies such as magnetic resonance imaging (MRI) are based on this idea of superposition. So when you get an MRI in the hospital, superposition is being used. Superposition seems bizarre because these effects do not occur in the macroscopic world. Quantum superpositions of microscopic objects are extremely fragile and susceptible to any kind of environmental perturbation. A single photon hitting a quantum superposition can cause it to collapse. That means when you look at it, you will always find an object in one place or another. The idea of quantum computing is to make use of these superpositions for massively parallel processing of information. If you were to use a classical computer, you’d code your information in a string of zeros and ones. In a quantum computer, you can prepare a state that has all sorts of combinations of zeros and ones: it can be in one state, and in another state, and in yet another state all at once. Then, as long as it’s quantum mechanical — as long as it can preserve this superposition — it can process all of these inputs simultaneously. And that massive parallelism gives rise to a very powerful computer.
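To make superposition and measurement concrete, here is a minimal, purely illustrative sketch in Python (not tied to any real quantum hardware or library): a qubit is represented by two complex amplitudes, and "measuring" it collapses the state to a single outcome with probability given by the squared magnitude of each amplitude.

```python
import random

def measure(amplitudes):
    """Collapse a superposition: pick a basis state weighted by |amplitude|**2."""
    probs = [abs(a) ** 2 for a in amplitudes]
    return random.choices(range(len(amplitudes)), weights=probs)[0]

# An equal superposition of |0> and |1>, such as a Hadamard gate produces.
plus = [2 ** -0.5, 2 ** -0.5]

# Repeated measurements give 0 about half the time and 1 the other half:
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(plus)] += 1
print(counts)  # roughly {0: 5000, 1: 5000}
```

The point of the toy: before measurement the state genuinely carries both amplitudes at once; looking at it always yields one definite answer, exactly as Lukin describes.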

    GAZETTE: What is the threshold of “quantum supremacy” that Google said it passed last week?

    LUKIN: This massive parallelism enables one to exponentially speed up quantum computation over classical computers. Quantum bits are the analog of bits in a classical computer, and are used to store a superposition state. So, even if a quantum computer has just 50 quantum bits, which seems very small, it is very challenging for a classical machine to simulate it. And the reason is that even 50 qubits in a superposition state can store and process exponentially many more combinations at once. If you had a system of 300 qubits, you could store and process more bits of information than the number of particles in the universe. So this idea of supremacy is that you build a system large enough, quantum enough, and programmable enough that you can execute operations that the best possible classical computer just cannot simulate in any kind of reasonable time.
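The exponential bookkeeping Lukin describes can be checked with back-of-the-envelope arithmetic. The plain-Python sketch below counts the amplitudes a classical simulator would need to track; the figure of roughly 10**80 particles in the observable universe is a commonly cited estimate, used here only for comparison.

```python
# The state of n qubits is described by 2**n complex amplitudes, so the
# classical memory needed to simulate it doubles with every qubit added.
for n in (50, 53, 300):
    print(f"{n} qubits -> 2**{n} = {2 ** n:.3e} amplitudes")

# At 16 bytes per complex amplitude, even 50 qubits need ~16 petabytes:
bytes_needed = 2 ** 50 * 16
print(f"50 qubits: ~{bytes_needed / 10 ** 15:.0f} PB of memory")

# And 300 qubits exceed the estimated particle count of the universe:
particles_in_universe = 10 ** 80
print(2 ** 300 > particles_in_universe)  # True
```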

    GAZETTE: In this specific case we’re talking about a quantum computer with 53 qubits. How big is it physically? How would it compare to, say, a supercomputer?

    LUKIN: In terms of physical size? It is a room full of equipment — probably comparable to the Summit supercomputer at Oak Ridge National Laboratory.

    GAZETTE: But it has exponentially more computing power?

    LUKIN: That’s the hope. If you have a 50-qubit system and run it long enough to execute a general enough algorithm, it will be very, very challenging for the best classical computers to catch up. And if they can catch up, you could just add a few more qubits — pushing above 60, 70, or 100 — and it’s very clear that it will be completely impossible for a classical computer to catch up.

    GAZETTE: IBM, which is a competitor, cast doubt on Google’s achievement. Do you have a sense as to who’s right or who’s wrong here?


    LUKIN: Google’s achievement is quite impressive. But it points to one specific calculation and says, “We crossed the threshold.” I think, in practice, it’s not quite like that. There is no doubt that once quantum computers become large enough, classical computers cannot simulate them. That’s very clear. It is also clear that 50 qubits is a sort of threshold, and the system that they built is very competitive with the best systems that exist around the world, including one here at Harvard. What they’ve done is a great example of how to test for this so-called supremacy idea. However, systems of 50 or so qubits have already been used in other labs, including ours, for the past two or so years. There have been a number of experiments done with systems of that scale which classical computers have a hard time catching up to. In this sense, I would describe what’s going on now not as a singular event but more like a wave that is coming. It might be that the IBM folks found an algorithm to efficiently simulate on a classical computer what the Google quantum computer did. I’m not surprised by that, but at the same time, it’s very, very clear that we’re entering, as a community, a place where no one has ever been before, meaning we can do things much faster than classical computers. It is actually happening.

    ___________________________________________

    “Even Google’s team will agree the real goal now is to show some examples where quantum computers can be useful either for scientific applications or for general purpose applications.”
    — Mikhail Lukin

    ___________________________________________

    GAZETTE: What was the problem that Google was trying to solve?

    LUKIN: The specific problem that Google was trying to solve is akin to generating random numbers in a quantum way. The algorithm they’re using is not designed to be practically useful. So in this practical sense, quantum supremacy by itself, to me, does not mean very much. But what will be really exciting — and this is a key goal in the field — is achieving that quantum advantage for things that are useful. You execute the algorithm and actually learn something. There are two types of useful quantum advantage, one for scientific applications and the other for general purpose applications. Google’s paper [Nature] is something in between.

    “Google’s achievement is quite impressive. But it points to one specific calculation and says, ‘We crossed the threshold.’ I think, in practice, it’s not quite like that,” says Professor Mikhail Lukin, co-director of Harvard’s Quantum Initiative. Photo by Sophie Park

    GAZETTE: Purely for demonstration purposes?

    LUKIN: Yes, it is a demonstration experiment. However, in the domain of scientific applications, it is pretty clear that quantum computers will be very useful for simulating complex quantum systems. This is what we have focused on here at Harvard. In fact, using our system, I believe that we’ve already crossed into the domain where we have useful quantum advantage for scientific applications. Using our 51-qubit system, we have made one of the largest quantum superposition states, and we have already discovered new phenomena that have not been known previously, and that you would not be able to uncover using brute force classical simulations. In fact, here at Harvard, in different labs, we have at least two systems which have either entered or are entering this domain [of quantum supremacy] for scientific applications. And I think it’s very significant, because these experiments are already creating value for the scientific community.

    GAZETTE: So, if the definition of quantum supremacy is to do things much faster or that classical computers can’t do, you’re there already?

    LUKIN: That’s right. You could argue that these are not problems that people on the street will care about, and I’m sensitive to that. Another goal, which is exciting and I think we are now in a unique position to tackle, is to look for quantum advantage with practical relevance. That may be one of IBM’s points, and I agree with it. Even Google’s team will agree the real goal now is to show some examples where quantum computers can be useful either for scientific applications or for general purpose applications.

    GAZETTE: Do you see a future where quantum computers replace classical computers in everyday life, in smartphones and laptops? Or is it going to be the case where quantum computers will be very valuable for specific things and classical computers will continue to be valuable for other things?

    LUKIN: It’s very hard to predict the future. But my best guess would be that it’s the latter: that quantum computers would be used as accelerators for problems that are very hard for classical machines.

    GAZETTE: So, when you think about big problems that are really hard, are you thinking about things like modeling climate or fusion research, or are you talking about other things like what you use yours for: to understand something horribly complex like quantum mechanics itself?

    LUKIN: I’m tempted to answer “all of the above.” There are different classes of problems. For example, understanding complex materials and modeling chemical reactions are problems that are fundamentally quantum mechanical, which is why classical computers have such a hard time solving them. Understanding how complex quantum systems behave far away from the equilibrium and looking for new phases of matter, these are the kinds of problems for which we are already using a quantum advantage. This work has already stimulated many research directions. There are other problems that push the boundaries of what it is possible to do with conventional computers, like modeling the climate or complex optimization — finding optimal arrangements for networks, for signal routing, finance, logistics, artificial intelligence. It’s our hope that quantum computers will eventually accelerate calculations relevant to these problems. Another famous problem is factoring, related to encryption. When you encode your credit card numbers, you currently use so-called RSA encryption, which is based on the difficulty of problems like finding the factors of a large number. For problems like these there are quantum algorithms that can be exponentially faster than the best-known classical algorithms. At the same time, quantum computers can actually be used to improve security of communications channels — this is another example of a useful quantum advantage.
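To illustrate the factoring point above, here is a toy RSA round-trip in Python with deliberately tiny textbook primes (real keys use primes hundreds of digits long). It shows why the scheme's security rests on factoring: anyone who factors the public modulus can recompute the private key, which is exactly the step a quantum factoring algorithm such as Shor's would make exponentially faster.

```python
from math import gcd

p, q = 61, 53            # secret primes (tiny, for illustration only)
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient, computable only if you know p and q
e = 17                   # public exponent, chosen coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

message = 42
cipher = pow(message, e, n)   # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
print(plain == message)       # True

# An attacker who factors n back into p and q recovers phi, and then d,
# the same way -- the whole secret is the factorization.
```

The three-argument `pow` with a negative exponent (modular inverse) requires Python 3.8 or later.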

    GAZETTE: Can you talk about some of the challenges you see ahead?

    LUKIN: It is important to emphasize two points: We still do not know how to build truly large-scale quantum machines, containing many thousands of qubits. There are several approaches to this problem that have to be investigated seriously, but we do not know at the moment what a truly large-scale quantum computer will ultimately look like. The second issue, which we have already discussed, is that we still do not know for which applications quantum computers will be most useful. This field is at a unique point where a lot of basic research still has to be done, but some systems are ready to be engineered and deployed, though at a relatively small scale.

    GAZETTE: I want to touch on the community working in this area and the Harvard Quantum Initiative. How old is it now?

    LUKIN: It’s about a year old. We have, between Harvard and MIT, a very special community of researchers looking at various aspects of this frontier. There are around 40 research groups in an extremely collaborative community that really spans these two institutions and several startup companies. Many of these groups are already world leaders in their respective disciplines and when they work together, something very special can happen. This is also a unique opportunity for educating students, who will eventually become leaders at the forefront of this exciting interdisciplinary field. Enabling these collaborations and educating a new generation of quantum scientists and engineers are the key goals of Harvard’s Quantum Initiative. Our work on quantum computers is an example of such truly collaborative projects between theorists, experimentalists, engineers, and computer scientists from both of our institutions — this is our competitive advantage, our “secret sauce.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard University is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

     
  • richardmitnick 11:10 am on October 3, 2019
    Tags: "Tiny tweezers", Harvard Gazette

    From Harvard Gazette: “Tiny tweezers” 

    Harvard University


    From Harvard Gazette

    October 2, 2019
    Peter Reuell
    Photos by Jon Chase/Harvard Staff Photographer


    In a first, optical tweezers give Harvard scientists the control to capture ultracold molecules.

    For most people, tweezers are a thing you’d find in a medicine cabinet or beauty salon, useful for getting rid of ingrown hairs or sculpting eyebrows.

    Those designed by John Doyle and Kang-Kuen Ni have more exotic applications.

    Using precisely focused lasers that act as “optical tweezers,” the pair have been able to capture and control individual, ultracold molecules — the eventual building blocks of a quantum computer — and study the collisions between molecules in more detail than ever before. The work is described in a paper published in Science on Sept. 13.

    “We’re interested in doing two things,” said Doyle, the Henry B. Silsbee Professor of Physics and co-director of the Quantum Science and Engineering Initiative. “One is building up complex quantum systems, which are interesting because it turns out that if you can put together certain kinds of quantum systems they can solve problems that can’t be solved using a classical computer, including understanding advanced materials and perhaps designing new materials, or even looking at problems we haven’t thought of yet, because we haven’t had the tools.

    “The other is to actually hold these molecules so we can study the molecules themselves to get insight into their structure and the interactions between molecules,” he continued. “We can also use them to look for new particles beyond the Standard Model, perhaps explaining key cosmological questions.”

    Ni, the Morris Kahn Associate Professor of Chemistry and Chemical Biology, explained that the work began with a cloud of molecules — in this case calcium monofluoride molecules — trapped in a small chamber. Using lasers, the team cooled the molecules to just above absolute zero, then used optical tweezers to capture them.

    Harvard’s Kang-Kuen Ni (left) and John Doyle use precisely focused lasers as optical tweezers.

    “Because the molecules are very cold, they have very low kinetic energy,” Ni said. “An optical tweezer is a very tightly focused laser beam, but the molecules see it as a well, and as they move into the tweezer, they continue to be cooled and lose energy to fall to the bottom of the tweezer trap.”
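Ni's picture of the trap as a well can be sketched numerically. The following one-dimensional toy model (illustrative units, not real experimental parameters) moves a particle in a Gaussian potential shaped like a tightly focused beam, with a weak damping term standing in for laser cooling; as described above, the particle loses energy and settles at the bottom of the trap.

```python
import math

U0, w = 1.0, 1.0  # trap depth and beam waist (arbitrary units)

def force(x):
    # F = -dU/dx for the Gaussian well U(x) = -U0 * exp(-x**2 / w**2)
    return -2 * U0 * x / w ** 2 * math.exp(-x ** 2 / w ** 2)

x, v = 0.8, 0.0            # start near the edge of the well, at rest
dt, gamma = 0.001, 0.5     # time step and damping ("cooling") rate
for _ in range(50_000):
    v += (force(x) - gamma * v) * dt   # damped Newtonian step (mass = 1)
    x += v * dt

print(f"final position = {x:.4f}")  # settles near 0, the trap centre
```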

    Using five beams, Ni, Doyle, and colleagues were able to hold five separate molecules in the tweezers, and demonstrate exacting control over them.

    “The challenge for molecules, and the reason we haven’t done it before, is because they have a number of degrees of freedom — they have electronic and spin states, they have vibration, they have rotation, with each molecule having its own features,” she said. “In principle, one could choose the perfect molecule for a particular use — you can say I want to use this property for one thing, and another property for something else. But the molecules, whatever they are, have to be controlled in the first place. The novelty of this work is in being able to have that individual control.”

    While capturing individual molecules in optical tweezers is a key part of potentially building what Doyle called a “quantum simulator,” the work also allowed researchers to closely observe a process that has remained largely mysterious: the collision between molecules.

    “Simple physics questions deserve answers,” Doyle said. “And a simple physics question here is, what happens when two molecules hit each other? Do they form a reaction? Do they bounce off each other? In this ultracold, quantum region … we don’t know much.

    “There are a number of very good theorists who are working hard to understand if quantum mechanics can predict what we’re going to see,” he continued. “But, of course, nothing motivates new theory like new experiments, and now we have some very nice experimental data.”

    In subsequent experiments, Ni said the team is using the optical tweezers to “steer” molecules together and study the resulting collisions.

    In separate experiments, researchers from her lab explore reactions of ultracold molecules. “We are studying these reactions at ultracold temperatures, which haven’t been achieved previously,” she said. “And we’re seeing new things.”

    Ni was also the author of a 2018 study that theorized how captured molecules, if brought close enough together, might interact, potentially enabling researchers to use them to perform quantum calculations.

    “The idea of Kang-Kuen’s paper is that we can bring these single molecules together and couple them, which is equivalent to a quantum gate, and do some processing,” Doyle said. “So that coupling could be used to perform quantum processing.”

    The current study is also noteworthy for its collaborative nature, Doyle said.

    “We talk a lot about collaboration in the Harvard Quantum Initiative and the Center for Ultracold Atoms (CUA), and the bottom line is this collaboration was driven by scientific interest, and included Wolfgang Ketterle at MIT, one of our CUA colleagues,” he said. “We all have strong scientific interest in molecules, and the fact that Kang-Kuen’s lab is in chemistry and my lab is here in physics has not been a significant barrier.

    “It has been absolutely fabulous working together to solve these problems. And one of the big reasons why is when you have two faculty members from two different departments, they’re not only bringing their personal scientific perspective, they’re bringing to some degree, all the knowledge from their groups together.”

    This research was supported with funding from the National Science Foundation.

    See the full article here.

     
  • richardmitnick 8:58 am on April 22, 2019
    Tags: "Before the Big Bang", Cosmic microwave background [CMB] radiation, Harvard Gazette, Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    From Harvard Gazette: “Before the Big Bang” 


    Harvard University


    From Harvard Gazette

    April 18, 2019
    Peter Reuell

    Study outlines new proposal for probing the primordial universe.

    NASA WMAP

    Most everybody is familiar with the Big Bang — the notion that an impossibly hot, dense universe exploded into the one we know today. But what do we know about what came before?

    In the quest to resolve several puzzles discovered in the initial condition of the Big Bang, scientists have developed a number of theories to describe the primordial universe, the most successful of which — known as cosmic inflation — describes how the universe dramatically expanded in size in a fleeting fraction of a second right before the Big Bang.

    Inflation

    Alan Guth, from Highland Park High School and MIT, who first proposed cosmic inflation


    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe), 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes

    But as successful as the inflationary theory has been, controversies have led to active debates over the years.

    Some researchers have developed very different theories to explain the same experimental results that have supported the inflationary theory so far. In some of these theories, the primordial universe was contracting instead of expanding, and the Big Bang was thus a part of a Big Bounce.

    Some researchers — including Avi Loeb, the Frank B. Baird, Jr. Professor of Science and chair of the Astronomy Department — have raised concerns about the theory, suggesting that its seemingly endless adaptability makes it all but impossible to test.

    “The current situation for inflation is that it’s such a flexible idea … it cannot be falsified experimentally,” Loeb said. “No matter what the result of the observable people set out to measure turns out to be, there are always some models of inflation that can explain it.” Therefore, experiments can only help to nail down some model details within the framework of the inflationary theory, but cannot test the validity of the framework itself. However, falsifiability should be a hallmark of any scientific theory.

    That’s where Xingang Chen comes in.

    Xingang Chen is one of the authors of a new study that examines what the universe looked like before the Big Bang. Jon Chase/Harvard Staff Photographer.

    Chen, a senior lecturer in astronomy, and his collaborators have for many years been developing the idea of using something he calls a “primordial standard clock” as a probe of the primordial universe. Together with Loeb and Zhong-Zhi Xianyu, a postdoctoral researcher in the Physics Department, Chen applied this idea to the noninflationary theories after he learned about an intense debate in 2017 that questioned whether inflationary theories make any predictions at all. In a paper published as an Editor’s Suggestion in Physical Review Letters, the team laid out a method that may be used to falsify the inflationary theory experimentally.

    In an effort to find some characteristic that can separate inflation from other theories, the team began by identifying the defining property of the various theories — the evolutionary history of the size of the primordial universe. “For example, during inflation, by definition the size of the universe grows exponentially,” Xianyu said. “In some alternative theories, the size of the universe contracts — in some very slowly and in some very fast.

    “The conventional observables people have proposed so far have trouble distinguishing the different theories because these observables are not directly related to this property,” he continued. “So we wanted to find what the observables are that can be linked to that defining property.”

    The signals generated by the primordial standard clock can serve this purpose.

    That clock, Chen said, is any type of massively heavy elementary particle in the energetic primordial universe. Such particles should exist in any theory, and they oscillate at some regular frequency, much like the swaying of a clock’s pendulum.

    The primordial universe was not entirely uniform. Quantum fluctuations became the seeds of the large-scale structure of today’s universe and one key source of information physicists rely on to learn about what happened before the Big Bang. The theory outlined by Chen suggests that ticks of the standard clock generated signals that were imprinted into the structure of those fluctuations. And because standard clocks in different primordial universes would leave different patterns of signals, Chen said, they may be able to determine which theory of the primordial universe is most accurate.

    “If we imagine all the information we learned so far about what happened before the Big Bang is in a roll of film frames, then the standard clock tells us how these frames should be played,” Chen explained. “Without any clock information, we do not know if the film should be played forward or backward, fast or slow — just like we are not sure if the primordial universe was inflating or contracting, and how fast it did that. This is where the problem lies. The standard clock put time stamps on each of these frames when the film was shot before the Big Bang, and tells us what this film is about.”

    The team calculated how these standard clock signals should look in noninflationary theories, and suggested how to search for them in astrophysical observations. “If a pattern of signals representing a contracting universe were found,” Xianyu said, “it would falsify the entire inflationary theory, regardless of what detailed models one constructs.”

    The success of this idea lies in experimentation. “These signals will be very subtle to detect,” Chen said. “Our proposal is that there should be some kind of massive fields that have generated these imprints and we computed their patterns, but we don’t know how large the overall amplitude of these signals is. It may be that they are very faint and very hard to detect, so that means we will have to search in many different places.

    “The cosmic microwave background [CMB] radiation is one place,” he continued. “The distribution of galaxies is another. We have already started to search for these signals and there are some interesting candidates already, but we still need more data.”

    Cosmic Background Radiation per Planck

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    See the full article here.

     
  • richardmitnick 9:55 am on January 30, 2019
    Tags: Embedding ethics in computer science curriculum, Ethics permeates the design of almost every computer system or algorithm that’s going out in the world, Harvard Gazette, In 2015 Barbara Grosz designed a new course called “Intelligent Systems: Design and Ethical Challenges”, Initiative dubbed Embedded EthiCS, Philosophy graduate students are paired with computer science faculty members, The Embedded EthiCS model has attracted interest from universities — and companies — around the country

    From Harvard Gazette: “Embedding ethics in computer science curriculum” 

    Harvard University


    From Harvard Gazette

    January 25, 2019
    Paul Karoff

    Photo illustration by Judy Blomquist/Harvard Staff

    Harvard initiative seen as a national model.

    Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”

    Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

    “Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”

    At a time when computer science departments around the country are grappling with how to turn out graduates who understand ethics as well as algorithms, Harvard is taking a novel approach.

    In 2015, Grosz designed a new course called “Intelligent Systems: Design and Ethical Challenges.” An expert in artificial intelligence and a pioneer in natural language processing, Grosz turned to colleagues from Harvard’s philosophy department to co-teach the course. They interspersed into the course’s technical content a series of real-life ethical conundrums and the relevant philosophical theories necessary to evaluate them. This forced students to confront questions that, unlike most computer science problems, have no obvious correct answer.

    Students responded. The course quickly attracted a following and by the second year 140 people were competing for 30 spots. There was a demand for more such courses, not only on the part of students, but by Grosz’s computer science faculty colleagues as well.

    “The faculty thought this was interesting and important, but they didn’t have expertise in ethics to teach it themselves,” she said.

    Barbara Grosz (from left), Jeffrey Behrends, and Alison Simmons hope Harvard’s approach to turning out graduates who understand ethics as well as algorithms becomes a national model. Credit: Rose Lincoln/Harvard Staff Photographer

    In response, Grosz and collaborator Alison Simmons, the Samuel H. Wolcott Professor of Philosophy, developed a model that draws on the expertise of the philosophy department and integrates it into a growing list of more than a dozen computer science courses, from introductory programming to graduate-level theory.

    Under the initiative, dubbed Embedded EthiCS, philosophy graduate students are paired with computer science faculty members. Together, they review the course material and decide on an ethically rich topic that will naturally arise from the content. A graduate student identifies readings and develops a case study, activities, and assignments that will reinforce the material. The computer science and philosophy instructors teach side by side when the Embedded EthiCS material is brought to the classroom.

    Grosz and her philosophy colleagues are at the center of a movement that they hope will spread to computer science programs around the country. Harvard’s “distributed pedagogy” approach is different from many university programs that treat ethics by adding a stand-alone course that is, more often than not, just an elective for computer science majors.

    “Standalone courses can be great, but they can send the message that ethics is something that you think about after you’ve done your ‘real’ computer science work,” Simmons said. “We want to send the message that ethical reasoning is part of what you do as a computer scientist.”

    Embedding ethics across the curriculum helps computer science students see how ethical issues can arise from many contexts, issues ranging from the way social networks facilitate the spread of false information to censorship to machine-learning techniques that empower statistical inferences in employment and in the criminal justice system.

    Courses in artificial intelligence and machine learning are obvious areas for ethical discussions, but Embedded EthiCS also has built modules for less-obvious pairings, such as applied algebra.

    “We really want to get students habituated to thinking: How might an ethical issue arise in this context or that context?” Simmons said.

    ____________________________________________

    “Standalone courses can be great, but they can send the message that ethics is something that you think about after you’ve done your ‘real’ computer science work.”
    — Alison Simmons, Samuel H. Wolcott Professor of Philosophy
    ____________________________________________

    David Parkes, George F. Colony Professor of Computer Science, teaches a wide-ranging undergraduate class on topics in algorithmic economics. “Without this initiative, I would have struggled to craft the right ethical questions related to rules for matching markets, or choosing objectives for recommender systems,” he said. “It has been an eye-opening experience to get students to think carefully about ethical issues.”

    Grosz acknowledged that it can be a challenge for computer science faculty and their students to wrap their heads around often opaque ethical quandaries.

    “Computer scientists are used to there being ways to prove problem set answers correct or algorithms efficient,” she said. “To wind up in a situation where different values lead to there being trade-offs and ways to support different ‘right conclusions’ is a challenging mind shift. But getting these normative issues into the computer system designer’s mind is crucial for society right now.”

    Jeffrey Behrends, currently a fellow-in-residence at Harvard’s Edmond J. Safra Center for Ethics, has co-taught the design and ethics course with Grosz. Behrends said the experience revealed greater harmony between the two fields than one might expect.

    “Once students who are unfamiliar with philosophy are introduced to it, they realize that it’s not some arcane enterprise that’s wholly independent from other ways of thinking about the world,” he said. “A lot of students who are attracted to computer science are also attracted to some of the methodologies of philosophy, because we emphasize rigorous thinking. We emphasize a methodology for solving problems that doesn’t look too dissimilar from some of the methodologies in solving problems in computer science.”

    The Embedded EthiCS model has attracted interest from universities — and companies — around the country. Recently, experts from more than 20 institutions gathered at Harvard for a workshop on the challenges and best practices for integrating ethics into computer science curricula. Mary Gray, a senior researcher at Microsoft Research (and a fellow at Harvard’s Berkman Klein Center for Internet and Society), who helped convene the gathering, said that in addition to impeccable technical chops, employers increasingly are looking for people who understand the need to create technology that is accessible and socially responsible.

    “Our challenge in industry is to help researchers and practitioners not see ethics as a box that has to be checked at the end, but rather to think about these things from the very beginning of a project,” Gray said.

    Those concerns recently inspired the Association for Computing Machinery (ACM), the world’s largest scientific and educational computing society, to update its code of ethics for the first time since 1992.

    Curriculum at a glance:
    A sampling of classes from the Embedded EthiCS pilot program and the issues they address

    Great Ideas in Computer Science – The ethics of electronic privacy

    Introduction to Computer Science II – Morally responsible software engineering

    Networks – Facebook, fake news, and ethics of censorship

    Programming Languages – Verifiably ethical software systems

    Design of Useful and Usable Interactive Systems – Inclusive design and equality of opportunity

    Introduction to AI – Machines and moral decision making

    Autonomous Robot Systems – Robots and work

    In hope of spreading the Embedded EthiCS concept widely across the computer science landscape, Grosz and colleagues have authored a paper to be published in the journal Communications of the ACM and launched a website to serve as an open-source repository of their most successful course modules.

    They envision a culture shift that leads to a new generation of ethically minded computer science practitioners.

    “In our dream world, success will lead to better-informed policymakers and new corporate models of organization that build ethics into all stages of design and corporate leadership,” Behrends said.

    The experiment has also led to interesting conversations beyond the realm of computer science.

    “We’ve been doing this in the context of technology, but embedding ethics in this way is important for every scientific discipline that is putting things out in the world,” Grosz said. “To do that, we will need to grow a generation of philosophers who will think about ways in which they can take philosophical ethics and normative thinking, and bring it to all of science and technology.”

    Carefully designed course modules

    At the heart of the Embedded EthiCS program are carefully designed, course-specific modules, collaboratively developed by faculty along with computer science and philosophy graduate student teaching fellows.

    A module that Kate Vredenburgh, a philosophy Ph.D. student, created for a course taught by Professor Finale Doshi-Velez asks students to grapple with questions of how machine-learning models can be discriminatory, and how that discrimination can be reduced. An introductory lecture sets out a philosophical framework of what discrimination is, including the concepts of disparate treatment and impact. Students learn how eliminating discrimination in machine learning requires more than simply reducing bias in the technical sense. Even setting a socially good task may not be enough to reduce discrimination, since machine learning relies on predictively useful correlations and those correlations sometimes result in increased inequality between groups.

    The module illuminates the ramifications and potential limitations of using a disparate impact definition to identify discrimination. It also introduces technical computer science work on discrimination — statistical fairness criteria. An in-class exercise focuses on a case in which an algorithm that predicts the success of job applicants to sales positions at a major retailer results in fewer African-Americans being recommended for positions than white applicants.
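The statistical fairness criteria the module introduces can be made concrete with a small sketch. This is an illustrative example, not the course’s actual exercise; the applicant data and the four-fifths threshold below are assumptions, not figures from the article.

```python
# A minimal sketch of one disparate-impact check: compare selection
# rates between two groups; ratios below 0.8 are commonly flagged
# under the "four-fifths rule". All data here are invented.

def selection_rate(decisions):
    """Fraction of applicants selected (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model outputs: 1 = recommended for the sales position.
white_applicants = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
black_applicants = [1, 0, 0, 1, 0, 0, 0, 0]   # selection rate 0.25

ratio = disparate_impact_ratio(white_applicants, black_applicants)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.625 = 0.40
if ratio < 0.8:
    print("flagged under the four-fifths rule")
```

A passing disparate-impact check is only one criterion; as the module stresses, statistical fairness definitions can conflict with one another and with intuitive notions of discrimination.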

    An out-of-class assignment asks students to draw on this grounding to address a concrete ethical problem faced by working computer scientists (that is, software engineers working for the Department of Labor). The assignment gives students an opportunity to apply the material to a real-world problem of the sort they might face in their careers, and asks them to articulate and defend their approach to solving the problem.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 10:55 am on January 2, 2019 Permalink | Reply
    Tags: , , , , Center for Computation and Visualization at Brown University, , Harvard Gazette, , Stepping inside a dead star, VR-virtual-reality   

    From Harvard Gazette: “Stepping inside a dead star” 

    Harvard University


    From Harvard Gazette

    December 21, 2018
    Juan Siliezar

    Team uses detailed data to create a virtual-reality display of what’s left after explosion.

    Cassiopeia A, the youngest known supernova remnant in the Milky Way, is the remains of a star that exploded almost 400 years ago. The star was approximately 15 to 20 times the mass of our sun and sat in the Cassiopeia constellation, almost 11,000 light-years from Earth.

    Though stunningly distant, it’s now possible to step inside a virtual-reality (VR) depiction of what followed that explosion.

    Wearing VR goggles, Kim Arcand views a 3-D representation of the Cassiopeia A supernova remnant, pictured above, at the YURT VR Cave at Brown.

    A team led by Kimberly Kowal Arcand from the Harvard-Smithsonian Center for Astrophysics (CfA) and the Center for Computation and Visualization at Brown University has made it possible for astronomers, astrophysicists, space enthusiasts, and the simply curious to experience what it’s like inside a dead star. Their efforts are described in a recent paper in Communicating Astronomy with the Public.

    The VR project — believed to be the first of its kind, using X-ray data from NASA’s Chandra X-ray Observatory mission (which is headquartered at CfA), infrared data from the Spitzer Space Telescope, and optical data from other telescopes — adds new layers of understanding to one of the most famous and widely studied objects in the sky.

    NASA/Chandra X-ray Telescope

    NASA/Spitzer Infrared Telescope

    “Our universe is dynamic and 3-D, but we don’t get that when we are constantly looking at things” in two dimensions, said Arcand, the visualization lead at CfA.

    The project builds on previous research done on Cas A, as it’s commonly known, that first rendered the dead star into a 3-D model using the X-ray and optical data from multiple telescopes.

    Cassiopeia A

    Arcand and her team used that data to convert the model into a VR experience by using MinVR and VTK, two data visualization platforms. The coding work was primarily handled by Brown computer science senior Elaine Jiang, a co-author on the paper.

    The VR experience lets users walk inside a colorful digital rendering of the stellar explosion and engage with parts of it while reading short captions identifying the materials they see.

    “Astronomers have long studied supernova remnants to better understand exactly how stars produce and disseminate many of the elements observed on Earth and in the cosmos at large,” Arcand said.

    When stars explode, they expel all of their elements into the universe. In essence, they help create the elements of life, from the iron in our blood to the calcium in our bones. All of that, researchers believe, comes from previous generations of exploded stars.

    In the 3-D model of Cas A, and now in the VR model, elements such as iron, silicon, and sulfur are represented by different colors. Seeing it in 3-D throws Cas A into fresh perspective, even for longtime researchers and astronomers who build models of supernova explosions.
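The element-to-color idea can be sketched in a few lines. The colors and point data below are invented for illustration; the project itself built its rendering with MinVR and VTK, not this toy code.

```python
# A minimal sketch (assumed palette, not the project's): tag each point
# in a synthetic supernova-remnant point cloud with an RGB color
# according to the element it represents.

ELEMENT_COLORS = {          # illustrative RGB assignments
    "iron": (255, 60, 60),
    "silicon": (60, 255, 60),
    "sulfur": (255, 220, 60),
}

def colorize(points):
    """points: list of (x, y, z, element) -> list of (x, y, z, r, g, b)."""
    return [(x, y, z, *ELEMENT_COLORS[el]) for x, y, z, el in points]

cloud = [(0.0, 1.0, 2.0, "iron"), (1.5, 0.2, 0.8, "sulfur")]
colored = colorize(cloud)
print(colored[0])  # (0.0, 1.0, 2.0, 255, 60, 60)
```

In a real pipeline, the per-point colors would be attached as vertex attributes so the VR renderer can draw each element’s ejecta in its own hue.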

    “The first time I ever walked inside the same data set that I have been staring at for 20 years, I just immediately was fascinated by things I had never noticed, like how various bits of the iron were in different locations,” Arcand said. “The ability to look at something in three dimensions and being immersed in it just kind of opened up my eyes to think about it in different ways.”

    The VR platform also opens understanding of the supernova remnant, which is the strongest radio source beyond our solar system, to new audiences. VR versions of Cas A are available by request for a VR cave (a specially made room in which the floors and walls are projection screens), as well as on Oculus Rift, a VR computer platform. As part of this project, the team also created a version that works with Google Cardboard or similar smartphone platforms. In a separate but related project, Arcand and a team from CfA worked with the Smithsonian Learning Lab to create a browser-based, interactive, 3-D application and 360-degree video of Cas A that works with Google Cardboard and similar platforms.

    “My whole career has been looking at data and how we take data and make it accessible or visualize it in a way that adds meaning to it that’s still scientific,” Arcand said.

    VR is an almost perfect avenue for this approach, since it has been surging in popularity as both entertainment and an educational tool. It has been used to help medical staff prepare for surgeries, for example, and video game companies have used it to add excitement and immersion to popular games.

    Arcand hopes to make Cas A accessible to even more people, such as the visually impaired, by adding sound elements to the colors in the model.

    Reaction to the VR experience has been overwhelmingly positive, Arcand said. Experts and non-experts alike are struck by what Arcand calls “awe moments” of being inside and learning about something so massive and far away.

    “Who doesn’t want to walk inside a dead star?” Arcand said.

    See the full article here .


     
  • richardmitnick 9:55 am on October 31, 2018 Permalink | Reply
    Tags: a snag, , Beyond the standard model, Harvard Gazette, ,   

    From Harvard Gazette: “Beyond the standard model, a snag” 

    Harvard University


    From Harvard Gazette

    Electrons, up really close

    Physics Professor John Doyle works in Lyman Lab. Kris Snibbe/Harvard Staff Photographer

    Team makes most precise measure ever of their charge.

    Electrons are almost unimaginably small, but their tiny size doesn’t mean they can’t be used to poke holes in theories of how the universe works.

    Working in a basement lab at Harvard, a group of researchers led by John Doyle, the Henry B. Silsbee Professor of Physics, has been part of a team to make the most precise measurement ever of the shape of the field around an electron, and the results suggest that some theories for what lies beyond the standard model of physics need to return to the drawing board. The study is described in a recently published paper in the journal Nature.

    The team included groups led by David DeMille from Yale University and Gerald Gabrielse from Northwestern University.

    “This measurement is an order of magnitude better than the last best measurement, which we had also made,” Doyle said. “What this means is these theories of what is beyond the standard model, they may have to be revised.”

    The findings are the latest to emerge from the Advanced Cold Molecule Electron Electric Dipole Moment (ACME) Search, a decade-long project hunting for evidence of exotic particles that fall outside of the standard model. A description of how the basic building blocks of matter interact with a handful of fundamental forces, the standard model is the key theory of particle physics, Doyle said, but it’s also incomplete.

    “There are at least two fundamental observations we can make that are not explained by the standard model,” Doyle said. “One is the matter-antimatter asymmetry in the universe. The universe started off as a very small, hot place, where matter and antimatter were balanced. But as the universe expanded and cooled, at some point matter was favored, and that’s the normal stuff we see now in the universe — stars, the earth, the sky, etc. There is a general theory about how that took place, and that theory requires a property called ‘time reversal violation’ (which states that microscopic physics can tell which way time is flowing) … but the standard model does not have enough.”

    “Dark matter is also not explained by the standard model,” he continued. “We can see dark matter based on how the direction of light is changed as it passes through galaxies. And based on the rotation frequencies of galaxy discs, we know there must be some other matter there, but we don’t know what it is.”

    Women in STEM – Vera Rubin

    Fritz Zwicky first inferred dark matter while observing the motion of the Coma Cluster (imaged by the NASA/ESA Hubble Space Telescope), but much of the decisive work was done by astronomer Vera Rubin, who measured galaxy rotation spectra at the Lowell Observatory in 1965 and, in 1970, with the Department of Terrestrial Magnetism (DTM) image tube spectrograph on the Kitt Peak 84-inch telescope.

    While scientists have advanced a number of theories that would address the gaps in the standard model, Doyle said, it remains unclear which — if any — may be supported by the scientific evidence.

    “There are fairly generic theories that predict new particles, and with them predict enough ‘time reversal violation’ to describe the matter-antimatter asymmetry,” Doyle said. “In addition, some of these predicted particles are thought of as dark matter candidates.” Though massive facilities like the Large Hadron Collider are taking part in the search for those particles, Doyle and colleagues have been leaders in that hunt, and are doing it using a device that is the size of a large office.

    What the ACME team is searching for, Doyle said, is something called the “electron dipole moment,” a telltale sign that the field surrounding the electron is spontaneously transforming into new, predicted particles.

    “The electron is just a point particle, but the electrical field around it contains energy … which can spontaneously turn into a particle for a short time, including these beyond-the-standard-model particles,” Doyle said. “So there is a dance that goes on constantly, where the field is being converted into a particle, which decays back into the field.

    “The trick is observing the effect of the process,” he said. “We refer to these particles as being ‘virtually created’ particles, and the theory is that because of them, the electron will actually look somewhat like a molecule, with a small positive charge on one end and a negative charge on the other.”

    The result is an electron with a field that is not perfectly spherical, but slightly squashed. It’s that slight deformation that Doyle and colleagues were trying to find.

    The device they used to do so works by firing a beam of cold thorium-oxide molecules into a relatively small chamber, where lasers select specific quantum states, orienting the molecules and their electrons as they pass between two charged glass plates inside a carefully controlled magnetic field.

    Another set of “readout” lasers targets the molecules as they emerge from the chamber, causing them to emit light. By monitoring that light, the team can identify whether the electrons twist or tumble during flight, as they would if the shape of their field was squashed.

    “We didn’t see any electron dipole moment, which means these new particles would have to have different properties than were predicted,” Doyle said. “What this says is that these beyond-the-standard-model theories may have to be revised.”
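To get a feel for the precision involved, here is a back-of-the-envelope sketch, not from the article. It assumes the ACME collaboration’s 2018 upper bound on the electron EDM, about 1.1 × 10⁻²⁹ e·cm, and a thorium-oxide effective internal field of roughly 78 GV/cm; both numbers are stated assumptions, not quotes from Doyle.

```python
# Illustrative arithmetic: the energy shift an electron EDM at the
# assumed bound would produce in ThO's effective field, and the
# resulting spin-precession rate and phase over a ~1 ms flight.

E_CHARGE = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34          # reduced Planck constant, J*s

d_e = 1.1e-29 * E_CHARGE * 1e-2   # assumed EDM bound, e*cm -> C*m
E_eff = 78e9 * 1e2                # assumed effective field, GV/cm -> V/m

energy_shift = d_e * E_eff        # J
omega = energy_shift / HBAR       # precession rate, rad/s
phase = omega * 1e-3              # phase accumulated over ~1 ms

print(f"energy shift: {energy_shift:.2e} J")       # ~1.4e-37 J
print(f"precession rate: {omega:.2e} rad/s")       # ~1.3e-3 rad/s
print(f"phase over 1 ms: {phase:.2e} rad")         # ~1.3e-6 rad
```

With these numbers, the sought-after signal is a phase of order a microradian per molecule, which is why the experiment averages over enormous numbers of molecules under tightly controlled fields.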

    The finding doesn’t, however, close the door on theories of what lies beyond the standard model, Doyle said.

    “There were reasons to think there were particles at this mass at this energy scale, but we are not finding them,” he said. “Some of those theories look very unlikely to be correct. But there is room for some other very reasonable theories that predict particles just above where we are now, so it’s definitely worthwhile to keep pushing.”

    Going forward, Doyle and colleagues may not be alone in that effort.

    “This field is growing,” he said. “Most of the people who did the ACME experiments were students and postdoctoral fellows, who, together, deserve most of the credit. They have now become the next generation of scientists in the field and have, along with scientists from other labs, come up with exciting new ideas on how to do these measurements even better … So we expect this to be a very vibrant area for at least the next 10 years as people use new quantum tools to make these types of measurements. The big physics mysteries of the universe remain, and this community aims to keep trying to figure them out in our basement labs.”

    This research was supported with funding from the National Science Foundation.

    See the full article here .


     
  • richardmitnick 11:11 am on August 22, 2018 Permalink | Reply
    Tags: , , , Harvard Gazette, Tamara Pico, The Hudson River,   

    From Harvard Gazette: Women in STEM – “Tracking rivers to read ancient glaciers” – Tamara Pico

    Harvard University


    From Harvard Gazette

    Tamara Pico is lead author of a new study that estimates how Ice Age glaciers moved by examining how the weight of the North American ice sheet altered topography and led to changes in the course of rivers. Jon Chase/Harvard Staff Photographer

    Long-ago changes along the Hudson may provide evidence of how ice sheets grew.

    In a kind of geological mystery, scientists have known for decades that a massive ice sheet stretched to cover most of Canada and much of the northeastern U.S. 25,000 years ago. What’s been trickier to pin down is how — and especially how quickly — it reached its ultimate size.

    One clue to answering that, Tamara Pico said, may involve changes to the Hudson River.

    Pico, who is a Graduate School of Arts and Sciences Ph.D. student working in the group led by Jerry Mitrovica, the Frank B. Baird Jr. Professor of Science, is the lead author of a study that estimates how glaciers moved by examining how the weight of the ice sheet altered topography and led to changes in the river’s course. The study is described in a July paper published in Geology.

    “The Hudson River has changed course multiple times over the last million years,” Pico said. “The last time was about 30,000 years ago, just before the last glacial maximum, when it moved to the east.

    “That ancestral channel has been dated and mapped … and the way the ice sheet connects to this is: As it is growing, it’s loading the crust it’s sitting on. The Earth is like bread dough on these time scales, so as it gets depressed under the ice sheet, the region around it bulges upward. In fact, we call it the peripheral bulge. The Hudson is sitting on this bulge, and as it’s lifted up and tilted, the river can be forced to change directions.”

    To develop a system that could connect the growth of the ice sheet with changes in the Hudson’s direction, Pico began with a model for how the Earth deforms in response to various loads.

    “So we can say, if there’s an ice sheet over Canada, I can predict the land in New York City to be uplifted by X many meters,” she said. “What we did was create a number of different ice histories that show how the ice sheet might have grown, each of which predicts a certain pattern of uplift, and then we can model how the river might have evolved in response to that upwelling.”
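The idea of a peripheral bulge tilting a river can be illustrated with a toy calculation. This is not the authors’ model, and every number below is invented: add bulge uplift, largest near the river’s mouth, to a gently sloping profile and count the reaches whose slope now points away from the old mouth.

```python
# Toy sketch of glacial-isostatic tilting of a river profile.
# The real study used a viscoelastic Earth-deformation model;
# here the bulge is just a prescribed quadratic uplift.

def river_profile(n=11, head_elev=10.0, bulge_uplift=25.0):
    """Elevations (m) from mouth (i=0) to head (i=n-1), illustrative."""
    elev = []
    for i in range(n):
        x = i / (n - 1)                       # 0 at mouth, 1 at head
        original = head_elev * x              # gentle downstream slope
        uplift = bulge_uplift * (1 - x) ** 2  # bulge largest at the mouth
        elev.append(original + uplift)
    return elev

def reversed_reaches(elev):
    """Count segments where land now slopes away from the old mouth."""
    return sum(1 for a, b in zip(elev, elev[1:]) if b < a)

elev = river_profile()
print(f"reversed segments: {reversed_reaches(elev)} of {len(elev) - 1}")
# With these invented numbers, 8 of the 10 segments end up reversed.
```

Even this crude sketch shows the qualitative point: if uplift near the mouth outpaces the river’s original drop, long reaches can be forced to drain the other way, which is the kind of signal Pico’s group matches against mapped ancestral channels.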

    The result, Pico said, is a model that may for the first time be able to use the changes in natural features in the landscape to measure the growth of ice sheets.

    “This is the first time a study has used the change in a river’s direction to understand which ice history is most likely,” she said. “There’s very little data about how the ice sheet grew because as it grows it acts like a bulldozer and scrapes everything away to the edges. We have plenty of information about how the ice retreats, because it deposits debris as it melts back, but we don’t get that type of record as the ice is advancing.”

    Source: “Glacial isostatic adjustment deflects the path of the ancestral Hudson River,” T. Pico, J.X. Mitrovica, J. Braun, K.L. Ferrier

    What little data scientists do have about the ice sheet’s growth, Pico said, comes from sea-level records, which suggest that the ice sheet over Canada, particularly in the eastern part of the country, remained relatively small for a long time and then suddenly began to grow quickly.

    “In a way, this study is motivated by that, because it’s asking: Can we use evidence for a change in river direction … to test whether the ice sheet grew quickly or slowly?” she said. “We can only ask that question because these areas were never covered by ice, so this record is preserved. We can use evidence in the landscape and the rivers to say something about the ice sheet, even though this area was never covered by ice.”

    While the study offers strong suggestive evidence that the technique works, Pico said there is still a great deal of work to be done to confirm that the findings are solid.

    “This is the first time this has been done, so we need to do more work to explore how the river responds to this type of uplift and understand what we should be looking for in the landscape,” she said. “But I think it’s extremely exciting because we are so limited in what we know about ice sheets before the last glacial maximum. We don’t know how fast they grew. If we don’t know that, we don’t know how stable they are.”

    Going forward, Pico said she is working to apply the technique to several other rivers along the Eastern Seaboard, including the Delaware, Potomac, and Susquehanna, all of which show signs of rapid change during the same period.

    “There is some evidence that rivers experienced very unusual changes that are no doubt related to this process,” she said. “The Delaware may have actually reversed slope, and the Potomac and Susquehanna both show a large increase in erosion in some areas, suggesting the water was moving much faster.”

    In the long run, Pico said, the study may help researchers rewrite their understanding of how quickly the landscape can change and how rivers and other natural features respond.

    “For me, this work is about trying to connect the evidence on land to the history of glaciation to show the community that this process — what we call glacial isostatic adjustment — can really impact rivers,” Pico said. “People most often think of rivers as stable features of the landscape that remain fixed over very long, million-year time scales, but we can show that these Ice Age effects can alter the landscape on millennial time scales. The ice sheet grows, the Earth deforms, and rivers respond.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

     
  • richardmitnick 6:45 am on March 15, 2018 Permalink | Reply
    Tags: Earthquake detection, Harvard Gazette

    From Harvard Gazette: “Learning to find ‘quiet’ earthquakes”

    Harvard University

    Harvard Gazette

    March 14, 2018
    Peter Reuell

    Researchers create algorithm that can separate small disturbances from seismic noise.

    Assistant Professor Marine Denolle is the co-author of a new study that uses computer-learning algorithms to detect tiny earthquakes hidden in seismic “noise,” like human activity, that could be used for real-time detection and early warnings. Kris Snibbe/Harvard Staff Photographer.

    Imagine standing in the middle of Harvard Square and the swirling cacophony that comes with it: the thrum of passing cars, the rumbling of trucks and buses, the chattering tourists and students, and a busker or two competing for attention.

    Now imagine trying to filter out all that noise and pick up a whisper from a block away, and you have some idea of the challenge facing seismologists.

    Marine Denolle, assistant professor at the Radcliffe Institute and assistant professor of earth and planetary sciences in the Harvard Faculty of Arts and Sciences.

    Marine Denolle is one of several co-authors of a study that used computer-learning algorithms to identify small earthquakes buried in seismic noise. Other authors are Thibaut Perol, who has doctoral and master’s degrees from the Harvard John A. Paulson School of Engineering and Applied Sciences and the Harvard Institute for Applied Computational Science, and Michaël Gharbi, a doctoral student at the Massachusetts Institute of Technology. The study was published in the journal Science Advances.

    While researchers hope the algorithm may one day allow for development of a system for real-time earthquake detection, the ability to track limited “micro-seismicity” should help scientists draw a clearer picture of a number of processes in the Earth.

    “We can use this data to map fluid migration, whether it’s magma or wastewater or oil,” Denolle said. “In addition, there is a redistribution of stresses after an earthquake … but it’s very difficult to understand that process because the only data points we have are the earthquake, so we have to infer our models from there. This can help give us a more complete picture.”

    ___________________________________________________________________________________________________________

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the West Coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the affected region is estimated, and a warning is provided to local populations. The method can provide warning before the arrival of the S-wave, which brings the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
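    The warning window described above is simple arithmetic on the gap between P-wave and S-wave arrival times. The sketch below uses typical crustal wave speeds and an invented processing latency, not ShakeAlert's actual system parameters:

```python
# Back-of-the-envelope earthquake early warning time: the S-wave arrives
# later than the P-wave, and the gap grows with distance from the epicenter.
# Wave speeds and latency here are illustrative assumptions only.

VP_KM_S = 6.5   # assumed crustal P-wave speed
VS_KM_S = 3.7   # assumed crustal S-wave speed

def warning_time_s(distance_km, processing_latency_s=5.0):
    """Seconds between alert issuance (shortly after P-wave detection)
    and S-wave arrival at the site; negative means no usable warning."""
    t_p = distance_km / VP_KM_S
    t_s = distance_km / VS_KM_S
    return t_s - t_p - processing_latency_s

for d in (20, 100, 300):
    print(f"{d:3d} km from epicenter: {warning_time_s(d):5.1f} s of warning")
```

Close to the epicenter the latency eats the whole window (a "blind zone"), while sites tens to hundreds of kilometers away get the few seconds to few tens of seconds the text describes.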

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016, the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    ___________________________________________________________________________________________________________

    Denolle said that studying the data will be easy — because it’s already being collected.

    “Seismometers are incredibly sensitive,” she said. “They can pick up signals from everything from a person walking to ocean waves hitting on the shore to the movement of a tree’s roots as it sways in the wind.

    “But the signals of these smaller earthquakes are buried in that background noise,” she continued. “This is really about signal detection. That’s why deep-learning techniques are useful — because you can extract features from the noise.”
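    The classical baseline for this signal-detection problem is the STA/LTA (short-term average over long-term average) trigger, which deep-learning detectors like the one described here aim to improve on. The sketch below is illustrative only; the window lengths, threshold, and synthetic trace are invented, not values from the study:

```python
import random

# Minimal STA/LTA earthquake trigger: a sudden burst of energy raises the
# short-term average amplitude well above the long-term background level.

def sta_lta(trace, sta_len=5, lta_len=50):
    """Ratio of short-term to long-term mean absolute amplitude
    at each sample past the first long-term window."""
    ratios = []
    for i in range(lta_len, len(trace)):
        sta = sum(abs(v) for v in trace[i - sta_len:i]) / sta_len
        lta = sum(abs(v) for v in trace[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

random.seed(0)
# 400 samples of background noise with a small "event" burst at sample 200.
trace = [random.gauss(0, 1) for _ in range(400)]
for i in range(200, 220):
    trace[i] += random.gauss(0, 8)

ratios = sta_lta(trace)
trigger_on = max(ratios) > 3.0  # rule-of-thumb style threshold
print("event detected:", trigger_on)
```

A fixed amplitude-ratio threshold like this misses the smallest events buried in noise, which is exactly the gap the learned feature extraction described in the article is meant to close.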

    To build an algorithm capable of sorting through that seismic noise, Denolle and colleagues went to Oklahoma.

    There, researchers spent nearly two years collecting data on more than 2,000 recognized earthquakes. That data, along with seismic noise, was used to train a learning algorithm to pick out previously unidentified quakes hidden in the information.

    “We found that in a typical month, where there might be 100 earthquakes detected, there were actually at least 3,500 events,” she said. “That’s two or three orders of magnitude larger. So it works, but what we wanted to do was not only to detect earthquakes but to identify and locate them in real time for early warning systems.”

    Dots of varying color and size denote the location, depth, and intensity of seismic activity along the San Andreas fault in California. Kris Snibbe/Harvard Staff Photographer.

    To provide that early warning, Denolle said, the system has to work fast, so Perol designed the algorithm at the heart of the system with efficiency in mind. Because of the massive amounts of data collected in the field — some data sets are as large as 100 terabytes — Denolle said traditional algorithms could take minutes or longer just to analyze the data from a single day.

    “But with the code we developed, it works in seconds,” she said.

    Denolle and her colleagues later applied the algorithm to seismic data collected in Spain, and it was able to identify earthquakes even though the seismic stations were placed farther apart and the quake waveforms were dramatically different from those used to train the system.

    “We applied this code blindly, with all the optimization for Oklahoma, and it still detected most of the earthquakes,” Denolle said. “That suggests that this code is very generalizable.”

    Going forward, Denolle said she hopes to refine the algorithm to improve the ability to pinpoint the location of earthquakes. She plans to conduct additional tests using larger data sets, like those collected around volcanoes.

    “This is level one. We need to detect earthquakes to understand what’s going on in the Earth,” said Denolle. “Looking at these smaller events might tell us something about bigger events … so this is fundamental.”

    See the full article here.

  • richardmitnick 9:29 am on March 7, 2018 Permalink | Reply
    Tags: A new view of the moon, Harvard Gazette

    From Harvard Gazette: “A new view of the moon” 

    Harvard University

    Harvard Gazette

    Research led by grad student points to origins in ‘synestia,’ challenging widely accepted model.

    Simon Lock wants to change the way you think about the moon.

    A graduate student in Harvard’s Department of Earth and Planetary Sciences, Lock is the lead author of research that challenges mainstream thought by suggesting that the moon emerged from a massive, doughnut-shaped cloud of vaporized rock called a synestia. The study was published this week in the Journal of Geophysical Research: Planets.

    “The commonly accepted theory as to how the moon was formed is that a Mars-size body collided with the proto-Earth and spun material into orbit,” Lock said. “That mass settled into a disk and later accreted to form the moon. The body that was left after the impact was the Earth. This has been the canonical model for about 20 years.”

    It’s a compelling story, Lock said, but probably wrong.

    “Getting enough mass into orbit in the canonical scenario is actually very difficult, and there’s a very narrow range of collisions that might be able to do it,” he said. “There’s only a couple-of-degree window of impact angles and a very narrow range of sizes … and even then some impacts still don’t work.”

    “This new work explains features of the moon that are hard to resolve with current ideas,” said co-author Sarah Stewart, a professor of Earth and planetary sciences at the University of California, Davis. “This is the first model that can match the pattern of the moon’s composition.”

    Tests have shown that the isotopic “fingerprints” for both the Earth and moon are nearly identical, suggesting that both came from the same source, the researchers noted. But in the canonical story, the moon formed from the remnants of just one of the two colliding bodies.

    It’s not just similarities between the Earth and moon that raise questions about the conventional wisdom — their differences do as well.

    Many volatile elements that are relatively common on Earth, such as potassium, sodium, and copper, are far less abundant on the moon.

    “There hasn’t been a good explanation for this,” Lock said. “People have proposed various hypotheses for how the moon could have wound up with fewer volatiles, but no one has been able to quantitatively match the moon’s composition.”

    The scenario outlined by Lock and colleagues still begins with a massive collision, but rather than creating a disc of rocky material, the impact creates the synestia.

    “It’s huge,” Lock said. “It can be 10 times the size of the Earth, and because there’s so much energy in the collision, maybe 10 percent of the rock of Earth is vaporized, and the rest is liquid … so the way you form the moon out of a synestia is very different.”

    The phenomenon includes a “seed” — a small amount of liquid rock that gathers just off the center of the doughnut-like structure. As the structure cools, vaporized rock condenses and rains down toward the center of the synestia. Some of the rain runs into the moon, causing it to grow.

    “The rate of rainfall is about 10 times that of a hurricane on Earth,” Lock said. “Over time, the whole structure shrinks, and the moon emerges from the vapor. Eventually, the whole synestia condenses and what’s left is a ball of spinning liquid rock that eventually forms the Earth as we know it today.”

    The model addresses each of the problems with the canonical model for the moon’s creation, Lock said. Since both the Earth and moon are created from the same cloud of vaporized rock, they naturally share similar isotope fingerprints. The lack of volatile elements on the moon, meanwhile, can be explained by it having formed surrounded by vapor and at 4,000‒6,000 degrees Fahrenheit.

    “This is a dramatically different way of forming the moon,” Lock said. “You just don’t think of a satellite forming inside another body, but this is what appears to happen.”

    Lock was quick to note that the work is still taking shape.

    “This is a basic model,” he said. “We’ve done calculations of each of the processes that go into forming the moon and shown that the model could work, but there are various aspects of our theory that will need more interrogation.

    “For example, when the moon is in this vapor, what does it do to that vapor? How does it perturb it? How does the vapor flow past the moon? These are all things we need to go back and examine in more detail.”

    Along with Lock and Stewart, researchers on the study were Matija Ćuk (SETI Institute), Stein Jacobsen (Harvard), Zoë Leinhardt (University of Bristol), Mia Mace (Bristol), and Michail Petaev (Harvard).

    See the full article here.

  • richardmitnick 12:12 pm on January 25, 2018 Permalink | Reply
    Tags: Harvard Gazette, Mathematicians work to expand their new pictorial mathematical language into other areas, Picture-perfect approach to science

    From Harvard Gazette: “Picture-perfect approach to science” 

    Harvard University

    Harvard Gazette

    January 24, 2018
    Peter Reuell

    Zhengwei Liu (left) and Arthur Jaffe are leading a new project to expand quon, their pictorial math language developed to help understand quantum information theory, into new fields from algebra to M-theory. Stephanie Mitchell/Harvard Staff Photographer.

    Mathematicians work to expand their new pictorial mathematical language into other areas.

    A picture is worth 1,000 words, the saying goes, but a group of Harvard-based scientists is hoping that it may also be worth the same number of equations.

    Pictorial laws appear to unify ideas from disparate, interdisciplinary fields of knowledge, linking them beautifully like elements of a da Vinci painting. The group is working to expand the pictorial mathematical language first outlined last year by Arthur Jaffe, the Landon T. Clay Professor of Mathematics and Theoretical Science, and postdoctoral fellow Zhengwei Liu.

    “There is one word you can take away from this: excitement,” Jaffe said. “And that’s because we’re not trying just to solve a problem here or there, but we are trying to develop a new way to think about mathematics, through developing and using different mathematical languages based on pictures in two, three, and more dimensions.”

    Last year they created a 3-D language called quon, which they used to understand concepts related to quantum information theory. Now, new research has offered tantalizing hints that quon could offer insights into a host of other areas in mathematics, from algebra to Fourier analysis, as well as in theoretical physics, from statistical physics to string theory. The researchers describe their vision of the project in a paper that appeared Jan. 2 in the journal Proceedings of the National Academy of Sciences.

    “There has been a great deal of evolution in this work over the past year, and we think this is the tip of the iceberg,” Jaffe said. “We’ve discovered that the ideas we used for quantum information are relevant to a much broader spectrum of subjects. We are very grateful to have received a grant from the Templeton Religion Trust that enabled us to assemble a team of researchers last summer to pursue this project further, including undergraduates, graduate students, and postdocs, as well as senior collaborators at other institutions.”

    The core team involves distinguished mathematicians such as Adrian Ocneanu, a visiting professor this year at Harvard, Vaughan Jones, and Alina Vdovina. As important are rising stars who have come to Harvard from around the world, including Jinsong Wu from the Harbin Institute of Technology and William Norledge, a recent graduate from the University of Newcastle. Also involved are students such as Alex Wozniakowski, one of the original members of the project and now a student at Nanyang Technological University in Singapore, visiting graduate students Kaifeng Bu from Zhejiang University in Hangzhou, China, Weichen Gu and Boqing Xue from the Chinese Academy of Sciences in Beijing, Harvard graduate student Sruthi Narayanan, and Chase Bendarz, an undergraduate at Northwestern University and Harvard.

    An illustration of the project is pictured in Lyman Building at Harvard University. Stephanie Mitchell/Harvard Staff Photographer.

    While images have been used in mathematics since ancient times, Jaffe and colleagues believe that the team’s approach, which involves applying pictures to math generally and using images to explore the connections between math and subjects such as physics and cognitive science, may mark the emergence of a new field.

    Among the sort of problems the team has already been able to solve, Liu said, is a pictorial way to think about Fourier analysis.

    “We developed this, motivated by several ideas from Ocneanu,” he said. “Immediately, we used this to give new insights into quantum information. But we also found that we could prove an elaborate algebraic identity for 6j-symbols,” a standard tool in representation theory, in theoretical physics, and in chemistry.

    That identity had been found in an elementary case, but Harvard mathematician Shamil Shakirov conjectured that it was true in a general form. The group has now posted a proof on arXiv.org that is under review for publication later in the year. Another very general family of identities that the group has understood simply using the geometric Fourier transform is known as the Verlinde fusion formulas.

    “By looking at the mathematical analysis of pictures, we also found some really unexpected new inequalities. They generalize the famous uncertainty principles of [Werner] Heisenberg and of [G.H.] Hardy and become parts of a larger story,” Liu said. “So the mathematics of the picture languages themselves is quite interesting to understand. We then see their implications on other topics.”
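    For context, the two classical uncertainty principles Liu mentions can be stated as follows (these are standard textbook forms, not notation from the study):

```latex
% Heisenberg's uncertainty principle, for \|f\|_2 = 1 and the Fourier
% transform convention \hat{f}(\xi) = \int f(x)\, e^{-2\pi i x \xi}\, dx:
\left( \int x^2 \, |f(x)|^2 \, dx \right)
\left( \int \xi^2 \, |\hat{f}(\xi)|^2 \, d\xi \right)
  \;\ge\; \frac{1}{16\pi^2}.

% Hardy's uncertainty principle: if, for some constant C,
|f(x)| \le C e^{-\pi x^2}
  \quad \text{and} \quad
|\hat{f}(\xi)| \le C e^{-\pi \xi^2},
% then f is a constant multiple of the Gaussian e^{-\pi x^2}.
```

Both are statements that a function and its Fourier transform cannot simultaneously be sharply concentrated, which is the sense in which the group's pictorial inequalities generalize them.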

    “I am very taken by this project, because before this, I was working on quantum information, but the only way I knew to do that was using linear algebra,” said Bu. “But working with Arthur and Zhengwei, we’ve been able to use this pictorial language to derive new ideas and geometric tools that we can use to develop new quantum protocols. They have already been useful, and we foresee that these ideas could have wide-ranging applications in the future.

    “It’s amazing, I think, that we can use a simple pictorial language to describe very complicated algebra equations,” Bu continued. “I think this is not only a new approach, but a new field for mathematics.”

    Ocneanu interjected, “Ultimately what higher-dimensional picture language does is to translate the structure of space into mathematics in a natural way.”

    Whereas traditional, linear algebra flattens 3-D concepts into a single line of equations, he said, the picture language allows scientists to use 3-D and higher-dimensional spaces to translate the world around them.

    “Space, or more generally space-time, is a kind of computational machine,” said Ocneanu. “We should really translate what space is doing into the kinds of things mathematicians use, so we can read the structure of space.”

    For Norledge, the new mathematical language is striking in the way it builds from a handful of relatively simple concepts into a complex theory.

    “My background is in representation theory; my thesis is in this area of math called geometric group theory,” he said. “So with a background of using pictures and geometric objects, it helps to apply mathematics in this way. We’re still trying to realize this, but if this all goes through and succeeds, you’ve got a very beautiful area of mathematics where you start with just a few axioms, and just from that beginning you can generalize this highly nontrivial theory with this beautiful structure.”

    “We hope that eventually one can implement the ideas we are studying in new theoretical-physics models, as well as in some practical terms,” Jaffe said. “To share in our excitement, take a look at our website.”

    See the full article here.
