Tagged: Computer Science

  • richardmitnick 12:59 pm on March 11, 2020 Permalink | Reply
    Tags: Computer Science

    From MIT News: “Novel method for easier scaling of quantum devices” 

    From MIT News

    March 5, 2020
    Rob Matheson

    An MIT team found a way to “recruit” normally disruptive quantum bits (qubits) in diamond to, instead, help carry out quantum operations. This approach could be used to help scale up quantum computing systems. Image: Christine Daniloff, MIT.

    System “recruits” defects that usually cause disruptions, using them to instead carry out quantum operations.

    In an advance that may help researchers scale up quantum devices, an MIT team has developed a method to “recruit” neighboring quantum bits made of nanoscale defects in diamond, so that instead of causing disruptions they help carry out quantum operations.

    Quantum devices perform operations using quantum bits, called “qubits,” that can represent the two states corresponding to classic binary bits — a 0 or 1 — or a “quantum superposition” of both states simultaneously. The unique superposition state can enable quantum computers to solve problems that are practically impossible for classical computers, potentially spurring breakthroughs in biosensing, neuroimaging, machine learning, and other applications.
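    To make the superposition idea concrete, here is a minimal, purely illustrative sketch (not from the paper): a single qubit represented as a two-component state vector, with an equal superposition giving 50/50 measurement statistics.

```python
import numpy as np

# Illustrative only: a single qubit as a 2-component complex state vector.
# |0> and |1> are the classical-like basis states; an equal superposition
# puts amplitude 1/sqrt(2) on each.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
superposition = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(superposition) ** 2            # -> [0.5, 0.5]

# Simulate repeated measurements: each one collapses to 0 or 1 at random.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(probs, np.bincount(outcomes))           # roughly 500 zeros, 500 ones
```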

    One promising qubit candidate is a defect in diamond, called a nitrogen-vacancy (NV) center, which holds electrons that can be manipulated by light and microwaves. In response, the defect emits photons that can carry quantum information. Because of their solid-state environments, however, NV centers are always surrounded by many other unknown defects with different spin properties, called “spin defects.” When the measurable NV-center qubit interacts with those spin defects, the qubit loses its coherent quantum state — “decoheres”— and operations fall apart. Traditional solutions try to identify these disrupting defects to protect the qubit from them.

    In a paper published Feb. 25 in Physical Review Letters, the researchers describe a method that uses an NV center to probe its environment and uncover the existence of several nearby spin defects. Then, the researchers can pinpoint the defects’ locations and control them to achieve a coherent quantum state — essentially leveraging them as additional qubits.

    In experiments, the team generated and detected quantum coherence among three electronic spins — scaling up the size of the quantum system from a single qubit (the NV center) to three qubits (adding two nearby spin defects). The findings demonstrate a step forward in scaling up quantum devices using NV centers, the researchers say.

    “You always have unknown spin defects in the environment that interact with an NV center. We say, ‘Let’s not ignore these spin defects, which [if left alone] could cause faster decoherence. Let’s learn about them, characterize their spins, learn to control them, and ‘recruit’ them to be part of the quantum system,’” says lead co-author Won Kyu Calvin Sun, a graduate student in the Department of Nuclear Science and Engineering and a member of the Quantum Engineering group. “Then, instead of using a single NV center [or just] one qubit, we can then use two, three, or four qubits.”

    Joining Sun on the paper are lead author Alexandre Cooper ’16 of Caltech; Jean-Christophe Jaskula, a research scientist in the MIT Research Laboratory of Electronics (RLE) and member of the Quantum Engineering group at MIT; and Paola Cappellaro, a professor in the Department of Nuclear Science and Engineering, a member of RLE, and head of the Quantum Engineering group at MIT.

    Characterizing defects

    NV centers occur where carbon atoms in two adjacent places in a diamond’s lattice structure are missing — one atom is replaced by a nitrogen atom, and the other space is an empty “vacancy.” The NV center essentially functions as an atom, with a nucleus and surrounding electrons that are extremely sensitive to tiny variations in surrounding electrical, magnetic, and optical fields. Sweeping microwaves across the center, for instance, changes, and thus controls, the spin states of the nucleus and electrons.

    Spins are measured using a type of magnetic resonance spectroscopy. This method plots the frequencies of electron and nucleus spins in megahertz as a “resonance spectrum” that can dip and spike, like a heart monitor. Spins of an NV center under certain conditions are well-known. But the surrounding spin defects are unknown and difficult to characterize.

    In their work, the researchers identified, located, and controlled two electron-nuclear spin defects near an NV center. They first sent microwave pulses at specific frequencies to control the NV center. Simultaneously, they pulsed another microwave that probed the surrounding environment for other spins. They then observed the resonance spectrum of the spin defects interacting with the NV center.

    The spectrum dipped in several spots when the probing pulse interacted with nearby electron-nuclear spins, indicating their presence. The researchers then swept a magnetic field across the area at different orientations. For each orientation, the defect would “spin” at different energies, causing different dips in the spectrum. Basically, this allowed them to measure each defect’s spin in relation to each magnetic orientation. They then plugged the energy measurements into a model equation, with unknown parameters, that describes the quantum interactions of an electron-nuclear spin defect under a magnetic field. Solving that equation allowed them to characterize each defect.
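    The article does not give the actual model equation, so the fitting step can only be sketched with a toy stand-in: assume a made-up resonance model with a handful of unknown coupling parameters, record dip frequencies at several field orientations, and recover the parameters by least squares. The function model_freq, its parameters, and the numbers below are illustrative assumptions, not the published Hamiltonian.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the paper's model: a defect's resonance frequency shifted
# by a magnetic field B applied along direction (theta, phi), with an unknown
# offset f0 and anisotropic couplings (ax, ay, az). The real Hamiltonian is
# more involved; this only illustrates "fit unknown parameters to dips
# measured at many field orientations".
def model_freq(params, B, theta, phi):
    f0, ax, ay, az = params
    bx = B * np.sin(theta) * np.cos(phi)
    by = B * np.sin(theta) * np.sin(phi)
    bz = B * np.cos(theta)
    return f0 + np.sqrt((ax * bx) ** 2 + (ay * by) ** 2 + (az * bz) ** 2)

def residuals(params, B, theta, phi, measured):
    return model_freq(params, B, theta, phi) - measured

# Synthetic "measurements": dip frequencies recorded at 25 field orientations.
rng = np.random.default_rng(1)
theta = rng.uniform(0, np.pi, 25)
phi = rng.uniform(0, 2 * np.pi, 25)
B = 0.01                                        # tesla (arbitrary toy value)
true = np.array([2.5e6, 1.1e8, 0.9e8, 1.4e8])   # Hz, then Hz per tesla
measured = model_freq(true, B, theta, phi) + rng.normal(0, 1e3, theta.size)

fit = least_squares(residuals, x0=[2e6, 1e8, 1e8, 1e8],
                    args=(B, theta, phi, measured))
print(fit.x)   # recovered (toy) parameters, up to sign, characterize the defect
```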

    Locating and controlling

    After characterizing the defects, the next step was to characterize the interaction between the defects and the NV, which would simultaneously pinpoint their locations. To do so, they again swept the magnetic field at different orientations, but this time looked for changes in energies describing the interactions between the two defects and the NV center. The stronger the interaction, the closer they were to one another. They then used those interaction strengths to determine where the defects were located, in relation to the NV center and to each other. That generated a good map of the locations of all three defects in the diamond.

    Characterizing the defects and their interaction with the NV center allows for full control, which involves a few more steps to demonstrate. First, they pump the NV center and surrounding environment with a sequence of pulses of green light and microwaves that help put the three qubits in a well-known quantum state. Then, they use another sequence of pulses that ideally entangles the three qubits briefly, and then disentangles them, which enables them to detect the three-spin coherence of the qubits.

    The researchers verified the three-spin coherence by measuring a major spike in the resonance spectrum. The frequency of the recorded spike was essentially the sum of the frequencies of the three qubits. If the three qubits had little or no entanglement, for instance, there would have been four separate spikes of smaller height.
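    The "sum of the frequencies" statement can be checked with a quick numerical illustration (just arithmetic, not the experiment): a joint coherence of three spins accumulates phase at the sum of the individual frequencies, so its spectrum peaks there.

```python
import numpy as np

# Not the experiment, only the arithmetic behind the "sum of frequencies"
# statement: a joint three-spin coherence evolves at f1 + f2 + f3.
f = np.array([3.0, 5.0, 11.0])            # MHz, arbitrary toy values
t = np.arange(0, 4.0, 1e-3)               # microseconds
signal = np.exp(2j * np.pi * f[0] * t) \
       * np.exp(2j * np.pi * f[1] * t) \
       * np.exp(2j * np.pi * f[2] * t)    # = exp(2j*pi*(f1+f2+f3)*t)

spectrum = np.abs(np.fft.fft(signal))
freqs = np.fft.fftfreq(t.size, d=1e-3)    # MHz
print(freqs[np.argmax(spectrum)])         # ~19.0 = 3 + 5 + 11
```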

    “We come into a black box [environment with each NV center]. But when we probe the NV environment, we start seeing dips and wonder which types of spins give us those dips. Once we [figure out] the spin of the unknown defects, and their interactions with the NV center, we can start controlling their coherence,” Sun says. “Then, we have full universal control of our quantum system.”

    Next, the researchers hope to better understand other environmental noise surrounding qubits. That will help them develop more robust error-correcting codes for quantum circuits. Furthermore, because on average the process of NV center creation in diamond creates numerous other spin defects, the researchers say they could potentially scale up the system to control even more qubits. “It gets more complex with scale. But if we can start finding NV centers with more resonance spikes, you can imagine starting to control larger and larger quantum systems,” Sun says.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 2:12 pm on March 5, 2020 Permalink | Reply
    Tags: "Integrating electronics onto physical prototypes", Computer Science, , In place of flat “breadboards” 3D-printed CurveBoards enable easier testing of circuit design on electronics products.,   

    From MIT News: “Integrating electronics onto physical prototypes” 

    From MIT News

    March 3, 2020
    Rob Matheson

    In place of flat “breadboards,” 3D-printed CurveBoards enable easier testing of circuit design on electronics products.

    CurveBoards are 3D breadboards — which are commonly used to prototype circuits — that can be designed by custom software, 3D printed, and directly integrated into the surface of physical objects, such as smart watches, bracelets, helmets, headphones, and even flexible electronics. CurveBoards can give designers an additional prototyping technique to better evaluate how circuits will look and feel on physical products that users interact with. Image: Dishita Turakhia and Junyi Zhu.

    MIT researchers have invented a way to integrate “breadboards” — flat platforms widely used for electronics prototyping — directly onto physical products. The aim is to provide a faster, easier way to test circuit functions and user interactions with products such as smart devices and flexible electronics.

    Breadboards are rectangular boards with arrays of pinholes drilled into the surface. Many of the holes have metal connections and contact points between them. Engineers can plug components of electronic systems — from basic circuits to full computer processors — into the pinholes where they want them to connect. Then, they can rapidly test, rearrange, and retest the components as needed.

    But breadboards have remained that same shape for decades. For that reason, it’s difficult to test how the electronics will look and feel on, say, wearables and various smart devices. Generally, people will first test circuits on traditional breadboards, then slap them onto a product prototype. If the circuit needs to be modified, it’s back to the breadboard for testing, and so on.

    In a paper being presented at CHI (the Conference on Human Factors in Computing Systems), the researchers describe “CurveBoards,” 3D-printed objects with the structure and function of a breadboard integrated onto their surfaces. Custom software automatically designs the objects, complete with distributed pinholes that can be filled with conductive silicone to test electronics. The end products are accurate representations of the real thing, but with breadboard surfaces.

    CurveBoards “preserve an object’s look and feel,” the researchers write in their paper, while enabling designers to try out component configurations and test interactive scenarios during prototyping iterations. In their work, the researchers printed CurveBoards for smart bracelets and watches, Frisbees, helmets, headphones, a teapot, and a flexible, wearable e-reader.

    “On breadboards, you prototype the function of a circuit. But you don’t have context of its form — how the electronics will be used in a real-world prototype environment,” says first author Junyi Zhu, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “Our idea is to fill this gap, and merge form and function testing in very early stage of prototyping an object. … CurveBoards essentially add an additional axis to the existing [three-dimensional] XYZ axes of the object — the ‘function’ axis.”

    Joining Zhu on the paper are CSAIL graduate students Lotta-Gili Blumberg, Martin Nisser, and Ethan Levi Carlson; Department of Electrical Engineering and Computer Science (EECS) undergraduate students Jessica Ayeley Quaye and Xin Wen; former EECS undergraduate students Yunyi Zhu and Kevin Shum; and Stefanie Mueller, the X-Window Consortium Career Development Assistant Professor in EECS.

    Custom software and hardware

    A core component of the CurveBoard is custom design-editing software. Users import a 3D model of an object. Then, they select the command “generate pinholes,” and the software automatically maps all pinholes uniformly across the object. Users then choose automatic or manual layouts for connectivity channels. The automatic option lets users explore a different layout of connections across all pinholes with the click of a button. For manual layouts, interactive tools can be used to select groups of pinholes and indicate the type of connection between them. The final design is exported to a file for 3D printing.

    When a 3D object is uploaded, the software essentially forces its shape into a “quadmesh” — where the object is represented as a bunch of small squares, each with individual parameters. In doing so, it creates a fixed spacing between the squares. Pinholes — which are cones, with the wide end on the surface and tapering down — will be placed at each point where the corners of the squares touch. For channel layouts, some geometric techniques ensure the chosen channels will connect the desired electrical components without crossing over one another.
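    As a rough sketch of the data structures such a pipeline might use (hypothetical names and fields, not the researchers' actual software), pinholes can be attached to quadmesh vertices and channels modeled as groups of electrically joined pinholes, with a simplistic stand-in for the no-crossing check.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

# Hypothetical structures sketching the CurveBoard idea as described: a
# quadmesh over the object surface, a cone-shaped pinhole at each vertex,
# and channel groups that connect chosen pinholes.
Vertex = Tuple[float, float, float]       # 3D position of a quadmesh corner

@dataclass
class Pinhole:
    vertex: Vertex                        # placed where quad corners meet
    top_radius: float = 1.2               # mm, wide end at the surface
    bottom_radius: float = 0.4            # mm, tapered end of the cone

@dataclass
class CurveBoardLayout:
    pinholes: List[Pinhole] = field(default_factory=list)
    # each channel is a set of pinhole indices that are electrically joined
    channels: List[Set[int]] = field(default_factory=list)

    def add_channel(self, pins: Set[int]) -> bool:
        # simplistic stand-in for the geometric no-crossing checks:
        # here a pinhole may belong to at most one channel
        used = set().union(*self.channels) if self.channels else set()
        if pins & used:
            return False
        self.channels.append(pins)
        return True

def generate_pinholes(quad_vertices: List[Vertex]) -> CurveBoardLayout:
    layout = CurveBoardLayout()
    for v in quad_vertices:
        layout.pinholes.append(Pinhole(vertex=v))
    return layout

# usage: vertices would come from quadmeshing an imported 3D model
layout = generate_pinholes([(0, 0, 0), (5, 0, 0), (5, 5, 1), (0, 5, 1)])
print(layout.add_channel({0, 1}), layout.add_channel({1, 2}))  # True, False
```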

    In their work, the researchers 3D printed objects using a flexible, durable, nonconductive silicone. To provide connectivity channels, they created a custom conductive silicone that can be syringed into the pinholes and then flows through the channels after printing. The conductive silicone is a mixture of silicone materials designed to have minimal electrical resistance, allowing various types of electronics to function.

    To validate the CurveBoards, the researchers printed a variety of smart products. Headphones, for instance, came equipped with menu controls for speakers and music-streaming capabilities. An interactive bracelet included a digital display, LED, and photoresistor for heart-rate monitoring, and a step-counting sensor. A teapot included a small camera to track the tea’s color, as well as colored lights on the handle to indicate hot and cold areas. They also printed a wearable e-book reader with a flexible display.

    Better, faster prototyping

    In a user study, the team investigated the benefits of CurveBoards prototyping. They split six participants with varying prototyping experience into two sections: One used traditional breadboards and a 3D-printed object, and the other used only a CurveBoard of the object. Both sections designed the same prototype but switched back and forth between sections after completing designated tasks. In the end, five of the six participants preferred prototyping with the CurveBoard. Feedback indicated the CurveBoards were overall faster and easier to work with.

    But CurveBoards are not designed to replace breadboards, the researchers say. Instead, they’d work particularly well as a so-called “midfidelity” step in the prototyping timeline, meaning between initial breadboard testing and the final product. “People love breadboards, and there are cases where they’re fine to use,” Zhu says. “This is for when you have an idea of the final object and want to see, say, how people interact with the product. It’s easier to have a CurveBoard instead of circuits stacked on top of a physical object.”

    Next, the researchers hope to design general templates of common objects, such as hats and bracelets. Right now, a new CurveBoard must be built for each new object. Ready-made templates, however, would let designers quickly experiment with basic circuits and user interaction, before designing their specific CurveBoard.

    Additionally, the researchers want to move some early-stage prototyping steps entirely to the software side. The idea is that people can design and test circuits — and possibly user interaction — entirely on the 3D model generated by the software. After many iterations, they can 3D print a more finalized CurveBoard. “That way you’ll know exactly how it’ll work in the real world, enabling fast prototyping,” Zhu says. “That would be a more ‘high-fidelity’ step for prototyping.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 4:40 pm on February 18, 2020 Permalink | Reply
    Tags: Computer Science, MIP (multiprover interactive proof)

    From Science News: “How a quantum technique highlights math’s mysterious link to physics” 

    From Science News

    February 17, 2020
    Tom Siegfried

    Verifying proofs to very hard math problems is possible with infinite quantum entanglement.

    A technique that relies on quantum entanglement (illustrated) expands the realm of mathematical problems for which the solution could (in theory) be verified. inkoly/iStock/Getty Images Plus.

    It has long been a mystery why pure math can reveal so much about the nature of the physical world.

    Antimatter was discovered in Paul Dirac’s equations before being detected in cosmic rays. Quarks appeared in symbols sketched out on a napkin by Murray Gell-Mann several years before they were confirmed experimentally. Einstein’s equations for gravity suggested the universe was expanding a decade before Edwin Hubble provided the proof. Einstein’s math also predicted gravitational waves a full century before behemoth apparatuses detected those waves (which were produced by collisions of black holes — also first inferred from Einstein’s math).

    Nobel laureate physicist Eugene Wigner alluded to math’s mysterious power as the “unreasonable effectiveness of mathematics in the natural sciences.” Somehow, Wigner said, math devised to explain known phenomena contains clues to phenomena not yet experienced — the math gives more out than was put in. “The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and … there is no rational explanation for it,” Wigner wrote in 1960.

    But maybe there’s a new clue to what that explanation might be. Perhaps math’s peculiar power to describe the physical world has something to do with the fact that the physical world also has something to say about mathematics.

    At least that’s a conceivable implication of a new paper that has startled the interrelated worlds of math, computer science and quantum physics.

    In an enormously complicated 165-page paper, computer scientist Zhengfeng Ji and colleagues present a result that penetrates to the heart of deep questions about math, computing and their connection to reality. It’s about a procedure for verifying the solutions to very complex mathematical propositions, even some that are believed to be impossible to solve. In essence, the new finding boils down to demonstrating a vast gulf between infinite and almost infinite, with huge implications for certain high-profile math problems. Seeing into that gulf, it turns out, requires the mysterious power of quantum physics.

    Everybody involved has long known that some math problems are too hard to solve (at least without unlimited time), but a proposed solution could be rather easily verified. Suppose someone claims to have the answer to such a very hard problem. Their proof is much too long to check line by line. Can you verify the answer merely by asking that person (the “prover”) some questions? Sometimes, yes. But for very complicated proofs, probably not. If there are two provers, though, both in possession of the proof, asking each of them some questions might allow you to verify that the proof is correct (at least with very high probability). There’s a catch, though — the provers must be kept separate, so they can’t communicate and therefore collude on how to answer your questions. (This approach is called MIP, for multiprover interactive proof.)
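    For flavor, here is a toy, textbook-style two-prover check (not the construction in the new paper): the provers claim to hold a proper 3-coloring of a graph too large for the verifier to inspect, and the verifier quizzes the two isolated provers about a random edge and a random endpoint, accepting only if the answers are consistent and the edge is properly colored. Honest provers always pass; a cheating pair fails some fraction of rounds, and repetition amplifies that.

```python
import random

# Toy two-prover verification of a claimed proper 3-coloring. The provers
# cannot communicate, so inconsistent answers eventually get caught.
def verify_round(edges, prover1, prover2):
    u, v = random.choice(edges)
    c_u, c_v = prover1(u), prover1(v)        # ask prover 1 about both ends
    w = random.choice([u, v])                # cross-check one end with prover 2
    if prover2(w) != (c_u if w == u else c_v):
        return False                          # inconsistent answers: reject
    return c_u != c_v                         # edge must be properly colored

# Honest provers: both answer from the same valid coloring of a small graph.
coloring = {0: "red", 1: "green", 2: "blue", 3: "red"}
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
honest = lambda node: coloring[node]

rounds = 200
print(all(verify_round(edges, honest, honest) for _ in range(rounds)))  # True
```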

    Verifying a proof without actually seeing it is not that strange a concept. Many examples exist for how a prover can convince you that they know the answer to a problem without actually telling you the answer. A standard method for coding secret messages, for example, relies on using a very large number (perhaps hundreds of digits long) to encode the message. It can be decoded only by someone who knows the prime factors that, when multiplied together, produce the very large number. It’s impossible to figure out those prime numbers (within the lifetime of the universe) even with an army of supercomputers. So if someone can decode your message, they’ve proved to you that they know the primes, without needing to tell you what they are.
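    A tiny worked example in the spirit of RSA, with deliberately small primes just to make the arithmetic visible, shows the "decoding proves you know the factors" idea.

```python
# Toy illustration (in the spirit of RSA) of "decoding proves you know the
# primes". Real systems use primes hundreds of digits long; these tiny
# values exist only to keep the numbers readable.
p, q = 61, 53
n = p * q                      # 3233: the public "very large number"
phi = (p - 1) * (q - 1)        # computable only if you know p and q
e = 17                         # public encoding exponent
d = pow(e, -1, phi)            # private decoding exponent, requires phi

message = 42
ciphertext = pow(message, e, n)          # anyone can encode with (e, n)
decoded = pow(ciphertext, d, n)          # decoding requires d, hence p and q
print(ciphertext, decoded)               # decoded == 42
```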

    Someday, though, calculating those primes might be feasible, with a future-generation quantum computer. Today’s quantum computers are relatively rudimentary, but in principle, an advanced model could crack codes by calculating the prime factors for enormously big numbers.

    That power stems, at least in part, from the weird phenomenon known as quantum entanglement. And it turns out that, similarly, quantum entanglement boosts the power of MIP provers. By sharing an infinite amount of quantum entanglement, MIP provers can verify vastly more complicated proofs than nonquantum MIP provers.

    It is obligatory to say that entanglement is what Einstein called “spooky action at a distance.” But it’s not action at a distance, and it just seems spooky. Quantum particles (say photons, particles of light) from a common origin (say, both spit out by a single atom) share a quantum connection that links the results of certain measurements made on the particles even if they are far apart. It may be mysterious, but it’s not magic. It’s physics.

    Say two provers share a supply of entangled photon pairs. They can convince a verifier that they have a valid proof for some problems. But for a large category of extremely complicated problems, this method works only if the supply of such entangled particles is infinite. A large amount of entanglement is not enough. It has to be literally unlimited. A huge but finite amount of entanglement can’t even approximate the power of an infinite amount of entanglement.

    As Emily Conover explains in her report for Science News, this discovery proves false a couple of widely believed mathematical conjectures. One, known as Tsirelson’s problem, specifically suggested that a sufficient amount of entanglement could approximate what you could do with an infinite amount. Tsirelson’s problem was mathematically equivalent to another open problem, known as Connes’ embedding conjecture, which has to do with the algebra of operators, the kinds of mathematical expressions that are used in quantum mechanics to represent quantities that can be observed.

    Refuting the Connes conjecture, and showing that MIP plus entanglement could be used to verify immensely complicated proofs, stunned many in the mathematical community. (One expert, upon hearing the news, compared his feces to bricks.) But the new work isn’t likely to make any immediate impact in the everyday world. For one thing, all-knowing provers do not exist, and if they did they would probably have to be future super-AI quantum computers with unlimited computing capability (not to mention an unfathomable supply of energy). Nobody knows how to do that in even Star Trek’s century.

    Still, pursuit of this discovery quite possibly will turn up deeper implications for math, computer science and quantum physics.

    It probably won’t shed any light on controversies over the best way to interpret quantum mechanics, as computer science theorist Scott Aaronson notes in his blog about the new finding. But perhaps it could provide some sort of clues regarding the nature of infinity. That might be good for something, perhaps illuminating whether infinity plays a meaningful role in reality or is a mere mathematical idealization.

    On another level, the new work raises an interesting point about the relationship between math and the physical world. The existence of quantum entanglement, a (surprising) physical phenomenon, somehow allows mathematicians to solve problems that seem to be strictly mathematical. Wondering why physics helps out math might be just as entertaining as contemplating math’s unreasonable effectiveness in helping out physics. Maybe even one will someday explain the other.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:56 am on February 5, 2020 Permalink | Reply
    Tags: Computer Science, The CSAIL team's new system RFocus: a software-controlled “smart surface” that uses more than 3,000 antennas to maximize the strength of the signal at the receiver

    From MIT News: “A smart surface for smart devices” 

    From MIT News

    February 3, 2020
    Adam Conner-Simons | CSAIL

    Venkat Arun of MIT stands in front of the prototype of RFocus, a software-controlled “smart surface” that uses more than 3,000 antennas to maximize the strength of the signal at the receiver. Photo: Jason Dorfman/CSAIL

    The RFocus platform has more than 3,000 tiny, inexpensive antennas that are used to amplify nearby wireless signals. Photo: Jason Dorfman/CSAIL

    We’ve heard it for years: 5G is coming.

    And yet, while high-speed 5G internet has indeed slowly been rolling out in a smattering of countries across the globe, many barriers remain that have prevented widespread adoption.

    One issue is that we can’t get faster internet speeds without more efficient ways of delivering wireless signals. The general trend has been to simply add antennas to either the transmitter (i.e., Wi-Fi access points and cell towers) or the receiver (such as a phone or laptop). But that’s grown difficult to do as companies increasingly produce smaller and smaller devices, including a new wave of “internet of things” systems.

    Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) looked at the problem recently and wondered if people have had things completely backwards this whole time. Rather than focusing on the transmitters and receivers, what if we could amplify the signal by adding antennas to an external surface in the environment itself?

    That’s the idea behind the CSAIL team’s new system RFocus, a software-controlled “smart surface” that uses more than 3,000 antennas to maximize the strength of the signal at the receiver. Tests showed that RFocus could improve the average signal strength by a factor of almost 10. Practically speaking, the platform is also very cost-effective, with each antenna costing only a few cents. The antennas are inexpensive because they don’t process the signal at all; they merely control how it is reflected. Lead author Venkat Arun says that the project represents what is, to the team’s knowledge, the largest number of antennas ever used for a single communication link.

    While the system could serve as another form of WiFi range extender, the researchers say its most valuable use could be in the network-connected homes and factories of the future.

    For example, imagine a warehouse with hundreds of sensors for monitoring machines and inventory. MIT Professor Hari Balakrishnan says that systems for that type of scale would normally be prohibitively expensive and/or power-intensive, but could be possible with a low-power interconnected system that uses an approach like RFocus.

    “The core goal here was to explore whether we can use elements in the environment and arrange them to direct the signal in a way that we can actually control,” says Balakrishnan, senior author on a new paper about RFocus that will be presented next month at the USENIX Symposium on Networked Systems Design and Implementation (NSDI) in Santa Clara, California. “If you want to have wireless devices that transmit at the lowest possible power, but give you a good signal, this seems to be one extremely promising way to do it.”

    RFocus is a two-dimensional surface composed of thousands of antennas that can each either let the signal through or reflect it. The state of the elements is set by a software controller that the team developed with the goal of maximizing the signal strength at a receiver.

    “The biggest challenge was determining how to configure the antennas to maximize signal strength without using any additional sensors, since the signals we measure are very weak,” says PhD student Venkat Arun, lead author of the new paper alongside Balakrishnan. “We ended up with a technique that is surprisingly robust.”
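    The article does not spell out the configuration algorithm, so the following is only a generic illustration of a software controller for binary (pass or reflect) elements: toggle an element, keep the change if the measured signal strength improves, revert it otherwise. The measurement function and all numbers are stand-ins, not RFocus internals.

```python
import random

# Illustrative sketch only, not the team's actual method: a controller that
# toggles binary surface elements and keeps changes that raise the signal
# strength measured at the receiver.
N_ELEMENTS = 3000

def measure_rssi(state, best_pattern):
    # Stand-in for a real over-the-air measurement: in this toy model the
    # signal is strongest when each element matches an unknown best setting.
    return sum(1 for s, b in zip(state, best_pattern) if s == b)

def configure(iters=10_000, seed=0):
    rng = random.Random(seed)
    best_pattern = [rng.choice([0, 1]) for _ in range(N_ELEMENTS)]  # hidden
    state = [rng.choice([0, 1]) for _ in range(N_ELEMENTS)]
    score = measure_rssi(state, best_pattern)
    for _ in range(iters):
        i = rng.randrange(N_ELEMENTS)       # try toggling one element
        state[i] ^= 1
        new = measure_rssi(state, best_pattern)
        if new >= score:
            score = new                     # keep the improvement
        else:
            state[i] ^= 1                   # otherwise revert the toggle
    return score

print(configure(), "of", N_ELEMENTS, "elements aligned")  # most end up aligned
```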

    The researchers aren’t the first to explore the possibility of improving internet speeds using the external environment. A team at Princeton University led by Professor Kyle Jamieson proposed a similar scheme for the specific situation of people using computers on either side of a wall. Balakrishnan says that the goal with RFocus was to develop an even more low-cost approach that could be used in a wider range of scenarios.

    “Smart surfaces give us literally thousands of antennas to play around with,” says Jamieson, who was not involved in the RFocus project. “The best way of controlling all these antennas, and navigating the massive search space that results when you imagine all the possible antenna configurations, are just two really challenging open problems.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 9:53 am on January 12, 2020 Permalink | Reply
    Tags: Computer Science, Electrical, Yury Polyanskiy

    From MIT News: “Sending clearer signals” 

    From MIT News

    January 11, 2020
    Rob Matheson

    Yury Polyanskiy. Image: M. Scott Brauer

    Associate Professor Yury Polyanskiy is working to keep data flowing as the “internet of things” becomes a reality.

    In the secluded Russian city where Yury Polyanskiy grew up, all information about computer science came from the outside world. Visitors from distant Moscow would occasionally bring back the latest computer science magazines and software CDs to Polyanskiy’s high school for everyone to share.

    One day while reading a borrowed PC World magazine in the mid-1990s, Polyanskiy learned about a futuristic concept: the World Wide Web.

    Believing his city would never see such wonders of the internet, he and his friends built their own. Connecting an ethernet cable between two computers in separate high-rises, they could communicate back and forth. Soon, a handful of other kids asked to be connected to the makeshift network.

    “It was a pretty challenging engineering problem,” recalls Polyanskiy, an associate professor of electrical engineering and computer science at MIT, who recently earned tenure. “I don’t remember exactly how we did it, but it took us a whole day. You got a sense of just how contagious the internet could be.”

    Thanks to the then-recent fall of the Iron Curtain, Polyanskiy’s family did eventually connect to the internet. Soon after, he became interested in computer science and then information theory, the mathematical study of storing and transmitting data. Now at MIT, his most exciting work centers on preventing major data-transmission issues with the rise of the “internet of things” (IoT). Polyanskiy is a member of the Laboratory for Information and Decision Systems, the Institute for Data, Systems, and Society, and the Statistics and Data Science Center.

    Today, people carry around a smartphone and maybe a couple smart devices. Whenever you watch a video on your smartphone, for example, a nearby cell tower assigns you an exclusive chunk of the wireless spectrum for a certain time. It does so for everyone, making sure the data never collide.

    The number of IoT devices is expected to explode, however. People may carry dozens of smart devices; all delivered packages may have tracking sensors; and smart cities may implement thousands of connected sensors in their infrastructure. Current systems can’t divvy up the spectrum effectively to stop data from colliding. That will slow down transmission speeds and make our devices consume much more energy in sending and resending data.
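    A generic simulation (not Polyanskiy's own analysis) makes the collision problem concrete: in a slotted, uncoordinated random-access model, a slot delivers data only when exactly one device transmits, and throughput collapses once the device count times the per-device transmit probability climbs past about one.

```python
import numpy as np

# Slotted, uncoordinated random access: each of n devices transmits in a
# slot with probability p; a slot succeeds only if exactly one transmits.
def throughput(n_devices, p, n_slots=100_000, seed=0):
    rng = np.random.default_rng(seed)
    transmitters = rng.binomial(n_devices, p, size=n_slots)
    return np.mean(transmitters == 1)

for n in (10, 100, 1000):
    # each device offers the same traffic: about one packet per 100 slots
    print(n, round(throughput(n, p=0.01), 3))
# Throughput peaks near n*p = 1 and then collapses as collisions dominate,
# which is why uncoordinated access breaks down at IoT scale.
```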

    “There may soon be a hundredfold explosion of devices connected to the internet, which is going to clog the spectrum, and there will be no way to ensure interference-free transmission. Entirely new access approaches will be needed,” Polyanskiy says. “It’s the most exciting thing I’m working on, and it’s surprising that no one is talking much about it.”

    From Russia, with love of computer science

    Polyanskiy grew up in a place that translates in English to “Rainbow City,” so named because it was founded as a site to develop military lasers. Surrounded by woods, the city had a population of about 15,000 people, many of them engineers.

    In part, that environment got Polyanskiy into computer science. At the age of 12, he started coding — “and for profit,” he says. His father was working for an engineering firm, on a team that was programming controllers for oil pumps. When the lead programmer took another position, they were left understaffed. “My father was discussing who can help. I was sitting next to him, and I said, ‘I can help,’” Polyanskiy says. “He first said no, but I tried it and it worked out.”

    Soon after, his father opened his own company for designing oil pump controllers and brought Polyanskiy on board while he was still in high school. The business gained customers worldwide. He says some of the controllers he helped program are still being used today.

    Polyanskiy earned his bachelor’s in physics from the Moscow Institute of Physics and Technology, a top university worldwide for physics research. But then, interested in pursuing electrical engineering for graduate school, he applied to programs in the U.S. and was accepted to Princeton University.

    In 2005, he moved to the U.S. to attend Princeton, which came with cultural shocks “that I still haven’t recovered from,” Polyanskiy jokes. For starters, he says, the U.S. education system encourages interaction with professors. Also, the televisions, gaming consoles, and furniture in residential buildings and around campus were not placed under lock and key.

    “In Russia, everything is chained down,” Polyanskiy says. “I still can’t believe U.S. universities just keep those things out in the open.”

    At Princeton, Polyanskiy wasn’t sure which field to enter. But when it came time to select, he asked one rather discourteous student about studying under a giant in information theory, Sergio Verdú. The student told Polyanskiy he wasn’t smart enough for Verdú — so Polyanskiy got defiant. “At that moment, I knew for certain that Sergio would be my number one pick,” Polyanskiy says, laughing. “When people say I can’t do something, that’s usually the best way to motivate me.”

    At Princeton, working under Verdú, Polyanskiy focused on a component of information theory that deals with how much redundancy to send with data. Each time data are transmitted, they are perturbed by some noise. Adding duplicate data means less data get lost in that noise. Researchers thus study the optimal amounts of redundancy to reduce signal loss but keep transmissions fast.

    In his graduate work, Polyanskiy pinpointed sweet spots for redundancy when transmitting hundreds or thousands of data bits in packets, which is mostly how data are transmitted online today.
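    The simplest possible redundancy scheme, a k-fold repetition code over a bit-flipping channel, makes the trade-off concrete; the actual research concerns far more efficient codes and the finite-blocklength limits on what any code can achieve.

```python
import numpy as np

# A k-fold repetition code: repeat each bit, flip some bits in the channel,
# then decode by majority vote. More repetition means fewer errors but a
# lower data rate.
rng = np.random.default_rng(0)

def transmit(bits, repeat, flip_prob):
    coded = np.repeat(bits, repeat)                                  # add redundancy
    noisy = coded ^ (rng.random(coded.size) < flip_prob).astype(int) # channel flips bits
    votes = noisy.reshape(-1, repeat).sum(axis=1)                    # majority decode
    return (votes > repeat / 2).astype(int)

bits = rng.integers(0, 2, 10_000)
for repeat in (1, 3, 5, 9):
    errors = np.mean(transmit(bits, repeat, flip_prob=0.1) != bits)
    print(f"repeat={repeat}: rate={1/repeat:.2f}, bit error rate={errors:.4f}")
# Picking the sweet spot between error rate and data rate is the kind of
# question information theory answers.
```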

    Getting hooked

    After earning his PhD in electrical engineering from Princeton, Polyanskiy finally did come to MIT, his “dream school,” in 2011, but as a professor. MIT had helped pioneer some information theory research and introduced the first college courses in the field.

    Some call information theory “a green island,” he says, “because it’s hard to get into but once you’re there, you’re very happy. And information theorists can be seen as snobby.” When he came to MIT, Polyanskiy says, he was narrowly focused on his work. But he experienced yet another cultural shock — this time in a collaborative and bountiful research culture.

    MIT researchers are constantly presenting at conferences, holding seminars, collaborating, and “working on about 20 projects in parallel,” Polyanskiy says. “I was hesitant that I could do quality research like that, but then I got hooked. I became more broad-minded, thanks to MIT’s culture of drinking from a fire hose. There’s so much going on that eventually you get addicted to learning fields that are far away from your own interests.”

    In collaboration with other MIT researchers, Polyanskiy’s group now focuses on finding ways to split up the spectrum in the coming IoT age. So far, his group has mathematically proven that the systems in use today do not have the capabilities and energy to do so. They’ve also shown what types of alternative transmission systems will and won’t work.

    Inspired by his own experiences, Polyanskiy likes to give his students “little hooks,” tidbits of information about the history of scientific thought surrounding their work and about possible future applications. One example is explaining philosophies behind randomness to mathematics students who may be strictly deterministic thinkers. “I want to give them a little taste of something more advanced and outside scope of what they’re studying,” he says.

    After spending 14 years in the U.S., the culture has shaped the Russian native in certain ways. For instance, he’s accepted a more relaxed and interactive Western teaching style, he says. But it extends beyond the classroom, as well. Just last year, while visiting Moscow, Polyanskiy found himself holding a subway rail with both hands. Why is this strange? Because he was raised to keep one hand on the subway rail, and one hand over his wallet to prevent thievery. “With horror, I realized what I was doing,” Polyanskiy says, laughing. “I said, ‘Yury, you’re becoming a real Westerner.’”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 3:22 pm on December 6, 2019 Permalink | Reply
    Tags: "Using computers to view the unseen", , Computer Science, ,   

    From MIT News: “Using computers to view the unseen” 

    From MIT News

    December 6, 2019
    Rachel Gordon | CSAIL


    Computational Mirrors: Revealing Hidden Video

    Based on shadows that an out-of-view video casts on nearby objects, MIT researchers can estimate the contents of the unseen video. In the top row, researchers used this method to recreate visual elements in an out-of-view video; the original elements are shown in the bottom row. The effect can be seen in motion in the video below. Images courtesy of the researchers.

    A new computational imaging method could change how we view hidden information in scenes.

    Cameras and computers together can conquer some seriously stunning feats. Giving computers vision has helped us fight wildfires in California, understand complex and treacherous roads — and even see around corners.

    Specifically, seven years ago a group of MIT researchers created a new imaging system that used floors, doors, and walls as “mirrors” to understand information about scenes outside a normal line of sight. Using special lasers to produce recognizable 3D images, the work opened up a realm of possibilities in letting us better understand what we can’t see.

    Recently, a different group of scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has built off of this work, but this time with no special equipment needed: They developed a method that can reconstruct hidden video from just the subtle shadows and reflections on an observed pile of clutter. This means that, with a video camera turned on in a room, they can reconstruct a video of an unseen corner of the room, even if it falls outside the camera’s field of view.

    By observing the interplay of shadow and geometry in video, the team’s algorithm predicts the way that light travels in a scene, which is known as “light transport.” The system then uses that to estimate the hidden video from the observed shadows — and it can even construct the silhouette of a live-action performance.

    This type of image reconstruction could one day benefit many facets of society: Self-driving cars could better understand what’s emerging from behind corners, elder-care centers could enhance safety for their residents, and search-and-rescue teams could even improve their ability to navigate dangerous or obstructed areas.

    The technique, which is “passive,” meaning there are no lasers or other interventions to the scene, still currently takes about two hours to process, but the researchers say it could eventually be helpful in reconstructing scenes not in the traditional line of sight for the aforementioned applications.

    “You can achieve quite a bit with non-line-of-sight imaging equipment like lasers, but in our approach you only have access to the light that’s naturally reaching the camera, and you try to make the most out of the scarce information in it,” says Miika Aittala, former CSAIL postdoc and current research scientist at NVIDIA, and the lead researcher on the new technique. “Given the recent advances in neural networks, this seemed like a great time to visit some challenges that, in this space, were considered largely unapproachable before.”

    To capture this unseen information, the team uses subtle, indirect lighting cues, such as shadows and highlights from the clutter in the observed area.

    In a way, a pile of clutter behaves somewhat like a pinhole camera, similar to something you might build in an elementary school science class: It blocks some light rays, but allows others to pass through, and these paint an image of the surroundings wherever they hit. But where a pinhole camera is designed to let through just the right amount of light rays to form a readable picture, a general pile of clutter produces an image that is scrambled (by the light transport) beyond recognition, into a complex play of shadows and shading.

    You can think of the clutter, then, as a mirror that gives you a scrambled view into the surroundings around it — for example, behind a corner where you can’t see directly.

    The challenge addressed by the team’s algorithm was to unscramble and make sense of these lighting cues. Specifically, the goal was to recover a human-readable video of the activity in the hidden scene, which is a multiplication of the light transport and the hidden video.

    However, unscrambling proved to be a classic “chicken-or-egg” problem. To figure out the scrambling pattern, a user would need to know the hidden video already, and vice versa.

    “Mathematically, it’s like if I told you that I’m thinking of two secret numbers, and their product is 80. Can you guess what they are? Maybe 40 and 2? Or perhaps 371.8 and 0.2152? In our problem, we face a similar situation at every pixel,” says Aittala. “Almost any hidden video can be explained by a corresponding scramble, and vice versa. If we let the computer choose, it’ll just do the easy thing and give us a big pile of essentially random images that don’t look like anything.”

    With that in mind, the team focused on breaking the ambiguity by specifying algorithmically that they wanted a “scrambling” pattern that corresponds to plausible real-world shadowing and shading, to uncover the hidden video that looks like it has edges and objects that move coherently.

    The team also used the surprising fact that neural networks naturally prefer to express “image-like” content, even when they’ve never been trained to do so, which helped break the ambiguity. The algorithm trains two neural networks simultaneously, where they’re specialized for the one target video only, using ideas from a machine learning concept called Deep Image Prior. One network produces the scrambling pattern, and the other estimates the hidden video. The networks are rewarded when the combination of these two factors reproduces the video recorded from the clutter, driving them to explain the observations with plausible hidden data.
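    As a greatly simplified stand-in for that idea (not the authors' method, which trains neural networks), the underlying bilinear structure can be illustrated with a plain nonnegative matrix factorization: model the observed video as an unknown light-transport matrix times an unknown hidden video and run classic multiplicative updates until their product reproduces the observation. The individual factors are only recovered up to ambiguities, which is exactly the gap the image-like priors are meant to close.

```python
import numpy as np

# Simplified illustration: observed clutter video O = light transport T times
# hidden video V, with both factors unknown. Nonnegativity is the only prior
# used here, via classic multiplicative matrix-factorization updates.
rng = np.random.default_rng(0)

obs_pixels, scene_pixels, frames = 200, 20, 300
T_true = rng.random((obs_pixels, scene_pixels))          # light transport
V_true = rng.random((scene_pixels, frames))              # hidden video
O = T_true @ V_true                                       # what the camera sees

# Random nonnegative initial guesses for both factors.
T = rng.random((obs_pixels, scene_pixels)) + 0.1
V = rng.random((scene_pixels, frames)) + 0.1
eps = 1e-9
for _ in range(500):
    V *= (T.T @ O) / (T.T @ T @ V + eps)                  # update hidden video
    T *= (O @ V.T) / (T @ V @ V.T + eps)                  # update light transport

rel_err = np.linalg.norm(T @ V - O) / np.linalg.norm(O)
print(f"relative reconstruction error: {rel_err:.4f}")    # close to zero
# Note: T and V individually are only determined up to scaling and mixing,
# which is the chicken-or-egg ambiguity the researchers' priors resolve.
```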

    To test the system, the team first piled up objects on one wall, and either projected a video or physically moved themselves on the opposite wall. From this, they were able to reconstruct videos where you could get a general sense of what motion was taking place in the hidden area of the room.

    In the future, the team hopes to improve the overall resolution of the system, and eventually test the technique in an uncontrolled environment.

    Aittala wrote a new paper on the technique alongside CSAIL PhD students Prafull Sharma, Lukas Murmann, and Adam Yedidia, with MIT professors Fredo Durand, Bill Freeman, and Gregory Wornell. They will present it next week at the Conference on Neural Information Processing Systems in Vancouver, British Columbia.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 8:15 am on October 2, 2019 Permalink | Reply
    Tags: "How AI could change science", , , , , , Computer Science, , Kavli Institute for Cosmological Physics,   

    From University of Chicago: “How AI could change science” 

    From University of Chicago

    Oct 1, 2019
    Louise Lerner
    Rob Mitchum

    At the University of Chicago, researchers are using artificial intelligence’s ability to analyze massive amounts of data in applications from scanning for supernovae to finding new drugs. shutterstock.com

    Researchers at the University of Chicago seek to shape an emerging field.

    AI technology is increasingly used to open up new horizons for scientists and researchers. At the University of Chicago, researchers are using it for everything from scanning the skies for supernovae to finding new drugs from millions of potential combinations and developing a deeper understanding of the complex phenomena underlying the Earth’s climate.

    Today’s AI commonly works by starting from massive data sets, from which it figures out its own strategies to solve a problem or make a prediction — rather than relying on humans to explicitly program how it should reach a conclusion. The results are an array of innovative applications.

    “Academia has a vital role to play in the development of AI and its applications. While the tech industry is often focused on short-term returns, realizing the full potential of AI to improve our world requires long-term vision,” said Rebecca Willett, professor of statistics and computer science at the University of Chicago and a leading expert on AI foundations and applications in science. “Basic research at universities and national laboratories can establish the fundamentals of artificial intelligence and machine learning approaches, explore how to apply these technologies to solve societal challenges, and use AI to boost scientific discovery across fields.”

    Prof. Rebecca Willett gives an introduction to her research on AI and data science foundations. Photo by Clay Kerr

    Willett is one of the featured speakers at the InnovationXLab Artificial Intelligence Summit hosted by UChicago-affiliated Argonne National Laboratory, which will soon be home to the most powerful computer in the world—and it’s being designed with an eye toward AI-style computing. The Oct. 2-3 summit showcases the U.S. Department of Energy lab, bringing together industry, universities, and investors with lab innovators and experts.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    The workshop comes as researchers around UChicago and the labs are leading new explorations into AI.

    For example, say that Andrew Ferguson, an associate professor at the Pritzker School of Molecular Engineering, wants to look for a new vaccine or flexible electronic materials. New materials essentially are just different combinations of chemicals and molecules, but there are literally billions of such combinations. How do scientists pick which ones to make and test in the labs? AI could quickly narrow down the list.
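    Generically, that narrowing-down step looks like surrogate-model screening: fit a model on a few hundred candidates whose target property has been measured, predict the property for a large untested pool, and send only the top-ranked handful to the lab. The sketch below is an illustration under that assumption, not Ferguson's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Generic "AI narrows down the list" workflow: learn a surrogate for an
# unknown structure-property relationship, then rank a huge untested pool.
rng = np.random.default_rng(0)

def true_property(x):
    # Hidden relationship the surrogate must approximate (toy function).
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 - 0.5 * x[:, 2]

measured_x = rng.random((300, 5))            # candidates already tested in the lab
measured_y = true_property(measured_x) + rng.normal(0, 0.05, 300)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(measured_x, measured_y)

pool_x = rng.random((100_000, 5))            # untested combinations
scores = surrogate.predict(pool_x)
shortlist = np.argsort(scores)[-10:][::-1]   # 10 most promising candidates
print(shortlist, scores[shortlist])
```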

    “There are many areas where the Edisonian approach—that is, having an army of assistants make and test hundreds of different options for the lightbulb—just isn’t practical,” Ferguson said.

    Then there’s the question of what happens if AI takes a turn at being the scientist. Some are wondering whether AI models could propose new experiments that might never have occurred to their human counterparts.

    “For example, when someone programmed the rules for the game of Go into an AI, it invented strategies never seen in thousands of years of humans playing the game,” said Brian Nord, an associate scientist in the Kavli Institute for Cosmological Physics and UChicago-affiliated Fermi National Accelerator Laboratory.

    “Maybe sometimes it will have more interesting ideas than we have.”

    Ferguson agreed: “If we write down the laws of physics and input those, what can AI tell us about the universe?”

    Scenes from the 2016 games of Go, an ancient Chinese game far more complex than chess, between Google’s AI “AlphaGo” and world-record Go player Lee Sedol. The match ended with the AI up 4-1. Image courtesy of Bob van den Hoek.

    But ensuring those applications are accurate, equitable, and effective requires more basic computer science research into the fundamentals of AI. UChicago scientists are exploring ways to reduce bias in model predictions, use advanced tools even when data is scarce, and develop “explainable AI” systems that will produce more actionable insights and raise trust among users of those models.

    “Most AIs right now just spit out an answer without any context. But a doctor, for example, is not going to accept a cancer diagnosis unless they can see why and how the AI got there,” Ferguson said.

    With the right calibration, however, researchers see a world of uses for AI. To name just a few: Willett, in collaboration with scientists from Argonne and the Department of Geophysical Sciences, is using machine learning to study clouds and their effect on weather and climate. Chicago Booth economist Sendhil Mullainathan is studying ways in which machine learning technology could change the way we approach social problems, such as policies to alleviate poverty; while neurobiologist David Freedman, a professor in the University’s Division of Biological Sciences, is using machine learning to understand how brains interpret sights and sounds and make decisions.

    Below are looks into three projects at the University showcasing the breadth of AI applications happening now.

    The depths of the universe to the structures of atoms

    We’re getting better and better at building telescopes to scan the sky and accelerators to smash particles at ever-higher energies. What comes along with that, however, is more and more data. For example, the Large Hadron Collider in Europe generates one petabyte of data per second; for perspective, in less than five minutes, that would fill up the world’s most powerful supercomputer.
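    The arithmetic behind that comparison, assuming roughly 250 petabytes of file storage for the most powerful supercomputer at the time (an assumed figure, not stated in the article):

```python
# Quick arithmetic check of the "less than five minutes" claim, under an
# assumed storage capacity of about 250 petabytes.
data_rate_pb_per_s = 1                   # LHC: about one petabyte per second
storage_pb = 250                         # assumed storage capacity
seconds_to_fill = storage_pb / data_rate_pb_per_s
print(seconds_to_fill / 60, "minutes")   # ~4.2 minutes, i.e. under five
```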

    Images: the CERN Large Hadron Collider (map, tunnel, and collisions) and its four major detector collaborations, ATLAS, ALICE, CMS, and LHCb. Credits: CERN/Maximilien Brice, Julien Marius Ordan; CERN/ATLAS, Claudia Marcelloni.

    That’s way too much data to store. “You need to quickly pick out the interesting events to keep, and dump the rest,” Nord said.

    But see “From UC Santa Barbara: Breaking Data out of the Silos.”

    Similarly, each night hundreds of telescopes scan the sky. Existing computer programs are pretty good at picking interesting things out of them, but there’s room to improve. (After LIGO detected the gravitational waves from two neutron stars crashing together in 2017, telescopes around the world had rooms full of people frantically looking through sky photos to find the point of light it created.)

    MIT/Caltech Advanced LIGO


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Years ago, Nord was sitting and scanning telescope images to look for gravitational lensing, an effect in which large objects distort light as it passes.

    Gravitational Lensing NASA/ESA

    “We were spending all this time doing this by hand, and I thought, surely there has to be a better way,” he said. In fact, the capabilities of AI were just turning a corner; Nord began writing programs to search for lensing with neural networks. Others had the same idea; the technique is now emerging as a standard approach to find gravitational lensing.

    This year Nord is partnering with computer scientist Yuxin Chen to explore what they call a “self-driving telescope”: a framework that could optimize when and where to point telescopes to gather the most interesting data.

    “I view this collaboration between AI and science, in general, to be in a very early phase of development,” Chen said. “The outcome of the research project will not only have transformative effects in advancing the basic science, but it will also allow us to use the science involved in the physical processes to inform AI development.”

    Disentangling style and content for art and science

    In recent years, popular apps have sprung up that can transform photographs into different artistic forms—from generic modes such as charcoal sketches or watercolors to the specific styles of Dali, Monet and other masters. These “style transfer” apps use tools from the cutting edge of computer vision—primarily the neural networks that prove adept at image classification for applications such as image search and facial recognition.

    But beyond the novelty of turning your selfie into a Picasso, these tools kick-start a deeper conversation around the nature of human perception. From a young age, humans are capable of separating the content of an image from its style; that is, recognizing that photos of an actual bear, a stuffed teddy bear, or a bear made out of LEGOs all depict the same animal. What’s simple for humans can stump today’s computer vision systems, but Assoc. Profs. Jason Salavon and Greg Shakhnarovich think the “magic trick” of style transfer could help them catch up.

    This triptych of images demonstrates how neural networks can transform images into different artistic styles.

    “The fact that we can look at pictures that artists create and still understand what’s in them, even though they sometimes look very different from reality, seems to be closely related to the holy grail of machine perception: what makes the content of the image understandable to people,” said Shakhnarovich, an associate professor at the Toyota Technological Institute at Chicago.

    Salavon and Shakhnarovich are collaborating on new style transfer approaches that separate, capture and manipulate content and style, unlocking new potential for art and science. These new models could transform a headshot into a much more distorted style, such as the distinctive caricatures of The Simpsons, or teach self-driving cars to better understand road signs in different weather conditions.

    “We’re in a global arms race for making cool things happen with these technologies. From what would be called practical space to cultural space, there’s a lot of action,” said Salavon, an associate professor in the Department of Visual Arts at the University of Chicago and an artist who makes “semi-autonomous art”. “But ultimately, the idea is to get to some computational understanding of the ‘essence’ of images. That’s the rich philosophical question.”

    Researchers hope to use AI to decode nature’s rules for protein design, in order to create synthetic proteins with a range of applications. Image courtesy of Emw / CC BY-SA 3.0

    Learning nature’s rules for protein design

    Nature is an unparalleled engineer. Millions of years of evolution have created molecular machines capable of countless functions and survival in challenging environments, like deep sea vents. Scientists have long sought to harness these design skills and decode nature’s blueprints to build custom proteins of their own for applications in medicine, energy production, environmental clean-up and more. But only recently have the computational and biochemical technologies needed to create that pipeline become possible.

    Ferguson and Prof. Rama Ranganathan are bringing these pieces together in an ambitious project funded by a Center for Data and Computing seed grant. Combining recent advancements in machine learning and synthetic biology, they will build an iterative pipeline to learn nature’s rules for protein design, then remix them to create synthetic proteins with elevated or even new functions and properties.

    “It’s not just rebuilding what nature built, we can push it beyond what nature has ever shown us before,” said Ranganathan. “This proposal is basically the starting point for building a whole framework of data-driven molecular engineering.”

    “The way we think of this project is we’re trying to mimic millions of years of evolution in the lab, using computation and experiments instead of natural selection,” Ferguson said.
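
    The “mimic evolution in the lab” loop Ferguson describes can be caricatured as propose, score, select, repeat. In the sketch below, a toy mutation step stands in for the generative model and a toy scoring function stands in for the wet-lab assay; every function and parameter is a hypothetical placeholder, not the actual pipeline.

```python
# Cartoon of an iterative design loop: propose protein sequences, score them
# (standing in for experiments), keep the best, and repeat. All functions and
# parameters are hypothetical stand-ins, not the Ferguson/Ranganathan pipeline.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def propose(parents: list[str], n: int, mutation_rate: float = 0.05) -> list[str]:
    """'Generative model': mutate existing sequences to propose new candidates."""
    children = []
    for _ in range(n):
        seq = list(random.choice(parents))
        for i in range(len(seq)):
            if random.random() < mutation_rate:
                seq[i] = random.choice(AMINO_ACIDS)
        children.append("".join(seq))
    return children

def score(seq: str) -> float:
    """Stand-in for an assay: here, just reward hydrophobic residues (toy objective)."""
    return sum(seq.count(a) for a in "AILMFWVY") / len(seq)

def design_loop(seed: str, rounds: int = 10, batch: int = 200, keep: int = 20) -> list[str]:
    population = [seed]
    for _ in range(rounds):
        candidates = population + propose(population, batch)
        population = sorted(candidates, key=score, reverse=True)[:keep]
    return population

if __name__ == "__main__":
    best = design_loop("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    print(best[0], round(score(best[0]), 3))
```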

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Chicago Campus

    An intellectual destination

    One of the world’s premier academic and research institutions, the University of Chicago has driven new ways of thinking since our 1890 founding. Today, UChicago is an intellectual destination that draws inspired scholars to our Hyde Park and international campuses, keeping UChicago at the nexus of ideas that challenge and change the world.

    The University of Chicago is an urban research university that has driven new ways of thinking since 1890. Our commitment to free and open inquiry draws inspired scholars to our global campuses, where ideas are born that challenge and change the world.

    We empower individuals to challenge conventional thinking in pursuit of original ideas. Students in the College develop critical, analytic, and writing skills in our rigorous, interdisciplinary core curriculum. Through graduate programs, students test their ideas with UChicago scholars, and become the next generation of leaders in academia, industry, nonprofits, and government.

    UChicago research has led to such breakthroughs as discovering the link between cancer and genetics, establishing revolutionary theories of economics, and developing tools to produce reliably excellent urban schooling. We generate new insights for the benefit of present and future generations with our national and affiliated laboratories: Argonne National Laboratory, Fermi National Accelerator Laboratory, and the Marine Biological Laboratory in Woods Hole, Massachusetts.

    The University of Chicago is enriched by the city we call home. In partnership with our neighbors, we invest in Chicago’s mid-South Side across such areas as health, education, economic growth, and the arts. Together with our medical center, we are the largest private employer on the South Side.

    In all we do, we are driven to dig deeper, push further, and ask bigger questions—and to leverage our knowledge to enrich all human life. Our diverse and creative students and alumni drive innovation, lead international conversations, and make masterpieces. Alumni and faculty, lecturers and postdocs go on to become Nobel laureates, CEOs, university presidents, attorneys general, literary giants, and astronauts.

     
  • richardmitnick 8:25 pm on September 16, 2019 Permalink | Reply
    Tags: Computer Science, , , , ,   

    From UC Santa Barbara: “A Quantum Leap” 

    UC Santa Barbara Name bloc
    From UC Santa Barbara

    September 16, 2019
    James Badham

    $25M grant makes UC Santa Barbara home to the nation’s first NSF-funded Quantum Foundry, a center for development of materials and devices for quantum information-based technologies.

    Professors Stephen Wilson and Ania Bleszynski Jayich will co-direct the campus’s new Quantum Foundry.

    We hear a lot these days about the coming quantum revolution. Efforts to understand, develop, and characterize quantum materials — defined broadly as those displaying characteristics that can be explained only by quantum mechanics and not by classical physics — are intensifying.

    Researchers around the world are racing to understand these materials and harness their unique qualities to develop revolutionary technologies for quantum computing, communications, sensing, simulation, and other applications not yet imagined.

    This week, UC Santa Barbara stepped to the front of that worldwide research race by being named the site of the nation’s first Quantum Foundry.

    Funded by an initial six-year, $25-million grant from the National Science Foundation (NSF), the project, known officially as the UC Santa Barbara NSF Quantum Foundry, will involve 20 faculty members from the campus’s materials, physics, chemistry, mechanical engineering and computer science departments, plus myriad collaborating partners. The new center will be anchored within the California Nanosystems Institute (CNSI) in Elings Hall.

    California Nanosystems Institute

    The grant provides substantial funding to build equipment and develop tools necessary to the effort. It also supports a multi-front research mission comprising collaborative interdisciplinary projects within a network of university, industry, and national-laboratory partners to create, process, and characterize materials for quantum information science. The Foundry will also develop outreach and educational programs aimed at familiarizing students at all levels with quantum science, creating a new paradigm for training students in the rapidly evolving field of quantum information science and engaging with industrial partners to accelerate development of the coming quantum workforce.

    “We are extremely proud that the National Science Foundation has chosen UC Santa Barbara as home to the nation’s first NSF-funded Quantum Foundry,” said Chancellor Henry T. Yang. “The award is a testament to the strength of our University’s interdisciplinary science, particularly in materials, physics and chemistry, which lie at the core of quantum endeavors. It also recognizes our proven track record of working closely with industry to bring technologies to practical application, our state-of-the-art facilities and our educational and outreach programs that are mutually complementary with our research.

    “Under the direction of physics professor Ania Bleszynski Jayich and materials professor Stephen Wilson, the Foundry will provide a collaborative environment for researchers to continue exploring quantum phenomena, designing quantum materials and building instruments and computers based on the basic principles of quantum mechanics,” Yang added.

    Said Joseph Incandela, the campus’s vice chancellor for research, “UC Santa Barbara is a natural choice for the NSF quantum materials Foundry. We have outstanding faculty, researchers, and facilities, and a great tradition of multidisciplinary collaboration. Together with our excellent students and close industry partnerships, they have created a dynamic environment where research gets translated into important technologies.”

    “Being selected to build and host the nation’s first Quantum Foundry is tremendously exciting and extremely important,” said Rod Alferness, dean of the College of Engineering. “It recognizes the vision and the decades of work that have made UC Santa Barbara a truly world-leading institution worthy of assuming a leadership role in a mission as important as advancing quantum science and the transformative technologies it promises to enable.”

    “Advances in quantum science require a highly integrated interdisciplinary approach, because there are many hard challenges that need to be solved on many fronts,” said Bleszynski Jayich. “One of the big ideas behind the Foundry is to take these early theoretical ideas that are just beginning to be experimentally viable and use quantum mechanics to produce technologies that can outperform classical technologies.”

    Doing so, however, will require new materials.

    “Quantum technologies are fundamentally materials-limited, and there needs to be some sort of leap or evolution of the types of materials we can harness,” noted Wilson. “The Foundry is where we will try to identify and create those materials.”

    Research Areas and Infrastructure

    Quantum Foundry research will be pursued in three main areas, or “thrusts”:

    • Natively Entangled Materials, which relates to identifying and characterizing materials that intrinsically host anyon excitations and long-range entangled states with topological, or structural, protection against decoherence. These include new intrinsic topological superconductors and quantum spin liquids, as well as materials that enable topological quantum computing.

    • Interfaced Topological States, in which researchers will seek to create and control protected quantum states in hybrid materials.

    • Coherent Quantum Interfaces, where the focus will be on engineering materials having localized quantum states that can be interfaced with various other quantum degrees of freedom (e.g. photons or phonons) for distributing quantum information while retaining robust coherence.

    Developing these new materials and assessing their potential for hosting the needed coherent quantum states requires specialized equipment, much of which does not yet exist. A significant portion of the NSF grant is designated for developing such infrastructure, both to purchase required tools and equipment and to fabricate new tools needed to grow and characterize the quantum states in the new materials, Wilson said.

    UC Santa Barbara’s deep well of shared materials growth and characterization infrastructure was also a factor in securing the grant. The Foundry will leverage existing facilities, such as the large suite of instrumentation shared via the Materials Research Lab and the California Nanosystems Institute, multiple molecular beam epitaxy (MBE) growth chambers (the university has the largest number of MBE apparatuses in academia), unique optical facilities such as the Terahertz Facility, state-of-the-art clean rooms, and others among the more than 300 shared instruments on campus.

    Data Science

    NSF is keenly interested in both generating and sharing data from materials experiments. “We are going to capture Foundry data and harness it to facilitate discovery,” said Wilson. “The idea is to curate and share data to accelerate discovery at this new frontier of quantum information science.”

    Industrial Partners

    Industry collaborations are an important part of the Foundry project. UC Santa Barbara’s well-established history of industrial collaboration — it leads all universities in the U.S. in terms of industrial research dollars per capita — and the application focus that allows it to transition ideas into materials and materials into technologies were important in receiving the Foundry grant.

    Another value of industrial collaboration, Wilson explained, is that often, faculty might be looking at something interesting without being able to visualize how it might be useful in a scaled-up commercial application. “If you have an array of directions you could go, it is essential to have partners to help you visualize those having near-term potential,” he said.

    “This is a unique case where industry is highly interested while we are still at the basic-science level,” said Bleszynski Jayich. “There’s a huge industry partnership component to this.”

    Among the 10 inaugural industrial partners are Microsoft, Google, IBM, Hewlett Packard Enterprise, HRL, Northrop Grumman, Bruker, SomaLogic, NVision, and Anstrom Science. Microsoft and Google have substantial campus presences already; Microsoft’s Station Q quantum computing lab is here, and UC Santa Barbara professor and Google chief scientist John Martinis and a team of his Ph.D. student researchers are working with Google at its Santa Barbara office, adjacent to campus, to develop Google’s quantum computer.

    Undergraduate Education

    In addition, with approximately 700 students, UC Santa Barbara’s undergraduate physics program is the largest in the U.S. “Many of these students, as well as many undergraduate engineering and chemistry students, are hungry for an education in quantum science, because it’s a fascinating subject that defies our classical intuition, and on top of that, it offers career opportunities. It can’t get much better than that,” Bleszynski Jayich said.

    Graduate Education Program

    Another major goal of the Foundry project is to integrate quantum science into education and to develop the quantum workforce. The traditional approach to quantum education at the university level has been for students to take physics classes, which are focused on the foundational theory of quantum mechanics.

    “But there is an emerging interdisciplinary component of quantum information that people are not being exposed to in that approach,” Wilson explained. “Having input from many overlapping disciplines in both hard science and engineering is required, as are experimental touchstones for trying to understand these phenomena. Student involvement in industry internships and collaborative research with partner companies is important in addressing that.”

    “We want to introduce a more practical quantum education,” Bleszynski Jayich added. “Normally you learn quantum mechanics by learning about hydrogen atoms and harmonic oscillators, and it’s all theoretical. That training is still absolutely critical, but now we want to supplement it, leveraging our abilities gained in the past 20 to 30 years to control a quantum system on the single-atom, single-quantum-system level. Students will take lab classes where they can manipulate quantum systems and observe the highly counterintuitive phenomena that don’t make sense in our classical world. And, importantly, they will learn various cutting-edge techniques for maintaining quantum coherence.

    “That’s particularly important,” she continued, “because quantum technologies rely on the success of the beautiful, elegant theory of quantum mechanics, but in practice we need unprecedented control over our experimental systems in order to observe and utilize their delicate quantum behavior.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    UC Santa Barbara Seal
    The University of California, Santa Barbara (commonly referred to as UC Santa Barbara or UCSB) is a public research university and one of the 10 general campuses of the University of California system. Founded in 1891 as an independent teachers’ college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system. The university is a comprehensive doctoral university and is organized into five colleges offering 87 undergraduate degrees and 55 graduate degrees. In 2012, UCSB was ranked 41st among “National Universities” and 10th among public universities by U.S. News & World Report. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics.

     
  • richardmitnick 11:32 am on September 11, 2019 Permalink | Reply
    Tags: , Computer Science, Expert in computer science education Mehran Sahami, Interviewer Russ Altman,   

    From Stanford University Engineering: “Mehran Sahami: The evolution of computer science education”

    From Stanford University Engineering

    August 23, 2019

    Once the core American curriculum meant reading, writing and arithmetic, but Stanford professor Mehran Sahami says we might soon have to add a fourth skill to that list, “coding.”

    Sahami thinks deeply about such matters. He’s the leading force behind recent changes in Stanford’s computer science curriculum. He notes that it may not be surprising that more students are choosing to major in computer science than ever before, but what might turn heads is the changing face and intellectual landscape of the field. With concerted effort, more women and minorities, and even students from traditional liberal arts and sciences backgrounds, are venturing into computer science.

    Sahami says coding has become more than just video games, social media and smartphone apps. The field is an intellectual endeavor taking on the biggest issues of our day — particularly in its influence on data-driven decision making, personal privacy, artificial intelligence and autonomous systems, and the role of large platforms like Google, Facebook and Apple on free speech issues.

    Sahami says that computers and algorithms are now part of the fabric of everyday life and how the future plays out will depend upon realizing more cultural and gender diversity in computer science classrooms and encouraging multidisciplinary thinking throughout computer science.

    Join host Russ Altman and expert in computer science education Mehran Sahami for an inspiring journey through the computer science curriculum of tomorrow. You can listen to The Future of Everything on Sirius XM Insight Channel 121, iTunes, Google Play, SoundCloud, Spotify, Stitcher or via Stanford Engineering Magazine.

    CS is not just about sitting in a cube programming, it’s about solving social problems through computation. | Illustration by Kevin Craft

    Russ Altman: Today on The Future of Everything, the future of computer science education. Let’s think about it. Computer science is the toast of the town. Students are flocking to learn how to program computers, what the underpinnings of computational systems are, how they work, and how they should be designed, implemented, and evaluated. At Stanford University and many other places, computer science has become the number one major, in some cases eclipsing really popular traditional majors, like economics, psychology, biology.

    The job market seems great for these students who have skills that are needed in almost every industry. It’s not just about creating software for PCs or iPhones, but increasingly it’s about building systems that interact with the physical world.

    Think about self-driving cars, robotic assistants and other things like that. AI, artificial intelligent systems, have also become powerful with voice recognition, like Siri and Alexa, the ability to translate, the ability to recognize faces, even in our cell phones, and the kind of big data mining that is transforming the financial community, real estate, entertainment, sports, news, even healthcare. These systems promise efficiencies, but do add some worry about the loss of jobs and displaced workers.

    Professor Mehran Sahami is a professor of computer science at Stanford and an expert in computer science education. Before coming to Stanford, he worked at Google, and he has led national committees that have created guidelines for computer science programs internationally.

    Mehran, there is a boom in interest in computer science as an area of study. Of course, students are always encouraged to follow their passion when they choose their majors. But should we worry about whether there are enough English majors, and history majors, and all those traditional majors that I mentioned before? Is this a blip, or is this a change in the ecosystem that we’re expecting now for a long time to come?

    Mehran Sahami: Sure, that’s a great question. I do think it’s a sea change. Looking forward, there’s a real difference in terms of the decisions students are making about where they wanna go and the kinds of fields they wanna pursue. I do think we would lament it if we lost all the English majors and all the economics majors, because what we’re actually seeing now is that more problems require multi-disciplinary expertise to really solve, and so we need people across the board.

    But I think what students have seen, especially in the last 10 years is that computer science is not just about sitting in a cube programming 80 hours a week. It’s about solving social problems through computation, and so that’s really brought the ability of students from computing and the ability of students in other areas to come together and solve bigger problems.

    Russ Altman: Are you seeing an increase in interest in kind of joint majors where people have two feet in two different camps, say English and computer science, or the arts and computer science? Is that a thing?

    Mehran Sahami: That is a thing. We actually even had a specialized program called CS+X, where X was a choice of many different humanities majors at Stanford. But rather than going through that program, we saw that students were just choosing on their own to do double majors with computer science, or to minor in computer science while majoring in something else, and vice versa. So many students are already making this choice to combine fields.

    Russ Altman: We kinda jumped right into it, but let’s step back. Tell me about computer science. You’re an expert at this. What does a computer science education look like? I think everybody would say, “Well, they learn how to program computers.” But I suspect, in fact, I know, that it’s more than that. Can you give us a kind of thumbnail sketch of what a computer science training should look like?

    Mehran Sahami: Sure. I think most people, like you said, think of computer science as just programming. And what a lot of students see when they get here is that there is far more richness to it as an intellectual endeavor. There is mathematics and logic behind it. There are the notions of building larger systems, the kinds of algorithms you can build, and how efficient they are. And artificial intelligence, which you alluded to, has seen a huge boom in the last few years, because it’s allowed us to solve problems in ways that potentially were even better than could’ve been hand-crafted by human beings.

    When you see this whole intersection of stuff coming together in computing, how humans interact with machines, trying to solve problems in biology, for example, as you’ve done for many years, the larger scale impact of that field becomes clear.

    What students do isn’t just about programming, but it’s about understanding how programming is a tool to solve a lot of problems, and there’s lots of science behind how those tools are built and how they’re used.

    Russ Altman: Great. That’s actually very exciting. It means that we’re giving them a tool set that’s gonna last them for their whole life, even as the problem of the day changes. As we think about the future, how are we doing in terms of diversity in computer science? And I mean diversity along all axes, sex and gender, underrepresented minorities, different socioeconomic groups. Are they all feeling welcome to this big tent, or do we still have a recruitment issue?

    Mehran Sahami: Well, we still have a recruitment issue, but the good news is that it’s getting better. For many years, and it’s still true now, there’s been a large gender imbalance in computing. It’s actually gotten quite a bit better. At Stanford now, for example, more than a third of the undergraduate majors are women, which is more than double the percentage that we had 10 years ago. The field in general is seeing more diversity. And along the lines of underrepresented minorities and different socioeconomic classes, we’re also seeing some movement there. Again, those numbers are nowhere near where we’d like them to be, which would be representative of the population as a whole. We still have a lot of work to do. But directionally, things are moving the right way. And I think part of that is also that earlier in the pipeline, in K through 12 education, there are more opportunities for computing.

    Russ Altman: Actually, I’m glad you mentioned that because I did wanna ask you.

    At the K through 12 level, if you’re a parent, in my head, this is all, and I don’t know if this is even fair, but in my head this is all confused with the rise of video games. Because I know there’s an issue of young people using video games far beyond what I would’ve ever imagined and certainly far beyond what was available to me as a youth.

    But what is the best advice about the appropriate level of computer exposure for somebody in their K through 12 education? Should parents be introducing kids to programming early? Can they just wait and let it evolve as a natural interest? I think of it as like, is it the new reading, writing, arithmetic, and coding? Is that the new fourth area of basic competency for grammar school in K through 12? And I know you’ve thought about these issues. What’s the right answer? What’s our current best understanding?

    Mehran Sahami: Well, one of the things that’s touted a lot is a notion of what’s called computational thinking, which is a notion that would encompass some amount of programming, but also understanding just how computational techniques work and what they entail. So understanding something about data and how that might get used in a program without necessarily programming the actual program yourself.

    And that brings up lots of social issues as well, like privacy. How do you protect your data? How do you think about the passwords that you have?

    For a lot of these things, generally, it’s not too early to have kids learn something about it. As a parent myself, I worry about how much time they spend on screens. And the current thinking there is it’s not just about how much time is actually spent in front of a screen, but what that time is spent doing. And there are even lots of activities that don’t involve spending time in front of the screen.

    So, sometimes people ask: what would it mean to have a kindergartner or a first grader program? Can we even do that?

    We say, well, the notion of programming there isn’t about sitting in front of a computer and typing. It’s about making a peanut butter and jelly sandwich. So, what does that mean? It means it’s a process, and you think about, well, you need to get out the bread, you need to get out the peanut butter, you open the jar. There’s this set of steps you have to go through. And if you don’t follow the steps properly, you don’t get the result you want. And that gets kids to think about what an algorithmic process is, without actually touching a computer.
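
    Once those same kids do sit at a keyboard, the sandwich example maps directly onto a first program: an explicit, ordered sequence of steps, where skipping or shuffling a step ruins the result. A toy rendering, purely for illustration:

```python
# Toy rendering of the "peanut butter and jelly" idea: a program is an ordered
# list of steps, and skipping or shuffling them gives the wrong result.
STEPS = [
    "get out the bread",
    "get out the peanut butter and jelly",
    "open the jars",
    "spread peanut butter on one slice",
    "spread jelly on the other slice",
    "press the slices together",
]

def make_sandwich(steps: list[str]) -> str:
    done = []
    for step in steps:
        done.append(step)          # each step depends on the ones before it
    return "sandwich" if done == STEPS else "mess"

if __name__ == "__main__":
    print(make_sandwich(STEPS))        # steps in order -> sandwich
    print(make_sandwich(STEPS[::-1]))  # steps reversed -> mess
```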

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and, just in the last few moments, computer science for young people. On this issue of diversity and the pipeline, is there evidence that we need new ways? Are there emerging ways of teaching computer science that are perhaps more palatable to these groups that have not traditionally been involved in computer science? I’m wondering if curricula are evolving to attract, for example, young women who might not have traditionally been attracted, or, again, underrepresented minorities. Do we see opportunities for changing the way things are taught to make it more welcoming?

    Mehran Sahami: Sure, and it has to do both with the content and the culture. So, from the standpoint of content, I think one of the things that’s happened in the last few years that’s helped with increasing the diversity numbers is more people understanding the social impact of computing. And there are studies that have shown, for example, that in terms of the choice of activities women might wanna do, they tend to be drawn more toward majors where they see the social impact of the work.

    Then, in terms of the examples you use in computer science classes and the kinds of problems that you could actually solve, it means integrating in more of the social problem-solving aspect. How can we think about using computers, for example, to try to understand or cure diseases, or to understand climate change? That brings in a wider swath of people than previously came.

    The other is culture. I think there’s been, and this is well documented, a lot of sexist culture in the computing community. A bright light has been shined on that in the past few years, and slowly the culture is beginning to change. And so, when you change that culture, you make it more welcoming, and you have a community of people who now feel as though they’re not on the margins but actually right in the middle of the field. That helps bring in other people who are from similar communities.

    Russ Altman: So I really find that interesting. I’m not an organizational behavior expert at all, but I’ve heard that a lot of cultural change often needs to come from the top. There needs to be leadership committed to changing the culture.

    So in this world of, in this distributed world of education, who is the one who’s charged with changing culture? Is it the faculty? Is it the student leadership? Who does that fall to, in terms of changing the culture, and what is their to-do list?

    Mehran Sahami: Yeah, that’s a great question. It actually requires many levels of engagement. Certainly in the university, faculty need to be supportive. They need to think about the curriculum that they define, the examples that they use, the language they use, and how welcoming they are to students. One of the things we’ve seen here at Stanford is that the students have been very active in terms of student organizations, creating social events to bring people together, and to help not only create the community, but show others that the community exists.

    But if you think in the larger scope in industry, leaders of companies need to show that those companies are welcoming, that they’re taking steps toward diversity. They’re really listening to their employees. That’s the place where, in fact, it’s changed on a larger cultural level.

    Russ Altman: So, that really does make sense and it’s great to hear that we’re making some progress in terms of the numbers. The 30%, I remember when I was in training, it was less than 10%, and it was a big problem. That was ancient history.

    I know that one of the new things that you’ve been involved with, and I definitely want to have some time to talk about it, is an ethics class: ethics for computer science students. I don’t recall if it’s required or just recommended.

    Tell me about why. You’re very famous; I didn’t mention this in my introduction, but you are also well known as one of the most distinguished teachers at Stanford, with a great record of getting people fired up about the material in your classes. Why did you turn your attention to ethics, and what are the special challenges in teaching ethics to this group? Do they get it? Are they excited about it, or are they like, “Why are we doing this?”

    Mehran Sahami: Right, well, first of all, you’re too kind in your assessment.

    But I would say, for many years, there have been classes on ethics and computing at Stanford. What we’ve done in this most recent revision is with collaborators Rob Reich and Jeremy Weinstein, who are both in the political science department. Rob is a moral philosopher. Jeremy is a public policy expert.

    And then I, as the computer scientist, came together with them to say: what we wanna do is a modernized revision of this class, where the problems we look at are the problems that students are gonna be grappling with in sort of the zero-to-10-year time span from the time they graduate, and where the class brings together these different aspects. The computing is one of them, but we also need to understand things philosophically.

    What are the societal outcomes we want? What are the moral codes we wanna live by? How do we think about value trade-offs?

    And then from the public policy standpoint, what can we do not only as engineers but as potentially leaders of companies, as academics, as citizens, in order to help see these kinds of changes through so we actually get the outcomes we like.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Mehran Sahami, and we just turned our attention to a new course in ethics, looking at big-picture ethics. So this is not “you shouldn’t cheat, you shouldn’t steal stuff, you should make sure that your authors are all recognized.” These are the big-ticket items in the ethics of computer science. Can you give me a feeling for some of the issues that are hot-button issues? I love what you said about it being zero to 10, which means literally these are issues that could be big today or tomorrow. How did you decide which issues need to come to their attention as young trainees in computer science?

    Mehran Sahami: Sure. I mean, first, we sat down at the whiteboard and wrote out a bunch of topics that we thought were relevant, and quickly realized the full set was far larger than we could cover in one class. So we focused on four that we could really do deep dives into. The first one was algorithmic decision making: computers and algorithms are used more and more to make meaningful decisions that impact our lives, for example, whether or not we get loans or mortgages, or, if we have some trouble with the criminal justice system, whether we get bail or not.

    Russ Altman: And these may be without a human in the loop of that decision making?

    Mehran Sahami: For some of them. Some of them have a human in the loop that’s required. For example, in the financial industry, there are some decisions that a human has to take responsibility for, but there are some micro-transactions, for example, when you try to run your credit card, where the decision might get made to deny it without a human being involved. That was the first area.
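
    The split Sahami describes, with some decisions fully automated and borderline ones escalated to a person, can be pictured as a scored decision with an escalation band. The features, score, and thresholds below are invented for illustration; real credit systems are, of course, far more involved.

```python
# Illustrative sketch of automated decision making with an optional human in the loop:
# clear cases are decided automatically, borderline cases are escalated to a person.
# The features, score, and thresholds are invented for this example.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    matches_spending_history: bool
    new_merchant: bool

def risk_score(t: Transaction) -> float:
    score = 0.0
    score += 0.4 if t.amount > 1000 else 0.0
    score += 0.0 if t.matches_spending_history else 0.3
    score += 0.2 if t.new_merchant else 0.0
    return score

APPROVE_BELOW, DENY_ABOVE = 0.3, 0.6   # scores in between go to a human reviewer

def decide(t: Transaction) -> str:
    s = risk_score(t)
    if s < APPROVE_BELOW:
        return "approve (automated)"
    if s > DENY_ABOVE:
        return "deny (automated)"
    return "escalate to human reviewer"

if __name__ == "__main__":
    print(decide(Transaction(42.0, True, False)))    # -> approve (automated)
    print(decide(Transaction(2500.0, False, True)))  # -> deny (automated)
    print(decide(Transaction(1500.0, True, True)))   # -> escalate to human reviewer
```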

    Then we looked at issues around data privacy: what kinds of policies different companies have, and the different views around privacy in, say, the United States versus Europe, so we could also look at different cultural norms.

    The third unit was around AI and autonomous systems. Our deep dive was mainly on autonomous vehicles, something that students are looking at now and society in general is gonna have to deal with, both from the technological standpoint, but more so from the issues around economics: job displacement, what automation is gonna mean in the long term, and how we think about what sorts of automation we wanna build in terms of weighing the positives and negatives for society, safety versus job displacement.

    And then the last unit was on the power of large platforms, say the Facebooks, Apples, and Googles of the world, where now, for example, the questions around who has free speech are governed more by the large platforms, which can determine who’s on them or not, than by governments.

    We’re seeing things that previously happened in the public sphere moving to the private sphere. And how do we think about that? Because there isn’t the same kind of electoral recourse if you don’t like a policy that Facebook wants to implement for who can say what on its platform.

    Russ Altman: So that sounds really exciting. So tell me who signed up for this class? Was it your folks from computer science? Was it a bunch of other people saying, “Wow, I might be able to contribute to this conversation”? I mean it sounds like an incredibly open set of issues that a lot of people could contribute to, but who actually signed up?

    Mehran Sahami: Yeah, the vast majority of students who signed up are computer science majors or related majors, electrical engineering, something like that.

    Russ Altman: Was it required? I forgot to ask.

    Mehran Sahami: It satisfies certain requirements, but it’s not the only class that does.

    Russ Altman: Gotcha.

    Mehran Sahami: So it’s a popular choice, though, for a lot of students. But we did get students from many different disciplines. Certainly we got some from political science, some students from the law school. We got students from other areas, like anthropology from the humanities, and so there were lots of different perspectives brought into the class.

    Russ Altman: Without putting too fine a point on it, did you find that the computer science students were well-prepared to have these conversations that were not about technical stuff? Were you pleased to see their level of preparation, or did it highlight for you the need to do more of this kind of training for everybody?

    Mehran Sahami: Yeah, there’s a spectrum. There are some students who were hugely engaged, who actually made different decisions about, for example, what they wanted to pursue for their career as a result of some of the things they learned in that class, very deeply engaged on the social issues and thinking about what can I do as an engineer, as well as a citizen, to try to address these problems. And then we had some students who were seeing some of these things, the social issues, for the first time. And so, we’re trying to make it as broad as we can, to have something of interest to everyone who’s in there.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. More with Professor Mehran Sahami about computer science, ethics, and the future of education, next on SiriusXM Insight 121.

    Welcome back to The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and ethics training and ethics education.

    So, Mehran, at the end of the last segment, you talked about these four kind of pillars of your course, and they were great. And the fourth one, which I wanna dig into a little bit more, was the power of platforms, and you said something very intriguing about how, in some ways, the First Amendment is now adjudicated not by the courts or by the government, but by these huge platforms: if they turn me off, my voice on Twitter or on Facebook is gone.

    How do you set that up for the students and how do you lead the discussion about the responsibilities and obligations of computer scientists and others in society in the light of this new phenomenon?

    Mehran Sahami: Sure. So, one of the things we do in the class is we have for each one of our areas a case study that we’ve actually gotten professional writers to help write for us. And there in the power of platforms, one of the things we look at are cases where, for example, people have been banned from particular platforms, like say Alex Jones on Twitter. And so, part of that is what are the guidelines that these platforms have? How did they get applied? How can they do it in a way that scales? And so, you find all kinds of interesting phenomena there.

    Some of them are cases where a platform will keep information on the platform even if it may not be information they deem entirely trustworthy, because they have to make that determination, and that becomes a very strange place, with technology companies now making determinations about what is correct information.

    There are also a lot of automated techniques that go into it. And so the engineers need to build something that they think can actually detect hateful speech or things that may be beyond the boundary of the acceptable guidelines for the company.

    That brings up the deeper question of what the acceptable guidelines are. They don’t have to be in line necessarily with government practices. And sometimes there are also individual human reviewers who will look at content and see whether or not —

    Russ Altman: Yeah, these have been in the news recently, because some of them have very stressful jobs because of the content that they’re looking at.

    Mehran Sahami: Exactly. Imagine spending eight hours a day looking at videos of people being beheaded, all kinds of horrible things that people are posting out there. In some cases, it’s actually been reported some of these workers have things like post-traumatic stress disorder from doing this job all day.

    You get into this tension between what the platform’s responsibility is and how citizens can potentially affect what the platforms are doing, and in many cases, because of the voting structures of the shares of the platform, there’s not actually a lot that individuals can do. But how do we think about this meaningfully: should there be legislation, should there be regulation, and at what point does too much move out of the public sphere into private decision making?

    Russ Altman: So yeah, I’m struck by this problem, because there are so many facets to it. One of them is that Facebook, all of these platforms that you mentioned, they’re international platforms. Yes, there are some countries that might ban them. But even though they’re, in many cases, sitting in the US (in fact, in the case of Facebook, a couple of miles from where you and I are sitting right now), they have an international audience where the laws are different. We might talk about data privacy later, but there are new laws in Europe that are quite different from the laws in the US.

    How do you train the students to think about international-level issues that have to be adjudicated sometimes at a national or even sub-national level?

    Mehran Sahami: Right. At one level, it’s understanding the cultural reasons why some of these different norms exist.

    And then, secondly, it’s understanding that the platforms do need to abide by particular policies in particular countries, and those policies may be different.

    For example, what Google can show in search results in Germany, where there are restrictions on showing information related to Nazis, is different than in the United States. And certainly, as we saw with Google eventually pulling out of China, the kinds of policies they would have had to abide by if they wanted to continue to provide search there were outside of what they felt comfortable doing.

    And that becomes a question the company certainly has to answer, but it also becomes a decision for individuals who, say, wanna work at those companies or support those companies: how do they make their voices heard with respect to the companies choosing to make particular policy decisions about what they do in different locales?

    Russ Altman: It’s interesting. What I hear you saying is that even though Google is a global platform, it has different flavors in different countries. One of the choices is to pull out of a country because the rules are just not compatible with something that they wanna do. As an engineer for these companies, you might be working on a product that will never be deployed in country X but will be used a lot in country Y, and you have to think about the implications and your comfort level with building these technologies that may or may not wind up being used in different settings.

    I can see that. How much do you empower the students to actually voice their opinions to the people who are signing their paychecks? It’s tricky; you don’t wanna train a bunch of folks who wind up coming back and saying, “Oh, by the way, I was fired because of all the great things that I learned in this class.”

    Mehran Sahami: Yeah, but at the same time you want students to have their own personal moral responsibility. You want them to make decisions that they feel are in line with their own personal ethics.

    But at the same time, there are a lot of decisions that get made at the engineering level that have far-reaching consequences. So if you’re the engineer who’s working on how to filter results out of, say, the search engine in China or in Germany, there are decisions you’re making deep down in the code, in terms of what algorithms you might be using, what techniques, what kind of data, that are gonna have a real impact on the information people see. And that’s at a level that is affecting individuals, but is at a more granular level than the decisions that are being made by the executives of the company.

    And so, the engineers themselves need to be aware of that when they’re building these systems.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami now about these great scenarios that you’re using in your teaching. So, if you’ll allow me, I would just love to move to another one of your areas and find out what kind of cases you’re using, for example, in data privacy.

    Mehran Sahami: Sure. In data privacy, one of the big things we’re looking at is facial recognition. These days you see articles pretty much every day on the use of the technology, which localities have it, and which don’t.

    San Francisco, for example, recently banned the use of facial recognition, by the public sector at least, whereas many airlines are now moving to using facial recognition to check you in for a flight. As you can imagine, that creates tensions between privacy and security. At one level, why would we use facial recognition to get people on the plane? Estimates are that it only saves a couple of minutes when you’re boarding a 747.

    Russ Altman: It’s the overhead luggage that they should work on, not the facial recognition.

    Mehran Sahami: Exactly. But the real reason is the folks who are responsible for airline security wanna be able to detect if there’s someone getting on the plane who shouldn’t be getting on the plane.

    The flip side is your personal privacy. To what extent do you take facial recognition to an extreme where everyone can be tracked?

    London, for example, has half a million closed-circuit TV cameras around the city. Combine that with facial recognition and you can get a pretty good map of what most of the people in the city are doing at a given time, at least those who are outdoors.

    How do we trade off the privacy implications versus security? Different people will make that decision differently. And part of the public policy question, which is why we look at the multidisciplinary facets of this class, is: how do we, as a society, decide what we wanna do?

    Russ Altman: So you must’ve had… I’m presuming you had small-group discussion sessions, because I know you had a big registration in this class, a couple of hundred people perhaps, but you can’t have a meaningful discussion with 200 people, I would think. So, did you actually break them down and have them have smaller discussion groups?

    Mehran Sahami: Yeah. There were weekly small discussions. And then one of the things we would do, which actually worked better than you would think, is we took 250 students, and we had this large room with a bunch of tables that seat about eight or 10. So we could get them in there, seated around these tables after they’d read the case study, and give them guiding questions for discussion. Then they could have the discussion, and we could have call-outs, so they could share their findings or their insights across the whole class.

    Russ Altman: So when you have discussion sessions, who leads them? Are these computer scientists, or… You said there were a lot of political scientists involved in the class.

    It’s not clear to me at all who I would want leading that discussion, because what you need to know and kind of the frameworks of ethics are quite diverse. So, who leads those discussions?

    Mehran Sahami: Yeah, it’s a great question. We had a large number of TAs that spanned a bunch of areas. We had some computer scientists, some law students, and students with backgrounds in philosophy, anthropology, a bunch of different areas. And in some cases, the sections were co-taught by computer scientists, say, and someone from the law school. And so you would get these different perspectives to really bring out the richness in the conversation.

    Russ Altman: And I would guess that because of the international nature of the students, people are bringing very different perspectives on issues of privacy, state power, individual human rights. I would guess there’s a huge diversity in the student body on those issues.

    Mehran Sahami: Absolutely. And it’s also a nice way for students to be able to connect, because when you hear about the different kinds of issues in different countries, it’s easy to just think about them in an abstract sense and not understand why. But when you actually have someone sitting across the table saying, “I grew up here and I can tell you about why we believe the particular things we do,” it makes it much more meaningful in terms of that student engagement.

    Russ Altman: Well, there you have it, the next generation of computer scientists trained in the ethics of their technologies.

    Thank you for listening to The Future of Everything. I’m Russ Altman. If you missed any of this episode, listen any time on demand with the SiriusXM app.

    The job market seems great for these students who have skills that are needed in almost every industry. It’s not just about creating software for PCs or iPhones, but increasingly it’s about building systems that interact with the physical world.

    Think about self-driving cars, robotic assistants and other things like that. AI, artificial intelligent systems, have also become powerful with voice recognition, like Siri and Alexa, the ability to translate, the ability to recognize faces, even in our cell phones, and the kind of big data mining that is transforming the financial community, real estate, entertainment, sports, news, even healthcare. These systems promise efficiencies, but do add some worry about the loss of jobs and displaced workers.

    Professor Mehran Sahami is a professor of computer science at Stanford and an expert at computer science education. Before coming to Stanford, he worked at Google and he has led national committees that have created guidelines for computer science programs internationally.

    Mehran, there is a boom in interest in computer science as an area of study. Of course, students are always encouraged to follow their passion when they choose their majors. But should we worry that there’s enough English majors, and history majors, and all these traditional majors that I mentioned before? Is this a blip or is this a change in the ecosystem that we’re expecting now for a long time to come?

    Mehran Sahami: Sure, that’s a great question. I do think it’s a sea change. I think it makes, when looking forward, there’s a really difference in terms of the decisions students are making in terms of where they wanna go, the kinds of fields they wanna pursue. I do think we would lament the fact if we lost all the English majors and lost all the economics majors, because what we’re actually seeing now is more problems require multi-disciplinary expertise to really solve, and so we need people across the board.

    But I think what students have seen, especially in the last 10 years is that computer science is not just about sitting in a cube programming 80 hours a week. It’s about solving social problems through computation, and so that’s really brought the ability of students from computing and the ability of students in other areas to come together and solve bigger problems.

    Russ Altman: Are you seeing an increase in interest in kind of joint majors where people have two feet in two different camps, say English and computer science, or the arts and computer science? Is that a thing?

    Mehran Sahami: That is a thing. We actually even had specialized program called CS+X where X was a choice of many different humanities majors at Stanford. We actually, rather than having that program saw that students were just choosing anyway to do double majors with computer science, lots of students who minors with computer science, and vice versa. They’ll major in something else and minor in computer science. So many students are already making this choice to combine fields.

    Russ Altman: We kinda jumped right into it, but let’s step back. Tell me what a computer science. You’re an expert at this. What is a computer science education look like? I think everybody would say, “Well, they learn how to program computers.” But I suspect, in fact, I know that it’s more than that. Can you give us a kind of thumbnail sketch of what a computer science training should look like.

    Mehran Sahami: Sure. I think most people, like you said, think of computer science as just programming. And what a lot of students see when they get here is that there is far more richness to it as an intellectual endeavor. There is mathematics and logic behind it. There is the notions of building larger systems, the kinds of algorithms you can build, how efficient they are, artificial intelligence, which you alluded to, has seen a huge boom in the last few years, because it’s allowed us to solve problems in ways that potentially were even better than could’ve been hand crafted by human beings.

    When you see this whole intersection of stuff coming together in computing, how humans interact with machines, trying to solve problems in biology, for example, as you’ve done for many years, the larger scale impact of that field becomes clear.

    What students do isn’t just about programming, but it’s about understanding how programming is a tool to solve a lot of problems, and there’s lots of science behind how those tools are built and how they’re used.

    Russ Altman: Great. That’s actually very exciting. It means that we’re giving them a tool set that’s gonna last them for their whole life, even as the problem of the day changes. As we think about the future, how are we doing in terms of diversity in computer science? And I mean diversity along all axes, sex and gender, underrepresented minorities, different socioeconomic groups. Are they all feeling welcome to this big tent, or do we still have a recruitment issue?

    Mehran Sahami: Well, we still have a recruitment issue, but the good new sis that it’s getting better. For many years, and it’s still true now, there’s a large gender imbalance in computing. It’s actually gotten quite a bit better. At Stanford now, for example, more than a third of the undergraduate majors are women, which is more than double the percentage that we had 10 years ago. The field in general is seeing more diversity. And along the lines of underrepresented minorities, different socioeconomic classes, we’re also seeing some movement there. Again, those numbers are nowhere near where we’d like them to be, that would be representative of the population as a whole. We still have a lot of work to do. But directionally, things are moving the right way. And I think part of that is also that earlier in the pipeline in K through 12 education, there is more opportunities for computing.

    Russ Altman: Actually, I’m glad you mentioned that because I did wanna ask you.

    At the K through 12 level, if you’re a parent, in my head, this is all, and I don’t know if this is even fair, but in my head this is all confused with the rise of video games. Because I know there’s an issue of young people using video games far beyond what I would’ve ever imagined and certainly far beyond what was available to me as a youth.

    But what is the best advice about the appropriate level of computer exposure for somebody in their K through 12 education? Should parents be introducing kids to programming early? Can they just wait and let it evolve as a natural interest? I think of it as like, is it the new reading, writing, arithmetic, and coding? Is that the new fourth area of basic competency for grammar school in K through 12? And I know you’ve thought about these issues. What’s the right answer? What’s our current best understanding?

    Mehran Sahami: Well, one of the things that's touted a lot is the notion of what's called computational thinking, which would encompass some amount of programming, but also understanding just how computational techniques work and what they entail. So, understanding something about data and how it might get used in a program, without necessarily writing the actual program yourself.

    And that brings up lots of social issues as well, like privacy. How do you protect your data? How do you think about the passwords that you have?

    For a lot of these things, generally, it's not too early to have kids learn something about it. As a parent myself, I worry about how much time they spend on screens. And the current thinking there is that it's not just about how much time is actually spent in front of a screen, but what that time is spent doing. And there are even lots of activities that don't involve spending time in front of the screen.

    So, sometimes people think about this notion of having a kindergartner or a first grader program. Can we even do that?

    We say, well, the notion of programming there isn't about sitting in front of a computer and typing. It's about making a peanut butter and jelly sandwich. So, what does that mean? It means it's a process, and you think about, well, you need to get out the bread, you need to get out the peanut butter, you open the jar. There's this set of steps you have to go through. And if you don't follow the steps properly, you don't get the result you want. And that gets kids to think about what an algorithmic process is, without actually touching a computer.
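
    To make that concrete in code (a minimal, purely illustrative sketch; the function name and the particular steps below are invented, not part of any curriculum Sahami describes), the sandwich “program” is just an ordered list of steps, and skipping or reordering them breaks the result:

    # Minimal illustrative sketch (Python): an algorithm as an ordered list of steps.
    # The step names are hypothetical and only mirror the sandwich analogy above.

    def make_pbj_sandwich():
        steps = [
            "get out the bread",
            "get out the peanut butter",
            "open the peanut butter jar",
            "spread peanut butter on one slice",
            "open the jelly jar",
            "spread jelly on the other slice",
            "press the two slices together",
        ]
        for number, step in enumerate(steps, start=1):
            # Each step depends on the ones before it; doing them out of
            # order (spreading before opening the jar) breaks the result.
            print(f"Step {number}: {step}")

    if __name__ == "__main__":
        make_pbj_sandwich()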

    Russ Altman: This is The Future of Everything. I'm Russ Altman. I'm speaking with Professor Mehran Sahami about computer science and, just in the last few moments, computer science for young people. On this issue of diversity and the pipeline, is there evidence that we need new ways? Are there emerging ways of teaching computer science that are perhaps more palatable to groups that have not traditionally been involved in computer science? I'm wondering if curricula are evolving to attract, for example, young women who might not have traditionally been attracted, or, again, underrepresented minorities. Do we see opportunities for changing the way things are taught to make it more welcoming?

    Mehran Sahami: Sure, and it has to do both with the content and the culture. So, from the standpoint of content, I think one of the things that's happened in the last few years that's helped with increasing the diversity numbers is more people understanding the social impact of computing. And there are studies that have shown, for example, that in terms of the choice of activities they want to pursue, women tend to be drawn more toward majors where they can see the social impact of the work.

    Then, in terms of the examples you use in computer science classes and the kinds of problems you could actually solve, you integrate in more of the social problem-solving aspect. How can we think about using computers, for example, to try to understand or solve diseases, or to understand climate change? That brings in a wider swath of people than previously came otherwise.

    The other is culture. I think there's been, and this is well documented, a lot of sexist culture in the computing community. A bright light has been shined on that in the past few years, and slowly the culture is beginning to change. And when you change that culture, you make it more welcoming, and you have a community of people who now feel as though they're not on the margins but actually right in the middle of the field. That helps bring in other people who are from similar communities.

    Russ Altman: So I really find that interesting. I'm not an organizational behavior expert at all, but I've heard that a lot of cultural change often needs to come from the top. There needs to be leadership committed to changing the culture.

    So in this world of, in this distributed world of education, who is the one who’s charged with changing culture? Is it the faculty? Is it the student leadership? Who does that fall to, in terms of changing the culture, and what is their to-do list?

    Mehran Sahami: Yeah, that's a great question. It actually requires many levels of engagement. Certainly in the university, faculty need to be supportive. They need to think about the curriculum that they define, the examples that they use, the language they use, how welcoming they are to students. One of the things we've seen here at Stanford is that the students have been very active in terms of student organizations, creating social events to bring people together, and helping not only to create the community, but to show others that the community exists.

    But if you think in the larger scope, in industry, leaders of companies need to show that those companies are welcoming, that they're taking steps toward diversity, that they're really listening to their employees. That's the place where, in fact, it changes on a larger cultural level.

    Russ Altman: So, that really does make sense and it’s great to hear that we’re making some progress in terms of the numbers. The 30%, I remember when I was in training, it was less than 10%, and it was a big problem. That was ancient history.

    I know that one of the new things that you’ve been involved with and I definitely want to have some time to talk about it, is an ethics class and ethics for computer science students. I don’t recall if it’s required or just recommended.

    Tell me about why. You’re a very famous, I didn’t mention this in my introduction, but you also are well-known to be one of the most distinguished teachers at Stanford with a great record of getting people fired up about the material in your classes. Why did you turn your attention to ethics and what are the special challenges in teaching ethics to this group? Do they get it? Are they excited about it, or are they like, “Why are we doing this?”

    Mehran Sahami: Right, well, first of all, you’re too kind in your assessment.

    But I would say, for many years, there have been classes on ethics and computing at Stanford. What we've done in this most recent revision, and this is with collaborators Rob Reich and Jeremy Weinstein, who are both in the political science department. Rob is a moral philosopher. Jeremy is a public policy expert.

    And then I, as the computer scientist, came together with them to say that what we wanna do is a modernized revision of this class, where the problems we look at are the problems students are gonna be grappling with in sort of the zero to 10-year time span from the time they graduate, and where the class brings together these different aspects. The computing is one of them, but we also need to understand things philosophically.

    What are the societal outcomes we want? What are the moral codes we wanna live by? How do we think about value trade-offs?

    And then from the public policy standpoint, what can we do not only as engineers but as potentially leaders of companies, as academics, as citizens, in order to help see these kinds of changes through so we actually get the outcomes we like.

    Russ Altman: This is The Future of Everything. I'm Russ Altman. I'm speaking with Mehran Sahami, and we just turned our attention to a new course in ethics, looking at big-picture ethics. So this is not “you shouldn't cheat, you shouldn't steal stuff, you should make sure that your authors are all recognized.” These are the big-ticket items in the ethics of computer science. Can you give me a feeling for some of the issues that are hot-button issues? I love what you said about the zero to 10-year window, which means literally these are issues that could be big today or tomorrow. How did you decide which issues need to come to their attention as young trainees in computer science?

    Mehran Sahami: Sure. I mean, first, we sat down at the whiteboard and wrote out a bunch of topics that we thought were relevant, and quickly realized the full set was far larger than we could cover in one class. So we focused on four that we could really do deep dives into. The first one was algorithmic decision making: computers and algorithms are used more and more to make meaningful decisions that impact our lives, for example, whether or not we get loans or mortgages, or, if we have some trouble with the criminal justice system, whether or not we get bail.

    Russ Altman: And these may be without a human in the loop of that decision making?

    Mehran Sahami: For some of them. Some of them have a human in the loop that's required. For example, in the financial industry, there are some decisions that a human has to take responsibility for, but there are some micro transactions, for example, when you run your credit card, where the decision to deny it might get made without a human being involved. That was the first area.
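
    As a rough illustration of that kind of fully automated micro-decision (a hypothetical sketch only; the features and thresholds below are invented and do not describe any real issuer's rules), a rule-based check might approve or deny a card transaction with no human in the loop:

    # Hypothetical sketch (Python) of an automated card-transaction decision.
    # Every feature and threshold here is invented for illustration.

    def decide_transaction(amount, home_country, transaction_country,
                           transactions_last_hour):
        """Return 'approve' or 'deny' with no human in the loop."""
        if amount > 5000:
            return "deny"      # unusually large purchase
        if transaction_country != home_country and amount > 1000:
            return "deny"      # large purchase far from home
        if transactions_last_hour > 10:
            return "deny"      # burst of charges suggests card testing
        return "approve"

    print(decide_transaction(42.50, "US", "US", transactions_last_hour=2))   # approve
    print(decide_transaction(1500, "US", "FR", transactions_last_hour=1))    # deny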

    Then we looked at issues around data privacy: what kinds of policies different companies have, and the different views, say, in the United States versus Europe around privacy, so we could also look at different cultural norms.

    The third unit was around AI and autonomous systems. Our deep dive was mainly on autonomous vehicles, something that students are looking at now and that society in general is gonna have to deal with, both from the technological standpoint, but more so from the issues around economics: job displacement, what automation is gonna mean in the long term, and how we turn our attention to thinking about what sorts of automation we wanna build, weighing the positives and negatives for society, safety versus job displacement.

    And then the last unit was on the power of large platforms, say, the Facebooks, Apples, and Googles of the world, where now, for example, the questions around who has free speech are governed more by the large platforms, which can determine who's on them or not, than by governments.

    We're seeing these changes of things that previously happened in the public sphere moving to the private sphere. And how do we think about that? Because there isn't the same kind of electoral recourse if you don't like a policy that Facebook wants to implement for who can say what on its platform.

    Russ Altman: So that sounds really exciting. So tell me who signed up for this class? Was it your folks from computer science? Was it a bunch of other people saying, “Wow, I might be able to contribute to this conversation”? I mean it sounds like an incredibly open set of issues that a lot of people could contribute to, but who actually signed up?

    Mehran Sahami: Yeah, the vast majority of students who signed up are computer science majors or related majors, electrical engineering, something like that.

    Russ Altman: Was it required? I forgot to ask.

    Mehran Sahami: It satisfies certain requirements, but it’s not the only class that does.

    Russ Altman: Gotcha.

    Mehran Sahami: So it's a popular choice for a lot of students. But we did get students from many different disciplines. Certainly we got some from political science, some students from the law school. We got students from other areas like anthropology and the humanities, and so there were lots of different perspectives that were brought into the class.

    Russ Altman: Without putting too fine a point on it, did you find that the computer science students were well-prepared to have these conversations that were not about technical stuff? Were you pleased to see their level of preparation, or did it highlight for you the need to do more of this kind of training for everybody?

    Mehran Sahami: Yeah, there's a spectrum. There were some students who were hugely engaged, who actually made different decisions about, for example, what they wanted to pursue for their career as a result of some of the things they learned in that class, and who were very deeply engaged on the social issues and thinking about what they can do as engineers as well as citizens to try to address these problems. And then we had some students who were seeing some of these things, the social issues, for the first time. And so, we're trying to make it as broad as we can, to have something of interest to everyone who's in there.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. More with Professor Mehran Sahami about computer science, ethics, and the future of education, next on SiriusXM Insight 121.

    Welcome back to The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and ethics training and ethics education.

    So, Mehran, at the end of the last segment, you talked about these four kind of pillars of your course, and they were great. And the fourth one, which I wanna dig into a little bit more, was the power of platforms, and you said something very intriguing about how in some ways the First Amendment is now adjudicated not by the courts or by the government, but by these huge platforms; if they turn me off, my voice on Twitter or on Facebook is gone.

    How do you set that up for the students and how do you lead the discussion about the responsibilities and obligations of computer scientists and others in society in the light of this new phenomenon?

    Mehran Sahami: Sure. So, one of the things we do in the class is we have, for each one of our areas, a case study that we've actually gotten professional writers to help write for us. And there, in the power of platforms unit, one of the things we look at is cases where, for example, people have been banned from particular platforms, like, say, Alex Jones on Twitter. And so, part of that is: what are the guidelines that these platforms have? How do they get applied? How can they do it in a way that scales? And so, you find all kinds of interesting phenomena there.

    In some cases a platform will keep information on the platform even if it may not be information that they deem entirely trustworthy, because they have to make that determination, and that becomes a very strange place, with the technology companies now making determinations about what is correct information.

    There are also a lot of automated techniques that go into it. And so the engineers need to build something that they think can actually detect hateful speech, or things that may be beyond the boundary of the acceptable guidelines for the company.
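
    In the roughest possible terms (a deliberately crude, hypothetical sketch; production moderation systems rely on trained classifiers, context, and appeals processes rather than word lists), automated screening can be pictured as code that flags posts matching a policy-defined list for human review:

    # Deliberately crude, hypothetical sketch (Python) of automated screening.
    # The blocklist terms are placeholders; the point is only that the
    # engineering choices (which terms, which action) encode policy.

    BLOCKED_TERMS = {"placeholder_slur_1", "placeholder_threat_phrase"}

    def screen_post(text):
        words = set(text.lower().split())
        if words & BLOCKED_TERMS:
            return "flag_for_human_review"
        return "allow"

    print(screen_post("a perfectly ordinary post"))              # allow
    print(screen_post("a post containing placeholder_slur_1"))   # flag_for_human_review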

    That brings up the deeper question of what the acceptable guidelines are. They don't necessarily have to be in line with government practices. And sometimes there are also individual human reviewers who will look at content and see whether or not —

    Russ Altman: Yeah, these have been in the news recently, because some of them have very stressful jobs because of the content that they’re looking at.

    Mehran Sahami: Exactly. Imagine spending eight hours a day looking at videos of people being beheaded, all kinds of horrible things that people are posting out there. In some cases, it’s actually been reported some of these workers have things like post-traumatic stress disorder from doing this job all day.

    You get into this tension between what the platform's responsibility is and how citizens can potentially affect what the platforms are doing, and in many cases, because of the voting structures of the shares of the platform, there's not actually a lot that individuals can do. But we have to think meaningfully about whether there should be legislation, whether there should be regulation, and at what point too much moves out of the public sphere into private decision making.

    Russ Altman: So yeah, I'm struck by this problem, because there are so many facets to it. One of them is that Facebook and all of these platforms that you mentioned are international platforms. Yes, there are some countries that might ban them. But even though they're, in many cases, sitting in the US, and in fact in the case of Facebook, a couple of miles from where you and I are sitting right now, they have an international audience where the laws are different. We might talk about data privacy later, but there are new laws in Europe that are quite different from the laws in the US.

    How do you train the students to think about international-level issues that have to be adjudicated sometimes at a national or even sub-national level?

    Mehran Sahami: Right. At one level, it's understanding the cultural reasons why some of these different norms exist.

    And then secondly, it's understanding that the platforms do need to abide by particular policies in particular countries, and those policies may be different.

    For example, what Google can show in search results in Germany, where there are restrictions on showing information related to Nazis, is different than in the United States. And certainly, as we saw with Google eventually pulling out of China, the kinds of policies they had to abide by if they wanted to continue to provide search there were outside of what they felt comfortable doing.

    And that becomes a question the company certainly has to answer, but it also becomes a decision for individuals who, say, wanna work at those companies or support those companies: how do they make their voices heard with respect to the companies choosing to make particular policy decisions about what they do in different locales?

    Russ Altman: It's interesting. What I hear you saying is that even though Google is a global platform, it has different flavors in different countries. One of the choices is to pull out of a country because the rules are just not compatible with something that they wanna do. As an engineer for these companies, you might be working on a product that will never be deployed in country X, but will be used a lot in country Y, and you have to think about the implications and your comfort level with building these technologies that may or may not wind up being used in different settings.

    I can see that. How much do you empower the students to actually voice their opinions to the people who are signing their paychecks? It's tricky. You don't wanna train a bunch of folks who wind up coming back and saying, “Oh, by the way, I was fired because of all the great things that I learned in this class.”

    Mehran Sahami: Yeah, but at the same time you want students to have their own personal moral responsibility. You want them to make decisions that they feel are in line with their own personal ethics.

    But at the same time, there are a lot of decisions that get made at the engineering level that have far-reaching consequences. So if you're the engineer who's working on how to filter results out of, say, the search engine in China or in Germany, there are decisions you're making deep down in the code, in terms of what algorithms you might be using, what techniques, what kind of data, that are gonna have a real impact on the information people see. And that's at a level that affects individuals, but is more granular than the decisions being made by the executives of the company.

    And so, the engineers themselves need to be aware of that when they’re building these systems.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami now about these great scenarios that you’re using in your teaching. So, if you’ll allow me, I would just love to move to another one of your areas and find out what kind of cases you’re using, for example, in data privacy.

    Mehran Sahami: Sure. In data privacy, one of the big things we're looking at is facial recognition, which is getting… These days you see articles pretty much every day on the use of the technology, which localities have it and which don't.

    San Francisco, for example, recently banned the use of facial recognition by the public sector at least, whereas many airlines are now moving to using facial recognition to check you in on a plane. As you can imagine, that brings up these tensions between privacy and security. At one level, why would we use facial recognition to get people on the plane? It's only estimated to save a couple of minutes of time when you're boarding a 747.

    Russ Altman: It’s the overhead luggage that they should work on, not the facial recognition.

    Mehran Sahami: Exactly. But the real reason is the folks who are responsible for airline security wanna be able to detect if there’s someone getting on the plane who shouldn’t be getting on the plane.

    The flip side is your personal privacy. To what extent do you take facial recognition to an extreme where everyone can be tracked?

    London, for example, has half a million closed-circuit TV cameras around the city. You combine that with facial recognition, you can get a pretty good map of what most of the people in the city are doing at a given time, who are outdoors at least.
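
    To give a sense of the mechanics behind that policy question (a hedged sketch; real systems use learned face-embedding models, and the vectors and threshold below are made-up numbers), facial recognition at this scale typically reduces to comparing an embedding of a camera frame against a gallery of known embeddings:

    # Hypothetical sketch (Python) of the matching step in facial recognition.
    # In practice a neural network maps a face image to an embedding vector;
    # the embeddings and threshold below are invented for illustration.

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    gallery = {
        "person_A": np.array([0.9, 0.1, 0.3]),
        "person_B": np.array([0.2, 0.8, 0.5]),
    }

    def identify(probe_embedding, threshold=0.95):
        best_name, best_score = None, -1.0
        for name, embedding in gallery.items():
            score = cosine_similarity(probe_embedding, embedding)
            if score > best_score:
                best_name, best_score = name, score
        # Refusing to identify below the threshold is itself a policy choice
        # about false matches versus missed matches.
        return best_name if best_score >= threshold else "unknown"

    print(identify(np.array([0.88, 0.12, 0.31])))   # matches person_A
    print(identify(np.array([0.10, 0.10, 0.90])))   # unknown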

    How do we trade off the privacy implication versus security? Different people will make that decision differently. And part of the public policy question, which is why we look at the multidisciplinary facet of this class, is how as a society do we decide what we wanna do?

    Russ Altman: So you must've had… I'm presuming you had small group discussion sessions, because I know you had a big registration in this class, a couple of hundred people perhaps, and I would not think you can have a meaningful discussion with 200 people. So, did you actually break them down and have them have smaller discussion groups?

    Mehran Sahami: Yeah. There were weekly small discussions. And then one of the things we would do, which actually worked better than you would think, is we took 250 students, and we had this large room with a bunch of tables that seat about eight or 10. And so we could get them in there, seated around these tables after they'd read the case study, and give them guiding questions for discussion. So then they could have the discussion, and we could have call-outs, so they could share their findings or their insights across the whole class.

    Russ Altman: So when you have discussion sessions, who leads them? Are these computer scientists, or a… You said there were a lot of political scientists involved in the class.

    It’s not clear to me at all who I would want leading that discussion, because what you need to know and kind of the frameworks of ethics are quite diverse. So, who leads those discussions?

    Mehran Sahami: Yeah, it's a great question. We had a large number of TAs that span a bunch of areas. So we had some computer scientists. We had some law students. We had students with backgrounds in philosophy and anthropology, a bunch of different areas. And in some cases, the sections were co-taught by, say, a computer scientist and someone from the law school. And so you would get these different perspectives to really bring out the richness in the conversation.

    Russ Altman: And I would guess that because of the international nature of the students, people are bringing very different perspectives on issues of privacy, state power, individual human rights. I would guess there’s a huge diversity in the student body on those issues.

    Mehran Sahami: Absolutely. And it’s also a nice way for students to be able to connect, because when you hear about the different kinds of issues in different countries, it’s easy to just think about them in an abstract sense and not understand why. But when you actually have someone sitting across the table saying, “I grew up here and I can tell you about why we believe the particular things we do,” it makes it much more meaningful in terms of that student engagement.

    Russ Altman: Well, there you have it, the next generation of computer scientists trained in the ethics of their technologies.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford Engineering has been at the forefront of innovation for nearly a century, creating pivotal technologies that have transformed the worlds of information technology, communications, health care, energy, business and beyond.

    The school’s faculty, students and alumni have established thousands of companies and laid the technological and business foundations for Silicon Valley. Today, the school educates leaders who will make an impact on global problems and seeks to define what the future of engineering will look like.
    Mission

    Our mission is to seek solutions to important global problems and educate leaders who will make the world a better place by using the power of engineering principles, techniques and systems. We believe it is essential to educate engineers who possess not only deep technical excellence, but the creativity, cultural awareness and entrepreneurial skills that come from exposure to the liberal arts, business, medicine and other disciplines that are an integral part of the Stanford experience.

    Our key goals are to:

    Conduct curiosity-driven and problem-driven research that generates new knowledge and produces discoveries that provide the foundations for future engineered systems
    Deliver world-class, research-based education to students and broad-based training to leaders in academia, industry and society
    Drive technology transfer to Silicon Valley and beyond with deeply and broadly educated people and transformative ideas that will improve our society and our world.

    The Future of Engineering

    The engineering school of the future will look very different from what it looks like today. So, in 2015, we brought together a wide range of stakeholders, including mid-career faculty, students and staff, to address two fundamental questions: In what areas can the School of Engineering make significant world‐changing impact, and how should the school be configured to address the major opportunities and challenges of the future?

    One key output of the process is a set of 10 broad, aspirational questions on areas where the School of Engineering would like to have an impact in 20 years. The committee also returned with a series of recommendations that outlined actions across three key areas — research, education and culture — where the school can deploy resources and create the conditions for Stanford Engineering to have significant impact on those challenges.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 11:35 am on June 26, 2019 Permalink | Reply
    Tags: Clark Barrett, Computer algorithms are helping identify these threats., Computer Science, , Subhasish Mitra, The logic flaws or bugs that make chips vulnerable to attack   

    From Stanford University Engineering: “Q&A: What’s new in the effort to prevent hackers from hijacking chips?” 

    From Stanford University Engineering

    June 18, 2019
    Tom Abate
    Andrew Myers

    Designers have always had to find and fix the logic flaws, or bugs, that make chips vulnerable to attack. Now computer algorithms are helping them identify these threats.

    1
    As hackers develop new ways to attack chips, researchers aim to anticipate and forestall their malicious intrusions. | Illustration by Kevin Craft

    In their previous work, Stanford engineering professors Clark Barrett and Subhasish Mitra developed computer algorithms to automate the process of finding bugs in chips and fixing these flaws before the chips are manufactured.

    Now, the researchers are adapting their algorithms to thwart a new type of peril — the possibility that hackers could misuse a chip’s features to carry out some nefarious end. In a recent discussion with Stanford Engineering, Barrett and Mitra explain the risks, and how algorithms can help prevent them.

    What’s new when it comes to finding bugs in chips?

    Designers have always tried to find logic flaws, or bugs as they are called, before chips went into manufacturing. Otherwise, hackers might exploit these flaws to hijack computers or cause malfunctions. This has been called debugging and it has never been easy. Yet we are now starting to discover a new type of chip vulnerability that is different from so-called bugs. These new weaknesses do not arise from logic flaws. Instead, hackers can figure out how to misuse a feature that has been purposely designed into a chip. There is not a flaw in the logic. But hackers might be able to pervert the logic to steal sensitive data or take over the chip.

    Have we already suffered from these unintended consequence attacks?

    In a way, yes. Last year some white hat security experts — good guys who try to anticipate hack attacks — discovered two attacks that could be used to guess secret data contained in sophisticated microprocessors. The white hats called these attacks Spectre and Meltdown. The attacks misused two features designed to speed up chip performance, known as “out-of-order execution” and “speculative execution.” These features store certain data in a chip in a way that makes the data immediately available should the program require it. Say the program requires access to credit card info or private health data. The white hats discovered that Spectre and Meltdown could eavesdrop on any network to which the chip is connected and read that stored data right off the chip.

    How?

    The analogy would be guessing a word in a crossword puzzle without knowing the answer. If a clue demands a plural answer, the last letter is probably an ‘s.’ If the word is in the past tense, the last two letters are probably ‘ed’ and so on. The white hats discovered that hackers could use the out-of-order and speculative execution features as clues to make repeated guesses about what data was being stored for instant use. We think Spectre and Meltdown were discovered before hackers could actually perform such attacks. But it was a big wake-up call. You wouldn’t want a hacker using a technique like that to take control of your self-driving car.
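
    To make the guessing intuition slightly more concrete (a toy simulation only; real Spectre and Meltdown attacks exploit hardware caches and speculative execution, which this Python sketch does not and cannot reproduce), a side channel works by measuring something indirect, such as timing, that correlates with a secret:

    # Toy simulation (Python) of a timing side channel, in the spirit of the
    # crossword analogy. It does NOT implement Spectre or Meltdown; it only
    # shows how a secret can be inferred from which accesses are "fast"
    # (cached) versus "slow" (uncached).

    SECRET = 42                 # the value the attacker wants to learn
    cache = set()

    def victim_speculative_access():
        # The victim's speculative work leaves a footprint keyed by the secret.
        cache.add(SECRET)

    def timed_access(index):
        # Simulated latency: fast if the index is cached, slow otherwise.
        return 1 if index in cache else 100

    victim_speculative_access()

    # The attacker times accesses to every possible index and picks the fastest.
    timings = {i: timed_access(i) for i in range(256)}
    recovered = min(timings, key=timings.get)
    print("recovered secret:", recovered)   # prints 42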

    How do your algorithms deal with traditional bugs and these new unintended weaknesses?

    Let’s start with the traditional bugs. We developed a technique called Symbolic Quick Error Detection — or Symbolic QED. Essentially, we use new algorithms to examine chip designs for potential logic flaws or bugs. We recently tested our algorithms on 16 processors that were already being used to help control critical automotive systems like braking and steering. Before these chips went into cars, the designers had already spent five years debugging their own processors using state-of-the-art techniques and fixing all the bugs they found. After using Symbolic QED for one month, we found every bug they’d found in 60 months — and then we found some bugs that were still in the chips. This was a validation of our approach. We think that by using Symbolic QED before a chip goes into manufacturing we’ll be able to find and fix more logic flaws in less time.
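
    As a very loose illustration of the error-detection idea behind QED-style checks (a hedged sketch, not the actual Symbolic QED tool, which analyzes hardware designs with formal bounded model checking), one can duplicate a computation and flag the first input on which the original and the duplicate disagree:

    # Hedged sketch (Python) of the duplicate-and-compare idea behind
    # QED-style checking. The "unit under test" and its injected flaw are
    # hypothetical; Symbolic QED itself works on hardware designs.

    def unit_under_test(a, b):
        if (a, b) == (7, 9):    # injected logic flaw for illustration
            return a + b + 1
        return a + b

    def duplicated_reference(a, b):
        return a + b            # independent duplicate of the same computation

    def qed_style_check(max_value=16):
        for a in range(max_value):
            for b in range(max_value):
                if unit_under_test(a, b) != duplicated_reference(a, b):
                    return ("mismatch", a, b)
        return ("no mismatch found",)

    print(qed_style_check())    # ('mismatch', 7, 9)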

    Would Symbolic QED have found vulnerabilities like Spectre and Meltdown?

    Not in its current incarnation. But we recently collaborated with a research group at the Technische Universität Kaiserslautern in Germany to create an algorithm called Unique Program Execution Checking (UPEC). Essentially, we modified Symbolic QED to anticipate the ways that hackers might exploit a chip’s legitimate features for their own ends. The German researchers then applied UPEC to a class of processors that might run a home security system or another appliance hooked up to the internet of things. UPEC detected new types of attacks that didn’t result from logic flaws, but from the potential misuse of some seemingly innocuous feature.
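
    One way to picture what UPEC-style checking looks for (again a loose, hypothetical sketch; the real UPEC analysis is a formal proof over the hardware design, not a Python test) is a non-interference check: run the same program twice with two different secret values, and flag any attacker-observable signal that differs:

    # Loose, hypothetical sketch (Python) of the non-interference intuition
    # behind UPEC-style checks: if anything an attacker can observe changes
    # when only a secret changes, the design leaks information.

    def run_design(public_input, secret):
        visible_output = public_input * 2          # depends only on public data
        observable_latency = 10 + (secret % 4)     # hypothetical secret-dependent timing
        return visible_output, observable_latency

    def leaks(public_input, secret_a, secret_b):
        return run_design(public_input, secret_a) != run_design(public_input, secret_b)

    print(leaks(5, secret_a=3, secret_b=8))        # True: timing reveals the secret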

    This is just the beginning. The processors we tested were relatively simple. Yet, as we saw, they could be perverted. Over time we will develop more sophisticated algorithms to detect and fix vulnerabilities in the most sophisticated chips, like the ones responsible for controlling navigation systems on autonomous cars. Our message is simple: As we develop more chips for more critical tasks, we’ll need automated systems to find and fix all potential vulnerabilities — traditional bugs and unintended consequences — before chips go into manufacturing. Otherwise we’ll always be playing catch-up, trying to patch chips after hackers find the vulnerabilities.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     