Tagged: Computer Science

  • richardmitnick 8:25 pm on September 16, 2019
    Tags: Computer Science

    From UC Santa Barbara: “A Quantum Leap” 

    From UC Santa Barbara

    September 16, 2019
    James Badham

    $25M grant makes UC Santa Barbara home to the nation’s first NSF-funded Quantum Foundry, a center for development of materials and devices for quantum information-based technologies.

    Professors Stephen Wilson and Ania Bleszynski Jayich will co-direct the campus’s new Quantum Foundry

    We hear a lot these days about the coming quantum revolution. Efforts to understand, develop, and characterize quantum materials — defined broadly as those displaying characteristics that can be explained only by quantum mechanics and not by classical physics — are intensifying.

    Researchers around the world are racing to understand these materials and harness their unique qualities to develop revolutionary technologies for quantum computing, communications, sensing and simulation, as well as quantum technologies not yet imaginable.

    This week, UC Santa Barbara stepped to the front of that worldwide research race by being named the site of the nation’s first Quantum Foundry.

    Funded by an initial six-year, $25-million grant from the National Science Foundation (NSF), the project, known officially as the UC Santa Barbara NSF Quantum Foundry, will involve 20 faculty members from the campus’s materials, physics, chemistry, mechanical engineering and computer science departments, plus myriad collaborating partners. The new center will be anchored within the California Nanosystems Institute (CNSI) in Elings Hall.

    California Nanosystems Institute

    The grant provides substantial funding to build equipment and develop tools necessary to the effort. It also supports a multi-front research mission comprising collaborative interdisciplinary projects within a network of university, industry, and national-laboratory partners to create, process, and characterize materials for quantum information science. The Foundry will also develop outreach and educational programs aimed at familiarizing students at all levels with quantum science, creating a new paradigm for training students in the rapidly evolving field of quantum information science and engaging with industrial partners to accelerate development of the coming quantum workforce.

    “We are extremely proud that the National Science Foundation has chosen UC Santa Barbara as home to the nation’s first NSF-funded Quantum Foundry,” said Chancellor Henry T. Yang. “The award is a testament to the strength of our University’s interdisciplinary science, particularly in materials, physics and chemistry, which lie at the core of quantum endeavors. It also recognizes our proven track record of working closely with industry to bring technologies to practical application, our state-of-the-art facilities and our educational and outreach programs that are mutually complementary with our research.

    “Under the direction of physics professor Ania Bleszynski Jayich and materials professor Stephen Wilson, the Foundry will provide a collaborative environment for researchers to continue exploring quantum phenomena, designing quantum materials and building instruments and computers based on the basic principles of quantum mechanics,” Yang added.

    Said Joseph Incandela, the campus’s vice chancellor for research, “UC Santa Barbara is a natural choice for the NSF quantum materials Foundry. We have outstanding faculty, researchers, and facilities, and a great tradition of multidisciplinary collaboration. Together with our excellent students and close industry partnerships, they have created a dynamic environment where research gets translated into important technologies.”

    “Being selected to build and host the nation’s first Quantum Foundry is tremendously exciting and extremely important,” said Rod Alferness, dean of the College of Engineering. “It recognizes the vision and the decades of work that have made UC Santa Barbara a truly world-leading institution worthy of assuming a leadership role in a mission as important as advancing quantum science and the transformative technologies it promises to enable.”

    “Advances in quantum science require a highly integrated interdisciplinary approach, because there are many hard challenges that need to be solved on many fronts,” said Bleszynski Jayich. “One of the big ideas behind the Foundry is to take these early theoretical ideas that are just beginning to be experimentally viable and use quantum mechanics to produce technologies that can outperform classical technologies.”

    Doing so, however, will require new materials.

    “Quantum technologies are fundamentally materials-limited, and there needs to be some sort of leap or evolution of the types of materials we can harness,” noted Wilson. “The Foundry is where we will try to identify and create those materials.”

    Research Areas and Infrastructure

    Quantum Foundry research will be pursued in three main areas, or “thrusts”:

    • Natively Entangled Materials, which relates to identifying and characterizing materials that intrinsically host anyon excitations and long-range entangled states with topological, or structural, protection against decoherence. These include new intrinsic topological superconductors and quantum spin liquids, as well as materials that enable topological quantum computing.

    • Interfaced Topological States, in which researchers will seek to create and control protected quantum states in hybrid materials.

    • Coherent Quantum Interfaces, where the focus will be on engineering materials having localized quantum states that can be interfaced with various other quantum degrees of freedom (e.g. photons or phonons) for distributing quantum information while retaining robust coherence.

    Developing these new materials and assessing their potential for hosting the needed coherent quantum states requires specialized equipment, much of which does not yet exist. A significant portion of the NSF grant is designated to develop such infrastructure, both to purchase required tools and equipment and to fabricate new tools necessary to grow and characterize the quantum states in the new materials, Wilson said.

    UC Santa Barbara’s deep well of shared materials growth and characterization infrastructure was also a factor in securing the grant. The Foundry will leverage existing facilities, such as the large suite of instrumentation shared via the Materials Research Lab and the California Nanosystems Institute, multiple molecular beam epitaxy (MBE) growth chambers (the university has the largest number of MBE apparatuses in academia), unique optical facilities such as the Terahertz Facility, state-of-the-art clean rooms, and others among the more than 300 shared instruments on campus.

    Data Science

    NSF is keenly interested in both generating and sharing data from materials experiments. “We are going to capture Foundry data and harness it to facilitate discovery,” said Wilson. “The idea is to curate and share data to accelerate discovery at this new frontier of quantum information science.”

    Industrial Partners

    Industry collaborations are an important part of the Foundry project. UC Santa Barbara’s well-established history of industrial collaboration — it leads all U.S. universities in industrial research dollars per capita — and the application focus that allows it to transition ideas into materials and materials into technologies were important in receiving the Foundry grant.

    Another value of industrial collaboration, Wilson explained, is that often, faculty might be looking at something interesting without being able to visualize how it might be useful in a scaled-up commercial application. “If you have an array of directions you could go, it is essential to have partners to help you visualize those having near-term potential,” he said.

    “This is a unique case where industry is highly interested while we are still at the basic-science level,” said Bleszynski Jayich. “There’s a huge industry partnership component to this.”

    Among the 10 inaugural industrial partners are Microsoft, Google, IBM, Hewlett Packard Enterprise, HRL, Northrop Grumman, Bruker, SomaLogic, NVision, and Anstrom Science. Microsoft and Google already have substantial campus presences; Microsoft’s Station Q quantum lab is here, and UC Santa Barbara professor and Google chief scientist John Martinis and a team of his Ph.D. student researchers are working with Google at its Santa Barbara office, adjacent to campus, to develop Google’s quantum computer.

    Undergraduate Education

    In addition, with approximately 700 students, UC Santa Barbara’s undergraduate physics program is the largest in the U.S. “Many of these students, as well as many undergraduate engineering and chemistry students, are hungry for an education in quantum science, because it’s a fascinating subject that defies our classical intuition, and on top of that, it offers career opportunities. It can’t get much better than that,” Bleszynski Jayich said.

    Graduate Education Program

    Another major goal of the Foundry project is to integrate quantum science into education and to develop the quantum workforce. The traditional approach to quantum education at the university level has been for students to take physics classes, which are focused on the foundational theory of quantum mechanics.

    “But there is an emerging interdisciplinary component of quantum information that people are not being exposed to in that approach,” Wilson explained. “Having input from many overlapping disciplines in both hard science and engineering is required, as are experimental touchstones for trying to understand these phenomena. Student involvement in industry internships and collaborative research with partner companies is important in addressing that.”

    “We want to introduce a more practical quantum education,” Bleszynski Jayich added. “Normally you learn quantum mechanics by learning about hydrogen atoms and harmonic oscillators, and it’s all theoretical. That training is still absolutely critical, but now we want to supplement it, leveraging our abilities gained in the past 20 to 30 years to control a quantum system on the single-atom, single-quantum-system level. Students will take lab classes where they can manipulate quantum systems and observe the highly counterintuitive phenomena that don’t make sense in our classical world. And, importantly, they will learn various cutting-edge techniques for maintaining quantum coherence.

    “That’s particularly important,” she continued, “because quantum technologies rely on the success of the beautiful, elegant theory of quantum mechanics, but in practice we need unprecedented control over our experimental systems in order to observe and utilize their delicate quantum behavior.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    The University of California, Santa Barbara (commonly referred to as UC Santa Barbara or UCSB) is a public research university and one of the 10 general campuses of the University of California system. Founded in 1891 as an independent teachers’ college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system. The university is a comprehensive doctoral university and is organized into five colleges offering 87 undergraduate degrees and 55 graduate degrees. In 2012, UCSB was ranked 41st among “National Universities” and 10th among public universities by U.S. News & World Report. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics.

     
  • richardmitnick 11:32 am on September 11, 2019
    Tags: Computer Science, Expert in computer science education Mehran Sahami, Interviewer Russ Altman

    From Stanford University Engineering: “Mehran Sahami: The evolution of computer science education”

    From Stanford University Engineering

    August 23, 2019

    Once, the core American curriculum meant reading, writing and arithmetic, but Stanford professor Mehran Sahami says we might soon have to add a fourth skill to that list: coding.

    Sahami thinks deeply about such matters. He’s the leading force behind recent changes in Stanford’s computer science curriculum. He notes that it may not be surprising that more students are choosing to major in computer science than ever before, but what might turn heads is the changing face and intellectual landscape of the field. With concerted effort, more women and minorities, and even students from traditional liberal arts and sciences backgrounds, are venturing into computer science.

    Sahami says coding has become more than just video games, social media and smartphone apps. The field is an intellectual endeavor taking on the biggest issues of our day — particularly in its influence on data-driven decision making, personal privacy, artificial intelligence and autonomous systems, and the role of large platforms like Google, Facebook and Apple in free speech issues.

    Sahami says that computers and algorithms are now part of the fabric of everyday life, and that how the future plays out will depend on realizing more cultural and gender diversity in computer science classrooms and encouraging multidisciplinary thinking throughout computer science.

    Join host Russ Altman and expert in computer science education Mehran Sahami for an inspiring journey through the computer science curriculum of tomorrow. You can listen to The Future of Everything on Sirius XM Insight Channel 121, iTunes, Google Play, SoundCloud, Spotify, Stitcher or via Stanford Engineering Magazine.

    CS is not just about sitting in a cube programming, it’s about solving social problems through computation. | Illustration by Kevin Craft

    Russ Altman: Today on The Future of Everything, the future of computer science education. Let’s think about it. Computer science is the toast of the town. Students are flocking to learn how to program computers and what the underpinnings of computational systems are, how they work, how they should be designed, implemented, evaluated. At Stanford University and many other places, computer science has become the number one major, in some cases eclipsing really popular traditional majors, like economics, psychology, biology.

    The job market seems great for these students who have skills that are needed in almost every industry. It’s not just about creating software for PCs or iPhones, but increasingly it’s about building systems that interact with the physical world.

    Think about self-driving cars, robotic assistants and other things like that. AI, artificially intelligent systems, have also become powerful with voice recognition, like Siri and Alexa, the ability to translate, the ability to recognize faces, even in our cell phones, and the kind of big data mining that is transforming the financial community, real estate, entertainment, sports, news, even healthcare. These systems promise efficiencies, but do add some worry about the loss of jobs and displaced workers.

    Professor Mehran Sahami is a professor of computer science at Stanford and an expert in computer science education. Before coming to Stanford, he worked at Google, and he has led national committees that have created guidelines for computer science programs internationally.

    Mehran, there is a boom in interest in computer science as an area of study. Of course, students are always encouraged to follow their passion when they choose their majors. But should we worry about whether there are enough English majors, and history majors, and all these traditional majors that I mentioned before? Is this a blip or is this a change in the ecosystem that we’re expecting now for a long time to come?

    Mehran Sahami: Sure, that’s a great question. I do think it’s a sea change. I think, when looking forward, there’s a real difference in terms of the decisions students are making in terms of where they wanna go, the kinds of fields they wanna pursue. I do think we would lament it if we lost all the English majors and lost all the economics majors, because what we’re actually seeing now is that more problems require multi-disciplinary expertise to really solve, and so we need people across the board.

    But I think what students have seen, especially in the last 10 years is that computer science is not just about sitting in a cube programming 80 hours a week. It’s about solving social problems through computation, and so that’s really brought the ability of students from computing and the ability of students in other areas to come together and solve bigger problems.

    Russ Altman: Are you seeing an increase in interest in kind of joint majors where people have two feet in two different camps, say English and computer science, or the arts and computer science? Is that a thing?

    Mehran Sahami: That is a thing. We actually even had a specialized program called CS+X, where X was a choice of many different humanities majors at Stanford. Rather than going through that program, though, we saw that students were just choosing anyway to do double majors with computer science, or to minor in computer science, and vice versa. They’ll major in something else and minor in computer science. So many students are already making this choice to combine fields.

    Russ Altman: We kinda jumped right into it, but let’s step back. Tell me about a computer science education. You’re an expert at this. What does a computer science education look like? I think everybody would say, “Well, they learn how to program computers.” But I suspect, in fact, I know that it’s more than that. Can you give us a kind of thumbnail sketch of what a computer science training should look like?

    Mehran Sahami: Sure. I think most people, like you said, think of computer science as just programming. And what a lot of students see when they get here is that there is far more richness to it as an intellectual endeavor. There is mathematics and logic behind it. There are the notions of building larger systems, the kinds of algorithms you can build, how efficient they are. Artificial intelligence, which you alluded to, has seen a huge boom in the last few years, because it’s allowed us to solve problems in ways that are potentially even better than could’ve been hand crafted by human beings.

    When you see this whole intersection of stuff coming together in computing, how humans interact with machines, trying to solve problems in biology, for example, as you’ve done for many years, the larger scale impact of that field becomes clear.

    What students do isn’t just about programming, but it’s about understanding how programming is a tool to solve a lot of problems, and there’s lots of science behind how those tools are built and how they’re used.
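
    To make the “how efficient they are” point concrete, here is a minimal, hypothetical Python sketch (not from the interview) contrasting two ways of finding a value in a sorted list: a linear scan that may look at every element, and a binary search that halves the search range at each step.

    # Illustrative only: two searches over the same sorted data.
    from typing import Sequence

    def linear_search(items: Sequence[int], target: int) -> int:
        # May examine up to len(items) elements before finding the target.
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    def binary_search(items: Sequence[int], target: int) -> int:
        # Examines only about log2(len(items)) elements, since the range halves each step.
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = list(range(0, 1_000_000, 2))              # 500,000 even numbers
    print(linear_search(data, 999_998))              # same answer, ~500,000 comparisons
    print(binary_search(data, 999_998))              # same answer, ~20 comparisons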

    Russ Altman: Great. That’s actually very exciting. It means that we’re giving them a tool set that’s gonna last them for their whole life, even as the problem of the day changes. As we think about the future, how are we doing in terms of diversity in computer science? And I mean diversity along all axes, sex and gender, underrepresented minorities, different socioeconomic groups. Are they all feeling welcome to this big tent, or do we still have a recruitment issue?

    Mehran Sahami: Well, we still have a recruitment issue, but the good news is that it’s getting better. For many years, and it’s still true now, there’s a large gender imbalance in computing. It’s actually gotten quite a bit better. At Stanford now, for example, more than a third of the undergraduate majors are women, which is more than double the percentage that we had 10 years ago. The field in general is seeing more diversity. And along the lines of underrepresented minorities, different socioeconomic classes, we’re also seeing some movement there. Again, those numbers are nowhere near where we’d like them to be, which would be representative of the population as a whole. We still have a lot of work to do. But directionally, things are moving the right way. And I think part of that is also that earlier in the pipeline, in K through 12 education, there are more opportunities for computing.

    Russ Altman: Actually, I’m glad you mentioned that because I did wanna ask you.

    At the K through 12 level, if you’re a parent, in my head, this is all, and I don’t know if this is even fair, but in my head this is all confused with the rise of video games. Because I know there’s an issue of young people using video games far beyond what I would’ve ever imagined and certainly far beyond what was available to me as a youth.

    But what is the best advice about the appropriate level of computer exposure for somebody in their K through 12 education? Should parents be introducing kids to programming early? Can they just wait and let it evolve as a natural interest? I think of it as like, is it the new reading, writing, arithmetic, and coding? Is that the new fourth area of basic competency for grammar school in K through 12? And I know you’ve thought about these issues. What’s the right answer? What’s our current best understanding?

    Mehran Sahami: Well, one of the things that’s touted a lot is a notion of what’s called computational thinking, which is a notion that would encompass some amount of programming, but also understanding just how computational techniques work and what they entail. So understanding something about data and how that might get used in a program without necessarily programming the actual program yourself.

    And that brings up lots of social issues as well, like privacy. How do you protect your data? How do you think about the passwords that you have?

    For a lot of these things, generally, it’s not too early to have kids learn something about it. As a parent myself, I worry about how much time they spend on screens. And the current thinking there is that it’s not just about how much time is actually spent in front of a screen, but what that time is spent doing. And there are even lots of activities that don’t involve spending time in front of the screen.

    So, sometimes people think about what’s the notion of having a kindergartner or a first grader program? Can we even do that?

    We say, well, the notion of programming there isn’t about sitting in front of a computer and typing. It’s about making a peanut butter and jelly sandwich. So, what does that mean? It means it’s a process and you think about, well, you need to get out the bread, you need to get out the peanut butter, you open the jar. There’s these set of steps you have to go through. And if you don’t follow the steps properly, you don’t get the result you want. And that gets kids to think about what an algorithmic process is, without actually touching a computer.
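
    To make that idea concrete, here is a minimal Python sketch of the sandwich-making “program” described above. The step names are invented for illustration; the point is simply that an algorithm is an explicit, ordered sequence of steps, and skipping or reordering them breaks the result.

    # A toy illustration of the peanut-butter-and-jelly "algorithm": an explicit,
    # ordered list of steps. Each step depends on the ones before it.
    def make_pbj_sandwich():
        steps = [
            "get out the bread",
            "get out the peanut butter",
            "open the peanut butter jar",
            "spread peanut butter on one slice",
            "get out the jelly",
            "open the jelly jar",
            "spread jelly on the other slice",
            "press the two slices together",
        ]
        for number, step in enumerate(steps, start=1):
            # Printing the numbered steps makes the ordering visible.
            print(f"Step {number}: {step}")

    make_pbj_sandwich()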

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and, just in the last few moments, computer science for young people. On this issue of diversity and the pipeline, is there evidence that we need new ways? Are there emerging ways of teaching computer science that are perhaps more palatable to these groups that have not traditionally been involved in computer science? I’m wondering if curricula are evolving to attract, for example, young women who might not have traditionally been attracted, or, again, underrepresented minorities. Do we see opportunities for changing the way things are taught to make it more welcoming?

    Mehran Sahami: Sure, and it has to do both with the content and the culture. So, from the standpoint of content, I think one of the things that’s happened in the last few years that’s helped with increasing the diversity numbers is more people understanding the social impact of computing. And there are studies that have shown, for example, that in terms of the choice of activities that women, for example, might wanna do, they tend to be drawn more toward majors where they can see the social impact of the work.

    Then, in terms of thinking about the examples you use in computer science classes, the kinds of problems that you could actually solve, you can integrate in more of the social problem solving aspect. How can we think about using computers, for example, to try to understand or solve diseases, or to think about understanding climate change? That brings in a wider swath of people than previously came.

    The other is culture. I think there’s been, and this is well documented, a lot of sexist culture in the computing community. A bright light has been shined on that in the past few years, and slowly the culture is beginning to change. And so, when you change that culture, you make it more welcoming and you have a community of people who now feel as though they’re not on the margin but actually right in the middle of the field. It helps bring other people in who are from similar communities.

    Russ Altman: So I really find that interesting. I’m not an organizational behavior expert at all, but I’ve heard that a lot of cultural change often needs to come from the top. There needs to be leadership committed to changing the culture.

    So in this world of, in this distributed world of education, who is the one who’s charged with changing culture? Is it the faculty? Is it the student leadership? Who does that fall to, in terms of changing the culture, and what is their to-do list?

    Mehran Sahami: Yeah, that’s a great question. It actually requires many levels of engagement. Certainly in the university, faculty need to be supportive. They need to think about the curriculum that they define, the examples that they use, the language they use, how welcoming they are to students. One of the things we’ve seen here at Stanford is that the students have been very active in terms of student organizations, creating social events to bring people together, and helping not only create the community, but show others that the community exists.

    But if you think in the larger scope, in industry, leaders of companies need to show that those companies are welcoming, that they’re taking steps toward diversity, that they’re really listening to their employees. That’s the place where, in fact, it gets changed on a larger cultural level.

    Russ Altman: So, that really does make sense and it’s great to hear that we’re making some progress in terms of the numbers. The 30%, I remember when I was in training, it was less than 10%, and it was a big problem. That was ancient history.

    I know that one of the new things that you’ve been involved with, and I definitely want to have some time to talk about it, is an ethics class, ethics for computer science students. I don’t recall if it’s required or just recommended.

    Tell me about why. You’re very famous, I didn’t mention this in my introduction, but you also are well-known to be one of the most distinguished teachers at Stanford, with a great record of getting people fired up about the material in your classes. Why did you turn your attention to ethics and what are the special challenges in teaching ethics to this group? Do they get it? Are they excited about it, or are they like, “Why are we doing this?”

    Mehran Sahami: Right, well, first of all, you’re too kind in your assessment.

    But I would say, for many years, there have been classes on ethics and computing at Stanford. What we’ve done in this most recent revision is with collaborators: Rob Reich and Jeremy Weinstein, who are both in the political science department. Rob is a moral philosopher. Jeremy is a public policy expert.

    And then I, as the computer scientist, came together with them to say: what we wanna do is a modernized revision of this class, where the problems we look at are the problems that students are gonna be grappling with in sort of the zero to 10-year time span from the time they graduate, and it brings together these different aspects. The computing is one of them, but we also need to understand things philosophically.

    What are the societal outcomes we want? What are the moral codes we wanna live by? How do we think about value trade-offs?

    And then from the public policy standpoint, what can we do not only as engineers but as potentially leaders of companies, as academics, as citizens, in order to help see these kinds of changes through so we actually get the outcomes we like?

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Mehran Sahami, and we just turned our attention to a new course in ethics, looking at the big-picture ethics. So this is not you shouldn’t cheat, you shouldn’t steal stuff, you should make sure that your authors are all recognized. These are the big-ticket items in the ethics of computer science. Can you give me a feeling for some of the issues that are hot-button issues? I love what you said about the zero to 10 years, which means literally these are issues that could be big today or tomorrow. How did you decide which issues need to come to their attention as young trainees in computer science?

    Mehran Sahami: Sure. I mean, first, we sat down at the white board and wrote out a bunch of topics that we thought were relevant, and quickly realized the full set was far larger than we could cover in one class. So we focused on four that we could really do deep dives into. The first one was algorithmic decision making: computers and algorithms are used more and more to make meaningful decisions that impact our lives, for example, whether or not we get loans or mortgages, or, if we have some trouble with the criminal justice system, whether or not we get bail.

    Russ Altman: And these may be without a human in the loop of that decision making?

    Mehran Sahami: For some of them. Some of them have a human in the loop that’s required. For example, in the financial industry, there are some decisions that a human has to take responsibility for, but there are some micro transactions, for example, when you try to run your credit card, where the decision might get made to deny it without a human being involved. That was the first area.
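
    As a purely illustrative sketch, not any company’s actual system, a fully automated micro-transaction check of the kind described above might amount to a few hard-coded rules; every field name and threshold in this Python example is invented for illustration.

    # Hypothetical, simplified rule for approving or denying a card charge
    # with no human in the loop. Fields and thresholds are made up.
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float          # purchase amount in dollars
        country: str           # country where the charge originated
        home_country: str      # cardholder's home country
        recent_declines: int   # declines on this card in the last 24 hours

    def auto_approve(tx: Transaction) -> bool:
        """Return True to approve the charge automatically, False to deny it."""
        if tx.recent_declines >= 3:
            return False                      # too many recent failures
        if tx.country != tx.home_country and tx.amount > 500:
            return False                      # large foreign charge treated as risky
        return tx.amount <= 2000              # small charges pass without review

    print(auto_approve(Transaction(49.99, "US", "US", 0)))   # True
    print(auto_approve(Transaction(900.0, "FR", "US", 0)))   # False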

    Then we looked at issues around data privacy: what kinds of policies different companies have, the different views, say, in the United States versus Europe around privacy, so we could also look at different cultural norms.

    The third unit was around AI and autonomous systems. So, our deep dive was mainly on autonomous vehicles, something that students are looking at now and society in general is gonna have to deal with, both from the technological standpoint, but more so from the issues around economics: job displacement, what is automation gonna mean in the long term, how do we turn our attention to thinking about what sorts of automation we wanna build in terms of weighing the positives and negatives in society, safety versus job displacement.

    And then the last unit was on the power of large platforms, say the Facebooks, and Apples, and Googles of the world, where now, for example, the questions around who has free speech are governed more by the large platforms, who can determine who’s on them or not, than by governments.

    We’re seeing things that previously happened in the public sphere moving to the private sphere. And how do we think about that? Because there isn’t the same kind of electoral recourse if you don’t like a policy that Facebook wants to implement for who can say what on its platform.

    Russ Altman: So that sounds really exciting. So tell me who signed up for this class? Was it your folks from computer science? Was it a bunch of other people saying, “Wow, I might be able to contribute to this conversation”? I mean it sounds like an incredibly open set of issues that a lot of people could contribute to, but who actually signed up?

    Mehran Sahami: Yeah, the vast majority of students who signed up are computer science majors or related majors, electrical engineering, something like that.

    Russ Altman: Was it required? I forgot to ask.

    Mehran Sahami: It satisfies certain requirements, but it’s not the only class that does.

    Russ Altman: Gotcha.

    Mehran Sahami: So it’s a popular choice though for a lot of students. But we did get students from many different disciplines. Certainly we got some from political science, some students from the law school. We got students from other areas like anthropology from the humanities, and so there was lots of different perspectives that were brought into class.

    Russ Altman: Without putting too fine a point on it, did you find that the computer science students were well-prepared to have these conversations that were not about technical stuff? Were you pleased to see their level of preparation, or did it highlight for you the need to do more of this kind of training for everybody?

    Mehran Sahami: Yeah, there’s a spectrum. There are some students who were hugely engaged, who actually made different decisions about, for example, what they wanted to pursue for their career as a result of some of the things they learned in that class, who were very deeply engaged on the social issues and thinking about what can I do as an engineer as well as a citizen to try to address these problems. And then we had some students who were seeing some of these things, the social issues, for the first time. And so, we’re trying to make it as broad as we can to have something of interest to everyone who’s in there.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. More with Professor Mehran Sahami about computer science, ethics, and the future of education, next on SiriusXM Insight 121.

    Welcome back to The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and ethics training and ethics education.

    So, Mehran, at the end of the last segment, you talked about these four kind of pillars of your course, and they were great. And the fourth one, which I wanna dig into a little bit more, was the power of platforms, and you said something very intriguing about how in some ways the First Amendment is now adjudicated not by the courts or by the government, but by these huge platforms that, if they turn me off, my voice on Twitter or on Facebook is gone.

    How do you set that up for the students and how do you lead the discussion about the responsibilities and obligations of computer scientists and others in society in the light of this new phenomenon?

    Mehran Sahami: Sure. So, one of the things we do in the class is we have for each one of our areas a case study that we’ve actually gotten professional writers to help write for us. And there in the power of platforms, one of the things we look at are cases where, for example, people have been banned from particular platforms, like say Alex Jones on Twitter. And so, part of that is what are the guidelines that these platforms have? How did they get applied? How can they do it in a way that scales? And so, you find all kinds of interesting phenomena there.

    Some of them are cases where a platform will keep information on the platform even if it may not be information that they deem entirely trustworthy, because they have to make that determination, and that becomes a very strange place, with the technology companies now making determinations about what is correct information.

    There are also a lot of automated techniques that go into it. And so the engineers need to build something that they think can actually detect hateful speech or things that may be beyond the boundary of the acceptable guidelines for the company.

    That brings up the deeper question of what the acceptable guidelines are. They don’t have to be in line necessarily with government practices. And sometimes there are also individual human reviewers who will look at content and see whether or not —

    Russ Altman: Yeah, these have been in the news recently, because some of them have very stressful jobs because of the content that they’re looking at.

    Mehran Sahami: Exactly. Imagine spending eight hours a day looking at videos of people being beheaded, all kinds of horrible things that people are posting out there. In some cases, it’s actually been reported that some of these workers have things like post-traumatic stress disorder from doing this job all day.

    You get into this tension between what is the platform’s responsibility, how can citizens potentially affect what the platforms are doing, and in many cases, because of the voting structures of the shares of the platform, there’s not actually a lot that individuals can do. But what do we think through meaningfully: should there be legislation, should there be regulation, at what point does too much move out of the public sphere into private decision making?

    Russ Altman: So yeah, I’m struck by this problem, because there are so many facets to it. So one of them is that Facebook, all of these platforms that you mentioned, they’re international platforms. Yes, there are some countries that might ban them. But even though they’re, in many cases, sitting in the US, and in fact in the case of Facebook, it’s a couple of miles from where you and I are sitting right now, they have an international audience where the laws are different. We might talk about data privacy later, but there are new laws in Europe that are quite different from the laws in the US.

    How do you train the students to think about international-level issues that have to be adjudicated sometimes at a national or even sub-national level?

    Mehran Sahami: Right. At one level, it’s understanding what the cultural reasons are for why there are some of these different norms.

    And then secondly, it’s understanding that the platforms do need to abide by particular policies in particular countries, and those policies may be different.

    For example, what Google can show in search results in Germany, where they have restrictions on showing information related to Nazis, is different than in the United States. And certainly, as we saw with Google eventually pulling out of China, the kinds of policies that they had to abide by if they wanted to continue to provide search there were outside of what they felt comfortable doing.

    And that becomes a decision the company certainly has to make, but it also becomes a decision for individuals who, say, wanna work at those companies or support those companies: how do they make their voice heard with respect to the companies choosing to make particular policy decisions as to what they do in different locales?

    Russ Altman: It’s interesting. What I hear you saying is that even though Google is a global platform, it has different flavors in different countries. One of the choices is to pull out of a country because the rules are just not compatible with something that they wanna do. As an engineer for these companies, you might be working on a product that will never be deployed in country X, but will be used a lot in country Y, and you have to think about the implications and your comfort level with building these technologies that may or may not wind up being used in different settings.

    I can see that. How much do you empower the students to actually voice their opinions to the people who are signing their paychecks? It’s tricky; you don’t wanna train a bunch of folks who wind up coming back and saying, “Oh, by the way, I was fired because of all the great things that I learned in this class.”

    Mehran Sahami: Yeah, but at the same time you want students to have their own personal moral responsibility. You want them to make decisions that they feel are in line with their own personal ethics.

    But at the same time, there are a lot of decisions that get made at the engineering level that have far-reaching consequences. So if you’re the engineer who’s working on how to filter results out of, say, the search engine in China or in Germany, there are decisions you’re making deep down in the code, in terms of what algorithms you might be using, what techniques, what kind of data, that are gonna have a real impact on the information people see. And that’s at a level that is affecting individuals, but is more granular than the decisions that are being made by the executives of the company.

    And so, the engineers themselves need to be aware of that when they’re building these systems.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami now about these great scenarios that you’re using in your teaching. So, if you’ll allow me, I would just love to move to another one of your areas and find out what kind of cases you’re using, for example, in data privacy.

    Mehran Sahami: Sure. In data privacy, one of the big things we’re looking at is facial recognition, which is getting… These days you see articles pretty much every day on the use of the technology, which localities have it, which don’t.

    San Francisco, for example, recently banned the use of facial recognition by the public sector at least, whereas many airlines are now moving to using facial recognition to check you in on a plane. As you can imagine, that brings up these tensions between privacy and security. At one level, why would we use facial recognition to get people on the plane? It’s only estimated to save a couple minutes of time when you’re boarding a 747.

    Russ Altman: It’s the overhead luggage that they should work on, not the facial recognition.

    Mehran Sahami: Exactly. But the real reason is the folks who are responsible for airline security wanna be able to detect if there’s someone getting on the plane who shouldn’t be getting on the plane.

    The flip side is your personal privacy. To what extent do you take facial recognition to an extreme where everyone can be tracked?

    London, for example, has half a million closed-circuit TV cameras around the city. You combine that with facial recognition, and you can get a pretty good map of what most of the people in the city are doing at a given time, at least those who are outdoors.

    How do we trade off the privacy implication versus security? Different people will make that decision differently. And part of the public policy question, which is why we look at the multidisciplinary facet of this class, is how as a society do we decide what we wanna do?

    Russ Altman: So you must’ve had… I’m presuming you had small group discussion sessions, because I know you had a big registration in this class, a couple of hundred people perhaps, but you can’t have a meaningful discussion with 200 people, I would think. So, did you actually break them down and have them have smaller discussion groups?

    Mehran Sahami: Yeah. There were weekly small discussions. And then one of the things we would do, which actually worked better than you would think, is we took 250 students. We had this large room that had a bunch of tables that seat about eight or 10. And so we could get them in there, seated around these tables, after they read the case study, and sort of give them guiding questions for discussion. So then they could have the discussion, and we could have call-outs, so they could share their findings or their insights across the whole class.

    Russ Altman: So when you have discussion sessions, who leads them? Are these computer scientists or a… You said there were a lot of political scientists involved in the class.

    It’s not clear to me at all who I would want leading that discussion, because what you need to know and kind of the frameworks of ethics are quite diverse. So, who leads those discussions?

    Mehran Sahami: Yeah, it’s a great question. We had a large number of TAs that span a bunch of areas. So we have some computer scientists. We had some law students. We had students with backgrounds in philosophy and anthropology, a bunch of different areas. And in some cases, the sections were co-taught by a computer scientist, say, and someone from the law school. And so you would get these different perspectives to really bring out the richness in the conversation.

    Russ Altman: And I would guess that because of the international nature of the students, people are bringing very different perspectives on issues of privacy, state power, individual human rights. I would guess there’s a huge diversity in the student body on those issues.

    Mehran Sahami: Absolutely. And it’s also a nice way for students to be able to connect, because when you hear about the different kinds of issues in different countries, it’s easy to just think about them in an abstract sense and not understand why. But when you actually have someone sitting across the table saying, “I grew up here and I can tell you about why we believe the particular things we do,” it makes it much more meaningful in terms of that student engagement.

    Russ Altman: Well, there you have it, the next generation of computer scientists, trained in the ethics of their technologies.

    Thank you for listening to The Future of Everything. I’m Russ Altman. If you missed any of this episode, listen any time on-demand with the SiriusXM app.

    The job market seems great for these students who have skills that are needed in almost every industry. It’s not just about creating software for PCs or iPhones, but increasingly it’s about building systems that interact with the physical world.

    Think about self-driving cars, robotic assistants and other things like that. AI, artificial intelligent systems, have also become powerful with voice recognition, like Siri and Alexa, the ability to translate, the ability to recognize faces, even in our cell phones, and the kind of big data mining that is transforming the financial community, real estate, entertainment, sports, news, even healthcare. These systems promise efficiencies, but do add some worry about the loss of jobs and displaced workers.

    Professor Mehran Sahami is a professor of computer science at Stanford and an expert at computer science education. Before coming to Stanford, he worked at Google and he has led national committees that have created guidelines for computer science programs internationally.

    Mehran, there is a boom in interest in computer science as an area of study. Of course, students are always encouraged to follow their passion when they choose their majors. But should we worry that there’s enough English majors, and history majors, and all these traditional majors that I mentioned before? Is this a blip or is this a change in the ecosystem that we’re expecting now for a long time to come?

    Mehran Sahami: Sure, that’s a great question. I do think it’s a sea change. I think it makes, when looking forward, there’s a really difference in terms of the decisions students are making in terms of where they wanna go, the kinds of fields they wanna pursue. I do think we would lament the fact if we lost all the English majors and lost all the economics majors, because what we’re actually seeing now is more problems require multi-disciplinary expertise to really solve, and so we need people across the board.

    But I think what students have seen, especially in the last 10 years is that computer science is not just about sitting in a cube programming 80 hours a week. It’s about solving social problems through computation, and so that’s really brought the ability of students from computing and the ability of students in other areas to come together and solve bigger problems.

    Russ Altman: Are you seeing an increase in interest in kind of joint majors where people have two feet in two different camps, say English and computer science, or the arts and computer science? Is that a thing?

    Mehran Sahami: That is a thing. We actually even had specialized program called CS+X where X was a choice of many different humanities majors at Stanford. We actually, rather than having that program saw that students were just choosing anyway to do double majors with computer science, lots of students who minors with computer science, and vice versa. They’ll major in something else and minor in computer science. So many students are already making this choice to combine fields.

    Russ Altman: We kinda jumped right into it, but let’s step back. Tell me what a computer science. You’re an expert at this. What is a computer science education look like? I think everybody would say, “Well, they learn how to program computers.” But I suspect, in fact, I know that it’s more than that. Can you give us a kind of thumbnail sketch of what a computer science training should look like.

    Mehran Sahami: Sure. I think most people, like you said, think of computer science as just programming. And what a lot of students see when they get here is that there is far more richness to it as an intellectual endeavor. There is mathematics and logic behind it. There is the notions of building larger systems, the kinds of algorithms you can build, how efficient they are, artificial intelligence, which you alluded to, has seen a huge boom in the last few years, because it’s allowed us to solve problems in ways that potentially were even better than could’ve been hand crafted by human beings.

    When you see this whole intersection of stuff coming together in computing, how humans interact with machines, trying to solve problems in biology, for example, as you’ve done for many years, the larger scale impact of that field becomes clear.

    What students do isn’t just about programming, but it’s about understanding how programming is a tool to solve a lot of problems, and there’s lots of science behind how those tools are built and how they’re used.

    Russ Altman: Great. That’s actually very exciting. It means that we’re giving them a tool set that’s gonna last them for their whole life, even as the problem of the day changes. As we think about the future, how are we doing in terms of diversity in computer science? And I mean diversity along all axes, sex and gender, underrepresented minorities, different socioeconomic groups. Are they all feeling welcome to this big tent, or do we still have a recruitment issue?

    Mehran Sahami: Well, we still have a recruitment issue, but the good new sis that it’s getting better. For many years, and it’s still true now, there’s a large gender imbalance in computing. It’s actually gotten quite a bit better. At Stanford now, for example, more than a third of the undergraduate majors are women, which is more than double the percentage that we had 10 years ago. The field in general is seeing more diversity. And along the lines of underrepresented minorities, different socioeconomic classes, we’re also seeing some movement there. Again, those numbers are nowhere near where we’d like them to be, that would be representative of the population as a whole. We still have a lot of work to do. But directionally, things are moving the right way. And I think part of that is also that earlier in the pipeline in K through 12 education, there is more opportunities for computing.

    Russ Altman: Actually, I’m glad you mentioned that because I did wanna ask you.

    At the K through 12 level, if you’re a parent, in my head, this is all, and I don’t know if this is even fair, but in my head this is all confused with the rise of video games. Because I know there’s an issue of young people using video games far beyond what I would’ve ever imagined and certainly far beyond what was available to me as a youth.

    But what is the best advice about the appropriate level of computer exposure for somebody in their K through 12 education? Should parents be introducing kids to programming early? Can they just wait and let it evolve as a natural interest? I think of it as like, is it the new reading, writing, arithmetic, and coding? Is that the new fourth area of basic competency for grammar school in K through 12? And I know you’ve thought about these issues. What’s the right answer? What’s our current best understanding?

    Mehran Sahami: Well, one of the things that’s touted a lot is a notion of what’s called computational thinking, which is a notion that would encompass some amount of programming, but also understanding just how computational techniques work and what they entail. So understanding something about data and how that might get used in a program without necessarily programming the actual program yourself.

    And that brings up lots of social issues as well like privacy. How do you protect your data? How do you think about the passwords that you have.

    For a lot of these things, generally, it’s not too early to have kids learn something about it. As a parent myself, I worry about how much time they spent on screens. And the current thinking there is it’s not just about how much time is actually spent in front of a screen, but what that time is spent doing. And there’s even lots of activities that don’t involve spending time in front of the screen.

    So, sometimes people think about what’s the notion of having a kindergartner or a first grader program? Can we even do that?

    We say, well, the notion of programming there isn’t about sitting in front of a computer and typing. It’s about making a peanut butter and jelly sandwich. So, what does that mean? It means it’s a process and you think about, well, you need to get out the bread, you need to get out the peanut butter, you open the jar. There’s these set of steps you have to go through. And if you don’t follow the steps properly, you don’t get the result you want. And that gets kids to think about what an algorithmic process is, without actually touching a computer.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and, just in the last few moments, computer science for young people. On this issue of diversity and the pipeline, is there evidence that we need new ways? Are there emerging ways of teaching computer science that are perhaps more palatable to these groups that have not traditionally been involved in computer science? I’m wondering of curriculum are evolving to attract, for example, young women who might not have traditionally been attracted, or, again, underrepresented minorities. Do we see opportunities for changing the way things are taught to make it more welcoming?

    Mehran Sahami: Sure, and it has to do both with the content and the culture. So, from the standpoint of content, I think one of the things that’s happened in the last few years that’s helped with increasing the diversity numbers is more people understanding the social impact of computing. And there are studies that have shown, for example, that in terms of the choice of activities that women, for example, might wanna do, they are tended to be drawn more toward majors that they see the social impact of the work.

    There in terms of thinking about the examples you use in computer science classes, the kinds of problems that you could actually solve, integrating in more of the social problem solving aspect. How can we think about using computers, for example, to try to understand or solve diseases or to think about understanding climate change? That brings in a wider swath of people than otherwise previously came.

    The other is culture. I think there’s been, and this is well documented, a lot of sexist culture in the computing community. Bright light has been shined on that in the past few years, and slowly the culture is beginning to change. And so, when you change that culture, you make it more welcoming and you have a community of people who now feels as though they’re not on the margin but actually right in the middle of the field. It helps bring other people in who are from similar communities.

    Russ Altman: So I really find that interesting. I'm not an organizational behavior expert at all, but I've heard that cultural change often needs to come from the top. There needs to be leadership committed to changing the culture.

    So in this distributed world of education, who is charged with changing the culture? Is it the faculty? Is it the student leadership? Who does that fall to, and what is their to-do list?

    Mehran Sahami: Yeah, that's a great question. It actually requires many levels of engagement. Certainly in the university, faculty need to be supportive. They need to think about the curriculum they define, the examples they use, the language they use, and how welcoming they are to students. One of the things we've seen here at Stanford is that the students have been very active in terms of student organizations, creating social events to bring people together, and helping not only create the community, but show others that the community exists.

    But if you think in the larger scope, in industry, leaders of companies need to show that those companies are welcoming, that they're taking steps toward diversity, and that they're really listening to their employees. That's where change happens on a larger cultural level.

    Russ Altman: So, that really does make sense, and it's great to hear that we're making some progress in terms of the numbers. That 30% figure: I remember when I was in training, it was less than 10%, and it was a big problem. But that was ancient history.

    I know that one of the new things you've been involved with, and I definitely want to have some time to talk about it, is an ethics class for computer science students. I don't recall if it's required or just recommended.

    Tell me about why. I didn't mention this in my introduction, but you're also well known as one of the most distinguished teachers at Stanford, with a great record of getting people fired up about the material in your classes. Why did you turn your attention to ethics, and what are the special challenges in teaching ethics to this group? Do they get it? Are they excited about it, or are they like, "Why are we doing this?"

    Mehran Sahami: Right, well, first of all, you’re too kind in your assessment.

    But I would say, for many years, there have been classes on ethics and computing at Stanford. What we've done in this most recent revision is a collaboration with Rob Reich and Jeremy Weinstein, who are both in the political science department. Rob is a moral philosopher. Jeremy is a public policy expert.

    And then I, as the computer scientist, came together with them to say that what we wanna do is a modernized revision of this class, where the problems we look at, the problems that students are gonna be grappling with in roughly the zero-to-10-year time span from when they graduate, bring together these different aspects. The computing is one of them, but we also need to understand things philosophically.

    What are the societal outcomes we want? What are the moral codes we wanna live by? How do we think about value trade-offs?

    And then, from the public policy standpoint, what can we do, not only as engineers but potentially as leaders of companies, as academics, and as citizens, to help see these kinds of changes through so we actually get the outcomes we want?

    Russ Altman: This is The Future of Everything. I'm Russ Altman. I'm speaking with Mehran Sahami, and we just turned our attention to a new course in ethics, looking at big-picture ethics. So this is not you shouldn't cheat, you shouldn't steal stuff, you should make sure your collaborators are all recognized. These are the big-ticket items in the ethics of computer science. Can you give me a feeling for some of the hot-button issues? I love what you said about the zero-to-10-year time span, which means these are literally issues that could be big today or tomorrow. How did you decide which issues need to come to their attention as young trainees in computer science?

    Mehran Sahami: Sure. I mean, first, we sat down at the whiteboard and wrote out a bunch of topics that we thought were relevant, and quickly realized the full set was far larger than we could cover in one class. So we focused on four that we could really do deep dives into. The first one was algorithmic decision making: computers and algorithms are used more and more to make meaningful decisions that impact our lives, for example, whether or not we get loans or mortgages, or whether, if we have some trouble with the criminal justice system, we get bail or not.

    Russ Altman: And these may be without a human in the loop of that decision making?

    Mehran Sahami: For some of them. Some of them require a human in the loop. For example, in the financial industry, there are some decisions that a human has to take responsibility for, but there are some micro transactions, for example, when you try to run your credit card, where the decision to deny it might get made without a human being involved. That was the first area.
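
    As a hedged illustration of the human-in-the-loop distinction Sahami describes, the sketch below uses made-up thresholds and field names; real transaction systems are far more involved.

```python
# Toy decision rule: small transactions are decided automatically,
# larger or riskier ones are escalated to a human reviewer.
def decide_transaction(amount_usd, risk_score, auto_limit=500.0, risk_cutoff=0.8):
    """Return 'approve', 'deny', or 'escalate to human'."""
    if amount_usd <= auto_limit and risk_score < risk_cutoff:
        return "approve"            # automated, no human involved
    if amount_usd <= auto_limit and risk_score >= risk_cutoff:
        return "deny"               # automated denial of a micro transaction
    return "escalate to human"      # a person must take responsibility

print(decide_transaction(42.50, 0.10))     # approve
print(decide_transaction(42.50, 0.95))     # deny
print(decide_transaction(25000.0, 0.30))   # escalate to human
```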

    Then we looked at issues around data privacy: what kinds of policies different companies have, and the different views, say, in the United States versus Europe around privacy, so we could also look at different cultural norms.

    The third unit was around AI and autonomous systems. Our deep dive was mainly on autonomous vehicles, something that students are looking at now and that society in general is gonna have to deal with, both from the technological standpoint and, more so, from the issues around economics: job displacement, what automation is gonna mean in the long term, and how we think about what sorts of automation we wanna build, weighing the positives and negatives for society, safety versus job displacement.

    And then the last unit was on the power of large platforms, say, the Facebooks, Apples, and Googles of the world, where now, for example, the questions around who has free speech are governed more by the large platforms, which can determine who's on them or not, than by governments.

    We're seeing things that previously happened in the public sphere moving to the private sphere. And how do we think about that? Because there isn't the same kind of electoral recourse if you don't like a policy that Facebook wants to implement for who can say what on its platform.

    Russ Altman: So that sounds really exciting. So tell me who signed up for this class? Was it your folks from computer science? Was it a bunch of other people saying, “Wow, I might be able to contribute to this conversation”? I mean it sounds like an incredibly open set of issues that a lot of people could contribute to, but who actually signed up?

    Mehran Sahami: Yeah, the vast majority of students who signed up are computer science majors or related majors, electrical engineering, something like that.

    Russ Altman: Was it required? I forgot to ask.

    Mehran Sahami: It satisfies certain requirements, but it’s not the only class that does.

    Russ Altman: Gotcha.

    Mehran Sahami: So it's a popular choice, though, for a lot of students. But we did get students from many different disciplines. Certainly we got some from political science, some students from the law school. We got students from other areas like anthropology and the humanities, and so there were lots of different perspectives brought into the class.

    Russ Altman: Without putting too fine a point on it, did you find that the computer science students were well prepared to have these conversations that were not about technical stuff? Were you pleased to see their level of preparation, or did it highlight for you the need to do more of this kind of training for everybody?

    Mehran Sahami: Yeah, there's a spectrum. There were some students who were hugely engaged, who actually made different decisions about, for example, what they wanted to pursue for their career as a result of some of the things they learned in that class, and who were very deeply engaged with the social issues, thinking about what can I do as an engineer, as well as a citizen, to try to address these problems. And then we had some students who were seeing some of these social issues for the first time. So we're trying to make the class as broad as we can, to have something of interest to everyone who's in there.

    Russ Altman: This is The Future of Everything. I’m Russ Altman. More with Professor Mehran Sahami about computer science, ethics, and the future of education, next on SiriusXM Insight 121.

    Welcome back to The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami about computer science and ethics training and ethics education.

    So, Mehran, at the end of the last segment, you talked about these four kind of pillars of your course, and they were great. And the fourth one, which I wanna dig into a little bit more, was the power of platforms, and you said something very intriguing about how, in some ways, the First Amendment is now adjudicated not by the courts or by the government, but by these huge platforms; if they turn me off, my voice on Twitter or on Facebook is gone.

    How do you set that up for the students and how do you lead the discussion about the responsibilities and obligations of computer scientists and others in society in the light of this new phenomenon?

    Mehran Sahami: Sure. So, one of the things we do in the class is we have, for each one of our areas, a case study that we've actually gotten professional writers to help write for us. And there, in the power of platforms, one of the things we look at is cases where, for example, people have been banned from particular platforms, like, say, Alex Jones on Twitter. And so, part of that is, what are the guidelines that these platforms have? How do they get applied? How can they do it in a way that scales? And you find all kinds of interesting phenomena there.

    Sometimes a platform will keep information up, even if it may not be information they deem entirely trustworthy, because they have to make that determination, and that puts technology companies in a very strange position of making determinations about what is correct information.

    There are also a lot of automated techniques that go into it. And so the engineers need to build something that they think can actually detect hateful speech, or things that may be beyond the boundary of the acceptable guidelines for the company.
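
    The kind of automated filtering described above can be sketched, very roughly, as a scoring step followed by a human-review queue; the term list and thresholds here are placeholders, not any company's real policy.

```python
# Toy moderation pipeline: score text, auto-remove the worst,
# and queue borderline cases for human reviewers.
BLOCKED_TERMS = {"example_slur_1", "example_slur_2"}  # placeholder terms

def score_post(text):
    words = set(text.lower().split())
    return len(words & BLOCKED_TERMS) / max(len(words), 1)

def route_post(text, remove_at=0.5, review_at=0.1):
    s = score_post(text)
    if s >= remove_at:
        return "remove automatically"
    if s >= review_at:
        return "send to human review"
    return "leave up"

print(route_post("a perfectly ordinary post about cats"))
```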

    That brings up the deeper question of what the acceptable guidelines are. They don't necessarily have to be in line with government practices. And sometimes there are also individual human reviewers who will look at content and see whether or not —

    Russ Altman: Yeah, these have been in the news recently, because some of them have very stressful jobs because of the content that they’re looking at.

    Mehran Sahami: Exactly. Imagine spending eight hours a day looking at videos of people being beheaded, all kinds of horrible things that people are posting out there. In some cases, it’s actually been reported some of these workers have things like post-traumatic stress disorder from doing this job all day.

    You get into this tension between what is the platform's responsibility and how citizens can potentially affect what the platforms are doing, and in many cases, because of the voting structures of the platforms' shares, there's not actually a lot that individuals can do. So what do we think through meaningfully: should there be legislation, should there be regulation, and at what point does too much move out of the public sphere into private decision making?

    Russ Altman: So yeah, I'm struck by this problem, because there are so many facets to it. So one of them is that Facebook, all of these platforms that you mentioned, they're international platforms. Yes, there are some countries that might ban them. But even though they're, in many cases, sitting in the US, and in fact in the case of Facebook, it's a couple of miles from where you and I are sitting right now, they have an international audience where the laws are different. We might talk about data privacy later, but there are new laws in Europe that are quite different from the laws in the US.

    How do you train the students to think about international-level issues that have to be adjudicated sometimes at a national or even sub-national level?

    Mehran Sahami: Right. At one level, it's understanding the cultural reasons why some of these different norms exist.

    And then secondly, it's understanding that the platforms do need to abide by particular policies in particular countries, and those policies may be different.

    For example, what Google can show in search results in Germany, where they have restrictions on showing information related to Nazis, is different than in the United States. And certainly, as we saw with Google eventually pulling out of China, the kinds of policies they had to abide by if they wanted to continue to provide search there were outside of what they felt comfortable doing.

    And that becomes a decision the company certainly has to make, but it also becomes a decision for individuals who, say, wanna work at those companies or support those companies: how do they make their voices heard with respect to the companies choosing to make particular policy decisions about what they do in different locales?

    Russ Altman: It's interesting. What I hear you saying is that even though Google is a global platform, it has different flavors in different countries. One of the choices is to pull out of a country because the rules are just not compatible with something that they wanna do. As an engineer for these companies, you might be working on a product that will never be deployed in country X, but will be used a lot in country Y, and you have to think about the implications and your comfort level with building these technologies that may or may not wind up being used in different settings.

    I can see the tension: how much do you empower the students to actually voice their opinions to the people who are signing their paychecks? It's tricky; you don't wanna train a bunch of folks who wind up coming back and saying, "Oh, by the way, I was fired because of all the great things that I learned in this class."

    Mehran Sahami: Yeah, but at the same time you want students to have their own personal moral responsibility. You want them to make decisions that they feel are in line with their own personal ethics.

    But at the same time, there are a lot of decisions that get made at the engineering level that have far-reaching consequences. So if you're the engineer who's working on how to filter results out of, say, the search engine in China or in Germany, there are decisions you're making deep down in the code, in terms of what algorithms you might be using, what techniques, and what kind of data, that are gonna have real impact on the information people see. And that's at a level that affects individuals, but it's more granular than the decisions being made by the executives of the company.

    And so, the engineers themselves need to be aware of that when they’re building these systems.
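
    One way to picture the kind of engineering decision Sahami is pointing at is a per-locale policy table applied inside the results pipeline; the policies, fields, and URLs below are invented for illustration.

```python
# Toy per-country filter: the same query yields different result lists
# depending on which locale's content policy is applied.
COUNTRY_POLICIES = {
    "DE": {"banned_topics": {"nazi_memorabilia"}},   # placeholder policy
    "US": {"banned_topics": set()},
}

def filter_results(results, country):
    banned = COUNTRY_POLICIES.get(country, {"banned_topics": set()})["banned_topics"]
    return [r for r in results if r["topic"] not in banned]

results = [
    {"url": "https://example.com/a", "topic": "history"},
    {"url": "https://example.com/b", "topic": "nazi_memorabilia"},
]
print(filter_results(results, "DE"))  # one result
print(filter_results(results, "US"))  # both results
```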

    Russ Altman: This is The Future of Everything. I’m Russ Altman. I’m speaking with Professor Mehran Sahami now about these great scenarios that you’re using in your teaching. So, if you’ll allow me, I would just love to move to another one of your areas and find out what kind of cases you’re using, for example, in data privacy.

    Mehran Sahami: Sure. In data privacy, one of the big things we're looking at is facial recognition. These days you see articles pretty much every day on the use of the technology, which localities have it and which don't.

    San Francisco, for example, recently banned the use of facial recognition by the public sector, at least, whereas many airlines are now moving to using facial recognition to check you in on a plane. As you can imagine, that brings up these tensions between privacy and security. At one level, why would we use facial recognition to get people on the plane? It's only estimated to save a couple of minutes when you're boarding a 747.

    Russ Altman: It’s the overhead luggage that they should work on, not the facial recognition.

    Mehran Sahami: Exactly. But the real reason is the folks who are responsible for airline security wanna be able to detect if there’s someone getting on the plane who shouldn’t be getting on the plane.

    The flip side is your personal privacy. To what extent do you take facial recognition to an extreme where everyone can be tracked?

    London, for example, has half a million closed-circuit TV cameras around the city. You combine that with facial recognition, and you can get a pretty good map of what most of the people in the city are doing at a given time, at least those who are outdoors.

    How do we trade off the privacy implications against security? Different people will make that decision differently. And part of the public policy question, which is why we look at the multidisciplinary facets of this class, is how we as a society decide what we wanna do.

    Russ Altman: So you must've had… I'm presuming you had small group discussion sections, because I know you had a big registration in this class, a couple of hundred people perhaps, and you can't have a meaningful discussion with 200 people, I would think. So, did you actually break them down into smaller discussion groups?

    Mehran Sahami: Yeah. There were weekly small discussions. And then one of the things we would do, which actually worked better than you would think, is we took 250 students into a large room that had a bunch of tables seating about eight or 10 each. We could get them in there, seated around these tables after they read the case study, and give them guiding questions for discussion. So then they could have the discussion, and we could have call-outs, so they could share their findings or their insights across the whole class.

    Russ Altman: So when you have discussion sections, who leads them? Are these computer scientists, or… You said there were a lot of political scientists involved in the class.

    It’s not clear to me at all who I would want leading that discussion, because what you need to know and kind of the frameworks of ethics are quite diverse. So, who leads those discussions?

    Mehran Sahami: Yeah, it's a great question. We had a large number of TAs who spanned a bunch of areas. So we had some computer scientists. We had some law students. We had students with backgrounds in philosophy and anthropology, a bunch of different areas. And in some cases, the sections were co-taught by, say, a computer scientist and someone from the law school. And so you would get these different perspectives to really bring out the richness in the conversation.

    Russ Altman: And I would guess that because of the international nature of the students, people are bringing very different perspectives on issues of privacy, state power, individual human rights. I would guess there’s a huge diversity in the student body on those issues.

    Mehran Sahami: Absolutely. And it’s also a nice way for students to be able to connect, because when you hear about the different kinds of issues in different countries, it’s easy to just think about them in an abstract sense and not understand why. But when you actually have someone sitting across the table saying, “I grew up here and I can tell you about why we believe the particular things we do,” it makes it much more meaningful in terms of that student engagement.

    Russ Altman: Well, there you have it, the next generation of computer scientists trained in the ethics of their technologies.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford Engineering has been at the forefront of innovation for nearly a century, creating pivotal technologies that have transformed the worlds of information technology, communications, health care, energy, business and beyond.

    The school’s faculty, students and alumni have established thousands of companies and laid the technological and business foundations for Silicon Valley. Today, the school educates leaders who will make an impact on global problems and seeks to define what the future of engineering will look like.
    Mission

    Our mission is to seek solutions to important global problems and educate leaders who will make the world a better place by using the power of engineering principles, techniques and systems. We believe it is essential to educate engineers who possess not only deep technical excellence, but the creativity, cultural awareness and entrepreneurial skills that come from exposure to the liberal arts, business, medicine and other disciplines that are an integral part of the Stanford experience.

    Our key goals are to:

    Conduct curiosity-driven and problem-driven research that generates new knowledge and produces discoveries that provide the foundations for future engineered systems
    Deliver world-class, research-based education to students and broad-based training to leaders in academia, industry and society
    Drive technology transfer to Silicon Valley and beyond with deeply and broadly educated people and transformative ideas that will improve our society and our world.

    The Future of Engineering

    The engineering school of the future will look very different from what it looks like today. So, in 2015, we brought together a wide range of stakeholders, including mid-career faculty, students and staff, to address two fundamental questions: In what areas can the School of Engineering make significant world‐changing impact, and how should the school be configured to address the major opportunities and challenges of the future?

    One key output of the process is a set of 10 broad, aspirational questions on areas where the School of Engineering would like to have an impact in 20 years. The committee also returned with a series of recommendations that outlined actions across three key areas — research, education and culture — where the school can deploy resources and create the conditions for Stanford Engineering to have significant impact on those challenges.

    Stanford University

    Leland and Jane Stanford founded the University to "promote the public welfare by exercising an influence on behalf of humanity and civilization." Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today's complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 11:35 am on June 26, 2019 Permalink | Reply
    Tags: Clark Barrett, Computer algorithms are helping identify these threats., Computer Science, , Subhasish Mitra, The logic flaws or bugs that make chips vulnerable to attack   

    From Stanford University Engineering: “Q&A: What’s new in the effort to prevent hackers from hijacking chips?” 

    From Stanford University Engineering

    June 18, 2019
    Tom Abate
    Andrew Myers

    Designers have always had to find and fix the logic flaws, or bugs, that make chips vulnerable to attack. Now computer algorithms are helping them identify these threats.

    1
    As hackers develop new ways to attack chips, researchers aim to anticipate and forestall their malicious intrusions. | Illustration by Kevin Craft

    In their previous work, Stanford engineering professors Clark Barrett and Subhasish Mitra developed computer algorithms to automate the process of finding bugs in chips and fixing these flaws before the chips are manufactured.

    Now, the researchers are adapting their algorithms to thwart a new type of peril — the possibility that hackers could misuse a chip’s features to carry out some nefarious end. In a recent discussion with Stanford Engineering, Barrett and Mitra explain the risks, and how algorithms can help prevent them.

    What’s new when it comes to finding bugs in chips?

    Designers have always tried to find logic flaws, or bugs as they are called, before chips went into manufacturing. Otherwise, hackers might exploit these flaws to hijack computers or cause malfunctions. This has been called debugging and it has never been easy. Yet we are now starting to discover a new type of chip vulnerability that is different from so-called bugs. These new weaknesses do not arise from logic flaws. Instead, hackers can figure out how to misuse a feature that has been purposely designed into a chip. There is not a flaw in the logic. But hackers might be able to pervert the logic to steal sensitive data or take over the chip.

    Have we already suffered from these unintended consequence attacks?

    In a way, yes. Last year some white hat security experts — good guys who try to anticipate hack attacks — discovered two attacks that could be used to guess secret data contained in sophisticated microprocessors. The white hats called these attacks Spectre and Meltdown. The attacks misused two features designed to speed up chip performance, known as "out-of-order execution" and "speculative execution." These features store certain data in a chip in a way that makes the data immediately available should the program require it. Say the program requires access to credit card info or private health data. The white hats discovered that Spectre and Meltdown could eavesdrop on any network to which the chip is connected and read that stored data right off the chip.

    How?

    The analogy would be guessing a word in a crossword puzzle without knowing the answer. If a clue demands a plural answer, the last letter is probably an ‘s.’ If the word is in the past tense, the last two letters are probably ‘ed’ and so on. The white hats discovered that hackers could use the out-of-order and speculative execution features as clues to make repeated guesses about what data was being stored for instant use. We think Spectre and Meltdown were discovered before hackers could actually perform such attacks. But it was a big wake-up call. You wouldn’t want a hacker using a technique like that to take control of your self-driving car.

    How do your algorithms deal with traditional bugs and these new unintended weaknesses?

    Let’s start with the traditional bugs. We developed a technique called Symbolic Quick Error Detection — or Symbolic QED. Essentially, we use new algorithms to examine chip designs for potential logic flaws or bugs. We recently tested our algorithms on 16 processors that were already being used to help control critical automotive systems like braking and steering. Before these chips went into cars, the designers had already spent five years debugging their own processors using state-of-the-art techniques and fixing all the bugs they found. After using Symbolic QED for one month, we found every bug they’d found in 60 months — and then we found some bugs that were still in the chips. This was a validation of our approach. We think that by using Symbolic QED before a chip goes into manufacturing we’ll be able to find and fix more logic flaws in less time.
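
    Symbolic QED itself operates on hardware designs using formal analysis, but the underlying quick-error-detection idea, running the same computation in two different ways and flagging any disagreement, can be sketched in ordinary software; the example below is only a software analogy of that idea, not the authors' method.

```python
# Software analogy of a QED-style check: compute the same value two ways
# and treat any mismatch as evidence of a logic bug.
def sum_reference(xs):
    return sum(xs)

def sum_duplicated(xs):
    total = 0
    for x in reversed(xs):   # a deliberately different execution order
        total += x
    return total

def qed_check(xs):
    a, b = sum_reference(xs), sum_duplicated(xs)
    assert a == b, f"QED mismatch: {a} != {b} (possible logic bug)"
    return a

print(qed_check([1, 2, 3, 4]))   # prints 10; an assertion error would flag a bug
```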

    Would Symbolic QED have found vulnerabilities like Spectre and Meltdown?

    Not in its current incarnation. But we recently collaborated with a research group at the Technische Universität Kaiserslautern in Germany to create an algorithm called Unique Program Execution (UPEC). Essentially, we modified Symbolic QED to anticipate the ways that hackers might exploit a chip’s legitimate features for their own ends. The German researchers then applied UPEC to a class of processors that might run a home security system or other appliance hooked up to the internet of things. UPEC detected new types of attacks that didn’t result from logic flaws, but from the potential misuse of some seemingly innocuous feature.

    This is just the beginning. The processors we tested were relatively simple. Yet, as we saw, they could be perverted. Over time we will develop more sophisticated algorithms to detect and fix vulnerabilities in the most sophisticated chips, like the ones responsible for controlling navigation systems on autonomous cars. Our message is simple: As we develop more chips for more critical tasks, we'll need automated systems to find and fix all potential vulnerabilities — traditional bugs and unintended consequences — before chips go into manufacturing. Otherwise we'll always be playing catch-up, trying to patch chips after hackers find the vulnerabilities.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 8:01 am on May 10, 2019 Permalink | Reply
    Tags: "Q&A: SLAC/Stanford researchers prepare for a new quantum revolution", , , Computer Science, , , , , , , Quantum squeezing, , The most exciting opportunities in quantum control make use of a phenomenon known as entanglement   

    From SLAC National Accelerator Lab- “Q&A: SLAC/Stanford researchers prepare for a new quantum revolution” 

    From SLAC National Accelerator Lab

    May 9, 2019
    Manuel Gnida

    Monika Schleier-Smith and Kent Irwin explain how their projects in quantum information science could help us better understand black holes and dark matter.

    The tech world is abuzz about quantum information science (QIS). This emerging technology explores bizarre quantum effects that occur on the smallest scales of matter and could potentially revolutionize the way we live.

    Quantum computers would outperform today’s most powerful supercomputers; data transfer technology based on quantum encryption would be more secure; exquisitely sensitive detectors could pick up fainter-than-ever signals from all corners of the universe; and new quantum materials could enable superconductors that transport electricity without loss.

    In December 2018, President Trump signed the National Quantum Initiative Act into law, which will mobilize $1.2 billion over the next five years to accelerate the development of quantum technology and its applications. Three months earlier, the Department of Energy had already announced $218 million in funding for 85 QIS research awards.

    The Fundamental Physics and Technology Innovation directorates of DOE’s SLAC National Accelerator Laboratory recently joined forces with Stanford University on a new initiative called Q-FARM to make progress in the field. In this Q&A, two Q-FARM scientists explain how they will explore the quantum world through projects funded by DOE QIS awards in high-energy physics.

    Monika Schleier-Smith, assistant professor of physics at Stanford, wants to build a quantum simulator made of atoms to test how quantum information spreads. The research, she said, could even lead to a better understanding of black holes.

    Kent Irwin, professor of physics at Stanford and professor of photon science and of particle physics and astrophysics at SLAC, works on quantum sensors that would open new avenues to search for the identity of the mysterious dark matter that makes up most of the universe.

    1
    Monika Schleier-Smith and Kent Irwin are the principal investigators of three quantum information science projects in high-energy physics at SLAC. (Farrin Abbott/Dawn Harmer/SLAC National Accelerator Laboratory)

    What exactly is quantum information science?

    Irwin: If we look at the world on the smallest scales, everything we know is already “quantum.” On this scale, the properties of atoms, molecules and materials follow the rules of quantum mechanics. QIS strives to make significant advances in controlling those quantum effects that don’t exist on larger scales.

    Schleier-Smith: We’re truly witnessing a revolution in the field in the sense that we’re getting better and better at engineering systems with carefully designed quantum properties, which could pave the way for a broad range of future applications.

    What does quantum control mean in practice?

    Schleier-Smith: The most exciting opportunities in quantum control make use of a phenomenon known as entanglement – a type of correlation that doesn’t exist in the “classical,” non-quantum world. Let me give you a simple analogy: Imagine that we flip two coins. Classically, whether one coin shows heads or tails is independent of what the other coin shows. But if the two coins are instead in an entangled quantum state, looking at the result for one “coin” automatically determines the result for the other one, even though the coin toss still looks random for either coin in isolation.

    Entanglement thus provides a fundamentally new way of encoding information – not in the states of individual “coins” or bits but in correlations between the states of different qubits. This capability could potentially enable transformative new ways of computing, where problems that are intrinsically difficult to solve on classical computers might be more efficiently solved on quantum ones. A challenge, however, is that entangled states are exceedingly fragile: any measurement of the system – even unintentional – necessarily changes the quantum state. So a major area of quantum control is to understand how to generate and preserve this fragile resource.
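
    A small numerical stand-in for the coin analogy, assuming NumPy is available: independent coins are uncorrelated, while perfectly correlated coins always agree even though each looks random on its own. This classical simulation only reproduces the correlation in the analogy; genuine entanglement has further properties, such as violating Bell inequalities, that no classical model captures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two independent classical coins: each looks random, and they are uncorrelated.
coin1 = rng.integers(0, 2, n)
coin2 = rng.integers(0, 2, n)
print("independent coins agree:", (coin1 == coin2).mean())   # about 0.5

# Stand-in for the entangled pair: each outcome still looks random on its own,
# but reading one coin immediately tells you what the other will show.
shared = rng.integers(0, 2, n)
coin_a, coin_b = shared, shared.copy()
print("correlated coins agree:", (coin_a == coin_b).mean())  # exactly 1.0
print("coin_a alone still looks random:", coin_a[:10])
```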

    At the same time, certain quantum technologies can also take advantage of the extreme sensitivity of quantum states to perturbations. One application is in secure telecommunications: If a sender and receiver share information in the form of quantum bits, an eavesdropper cannot go undetected, because her measurement necessarily changes the quantum state.

    Another very promising application is quantum sensing, where the idea is to reduce noise and enhance sensitivity by controlling quantum correlations, for instance, through quantum squeezing.

    What is quantum squeezing?

    Irwin: Quantum mechanics sets limits on how we can measure certain things in nature. For instance, we can’t perfectly measure both the position and momentum of a particle. The very act of measuring one changes the other. This is called the Heisenberg uncertainty principle. When we search for dark matter, we need to measure an electromagnetic signal extremely well, but Heisenberg tells us that we can’t measure the strength and timing of this signal without introducing uncertainty.

    Quantum squeezing allows us to evade limits on measurement set by Heisenberg by putting all the uncertainty into one thing (which we don’t care about), and then measuring the other with much greater precision. So, for instance, if we squeeze all of the quantum uncertainty in an electromagnetic signal into its timing, we can measure its strength much better than quantum mechanics would ordinarily allow. This lets us search for an electromagnetic signal from dark matter much more quickly and sensitively than is otherwise possible.
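
    In textbook notation, the trade-off Irwin describes looks like this; the quadrature form below is the standard one for an electromagnetic mode, not a formula specific to the dark matter search.

```latex
% Heisenberg uncertainty for position and momentum
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}

% For the two quadratures X_1, X_2 of an electromagnetic signal
% (roughly its amplitude and timing), with vacuum noise 1/2 in each:
\Delta X_1 \,\Delta X_2 \;\ge\; \tfrac{1}{4},
\qquad \text{squeezing: } \Delta X_1 < \tfrac{1}{2} \;\Rightarrow\; \Delta X_2 > \tfrac{1}{2}
```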

    2
    Kent Irwin (at left with Dale Li) leads efforts at SLAC and Stanford to build quantum sensors for exquisitely sensitive detectors. (Andy Freeberg/SLAC National Accelerator Laboratory)

    What types of sensors are you working on?

    Irwin: My team is exploring quantum techniques to develop sensors that could break new ground in the search for dark matter.

    We’ve known since the 1930s that the universe contains much more matter than the ordinary type that we can see with our eyes and telescopes – the matter made up of atoms. Whatever dark matter is, it’s a new type of particle that we don’t understand yet. Most of today’s dark matter detectors search for relatively heavy particles, called weakly interacting massive particles, or WIMPs.

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LBNL LZ project at SURF, Lead, SD, USA

    But what if dark matter particles were so light that they wouldn’t leave a trace in those detectors? We want to develop sensors that would be able to “see” much lighter dark matter particles.

    There would be so many of these very light dark matter particles that they would behave much more like waves than individual particles. So instead of looking for collisions of individual dark matter particles within a detector, which is how WIMP detectors work, we want to look for dark matter waves, which would be detected like a very weak AM radio signal.

    In fact, we even call one of our projects “Dark Matter Radio.” It works like the world’s most sensitive AM radio. But it’s also placed in the world’s most perfect radio shield, made up of a material called a superconductor, which keeps all normal radio waves out. However, unlike real AM radio signals, dark matter waves would be able to go right through the shield and produce a signal. So we are looking for a very weak AM radio station made by dark matter at an unknown frequency.

    Quantum sensors can make this radio much more sensitive, for instance by using quantum tricks such as squeezing and entanglement. So the Dark Matter Radio will not only be the world’s most sensitive AM radio; it will also be better than the Heisenberg uncertainty principle would normally allow.
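
    As a rough illustration of searching for "a very weak AM radio station at an unknown frequency," the sketch below (assuming NumPy) hides a faint tone in simulated noise and finds it by scanning the power spectrum; the numbers are arbitrary and have nothing to do with the actual Dark Matter Radio readout.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 10_000, 200_000                  # sample rate (Hz) and number of samples
t = np.arange(n) / fs

# A faint tone at an unknown frequency, buried in much larger per-sample noise.
f_signal = 1234.0                        # hypothetical "station" frequency
data = 0.05 * np.cos(2 * np.pi * f_signal * t) + rng.normal(0.0, 1.0, n)

# Tune across the whole dial at once: look for excess power in any frequency bin.
power = np.abs(np.fft.rfft(data)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)
best = np.argmax(power[1:]) + 1          # skip the DC bin
print("loudest bin at %.2f Hz" % freqs[best])
```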

    What are the challenges of QIS?

    Schleier-Smith: There is a lot we need to learn about controlling quantum correlations before we can make broad use of them in future applications. For example, the sensitivity of entangled quantum states to perturbations is great for sensor applications. However, for quantum computing it’s a major challenge because perturbations of information encoded in qubits will introduce errors, and nobody knows for sure how to correct for them.

    To make progress in that area, my team is studying a question that is very fundamental to our ability to control quantum correlations: How does information actually spread in quantum systems?

    The model system we’re using for these studies consists of atoms that are laser-cooled and optically trapped. We use light to controllably turn on interactions between the atoms, as a means of generating entanglement. By measuring the speed with which quantum information can spread in the system, we hope to understand how to design the structure of the interactions to generate entanglement most efficiently. We view the system of cold atoms as a quantum simulator that allows us to study principles that are also applicable to other physical systems.

    In this area of quantum simulation, one major thrust has been to advance understanding of solid-state systems, by trapping atoms in arrays that mimic the structure of a crystalline material. In my lab, we are additionally working to extend the ideas and tools of quantum simulation in new directions. One prospect that I am particularly excited about is to use cold atoms to simulate what happens to quantum information in black holes.

    3
    Monika Schleier-Smith (at center with graduate students Emily Davis and Eric Cooper) uses laser-cooled atoms in her lab at Stanford to study the transfer of quantum information. (Dawn Harmer/SLAC National Accelerator Laboratory)

    What do cold atoms have to do with black holes?

    Schleier-Smith: The idea that there might be any connection between quantum systems we can build in the lab and black holes has its origins in a long-standing theoretical problem: When particles fall into a black hole, what happens to the information they contained? There were compelling arguments that the information should be lost, but that would contradict the laws of quantum mechanics.

    More recently, theoretical physicists – notably my Stanford colleague Patrick Hayden – found a resolution to this problem: We should think of the black hole as a highly chaotic system that “scrambles” the information as fast as physically possible. It’s almost like shredding documents, but quantum information scrambling is much richer in that the result is a highly entangled quantum state.

    Although precisely recreating such a process in the lab will be very challenging, we hope to look at one of its key features already in the near term. In order for information scrambling to happen, information needs to be transferred through space exponentially fast. This, in turn, requires quantum interactions to occur over long distances, which is quite counterintuitive because interactions in nature typically become weaker with distance. With our quantum simulator, we are able to study interactions between distant atoms by sending information back and forth with photons, particles of light.

    What do you hope will happen in QIS over the next few years?

    Irwin: We need to prove that, in real applications, quantum technology is superior to the technology that we already have. We are in the early stages of this new quantum revolution, but this is already starting to happen. The things we’re learning now will help us make a leap in developing future technology, such as universal quantum computers and next-generation sensors. The work we do on quantum sensors will enable new science, not only in dark matter research. At SLAC, I also see potential for quantum-enhanced sensors in X-ray applications, which could provide us with new tools to study advanced materials and understand how biomolecules work.

    Schleier-Smith: QIS offers plenty of room for breakthroughs. There are many open questions we still need to answer about how to engineer the properties of quantum systems in order to harness them for technology, so it’s imperative that we continue to broadly advance our understanding of complex quantum systems. Personally, I hope that we’ll be able to better connect experimental observations with the latest theoretical advances. Bringing all this knowledge together will help us build the technologies of the future.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    SLAC/LCLS


    SLAC/LCLS II projected view


    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 2:21 pm on April 16, 2019 Permalink | Reply
    Tags: , , Computer Science, , , Natural Sciences, The Brendan Iribe Center for Computer Science and Engineering, UMIACS-University of Maryland Institute for Advanced Computer Studies,   

    From University of Maryland CMNS: “University of Maryland Launches Center for Machine Learning” 

    U Maryland bloc

    From University of Maryland


    CMNS

    April 16, 2019

    Abby Robinson
    301-405-5845
    abbyr@umd.edu

    The University of Maryland recently launched a multidisciplinary center that uses powerful computing tools to address challenges in big data, computer vision, health care, financial transactions and more.

    The University of Maryland Center for Machine Learning will unify and enhance numerous activities in machine learning already underway on the Maryland campus.

    1
    University of Maryland computer science faculty member Thomas Goldstein (on left, with visiting graduate student) is a member of the new Center for Machine Learning. Goldstein’s research focuses on large-scale optimization and distributed algorithms for big data. Photo: John T. Consoli.

    Machine learning uses algorithms and statistical models so that computer systems can effectively perform a task without explicit instructions, relying instead on patterns and inference. At UMD, for example, computer vision experts are “training” computers to identify and match key facial characteristics by having machines analyze millions of images publicly available on social media.
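
    As a minimal illustration of learning from "patterns and inference" rather than explicit instructions, the sketch below fits an off-the-shelf classifier with scikit-learn (assumed to be installed); it is generic and not tied to any of the center's projects.

```python
# Learn a rule from labeled examples instead of writing the rule by hand.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)   # no hand-written rule for each digit
model.fit(X_train, y_train)                 # the "training" step: infer patterns
print("held-out accuracy:", model.score(X_test, y_test))
```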

    Researchers at UMD are exploring other applications such as groundbreaking work in cancer genomics; powerful algorithms to improve the selection process for organ transplants; and an innovative system that can quickly find, translate and summarize information from almost any language in the world.

    “We wanted to capitalize on the significant strengths we already have in machine learning, provide additional support, and embrace fresh opportunities arising from new facilities and partnerships,” said Mihai Pop, professor of computer science and director of the University of Maryland Institute for Advanced Computer Studies (UMIACS).

    The center officially launched with a workshop last month featuring talks and panel discussions from machine learning experts in auditory systems, biology and medicine, business, chemistry, natural language processing, and security.

    Initial funding for the center comes from the College of Computer, Mathematical, and Natural Sciences (CMNS) and UMIACS, which will provide technical and administrative support.

    An inaugural partner of the center, financial and technology leader Capital One, provided additional support, including endowing three faculty positions in machine learning and computer science. Those positions received matching funding from the state’s Maryland E-Nnovation Initiative.

    Capital One has also provided funding for research projects that align with the organization’s need to stay on the cutting edge in areas like fraud detection and enhancing the customer experience with more personalized, real-time features.

    “We are proud to be a part of the launch of the University of Maryland Center for Machine Learning, and are thrilled to extend our partnership with the university in this field,” said Dave Castillo, the company’s managing vice president at the Center for Machine Learning and Emerging Technology. “At Capital One, we believe forward-leaning technologies like machine learning can provide our customers greater protection, security, confidence and control of their finances. We look forward to advancing breakthrough work with the University of Maryland in years to come.”

    3
    University of Maryland computer science faculty members David Jacobs (left) and Furong Huang (right) are part of the new Center for Machine Learning. Jacobs is an expert in computer vision and is the center’s interim director; Huang is conducting research in neural networks. Photo: John T. Consoli.

    David Jacobs, a professor of computer science with an appointment in UMIACS, will serve as interim director of the new center.

    To jumpstart the center’s activities, Jacobs has recruited a core group of faculty members in computer science and UMIACS: John Dickerson, Soheil Feizi, Thomas Goldstein, Furong Huang and Aravind Srinivasan.

    Faculty members from mathematics, chemistry, biology, physics, linguistics, and data science are also heavily involved in machine learning applications, and Jacobs said he expects many of them to be active in the center through direct or affiliate appointments.

    “We want the center to be a focal point across the campus where faculty, students, and visiting scholars can come to learn about the latest technologies and theoretical applications based in machine learning,” he said.

    Key to the center’s success will be a robust computational infrastructure that is needed to perform complex computations involving massive amounts of data.

    This is where UMIACS plays an important role, Jacobs said, with the institute’s technical staff already supporting multiple machine learning activities in computer vision and computational linguistics.

    Plans call for CMNS, UMIACS and other organizations to invest substantially in new computing resources for the machine learning center, Jacobs added.

    4
    The Brendan Iribe Center for Computer Science and Engineering. Photo: John T. Consoli.

    The center will be located in the Brendan Iribe Center for Computer Science and Engineering, a new state-of-the-art facility at the entrance to campus that will be officially dedicated later this month. In addition to the very latest in computing resources, the Brendan Iribe Center promotes collaboration and connectivity through its open design and multiple meeting areas.

    The Brendan Iribe Center is directly adjacent to the university’s Discovery District, where researchers working in Capital One’s Tech Incubator and other tech startups can interact with UMD faculty members and students on topics related to machine learning.

    Amitabh Varshney, professor of computer science and dean of CMNS, said the center will be a valuable resource for the state of Maryland and the region—both for students seeking the latest knowledge and skills and for companies wanting professional development training for their employees.

    “We have new educational activities planned by the college that include professional master’s programs in machine learning and data science and analytics,” Varshney said. “We want to leverage our location near numerous federal agencies and private corporations that are interested in expanding their workforce capabilities in these areas.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Maryland Campus

    About CMNS

    The thirst for new knowledge is a fundamental and defining characteristic of humankind. It is also at the heart of scientific endeavor and discovery. As we seek to understand our world, across a host of complexly interconnected phenomena and over scales of time and distance that were virtually inaccessible to us a generation ago, our discoveries shape that world. At the forefront of many of these discoveries is the College of Computer, Mathematical, and Natural Sciences (CMNS).

    CMNS is home to 12 major research institutes and centers and to 10 academic departments: astronomy, atmospheric and oceanic science, biology, cell biology and molecular genetics, chemistry and biochemistry, computer science, entomology, geology, mathematics, and physics.

    Our Faculty

    Our faculty are at the cutting edge over the full range of these disciplines. Our physicists fill in major gaps in our fundamental understanding of matter, participating in the recent Higgs boson discovery, and demonstrating the first-ever teleportation of information between atoms. Our astronomers probe the origin of the universe with one of the world’s premier radio observatories, and have just discovered water on the moon. Our computer scientists are developing the principles for guaranteed security and privacy in information systems.

    Our Research

    Driven by the pursuit of excellence, the University of Maryland has enjoyed a remarkable rise in accomplishment and reputation over the past two decades. By any measure, Maryland is now one of the nation’s preeminent public research universities and on a path to become one of the world’s best. To fulfill this promise, we must capitalize on our momentum, fully exploit our competitive advantages, and pursue ambitious goals with great discipline and entrepreneurial spirit. This promise is within reach. This strategic plan is our working agenda.

    The plan is comprehensive, bold, and action oriented. It sets forth a vision of the University as an institution unmatched in its capacity to attract talent, address the most important issues of our time, and produce the leaders of tomorrow. The plan will guide the investment of our human and material resources as we strengthen our undergraduate and graduate programs and expand research, outreach and partnerships, become a truly international center, and enhance our surrounding community.

    Our success will benefit Maryland in the near and long term, strengthen the State’s competitive capacity in a challenging and changing environment and enrich the economic, social and cultural life of the region. We will be a catalyst for progress, the State’s most valuable asset, and an indispensable contributor to the nation’s well-being. Achieving the goals of Transforming Maryland requires broad-based and sustained support from our extended community. We ask our stakeholders to join with us to make the University an institution of world-class quality with world-wide reach and unparalleled impact as it serves the people and the state of Maryland.

    Our researchers are also at the cusp of the new biology for the 21st century, with bioscience emerging as a key area in almost all CMNS disciplines. Entomologists are learning how climate change affects the behavior of insects, and earth science faculty are coupling physical and biosphere data to predict that change. Geochemists are discovering how our planet evolved to support life, and biologists and entomologists are discovering how evolutionary processes have operated in living organisms. Our biologists have learned how human generated sound affects aquatic organisms, and cell biologists and computer scientists use advanced genomics to study disease and host-pathogen interactions. Our mathematicians are modeling the spread of AIDS, while our astronomers are searching for habitable exoplanets.

    Our Education

    CMNS is also a national resource for educating and training the next generation of leaders. Many of our major programs are ranked in the top 10 among public research universities in the nation. CMNS offers every student a high-quality, innovative and cross-disciplinary educational experience that is also affordable. Strongly committed to making science and mathematics studies available to all, CMNS actively encourages and supports the recruitment and retention of women and minorities.

    Our Students

    Our students have the unique opportunity to work closely with first-class faculty in state-of-the-art labs both on and off campus, conducting real-world, high-impact research on some of the most exciting problems of modern science. 87% of our undergraduates conduct research and/or hold internships while earning their bachelor’s degree. CMNS degrees command respect around the world, and open doors to a wide variety of rewarding career options. Many students continue on to graduate school; others find challenging positions in high-tech industry or federal laboratories, and some join professions such as medicine, teaching, and law.

     
  • richardmitnick 8:45 am on April 16, 2019 Permalink | Reply
    Tags: A modified Java virtual machine, Computer Science, Data Compression

    From MIT News: “A novel data-compression technique for faster computer programs” 

    MIT News
    MIT Widget

    From MIT News

    April 16, 2019
    Rob Matheson

    1
    A novel technique developed by MIT researchers compresses “objects” in memory for the first time, freeing up more memory used by computers, allowing them to run faster and perform more tasks simultaneously. Image: Christine Daniloff, MIT

    Researchers free up more bandwidth by compressing “objects” within the memory hierarchy.

    A novel technique developed by MIT researchers rethinks hardware data compression to free up more memory used by computers and mobile devices, allowing them to run faster and perform more tasks simultaneously.

    Data compression leverages redundant data to free up storage capacity, boost computing speeds, and provide other perks. In current computer systems, accessing main memory is very expensive compared to actual computation. Because of this, using data compression in the memory helps improve performance, as it reduces the frequency and amount of data programs need to fetch from main memory.

    Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate. Software, however, doesn’t naturally store its data in fixed-size chunks. Instead, it uses “objects,” data structures that contain various types of data and have variable sizes. Therefore, traditional hardware compression techniques handle objects poorly.
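
    To make that contrast concrete, here is a minimal illustrative sketch in Python; the class names and sizes are invented for illustration and are not taken from the MIT paper.

    # Illustrative sketch only: fixed-size blocks versus variable-size objects.
    # The class names and sizes below are hypothetical, not from the MIT paper.

    CACHE_LINE_BYTES = 64   # hardware moves data in fixed-size chunks like this

    class Point:
        # A small object: two numeric fields, far smaller than one cache line.
        def __init__(self, x, y):
            self.x, self.y = x, y

    class Order:
        # A larger object: mixed field types and a variable-length list,
        # so its size differs from one Order to the next.
        def __init__(self, order_id, customer, items):
            self.order_id = order_id
            self.customer = customer
            self.items = items

    # A block-based compressor sees only opaque 64-byte chunks; an object-aware
    # scheme can instead work with whole Points and Orders of varying size.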

    In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems this week, the MIT researchers describe the first approach to compress objects across the memory hierarchy. This reduces memory usage while improving performance and efficiency.

    Programmers could benefit from this technique when programming in any modern programming language — such as Java, Python, and Go — that stores and manages data in objects, without changing their code. On their end, consumers would see computers that can run much faster or can run many more apps at the same speeds. Because each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.

    In experiments using a modified Java virtual machine, the technique compressed twice as much data and reduced memory usage by half over traditional cache-based methods.

    “The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that’s how most modern programming languages manage data,” says first author Po-An Tsai, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    “All computer systems would benefit from this,” adds co-author Daniel Sanchez, a professor of computer science and electrical engineering, and a researcher at CSAIL. “Programs become faster because they stop being bottlenecked by memory bandwidth.”

    The researchers built on their prior work that restructures the memory architecture to directly manipulate objects. Traditional architectures store data in blocks in a hierarchy of progressively larger and slower memories, called “caches.” Recently accessed blocks rise to the smaller, faster caches, while older blocks are moved to slower and larger caches, eventually ending back in main memory. While this organization is flexible, it is costly: To access memory, each cache needs to search for the address among its contents.

    “Because the natural unit of data management in modern programming languages is objects, why not just make a memory hierarchy that deals with objects?” Sanchez says.

    In a paper published last October, the researchers detailed a system called Hotpads that stores entire objects, tightly packed into hierarchical levels, or “pads.” These levels reside entirely on efficient, on-chip, directly addressed memories — with no sophisticated searches required.

    Programs then directly reference the location of all objects across the hierarchy of pads. Newly allocated and recently referenced objects, and the objects they point to, stay in the faster level. When the faster level fills, it runs an “eviction” process that keeps recently referenced objects but kicks down older objects to slower levels and recycles objects that are no longer useful, to free up space. Pointers are then updated in each object to point to the new locations of all moved objects. In this way, programs can access objects much more cheaply than searching through cache levels.
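
    The eviction process described above can be sketched as a toy software analogy in Python. The two-level structure, method names and capacity below are assumptions made for illustration; the real Hotpads design is a hardware memory hierarchy, not a Python class.

    # Toy analogy of the eviction idea described in the article (not the real
    # Hotpads hardware): recently referenced objects stay in the fast level;
    # when it fills, older objects are kicked down to the slow level.

    class TwoLevelPads:
        def __init__(self, fast_capacity):
            self.fast = {}             # id -> object in the small, fast level
            self.slow = {}             # id -> object in the large, slow level
            self.fast_capacity = fast_capacity
            self.recently_used = []    # ids of fast-level objects, oldest first

        def allocate(self, obj_id, obj):
            # Newly allocated objects start in the fast level.
            self.fast[obj_id] = obj
            self.recently_used.append(obj_id)
            self._evict_if_full()

        def access(self, obj_id):
            # Referencing an object keeps it in, or brings it back to, the fast level.
            if obj_id in self.fast:
                obj = self.fast[obj_id]
            else:
                obj = self.slow.pop(obj_id)    # KeyError if the object is unknown
                self.fast[obj_id] = obj
            if obj_id in self.recently_used:
                self.recently_used.remove(obj_id)
            self.recently_used.append(obj_id)
            self._evict_if_full()
            return obj

        def _evict_if_full(self):
            # When the fast level fills, move the oldest objects down a level.
            while len(self.fast) > self.fast_capacity:
                victim = self.recently_used.pop(0)
                self.slow[victim] = self.fast.pop(victim)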

    For their new work, the researchers designed a technique, called “Zippads,” that leverages the Hotpads architecture to compress objects. When objects first start at the faster level, they’re uncompressed. But when they’re evicted to slower levels, they’re all compressed. Pointers in all objects across levels then point to those compressed objects, which makes them easy to recall to the faster levels and lets them be stored more compactly than with prior techniques.

    A compression algorithm then exploits redundancy across objects. This uncovers more compression opportunities than previous techniques, which were limited to finding redundancy within each fixed-size block. The algorithm first picks a few representative objects as “base” objects. Then, for each new object, it stores only the data that differs from the corresponding base object.
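
    A minimal sketch of that “base object plus differences” idea follows, with plain Python dictionaries standing in for objects; the actual Zippads algorithm operates on raw object bytes in hardware, so this is only an illustration.

    # Minimal sketch of the base-plus-differences idea described above.
    # Dictionaries stand in for objects; the real algorithm works on raw bytes.

    def compress_against_base(base, obj):
        # Keep only the fields where this object differs from the base object.
        return {field: value for field, value in obj.items() if base.get(field) != value}

    def decompress_with_base(base, delta):
        # Rebuild the full object from the base plus its stored differences.
        restored = dict(base)
        restored.update(delta)
        return restored

    base = {"type": "point", "color": "black", "x": 0, "y": 0}
    obj  = {"type": "point", "color": "black", "x": 3, "y": 7}

    delta = compress_against_base(base, obj)          # {'x': 3, 'y': 7} -- much smaller
    assert decompress_with_base(base, delta) == obj   # round-trips exactly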

    Brandon Lucia, an assistant professor of electrical and computer engineering at Carnegie Mellon University, praises the work for leveraging features of object-oriented programming languages to better compress memory. “Abstractions like object-oriented programming are added to a system to make programming simpler, but often introduce a cost in the performance or efficiency of the system,” he says. “The interesting thing about this work is that it uses the existing object abstraction as a way of making memory compression more effective, in turn making the system faster and more efficient with novel computer architecture features.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 4:24 pm on March 27, 2019 Permalink | Reply
    Tags: Computer Science, Miti Joshi

    From Vanderbilt University: Women in STEM-“Find Your Impact: Student empowers women through tech” Miti Joshi 

    Vanderbilt U Bloc

    From Vanderbilt University

    Mar. 27, 2019
    Amy Wolf

    Miti Joshi admits that before she stepped foot on Vanderbilt’s campus, she was very wary of what would later become one of her greatest passions.

    “I’m moving across the world alone, and I need to take a leap of faith here,” said Joshi, an international student from Mumbai, India. “Let me try the craziest thing and declare a computer science major.”

    Joshi, who had never coded before, nor met a female computer programmer, second-guessed her decision immediately.

    “Engineering, yes. Computer science? Not in my wildest dreams,” said Joshi, a Chancellor’s Scholarship recipient. “I hate to say it now, but back then I honestly just thought it was something that only really smart boys did.”

    Role Model

    One of her first professors at the School of Engineering altered her mindset completely.

    “Professor Julie Johnson helped me fall in love with the subject. I was so inspired by this accomplished, confident woman,” said Joshi, who spent many hours in Johnson’s office discussing the role of women in computer science. “She helped me to realize that I was not an imposter and that I absolutely belonged in CS.”

    1
    Julie Johnson, associate professor of the practice of computer science (Susan Urmy/Vanderbilt)

    “Miti’s insights and technical abilities, coupled with her non-stop energy, bring her ideas to life,” Johnson said. “It’s contagious! When Miti has an idea, you can’t help but want to get on board.”

    VandyHacks

    With her newfound confidence, Joshi and a group of freshmen competed in VandyHacks, a 36-hour invention marathon held at the Wond’ry in 2016. Hundreds of students from as far away as California packed the innovation and entrepreneurship center, as well as nearby halls and classrooms, with the goal of producing the next great tech invention.

    2
    Miti Joshi (center) and her team created a virtual reality app at her first VandyHacks hack-a-thon.

    “We did a virtual reality project and it was really difficult. Everything kept breaking, and we didn’t know what was happening because it was our first coding project,” Joshi remembered.

    In the end, the app was successful, and the group won an award for their ambition and drive. That’s when Joshi knew she was on the right path.

    “There’s a certain pure bliss that you feel when you get something right, and CS gives me that,” she said.

    Vanderbilt Women in Computing

    Joshi wanted to encourage that feeling of confidence among female engineering students and create a space where young women could ask questions, help one another and network. With the guidance of graduate student Hayley Adams and Assistant Professor of Computer Science and Computer Engineering Maithilee Kunda, Joshi launched Vanderbilt Women in Computing.

    Seeing other women in computer science has been a source of empowerment for members of the organization. “I feel like people started becoming more comfortable in their own skin in classrooms, and more confident,” Joshi said. “You don’t have to wear a hoodie and code all day to be a great programmer. We wanted to create a space for women to be their authentic selves.”

    Emerge

    The group has created learning and networking events to connect Vanderbilt and the greater Nashville tech community through Emerge conferences. The first two focused on virtual reality and artificial intelligence.

    “People want to learn about new technologies, not just about how to code the new tech,” Joshi said. “They want to discuss how virtual reality or artificial intelligence is going to impact all of our futures.”

    3
    Joshi used this photo to announce the creation of Vanderbilt Women in Computing.

    Mental Wellness

    Joshi’s work with Vanderbilt Women in Computing also opened conversations about mental wellness within the larger tech and engineering spaces.

    “There is the general notion that I am ‘weak’ if I’m facing mental health issues or am overwhelmed by all of the tight deadlines associated with engineering-related projects,” she said. “But if I need to seek mental health resources, I’m not a wimp and I’m not backing out from actually doing the hard work.”

    Joshi said she finds it serendipitous that Vanderbilt’s Center for Student Wellbeing is in close proximity to Featheringill Hall. “It’s a physical reminder to people who are struggling that Vanderbilt has great resources for us.”

    Passion for Dance

    Throughout her time at Vanderbilt, Joshi has let off steam by participating in many of the international dance showcases on campus, including Diwali, Harambee and Café Con Leche.

    3
    Joshi in the Diwali Showcase, 2018.

    “I think that’s the coolest thing I’ve done, and I’ve met incredible friends through dance,” she said.

    Joshi wants to connect her passions for people and computer science following graduation.

    “I think the thing that I love the most about tech is its ability to touch people in really profound and meaningful ways,” she said. “I want to stay in CS, and I want to help make beautiful tech that helps impact people.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Commodore Cornelius Vanderbilt was in his 79th year when he decided to make the gift that founded Vanderbilt University in the spring of 1873.
    The $1 million that he gave to endow and build the university was the commodore’s only major philanthropy. Methodist Bishop Holland N. McTyeire of Nashville, husband of Amelia Townsend who was a cousin of the commodore’s young second wife Frank Crawford, went to New York for medical treatment early in 1873 and spent time recovering in the Vanderbilt mansion. He won the commodore’s admiration and support for the project of building a university in the South that would “contribute to strengthening the ties which should exist between all sections of our common country.”

    McTyeire chose the site for the campus, supervised the construction of buildings and personally planted many of the trees that today make Vanderbilt a national arboretum. At the outset, the university consisted of one Main Building (now Kirkland Hall), an astronomical observatory and houses for professors. Landon C. Garland was Vanderbilt’s first chancellor, serving from 1875 to 1893. He advised McTyeire in selecting the faculty, arranged the curriculum and set the policies of the university.

    For the first 40 years of its existence, Vanderbilt was under the auspices of the Methodist Episcopal Church, South. The Vanderbilt Board of Trust severed its ties with the church in June 1914 as a result of a dispute with the bishops over who would appoint university trustees.

    From the outset, Vanderbilt met two definitions of a university: It offered work in the liberal arts and sciences beyond the baccalaureate degree and it embraced several professional schools in addition to its college. James H. Kirkland, the longest serving chancellor in university history (1893-1937), followed Chancellor Garland. He guided Vanderbilt to rebuild after a fire in 1905 that consumed the main building, which was renamed in Kirkland’s honor, and all its contents. He also navigated the university through the separation from the Methodist Church. Notable advances in graduate studies were made under the third chancellor, Oliver Cromwell Carmichael (1937-46). He also created the Joint University Library, brought about by a coalition of Vanderbilt, Peabody College and Scarritt College.

    Remarkable continuity has characterized the government of Vanderbilt. The original charter, issued in 1872, was amended in 1873 to make the legal name of the corporation “The Vanderbilt University.” The charter has not been altered since.

    The university is self-governing under a Board of Trust that, since the beginning, has elected its own members and officers. The university’s general government is vested in the Board of Trust. The immediate government of the university is committed to the chancellor, who is elected by the Board of Trust.

    The original Vanderbilt campus consisted of 75 acres. By 1960, the campus had spread to about 260 acres of land. When George Peabody College for Teachers merged with Vanderbilt in 1979, about 53 acres were added.

    Vanderbilt’s student enrollment tended to double itself each 25 years during the first century of the university’s history: 307 in the fall of 1875; 754 in 1900; 1,377 in 1925; 3,529 in 1950; 7,034 in 1975. In the fall of 1999 the enrollment was 10,127.

    In the planning of Vanderbilt, the assumption seemed to be that it would be an all-male institution. Yet the board never enacted rules prohibiting women. At least one woman attended Vanderbilt classes every year from 1875 on. Most came to classes by courtesy of professors or as special or irregular (non-degree) students. From 1892 to 1901 women at Vanderbilt gained full legal equality except in one respect — access to dorms. In 1894 the faculty and board allowed women to compete for academic prizes. By 1897, four or five women entered with each freshman class. By 1913 the student body contained 78 women, or just more than 20 percent of the academic enrollment.

    National recognition of the university’s status came in 1949 with election of Vanderbilt to membership in the select Association of American Universities. In the 1950s Vanderbilt began to outgrow its provincial roots and to measure its achievements by national standards under the leadership of Chancellor Harvie Branscomb. By its 90th anniversary in 1963, Vanderbilt for the first time ranked in the top 20 private universities in the United States.

    Vanderbilt continued to excel in research, and the number of university buildings more than doubled under the leadership of Chancellors Alexander Heard (1963-1982) and Joe B. Wyatt (1982-2000), only the fifth and sixth chancellors in Vanderbilt’s long and distinguished history. Heard added three schools (Blair, the Owen Graduate School of Management and Peabody College) to the seven already existing and constructed three dozen buildings. During Wyatt’s tenure, Vanderbilt acquired or built one-third of the campus buildings and made great strides in diversity, volunteerism and technology.

    The university grew and changed significantly under its seventh chancellor, Gordon Gee, who served from 2000 to 2007. Vanderbilt led the country in the rate of growth of academic research funding, which increased to more than $450 million, and the university became one of the most selective undergraduate institutions in the country.

    On March 1, 2008, Nicholas S. Zeppos was named Vanderbilt’s eighth chancellor after serving as interim chancellor beginning Aug. 1, 2007. Prior to that, he spent 2002-2008 as Vanderbilt’s provost, overseeing undergraduate, graduate and professional education programs as well as development, alumni relations and research efforts in liberal arts and sciences, engineering, music, education, business, law and divinity. He first came to Vanderbilt in 1987 as an assistant professor in the law school. In his first five years, Zeppos led the university through the most challenging economic times since the Great Depression, while continuing to attract the best students and faculty from across the country and around the world. Vanderbilt got through the economic crisis notably less scathed than many of its peers and, during the same period, adopted and remained committed to its much-praised enhanced financial aid policy for all undergraduates. The Martha Rivers Ingram Commons for first-year students opened in 2008, and College Halls, the next phase in the residential education system at Vanderbilt, is on track to open in the fall of 2014. During Zeppos’ first five years, Vanderbilt has drawn robust support from federal funding agencies, and the Medical Center entered into agreements with regional hospitals and health care systems in middle and east Tennessee that will bring Vanderbilt care to patients across the state.

    Today, Vanderbilt University is a private research university of about 6,500 undergraduates and 5,300 graduate and professional students. The university comprises 10 schools, a public policy center and The Freedom Forum First Amendment Center. Vanderbilt offers undergraduate programs in the liberal arts and sciences, engineering, music, education and human development as well as a full range of graduate and professional degrees. The university is consistently ranked as one of the nation’s top 20 universities by publications such as U.S. News & World Report, with several programs and disciplines ranking in the top 10.

    Cutting-edge research and liberal arts, combined with strong ties to a distinguished medical center, create an invigorating atmosphere where students tailor their education to meet their goals and researchers collaborate to solve complex questions affecting our health, culture and society.

    Vanderbilt, an independent, privately supported university, and the separate, non-profit Vanderbilt University Medical Center share a respected name and enjoy close collaboration through education and research. Together, the number of people employed by these two organizations exceeds that of the largest private employer in the Middle Tennessee region.

     
  • richardmitnick 11:10 am on March 20, 2019 Permalink | Reply
    Tags: "Computer science college seniors in U.S. outperform peers in China, , , Computer Science, India and Russia, new research says",   

    From Stanford University: “Computer science college seniors in U.S. outperform peers in China, India and Russia, new research says” 

    Stanford University Name
    From Stanford University

    March 19, 2019
    Alex Shashkevich, Stanford News Service
    (650) 497-4419
    ashashkevich@stanford.edu

    1
    New Stanford-led research found that undergraduate seniors studying computer science in the United States outperformed their peers in China, India and Russia on a standardized exam measuring their skills. (Image credit: Sidekick / Getty Images)

    An international group of scholars led by the Graduate School of Education’s Prashant Loyalka found that undergraduate seniors studying computer science in the United States outperformed final-year students in China, India and Russia on a standardized exam measuring their skills. The research results were published on March 18 in a new paper in Proceedings of the National Academy of Sciences.

    International comparison of universities usually falls in the domain of popular news rankings and general public perception, which rely on limited information and do not consider the skills students acquire, Loyalka said. That’s why he and his team wanted to collect and analyze data on what students learn in colleges and universities in different countries.

    “There is this narrative that higher education in the United States is much stronger than in other countries, and we wanted to test whether that’s true,” said Loyalka, who is also a center research fellow at the Rural Education Action Program in the Freeman Spogli Institute for International Studies. “Our results suggest that the U.S. is doing a great job at least in terms of computer science education compared to these three other major countries.”

    The findings

    As part of the study, the researchers selected nationally representative samples of seniors from undergraduate computer science programs in the U.S., China, India and Russia. Students were given a two-hour standardized computer science test developed by the nonprofit testing and assessment organization Educational Testing Service. In total, 678 students in China, 364 students in India and 551 students in Russia were tested. In the United States, the researchers used assessment data on 6,847 seniors.

    The test, which aligns with national and international guidelines on what should be taught, probed how well students understand different concepts and knowledge about programming, algorithms, software engineering and other computer science principles.

    Researchers found that the average computer science student in the U.S. ranked higher than about 80 percent of students tested in China, India and Russia, Loyalka said. In contrast, the difference in scores among students in China, India and Russia was small and not statistically significant.

    Researchers also compared a smaller pool of students from top-ranking institutions in each country. They found that the average student in a top computer science program in the U.S. also ranked higher than about 80 percent of students from top programs in China, India and Russia. But the top Chinese, Indian and Russian students scored comparably with the U.S. students from regular institutions, according to the research.

    The researchers also found that the success of the American students wasn’t due to the sample having a large number of high-scoring international students. The researchers distinguished international students by their language skills. Of all sampled U.S. students, 89.1 percent reported that their best language is only English, which the researchers considered to be domestic U.S. students.

    “There is this sense in the public that the high quality of STEM programs in the United States is driven by its international students,” Loyalka said. “Our data show that’s not the case. The results hold if we only consider domestic students in the U.S.”

    The researchers also found that male students scored moderately higher than female students in each of the four countries.

    “The difference between men and women is there in every country, but the gaps are modest compared to the gaps we see between countries and elite and non-elite institutions,” Loyalka said.

    Further research

    The new research is a part of a larger effort led by Loyalka to examine the skills of students in science, technology, engineering and math fields in different countries. In another forthcoming paper, he and his collaborators examine other skills among students in the same four countries. Further research will also look at the relationship between skills developed in college and labor market outcomes, he said.

    Another major goal of the research team is to look more deeply at what might be driving the difference in the performance among countries.

    “We’re looking at different aspects of the college experience including faculty behavior, instruction and student interactions,” Loyalka said. “One of our major goals is to see what types of college experiences could contribute to better student performance.”

    Other Stanford co-authors on the paper included doctoral students Angela Sun Johnson and Saurabh Khanna as well as Ashutosh Bhuradia, a project manager for the research.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford University campus. No image credit

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 9:05 pm on February 17, 2019 Permalink | Reply
    Tags: "The Secret History of Women in Coding", , Computer Science,   

    From The New York Times: Women In STEM-“The Secret History of Women in Coding” 

    New York Times

    From The New York Times

    Feb. 13, 2019
    Clive Thompson

    Computer programming once had much better gender balance than it does today. What went wrong?

    1
    2
    Mary Allen Wilkes with a LINC at M.I.T., where she was a programmer. Credit Joseph C. Towler, Jr.

    As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer — she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: “Mary Allen, when you grow up, you should be a computer programmer!” Wilkes had no idea what a programmer was; she wasn’t even sure what a computer was. Relatively few Americans were. The first digital computers had been built barely a decade earlier at universities and in government labs.

    By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don’t even bother applying to law school. “They said: ‘Don’t do it. You may not get in. Or if you get in, you may not get out. And if you get out, you won’t get a job,’ ” she recalls. If she lucked out and got hired, it wouldn’t be to argue cases in front of a judge. More likely, she would be a law librarian, a legal secretary, someone processing trusts and estates.

    But Wilkes remembered her junior high school teacher’s suggestion. In college, she heard that computers were supposed to be the key to the future. She knew that the Massachusetts Institute of Technology had a few of them.


    So on the day of her graduation, she had her parents drive her over to M.I.T. and marched into the school’s employment office. “Do you have any jobs for computer programmers?” she asked. They did, and they hired her.

    It might seem strange now that they were happy to take on a random applicant with absolutely no experience in computer programming. But in those days, almost nobody had any experience writing code. The discipline did not yet really exist; there were vanishingly few college courses in it, and no majors. (Stanford, for example, didn’t create a computer-science department until 1965.) So instead, institutions that needed programmers just used aptitude tests to evaluate applicants’ ability to think logically. Wilkes happened to have some intellectual preparation: As a philosophy major, she had studied symbolic logic, which can involve creating arguments and inferences by stringing together and/or statements in a way that resembles coding.
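
    For readers who have never seen it, the kind of “and/or” reasoning the article compares to coding looks something like the following Python fragment; the rule and its inputs are invented purely as an illustration.

    # Invented example: a symbolic-logic style rule written as the sort of
    # and/or expression the article compares to coding.

    def likely_good_hire(passed_aptitude_test, studied_logic, has_math_degree):
        # "passed the aptitude test AND (studied logic OR holds a math degree)"
        return passed_aptitude_test and (studied_logic or has_math_degree)

    print(likely_good_hire(True, True, False))   # True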

    Wilkes quickly became a programming whiz. She first worked on the IBM 704, which required her to write in an abstruse “assembly language.”

    7
    An IBM 704 computer, with IBM 727 tape drives and IBM 780 CRT display. (Image courtesy of LLNL.)

    (A typical command might be something like “LXA A, K,” telling the computer to take the number in Location A of its memory and load it into the “Index Register” K.) Even getting the program into the IBM 704 was a laborious affair. There were no keyboards or screens; Wilkes had to write a program on paper and give it to a typist, who translated each command into holes on a punch card. She would carry boxes of commands to an “operator,” who then fed a stack of such cards into a reader. The computer executed the program and produced results, typed out on a printer.

    Often enough, Wilkes’s code didn’t produce the result she wanted. So she had to pore over her lines of code, trying to deduce her mistake, stepping through each line in her head and envisioning how the machine would execute it — turning her mind, as it were, into the computer. Then she would rewrite the program. The capacity of most computers at the time was quite limited; the IBM 704 could handle only about 4,000 “words” of code in its memory. A good programmer was concise and elegant and never wasted a word. They were poets of bits. “It was like working logic puzzles — big, complicated logic puzzles,” Wilkes says. “I still have a very picky, precise mind, to a fault. I notice pictures that are crooked on the wall.”

    What sort of person possesses that kind of mentality? Back then, it was assumed to be women. They had already played a foundational role in the prehistory of computing: During World War II, women operated some of the first computational machines used for code-breaking at Bletchley Park in Britain.

    9
    A Colossus Mark 2 computer being operated by Wrens. The slanted control panel on the left was used to set the “pin” (or “cam”) patterns of the Lorenz. The “bedstead” paper tape transport is on the right.

    Developer: Tommy Flowers, assisted by Sidney Broadhurst, William Chandler and, for the Mark 2 machines, Allen Coombs
    Manufacturer: Post Office Research Station
    Type: Special-purpose electronic digital programmable computer
    Generation: First-generation computer
    Release date: Mk 1: December 1943; Mk 2: 1 June 1944
    Discontinued: 1960

    8
    The Lorenz SZ machines had 12 wheels, each with a different number of cams (or “pins”).
    Wheel number:            1    2    3    4    5    6    7    8    9   10   11   12
    BP wheel name:          ψ1   ψ2   ψ3   ψ4   ψ5  μ37  μ61   χ1   χ2   χ3   χ4   χ5
    Number of cams (pins):  43   47   51   53   59   37   61   41   31   29   26   23

    Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world’s first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.

    Colossus was designed by research telephone engineer Tommy Flowers to solve a problem posed by mathematician Max Newman at the Government Code and Cypher School (GC&CS) at Bletchley Park. Alan Turing’s use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. Turing’s machine that helped decode Enigma was the electromechanical Bombe, not Colossus.

    In the United States, by 1960, according to government statistics, more than one in four programmers were women. At M.I.T.’s Lincoln Labs in the 1960s, where Wilkes worked, she recalls that most of those the government categorized as “career programmers” were female. It wasn’t high-status work — yet.

    In 1961, Wilkes was assigned to a prominent new project, the creation of the LINC.

    LINC from MIT Lincoln Lab


    Wesley Clark in 1962 at a demonstration of the first Laboratory Instrument Computer, or LINC. Credit MIT Lincoln Laboratory

    As one of the world’s first interactive personal computers, it would be a breakthrough device that could fit in a single office or lab. It would even have its own keyboard and screen, so it could be programmed more quickly, without awkward punch cards or printouts. The designers, who knew they could make the hardware, needed Wilkes to help write the software that would let a user control the computer in real time.

    For two and a half years, she and a team toiled away at flow charts, pondering how the circuitry functioned, how to let people communicate with it. “We worked all these crazy hours; we ate all kinds of terrible food,” she says. There was sexism, yes, especially in the disparity between how men and women were paid and promoted, but Wilkes enjoyed the relative comity that existed among the men and women at Lincoln Labs, the sense of being among intellectual peers. “We were a bunch of nerds,” Wilkes says dryly. “We were a bunch of geeks. We dressed like geeks. I was completely accepted by the men in my group.” When they got an early prototype of the LINC working, it solved a fiendish data-processing problem for a biologist, who was so excited that he danced a happy jig around the machine.

    In late 1964, after Wilkes returned from traveling around the world for a year, she was asked to finish writing the LINC’s operating system. But the lab had been relocated to St. Louis, and she had no desire to move there. Instead, a LINC was shipped to her parents’ house in Baltimore. Looming in the front hall near the foot of the stairs, a tall cabinet of whirring magnetic tapes across from a refrigerator-size box full of circuitry, it was an early glimpse of a sci-fi future: Wilkes was one of the first people on the planet to have a personal computer in her home. (Her father, an Episcopal clergyman, was thrilled. “He bragged about it,” she says. “He would tell anybody who would listen, ‘I bet you don’t have a computer in your living room.’ ”) Before long, LINC users around the world were using her code to program medical analyses and even create a chatbot that interviewed patients about their symptoms.

    But even as Wilkes established herself as a programmer, she still craved a life as a lawyer. “I also really finally got to the point where I said, ‘I don’t think I want to do this for the rest of my life,’ ” she says. Computers were intellectually stimulating but socially isolating. In 1972, she applied and got into Harvard Law School, and after graduating, she spent the next four decades as a lawyer. “I absolutely loved it,” she says.

    Today Wilkes is retired and lives in Cambridge, Mass. White-haired at 81, she still has the precise mannerisms and the ready, beaming smile that can be seen in photos from the ’60s, when she posed, grinning, beside the LINC. She told me that she occasionally gives talks to young students studying computer science. But the industry they’re heading into is, astonishingly, less populated with women — and by many accounts less welcoming to them — than it was in Wilkes’s day. In 1960, when she started working at M.I.T., the proportion of women in computing and mathematical professions (which are grouped together in federal government data) was 27 percent. It reached 35 percent in 1990. But, in the government’s published figures, that was the peak. The numbers fell after that, and by 2013, women were down to 26 percent — below their share in 1960.

    When Wilkes talks to today’s young coders, they are often shocked to learn that women were among the field’s earliest, towering innovators and once a common sight in corporate America. “Their mouths are agape,” Wilkes says. “They have absolutely no idea.”

    Almost 200 years ago, the first person to be what we would now call a coder was, in fact, a woman: Lady Ada Lovelace.

    4
    Ada Lovelace (Augusta Ada Byron), in a rare daguerreotype by Antoine Claudet, 1843 or 1850, probably taken in his studio near Regent’s Park in London.
    Date: 2 January 1843
    Source: https://blogs.bodleian.ox.ac.uk/adalovelace/2015/10/14/only-known-photographs-of-ada-lovelace-in-bodleian-display/ Reproduction courtesy of Geoffrey Bond.
    Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage’s proposed mechanical general-purpose computer, the Analytical Engine [below]. She was the first to recognise that the machine had applications beyond pure calculation, and published the first algorithm intended to be carried out by such a machine. As a result, she is sometimes regarded as the first to recognise the full potential of a “computing machine” and the first computer programmer.

    The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage’s difference engine.

    As a young mathematician in England in 1833, she met Charles Babbage, an inventor who was struggling to design what he called the Analytical Engine, which would be made of metal gears and able to execute if/then commands and store information in memory. Enthralled, Lovelace grasped the enormous potential of a device like this. A computer that could modify its own instructions and memory could be far more than a rote calculator, she realized. To prove it, Lovelace wrote what is often regarded as the first computer program in history, an algorithm with which the Analytical Engine would calculate the Bernoulli sequence of numbers. (She wasn’t shy about her accomplishments: “That brain of mine is something more than merely mortal; as time will show,” she once wrote.) But Babbage never managed to build his computer, and Lovelace, who died of cancer at 36, never saw her code executed.
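
    Her program targeted Babbage’s never-built Analytical Engine, so no modern code reproduces it exactly. Purely as an illustration of the calculation her algorithm specified, the short Python sketch below computes the Bernoulli numbers from the standard recurrence (using the modern convention in which B_1 = -1/2).

    # Illustration only: a modern computation of the Bernoulli numbers, the
    # sequence Lovelace's algorithm targeted. This is not her actual program.
    # Recurrence: B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k, with B_0 = 1.
    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(n):
        B = [Fraction(1)]                 # B_0 = 1
        for m in range(1, n + 1):
            total = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(Fraction(-1, m + 1) * total)
        return B

    print([str(b) for b in bernoulli_numbers(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']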


    When digital computers finally became a practical reality in the 1940s, women were again pioneers in writing software for the machines. At the time, men in the computing industry regarded writing code as a secondary, less interesting task. The real glory lay in making the hardware. Software? “That term hadn’t yet been invented,” says Jennifer S. Light, a professor at M.I.T. who studies the history of science and technology.

    This dynamic was at work in the development of the first programmable digital computer in the United States, the Electronic Numerical Integrator and Computer, or Eniac, during the 1940s.

    3
    Computer operators with an Eniac — the world’s first programmable general-purpose computer. Credit Corbis/Getty Images

    ENIAC programming. Columbia University

    Funded by the military, the thing was a behemoth, weighing more than 30 tons and including 17,468 vacuum tubes. Merely getting it to work was seen as the heroic, manly engineering feat. In contrast, programming it seemed menial, even secretarial. Women had long been employed in the scut work of doing calculations. In the years leading up to the Eniac, many companies bought huge electronic tabulating machines — quite useful for tallying up payroll, say — from companies like IBM; women frequently worked as the punch-card operators for these overgrown calculators. When the time came to hire technicians to write instructions for the Eniac, it made sense, to the men in charge, to pick an all-female team: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas and Ruth Lichterman. The men would figure out what they wanted Eniac to do; the women “programmed” it to execute the instructions.

    “We could diagnose troubles almost down to the individual vacuum tube,” Jennings later told an interviewer for the IEEE Annals of the History of Computing. Jennings, who grew up as the tomboy daughter of low-income parents near a Missouri community of 104 people, studied math at college. “Since we knew both the application and the machine, we learned to diagnose troubles as well as, if not better than, the engineer.”

    The Eniac women were among the first coders to discover that software never works right the first time — and that a programmer’s main work, really, is to find and fix the bugs. Their innovations included some of software’s core concepts. Betty Snyder realized that if you wanted to debug a program that wasn’t running correctly, it would help to have a “break point,” a moment when you could stop a program midway through its run. To this day, break points are a key part of the debugging process.
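
    The break-point idea survives essentially unchanged in modern tools. As a loose illustration (the function and numbers are invented, and Python’s built-in debugger stands in for flipping switches on the Eniac), a programmer today might pause a misbehaving loop like this.

    # Loose modern illustration of Snyder's break-point idea: pause a running
    # program at a chosen spot so its state can be inspected. The function and
    # numbers are invented; breakpoint() drops into Python's pdb debugger.

    def simulated_positions(velocity, steps):
        position = 0
        for step in range(steps):
            position += velocity
            if step == 2:
                breakpoint()   # execution stops here; inspect position and velocity
            velocity -= 1
        return position

    simulated_positions(velocity=5, steps=6)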

    In 1946, Eniac’s creators wanted to show off the computer to a group of leaders in science, technology and the military. They asked Jennings and Snyder to write a program that calculated missile trajectories. After weeks of intense effort, they and their team had a working program, except for one glitch: It was supposed to stop when the missile landed, but for some reason it kept running. The night before the demo, Snyder suddenly intuited the problem. She went to work early the next day, flipped a single switch inside the Eniac and eliminated the bug. “Betty could do more logical reasoning while she was asleep than most people can do awake,” Jennings later said. Nonetheless, the women got little credit for their work. At that first official demonstration to show off Eniac, the male project managers didn’t mention, much less introduce, the women.

    After the war, as coding jobs spread from the military into the private sector, women remained in the coding vanguard, doing some of the highest-profile work.

    3
    Rear Admiral Grace M. Hopper, 1984

    Grace Brewster Murray Hopper (née Murray; December 9, 1906 – January 1, 1992) was an American computer scientist and United States Navy rear admiral. One of the first programmers of the Harvard Mark I computer, she was a pioneer of computer programming who invented one of the first compiler-related tools. She popularized the idea of machine-independent programming languages, which led to the development of COBOL, an early high-level programming language still in use today.

    The pioneering programmer Grace Hopper is frequently credited with creating the first “compiler,” a program that lets users create programming languages that more closely resemble regular written words: A coder could thus write the English-like code, and the compiler would do the hard work of turning it into ones and zeros for the computer. Hopper also developed the “Flowmatic” language for nontechnical businesspeople. Later, she advised the team that created the Cobol language, which became widely used by corporations. Another programmer from the team, Jean E. Sammet, continued to be influential in the language’s development for decades. Fran Allen was so expert in optimizing Fortran, a popular language for performing scientific calculations, that she became the first female IBM fellow.
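
    As a rough, invented sketch of what such a compiler does (this is not FLOW-MATIC or COBOL syntax, just an illustration of the translation step the paragraph describes), an English-like statement might be turned into low-level operations like so.

    # Invented toy sketch of the compiler idea described above: translate an
    # English-like statement into low-level operations. Not real FLOW-MATIC or COBOL.

    def compile_statement(line):
        words = line.split()
        if words[0] == "ADD" and words[2] == "TO":     # e.g. "ADD PRICE TO TOTAL"
            return [("LOAD", words[3]), ("ADD", words[1]), ("STORE", words[3])]
        if words[0] == "MOVE" and words[2] == "TO":    # e.g. "MOVE ZERO TO TOTAL"
            return [("LOAD", words[1]), ("STORE", words[3])]
        raise ValueError("unrecognized statement: " + line)

    for operation in compile_statement("ADD PRICE TO TOTAL"):
        print(operation)
    # ('LOAD', 'TOTAL')
    # ('ADD', 'PRICE')
    # ('STORE', 'TOTAL')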

    NERSC Hopper Cray XE6 supercomputer

    When the number of coding jobs exploded in the ’50s and ’60s as companies began relying on software to process payrolls and crunch data, men had no special advantage in being hired. As Wilkes had discovered, employers simply looked for candidates who were logical, good at math and meticulous. And in this respect, gender stereotypes worked in women’s favor: Some executives argued that women’s traditional expertise at painstaking activities like knitting and weaving manifested precisely this mind-set. (The 1968 book Your Career in Computers stated that people who like “cooking from a cookbook” make good programmers.)

    The field rewarded aptitude: Applicants were often given a test (typically one involving pattern recognition), hired if they passed it and trained on the job, a process that made the field especially receptive to neophytes. “Know Nothing About Computers? Then We’ll Teach You (and Pay You While Doing So),” one British ad promised in 1965. In a 1957 recruiting pitch in the United States, IBM’s brochure titled My Fair Ladies specifically encouraged women to apply for coding jobs.

    Such was the hunger for programming talent that a young black woman named Arlene Gwendolyn Lee [no photo available] could become one of the early female programmers in Canada, despite the open discrimination of the time. Lee was half of a biracial couple to whom no one would rent, so she needed money to buy a house. According to her son, who has described his mother’s experience in a blog post, Lee showed up at a firm after seeing its ad for data processing and systems analytics jobs in a Toronto newspaper sometime in the early 1960s. Lee persuaded the employers, who were all white, to let her take the coding aptitude test. When she placed in the 99th percentile, the supervisors grilled her with questions before hiring her. “I had it easy,” she later told her son. “The computer didn’t care that I was a woman or that I was black. Most women had it much harder.”

    Elsie Shutt learned to code during her college summers while working for the military at the Aberdeen Proving Ground, an Army facility in Maryland.

    8
    Elsie Shutt founded one of the first software businesses in the U.S. in 1958

    In 1953, while taking time off from graduate school, she was hired to code for Raytheon, where the programmer work force “was about 50 percent men and 50 percent women,” she told Janet Abbate, a Virginia Tech historian and author of the 2012 book Recoding Gender. “And it really amazed me that these men were programmers, because I thought it was women’s work!”

    When Shutt had a child in 1957, state law required her to leave her job; the ’50s and ’60s may have been welcoming to full-time female coders, but firms were unwilling to offer part-time work, even to superb coders. So Shutt founded Computations Inc., a consultancy that produced code for corporations. She hired stay-at-home mothers as part-time employees; if they didn’t already know how to code, she trained them. They cared for their kids during the day, then coded at night, renting time on local computers. “What it turned into was a feeling of mission,” Shutt told Abbate, “in providing work for women who were talented and did good work and couldn’t get part-time jobs.” Business Week called the Computations work force the “pregnant programmers” in a 1963 article illustrated with a picture of a baby in a bassinet in a home hallway, with the mother in the background, hard at work writing software. (The article’s title: Mixing Math and Motherhood.)

    By 1967, there were so many female programmers that Cosmopolitan magazine published an article about The Computer Girls, accompanied by pictures of beehived women at work on computers that evoked the control deck of the U.S.S. Enterprise. The story noted that women could make $20,000 a year doing this work (or more than $150,000 in today’s money). It was the rare white-collar occupation in which women could thrive. Nearly every other highly trained professional field admitted few women; even women with math degrees had limited options: teaching high school math or doing rote calculations at insurance firms.

    “Women back then would basically go, ‘Well, if I don’t do programming, what else will I do?’ ” Janet Abbate says. “The situation was very grim for women’s opportunities.”

    If we want to pinpoint a moment when women began to be forced out of programming, we can look at one year: 1984. A decade earlier, a study revealed that the numbers of men and women who expressed an interest in coding as a career were equal. Men were more likely to enroll in computer-science programs, but women’s participation rose steadily and rapidly through the late ’70s until, by the 1983-84 academic year, 37.1 percent of all students graduating with degrees in computer and information sciences were women. In only one decade, their participation rate more than doubled.

    But then things went into reverse. From 1984 onward, the percentage dropped; by the time 2010 rolled around, it had been cut in half. Only 17.6 percent of the students graduating from computer-science and information-science programs were women.

    One reason for this vertiginous decline has to do with a change in how and when kids learned to program. The advent of personal computers in the late ’70s and early ’80s remade the pool of students who pursued computer-science degrees. Before then, pretty much every student who showed up at college had never touched a computer or even been in the room with one. Computers were rare and expensive devices, available for the most part only in research labs or corporate settings. Nearly all students were on equal footing, in other words, and new to programming.

    Once the first generation of personal computers, like the Commodore 64 or the TRS-80, found their way into homes, teenagers were able to play around with them, slowly learning the major concepts of programming in their spare time.

    9
    Commodore 64

    10
    Radio Shack Tandy TRS80

    By the mid-’80s, some college freshmen were showing up for their first class already proficient as programmers. They were remarkably well prepared for and perhaps even a little jaded about what Computer Science 101 might bring. As it turned out, these students were mostly men, as two academics discovered when they looked into the reasons women’s enrollment was so low.

    5
    Keypunch operators at IBM in Stockholm in the 1930s. Credit IBM

    One researcher was Allan Fisher, then the associate dean of the computer-science school at Carnegie Mellon University. The school established an undergraduate program in computer science in 1988, and after a few years of operation, Fisher noticed that the proportion of women in the major was consistently below 10 percent. In 1994, he hired Jane Margolis, a social scientist who is now a senior researcher in the U.C.L.A. School of Education and Information Studies, to figure out why. Over four years, from 1995 to 1999, she and her colleagues interviewed and tracked roughly 100 undergraduates, male and female, in Carnegie Mellon’s computer-science department; she and Fisher later published the findings in their 2002 book “Unlocking the Clubhouse: Women in Computing.”

    What Margolis discovered was that the first-year students arriving at Carnegie Mellon with substantial experience were almost all male. They had received much more exposure to computers than girls had; for example, boys were more than twice as likely to have been given one as a gift by their parents. And if parents bought a computer for the family, they most often put it in a son’s room, not a daughter’s. Sons also tended to have what amounted to an “internship” relationship with fathers, working through Basic-language manuals with them, receiving encouragement from them; the same wasn’t true for daughters. “That was a very important part of our findings,” Margolis says. Nearly every female student in computer science at Carnegie Mellon told Margolis that her father had worked with her brother — “and they had to fight their way through to get some attention.”

    Their mothers were typically less engaged with computers in the home, they told her. Girls, even the nerdy ones, picked up these cues and seemed to dial back their enthusiasm accordingly. These were pretty familiar roles for boys and girls, historically: Boys were cheered on for playing with construction sets and electronics kits, while girls were steered toward dolls and toy kitchens. It wasn’t terribly surprising to Margolis that a new technology would follow the same pattern as it became widely accepted.

    At school, girls got much the same message: Computers were for boys. Geeky boys who formed computer clubs, at least in part to escape the torments of jock culture, often wound up, whether intentionally or not, reproducing the same exclusionary behavior. (These groups snubbed not only girls but also black and Latino boys.) Such male cliques created “a kind of peer support network,” in Fisher’s words.

    This helped explain why Carnegie Mellon’s first-year classes were starkly divided between the sizable number of men who were already confident in basic programming concepts and the women who were frequently complete neophytes. A cultural schism had emerged. The women started doubting their ability. How would they ever catch up?

    What Margolis heard from students — and from faculty members, too — was that there was a sense in the classroom that if you hadn’t already been coding obsessively for years, you didn’t belong. The “real programmer” was the one who “had a computer-screen tan from being in front of the monitor all the time,” as Margolis puts it. “The idea was, you just have to love being with a computer all the time, and if you don’t do it 24/7, you’re not a ‘real’ programmer.” The truth is, many of the men themselves didn’t fit this monomaniacal stereotype. But there was a double standard: While it was O.K. for the men to want to engage in various other pursuits, women who expressed the same wish felt judged for not being “hard core” enough. By the second year, many of these women, besieged by doubts, began dropping out of the program. (The same was true for the few black and Latino students who also arrived on campus without teenage programming experience.)

    A similar pattern took hold at many other campuses. Patricia Ordóñez, a first-year student at Johns Hopkins University in 1985, enrolled in an Introduction to Minicomputers course. She had been a math whiz in high school but had little experience in coding; when she raised her hand in class at college to ask a question, many of the other students who had spent their teenage years programming — and the professor — made her feel singled out. “I remember one day he looked at me and said, ‘You should already know this by now,’ ” she told me. “I thought, I’m never going to succeed.” She switched majors as a result.

    Yet a student’s decision to stick with or quit the subject did not seem to be correlated with coding talent. Many of the women who dropped out were getting perfectly good grades, Margolis learned. Indeed, some who left had been top students. And the women who did persist and made it to the third year of their program had by then generally caught up to the teenage obsessives. The degree’s coursework was, in other words, a leveling force. Learning Basic as a teenage hobby might lead to lots of fun and useful skills, but the pace of learning at college was so much more intense that by the end of the degree, everyone eventually wound up graduating at roughly the same levels of programming mastery.

    5
    An E.R.A./Univac 1103 computer in the 1950s. Credit: Hum Images/Alamy

    “It turned out that having prior experience is not a great predictor, even of academic success,” Fisher says. Ordóñez’s later experience illustrates exactly this: After changing majors at Johns Hopkins, she later took night classes in coding and eventually got a Ph.D. in computer science in her 30s; today, she’s a professor at the University of Puerto Rico Río Piedras, specializing in data science.

    By the ’80s, the early pioneering work done by female programmers had mostly been forgotten. In contrast, Hollywood was putting out precisely the opposite image: Computers were a male domain. In hit movies like Revenge of the Nerds, Weird Science, Tron and WarGames, the computer nerds were nearly always young white men. Video games, a significant gateway activity that led to an interest in computers, were pitched far more often at boys, as research in 1985 by Sara Kiesler [Psychology of Women Quarterly], a professor at Carnegie Mellon, found. “In the culture, it became something that guys do and are good at,” says Kiesler, who is also a program manager at the National Science Foundation. “There were all kinds of things signaling that if you don’t have the right genes, you’re not welcome.”

    A 1983 study involving M.I.T. students produced equally bleak accounts. Women who raised their hands in class were often ignored by professors and talked over by other students. They would be told they weren’t aggressive enough; if they challenged other students or contradicted them, they heard comments like “You sure are bitchy today — must be your period.” Behavior in some research groups “sometimes approximates that of the locker room,” the report concluded, with men openly rating how “cute” their female students were. (“Gee, I don’t think it’s fair that the only two girls in the group are in the same office,” one said. “We should share.”) Male students mused about women’s mediocrity: “I really don’t think the woman students around here are as good as the men,” one said.

    By then, as programming enjoyed its first burst of cultural attention, so many students were racing to enroll in computer science that universities ran into a supply problem: They didn’t have enough professors to teach everyone. Some added hurdles: courses that students had to pass before they could be accepted into the computer-science major. Punishing workloads and classes that covered the material at a lightning pace weeded out those who didn’t get it immediately. All this fostered an environment in which the students most likely to get through were those who had already been exposed to coding — young men, mostly. “Every time the field has instituted these filters on the front end, that’s had the effect of reducing the participation of women in particular,” says Eric S. Roberts, a longtime professor of computer science, now at Reed College, who first studied this problem and called it the “capacity crisis.”

    When computer-science programs began to expand again in the mid-’90s, coding’s culture was set. Most of the incoming students were men. The interest among women never recovered to the levels reached in the late ’70s and early ’80s. And the women who did show up were often isolated. In a room of 20 students, perhaps five or even fewer might be women.

    In 1991, Ellen Spertus, now a computer scientist at Mills College, published a report on women’s experiences in programming classes. She cataloged a landscape populated by men who snickered about the presumed inferiority of women and by professors who told female students that they were “far too pretty” to be studying electrical engineering; when some men at Carnegie Mellon were asked to stop using pictures of naked women as desktop wallpaper on their computers, they angrily complained that it was censorship of the sort practiced by “the Nazis or the Ayatollah Khomeini.”

    As programming was shutting its doors to women in academia, a similar transformation was taking place in corporate America. The emergence of what would be called “culture fit” was changing the who, and the why, of the hiring process. Managers began picking coders less on the basis of aptitude and more on how well they fit a personality type: the acerbic, aloof male nerd.

    The shift actually began far earlier, back in the late ’60s, when managers recognized that male coders shared a growing tendency to be antisocial isolates, lording their arcane technical expertise over their bosses. Programmers were “often egocentric, slightly neurotic,” as Richard Brandon, a well-known computer-industry analyst, put it in an address at a 1968 conference, adding that “the incidence of beards, sandals and other symptoms of rugged individualism or nonconformity are notably greater among this demographic.”

    In addition to testing for logical thinking, as in Mary Allen Wilkes’s day, companies began using personality tests to select specifically for these sorts of caustic loner qualities. “These became very powerful narratives,” says Nathan Ensmenger, a professor of informatics at Indiana University, who has studied [Gender and Computing] this transition. The hunt for that personality type cut women out. Managers might shrug and accept a man who was unkempt, unshaven and surly, but they wouldn’t tolerate a woman who behaved the same way. Coding increasingly required late nights, but managers claimed that it was too unsafe to have women working into the wee hours, so they forbade them from staying late with the men.

    At the same time, the old hierarchy of hardware and software became inverted. Software was becoming a critical, and lucrative, sector of corporate America. Employers increasingly hired programmers whom they could envision one day ascending to key managerial roles in programming. And few companies were willing to put a woman in charge of men. “They wanted people who were more aligned with management,” says Marie Hicks, a historian at the Illinois Institute of Technology. “One of the big takeaways is that technical skill does not equate to success.”

    By the 1990s and 2000s, the pursuit of “culture fit” was in full force, particularly at start-ups, which involve a relatively small number of people typically confined to tight quarters for long hours. Founders looked to hire people who were socially and culturally similar to them.

    “It’s all this loosey-goosey ‘culture’ thing,” says Sue Gardner, former head of the Wikimedia Foundation, the nonprofit that hosts Wikipedia and other sites. After her stint there, Gardner decided to study why so few women were employed as coders. In 2014, she surveyed more than 1,400 women in the field and conducted sit-down interviews with scores more. It became clear to her that the occupation’s takeover by men in the ’90s had turned into a self-perpetuating cycle. Because almost everyone in charge was a white or Asian man, that was the model for whom to hire; managers recognized talent only when it walked and talked as they did. For example, many companies have relied on whiteboard challenges when hiring a coder — a prospective employee is asked to write code, often a sorting algorithm, on a whiteboard while the employers watch. This sort of thing bears almost no resemblance to the work coders actually do in their jobs. But whiteboard questions resemble classroom work at Ivy League institutions. It feels familiar to the men doing the hiring, many of whom are only a few years out of college. “What I came to realize,” Gardner says, “is that it’s not that women are excluded. It’s that practically everyone is excluded if you’re not a young white or Asian man who’s single.”

    One coder, Stephanie Hurlburt, was a stereotypical math nerd who had deep experience working on graphics software. “I love C++, the low-level stuff,” she told me, referring to a complex language known for allowing programmers to write very fast-running code, useful in graphics. Hurlburt worked for a series of firms this decade, including Unity (which makes popular software for designing games), and then for Facebook on its Oculus Rift VR headset, grinding away for long hours in the run-up to the release of its first demo. Hurlburt became accustomed to shrugging off negative attention and crude sexism. She heard, including from many authority figures she admired, that women weren’t wired for math. While working as a coder, if she expressed ignorance of any concept, no matter how trivial, male colleagues would disparage her. “I thought you were at a higher math level,” one sniffed.

    In 2016, Hurlburt and a friend, Rich Geldreich, founded a start-up called Binomial, where they created software that helps compress the size of “textures” in graphics-heavy software. Being self-employed, she figured, would mean not having to deal with belittling bosses. But when she and Geldreich went to sell their product, some customers assumed that she was just the marketing person. “I don’t know how you got this product off the ground when you only have one programmer!” she recalls one client telling Geldreich.

    In 2014, Kieran Snyder, a tech entrepreneur and former academic, informally analyzed 248 corporate performance reviews for tech engineers and found that women were considerably more likely than men to receive reviews with negative feedback; men were far more likely to get reviews that had only constructive feedback, with no negative material. In a 2016 experiment conducted by the tech recruiting firm Speak With a Geek, 5,000 résumés with identical information were submitted to firms. When identifying details were removed from the résumés, 54 percent of the women received interview offers; when gendered names and other biographical information were given, only 5 percent of them did.

    Lurking beneath some of this sexist atmosphere is the phantasm of sociobiology. As this line of thinking goes, women are less suited to coding than men because biology better endows men with the qualities necessary to excel at programming. Many women who work in software face this line of reasoning all the time. Cate Huston, a software engineer at Google from 2011 to 2014, heard it from colleagues there when they pondered why such a low percentage of the company’s programmers were women. Peers would argue that Google hired only the best — that if women weren’t being hired, it was because they didn’t have enough innate logic or grit, she recalls.

    In the summer of 2017, a Google employee named James Damore suggested in an internal email that several qualities more commonly found in women — including higher rates of anxiety — explained why they weren’t thriving in a competitive world of coding; he cited the cognitive neuroscientist Simon Baron-Cohen, who theorizes that the male brain is more likely to be “systemizing,” compared with women’s “empathizing” brains. Google fired Damore, saying it could not employ someone who would argue that his female colleagues were inherently unsuited to the job. But on Google’s internal boards, other male employees backed up Damore, agreeing with his analysis. The assumption that the makeup of the coding work force reflects a pure meritocracy runs deep among many Silicon Valley men; for them, sociobiology offers a way to explain things, particularly for the type who prefers to believe that sexism in the workplace is not a big deal, or even doubts it really exists.

    But if biology were the reason so few women are in coding, it would be impossible to explain why women were so prominent in the early years of American programming, when the work could be, if anything, far harder than today’s programming. It was an uncharted new field, in which you had to do math in binary and hexadecimal formats, and there were no helpful internet forums, no Google to query, for assistance with your bug. It was just your brain in a jar, solving hellish problems.

    If biology limited women’s ability to code, then the ratio of women to men in programming ought to be similar in other countries. It isn’t. In India, roughly 40 percent of the students studying computer science and related fields are women. This is despite even greater barriers to becoming a female coder there; India has such rigid gender roles that female college students often have an 8 p.m. curfew, meaning they can’t work late in the computer lab, as the social scientist Roli Varma learned when she studied them in 2015. The Indian women had one big cultural advantage over their American peers, though: They were far more likely to be encouraged by their parents to go into the field, Varma says. What’s more, the women regarded coding as a safer job because it kept them indoors, lessening their exposure to street-level sexual harassment. It was, in other words, considered normal in India that women would code. The picture has been similar in Malaysia, where in 2001 — precisely when the share of American women in computer science had slid into a trough — women represented 52 percent of the undergraduate computer-science majors and 39 percent of the Ph.D. candidates at the University of Malaya in Kuala Lumpur.

    Today, when midcareer women decide that Silicon Valley’s culture is unlikely to change, many simply leave the industry. When Sue Gardner surveyed those 1,400 women in 2014, they told her the same story: In the early years, as junior coders, they looked past the ambient sexism they encountered. They loved programming and were ambitious and excited by their jobs. But over time, Gardner says, “they get ground down.” As they rose in the ranks, they found few, if any, mentors. Nearly two-thirds either experienced or witnessed harassment, she read in “The Athena Factor” (a 2008 study of women in tech); in Gardner’s survey, one-third reported that their managers were more friendly toward and gave more support to their male co-workers. It’s often assumed that having children is the moment when women are sidelined in tech careers, as in many others, but Gardner discovered that wasn’t often the breaking point for these women. They grew discouraged seeing men with no better or even lesser qualifications get superior opportunities and treatment.

    “What surprised me was that they felt, ‘I did all that work!’ They were angry,” Gardner says. “It wasn’t like they needed a helping hand or needed a little extra coaching. They were mad. They were not leaving because they couldn’t hack it. They were leaving because they were skilled professionals who had skills that were broadly in demand in the marketplace, and they had other options. So they’re like, ‘[expletive] it — I’ll go somewhere where I’m seen as valuable.’ ”

    The result is an industry that is drastically more male than it was decades ago, and far more so than the workplace at large. In 2018, according to data from the Bureau of Labor Statistics, about 26 percent of the workers in “computer and mathematical occupations” were women. The percentages for people of color are similarly low: Black employees were 8.4 percent, Latinos 7.5 percent. (The Census Bureau’s American Community Survey put black coders at only 4.7 percent in 2016.) In the more rarefied world of the top Silicon Valley tech firms, the numbers are even more austere: A 2017 analysis by Recode, a news site that covers the technology industry, revealed that 20 percent of Google’s technical employees were women, while only 1 percent were black and 3 percent were Hispanic. Facebook was nearly identical; the numbers at Twitter were 15 percent, 2 percent and 4 percent, respectively.

    The reversal has been profound. In the early days of coding, women flocked to programming because it offered more opportunity and reward for merit than fields like law did. Now it is software that has closed its doors.

    In the late 1990s, Allan Fisher decided that Carnegie Mellon would try to address the male-female imbalance in its computer-science program. Prompted by Jane Margolis’s findings, Fisher and his colleagues instituted several changes. One was the creation of classes that grouped students by experience: The kids who had been coding since youth would start on one track; the newcomers to coding would have a slightly different curriculum, allowing them more time to catch up. Carnegie Mellon also offered extra tutoring to all students, which was particularly useful for the novice coders. If Fisher could get them to stay through the first and second years, he knew, they would catch up to their peers.

    5
    Components from four of the earliest electronic computers, held by Patsy Boyce Simmers, Gail Taylor, Millie Beck and Norma Stec, employees at the United States Army’s Ballistics Research Laboratory. Credit: Science Source

    They also modified the courses in order to show how code has impacts in the real world, so a new student’s view of programming wouldn’t just be an endless vista of algorithms disconnected from any practical use. Fisher wanted students to glimpse, earlier on, what it was like to make software that works its way into people’s lives. Back in the ’90s, before social media and even before the internet had gone mainstream, the influence that code could have on daily life wasn’t so easy to see.

    Faculty members, too, adopted a different perspective. For years some had tacitly endorsed the idea that the students who came in already knowing code were born to it. Carnegie Mellon “rewarded the obsessive hacker,” Fisher told me. But the faculty now knew that their assumptions weren’t true; they had been confusing previous experience with raw aptitude. They still wanted to encourage those obsessive teenage coders, but they had come to understand that the neophytes were just as likely to bloom rapidly into remarkable talents and deserved as much support. “We had to broaden how faculty sees what a successful student looks like,” he says. The admissions process was adjusted, too; it no longer gave as much preference to students who had been teenage coders.

    No single policy changed things. “There’s really a virtuous cycle,” Fisher says. “If you make the program accommodate people with less experience, then people with less experience come in.” Faculty members became more used to seeing how green coders evolve into accomplished ones, and they learned how to teach that type.

    Carnegie Mellon’s efforts were remarkably successful. Only a few years after these changes, the percentage of women entering its computer-science program boomed, rising to 42 percent from 7 percent; graduation rates for women rose to nearly match those of the men. The school vaulted over the national average. Other schools concerned about the low number of female students began using approaches similar to Fisher’s. In 2006, Harvey Mudd College tinkered with its Introduction to Computer Science course, creating a track specifically for novices, and rebranded it as Creative Problem Solving in Science and Engineering Using Computational Approaches — which, the institution’s president, Maria Klawe, told me, “is actually a better description of what you’re actually doing when you’re coding.” By 2018, 54 percent of Harvey Mudd’s graduates who majored in computer science were women.

    A broader cultural shift has accompanied the schools’ efforts. In the last few years, women’s interest in coding has begun rapidly rising throughout the United States. In 2012, the percentage of female undergraduates who plan to major in computer science began to rise at rates not seen for 35 years [Computing Research News], since the decline in the mid-’80s, according to research by Linda Sax, an education professor at U.C.L.A. There has also been a boomlet of groups and organizations training and encouraging underrepresented cohorts to enter the field, like Black Girls Code and Code Newbie. Coding has come to be seen, in purely economic terms, as a bastion of well-paying and engaging work.

    In an age when Instagram and Snapchat and iPhones are part of the warp and weft of life’s daily fabric, potential coders worry less that the job will be isolated, antisocial and distant from reality. “Women who see themselves as creative or artistic are more likely to pursue computer science today than in the past,” says Sax, who has pored over decades of demographic data about the students in STEM fields. They’re still less likely to go into coding than other fields, but programming is increasingly on their horizon. This shift is abetted by the fact that it’s much easier to learn programming without getting a full degree, through free online coding schools, relatively inexpensive “boot camps” or even meetup groups for newcomers — opportunities that have emerged only in the last decade.

    Changing the culture at schools is one thing. Most female veterans of code I’ve spoken to say that what is harder is shifting the culture of the industry at large, particularly the reflexive sexism and racism still deeply ingrained in Silicon Valley. Sue Gardner, for one, sometimes wonders whether it’s even ethical for her to encourage young women to go into tech. She fears they’ll pour out of computer-science programs in increasing numbers, arrive at their first coding job excited, thrive early on, but then gradually get beaten down by the industry. “The truth is, we can attract more and different people into the field, but they’re just going to hit that wall in midcareer, unless we change how things happen higher up,” she says.

    On a spring weekend in 2017, more than 700 coders and designers were given 24 hours to dream up and create a new product at a hackathon in New York hosted by TechCrunch, a news site devoted to technology and Silicon Valley. At lunchtime on Sunday, the teams presented their creations to a panel of industry judges, in a blizzard of frantic elevator pitches. There was Instagrammie, a robot system that would automatically recognize the mood of an elderly relative or a person with limited mobility; there was Waste Not, an app to reduce food waste. Most of the contestants were coders who worked at local high-tech firms or computer-science students at nearby universities.

    6
    Despite women’s historical role in the vanguard of computer programming, some female veterans of code wonder if it’s even ethical to encourage young women to go into tech because of the reflexive sexism in the current culture of Silicon Valley. Credit: Apic/Getty Images

    The winning team, though, was a trio of high school girls from New Jersey: Sowmya Patapati, Akshaya Dinesh and Amulya Balakrishnan. In only 24 hours, they created reVIVE, a virtual-reality app that tests children for signs of A.D.H.D. After the students were handed their winnings onstage — a trophy-size check for $5,000 — they flopped into chairs in a nearby room to recuperate. They had been coding almost nonstop since noon the day before and were bleary with exhaustion.

    “Lots of caffeine,” Balakrishnan, 17, said, laughing. She wore a blue T-shirt that read WHO HACK THE WORLD? GIRLS. The girls told me that they had impressed even themselves by how much they accomplished in 24 hours. “Our app really does streamline the process of detecting A.D.H.D.,” said Dinesh, who was also 17. “It usually takes six to nine months to diagnose, and thousands of dollars! We could do it digitally in a much faster way!”

    They all became interested in coding in high school, each of them with strong encouragement from immigrant parents. Balakrishnan’s parents worked in software and medicine; Dinesh’s parents came to the United States from India in 2000 and worked in information technology. Patapati immigrated from India as an infant with her young mother, who never went to college, and her father, an information-tech worker who was the first in his rural family to go to college.

    Drawn to coding in high school, the young hackers got used to being the lone girl nerds at school, as Dinesh told me.

    “I tried so hard to get other girls interested in computer science, and it was like, the interest levels were just so low,” she says. “When I walked into my first hackathon, it was the most intimidating thing ever. I looked at a room of 80 kids: Five were girls, and I was probably the youngest person there.” But she kept at it, competing in 25 more hackathons, and her confidence grew. To break the isolation and meet more girls in coding, she attended events by organizations like #BuiltByGirls, which is where, a few days previously, she had met Patapati and Balakrishnan and where they decided to team up. To attend TechCrunch, Patapati, who was 16, and Balakrishnan skipped a junior prom and a friend’s birthday party. “Who needs a party when you can go to a hackathon?” Patapati said.

    Winning TechCrunch as a group of young women of color brought extra attention, not all of it positive. “I’ve gotten a lot of comments like: ‘Oh, you won the hackathon because you’re a girl! You’re a diversity pick,’” Balakrishnan said. After the prize was announced online, she recalled later, “there were quite a few engineers who commented, ‘Oh, it was a girl pick; obviously that’s why they won.’ ”

    Nearly two years later, Balakrishnan was taking a gap year to create a heart-monitoring product she invented, and she was in the running for $100,000 to develop it. She was applying to college to study computer science and, in her spare time, competing in a beauty pageant, inspired by Miss USA 2017, Kara McCullough, who was a nuclear scientist. “I realized that I could use pageantry as a platform to show more girls that they could embrace their femininity and be involved in a very technical, male-dominated field,” she says. Dinesh, in her final year at high school, had started an all-female hackathon that now takes place annually in New York. (“The vibe was definitely very different,” she says, more focused on training newcomers.)

    Patapati and Dinesh enrolled at Stanford last fall to study computer science; both are deeply interested in A.I. They’ve noticed the subtle tensions for women in the coding classes. Patapati, who founded a Women in A.I. group with an Apple tech lead, has watched as male colleagues ignore her raised hand in group discussions or repeat something she just said as if it were their idea. “I think sometimes it’s just a bias that people don’t even recognize that they have,” she says. “That’s been really upsetting.”

    Dinesh says “there’s absolutely a difference in confidence levels” between the male and female newcomers. The Stanford curriculum is so intense that even the relative veterans like her are scrambling: When we spoke recently, she had just spent “three all-nighters in a row” on a single project, for which students had to engineer a “print” command from scratch. At 18, she has few illusions about the road ahead. When she went to a blockchain conference, it was a sea of “middle-aged white and Asian men,” she says. “I’m never going to one again,” she adds with a laugh.

    “My dream is to work on autonomous driving at Tesla or Waymo or some company like that. Or if I see that there’s something missing, maybe I’ll start my own company.” She has begun moving in that direction already, having met one venture capitalist via #BuiltByGirls. “So now I know I can start reaching out to her, and I can start reaching out to other people that she might know,” she says.

    Will she look around, 20 years from now, to see that software has returned to its roots, with women everywhere? “I’m not really sure what will happen,” she admits. “But I do think it is absolutely on the upward climb.”

    Correction: Feb. 14, 2019
    An earlier version of this article misidentified the institution Ellen Spertus was affiliated with when she published a 1991 report on women’s experiences in programming classes. Spertus was at M.I.T. when she published the report, not Mills College, where she is currently a professor.

    Correction: Feb. 14, 2019
    An earlier version of this article misstated Akshaya Dinesh’s current age. She is 18, not 19.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:58 am on January 30, 2019 Permalink | Reply
    Tags: , Algorithm could help autonomous underwater vehicles explore risky but scientifically-rewarding environments, , , Computer Science, Engineers program marine robots to take calculated risks, ,   

    From MIT News: “Engineers program marine robots to take calculated risks” 

    MIT News
    MIT Widget

    From MIT News

    January 30, 2019
    Jennifer Chu

    1
    MIT engineers have now developed an algorithm that lets autonomous underwater vehicles weigh the risks and potential rewards of exploring an unknown region.
    Image: stock image.

    Algorithm could help autonomous underwater vehicles explore risky but scientifically-rewarding environments.

    We know far less about the Earth’s oceans than we do about the surface of the moon or Mars. The sea floor is carved with expansive canyons, towering seamounts, deep trenches, and sheer cliffs, most of which are considered too dangerous or inaccessible for autonomous underwater vehicles (AUVs) to navigate.

    But what if the reward for traversing such places was worth the risk?

    MIT engineers have now developed an algorithm that lets AUVs weigh the risks and potential rewards of exploring an unknown region. For instance, if a vehicle tasked with identifying underwater oil seeps approached a steep, rocky trench, the algorithm could assess the reward level (the probability that an oil seep exists near this trench), and the risk level (the probability of colliding with an obstacle), if it were to take a path through the trench.

    “If we were very conservative with our expensive vehicle, saying its survivability was paramount above all, then we wouldn’t find anything of interest,” says Benjamin Ayton, a graduate student in MIT’s Department of Aeronautics and Astronautics. “But if we understand there’s a tradeoff between the reward of what you gather, and the risk or threat of going toward these dangerous geographies, we can take certain risks when it’s worthwhile.”

    Ayton says the new algorithm can compute tradeoffs of risk versus reward in real time, as a vehicle decides where to explore next. He and his colleagues in the lab of Brian Williams, professor of aeronautics and astronautics, are implementing this algorithm and others on AUVs, with the vision of deploying fleets of bold, intelligent robotic explorers for a number of missions, including looking for offshore oil deposits, investigating the impact of climate change on coral reefs, and exploring extreme environments analogous to Europa, an ice-covered moon of Jupiter that the team hopes vehicles will one day traverse.

    “If we went to Europa and had a very strong reason to believe that there might be a billion-dollar observation in a cave or crevasse, which would justify sending a spacecraft to Europa, then we would absolutely want to risk going in that cave,” Ayton says. “But algorithms that don’t consider risk are never going to find that potentially history-changing observation.”

    Ayton and Williams, along with Richard Camilli of the Woods Hole Oceanographic Institution, will present their new algorithm at the Association for the Advancement of Artificial Intelligence conference this week in Honolulu.

    A bold path

    The team’s new algorithm is the first to enable “risk-bounded adaptive sampling.” An adaptive sampling mission is designed, for instance, to automatically adapt an AUV’s path, based on new measurements that the vehicle takes as it explores a given region. Most adaptive sampling missions that consider risk typically do so by finding paths with a concrete, acceptable level of risk. For instance, AUVs may be programmed to only chart paths with a chance of collision that doesn’t exceed 5 percent.
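
    To make that “risk-bounded” idea concrete, here is a minimal sketch, in Python, of one way a planner could enforce such a bound: estimate the collision probability of every candidate path and discard any path that exceeds the acceptable risk. The function names, the per-waypoint risk values and the assumption that waypoint risks are independent are illustrative simplifications, not the team’s actual formulation.

    def path_collision_probability(waypoint_risks):
        # Probability of at least one collision along the path,
        # assuming independent per-waypoint collision probabilities.
        p_safe = 1.0
        for p_collide in waypoint_risks:
            p_safe *= 1.0 - p_collide
        return 1.0 - p_safe

    def risk_bounded_paths(candidate_paths, risk_bound=0.05):
        # Keep only the paths whose collision probability stays within
        # the mission's acceptable risk (here, 5 percent).
        return [path for path in candidate_paths
                if path_collision_probability(path["waypoint_risks"]) <= risk_bound]

    # Hypothetical example: a safe route along a ridge versus a risky chasm.
    paths = [
        {"name": "along_ridge", "waypoint_risks": [0.01, 0.01, 0.02]},
        {"name": "through_chasm", "waypoint_risks": [0.03, 0.05, 0.04]},
    ]
    print([p["name"] for p in risk_bounded_paths(paths)])  # ['along_ridge']

    A planner that stops here never enters the chasm, however promising it might be, which is exactly the limitation the researchers set out to address.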

    But the researchers found that accounting for risk alone could severely limit a mission’s potential rewards.

    “Before we go into a mission, we want to specify the risk we’re willing to take for a certain level of reward,” Ayton says. “For instance, if a path were to take us to more hydrothermal vents, we would be willing to take this amount of risk, but if we’re not going to see anything, we would be willing to take less risk.”

    The team’s algorithm takes in bathymetric data, or information about the ocean topography, including any surrounding obstacles, along with the vehicle’s dynamics and inertial measurements, to compute the level of risk for a certain proposed path. The algorithm also takes in all previous measurements that the AUV has taken, to compute the probability that such high-reward measurements may exist along the proposed path.

    If the risk-to-reward ratio meets a certain value, determined by scientists beforehand, then the AUV goes ahead with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
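
    Continuing the toy Python sketch from above, the accept-or-reject step might look like the following. Here the expected reward of a path is estimated from prior measurements, its risk from a hazard map derived from the bathymetric data, and the path is accepted only if the risk paid per unit of expected reward stays under a threshold the scientists choose before the mission. The reward and hazard models are placeholders, not the published algorithm.

    def expected_reward(path, reward_map):
        # Sum of the estimated probabilities that a high-reward feature
        # (say, an oil seep or a hydrothermal vent) lies near each waypoint,
        # updated from the measurements the AUV has already taken.
        return sum(reward_map.get(waypoint, 0.0) for waypoint in path)

    def path_risk(path, hazard_map):
        # Collision probability along the path, from per-cell hazard
        # estimates built from bathymetry and the vehicle's dynamics.
        p_safe = 1.0
        for waypoint in path:
            p_safe *= 1.0 - hazard_map.get(waypoint, 0.0)
        return 1.0 - p_safe

    def accept_path(path, reward_map, hazard_map, max_risk_per_reward):
        # Go ahead only if the risk incurred per unit of expected reward
        # is within the threshold chosen before the mission.
        reward = expected_reward(path, reward_map)
        if reward == 0.0:
            return False  # nothing to gain, so take no risk at all
        return path_risk(path, hazard_map) / reward <= max_risk_per_reward

    In this adaptive loop, each new measurement updates the reward estimates, so a path that is rejected early in a mission can become acceptable later, once the data collected so far point to a richer payoff.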

    The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected from the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters through regions at relatively high temperatures. They looked at how the algorithm planned out the vehicle’s route under three different scenarios of acceptable risk.

    In the scenario with the lowest acceptable risk, meaning that the vehicle should avoid any regions that would have a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that also did not have any high rewards — in this case, high temperatures. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took a vehicle through a narrow chasm, and ultimately to a high-reward region.

    The team also ran the algorithm through 10,000 numerical simulations, generating random environments in each simulation through which to plan a path, and found that the algorithm “trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.”

    A risky slope

    Last December, Ayton, Williams, and others spent two weeks on a cruise off the coast of Costa Rica, deploying underwater gliders, on which they tested several algorithms, including this newest one. For the most part, the algorithm’s path planning agreed with those proposed by several onboard geologists who were looking for the best routes to find oil seeps.

    Ayton says there was a particular moment when the risk-bounded algorithm proved especially handy. An AUV was making its way up a precarious slump, or landslide, where the vehicle couldn’t take too many risks.

    “The algorithm found a method to get us up the slump quickly, while being the most worthwhile,” Ayton says. “It took us up a path that, while it didn’t help us discover oil seeps, it did help us refine our understanding of the environment.”

    “What was really interesting was to watch how the machine algorithms began to ‘learn’ after the findings of several dives, and began to choose sites that we geologists might not have chosen initially,” says Lori Summa, a geologist and guest investigator at the Woods Hole Oceanographic Institution, who took part in the cruise. “This part of the process is still evolving, but it was exciting to watch the algorithms begin to identify the new patterns from large amounts of data, and couple that information to an efficient, ‘safe’ search strategy.”

    In their long-term vision, the researchers hope to use such algorithms to help autonomous vehicles explore environments beyond Earth.

    “If we went to Europa and weren’t willing to take any risks in order to preserve a probe, then the probability of finding life would be very, very low,” Ayton says. “You have to risk a little to get more reward, which is generally true in life as well.”

    This research was supported, in part, by Exxon Mobil, as part of the MIT Energy Initiative, and by NASA.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     