Tagged: MIT

  • richardmitnick, 10:01 am on August 12, 2017
    Tags: An academic trend toward project-based curricula, Facebook had this Facebook Open Academy that got students from multiple universities and paired them up with open-source projects, MIT, Open-source entrepreneurship, Open-source software is free software whose underlying code or “source code” is also freely available, Some of the best known are the Linux operating system the Firefox web browser and the WordPress blogging platform, The goal of the new MIT class was a public software release   

    From MIT: “Open-source entrepreneurship” 

    MIT News


    August 11, 2017
    Larry Hardesty

    MIT Professor Saman Amarasinghe’s undergraduate course on initiating and managing open-source development projects had no exams or problem sets. Assignments included consulting with mentors, interviewing users, writing a promotional plan — and, of course, leading the development of an open-source application. Image: MIT News.

    New project-based course lets undergrads lead the development of open-source software.

    Open-source software is free software whose underlying code, or “source code,” is also freely available. Open-source development projects often involve hundreds or even thousands of volunteer coders scattered around the globe. Some of the best known are the Linux operating system, the Firefox web browser, and the WordPress blogging platform.

    This past spring, MIT professor of electrical engineering and computer science Saman Amarasinghe offered 6.S194 (Open-Source Entrepreneurship), a new undergraduate course on initiating and managing open-source development projects. The course had no exams or problem sets; instead, the assignments included consulting with mentors, interviewing users, writing a promotional plan — and, of course, leading the development of an open-source application.

    The course is an example of an academic trend toward project-based curricula, which have long had vocal supporters among educational theorists but have drawn renewed attention with the advent of online learning, which turns lectures and discussions into activities that students can pursue on their own schedules.

    But where many project-based undergraduate engineering classes result in designs or products that may not make it out of the classroom, the goal of the new MIT class was a public software release, complete with marketing campaign. And the students learned not only the technical skills required to complete their projects, but the managerial skills required to initiate and guide them.

    The creation of the course had a number of different motivations, Amarasinghe explains. “MIT is a very structured place, and we ask so much of our students, sometimes they don’t have time to do anything interesting outside,” he says. “When you talk to students, they say, ‘We have ideas, but without credit, we don’t have time to do it.’”

    “The other thing that happened was that for the last three, four years, Facebook had this Facebook Open Academy that got students from multiple universities and paired them up with open-source projects,” Amarasinghe adds. “What I found was a lot of times MIT students were somewhat bored with some of those projects because it’s hard to meet MIT expectations. We have much higher expectations of what the kids can do.”

    A third factor, Amarasinghe says, is that many research projects in computer science spawn software that, even though it represents hundreds of hours of work by brilliant coders, never makes it out of the lab. Open-source projects that clean that software up, fill in gaps in its functionality, and create easy-to-use interfaces would let researchers working on related projects modify existing systems instead of building their own from scratch, saving a huge amount of time and energy.

    Entrepreneurial expectations

    Classes for Open-Source Entrepreneurship were divided between lectures and “studio” time, in which teams of students could work on their projects. Amarasinghe lectured chiefly on technical topics, and Nick Meyer, entrepreneur-in-residence at the Martin Trust Center for MIT Entrepreneurship, lectured on topics such as market research and marketing. During studio time, both Amarasinghe and Meyer were available to advise students.

    Before the class launched, Amarasinghe and his teaching assistant, Jeffrey Bosboom, a graduate student in electrical engineering and computer science, had identified several MIT research projects that they thought could be the basis of useful open-source software. But students were free to propose their own projects.

    After selecting their projects, the students’ first task was to meet with — or, in the case of the students who proposed their own projects, identify and then meet with — mentors, to sketch out the scope and direction of the projects. Then, for each project, the students had to identify and interview four to six potential users of the resulting software, to determine product specifications.

    “When you start out with the project, you have certain preconceptions about what the problem is and what you have to do to solve that problem,” says Stephen Chou, an MIT graduate student in electrical engineering and computer science, who audited the course. “One of the first things we had to do was to look for potential users of our project, and when you talk to them, you realize that the priorities that you start out with aren’t necessarily the right ones. At the same time, some of the people we talked to were working in fields that were completely unfamiliar, at least to me. So you start learning more about their problems, and sometimes you get completely new ideas. It’s a good way to orient yourself. That was new to me, and it was very helpful.”

    The third stage of the project was the establishment of a software development timeline, and at the end of the semester, as the projects drew to completion, the students’ final assignment was the development of a promotional plan.

    The projects

    Several of the class projects built on software prototypes that had been developed by the students themselves — or by their friends. One project, Gavel, was a system for scoring entries in contests such as science fairs or hackathons, in which teams of programmers develop software to meet specific criteria over the space of days. The initial version had been written by an MIT undergrad who was himself a frequent hackathon participant, and two of his friends agreed to use Amarasinghe’s course to turn the software into an open-source project.

    Typically, hackathon judges use some sort of absolute rating scale, but this is a notoriously problematic approach: Different judges may calibrate the scales differently, and over the course of a contest, judges may recalibrate their own scales if they find that, in assigning their first few scores, they over- or underestimated the competition.

    A better approach is to ask judges to perform pairwise comparisons. Comparisons are easier to aggregate across judges, and individual judgments of relative value tend not to fluctuate. Gavel is a web-based system that sequentially assigns judges pairs of contestants to evaluate, selecting the pairs on the fly to ensure that the final cumulative ranking will be statistically valid.
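    Gavel’s actual statistical machinery is more elaborate than this, but the core idea — turning judges’ pairwise comparisons into a cumulative ranking — can be sketched with the classic Bradley-Terry model, fitted here with its standard minorization-maximization update (an illustrative sketch, not Gavel’s implementation):

    ```python
    def bradley_terry(num_items, comparisons, iterations=100):
        """Estimate a positive strength score for each item from pairwise outcomes.

        comparisons is a list of (winner, loser) index pairs collected from
        judges.  Uses the standard Bradley-Terry minorization-maximization
        update; scores are normalized to sum to num_items.
        """
        scores = [1.0] * num_items
        wins = [0] * num_items
        for winner, _ in comparisons:
            wins[winner] += 1
        for _ in range(iterations):
            new_scores = []
            for i in range(num_items):
                denom = 0.0
                for winner, loser in comparisons:
                    if i in (winner, loser):
                        other = loser if i == winner else winner
                        denom += 1.0 / (scores[i] + scores[other])
                new_scores.append(wins[i] / denom if denom > 0 else scores[i])
            total = sum(new_scores)
            scores = [s * num_items / total for s in new_scores]
        return scores

    # Six judgments over three hackathon entries: entry 0 wins most of its
    # comparisons, so it should come out on top of the final ranking.
    judgments = [(0, 1), (0, 1), (0, 2), (1, 2), (1, 0), (2, 1)]
    ranking = bradley_terry(3, judgments)
    ```

    Because each update depends only on who beat whom, the fitted ranking is insensitive to how individual judges would have calibrated an absolute scale.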

    Another of the projects, Homer, also reflects the preoccupations of undergraduates at a technical university. Homer is based on psychological research on the frequency with which factual information must be repeated before it will reliably lodge itself in someone’s memory. It’s essentially a digital flash-card system, except that instead of picking cards entirely at random, it cycles through them at intervals selected to maximize retention.
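    The article doesn’t say which scheduling rule Homer uses; a well-known rule of this kind is SuperMemo’s SM-2 algorithm, in which each successful recall multiplies a card’s interval by an “ease” factor. A simplified single-step sketch (the constants follow SM-2, but Homer’s actual rule may differ):

    ```python
    def next_interval(prev_interval_days, repetition, ease, quality):
        """One scheduling step of an SM-2-style spaced-repetition rule.

        quality is a 0-5 self-rating of recall.  Cards recalled easily return
        after exponentially growing intervals; forgotten cards start over.
        Returns (interval_in_days, new_repetition_count, new_ease).
        """
        if quality < 3:  # recall failed: reset the card
            return 1, 0, ease
        # Successful recall: adjust the ease factor (bounded below by 1.3).
        ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        if repetition == 0:
            interval = 1
        elif repetition == 1:
            interval = 6
        else:
            interval = round(prev_interval_days * ease)
        return interval, repetition + 1, ease

    # A card answered perfectly three times comes back after 1, 6, then ~17 days.
    interval, rep, ease = 0, 0, 2.5
    schedule = []
    for q in (5, 5, 5):
        interval, rep, ease = next_interval(interval, rep, ease, q)
        schedule.append(interval)
    ```

    The exponential growth is the point: each review arrives just before the card would otherwise be forgotten, so total study time stays small.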

    Other projects, however, grew out of academic research at MIT. One project — dubbed Taco, for tensor algebra compiler — was based on as-yet-unpublished research from Amarasinghe’s group. A tensor is the higher-dimensional analogue of a matrix, which is essentially a table of data. Mathematical operations involving huge tensors are common in the Internet age: All the ratings assigned to individual movies by individual Netflix subscribers, for instance, constitute a three-dimensional tensor.

    If the tensors are sparse, however — if most of their entries are zero — there are computational shortcuts for manipulating them. And again, in the Internet age, many tensors are sparse: Most Netflix subscribers have rated only a tiny fraction of the movies in Netflix’s library.

    Taco provides a simple, intuitive interface to let data scientists describe operations involving sparse and nonsparse tensors, and the underlying algorithms automatically generate the often very complicated computer code for executing those operations as efficiently as possible.
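    In Taco’s index notation, a matrix-vector product is written roughly as y(i) = A(i,j) * x(j), and the compiler emits code that iterates only over the stored nonzeros. A toy Python version of the kind of kernel such a compiler generates, for a coordinate-format sparse matrix (illustrative only; Taco itself is a C++ system and produces far more optimized code):

    ```python
    def sparse_mat_vec(A, x, num_rows):
        """Multiply a sparse matrix by a dense vector, y(i) = A(i,j) * x(j),
        touching only the stored nonzero entries.

        A is a dict mapping (row, col) -> value, i.e. a coordinate-style
        sparse format: absent keys are implicit zeros.
        """
        y = [0.0] * num_rows
        for (i, j), value in A.items():
            y[i] += value * x[j]
        return y

    # A 3x4 matrix storing only 3 of its 12 entries:
    A = {(0, 1): 2.0, (1, 3): -1.0, (2, 0): 4.0}
    y = sparse_mat_vec(A, [0.0, 3.0, 0.0, 2.0], 3)
    ```

    The work is proportional to the number of nonzeros rather than to the full matrix size — the payoff that becomes enormous at Netflix scale.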

    Other projects from the class — such as an interface for a database of neural-network models, or a collaborative annotation tool designed for use in the classroom — also grew out of MIT research. But no matter the sources of the projects, the students were the ones steering them to completion.

    “They had a lot more ownership of a project than being part of a very large project that has thousands of contributors, finding a few bugs or adding a few features,” Amarasinghe says. “They got to think of the big-picture issues — how to build a community, how to attract other programmers, what sort of licensing should be used. MIT students should be the ones who are doing new open-source projects and leading some of these things.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick, 11:07 am on August 4, 2017
    Tags: collections of ultracold molecules can retain the information stored in them for hundreds of times longer than researchers have previously achieved in these materials, MIT, Ultracold molecules hold promise for quantum computing

    From MIT: “Ultracold molecules hold promise for quantum computing” 

    MIT News

    July 27, 2017
    David L. Chandler

    This vacuum chamber with apertures for several laser beams was used to cool molecules of sodium-potassium down to temperatures of a few hundred nanokelvins, or billionths of a degree above absolute zero. Such molecules could be used as a new kind of qubit, a building block for eventual quantum computers. Image courtesy of the researchers.

    New approach yields long-lasting configurations that could provide long-sought “qubit” material.

    Researchers have taken an important step toward the long-sought goal of a quantum computer, which in theory should be capable of vastly faster computations than conventional computers for certain kinds of problems. The new work shows that collections of ultracold molecules can retain the information stored in them for hundreds of times longer than researchers have previously achieved in these materials.

    These two-atom molecules are made of sodium and potassium and were cooled to temperatures just a few ten-millionths of a degree above absolute zero (measured in hundreds of nanokelvins, or nK). The results are described in a report this week in Science, by Martin Zwierlein, an MIT professor of physics and a principal investigator in MIT’s Research Laboratory of Electronics; Jee Woo Park, a former MIT graduate student; Sebastian Will, a former research scientist at MIT and now an assistant professor at Columbia University; and two others, all at the MIT-Harvard Center for Ultracold Atoms.

    Many different approaches are being studied as possible ways of creating qubits, the basic building blocks of long-theorized but not yet fully realized quantum computers. Researchers have tried using superconducting materials, ions held in ion traps, or individual neutral atoms, as well as molecules of varying complexity. The new approach uses a cluster of very simple molecules made of just two atoms.

    “Molecules have more ‘handles’ than atoms,” Zwierlein says, meaning more ways to interact with each other and with outside influences. “They can vibrate, they can rotate, and in fact they can strongly interact with each other, which atoms have a hard time doing. Typically, atoms have to really meet each other, be on top of each other almost, before they see that there’s another atom there to interact with, whereas molecules can see each other” over relatively long ranges. “In order to make these qubits talk to each other and perform calculations, using molecules is a much better idea than using atoms,” he says.

    Using this kind of two-atom molecule for quantum information processing “had been suggested some time ago,” says Park, “and this work demonstrates the first experimental step toward realizing this new platform, which is that quantum information can be stored in dipolar molecules for extended times.”

    “The most amazing thing is that [these] molecules are a system which may allow realizing both storage and processing of quantum information, using the very same physical system,” Will says. “That is actually a pretty rare feature that is not typical at all among the qubit systems that are mostly considered today.”

    In the team’s initial proof-of-principle lab tests, a few thousand of the simple molecules were contained in a microscopic puff of gas, trapped at the intersection of two laser beams and cooled to ultracold temperatures of about 300 nanokelvins. “The more atoms you have in a molecule the harder it gets to cool them,” Zwierlein says, so they chose this simple two-atom structure.

    The molecules have three key characteristics: rotation, vibration, and the spin direction of the nuclei of the two individual atoms. For these experiments, the researchers got the molecules under perfect control in terms of all three characteristics — that is, into the lowest state of vibration, rotation, and nuclear spin alignment.

    “We have been able to trap molecules for a long time, and also demonstrate that they can carry quantum information and hold onto it for a long time,” Zwierlein says. And that, he says, is “one of the key breakthroughs or milestones one has to have before hoping to build a quantum computer, which is a much more complicated endeavor.”

    The use of sodium-potassium molecules provides a number of advantages, Zwierlein says. For one thing, “the molecule is chemically stable, so if one of these molecules meets another one they don’t break apart.”

    In the context of quantum computing, the “long time” Zwierlein refers to is one second — which is “in fact on the order of a thousand times longer than a comparable experiment that has been done” using rotation to encode the qubit, he says. “Without additional measures, that experiment gave a millisecond, but this was great already.” With this team’s method, the system’s inherent stability means “you get a full second for free.”

    That suggests, though it remains to be proven, that such a system would be able to carry out thousands of quantum computations, known as gates, in sequence within that second of coherence. The final results could then be “read” optically through a microscope, revealing the final state of the molecules.

    “We have strong hopes that we can do one so-called gate — that’s an operation between two of these qubits, like addition, subtraction, or that sort of equivalent — in a fraction of a millisecond,” Zwierlein says. “If you look at the ratio, you could hope to do 10,000 to 100,000 gate operations in the time that we have the coherence in the sample. That has been stated as one of the requirements for a quantum computer, to have that sort of ratio of gate operations to coherence times.”
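    The quoted ratio is simple arithmetic: one second of coherence divided by an assumed gate time of 10 to 100 microseconds (“a fraction of a millisecond”) yields the 10,000-to-100,000 figure:

    ```python
    coherence_s = 1.0                          # demonstrated coherence time
    gate_fast_s, gate_slow_s = 10e-6, 100e-6   # assumed range of gate durations
    n_max = round(coherence_s / gate_fast_s)
    n_min = round(coherence_s / gate_slow_s)
    print(f"{n_min:,} to {n_max:,} gate operations per coherence time")
    ```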

    “The next great goal will be to ‘talk’ to individual molecules. Then we are really talking quantum information,” Will says. “If we can trap one molecule, we can trap two. And then we can think about implementing a ‘quantum gate operation’ — an elementary calculation — between two molecular qubits that sit next to each other,” he says.

    Using an array of perhaps 1,000 such molecules, Zwierlein says, would make it possible to carry out calculations so complex that no existing computer could even begin to check the possibilities. Though he stresses that this is still an early step and that such computers could be a decade or more away, in principle such a device could quickly solve currently intractable problems such as factoring very large numbers — a process whose difficulty forms the basis of today’s best encryption systems for financial transactions.

    Besides quantum computing, the new system also offers the potential for a new way of carrying out precision measurements and quantum chemistry, Zwierlein says.

    “These results are truly state of the art,” says Simon Cornish, a professor of physics at Durham University in the U.K., who was not involved in this work. The findings “beautifully reveal the potential of exploiting nuclear spin states in ultracold molecules for applications in quantum information processing, as quantum memories and as a means to probe dipolar interactions and ultracold collisions in polar molecules,” he says. “I think the results constitute a major step forward in the field of ultracold molecules and will be of broad interest to the large community of researchers exploring related aspects of quantum science, coherence, quantum information, and quantum simulation.”

    The team also included MIT graduate student Zoe Yan and postdoc Huanqian Loh. The work was supported by the National Science Foundation, the U.S. Air Force Office of Scientific Research, the U.S. Army Research Office, and the David and Lucile Packard Foundation.

    See the full article here.


  • richardmitnick, 12:07 pm on July 31, 2017
    Tags: MIT, Siberian Traps, Underground magma pulse triggered end-Permian extinction

    From MIT: “Underground magma pulse triggered end-Permian extinction” 

    MIT News


    July 31, 2017
    Jennifer Chu

    Study ties specific interval during an extended period of volcanism to Earth’s most severe mass extinction.

    Geologists from the U.S. Geological Survey and MIT have homed in on the precise event that set off the end-Permian extinction, Earth’s most devastating mass extinction, which killed off 90 percent of marine organisms and 75 percent of life on land approximately 252 million years ago.

    In a paper published today in Nature Communications, the team reports that about 251.9 million years ago, a huge pulse of magma rose up through the Earth, in a region that today is known as the Siberian Traps. Some of this molten liquid stopped short of erupting onto the surface and instead spread out beneath the Earth’s shallow crust, creating a vast network of rock stretching across almost 1 million square miles.

    As the subsurface magma crystallized into geologic formations called sills, it heated the surrounding carbon-rich sediments and rapidly released into the atmosphere a tremendous volume of carbon dioxide, methane, and other greenhouse gases.

    “This first pulse of sills generated a huge volume of greenhouse gases, and things got really bad, really fast,” says first author and former MIT graduate student Seth Burgess. “Gases warmed the climate, acidified the ocean, and made it very difficult for things on land and in the ocean to survive. And we think the smoking gun is the first pulse of Siberian Traps sills.”

    Getting to extinction’s roots

    Since the 1980s, scientists have suspected that the Earth’s most severe extinction events, the end-Permian included, were triggered by large igneous provinces such as the Siberian Traps — expansive accumulations of igneous rock, formed from protracted eruptions of lava over land and intrusions of magma beneath the surface. But Burgess was struck by a certain incongruity in such hypotheses.

    “One thing really stuck out as a sore thumb to me: The total duration of magmatism in most cases is about 1 million years, but extinctions happen really quickly, in about 10,000 years. That told me that it’s not the entire large igneous province driving extinction,” says Burgess, who is now a research scientist for the U.S. Geological Survey.

    He surmised that the root cause of mass extinctions might be a shorter, more specific interval of magmatism within the much longer period over which large igneous provinces form.

    Digging through the data

    Burgess decided to re-examine geochronologic measurements he made as a graduate student in the lab of Samuel Bowring, the Robert R. Shrock Professor of Geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    In 2014 and 2015, he and Bowring used high-precision dating techniques to determine the timing of the end-Permian mass extinction and ages of ancient magmatic rocks that the team collected over three field expeditions to the Siberian Traps.

    From the rocks’ ages, they estimated this magmatic period started around 300,000 years before the onset of the end-Permian extinction and petered out 500,000 years after the extinction ended. From these dates, the team concluded that magmatism in the Siberian Traps must have had a role in triggering the mass extinction.

    But a puzzle remained. Even though lava erupted in massive volumes hundreds of thousands of years prior to the extinction, there is no evidence in the global fossil record of biotic stress or significant change in the climate system during that period.

    “You’d expect if these lavas are driving extinction, you’d see global evidence of biosphere decline,” Burgess says.

    When he looked back through the group’s data, he noticed that rocks dated within the 300,000-year window prior to the start of the extinction were almost exclusively volcanic, meaning they formed from lava that erupted onto land. In contrast, the subsurface sills only started to appear just before the start of the extinction, 251.9 million years ago.

    “I realized the oldest sills out there correspond, bang-on, with the start of the mass extinction,” Burgess says. “You don’t have any negative effects occurring in the biosphere when you’ve got all this lava erupting, but the second you start intruding sills, the mass extinction starts.”

    Revised timeline

    Based on his new observations of the data, Burgess has outlined a refined, three-stage timeline of the processes that likely triggered the end-Permian extinction. The first stage marks the start of widespread eruptions of lava over land, 252.2 million years ago. As the lava spews out and solidifies over a period of 300,000 years, it builds up a dense, rocky cap.

    The second stage starts at around 251.9 million years ago, when the lava cap becomes a structural barrier to subsequent lava eruption. Instead, ascending magma stalls and spreads beneath the lava cap as sills, heating up carbon-rich sediments in the Earth and releasing huge amounts of greenhouse gases to the atmosphere — almost precisely when the mass extinction event began. “These first sills are the key,” Burgess says.

    The last stage begins around 251.5 million years ago, as the release of gases slows, even as magma continues to intrude into the sediments.

    “At this point, the magma has already degassed the basin of most of its volatiles, and it becomes more difficult to generate large volumes of volatiles from a basin that’s already been cooked,” Burgess explains.
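    The stage durations implied by the dates above can be checked with quick arithmetic (the boundaries are point estimates; the real dates carry measurement uncertainties not shown here):

    ```python
    # Stage boundaries in millions of years ago (Ma), from the dates above.
    stages = {
        "stage 1: lava eruptions build a rocky cap": (252.2, 251.9),
        "stage 2: sills intrude and degas the basin": (251.9, 251.5),
    }
    # Duration of each stage in thousands of years (kyr):
    durations_kyr = {name: round((start - end) * 1000)
                     for name, (start, end) in stages.items()}
    for name, kyr in durations_kyr.items():
        print(f"{name}: ~{kyr} kyr")
    ```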

    A culprit for other extinctions?

    Could similarly short pulses of sills have triggered other mass extinctions in Earth’s history? Burgess looked at the geochronologic data for three other extinction events that scientists have found to coincide with large igneous provinces: the Cretaceous-Paleogene, the Triassic-Jurassic, and the early Jurassic extinctions.

    For both the Triassic-Jurassic and the early Jurassic extinction events, he found that the associated large igneous provinces contained significant networks of sills, or intrusive magma, emplaced into sedimentary basins that likely hosted volatile gases. In these two cases, the extinction trigger might have been an initial short pulse of intrusive magma, similar to the end-Permian.

    However, for the Cretaceous-Paleogene event — the extinction that killed off the dinosaurs — Burgess noted that the large igneous province that was erupting at the time is primarily composed of lavas, not sills, and was erupted into granitic rock, not a gas-rich sedimentary basin. Thus, it likely did not release enough greenhouse gases to exclusively cause the dinosaur die-off. Instead, Burgess says a combination of lava eruptions and the Chicxulub asteroid impact was likely responsible.

    “Large igneous provinces have always been blamed for mass extinctions, but no one has really figured out if they’re really guilty, and if so, how it was done,” Burgess says. “Our new work takes that next step and identifies which part of the large igneous province is guilty, and how it committed the crime.”

    The paper’s co-authors are Bowring and J.D. Muirhead, of Syracuse University. The research was supported, in part, by a U.S. Geological Survey Mendenhall Postdoctoral Research Fellowship, which was awarded to Burgess.

    See the full article here.


  • richardmitnick, 8:49 am on July 31, 2017
    Tags: Lindy Elkins-Tanton, MIT

    From MIT: Women in STEM – “Exploring an unusual metal asteroid” Lindy Elkins-Tanton 

    MIT News

    July 25, 2017
    Alice Waugh, MIT Alumni Association

    As principal investigator of the Psyche mission, Lindy Elkins-Tanton ’87, SM ’87, PhD ’02 is just the second woman to lead a NASA spacecraft mission to a planetary body. The first was her former MIT colleague, Vice President for Research Maria Zuber. Photo: Arizona State University.

    Alumna and former MIT professor Lindy Elkins-Tanton is working with MIT faculty in her role as principal investigator for NASA’s upcoming Psyche mission.

    NASA Psyche spacecraft

    Lindy Elkins-Tanton ’87, SM ’87, PhD ’02 is reaching for the stars — literally. She is the principal investigator for Psyche, a NASA mission that will explore an unusual metal asteroid known as 16 Psyche.

    The mission does not launch until 2023, but preparations have begun in collaboration with faculty in the Department of Earth, Atmospheric and Planetary Sciences (EAPS). Professors Benjamin Weiss and Maria Zuber, who also serves as MIT’s vice president for research, wrote a paper about the asteroid with Elkins-Tanton that was the basis for the team’s selection for NASA’s Discovery Program. MIT Professor Richard Binzel is also a team member.

    At MIT, Elkins-Tanton earned BS and MS degrees in geology and geochemistry with a concentration on how planets form. Then she detoured from academia to the business world before becoming a college lecturer in mathematics in 1995.

    “I realized that in academia, you have this incredible privilege of always being able to ask a harder, bigger question, so you never get bored, and you have the opportunity to inspire students to do more in their lives,” says Elkins-Tanton. She returned to MIT to earn a PhD in geology and geophysics, and for the next decade after completing that degree, she taught, first at Brown University and then at MIT as an EAPS faculty member.

    Since 2014, Elkins-Tanton has been professor and director of the School of Earth and Space Exploration at Arizona State University.

    She has been revamping the undergraduate curriculum to give it more of an MIT flavor, bringing current research into the classroom and having students tackle real-world problems. This approach has helped her transmit excitement about the field to her students.

    Elkins-Tanton also draws on business skills that she says are quite useful for scientific collaboration: negotiating, making a compelling pitch, and knowing how to build a team that works well. She is applying those skills, along with her management and leadership experience, as the second woman to lead a NASA mission to a major solar system body (after Zuber, who was principal investigator of the Gravity Recovery and Interior Laboratory, or GRAIL, mission).

    Psyche represents a compelling target for study because scientists theorize that it was an ordinary asteroid until violent collisions with other objects blasted away most of its outer rock, exposing its metallic core. This core, the first to be studied, could yield insights into the metal interior of rocky planets in the solar system.

    “We have no idea what a metal body looks like. The one thing I can be sure of is that it will surprise us,” Elkins-Tanton says. “I love this stuff — there are new discoveries every day.”

    See the full article here.

  • richardmitnick, 1:42 pm on July 12, 2017
    Tags: MIT, Preventing severe blood loss on the battlefield or in the clinic, Reginald Avery

    From MIT: “Preventing severe blood loss on the battlefield or in the clinic” Reginald Avery 

    MIT News


    July 11, 2017
    Dara Farhadi

    At MIT, graduate student Reginald Avery has been conducting research on a biomaterial that could stop wounded soldiers from dying from shock due to severe blood loss. “I wanted to do something related to the military because I grew up around that environment,” he says. “The people, the uniformed soldiers, and the well-controlled atmosphere created a good environment to grow up in, and I wanted to still contribute in some way to that community.” Photo: Ian MacLellan.

    PhD student Reginald Avery is developing an injectable material that patches ruptured blood vessels.

    In a tiny room in the sub-basement of MIT’s Building 66 sits a customized, super-resolution microscope that makes it possible to see nanoscale features of a red blood cell. Here, Reginald Avery, a fifth-year graduate student in the Department of Biological Engineering, can be found conducting research with quiet discipline, occasionally fidgeting with his silver watch.

    He spends most of his days either at the microscope, taking high-resolution images of blood clots forming over time, or at the computer, reading literature about super-resolution microscopy. Without windows to approximate the time of day, Avery’s watch comes in handy. Not surprisingly for those who know him, it’s set to military time.

    Avery describes his father as a hard-working inspector general for the U.S. Army Test and Evaluation Command. Avery and his fraternal twin brother, Jeff, a graduate student in computer science at Purdue University, were born in Germany and lived for a portion of their childhoods on military bases in Hawaii and Alabama. Eventually the family moved to Maryland and entered civilian life, but Avery’s experiences on a military base never left him. At MIT he’s been conducting research on a biomaterial that could stop wounded soldiers from dying from shock due to severe blood loss.

    “I wanted to do something related to the military because I grew up around that environment,” he says. “The people, the uniformed soldiers, and the well-controlled atmosphere created a good environment to grow up in, and I wanted to still contribute in some way to that community.”

    Blocking blood loss

    Avery is one of the first graduate students to join the Program in Polymers and Soft Matter (PPSM) from the Department of Biological Engineering. When he first joined the lab of Associate Professor Bradley Olsen in the Department of Chemical Engineering, his focus was on optimizing and testing a material that could be topically applied to wounded soldiers.

    The biomaterial is a hydrogel — a material consisting largely of water — with a viscosity similar to toothpaste. Gelatin proteins and inorganic silica nanoparticles are incorporated into the material and function as a substrate that helps to accelerate coagulation rates and reduce clotting times.

    Co-advised by Ali Khademhosseini at Brigham and Women’s Hospital and in collaboration with others at Massachusetts General Hospital, Avery further developed the material so that it could be injected into ruptured blood vessels. Like a cork on a wine bottle, the biomaterial forms a plug in the leaky vessel and stops any blood loss. Avery’s research was published in Science Translational Medicine and featured on the front cover of the November 2016 issue.

    The current standard for patching blood vessels is imperfect. Surgeons typically use metallic coils, special plastic beads, or compounds also found in super glue. Each technology has limitations that the nanocomposite hydrogel attempts to address.

    “The old techniques don’t take advantage of tissue engineering. It can be difficult for a surgeon to deliver metallic coils and beads to the targeted site, and blood may sometimes still find a path through and result in re-bleeding. It’s also expensive, and some techniques have a finite time period to place the material where it needs to be,” Avery says. “We wanted to use a hydrogel that could completely fill a vessel and not allow any leakage to occur through that injury site.”

    The nanocomposite, which can be injected easily with a syringe or catheter, has been tested in animal models without causing inflammatory side-effects or the formation of clots elsewhere in the animal’s circulatory system. Some in vitro experiments also indicate that the material could be useful for treating aneurysms.

    For the past six months Avery has concentrated on uncovering the physical mechanism by which the nanocomposite material interacts with blood. A super-resolution microscope can achieve a resolution of 250 nanometers; a single red blood cell, for comparison, is about 8,000 nanometers wide. Avery says the ability to visualize how the physiological molecules and proteins interact with the nanocomposite and other surgical tools may also help him design a better material. Obtaining a comprehensive view of the process, however, can be time-consuming.

    “It’s taking snapshots every 10 or 20 seconds for approximately 30 minutes, and putting all of those pictures together,” he says. “What I want to do is visualize these gels and clots forming over time.”
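The figures quoted above allow a quick back-of-envelope check (a sketch using only numbers stated in the article, not anything from Avery's actual protocol):

```python
# How finely a 250 nm super-resolution microscope samples an
# ~8,000 nm-wide red blood cell, per the figures quoted above.
resolution_nm = 250
cell_width_nm = 8000
elements_across_cell = cell_width_nm / resolution_nm
print(elements_across_cell)  # 32.0 resolvable elements across one cell

# A 30-minute time-lapse with one snapshot every 10 to 20 seconds:
duration_s = 30 * 60
frames_min = duration_s // 20
frames_max = duration_s // 10
print(frames_min, frames_max)  # 90 180 frames per run
```

So a single 30-minute run yields on the order of 100-200 frames to assemble into one clot-formation movie, which is why the imaging is slow going.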

    Found in translation

    While he is eager to see his material put to use to save lives, Avery is glad to be contributing to the work at the basic and translational research stages. He says he’s driven to appropriately characterize a treatment or biomaterial, ask the right questions, and make sure it functions just as well as, or better than, what is currently used in the clinic.

    “I’m comfortable doing a thorough study in vitro to characterize materials or design some synthetic tests prior to in vivo testing,” he says. “You must be very confident in [the biomaterial] before getting to that step so that you’re effectively utilizing the animals, or even more important, you’re not putting a person at risk if something finally does get to that point.”

    Avery also finds meaning in collaborating and helping others with their research. He has worked on projects using neutron scattering to elucidate the network structure of a homo-polypeptide, performed cell culture on thermoresponsive hydrogels, and developed highly elastic polypeptides. These projects, Avery says, aren’t directly applicable to his thesis work on treating internal bleeding, but he was happy simply to have had the experience of learning something new.

    “If I can help somebody with something then I’m going to try to do the best that I can. Whether it’s a homework assignment or something in lab, my goal is not to leave somebody worse off,” Avery says. “If there’s something I’ve done in the past that could help you now, I’m excited to show you and hopefully have it work out well for you. If it doesn’t, we can talk even longer to try to figure out what we could do to make it work better.”

    Of the seven papers that Avery has been involved in over the past three years, almost half were collaborative projects outside the area of his thesis work.

    Avery hopes to finish his PhD thesis by the summer of next year. Afterward, he envisions working for a research institute that is devoted to a single disease or condition, or perhaps for a research center associated with a hospital within the military health system so that he could continue developing biomaterials, diagnostics, or other approaches to help soldiers.

    “I’m usually excited to help somebody get something done or get something done for my project. It’s always exciting to get closer to determining the optimum concentration that you need, seeing that one data point that’s higher than the others, or getting that nice image that shows the effect that you have hypothesized,” Avery says. “That’s still a motivating aspect of coming to lab, to eventually get those results. It can take a long time to get there but once you do, you appreciate the journey.”

    See the full article here.


  • richardmitnick 8:08 am on July 1, 2017 Permalink | Reply
    Tags: MIT, Tiny “motors” are driven by light

    From MIT: “Tiny ‘motors’ are driven by light” 


    June 30, 2017
    David L. Chandler

    Researchers have created in simulations the first system in which particles can be manipulated by a beam of ordinary light rather than the expensive specialized light sources required by other systems. Image: Christine Daniloff/MIT

    Science fiction is full of fanciful devices that allow light to interact forcefully with matter, from light sabers to photon-drive rockets. In recent years, science has begun to catch up; some results hint at interesting real-world interactions between light and matter at atomic scales, and researchers have produced devices such as optical tractor beams, tweezers, and vortex beams.

    Now, a team at MIT and elsewhere has pushed through another boundary in the quest for such exotic contraptions, by creating in simulations the first system in which particles — ranging from roughly molecule- to bacteria-sized — can be manipulated by a beam of ordinary light rather than the expensive specialized light sources required by other systems. The findings are reported today in the journal Science Advances, by MIT postdocs Ognjen Ilic PhD ’15, Ido Kaminer, and Bo Zhen; professor of physics Marin Soljačić; and two others.

    Most research that attempts to manipulate matter with light, whether by pushing away individual atoms or small particles, attracting them, or spinning them around, involves the use of sophisticated laser beams or other specialized equipment that severely limits the kinds of uses such systems can be put to. “Our approach is to look at whether we can get all these interesting mechanical effects, but with very simple light,” Ilic says.

    The team decided to work on engineering the particles themselves, rather than the light beams, to get them to respond to ordinary light in particular ways. As their initial test, the researchers created simulated asymmetrical particles, called Janus (two-faced) particles, just a micrometer in diameter — one-hundredth the width of a human hair. These tiny spheres were composed of a silica core coated on one side with a thin layer of gold.

    When exposed to a beam of light, the two-sided configuration of these particles causes an interaction that shifts their axes of symmetry relative to the orientation of the beam, the researchers found. At the same time, this interaction creates forces that set the particles spinning uniformly. Multiple particles can all be affected at once by the same beam. And the rate of spin can be changed by just changing the color of the light.

    The same kind of system, the researchers say, could be applied to producing different kinds of manipulations, such as moving the positions of the particles. Ultimately, this new principle might be applied to moving particles around inside a body, using light to control their position and activity, for new medical treatments. It might also find uses in optically based nanomachinery.

    About the growing number of approaches to controlling interactions between light and material objects, Kaminer says, “I think about this as a new tool in the arsenal, and a very significant one.”

    Ilic says the study “enables dynamics that may not be achieved by the conventional approach of shaping the beam of light,” and could make possible a wide range of applications that are hard to foresee at this point. For example, in many potential applications, such as biological uses, nanoparticles may be moving in an incredibly complex, changing environment that would distort and scatter the beams needed for other kinds of particle manipulation. But these conditions would not matter to the simple light beams needed to activate the team’s asymmetric particles.

    “Because our approach does not require shaping of the light field, a single beam of light can simultaneously actuate a large number of particles,” Ilic says. “Achieving this type of behavior would be of considerable interest to the community of scientists studying optical manipulation of nanoparticles and molecular machines.” Kaminer adds, “There’s an advantage in controlling large numbers of particles at once. It’s a unique opportunity we have here.”

    Soljačić says this work fits into the area of topological physics, a burgeoning area of research that also led to last year’s Nobel Prize in physics. Most such work, though, has been focused on fairly specialized conditions that can exist in certain exotic materials called periodic media. “In contrast, our work investigates topological phenomena in particles,” he says.

    And this is just the start, the team suggests. This initial set of simulations only addressed the effects with a very simple two-sided particle. “I think the most exciting thing for us,” Kaminer says, “is there’s an enormous field of opportunities here. With such a simple particle showing such complex dynamics,” he says, it’s hard to imagine what will be possible “with an enormous range of particles and shapes and structures we can explore.”

    “Topology has been found to be a powerful tool in describing a select few physical systems,” says Mikael Rechtsman, an assistant professor of physics at Penn State who was not involved in this work. “Whenever a system can be described by a topological number, it is necessarily highly insensitive to imperfections that are present under realistic conditions. Soljačić’s group has managed to find yet another important physical system in which this topological robustness can play a role, namely the control and manipulation of nanoparticles with light. Specifically, they have found that certain particles’ rotational states can be ‘topologically protected’ to be highly stable in the presence of a laser beam propagating through the system. This could potentially have importance for trapping and probing individual viruses and DNA, for example.”

    The team also included Owen Miller at Yale University and Hrvoje Buljan at the University of Zagreb, in Croatia. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the European Research Council.

    See the full article here.


  • richardmitnick 7:41 am on July 1, 2017 Permalink | Reply
    Tags: MIT, Practical parallelism   

    From MIT: “Practical parallelism” 

    MIT News

    MIT Widget

    MIT News

    June 30, 2017
    Larry Hardesty

    A new system dubbed Fractal achieves 88-fold speedups through a parallelism strategy known as speculative execution. Courtesy of the researchers (edited by MIT News)

    The chips in most modern desktop computers have four “cores,” or processing units, which can run different computational tasks in parallel. But the chips of the future could have dozens or even hundreds of cores, and taking advantage of all that parallelism is a stiff challenge.

    Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new system that not only makes parallel programs run much more efficiently but also makes them easier to code.

    In tests on a set of benchmark algorithms that are standard in the field, the researchers’ new system frequently enabled more than 10-fold speedups over existing systems that adopt the same parallelism strategy, with a maximum of 88-fold.

    For instance, algorithms for solving an important problem called max flow have proven very difficult to parallelize. After decades of research, the best parallel implementation of one common max-flow algorithm achieves only an eightfold speedup when it’s run on 256 parallel processors. With the researchers’ new system, the improvement is 322-fold — and the program required only one-third as much code.

    The new system, dubbed Fractal, achieves those speedups through a parallelism strategy known as speculative execution.

    “In a conventional parallel program, you need to divide your work into tasks,” says Daniel Sanchez, an assistant professor of electrical engineering and computer science at MIT and senior author on the new paper. “But because these tasks are operating on shared data, you need to introduce some synchronization to ensure that the data dependencies that these tasks have are respected. From the mid-90s to the late 2000s, there were multiple waves of research in what we call speculative architectures. What these systems do is execute these different chunks in parallel, and if they detect a conflict, they abort and roll back one of them.”

    Constantly aborting computations before they complete would not be a very efficient parallelization strategy. But for many applications, aborted computations are rare enough that they end up squandering less time than the complicated checks and updates required to synchronize tasks in more conventional parallel schemes. Last year, Sanchez’s group reported a system, called Swarm, that extended speculative parallelism to an important class of computational problems that involve searching data structures known as graphs.

    Irreducible atoms

    Research on speculative architectures, however, has often run aground on the problem of “atomicity.” Like all parallel architectures, speculative architectures require the programmer to divide programs into tasks that can run simultaneously. But with speculative architectures, each such task is “atomic,” meaning that it should seem to execute as a single whole. Typically, each atomic task is assigned to a separate processing unit, where it effectively runs in isolation.

    Atomic tasks are often fairly substantial. The task of booking an airline flight online, for instance, consists of many separate operations, but they have to be treated as an atomic unit. It wouldn’t do, for instance, for the program to offer a plane seat to one customer and then offer it to another because the first customer hasn’t finished paying yet.

    With speculative execution, large atomic tasks introduce two inefficiencies. The first is that, if the task has to abort, it might do so only after chewing up a lot of computational cycles. Aborting smaller tasks wastes less time.

    The other is that a large atomic task may have internal subroutines that could be parallelized efficiently. But because the task is isolated on its own processing unit, those subroutines have to be executed serially, squandering opportunities for performance improvements.

    Fractal — which Sanchez developed together with MIT graduate students Suvinay Subramanian, Mark Jeffrey, Maleen Abeydeera, Hyun Ryong Lee, and Victor A. Ying, and with Joel Emer, a professor of the practice and senior distinguished research scientist at the chip manufacturer Nvidia — solves both of these problems. The researchers, who are all with MIT’s Department of Electrical Engineering and Computer Science, describe the system in a paper they presented this week at the International Symposium on Computer Architecture.

    With Fractal, a programmer adds a line of code to each subroutine within an atomic task that can be executed in parallel. This will typically increase the length of the serial version of a program by a few percent, whereas an implementation that explicitly synchronizes parallel tasks will often increase it by 300 or 400 percent. Circuits hardwired into the Fractal chip then handle the parallelization.

    Time chains

    The key to the system is a slight modification of a circuit already found in Swarm, the researchers’ earlier speculative-execution system. Swarm was designed to enforce some notion of sequential order in parallel programs. Every task executed in Swarm receives a time stamp, and if two tasks attempt to access the same memory location, the one with the later time stamp is aborted and re-executed.

    Fractal, too, assigns each atomic task its own time stamp. But if an atomic task has a parallelizable subroutine, the subroutine’s time stamp includes that of the task that spawned it. And if the subroutine, in turn, has a parallelizable subroutine, the second subroutine’s time stamp includes that of the first, and so on. In this way, the ordering of the subroutines preserves the ordering of the atomic tasks.

    As tasks spawn subroutines that spawn subroutines and so on, the concatenated time stamps can become too long for the specialized circuits that store them. In those cases, however, Fractal simply moves the front of the time-stamp train into storage. This means that Fractal is always working only on the lowest-level, finest-grained tasks it has yet identified, avoiding the problem of aborting large, high-level atomic tasks.
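A toy sketch of the time-stamp scheme described above (illustrative only, not Fractal's actual hardware logic): each subroutine's stamp extends its parent's, so comparing the concatenated stamps lexicographically preserves the ordering of the atomic tasks, and a conflict rolls back the task with the later stamp.

```python
# Hierarchical time stamps as tuples: a subroutine inherits its
# parent's stamp as a prefix, so nesting never reorders atomic tasks.

def spawn(parent_ts, child_index):
    """A parallel subroutine's stamp extends its parent's stamp."""
    return parent_ts + (child_index,)

def aborted(ts_a, ts_b):
    """On a memory conflict, the task with the later (lexicographically
    larger) time stamp is the one aborted and re-executed."""
    return max(ts_a, ts_b)

task1 = (1,)
task2 = (2,)
sub_1a = spawn(task1, 0)     # (1, 0): runs logically inside task 1
sub_1a_x = spawn(sub_1a, 0)  # (1, 0, 0): nested parallelism

# Even a deeply nested subroutine of task 1 orders before all of task 2:
assert sub_1a_x < task2
# A conflict between task 2 and a subroutine of task 1 aborts task 2:
assert aborted(sub_1a, task2) == task2
```

The "time-stamp train" problem then corresponds to these tuples growing too long to hold in fixed-size hardware registers, which is why Fractal spills the oldest prefix to storage.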

    See the full article here.


  • richardmitnick 7:31 am on July 1, 2017 Permalink | Reply
    Tags: MIT, Neural networks, Peering into neural networks

    From MIT: “Peering into neural networks” 


    June 29, 2017
    Larry Hardesty

    Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they’ve been trained, even their designers rarely have any idea what data elements they’re processing. Image: Christine Daniloff/MIT

    Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

    But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

    Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

    At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

    The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

    Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.

    In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.

    In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

    “We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”

    The paper’s other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba’s group and is now the chief technology officer of the medical-computing company PathAI.

    The researchers also knew which pixels of which images corresponded to a given network node’s strongest responses. Today’s neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.

    For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node’s behavior with great specificity.
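The analysis pipeline can be sketched in a few lines (a minimal illustration with random stand-in data and invented names, not the researchers' code or labeled data set): pick each node's top-activating images, then score a node against a labeled concept by the overlap between the pixels that drive it and the pixels carrying that concept's label.

```python
# Toy version of the automated analysis: rank images by how strongly
# they activate each node, then score node/concept agreement with an
# intersection-over-union (IoU) measure over pixel masks.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((100, 5))  # stand-in (n_images, n_nodes) peak responses

# The 10 images that provoke the strongest response from each node:
top10 = np.argsort(-activations, axis=0)[:10]  # shape (10, n_nodes)

def concept_iou(node_mask, concept_mask):
    """Overlap between pixels that activate a node and pixels labeled
    with a visual concept; high IoU = the node 'detects' the concept."""
    inter = np.logical_and(node_mask, concept_mask).sum()
    union = np.logical_or(node_mask, concept_mask).sum()
    return inter / union if union else 0.0

node_mask = np.zeros((8, 8), bool); node_mask[:4, :] = True     # node fires here
concept_mask = np.zeros((8, 8), bool); concept_mask[:4, :4] = True  # label here
print(concept_iou(node_mask, concept_mask))  # 0.5
```

The same scoring applied to combinations of nodes (rather than single nodes) is what underlies the grandmother-neuron comparison discussed later in the article.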

    The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties — such as colors and textures — and higher layers would fire in response to more complex properties.

    But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than it did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.

    One of the researchers’ experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

    Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it’s probably part of many other constellations that fire in response to stimuli that haven’t been tested yet.

    Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did — roughly 80 percent fewer.

    “To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron,” Bau says. “They’re not trying to just smear the idea of grandmother all over the place. They’re trying to assign it to a neuron. It’s this interesting hint of this structure that most people don’t believe is that simple.”

    See the full article here.


  • richardmitnick 7:55 am on June 29, 2017 Permalink | Reply
    Tags: Dialysis, Hemodialysis, Material can filter nanometer-sized molecules at 10 to 100 times the rate of commercial membranes, MIT

    From MIT: “Scientists produce dialysis membrane made from graphene” 


    June 28, 2017
    Jennifer Chu

    1) Graphene, grown on copper foil, is pressed against a supporting sheet of polycarbonate. 2) The polycarbonate acts to peel the graphene from the copper. 3) Using interfacial polymerization, researchers seal large tears and defects in graphene. 4) Next, they use oxygen plasma to etch pores of specific sizes in graphene. Courtesy of the researchers (edited by MIT News)

    Material can filter nanometer-sized molecules at 10 to 100 times the rate of commercial membranes.

    Dialysis, in the most general sense, is the process by which molecules filter out of one solution, by diffusing through a membrane, into a more dilute solution. Outside of hemodialysis, which removes waste from blood, scientists use dialysis to purify drugs, remove residue from chemical solutions, and isolate molecules for medical diagnosis, typically by allowing the materials to pass through a porous membrane.

    Today’s commercial dialysis membranes separate molecules slowly, in part due to their makeup: They are relatively thick, and the pores that tunnel through such dense membranes do so in winding paths, making it difficult for target molecules to quickly pass through.

    Now MIT engineers have fabricated a functional dialysis membrane from a sheet of graphene — a single layer of carbon atoms, linked end to end in hexagonal configuration like that of chicken wire. The graphene membrane, about the size of a fingernail, is less than 1 nanometer thick. (The thinnest existing membranes are about 20 nanometers thick.) The team’s membrane is able to filter out nanometer-sized molecules from aqueous solutions up to 10 times faster than state-of-the-art membranes, with the graphene itself being up to 100 times faster.
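To first order, and under the simplifying assumption (this sketch's, not the paper's) that diffusive flux across a membrane at fixed pore area scales inversely with thickness, the thickness numbers above already suggest the scale of the speedup:

```python
# Rough estimate: diffusive flux ~ 1/thickness for a given pore area,
# so an atomically thin sheet should pass molecules much faster than
# a conventional membrane. Thicknesses as quoted in the article.
graphene_nm = 0.34            # one atomic layer of carbon, <1 nm
thinnest_conventional_nm = 20.0

speedup = thinnest_conventional_nm / graphene_nm
print(round(speedup))  # prints 59 (same order as the article's "up to 100 times" figure)
```

The real comparison is messier, since the polycarbonate support and pore geometry also limit transport, which is why the membrane as a whole gains about 10x while the graphene alone gains up to 100x.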

    While graphene has largely been explored for applications in electronics, Piran Kidambi, a postdoc in MIT’s Department of Mechanical Engineering, says the team’s findings demonstrate that graphene may improve membrane technology, particularly for lab-scale separation processes and potentially for hemodialysis.

    “Because graphene is so thin, diffusion across it will be extremely fast,” Kidambi says. “A molecule doesn’t have to do this tedious job of going through all these tortuous pores in a thick membrane before exiting the other side. Moving graphene into this regime of biological separation is very exciting.”

    Kidambi is a lead author of a study reporting the technology, published today in Advanced Materials. Six co-authors are from MIT, including Rohit Karnik, associate professor of mechanical engineering, and Jing Kong, associate professor of electrical engineering.

    Plugging graphene

    To make the graphene membrane, the researchers first used a common technique called chemical vapor deposition to grow graphene on copper foil. They then carefully etched away the copper and transferred the graphene to a supporting sheet of polycarbonate, studded throughout with pores large enough to let through any molecules that have passed through the graphene. The polycarbonate acts as a scaffold, keeping the ultrathin graphene from curling up on itself.

    The researchers looked to turn graphene into a molecularly selective sieve, letting through only molecules of a certain size. To do so, they created tiny pores in the material by exposing the structure to oxygen plasma, a process by which oxygen, pumped into a plasma chamber, can etch away at materials.

    “By tuning the oxygen plasma conditions, we can control the density and size of pores we make, in the areas where the graphene is pristine,” Kidambi says. “What happens is, an oxygen radical comes to a carbon atom [in graphene] and rapidly reacts, and they both fly out as carbon dioxide.”

    What is left is a tiny hole in the graphene, where a carbon atom once sat. Kidambi and his colleagues found that the longer graphene is exposed to oxygen plasma, the larger and more dense the pores will be. Relatively short exposure times, of about 45 to 60 seconds, generate very small pores.

    Desirable defects

    The researchers tested multiple graphene membranes with pores of varying sizes and distributions, placing each membrane in the middle of a diffusion chamber. They filled the chamber’s feed side with a solution containing various mixtures of molecules of different sizes, ranging from potassium chloride (0.66 nanometers wide) to vitamin B12 (1 to 1.5 nanometers) and lysozyme (4 nanometers), a protein found in egg white. The other side of the chamber was filled with a dilute solution.

    The team then measured the flow of molecules as they diffused through each graphene membrane.

    Membranes with very small pores let through potassium chloride but not larger molecules such as L-tryptophan, which is only about 0.2 nanometers wider than potassium chloride. Membranes with larger pores let through correspondingly larger molecules.
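
    The size-cutoff behavior can be sketched as a simple sieve: a pore admits a molecule only if the molecule is narrower than the pore. The molecule widths are those given in the article (the L-tryptophan width is inferred from its being about 0.2 nm wider than potassium chloride); the pore diameters are hypothetical:

    ```python
    # Size-exclusion sketch. Widths in nanometers.
    molecules = {
        "potassium chloride": 0.66,
        "L-tryptophan": 0.86,   # ~0.2 nm wider than potassium chloride
        "vitamin B12": 1.5,
        "lysozyme": 4.0,
    }

    def passes(pore_diameter_nm):
        """Return the molecules a pore of the given diameter lets through."""
        return [name for name, width in molecules.items()
                if width < pore_diameter_nm]

    print(passes(0.8))  # small pores admit only potassium chloride
    print(passes(2.0))  # larger pores also admit L-tryptophan and vitamin B12
    ```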

    The team carried out similar experiments with commercial dialysis membranes and found that, in comparison, the graphene membranes performed with higher “permeance,” filtering out the desired molecules up to 10 times faster.

    Kidambi points out that the polycarbonate support is etched with pores that only take up 10 percent of its surface area, which limits the amount of desired molecules that ultimately pass through both layers.

    “Only 10 percent of the membrane’s area is accessible, but even with that 10 percent, we’re able to do better than state-of-the-art,” Kidambi says.
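
    Under a simple area-scaling assumption (transport occurs only over the fraction of graphene backed by open support pores, and the support is otherwise non-limiting), raising the support’s open-area fraction should increase throughput roughly in proportion. A back-of-envelope sketch in arbitrary units:

    ```python
    def effective_permeance(intrinsic, open_area_fraction):
        # Transport happens only where graphene is backed by an open
        # support pore; the rest of the area is blocked by polycarbonate.
        return intrinsic * open_area_fraction

    baseline = effective_permeance(intrinsic=1.0, open_area_fraction=0.10)
    improved = effective_permeance(intrinsic=1.0, open_area_fraction=0.40)

    # Roughly 4x: quadrupling the open area quadruples throughput
    # under this simplified model.
    print(improved / baseline)
    ```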

    To make the graphene membrane even better, the team plans to improve the polycarbonate support by etching more pores into the material to increase the membrane’s overall permeance. They are also working to further scale up the dimensions of the membrane, which currently measures 1 square centimeter. Further tuning the oxygen plasma process to create tailored pores will also improve a membrane’s performance — something that Kidambi points out would have vastly different consequences for graphene in electronics applications.

    “What’s exciting is, what’s not great for the electronics field is actually perfect in this [membrane dialysis] field,” Kidambi says. “In electronics, you want to minimize defects. Here you want to make defects of the right size. It goes to show the end use of the technology dictates what you want in the technology. That’s the key.”

    This research was supported, in part, by the U.S. Department of Energy and a Lindemann Trust Fellowship.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 5:16 pm on June 28, 2017 Permalink | Reply
    Tags: A new way of extracting copper, MIT, Molten electrolysis   

    From MIT: “A new way of extracting copper” 

    MIT News


    June 28, 2017
    Denis Paiste

    MIT postdoc Sulata Sahu (left) and graduate student Brian Chmielowiec hold a sample of nearly pure copper deposited on an iron electrode.
    Photo: Denis Paiste/Materials Processing Center

    A new penny, at left, contrasts with samples of nearly pure copper deposited on an iron electrode after extraction through an electrochemical process. Photo: Denis Paiste/Materials Processing Center

    Researchers develop an electrically driven process to separate commercially important metals from sulfide minerals in one step without harmful byproducts.

    MIT researchers have identified the proper temperature and chemical mixture to selectively separate pure copper and other metallic trace elements from sulfur-based minerals using molten electrolysis. This one-step, environmentally friendly process simplifies metal production and eliminates toxic byproducts such as sulfur dioxide.

    Postdoc Sulata K. Sahu and PhD student Brian J. Chmielowiec ’12 decomposed sulfur-rich minerals into pure sulfur and extracted three different metals at very high purity: copper, molybdenum, and rhenium. They also quantified the amount of energy needed to run the extraction process.

    An electrolysis cell is a closed circuit, like a battery, but instead of producing electrical energy, it consumes electrical energy to break apart compounds into their elements — for example, splitting water into hydrogen and oxygen. Such electrolytic processes are the primary method of aluminum production and are used as the final step to remove impurities in copper production. Unlike aluminum, however, copper-containing sulfide minerals have no direct electrolytic decomposition process for producing liquid copper.

    The MIT researchers found a promising method of forming liquid copper metal and sulfur gas in their cell from an electrolyte composed of barium sulfide, lanthanum sulfide, and copper sulfide, which yields greater than 99.9 percent pure copper. This purity is equivalent to the best current copper production methods. Their results are published in an Electrochimica Acta paper with senior author Antoine Allanore, assistant professor of metallurgy.

    One-step process

    “It is a one-step process, directly just decompose the sulfide to copper and sulfur. Other previous methods are multiple steps,” Sahu explains. “By adopting this process, we are aiming to reduce the cost.”

    Copper is in increasing demand for use in electric vehicles, solar energy, consumer electronics, and other energy-efficiency applications. Most current copper extraction processes burn sulfide minerals in air, which produces sulfur dioxide, a harmful air pollutant that has to be captured and reprocessed, but the new method produces elemental sulfur, which can be safely reused, for example, in fertilizers. The researchers also used electrolysis to produce rhenium and molybdenum, which are often found in copper sulfides at very small levels.

    The new work builds on a 2016 Journal of The Electrochemical Society paper offering proof of electrolytic extraction of copper authored by Samira Sokhanvaran, Sang-Kwon Lee, Guillaume Lambotte, and Allanore. They showed that addition of barium sulfide to a copper sulfide melt suppressed copper sulfide’s electrical conductivity enough to extract a small amount of pure copper from the high-temperature electrochemical cell operating at 1,105 degrees Celsius (2,021 Fahrenheit). Sokhanvaran is now a research scientist at Natural Resources Canada-Canmet Mining; Lee is a senior researcher at Korea Atomic Energy Research Institute; and Lambotte is now a senior research engineer at Boston Electrometallurgical Corp.

    “The new paper shows that we can go further than that and almost make it fully ionic, that is reduce the share of electronic conductivity and therefore increase the efficiency to make metal,” Allanore says.

    These sulfide minerals are compounds where the metal and the sulfur elements share electrons. In their molten state, copper ions are missing one electron, giving them a positive charge, while sulfur ions are carrying two extra electrons, giving them a negative charge. The desired reaction in an electrolysis cell is to form elemental atoms, by adding electrons to metals such as copper, and taking away electrons from sulfur. This happens when extra electrons are introduced to the system by the applied voltage. The metal ions are reacting at the cathode, a negatively charged electrode, where they gain electrons in a process called reduction; meanwhile, the negatively charged sulfur ions are reacting at the anode, a positively charged electrode, where they give up electrons in a process called oxidation.
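
    The charge bookkeeping above follows Faraday’s law of electrolysis: the mass of metal reduced at the cathode is proportional to the charge passed. A minimal sketch for the single-electron Cu+ + e- -> Cu reduction described here (the current, run time, and efficiency values are illustrative, not from the paper):

    ```python
    F = 96485.0  # Faraday constant, C/mol

    def deposited_mass_g(current_A, time_s, molar_mass_g, n_electrons,
                         faradaic_eff):
        """Mass of metal reduced at the cathode, by Faraday's law."""
        charge = current_A * time_s                        # total charge, C
        moles = charge * faradaic_eff / (n_electrons * F)  # moles reduced
        return moles * molar_mass_g

    # Copper ions in the melt are missing one electron (Cu+), so n = 1.
    mass = deposited_mass_g(current_A=2.0, time_s=3600, molar_mass_g=63.55,
                            n_electrons=1, faradaic_eff=0.59)
    print(round(mass, 2))  # ~2.8 g of copper per hour at 2 A
    ```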

    In a cell using only copper sulfide, for example, the melt’s high electronic conductivity means the extra electrons would simply flow through the electrolyte without interacting with the individual ions of copper and sulfur at the electrodes, and no separation would occur. The Allanore Group researchers successfully identified other sulfide compounds that, when added to copper sulfide, change the behavior of the melt so that the ions, rather than electrons, become the primary charge carriers through the system and thus enable the desired chemical reactions. Technically speaking, the additives raise the bandgap of the copper sulfide so it is no longer electronically conductive, Chmielowiec explains. The fraction of electrons engaging in the oxidation and reduction reactions, measured as a percentage of the total current (the total electron flow in the cell), is called the cell’s faradaic efficiency.
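
    The faradaic efficiency defined here can be estimated experimentally by comparing the charge that ended up as metal with the total charge passed through the cell. A sketch with hypothetical measurements:

    ```python
    F = 96485.0  # Faraday constant, C/mol

    def faradaic_efficiency(metal_mass_g, molar_mass_g, n_electrons,
                            total_charge_C):
        """Fraction of the total charge that went into reducing metal ions."""
        useful_charge = metal_mass_g / molar_mass_g * n_electrons * F
        return useful_charge / total_charge_C

    # Hypothetical run: 1.0 g of copper recovered after passing 2573 C.
    eff = faradaic_efficiency(metal_mass_g=1.0, molar_mass_g=63.55,
                              n_electrons=1, total_charge_C=2573.0)
    print(round(eff, 2))  # ~0.59, the efficiency reported in the new paper
    ```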

    Doubling efficiency

    The new work doubles the efficiency of electrolytic copper extraction: from 28 percent in the first paper, with only barium sulfide added to the copper sulfide, to 59 percent in the second paper, with both lanthanum sulfide and barium sulfide added.

    “Demonstrating that we can perform faradaic reactions in a liquid metal sulfide is novel and can open the door to study many different systems,” Chmielowiec says. “It works for more than just copper. We were able to make rhenium, and we were able to make molybdenum.” Rhenium and molybdenum are industrially important metals finding use in jet airplane engines, for example. The Allanore laboratory also used molten electrolysis to produce zinc, tin and silver, but lead, nickel and other metals are possible, he suggests.

    The amount of energy required to run the separation process in an electrolysis cell is proportional to the faradaic efficiency and the cell voltage. For water, which was one of the first compounds to be separated by electrolysis, the minimum cell voltage, or decomposition energy, is 1.23 volts. Sahu and Chmielowiec identified the cell voltages in their cell as 0.06 volts for rhenium sulfide, 0.33 volts for molybdenum sulfide, and 0.45 volts for copper sulfide. “For most of our reactions, we apply 0.5 or 0.6 volts, so that the three sulfides are together reduced to metallic, rhenium, molybdenum and copper,” Sahu explains. At the cell operating temperature and at an applied potential of 0.5 to 0.6 volts, the system prefers to decompose those metals because the energy required to decompose both lanthanum sulfide — about 1.7 volts — and barium sulfide — about 1.9 volts — is comparatively much higher. Separate experiments also proved the ability to selectively reduce rhenium or molybdenum without reducing copper, based on their differing decomposition energies.
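
    Combining the cell voltage and faradaic efficiency gives a back-of-envelope specific energy for the extraction: energy per unit mass = n·F·V/(η·M). The sketch below uses the 0.45-volt copper sulfide decomposition voltage and 59 percent efficiency quoted in the article; it is a rough estimate, not the authors’ reported energy figure:

    ```python
    F = 96485.0  # Faraday constant, C/mol

    def specific_energy_kwh_per_kg(voltage, n_electrons, molar_mass_g,
                                   faradaic_eff):
        """Electrical energy needed per kilogram of metal produced."""
        joules_per_gram = n_electrons * F * voltage / (faradaic_eff * molar_mass_g)
        return joules_per_gram * 1000 / 3.6e6  # J/g -> kWh/kg

    e_cu = specific_energy_kwh_per_kg(voltage=0.45, n_electrons=1,
                                      molar_mass_g=63.55, faradaic_eff=0.59)
    print(round(e_cu, 2))  # ~0.32 kWh per kg of copper
    ```

    Raising the faradaic efficiency toward the ~95 percent achieved in aluminum electrolysis would lower this figure proportionally, which is one motivation for the cell-design work described below.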

    Industrial potential

    Important strategic and commodity metals, including copper, zinc, lead, rhenium, and molybdenum, are typically found in sulfide ores, and less commonly in oxide-based ores, as is the case for aluminum. “What’s typically done is you burn those in air to remove the sulfur, but by doing that you make SO2 [sulfur dioxide], and nobody is allowed to release that directly to air, so they have to capture it somehow. There are a lot of capital costs associated with capturing SO2 and converting it to sulfuric acid,” Chmielowiec explains.

    The closest industrial process to the electrolytic copper extraction they hope to see is aluminum production by an electrolytic process known as the Hall-Héroult process, which produces a pool of molten aluminum metal that can be continuously tapped. “The ideal is to run a continuous process,” Chmielowiec says. “So, in our case, you would maintain a constant level of liquid copper and then periodically tap that out of the electrolysis cell. A lot of engineering has gone into that for the aluminum industry, so we would hopefully piggyback off of that.”

    Sahu and Chmielowiec conducted their experiments at 1,227 degrees Celsius, about 150 degrees above the melting point of copper. This is the temperature commonly used in industry for copper extraction.

    Further improvements

    Aluminum electrolysis systems run at 95 percent faradaic efficiency, so there is room for improvement from the researchers’ reported 59 percent efficiency. To improve their cell efficiency, Sahu says, they may need to modify the cell design to recover a larger amount of liquid copper. The electrolyte can also be further tuned, adding sulfides other than barium sulfide and lanthanum sulfide. “There is no one single solution that will let us do that. It will be an optimization to move it up to larger scale,” Chmielowiec says. That work continues.

    Sahu, 34, received her PhD in chemistry from the University of Madras, in India. Chmielowiec, 27, a second-year doctoral student and a Salapatas Fellow in materials science and engineering, received his BS in chemical engineering at MIT in 2012 and an MS in chemical engineering from Caltech in 2014.

    The work fits into the Allanore Group’s work on high-temperature molten materials, including recent breakthroughs in developing new formulas to predict semiconductivity in molten compounds and demonstrating a molten thermoelectric cell to produce electricity from industrial waste heat. The Allanore Group is seeking a patent on certain aspects of the extraction process.

    Novel and significant work

    “Using intelligent design of the process chemistry, these researchers have developed a very novel route for producing copper,” says Rohan Akolkar, the F. Alex Nason Associate Professor of Chemical and Biomolecular Engineering at Case Western Reserve University, who was not involved in this work. “The researchers have engineered a process that has many of the key ingredients — it’s a cleaner, scalable, and simpler one-step process for producing copper from sulfide ore.”

    “Technologically, the authors appreciate the need to make the process more efficient while preserving the intrinsic purity of the copper produced,” says Akolkar, who visited the Allanore lab late last year. “If the technology is developed further and its techno-economics look favorable, then it may provide a potential pathway for simpler and cleaner production of copper metal, which is important to many applications.” Akolkar notes that “the quality of this work is excellent. The Allanore research group at MIT is at the forefront when it comes to advancing molten salt electrolysis research.”

    University of Rochester professor of chemical engineering Jacob Jorné says, “Current extraction processes involve multiple steps and require high capital investment, thus costly improvements are prohibited. Direct electrolysis of the metal sulfide ores is also advantageous as it eliminates the formation of sulfur dioxide, an acid rain pollutant.”

    “The electrochemistry and thermodynamics in molten salts are quite different than in aqueous [water-based] systems and the research of Allanore and his group demonstrates that a lot of good chemistry has been ignored in the past due to our slavish devotion to water,” Jorné suggests. “Direct electrolysis of metal ores opens the way to a metallurgical renaissance where new discoveries and processes can be implemented and can modernize the aging extraction industry and improve its energy efficiency. The new approach can be applied to other metals of high strategic importance such as the rare earth metals.”

    This work was supported by Norco Conservation and the Office of Naval Research.

    See the full article here .

