Tagged: MIT

  • richardmitnick 11:38 am on September 16, 2019
    Tags: Gaussian noise, MIT, non-Gaussian noise

    From MIT News and Dartmouth College: “Uncovering the hidden “noise” that can kill qubits” 

    MIT News

    From MIT News

    September 16, 2019
    Rob Matheson

    MIT and Dartmouth College researchers developed a tool that detects new characteristics of non-Gaussian “noise” that can destroy the fragile quantum superposition state of qubits, the fundamental components of quantum computers. Image courtesy of the researchers.

    New detection tool could be used to make quantum computers robust against unwanted environmental disturbances.

    MIT and Dartmouth College researchers have demonstrated, for the first time, a tool that detects new characteristics of environmental “noise” that can destroy the fragile quantum state of qubits, the fundamental components of quantum computers.

    The advance may provide insights into microscopic noise mechanisms to help engineer new ways of protecting qubits.

    Qubits can represent the two states corresponding to classical binary bits, a 0 or 1. But they can also maintain a “quantum superposition” of both states simultaneously, enabling quantum computers to solve complex problems that are practically impossible for classical computers.

    But a qubit’s quantum “coherence” — meaning its ability to maintain the superposition state — can fall apart due to noise coming from the environment around the qubit. Noise can arise from control electronics, heat, or impurities in the qubit material itself, and can also cause serious computing errors that may be difficult to correct.

    Researchers have developed statistics-based models to estimate the impact of unwanted noise sources surrounding qubits to create new ways to protect them, and to gain insights into the noise mechanisms themselves. But, those tools generally capture simplistic “Gaussian noise,” essentially the collection of random disruptions from a large number of sources. In short, it’s like white noise coming from the murmuring of a large crowd, where there’s no specific disruptive pattern that stands out, so the qubit isn’t particularly affected by any one particular source. In this type of model, the probability distribution of the noise would form a standard symmetrical bell curve, regardless of the statistical significance of individual contributors.

    In a paper published today in the journal Nature Communications, the researchers describe a new tool that, for the first time, measures “non-Gaussian noise” affecting a qubit. This noise features distinctive patterns that generally stem from a few particularly strong noise sources.

    The researchers designed techniques to separate that noise from the background Gaussian noise, and then used signal-processing techniques to reconstruct highly detailed information about those noise signals. Those reconstructions can help researchers build more realistic noise models, which may enable more robust methods to protect qubits from specific noise types. There is now a need for such tools, the researchers say: Qubits are being fabricated with fewer and fewer defects, which could increase the presence of non-Gaussian noise.

    “It’s like being in a crowded room. If everyone speaks with the same volume, there is a lot of background noise, but I can still maintain my own conversation. However, if a few people are talking particularly loudly, I can’t help but lock on to their conversation. It can be very distracting,” says William Oliver, an associate professor of electrical engineering and computer science, professor of the practice of physics, MIT Lincoln Laboratory Fellow, and associate director of the Research Laboratory for Electronics (RLE). “For qubits with many defects, there is noise that decoheres, but we generally know how to handle that type of aggregate, usually Gaussian noise. However, as qubits improve and there are fewer defects, the individuals start to stand out, and the noise may no longer be simply of a Gaussian nature. We can find ways to handle that, too, but we first need to know the specific type of non-Gaussian noise and its statistics.”
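
    Oliver’s analogy can be made concrete with a short numerical sketch (illustrative only, not from the paper; all amplitudes and switching rates are arbitrary): the sum of many weak, independent sources is Gaussian, with excess kurtosis near zero, while a single strong two-level fluctuator (random telegraph noise, one “loud voice”) produces noise whose higher-order statistics are clearly non-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 20_000

# Gaussian case: many weak, independent sources add up to a bell curve
# (central limit theorem): the murmur of a large crowd.
many_weak = rng.uniform(-1, 1, size=(200, n_steps)).sum(axis=0) / np.sqrt(200)

# Non-Gaussian case: one strong two-level fluctuator that randomly switches
# between two values (random telegraph noise): a single loud voice.
flips = rng.random(n_steps) < 0.01                # ~1% chance of switching per step
telegraph = np.cumprod(np.where(flips, -1, 1))    # +/-1 trajectory
one_strong = 3.0 * telegraph + 0.3 * rng.standard_normal(n_steps)

def excess_kurtosis(x):
    """Zero (up to sampling error) for Gaussian noise, nonzero otherwise."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3.0

print("many weak sources :", round(excess_kurtosis(many_weak), 2))   # ~ 0
print("one strong source :", round(excess_kurtosis(one_strong), 2))  # clearly nonzero
```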

    “It is not common for theoretical physicists to be able to conceive of an idea and also find an experimental platform and experimental colleagues willing to invest in seeing it through,” says co-author Lorenza Viola, a professor of physics at Dartmouth. “It was great to be able to come to such an important result with the MIT team.”

    Joining Oliver and Viola on the paper are: first author Youngkyu Sung, Fei Yan, Jack Y. Qiu, Uwe von Lüpke, Terry P. Orlando, and Simon Gustavsson, all of RLE; David K. Kim and Jonilyn L. Yoder of the Lincoln Laboratory; and Félix Beaudoin and Leigh M. Norris of Dartmouth.

    Pulse filters

    For their work, the researchers leveraged the fact that superconducting qubits are good sensors for detecting their own noise. Specifically, they used a “flux” qubit, which consists of a superconducting loop capable of detecting a particular type of disruptive noise, called magnetic flux noise, from its surrounding environment.

    In the experiments, they induced non-Gaussian “dephasing” noise by injecting engineered flux noise that disturbs the qubit and makes it lose coherence, which in turn is then used as a measuring tool. “Usually, we want to avoid decoherence, but in this case, how the qubit decoheres tells us something about the noise in its environment,” Oliver says.

    Specifically, they shot 110 “pi-pulses” — which are used to flip the states of qubits — in specific sequences over tens of microseconds. Each pulse sequence effectively created a narrow frequency “filter” that masked out much of the noise, except in a particular frequency band. By measuring the response of the qubit sensor to the bandpass-filtered noise, they extracted the noise power in that frequency band.

    By modifying the pulse sequences, they could move filters up and down to sample the noise at different frequencies. Notably, in doing so, they tracked how the non-Gaussian noise distinctly causes the qubit to decohere, which provided a high-dimensional spectrum of the non-Gaussian noise.
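
    The filtering idea can be sketched numerically. The toy code below (illustrative only, not the authors’ analysis code) builds the ±1 “toggling” function generated by a sequence of equally spaced pi-pulses and computes the magnitude squared of its Fourier transform. The result is a bandpass filter whose center frequency, roughly N/(2T) for N pulses over a total time T, moves up as the pulses are packed more densely, which is how modifying the pulse sequence samples the noise at different frequencies.

```python
import numpy as np

def filter_function(n_pulses, total_time, freqs, n_samples=4000):
    """|Fourier transform|^2 of the +/-1 toggling function for a sequence of
    equally spaced pi-pulses; each pulse flips the sign of the function."""
    t = np.linspace(0.0, total_time, n_samples, endpoint=False)
    pulse_times = (np.arange(n_pulses) + 0.5) * total_time / n_pulses
    y = (-1.0) ** np.searchsorted(pulse_times, t)      # toggling function y(t)
    dt = total_time / n_samples
    spectrum = [np.abs(np.sum(y * np.exp(1j * 2 * np.pi * f * t)) * dt) ** 2
                for f in freqs]
    return np.array(spectrum)

T = 20e-6                                   # 20-microsecond sequence
freqs = np.linspace(0.05e6, 2e6, 800)       # 0.05-2 MHz grid
for n in (8, 16, 32):                       # more pulses -> higher passband
    F = filter_function(n, T, freqs)
    peak = freqs[np.argmax(F)]
    print(f"{n:2d} pulses: passband centered near {peak / 1e6:.2f} MHz "
          f"(expected ~ {n / (2 * T) / 1e6:.2f} MHz)")
```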

    Error suppression and correction

    The key innovation behind the work is carefully engineering the pulses to act as specific filters that extract properties of the “bispectrum,” a two-dimensional representation that gives information about distinctive time correlations of non-Gaussian noise.

    Essentially, by reconstructing the bispectrum, they could find properties of non-Gaussian noise signals impinging on the qubit over time — ones that don’t exist in Gaussian noise signals. The general idea is that, for Gaussian noise, there are only correlations between pairs of points in time, referred to as “second-order time correlations.” But for non-Gaussian noise, the properties at one point in time will directly correlate to properties at multiple future points. Such “higher-order” correlations are the hallmark of non-Gaussian noise. In this work, the authors were able to extract noise with correlations between three points in time.
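
    As a rough numerical illustration of that idea (not the authors’ reconstruction method), a direct bispectrum estimate, averaged over many data segments, shrinks toward zero for Gaussian noise but stays large for a process with genuine three-point correlations, such as the square of a Gaussian signal:

```python
import numpy as np

def bispectrum(x, seg_len=256, n_bins=32):
    """Direct bispectrum estimate B(f1, f2) = <X(f1) X(f2) conj(X(f1+f2))>,
    averaged over non-overlapping segments of the time series x."""
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    segs = segs - segs.mean(axis=1, keepdims=True)
    X = np.fft.rfft(segs, axis=1)                     # shape (n_seg, seg_len//2 + 1)
    B = np.zeros((n_bins, n_bins), dtype=complex)
    for a in range(1, n_bins + 1):
        for b in range(1, n_bins + 1):
            B[a - 1, b - 1] = np.mean(X[:, a] * X[:, b] * np.conj(X[:, a + b]))
    return B

rng = np.random.default_rng(1)
gauss = rng.standard_normal(200_000)            # only second-order correlations
skewed = rng.standard_normal(200_000) ** 2      # has genuine third-order structure

print("mean |B|, Gaussian noise :", round(float(np.abs(bispectrum(gauss)).mean()), 1))
print("mean |B|, skewed noise   :", round(float(np.abs(bispectrum(skewed)).mean()), 1))
```

    In the experiment the relevant record is the qubit’s dephasing rather than a directly sampled voltage, and the reconstruction goes through the engineered pulse filters, but the underlying statistical object is the same bispectrum.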

    This information can help programmers validate and tailor dynamical error suppression and error-correcting codes for qubits, which fix noise-induced errors and ensure accurate computation.

    Such protocols use information from the noise model to make implementations that are more efficient for practical quantum computers. But because the details of noise aren’t yet well understood, today’s error-correcting codes are designed with that standard bell curve in mind. With the researchers’ tool, programmers can either gauge how effectively their code will work in realistic scenarios or start to zero in on non-Gaussian noise.

    Keeping with the crowded-room analogy, Oliver says: “If you know there’s only one loud person in the room, then you’ll design a code that effectively muffles that one person, rather than trying to address every possible scenario.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 2:36 pm on September 12, 2019
    Tags: MIT, Sonia Reilly, The math of machine learning to improve predictions of natural disasters, UROP-Undergraduate Research Opportunities Program, We want to transform big data into perhaps what we might call smart data.

    From MIT News: Women in STEM “Computing in Earth science: a non-linear path” Sonia Reilly 

    MIT News

    From MIT News

    September 11, 2019
    Laura Carter | School of Science

    Course 18C student Sonia Reilly (right) stands with her UROP research advisor, Sai Ravela, in the Department of Earth, Atmospheric and Planetary Sciences, where she worked on a machine learning project this summer. Photo: Lauren Hinkel

    UROP student Sonia Reilly studies the math of machine learning to improve predictions of natural disasters.

    Machine learning is undeniably a tool that most disciplines like to have in their toolbox. However, scientists are still investigating the limits and barriers to incorporating machine learning into their research. Junior Sonia Reilly spent her summer opening up the machine learning black box to better understand how information flows through neural networks as part of the Undergraduate Research Opportunities Program (UROP). Her project, which investigates how machine learning works with the intention of improving its application to the observation of natural phenomena, was overseen by Sai Ravela in the Department of Earth, Atmospheric and Planetary Sciences (EAPS). As a major in Course 18C (Mathematics with Computer Science), Reilly is uniquely equipped to help investigate these connections.

    “In recent years, deep learning has become an immensely popular tool in all kinds of research fields, but the mathematics of how and why it is so effective is still very poorly understood,” says Reilly. “Having that knowledge will enable the design of better-performing learning machines.” To do that, she looks more closely at how the algorithms evolve to produce their final most-probable conclusions, with the end goal of providing insights on information flow, bottlenecks, and maximizing gain from neural networks.

    “We don’t want to be drowning in big data. On the contrary, we want to transform big data into perhaps what we might call smart data,” Ravela says of how machine learning must proceed. “The end goal is always a sensing agent that gathers data from our environment, but one that is knowledge-driven and does just enough work to gather just enough information for meaningful inferences.”

    For Ravela, who leads the Earth Signals and Systems Group (ESSG), better-performing learning machines mean more robust early predictions of potential disasters. His group’s research lies largely in how the Earth works as a system, primarily focusing on climate and natural hazards. They observe natural phenomena to produce effective predictive models for dynamic natural processes, such as hurricanes, clouds, volcanoes, earthquakes, glaciers, and wildlife conservation strategies, as well as making advances in engineering and learning itself.

    “In all these projects, it’s impossible to gather dense data in space and time. We show that actively mining the environment through a systems analytic approach is promising,” he says. Ravela recently delivered his group’s latest work — including Reilly’s contributions — to the Association for Computing Machinery’s special interest group on knowledge discovery and data mining (SIGKDD 2019) in early August. He teaches an “infinite course,” a two-part sequence of classes taught in the spring and fall semesters, that provides an overview of machine learning foundations for natural systems science, which anyone can follow along with online.

    According to Ravela, if Reilly is to succeed at advancing the mathematical basis for computational learning models, she will be one of the “early pioneers of learning that can be explained,” an achievement that can provide a promising career path.

    That is ideal for Reilly’s goals of obtaining a PhD in mathematics after graduating from MIT and remaining a contributor to research that can positively impact the world. She’s starting by cramming as much research as she can manage into her schedule over her final two undergraduate years at MIT, including her experience this summer.

    Although this was Reilly’s first UROP experience, it is her second time undertaking a research project that blends mathematics, computer science, and Earth science. Previously, at the Johns Hopkins University Applied Physics Laboratory, Reilly helped develop signal processing techniques and software that would improve the retrieval of useful climate change information from low-quality satellite data.

    “I’ve always wanted to be part of an interdisciplinary research environment where I could use my knowledge of math to contribute to the work of scientists and engineers,” Reilly says of working within EAPS. “It’s encouraging to see that type of environment and get a taste of what it would be like to work in one.”

    Ravela explains that the ESSG is fond of the mutually beneficial inclusion of UROP students. “For me, UROPs are better than grad students and postdocs if, and only if, one can create the right-sized questions for them to run with. But then they run the fastest and are the most clever of all.” He says he feels the UROP program is invaluable and could benefit all students, as it offers a chance to learn about other fields and interdisciplinary research, as well as how to turn what they learn into tangible results.

    For Reilly, research builds on her foundation obtained from taking classes at MIT, which are a controlled and predictable environment, she says, “but research is nowhere near so linear.” She has relied on her foundation of mathematics and computer science from her courses during her UROP experience while having to learn how to connect and apply them to new fields and to consider topics often outside an undergraduate education. “It often feels like every step I take requires me to learn about an entirely new field of mathematics, and it’s difficult to know where to start. I definitely feel lost sometimes, but I’m also learning an incredible amount.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 11:04 am on September 11, 2019
    Tags: "The answer to life, the universe, and everything", 32 is unsolvable., 65-year-old problem about 42, Andrew Booker of Bristol University, Are there three cubes whose sum is 42?, Diophantine Equation x^3+y^3+z^3=k, Drew Sutherland, MIT

    From MIT News: “The answer to life, the universe, and everything” 

    MIT News

    From MIT News

    September 10, 2019
    Sandi Miller | Department of Mathematics

    MIT mathematician Andrew “Drew” Sutherland solved a 65-year-old problem about 42. Image: Department of Mathematics

    This plot by Andrew Sutherland depicts the computation times for each of the 400,000-plus jobs that his team ran on Charity Engine’s compute grid. Each job was assigned a range of the parameter d = |x + y|, which must be a divisor of |z^3 - 42| for any integer solution to x^3 + y^3 + z^3 = 42. Each dot in the plot represents 25 jobs plotted according to their median runtime, with purple dots representing “smooth” values of d (those with no large prime divisors), and blue dots representing non-smooth values of d — the algorithm handles these two cases differently.
    Image: Andrew Sutherland

    Mathematics researcher Drew Sutherland helps solve decades-old sum-of-three-cubes puzzle, with help from “The Hitchhiker’s Guide to the Galaxy.”

    A team led by Andrew Sutherland of MIT and Andrew Booker of Bristol University has solved the final piece of a famous 65-year-old math puzzle with an answer for the most elusive number of all: 42.

    The number 42 is especially significant to fans of science fiction novelist Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy,” because that number is the answer given by a supercomputer to “the Ultimate Question of Life, the Universe, and Everything.”

    Booker also wanted to know the answer to 42. That is, are there three cubes whose sum is 42?

    This sum of three cubes puzzle, first set in 1954 at the University of Cambridge and known as the Diophantine equation x^3 + y^3 + z^3 = k, challenged mathematicians to find solutions for the numbers 1-100. With smaller numbers, this type of equation is easier to solve: for example, 29 could be written as 3^3 + 1^3 + 1^3, while 32 is unsolvable. All were eventually solved, or proved unsolvable, using various techniques and supercomputers, except for two numbers: 33 and 42.
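
    For small k and a small search range, the equation can be attacked by brute force; the illustrative sketch below (nothing like the scale of the real computation) finds a small representation of 29 and, as expected, finds none for 42, whose smallest solution has 17-digit terms.

```python
def three_cubes(k, bound):
    """Brute-force search for x^3 + y^3 + z^3 = k with |x|, |y|, |z| <= bound."""
    cube_to_root = {n ** 3: n for n in range(-bound, bound + 1)}
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):               # x <= y avoids duplicate pairs
            z = cube_to_root.get(k - x ** 3 - y ** 3)
            if z is not None:
                return (x, y, z)
    return None

print(three_cubes(29, 10))    # a small representation, e.g. (-3, -2, 4)
print(three_cubes(42, 500))   # None: no solution exists at this small scale
```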

    Booker devised an ingenious algorithm and spent weeks on his university’s supercomputer when he recently came up with a solution for 33. But when he turned to solving for 42, Booker found that the computing power needed was an order of magnitude greater and might be beyond his supercomputer’s capability. Booker says he received many offers of help to find the answer, but instead he turned to his friend Andrew “Drew” Sutherland, a principal research scientist in MIT’s Department of Mathematics. “He’s a world’s expert at this sort of thing,” Booker says.

    Sutherland, whose specialty includes massively parallel computations, broke the record in 2017 for the largest Compute Engine cluster, with 580,000 cores on Preemptible Virtual Machines, the largest known high-performance computing cluster to run in the public cloud.

    Like other computational number theorists who work in arithmetic geometry, he was aware of the “sum of three cubes” problem. And the two had worked together before, helping to build the L-functions and Modular Forms Database (LMFDB), an online atlas of mathematical objects related to what is known as the Langlands Program. “I was thrilled when Andy asked me to join him on this project,” says Sutherland.

    Booker and Sutherland discussed the algorithmic strategy to be used in the search for a solution to 42. As Booker found with his solution to 33, they knew they didn’t have to resort to trying all of the possibilities for x, y, and z.

    “There is a single integer parameter, d, that determines a relatively small set of possibilities for x, y, and z such that the absolute value of z is below a chosen search bound B,” says Sutherland. “One then enumerates values for d and checks each of the possible x, y, z associated to d. In the attempt to crack 33, the search bound B was 10^16, but this B turned out to be too small to crack 42; we instead used B = 10^17 (10^17 is 100 million billion).”
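
    A hedged sketch of the algebra behind that parameterization (a toy version, not Booker and Sutherland’s optimized code): writing s = x + y, so that d = |s|, the identity x^3 + y^3 = s^3 - 3xys means that once s and z are fixed, the product xy is determined, and x and y can be recovered, when they exist, as the integer roots of a quadratic. The real algorithm gains its speed by enumerating, for each d, only values of z with z^3 ≡ k (mod d), since d must divide z^3 - k.

```python
from math import isqrt

def recover_xy(k, s, z):
    """Given s = x + y and z, try to solve x^3 + y^3 = k - z^3 in integers.
    From x^3 + y^3 = s^3 - 3*x*y*s we get x*y = (s^3 - (k - z^3)) / (3*s),
    and x, y are then the roots of t^2 - s*t + x*y = 0."""
    rhs = k - z ** 3                      # the value x^3 + y^3 must take
    if s == 0 or rhs % s:                 # s = x + y must divide x^3 + y^3
        return None
    num = s ** 3 - rhs
    if num % (3 * s):
        return None
    xy = num // (3 * s)
    disc = s * s - 4 * xy                 # discriminant of t^2 - s*t + xy
    if disc < 0 or isqrt(disc) ** 2 != disc:
        return None
    r = isqrt(disc)
    if (s + r) % 2:
        return None
    return (s + r) // 2, (s - r) // 2

# Toy check against the small representation 29 = (-3)^3 + (-2)^3 + 4^3,
# where s = x + y = -5 and z = 4:
print(recover_xy(29, s=-5, z=4))          # -> (-2, -3)
```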

    Otherwise, the main difference between the search for 33 and the search for 42 would be the size of the search and the computer platform used. Thanks to a generous offer from UK-based Charity Engine, Booker and Sutherland were able to tap into the computing power from over 400,000 volunteers’ home PCs, all around the world, each of which was assigned a range of values for d. The computation on each PC runs in the background so the owner can still use their PC for other tasks.

    Sutherland is also a fan of Douglas Adams, so the project was irresistible.

    The method of using Charity Engine is similar to part of the plot surrounding the number 42 in the “Hitchhiker” novel: After Deep Thought’s answer of 42 proves unsatisfying to the scientists, who don’t know the question it is meant to answer, the supercomputer decides to compute the Ultimate Question by building a supercomputer powered by Earth … in other words, employing a worldwide massively parallel computation platform.

    “This is another reason I really liked running this computation on Charity Engine — we actually did use a planetary-scale computer to settle a longstanding open question whose answer is 42.”

    They ran a number of computations at a lower capacity to test both their code and the Charity Engine network. They then used a number of optimizations and adaptations to make the code better suited for a massively distributed computation, compared to a computation run on a single supercomputer, says Sutherland.

    Why couldn’t Bristol’s supercomputer solve this problem?

    “Well, any computer *can* solve the problem, provided you are willing to wait long enough, but with roughly half a million PCs working on the problem in parallel (each with multiple cores), we were able to complete the computation much more quickly than we could have using the Bristol machine (or any of the machines here at MIT),” says Sutherland.

    Using the Charity Engine network is also more energy-efficient. “For the most part, we are using computational resources that would otherwise go to waste,” says Sutherland. “When you’re sitting at your computer reading an email or working on a spreadsheet, you are using only a tiny fraction of the CPU resource available, and the Charity Engine application, which is based on the Berkeley Open Infrastructure for Network Computing (BOINC), takes advantage of this. As a result, the carbon footprint of this computation — related to the electricity our computations caused the PCs in the network to use above and beyond what they would have used, in any case — is lower than it would have been if we had used a supercomputer.”

    Sutherland and Booker ran the computations over several months, but the final successful run was completed in just a few weeks. When the email from Charity Engine arrived, it provided the first solution to x^3 + y^3 + z^3 = 42:

    42 = (-80538738812075974)^3 + 80435758145817515^3 + 12602123297335631^3
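
    The result is easy to verify directly with Python’s arbitrary-precision integers:

```python
x, y, z = -80538738812075974, 80435758145817515, 12602123297335631
assert x**3 + y**3 + z**3 == 42
print(x**3 + y**3 + z**3)   # prints 42
```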

    “When I heard the news, it was definitely a fist-pump moment,” says Sutherland. “With these large-scale computations you pour a lot of time and energy into optimizing the implementation, tweaking the parameters, and then testing and retesting the code over weeks and months, never really knowing if all the effort is going to pay off, so it is extremely satisfying when it does.”

    Booker and Sutherland say there are 10 more numbers, from 101-1000, left to be solved, with the next number being 114.

    But both are more interested in a simpler but computationally more challenging puzzle: whether there are more answers for the sum of three cubes for 3.

    “There are four very easy solutions that were known to the mathematician Louis J. Mordell, who famously wrote in 1953, ‘I do not know anything about the integer solutions of x^3 + y^3 + z^3 = 3 beyond the existence of the four triples (1, 1, 1), (4, 4, -5), (4, -5, 4), (-5, 4, 4); and it must be very difficult indeed to find out anything about any other solutions.’ This quote motivated a lot of the interest in the sum of three cubes problem, and the case k=3 in particular. While it is conjectured that there should be infinitely many solutions, despite more than 65 years of searching we know only the easy solutions that were already known to Mordell. It would be very exciting to find another solution for k=3.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 12:41 pm on September 6, 2019
    Tags: Cryogenic transmission electron microscopy, Epitaxial growth, Gold loves to grow into little triangles., Gold triangles may be useful as photonic and plasmonic structures., MIT, Recording movies using electron microscopy., Self-assembled structures, Ultimately we’d like to develop techniques for growing well-defined structures out of metal oxides especially if we can control the composition at each location on the structure., What happens when these 2-D materials and ordinary 3-D materials come together.

    From MIT News: “Creating new opportunities from nanoscale materials” 

    MIT News

    From MIT News

    September 5, 2019
    Denis Paiste | Materials Research Laboratory

    Left to right: Postdoc Shu Fen Tan, graduate student Kate Reidy, and Professor Frances Ross, all of the Department of Materials Science and Engineering, sit in front of a high vacuum evaporator system. The equipment was housed temporarily in MIT.nano while Ross’s lab was built out in Building 13. Photo: Denis Paiste/Materials Research Laboratory.

    MIT Professor Frances Ross has designed several custom sample holders for examining nanoscale materials in gases and liquid media in the electron microscope. For liquid environments, thin windows of silicon nitride surround the liquid but allow the electron beam to pass through. For gas environments, the sample holder (shown here) must heat and tilt the sample without compromising its cleanliness. Photo: Denis Paiste/Materials Research Laboratory.

    When gold is deposited on “dirty” graphene (left), blobs of gold collect around impurities. But when gold grows on graphene that has been heated and cleansed of impurities (right), it forms perfect triangles of gold. Images: Kate Reidy/MIT.

    Professor Frances Ross (left), graduate student Kate Reidy (center), and postdoc Shu Fen Tan work together at the high vacuum evaporator chamber that is part of an electron microscopy suite donated to MIT by IBM. Photo: Denis Paiste/Materials Research Laboratory.

    Niobium deposited on top of graphene produces structures that look like the frost patterns formed on the inside of windows in winter, or the feathery patterns of some ferns. They are called dendritic structures. Image: Kate Reidy/MIT.

    An electron diffraction image of niobium deposited on top of graphene shows that certain crystal planes of niobium align with the crystal planes of the graphene, which is known as epitaxial growth. When a 3-D material is grown on top of a 2-D layer, this perfectly aligned atomic arrangement is often important for device makers. Image: Kate Reidy/MIT.

    Clean deposition of gold nanoislands on molybdenum disulfide MoS2 with visible moiré patterns. Image: Kate Reidy/MIT.

    A hundred years ago, “2d” meant a two-penny, or 1-inch, nail. Today, “2-D” encompasses a broad range of atomically thin flat materials, many with exotic properties not found in the bulk equivalents of the same materials, with graphene — the single-atom-thick form of carbon — perhaps the most prominent. While many researchers at MIT and elsewhere are exploring two-dimensional materials and their special properties, Frances M. Ross, the Ellen Swallow Richards Professor in Materials Science and Engineering, is interested in what happens when these 2-D materials and ordinary 3-D materials come together.

    “We’re interested in the interface between a 2-D material and a 3-D material because every 2-D material that you want to use in an application, such as an electronic device, still has to talk to the outside world, which is three-dimensional,” Ross says.

    “We’re at an interesting time because there are immense developments in instrumentation for electron microscopy, and there is great interest in materials with very precisely controlled structures and properties, and these two things cross in a fascinating way,” says Ross.

    “The opportunities are very exciting,” Ross says. “We’re going to be really improving the characterization capabilities here at MIT.” Ross specializes in examining how nanoscale materials grow and react in both gases and liquid media, by recording movies using electron microscopy. Microscopy of reactions in liquids is particularly useful for understanding the mechanisms of electrochemical reactions that govern the performance of catalysts, batteries, fuel cells, and other important technologies. “In the case of liquid phase microscopy, you can also look at corrosion where things dissolve away, while in gases you can look at how individual crystals grow or how materials react with, say, oxygen,” she says.

    Ross joined the Department of Materials Science and Engineering (DMSE) faculty last year, moving from the nanoscale materials analysis department at the IBM Thomas J. Watson Research Center. “I learned a tremendous amount from my IBM colleagues and hope to extend our research in material design and growth in new directions,” she says.

    Recording movies

    During a recent visit to her lab, Ross explained an experimental setup donated to MIT by IBM. An ultra-high vacuum evaporator system arrived first, to be attached later directly onto a specially designed transmission electron microscope. “This gives powerful possibilities,” Ross explains. “We can put a sample in the vacuum, clean it, do all sorts of things to it such as heating and adding other materials, then transfer it under vacuum into the microscope, where we can do more experiments while we record images. So we can, for example, deposit silicon or germanium, or evaporate metals, while the sample is in the microscope and the electron beam is shining through it, and we are recording a movie of the process.”

    While waiting this spring for the transmission electron microscope to be set up, members of Ross’ seven-member research group, including materials science and engineering postdoc Shu Fen Tan and graduate student Kate Reidy, made and studied a variety of self-assembled structures. The evaporator system was housed temporarily on the fifth-level prototyping space of MIT.nano while Ross’s lab was being readied in Building 13. “MIT.nano had the resources and space; we were happy to be able to help,” says Anna Osherov, MIT.nano assistant director of user services.

    “All of us are interested in this grand challenge of materials science, which is: ‘How do you make a material with the properties you want and, in particular, how do you use nanoscale dimensions to tweak the properties, and create new properties, that you can’t get from bulk materials?’” Ross says.

    Using the ultra-high vacuum system, graduate student Kate Reidy formed structures of gold and niobium on several 2-D materials. “Gold loves to grow into little triangles,” Ross notes. “We’ve been talking to people in physics and materials science about which combinations of materials are the most important to them in terms of controlling the structures and the interfaces between the components in order to give some improvement in the properties of the material,” she notes.

    Shu Fen Tan synthesized nickel-platinum nanoparticles and examined them using another technique, liquid cell electron microscopy. She could arrange for only the nickel to dissolve, leaving behind spiky skeletons of platinum. “Inside the liquid cell, we are able to see this whole process at high spatial and temporal resolutions,” Tan says. She explains that platinum is a noble metal and less reactive than nickel, so under the right conditions the nickel participates in an electrochemical dissolution reaction and the platinum is left behind.

    Platinum is a well-known catalyst in organic chemistry and fuel cell materials, Tan notes, but it is also expensive, so finding combinations with less-expensive materials such as nickel is desirable.

    “This is an example of the range of materials reactions you can image in the electron microscope using the liquid cell technique,” Ross says. “You can grow materials; you can etch them away; you can look at, for example, bubble formation and fluid motion.”

    A particularly important application of this technique is to study cycling of battery materials. “Obviously, I can’t put an AA battery in here, but you could set up the important materials inside this very small liquid cell and then you can cycle it back and forth and ask, if I charge and discharge it 10 times, what happens? It does not work just as well as before — how does it fail?” Ross asks. “Some kind of failure analysis and all the intermediate stages of charging and discharging can be observed in the liquid cell.”

    “Microscopy experiments where you see every step of a reaction give you a much better chance of understanding what’s going on,” Ross says.

    Moiré patterns

    Graduate student Reidy is interested in how to control the growth of gold on 2-D materials such as graphene, tungsten diselenide, and molybdenum disulfide. When she deposited gold on “dirty” graphene, blobs of gold collected around the impurities. But when Reidy grew gold on graphene that had been heated and cleaned of impurities, she found perfect triangles of gold. Depositing gold on both the top and bottom sides of clean graphene, Reidy saw in the microscope features known as moiré patterns, which are caused when the overlapping crystal structures are out of alignment.
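
    As a rough rule of thumb (an approximation not stated in the article), when two identical lattices with lattice constant a are rotated by a small angle theta, the moiré superlattice period is about a / (2 sin(theta/2)), so even a misalignment of a degree or so produces a pattern tens of times larger than the atomic spacing; a mismatch in lattice constant, as between gold and graphene, adds a similar magnifying effect. The sketch below evaluates this for graphene-like numbers.

```python
import math

def moire_period(a_nm, twist_deg):
    """Approximate moire period for two identical lattices with constant a_nm
    (in nanometers) rotated by twist_deg degrees: L = a / (2 * sin(theta / 2))."""
    theta = math.radians(twist_deg)
    return a_nm / (2.0 * math.sin(theta / 2.0))

a_graphene = 0.246   # graphene lattice constant in nm (approximate)
for twist in (0.5, 1.1, 5.0):
    print(f"twist {twist:3.1f} deg -> moire period ~ {moire_period(a_graphene, twist):5.1f} nm")
```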

    Physicists at MIT and Harvard University have found that graphene, a lacy, honeycomb-like sheet of carbon atoms, can behave at two electrical extremes: as an insulator, in which electrons are completely blocked from flowing; and as a superconductor, in which electrical current can stream through without resistance. Courtesy of the researchers.

    The gold triangles may be useful as photonic and plasmonic structures. “We think this could be important for a lot of applications, and it is always interesting for us to see what happens,” Reidy says. She is planning to extend her clean growth method to form 3-D metal crystals on stacked 2-D materials with various rotation angles and other mixed-layer structures. Reidy is interested in the properties of graphene and hexagonal boron nitride (hBN), as well as two materials that are semiconducting in their 2-D single-layer form, molybdenum disulfide (MoS2) and tungsten diselenide (WSe2). “One aspect that’s very interesting in the 2-D materials community is the contacts between 2-D materials and 3-D metals,” Reidy says. “If they want to make a semiconducting device or a device with graphene, the contact could be ohmic for the graphene case or a Schottky contact for the semiconducting case, and the interface between these materials is really, really important.”

    “You can also imagine devices using the graphene just as a spacer layer between two other materials,” Ross adds.

    For device makers, Reidy says it is sometimes important to have a 3-D material grow with its atomic arrangement aligned perfectly with the atomic arrangement in the 2-D layer beneath. This is called epitaxial growth. Describing an image of gold grown together with silver on graphene, Reidy explains, “We found that silver doesn’t grow epitaxially, it doesn’t make those perfect single crystals on graphene that we wanted to make, but by first depositing the gold and then depositing silver around it, we can almost force silver to go into an epitaxial shape because it wants to conform to what its gold neighbors are doing.”

    Electron microscope images can also show imperfections in a crystal such as rippling or bending, Reidy notes. “One of the great things about electron microscopy is that it is very sensitive to changes in the arrangement of the atoms,” Ross says. “You could have a perfect crystal and it would all look the same shade of gray, but if you have a local change in the structure, even a subtle change, electron microscopy can pick it up. Even if the change is just within the top few layers of atoms without affecting the rest of the material beneath, the image will show distinctive features that allow us to work out what’s going on.”

    Reidy also is exploring the possibilities of combining niobium — a metal that is superconducting at low temperatures — with a 2-D topological insulator, bismuth telluride. Topological insulators have fascinating properties whose discovery resulted in the Nobel Prize in Physics in 2016. “If you deposit niobium on top of bismuth telluride, with a very good interface, you can make superconducting junctions. We’ve been looking into niobium deposition, and rather than triangles we see structures that are more dendritic looking,” Reidy says. Dendritic structures look like the frost patterns formed on the inside of windows in winter, or the feathery patterns of some ferns. Changing the temperature and other conditions during the deposition of niobium can change the patterns that the material takes.

    All the researchers are eager for new electron microscopes to arrive at MIT.nano to give further insights into the behavior of these materials. “Many things will happen within the next year, things are ramping up already, and I have great people to work with. One new microscope is being installed now in MIT.nano and another will arrive next year. The whole community will see the benefits of improved microscopy characterization capabilities here,” Ross says.

    MIT.nano’s Osherov notes that two cryogenic transmission electron microscopes (cryo-TEM) are installed and running. “Our goal is to establish a unique microscopy-centered community. We encourage and hope to facilitate a cross-pollination between the cryo-EM researchers, primarily focused on biological applications and ‘soft’ material, as well as other research communities across campus,” she says. The latest addition of a scanning transmission electron microscope with enhanced analytical capabilities (ultrahigh energy resolution monochromator, 4-D STEM detector, Super-X EDS detector, tomography, and several in situ holders) brought in by John Chipman Associate Professor of Materials Science and Engineering James M. LeBeau, once installed, will substantially enhance the microscopy capabilities of the MIT campus. “We consider Professor Ross to be an immense resource for advising us in how to shape the in situ approach to measurements using the advanced instrumentation that will be shared and available to all the researchers within the MIT community and beyond,” Osherov says.

    Little drinking straws

    “Sometimes you know more or less what you are going to see during a growth experiment, but very often there’s something that you don’t expect,” Ross says. She shows an example of zinc oxide nanowires that were grown using a germanium catalyst. Some of the long crystals have a hole through their centers, creating structures which are like little drinking straws, circular outside but with a hexagonally shaped interior. “This is a single crystal of zinc oxide, and the interesting question for us is why do the experimental conditions create these facets inside, while the outside is smooth?” Ross asks. “Metal oxide nanostructures have so many different applications, and each new structure can show different properties. In particular, by going to the nanoscale you get access to a diverse set of properties.”

    “Ultimately, we’d like to develop techniques for growing well-defined structures out of metal oxides, especially if we can control the composition at each location on the structure,” Ross says. A key to this approach is self-assembly, where the material builds itself into the structure you want without having to individually tweak each component. “Self-assembly works very well for certain materials but the problem is that there’s always some uncertainty, some randomness or fluctuations. There’s poor control over the exact structures that you get. So the idea is to try to understand self-assembly well enough to be able to control it and get the properties that you want,” Ross says.

    “We have to understand how the atoms end up where they are, then use that self-assembly ability of atoms to make a structure we want. The way to understand how things self-assemble is to watch them do it, and that requires movies with high spatial resolution and good time resolution,” Ross explains. Electron microscopy can be used to acquire structural and compositional information and can even measure strain fields or electric and magnetic fields. “Imagine recording all of these things, but in a movie where you are also controlling how materials grow within the microscope. Once you have made a movie of something happening, you analyze all the steps of the growth process and use that to understand which physical principles were the key ones that determined how the structure nucleated and evolved and ended up the way it does.”

    Future directions

    Ross hopes to bring in a unique high-resolution, high vacuum TEM with capabilities to image materials growth and other dynamic processes. She intends to develop new capabilities for both water-based and gas-based environments. This custom microscope is still in the planning stages but will be situated in one of the rooms in the Imaging Suite in MIT.nano.

    “Professor Ross is a pioneer in this field,” Osherov says. “The majority of TEM studies to date have been static, rather than dynamic. With static measurements you are observing a sample at one particular snapshot in time, so you don’t gain any information about how it was formed. Using dynamic measurements, you can look at the atoms hopping from state to state until they find the final position. The ability to observe self-assembling processes and growth in real time provides valuable mechanistic insights. We’re looking forward to bringing these advanced capabilities to MIT.nano.”

    “Once a certain technique is disseminated to the public, it brings attention,” Osherov says. “When results are published, researchers expand their vision of experimental design based on available state-of-the-art capabilities, leading to many new experiments that will be focused on dynamic applications.”

    Rooms in MIT.nano feature the quietest space on the MIT campus, designed to reduce vibrations and electromagnetic interference to as low a level as possible. “There is space available for Professor Ross to continue her research and to develop it further,” Osherov says. “The ability of in situ monitoring the formation of matter and interfaces will find applications in multiple fields across campus, and lead to a further push of the conventional electron microscopy limits.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 7:44 pm on September 5, 2019
    Tags: An exotic physical phenomenon involving optical waves; synthetic magnetic fields; and time reversal has been directly observed for the first time following decades of attempts., At this point the experiment is primarily of interest for fundamental physics research with the aim of gaining a better understanding of some basic underpinnings of modern physical theory., For one thing for quantum computation the experiment would need to be scaled up from one single device to likely a whole lattice of them., It would require working with a source of single individual photons., MIT, Observation of the predicted non-Abelian Aharonov-Bohm Effect may offer step toward fault-tolerant quantum computers., The finding relates to gauge fields which describe transformations that particles undergo., The new finding could lead to realizations of what are known as topological phases, The non-Abelian Berry phase is a theoretical gem that is the doorway to understanding many intriguing ideas in contemporary physics   

    From MIT News: “Exotic physics phenomenon is observed for first time” 

    MIT News

    From MIT News

    September 5, 2019
    David L. Chandler

    Images showing interference patterns (top) and a Wilson loop (bottom) were produced by the researchers to confirm the presence of non-Abelian gauge fields created in the research. Image courtesy of the researchers.

    Observation of the predicted non-Abelian Aharonov-Bohm Effect may offer step toward fault-tolerant quantum computers.

    An exotic physical phenomenon, involving optical waves, synthetic magnetic fields, and time reversal, has been directly observed for the first time, following decades of attempts. The new finding could lead to realizations of what are known as topological phases, and eventually to advances toward fault-tolerant quantum computers, the researchers say.

    The new finding involves the non-Abelian Aharonov-Bohm Effect and is reported today in the journal Science by MIT graduate student Yi Yang, MIT visiting scholar Chao Peng (a professor at Peking University), MIT graduate student Di Zhu, Professor Hrvoje Buljan at University of Zagreb in Croatia, Francis Wright Davis Professor of Physics John Joannopoulos at MIT, Professor Bo Zhen at the University of Pennsylvania, and MIT professor of physics Marin Soljačić.

    The finding relates to gauge fields, which describe transformations that particles undergo. Gauge fields fall into two classes, known as Abelian and non-Abelian. The Aharonov-Bohm Effect, named after the theorists who predicted it in 1959, confirmed that gauge fields — beyond being a pure mathematical aid — have physical consequences.

    But the observations only worked in Abelian systems, or those in which gauge fields are commutative — that is, they take place the same way both forward and backward in time. In 1975, Tai-Tsun Wu and Chen-Ning Yang generalized the effect to the non-Abelian regime as a thought experiment. Nevertheless, it remained unclear whether it would even be possible to ever observe the effect in a non-Abelian system. Physicists lacked ways of creating the effect in the lab, and also lacked ways of detecting the effect even if it could be produced. Now, both of those puzzles have been solved, and the observations carried out successfully.
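
    “Non-Abelian” simply means that the relevant operations do not commute, so the order in which they are applied matters. A minimal numerical illustration (not tied to the specific optical setup in the paper): scalar phase factors, the Abelian case, always commute, while matrix-valued operations acting on a two-component state, such as a photon’s polarization, generally do not.

```python
import numpy as np

# Abelian case: scalar phase factors commute, so order is irrelevant.
p1, p2 = np.exp(1j * 0.4), np.exp(1j * 1.1)
print("scalar phases commute:", np.isclose(p1 * p2, p2 * p1))        # True

# Non-Abelian case: SU(2)-like rotations acting on a two-component state
# (for example, a polarization vector) generally do not commute.
def rot_x(theta):
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def rot_z(phi):
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]])

U1, U2 = rot_x(0.7), rot_z(1.3)
print("matrix operations commute:", np.allclose(U1 @ U2, U2 @ U1))   # False
print("norm of the difference   :", np.linalg.norm(U1 @ U2 - U2 @ U1))
```

    In the experiment described below, sending light around the fiber-optic loop in opposite directions effectively applies such operations in opposite orders, which is why the interference patterns differ.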

    The effect has to do with one of the strange and counterintuitive aspects of modern physics, the fact that virtually all fundamental physical phenomena are time-invariant. That means that the details of the way particles and forces interact can run either forward or backward in time, and a movie of how the events unfold can be run in either direction, so there’s no way to tell which is the real version. But a few exotic phenomena violate this time symmetry.

    Creating the Abelian version of the Aharonov-Bohm effects requires breaking the time-reversal symmetry, a challenging task in itself, Soljačić says. But to achieve the non-Abelian version of the effect requires breaking this time-reversal multiple times, and in different ways, making it an even greater challenge.

    To produce the effect, the researchers used photon polarization. Specifically, they produced two different kinds of time-reversal breaking. They used fiber optics to produce two types of gauge fields that affected the geometric phases of the optical waves: first by sending them through a crystal biased by powerful magnetic fields, and second by modulating them with time-varying electrical signals, both of which break time-reversal symmetry. They were then able to produce interference patterns that revealed the differences in how the light was affected when sent through the fiber-optic system in opposite directions, clockwise or counterclockwise. Without the breaking of time-reversal invariance, the beams should have been identical, but instead, their interference patterns revealed specific sets of differences as predicted, demonstrating the details of the elusive effect.

    The original, Abelian version of the Aharonov-Bohm effect “has been observed with a series of experimental efforts, but the non-Abelian effect has not been observed until now,” Yang says. The finding “allows us to do many things,” he says, opening the door to a wide variety of potential experiments, including classical and quantum physical regimes, to explore variations of the effect.

    The experimental approach devised by this team “might inspire the realization of exotic topological phases in quantum simulations using photons, polaritons, quantum gases, and superconducting qubits,” Soljačić says. For photonics itself, this could be useful in a variety of optoelectronic applications, he says. In addition, the non-Abelian gauge fields that the group was able to synthesize produced a non-Abelian Berry phase, and “combined with interactions, it may potentially one day serve as a platform for fault-tolerant topological quantum computation,” he says.

    At this point, the experiment is primarily of interest for fundamental physics research, with the aim of gaining a better understanding of some basic underpinnings of modern physical theory. The many possible practical applications “will require additional breakthroughs going forward,” Soljačić says.

    For one thing, for quantum computation, the experiment would need to be scaled up from one single device to likely a whole lattice of them. And instead of the beams of laser light used in their experiment, it would require working with a source of single individual photons. But even in its present form, the system could be used to explore questions in topological physics, which is a very active area of current research, Soljačić says.

    “The non-Abelian Berry phase is a theoretical gem that is the doorway to understanding many intriguing ideas in contemporary physics,” says Ashvin Vishwanath, a professor of physics at Harvard University, who was not associated with this work. “I am glad to see it getting the experimental attention it deserves in the current work, which reports a well-controlled and characterized realization. I expect this work to stimulate progress both directly as a building block for more complex architectures, and also indirectly in inspiring other realizations.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 1:43 pm on September 4, 2019
    Tags: "MIT report examines how to make technology work for society", A 2018 survey by the Pew Research Center found that 65 to 90 percent of respondents in industrialized countries think computers and robots will take over many jobs done by humans., A critical challenge is not necessarily a lack of jobs but the low quality of many jobs., Another concern for workers is income stagnation., Applications of technology have fed inequality in recent decades., Automation is not likely to eliminate millions of jobs any time soon., But the U.S. still needs vastly improved policies if Americans are to build better careers and share prosperity as technological changes occur., But there is reason for concern about the impact of new technology on the labor market., Capitalism in the U.S. must address the interests of workers as well as shareholders., Community colleges are the biggest training providers in the country with 12 million for-credit and non-credit students and are a natural location for bolstering workforce education., Computers and the internet enabled a digitalization of work that made highly educated workers more productive and made less-educated workers easier to replace with machinery., High-tech innovations have displaced “middle-skilled” workers who perform routine tasks., If the next four decades of automation are going to look like the last four decades people have reason to worry., Increased earnings have been concentrated among white-collar workers., Less than a third think better-paying jobs will result from these technologies., MIT, Particularly impacted are workers without college degrees., Productivity growth has not translated into shared prosperity., Task force calls for bold public and private action to harness technology for shared prosperity., Technology has also not displaced lower-skilled service workers leading to a polarized workforce., The likelihood of robots automation and artificial intelligence (AI) wiping out huge sectors of the workforce in the near future is exaggerated the task force concludes., The work of the future can be shaped beneficially by new policies; renewed support for labor; and reformed institutions not just new technologies., We think people are pessimistic because they’re on to something.   

    From MIT News: “MIT report examines how to make technology work for society” 

    MIT News

    From MIT News

    September 4, 2019
    Peter Dizikes

    MIT’s Task Force on the Work of the Future has released a report that punctures some conventional wisdom and builds a nuanced picture of the evolution of technology and jobs.

    Task force calls for bold public and private action to harness technology for shared prosperity.

    Automation is not likely to eliminate millions of jobs any time soon — but the U.S. still needs vastly improved policies if Americans are to build better careers and share prosperity as technological changes occur, according to a new MIT report about the workplace.

    The report, which represents the initial findings of MIT’s Task Force on the Work of the Future, punctures some conventional wisdom and builds a nuanced picture of the evolution of technology and jobs, the subject of much fraught public discussion.

    The likelihood of robots, automation, and artificial intelligence (AI) wiping out huge sectors of the workforce in the near future is exaggerated, the task force concludes — but there is reason for concern about the impact of new technology on the labor market. In recent decades, technology has contributed to the polarization of employment, disproportionately helping high-skilled professionals while reducing opportunities for many other workers, and new technologies could exacerbate this trend.

    Moreover, the report emphasizes, at a time of historic income inequality, a critical challenge is not necessarily a lack of jobs, but the low quality of many jobs and the resulting lack of viable careers for many people, particularly workers without college degrees. With this in mind, the work of the future can be shaped beneficially by new policies, renewed support for labor, and reformed institutions, not just new technologies. Broadly, the task force concludes, capitalism in the U.S. must address the interests of workers as well as shareholders.

    “At MIT, we are inspired by the idea that technology can be a force for good. But if as a nation we want to make sure that today’s new technologies evolve in ways that help build a healthier, more equitable society, we need to move quickly to develop and implement strong, enlightened policy responses,” says MIT President L. Rafael Reif, who called for the creation of the Task Force on the Work of the Future in 2017.

    “Fortunately, the harsh societal consequences that concern us all are not inevitable,” Reif adds. “Technologies embody the values of those who make them, and the policies we build around them can profoundly shape their impact. Whether the outcome is inclusive or exclusive, fair or laissez-faire, is therefore up to all of us. I am deeply grateful to the task force members for their latest findings and their ongoing efforts to pave an upward path.”

    “There is a lot of alarmist rhetoric about how the robots are coming,” adds Elisabeth Beck Reynolds, executive director of the task force, as well as executive director of the MIT Industrial Performance Center. “MIT’s job is to cut through some of this hype and bring some perspective to this discussion.”

    Reynolds also calls the task force’s interest in new policy directions “classically American in its willingness to consider innovation and experimentation.”

    Anxiety and inequality

    The core of the task force consists of a group of MIT scholars. Its research has drawn upon new data, expert knowledge of many technology sectors, and a close analysis of both technology-centered firms and economic data spanning the postwar era.

    The report addresses several workplace complexities. Unemployment in the U.S. is low, yet workers have considerable anxiety, from multiple sources. One is technology: A 2018 survey by the Pew Research Center found that 65 to 90 percent of respondents in industrialized countries think computers and robots will take over many jobs done by humans, while less than a third think better-paying jobs will result from these technologies.

    Another concern for workers is income stagnation: Adjusted for inflation, 92 percent of Americans born in 1940 earned more money than their parents, but only about half of people born in 1980 can say that.

    “The persistent growth in the quantity of jobs has not been matched by an equivalent growth in job quality,” the task force report states.

    Applications of technology have fed inequality in recent decades. High-tech innovations have displaced “middle-skilled” workers who perform routine tasks, from office assistants to assembly-line workers, but these innovations have complemented the activities of many white-collar workers in medicine, science and engineering, finance, and other fields. Technology has also not displaced lower-skilled service workers, leading to a polarized workforce. Higher-skill and lower-skill jobs have grown, middle-skill jobs have shrunk, and increased earnings have been concentrated among white-collar workers.

    “Technological advances did deliver productivity growth over the last four decades,” the report states. “But productivity growth did not translate into shared prosperity.”

    Indeed, says David Autor, who is the Ford Professor of Economics at MIT, associate head of MIT’s Department of Economics, and a co-chair of the task force, “We think people are pessimistic because they’re on to something. Although there’s no shortage of jobs, the gains have been so unequally distributed that most people have not benefited much. If the next four decades of automation are going to look like the last four decades, people have reason to worry.”

    Productive innovations versus “so-so technology”

    A big question, then, is what the next decades of automation have in store. As the report explains, some technological innovations are broadly productive, while others are merely “so-so technologies” — a term coined by economists Daron Acemoglu of MIT and Pascual Restrepo of Boston University to describe technologies that replace workers without markedly improving services or increasing productivity.

    For instance, electricity and light bulbs were broadly productive, allowing the expansion of other types of work. But automated technology allowing for self-check-out at pharmacies or supermarkets merely replaces workers without notably increasing efficiency for the customer or productivity.

    “That’s a strong labor-displacing technology, but it has very modest productivity value,” Autor says of these automated systems. “That’s a ‘so-so technology.’ The digital era has had fabulous technologies for skill complementarity [for white-collar workers], but so-so technologies for everybody else. Not all innovations that raise productivity displace workers, and not all innovations that displace workers do much for productivity.”

    Several forces have contributed to this skew, according to the report. “Computers and the internet enabled a digitalization of work that made highly educated workers more productive and made less-educated workers easier to replace with machinery,” the authors write.

    Given the mixed record of the last four decades, does the advent of robotics and AI herald a brighter future, or a darker one? The task force suggests the answer depends on how humans shape that future. New and emerging technologies will raise aggregate economic output and boost wealth, and offer people the potential for higher living standards, better working conditions, greater economic security, and improved health and longevity. But whether society realizes this potential, the report notes, depends critically on the institutions that transform aggregate wealth into greater shared prosperity instead of rising inequality.

    One thing the task force does not foresee is a future where human expertise, judgment, and creativity are less essential than they are today.

    “Recent history shows that key advances in workplace robotics — those that radically increase productivity — depend on breakthroughs in work design that often take years or even decades to achieve,” the report states.

    As robots gain flexibility and situational adaptability, they will certainly take over a larger set of tasks in warehouses, hospitals, and retail stores — such as lifting, stocking, transporting, and cleaning, as well as awkward physical tasks that require picking, harvesting, stooping, or crouching.

    The task force members believe such advances in robotics will displace relatively low-paid human tasks and boost the productivity of workers, whose attention will be freed to focus on higher-value-added work. The pace at which these tasks are delegated to machines will be hastened by slowing growth, tight labor markets, and the rapid aging of workforces in most industrialized countries, including the U.S.

    And while machine learning — image classification, real-time analytics, data forecasting, and more — has improved, it may just alter jobs, not eliminate them: Radiologists do much more than interpret X-rays, for instance. The task force also observes that developers of autonomous vehicles, another hot media topic, have been “ratcheting back” their timelines and ambitions over the last year.

    “The recent reset of expectations on driverless cars is a leading indicator for other types of AI-enabled systems as well,” says David A. Mindell, co-chair of the task force, professor of aeronautics and astronautics, and the Dibner Professor of the History of Engineering and Manufacturing at MIT. “These technologies hold great promise, but it takes time to understand the optimal combination of people and machines. And the timing of adoption is crucial for understanding the impact on workers.”

    Policy proposals for the future

    Still, even if the worst-case scenario of a “job apocalypse” is unlikely, the continued deployment of so-so technologies could make the future of work worse for many people.

    If people are worried that technologies could limit opportunity, social mobility, and shared prosperity, the report states, “Economic history confirms that this sentiment is neither ill-informed nor misguided. There is ample reason for concern about whether technological advances will improve or erode employment and earnings prospects for the bulk of the workforce.”

    At the same time, the task force report finds reason for “tempered optimism,” asserting that better policies can significantly improve tomorrow’s work.

    “Technology is a human product,” Mindell says. “We shape technological change through our choices of investments, incentives, cultural values, and political objectives.”

    To this end, the task force focuses on a few key policy areas. One is renewed investment in postsecondary workforce education outside of the four-year college system — not just in STEM skills (science, technology, engineering, and math) but also in reading, writing, and the “social skills” of teamwork and judgment.

    Community colleges are the biggest training providers in the country, with 12 million for-credit and non-credit students, and are a natural location for bolstering workforce education. A wide range of new models for gaining educational credentials is also emerging, the task force notes. The report also emphasizes the value of multiple types of on-the-job training programs for workers.

    However, the report cautions, investments in education may be necessary but not sufficient for workers: “Hoping that ‘if we skill them, jobs will come,’ is an inadequate foundation for constructing a more productive and economically secure labor market.”

    More broadly, therefore, the report argues that the interests of capital and labor need to be rebalanced. The U.S., it notes, “is unique among market economies in venerating pure shareholder capitalism,” even though workers and communities are business stakeholders too.

    “Within this paradigm [of pure shareholder capitalism], the personal, social, and public costs of layoffs and plant closings should not play a critical role in firm decision-making,” the report states.

    The task force recommends greater recognition of workers as stakeholders in corporate decision making. Redressing the decades-long erosion of worker bargaining power will require new institutions that bend the arc of innovation toward making workers more productive rather than less necessary. The report holds that the adversarial system of collective bargaining, enshrined in U.S. labor law adopted during the Great Depression, is overdue for reform.

    The U.S. tax code can be altered to help workers as well. Right now, it favors investments in capital rather than labor — for instance, capital depreciation can be written off, and R&D investment receives a tax credit, whereas investments in workers produce no such equivalent benefits. The task force recommends new tax policy that would also incentivize investments in human capital, through training programs, for instance.

    Additionally, the task force recommends restoring support for R&D to past levels and rebuilding U.S. leadership in the development of new AI-related technologies, “not merely to win but to lead innovation in directions that will benefit the nation: complementing workers, boosting productivity, and strengthening the economic foundation for shared prosperity.”

    Ultimately the task force’s goal is to encourage investment in technologies that improve productivity, and to ensure that workers share in the prosperity that could result.

    “There’s no question technological progress that raises productivity creates opportunity,” Autor says. “It expands the set of possibilities that you can realize. But it doesn’t guarantee that you will make good choices.”

    Reynolds adds: “The question for firms going forward is: How are they going to improve their productivity in ways that can lead to greater quality and efficiency, and aren’t just about cutting costs and bringing in marginally better technology?”

    Further research and analyses

    In addition to Reynolds, Autor, and Mindell, the central group within MIT’s Task Force on the Work of the Future consists of 18 MIT professors representing all five Institute schools. Additionally, the project has a 22-person advisory board drawn from the ranks of industry leaders, former government officials, and academia; a 14-person research board of scholars; and eight graduate students. The task force also consulted with business executives, labor leaders, and community college leaders, among others.

    The task force follows other influential MIT projects such as the Commission on Industrial Productivity, an intensive multiyear study of U.S. industry in the 1980s. That effort resulted in the widely read book, “Made in America,” as well as the creation of MIT’s Industrial Performance Center.

    The current task force taps into MIT’s depth of knowledge across a full range of technologies, as well as its strengths in the social sciences.

    “MIT is engaged in developing frontier technology,” Reynolds says. “Not necessarily what will be introduced tomorrow, but five, 10, or 25 years from now. We do see what’s on the horizon, and our researchers want to bring realism and context to the public discourse.”

    The current report is an interim finding from the task force; the group plans to conduct additional research over the next year, and then will issue a final version of the report.

    “What we’re trying to do with this work,” Reynolds concludes, “is to provide a holistic perspective, which is not just about the labor market and not just about technology, but brings it all together, for a more rational and productive discussion in the public sphere.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 12:01 pm on September 4, 2019 Permalink | Reply
    Tags: , “This study demonstrates how mechanical perturbations of an implant can modulate the host foreign body response...", DSR-dynamic soft reservoir, Implantable medical devices have various failure rates that can be attributed to fibrosis ranging from 30-50 percent for implantable pacemakers or 30 percent for mammoplasty prosthetics., , MIT, Soft robotics, Soft robots are flexible devices that can be implanted into the body., The research describes the use of soft robotics to modify the body’s response to implanted devices., These complex and unpredictable foreign-body responses impair device function and drastically limit the long-term performance and therapeutic efficacy of these devices., These implanted devices are not without problems caused in part by the body’s own protection responses., This work could help patients requiring in-situ (implanted) medical devices such as breast implants; pacemakers; neural probes; glucose biosensors; and drug and cell delivery devices.   

    From MIT News: “Soft robotics breakthrough manages immune response for implanted devices” 

    MIT News

    From MIT News

    September 4, 2019
    Institute for Medical Engineering and Science

    Depiction of a soft robotic device known as a dynamic soft reservoir (DSR). Image courtesy of the researchers.

    Discovery could enable longer-lasting and better-functioning devices — including pacemakers, breast implants, biosensors, and drug delivery devices.

    Researchers from the Institute for Medical Engineering and Science (IMES) at MIT; the National University of Ireland Galway (NUI Galway); and AMBER, the SFI Research Centre for Advanced Materials and BioEngineering Research, recently announced a significant breakthrough in soft robotics that could help patients requiring in-situ (implanted) medical devices such as breast implants, pacemakers, neural probes, glucose biosensors, and drug and cell delivery devices.

    The implantable medical devices market is currently estimated at approximately $100 billion, with significant growth potential into the future as new technologies for drug delivery and health monitoring are developed. These devices are not without problems, caused in part by the body’s own protection responses. These complex and unpredictable foreign-body responses impair device function and drastically limit the long-term performance and therapeutic efficacy of these devices.

    One such foreign body response is fibrosis, a process whereby a dense fibrous capsule surrounds the implanted device, which can cause device failure or impede its function. Implantable medical devices have various failure rates that can be attributed to fibrosis, ranging from 30-50 percent for implantable pacemakers or 30 percent for mammoplasty prosthetics. In the case of biosensors or drug/cell delivery devices, the dense fibrous capsule which can build up around the implanted device can seriously impede its function, with consequences for the patient and costs to the health care system.

    A radical new vision for medical devices to address this problem was published in the internationally respected journal, Science Robotics. The study was led by researchers from NUI Galway, IMES, and the SFI research center AMBER, among others. The research describes the use of soft robotics to modify the body’s response to implanted devices. Soft robots are flexible devices that can be implanted into the body.

    The transatlantic partnership of scientists has created a tiny, mechanically actuated soft robotic device known as a dynamic soft reservoir (DSR) that has been shown to significantly reduce the build-up of the fibrous capsule by manipulating the environment at the interface between the device and the body. The device uses mechanical oscillation to modulate how cells respond around the implant. In a bio-inspired design, the DSR can change its shape at a microscopic scale through an actuating membrane.

    IMES core faculty member, assistant professor at the Department of Mechanical Engineering, and W.M. Keck Career Development Professor in Biomedical Engineering Ellen Roche, the senior co-author of the study, is a former researcher at NUI Galway who won international acclaim in 2017 for her work in creating a soft robotic sleeve to help patients with heart failure. Of this research, Roche says “This study demonstrates how mechanical perturbations of an implant can modulate the host foreign body response. This has vast potential for a range of clinical applications and will hopefully lead to many future collaborative studies between our teams.”

    Garry Duffy, professor in anatomy at NUI Galway and AMBER principal investigator, and a senior co-author of the study, adds “We feel the ideas described in this paper could transform future medical devices and how they interact with the body. We are very excited to develop this technology further and to partner with people interested in the potential of soft robotics to better integrate devices for longer use and superior patient outcomes. It’s fantastic to build and continue the collaboration with the Dolan and Roche labs, and to develop a trans-Atlantic network of soft roboticists.”

    The first author of the study, Eimear Dolan, lecturer of biomedical engineering at NUI Galway and former researcher in the Roche and Duffy labs at MIT and NUI Galway, says “We are very excited to publish this study, as it describes an innovative approach to modulate the foreign-body response using soft robotics. I recently received a Science Foundation Ireland Royal Society University Research Fellowship to bring this technology forward with a focus on Type 1 diabetes. It is a privilege to work with such a talented multi-disciplinary team, and I look forward to continuing working together.”

    See the full article here.

     
  • richardmitnick 3:48 pm on August 28, 2019 Permalink | Reply
    Tags: "MIT engineers build advanced microprocessor out of carbon nanotubes", , , MIT,   

    From MIT News: “MIT engineers build advanced microprocessor out of carbon nanotubes” 

    MIT News

    From MIT News

    August 28, 2019
    Rob Matheson

    A close up of a modern microprocessor built from carbon nanotube field-effect transistors. Image: Felice Frankel

    MIT engineers have built a modern microprocessor from carbon nanotube field-effect transistors (pictured), which are seen as faster and greener than silicon transistors. The new approach uses the same fabrication processes used for silicon chips. Image courtesy of the researchers

    New approach harnesses the same fabrication processes used for silicon chips, offers key advance toward next-generation computers.

    After years of tackling numerous design and manufacturing challenges, MIT researchers have built a modern microprocessor from carbon nanotube transistors, which are widely seen as a faster, greener alternative to their traditional silicon counterparts.

    The microprocessor, described today in the journal Nature, can be built using traditional silicon-chip fabrication processes, representing a major step toward making carbon nanotube microprocessors more practical.

    Silicon transistors — critical microprocessor components that switch between 1 and 0 bits to carry out computations — have carried the computer industry for decades. As predicted by Moore’s Law, industry has been able to shrink down and cram more transistors onto chips every couple of years to help carry out increasingly complex computations. But experts now foresee a time when silicon transistors will stop shrinking, and become increasingly inefficient.

    Making carbon nanotube field-effect transistors (CNFETs) has become a major goal for building next-generation computers. Research indicates CNFETs have properties that promise around 10 times the energy efficiency and far greater speeds compared to silicon. But when fabricated at scale, the transistors often come with many defects that affect performance, so they remain impractical.

    The MIT researchers have invented new techniques to dramatically limit defects and enable full functional control in fabricating CNFETs, using processes in traditional silicon chip foundries. They demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that performs the same tasks as commercial microprocessors. The paper describes the microprocessor design and includes more than 70 pages detailing the manufacturing methodology.

    The microprocessor is based on the RISC-V open-source chip architecture that has a set of instructions that a microprocessor can execute. The researchers’ microprocessor was able to execute the full set of instructions accurately. It also executed a modified version of the classic “Hello, World!” program, printing out, “Hello, World! I am RV16XNano, made from CNTs.”

    “This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing,” says co-author Max M. Shulaker, the Emanuel E Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Microsystems Technology Laboratories. “There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits. [The paper] completely re-invents how we build chips with carbon nanotubes.”

    Joining Shulaker on the paper are: first author and postdoc Gage Hills, graduate students Christian Lau, Andrew Wright, Mindy D. Bishop, Tathagata Srimani, Pritpal Kanhaiya, Rebecca Ho, and Aya Amer, all of EECS; Arvind, the Johnson Professor of Computer Science and Engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory; Anantha Chandrakasan, the dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and Samuel Fuller, Yosi Stein, and Denis Murphy, all of Analog Devices.

    Fighting the “bane” of CNFETs

    The microprocessor builds on a previous iteration designed by Shulaker and other researchers six years ago that had only 178 CNFETs and ran on a single bit of data. Since then, Shulaker and his MIT colleagues have tackled three specific challenges in producing the devices: material defects, manufacturing defects, and functional issues. Hills did the bulk of the microprocessor design, while Lau handled most of the manufacturing.

    For years, the defects intrinsic to carbon nanotubes have been a “bane of the field,” Shulaker says. Ideally, CNFETs need semiconducting properties to switch their conductivity on and off, corresponding to the bits 1 and 0. But unavoidably, a small portion of carbon nanotubes will be metallic, and will slow or stop the transistor from switching. To be robust to those failures, advanced circuits will need carbon nanotubes at around 99.999999 percent purity, which is virtually impossible to produce today.

    The researchers came up with a technique called DREAM (an acronym for “designing resiliency against metallic CNTs”), which positions metallic CNFETs in a way that they won’t disrupt computing. In doing so, they relaxed that stringent purity requirement by around four orders of magnitude — or 10,000 times — meaning they only need carbon nanotubes at about 99.99 percent purity, which is currently possible.
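
    The quoted relaxation follows directly from the impurity fractions. A quick check, using only the numbers given above:

    ```python
    # Quick check of the purity figures quoted above.
    # Impurity fraction = 1 - purity; the relaxation factor is their ratio.

    strict_purity = 0.99999999   # ~99.999999 percent, needed without DREAM
    relaxed_purity = 0.9999      # ~99.99 percent, sufficient with DREAM

    strict_impurity = 1 - strict_purity    # ~1e-8 metallic fraction tolerated
    relaxed_impurity = 1 - relaxed_purity  # ~1e-4 metallic fraction tolerated

    relaxation_factor = relaxed_impurity / strict_impurity
    print(f"Tolerable metallic-CNT fraction rises from {strict_impurity:.0e} "
          f"to {relaxed_impurity:.0e}, a factor of roughly {relaxation_factor:,.0f}")
    ```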

    Designing circuits basically requires a library of different logic gates attached to transistors that can be combined to, say, create adders and multipliers — like combining letters in the alphabet to create words. The researchers realized that the metallic carbon nanotubes impacted different pairings of these gates differently. A single metallic carbon nanotube in gate A, for instance, may break the connection between A and B. But several metallic carbon nanotubes in gate B may not impact any of its connections.

    In chip design, there are many ways to implement code onto a circuit. The researchers ran simulations to find which gate combinations would, and which would not, be robust to metallic carbon nanotubes. They then customized a chip-design program to automatically learn the combinations least likely to be affected by metallic carbon nanotubes. When designing a new chip, the program uses only the robust combinations and ignores the vulnerable ones.
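
    A minimal sketch of that selection step, assuming a table of simulated failure probabilities is available (all gate names, probabilities, and the threshold below are hypothetical; the actual DREAM flow is integrated into standard chip-design tooling and is far more involved):

    ```python
    # Hypothetical sketch of a DREAM-style selection step: given simulated
    # probabilities that a metallic CNT breaks a given (driving gate, driven gate)
    # pairing, keep only the pairings below a robustness threshold so the design
    # tool can avoid the vulnerable combinations. All values are made up.

    failure_prob = {
        ("NAND", "NAND"): 0.02,
        ("NAND", "NOR"):  0.21,
        ("NOR",  "NAND"): 0.03,
        ("NOR",  "NOR"):  0.18,
        ("INV",  "NAND"): 0.01,
    }

    ROBUSTNESS_THRESHOLD = 0.05  # assumed acceptable failure probability

    def robust_pairings(probs, threshold):
        """Return the gate pairings a DREAM-style design flow would allow."""
        return {pair for pair, p in probs.items() if p <= threshold}

    allowed = robust_pairings(failure_prob, ROBUSTNESS_THRESHOLD)
    print("Pairings the design tool may use:", sorted(allowed))
    # A place-and-route step would then be constrained to these pairings only.
    ```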

    “The ‘DREAM’ pun is very much intended, because it’s the dream solution,” Shulaker says. “This allows us to buy carbon nanotubes off the shelf, drop them onto a wafer, and just build our circuit like normal, without doing anything else special.”

    Exfoliating and tuning

    CNFET fabrication starts with depositing carbon nanotubes in a solution onto a wafer with predesigned transistor architectures. However, some carbon nanotubes inevitably stick randomly together to form big bundles — like strands of spaghetti formed into little balls — that create large particle contamination on the chip.

    To cleanse that contamination, the researchers created RINSE (for “removal of incubated nanotubes through selective exfoliation”). The wafer gets pretreated with an agent that promotes carbon nanotube adhesion. Then, the wafer is coated with a certain polymer and dipped in a special solvent. That washes away the polymer, which only carries away the big bundles, while the single carbon nanotubes remain stuck to the wafer. The technique leads to about a 250-times reduction in particle density on the chip compared to similar methods.

    Lastly, the researchers tackled common functional issues with CNFETs. Binary computing requires two types of transistors: “N” types, which turn on with a 1 bit and off with a 0 bit, and “P” types, which do the opposite. Traditionally, making the two types out of carbon nanotubes has been challenging, often yielding transistors that vary in performance. For this solution, the researchers developed a technique called MIXED (for “metal interface engineering crossed with electrostatic doping”), which precisely tunes transistors for function and optimization.

    In this technique, they attach certain metals to each transistor — platinum or titanium — which allows them to fix that transistor as P or N. Then, they coat the CNFETs in an oxide compound through atomic-layer deposition, which allows them to tune the transistors’ characteristics for specific applications. Servers, for instance, often require transistors that act very fast but use up energy and power. Wearables and medical implants, on the other hand, may use slower, low-power transistors.

    The main goal is to get the chips out into the real world. To that end, the researchers have now started implementing their manufacturing techniques into a silicon chip foundry through a program by Defense Advanced Research Projects Agency, which supported the research. Although no one can say when chips made entirely from carbon nanotubes will hit the shelves, Shulaker says it could be fewer than five years. “We think it’s no longer a question of if, but when,” he says.

    The work was also supported by Analog Devices, the National Science Foundation, and the Air Force Research Laboratory.

    See the full article here.

     
  • richardmitnick 12:28 pm on August 27, 2019 Permalink | Reply
    Tags: "Satori" cluster, MIT, ,   

    From MIT News: “IBM gives artificial intelligence computing at MIT a lift” 

    MIT News

    From MIT News

    August 26, 2019
    Kim Martineau | MIT Quest for Intelligence

    Nearly $12 million machine will let MIT researchers run more ambitious AI models.

    An $11.6 million artificial intelligence computing cluster donated by IBM to MIT will come online this fall at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts. Photo: Helen Hill/MGHPCC

    IBM designed Summit, the fastest supercomputer on Earth, to run the calculation-intensive models that power modern artificial intelligence (AI).

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Now MIT is about to get a slice.

    IBM pledged earlier this year to donate an $11.6 million computer cluster to MIT modeled after the architecture of Summit, the supercomputer it built at Oak Ridge National Laboratory for the U.S. Department of Energy. The donated cluster is expected to come online this fall when the MIT Stephen A. Schwarzman College of Computing opens its doors, allowing researchers to run more elaborate AI models to tackle a range of problems, from developing a better hearing aid to designing a longer-lived lithium-ion battery.

    “We’re excited to see a range of AI projects at MIT get a computing boost, and we can’t wait to see what magic awaits,” says John E. Kelly III, executive vice president of IBM, who announced the gift in February at MIT’s launch celebration of the MIT Schwarzman College of Computing.

    IBM has named the cluster “Satori”, a Zen Buddhism term for “sudden enlightenment.” Physically the size of a shipping container, Satori is intellectually closer to a Ferrari, capable of zipping through 2 quadrillion calculations per second. That’s the equivalent of each person on Earth performing more than 10 million multiplication problems each second for an entire year, making Satori nimble enough to join the middle ranks of the world’s 500 fastest computers.

    Rapid progress in AI has fueled a relentless demand for computing power to train more elaborate models on ever-larger datasets. At the same time, federal funding for academic computing facilities has been on a three-decade decline. Christopher Hill, director of MIT’s Research Computing Project, puts the current demand at MIT at five times what the Institute can offer.

    “IBM’s gift couldn’t come at a better time,” says Maria Zuber, a geophysics professor and MIT’s vice president of research. “The opening of the new college will only increase demand for computing power. Satori will go a long way in helping to ease the crunch.”

    The computing gap was immediately apparent to John Cohn, chief scientist at the MIT-IBM Watson AI Lab, when the lab opened last year. “The cloud alone wasn’t giving us all that we needed for challenging AI training tasks,” he says. “The expense and long run times made us ask, could we bring more compute power here, to MIT?”

    It’s a mission Satori was built to fill, with IBM Power9 processors, a fast internal network, a large memory, and 256 graphics processing units (GPUs). Designed to rapidly process video-game images, graphics processors have become the workhorse for modern AI applications. Satori, like Summit, has been configured to wring as much power from each GPU as possible.

    IBM’s gift follows a history of collaborations with MIT that have paved the way for computing breakthroughs. In 1956, IBM helped launch the MIT Computation Center with the donation of an IBM 704, the first mass-produced computer to handle complex math.

    IBM 704. Image: Wikipedia

    Nearly three decades later, IBM helped fund Project Athena, an initiative that brought networked computing to campus.

    Project Athena. Image: MIT

    Together, these initiatives spawned time-sharing operating systems, foundational programming languages, instant messaging, and the network-security protocol Kerberos, among other technologies.

    More recently, IBM agreed to invest $240 million over 10 years to establish the MIT-IBM Watson AI Lab, a founding sponsor of MIT’s Quest for Intelligence. In addition to filling the computing gap at MIT, Satori will be configured to allow researchers to exchange data with all major commercial cloud providers, as well as prepare their code to run on IBM’s Summit supercomputer.

    Josh McDermott, an associate professor at MIT’s Department of Brain and Cognitive Sciences, is currently using Summit to develop a better hearing aid, but before he and his students could run their models, they spent countless hours getting the code ready. In the future, Satori will expedite the process, he says, and in the longer term, make more ambitious projects possible.

    “We’re currently building computer systems to model one sensory system but we’d like to be able to build models that can see, hear and touch,” he says. “That requires a much bigger scale.”

    Richard Braatz, the Edwin R. Gilliland Professor at MIT’s Department of Chemical Engineering, is using AI to improve lithium-ion battery technologies. He and his colleagues recently developed a machine learning algorithm to predict a battery’s lifespan from past charging cycles, and now, they’re developing multiscale simulations to test new materials and designs for extending battery life. With a boost from a computer like Satori, the simulations could capture key physical and chemical processes that accelerate discovery. “With better predictions, we can bring new ideas to market faster,” he says.
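
    The article does not give details of the Braatz group’s model, but the general idea of predicting cycle life from early-cycle data can be sketched with a simple least-squares fit; the feature and all training numbers below are invented for illustration only:

    ```python
    import numpy as np

    # Toy illustration (not the actual model): fit a line from an invented
    # early-degradation feature to the observed cycle life of previously
    # tested cells, then predict the lifespan of a new cell.

    # Invented training data: feature = log10 of capacity fade over early cycles,
    # target = cycle life (cycles until capacity drops to 80 percent of nominal).
    fade_feature = np.array([-1.8, -2.1, -2.4, -2.0, -2.6, -1.9])
    cycle_life   = np.array([ 520,  780, 1150,  690, 1400,  610])

    # Least-squares linear fit: cycle_life ~ slope * feature + intercept
    slope, intercept = np.polyfit(fade_feature, cycle_life, deg=1)

    new_cell_feature = -2.3
    predicted_life = slope * new_cell_feature + intercept
    print(f"Predicted cycle life for the new cell: ~{predicted_life:.0f} cycles")
    ```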

    Satori will be housed at a silk mill-turned data center, the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts, and connect to MIT via dedicated, high-speed fiber optic cables. At 150 kilowatts, Satori will consume as much energy as a mid-sized building at MIT, but its carbon footprint will be nearly fully offset by the use of hydro and nuclear power at the Holyoke facility. Equipped with energy-efficient cooling, lighting, and power distribution, the MGHPCC was the first academic data center to receive LEED-platinum status, the highest green-building award, in 2011.
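
    For scale, a back-of-the-envelope estimate (not a figure from the article) of the energy a 150-kilowatt machine draws over a year of continuous operation:

    \[
    E_{\text{year}} \approx 150\ \text{kW} \times 8760\ \text{h} \approx 1.3 \times 10^{6}\ \text{kWh} \approx 1.3\ \text{GWh}.
    \]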

    “Siting Satori at Holyoke minimizes its carbon emissions and environmental impact without compromising its scientific impact,” says John Goodhue, executive director of the MGHPCC.

    Visit the Satori website for more information.

    See the full article here.

     
  • richardmitnick 8:44 am on August 25, 2019 Permalink | Reply
    Tags: "Physicists design an experiment to pin down the origin of the elements", , MIT, ,   

    From MIT News: “Physicists design an experiment to pin down the origin of the elements” 

    MIT News

    From MIT News

    August 20, 2019
    Jennifer Chu

    A new experiment designed by MIT physicists may help to pin down the rate at which huge, massive stars produce oxygen in the universe. Image: NASA/ESA Hubble

    With help from next-generation particle accelerators, the approach may nail down the rate of oxygen production in the universe.

    Nearly all of the oxygen in our universe is forged in the bellies of massive stars. As these stars contract and burn, they set off thermonuclear reactions within their cores, where nuclei of carbon and helium can collide and fuse in a rare though essential nuclear reaction that generates much of the oxygen in the universe.

    The rate of this oxygen-generating reaction has been incredibly tricky to pin down. But if researchers can get a good enough estimate of what’s known as the “radiative capture reaction rate,” they can begin to work out the answers to fundamental questions, such as the ratio of carbon to oxygen in the universe. An accurate rate might also help them determine whether an exploding star will settle into the form of a black hole or a neutron star.

    Now physicists at MIT’s Laboratory for Nuclear Science (LNS) have come up with an experimental design that could help to nail down the rate of this oxygen-generating reaction. The approach requires a type of particle accelerator that is still under construction, in several locations around the world. Once up and running, such “multimegawatt” linear accelerators may provide just the right conditions to run the oxygen-generating reaction in reverse, as if turning back the clock of star formation.

    The researchers say such an “inverse reaction” should give them an estimate of the reaction rate that actually occurs in stars, with higher accuracy than has previously been achieved.

    “The job description of a physicist is to understand the world, and right now, we don’t quite understand where the oxygen in the universe comes from, and, how oxygen and carbon are made,” says Richard Milner, professor of physics at MIT. “If we’re right, this measurement will help us answer some of these important questions in nuclear physics regarding the origin of the elements.”

    Milner is a co-author of a paper appearing today in the journal Physical Review C, along with lead author and MIT-LNS postdoc Ivica Friščić and MIT Center for Theoretical Physics Senior Research Scientist T. William Donnelly.

    A precipitous drop

    The radiative capture reaction rate refers to the reaction between a carbon-12 nucleus and a helium nucleus, also known as an alpha particle, that takes place within a star. When these two nuclei collide, the carbon nucleus effectively “captures” the alpha particle, and in the process, is excited and radiates energy in the form of a photon. What’s left behind is an oxygen-16 nucleus, which ultimately decays to a stable form of oxygen that exists in our atmosphere.

    But the chances of this reaction occurring naturally in a star are incredibly slim, because both an alpha particle and a carbon-12 nucleus are highly positively charged. If they do come in close contact, they naturally repel each other through what’s known as the Coulomb force. To fuse and form oxygen, the pair would have to collide at sufficiently high energies to overcome this Coulomb repulsion — a rare occurrence. Such an exceedingly low reaction rate would be impossible to detect at the energy levels that exist within stars.
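
    In compact notation, the capture reaction and a rough, textbook-style estimate of the Coulomb barrier the two nuclei must overcome (an illustrative figure, not one taken from the paper) are:

    \[
    {}^{12}\mathrm{C} + \alpha \;\rightarrow\; {}^{16}\mathrm{O} + \gamma,
    \qquad
    V_{C} \approx \frac{Z_{\alpha} Z_{\mathrm{C}}\, e^{2}}{4\pi\varepsilon_{0}\, r}
    \approx \frac{2 \times 6 \times 1.44\ \mathrm{MeV\,fm}}{5\ \mathrm{fm}}
    \approx 3.5\ \mathrm{MeV},
    \]

    where r ≈ 5 fm approximates the distance at which the two nuclei touch. That barrier sits far above the few hundred kilo-electronvolts of relative energy typical of helium burning in stellar cores, which is why the capture is so rare there.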

    For the past five decades, scientists have attempted to simulate the radiative capture reaction rate, in small yet powerful particle accelerators. They do so by colliding beams of helium and carbon in hopes of fusing nuclei from both beams to produce oxygen. They have been able to measure such reactions and calculate the associated reaction rates. However, the energies at which such accelerators collide particles are far higher than what occurs in a star, so much so that the current estimates of the oxygen-generating reaction rate are difficult to extrapolate to what actually occurs within stars.

    “This reaction is rather well-known at higher energies, but it drops off precipitously as you go down in energy, toward the interesting astrophysical region,” Friščić says.

    Time, in reverse

    In the new study, the team decided to resurrect a previous notion, to produce the inverse of the oxygen-generating reaction. The aim, essentially, is to start from oxygen gas and split its nucleus into its starting ingredients: an alpha particle and a carbon-12 nucleus. The team reasoned that the probability of the reaction happening in reverse should be greater, and therefore more easily measured, than the same reaction run forward. The inverse reaction should also be possible at energies nearer to the energy range within actual stars.

    In order to split oxygen, they would need a high-intensity beam, with a super-high concentration of electrons. (The more electrons that bombard a cloud of oxygen atoms, the more chance there is that one electron among billions will have just the right energy and momentum to collide with and split an oxygen nucleus.)

    The idea originated with fellow MIT Research Scientist Genya Tsentalovich, who led a proposed experiment at the MIT-Bates South Hall electron storage ring in 2000. Although the experiment was never carried out at the Bates accelerator, which ceased operation in 2005, Donnelly and Milner felt the idea merited detailed study. The study got underway with the start of construction on next-generation linear accelerators in Germany and at Cornell University, which can produce electron beams of high enough intensity, or current, to potentially trigger the inverse reaction, and with the arrival of Friščić at MIT in 2016.

    “The possibility of these new, high-intensity electron machines, with tens of milliamps of current, reawakened our interest in this [inverse reaction] idea,” Milner says.

    The team proposed an experiment to produce the inverse reaction by shooting a beam of electrons at a cold, ultradense cloud of oxygen. If an electron successfully collided with and split an oxygen atom, it should scatter away with a certain amount of energy, which physicists have previously predicted. The researchers would isolate the collisions involving electrons within this given energy range, and from these, they would isolate the alpha particles produced in the aftermath.

    Alpha particles are produced when O-16 atoms split. The splitting of other oxygen isotopes can also result in alpha particles, but these would scatter away slightly faster — about 10 nanoseconds faster — than alpha particles produced from the splitting of O-16 atoms. So, the team reasoned they would isolate those alpha particles that were slightly slower, with a slightly longer “time of flight.”
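
    A minimal sketch of such a time-of-flight selection, assuming two invented arrival-time populations separated by roughly 10 nanoseconds (the window edges and all numbers below are placeholders, not values from the paper):

    ```python
    import numpy as np

    # Hypothetical time-of-flight (TOF) cut: keep alpha particles in the slower
    # (later-arriving) window expected for O-16 break-up, rejecting the slightly
    # faster alphas from other oxygen isotopes. All numbers are placeholders.

    rng = np.random.default_rng(0)
    tof_ns = np.concatenate([
        rng.normal(loc=190.0, scale=2.0, size=500),  # faster alphas (other isotopes)
        rng.normal(loc=200.0, scale=2.0, size=500),  # slower alphas (O-16 candidates)
    ])

    O16_WINDOW_NS = (196.0, 206.0)  # assumed selection window around the slower peak

    in_window = (tof_ns >= O16_WINDOW_NS[0]) & (tof_ns <= O16_WINDOW_NS[1])
    selected = tof_ns[in_window]
    print(f"{selected.size} of {tof_ns.size} alpha candidates pass the O-16 TOF cut")
    ```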

    The researchers could then calculate the rate of the inverse reaction, given how often slower alpha particles — and by proxy, the splitting of O-16 atoms — occurred. They then developed a model to relate the inverse reaction to the direct, forward reaction of oxygen production that naturally occurs in stars.
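
    Relating the two directions rests on a standard detailed-balance (reciprocity) argument; schematically, for the spin-zero nuclei involved (a textbook sketch, with the actual electrodisintegration analysis additionally folding in the virtual-photon flux):

    \[
    \sigma\!\left(\alpha + {}^{12}\mathrm{C} \rightarrow {}^{16}\mathrm{O} + \gamma\right)
    = \frac{2\, k_{\gamma}^{2}}{k_{\alpha}^{2}}\;
    \sigma\!\left(\gamma + {}^{16}\mathrm{O} \rightarrow \alpha + {}^{12}\mathrm{C}\right),
    \]

    where k_γ and k_α are the wave numbers of the photon and of the relative motion of the alpha particle and carbon nucleus. Because k_γ is much smaller than k_α at the energies of interest, the inverse cross section is considerably larger than the capture cross section, which is part of what makes the time-reversed measurement attractive.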

    “We’re essentially doing the time-reverse reaction,” Milner says. “If you measure that at the precision we’re talking about, you should be able to directly extract the reaction rate, by factors of up to 20 beyond what anybody has done in this region.”

    Currently, a multimegawatt linear accelerator, MESA, is under construction in Germany. Friščić and Milner are collaborating with physicists there to design the experiment, in hopes that, once up and running, they can put their experiment into action to truly pin down the rate at which stars churn oxygen out into the universe.

    “If we’re right, and we make this measurement, it will allow us to answer how much carbon and oxygen is formed in stars, which is the largest uncertainty that we have in our understanding of how stars evolve,” Milner says.

    This research was carried out at MIT’s Laboratory for Nuclear Science and was supported, in part, by the U.S. Department of Energy Office of Nuclear Physics.

    See the full article here.

     