Tagged: Electron Microscopy

  • richardmitnick 2:51 pm on January 11, 2022 Permalink | Reply
Tags: "Catalyst surface analysed at atomic resolution", Atomic Probe Tomography, Electron Microscopy

    From The Ruhr-Universität Bochum (DE): “Catalyst surface analysed at atomic resolution” 

    From The Ruhr-Universität Bochum (DE)


    Members of the Bochum-based research team in the lab: Weikai Xiang, Chenglong Luan and Tong Li (from left to right) © Privat.

    Catalyst surfaces have rarely been imaged in such detail before. And yet, every single atom can play a decisive role in catalytic activity.

A German-Chinese research team has visualised the three-dimensional structure of the surface of catalyst nanoparticles at atomic resolution. This structure plays a decisive role in the activity and stability of the particles. The detailed insights were achieved with a combination of atom probe tomography, spectroscopy and electron microscopy. Nanoparticle catalysts can be used, for example, in the production of hydrogen for the chemical industry. To optimise the performance of future catalysts, it is essential to understand how that performance is affected by the three-dimensional structure.

    Researchers from the Ruhr-Universität Bochum, The University of Duisburg-Essen [Universität Duisburg-Essen](DE) and The MPG Institute for Chemical Energy Conversion [Max-Planck-Institut für chemische Energieumwandlung](DE) cooperated on the project as part of the Collaborative Research Centre “Heterogeneous oxidation catalysis in the liquid phase”.

    At RUB, a team headed by Weikai Xiang and Professor Tong Li from Atomic-scale Characterisation worked together with the Chair of Electrochemistry and Nanoscale Materials and the Chair of Industrial Chemistry. Institutes in Shanghai, China, and Didcot, UK, were also involved. The team presents their findings in the journal Nature Communications, published online on 10 January 2022.

    Particles observed during the catalysis process

The researchers studied two different types of nanoparticles made of cobalt iron oxide that were around ten nanometres in size. They analysed the particles during the catalysis of the so-called oxygen evolution reaction. This is a half reaction that occurs during water splitting for hydrogen production: when water is split using electrical energy, hydrogen and oxygen are produced. The bottleneck in the development of more efficient production processes is the partial reaction in which oxygen is formed, i.e. the oxygen evolution reaction. This reaction changes the catalyst surface, which becomes inactive over time. The structural and compositional changes on the surface play a decisive role in the activity and stability of the electrocatalysts.
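For readers who want the chemistry spelled out, the two half-reactions of water electrolysis (standard textbook equations in acidic conditions, not quoted from the article itself) are:

```latex
% Oxygen evolution reaction (anode) -- the bottleneck discussed above:
2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
% Hydrogen evolution reaction (cathode):
4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2}
% Overall reaction: 2 H2O -> 2 H2 + O2
```

The four-electron oxygen evolution step is kinetically sluggish, which is why it limits the efficiency of the overall splitting reaction.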

For small nanoparticles around ten nanometres in size, obtaining detailed information about what happens on the catalyst surface during the reaction remains a challenge. Using atom probe tomography, the group successfully visualised the distribution of the different types of atoms in the cobalt iron oxide catalysts in three dimensions. By combining it with other methods, they showed how the structure and composition of the surface changed during the catalysis process – and how this change affected the catalytic performance.

    “Atom probe tomography has enormous potential to provide atomic insights into the compositional changes on the surface of catalyst nanoparticles during important catalytic reactions such as oxygen evolution reaction for hydrogen production or CO2 reduction,” concludes Tong Li.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Ruhr-Universität Bochum (DE) is a public university located in the southern hills of the central Ruhr area in Bochum. It was founded in 1962 as the first new public university in Germany after World War II. Instruction began in 1965.

    The Ruhr-University Bochum is one of the largest universities in Germany and part of the Deutsche Forschungsgemeinschaft, the most important German research funding organization.

    The RUB was very successful in the Excellence Initiative of the German Federal and State Governments (2007), a competition between Germany’s most prestigious universities. It was one of the few institutions left competing for the title of an “elite university”, but did not succeed in the last round of the competition. There are currently nine universities in Germany that hold this title.

    The University of Bochum was one of the first universities in Germany to introduce international bachelor’s and master’s degrees, which replaced the traditional German Diplom and Magister. Except for a few special cases (for example in Law) these degrees are offered by all faculties of the Ruhr-University. Currently, the university offers a total of 184 different study programs from all academic fields represented at the university.

     
  • richardmitnick 5:43 pm on December 23, 2021 Permalink | Reply
Tags: "Researchers use electron microscope to turn nanotube into tiny transistor", Apple says the chip which powers the future iPhones contains 15 billion transistors., Electron Microscopy, In recent years researchers have made significant steps in developing nanotransistors which are so small that millions of them could fit onto the head of a pin., It remains a great challenge to control the chirality of individual carbon nanotubes., Researchers created a transistor that's 25,000 times smaller than the width of a human hair., Semiconducting carbon nanotubes are promising for fabricating energy-efficient nanotransistors to build beyond-silicon microprocessors.

    From The Queensland University of Technology (AU) via phys.org : “Researchers use electron microscope to turn nanotube into tiny transistor” 

    From The Queensland University of Technology (AU)

    via

    phys.org

    December 23, 2021

A designer view of a single-wall carbon nanotube intramolecular junction with metallic portions on the left and right ends and an ultrashort (~3.0 nm) semiconducting channel in between. Credit: The National University of Science and Technology MISiS [Национальный исследовательский технологический университет МИСиС](RU).

An international team of researchers has used a unique tool inserted into an electron microscope to create a transistor that’s 25,000 times smaller than the width of a human hair.
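The "25,000 times smaller" figure can be sanity-checked against the ~3 nm channel quoted in the image caption. The hair diameter below is my assumption (a typical value, not stated in the article):

```python
hair_width_m = 75e-6   # typical human hair diameter, ~75 micrometres (assumed)
scale_factor = 25_000  # "25,000 times smaller", per the article

transistor_size_m = hair_width_m / scale_factor
print(f"{transistor_size_m * 1e9:.1f} nm")  # ~3.0 nm, consistent with the caption above
```

The two numbers line up almost exactly, which suggests the comparison in the press release was derived from the channel length rather than rounded loosely.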

The research, published in the journal Science, involves researchers from Japan, China, Russia and Australia who have worked on the project since it began five years ago.

The Queensland University of Technology Center for Materials Science co-director Professor Dmitri Golberg, who led the research project, said the result was a “very interesting fundamental discovery” which could pave the way for the development of tiny transistors for future generations of advanced computing devices.

    “In this work, we have shown it is possible to control the electronic properties of an individual carbon nanotube,” Professor Golberg said.

The researchers created the tiny transistor by simultaneously applying a force and a low voltage, which heated a carbon nanotube made up of a few layers until the outer tube shells separated, leaving just a single-layer nanotube.

The heat and strain then changed the “chirality” of the nanotube, meaning the pattern in which the carbon atoms joined together to form the single-atomic layer of the nanotube wall was rearranged.

With the carbon atoms connected in this new structure, the nanotube was transformed into a transistor.

    Professor Golberg’s team members from The National University of Science and Technology MISiS[Национальный исследовательский технологический университет МИСиС](RU) created a theory explaining the changes in the atomic structure and properties observed in the transistor.

Lead author Dr. Dai-Ming Tang, from The International Center for Materials Nanoarchitectonics[材料の国際センター](JP), said the research had demonstrated the ability to manipulate the molecular properties of the nanotube to fabricate a nanoscale electrical device.

    Dr. Tang began working on the project five years ago when Professor Golberg headed up the research group at this center.

    “Semiconducting carbon nanotubes are promising for fabricating energy-efficient nanotransistors to build beyond-silicon microprocessors,” Dr. Tang said.

    “However, it remains a great challenge to control the chirality of individual carbon nanotubes, which uniquely determines the atomic geometry and electronic structure.

    “In this work, we designed and fabricated carbon nanotube intramolecular transistors by altering the local chirality of a metallic nanotube segment by heating and mechanical strain.”

    Professor Golberg said the research in demonstrating the fundamental science in creating the tiny transistor was a promising step towards building beyond-silicon microprocessors.

Transistors, which are used to switch and amplify electronic signals, are often called the “building blocks” of all electronic devices, including computers. For example, Apple says the chip that powers its latest iPhones contains 15 billion transistors.

    The computer industry has been focused on developing smaller and smaller transistors for decades, but faces the limitations of silicon.

In recent years, researchers have made significant steps in developing nanotransistors which are so small that millions of them could fit onto the head of a pin.

    “Miniaturization of transistors down to nanometer scale is a great challenge of the modern semiconducting industry and nanotechnology,” Professor Golberg said.

“The present discovery, although not practical for mass production of tiny transistors, shows a novel fabrication principle and opens up a new horizon of using thermomechanical treatments of nanotubes for obtaining the smallest transistors with desired characteristics.”

See the full article here.


The Queensland University of Technology (QUT) (AU) is a public research university located in the urban coastal city of Brisbane, Queensland, Australia. The Queensland University of Technology is located on two campuses in the Brisbane area, viz. Gardens Point and Kelvin Grove. The university in its current form was founded in 1989, when the Queensland Institute of Technology (QIT) was made a university through The Queensland University of Technology Act 1988, with the resulting Queensland University of Technology beginning its operations from January 1989. In 1990, the Brisbane College of Advanced Education merged with The Queensland University of Technology.

In 2020, The Queensland University of Technology had 52,672 students enrolled (composed of 39,156 undergraduate students, 10,390 postgraduate students, and 661 non-award students), employed 5,049 full-time equivalent (FTE) staff members, and recorded a total revenue of $1.054 billion against a total expenditure of $1.028 billion.

    The Queensland University of Technology was a member of the Australian Technology Network of universities, but withdrew participation on 28 September 2018.

    History

    The Queensland University of Technology (QUT) has a history that dates to 1849 when the Brisbane School of Arts was established. Queensland Institute of Technology (QIT) succeeded the Central Technical College and was formed in 1965. The current Queensland University of Technology was established as a university in 1989 from the merger of several predecessor institutions listed below:

    Brisbane School of Arts (1849)
    Brisbane Technical College (1882)
    Central Technical College (1908)
    Queensland Institute of Technology (1965)

    Brisbane College of Advanced Education was formed in 1982, which itself is a combination of multiple predecessor institutions shown in the list below:

    Brisbane Kindergarten Training College (1911)
    Brisbane Kindergarten Teachers College (1965)
    Queensland Teachers’ Training College (1914)
    Kelvin Grove Teachers College (1961)
    Kelvin Grove College of Advanced Education (1976)
    Kedron Park Teachers College (1961)
    North Brisbane College of Advanced Education (1974)

    In 1988, The Queensland University of Technology Act was passed for the grant of university status to Queensland Institute of Technology (QIT). As a result, QIT was granted university status and was operational as Queensland University of Technology (QUT) beginning in January 1989. The Brisbane College of Advanced Education joined with QUT in 1990.

The Gardens Point campus was once entirely housed in the 19th-century, former Government House of Queensland. In 1909, during the relocation of the governor’s residence, the Old Government House and the surrounding five hectares were set aside for both a university and a technical college. The first university on the site was the University of Queensland which was moved to St Lucia in 1945, where it remains today.

    Research

    The Queensland University of Technology establishes collaborative research partnerships between academia, industry, government and community actors. The university is a key member of the Brisbane Diamantina Health Partners, Queensland’s first academic health science system. QUT attracts national grants and industry funding and has a number of research centres, including:

    Research institutes

    Research Council Centre of Excellence for the Digital Child
    Centre for Agriculture and the Bioeconomy
    Centre for Biomedical Technologies
    Centre for Data Science
    Centre for Future Enterprise
    Centre for Genomics and Personalised Health
    Centre for Healthcare Transformation
    Centre for Justice
    Centre for Materials Science
    Centre for Robotics
    Digital Media Research Centre
    Australian Centre for Entrepreneurship Research
    Australian Centre for Health Law Research
    Australian Centre for Health Services Innovation
    Australian Centre for Philanthropy and Nonprofit Studies
    Australia-China Centre for Tissue Engineering and Regenerative Medicine
    Cancer and Palliative Care Outcomes Centre
    Centre for a Waste-Free World
    Centre for Accident Research and Road Safety
    Centre for Behavioural Economics, Society and Technology
    Centre for Clean Energy Technologies and Practices
    Centre for Decent Work and Industry

    Indigenous Research Centres

    Curumba Institute
    National Indigenous Research and Knowledges Network

    Research infrastructure

    Biorefining Research Facility
    Central Analytical Research Facility
    Design and Fabrication Facility
    Digital Observatory
    eResearch
    Medical Engineering Research Facility
    Samford Ecological Research Facility
    Research Engineering Facility
    Visualisation and Interactive Solutions for Engagement and Research

    Former research institutes

    Institute of Health and Biomedical Innovation
    Institute for Future Environments

     
  • richardmitnick 11:55 am on December 23, 2021 Permalink | Reply
Tags: "Integrated photonics meet electron microscopy", Electron Microscopy, Integrated photonics circuits based on low-loss silicon nitride have made tremendous progress and are intensively driving the progress of many emerging technologies and fundamental science., Interfacing electron microscopy with photonics has the potential to uniquely bridge atomic scale imaging with coherent spectroscopy., MPG Institute for Biophysical Chemistry [MPG Institut für Biophysikalische Chemie](DE), Researchers have successfully demonstrated extremely efficient electron beam modulation using integrated photonic microresonators., Scientists in Switzerland and Germany have achieved efficient electron-beam modulation using integrated photonics – circuits that guide light on a chip., Simplification and efficiency increase in the optical control of electron beams.

    From The Swiss Federal Institute of Technology in Lausanne [EPFL-École Polytechnique Fédérale de Lausanne] (CH): “Integrated photonics meet electron microscopy” 

    From The Swiss Federal Institute of Technology in Lausanne [EPFL-École Polytechnique Fédérale de Lausanne] (CH)

    23.12.21
Professor Claus Ropers, MPG Institute for Biophysical Chemistry [MPG Institut für Biophysikalische Chemie](DE); Arslan Raja and Nik Papageorgiou, EPFL.


    Scientists in Switzerland and Germany have achieved efficient electron-beam modulation using integrated photonics – circuits that guide light on a chip. The experiments could lead to entirely new quantum measurement schemes in electron microscopy.

    The transmission electron microscope (TEM) can image molecular structures at the atomic scale by using electrons instead of light, and has revolutionized materials science and structural biology. The past decade has seen a lot of interest in combining electron microscopy with optical excitations, trying, for example, to control and manipulate the electron beam by light. But a major challenge has been the rather weak interaction of propagating electrons with photons.

In a new study, researchers have successfully demonstrated extremely efficient electron beam modulation using integrated photonic microresonators. The study was led by Professor Tobias J. Kippenberg at EPFL and by Professor Claus Ropers at the MPG Institute for Biophysical Chemistry [MPG Institut für Biophysikalische Chemie](DE) and The University of Göttingen [Georg-August-Universität Göttingen](DE), and is published in Nature.

The two laboratories formed an unconventional collaboration, joining the usually unconnected fields of electron microscopy and integrated photonics. Photonic integrated circuits can guide light on a chip with ultra-low losses, and enhance optical fields using micro-ring resonators. In the experiments conducted by Ropers’ group, an electron beam was steered through the optical near field of a photonic circuit, to allow the electrons to interact with the enhanced light. The researchers then probed the interaction by measuring the energy of electrons that had absorbed or emitted tens to hundreds of photon energies. The photonic chips were engineered by Kippenberg’s group, built in such a way that the speed of light in the micro-ring resonators exactly matched the speed of the electrons, drastically increasing the electron-photon interaction.
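The matching condition described above can be made concrete: the phase velocity of light in the resonator (c divided by the waveguide's effective index) must equal the electron velocity set by the microscope's accelerating voltage. A relativistic back-of-envelope sketch follows; the 200 kV beam energy is an assumed, typical TEM value, not a figure from the article:

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy, m_e * c^2, in keV

def electron_beta(accel_kv: float) -> float:
    """Relativistic v/c for an electron accelerated through accel_kv kilovolts."""
    gamma = 1.0 + accel_kv / M_E_C2_KEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

beta = electron_beta(200.0)  # assumed 200 kV accelerating voltage
n_eff = 1.0 / beta           # effective index at which light's phase velocity matches
print(f"v/c = {beta:.3f}, required effective index = {n_eff:.2f}")
# v/c ≈ 0.695, n_eff ≈ 1.44 -- a value reachable in silicon nitride waveguides
```

This is why the phase matching is an engineering problem for the photonics group: the ring's dispersion has to be designed so the guided mode's effective index lands at this electron-speed-dependent value.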

    The experimental setup, showing a transmission electron microscope and silicon nitride microresonator used to demonstrate the electron-photon interaction. Image credit: Murat Sivis.

The technique enables a strong modulation of the electron beam with only a few milliwatts from a continuous-wave laser – a power level generated by a common laser pointer. The approach constitutes a dramatic simplification and efficiency increase in the optical control of electron beams, which can be seamlessly implemented in a regular transmission electron microscope, and could make the scheme much more widely applicable.

    “Integrated photonics circuits based on low-loss silicon nitride have made tremendous progress and are intensively driving the progress of many emerging technologies and fundamental science such as LiDAR, telecommunication, and quantum computing, and now prove to be a new ingredient for electron beam manipulation,” says Kippenberg.

    “Interfacing electron microscopy with photonics has the potential to uniquely bridge atomic scale imaging with coherent spectroscopy,” adds Ropers. “For the future, we expect this to yield an unprecedented understanding and control of microscopic optical excitations.”

    The researchers plan to further extend their collaboration in the direction of new forms of quantum optics and attosecond metrology for free electrons.

See the full article here.


    EPFL bloc

    EPFL campus

    The Swiss Federal Institute of Technology in Lausanne [EPFL-École polytechnique fédérale de Lausanne] (CH) is a research institute and university in Lausanne, Switzerland, that specializes in natural sciences and engineering. It is one of the two Swiss Federal Institutes of Technology, and it has three main missions: education, research and technology transfer.

    The QS World University Rankings ranks EPFL(CH) 14th in the world across all fields in their 2020/2021 ranking, whereas Times Higher Education World University Rankings ranks EPFL(CH) as the world’s 19th best school for Engineering and Technology in 2020.

EPFL(CH) is located in the French-speaking part of Switzerland; the sister institution in the German-speaking part of Switzerland is The Swiss Federal Institute of Technology ETH Zürich [Eidgenössische Technische Hochschule Zürich](CH). Associated with several specialized research institutes, the two universities form The Domain of the Swiss Federal Institutes of Technology (ETH Domain) [ETH-Bereich; Domaine des Écoles polytechniques fédérales] (CH) which is directly dependent on the Federal Department of Economic Affairs, Education and Research. In connection with research and teaching activities, EPFL(CH) operates the CROCUS nuclear reactor; a tokamak fusion reactor; a Blue Gene/Q supercomputer; and P3 bio-hazard facilities.

    ETH Zürich, EPFL (Swiss Federal Institute of Technology in Lausanne) [École polytechnique fédérale de Lausanne](CH), and four associated research institutes form The Domain of the Swiss Federal Institutes of Technology (ETH Domain) [ETH-Bereich; Domaine des Écoles polytechniques fédérales] (CH) with the aim of collaborating on scientific projects.

The roots of modern-day EPFL(CH) can be traced back to the foundation of a private school under the name École spéciale de Lausanne in 1853 at the initiative of Lois Rivier, a graduate of the École Centrale Paris (FR), and John Gay, then professor and rector of the Académie de Lausanne. At its inception it had only 11 students and its offices were located at Rue du Valentin in Lausanne. In 1869, it became the technical department of the public Académie de Lausanne. When the Académie was reorganised and acquired the status of a university in 1890, the technical faculty changed its name to École d’ingénieurs de l’Université de Lausanne. In 1946, it was renamed the École polytechnique de l’Université de Lausanne (EPUL). In 1969, the EPUL was separated from the rest of the University of Lausanne and became a federal institute under its current name. EPFL(CH), like ETH Zürich(CH), is thus directly controlled by the Swiss federal government. In contrast, all other universities in Switzerland are controlled by their respective cantonal governments. Following the nomination of Patrick Aebischer as president in 2000, EPFL(CH) has started to develop into the field of life sciences. It absorbed the Swiss Institute for Experimental Cancer Research (ISREC) in 2008.

    In 1946, there were 360 students. In 1969, EPFL(CH) had 1,400 students and 55 professors. In the past two decades the university has grown rapidly and as of 2012 roughly 14,000 people study or work on campus, about 9,300 of these being Bachelor, Master or PhD students. The environment at modern day EPFL(CH) is highly international with the school attracting students and researchers from all over the world. More than 125 countries are represented on the campus and the university has two official languages, French and English.

    Organization

    EPFL is organised into eight schools, themselves formed of institutes that group research units (laboratories or chairs) around common themes:

    School of Basic Sciences (SB, Jan S. Hesthaven)

    Institute of Mathematics (MATH, Victor Panaretos)
    Institute of Chemical Sciences and Engineering (ISIC, Emsley Lyndon)
    Institute of Physics (IPHYS, Harald Brune)
    European Centre of Atomic and Molecular Computations (CECAM, Ignacio Pagonabarraga Mora)
    Bernoulli Center (CIB, Nicolas Monod)
    Biomedical Imaging Research Center (CIBM, Rolf Gruetter)
    Interdisciplinary Center for Electron Microscopy (CIME, Cécile Hébert)
    Max Planck-EPFL Centre for Molecular Nanosciences and Technology (CMNT, Thomas Rizzo)
    Swiss Plasma Center (SPC, Ambrogio Fasoli)
    Laboratory of Astrophysics (LASTRO, Jean-Paul Kneib)

    School of Engineering (STI, Ali Sayed)

    Institute of Electrical Engineering (IEL, Giovanni De Micheli)
    Institute of Mechanical Engineering (IGM, Thomas Gmür)
    Institute of Materials (IMX, Michaud Véronique)
    Institute of Microengineering (IMT, Olivier Martin)
    Institute of Bioengineering (IBI, Matthias Lütolf)

    School of Architecture, Civil and Environmental Engineering (ENAC, Claudia R. Binder)

    Institute of Architecture (IA, Luca Ortelli)
    Civil Engineering Institute (IIC, Eugen Brühwiler)
    Institute of Urban and Regional Sciences (INTER, Philippe Thalmann)
    Environmental Engineering Institute (IIE, David Andrew Barry)

    School of Computer and Communication Sciences (IC, James Larus)

    Algorithms & Theoretical Computer Science
    Artificial Intelligence & Machine Learning
    Computational Biology
    Computer Architecture & Integrated Systems
    Data Management & Information Retrieval
    Graphics & Vision
    Human-Computer Interaction
    Information & Communication Theory
    Networking
    Programming Languages & Formal Methods
    Security & Cryptography
    Signal & Image Processing
    Systems

    School of Life Sciences (SV, Gisou van der Goot)

    Bachelor-Master Teaching Section in Life Sciences and Technologies (SSV)
    Brain Mind Institute (BMI, Carmen Sandi)
    Institute of Bioengineering (IBI, Melody Swartz)
    Swiss Institute for Experimental Cancer Research (ISREC, Douglas Hanahan)
    Global Health Institute (GHI, Bruno Lemaitre)
    Ten Technology Platforms & Core Facilities (PTECH)
    Center for Phenogenomics (CPG)
    NCCR Synaptic Bases of Mental Diseases (NCCR-SYNAPSY)

    College of Management of Technology (CDM)

    Swiss Finance Institute at EPFL (CDM-SFI, Damir Filipovic)
    Section of Management of Technology and Entrepreneurship (CDM-PMTE, Daniel Kuhn)
    Institute of Technology and Public Policy (CDM-ITPP, Matthias Finger)
    Institute of Management of Technology and Entrepreneurship (CDM-MTEI, Ralf Seifert)
    Section of Financial Engineering (CDM-IF, Julien Hugonnier)

    College of Humanities (CDH, Thomas David)

    Human and social sciences teaching program (CDH-SHS, Thomas David)

EPFL Middle East (EME, Dr. Franco Vigliotti)

    Section of Energy Management and Sustainability (MES, Prof. Maher Kayal)

    In addition to the eight schools there are seven closely related institutions

    Swiss Cancer Centre
    Center for Biomedical Imaging (CIBM)
    Centre for Advanced Modelling Science (CADMOS)
    École cantonale d’art de Lausanne (ECAL)
    Campus Biotech
    Wyss Center for Bio- and Neuro-engineering
    Swiss National Supercomputing Centre

     
  • richardmitnick 10:57 am on December 31, 2020 Permalink | Reply
Tags: "An Existential Crisis in Neuroscience", DNNs are mathematical models that string together chains of simple functions that approximate real neurons., Electron Microscopy, It’s clear now that while science deals with facts a crucial part of this noble endeavor is making sense of the facts.

    From Nautilus: “An Existential Crisis in Neuroscience” 

    From Nautilus

    December 30, 2020 [Re-issued “Maps” issue January 23, 2020.]
    Grigori Guitchounts

    A rendering of dendrites (red)—a neuron’s branching processes—and protruding spines that receive synaptic information, along with a saturated reconstruction (multicolored cylinder) from a mouse cortex. Credit: Lichtman Lab at Harvard University.

    We’re mapping the brain in amazing detail—but our brain can’t understand the picture.

On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard’s campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard’s high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement.

    Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.
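The author's figures are roughly self-consistent, which a quick computation shows; the bytes-per-value and values-per-page numbers below are derived from his stated totals, not stated in the essay:

```python
total_bytes = 48e12   # 48 terabytes of data, as stated
total_values = 13e12  # 13 trillion recorded numbers, as stated
pages = 116e9         # 116 billion double-spaced printed pages, as stated

print(f"{total_bytes / total_values:.1f} bytes per value")  # ≈ 3.7, close to a 4-byte sample
print(f"{total_values / pages:.0f} values per page")        # ≈ 112 numbers on each page
```

Roughly four bytes per value is what you would expect for raw electrophysiology samples stored as 32-bit integers or floats, so the 48 TB figure is plausible for 13 trillion recorded numbers.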

    But, as massive as my dataset sounds, it represents just a tiny chunk of a dataset taken from the whole brain. And the questions it asks—Do neurons in the visual cortex do anything when an animal can’t see? What happens when inputs to the visual cortex from other brain regions are shut off?—are small compared to the ultimate question in neuroscience: How does the brain work?

    LIVING COLOR: This electron microscopy image of a slice of mouse cortex, which shows different neurons labeled by color, is just the beginning. “We’re working on a cortical slab of a human brain, where every synapse and every connection of every nerve cell is identifiable,” says Harvard’s Jeff Lichtman. “It’s amazing.” Credit: Lichtman Lab at Harvard University.

    The nature of the scientific process is such that researchers have to pick small, pointed questions. Scientists are like diners at a restaurant: We’d love to try everything on the menu, but choices have to be made. And so we pick our field, and subfield, read up on the hundreds of previous experiments done on the subject, design and perform our own experiments, and hope the answers advance our understanding. But if we have to ask small questions, then how do we begin to understand the whole?

    Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we don’t know how different parts of the brain interact with and depend on each other. We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets.

    Some serious efforts, however, are now underway to map brains in full. One approach, called connectomics, strives to chart the entirety of the connections among neurons in a brain. In principle, a complete connectome would contain all the information necessary to provide a solid base on which to build a holistic understanding of the brain. We could see what each brain part is, how it supports the whole, and how it ought to interact with the other parts and the environment. We’d be able to place our brain in any hypothetical situation and have a good sense of how it would react.

    The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.

    Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Were humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.

    Lichtman likes to shoot first, ask questions later. The 68-year-old neuroscientist’s weapon of choice is a 61-beam electron microscope, which Lichtman’s team uses to visualize the tiniest of details in brain tissue. The way neurons are packed in a brain would make canned sardines look like they have a highly evolved sense of personal space. To make any sense of these images, and in turn, what the brain is doing, the parts of neurons have to be annotated in three dimensions, the result of which is a wiring diagram. Done at the scale of an entire brain, the effort constitutes a complete wiring diagram, or the connectome.

    To capture that diagram, Lichtman employs a machine that can only be described as a fancy deli slicer. The machine cuts pieces of brain tissue into 30-nanometer-thick sections, which it then pastes onto a tape conveyor belt. The tape goes on silicon wafers, and into Lichtman’s electron microscope, where billions of electrons blast the brain slices, generating images that reveal nanometer-scale features of neurons, their axons, dendrites, and the synapses through which they exchange information. The Technicolor images are a beautiful sight that evokes a fantastic thought: The mysteries of how brains create memories, thoughts, perceptions, feelings—consciousness itself—must be hidden in this labyrinth of neural connections.
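The scale of that slicing is easy to underestimate. As a rough illustration (the 1 mm block height below is a hypothetical number for the sake of arithmetic, not a figure from Lichtman's lab), dividing a block of tissue into 30-nanometer sections yields tens of thousands of slices per millimeter:

```python
# Back-of-the-envelope: how many 30-nanometer sections does a tissue block yield?
# The 1 mm block height is a hypothetical illustration, not a figure from the article.
SECTION_THICKNESS_M = 30e-9   # 30 nm per slice, as described above
block_height_m = 1e-3         # a hypothetical 1-millimeter-tall block of tissue

sections = block_height_m / SECTION_THICKNESS_M
print(f"{sections:,.0f} sections")  # roughly 33,333 slices from a single millimeter
```

Every one of those slices then has to be imaged and stitched back into a three-dimensional volume.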

    THE MAPMAKER: Jeff Lichtman, a leader in brain mapping, says the word “understanding” has to undergo a revolution in reference to the human brain. “There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’” Credit: Lichtman Lab at Harvard University.

    A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain. But Lichtman is not daunted. He is determined to map whole brains, exorbitant exabyte-scale storage be damned.
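The arithmetic behind that comparison checks out; the short sketch below simply restates the estimates from the text in bytes (decimal units assumed):

```python
# Sanity-check the storage comparison in the text: 2 exabytes for a mouse
# connectome versus ~100 terabytes for every book ever written.
TB = 1e12  # terabyte, in bytes (decimal units)
EB = 1e18  # exabyte, in bytes

mouse_connectome = 2 * EB
all_books = 100 * TB

print(f"{mouse_connectome / 1e9 / 1e9:.0f} billion gigabytes")  # 2 billion gigabytes
print(f"{all_books / mouse_connectome:.3%} of a mouse brain")   # 0.005% of a mouse brain
```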

    Lichtman’s office is a spacious place with floor-to-ceiling windows overlooking a tree-lined walkway and an old circular building that, in the days before neuroscience even existed as a field, used to house a cyclotron. He was wearing a deeply black sweater, which contrasted with his silver hair and olive skin. When I asked if a completed connectome would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.

    “I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”

    “But we understand specific aspects of the brain,” I said. “Couldn’t we put those aspects together and get a more holistic understanding?”

    “I guess I would retreat to another beachhead, which is, ‘Can we describe the brain?’ ” Lichtman said. “There are all sorts of fundamental questions about the physical nature of the brain we don’t know. But we can learn to describe them. A lot of people think ‘description’ is a pejorative in science. But that’s what the Hubble telescope does. That’s what genomics does. They describe what’s actually there. Then from that you can generate your hypotheses.”

    “Why is description an unsexy concept for neuroscientists?”

    “Biologists are often seduced by ideas that resonate with them,” Lichtman said. That is, they try to bend the world to their idea rather than the other way around. “It’s much better—easier, actually—to start with what the world is, and then make your idea conform to it,” he said. Instead of a hypothesis-testing approach, we might be better served by following a descriptive, or hypothesis-generating methodology. Otherwise we end up chasing our own tails. “In this age, the wealth of information is an enemy to the simple idea of understanding,” Lichtman said.

    “How so?” I asked.

    “Let me put it this way,” Lichtman said. “Language itself is a fundamentally linear process, where one idea leads to the next. But if the thing you’re trying to describe has a million things happening simultaneously, language is not the right tool. It’s like understanding the stock market. The best way to make money on the stock market is probably not by understanding the fundamental concepts of economy. It’s by understanding how to utilize this data to know what to buy and when to buy it. That may have nothing to do with economics but with data and how data is used.”

    “Maybe human brains aren’t equipped to understand themselves,” I offered.

    “And maybe there’s something fundamental about that idea: that no machine can have an output more sophisticated than itself,” Lichtman said. “What a car does is trivial compared to its engineering. What a human brain does is trivial compared to its engineering. Which is the great irony here. We have this false belief there’s nothing in the universe that humans can’t understand because we have infinite intelligence. But if I asked you if your dog can understand something you’d say, ‘Well, my dog’s brain is small.’ Well, your brain is only a little bigger,” he continued, chuckling. “Why, suddenly, are you able to understand everything?”

    Was Lichtman daunted by what a connectome might achieve? Did he see his efforts as Sisyphean?

    “It’s just the opposite,” he said. “I thought at this point we would be less far along. Right now, we’re working on a cortical slab of a human brain, where every synapse is identified automatically, every connection of every nerve cell is identifiable. It’s amazing. To say I understand it would be ridiculous. But it’s an extraordinary piece of data. And it’s beautiful. From a technical standpoint, you really can see how the cells are connected together. I didn’t think that was possible.”

    Lichtman stressed his work was about more than a comprehensive picture of the brain. “If you want to know the relationship between neurons and behavior, you gotta have the wiring diagram,” he said. “The same is true for pathology. There are many incurable diseases, such as schizophrenia, that don’t have a biomarker related to the brain. They’re probably related to brain wiring but we don’t know what’s wrong. We don’t have a medical model of them. We have no pathology. So in addition to fundamental questions about how the brain works and consciousness, we can answer questions like, Where did mental disorders come from? What’s wrong with these people? Why are their brains working so differently? Those are perhaps the most important questions to human beings.”

    Late one night, after a long day of trying to make sense of my data, I came across a short story by Jorge Luis Borges that seemed to capture the essence of the brain mapping problem. In the story, On Exactitude in Science, a man named Suarez Miranda wrote of an ancient empire that, through the use of science, had perfected the art of map-making. While early maps were nothing but crude caricatures of the territories they aimed to represent, new maps grew larger and larger, filling in ever more details with each edition. Over time, Borges wrote, “the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province.” Still, the people craved more detail. “In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.”

    The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?

    Humans have a critical weapon in this fight. Machine learning has been a boon to brain mapping, and the self-reinforcing relationship promises to transform the whole endeavor. Deep learning algorithms (also known as deep neural networks, or DNNs) have in the past decade allowed machines to perform cognitive tasks once thought impossible for computers—not only object recognition, but text transcription and translation, or playing games like Go or chess. DNNs are mathematical models that string together chains of simple functions that approximate real neurons. These algorithms were inspired directly by the physiology and anatomy of the mammalian cortex, but are crude approximations of real brains, based on data gathered in the 1960s. Yet they have surpassed expectations of what machines can do.

    The secret to Lichtman’s progress with mapping the human brain is machine intelligence. Lichtman’s team, in collaboration with Google, is using deep networks to annotate the millions of images from brain slices their microscopes collect. Each scan from an electron microscope is just a set of pixels. Human eyes easily recognize the boundaries of each blob in the image (a neuron’s soma, axon, or dendrite, in addition to everything else in the brain), and with some effort can tell where a particular bit from one slice appears on the next slice. This kind of labeling and reconstruction is necessary to make sense of the vast datasets in connectomics, and has traditionally required armies of undergraduate students or citizen scientists to manually annotate all chunks. DNNs trained on image recognition are now doing the heavy lifting automatically, turning a job that took months or years into one that’s complete in a matter of hours or days. Recently, Google identified each neuron, axon, dendrite, and dendritic spine—and every synapse—in slices of the human cerebral cortex. “It’s unbelievable,” Lichtman said.

    Scientists still need to understand the relationship between those minute anatomical features and dynamical activity profiles of neurons—the patterns of electrical activity they generate—something the connectome data lacks. This is a point on which connectomics has received considerable criticism, mainly by way of example from the worm: Neuroscientists have had the complete wiring diagram of the worm C. elegans for a few decades now, but arguably do not understand the 302-neuron creature in its entirety; how its brain connections relate to its behaviors is still an active area of research.

    Still, structure and function go hand-in-hand in biology, so it’s reasonable to expect one day neuroscientists will know how specific neuronal morphologies contribute to activity profiles. It wouldn’t be a stretch to imagine a mapped brain could be kickstarted into action on a massive server somewhere, creating a simulation of something resembling a human mind. The next leap constitutes the dystopias in which we achieve immortality by preserving our minds digitally, or machines use our brain wiring to make super-intelligent machines that wipe humanity out. Lichtman didn’t entertain the far-out ideas in science fiction, but acknowledged that a network that would have the same wiring diagram as a human brain would be scary. “We wouldn’t understand how it was working any more than we understand how deep learning works,” he said. “Now, suddenly, we have machines that don’t need us anymore.”

    Yet a masterly deep neural network still doesn’t grant us a holistic understanding of the human brain. That point was driven home to me last year at a Computational and Systems Neuroscience conference, a meeting of the who’s who in neuroscience, which took place outside Lisbon, Portugal. In a hotel ballroom, I listened to a talk by Arash Afraz, a 40-something neuroscientist at the National Institute of Mental Health in Bethesda, Maryland. The model neurons in DNNs are to real neurons what stick figures are to people, and the way they’re connected is equally sketchy, he suggested.

    Afraz is short, with a dark horseshoe mustache and balding dome covered partially by a thin ponytail, reminiscent of Matthew McConaughey in True Detective. As sturdy Atlantic waves crashed into the docks below, Afraz asked the audience if we remembered René Magritte’s Ceci n’est pas une pipe painting, which depicts a pipe with the title written out below it. Afraz pointed out that the model neurons in DNNs are not real neurons, and the connections among them are not real either. He displayed a classic diagram of interconnections among brain areas found through experimental work in monkeys—a jumble of boxes with names like V1, V2, LIP, MT, HC, each a different color, and black lines connecting the boxes seemingly at random and in more combinations than seems possible. In contrast to the dizzying heap of connections in real brains, DNNs typically connect different brain areas in a simple chain, from one “layer” to the next. Try explaining that to a rigorous anatomist, Afraz said, as he flashed a meme of a shocked baby orangutan cum anatomist. “I’ve tried, believe me,” he said.

    I, too, have been curious why DNNs are so simple compared to real brains. Couldn’t we improve their performance simply by making them more faithful to the architecture of a real brain? To get a better sense for this, I called Andrew Saxe, a computational neuroscientist at Oxford University. Saxe agreed that it might be informative to make our models truer to reality. “This is always the challenge in the brain sciences: We just don’t know what the important level of detail is,” he told me over Skype.

    How do we make these decisions? “These judgments are often based on intuition, and our intuitions can vary wildly,” Saxe said. “A strong intuition among many neuroscientists is that individual neurons are exquisitely complicated: They have all of these back-propagating action potentials, they have dendritic compartments that are independent, they have all these different channels there. And so a single neuron might even itself be a network. To caricature that as a rectified linear unit”—the simple mathematical model of a neuron in DNNs—“is clearly missing out on so much.”
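The "rectified linear unit" Saxe mentions really is that simple: a weighted sum of inputs, a bias, and a clip at zero. A minimal sketch (all numbers below are made-up illustrations, not parameters from any real network) shows how little of a biological neuron's machinery survives in the caricature:

```python
# One DNN "model neuron": a dot product, a bias, then max(0, x). That's all.
# Inputs and weights here are hypothetical, chosen only to illustrate the idea.
def relu_neuron(inputs, weights, bias):
    """A rectified linear unit: weighted sum of inputs, clipped at zero."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)

x = [0.5, -1.2, 3.0]   # hypothetical activity arriving from upstream units
w = [0.8, 0.1, -0.4]   # hypothetical learned weights
print(relu_neuron(x, w, 0.2))   # the weighted sum is -0.72, so the unit outputs 0.0
```

Everything a real neuron does with dendritic compartments, ion channels, and back-propagating action potentials is collapsed into that one line of arithmetic.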

    As 2020 has arrived, I have thought a lot about what I have learned from Lichtman, Afraz, and Saxe and the holy grail of neuroscience: understanding the brain. I have found myself revisiting my undergrad days, when I held science up as the only method of knowing that was truly objective (I also used to think scientists would be hyper-rational, fair beings paramountly interested in the truth—so perhaps this just shows how naive I was).

    It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.

    It seems likely that Lichtman’s two exabytes of brain slices, and even my 48 terabytes of rat brain data, will not fit through any individual human mind. Or at least no human mind is going to orchestrate all this data into a panoramic picture of how the human brain works. As I sat at my office desk, watching the setting sun tint the cloudless sky a light crimson, my mind reached a chromatic, if mechanical, future. The machines we have built—the ones architected after cortical anatomy—fall short of capturing the nature of the human brain. But they have no trouble finding patterns in large datasets. Maybe one day, as they grow stronger building on more cortical anatomy, they will be able to explain those patterns back to us, solving the puzzle of the brain’s interconnections, creating a picture we understand. Out my window, the sparrows were chirping excitedly, not ready to call it a day.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 1:05 pm on August 20, 2020 Permalink | Reply
    Tags: "2D Electronics Get an Atomic Tuneup", , Electron Microscopy, , , , , , TUNING THE BAND GAP   

    From Lawrence Berkeley National Lab: “2D Electronics Get an Atomic Tuneup” 


    From Lawrence Berkeley National Lab

    August 20, 2020
    Theresa Duque
    tnduque@lbl.gov
    (510) 495-2418

    Scientists at Berkeley Lab, UC Berkeley demonstrate tunable, atomically thin semiconductors.

    Electron microscopy experiments revealed meandering stripes formed by metal atoms of rhenium and niobium in the lattice structure of a 2D transition metal dichalcogenide alloy. (Image courtesy of Amin Azizi.)

    TO TUNE THE BAND GAP, a key parameter in controlling the electrical conductivity and optical properties of semiconductors, researchers typically engineer alloys, a process in which two or more materials are combined to achieve properties that otherwise could not be achieved by a pristine material.

    But engineering band gaps of conventional semiconductors via alloying has often been a guessing game, because scientists have not had a technique to directly “see” whether the alloy’s atoms are arranged in a specific pattern, or randomly dispersed.

    Now, as reported in Physical Review Letters, a research team led by Alex Zettl and Marvin Cohen – senior faculty scientists in the Materials Sciences Division at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), and professors of physics at UC Berkeley – has demonstrated a new technique that could engineer the band gap needed to improve the performance of semiconductors for next-generation electronics such as optoelectronics, thermoelectrics, and sensors.

    For the current study, the researchers examined monolayer and multilayer samples of a 2D transition metal dichalcogenide (TMD) material made of the alloy rhenium niobium disulfide.

    Electron microscopy experiments revealed meandering stripes formed by metal atoms of rhenium and niobium in the lattice structure of the 2D TMD alloy.

    A statistical analysis confirmed what the research team had suspected – that metal atoms in the 2D TMD alloy prefer to be adjacent to the other metal atoms, “which is in stark contrast to the random structure of other TMD alloys of the same class,” said lead author Amin Azizi, a postdoctoral researcher in the Zettl lab at UC Berkeley.

    Calculations performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) by Mehmet Dogan, a postdoctoral researcher in the Cohen lab at UC Berkeley, demonstrated that such atomic ordering can modify the material’s band gap.


    NERSC is a DOE Office of Science User Facility.

    Optical spectroscopy measurements performed at Berkeley Lab’s Advanced Light Source revealed that the band gap of the 2D TMD alloy can be additionally tuned by adjusting the number of layers in the material.


    Also, the band gap of the monolayer alloy is similar to that of silicon – which is “just right” for many electronic and optical applications, Azizi said. And the 2D TMD alloy has the added benefits of being flexible and transparent.

    The researchers next plan to explore the sensing and optoelectronic properties of new devices based on the 2D TMD alloy.

    Co-authors with Azizi, Cohen, and Zettl include Jeffrey D. Cain, Mehmet Dogan, Rahmatollah Eskandari, Emily G. Glazer, and Xuanze Yu.

    The Advanced Light Source and NERSC are DOE Office of Science user facilities co-located at Berkeley Lab.

    This work was supported by the DOE Office of Science. Additional funding was provided by the National Science Foundation.

    See the full article here.




    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.


     
  • richardmitnick 7:14 am on July 13, 2020 Permalink | Reply
    Tags: (DMSE)-Department of Materials Science and Engineering, , Electron Microscopy, Frances Ross, , , MIT.nano facility,   

    From MIT News: “A wizard of ultrasharp imaging” Frances Ross 

    MIT News

    From MIT News

    July 12, 2020
    David L. Chandler

    To oversee its new cutting-edge electron microscopy systems, MIT sought out Frances Ross’ industry-honed expertise.

    “I’m hoping that MIT becomes a center for electron microscopy,” professor Frances Ross says. “There is nothing that exists with the capabilities that we are aiming for here.” Photo: Jared Charney

    A specially designed transmission electron microscope in MIT Materials Research Laboratory’s newly renovated Electron Microscopy (EM) Shared Facility in Building 13. Photo, Denis Paiste, Materials Research Laboratory.

    Though Frances Ross and her sister Caroline Ross both ended up on the faculty of MIT’s Department of Materials Science and Engineering, they got there by quite different pathways. While Caroline followed a more traditional academic route and has spent most of her career at MIT, Frances Ross spent most of her professional life working in the industrial sector, as a microscopy specialist at IBM.

    IBM Research Ultra High Vacuum-Transmission Electron Microscope Lab In 360.

    It wasn’t until 2018 that she arrived at MIT to oversee the new state-of-the-art electron microscope systems being installed in the new MIT.nano facility.

    Frances, who bears a strong family resemblance to her sister, says “it’s confused a few people, if they don’t know there are two of us.”

    The sisters grew up in London in a strongly science- and materials-oriented family. Their father, who worked first as a scientist and then as a lawyer, is currently working on his third PhD degree, in classics. Their mother, a gemologist, specializes in precisely matching diamonds, and oversees certification testing for the profession.

    After earning her doctorate at Cambridge University in materials science, specializing in electron microscopy, Frances Ross went on to do a postdoc at Bell Labs in New Jersey, and then to the National Center for Electron Microscopy at the University of California at Berkeley. From there she continued her work in electron microscopy at IBM in Yorktown Heights, New York, where she spent 20 years working on development and application of electron microscope technology to studying crystal growth.

    When MIT built its new cutting-edge nanotechnology fabrication and analysis facility, MIT.nano, it was clear that state-of-the-art microscope technology would need to be a key feature of the new center. That’s when Ross was hired as a professor, along with Professor Jim LeBeau and Research Scientist Rami Dana, who had an academic and industrial research background, to oversee the creation, development, and application of those microscopes for the Department of Materials Science and Engineering (DMSE) and the wider MIT community.

    “Currently, our students have to go to other places to do high-performance microscopy, so they might go to Harvard, or one of the national labs,” says Ross, who is the Ellen Swallow Richards Professor in Materials Science and Engineering. “Very many advances in the instrumentation have come together over the last few years, so that if your equipment is a little older, it’s actually a big disadvantage in electron microscopy. This is an area where MIT had not invested for a little while, and therefore, once they made that decision, the jump is going to be very significant. We’re going to have a state-of-the-art imaging capability.”

    There will be two major electron microscope systems for materials science, which are gradually taking shape inside the vibration-isolated basement level of MIT.nano, alongside two others already installed that are specialized for biomedical imaging.

    One of these will be an advanced version of a standard electron microscope, she says, that will have a unique combination of features. “There is nothing that exists with the capabilities that we are aiming for here.”

    The most important of these, she says, is the quality of the vacuum inside the microscope: “In most of our experiments, we want to start with a surface that’s atomically clean.” For example, “we could start with atomically clean silicon, and then add some germanium. How do the germanium atoms add onto the silicon surface? That’s a very important question for microelectronics. But if the sample is in an environment that’s not well-controlled, then the results you get will depend on how dirty the vacuum is. Contamination may affect the process, and you can’t be sure that what you’re seeing is what happens in real life.” Ross is working with the manufacturers to reach exceptional levels of cleanliness in the vacuum of the electron microscope system being developed now.

    But ultra-high-quality vacuum is just one of its attributes. “We combine the good vacuum with capabilities to heat the sample, and flow gases, and record images at high speed,” Ross says. “Perhaps most importantly for a lot of our experiments, we use lower-energy electrons to do the imaging, because for many interesting materials like 2D materials, such as graphene, boron nitride, and related structures, the high-energy electrons that are normally used will damage the sample.”

    Putting that all together, she says, “is a unique instrument that will give us real insights into surface reactions, crystal growth processes, materials transformations, catalysis, all kinds of reactions involving nanostructure formation and chemistry on the surfaces of 2D materials.”

    Other instruments and capabilities are also being added to MIT’s microscopy portfolio. A new scanning transmission electron microscope is already installed in MIT.nano and is providing high-resolution structural and chemical analysis of samples for several projects at MIT. Another new capability is a special sample holder that allows researchers to make movies of processes unfolding in water or other liquids inside the microscope. This allows detailed monitoring, at up to 100 frames per second, of phenomena such as solution-phase growth, chemical reactions, and electrochemical processes such as battery charging and discharging. Making movies of processes taking place in water, she says, “is something of a new field for electron microscopy.”

    Ross has already set up an ultra-high-vacuum electron microscope in DMSE, though without the resolution and low-voltage operation of the new instrument. And finally, an ultra-high-vacuum scanning tunneling microscope has just started to produce images and will measure current flow through nanoscale materials.

    In their free time, Ross and her husband Brian enjoy sailing, mostly off the coast of Maine, with their two children, Kathryn and Eric. As a hobby she collects samples of beach sand. “I have a thousand different kinds of sand from various places, and a lot of them from Massachusetts,” she says. “Everywhere I go, that’s my souvenir.”

    But with her intense focus on developing this new world-class microscopy facility, there’s little time for anything else these days. Her aim is to ensure that it’s the best facility possible.

    “I’m hoping that MIT becomes a center for electron microscopy,” she says. “You know, with all the interesting materials science and physics that goes on here, it matches up very well with this unique instrumentation, this high-quality combination of imaging and analysis. These unique characterization capabilities really complement the rest of the science that happens here.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 11:42 am on February 24, 2020 Permalink | Reply
    Tags: "A Simple Retrofit Transforms Ordinary Electron Microscopes Into High-Speed Atom-Scale Cameras", , Electron Microscopy,   

    From NIST: “A Simple Retrofit Transforms Ordinary Electron Microscopes Into High-Speed Atom-Scale Cameras” 


    From NIST

    February 24, 2020

    Ben P. Stein
    benjamin.stein@nist.gov
    (301) 975-2763

    Patented “beam chopper” provides cost-effective way to investigate super-fast processes important for tomorrow’s technology.

    1
    Credit: N. Hanacek/NIST

    Researchers at the National Institute of Standards and Technology (NIST) and their collaborators have developed a way to retrofit the transmission electron microscope — a long-standing scientific workhorse for making crisp microscopic images — so that it can also create high-quality movies of super-fast processes at the atomic and molecular scale. Compatible with electron microscopes old and new, the retrofit promises to enable fresh insights into everything from microscopic machines to next-generation computer chips and biological tissue by making this moviemaking capability more widely available to laboratories everywhere.

    “We want to be able to look at things in materials science that happen really quickly,” said NIST scientist June Lau. She reports the first proof-of-concept operation of this retrofitted design with her colleagues in the journal Review of Scientific Instruments. The team designed the retrofit to be a cost-effective add-on to existing instruments. “It’s expected to be a fraction of the cost of a new electron microscope,” she said.

    A nearly 100-year-old invention, the electron microscope remains an essential tool in many scientific laboratories. A popular version is known as the transmission electron microscope (TEM), which fires electrons through a target sample to produce an image. Modern versions of the microscope can magnify objects by as much as 50 million times. Electron microscopes have helped to determine the structure of viruses, test the operation of computer circuits, and reveal the effectiveness of new drugs.

    “Electron microscopes can look at very tiny things on the atomic scale,” Lau said. “They are great. But historically, they look at things that are fixed in time. They’re not good at viewing moving targets,” she said.

    In the last 15 years, laser-assisted electron microscopes made videos possible, but such systems have been complex and expensive. While these setups can capture events that last from nanoseconds (billionths of a second) to femtoseconds (quadrillionths of a second), a laboratory must often buy a newer microscope to accommodate this capability as well as a specialized laser, with a total investment that can run into the millions of dollars. A lab also needs in-house laser-physics expertise to help set up and operate such a system.

    “Frankly, not everyone has that capacity,” Lau said.

    In contrast, the retrofit enables TEMs of any age to make high-quality movies on the scale of picoseconds (trillionths of a second) by using a relatively simple “beam chopper.” In principle, the beam chopper can be used in any manufacturer’s TEM. To install it, NIST researchers open the microscope column directly under the electron source, insert the beam chopper and close up the microscope again. Lau and her colleagues have successfully retrofitted three TEMs of different capabilities and vintage.

    Like a stroboscope, this beam chopper releases precisely timed pulses of electrons that can capture frames of important repeating or cyclic processes.

    “Imagine a Ferris wheel, which moves in a cyclical and repeatable way,” Lau said. “If we’re recording it with a pinhole camera, it will look blurry. But we want to see individual cars. I can put a shutter in front of the pinhole camera so that the shutter speed matches the movement of the wheel. We can time the shutter to open whenever a designated car goes to the top. In this way I can make a stack of images that shows each car at the top of the Ferris wheel,” she said.
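    Lau’s Ferris-wheel analogy is standard stroboscopic sampling: if a periodic process is sampled exactly once per cycle, every frame catches it at the same phase, so the motion appears frozen. A toy sketch in Python (arbitrary numbers of our own, not from the article):

    ```python
    # Stroboscopic sampling of a periodic process: sampling once per cycle
    # freezes the motion; denser sampling reveals it.
    PERIOD = 2.0  # seconds per revolution (arbitrary)

    def wheel_phase(t: float) -> float:
        """Fraction of a revolution completed at time t, in [0, 1)."""
        return (t / PERIOD) % 1.0

    # One frame per revolution: the marked car appears at the same phase every time.
    strobe = [wheel_phase(n * PERIOD) for n in range(5)]      # all 0.0
    # Eight frames per revolution: the full motion becomes visible.
    dense = [wheel_phase(n * PERIOD / 8) for n in range(8)]   # 0.0, 0.125, ..., 0.875
    ```

    Matching the sampling rate to the process’s own period is exactly what the beam chopper does, with electron pulses playing the role of camera frames.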

    Like the shutter in that analogy, the beam chopper interrupts a continuous electron beam. But unlike a shutter, which physically opens and closes, the beam chopper’s aperture stays open all the time, eliminating the need for a complex mechanical part.

    Instead, the beam chopper generates a radio frequency (RF) electromagnetic wave in the direction of the electron beam. The wave causes the traveling electrons to behave “like corks bobbing up and down on the surface of a water wave,” Lau said.

    Riding this wave, the electrons follow an undulating path as they approach the aperture. Most electrons are blocked except for the ones that are perfectly aligned with the aperture. The frequency of the RF wave is tunable, so that electrons hit the sample anywhere from 40 million to 12 billion times per second. As a result, researchers can capture important processes in the sample at time intervals from about a nanosecond to 10 picoseconds.
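    As a rough consistency check (ours, not NIST’s), the spacing between successive pulses is simply the reciprocal of the repetition rate:

    ```python
    # Spacing between electron pulses = 1 / repetition rate.
    # Repetition rates quoted in the article: 40 million to 12 billion pulses per second.
    def pulse_spacing_ps(rate_hz: float) -> float:
        """Time between successive electron pulses, in picoseconds."""
        return 1e12 / rate_hz

    slow = pulse_spacing_ps(40e6)   # 40 MHz -> 25,000 ps (25 ns between pulses)
    fast = pulse_spacing_ps(12e9)   # 12 GHz -> roughly 83 ps between pulses
    ```

    The ultimate time resolution of a stroboscopic measurement also depends on the duration of each pulse, not only on the spacing between pulses.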

    In this way, the NIST-retrofitted microscope can capture atom-scale details of the back-and-forth movements in tiny machines such as microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS). It can potentially study the regularly repeating signals in antennas used for high-speed communications and probe the movement of electric currents in next-generation computer processors.

    In one demo, the researchers wanted to prove that a retrofitted microscope functioned as it did before the retrofit. They imaged gold nanoparticles in both the traditional “continuous” mode and the pulsed beam mode. The images in the pulsed mode had comparable clarity and resolution to the still images.

    “We designed it so it should be the same,” Lau said.

    2
    A transmission electron microscope (TEM) image of gold (Au) nanoparticles magnified 200,000 times with a continuous electron beam (left) and a pulsed beam (right). The scale bar is 5 nanometers (nm).

    The beam chopper can also do double duty, pumping RF energy into the material sample and then taking pictures of the results. The researchers demonstrated this ability by injecting microwaves (a form of radio wave) into a metallic, comb-shaped MEMS device. The microwaves create electric fields within the MEMS device and cause the incoming pulses of electrons to deflect. These electron deflections enable researchers to build movies of the microwaves propagating through the MEMS comb.

    Lau and her colleagues hope their invention can soon make new scientific discoveries. For example, it could investigate the behavior of quickly changing magnetic fields in molecular-scale memory devices that promise to store more information than before.

    The researchers spent six years inventing and developing their beam chopper and have received several patents and an R&D 100 Award for their work. Collaborators included researchers at Brookhaven National Laboratory in Upton, New York, and Euclid Techlabs in Bolingbrook, Illinois.

    One of the things that makes Lau most proud is that their design can breathe new life into any TEM, including the 25-year-old unit that performed the latest demonstration. The design gives labs everywhere the potential to use their microscopes to capture important fast-moving processes in tomorrow’s materials.

    “Democratizing science was the whole motivation,” Lau said.

    See the full article here.


    NIST Campus, Gaithersburg, MD, USA

    NIST Mission, Vision, Core Competencies, and Core Values

    NIST’s mission

    To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
    NIST’s vision

    NIST will be the world’s leader in creating critical measurement solutions and promoting equitable standards. Our efforts stimulate innovation, foster industrial competitiveness, and improve the quality of life.
    NIST’s core competencies

    Measurement science
    Rigorous traceability
    Development and use of standards

    NIST’s core values

    NIST is an organization with strong values, reflected both in our history and our current work. NIST leadership and staff will uphold these values to ensure a high performing environment that is safe and respectful of all.

    Perseverance: We take the long view, planning the future with scientific knowledge and imagination to ensure continued impact and relevance for our stakeholders.
    Integrity: We are ethical, honest, independent, and provide an objective perspective.
    Inclusivity: We work collaboratively to harness the diversity of people and ideas, both inside and outside of NIST, to attain the best solutions to multidisciplinary challenges.
    Excellence: We apply rigor and critical thinking to achieve world-class results and continuous improvement in everything we do.

     
  • richardmitnick 12:37 pm on February 21, 2019 Permalink | Reply
    Tags: "Big Data at the Atomic Scale: New Detector Reaches New Frontier in Speed", A new detector that can capture atomic-scale images in millionths-of-a-second increments., , , Electron Microscopy, known as the “4D Camera” (for Dynamic Diffraction Direct Detector), , , NCEM-National Center for Electron Microscopy, The Molecular Foundry, The new detector, The Transmission Electron Aberration-corrected Microscope (TEAM 0.5) at Berkeley Lab   

    From Lawrence Berkeley National Lab: “Big Data at the Atomic Scale: New Detector Reaches New Frontier in Speed” 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    February 21, 2019
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    1
    The Transmission Electron Aberration-corrected Microscope (TEAM 0.5) at Berkeley Lab has been upgraded with a new detector that can capture atomic-scale images in millionths-of-a-second increments. (Credit: Thor Swift/Berkeley Lab)


    This video provides an overview of the R&D effort to upgrade an electron microscope at Berkeley Lab’s Molecular Foundry with a superfast detector, the 4D Camera. The detector, which is linked to a supercomputer at Berkeley Lab via a high-speed data connection, can capture more images at a faster rate, revealing atomic-scale details across much larger areas than was possible before. (Credit: Marilyn Chung/Berkeley Lab)

    Advances in electron microscopy – using electrons as imaging tools to see things well beyond the reach of conventional microscopes that use light – have opened up a new window into the nanoscale world and brought a wide range of samples into focus as never before.

    Electron microscopy experiments can only use a fraction of the possible information generated as the microscope’s electron beam interacts with samples. Now, a team at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has designed a new kind of electron detector that captures all of the information in these interactions.

    This new tool, a superfast detector installed Feb. 12 at Berkeley Lab’s Molecular Foundry, a nanoscale science user facility, captures more images at a faster rate, revealing atomic-scale details across much larger areas than was possible before. The Molecular Foundry and its world-class electron microscopes in the National Center for Electron Microscopy (NCEM) provide access to researchers from around the world.

    Faster imaging can also reveal important changes that samples are undergoing and provide movies vs. isolated snapshots. It could, for example, help scientists to better explore working battery and microchip components at the atomic scale before the onset of damage.

    The detector, which has a special direct connection to the Cori supercomputer at the Lab’s National Energy Research Scientific Computing Center (NERSC), will enable scientists to record atomic-scale images with timing measured in microseconds, or millionths of a second – 100 times faster than possible with existing detectors.

    The NERSC Cray Cori II supercomputer at Berkeley Lab, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    “It is the fastest electron detector ever made,” said Andrew Minor, NCEM facility director at the Molecular Foundry.

    “It opens up a new time regime to explore with high-resolution microscopy. No one has ever taken continuous movies at this time resolution” using electron imaging, he said. “What happens there? There are all kinds of dynamics that might happen. We just don’t know because we’ve never been able to look at them before.” The new movies could reveal tiny deformations and movements in materials, for example, and show chemistry in action.

    The development of the new detector, known as the “4D Camera” (for Dynamic Diffraction Direct Detector), is the latest in a string of pioneering innovations in electron microscopy, atomic-scale imaging, and high-speed data transfer and computing at Berkeley Lab that span several decades.

    “Our group has been working for some time on making better detectors for microscopy,” said Peter Denes, a Berkeley Lab senior scientist and a longtime pioneer in the development of electron microscopy tools.

    “You get a whole scattering pattern instead of just one point, and you can go back and reanalyze the data to find things that maybe you weren’t focusing on before,” Denes said. The detector rapidly builds up a complete picture of a sample: an electron beam scans across it, and at each point the detector captures the full pattern of electrons scattering off the sample.

    Mary Scott, a faculty scientist at the Molecular Foundry, said that the unique geometry of the new detector allows studies of both light and heavyweight elements in materials side by side. “The reason you might want to perform one of these more complicated experiments would be to measure the positions of light elements, particularly in materials that might be really sensitive to the electron beam – like lithium in a battery material – and ideally you would be able to also precisely measure the positions of heavy elements in that same material,” she said.

    The new detector has been installed on the Transmission Electron Aberration-corrected Microscope 0.5 (TEAM 0.5) at the Molecular Foundry, which set high-resolution records when it launched at NCEM a decade ago and allows visiting researchers to access single-atom resolution for some samples. The detector will generate a whopping 4 terabytes of data per minute.

    “The amount of data is equivalent to watching about 60,000 HD movies simultaneously,” said Peter Ercius, a staff scientist at the Molecular Foundry who specializes in 3D atomic-scale imaging.
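    The movie comparison checks out, order of magnitude, if one assumes an HD stream of roughly 8 megabits per second (our assumed figure; the article does not give one):

    ```python
    # Rough sanity check of the "60,000 HD movies" comparison.
    # Assumption (not from the article): one HD video stream ~ 8 Mbit/s.
    TB = 1e12                                  # bytes per terabyte
    detector_bits_per_s = 4 * TB * 8 / 60.0    # 4 TB per minute, expressed in bits/s
    hd_stream_bits_per_s = 8e6

    equivalent_streams = detector_bits_per_s / hd_stream_bits_per_s
    # About 67,000 simultaneous streams, the same order as the quoted 60,000.
    ```

    The same arithmetic puts the raw rate at roughly 530 gigabits per second, the same order as the 400-gigabit stream described below.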

    Brent Draney, a networking architect at Berkeley Lab’s NERSC, said that Ercius and Denes had approached NERSC to see what it would take to build a system that could handle the huge, 400-gigabit-per-second stream of data produced by the 4D Camera.

    His response: “We actually already have a system capable of doing that. What we really needed to do is to build a network between the microscope and the supercomputer.”

    2
    A technician works on the TEAM 0.5 microscope. The microscope has been upgraded with a superfast detector called the 4D Camera that can capture atomic-scale images in millionths-of-a-second increments. (Credit: Thor Swift/Berkeley Lab)

    Camera data is transferred over about 100 fiber-optic connections into a high-speed Ethernet link that is about 1,000 times faster than the average home network, said Ian Johnson, a staff scientist in Berkeley Lab’s Engineering Division. The network connects the Foundry to the Cori supercomputer at NERSC.

    Berkeley Lab’s Energy Sciences Network (ESnet), which connects research centers with high-speed data networks, participated in the effort.

    Ercius said, “The supercomputer will analyze the data in about 20 seconds in order to provide rapid feedback to the scientists at the microscope to tell if the experiment was successful or not.”

    Jim Ciston, another Molecular Foundry staff scientist, said, “We’ll actually capture every electron that comes through the sample as it’s scattered. Through this really large data set we’ll be able to perform ‘virtual’ experiments on the sample – we won’t have to go back and take new data from different imaging conditions.”

    The work on the new detector and its supporting data systems should benefit other facilities that produce high volumes of data, such as the Advanced Light Source and its planned upgrade, and the LCLS-II project at SLAC National Accelerator Laboratory, Ciston noted.

    LBNL Advanced Light Source

    SLAC LCLS-II

    The Advanced Light Source, ESnet, Molecular Foundry, and NERSC are DOE Office of Science User Facilities.

    The development of the 4D Camera was supported by the Accelerator and Detector Research Program of the Department of Energy’s Office of Basic Energy Sciences, and work at the Molecular Foundry was supported by the DOE’s Office of Basic Energy Sciences.

    3
    This computer chip is a component in a superfast detector called the 4D Camera. The detector is an upgrade for a powerful electron microscope at Berkeley Lab’s Molecular Foundry. (Credit: Marilyn Chung/Berkeley Lab)

    See the full article here.


    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

    DOE Seal

     
  • richardmitnick 10:13 am on January 8, 2019 Permalink | Reply
    Tags: , , Electron Microscopy, , , , ,   

    From SLAC National Accelerator Lab: “Study shows single atoms can make more efficient catalysts” 

    From SLAC National Accelerator Lab

    January 7, 2019
    Glennda Chui

    1
    Scientists used a combination of four techniques, represented here by four incoming beams, to reveal in unprecedented detail how a single atom of iridium catalyzes a chemical reaction. (Greg Stewart/SLAC National Accelerator Laboratory)

    Detailed observations of iridium atoms at work could help make catalysts that drive chemical reactions smaller, cheaper and more efficient.

    Catalysts are chemical matchmakers: They bring other chemicals close together, increasing the chance that they’ll react with each other and produce something people want, like fuel or fertilizer.

    Since some of the best catalyst materials are also quite expensive, like the platinum in a car’s catalytic converter, scientists have been looking for ways to shrink the amount they have to use.

    Now scientists have their first direct, detailed look at how a single atom catalyzes a chemical reaction. The reaction is the same one that strips poisonous carbon monoxide out of car exhaust, and individual atoms of iridium did the job up to 25 times more efficiently than the iridium nanoparticles containing 50 to 100 atoms that are used today.

    The research team, led by Ayman M. Karim of Virginia Tech, reported the results in Nature Catalysis.

    “These single-atom catalysts are very much a hot topic right now,” said Simon R. Bare, a co-author of the study and distinguished staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory, where key parts of the work took place. “This gives us a new lens to look at reactions through, and new insights into how they work.”

    Karim added, “To our knowledge, this is the first paper to identify the chemical environment that makes a single atom catalytically active, directly determine how active it is compared to a nanoparticle, and show that there are very fundamental differences – entirely different mechanisms – in the way they react.”

    Is smaller really better?

    Catalysts are the backbone of the chemical industry and essential to oil refining, where they help break crude oil into gasoline and other products. Today’s catalysts often come in the form of nanoparticles attached to a surface that’s porous like a sponge – so full of tiny holes that a single gram of it, unfolded, might cover a basketball court. This creates an enormous area where millions of reactions can take place at once. When gas or liquid flows over and through the spongy surface, chemicals attach to the nanoparticles, react with each other and float away. Each catalyst is designed to promote one specific reaction over and over again.
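    The basketball-court figure is easy to sanity-check with typical numbers (our assumptions; the article gives none): a regulation court is about 28 m × 15 m, and porous catalyst supports commonly have specific surface areas of several hundred square meters per gram:

    ```python
    # Sanity check of the "one gram covers a basketball court" claim.
    # Assumed typical values (not from the article):
    court_area_m2 = 28.0 * 15.0     # regulation court, about 420 m^2
    support_m2_per_g = 400.0        # typical porous oxide support (BET surface area)

    grams_to_cover_court = court_area_m2 / support_m2_per_g   # about 1 g
    ```

    With a 400 m²/g support, roughly one gram of material does indeed unfold to a court-sized area.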

    But catalytic reactions take place only on the surfaces of nanoparticles, Bare said, “and even though they are very small particles, the expensive metal on the inside of the nanoparticle is wasted.”

    Individual atoms, on the other hand, could offer the ultimate in efficiency. Each and every atom could act as a catalyst, grabbing chemical reactants and holding them close together until they bond. You could fit a lot more of them in a given space, and not a speck of precious metal would go to waste.

    Single atoms have another advantage: Unlike clusters of atoms, which are bound to each other, single atoms are attached only to the surface, so they have more potential binding sites available to perform chemical tricks – which in this case came in very handy.

    Research on single-atom catalysts has exploded over the past few years, Karim said, but until now no one has been able to study how they function in enough detail to see all the fleeting intermediate steps along the way.

    Grabbing some help

    To get more information, the team looked at a simple reaction where single atoms of iridium split oxygen molecules in two, and the oxygen atoms then react with carbon monoxide to create carbon dioxide.

    They used four approaches­ – infrared spectroscopy, electron microscopy, theoretical calculations and X-ray spectroscopy with beams from SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) – to attack the problem from different angles, and this was crucial for getting a complete picture.

    SLAC/SSRL

    SLAC SSRL Campus

    “It’s never just one thing that gives you the full answer,” Bare said. “It’s always multiple pieces of the jigsaw puzzle coming together.”

    The team discovered that each iridium atom does, in fact, perform a chemical trick that enhances its performance. It grabs a single carbon monoxide molecule out of the passing flow of gas and holds onto it, like a person tucking a package under their arm. The formation of this bond triggers tiny shifts in the configuration of the iridium atom’s electrons that help it split oxygen, so it can react with the remaining carbon monoxide gas and convert it to carbon dioxide much more efficiently.

    More questions lie ahead: Will this same mechanism work in other catalytic reactions, allowing them to run more efficiently or at lower temperatures? How do the nature of the single-atom catalyst and the surface it sits on affect its binding with carbon monoxide and the way the reaction proceeds?

    The team plans to return to SSRL in January to continue the work.

    See the full article here.



    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 9:05 am on August 23, 2018 Permalink | Reply
    Tags: , , , Bouncing barrier, , Electron Microscopy, , , NASA Researchers Find Evidence of Planet-Building Clumps, Planetesimal formation   

    From NASA Ames: “NASA Researchers Find Evidence of Planet-Building Clumps” 

    NASA Ames Icon

    From NASA AMES

    Aug. 21, 2018
    Darryl Waller
    NASA Ames Research Center, Silicon Valley
    650-604-2675
    darryl.e.waller@nasa.gov

    Noah Michelsohn
    NASA Johnson Space Center, Houston
    281-483-5111
    noah.j.michelsohn@nasa.gov

    1
    False-color image of the Allende meteorite showing the apparent golf-ball-size clumps. Credits: NASA/J. Simon and J. Cuzzi

    NASA scientists have found the first evidence supporting a theory that golf ball-size clumps of space dust formed the building blocks of our terrestrial planets.

    A new paper from planetary scientists at the Astromaterials Research and Exploration Science Division (ARES) at NASA’s Johnson Space Center in Houston, Texas, and NASA’s Ames Research Center in Silicon Valley, California, provides evidence for an astrophysical theory called “pebble accretion” where golf ball-sized clumps of space dust came together to form tiny planets, called planetesimals, during the early stages of planetary formation.

    “This is very exciting because our research provides the first direct evidence supporting this theory,” said Justin Simon, a planetary researcher in ARES. “There have been a lot of theories about planetesimal formation, but many have been stymied by a factor called the ‘bouncing barrier.’”

    “The bouncing barrier principle stipulates that planets cannot form directly through the accumulation of small dust particles colliding in space, because each impact would knock off previously attached aggregates, stalling growth. Astrophysicists had hypothesized that once the clumps grew to the size of a golf ball, any small particle colliding with a clump would knock other material off. Yet if the colliding objects were not the size of a particle but much larger – for example, clumps of dust the size of a golf ball – they could exhibit enough gravity to hold themselves together in clusters and form larger bodies.”

    2
    Mosaic photograph of the ancient Northwest Africa 5717 ordinary chondrite with clusters of particles. Credits: NASA/J. Simon and J. Cuzzi

    The research provides evidence of a common, possibly universal, dust sticking process from studying two ancient meteorites – Allende and Northwest Africa 5717 – that formed in the pre-planetary period of the Solar System and have remained largely unaltered since that time. Scientists know through dating methods that these meteorites are older than Earth, Moon, and Mars, which means they have remained unaltered since the birth of the Solar System. The meteorites studied for this research are so old that they are often used to date the Solar System itself.

    The meteorites were analyzed using electron microscope images and high-resolution photomicrographs, which showed that particles within the meteorite slices concentrated together in three- to four-centimeter clumps. The existence of the clumps demonstrates that the meteorites themselves were produced by the clustering of golf ball-sized objects, providing strong evidence that the process was possible for other bodies as well.

    The research, titled “Particle size distributions in chondritic meteorites: Evidence for pre-planetesimal histories,” was published in the journal Earth and Planetary Science Letters in July. The publication culminated six years of research that was led by planetary scientists Simon at Johnson and Jeffrey Cuzzi at Ames.

    To dig up more about how NASA studies meteorites, visit:

    https://ares.jsc.nasa.gov/

    See the full article here.


    Ames Research Center, one of 10 NASA field Centers, is located in the heart of California’s Silicon Valley. For over 60 years, Ames has led NASA in conducting world-class research and development. With 2500 employees and an annual budget of $900 million, Ames provides NASA with advancements in:
    Entry systems: Safely delivering spacecraft to Earth & other celestial bodies
    Supercomputing: Enabling NASA’s advanced modeling and simulation
    NextGen air transportation: Transforming the way we fly
    Airborne science: Examining our own world & beyond from the sky
    Low-cost missions: Enabling high value science to low Earth orbit & the moon
    Biology & astrobiology: Understanding life on Earth — and in space
    Exoplanets: Finding worlds beyond our own
    Autonomy & robotics: Complementing humans in space
    Lunar science: Rediscovering our moon
    Human factors: Advancing human-technology interaction for NASA missions
    Wind tunnels: Testing on the ground before you take to the sky

    NASA image

     