Tagged: Electron Microscopy

  • richardmitnick 10:57 am on December 31, 2020 Permalink | Reply
    Tags: "An Existential Crisis in Neuroscience", , , DNNs are mathematical models that string together chains of simple functions that approximate real neurons., , Electron Microscopy, It’s clear now that while science deals with facts a crucial part of this noble endeavor is making sense of the facts., , ,   

    From Nautilus: “An Existential Crisis in Neuroscience” 

    From Nautilus

    December 30, 2020 [Re-issued “Maps” issue January 23, 2020.]
    Grigori Guitchounts

    A rendering of dendrites (red)—a neuron’s branching processes—and protruding spines that receive synaptic information, along with a saturated reconstruction (multicolored cylinder) from a mouse cortex. Credit: Lichtman Lab at Harvard University.

    We’re mapping the brain in amazing detail—but our brain can’t understand the picture.

    On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard’s campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard’s high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement.

    Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.

    But, as massive as my dataset sounds, it represents just a tiny chunk of a dataset taken from the whole brain. And the questions it asks—Do neurons in the visual cortex do anything when an animal can’t see? What happens when inputs to the visual cortex from other brain regions are shut off?—are small compared to the ultimate question in neuroscience: How does the brain work?

    LIVING COLOR: This electron microscopy image of a slice of mouse cortex, which shows different neurons labeled by color, is just the beginning. “We’re working on a cortical slab of a human brain, where every synapse and every connection of every nerve cell is identifiable,” says Harvard’s Jeff Lichtman. “It’s amazing.” Credit: Lichtman Lab at Harvard University.

    The nature of the scientific process is such that researchers have to pick small, pointed questions. Scientists are like diners at a restaurant: We’d love to try everything on the menu, but choices have to be made. And so we pick our field, and subfield, read up on the hundreds of previous experiments done on the subject, design and perform our own experiments, and hope the answers advance our understanding. But if we have to ask small questions, then how do we begin to understand the whole?

    Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we don’t know how different parts of the brain interact with and depend on each other. We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets.

    Some serious efforts, however, are now underway to map brains in full. One approach, called connectomics, strives to chart the entirety of the connections among neurons in a brain. In principle, a complete connectome would contain all the information necessary to provide a solid base on which to build a holistic understanding of the brain. We could see what each brain part is, how it supports the whole, and how it ought to interact with the other parts and the environment. We’d be able to place our brain in any hypothetical situation and have a good sense of how it would react.

    The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.

    Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Are humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.

    Lichtman likes to shoot first, ask questions later. The 68-year-old neuroscientist’s weapon of choice is a 61-beam electron microscope, which Lichtman’s team uses to visualize the tiniest of details in brain tissue. The way neurons are packed in a brain would make canned sardines look like they have a highly evolved sense of personal space. To make any sense of these images, and in turn, what the brain is doing, the parts of neurons have to be annotated in three dimensions, the result of which is a wiring diagram. Done at the scale of an entire brain, the effort constitutes a complete wiring diagram, or the connectome.

    To capture that diagram, Lichtman employs a machine that can only be described as a fancy deli slicer. The machine cuts pieces of brain tissue into 30-nanometer-thick sections, which it then pastes onto a tape conveyor belt. The tape goes on silicon wafers, and into Lichtman’s electron microscope, where billions of electrons blast the brain slices, generating images that reveal nanometer-scale features of neurons, their axons, dendrites, and the synapses through which they exchange information. The Technicolor images are a beautiful sight that evokes a fantastic thought: The mysteries of how brains create memories, thoughts, perceptions, feelings—consciousness itself—must be hidden in this labyrinth of neural connections.

    THE MAPMAKER: Jeff Lichtman, a leader in brain mapping, says the word “understanding” has to undergo a revolution in reference to the human brain. “There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’” Credit: Lichtman Lab at Harvard University.

    A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain. But Lichtman is not daunted. He is determined to map whole brains, exorbitant exabyte-scale storage be damned.
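
    A quick back-of-the-envelope check of those scales, in a short Python sketch (my own illustration; the byte counts are the article’s round figures in decimal units, not precise measurements):

        # Back-of-the-envelope check of the storage figures quoted above
        mouse_connectome_bytes = 2e18        # ~2 exabytes for one mouse connectome
        all_books_bytes = 100e12             # <100 terabytes for every book ever written

        print(f"{mouse_connectome_bytes / 1e9:.0e} gigabytes")       # 2e+09 -> 2 billion GB
        print(f"{100 * all_books_bytes / mouse_connectome_bytes}%")  # 0.005%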

    Lichtman’s office is a spacious place with floor-to-ceiling windows overlooking a tree-lined walkway and an old circular building that, in the days before neuroscience even existed as a field, used to house a cyclotron. He was wearing a deeply black sweater, which contrasted with his silver hair and olive skin. When I asked if a completed connectome would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.

    “I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”

    “But we understand specific aspects of the brain,” I said. “Couldn’t we put those aspects together and get a more holistic understanding?”

    “I guess I would retreat to another beachhead, which is, ‘Can we describe the brain?’ ” Lichtman said. “There are all sorts of fundamental questions about the physical nature of the brain we don’t know. But we can learn to describe them. A lot of people think ‘description’ is a pejorative in science. But that’s what the Hubble telescope does. That’s what genomics does. They describe what’s actually there. Then from that you can generate your hypotheses.”

    “Why is description an unsexy concept for neuroscientists?”

    “Biologists are often seduced by ideas that resonate with them,” Lichtman said. That is, they try to bend the world to their idea rather than the other way around. “It’s much better—easier, actually—to start with what the world is, and then make your idea conform to it,” he said. Instead of a hypothesis-testing approach, we might be better served by following a descriptive, or hypothesis-generating methodology. Otherwise we end up chasing our own tails. “In this age, the wealth of information is an enemy to the simple idea of understanding,” Lichtman said.

    “How so?” I asked.

    “Let me put it this way,” Lichtman said. “Language itself is a fundamentally linear process, where one idea leads to the next. But if the thing you’re trying to describe has a million things happening simultaneously, language is not the right tool. It’s like understanding the stock market. The best way to make money on the stock market is probably not by understanding the fundamental concepts of economy. It’s by understanding how to utilize this data to know what to buy and when to buy it. That may have nothing to do with economics but with data and how data is used.”

    “Maybe human brains aren’t equipped to understand themselves,” I offered.

    “And maybe there’s something fundamental about that idea: that no machine can have an output more sophisticated than itself,” Lichtman said. “What a car does is trivial compared to its engineering. What a human brain does is trivial compared to its engineering. Which is the great irony here. We have this false belief there’s nothing in the universe that humans can’t understand because we have infinite intelligence. But if I asked you if your dog can understand something you’d say, ‘Well, my dog’s brain is small.’ Well, your brain is only a little bigger,” he continued, chuckling. “Why, suddenly, are you able to understand everything?”

    Was Lichtman daunted by what a connectome might achieve? Did he see his efforts as Sisyphean?

    “It’s just the opposite,” he said. “I thought at this point we would be less far along. Right now, we’re working on a cortical slab of a human brain, where every synapse is identified automatically, every connection of every nerve cell is identifiable. It’s amazing. To say I understand it would be ridiculous. But it’s an extraordinary piece of data. And it’s beautiful. From a technical standpoint, you really can see how the cells are connected together. I didn’t think that was possible.”

    Lichtman stressed his work was about more than a comprehensive picture of the brain. “If you want to know the relationship between neurons and behavior, you gotta have the wiring diagram,” he said. “The same is true for pathology. There are many incurable diseases, such as schizophrenia, that don’t have a biomarker related to the brain. They’re probably related to brain wiring but we don’t know what’s wrong. We don’t have a medical model of them. We have no pathology. So in addition to fundamental questions about how the brain works and consciousness, we can answer questions like, Where did mental disorders come from? What’s wrong with these people? Why are their brains working so differently? Those are perhaps the most important questions to human beings.”

    Late one night, after a long day of trying to make sense of my data, I came across a short story by Jorge Luis Borges that seemed to capture the essence of the brain mapping problem. In the story, On Exactitude in Science, a man named Suarez Miranda wrote of an ancient empire that, through the use of science, had perfected the art of map-making. While early maps were nothing but crude caricatures of the territories they aimed to represent, new maps grew larger and larger, filling in ever more details with each edition. Over time, Borges wrote, “the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province.” Still, the people craved more detail. “In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.”

    The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?

    Humans have a critical weapon in this fight. Machine learning has been a boon to brain mapping, and the self-reinforcing relationship promises to transform the whole endeavor. Deep learning algorithms (also known as deep neural networks, or DNNs) have in the past decade allowed machines to perform cognitive tasks once thought impossible for computers—not only object recognition, but text transcription and translation, or playing games like Go or chess. DNNs are mathematical models that string together chains of simple functions that approximate real neurons. These algorithms were inspired directly by the physiology and anatomy of the mammalian cortex, but are crude approximations of real brains, based on data gathered in the 1960s. Yet they have surpassed expectations of what machines can do.
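
    To make that “chains of simple functions” idea concrete, here is a minimal sketch in plain NumPy (my own toy illustration with random weights, not the networks discussed in the article): each layer is a weighted sum followed by a rectified-linear clipping, and the layers are simply chained.

        import numpy as np

        def relu(x):
            # The "simple function" standing in for a neuron's response
            return np.maximum(0.0, x)

        def tiny_dnn(x, layers):
            # Chain the layers: each applies weights @ x + bias, then ReLU
            for W, b in layers:
                x = relu(W @ x + b)
            return x

        rng = np.random.default_rng(0)
        layers = [(rng.standard_normal((16, 8)), rng.standard_normal(16)),
                  (rng.standard_normal((4, 16)), rng.standard_normal(4))]
        print(tiny_dnn(rng.standard_normal(8), layers))   # four "output" activations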

    The secret to Lichtman’s progress with mapping the human brain is machine intelligence. Lichtman’s team, in collaboration with Google, is using deep networks to annotate the millions of images from brain slices their microscopes collect. Each scan from an electron microscope is just a set of pixels. Human eyes easily recognize the boundaries of each blob in the image (a neuron’s soma, axon, or dendrite, in addition to everything else in the brain), and with some effort can tell where a particular bit from one slice appears on the next slice. This kind of labeling and reconstruction is necessary to make sense of the vast datasets in connectomics, and has traditionally required armies of undergraduate students or citizen scientists to manually annotate all chunks. DNNs trained on image recognition are now doing the heavy lifting automatically, turning a job that took months or years into one that’s complete in a matter of hours or days. Recently, Google identified each neuron, axon, dendrite, and dendritic spine—and every synapse—in slices of the human cerebral cortex. “It’s unbelievable,” Lichtman said.
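
    The production pipeline relies on Google’s trained segmentation networks, but the bookkeeping problem itself—give every blob an identity and keep that identity consistent from one slice to the next—can be illustrated with a deliberately crude stand-in (a sketch under my own assumptions, nothing like the real connectomics code):

        import numpy as np
        from scipy import ndimage

        # Fake image stack: (slices, height, width), with sparse bright voxels
        # standing in for stained structures
        rng = np.random.default_rng(1)
        stack = rng.random((8, 64, 64)) > 0.995

        # 3D connected-component labeling: a voxel shares an ID with the voxels
        # it touches on the slices above and below -- a trivial version of the
        # "where does this bit appear on the next slice" problem described above
        labels, n_objects = ndimage.label(stack)
        print(f"{n_objects} structures traced through the stack")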

    Scientists still need to understand the relationship between those minute anatomical features and dynamical activity profiles of neurons—the patterns of electrical activity they generate—something the connectome data lacks. This is a point on which connectomics has received considerable criticism, mainly by way of example from the worm: Neuroscientists have had the complete wiring diagram of the worm C. elegans for a few decades now, but arguably do not understand the 300-neuron creature in its entirety; how its brain connections relate to its behaviors is still an active area of research.

    Still, structure and function go hand-in-hand in biology, so it’s reasonable to expect one day neuroscientists will know how specific neuronal morphologies contribute to activity profiles. It wouldn’t be a stretch to imagine a mapped brain could be kickstarted into action on a massive server somewhere, creating a simulation of something resembling a human mind. The next leap takes us into the dystopias in which we achieve immortality by preserving our minds digitally, or in which machines use our brain wiring to make super-intelligent machines that wipe humanity out. Lichtman didn’t entertain the far-out ideas in science fiction, but acknowledged that a network that would have the same wiring diagram as a human brain would be scary. “We wouldn’t understand how it was working any more than we understand how deep learning works,” he said. “Now, suddenly, we have machines that don’t need us anymore.”

    Yet a masterly deep neural network still doesn’t grant us a holistic understanding of the human brain. That point was driven home to me last year at a Computational and Systems Neuroscience conference, a meeting of the who’s-who in neuroscience, which took place outside Lisbon, Portugal. In a hotel ballroom, I listened to a talk by Arash Afraz, a 40-something neuroscientist at the National Institute of Mental Health in Bethesda, Maryland. The model neurons in DNNs are to real neurons what stick figures are to people, and the way they’re connected is equally sketchy, he suggested.

    Afraz is short, with a dark horseshoe mustache and balding dome covered partially by a thin ponytail, reminiscent of Matthew McConaughey in True Detective. As sturdy Atlantic waves crashed into the docks below, Afraz asked the audience if we remembered René Magritte’s Ceci n’est pas une pipe painting, which depicts a pipe with the title written out below it. Afraz pointed out that the model neurons in DNNs are not real neurons, and the connections among them are not real either. He displayed a classic diagram of interconnections among brain areas found through experimental work in monkeys—a jumble of boxes with names like V1, V2, LIP, MT, HC, each a different color, and black lines connecting the boxes seemingly at random and in more combinations than seems possible. In contrast to the dizzying heap of connections in real brains, DNNs typically connect different brain areas in a simple chain, from one “layer” to the next. Try explaining that to a rigorous anatomist, Afraz said, as he flashed a meme of a shocked baby orangutan cum anatomist. “I’ve tried, believe me,” he said.

    I, too, have been curious why DNNs are so simple compared to real brains. Couldn’t we improve their performance simply by making them more faithful to the architecture of a real brain? To get a better sense for this, I called Andrew Saxe, a computational neuroscientist at Oxford University. Saxe agreed that it might be informative to make our models truer to reality. “This is always the challenge in the brain sciences: We just don’t know what the important level of detail is,” he told me over Skype.

    How do we make these decisions? “These judgments are often based on intuition, and our intuitions can vary wildly,” Saxe said. “A strong intuition among many neuroscientists is that individual neurons are exquisitely complicated: They have all of these back-propagating action potentials, they have dendritic compartments that are independent, they have all these different channels there. And so a single neuron might even itself be a network. To caricature that as a rectified linear unit”—the simple mathematical model of a neuron in DNNs—“is clearly missing out on so much.”
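
    Written out, the “rectified linear unit” Saxe mentions is a single line of algebra (standard textbook notation, added here for illustration rather than taken from the article):

        y = \mathrm{ReLU}(\mathbf{w}^{\top}\mathbf{x} + b) = \max\Big(0,\ \sum_i w_i x_i + b\Big)

    Every model “neuron” just weights its inputs, adds a bias, and clips negative values to zero—a far cry from back-propagating action potentials and independent dendritic compartments.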

    As 2020 has arrived, I have thought a lot about what I have learned from Lichtman, Afraz, and Saxe and the holy grail of neuroscience: understanding the brain. I have found myself revisiting my undergrad days, when I held science up as the only method of knowing that was truly objective (I also used to think scientists would be hyper-rational, fair beings paramountly interested in the truth—so perhaps this just shows how naive I was).

    It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.

    It seems likely that Lichtman’s two exabytes of brain slices, and even my 48 terabytes of rat brain data, will not fit through any individual human mind. Or at least no human mind is going to orchestrate all this data into a panoramic picture of how the human brain works. As I sat at my office desk, watching the setting sun tint the cloudless sky a light crimson, my mind reached a chromatic, if mechanical, future. The machines we have built—the ones architected after cortical anatomy—fall short of capturing the nature of the human brain. But they have no trouble finding patterns in large datasets. Maybe one day, as they grow stronger building on more cortical anatomy, they will be able to explain those patterns back to us, solving the puzzle of the brain’s interconnections, creating a picture we understand. Out my window, the sparrows were chirping excitedly, not ready to call it a day.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 1:05 pm on August 20, 2020 Permalink | Reply
    Tags: "2D Electronics Get an Atomic Tuneup", , Electron Microscopy, , , , , , TUNING THE BAND GAP   

    From Lawrence Berkeley National Lab: “2D Electronics Get an Atomic Tuneup” 


    From Lawrence Berkeley National Lab

    August 20, 2020
    Theresa Duque
    tnduque@lbl.gov
    (510) 495-2418

    Scientists at Berkeley Lab, UC Berkeley demonstrate tunable, atomically thin semiconductors.

    Electron microscopy experiments revealed meandering stripes formed by metal atoms of rhenium and niobium in the lattice structure of a 2D transition metal dichalcogenide alloy. (Image courtesy of Amin Azizi.)

    TO TUNE THE BAND GAP, a key parameter in controlling the electrical conductivity and optical properties of semiconductors, researchers typically engineer alloys, a process in which two or more materials are combined to achieve properties that otherwise could not be achieved by a pristine material.
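
    For orientation, the usual first-order picture of alloy band-gap engineering (a standard textbook relation added for context, not a result from this study) interpolates between the two end-member compounds with a “bowing” correction:

        E_g(x) \approx x\,E_g^{A} + (1 - x)\,E_g^{B} - b\,x(1 - x)

    where x is the alloy fraction and the bowing parameter b absorbs deviations from a simple linear mix. Part of what makes the study described below notable is that composition alone is not the whole story: how the alloyed atoms are arranged also matters.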

    But engineering band gaps of conventional semiconductors via alloying has often been a guessing game, because scientists have not had a technique to directly “see” whether the alloy’s atoms are arranged in a specific pattern, or randomly dispersed.

    Now, as reported in Physical Review Letters, a research team led by Alex Zettl and Marvin Cohen – senior faculty scientists in the Materials Sciences Division at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), and professors of physics at UC Berkeley – has demonstrated a new technique that could engineer the band gap needed to improve the performance of semiconductors for next-generation electronics such as optoelectronics, thermoelectrics, and sensors.

    For the current study, the researchers examined monolayer and multilayer samples of a 2D transition metal dichalcogenide (TMD) material made of the alloy rhenium niobium disulfide.

    Electron microscopy experiments revealed meandering stripes formed by metal atoms of rhenium and niobium in the lattice structure of the 2D TMD alloy.

    A statistical analysis confirmed what the research team had suspected – that metal atoms in the 2D TMD alloy prefer to be adjacent to the other metal atoms, “which is in stark contrast to the random structure of other TMD alloys of the same class,” said lead author Amin Azizi, a postdoctoral researcher in the Zettl lab at UC Berkeley.

    Calculations performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) by Mehmet Dogan, a postdoctoral researcher in the Cohen lab at UC Berkeley, demonstrated that such atomic ordering can modify the material’s band gap.

    NERSC at LBNL

    NERSC Cray Cori II supercomputer, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC Hopper Cray XE6 supercomputer, named after Grace Hopper, one of the first programmers of the Harvard Mark I computer

    NERSC Cray XC30 Edison supercomputer

    NERSC GPFS for Life Sciences


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF computer cluster in 2003.

    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Future:

    Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

    NERSC is a DOE Office of Science User Facility.

    Optical spectroscopy measurements performed at Berkeley Lab’s Advanced Light Source revealed that the band gap of the 2D TMD alloy can be additionally tuned by adjusting the number of layers in the material.

    LBNL ALS

    Also, the band gap of the monolayer alloy is similar to that of silicon – which is “just right” for many electronic and optical applications, Azizi said. And the 2D TMD alloy has the added benefits of being flexible and transparent.

    The researchers next plan to explore the sensing and optoelectronic properties of new devices based on the 2D TMD alloy.

    Co-authors with Azizi, Cohen, and Zettl include Jeffrey D. Cain, Mehmet Dogan, Rahmatollah Eskandari, Emily G. Glazer, and Xuanze Yu.

    The Advanced Light Source and NERSC are DOE Office of Science user facilities co-located at Berkeley Lab.

    This work was supported by the DOE Office of Science. Additional funding was provided by the National Science Foundation.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    LBNL campus

    LBNL Molecular Foundry

    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

     
  • richardmitnick 7:14 am on July 13, 2020 Permalink | Reply
    Tags: (DMSE)-Department of Materials Science and Engineering, Electron Microscopy, Frances Ross, MIT.nano facility

    From MIT News: “A wizard of ultrasharp imaging” Frances Ross 

    MIT News

    From MIT News

    July 12, 2020
    David L. Chandler

    To oversee its new cutting-edge electron microscopy systems, MIT sought out Frances Ross’ industry-honed expertise.

    “I’m hoping that MIT becomes a center for electron microscopy,” professor Frances Ross says. “There is nothing that exists with the capabilities that we are aiming for here.” Photo: Jared Charney

    A specially designed transmission electron microscope in MIT Materials Research Laboratory’s newly renovated Electron Microscopy (EM) Shared Facility in Building 13. Photo: Denis Paiste, Materials Research Laboratory.

    Though Frances Ross and her sister Caroline Ross both ended up on the faculty of MIT’s Department of Materials Science and Engineering, they got there by quite different pathways. While Caroline followed a more traditional academic route and has spent most of her career at MIT, Frances Ross spent most of her professional life working in the industrial sector, as a microscopy specialist at IBM.

    IBM Research Ultra High Vacuum-Transmission Electron Microscope Lab In 360.

    It wasn’t until 2018 that she arrived at MIT to oversee the new state-of-the-art electron microscope systems being installed in the new MIT.nano facility.

    Frances, who bears a strong family resemblance to her sister, says “it’s confused a few people, if they don’t know there are two of us.”

    The sisters grew up in London in a strongly science- and materials-oriented family. Their father, who worked first as a scientist and then as a lawyer, is currently working on his third PhD degree, in classics. Their mother, a gemologist, specializes in precisely matching diamonds, and oversees certification testing for the profession.

    After earning her doctorate at Cambridge University in materials science, specializing in electron microscopy, Frances Ross went on to do a postdoc at Bell Labs in New Jersey, and then to the National Center for Electron Microscopy at the University of California at Berkeley. From there she continued her work in electron microscopy at IBM in Yorktown Heights, New York, where she spent 20 years working on development and application of electron microscope technology to studying crystal growth.

    When MIT built its new cutting-edge nanotechnology fabrication and analysis facility, MIT.nano, it was clear that state-of-the-art microscope technology would need to be a key feature of the new center. That’s when Ross was hired as a professor, along with Professor Jim LeBeau and Research Scientist Rami Dana, who had an academic and industrial research background, to oversee the creation, development, and application of those microscopes for the Department of Materials Science and Engineering (DMSE) and the wider MIT community.

    “Currently, our students have to go to other places to do high-performance microscopy, so they might go to Harvard, or one of the national labs,” says Ross, who is the Ellen Swallow Richards Professor in Materials Science and Engineering. “Very many advances in the instrumentation have come together over the last few years, so that if your equipment is a little older, it’s actually a big disadvantage in electron microscopy. This is an area where MIT had not invested for a little while, and therefore, once they made that decision, the jump is going to be very significant. We’re going to have a state-of-the-art imaging capability.”

    There will be two major electron microscope systems for materials science, which are gradually taking shape inside the vibration-isolated basement level of MIT.nano, alongside two others already installed that are specialized for biomedical imaging.

    One of these will be an advanced version of a standard electron microscope, she says, that will have a unique combination of features. “There is nothing that exists with the capabilities that we are aiming for here.”

    The most important of these, she says, is the quality of the vacuum inside the microscope: “In most of our experiments, we want to start with a surface that’s atomically clean.” For example, “we could start with atomically clean silicon, and then add some germanium. How do the germanium atoms add onto the silicon surface? That’s a very important question for microelectronics. But if the sample is in an environment that’s not well-controlled, then the results you get will depend on how dirty the vacuum is. Contamination may affect the process, and you can’t be sure that what you’re seeing is what happens in real life.” Ross is working with the manufacturers to reach exceptional levels of cleanliness in the vacuum of the electron microscope system being developed now.

    But ultra-high-quality vacuum is just one of its attributes. “We combine the good vacuum with capabilities to heat the sample, and flow gases, and record images at high speed,” Ross says. “Perhaps most importantly for a lot of our experiments, we use lower-energy electrons to do the imaging, because for many interesting materials like 2D materials, such as graphene, boron nitride, and related structures, the high-energy electrons that are normally used will damage the sample.”

    Putting that all together, she says, “is a unique instrument that will give us real insights into surface reactions, crystal growth processes, materials transformations, catalysis, all kinds of reactions involving nanostructure formation and chemistry on the surfaces of 2D materials.”

    Other instruments and capabilities are also being added to MIT’s microscopy portfolio. A new scanning transmission electron microscope is already installed in MIT.nano and is providing high-resolution structural and chemical analysis of samples for several projects at MIT. Another new capability is a special sample holder that allows researchers to make movies of unfolding processes in water or other liquids in the microscope. This allows detailed monitoring, at up to 100 frames per second, of a variety of phenomena, such as solution-phase growth, unfolding chemical reactions, or electrochemical processes such as battery charging and discharging. Making movies of processes taking place in water, she says, “is something of a new field for electron microscopy.”

    Ross already has set up an ultra-high vacuum electron microscope in DMSE but without the resolution and low-voltage operation of the new instrument. And finally, an ultra-high vacuum scanning tunneling microscope has just started to produce images and will measure current flow through nanoscale materials.

    In their free time, Ross and her husband Brian enjoy sailing, mostly off the coast of Maine, with their two children, Kathryn and Eric. As a hobby she collects samples of beach sand. “I have a thousand different kinds of sand from various places, and a lot of them from Massachusetts,” she says. “Everywhere I go, that’s my souvenir.”

    But with her intense focus on developing this new world-class microscopy facility, there’s little time for anything else these days. Her aim is to ensure that it’s the best facility possible.

    “I’m hoping that MIT becomes a center for electron microscopy,” she says. “You know, with all the interesting materials science and physics that goes on here, it matches up very well with this unique instrumentation, this high-quality combination of imaging and analysis. These unique characterization capabilities really complement the rest of the science that happens here.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 11:42 am on February 24, 2020 Permalink | Reply
    Tags: "A Simple Retrofit Transforms Ordinary Electron Microscopes Into High-Speed Atom-Scale Cameras", , Electron Microscopy,   

    From NIST: “A Simple Retrofit Transforms Ordinary Electron Microscopes Into High-Speed Atom-Scale Cameras” 


    From NIST

    February 24, 2020

    Ben P. Stein
    benjamin.stein@nist.gov
    (301) 975-2763

    Patented “beam chopper” provides cost-effective way to investigate super-fast processes important for tomorrow’s technology.

    Credit: N. Hanacek/NIST

    Researchers at the National Institute of Standards and Technology (NIST) and their collaborators have developed a way to retrofit the transmission electron microscope — a long-standing scientific workhorse for making crisp microscopic images — so that it can also create high-quality movies of super-fast processes at the atomic and molecular scale. Compatible with electron microscopes old and new, the retrofit promises to enable fresh insights into everything from microscopic machines to next-generation computer chips and biological tissue by making this moviemaking capability more widely available to laboratories everywhere.

    “We want to be able to look at things in materials science that happen really quickly,” said NIST scientist June Lau. She reports the first proof-of-concept operation of this retrofitted design with her colleagues in the journal Review of Scientific Instruments. The team designed the retrofit to be a cost-effective add-on to existing instruments. “It’s expected to be a fraction of the cost of a new electron microscope,” she said.

    A nearly 100-year-old invention, the electron microscope remains an essential tool in many scientific laboratories. A popular version is known as the transmission electron microscope (TEM), which fires electrons through a target sample to produce an image. Modern versions of the microscope can magnify objects by as much as 50 million times. Electron microscopes have helped to determine the structure of viruses, test the operation of computer circuits, and reveal the effectiveness of new drugs.

    “Electron microscopes can look at very tiny things on the atomic scale,” Lau said. “They are great. But historically, they look at things that are fixed in time. They’re not good at viewing moving targets,” she said.

    In the last 15 years, laser-assisted electron microscopes made videos possible, but such systems have been complex and expensive. While these setups can capture events that last from nanoseconds (billionths of a second) to femtoseconds (quadrillionths of a second), a laboratory must often buy a newer microscope to accommodate this capability as well as a specialized laser, with a total investment that can run into the millions of dollars. A lab also needs in-house laser-physics expertise to help set up and operate such a system.

    “Frankly, not everyone has that capacity,” Lau said.

    In contrast, the retrofit enables TEMs of any age to make high-quality movies on the scale of picoseconds (trillionths of a second) by using a relatively simple “beam chopper.” In principle, the beam chopper can be used in any manufacturer’s TEM. To install it, NIST researchers open the microscope column directly under the electron source, insert the beam chopper and close up the microscope again. Lau and her colleagues have successfully retrofitted three TEMs of different capabilities and vintage.

    Like a stroboscope, this beam chopper releases precisely timed pulses of electrons that can capture frames of important repeating or cyclic processes.

    “Imagine a Ferris wheel, which moves in a cyclical and repeatable way,” Lau said. “If we’re recording it with a pinhole camera, it will look blurry. But we want to see individual cars. I can put a shutter in front of the pinhole camera so that the shutter speed matches the movement of the wheel. We can time the shutter to open whenever a designated car goes to the top. In this way I can make a stack of images that shows each car at the top of the Ferris wheel,” she said.

    Like the light shutter, the beam chopper interrupts a continuous electron beam. But unlike the shutter, which has an aperture that opens and closes, this beam aperture stays open all the time, eliminating the need for a complex mechanical part.

    Instead, the beam chopper generates a radio frequency (RF) electromagnetic wave in the direction of the electron beam. The wave causes the traveling electrons to behave “like corks bobbing up and down on the surface of a water wave,” Lau said.

    Riding this wave, the electrons follow an undulating path as they approach the aperture. Most electrons are blocked except for the ones that are perfectly aligned with the aperture. The frequency of the RF wave is tunable, so that electrons hit the sample anywhere from 40 million to 12 billion times per second. As a result, researchers can capture important processes in the sample at time intervals from about a nanosecond to 10 picoseconds.
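
    Turning those repetition rates into the spacing between successive electron pulses is simple arithmetic (a sketch for illustration; note this is the pulse-to-pulse interval, which is distinct from the pulse duration that sets the quoted temporal resolution):

        # Interval between pulses = 1 / repetition rate
        for rate_hz in (40e6, 12e9):
            period_s = 1.0 / rate_hz
            print(f"{rate_hz:.0e} pulses/s -> {period_s * 1e12:,.0f} ps between pulses")
        # 4e+07 pulses/s -> 25,000 ps (25 ns); 1e+10 pulses/s -> 83 ps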

    In this way, the NIST-retrofitted microscope can capture atom-scale details of the back-and-forth movements in tiny machines such as microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS). It can potentially study the regularly repeating signals in antennas used for high-speed communications and probe the movement of electric currents in next-generation computer processors.

    In one demo, the researchers wanted to prove that a retrofitted microscope functioned as it did before the retrofit. They imaged gold nanoparticles in both the traditional “continuous” mode and the pulsed beam mode. The images in the pulsed mode had comparable clarity and resolution to the still images.

    “We designed it so it should be the same,” Lau said.

    A transmission electron microscope (TEM) image of gold (Au) nanoparticles magnified 200,000 times with a continuous electron beam (left) and a pulsed beam (right). The scale is 5 nanometers (nm).

    The beam chopper can also do double duty, pumping RF energy into the material sample and then taking pictures of the results. The researchers demonstrated this ability by injecting microwaves (a form of radio wave) into a metallic, comb-shaped MEMS device. The microwaves create electric fields within the MEMS device and cause the incoming pulses of electrons to deflect. These electron deflections enable researchers to build movies of the microwaves propagating through the MEMS comb.

    Lau and her colleagues hope their invention can soon make new scientific discoveries. For example, it could investigate the behavior of quickly changing magnetic fields in molecular-scale memory devices that promise to store more information than before.

    The researchers spent six years inventing and developing their beam chopper and have received several patents and an R&D 100 Award for their work. Collaborators on the work included researchers at Brookhaven National Laboratory in Upton, New York, and Euclid Techlabs in Bolingbrook, Illinois.

    One of the things that makes Lau most proud is that their design can breathe new life into any TEM, including the 25-year-old unit that performed the latest demonstration. The design gives labs everywhere the potential to use their microscopes to capture important fast-moving processes in tomorrow’s materials.

    “Democratizing science was the whole motivation,” Lau said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    NIST Campus, Gaithersburg, MD, USA

    NIST Mission, Vision, Core Competencies, and Core Values

    NIST’s mission

    To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
    NIST’s vision

    NIST will be the world’s leader in creating critical measurement solutions and promoting equitable standards. Our efforts stimulate innovation, foster industrial competitiveness, and improve the quality of life.
    NIST’s core competencies

    Measurement science
    Rigorous traceability
    Development and use of standards

    NIST’s core values

    NIST is an organization with strong values, reflected both in our history and our current work. NIST leadership and staff will uphold these values to ensure a high performing environment that is safe and respectful of all.

    Perseverance: We take the long view, planning the future with scientific knowledge and imagination to ensure continued impact and relevance for our stakeholders.
    Integrity: We are ethical, honest, independent, and provide an objective perspective.
    Inclusivity: We work collaboratively to harness the diversity of people and ideas, both inside and outside of NIST, to attain the best solutions to multidisciplinary challenges.
    Excellence: We apply rigor and critical thinking to achieve world-class results and continuous improvement in everything we do.

     
  • richardmitnick 12:37 pm on February 21, 2019 Permalink | Reply
    Tags: "Big Data at the Atomic Scale: New Detector Reaches New Frontier in Speed", A new detector that can capture atomic-scale images in millionths-of-a-second increments., , , Electron Microscopy, known as the “4D Camera” (for Dynamic Diffraction Direct Detector), , , NCEM-National Center for Electron Microscopy, The Molecular Foundry, The new detector, The Transmission Electron Aberration-corrected Microscope (TEAM 0.5) at Berkeley Lab   

    From Lawrence Berkeley National Lab: “Big Data at the Atomic Scale: New Detector Reaches New Frontier in Speed” 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    February 21, 2019
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    The Transmission Electron Aberration-corrected Microscope (TEAM 0.5) at Berkeley Lab has been upgraded with a new detector that can capture atomic-scale images in millionths-of-a-second increments. (Credit: Thor Swift/Berkeley Lab)


    This video provides an overview of the R&D effort to upgrade an electron microscope at Berkeley Lab’s Molecular Foundry with a superfast detector, the 4D Camera. The detector, which is linked to a supercomputer at Berkeley Lab via a high-speed data connection, can capture more images at a faster rate, revealing atomic-scale details across much larger areas than was possible before. (Credit: Marilyn Chung/Berkeley Lab)

    Advances in electron microscopy – using electrons as imaging tools to see things well beyond the reach of conventional microscopes that use light – have opened up a new window into the nanoscale world and brought a wide range of samples into focus as never before.

    Electron microscopy experiments can only use a fraction of the possible information generated as the microscope’s electron beam interacts with samples. Now, a team at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has designed a new kind of electron detector that captures all of the information in these interactions.

    This new tool, a superfast detector installed Feb. 12 at Berkeley Lab’s Molecular Foundry, a nanoscale science user facility, captures more images at a faster rate, revealing atomic-scale details across much larger areas than was possible before. The Molecular Foundry and its world-class electron microscopes in the National Center for Electron Microscopy (NCEM) provide access to researchers from around the world.

    Faster imaging can also reveal important changes that samples are undergoing and provide movies vs. isolated snapshots. It could, for example, help scientists to better explore working battery and microchip components at the atomic scale before the onset of damage.

    The detector, which has a special direct connection to the Cori supercomputer at the Lab’s National Energy Research Scientific Computing Center (NERSC), will enable scientists to record atomic-scale images with timing measured in microseconds, or millionths of a second – 100 times faster than possible with existing detectors.

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    “It is the fastest electron detector ever made,” said Andrew Minor, NCEM facility director at the Molecular Foundry.

    “It opens up a new time regime to explore with high-resolution microscopy. No one has ever taken continuous movies at this time resolution” using electron imaging, he said. “What happens there? There are all kinds of dynamics that might happen. We just don’t know because we’ve never been able to look at them before.” The new movies could reveal tiny deformations and movements in materials, for example, and show chemistry in action.

    The development of the new detector, known as the “4D Camera” (for Dynamic Diffraction Direct Detector), is the latest in a string of pioneering innovations in electron microscopy, atomic-scale imaging, and high-speed data transfer and computing at Berkeley Lab that span several decades.

    “Our group has been working for some time on making better detectors for microscopy,” said Peter Denes, a Berkeley Lab senior scientist and a longtime pioneer in the development of electron microscopy tools.

    “You get a whole scattering pattern instead of just one point, and you can go back and reanalyze the data to find things that maybe you weren’t focusing on before,” Denes said. This quickly produces a complete image of a sample by scanning across it with an electron beam and capturing information based on the electrons that scatter off the sample.

    Mary Scott, a faculty scientist at the Molecular Foundry, said that the unique geometry of the new detector allows studies of both light and heavyweight elements in materials side by side. “The reason you might want to perform one of these more complicated experiments would be to measure the positions of light elements, particularly in materials that might be really sensitive to the electron beam – like lithium in a battery material – and ideally you would be able to also precisely measure the positions of heavy elements in that same material,” she said.

    The new detector has been installed on the Transmission Electron Aberration-corrected Microscope 0.5 (TEAM 0.5) at the Molecular Foundry, which set high-resolution records when it launched at NCEM a decade ago and allows visiting researchers to access single-atom resolution for some samples. The detector will generate a whopping 4 terabytes of data per minute.

    “The amount of data is equivalent to watching about 60,000 HD movies simultaneously,” said Peter Ercius, a staff scientist at the Molecular Foundry who specializes in 3D atomic-scale imaging.
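
    Those figures hang together at a back-of-the-envelope level (the per-movie bitrate below is my assumption for illustration, not a number from the article):

        tb_per_minute = 4                                  # detector output
        bits_per_second = tb_per_minute * 1e12 * 8 / 60    # ~5.3e11 bit/s
        hd_stream_bps = 8e6                                # assume ~8 Mbit/s per HD stream
        print(f"{bits_per_second:.1e} bit/s ~ {bits_per_second / hd_stream_bps:,.0f} HD streams")
        # ~67,000 simultaneous streams, the same order as the "about 60,000" quoted above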

    Brent Draney, a networking architect at Berkeley Lab’s NERSC, said that Ercius and Denes had approached NERSC to see what it would take to build a system that could handle this huge, 400-gigabit stream of data produced by the 4D Camera.

    His response: “We actually already have a system capable of doing that. What we really needed to do is to build a network between the microscope and the supercomputer.”

    A technician works on the TEAM 0.5 microscope. The microscope has been upgraded with a superfast detector called the 4D Camera that can capture atomic-scale images in millionths-of-a-second increments. (Credit: Thor Swift/Berkeley Lab)

    Camera data is transferred over about 100 fiber-optic connections into a high-speed ethernet connection that is about 1,000 times faster than the average home network, said Ian Johnson, a staff scientist in Berkeley Lab’s Engineering Division. The network connects the Foundry to the Cori supercomputer at NERSC.

    Berkeley Lab’s Energy Sciences Network (ESnet), which connects research centers with high-speed data networks, participated in the effort.

    Ercius said, “The supercomputer will analyze the data in about 20 seconds in order to provide rapid feedback to the scientists at the microscope to tell if the experiment was successful or not.”

    Jim Ciston, another Molecular Foundry staff scientist, said, “We’ll actually capture every electron that comes through the sample as it’s scattered. Through this really large data set we’ll be able to perform ‘virtual’ experiments on the sample – we won’t have to go back and take new data from different imaging conditions.”
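
    Ciston’s “virtual experiments” follow directly from the data layout: every scan position stores a full scattering pattern, so any detector geometry can be applied after the fact. A toy sketch with made-up numbers (my own illustration; real datasets from the 4D Camera are vastly larger and the production analysis code far more sophisticated):

        import numpy as np

        # 4D dataset: (scan_y, scan_x, detector_ky, detector_kx)
        rng = np.random.default_rng(2)
        data = rng.poisson(1.0, size=(32, 32, 64, 64)).astype(float)

        # "Virtual bright-field" image: integrate a small central disk of every
        # scattering pattern -- one of many images recoverable from the same
        # recorded data without returning to the microscope
        ky, kx = np.mgrid[0:64, 0:64]
        mask = (ky - 32) ** 2 + (kx - 32) ** 2 < 8 ** 2
        virtual_bf = data[:, :, mask].sum(axis=-1)   # shape (32, 32)
        print(virtual_bf.shape)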

    The work on the new detector and its supporting data systems should benefit other facilities that produce high volumes of data, such as the Advanced Light Source and its planned upgrade, and the LCLS-II project at SLAC National Accelerator Laboratory, Ciston noted.

    LBNL Advanced Light Source

    SLAC LCLS-II

    The Advanced Light Source, ESnet, Molecular Foundry, and NERSC are DOE Office of Science User Facilities.

    The development of the 4D Camera was supported by the Accelerator and Detector Research Program of the Department of Energy’s Office of Basic Energy Sciences, and work at the Molecular Foundry was supported by the DOE’s Office of Basic Energy Sciences.

    This computer chip is a component in a superfast detector called the 4D Camera. The detector is an upgrade for a powerful electron microscope at Berkeley Lab’s Molecular Foundry. (Credit: Marilyn Chung/Berkeley Lab)

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

    DOE Seal

     
  • richardmitnick 10:13 am on January 8, 2019 Permalink | Reply
    Tags: , , Electron Microscopy, , , , ,   

    From SLAC National Accelerator Lab: “Study shows single atoms can make more efficient catalysts” 

    From SLAC National Accelerator Lab

    January 7, 2019
    Glennda Chui

    1
    Scientists used a combination of four techniques, represented here by four incoming beams, to reveal in unprecedented detail how a single atom of iridium catalyzes a chemical reaction. (Greg Stewart/SLAC National Accelerator Laboratory)

    Detailed observations of iridium atoms at work could help make catalysts that drive chemical reactions smaller, cheaper and more efficient.

    Catalysts are chemical matchmakers: They bring other chemicals close together, increasing the chance that they’ll react with each other and produce something people want, like fuel or fertilizer.

    Since some of the best catalyst materials are also quite expensive, like the platinum in a car’s catalytic converter, scientists have been looking for ways to shrink the amount they have to use.

    Now scientists have their first direct, detailed look at how a single atom catalyzes a chemical reaction. The reaction is the same one that strips poisonous carbon monoxide out of car exhaust, and individual atoms of iridium did the job up to 25 times more efficiently than the iridium nanoparticles containing 50 to 100 atoms that are used today.

    The research team, led by Ayman M. Karim of Virginia Tech, reported the results in Nature Catalysis.

    “These single-atom catalysts are very much a hot topic right now,” said Simon R. Bare, a co-author of the study and distinguished staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory, where key parts of the work took place. “This gives us a new lens to look at reactions through, and new insights into how they work.”

    Karim added, “To our knowledge, this is the first paper to identify the chemical environment that makes a single atom catalytically active, directly determine how active it is compared to a nanoparticle, and show that there are very fundamental differences – entirely different mechanisms – in the way they react.”

    Is smaller really better?

    Catalysts are the backbone of the chemical industry and essential to oil refining, where they help break crude oil into gasoline and other products. Today’s catalysts often come in the form of nanoparticles attached to a surface that’s porous like a sponge – so full of tiny holes that a single gram of it, unfolded, might cover a basketball court. This creates an enormous area where millions of reactions can take place at once. When gas or liquid flows over and through the spongy surface, chemicals attach to the nanoparticles, react with each other and float away. Each catalyst is designed to promote one specific reaction over and over again.

    But catalytic reactions take place only on the surfaces of nanoparticles, Bare said, “and even though they are very small particles, the expensive metal on the inside of the nanoparticle is wasted.”

    Individual atoms, on the other hand, could offer the ultimate in efficiency. Each and every atom could act as a catalyst, grabbing chemical reactants and holding them close together until they bond. You could fit a lot more of them in a given space, and not a speck of precious metal would go to waste.
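
    One way to quantify that argument is the dispersion of a particle – the fraction of its atoms sitting on the surface. A minimal sketch using an idealized cuboctahedral cluster model (a textbook approximation, not taken from the study) shows how quickly that fraction falls as particles grow, while a single supported atom is, by definition, 100 percent “surface.”

```python
def cuboctahedral_cluster(k):
    """Atom counts for an ideal k-shell cuboctahedral cluster (textbook model)."""
    total = (10 * k**3 + 15 * k**2 + 11 * k + 3) // 3   # 13, 55, 147, 309, ...
    surface = 10 * k**2 + 2                              # 12, 42, 92, 162, ...
    return total, surface

for k in range(1, 5):
    total, surface = cuboctahedral_cluster(k)
    print(f"{total:4d}-atom cluster: {surface / total:.0%} of atoms on the surface")
# Even a 55-atom particle "hides" roughly a quarter of its metal inside,
# and the buried fraction grows steadily with particle size.
```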

    Single atoms have another advantage: Unlike clusters of atoms, which are bound to each other, single atoms are attached only to the surface, so they have more potential binding sites available to perform chemical tricks – which in this case came in very handy.

    Research on single-atom catalysts has exploded over the past few years, Karim said, but until now no one has been able to study how they function in enough detail to see all the fleeting intermediate steps along the way.

    Grabbing some help

    To get more information, the team looked at a simple reaction where single atoms of iridium split oxygen molecules in two, and the oxygen atoms then react with carbon monoxide to create carbon dioxide.
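
    For reference, the overall chemistry described in that paragraph can be written as two steps and a net reaction; this step-wise notation is only a schematic summary, not the detailed mechanism reported in the paper.

```latex
% Schematic summary (O_ads denotes an adsorbed oxygen atom):
\mathrm{O_2 \;\longrightarrow\; 2\,O_{ads}} \qquad \text{(the Ir atom splits the oxygen molecule)}
\mathrm{CO + O_{ads} \;\longrightarrow\; CO_2} \qquad \text{(each oxygen atom oxidizes one CO)}
\mathrm{2\,CO + O_2 \;\longrightarrow\; 2\,CO_2} \qquad \text{(net reaction)}
```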

    They used four approaches – infrared spectroscopy, electron microscopy, theoretical calculations and X-ray spectroscopy with beams from SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) – to attack the problem from different angles, and this was crucial for getting a complete picture.

    SLAC/SSRL

    SLAC SSRL Campus

    “It’s never just one thing that gives you the full answer,” Bare said. “It’s always multiple pieces of the jigsaw puzzle coming together.”

    The team discovered that each iridium atom does, in fact, perform a chemical trick that enhances its performance. It grabs a single carbon monoxide molecule out of the passing flow of gas and holds onto it, like a person tucking a package under their arm. The formation of this bond triggers tiny shifts in the configuration of the iridium atom’s electrons that help it split oxygen, so it can react with the remaining carbon monoxide gas and convert it to carbon dioxide much more efficiently.

    More questions lie ahead: Will this same mechanism work in other catalytic reactions, allowing them to run more efficiently or at lower temperatures? How do the nature of the single-atom catalyst and the surface it sits on affect its binding with carbon monoxide and the way the reaction proceeds?

    The team plans to return to SSRL in January to continue the work.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 9:05 am on August 23, 2018 Permalink | Reply
    Tags: , , , Bouncing barrier, , Electron Microscopy, , , NASA Researchers Find Evidence of Planet-Building Clumps, Planetesimal formation   

    From NASA Ames: “NASA Researchers Find Evidence of Planet-Building Clumps” 

    NASA Ames Icon

    From NASA AMES

    Aug. 21, 2018
    Darryl Waller
    NASA Ames Research Center, Silicon Valley
    650-604-2675
    darryl.e.waller@nasa.gov

    Noah Michelsohn
    NASA Johnson Space Center, Houston
    281-483-5111
    noah.j.michelsohn@nasa.gov

    1
    False-color image of the Allende meteorite showing the apparent golf ball-size clumps. Credits: NASA/J. Simon and J. Cuzzi

    NASA scientists have found the first evidence supporting a theory that golf ball-size clumps of space dust formed the building blocks of our terrestrial planets.

    A new paper from planetary scientists at the Astromaterials Research and Exploration Science Division (ARES) at NASA’s Johnson Space Center in Houston, Texas, and NASA’s Ames Research Center in Silicon Valley, California, provides evidence for an astrophysical theory called “pebble accretion” where golf ball-sized clumps of space dust came together to form tiny planets, called planetesimals, during the early stages of planetary formation.

    “This is very exciting because our research provides the first direct evidence supporting this theory,” said Justin Simon, a planetary researcher in ARES. “There have been a lot of theories about planetesimal formation, but many have been stymied by a factor called the ‘bouncing barrier.’”

    “The bouncing barrier principle stipulates that planets cannot form directly through the accumulation of small dust particles colliding in space, because each impact would knock off previously attached aggregates, stalling growth. Astrophysicists had hypothesized that once the clumps grew to the size of a golf ball, any small particle colliding with the clump would knock other material off. Yet if the colliding objects were not the size of a particle but much larger – for example, clumps of dust the size of a golf ball – they could exhibit enough gravity to hold themselves together in clusters to form larger bodies.”

    2
    Mosaic photograph of the ancient Northwest Africa 5717 ordinary chondrite with clusters of particles. Credits: NASA/J. Simon and J. Cuzzi

    The research provides evidence of a common, possibly universal, dust sticking process from studying two ancient meteorites – Allende and Northwest Africa 5717 – that formed in the pre-planetary period of the Solar System and have remained largely unaltered since that time. Scientists know through dating methods that these meteorites are older than Earth, Moon, and Mars, which means they have remained unaltered since the birth of the Solar System. The meteorites studied for this research are so old that they are often used to date the Solar System itself.

    The meteorites were analyzed using electron microscope images and high-resolution photomicrographs that showed particles within the meteorite slices appeared to concentrate together in three- to four-centimeter clumps. The existence of the clumps demonstrates that the meteorites themselves were produced by the clustering of golf ball-sized objects, providing strong evidence that the process was possible for other bodies as well.

    The research, titled “Particle size distributions in chondritic meteorites: Evidence for pre-planetesimal histories,” was published in the journal Earth and Planetary Science Letters in July. The publication culminated six years of research that was led by planetary scientists Simon at Johnson and Jeffrey Cuzzi at Ames.

    To dig up more about how NASA studies meteorites, visit:

    https://ares.jsc.nasa.gov/

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ames Research Center, one of 10 NASA field Centers, is located in the heart of California’s Silicon Valley. For over 60 years, Ames has led NASA in conducting world-class research and development. With 2500 employees and an annual budget of $900 million, Ames provides NASA with advancements in:
    Entry systems: Safely delivering spacecraft to Earth & other celestial bodies
    Supercomputing: Enabling NASA’s advanced modeling and simulation
    NextGen air transportation: Transforming the way we fly
    Airborne science: Examining our own world & beyond from the sky
    Low-cost missions: Enabling high value science to low Earth orbit & the moon
    Biology & astrobiology: Understanding life on Earth — and in space
    Exoplanets: Finding worlds beyond our own
    Autonomy & robotics: Complementing humans in space
    Lunar science: Rediscovering our moon
    Human factors: Advancing human-technology interaction for NASA missions
    Wind tunnels: Testing on the ground before you take to the sky

    NASA image

     
  • richardmitnick 12:41 pm on January 20, 2018 Permalink | Reply
    Tags: , , , , , Electron Microscopy, Meteoritic stardust unlocks timing of supernova dust formation,   

    From Carnegie Institution for Science: “Meteoritic stardust unlocks timing of supernova dust formation” 

    Carnegie Institution for Science
    Carnegie Institution for Science

    January 18, 2018
    Conel Alexander
    Larry Nittler

    Dust is everywhere—not just in your attic or under your bed, but also in outer space. To astronomers, dust can be a nuisance by blocking the light of distant stars, or it can be a tool to study the history of our universe, galaxy, and Solar System.

    For example, astronomers have been trying to explain why some recently discovered distant, but young, galaxies contain massive amounts of dust. These observations indicate that type II supernovae—explosions of stars more than ten times as massive as the Sun—produce copious amounts of dust, but how and when they do so is not well understood.

    1
    An electron microscope image of a micron-sized supernova silicon carbide, SiC, stardust grain (lower right) extracted from a primitive meteorite. Such grains originated more than 4.6 billion years ago in the ashes of Type II supernovae, typified here by a Hubble Space Telescope image of the Crab Nebula, the remnant of a supernova explosion in 1054. Laboratory analysis of such tiny dust grains provides unique information on these massive stellar explosions. (1 μm is one millionth of a meter.) Image credits: NASA and Larry Nittler.

    New work from a team of Carnegie cosmochemists published in Science Advances reports analyses of carbon-rich dust grains extracted from meteorites that show that these grains formed in the outflows from one or more type II supernovae more than two years after the progenitor stars exploded. This dust was then blown into space to be eventually incorporated into new stellar systems, including, in this case, our own.

    The researchers—led by former-postdoctoral fellow Nan Liu, along with Larry Nittler, Conel Alexander, and Jianhua Wang of Carnegie’s Department of Terrestrial Magnetism—came to their conclusion not by studying supernovae with telescopes. Rather, they analyzed microscopic silicon carbide, SiC, dust grains that formed in supernovae more than 4.6 billion years ago and were trapped in meteorites as our Solar System formed from the ashes of the galaxy’s previous generations of stars.

    Some meteorites have been known for decades to contain a record of the original building blocks of the Solar System, including stardust grains that formed in prior generations of stars.

    “Because these presolar grains are literally stardust that can be studied in detail in the laboratory,” explained Nittler, “they are excellent probes of a range of astrophysical processes.”

    For this study, the team set out to investigate the timing of supernova dust formation by measuring isotopes—versions of elements with the same number of protons but different numbers of neutrons—in rare presolar silicon carbide grains with compositions indicating that they formed in type II supernovae.

    Certain isotopes enable scientists to establish a time frame for cosmic events because they are radioactive. In these instances, the number of neutrons present in the isotope makes it unstable. To gain stability, it releases energetic particles in a way that alters the number of protons and neutrons, transmuting it into a different element.

    The Carnegie team focused on a rare isotope of titanium, titanium-49, because this isotope is the product of radioactive decay of vanadium-49, which is produced during supernova explosions and transmutes into titanium-49 with a half-life of 330 days. How much titanium-49 gets incorporated into a supernova dust grain thus depends on when the grain forms after the explosion.
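
    A minimal sketch of that timing argument, assuming simple exponential decay with the 330-day half-life quoted above (the formation times below are illustrative, not the paper’s measured values):

```python
HALF_LIFE_DAYS = 330.0   # half-life of vanadium-49 quoted above

def fraction_v49_remaining(days_after_explosion):
    """Fraction of the original vanadium-49 that has not yet decayed to titanium-49."""
    return 0.5 ** (days_after_explosion / HALF_LIFE_DAYS)

for years in (0.5, 1, 2, 5):
    left = fraction_v49_remaining(years * 365.25)
    print(f"{years:>4} yr after the explosion: {left:.0%} of the 49V still live, "
          f"{1 - left:.0%} already turned into 49Ti")
# The fraction of 49V still live when a grain condenses sets how much in-situ
# titanium-49 that grain can accumulate afterwards -- which is why the measured
# titanium-49 constrains the grain's formation time.
```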

    Using a state-of-the-art mass spectrometer to measure the titanium isotopes in supernova SiC grains with much better precision than could be accomplished by previous studies, the team found that the grains must have formed at least two years after their massive parent stars exploded.

    Because presolar supernova graphite grains are isotopically similar in many ways to the SiC grains, the team also argues that the delayed formation timing applies generally to carbon-rich supernova dust, in line with some recent theoretical calculations.

    “This dust-formation process can occur continuously for years, with the dust slowly building up over time, which aligns with astronomers’ observations of varying amounts of dust surrounding the sites of stellar explosions,” added lead author Liu. “As we learn more about the sources for dust, we can gain additional knowledge about the history of the universe and how various stellar objects within it evolve.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Carnegie Institution of Washington Bldg

    Andrew Carnegie established a unique organization dedicated to scientific discovery “to encourage, in the broadest and most liberal manner, investigation, research, and discovery and the application of knowledge to the improvement of mankind…” The philosophy was and is to devote the institution’s resources to “exceptional” individuals so that they can explore the most intriguing scientific questions in an atmosphere of complete freedom. Carnegie and his trustees realized that flexibility and freedom were essential to the institution’s success and that tradition is the foundation of the institution today as it supports research in the Earth, space, and life sciences.

    6.5 meter Magellan Telescopes located at Carnegie’s Las Campanas Observatory, Chile.

     
  • richardmitnick 10:53 am on January 4, 2018 Permalink | Reply
    Tags: , , Electron Microscopy, MicroED=micro-electron diffraction, ,   

    From UCLA Newsroom: “Imaging technique could be ‘new ballgame’ in drug development” 


    UCLA Newsroom

    January 02, 2018
    Tami Dennis

    UCLA researcher Tamir Gonen explores the potential of MicroED in neurological diseases.

    1
    Courtesy of Tamir Gonen
    Tamir Gonen published his proof of principle paper [eLIFE] on MicroED in 2013.

    Biochemistry and structural biology are surprisingly — at least to the uninitiated — visual fields. This is especially true in the study of proteins. Scientists like to see the structure of proteins within cells to help them truly understand how they work, how they don’t work or how they can be modified to work as they should. That is, how they can be targeted with drugs to cure disease.

    Current methods, however, have their downsides. Many widely used techniques require large amounts of protein for analysis, even though many diseases are caused by proteins that are far from abundant or that are difficult to amass in large quantities. A new method pioneered by a professor who recently joined UCLA overcomes this challenge, offering untold potential in the exploration of disease and treatment.

    Called “MicroED,” for micro-electron diffraction, the technique uses high-powered electron microscopy to determine the structure of proteins with atomic precision, using samples that are only one-billionth the size required by other imaging methods.
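
    A short aside on why electrons can deliver that atomic precision: at typical accelerating voltages, the electron’s de Broglie wavelength is far shorter than interatomic distances. The sketch below computes the relativistic wavelength; the 100–300 kV values are typical for transmission electron microscopes and are assumed here, since the article does not specify the instrument’s voltage.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34          # Planck constant, J*s
M_E = 9.1093837015e-31      # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8            # speed of light, m/s

def electron_wavelength_pm(kilovolts):
    """Relativistic de Broglie wavelength of an electron accelerated through `kilovolts` kV."""
    V = kilovolts * 1e3
    p = math.sqrt(2 * M_E * E_CHARGE * V * (1 + E_CHARGE * V / (2 * M_E * C**2)))
    return H / p * 1e12     # picometres

for kv in (100, 200, 300):
    print(f"{kv} kV -> {electron_wavelength_pm(kv):.2f} pm")
# ~3.7, ~2.5 and ~2.0 pm, respectively -- two orders of magnitude below typical
# interatomic spacings (~100-250 pm), which is what makes atomic-resolution
# electron diffraction possible.
```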

    Tamir Gonen, a new professor of physiology and biological chemistry at the David Geffen School of Medicine at UCLA, is the developer of MicroED. For the past five years, Gonen has been spearheading the exploration of MicroED in his lab at the Janelia Research Campus of the Howard Hughes Medical Institute near Washington D.C.

    Now, in joining UCLA’s faculty, Gonen’s goal is to set up a lab centered on this new imaging tool. Already the university is using the technique in the labs of Jose Rodriguez, assistant professor of biochemistry, and David Eisenberg, a professor of chemistry and biochemistry and of biological chemistry. Both use MicroED to view the structures of proteins involved in neurodegeneration.

    “With MicroED, the way we think about disease is going to be different,” Gonen said. “Because it uses minute samples and the resolution we get is very high, problems that were beyond our reach are suddenly attainable. We can see individual atoms and even peer deeper into subatomic space and see things that have not been seen before.”

    2
    Samples used in MicroED resemble jewels, with one exception: they are made of biological material rather than precious minerals. Gonen lab.

    Gonen began working on the technique at the Howard Hughes Medical Institute Janelia Research Campus, where he was a group leader. With his move to UCLA, Gonen is now an investigator of the Howard Hughes Medical Institute and a professor of physiology and biological chemistry. Before Janelia, he was an assistant professor, then a tenured associate professor, at the University of Washington School of Medicine, as well as a Howard Hughes Medical Institute Early Career Scientist.

    Since 2013, which marked his publication of a proof of principle paper on MicroED [eLIFE], Gonen has been an advocate for the technique. As other researchers have come to understand the long-sought imaging potential MicroED offers, about 20 institutions worldwide have begun setting up MicroED labs, many with Gonen’s help.

    “I have fantastic folks working in my lab, and they are extremely collegial and want to help others get their science done,” Gonen said.

    That focus on getting “science done” — and MicroED itself — has enormous ramifications for the treatment of HIV, Parkinson’s, Alzheimer’s and other neurodegenerative diseases. “When you’re talking about drug discovery, it’s a whole new ballgame,” Gonen said.

    Gonen’s development of MicroED stemmed from his study of cell membranes, specifically the protein gateways within those membranes.

    These gateways can help cells maintain healthy homeostasis in which everything works as it should — think of it as a biological “peace.” When things go wrong, however, as happens with disease, these gateways might allow too much of one substance, such as water or sugar, in or out.

    3
    This illustration of a protein shows an example of a structure that could only be determined by the capabilities of micro-electron diffraction. Gonen lab.

    Gonen knew that targeting these gateways — or targeting their function — could lead to innovative ways to control disease and help patients. But he and other scientists needed good images to facilitate the design of better drugs.

    “More than 90 percent of all medicines sold these days target G-protein coupled receptors,” Gonen said. “When you feel pain, when you see light, when you taste, when you have any neurological sensation, all of this is occurring because of these receptors.”

    Another potential, and timely, target of this type: opioid receptors. “Opioids are an increasingly challenging problem, and not surprisingly there exists an opioid receptor,” Gonen said.

    MicroED makes it possible to image these G-protein coupled receptors, which move around a lot. This movement has made it difficult for traditional methods to capture images of them.

    MicroED’s potential goes far beyond such receptors, however.

    Larry Zipursky, a distinguished professor of biological chemistry and the leader of the neuroscience research theme at the David Geffen School of Medicine at UCLA, called the approach “revolutionary.”

    “This technique uses a rational approach to disease,” Zipursky said. “It allows researchers to assess the structure of abnormal proteins that give rise to disease; from this structural determination, they can assess the disease in a more strategic way.”

    Gonen is enthusiastic about the larger potential as well. “For a medical school, this is going to be quite a resource for pushing research forward. I’m hoping to collaborate with a lot of people.”

    In fact, he’s already working on several projects, including one that points to a way to make more efficient HIV medicines. This is in addition to the projects with Eisenberg and Rodriguez, who’s studying the conversion of proteins from a normal state into an abnormal, clumped — or aggregated — state, as seen in Alzheimer’s and other diseases of the brain and nervous system.

    “We need new and better treatments for neurodegenerative diseases; one way to achieve this is to understand how atomic-scale changes in the brain lead to disease. What do the structures look like before and after, and, quite simply, how are they so toxic?” Rodriguez said. “MicroED may finally open the door to that understanding.”

    The potential of such collaborations is what brought Gonen — and his emphasis on MicroED — to UCLA.

    “What I like about UCLA is it’s a top-rate institution — you can find some expert in every field,” Gonen said. “They’re a world leader, but they come without the ego you may find at other institutions.”

    That collaboration, he’s convinced, will lead to new approaches, new discoveries and new cures.

    Further papers:
    Structure of catalase determined by MicroED
    Taking the measure of MicroED
    Protein structure determination by MicroED

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UC LA Campus

    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.

     
  • richardmitnick 1:34 pm on December 21, 2017 Permalink | Reply
    Tags: , , Electron Microscopy, Magnetic fields of bacterial cells and magnetic nano-objects in liquid can be studied at high resolution using electron microscopy,   

    From Ames National Lab: “Ames Laboratory-led research team maps magnetic fields of bacterial cells and nano-objects for the first time” 

    Ames Laboratory

    Dec. 21, 2017
    Contacts:
    Tanya Prozorov, Division of Materials Sciences and Engineering
    tprozor@ameslab.gov
    (515) 294-3376

    Laura Millsaps, Ames Laboratory Public Affairs
    millsaps@ameslab.gov
    (515) 294-3474

    A research team led by a scientist from the U.S. Department of Energy’s Ames Laboratory has demonstrated for the first time that the magnetic fields of bacterial cells and magnetic nano-objects in liquid can be studied at high resolution using electron microscopy. This proof-of-principle capability allows first-hand observation of liquid environment phenomena, and has the potential to vastly increase knowledge in a number of scientific fields, including many areas of physics, nanotechnology, biofuels conversion, biomedical engineering, catalysis, batteries and pharmacology.

    1
    Left: Schematic of the off-axis electron holography using a fluid cell. Right: (A)
    Hologram of a magnetite nanocrystal chain released from a magnetotactic
    bacterium, and (B) corresponding magnetic induction map.

    “It is much like being able to travel to a Jurassic Park and witness dinosaurs walking around, instead of trying to guess how they walked by examining a fossilized skeleton,” said Tanya Prozorov, an associate scientist in Ames Laboratory’s Division of Materials Sciences and Engineering.

    Prozorov works with biological and bioinspired magnetic nanomaterials, and faced what initially seemed to be an insurmountable challenge of observing them in their native liquid environment. She studies a model system, magnetotactic bacteria, which form perfect nanocrystals of magnetite. In order to best learn how bacteria do this, she needed an alternative to the typical electron microscopy process of handling solid samples in vacuum, where soft matter is studied in prepared, dried, or vitrified form.

    For this work, Prozorov received DOE recognition through an Office of Science Early Career Research Program grant to use cutting-edge electron microscopy techniques with a liquid cell insert to learn how the individual magnetic nanocrystals form and grow with the help of biological molecules, which is critical for making artificial magnetic nanomaterials with useful properties.

    To study magnetism in bacteria, she applied off-axis electron holography, a specialized technique that is used for the characterization of magnetic nanostructures in the transmission electron microscope, in combination with the liquid cell.
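
    For readers curious what the hologram-to-field step looks like in practice, the sketch below is a minimal NumPy version of the standard off-axis reconstruction: isolate one sideband of the hologram’s Fourier transform, re-center it, and take the phase of the inverse transform. The synthetic hologram, carrier frequency, and mask radius are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def reconstruct_phase(hologram, sideband_center, radius):
    """Recover the phase image from an off-axis hologram via sideband extraction."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ky, kx = np.mgrid[:F.shape[0], :F.shape[1]]
    cy, cx = sideband_center
    mask = (ky - cy) ** 2 + (kx - cx) ** 2 < radius ** 2
    # Move the selected sideband to the center of Fourier space, then invert.
    recentred = np.roll(F * mask, (F.shape[0] // 2 - cy, F.shape[1] // 2 - cx), axis=(0, 1))
    wave = np.fft.ifft2(np.fft.ifftshift(recentred))
    return np.angle(wave)

# Synthetic example: a plane-wave carrier modulated by a smooth Gaussian phase bump.
N = 256
y, x = np.mgrid[:N, :N]
true_phase = 2.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 30.0 ** 2))
carrier = 0.15  # fringe frequency in cycles per pixel (illustrative)
hologram = 1 + np.cos(2 * np.pi * carrier * x + true_phase)

phase = reconstruct_phase(hologram, sideband_center=(N // 2, N // 2 + int(carrier * N)), radius=20)
# The in-plane magnetic induction is proportional to the gradient of the magnetic
# part of this phase (after separating out the electrostatic contribution), which
# is what a "magnetic induction map" such as the one pictured above displays.
```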

    “When we look at samples prepared in the conventional way, we have to make many assumptions about their properties based on their final state, but with the new technique, we can now observe these processes first-hand,” said Prozorov. “It can help us understand the dynamics of macromolecule aggregation, nanoparticle self-assembly, and the effects of electric and magnetic fields on that process.”

    “This method allows us to obtain large amounts of new information,” said Prozorov. “It is a first step, proving that the mapping of magnetic fields in liquid at the nanometer scale with electron microscopy could be done; I am eager to see the discoveries it could foster in other areas of science.”

    The work was done in collaboration with the Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, Germany.

    The research is detailed in the paper, Off-axis electron holography of bacterial cells and magnetic nanoparticles in liquid, by T. Prozorov, T.P. Almeida, A. Kovács, and R.E. Dunin-Borkowski, and published in the Journal of the Royal Society Interface.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Ames Laboratory is a government-owned, contractor-operated research facility of the U.S. Department of Energy that is run by Iowa State University.

    For more than 60 years, the Ames Laboratory has sought solutions to energy-related problems through the exploration of chemical, engineering, materials, mathematical and physical sciences. Established in the 1940s with the successful development of the most efficient process to produce high-quality uranium metal for atomic energy, the Lab now pursues a broad range of scientific priorities.

    Ames Laboratory is a U.S. Department of Energy Office of Science national laboratory operated by Iowa State University. Ames Laboratory creates innovative materials, technologies and energy solutions. We use our expertise, unique capabilities and interdisciplinary collaborations to solve global problems.

    Ames Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
    DOE Banner


     