Tagged: BNL

  • richardmitnick 12:28 pm on September 13, 2019
    Tags: BNL, CBETA (Cornell-Brookhaven “Energy-Recovery Linac” Test Accelerator), innovative particle accelerator

    From Brookhaven National Lab & Cornell University: “Innovative Accelerator Achieves Full Energy Recovery” 

    From Brookhaven National Lab

    September 10, 2019
    Karen McNulty Walsh
    kmcnulty@bnl.gov

    Collaborative Cornell University/Brookhaven Lab project known as CBETA offers promise for future accelerator applications.

    Brookhaven Lab members of the CBETA team with Laboratory Director Doon Gibbs, front row, right.

    An innovative particle accelerator designed and built by scientists from the U.S. Department of Energy’s Brookhaven National Laboratory and Cornell University has achieved a significant milestone that could greatly enhance the efficiency of future particle accelerators. After sending a particle beam for one pass through the accelerator, machine components recovered nearly all of the energy required for accelerating the particles. This recovered energy can then be used for the next stage of acceleration—to accelerate another batch of particles—thus greatly reducing the potential cost of accelerating particles to high energies.

    “No new power is required to maintain the radiofrequency (RF) fields in the RF cavities used for acceleration, because the accelerated beam deposits its energy in the RF cavities when it is decelerated,” said Brookhaven Lab accelerator physicist Dejan Trbojevic, who led the design and construction of key components for the project and serves as the Principal Investigator for Brookhaven’s contributions.

    The prototype accelerator—known as the Cornell-Brookhaven ERL Test Accelerator (CBETA), where ERL stands for “energy-recovery linac”—was built at Cornell with funding from Brookhaven Science Associates (the managing entity of Brookhaven Lab) and the New York State Energy Research and Development Authority (NYSERDA) as a research and development project in support of a possible future nuclear physics facility, the Electron-Ion Collider (EIC). The energy-recovery approach could play an essential role in generating reusable electron beams for enhancing operations at a future EIC. The electrons would reduce the spread of ion beams in the EIC, thus increasing the number of particle collisions scientists can record to make physics discoveries.

    Schematic of the CBETA energy recovery linac installed at Cornell University. Electrons produced by a direct-current (DC) photo-emitter electron source are transported by a high-power superconducting radiofrequency (SRF) injector linac into the high-current main linac cryomodule, where SRF cavities accelerate them to high energy before sending them around the racetrack-shaped accelerator. Each curved arc is made of a series of fixed field, alternating gradient (FFA) permanent magnets. After passing through the second FFA arc, the electrons re-enter the main linac cryomodule, which decelerates them and returns their energy to the RF cavities so it can be used again.

    In designing and executing this project, the Brookhaven team drew on its vast experience of improving the performance of the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science user facility for nuclear physics research.


    The accelerator technologies being developed for the EIC would push beyond the capabilities at RHIC and open up a new frontier in nuclear physics.

    The injector and main linac cryomodule.

    Tech specs

    CBETA consists of a direct-current (DC) photo-emitter electron source that creates the electron beams to be accelerated. These electrons pass through a high-power superconducting radiofrequency (SRF) injector linac that transports them into a high-current main linac cryomodule (MLC). There, six SRF cavities accelerate the electrons to high energy, sending them around the racetrack-shaped accelerator. Each curved section of the racetrack is a single arc of permanent magnets designed with fixed-field alternating-gradient (FFA) optics that allow a single vacuum tube to accommodate beams at four different energies at the same time. After passing through the second FFA arc, the electrons re-enter the MLC, which has been uniquely optimized to decelerate the particles after a single pass and return their energy to the RF cavities so it can be used again.

    When completed, CBETA will accelerate particles through four complete turns, adding energy with each pass—all of which will be recovered during deceleration after the beams have been used. This will make it the world’s first four-turn superconducting radiofrequency ERL.
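
    To see the arithmetic behind the four-turn scheme: the figures quoted in this article (6 MeV injection energy, 42 MeV after one pass, and the 150 MeV top energy mentioned by the Cornell team below) imply an energy gain of roughly 36 MeV per pass through the MLC. Here is a minimal sketch in Python, with the per-pass gain inferred from those numbers rather than taken from an official parameter list:

```python
# Sketch of the CBETA energy ladder. The 36 MeV/pass gain is inferred
# from the article's own numbers (6 MeV injection, 42 MeV after one
# pass, 150 MeV top energy); it is not an official machine parameter.

INJECTION_MEV = 6.0
GAIN_PER_PASS_MEV = 36.0   # 42 - 6, deduced from the first-pass figure
N_PASSES = 4

energy = INJECTION_MEV
ladder = []
for _ in range(N_PASSES):
    energy += GAIN_PER_PASS_MEV        # accelerating pass through the MLC
    ladder.append(energy)
print("accelerating passes (MeV):", ladder)   # [42.0, 78.0, 114.0, 150.0]

for _ in range(N_PASSES):
    energy -= GAIN_PER_PASS_MEV        # decelerating pass returns energy to the cavities
print("energy at the beam stop (MeV):", energy)  # back to 6.0, the injection energy
```

    The four accelerated energies—42, 78, 114, and 150 MeV—are exactly the beams the single FFA beam pipe must carry at the same time.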

    Many scientists and engineers at Brookhaven Lab contributed to the design and construction of the magnets and other components of the accelerator, as well as the electronic devices that monitor the positions of the accelerated and decelerated beams: Francois Meot, Scott Berg, Stephen Brooks, and Nicholaos Tsoupas drove the design of the ERL’s optics; Brookhaven physicists led by Brooks and George Mahler designed, built, measured, and applied corrections to the permanent magnets; and Rob Michnoff led the design and construction of the beam position monitor system.

    “After building and successfully testing prototypes of the magnets, we established a very successful collaboration with Cornell, led by Principal Investigator Georg Hoffstaetter, to build the ERL using the refined fixed-field magnet designs,” Trbojevic said.

    Cornell provided the DC electron injector—the world-record holder for producing high-intensity, low-emittance electron beams—which they recommissioned for the CBETA project. A team of young scientists and graduate students, including Adam Bartnik, Colwyn Gulliford, Kirsten Deitrick, and Nilanjan Banerjee, made other essential contributions: successfully commissioning the main linac cryomodule and preparing the “command scripts”—computer-driven instructions—for running and commissioning the ERL in collaboration with Berg and other Brookhaven physicists.

    Part of one of the fixed field, alternating gradient (FFA) permanent magnet arcs.

    “We hold weekly internet-based collaboration meetings and we had several visits and meetings at Cornell to ensure that the project was reaching the key milestones and that installation was proceeding according to the schedule,” said Michnoff, the Brookhaven Lab project manager.

    In May 2019, the team sent an electron beam with an energy of 42 million electron volts (MeV) through the FFA return loop for the first time. The beam made it through all 200 permanent magnets without the need for a single correction. In early June 2019, an energy scan in the FFA loop showed that the return beamline transported particles of different energies superbly, agreeing very well with the expectations for the design.

    Next, on June 13, the beam was accelerated to 42 MeV and transported through the FFA return loop back to the MLC, where the electrons were decelerated from 42 MeV back to the injection energy of 6 MeV, with the rest of their energy transferred back into the six SRF cavities of the main linac. And on June 24, the CBETA team achieved full energy recovery for the first time—demonstrating that each cavity could accelerate electrons on their second pass through the MLC without requiring additional external power.

    “Each cavity successfully regained the energy it expended in beam acceleration, eliminating or dramatically reducing the power needed to accelerate electrons,” Trbojevic said.

    “The successful demonstration of single-turn energy recovery shows that we are on the path toward creating this first-of-its-kind facility,” Trbojevic said. “The entire team is committed and excited to complete this four-turn energy-recovery linac—one of the most interesting and innovative accelerator physics projects in the world today.”

    From Cornell University

    CORNELL LABORATORY FOR ACCELERATOR-BASED SCIENCES AND EDUCATION — CLASSE


    Update on Beam Commissioning

    Cornell physicists, working with Brookhaven National Lab, are constructing a new type of particle accelerator called CBETA at Cornell’s Wilson Lab. This Energy Recovery Linac (ERL) is a test accelerator built with permanent magnets as well as electromagnets.

    How it works: CBETA will recirculate multiple beams of different energies around the accelerator at one time. The electrons will make four accelerating passes around the accelerator, building up energy as they pass through a cryomodule with superconducting RF (SRF) accelerating structures. In four more passes, they will return to the superconducting cavities that accelerated them and give their energy back to those cavities—hence it is an Energy Recovery Linac (ERL). While this method conserves energy, it also creates beams that are tightly bound and roughly 1,000 times brighter than those from other sources. For more details, please contact the Cornell PI, Prof. Georg Hoffstaetter.

    Although linear accelerators (linacs) can have superior beam densities compared to large circular accelerators, they are exceedingly wasteful because the beam is discarded after use, and they can therefore sustain only an extremely low current compared to ring accelerators. This means that the amount of data collected in one hour in a circular accelerator may take several years to collect in a linear accelerator. In an ERL, the energy is recovered, and the beam current can therefore be as large as in a circular accelerator while the beam density remains as large as in a linac.

    CBETA: the first multi-turn SRF ERL

    The linchpin of CBETA’s design is to repeat the acceleration in the same SRF cavities four times by recirculating multiple beams at four different energies. The beam with the highest energy (150 MeV) is to be used for experiments and is then decelerated in the same cavities four times to recapture its energy into the SRF cavities. Reusing the same cavities multiple times significantly reduces the construction and operational costs of the accelerator. It also means that an accelerator that would otherwise span roughly a football field can fit into a single experimental hall at Cornell’s Wilson Laboratory.

    However, beams of different energies require different amounts of bending, in the same way that it is hard for your car to navigate a sharp bend at 100 miles per hour. Traditional magnet designs are simply unable to keep different beams on the same “track.” Instead, the CBETA design relies on cutting-edge Fixed-Field Alternating Gradient (FFAG) magnets to contain all of the beams in a single 3-inch beam pipe. CBETA will be the first SRF ERL with more than one turn, and it is also the first project in the history of accelerator physics to implement this new magnet technology in an Energy Recovery Linac.
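
    To put a rough number on the “different amounts of bending”: an electron’s bending radius in a magnetic field grows with its momentum, rho ≈ p / (0.2998 B), with p in GeV/c, B in tesla, and rho in meters. A back-of-the-envelope sketch using the four CBETA pass energies and a purely hypothetical uniform 0.3 T field (the actual FFAG magnets are gradient magnets, not uniform dipoles):

```python
# rho = p / (0.2998 * B): bending radius in meters for momentum p in
# GeV/c and field B in tesla. Electrons at these energies are
# ultrarelativistic, so p ~ E. The 0.3 T field is a hypothetical value.

B_TESLA = 0.3
pass_energies_gev = [0.042, 0.078, 0.114, 0.150]   # the four CBETA beams

for e in pass_energies_gev:
    rho = e / (0.2998 * B_TESLA)
    print(f"{e * 1000:4.0f} MeV -> bending radius {rho:.2f} m")
```

    In a uniform field, the 150 MeV beam would need a radius more than three times that of the 42 MeV beam; the strong alternating-gradient focusing is what lets one pipe carry them all.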

    The task of creating and controlling eight beams of four different energies in a single accelerating structure sounds daunting. But by leveraging the pre-existing infrastructure and experience of Cornell with the power and expertise of Brookhaven National Laboratory, it will soon become a reality.

    Cornell University has prototyped technology essential for CBETA, including a DC gun and an SRF injector linac with world-record current and normalized brightness in a bunch train, a high-current CW cryomodule for 70 MeV energy gain, a high-power beam stop, and several diagnostic tools for high-current and high-brightness beams, e.g., a beamline for measuring 6-D phase-space densities, a fast wire scanner for beam profiles, and beam loss diagnostics. All of these are now being used in the construction of CBETA.

    Within the next several years, CBETA will develop into a powerhouse of accelerator physics and technology, one of the most advanced on the planet. When this prototype ERL is complete and expanded upon, it will be a critical resource to New York State and the nation, propelling high-power accelerator science and enabling applications of particle accelerators from biomedical advancement to basic physics and from computer-chip lithography to materials science, driving economic development.


    CBETA is composed of five main parts:

    -The Photoinjector that creates and prepares high-current electron beams to be injected into the Main Linac Cryomodule (MLC). The photoinjector in turn consists of a laser system that illuminates a photo-emitter cathode to produce electrons within a high-current DC electron source. These electrons traverse an emittance-matching section to produce a high-brightness beam, which is then sent through the high-power injector cryomodule (ICM) for acceleration to the ERL’s injection energy.

    -The Main Linac Cryomodule (MLC) that accelerates the beam in several passes and then decelerates it the same number of times to recapture its energy.

    -The high-power Beam Stop where the electron beam is discarded after most of its energy has been recaptured.

    -4 Spreaders and 4 combiners with electromagnets that separate the beams at 4 different energies after the MLC, match them into the FFAG return loop, and then combine them again before re-entering the MLC.

    -FFAG Magnets residing in the return loop. These provide very strong focusing so that beams with energies that differ by up to a factor of 4 can be transported simultaneously.

    The dominant funding for CBETA comes from NYSERDA (2016–2020). Important to this agency is CBETA’s emphasis on energy savings through its use of energy-recovery technology, its application of permanent magnets, and its particle acceleration by superconducting structures. Previous funding came from the NSF (2005–2015) for the development of the complete accelerator chain from the source to the main ERL accelerating module, from DOE in support of developments for the LCLS (2014–2015), and from the industrial company ASML (2015–2016) for applications in computer-chip lithography.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 8:10 am on August 30, 2019
    Tags: Advanced nanolithography, ALD (atomic layer deposition), BNL, EUVL (extreme ultraviolet lithography), PMMA (poly(methyl methacrylate))

    From Brookhaven National Lab: “Enhancing Materials for Hi-Res Patterning to Advance Microelectronics” 

    From Brookhaven National Lab

    August 27, 2019
    Ariana Manglaviti
    amanglaviti@bnl.gov
    (631) 344-2347

    Peter Genzer
    genzer@bnl.gov
    (631) 344-3174

    Scientists at Brookhaven Lab’s Center for Functional Nanomaterials [below] created “hybrid” organic-inorganic materials for transferring ultrasmall, high-aspect-ratio features into silicon for next-generation electronic devices.

    (Left to right): Ashwanth Subramanian, Ming Lu, Kim Kisslinger, Chang-Yong Nam, and Nikhil Tiwale in the Electron Microscopy Facility at Brookhaven Lab’s Center for Functional Nanomaterials. The scientists used scanning electron microscopes to image high-resolution, high-aspect-ratio silicon nanostructures they etched using a “hybrid” organic-inorganic resist.

    To increase the processing speed and reduce the power consumption of electronic devices, the microelectronics industry continues to push for smaller and smaller feature sizes. Transistors in today’s cell phones are typically 10 nanometers (nm) across—equivalent to about 50 silicon atoms wide—or smaller. Scaling transistors down below these dimensions with higher accuracy requires advanced materials for lithography—the primary technique for printing electrical circuit elements on silicon wafers to manufacture electronic chips. One challenge is developing robust “resists,” or materials that are used as templates for transferring circuit patterns into device-useful substrates such as silicon.

    Now, scientists from the Center for Functional Nanomaterials (CFN)—a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory—have used the recently developed technique of infiltration synthesis to create resists that combine the organic polymer poly(methyl methacrylate), or PMMA, with inorganic aluminum oxide. Owing to its low cost and high resolution, PMMA is the most widely used resist in electron-beam lithography (EBL), a kind of lithography in which electrons are used to create the pattern template. However, at the resist thicknesses that are necessary to generate the ultrasmall feature sizes, the patterns typically start to degrade when they are etched into silicon, failing to produce the required high aspect ratio (height to width).

    As reported in a paper published online on July 8 in the Journal of Materials Chemistry C, these “hybrid” organic-inorganic resists exhibit a high lithographic contrast and enable the patterning of high-resolution silicon nanostructures with a high aspect ratio. By changing the amount of aluminum oxide (or a different inorganic element) infiltrated into PMMA, the scientists can tune these parameters for particular applications. For example, next-generation memory devices such as flash drives will be based on a three-dimensional stacking structure to increase memory density, so an extremely high aspect ratio is desirable; on the other hand, a very high resolution is the most important characteristic for future processor chips.

    “Instead of taking an entirely new synthesis route, we used an existing resist, an inexpensive metal oxide, and common equipment found in almost every nanofabrication facility,” said first author Nikhil Tiwale, a postdoctoral research associate in the CFN Electronic Nanomaterials Group.

    Though other hybrid resists have been proposed, most of them require high electron doses (intensities), involve complex chemical synthesis methods, or have expensive proprietary compositions. Thus, these resists are not optimal for the high-rate, high-volume manufacture of next-generation electronics.

    Advanced nanolithography for high-volume manufacturing

    Conventionally, the microelectronics industry has relied upon optical lithography, whose resolution is limited by the wavelength of light to which the resist is exposed. However, EBL and other nanolithography techniques such as extreme ultraviolet lithography (EUVL) can push past this limit because of the much smaller wavelengths of electrons and high-energy ultraviolet light. The main difference between the two techniques is the exposure process.

    “In EBL, you need to write all of the area you need to expose line by line, kind of like making a sketch with a pencil,” said Tiwale. “By contrast, in EUVL, you can expose the whole area in one shot, akin to taking a photograph. From this point of view, EBL is great for research purposes, and EUVL is better suited for high-volume manufacturing. We believe that the approach we demonstrated for EBL can be directly applied to EUVL, which companies including Samsung have recently started using to develop manufacturing processes for their 7 nm technology node.”

    In this study, the scientists used an atomic layer deposition (ALD) system—a standard piece of nanofabrication equipment for depositing ultrathin films on surfaces—to combine PMMA and aluminum oxide. After placing a substrate coated with a thin film of PMMA into the ALD reaction chamber, they introduced a vapor of an aluminum precursor that diffused through tiny molecular pores inside the PMMA matrix to bind with the chemical species inside the polymer chains. Then, they introduced another precursor (such as water) that reacted with the first precursor to form aluminum oxide inside the PMMA matrix. These steps together constitute one processing cycle.
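
    As a concrete picture of that sequence, here is a sketch of the infiltration loop in Python. The precursor order (an aluminum precursor, then water) follows the article; the pulse and purge timings are hypothetical placeholders, not the published recipe:

```python
# Sketch of infiltration synthesis: each cycle exposes the PMMA film
# to an aluminum precursor vapor, purges, then exposes it to water,
# which reacts with the bound precursor to form aluminum oxide inside
# the polymer. All timings are hypothetical placeholders.

def pulse(gas, seconds):
    print(f"  pulse {gas:<12} {seconds:4.1f} s")

def purge(seconds):
    print(f"  purge          {seconds:4.1f} s")

def infiltration_cycle():
    pulse("Al precursor", 2.0)   # diffuses into the PMMA free volume
    purge(30.0)                  # remove unbound precursor
    pulse("H2O", 1.0)            # reacts to form aluminum oxide in place
    purge(30.0)

N_CYCLES = 8   # the team patterned hybrid resists with up to eight cycles
for n in range(1, N_CYCLES + 1):
    print(f"cycle {n}:")
    infiltration_cycle()
```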

    A schematic showing the process of creating the hybrid organic-inorganic resist through infiltration synthesis, patterning the resist via electron-beam lithography, and etching the pattern into silicon by bombarding the silicon surface with ions of sulfur hexafluoride (SF6).

    The team then performed EBL with hybrid resists that had up to eight processing cycles. To characterize the contrast of the resists under different electron doses, the scientists measured the change in resist thickness within the exposed areas. Surface height maps generated with an atomic force microscope (a microscope with an atomically sharp tip for tracking the topography of a surface) and optical measurements obtained through ellipsometry (a technique for determining film thickness based on the change in the polarization of light reflected from a surface) revealed that the thickness changes gradually with a low number of processing cycles but rapidly with additional cycles—i.e., a higher aluminum oxide content.
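
    For readers unfamiliar with the term, lithographic “contrast” is usually extracted from exactly this kind of thickness-versus-dose curve. A minimal sketch using the standard definition for a positive-tone resist; the dose values are hypothetical illustrations, not the paper’s measurements:

```python
import math

# Standard contrast for a positive-tone resist:
#   gamma = 1 / log10(D_clear / D_onset)
# where D_onset is the highest dose leaving the film at full thickness
# and D_clear the dose that fully clears it. Doses are hypothetical.

def contrast(d_onset, d_clear):
    return 1.0 / math.log10(d_clear / d_onset)

print(f"gradual transition: gamma = {contrast(100.0, 300.0):.1f}")   # ~2.1
print(f"abrupt transition:  gamma = {contrast(100.0, 130.0):.1f}")   # ~8.8
```

    An abrupt thickness change over a narrow dose window is exactly what a high-contrast resist produces, which is the behavior the team saw at higher infiltration-cycle counts.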

    “The contrast refers to how fast the resist changes after being exposed to the electron beam,” explained Chang-Yong Nam, a materials scientist in the CFN Electronic Nanomaterials Group, who supervised the project and conceived the idea in collaboration with Jiyoung Kim, a professor in the Department of Materials Science and Engineering at the University of Texas at Dallas. “The abrupt change in the height of the exposed regions suggests an increase in the resist contrast for higher numbers of infiltration cycles—almost six times higher than that of the original PMMA resist.”

    The scientists also used the hybrid resists to pattern periodic straight lines and “elbows” (intersecting lines) in silicon substrates, and compared the etch rate of the resists with that of the silicon substrate.

    Left: A scanning electron microscope (SEM) image of silicon elbow-shaped nanopatterns with different feature sizes (linewidths). Right: A high-magnification SEM image of high-resolution, high-aspect-ratio silicon nanostructures patterned at a pitch resolution (linewidth plus spacewidth, or space between lines) of 500 nm.

    “You want silicon to be etched faster than the resist; otherwise the resist starts to degrade,” said Nam. “We found that the etch selectivity of our hybrid resist is higher than that of costly proprietary resists (e.g., ZEP) and techniques that use an intermediate “hard” mask layer such as silicon dioxide to prevent pattern degradation, but which require additional processing steps.”

    After two processing cycles, the etch selectivity of the hybrid resist surpasses that of ZEP, a costly resist. After four cycles, the hybrid resist has a 40 percent higher etch selectivity than that of silicon dioxide (SiO2).
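
    Etch selectivity—the quantity behind this comparison—is simply the ratio of the substrate etch rate to the resist etch rate, and it sets how deep a feature a given resist thickness can transfer. A toy calculation with hypothetical rates:

```python
# Etch selectivity = (silicon etch rate) / (resist etch rate). A resist
# of thickness t can mask an etch roughly S * t deep before it is
# consumed. All rates and thicknesses here are hypothetical.

si_rate = 300.0        # nm/min, silicon etch rate
resist_rate = 20.0     # nm/min, resist etch rate
selectivity = si_rate / resist_rate

resist_thickness = 50.0   # nm
print(f"selectivity:    {selectivity:.0f}")
print(f"max etch depth: {selectivity * resist_thickness:.0f} nm")
# A 50 nm resist with selectivity 15 supports a ~750-nm-deep etch --
# the kind of high-aspect-ratio transfer discussed in the article.
```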

    Going forward, the team will study how the hybrid resists respond to EUV exposure. They have already started using soft x-rays (energy range corresponding to the wavelength of EUV light) at Brookhaven’s National Synchrotron Light Source II (NSLS-II) [below], and hope to use a dedicated EUV beamline operated by the Center for X-ray Optics at Lawrence Berkeley National Lab’s Advanced Light Source (ALS) in collaboration with industry partners.


    “The energy absorption by the organic layer of EUVL resists is very weak,” said Nam. “Adding inorganic elements, such as tin or zirconium, can make them more sensitive to EUV light. We look forward to exploring how our approach can address the resist performance requirements of EUVL.”

    Both NSLS-II and ALS are DOE User Facilities.

    The other co-authors are CFN scientists Kim Kisslinger, Ming Lu, and Aaron Stein; and Ashwanth Subramanian, a PhD student in the Department of Materials Science and Chemical Engineering at Stony Brook University and a graduate research assistant at the CFN.

    See the full article here.

     
  • richardmitnick 7:09 am on August 30, 2019
    Tags: BNL, Small-angle x-ray scattering

    From Brookhaven National Lab: “Smarter Experiments for Faster Materials Discovery” 

    From Brookhaven National Lab

    August 28, 2019
    Cara Laasch
    laasch@bnl.gov
    (631) 344-8458

    Peter Genzer
    genzer@bnl.gov
    (631) 344-3174

    Scientists created a new AI algorithm for making measurement decisions; autonomous approach could revolutionize scientific experiments.

    (From left to right) Kevin Yager, Masafumi Fukuto, and Ruipeng Li prepared the Complex Materials Scattering (CMS) beamline at NSLS-II for a measurement using the new decision-making algorithm, which was developed by Marcus Noack (not pictured).

    A team of scientists from the U.S. Department of Energy’s Brookhaven National Laboratory and Lawrence Berkeley National Laboratory designed, created, and successfully tested a new algorithm to make smarter scientific measurement decisions.

    The algorithm, a form of artificial intelligence (AI), can make autonomous decisions to define and perform the next step of an experiment. The team described the capabilities and flexibility of their new measurement tool in a paper published on August 14, 2019, in Scientific Reports.

    From Galileo and Newton to the recent discovery of gravitational waves, performing scientific experiments to understand the world around us has been the driving force of our technological advancement for hundreds of years. Improving the way researchers do their experiments can have tremendous impact on how quickly those experiments yield applicable results for new technologies.

    Over the last decades, researchers have sped up their experiments through automation and an ever-growing assortment of fast measurement tools. However, some of the most interesting and important scientific challenges—such as creating improved battery materials for energy storage or new quantum materials for new types of computers—still require very demanding and time-consuming experiments.

    By creating a new decision-making algorithm as part of a fully automated experimental setup, the interdisciplinary team from two of Brookhaven’s DOE Office of Science user facilities—the Center for Functional Nanomaterials (CFN) [below] and the National Synchrotron Light Source II (NSLS-II) [below]—and Berkeley Lab’s Center for Advanced Mathematics for Energy Research Applications (CAMERA) offers the possibility to study these challenges in a more efficient fashion.


    The challenge of complexity

    The goal of many experiments is to gain knowledge about the material that is studied, and scientists have a well-tested way to do this: They take a sample of the material and measure how it reacts to changes in its environment.

    A standard approach for scientists at user facilities like NSLS-II and CFN is to manually scan through the measurements from a given experiment to determine the next area where they might want to run an experiment. But access to these facilities’ high-end materials-characterization tools is limited, so measurement time is precious. A research team might only have a few days to measure their materials, so they need to make the most out of each measurement.

    “The key to achieving a minimum number of measurements and maximum quality of the resulting model is to go where uncertainties are large,” said Marcus Noack, a postdoctoral scholar at CAMERA and lead author of the study. “Performing measurements there will most effectively reduce the overall model uncertainty.”

    As Kevin Yager, a co-author and CFN scientist, pointed out, “The final goal is not only to take data faster but also to improve the quality of the data we collect. I think of it as experimentalists switching from micromanaging their experiment to managing at a higher level. Instead of having to decide where to measure next on the sample, the scientists can instead think about the big picture, which is ultimately what we as scientists are trying to do.”

    “This new approach is an applied example of artificial intelligence,” said co-author Masafumi Fukuto, a scientist at NSLS-II. “The decision-making algorithm is replacing the intuition of the human experimenter and can scan through the data and make smart decisions about how the experiment should proceed.”

    This animation shows a comparison between a traditional grid measurement of a sample (left) and a measurement steered by the newly developed decision-making algorithm (right). The comparison shows that the algorithm can identify the edges and inner part of the sample and focus the measurement on these regions to gain more knowledge about the sample.

    More information for less?

    In practice, before starting an experiment, the scientists define a set of goals they want to get out of the measurement. With these goals set, the algorithm looks at the previously measured data while the experiment is ongoing to determine the next measurement. In its search for the best next measurement, the algorithm creates a surrogate model of the data—an educated guess as to how the material will behave in the next possible steps—and calculates the uncertainty, basically how confident it is in its guess, for each possible next step. Based on this, it then selects the most uncertain option to measure next. The trick is that by picking the most uncertain step, the algorithm maximizes the amount of knowledge it gains from that measurement. The algorithm not only maximizes the information gain during the measurement, but also defines when to end the experiment by figuring out the moment when any additional measurements would not yield more knowledge.
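
    Here is a minimal sketch of this “measure where the uncertainty is largest” loop, using a simple Gaussian-process surrogate over one spatial coordinate. Everything in it—the kernel, length scale, noise level, stopping threshold, and the synthetic “measurement”—is an illustrative stand-in, not the team’s actual code:

```python
import numpy as np

# Toy uncertainty-driven measurement loop with a Gaussian-process
# surrogate. At each step, the posterior variance is evaluated over all
# candidate positions and the most uncertain one is measured next; the
# loop stops once the model is confident everywhere.

def rbf(a, b, length=0.4):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def measure(x):                        # stand-in for a real beamline measurement
    return np.sin(6 * x) + 0.1 * np.random.randn(*x.shape)

candidates = np.linspace(0.0, 1.0, 200)   # allowed measurement positions
xs = np.array([0.0, 1.0])                 # seed measurements at the sample edges
ys = measure(xs)

for _ in range(30):
    K = rbf(xs, xs) + 1e-2 * np.eye(len(xs))     # kernel matrix + noise term
    K_inv = np.linalg.inv(K)
    k_star = rbf(xs, candidates)                 # (n_measured, n_candidates)
    var = 1.0 - np.einsum("ij,ik,kj->j", k_star, K_inv, k_star)
    std = np.sqrt(np.clip(var, 0.0, None))       # model uncertainty everywhere
    if std.max() < 0.05:                         # confident enough: stop early
        break
    x_next = candidates[np.argmax(std)]          # most uncertain position
    xs = np.append(xs, x_next)
    ys = np.append(ys, measure(np.array([x_next]))[0])

print(f"stopped after {len(xs)} of {len(candidates)} possible measurements")
```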

    “The basic idea is, given a bunch of experiments, how can you automatically pick the next best one?” said James Sethian, director of CAMERA and a co-author of the study. “Marcus has built a world which builds an approximate surrogate model on the basis of your previous experiments and suggests the best or most appropriate experiment to try next.”

    To use the decision-making algorithm for their measurements, the team needed to automate both the measurement and the data analysis. This image shows how all of the pieces are integrated with each other to form a closed loop. The algorithm receives analyzed data from the last measurement step, adds this data to its model, calculates the best next step, and sends its decision to the beamline to execute the next measurement.

    How we got here

    To make autonomous experiments a reality, the team had to tackle three important pieces: the automation of the data collection, real-time analysis, and, of course, the decision-making algorithm.

    “This is an exciting part of this collaboration,” said Fukuto. “We all provided an essential piece for it: The CAMERA team worked on the decision-making algorithm, Kevin from CFN developed the real-time data analysis, and we at NSLS-II provided the automation for the measurements.”

    The team first implemented their decision-making algorithm at the Complex Materials Scattering (CMS) beamline at NSLS-II, which the CFN and NSLS-II operate in partnership. This instrument offers ultrabright x-rays to study the nanostructure of various materials. As the lead beamline scientist of this instrument, Fukuto had already designed the beamline with automation in mind. The beamline offers a sample-exchanging robot, automatic sample movement in various directions, and many other helpful tools to ensure fast measurements. Together with Yager’s real-time data analysis, the beamline was—by design—the perfect fit for the first “smart” experiment.

    The first “smart” experiment

    The first fully autonomous experiment the team performed was to map the perimeter of a droplet where nanoparticles segregate, using a technique called small-angle x-ray scattering at the CMS beamline. During small-angle x-ray scattering, the scientists shine bright x-rays at the sample and, depending on the atomic-to-nanoscale structure of the sample, the x-rays bounce off in different directions. The scientists then use a large detector to capture the scattered x-rays and calculate the properties of the sample at the illuminated spot. In this first experiment, the scientists compared the standard approach of measuring the sample with measurements taken when the new decision-making algorithm was calling the shots. The algorithm was able to identify the area of the droplet and focused on its edges and inner parts instead of the background.

    “After our own initial success, we wanted to apply the algorithm more, so we reached out to a few users and proposed to test our new algorithm on their scientific problems,” said Yager. “They said yes, and since then we have measured various samples. One of the most interesting ones was a study on a sample that was fabricated to contain a spectrum of different material types. So instead of making and measuring an enormous number of samples and maybe missing an interesting combination, the user made one single sample that included all possible combinations. Our algorithm was then able to explore this enormous diversity of combinations efficiently,” he said.

    What’s next?

    After the first successful experiments, the scientists plan to further improve the algorithm and therefore its value to the scientific community. One of their ideas is to make the algorithm “physics-aware”—taking advantage of anything already known about the material under study—so the method can be even more effective. Another development in progress is to use the algorithm during synthesis and processing of new materials, for example to understand and optimize processes relevant to advanced manufacturing as these materials are incorporated into real-world devices. The team is also thinking about the larger picture and wants to transfer the autonomous method to other experimental setups.

    “I think users view the beamlines of NSLS-II or microscopes of CFN just as powerful characterization tools. We are trying to change these capabilities into a powerful material discovery facility,” Fukuto said.

    This work was funded by the DOE Office of Science (ASCR and BES).

    See the full article here.

     
  • richardmitnick 8:50 am on July 26, 2019
    Tags: “Imaging the Chemical Structure of Individual Molecules Atom by Atom”, BNL, GXSM (Gnome X Scanning Microscopy), nc-AFM (noncontact atomic force microscope), Scanning probe microscopy

    From Brookhaven National Lab: “Imaging the Chemical Structure of Individual Molecules, Atom by Atom” 

    From Brookhaven National Lab

    July 22, 2019

    Ariana Manglaviti
    amanglaviti@bnl.gov

    Using atomic force microscopy images, scientists at Brookhaven Lab’s Center for Functional Nanomaterials developed a guide for discriminating atoms other than hydrogen and carbon in aromatic molecules—ring-shaped molecules with special bonding properties—to help identify contaminants found in petroleum.

    Brookhaven Lab physicist Percy Zahl with the noncontact atomic force microscope he adapted and used at the Center for Functional Nanomaterials (CFN) to image nitrogen- and sulfur-containing molecules in petroleum.

    For physicist Percy Zahl, optimizing and preparing a noncontact atomic force microscope (nc-AFM) to directly visualize the chemical structure of a single molecule is a bit like playing a virtual reality video game. The process requires navigating and manipulating the tip of the instrument over the world of atoms and molecules, eventually picking some up at the right location and in the right way. If these challenges are completed successfully, you advance to the highest level, obtaining images that precisely show where individual atoms are located and how they are chemically bonded to other atoms. But make one wrong move, and it is game over. Time to start again.

    “The nc-AFM has a very sensitive single-molecule tip that scans over a carefully prepared clean single-crystal surface at a constant height and “feels” the forces between the tip molecule and single atoms and bonds of molecules placed on this clean surface,” explained Zahl, who is part of the Interface Science and Catalysis Group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory. “It can take an hour or days to get this sensor working properly. You can’t simply press a button; fine tuning is required. But all of this effort is definitely worthwhile once you see the images appearing like molecules in a chemistry textbook.”

    A history of chemical structure determination

    Since the beginning of the field of chemistry, scientists have been able to determine the elemental composition of molecules. What has been more difficult is to figure out their chemical structures, or the particular arrangement of atoms in space. Knowing the chemical structure is important because it impacts the molecule’s reactivities and other properties.

    Kekulé claimed that the idea of the ring structure of benzene came to him in a dream of a snake eating its own tail.

    For example, Michael Faraday isolated benzene in 1825 from an oil gas residue. It was soon determined that benzene is composed of six hydrogen and six carbon atoms, but its chemical structure remained controversial until 1865, when Friedrich August Kekulé proposed a cyclic structure. However, his proposal was not based on a direct observation but rather on logical deduction from the number of isomers (compounds with the same chemical formula but different chemical structures) of benzene. The correct symmetric hexagonal structure of benzene was finally revealed through its diffraction pattern obtained by Kathleen Lonsdale via x-ray crystallography in 1929. In 1931, Erich Hückel used quantum theory to explain the origin of “aromaticity” in benzene. Aromaticity is a property of flat ring-shaped molecules in which electrons are shared between atoms. Because of this unique arrangement of electrons, aromatic compounds have a special stability (low reactivity).

    Today, x-ray crystallography continues to be a mainstream technique for determining chemical structures, along with nuclear magnetic resonance spectroscopy. However, both techniques require crystals or relatively pure samples, and chemical structure models must be deduced by analyzing the resulting diffraction patterns or spectra.

    The first-ever actual image of a chemical structure was obtained only a decade ago. In 2009, scientists at IBM Research–Zurich Lab in Switzerland used nc-AFM to resolve the atomic backbone of an individual molecule of pentacene, seeing its five fused benzene rings and even the carbon-hydrogen bonds. This breakthrough was made possible by selecting an appropriate molecule for the end of the tip—one that could come very close to the surface of pentacene without reacting with or binding to it. It also required optimized sensor readout electronics at cryogenic temperatures to measure small frequency shifts in the probe oscillation (which relates to the force) while maintaining mechanical and thermal stability through vibration damping setups, ultrahigh vacuum chambers, and low-temperature cooling systems.
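
    For the technically minded: in the standard small-amplitude approximation for frequency-modulation AFM, the measured frequency shift is proportional to the gradient of the tip-sample force, delta_f ≈ −(f0 / 2k) · dF/dz. A sketch with typical qPlus-sensor numbers (illustrative values, not those of the IBM experiment):

```python
# Small-amplitude frequency-modulation AFM:
#   delta_f = -(f0 / (2 * k)) * dF/dz
# f0: resonance frequency, k: sensor stiffness, dF/dz: tip-sample
# force gradient. Sensor numbers below are typical qPlus values,
# used purely as an illustration.

def frequency_shift(f0_hz, k_n_per_m, dFdz_n_per_m):
    return -(f0_hz / (2.0 * k_n_per_m)) * dFdz_n_per_m

f0 = 32768.0   # Hz, typical qPlus resonance frequency
k = 1800.0     # N/m, typical qPlus stiffness
for dFdz in (0.1, 1.0, 5.0):   # N/m, plausible short-range force gradients
    print(f"dF/dz = {dFdz:3.1f} N/m -> delta_f = {frequency_shift(f0, k, dFdz):7.2f} Hz")
```

    Shifts of a fraction of a hertz to tens of hertz on a ~33 kHz resonance are indeed “small frequency shifts,” which is why the readout electronics and vibration isolation matter so much.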

    “Low-temperature nc-AFM is the only method that can directly image the chemical structure of a single molecule,” said Zahl. “With nc-AFM, you can visualize the positions of individual atoms and the arrangement of chemical bonds, which affect the molecule’s reactivity.”

    However, there are currently still some requirements for molecules to be suitable for nc-AFM imaging. Molecules must be mainly planar (flat), as the scanning occurs on a surface, so the technique is not suitable for large three-dimensional (3-D) structures such as proteins. In addition, because of the slow nature of scanning, only a few hundred molecules can practically be examined per experiment. Zahl notes that this limitation could be overcome in the future through artificial intelligence, which would pave the way toward automated scanning probe microscopy.

    According to Zahl, though nc-AFM has since been applied by a few groups around the world, it is not widespread, especially in the United States.

    “The technique is still relatively new and there is a long learning curve in acquiring CO tip-based molecular structures,” said Zahl. “It takes a lot of experience in scanning probe microscopy, as well as patience.”

    A unique capability and expertise

    The nc-AFM at the CFN represents one of a few in this country. Over the past several years, Zahl has upgraded and customized the instrument, most notably with the open-source software and hardware, GXSM (for Gnome X Scanning Microscopy). Zahl has been developing GXSM for more than two decades. A real-time signal processing control system and software continuously records operating conditions and automatically adjusts the tip position as necessary to avoid unwanted collisions when the instrument is operated in an AFM-specific scanning mode to record forces over molecules. Because Zahl wrote the software himself, he can program and implement new imaging or operating modes for novel measurements and add features to help operators better explore the atomic world.

    DBT (left column) is one of the sulfur-containing compounds in petroleum; CBZ and ACR (right and middle columns, respectively) are nitrogen-containing compounds. Illustrations and ball-and-stick models of their chemical structures are shown at the top of each column (black indicates carbon atoms; yellow indicates sulfur, and blue indicates nitrogen). The simulated atomic force microscopy images (a, b, d, e, g, and h) well match the ones obtained experimentally (c, f, and i).

    For example, recently Zahl applied a custom “slicing” mode to determine the 3-D geometrical configuration in which a single molecule of dibenzothiophene (DBT)—a sulfur-containing aromatic molecule commonly found in petroleum—adsorbs on a gold surface. The DBT molecule is not entirely planar but rather tilted at an angle, so he combined a series of force images (slices) to create a topographic-like representation of the molecule’s entire structure.

    “In this mode, obstacles such as protruding atoms are automatically avoided,” said Zahl. “This capability is important, as the force measurements are ideally taken in one fixed plane, with the need to be very close to the atoms to feel the repulsive forces and ultimately to achieve detailed image contrast. When parts stick out of the molecule plane, they will likely negatively impact image quality.”
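
    One way to picture what the slicing mode produces: given a stack of constant-height force maps, record for each pixel the greatest tip height at which the force already turns repulsive; that height map is the topographic-like representation. A toy version over synthetic data (the real GXSM implementation also handles obstacle avoidance, as described above):

```python
import numpy as np

# Toy reconstruction from constant-height "slices": for each pixel,
# take the greatest tip height at which the force already exceeds a
# repulsive threshold. Protruding atoms turn repulsive while the tip
# is still high, so they appear "taller". The force stack is synthetic.

heights = np.linspace(0.6, 0.2, 9)        # nm, slice heights, far -> near
force_stack = np.random.rand(9, 64, 64)   # stand-in for 9 measured force maps
threshold = 0.9                           # "repulsive contact" level (arbitrary)

repulsive = force_stack > threshold           # (n_slices, ny, nx)
first_hit = np.argmax(repulsive, axis=0)      # first (highest) slice over threshold
topography = heights[first_hit]               # height map of the molecule
topography[~repulsive.any(axis=0)] = np.nan   # pixels that never turned repulsive

print(topography.shape)   # (64, 64) topographic-like image
```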

    This imaging of DBT was part of a collaboration with Yunlong Zhang, a physical organic chemist at ExxonMobil Research and Engineering Corporate Strategic Research in New Jersey. Zhang met Zahl at a conference two years ago and realized that the capabilities and expertise in nc-AFM at the CFN would have great potential for his research on petroleum chemistry.

    Zahl and Zhang used nc-AFM to image the chemical structure of not only DBT but also of two nitrogen-containing aromatic molecules—carbazole (CBZ) and acridine (ACR)—that are widely observed in petroleum. In analyzing the images, they developed a set of templates of common features in the ring-shaped molecules that can be used to find sulfur and nitrogen atoms and distinguish them from carbon atoms.

    Petroleum: a complex mixture

    The chemical composition of petroleum widely varies depending on where and how it formed, but in general it contains mostly carbon and hydrogen (hydrocarbons) and smaller amounts of other elements, including sulfur and nitrogen. During combustion, when the fuel is burned, these “heteroatoms” produce sulfur and nitrogen oxides, which contribute to the formation of acid rain and smog, both air pollutants that are harmful to human health and the environment. Heteroatoms can also reduce fuel stability and corrode engine components. Though refining processes exist, not all of the sulfur and nitrogen is removed. Identifying the most common structures of impure molecules containing nitrogen and sulfur atoms could lead to optimized refining processes for producing cleaner and more efficient fuels.

    “Our previous research with the IBM group at Zurich on petroleum asphaltenes and heavy oil mixtures provided the first “peek” into numerous structures in petroleum,” said Zhang. “However, more systemic studies are needed, especially on the presence of heteroatoms and their precise locations within aromatic hydrocarbon frameworks in order to broaden the application of this new technique to identify complex molecular structures in petroleum.”

    To image the atoms and bonds in DBT, CBZ, and ACR, the scientists prepared the tip of the nc-AFM with a single crystal of gold at the apex and a single molecule of carbon monoxide (CO) at the termination point (the same kind of molecule used in the original IBM experiment). The metal crystal provides an atomically clean and flat support from which the CO molecule can be picked up.

    After “functionalizing” the tip, they deposited a few of each of the molecules (a dusting amount) on a gold surface inside the nc-AFM under ultrahigh vacuum at room temperature via sublimation. During sublimation, the molecules go directly from a solid to a gas phase.

    Though the images they obtained strikingly resemble chemical structure drawings, you cannot directly tell from these images whether there is a nitrogen, sulfur, or carbon atom present at a particular site. It takes some input knowledge to deduce this information.

    “As a starting point, we imaged small well-known molecules with typical building blocks that are found in larger polycyclic aromatic hydrocarbons—in this case, in petroleum,” explained Zahl. “Our idea was to see what the basic building blocks of these chemical structures look like and use them to create a set of templates for finding them in larger unknown molecular mixtures.”

    An illustration showing how nc-AFM can distinguish sulfur- and nitrogen-containing molecules commonly found in petroleum. A tuning fork (grey arm) with a highly sensitive tip containing a single carbon monoxide molecule (black is carbon and red is oxygen) is brought very close to the surface (outlined in white), with the oxygen molecule lying flat on the surface without making contact. As the tip scans across the surface, it “feels” the forces from the bonds between atoms to generate an image of the molecule’s chemical structure. One image feature that can be used to discriminate between the different types of atoms is the relative “size” of the elements (indicated by the size of the boxes in the overlaid periodic table).

    For example, for sulfur- and nitrogen-containing molecules in petroleum, sulfur is only found in ring structures with five atoms (pentagon ring structure), while nitrogen can be present in rings with either five or six (hexagonal ring structure) atoms. In addition to this bonding geometry, the relative “size,” or atomic radius, of the elements can help distinguish them. Sulfur is relatively larger than nitrogen and carbon, and nitrogen is slightly smaller than carbon. It is this size, or “height,” that AFM is extremely sensitive to.

    “Simply speaking, the force that the AFM records in very close proximity to an atom relates to the distance and thus to the size of that atom; as the AFM scans over a molecule at a fixed elevation, bigger atoms protrude more out of the plane,” explained Zahl. “Therefore, the larger the atom in a molecule, the bigger the force that the AFM records as it gets closer to its atomic shell, and the repulsion increases dramatically. That is why in the images sulfur appears as a bright dot, while nitrogen looks a hint fainter.”
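
    The size argument can be made quantitative with any short-range pair potential. A Lennard-Jones sketch in which the atomic “sizes” are chosen only to mimic the sulfur > carbon > nitrogen ordering described above—illustrative values, not the paper’s computed forces:

```python
# Lennard-Jones force on the probe at fixed height z:
#   F(z) = (24 * eps / z) * (2 * (sigma / z)**12 - (sigma / z)**6)
# Positive F = repulsion. A larger sigma (a "bigger" atom) gives a much
# larger repulsive force at the same height. Parameters are chosen only
# to mimic the S > C > N ordering and are purely illustrative.

def lj_force(z, sigma, eps=1.0):
    s = sigma / z
    return (24.0 * eps / z) * (2.0 * s**12 - s**6)

z_scan = 0.34   # nm, hypothetical fixed scan height
for atom, sigma in (("N", 0.31), ("C", 0.32), ("S", 0.35)):
    print(f"{atom}: F = {lj_force(z_scan, sigma):8.2f} (arb. units)")
```

    At a fixed scan height, the slightly larger sulfur atom produces a repulsive force many times that of carbon, while nitrogen comes out a touch weaker—matching the bright-spot/faint-spot contrast Zahl describes.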

    Zahl and Zhang then compared their experimental images to computer-simulated ones they obtained using the mechanical probe particle simulation method. This method simulates the actual forces acting on the CO molecule on the tip end as it scans over molecules and bends in response. They also performed theoretical calculations to determine how the electrostatic potential (charge distribution) of the molecules affects the measured force and relates to their appearance in the nc-AFM images.

    “We used density functional theory to study how the forces felt by the CO probe molecule behave in the presence of the charge environment surrounding the molecules,” said Zahl. “We need to know how the electrons are distributed in order to understand the atomic force and bond contrast mechanism. These insights even allow us to assign single or double bonds between atoms by analyzing image details.”

    Going forward, Zahl will continue developing and enhancing nc-AFM imaging modes and related technologies to explore many kinds of interesting, unknown, or novel molecules in collaboration with various users. Top candidate molecules of interest include those with large magnetic moments and special spin properties for quantum applications and novel graphene-like (graphene is a one-atom-thick sheet of carbon atoms arranged in a hexagonal lattice) materials with extraordinary electronic properties.

    “The CFN has unique capabilities and expertise in nc-AFM that can be applied to a wide range of molecules,” said Zahl. “In the coming years, I believe that artificial intelligence will make a big impact on the field by helping us operate the microscope autonomously to perform the most time-consuming, tedious, and error-prone parts of experiments. With this special power, our chances of winning the “game” will be much improved.”

    See the full article here.

     
  • richardmitnick 11:35 am on July 12, 2019
    Tags: “Optimizing the Growth of Coatings on Nanowire Catalysts”, BNL

    From Brookhaven National Lab: “Optimizing the Growth of Coatings on Nanowire Catalysts” 

    From Brookhaven National Lab

    July 8, 2019
    Ariana Manglaviti
    amanglaviti@bnl.gov
    (631) 344-2347

    Peter Genzer
    genzer@bnl.gov
    (631) 344-3174

    (Sitting from front) Iradwikanari Waluyo, Mingzhao Liu, Dario Stacchiola, (standing from front) Mehmet Topsakal, Mark Hybertsen, Deyu Lu, and Eli Stavitski at the Inner-Shell Spectroscopy beamline of Brookhaven Lab’s National Synchrotron Light Source II (NSLS-II). The scientists performed x-ray absorption spectroscopy experiments at NSLS-II to characterize the chemical state of titanium dioxide (titania) coatings on zinc oxide nanowires. They chemically processed the nanowires to make the coatings—which boost the efficiency of the nanowires in catalyzing the water-splitting reaction that produces oxygen and hydrogen, a sustainable fuel—more likely to adhere. These characterization results were coupled with electron microscopy imaging and theoretical analyses to generate a model of the amorphous (noncrystal) atomic structure of titania.

    Scientists chemically treated the surface of wire-looking nanostructures made of zinc oxide to apply a uniform coating of titanium dioxide; these semiconducting nanowires could be used as high-activity catalysts for solar fuel production.

    Solar energy harvested by semiconductors—materials whose electrical resistance is in between that of regular metals and insulators—can trigger surface electrochemical reactions to generate clean and sustainable fuels such as hydrogen. Highly stable and active catalysts are needed to accelerate these reactions, especially to split water molecules into oxygen and hydrogen. Scientists have identified several strong light-absorbing semiconductors as potential catalysts; however, because of photocorrosion, many of these catalysts lose their activity for the water-splitting reaction. Light-induced corrosion, or photocorrosion, occurs when the catalyst itself undergoes chemical reactions (oxidation or reduction) via charge carriers (electrons and “holes,” or missing electrons) generated by light excitation. This degradation limits catalytic activity.

    Now, scientists from the Center for Functional Nanomaterials (CFN)—a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory—have come up with a technique for optimizing the activity of one such catalyst: 500-nanometer-long but relatively thin (40 to 50 nanometers) wire-looking nanostructures, or nanowires, made of zinc oxide (ZnO). Their technique—described in a paper published online in Nano Letters on May 3—involves chemically treating the surface of the nanowires in such a way that they can be uniformly coated with an ultrathin (two to three nanometers thick) film of titanium dioxide (titania), which acts as both a catalyst and protective layer.

    The CFN-led research is a collaboration among Brookhaven Lab’s National Synchrotron Light Source II (NSLS-II)—another DOE Office of Science User Facility—and the Computational Science Initiative (CSI); the Center for Computational Materials Science at the Naval Research Laboratory; and the Department of Materials Science and Chemical Engineering at Stony Brook University.

    “Nanowires are ideal catalyst structures because they have a large surface area for absorbing light, and ZnO is an earth-abundant material that strongly absorbs ultraviolet light and has high electron mobility,” said co-corresponding author and study lead Mingzhao Liu, a scientist in the CFN Interface Science and Catalysis Group. “However, by themselves, ZnO nanowires do not have high enough catalytic activity or stability for the water-splitting reaction. Uniformly coating them with ultrathin films of titania, another low-cost material that is chemically more stable and more active in promoting interfacial charge transfer, enhances these properties to boost reaction efficiency by 20 percent compared to pure ZnO nanowires.”

    (Background) A false-colored scanning electron microscope image of zinc oxide (ZnO) nanowires coated with titanium dioxide, or titania (TiO2). On average, the nanowires are 10 times longer than they are wide. The white-dashed inset contains a high-resolution transmission electron microscope image that distinguishes between the ZnO core and titania shell. The black-dashed inset features a structural model of the amorphous titania shell, with the red circles corresponding to oxygen atoms and the green and blue polyhedra corresponding to undercoordinated and coordinated titanium atoms, respectively.

    To “wet” the surface of the nanowires for the titania coating, the scientists combined two surface processing methods: thermal annealing and low-pressure plasma sputtering. For the thermal annealing, they heated the nanowires in an oxygen environment to remove defects and contaminants; for the plasma sputtering, they bombarded the nanowires with energetic oxygen gas ions (plasma), which ejected oxygen atoms from the ZnO surface.

    “These treatments modify the surface chemistry of the nanowires in such a way that the titania coating is more likely to adhere during atomic layer deposition,” explained Liu. “In atomic layer deposition, different chemical precursors react with a material surface in a sequential manner to build thin films with one layer of atoms at a time.”
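
    For a rough sense of the numbers, atomic layer deposition adds an approximately fixed thickness per precursor cycle, so the cycle count for a target film follows from simple division. The C++ sketch below assumes a growth rate of about 0.05 nanometers per cycle, a typical literature value for titania ALD that the article does not quote; treat it as illustrative arithmetic only.

        #include <cstdio>
        #include <initializer_list>

        // Back-of-the-envelope: cycles needed for a 2-to-3-nanometer titania
        // film. The ~0.05 nm/cycle growth rate is an assumed, typical value
        // for TiO2 ALD; the article itself does not state one.
        int main() {
            const double growth_per_cycle_nm = 0.05;  // assumption
            for (double target_nm : {2.0, 3.0})
                std::printf("%.0f nm film -> about %.0f cycles\n",
                            target_nm, target_nm / growth_per_cycle_nm);
        }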

    The scientists imaged the nanowire-shell structures with transmission electron microscopes at the CFN, shining a beam of electrons through the sample and detecting the transmitted electrons. However, because the ultrathin titania layer is not crystalline, they needed to use other methods to decipher its “amorphous” structure. They performed x-ray absorption spectroscopy experiments at two NSLS-II beamlines: Inner-Shell Spectroscopy (ISS) and In situ and Operando Soft X-ray Spectroscopy (IOS).

    “The x-ray energies at the two beamlines are different, so the x-rays interact with different electronic levels in the titanium atoms,” said co-author Eli Stavitski, ISS beamline physicist. “The complementary absorption spectra generated through these experiments confirmed the highly amorphous structure of titania, with crystalline domains limited to a few nanometers. The results also gave us information about the valence (charge) state of the titanium atoms—how many electrons are in the outermost shell surrounding the nucleus—and the coordination sphere, or the number of nearest neighboring oxygen atoms.”

    Theorists and computational scientists on the team then determined the most likely atomic structure associated with these experimental spectra. In materials with crystalline structure, the arrangement of an atom and its neighbors is the same throughout the crystal. But amorphous structures lack this uniformity or long-range order.

    “We had to figure out the correct combination of structural configurations responsible for the amorphous nature of the material,” explained co-corresponding author Deyu Lu, a scientist in the CFN Theory and Computation Group. “First, we screened an existing structural database and identified more than 300 relevant local structures using data analytics tools previously developed by former CFN postdoc Mehmet Topsakal and CSI computational scientist Shinjae Yoo. We calculated the x-ray absorption spectra for each of these structures and selected 11 representative ones as basis functions to fit our experimental results. From this analysis, we determined the percentage of titanium atoms with a particular local coordination.”
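
    The fitting step Lu describes is, at bottom, a linear least-squares decomposition: model the measured spectrum as a weighted sum of basis spectra, and read the weights as the fractions of each local coordination. Below is a minimal C++ sketch of that idea with three invented basis spectra and a synthetic measurement; the real analysis used 11 basis spectra chosen from more than 300 candidates and is far more sophisticated.

        #include <cmath>
        #include <cstdio>
        #include <utility>

        // Minimal sketch: model a measured spectrum y(E) as a weighted sum
        // of basis spectra and solve for the weights by ordinary least
        // squares (normal equations + Gauss-Jordan elimination). All
        // numbers are invented; the real analysis used 11 basis spectra.
        int main() {
            constexpr int nE = 5;  // energy points (toy grid)
            constexpr int nB = 3;  // basis spectra (toy; paper used 11)

            double B[nB][nE] = {
                {0.1, 0.4, 0.9, 0.5, 0.2},   // e.g. 4-coordinated Ti motif
                {0.2, 0.6, 0.7, 0.4, 0.1},   // e.g. 5-coordinated Ti motif
                {0.3, 0.5, 0.5, 0.6, 0.3}};  // e.g. 6-coordinated Ti motif

            // Synthetic "measurement": 0.5*B0 + 0.3*B1 + 0.2*B2.
            double y[nE];
            for (int e = 0; e < nE; ++e)
                y[e] = 0.5 * B[0][e] + 0.3 * B[1][e] + 0.2 * B[2][e];

            // Normal equations (B B^T) w = B y, stored as augmented matrix.
            double M[nB][nB + 1] = {};
            for (int i = 0; i < nB; ++i) {
                for (int j = 0; j < nB; ++j)
                    for (int e = 0; e < nE; ++e) M[i][j] += B[i][e] * B[j][e];
                for (int e = 0; e < nE; ++e) M[i][nB] += B[i][e] * y[e];
            }

            // Gauss-Jordan elimination with partial pivoting.
            for (int c = 0; c < nB; ++c) {
                int piv = c;
                for (int r = c + 1; r < nB; ++r)
                    if (std::fabs(M[r][c]) > std::fabs(M[piv][c])) piv = r;
                for (int k = 0; k <= nB; ++k) std::swap(M[c][k], M[piv][k]);
                for (int r = 0; r < nB; ++r) {
                    if (r == c) continue;
                    double f = M[r][c] / M[c][c];
                    for (int k = 0; k <= nB; ++k) M[r][k] -= f * M[c][k];
                }
            }

            // Recovered weights: the fraction of each coordination motif.
            for (int i = 0; i < nB; ++i)
                std::printf("weight[%d] = %.3f\n", i, M[i][nB] / M[i][i]);
        }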

    The analysis showed that about half of the titanium atoms were “undercoordinated.” In other words, these titanium atoms were surrounded by only four or five oxygen atoms, unlike the structures in most common forms of titania, which have six neighboring oxygen atoms.

    To validate the theoretical result, Lu and the other theorists—Mark Hybertsen, leader of the CFN Theory and Computation Group; CFN postdoc Sencer Selcuk; and former CFN postdoc John Lyons, now a physical scientist at the Naval Research Lab—created an atomic-scale model of the amorphous titania structure. They applied the computational technique of molecular dynamics to simulate the annealing process that produced the amorphous structure. With this model, they also computed the x-ray absorption spectrum of titania; their calculations confirmed that about 50 percent of the titanium atoms were undercoordinated.
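
    To make “undercoordinated” concrete: one common way to extract coordination numbers from a simulated snapshot is to count, for each titanium atom, the oxygen atoms lying within a bond-length cutoff. The C++ sketch below uses toy coordinates and an assumed 2.4-angstrom Ti-O cutoff; the team’s molecular dynamics workflow is far more involved.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Count oxygen neighbors of each titanium atom within a bonding
        // cutoff and report the undercoordinated fraction (< 6 neighbors).
        // Coordinates and the 2.4-angstrom cutoff are invented toy values.
        struct Atom { double x, y, z; };

        int main() {
            std::vector<Atom> ti = {{0, 0, 0}, {3.0, 0, 0}};
            std::vector<Atom> ox = {{1.9, 0, 0},  {0, 1.9, 0}, {0, 0, 1.9},
                                    {-1.9, 0, 0}, {0, -1.9, 0}, {4.9, 0, 0}};
            const double cutoff = 2.4;  // assumed Ti-O bond cutoff, angstroms

            int under = 0;
            for (const Atom& t : ti) {
                int neighbors = 0;
                for (const Atom& o : ox) {
                    double dx = t.x - o.x, dy = t.y - o.y, dz = t.z - o.z;
                    if (std::sqrt(dx * dx + dy * dy + dz * dz) < cutoff)
                        ++neighbors;
                }
                if (neighbors < 6) ++under;  // below octahedral coordination
            }
            std::printf("undercoordinated Ti fraction: %.2f\n",
                        double(under) / ti.size());
        }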

    “These two independent methods gave us a consistent message about the local structure of titania,” said Lu.

    “Fully coordinated atoms are not very active because they cannot bind to the molecules they do chemistry with in reactions,” explained Stavitski. “To make catalysts more active, we need to reduce their coordination.”

    “Amorphous titania transport behavior is very different from that of bulk titania,” added Liu. “Amorphous titania can efficiently transport both holes and electrons as active charge carriers, which drive the water-splitting reaction. But to understand why, we need to know the key atomic-scale motifs.”

    To the best of their knowledge, the scientists are the first to study amorphous titania at such a fine scale.

    “To understand the structural evolution of titania on the atomic level, we needed scientists who know how to grow active materials, how to characterize these materials with the tools that exist at the CFN and NSLS-II, and how to make sense of the characterization results by leveraging theory tools,” said Stavitski.

    Next, the team will extend their approach of combining experimental and theoretical spectroscopy data analysis to materials relevant to quantum information science (QIS). The emerging field of QIS takes advantage of the quantum effects in physics, or the strange behaviors and interactions that happen at ultrasmall scales. They hope that CFN and NSLS-II users will make use of the approach in other research fields, such as energy storage.

    This research used resources of Brookhaven Lab’s Scientific Data and Computing Center (part of CSI) and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility operated by Lawrence Berkeley National Laboratory. The computational studies were in part supported by a DOE Laboratory Directed Research and Development (LDRD) project and the Office of Naval Research through the Naval Research Laboratory’s Basic Research Program.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 12:23 pm on July 5, 2019 Permalink | Reply
    Tags: "Creating 'Movies' of Thin Film Growth at NSLS-II", BNL, Coherent x-rays at NSLS-II enable researchers to produce more accurate observations of thin film growth in real time., The team used a technique called x-ray photon correlation spectroscopy., Thin films are used to build some of today’s most important technologies such as computer chips and solar cells.

    From Brookhaven National Lab: “Creating ‘Movies’ of Thin Film Growth at NSLS-II” 

    From Brookhaven National Lab

    July 2, 2019
    Stephanie Kossman
    skossman@bnl.gov

    Coherent x-rays at NSLS-II enable researchers to produce more accurate observations of thin film growth in real time.

    Co-authors Peco Myint (BU) and Jeffrey Ulbrandt (UVM) are shown at NSLS-II’s CHX beamline, where the research was conducted.

    From paint on a wall to tinted car windows, thin films make up a wide variety of materials found in ordinary life. But thin films are also used to build some of today’s most important technologies, such as computer chips and solar cells. Seeking to improve the performance of these technologies, scientists are studying the mechanisms that drive molecules to uniformly stack together in layers—a process called crystalline thin film growth. Now, a new research technique could help scientists understand this growth process better than ever before.

    Researchers from the University of Vermont, Boston University, and the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have demonstrated a new experimental capability for watching thin film growth in real time. Using the National Synchrotron Light Source II (NSLS-II)—a DOE Office of Science User Facility at Brookhaven—the researchers were able to produce a “movie” of thin film growth that depicts the process more accurately than traditional techniques can. Their research was published on June 14, 2019 in Nature Communications.

    This animation is a simplified representation of thin film growth. As C60 molecules are deposited onto a material, they form multiple layers simultaneously—not one layer at a time. After a molecule reaches the surface of the material, it migrates by surface diffusion towards the boundary of an existing layer, or the “step-edge,” causing the step-edge to move out from the center of the mound. This process repeats as new layers are continuously formed in an organized pattern. The mound increases in height by one layer after an equivalent of one full layer of molecules has been deposited onto the material. The pattern of step-edges is self-similar after each full-layer-equivalent is deposited, just displaced one layer higher. The main result of the study is that this repeating self-similarity, or “autocorrelation,” can be quantitatively measured with coherent x-rays, and that the autocorrelations can be used to deduce certain details of how step-edges propagate during the deposition.

    How thin films grow

    Like building a brick wall, thin films “grow” by stacking in overlapping layers. In this study, the scientists focused on the growth process of a nanomaterial called C60, which is popular for its use in organic solar cells.

    “C60 is a spherical molecule that has the structure of a soccer ball,” said University of Vermont physicist Randall Headrick, lead author of the research. “There is a carbon atom at all of the corners where the ‘black’ and ‘white’ patches meet, for a total of 60 carbon atoms.”

    Though spherical C60 molecules don’t perfectly fit side-by-side like bricks in a wall, they still create a uniform pattern.

    “Imagine you have a big bin and you fill it with one layer of marbles,” Headrick said. “The marbles would pack together in a nice hexagonal pattern along the bottom of the bin. Then, when you laid down the next layer of marbles, they would fit into the hollow areas between the marbles in the bottom layer, forming another perfect layer. We’re studying the mechanism that causes the marbles, or molecules, to find these ordered sites.”

    But in real life, thin films don’t stack this evenly. When filling a bin with marbles, for example, you may have three layers of marbles on one side of the bin and only one layer on the other side. Traditionally, this nonuniformity in thin films has been difficult to measure.

    “In other experiments, we could only study a single crystal that was specially polished so the whole surface behaved the same way at the same time,” Headrick said. “But that is not how materials behave in real life.”

    Studying thin film growth through coherent x-rays

    A snapshot of the speckle pattern “movie” produced at CHX. The speckles are most visible at the boundaries of each color.

    To collect data that more accurately described thin film growth, Headrick went to the Coherent Hard X-ray Scattering (CHX) beamline at NSLS-II to design a new kind of experiment, one that made use of the beamline’s coherent x-rays. The team used a technique called x-ray photon correlation spectroscopy.

    “Typically, when you do an x-ray experiment, you see average information, like the average size of molecules or the average distance between them. And as the surface of a material becomes less uniform or ‘rougher,’ the features you look for disappear,” said Andrei Fluerasu, lead beamline scientist at CHX and a co-author of the research. “What is special about CHX is that we can use a coherent x-ray beam that produces an interference pattern, which can be thought of like a fingerprint. As a material grows and changes, its fingerprint does as well.”

    The “fingerprint” produced by CHX appears as a speckle pattern and it represents the exact arrangement of molecules in the top layer of the material. As layers continue to stack, scientists can watch the fingerprint change as if it were a movie of the thin film growth.

    “That is impossible to measure with other techniques,” Fluerasu said.

    Through computer processing, the scientists are able to convert the speckle patterns into correlation functions that are easier to interpret.
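
    The simplest such correlation function is the intensity autocorrelation g2(tau), which compares each pixel’s speckle intensity with itself a lag time tau later; a surface that never changed would give a flat g2. The C++ sketch below computes g2 per pixel and averages, using an invented four-frame, three-pixel “movie”; a real CHX analysis uses two-time correlations over large detector regions, which this does not attempt.

        #include <cstdio>

        // Minimal sketch of an XPCS-style intensity autocorrelation,
        //   g2(tau) = <I(t) I(t+tau)> / <I>^2,
        // computed per detector pixel and then averaged. The frame stack is
        // invented; real analyses use huge detector areas and many frames.
        int main() {
            const int nFrames = 4, nPix = 3;
            const double I[nFrames][nPix] = {{1.0, 2.0, 3.0},
                                             {1.1, 1.9, 3.2},
                                             {0.9, 2.1, 2.8},
                                             {1.2, 2.0, 3.1}};

            for (int tau = 0; tau < nFrames; ++tau) {
                double g2 = 0.0;
                for (int p = 0; p < nPix; ++p) {
                    double corr = 0.0, mean = 0.0;
                    for (int t = 0; t < nFrames; ++t) mean += I[t][p];
                    mean /= nFrames;
                    const int pairs = nFrames - tau;  // usable (t, t+tau) pairs
                    for (int t = 0; t < pairs; ++t)
                        corr += I[t][p] * I[t + tau][p];
                    g2 += (corr / pairs) / (mean * mean);
                }
                std::printf("g2(tau=%d) = %.4f\n", tau, g2 / nPix);
            }
        }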

    “There are instruments like high resolution microscopes that can actually make a real image of these kinds of materials, but these images usually only show narrow views of the material,” Headrick said. “A speckle pattern that changes over time is not as intuitive, but it provides us with data that is much more relevant to the real-life case.”

    Co-author Lutz Wiegart, a beamline scientist at CHX, added, “This technique allows us to understand the dynamics of growth processes and, therefore, figure out how they relate to the quality of the films and how we can tune the processes.”

    The detailed observations of C60 from this study could be used to improve the performance of organic solar cells. Moving forward, the researchers plan to use this technique to study other types of thin films as well.

    Members of the collaborating institutions are shown at NSLS-II’s CHX beamline. Pictured from left to right are Karl F. Ludwig Jr. (BU), Lutz Wiegart (NSLS-II), Randall Headrick (UVM), Xiaozhi Zhang (UVM), Jeffrey Ulbrandt (UVM), Yugang Zhang (NSLS-II), Andrei Fluerasu (NSLS-II), and Peco Myint (BU).

    See the full article here.



     
  • richardmitnick 1:41 pm on June 22, 2019 Permalink | Reply
    Tags: BNL, LBCO (lanthanum barium copper oxide) was the first high-temperature (high-Tc) superconductor discovered some 33 years ago.

    From Brookhaven National Lab: “Electron (or ‘Hole’) Pairs May Survive Effort to Kill Superconductivity” 

    From Brookhaven National Lab

    June 14, 2019
    Karen McNulty Walsh,
    kmcnulty@bnl.gov
    (631) 344-8350

    Peter Genzer
    genzer@bnl.gov
    (631) 344-3174

    Emergence of unusual metallic state supports role of charge stripes in formation of charge-carrier pairs essential to resistance-free flow of electrical current.

    Showing their stripes: Brookhaven Lab physicists present new evidence that stripes—alternating areas of charge and magnetism in certain copper-oxide materials—are good for forming the charge-carrier pairs needed for electrical current to flow with no resistance. Left to right: Qiang Li, Genda Gu, John Tranquada, Alexei Tsvelik, and Yangmu Li in front of an image of wind-blown ripples in desert sand.

    Scientists seeking to understand the mechanism underlying superconductivity in “stripe-ordered” cuprates—copper-oxide materials with alternating areas of electric charge and magnetism—discovered an unusual metallic state when attempting to turn superconductivity off. They found that under the conditions of their experiment, even after the material loses its ability to carry electrical current with no energy loss, it retains some conductivity—and possibly the electron (or hole) pairs required for its superconducting superpower.

    “This work provides circumstantial evidence that the stripe-ordered arrangement of charges and magnetism is good for forming the charge-carrier pairs required for superconductivity to emerge,” said John Tranquada, a physicist at the U.S. Department of Energy’s Brookhaven National Laboratory.

    Tranquada and his co-authors from Brookhaven Lab and the National High Magnetic Field Laboratory at Florida State University, where some of the work was done, describe their findings in a paper just published in Science Advances. A related paper in the Proceedings of the National Academy of Sciences by co-author Alexei Tsvelik, a theorist at Brookhaven Lab, provides insight into the theoretical underpinnings for the observations.

    This image represents the stripes of magnetism and charge in the cuprate (copper and oxygen) layers of the superconductor LBCO. Gray shading represents the modulation of the charge (“holes,” or electron vacancies), which is maximized in stripes that separate areas of magnetism, indicated by arrows representing alternating magnetic orientations on adjacent copper atoms.

    The scientists were studying a particular formulation of lanthanum barium copper oxide (LBCO) that exhibits an unusual form of superconductivity at a temperature of 40 Kelvin (-233 degrees Celsius). That’s relatively warm in the realm of superconductors. Conventional superconductors must be cooled with liquid helium to temperatures near -273°C (0 Kelvin or absolute zero) to carry current without energy loss. Understanding the mechanism behind such “high-temperature” superconductivity might guide the discovery or strategic design of superconductors that operate at higher temperatures.

    “In principle, such superconductors could improve the electrical power infrastructure with zero-energy-loss power transmission lines,” Tranquada said, “or be used in powerful electromagnets for applications like magnetic resonance imaging (MRI) without the need for costly cooling.”

    The mystery of high-Tc

    LBCO was the first high-temperature (high-Tc) superconductor discovered, some 33 years ago. It consists of layers of copper-oxide separated by layers composed of lanthanum and barium. Barium contributes fewer electrons than lanthanum to the copper-oxide layers, so at a particular ratio, the imbalance leaves vacancies of electrons, known as holes, in the cuprate planes. Those holes can act as charge carriers and pair up, just like electrons, and at temperatures below 30K, current can move through the material with no resistance in three dimensions—both within and between the layers.

    Copper-oxide layers of LBCO (the lanthanum-barium layers would be between these). 3-D superconductivity occurs when current can flow freely in any direction within and between the copper-oxide layers, while 2-D superconductivity exists when current moves freely only within the layers (not perpendicular). The perpendicular orientations of stripe patterns from one layer to the next may be part of what inhibits movement of current between layers.

    An odd characteristic of this material is that, in the copper-oxide layers, at the particular barium concentration, the holes segregate into “stripes” that alternate with areas of magnetic alignment. Since this discovery in 1995, there has been much debate about the role these stripes play in inducing or inhibiting superconductivity.

    In 2007, Tranquada and his team discovered an even more unusual form of superconductivity in this material at the higher temperature of 40 K. If they altered the amount of barium to be just under the amount that allowed 3-D superconductivity, they observed 2-D superconductivity—meaning just within the copper-oxide layers but not between them.

    “The superconducting layers seem to decouple from one another,” Tsvelik, the theorist, said. The current can still flow without loss in any direction within the layers, but there is resistivity in the direction perpendicular to the layers. This observation was interpreted as a sign that charge-carrier pairs were forming “pair density waves” with orientations perpendicular to one another in neighboring layers. “That’s why the pairs can’t jump from one layer to another. It would be like trying to merge into traffic moving in a perpendicular direction. They can’t merge,” Tsvelik said.

    Superconducting stripes are hard to kill

    In the new experiment, the scientists dove deeper into exploring the origins of the unusual superconductivity in the special formulation of LBCO by trying to destroy it. “Oftentimes we test things by pushing them to failure,” Tranquada said. Their method of destruction was exposing the material to powerful magnetic fields generated at Florida State.

    “As the external field gets bigger, the current in the superconductor grows larger and larger to try to cancel out the magnetic field,” Tranquada explained. “But there’s a limit to the current that can flow without resistance. Finding that limit should tell us something about how strong the superconductor is.”

    A phase diagram of LBCO at different temperatures and magnetic field strengths. Colors represent how resistant the material is to the flow of electrical current, with purple being a superconductor with no resistance. When cooled to near absolute zero with no magnetic field, the material acts as a 3-D superconductor. As the magnetic field strength goes up, 3-D superconductivity disappears, but 2-D superconductivity reappears at higher field strength, then disappears again. At the highest fields, resistance grew, but the material retained some unusual metallic conductivity, which the scientists interpreted as an indication that charge-carrier pairs might persist even after superconductivity is destroyed.

    For example, if the stripes of charge order and magnetism in LBCO are bad for superconductivity, a modest magnetic field should destroy it. “We thought maybe the charge would get frozen in the stripes so that the material would become an insulator,” Tranquada said.

    But the superconductivity turned out to be a lot more robust.

    Using perfect crystals of LBCO grown by Brookhaven physicist Genda Gu, Yangmu Li, a postdoctoral fellow who works in Tranquada’s lab, took measurements of the material’s resistance and conductivity under various conditions at the National High Magnetic Field Laboratory. At a temperature just above absolute zero with no magnetic field present, the material exhibited full, 3-D superconductivity. Keeping the temperature constant, the scientists had to ramp up the external magnetic field significantly to make the 3-D superconductivity disappear. Even more surprising, when they increased the field strength further, the resistance within the copper-oxide planes went down to zero again!

    “We saw the same 2-D superconductivity we’d discovered at 40K,” Tranquada said.

    Ramping up the field further destroyed the 2-D superconductivity, but it never completely destroyed the material’s ability to carry ordinary current.

    “The resistance grew but then leveled off,” Tranquada noted.

    Signs of persistent pairs?

    Additional measurements made at the highest magnetic field indicated that the charge carriers in the material, though no longer superconducting, may still exist as pairs, Tranquada said.

    “The material becomes a metal that no longer deflects the flow of current,” Tsvelik said. “Whenever you have a current in a magnetic field, you would expect some deflection of the charges—electrons or holes—in the direction perpendicular to the current [what scientists call the Hall effect]. But that’s not what happens. There is no deflection.”

    In other words, even after the superconductivity is destroyed, the material keeps one of the key signatures of the “pair density wave” that is characteristic of the superconducting state.
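
    For reference, the textbook single-band Hall relations behind this argument (standard formulas, not taken from the paper) are

        \rho_{xy} \;=\; \frac{E_y}{j_x} \;=\; R_H B_z, \qquad R_H \;=\; \frac{1}{nq},

    where n is the carrier density and q is the carrier charge (negative for electrons, positive for holes). In an ordinary single-band metal, \rho_{xy} grows linearly with the applied field, so a Hall signal near zero at high field is hard to square with conventional unpaired carriers, consistent with the team’s reading that pairing survives.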

    “My theory relates the presence of the charge-rich stripes with the existence of magnetic moments between them to the formation of the pair density wave state,” Tsvelik said. “The observation of no charge deflection at high field shows that the magnetic field can destroy the coherence needed for superconductivity without necessarily destroying the pair density wave.”

    “Together these observations provide additional evidence that the stripes are good for pairing,” Tranquada said. “We see the 2-D superconductivity reappear at high field and then, at an even higher field, when we lose the 2-D superconductivity, the material doesn’t just become an insulator. There’s still some current flowing. We may have lost coherent motion of pairs between the stripes, but we may still have pairs within the stripes that can move incoherently and give us an unusual metallic behavior.”

    See the full article here.



     
  • richardmitnick 1:12 pm on June 22, 2019 Permalink | Reply
    Tags: "Researchers to Take Advantage of DOE's Advanced Supercomputers", BNL, D.O.E. Comscope   

    From Brookhaven National Lab: “Researchers to Take Advantage of DOE’s Advanced Supercomputers” 

    From Brookhaven National Lab

    June 18, 2019

    Comscope is one of seven projects funded by the U.S. Department of Energy to accelerate the design of new materials through advanced computation.

    The U.S. Department of Energy announced today that it will invest $32 million over the next four years to accelerate the design of new materials through the use of supercomputers.

    Seven projects will be supported, three led by teams at DOE national laboratories and four by universities. The teams are led by Argonne National Laboratory (ANL), Brookhaven National Laboratory (BNL), and Lawrence Livermore National Laboratory (LLNL), as well as the University of Illinois, the Pennsylvania State University, the University of Texas, and the University of Southern California.

    These projects will develop widely applicable open source software utilizing DOE’s current leadership class and future exascale computing facilities. The goal is to provide the software platforms and data for the design of new functional materials with a broad range of applications, including alternative and renewable energy, electronics, data storage and materials for quantum information science.

    The new awards are part of DOE’s Computational Materials Sciences (CMS) program, begun in 2015 to reflect the enormous recent growth in computing power and the increasing capability of high-performance computers to model and simulate the behavior of matter at the atomic and molecular scales.

    “High performance computing has become an increasingly powerful tool of scientific discovery and technological innovation, and our capabilities continue to grow,” said Under Secretary for Science Paul Dabbar. “These projects will harness America’s leadership in supercomputing to deliver a new generation of materials for energy and a wide range of other applications.”

    Researchers are expected to make use of current-generation petaflop supercomputers and prepare for next-generation exaflop machines scheduled for deployment in the early 2020s. Current machines include the 200-petaflop Summit computer at the Oak Ridge Leadership Computing Facility (OLCF), the 11-petaflop Theta computer at the Argonne Leadership Computing Facility (ALCF), and the 30-petaflop Cori machine at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL). OLCF, ALCF, and NERSC are all DOE Office of Science user facilities. A petaflop is a million-billion floating-point operations per second; an exaflop is a billion-billion floating-point operations per second.


    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    ANL/ALCF

    ANL ALCF Theta Cray XC40 supercomputer

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    Research will combine theory and software development with experimental validation, drawing on the resources of multiple DOE Office of Science user facilities, including the Advanced Light Source at LBNL, the Advanced Photon Source at ANL, the Spallation Neutron Source at Oak Ridge National Laboratory, the Linac Coherent Light Source at SLAC National Accelerator Laboratory, and several of the five Nanoscale Science Research Centers across the DOE national laboratory complex.

    LBNL ALS

    ANL Advanced Photon Source

    ORNL Spallation Neutron Source

    SLAC/LCLS

    Funding for the new projects will total $8 million in Fiscal Year 2019. Subsequent annual funding will be contingent on available appropriations and project performance.

    Projects were chosen by competitive peer review under a DOE Funding Opportunity Announcement for Computational Materials Sciences. The CMS program is managed by the Department’s Office of Science through its Office of Basic Energy Sciences. Projects announced today are selections for negotiation of financial award. The final details for each project award are subject to final grant and contract negotiations between DOE and the awardees. A list of awards can be found here.

    See the full article here.



     
  • richardmitnick 11:13 am on June 14, 2019 Permalink | Reply
    Tags: "Preparing Scientific Applications for Exascale Computing", BNL, Brookhaven Lab's Computational Science Initiative hosted a four-day coding workshop focusing on the latest version of OpenMP

    From Brookhaven National Lab: “Preparing Scientific Applications for Exascale Computing” 

    From Brookhaven National Lab

    June 11, 2019
    Ariana Tantillo
    atantillo@bnl.gov

    Brookhaven Lab’s Computational Science Initiative hosted a four-day coding workshop focusing on the latest version of OpenMP, a widely used programming standard that is being upgraded with new features to support next-generation supercomputing.

    The 2019 OpenMP hackathon at Brookhaven Lab—hosted by the Computational Science Initiative from April 29 to May 2—brought together participants from Brookhaven, Argonne, Lawrence Berkeley, Lawrence Livermore, and Oak Ridge national labs; IBM; NASA; Georgia Tech; Indiana University; Rice University; and University of Illinois at Urbana-Champaign.

    Exascale computers are expected to debut in 2021, including Frontier at the U.S. Department of Energy’s (DOE) Oak Ridge Leadership Computing Facility (OLCF) and Aurora at the Argonne Leadership Computing Facility (ALCF), both DOE Office of Science User Facilities.

    ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    These next-generation computing systems are projected to surpass the speed of today’s most powerful supercomputers by five to 10 times. This performance boost will enable scientists to tackle problems that are otherwise unsolvable in terms of their complexity and computation time.

    But reaching such a high level of performance will require software adaptations. For example, OpenMP—the standard application programming interface for shared-memory parallel computing, or the use of multiple processors to complete a task—will have to evolve to support the layering of different memories, hardware accelerators such as graphics processing units (GPUs), various exascale computing architectures, and the latest standards for C++ and other programming languages.
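
    To make the accelerator-offload point concrete, here is a minimal C++ sketch of OpenMP 4.5-style device offload: map arrays to a GPU (or other default device), run the loop there, and map the result back. The saxpy kernel, sizes, and compiler invocation are illustrative assumptions, not code from any of the teams.

        #include <cstdio>
        #include <vector>

        // Minimal sketch of OpenMP 4.5 'target' offload. Compile with an
        // offloading compiler, e.g.:
        //   clang++ -fopenmp -fopenmp-targets=nvptx64 saxpy.cpp
        int main() {
            const int n = 1 << 20;
            std::vector<float> x(n, 1.0f), y(n, 2.0f);
            const float a = 0.5f;
            float* xp = x.data();
            float* yp = y.data();

            // 'map' clauses control host<->device data movement:
            // to = copy in, tofrom = copy in and back out.
            #pragma omp target teams distribute parallel for \
                map(to: xp[0:n]) map(tofrom: yp[0:n])
            for (int i = 0; i < n; ++i)
                yp[i] = a * xp[i] + yp[i];

            std::printf("y[0] = %.2f\n", yp[0]);  // expect 2.50
        }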

    Exascale computers will be used to solve problems in a wide range of scientific applications, including to simulate the lifetime operations of small modular nuclear reactors (left) and to understand the complex relationship between 3-D printing processes and material properties (right). Credit: Oak Ridge National Lab.

    Evolving OpenMP toward exascale with the SOLLVE project

    In September 2016, the DOE Exascale Computing Project (ECP) funded a software development project called SOLLVE (for Scaling OpenMP via Low-Level Virtual Machine for Exascale Performance and Portability) to help with this transition.

    The SOLLVE project team—led by DOE’s Brookhaven National Laboratory and consisting of collaborators from DOE’s Argonne, Lawrence Livermore, and Oak Ridge National Labs, and Georgia Tech—has been designing, implementing, and standardizing key OpenMP functionalities that ECP application developers have identified as important.

    Driven by SOLLVE and sponsored by ECP, Brookhaven Lab’s Computational Science Initiative (CSI) hosted a four-day OpenMP hackathon from April 29 to May 2, jointly organized with Oak Ridge and IBM. The OpenMP hackathon is the latest in a series of hackathons offered by CSI, including those focusing on NVIDIA GPUs and Intel Xeon Phi many-core processors.

    “OpenMP is undergoing substantial changes to address the requirements of upcoming exascale computing systems,” said local event coordinator Martin Kong, a computational scientist in CSI’s Computer Science and Mathematics Group and the Brookhaven Lab representative on the OpenMP Architecture Review Board, which oversees the OpenMP standard specification. “Porting scientific codes to the new exascale hardware and architectures will be a grand challenge. The main motivation of this hackathon is application engagement—to interact more deeply with different users, especially those from DOE labs, and make them aware of the changes they should expect in OpenMP and how these changes can benefit their scientific applications.”

    Laying the foundation for application performance portability

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Computational and domain scientists, code developers, and computing hardware experts from Brookhaven, Argonne, Lawrence Berkeley, Lawrence Livermore, Oak Ridge, Georgia Tech, Indiana University, Rice University, University of Illinois at Urbana-Champaign, IBM, and the National Aeronautics and Space Administration (NASA) participated in the hackathon. The eight teams were guided by national lab, university, and industry mentors who were selected based on their extensive experience in programming GPUs, participating in the OpenMP Language Committee, and conducting research and development in tools that support the latest OpenMP specifications.

    Throughout the week, the teams worked on porting their scientific applications from central processing units (CPUs) to GPUs and optimizing them using the latest OpenMP version (4.5+). In between hacking sessions, the teams had tutorials on various advanced OpenMP features, including accelerator programming, profiling tools to assess performance, and application optimization strategies.

    Some teams also used the latest OpenMP functionalities to program IBM Power9 CPUs accelerated with NVIDIA GPUs. The world’s fastest supercomputer—the Summit supercomputer at OLCF—is based on this new architecture, with more than 9,000 IBM Power9 CPUs and more than 27,000 NVIDIA GPUs.

    Taking steps toward exascale

    The teams’ applications spanned many areas, including nuclear and high-energy physics, lasers and optics, materials science, autonomous systems, and fluid mechanics.

    Participant David Wagner of the NASA Langley Research Center High Performance Computing Incubator and colleagues Gabriele Jost and Daniel Kokron of the NASA Ames Research Center came with a code for simulating elasticity. Their goal at the hackathon was to increase single-instruction, multiple-data (SIMD) parallelism—a type of computing in which multiple processors perform the same operation on many data points simultaneously—and optimize the speed at which data can be read from and stored into memory.
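
    As a concrete illustration of the SIMD parallelism the team was after, the C++ sketch below uses OpenMP’s simd directive to ask the compiler to run loop iterations in vector lanes, the same operation applied to many data points at once. The stress-update kernel and constants are invented, not NASA’s elasticity code.

        #include <cstdio>
        #include <vector>

        // Toy Hooke's-law update, vectorized with 'omp simd'. Compile with
        // -fopenmp (or -fopenmp-simd) so the directive takes effect.
        int main() {
            const int n = 1024;
            std::vector<double> strain(n, 0.001), stress(n, 0.0);
            const double youngs_modulus = 200e9;  // illustrative, in pascals

            #pragma omp simd
            for (int i = 0; i < n; ++i)
                stress[i] = youngs_modulus * strain[i];  // stress = E * strain

            std::printf("stress[0] = %.3e Pa\n", stress[0]);
        }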

    “Scientists at NASA are trying to understand how and why aircraft and spacecraft materials fail,” said Wagner. “We need to make sure that these materials are durable enough to withstand all of the forces that are present in normal use during service. At the hackathon, we’re working on a mini app that is representative of the most computationally intensive parts of the larger program to model what happens physically when the materials are loaded, bent, and stretched. Our code has lots of little formulas that need to run billions of times over. The challenge is performing all of the calculations really fast.”

    According to Wagner, one of the reasons NASA is pushing for this computational capability now is to understand the processes used to generate additively manufactured (3-D printed) parts and the different material properties of these parts, which are increasingly being used in aircraft. Knowing this information is important to ensuring the safety, reliability, and durability of the materials over their operational lifetimes.

    “The hackathon was a success for us,” said Wagner. “We got our code set up for massively parallel execution and running correctly on GPU hardware. We’ll continue with debugging and parallel performance tuning, as we expect to have suitable NASA hardware and software available soon.”

    Another team took a similar approach in trying to get OpenMP to work for a small portion of their code, a lattice quantum chromodynamics (QCD) code that is at the center of an ECP project called Lattice QCD: Lattice Quantum Chromodynamics for Exascale. Lattice QCD is a numerical framework for simulating the strong interactions between elementary particles called quarks and gluons. Such simulations are important to many high-energy and nuclear physics problems. Typical simulations require months of running on supercomputers.

    A schematic of the lattice for quantum chromodynamics calculations. The intersection points on the grid represent quark values, while the lines between them represent gluon values.

    “We would like our code to run on different exascale architectures,” said team member and computational scientist Meifeng Lin, deputy group lead of CSI’s new Quantum Computing Group and local coordinator of previous hackathons. “Right now, the code runs on NVIDIA GPUs but upcoming exascale computers are expected to have at least two different architectures. We hope that by using OpenMP, which is supported by major hardware vendors, we will be able to more easily port our code to these emerging platforms. We spent the first two days of the hackathon trying to get OpenMP to offload code from CPU to GPU across the entire library, without much success.”

    Mentor Lingda Li, a CSI research associate and a member of the SOLLVE project, helped Lin and fellow team member Chulwoo Jung, a physicist in Brookhaven’s High-Energy Theory Group, with the OpenMP offloading.

    Though the team was able to get OpenMP to work with a few hundred lines of code, its initial performance was poor. They used various performance profiling tools to determine what was causing the slowdown. With this information, they were able to make foundational progress in their overall optimization strategy, including solving problems related to initial GPU offloading and simplifying data mapping.

    Among the profiling tools available to teams at the hackathon was one developed by Rice University and the University of Wisconsin.

    John Mellor-Crummey gives a presentation about the HPCToolkit, an integrated suite of tools for measuring and analyzing program performance on systems ranging from desktops to supercomputers.

    “Our tool measures the performance of GPU-accelerated codes both on the host and the GPU,” said John Mellor-Crummey, professor of computer science and electrical and computer engineering at Rice University and the principal investigator on the corresponding ECP project Extending HPCToolkit to Measure and Analyze Code Performance on Exascale Platforms. “We’ve been using it on several simulation codes this week to look at the relative performance of computation and data movement in and out of GPUs. We can tell not only how long a code is running but also how many instructions were executed and whether the execution was at full speed or stalled, and if stalled, why. We also identified mapping problems with the compiler information that associates machine code and source code.”

    Other mentors from IBM were on hand to show the teams how to use IBM XL compilers—which are designed to exploit the full power of IBM Power processors—and help them through any issues they encountered.

    “Compilers are tools that scientists use to translate their scientific software into code that can be read by hardware, by the largest supercomputers in the world—Summit and Sierra [at Lawrence Livermore],” said Doru Bercea, a research staff member in the Advanced Compiler Technologies Group at the IBM TJ Watson Research Center. “The hackathon provides us with an opportunity to discuss compiler design decisions to get OpenMP to work better for scientists.”

    According to mentor Johannes Doerfert, a postdoctoral scholar at ALCF, the applications the teams brought to the hackathon were at various stages in terms of their readiness for upcoming computing systems.

    QMCPack can be used to calculate the ground and excited state energies of localized defects in insulators and semiconductors—for example, in manganese (Mn)4+-doped phosphors, which are promising materials for improving the color quality and luminosity of white-light-emitting diodes. Source: The Journal of Physical Chemistry Letters.

    “Some teams are facing porting problems, some are struggling with the compilers, and some have application performance issues,” explained Doerfert. “As mentors, we receive questions coming from anywhere in this large spectrum.”

    Some of the other scientific applications that teams brought include a code (pf3d) for simulating the interactions between high-intensity lasers and plasma (ionized gas) in experiments at Lawrence Livermore’s National Ignition Facility, and a code for calculating the electronic structure of atoms, molecules, and solids (QMCPack, also an ECP project). Another ECP team brought a portable programming environment (RAJA) for the C++ programming language.

    “We’re developing a high-level abstraction called RAJA so people can use whatever hardware or software frameworks are available on the backend of their computer systems,” said mentor Tom Scogland, a postdoctoral scholar in the Center for Applied Scientific Computing at Lawrence Livermore. “RAJA mainly targets OpenMP on the host and CUDA [another parallel computing programming model] on the backend. But we want RAJA to work with other programming models on the backend, including OpenMP.”

    “The theme of the hackathon was OpenMP 4.5+, an evolving and not fully mature version,” explained Kong. “The teams left with a better understanding of the new OpenMP features, knowledge about the new tools that are becoming available on Summit, and a roadmap to follow in the long term.”

    “I learned a number of things about OpenMP 4.5,” said pf3d team member Steve Langer, a computational physicist at Lawrence Livermore. “The biggest benefit was the discussions with mentors and IBM employees. I now know how to package my OpenMP offload directives to use NVIDIA GPUs without running into memory limitations.”

    A second OpenMP hackathon will be held in July at Oak Ridge, and a third in August at the National Energy Research Scientific Computing Center (NERSC), a division of Lawrence Berkeley and a DOE Office of Science User Facility that serves as the primary computing facility for DOE Office of Science–supported researchers.

    See the full article here.



     