Tagged: BNL

  • richardmitnick 2:03 pm on April 26, 2019 Permalink | Reply
    Tags: "Building a Printing Press for New Quantum Materials", “We realized that building a robot that can enable the design synthesis and testing of quantum materials is extremely well-matched to the skills and expertise of scientists at the CFN.”, BNL, CFN-Center for Functional Nanomaterials, Exotic electronic magnetic and optical properties emerge at such small (quantum) size scales., , , Once high-quality 2-D flakes from different crystals have been located and their properties characterized they can be assembled in the desired order to create the layered structures., Quantum Material Press or QPress, Structures obtained by stacking single atomic layers (“flakes”) peeled from different parent bulk crystals are of interest   

    From Brookhaven National Lab: “Building a Printing Press for New Quantum Materials” 

    From Brookhaven National Lab

    April 22, 2019
    Ariana Tantillo
    atantillo@bnl.gov

    Scientists at Brookhaven Lab’s Center for Functional Nanomaterials are developing an automated system to synthesize entirely new materials made from stacked atomically thin two-dimensional sheets and to characterize their exotic quantum properties.

    BNL Center for Functional Nanomaterials

    Scientists at Brookhaven Lab’s Center for Functional Nanomaterials are building a robotic system to enable the design, synthesis, and testing of quantum materials, which exhibit unique properties. From left to right: Gregory Doerk, Jerzy Sadowski, Kevin Yager, Young Jae Shin, and Aaron Stein.

    Checking out a stack of books from the library is as simple as searching the library’s catalog and using unique call numbers to pull each book from its shelf location. Using a similar principle, scientists at the Center for Functional Nanomaterials (CFN)—a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory—are teaming with Harvard University and the Massachusetts Institute of Technology (MIT) to create a first-of-its-kind automated system to catalog atomically thin two-dimensional (2-D) materials and stack them into layered structures. Called the Quantum Material Press, or QPress, this system will accelerate the discovery of next-generation materials for the emerging field of quantum information science (QIS).

    Structures obtained by stacking single atomic layers (“flakes”) peeled from different parent bulk crystals are of interest because of the exotic electronic, magnetic, and optical properties that emerge at such small (quantum) size scales. However, flake exfoliation is currently a manual process that yields a variety of flake sizes, shapes, orientations, and numbers of layers. Scientists use optical microscopes at high magnification to manually hunt through thousands of flakes to find the desired ones—a search that can take days or even a week and is prone to human error.

    Once high-quality 2-D flakes from different crystals have been located and their properties characterized, they can be assembled in the desired order to create the layered structures. Stacking is very time-intensive, often taking longer than a month to assemble a single layered structure. To determine whether the generated structures are optimal for QIS applications—ranging from computing and encryption to sensing and communications—scientists then need to characterize the structures’ properties.

    “In talking to our university collaborators at Harvard and MIT who synthesize and study these layered heterostructures, we learned that while bits of automation exist—such as software to locate the flakes and joysticks to manipulate the flakes—there is no fully automated solution,” said CFN Director Charles Black, the administrative lead on the QPress project.

    The idea for the QPress was conceived in early 2018 by Professor Amir Yacoby of the Department of Physics at Harvard. The concept was then refined through a collaboration between Yacoby; Black and Kevin Yager, leader of the CFN Electronic Nanomaterials Group; Philip Kim, also of Harvard’s Department of Physics; and Pablo Jarillo-Herrero and Joseph Checkelsky, both of the Department of Physics at MIT.

    According to Black, the unique CFN role was clear: “We realized that building a robot that can enable the design, synthesis, and testing of quantum materials is extremely well-matched to the skills and expertise of scientists at the CFN. As a user facility, CFN is meant to be a resource for the scientific community, and QIS is one of our growth areas for which we’re expanding our capabilities, scientific programs, and staff.”

    Graphene sparks 2-D materials research

    The interest in 2-D materials dates back to 2004, when scientists at the University of Manchester isolated the world’s first 2-D material, graphene—a single layer of carbon atoms. They used a surprisingly basic technique in which they placed a piece of graphite (the core material of pencils) on Scotch tape, repeatedly folding the tape in half and peeling it apart to extract ever-thinner flakes. Then, they rubbed the tape on a flat surface to transfer the flakes. Under an optical microscope, the one-atom-thick flakes can be located by their reflectivity, appearing as very faint spots. Recognized with a Nobel Prize in 2010, the discovery of graphene and its unusual properties—including its remarkable mechanical strength and electrical and thermal conductivity—has prompted scientists to explore other 2-D materials.

    Many labs continue to use this laborious approach to make and find 2-D flakes. While the approach has enabled scientists to perform various measurements on graphene, hundreds of other crystals—including magnets, superconductors, and semiconductors—can be exfoliated in the same way as graphite. Moreover, different 2-D flakes can be stacked to build materials that have never existed before. Scientists have very recently discovered that the properties of these stacked structures depend not only on the order of the layers but also on the relative angle between the atoms in the layers. For example, a material can be tuned from a metallic to an insulating state simply by controlling this angle. Given the wide variety of samples that scientists would like to explore and the error-prone and time-consuming nature of manual synthesis methods, automated approaches are greatly needed.

    “Ultimately, we would like to develop a robot that delivers a stacked structure based on the 2-D flake sequences and crystal orientations that scientists select through a web interface to the machine,” said Black. “If successful, the QPress would enable scientists to spend their time and energy studying materials, rather than making them.”

    A modular approach

    In September 2018, further development of the QPress was awarded funding by the DOE, with a two-part approach. One award was for QPress hardware development at Brookhaven, led by Black; Yager; CFN scientists Gregory Doerk, Aaron Stein, and Jerzy Sadowski; and CFN scientific associate Young Jae Shin. The other award was for a coordinated research project led by Yacoby, Kim, Jarillo-Herrero, and Checkelsky. The Harvard and MIT physicists will use the QPress to study exotic forms of superconductivity—the ability of certain materials to conduct electricity without energy loss at very low temperatures—that exist at the interface between a superconductor and a magnet. Some scientists believe that such exotic states of matter are key to advancing quantum computing, which is expected to surpass the capabilities of even today’s most powerful supercomputers.

    A photo of the prototype exfoliator. The robotic system transfers peeled 2-D flakes from the parent crystal to a substrate. The exfoliator allows scientists to control stamping pressure, pressing time, number of repeated presses, angle of pressing, and lateral force applied during transfer, for improved repeatability.

    A fully integrated automated machine consisting of an exfoliator, a cataloger, a library, a stacker, and a characterizer is expected in three years. However, these modules will come online in stages to enable the use of QPress early on.

    The team has already made some progress. They built a prototype exfoliator that mimics the action of a human peeling flakes from a graphite crystal. The exfoliator presses a polymer stamp into a bulk parent crystal and transfers the exfoliated flakes by pressing them onto a substrate. In their first set of experiments, the team investigated how various parameters—stamping pressure, pressing time, number of repeated presses, angle of pressing, and lateral force applied during transfer—impact the process.
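
    To make the parameter study concrete, below is a minimal Python sketch of how such a sweep might be organized. The parameter names, units, and values are purely illustrative assumptions, not the CFN team's actual settings or software.

        # Illustrative exfoliation parameter sweep (all names and values are hypothetical).
        from itertools import product

        stamp_pressures_kpa = [10, 20, 40]   # stamping pressure
        press_times_s = [1, 5, 10]           # pressing time
        repeat_presses = [1, 3, 5]           # number of repeated presses
        press_angles_deg = [0, 5, 10]        # angle of pressing
        lateral_forces_mn = [0, 2, 5]        # lateral force applied during transfer

        trials = [
            {"pressure_kPa": p, "time_s": t, "presses": n, "angle_deg": a, "lateral_force_mN": f}
            for p, t, n, a, f in product(stamp_pressures_kpa, press_times_s, repeat_presses,
                                         press_angles_deg, lateral_forces_mn)
        ]
        # Each trial would be run on the exfoliator and scored (e.g., by the count and
        # area of sufficiently thin flakes) to identify the most productive settings.
        print(len(trials), "parameter combinations to test")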

    “One of the advantages of using a robot is that, unlike a human, it reproduces the same motions every time, and we can optimize these motions to generate lots of very thin large flakes,” explained Yager. “Thus, the exfoliator will improve both the quality and quantity of 2-D flakes peeled from parent crystals by refining the speed, precision, and repeatability of the process.”

    In collaboration with Stony Brook University assistant professor Minh Hoai Nguyen of the Department of Computer Science and PhD student Boyu Wang of the Computer Vision Lab, the scientists are also building a flake cataloger. Through image-analysis software, the cataloger scans a substrate and records the locations of exfoliated flakes and their properties.

    “The flakes that scientists are interested in are thin and thus faint, so manual visual inspection is a laborious and error-prone process,” said Nguyen. “We are using state-of-the-art computer vision and deep learning techniques to develop software that can automate this process with higher accuracy.”
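
    As a rough illustration of the cataloging idea only—the QPress software described here uses deep-learning-based computer vision rather than simple thresholding—a minimal contrast-based flake finder might look like the following sketch (function and parameter names are assumptions):

        # Toy flake cataloger: find faint regions that differ slightly from the bare substrate.
        import numpy as np
        from scipy import ndimage

        def catalog_flakes(image, background, contrast_threshold=0.02):
            """Return centroids and pixel areas of candidate flakes in a grayscale image."""
            contrast = np.abs(image.astype(float) - background.astype(float)) / (background.astype(float) + 1e-9)
            mask = contrast > contrast_threshold            # thin flakes are only faintly visible
            labels, n = ndimage.label(mask)                 # group connected pixels into flakes
            centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
            areas = ndimage.sum(mask, labels, range(1, n + 1))
            return [{"id": i + 1, "centroid": c, "area_px": float(a)}
                    for i, (c, a) in enumerate(zip(centroids, areas))]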

    A schematic showing the workflow for cataloging flake locations and properties. Image grids of exfoliated samples are automatically analyzed, with each flake tracked individually so that scientists can locate any desired flake on a sample.

    “Our collaborators have said that a system capable of mapping their sample of flakes and showing them where the ‘good’ flakes are located—as determined by parameters they define—would be immensely helpful for them,” said Yager. “We now have this capability and would like to put it to use.”

    Eventually, the team plans to store a large set of different catalogued flakes on shelves, similar to books in a library. Scientists could then access this materials library to select the flakes they want to use, and the QPress would retrieve them.

    According to Black, the biggest challenge will be the construction of the stacker—the module that retrieves samples from the library, “drives” to the locations where the selected flakes reside, and picks the flakes up and places them in a repetitive process to build stacks according to the assembly instructions that scientists program into the machine. Ultimately, the scientists would like the stacker to assemble the layered structures not only faster but also more accurately than manual methods.
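
    Purely as a sketch of what machine-readable assembly instructions could look like—the flake identifiers, field names, and stacker interface below are hypothetical, not the actual QPress API:

        # Hypothetical stacking "recipe": layer order plus relative twist angles.
        stack_recipe = [
            {"library_flake_id": "graphene-0042", "twist_angle_deg": 0.0},
            {"library_flake_id": "hBN-0007", "twist_angle_deg": 0.0},
            {"library_flake_id": "graphene-0051", "twist_angle_deg": 1.1},  # relative rotation
        ]

        def build_stack(recipe, library, stacker):
            # Fetch each catalogued flake from the library and place it on the growing stack.
            for step, layer in enumerate(recipe, start=1):
                flake = library.retrieve(layer["library_flake_id"])   # assumed interface
                stacker.pick(flake)                                   # assumed interface
                stacker.place(rotation_deg=layer["twist_angle_deg"])  # assumed interface
                print(f"Step {step}: placed {layer['library_flake_id']}")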

    The QPress will have five modules when completed: an exfoliator, a cataloger, a materials library, a stacker, and a characterizer/fabricator.

    The final module of the robot will be a material characterizer, which will provide real-time feedback throughout the entire synthesis process. For example, the characterizer will identify the crystal structure and orientation of exfoliated flakes and layered structures through low-energy electron diffraction (LEED)—a technique in which a beam of low-energy electrons is directed toward the surface of a sample to produce a diffraction pattern characteristic of the surface geometry.

    “There are many steps to delivering a fully automated solution,” said Black. “We intend to implement QPress capabilities as they become available to maximize benefit to the QIS community.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    BNL Campus


    BNL Center for Functional Nanomaterials

    BNL NSLS-II


    BNL NSLS II

    BNL RHIC Campus

    BNL/RHIC Star Detector

    BNL RHIC PHENIX

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 10:48 am on April 22, 2019 Permalink | Reply
    Tags: "Optimizing Network Software to Advance Scientific Discovery", , , BNL, CSI-The computer is installed at Brookhaven's Scientific Data and Computing Center, DiRAC-Distributed Research Using Advanced Computing, Intel's high-speed communication network to accelerate application codes for particle physics and machine learning,   

    From Brookhaven National Lab: “Optimizing Network Software to Advance Scientific Discovery” 

    From Brookhaven National Lab

    April 16, 2019
    Ariana Tantillo
    atantillo@bnl.gov

    A team of computer scientists, physicists, and software engineers optimized software for Intel’s high-speed communication network to accelerate application codes for particle physics and machine learning.

    Brookhaven Lab collaborated with Columbia University, University of Edinburgh, and Intel to optimize the performance of a 144-node parallel computer built from Intel’s Xeon Phi processors and Omni-Path high-speed communication network. The computer is installed at Brookhaven’s Scientific Data and Computing Center, as seen above with technology engineer Costin Caramarcu.

    High-performance computing (HPC)—the use of supercomputers and parallel processing techniques to solve large computational problems—is of great use in the scientific community. For example, scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory rely on HPC to analyze the data they collect at the large-scale experimental facilities on site and to model complex processes that would be too expensive or impossible to demonstrate experimentally.

    Modern science applications, such as simulating particle interactions, often require a combination of aggregated computing power, high-speed networks for data transfer, large amounts of memory, and high-capacity storage capabilities. Advances in HPC hardware and software are needed to meet these requirements. Computer and computational scientists and mathematicians in Brookhaven Lab’s Computational Science Initiative (CSI) are collaborating with physicists, biologists, and other domain scientists to understand their data analysis needs and provide solutions to accelerate the scientific discovery process.

    An HPC industry leader

    An image of the Xeon Phi Knights Landing processor die. A die is a pattern on a wafer of semiconducting material that contains the electronic circuitry to perform a particular function. Credit: Intel.

    For decades, Intel Corporation has been one of the leaders in developing HPC technologies. In 2016, the company released the Intel® Xeon Phi™ processors (formerly code-named “Knights Landing”), its second-generation HPC architecture that integrates many processing units (cores) per chip. The same year, Intel released the Intel® Omni-Path Architecture high-speed communication network. In order for the 5,000 to 100,000 individual computers, or nodes, in modern supercomputers to work together to solve a problem, they must be able to quickly communicate with each other while minimizing network delays.

    Soon after these releases, Brookhaven Lab and RIKEN, Japan’s largest comprehensive research institution, pooled their resources to purchase a small 144-node parallel computer built from Xeon Phi processors and two independent network connections, or rails, using Intel’s Omni-Path Architecture.

    The computer was installed at Brookhaven Lab’s Scientific Data and Computing Center, which is part of CSI.

    With the installation completed, physicist Chulwoo Jung and CSI computational scientist Meifeng Lin of Brookhaven Lab; theoretical physicist Christoph Lehner, a joint appointee at Brookhaven Lab and the University of Regensburg in Germany; Norman Christ, the Ephraim Gildor Professor of Computational Theoretical Physics at Columbia University; and theoretical particle physicist Peter Boyle of the University of Edinburgh worked in close collaboration with software engineers at Intel to optimize the network software for two science applications: particle physics and machine learning.

    “CSI had been very interested in the Intel Omni-Path Architecture since it was announced in 2015,” said Lin. “The expertise of Intel engineers was critical to implementing the software optimizations that allowed us to fully take advantage of this high-performance communication network for our specific application needs.”

    Network requirements for scientific applications

    For many scientific applications, running one rank (a value that distinguishes one process from another) or possibly a few ranks per node on a parallel computer is much more efficient than running several ranks per node. Each rank typically executes as an independent process that communicates with the other ranks by using a standard protocol known as Message Passing Interface (MPI).
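
    For readers unfamiliar with MPI, the minimal example below (written with the mpi4py bindings purely for illustration; the article does not specify the implementation language of the application codes) shows two ranks identifying themselves and exchanging a message:

        # Minimal MPI sketch: each process has a rank; rank 0 sends data to rank 1.
        # Run with, for example:  mpirun -n 2 python ranks.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()    # identifier of this process
        size = comm.Get_size()    # total number of ranks

        if rank == 0:
            comm.send({"boundary_data": [1.0, 2.0, 3.0]}, dest=1, tag=0)
        elif rank == 1:
            data = comm.recv(source=0, tag=0)
            print(f"rank {rank} of {size} received {data}")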

    A schematic of the lattice for quantum chromodynamics calculations. The intersection points on the grid represent quark values, while the lines between them represent gluon values.

    For example, physicists seeking to understand how the early universe formed run complex numerical simulations of particle interactions based on the theory of quantum chromodynamics (QCD). This theory explains how elementary particles called quarks and gluons interact to form the particles we directly observe, such as protons and neutrons. Physicists model these interactions by using supercomputers that represent the three dimensions of space and the dimension of time in a four-dimensional (4D) lattice of equally spaced points, similar to that of a crystal. The lattice is split into smaller identical sub-volumes. For lattice QCD calculations, data need to be exchanged at the boundaries between the different sub-volumes. If there are multiple ranks per node, each rank hosts a different 4D sub-volume. Thus, splitting up the sub-volumes creates more boundaries where data need to be exchanged, leading to unnecessary data transfers that slow down the calculations.
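
    A quick back-of-the-envelope calculation illustrates this point: dividing a node's 4D lattice volume among more ranks increases the total number of boundary sites whose data must be exchanged. The lattice dimensions below are arbitrary illustrative values:

        # Compare boundary (surface) sites for one rank per node versus four ranks per node.
        def boundary_sites(dims):
            """Number of sites on the surface of a 4D sub-volume with the given dimensions."""
            volume = 1
            interior = 1
            for d in dims:
                volume *= d
                interior *= max(d - 2, 0)
            return volume - interior

        one_rank = boundary_sites((16, 16, 16, 16))        # a single 16^4 block per node
        four_ranks = 4 * boundary_sites((16, 16, 16, 4))   # the same volume split four ways
        print(one_rank, four_ranks)   # the split case has substantially more boundary sites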

    Software optimizations to advance science

    To optimize the network software for such a computationally intensive scientific application, the team focused on enhancing the speed of a single rank.

    “We made the code for a single MPI rank run faster so that a proliferation of MPI ranks would not be needed to handle the large communication load present for each node,” explained Christ.

    The software within the MPI rank exploits the threaded parallelism available on Xeon Phi nodes. Threaded parallelism refers to the simultaneous execution of multiple processes, or threads, that follow the same instructions while sharing some computing resources. With the optimized software, the team was able to create multiple communication channels on a single rank and to drive these channels using different threads.
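
    Conceptually, the scheme resembles the sketch below, in which one process drives several communication channels from separate threads. The channel function here is a simple stand-in, not Intel's Omni-Path or MPI interface:

        # Toy model of one rank driving multiple communication channels with threads.
        from concurrent.futures import ThreadPoolExecutor

        def drive_channel(channel_id, payload):
            # A real implementation would push data through its own communication context;
            # here we just report what would be sent.
            return f"channel {channel_id}: sent {len(payload)} bytes"

        payloads = {0: b"x" * 4096, 1: b"y" * 4096, 2: b"z" * 4096}
        with ThreadPoolExecutor(max_workers=len(payloads)) as pool:
            for result in pool.map(lambda item: drive_channel(*item), payloads.items()):
                print(result)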

    Two-dimensional illustration of threaded parallelism. Key: green lines separate physical compute nodes; black lines separate MPI ranks; red lines are the communication contexts, with the arrows denoting communication between nodes or memory copy within a node via the Intel Omni-Path hardware.

    The MPI software was now set up for the scientific applications to run more quickly and to take full advantage of the Intel Omni-Path communications hardware. But after implementing the software, the team members encountered another challenge: in each run, a few nodes would inevitably communicate slowly and hold the others back.

    They traced this problem to the way that Linux—the operating system used by the majority of HPC platforms—manages memory. In its default mode, Linux divides memory into small chunks called pages. By reconfiguring Linux to use large (“huge”) memory pages, they resolved the issue. Increasing the page size means that fewer pages are needed to map the virtual address space that an application uses. As a result, memory can be accessed much more quickly.
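
    On a Linux system, the huge-page configuration can be inspected through the standard /proc/meminfo interface. The snippet below simply reads those fields; the specific values tuned by the team are not given in the article:

        # Report the current Linux huge-page settings from /proc/meminfo.
        def hugepage_info(path="/proc/meminfo"):
            keys = ("HugePages_Total", "HugePages_Free", "Hugepagesize")
            info = {}
            with open(path) as f:
                for line in f:
                    key, _, value = line.partition(":")
                    if key in keys:
                        info[key] = value.strip()
            return info

        print(hugepage_info())   # e.g. {'HugePages_Total': '0', 'HugePages_Free': '0', 'Hugepagesize': '2048 kB'}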

    With the software enhancements, the team members analyzed the performance of the Intel Omni-Path Architecture and Intel Xeon Phi processor compute nodes installed on Intel’s dual-rail “Diamond” cluster and the Distributed Research Using Advanced Computing (DiRAC) single-rail cluster in the United Kingdom.

    DiRAC is the UK’s integrated supercomputing facility for theoretical modelling and HPC-based research in particle physics, astronomy and cosmology.

    For their analysis, they used two different classes of scientific applications: particle physics and machine learning. For both application codes, they achieved near-wirespeed performance—the theoretical maximum rate of data transfer. This improvement represents an increase in network performance that is between four and ten times that of the original codes.

    “Because of the close collaboration between Brookhaven, Edinburgh, and Intel, these optimizations were made available worldwide in a new version of the Intel Omni-Path MPI implementation and a best-practice protocol to configure Linux memory management,” said Christ. “The factor of five speedup in the execution of the physics code on the Xeon Phi computer at Brookhaven Lab—and on the University of Edinburgh’s new, even larger 800-node Hewlett Packard Enterprise “hypercube” computer—is now being put to good use in ongoing studies of fundamental questions in particle physics.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 3:39 pm on April 11, 2019 Permalink | Reply
    Tags: “Our quantum memories operate at room temperature.”, BNL, BNL Scientific Data and Computing Center, DOE ESnet, Northeast Quantum Systems Center, Putting U.S. quantum networking research on the international map, quantum entanglement is limited by decoherence, The entanglement sources are portable and can be easily mounted in standard data center computer server racks that are connected to regular fiber distribution panels., “This makes it natural to expand the test to principles of quantum repeaters which are the technological key to achieving quantum communication over hundreds of kilometers.”, Unlike digital transmissions in communication networks, Viable quantum repeaters will allow Figueroa and his team to scale up their ongoing experiments within “local-area” quantum networks to a distributed or “wide-area” version

    From Stony Brook University – SUNY and BNL: “Research Team Builds Quantum Network with Long-Distance Entanglement”

    Brookhaven National Lab

    Stony Brook bloc

    From Stony Brook University – SUNY

    April 8, 2019
    Charity Plata
    cplata@bnl.gov

    Scientists from Stony Brook University, the U.S. Department of Energy’s Brookhaven National Laboratory, and DOE’s Energy Sciences Network (ESnet) are collaborating on an experiment that puts U.S. quantum networking research on the international map.

    Researchers, including Stony Brook’s Eden Figueroa, have built a quantum network testbed that connects several buildings on the Brookhaven Lab campus using unique portable quantum entanglement sources and an existing DOE ESnet communications fiber network—a significant step in building a large-scale quantum network that can transmit information over long distances.

    Stony Brook’s Eden Figueroa describes the inner workings of the quantum network hardware at Brookhaven National Laboratory as Robinson Pino, acting director of Computational Science Research and Partnerships (SciDAC) Division overseen by DOE’s Advanced Scientific Computing Research program office, looks on.

    “In quantum mechanics, the physical properties of entangled particles remain associated, even when separated by vast distances. Thus, when measurements are performed on one side, it also affects the other,” said Kerstin Kleese van Dam, director of Brookhaven Lab’s Computational Science Initiative (CSI). “To date, this work has been successfully demonstrated with entangled photons separated by approximately 11 miles. This is one of the largest quantum entanglement distribution networks in the world, and the longest-distance entanglement experiment in the United States.”
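
    A toy illustration of that correlation (not a physics simulation): when a maximally entangled photon pair is measured in the same basis, each individual outcome looks random, yet the two sides always agree, no matter how far apart the detectors are:

        # Outcomes are individually random but perfectly correlated between the two detectors.
        import random

        def measure_entangled_pair():
            outcome = random.choice([0, 1])
            return outcome, outcome   # both sides of the entangled pair report the same value

        pairs = [measure_entangled_pair() for _ in range(10)]
        print(all(a == b for a, b in pairs))   # always True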

    This quantum networking testbed project includes staff from CSI and Brookhaven’s Instrumentation Division and Physics Department, as well as faculty and students from Stony Brook University. The project also is part of the Northeast Quantum Systems Center. One distinct aspect of the team’s work that sets it apart from other quantum networks being run in China and Europe—both long-committed to quantum information science pursuits—is that the entanglement sources are portable and can be easily mounted in standard data center computer server racks that are connected to regular fiber distribution panels.

    The team successfully installed a portable quantum-entangled photon source in a server rack housed within the BNL Scientific Data and Computing Center, where the Lab’s central networking hub is located. With this connectivity, entangled photons now can be distributed to every building on the Lab’s campus using existing Brookhaven and ESnet fiber infrastructure. ESnet’s fibers have been introduced in paths between buildings to enable the distribution and study of entanglement over increasingly longer distances. The portable entanglement sources also are compatible with existing quantum memories, atom-filled glass cells that can store quantum information. Normally kept at super-cold temperatures, these cells can be stimulated using lasers to control the atomic states within them.

    In work sponsored by DOE’s Small Business Innovation Research program (SBIR), the Brookhaven-Stony Brook-ESnet testbed features portable quantum memories that can operate at room temperature. Such quantum memories, engineered for quantum networking on a large scale, have been a longtime “pet project” for Eden Figueroa, a joint appointee with Brookhaven’s CSI and Instrumentation Division and a Stony Brook University professor who leads its Quantum Information Technology group. He serves as lead investigator of the quantum networking testbed project.

    “The demonstration aims to combine entanglement with compatible atomic quantum memories,” Figueroa said. “Our quantum memories have the advantage of operating at room temperature rather than requiring subfreezing cold. This makes it natural to expand the test to principles of quantum repeaters, which are the technological key to achieving quantum communication over hundreds of kilometers.”

    Quantum networks send light pulses (photons) through the fiber, which requires the light to be periodically amplified as it travels through the lines. However, unlike digital transmissions in communication networks, quantum entanglement is limited by decoherence, where entangled photons, for example, revert to classical states because interactions with the environment cause them to lose the ability to remain entangled. This limits these fragile quantum states from being sent over large distances.
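
    A rough calculation shows why direct transmission fails over long distances: standard telecom fiber attenuates light by roughly 0.2 dB per kilometer, so the fraction of photons that survive falls off exponentially with distance (the distances below are round illustrative numbers):

        # Photon survival through optical fiber with ~0.2 dB/km attenuation.
        def transmission(distance_km, loss_db_per_km=0.2):
            return 10 ** (-loss_db_per_km * distance_km / 10)

        for d_km in (18, 100, 500):   # ~18 km is roughly the 11-mile scale mentioned above
            print(f"{d_km:3d} km: {transmission(d_km):.1e} of photons arrive")
        # Survival drops to about 10^-10 at 500 km, which is why quantum repeaters,
        # rather than conventional amplifiers, are needed for long-haul entanglement.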

    Viable quantum repeaters will allow Figueroa and his team to scale up their ongoing experiments within “local-area” quantum networks to a distributed, or “wide-area,” version. In anticipation of this, the team is constructing the necessary optical connections to link Brookhaven Lab’s quantum network to ones that already exist at Stony Brook and Yale universities.

    “Realizing the quantum network with entangled photon sources mounted in server racks, portable quantum memories, and operable repeaters will mark the first real quantum communication network in the world that truly connects quantum computing processors and memories using photonic quantum entanglement,” Figueroa said. “It will mark a sea change in communications that can impact the world.”

    Funding for this quantum networking testbed project has been provided by SBIR, the Empire State Development Corporation, and Brookhaven Lab’s Laboratory Directed Research and Development program.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stony Brook campus

    Stony Brook University-SUNY’s reach extends from its 1,039-acre campus on Long Island’s North Shore–encompassing the main academic areas, an 8,300-seat stadium and sports complex and Stony Brook Medicine–to Stony Brook Manhattan, a Research and Development Park, four business incubators including one at Calverton, New York, and the Stony Brook Southampton campus on Long Island’s East End. Stony Brook also co-manages Brookhaven National Laboratory, joining Princeton, the University of Chicago, Stanford, and the University of California on the list of major institutions involved in a research collaboration with a national lab.

    And Stony Brook is still growing. To the students, the scholars, the health professionals, the entrepreneurs and all the valued members who make up the vibrant Stony Brook community, this is not only a great local and national university, but one that is making an impact on a global scale.

     
  • richardmitnick 1:35 pm on April 5, 2019 Permalink | Reply
    Tags: BNL, GISAXS-grazing-incidence small-angle x-ray scattering, NSLS-II synchrotron, Polymer self-assembly, Samantha Nowak

    From Brookhaven National Lab: Women in STEM- “Samantha Nowak: From CFN User to CFN Postdoc” 

    From Brookhaven National Lab

    April 5, 2019
    Ariana Tantillo
    atantillo@bnl.gov

    The chemist first came to Brookhaven Lab in 2017 as a graduate student user of the Center for Functional Nanomaterials (CFN) [below] and has since returned to do postdoctoral research in polymer self-assembly

    Polymer chemist Samantha Nowak recently joined Brookhaven Lab’s Center for Functional Nanomaterials as a postdoctoral researcher studying polymer self-assembly. Here, she holds silicon wafers containing block copolymer thin films. In front of her is a plasma etch tool, which she uses to remove the domains of one of the “blocks,” or polymers, in the block copolymer. This removal is part of a process that helps Nowak better see the nanoscale self-assembled patterns (using a scanning electron microscope) formed by the block copolymer.

    When Samantha Nowak was growing up, her grandmother would complain about how she could not get her nail polish off. At the time, pure acetone—the solvent that dissolves nail polish—was not widely available. Nowak’s grandfather, a polymer chemist, would bring the “magic” nail polish remover home from his lab, explaining how solubility works. Nowak also vividly remembers her grandfather dropping metal salts into solution as she watched them rapidly crystallize to form interesting structures.

    Despite her interest in science, Nowak was set on being a lawyer until the end of high school, when her honors chemistry teacher told her about The College of New Jersey’s forensic chemistry program, in which the teacher’s own daughter was enrolled.


    Nowak, a big fan of the television series Law & Order: Special Victims Unit, figured a career in forensic chemistry would allow her to combine her dual interests in science and law. But after declaring a chemistry major in her first semester at The College of New Jersey, Nowak decided that forensic chemistry was not for her. She continued on the general chemistry track, receiving her bachelor’s degree in 2014 with an interdisciplinary concentration in law and society.

    After graduating, Nowak entered a PhD program in chemistry at the University of Maryland (UMD), College Park, where she joined the Sita Research Group and began synthesizing and studying a new class of self-assembling materials called sugar-polyolefin conjugates.

    Self-assembly refers to the ability of certain molecules to spontaneously organize into ordered structures—such as spheres, cylinders, and lamellae (sheets)—as they try to achieve their lowest-energy state.

    “In general, block copolymer self-assembly relies on a chemical incompatibility between two different types of polymers, or “blocks,” linked together by chemical bonds,” explained Nowak. “In my PhD group, we were trying to overcome some of the limitations of block copolymer self-assembly—including the difficulty in obtaining very small feature sizes—by switching out one of the blocks with a sugar. For the other block, we used a low-molecular-weight polyolefin, which is a polymer made out of hydrogen and carbon (hydrocarbon). An extremely high incompatibility exists between the hydrophilic (water-loving) sugar and hydrophobic (water-hating) polyolefin, and the sugar molecule is extremely small with respect to the size of a typical block in a block copolymer. Because of these characteristics, there is a higher mobility that enables the reorganization of the polymer chains into multiple self-assembled structures with incredibly small feature sizes, as small as three nanometers.”

    An illustration of the three-dimensional gyroid structure. This geometric configuration is found in butterfly wings and elsewhere in nature.

    For example, the sugar-polyolefin conjugates can self-assemble into stable “gyroids”—infinitely connecting structures with a minimal surface area containing no straight lines—that are lightweight yet extremely strong. These rare and complex nanostructures would be difficult to obtain and stabilize within traditional block copolymer thin films, especially those as thin as needed for electronic and optical devices. But if scientists can access gyroids and other structures with unique geometries (and thus properties), new applications may be enabled.

    Aligned research themes

    In Nowak’s third year, advisor and principal investigator Lawrence Sita contacted Kevin Yager—group leader of Electronic Nanomaterials at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory. Sita thought his group’s research on the sugar-polyolefin conjugates could progress even further with Yager’s expertise and the x-ray scattering capabilities available at Brookhaven Lab’s National Synchrotron Light Source II (NSLS-II) [below], another DOE Office of Science User Facility. At the time, Yager was in the process of developing new equipment and techniques and looking for users for the Complex Materials Scattering (CMS) beamline, which the CFN and NSLS-II operate in partnership.

    “The group’s results were intriguing to me—both because they were able to create very small self-assembled structures, and because their results seemed to violate my expectations for the kinds of structures those materials should form,” said Yager.

    Sita and Nowak wrote and submitted a proposal for beam time at CMS. Their proposal was accepted, and the research Nowak conducted at the beamline ended up becoming a large part of her PhD thesis. In particular, she used a scattering technique called grazing-incidence small-angle x-ray scattering (GISAXS). In GISAXS, a high-energy x-ray beam reflects off of a thin film or substrate at a very shallow angle. The pattern of the scattered x-rays provides information about the size, structure, and orientation of any self-assembled structures within and on the surface.
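
    The real-space repeat distance of a self-assembled structure follows directly from the position of its scattering peak via d = 2π/q. For example, a peak near q ≈ 0.21 inverse angstroms (an illustrative value) corresponds to a spacing of about 3 nanometers, the scale of the smallest features mentioned above:

        # Convert a scattering peak position q (in inverse angstroms) to a real-space spacing.
        import math

        def d_spacing_nm(q_inverse_angstrom):
            return 2 * math.pi / q_inverse_angstrom / 10   # angstroms -> nanometers

        print(round(d_spacing_nm(0.21), 2))   # about 2.99 nm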

    Atomic force microscope images of a sugar-polyolefin conjugate ultrathin film (30 nanometers) at room temperature that the Sita Research Group heated to 140 degrees Fahrenheit for different lengths of time: (a) original ultrathin film, (b) after 14 hours, (c) a zoomed-in region corresponding to the white square in (b), (d) after 24 hours, (e) zoomed-in region corresponding to the white square in (d), and (f) after 48 hours. The images reveal how the morphology evolves in response to heating over time. Source: Journal of the American Chemical Society 2017, 139, 5281–5284.

    “The University of Maryland has a lab-scale x-ray source but we would have never discovered all that we did about the behavior of these materials without the in situ studies at NSLS-II,” said Nowak. “Scans that would have taken an hour in our lab only took 10 seconds at NSLS-II. We were able to visualize in real time how the materials responded to changes in temperature, film thickness, and polymer chain length.”

    The conjugate materials in this case were made out of cellobiose (a sugar derived from cellulose in plants) and polypropylene with a low molecular weight. From their studies, they learned that increasing the temperature caused several different well-ordered morphologies (structural arrangements) with very tiny feature sizes to emerge in both the bulk material and ultrathin films. By jumping to a specific temperature or slowly increasing the temperature, they could control which morphology they ended up with. And if the polymer chain was too long, the structures that formed were more limited in variety.

    “The results were beyond my expectations,” said Yager. “We were able to measure the ordering of Sam’s materials during annealing—that is, watch them during the process of self-organization. Surprisingly, these materials not only organized but also reorganized into a succession of different configurations as we raised the temperature. This behavior would have been hard to see by any other measurement technique.”

    “When the collaboration began, I was just beginning my research project,” said Nowak. “I didn’t know how useful the technique at NSLS-II would be to build upon the work the group had already done with these materials. But once I learned what GISAXS with a synchrotron source could do, it was perfect.”

    From user to postdoc

    During one of her visits to the NSLS-II for beam time, Yager mentioned to Nowak that he was looking for a postdoctoral researcher at the CFN.

    “I was so impressed by Sam’s diligence and scientific insight that I reached out to her when the CFN had the open postdoc position,” said Yager. “I knew she would continue to do great things if she joined our team.”

    Nowak had every intention of working in industry immediately after graduation, but the combination of her experience as a user and her conversation with Yager changed her mind.

    “Kevin explained the differences between the academic postdoc that I was picturing in my head and a postdoc at a place like the CFN,” said Nowak. “I knew that coming here would open a lot of doors for me.”

    Nowak received her PhD in August 2018 and joined the CFN in October.

    “I love it here,” said Nowak. “The research is interesting, and I’m learning so many new techniques and ideas that I would have not otherwise been exposed to. The environment at the CFN is very collaborative, and I get to meet lots of people who are pursuing very different research projects.”

    Samantha Nowak (front row, left) recently joined the Center for Functional Nanomaterials as a postdoctoral researcher in the Electronic Nanomaterials Group, led by Kevin Yager (back row, second from right).

    The perfect blend

    Under the co-advisement of Yager and CFN Director Charles (Chuck) Black, Nowak is studying self-assembly using thin films of well-established polymers (polystyrene (PS) and poly(methyl methacrylate) (PMMA)) to create novel “non-native” morphologies (i.e., those that deviate from the bulk morphologies). Mainly, she is blending block copolymers with different intrinsic morphologies—the morphology they prefer to adopt based on the volume fraction, molecular weight, and surface energy of the respective blocks. For example, one block copolymer may form cylindrical nanostructures and the other lamellae. But when the block copolymers are blended, they adopt morphologies that are completely different than those of the individual components.

    After forming block copolymer thin films by spin casting them from solution onto a flat surface, Nowak heats them on a hot plate. Introducing heat provides energy for the block copolymer film to spontaneously order into patterns with nanoscale features. In order to more easily see the structure of the films, Nowak then converts the PMMA domains into an inorganic replica through sequential infiltration synthesis—a chemical method in which a polymer is infused with an inorganic material by exposure to gaseous metal precursors in multiple cycles—and etches away the polymer with oxygen plasma.

    “With this approach, I have better contrast when I look at the films in the scanning electron microscope,” said Nowak.

    Most recently, Nowak has been seeing what happens when she changes the composition of the block copolymer blend. One unexpected result so far was the formation of hexagonally perforated lamellae from cylinder and lamellae block copolymers.

    “This morphology is not very common and is difficult to obtain,” explained Nowak. “There’s a very narrow region of the phase diagram where it is stable, so the fact that we expanded accessibility to this phase is very exciting.”

    In another experiment, Nowak used the same exact blend of block copolymers but changed the surface energy. The result was either a single nanostructure or a combination of line and dot patterns, hexagonally perforated lamellae, and horizontal lamellae. Nowak is also exploring how to chemically pattern substrates as a way to “program” which morphologies appear in particular regions of the substrate. She is in the process of getting training in the cleanroom of the CFN Nanofabrication Facility to perform this patterning.

    “We’re creating new nanostructures from already existing materials,” explained Nowak. “We don’t have to synthesize new types of block copolymers; we can use two easily obtainable ones and broaden what we can do with them.”

    The combination of different nanostructures within a single substrate in a predetermined fashion could expand the range of applications—something that Nowak had not previously thought much about.

    Conventionally, block copolymers self-assemble into a limited range of morphologies, such as spheres and lamellae. But by using appropriate block copolymer blends and a chemically patterned substrate that contains the “instructions” for which morphologies appear where, scientists can significantly expand this range. Nowak, Yager, and other CFN scientists recently obtained four different nanostructures—dots, lines, horizontal lamellae, and hexagonally perforated lamellae—in predetermined regions of a single substrate.

    “As a chemist, I tend to focus on the very specific details of the research,” said Nowak. “That is where my brain is trained to stop. But, Chuck—who I meet with every other week to discuss my research and goals—has helped me broaden my viewpoint. He has me consider how we could use these nanostructures in different ways, how we can benefit society with them. I’ve always been interested in the fundamental science part, but now I’m retraining my mind to see the bigger picture. I’ll need to be able to look beyond my individual research projects for a career in industry.”

    After her postdoc, Nowak plans to enter industry as a polymer chemist. She has not yet decided which industry, but she is currently considering cosmetics or consumer goods.

    “One of my grandfather’s inventions was a way to stabilize color in paints and coatings,” said Nowak. “Before his invention, paint darkened or discolored exponentially faster than paints today. Almost all paints today have this stabilizer in them. It would be great to follow in my grandfather’s footsteps.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 12:50 pm on April 5, 2019 Permalink | Reply
    Tags: "Putting a New Spin on Majorana Fermions", , BNL, , Majorana fermions are particle-like excitations called quasiparticles that emerge as a result of the fractionalization (splitting) of individual electrons into two halves., , , , , Spin ladders- crystals formed of atoms with a three-dimensional (3-D) structure subdivided into pairs of chains that look like ladders.   

    From Brookhaven National Lab: “Putting a New Spin on Majorana Fermions” 

    From Brookhaven National Lab

    April 1, 2019
    Ariana Tantillo
    atantillo@bnl.gov
    (631) 344-2347

    Peter Genzer
    genzer@bnl.gov
    (631) 344-3174

    Split electrons that emerge at the boundaries between different magnetic states in materials known as spin ladders could act as stable bits of information in next-generation quantum computers.

    Theoretical calculations performed by (left to right) Neil Robinson, Robert Konik, Alexei Tsvelik, and Andreas Weichselbaum of Brookhaven Lab’s Condensed Matter Physics and Materials Science Department suggest that Majorana fermions exist in the boundaries of magnetic materials with different magnetic phases. Majorana fermions are particle-like excitations that emerge when single electrons fractionalize into two halves, and their unique properties are of interest for quantum applications.

    The combination of different phases of water—solid ice, liquid water, and water vapor—would require some effort to achieve experimentally. For instance, if you wanted to place ice next to vapor, you would have to continuously chill the water to maintain the solid phase while heating it to maintain the gas phase.

    For condensed matter physicists, this ability to create different conditions in the same system is desirable because interesting phenomena and properties often emerge at the interfaces between two phases. Of current interest are the conditions under which Majorana fermions might appear near these boundaries.

    Majorana fermions are particle-like excitations called quasiparticles that emerge as a result of the fractionalization (splitting) of individual electrons into two halves. In other words, an electron becomes an entangled (linked) pair of two Majorana quasiparticles, with the link persisting regardless of the distance between them. Scientists hope to use Majorana fermions that are physically separated in a material to reliably store information in the form of qubits, the building blocks of quantum computers. The exotic properties of Majoranas—including their high insensitivity to electromagnetic fields and other environmental “noise”—make them ideal candidates for carrying information over long distances without loss.
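
    In the standard textbook notation (a general relation, not a formula specific to this study), the splitting of one electron (fermion) operator c into two Majorana operators can be written as

        c = \tfrac{1}{2}\left(\gamma_1 + i\gamma_2\right), \qquad
        c^{\dagger} = \tfrac{1}{2}\left(\gamma_1 - i\gamma_2\right), \qquad
        \gamma_i^{\dagger} = \gamma_i, \qquad \{\gamma_i, \gamma_j\} = 2\delta_{ij},

    where the self-conjugate condition expresses that each Majorana operator is its own antiparticle.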

    However, to date, Majorana fermions have only been realized in materials at extreme conditions, including at frigid temperatures close to absolute zero (−459 degrees Fahrenheit) and under high magnetic fields. And though they are “topologically” protected from local atomic impurities, disorder, and defects that are present in all materials (i.e., their spatial properties remain the same even if the material is bent, twisted, stretched, or otherwise distorted), they do not survive under strong perturbations. In addition, the range of temperatures over which they can operate is very narrow. For these reasons, Majorana fermions are not yet ready for practical technological application.

    Now, a team of physicists led by the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and including collaborators from China, Germany, and the Netherlands has proposed a novel theoretical method for producing more robust Majorana fermions. According to their calculations, as described in a paper published on Jan. 15 in Physical Review Letters, these Majoranas emerge at higher temperatures (by many orders of magnitude) and are largely unaffected by disorder and noise. Even though they are not topologically protected, they can persist if the perturbations change slowly from one point to another in space.

    “Our numerical and analytical calculations provide evidence that Majorana fermions exist in the boundaries of magnetic materials with different magnetic phases, or directions of electron spins, positioned next to one another,” said co-author Alexei Tsvelik, senior scientist and leader of the Condensed Matter Theory Group in Brookhaven Lab’s Condensed Matter Physics and Materials Science (CMPMS) Department. “We also determined the number of Majorana fermions you should expect to get if you combine certain magnetic phases.”

    For their theoretical study, the scientists focused on magnetic materials called spin ladders, which are crystals formed of atoms with a three-dimensional (3-D) structure subdivided into pairs of chains that look like ladders. Though the scientists have been studying the properties of spin ladder systems for many years and expected that they would produce Majorana fermions, they did not know how many. To perform their calculations, they applied the mathematical framework of quantum field theory for describing the fundamental physics of elementary particles, and a numerical method (density-matrix renormalization group) for simulating quantum systems whose electrons behave in a strongly correlated way.

    “We were surprised to learn that for certain configurations of magnetic phases we can generate more than one Majorana fermion at each boundary,” said co-author and CMPMS Department Chair Robert Konik.

    For Majorana fermions to be practically useful in quantum computing, they need to be generated in large numbers. Computing experts believe that the minimum threshold at which quantum computers will be able to solve problems that classical computers cannot is 100 qubits. The Majorana fermions also have to be moveable in such a way that they can become entangled.

    The team plans to follow up their theoretical study with experiments using engineered systems such as quantum dots (nanosized semiconducting particles) or trapped (confined) ions. Compared to the properties of real materials, those of engineered ones can be more easily tuned and manipulated to introduce the different phase boundaries where Majorana fermions may emerge.

    “What the next generation of quantum computers will be made of is unclear right now,” said Konik. “We’re trying to find better alternatives to the low-temperature superconductors of the current generation, similar to how silicon replaced germanium in transistors. We’re in such early stages that we need to explore every possibility available.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 6:03 pm on March 8, 2019 Permalink | Reply
    Tags: BNL

    From Brookhaven National Lab: “NETL Develops an Improved Process for Creating Building Blocks for $200 Billion Per Year Chemical Industry Market” 

    From Brookhaven National Lab

    March 6, 2019
    Stephanie Kossman
    skossman@bnl.gov


    National Energy Technology Laboratory (NETL) researchers developed a new catalyst that can selectively convert syngas into light hydrocarbon compounds called olefins for application in a $200 billion per year chemical industry market. The work has been detailed in ChemCatChem, a premier catalysis journal.

    The catalyst was characterized using a variety of techniques from U.S. Department of Energy user facilities at Brookhaven National Laboratory including advanced electron microscopy at the Center for Functional Nanomaterials and synchrotron-based X-ray spectroscopy conducted at the National Synchrotron Light Source II.

    An olefin is a compound made up of hydrogen and carbon that contains one or more pairs of carbon atoms linked by a double bond. Because of their high reactivity and low cost, olefins are widely used as building blocks in the manufacture of plastics and the preparation of certain types of synthetic rubber, chemical fibers, and other commercially valuable products.

    The NETL research is significant because light olefins are currently produced using steam cracking of ethane or petroleum derived precursors. Steam cracking is a petrochemical process in which saturated hydrocarbons are broken down into smaller, often unsaturated hydrocarbons. It is one of the most energy intensive processes in the chemical industry. Research has been underway to develop alternative approaches to producing olefins that are less energy intensive, more sustainable and can use different feedstocks. The NETL research has shown promising results toward those goals.

    According to NETL researchers Congjun Wang and Christopher Matranga, the research led to development of a carbon nanosheet-supported iron oxide catalyst that has proven effective in converting syngas into light olefins. A catalyst is a substance that increases the rate of a chemical reaction without itself undergoing any permanent chemical change. A nanosheet is a two-dimensional nanostructure with thickness ranging from 1 to 100 nanometers.

    The carbon nanosheet-supported iron oxide catalyst was put to the test in the Fischer-Tropsch to Olefins synthesis process, a set of chemical reactions that converts a mixture of carbon monoxide gas and hydrogen gas into hydrocarbons. The process is showing promise as a method for creating olefins at lower cost.
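
    For readers who want the chemistry made concrete: an idealized Fischer-Tropsch route to a light olefin can be written as n CO + 2n H2 → CnH2n + n H2O. The short sketch below only checks that this textbook equation balances for the first few light olefins; it illustrates the general reaction, not the NETL catalyst's actual mechanism.

```python
# Minimal sketch (illustrative, not from the NETL study): check the atom balance for the
# idealized Fischer-Tropsch-to-Olefins reaction  n CO + 2n H2 -> CnH2n + n H2O.

def atom_balance(n):
    """Return atom counts (C, H, O) on each side for chain length n."""
    reactants = {"C": n, "H": 2 * (2 * n), "O": n}   # n CO + 2n H2
    products = {"C": n, "H": 2 * n + 2 * n, "O": n}  # CnH2n + n H2O
    return reactants, products

light_olefins = {2: "ethylene (C2H4)", 3: "propylene (C3H6)", 4: "butylene (C4H8)"}
for n, name in light_olefins.items():
    lhs, rhs = atom_balance(n)
    assert lhs == rhs, f"unbalanced for n={n}"
    print(f"n={n}: {name:18s} balanced -> {lhs}")
```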

    “The NETL-developed carbon nanosheet-supported iron oxide catalysts demonstrated extremely high activity that was 40 to 1,000 times higher than other catalysts used in the Fischer-Tropsch to Olefins process,” Wang said. “In addition, it was extraordinarily robust, with no degradation observed after up to 500 hours of repeated catalytic reactions.”

    Matranga added that the carbon nanosheets promoted the effective transformation of iron oxide in the fresh catalysts to active iron carbide under reaction conditions.

    “This effect was not seen in other carbon-based catalyst support materials such as carbon nanotubes,” he said. “It is a result of the potassium citrate we use to make the carbon support. The potassium has a promotion effect on the catalyst in a manner that cannot be achieved by just adding potassium to the carbon support.”

    Eli Stavitski, a physicist at the Inner Shell Spectroscopy (ISS) beamline at Brookhaven’s NSLS-II, said the new catalyst performed well in his tests. ISS was one of the two beamlines at NSLS-II where the work was conducted.

    “Using the exceptionally bright X-ray beams available at NSLS-II, we were able to confirm that the new catalyst developed by the NETL team transforms into an active iron carbide phase faster, and more completely, than the materials proposed for the Fischer-Tropsch synthesis before,” he said.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:29 pm on March 8, 2019 Permalink | Reply
    Tags: , And finally theywill be shipped to CERN, “The need to go beyond the already excellent performance of the LHC is at the basis of the scientific method” said Giorgio Apollinari Fermilab scientist and HL-LHC AUP project manager., , BNL, , Each magnet will have four sets of coils making it a quadrupole., Earlier this month the AUP earned approval for both Critical Decisions 2 and 3b from DOE., Fermilab will manufacture 43 coils and Brookhaven National Laboratory in New York will manufacture another 41, , , In its current configuration on average an astonishing 1 billion collisions occur every second at the LHC., It’s also the reason behind the collider’s new name the High-Luminosity LHC., LHC AUP began just over two years ago and on Feb. 11 it received key approvals allowing the project to transition into its next steps., , , , Superconducting niobium-tin magnets have never been used in a high-energy particle accelerator like the LHC., The AUP calls for 84 coils fabricated into 21 magnets., The first upgrade is to the magnets that focus the particles., The magnets will be sent to Brookhaven to be tested before being shipped back to Fermilab., The new technologies developed for the LHC will boost that number by a factor of 10., The second upgrade is a special type of accelerator cavity., The U.S. Large Hadron Collider Accelerator Upgrade Project is the Fermilab-led collaboration of U.S. laboratories in partnership with CERN and a dozen other countries., These new magnets will generate a maximum magnetic field of 12 tesla roughly 50 percent more than the niobium-titanium magnets currently in the LHC., This means that significantly more data will be available to experiments at the LHC., This special cavity called a crab cavity is used to increase the overlap of the two beams so that more protons have a chance of colliding., Those will then be delivered to Lawrence Berkeley National Laboratory to be formed into accelerator magnets, Twenty successful magnets will be inserted into 10 containers which are then tested by Fermilab, U.S. Department of Energy projects undergo a series of key reviews and approvals referred to as “Critical Decisions” that every project must receive., U.S. physicists and engineers helped research and develop two technologies to make this upgrade possible.   

    From Brookhaven National Lab: “Large Hadron Collider Upgrade Project Leaps Forward” 

    From Brookhaven National Lab

    March 4, 2019
    Caitlyn Buongiorno

    1
    Staff members of the Superconducting Magnet Division at Brookhaven National Laboratory next to the “top hat”— the interface between the room temperature components of the magnet test facility and the LHC high-luminosity magnet to be tested. The magnet is attached to the bottom of the top hat and tested in superfluid helium at temperatures close to absolute zero. Left to right: Joseph Muratore, Domenick Milidantri, Sebastian Dimaiuta, Raymond Ceruti, and Piyush Joshi. Credit: Brookhaven National Laboratory

    The U.S. Large Hadron Collider Accelerator Upgrade Project is the Fermilab-led collaboration of U.S. laboratories that, in partnership with CERN and a dozen other countries, is working to upgrade the Large Hadron Collider.

    LHC AUP began just over two years ago and, on Feb. 11, it received key approvals, allowing the project to transition into its next steps.

    LHC

    CERN map

    CERN LHC Tunnel

    CERN LHC particles

    U.S. Department of Energy projects undergo a series of key reviews and approvals, referred to as “Critical Decisions,” that every project must receive. Earlier in February, the AUP earned approval for both Critical Decisions 2 and 3b from DOE. CD-2 approves the performance baseline — the scope, cost, and schedule — for the AUP. To stay on that schedule, CD-3b allows the project to receive the funds and approval necessary to purchase base materials and produce final design models of two technologies by the end of 2019.

    The LHC, a 17-mile-circumference particle accelerator on the French-Swiss border, smashes together two opposing beams of protons to produce other particles. Researchers use the particle data to understand how the universe operates at the subatomic scale.

    In its current configuration, on average, an astonishing 1 billion collisions occur every second at the LHC. The new technologies developed for the LHC will boost that number by a factor of 10. This increase in luminosity — the number of proton-proton interactions per second — means that significantly more data will be available to experiments at the LHC. It’s also the reason behind the collider’s new name, the High-Luminosity LHC.
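
    A back-of-the-envelope sketch of what that factor of 10 means in raw numbers (the running time per year is an assumed, illustrative figure, not a value from the article):

```python
# Rough arithmetic only: scale today's collision rate by the HL-LHC luminosity factor.
current_rate = 1e9                    # ~1 billion proton-proton collisions per second (from the article)
luminosity_factor = 10                # HL-LHC target increase (from the article)
seconds_of_physics_per_year = 1.2e7   # assumed running time per year; illustrative only

hl_lhc_rate = current_rate * luminosity_factor
print(f"Collisions per second: {current_rate:.1e} -> {hl_lhc_rate:.1e}")
print(f"Collisions per running year (assuming {seconds_of_physics_per_year:.1e} s of beam): "
      f"{current_rate * seconds_of_physics_per_year:.1e} -> "
      f"{hl_lhc_rate * seconds_of_physics_per_year:.1e}")
```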

    2
    This “crab cavity” is designed to maximize the chance of collision between two opposing particle beams. Photo: Paolo Berrutti

    “The need to go beyond the already excellent performance of the LHC is at the basis of the scientific method,” said Giorgio Apollinari, Fermilab scientist and HL-LHC AUP project manager. “The endorsement and support received for this U.S. contribution to the HL-LHC will allow our scientists to remain at the forefront of research at the energy frontier.”

    U.S. physicists and engineers helped research and develop two technologies to make this upgrade possible. The first upgrade is to the magnets that focus the particles. The new magnets rely on niobium-tin conductors and can exert a stronger force on the particles than their predecessors. By increasing the force, the particles in each beam are driven closer together, enabling more proton-proton interactions at the collision points.

    The second upgrade is a special type of accelerator cavity. Cavities are structures inside colliders that impart energy to the particle beams and propel them forward. This special cavity, called a crab cavity, is used to increase the overlap of the two beams so that more protons have a chance of colliding.

    “This approval is a recognition of 15 years of research and development started by a U.S. research program and completed by this project,” said Giorgio Ambrosio, Fermilab scientist and HL-LHC AUP manager for magnets.

    3
    This completed niobium-tin magnet coil will generate a maximum magnetic field of 12 tesla, roughly 50 percent more than the niobium-titanium magnets currently in the LHC. Photo: Alfred Nobrega

    Magnets help the particles go ’round

    Superconducting niobium-tin magnets have never been used in a high-energy particle accelerator like the LHC. These new magnets will generate a maximum magnetic field of 12 tesla, roughly 50 percent more than the niobium-titanium magnets currently in the LHC. For comparison, an MRI’s magnetic field ranges from 0.5 to 3 tesla, and Earth’s magnetic field is only 50 millionths of one tesla.
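
    Putting those field strengths side by side makes the scale clearer. Note that the roughly 8-tesla value for the present niobium-titanium magnets is inferred here from the “50 percent more” statement and should be treated as approximate:

```python
# Field strengths quoted or implied in the article, in tesla.
fields_T = {
    "Earth's magnetic field": 50e-6,
    "MRI magnet (typical)": 1.5,           # the article gives a 0.5-3 T range
    "LHC NbTi quadrupole (approx.)": 8.0,  # inferred: 12 T is ~50 percent more than this
    "HL-LHC Nb3Sn quadrupole": 12.0,
}
earth = fields_T["Earth's magnetic field"]
for name, B in fields_T.items():
    print(f"{name:32s} {B:10.6g} T  ({B / earth:.3g} x Earth's field)")
```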

    There are multiple stages to creating the niobium-tin coils for the magnets, and each brings its challenges.

    Each magnet will have four sets of coils, making it a quadrupole. Together the coils conduct the electric current that produces the magnet’s field. In order to make niobium-tin capable of producing a strong magnetic field, the coils must be baked in an oven and turned into a superconductor. The major challenge with niobium-tin is that the superconducting phase is brittle. Like uncooked spaghetti, the coils can snap in two under a small amount of pressure if they are not well supported. Therefore, the coils must be handled delicately from this point on.

    The AUP calls for 84 coils, fabricated into 21 magnets. Fermilab will manufacture 43 coils, and Brookhaven National Laboratory in New York will manufacture another 41. Those will then be delivered to Lawrence Berkeley National Laboratory to be formed into accelerator magnets. The magnets will be sent to Brookhaven to be tested before being shipped back to Fermilab. Twenty successful magnets will be inserted into 10 containers, which are then tested by Fermilab, and finally shipped to CERN.

    With CD-2/3b approval, AUP expects to have the first magnet assembled in April and tested by July. If all goes well, this magnet will be eligible for installation at CERN.

    Crab cavities for more collisions

    Cavities accelerate particles inside a collider, boosting them to higher energies. They also form the particles into bunches: as individual protons travel through the cavity, each one is accelerated or decelerated depending on whether it is below or above the expected energy. This process essentially sorts the beam into collections of protons, or particle bunches.
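
    The sorting described above can be caricatured in a few lines of code: give each particle an energy offset from the expected value, and on each pass through the cavity nudge it back toward that value. This is a deliberately oversimplified toy model (the kick strength and number of passes are arbitrary), not an accelerator-physics simulation:

```python
import random

# Toy model of RF bunching: particles above the expected energy are decelerated,
# particles below it are accelerated, so the spread shrinks pass after pass.
random.seed(1)
offsets = [random.uniform(-1.0, 1.0) for _ in range(8)]  # energy offsets, arbitrary units
kick = 0.3                                               # assumed per-pass correction strength

for turn in range(5):
    spread = max(offsets) - min(offsets)
    print(f"pass {turn}: energy spread = {spread:.3f}")
    offsets = [e - kick * e for e in offsets]            # accelerate if low, decelerate if high
```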

    HL-LHC puts a spin on the typical cavity with its crab cavities, which get their name from how the particle bunches appear to move after they’ve passed through the cavity. When a bunch exits the cavity, it appears to move sideways, similar to how a crab walks. This sideways movement is actually a result of the crab cavity rotating the particle bunches as they pass through.

    Imagine that a football was actually a particle bunch. Typically, you want to throw a football straight ahead, with the pointed end cutting through the air. The same is true for particle bunches; they normally go through a collider like a football. Now let’s say you wanted to ensure that your football and another football would collide in mid-air. Rather than throwing it straight on, you’d want to throw the football on its side to maximize the size of the target and hence the chance of collision.

    Of course, turning the bunches is harder than turning a football, as each bunch isn’t a single, rigid object.

    To make the rotation possible, the crab cavities are placed right before and after the collision points at two of the particle detectors at the LHC, called ATLAS and CMS. An alternating electric field runs through each cavity and “tilts” the particle bunch on its side. To do this, the front section of the bunch gets a “kick” to one side on the way in and, before it leaves, the rear section gets a “kick” to the opposite side. Now, the particle bunch looks like a football on its side. When the two bunches meet at the collision point, they overlap better, which makes the occurrence of a particle collision more likely.
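
    In the simplest picture, the crab cavity applies a sideways kick whose sign and size depend on where a particle sits along the bunch: the head is pushed one way, the tail the other, and after drifting toward the collision point the bunch arrives tilted. The snippet below is a cartoon of that position-dependent kick; the kick strength and drift length are made-up numbers chosen only to make the tilt visible:

```python
# Cartoon of a crab-cavity kick: a transverse angle proportional to longitudinal position z.
bunch = [{"z": z, "x": 0.0, "xp": 0.0} for z in (-2, -1, 0, 1, 2)]  # head at z > 0, tail at z < 0

kick_per_z = 0.1   # assumed transverse kick (angle) per unit z
drift = 10.0       # assumed drift length to the collision point

for p in bunch:
    p["xp"] += kick_per_z * p["z"]   # head and tail get opposite kicks
    p["x"] += drift * p["xp"]        # after the drift, the bunch arrives tilted ("on its side")

for p in bunch:
    print(f"z = {p['z']:+d}  ->  x at collision point = {p['x']:+.2f}")
```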

    After the collision point, more crab cavities straighten the remaining bunches, so they can travel through the rest of the LHC without causing unwanted interactions.

    With CD-2/3b approval, all raw materials necessary for construction of the cavities can be purchased. Two crab cavity prototypes are expected by the end of 2019. Once the prototypes have been certified, the project will seek further approval for the production of all cavities destined for the LHC tunnel.

    After further testing, the cavities will be sent out to be “dressed”: placed in a cooling vessel. Once the dressed cavities pass all acceptance criteria, Fermilab will ship all 10 dressed cavities to CERN.

    “It’s easy to forget that these technological advances don’t benefit just accelerator programs,” said Leonardo Ristori, Fermilab engineer and an HL-LHC AUP manager for crab cavities. “Accelerator technology existed in the first TV screens and is currently used in medical equipment like MRIs. We might not be able to predict how these technologies will appear in everyday life, but we know that these kinds of endeavors ripple across industries.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:23 pm on January 4, 2019 Permalink | Reply
    Tags: BNL, , Cornell-Brookhaven “Energy-Recovery Linac” Test Accelerator or CBETA, , , When it comes to particle accelerators magnets are one key to success   

    From Brookhaven National Lab: “Brookhaven Delivers Innovative Magnets for New Energy-Recovery Accelerator” 

    From Brookhaven National Lab

    January 2, 2019
    Karen McNulty Walsh
    kmcnulty@bnl.gov

    Test accelerator under construction at Cornell will reuse energy, running beams through multi-pass magnets that help keep size and costs down.

    1
    Members of the Brookhaven National Laboratory team with the completed magnet assemblies for the CBETA project.

    When it comes to particle accelerators, magnets are one key to success. Powerful magnetic fields keep particle beams “on track” as they’re ramped up to higher energy, crashed into collisions for physics experiments, or delivered to patients to zap tumors. Innovative magnets have the potential to improve all these applications.

    That’s one aim of the Cornell-Brookhaven “Energy-Recovery Linac” Test Accelerator, or CBETA, under construction at Cornell University and funded by the New York State Energy Research and Development Authority (NYSERDA). CBETA relies on a beamline made of cutting-edge magnets designed by physicists at the U.S. Department of Energy’s Brookhaven National Laboratory that can carry four beams at very different energies at the same time.

    Cornell BNL ERL test accelerator

    “Scientists and engineers in Brookhaven’s Collider-Accelerator Department (C-AD) just completed the production and assembly of 216 exceptional quality fixed-field, alternating gradient, permanent magnets for this project—an important milestone,” said C-AD Chair Thomas Roser, who oversees the Lab’s contributions to CBETA.

    The novel magnet design, developed by Brookhaven physicist Stephen Brooks and C-AD engineer George Mahler, has a fixed magnetic field that varies in strength at different points within each circular magnet’s aperture. “Instead of having to ramp up the magnetic field to accommodate beams of different energies, beams with different energies simply find their own ‘sweet spot’ within the aperture,” said Brooks. The result: Beams at four different energies can pass through a single beamline simultaneously.
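
    A toy version of that “sweet spot” idea: if the field rises linearly across the aperture, a beam of a given momentum settles at the transverse offset where the local field bends it on the design radius. The field values, bending radius, and electron momenta below are illustrative placeholders, not CBETA design numbers:

```python
# Toy "sweet spot" calculation for a fixed-field, alternating-gradient style magnet.
# Assumption: the field rises linearly across the aperture, B(x) = B0 + g*x, and each beam
# sits where the local field bends it on a common design radius. All numbers are illustrative.

B0 = 0.20   # field at the aperture center, tesla (assumed)
g = 10.0    # field gradient, tesla per meter (assumed)
rho = 1.0   # design bending radius, meters (assumed)

def sweet_spot(p_GeV):
    """Transverse offset (m) where B(x)*rho matches the beam rigidity p/q."""
    rigidity = 3.3356 * p_GeV        # beam rigidity in T*m for a singly charged particle
    B_required = rigidity / rho
    return (B_required - B0) / g

for p_GeV in (0.042, 0.078, 0.114, 0.150):   # four illustrative electron momenta (GeV/c)
    x = sweet_spot(p_GeV)
    print(f"p = {p_GeV * 1000:5.0f} MeV/c  ->  orbit offset x = {x * 1000:6.1f} mm")
```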

    In CBETA, a chain of these magnets strung together like beads on a necklace will form what’s called a return loop that repeatedly delivers bunches of electrons to a linear accelerator (linac). Four trips through the superconducting radiofrequency cavities of the linac will ramp up the electrons’ energy, and another four will ramp them down so the energy stored in the beam can be recovered and reused for the next round of acceleration.

    “The bunches at different energies are all together in the return loop, with alternating magnetic fields keeping them oscillating along their individual paths, but then they merge and enter the linac sequentially,” explained C-AD chief mechanical engineer Joseph Tuozzolo. “As one bunch goes through and gets accelerated, another bunch gets decelerated and the energy recovered from the deceleration can accelerate the next bunch.”

    Even when the beams are used for experiments, the energy recovery is expected to be close to 99.9 percent, making this “superconducting energy recovery linac (ERL)” a potential game changer in terms of efficiency. New bunches of near-light-speed electrons are brought up to the maximum energy every microsecond, so fresh beams are always available for experiments.
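
    A crude bookkeeping sketch shows why that level of recovery matters: the RF system only has to make up the small fraction of beam power that is not returned. The beam energy and current below are assumed, illustrative values; only the 99.9 percent recovery figure comes from the article:

```python
# Crude energy-recovery bookkeeping for an ERL-style machine (illustrative numbers).
beam_energy_MeV = 150.0   # assumed top energy after the accelerating passes
beam_current_mA = 40.0    # assumed average beam current
recovery = 0.999          # "close to 99.9 percent" energy recovery, per the article

beam_power_kW = beam_energy_MeV * beam_current_mA   # MeV * mA = kW
rf_makeup_kW = beam_power_kW * (1.0 - recovery)

print(f"Power carried by the beam:        {beam_power_kW:8.1f} kW")
print(f"RF power the linac must replace:  {rf_makeup_kW:8.1f} kW (with {recovery:.1%} recovery)")
print(f"Without recovery the RF system would have to supply the full {beam_power_kW:.0f} kW.")
```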

    That’s one of the big advantages of using permanent magnets. Electromagnets, which require electricity to change the strength of the magnetic field, would never be able to ramp up fast enough, Tuozzolo explained. Using permanent fixed-field magnets that require no electricity—like the magnets that stick to your refrigerator, only much stronger—avoids that problem and reduces the energy consumption and cost of running the accelerator.

    To prepare the magnets for CBETA, the Brookhaven team started with high-quality permanent magnet assemblies produced by KYMA, a magnet manufacturing company, based on the design developed by Brooks and Mahler. C-AD’s Tuozzolo organized and led the procurement effort with KYMA and the acquisition of the other components for the return loop.

    Engineers in Brookhaven’s Superconducting Magnet Division took precise measurements of each magnet’s field strength and used a magnetic field correction system developed and built by Brooks to fine-tune the fields to achieve the precision needed for CBETA. Mahler then led the assembly of the finished magnets onto girder plates that will hold them in perfect alignment in the finished accelerator, while C-AD engineer Robert Michnoff led the effort to build and test electronics for beam position monitors that will track particle paths through the beamline.

    “Brookhaven’s CBETA team reached the goals of this milestone nine days earlier than scheduled thanks to the work of extremely dedicated people performing multiple magnetic measurements and magnet surveys over many long work days,” Roser said.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 11:31 am on December 21, 2018 Permalink | Reply
    Tags: , , BNL, , , , , Relativistic Heavy Ion Collider (RHIC), Theory Paper Offers Alternate Explanation for Particle Patterns   

    From Brookhaven National Lab: “Theory Paper Offers Alternate Explanation for Particle Patterns” 

    From Brookhaven National Lab

    December 19, 2018
    Karen McNulty Walsh
    kmcnulty@bnl.gov

    Quantum mechanical interactions among gluons may trigger patterns that mimic formation of quark-gluon plasma in small-particle collisions at RHIC.

    1
    Raju Venugopalan and Mark Mace, two members of a collaboration that maintains that quantum mechanical interactions among gluons are the dominant factor creating particle flow patterns observed in collisions of small projectiles with gold nuclei at the Relativistic Heavy Ion Collider (RHIC).

    A group of physicists analyzing the patterns of particles emerging from collisions of small projectiles with large nuclei at the Relativistic Heavy Ion Collider (RHIC) say these patterns are triggered by quantum mechanical interactions among gluons, the glue-like particles that hold together the building blocks of the projectiles and nuclei. This explanation differs from that given by physicists running the PHENIX experiment at RHIC—a U.S. Department of Energy Office of Science user facility for nuclear physics research at DOE’s Brookhaven National Laboratory. The PHENIX collaboration describes the patterns as a telltale sign that the small particles are creating tiny drops of quark-gluon plasma, a soup of visible matter’s fundamental building blocks.

    The scientific debate has set the stage for discussions that will take place among experimentalists and theorists in early 2019.

    “This back-and-forth process of comparison between measurements, predictions, and explanations is an essential step on the path to new discoveries—as the RHIC program has demonstrated throughout its successful 18 years of operation,” said Berndt Mueller, Brookhaven’s Associate Laboratory Director for Nuclear and Particle Physics, who has convened the special workshop for experimentalists and theorists, which will take place at Rice University in Houston, March 15-17, 2019.

    The data come from collisions between small projectiles (single protons, two-particle deuterons, and three-particle helium-3 nuclei) with large gold nuclei “targets” moving in the opposite direction at nearly the speed of light at RHIC. The PHENIX team tracked particles produced in these collisions and detected distinct correlations among particles emerging in elliptical and triangular patterns. Their measurements were in good agreement with particle patterns predicted by models describing the hydrodynamic behavior of a nearly perfect fluid quark-gluon plasma (QGP), which relate these patterns to the initial geometric shapes of the projectiles (for details, see this press release and the associated paper published in Nature Physics).
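
    The “elliptical and triangular patterns” mentioned here are conventionally quantified as the second and third Fourier harmonics (v2 and v3) of the particles’ azimuthal-angle distribution. The snippet below illustrates that standard definition on toy data; it is a generic example of the definition, not the PHENIX collaboration’s actual analysis:

```python
import math
import random

def v_n(angles, n, psi_n=0.0):
    """Estimate the n-th flow harmonic v_n = <cos(n*(phi - Psi_n))> from emission angles."""
    return sum(math.cos(n * (phi - psi_n)) for phi in angles) / len(angles)

# Generate toy angles from dN/dphi ~ 1 + 2*v2*cos(2*phi) + 2*v3*cos(3*phi) by rejection sampling.
random.seed(0)
v2_true, v3_true = 0.10, 0.03   # assumed input values, chosen only for illustration
angles = []
while len(angles) < 20000:
    phi = random.uniform(0.0, 2.0 * math.pi)
    weight = 1.0 + 2.0 * v2_true * math.cos(2 * phi) + 2.0 * v3_true * math.cos(3 * phi)
    if random.uniform(0.0, 1.3) < weight:   # 1.3 safely exceeds the distribution's maximum
        angles.append(phi)

print(f"v2 estimate: {v_n(angles, 2):.3f} (input {v2_true})")
print(f"v3 estimate: {v_n(angles, 3):.3f} (input {v3_true})")
```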

    But former Stony Brook University (SBU) Ph.D. student Mark Mace, his advisor Raju Venugopalan of Brookhaven Lab and an adjunct professor at SBU, and their collaborators question the PHENIX interpretation, attributing the observed particle patterns instead to quantum mechanical interactions among gluons. They present their interpretation of the results at RHIC and also results from collisions of protons with lead ions at Europe’s Large Hadron Collider in two papers published recently in Physical Review Letters and Physics Letters B, respectively, showing that their model also finds good agreement with the data.

    Gluons’ quantum interactions

    Gluons are the force carriers that bind quarks—the fundamental building blocks of visible matter—to form protons, neutrons, and therefore the nuclei of atoms. When these composite particles are accelerated to high energy, the gluons are postulated to proliferate and dominate their internal structure. These fast-moving “walls” of gluons—sometimes called a “color glass condensate,” named for the “color” charge carried by the gluons—play an important role in the early stages of interaction when a collision takes place.

    “The concept of the color glass condensate helped us understand how the many quarks and gluons that make up large nuclei such as gold become the quark-gluon plasma when these particles collide at RHIC,” Venugopalan said. Models that assume a dominant role of color glass condensate as the initial state of matter in these collisions, with hydrodynamics playing a larger role in the final state, extract the viscosity of the QGP as near the lower limit allowed for a theoretical ideal fluid. Indeed, this is the property that led to the characterization of RHIC’s QGP as a nearly “perfect” liquid.

    But as the number of particles involved in a collision decreases, Venugopalan said, the contribution from hydrodynamics should get smaller too.

    “In large collision systems, such as gold-gold, the interacting coherent gluons in the color glass initial state decay into particle-like gluons that have time to scatter strongly amongst each other to form the hydrodynamic QGP fluid—before the particles stream off to the detectors,” Venugopalan said.

    But at the level of just a few quarks and gluons interacting, as when smaller particles collide with gold nuclei, the system has less time to build up the hydrodynamic response.

    “In this case, the gluons produced after the decay of the color glass do not have time to rescatter before streaming off to the detectors,” he said. “So what the detectors pick up are the multiparticle quantum correlations of the initial state alone.”

    Among these well-known quantum correlations are the effects of the electric color charges and fields generated by the gluons in the nucleus, which can give a small particle strongly directed kicks when it collides with a larger nucleus, Venugopalan said. According to the analysis the team presents in the two published papers, the distribution of these deflections aligns well with the particle flow patterns measured by PHENIX. That lends support to the idea that these quirky quantum interactions among gluons are sufficient to produce the particle flow patterns observed in the small systems without the formation of QGP.

    Such shifts to quantum quirkiness at the small scale are not uncommon, Venugopalan said.

    “Classical systems like billiard balls obey well-defined trajectories when they collide with each other because there are a sufficient number of particles that make up the billiard balls, causing them to behave in aggregate,” he said. “But at the subatomic level, the quantum nature of particles is far less intuitive. Quantum particles have properties that are wavelike and can create patterns that are more like that of colliding waves. The wave-like nature of gluons creates interference patterns that cannot be mimicked by classical billiard ball physics.”

    “How many such subatomic gluons does it take for them to stop exhibiting quantum weirdness and start obeying the classical laws of hydrodynamics? It’s a fascinating question. And what can we learn about the nature of other forms of strongly interacting matter from this transition between quantum and classical physics?”

    The answers might be relevant to understanding what happens in ultracold atomic gases—and may even hold lessons for quantum information science and fundamental issues governing the construction of quantum computers, Venugopalan said.

    “In all of these systems, classical physics breaks down,” he noted. “If we can figure out the particle number or collision energy or other control variables that determine where the quantum interactions become more important, that may point to the more nuanced kinds of predictions we should be looking at in future experiments.”

    The nuclear physics theory work and the operation of RHIC at Brookhaven Lab are supported by the DOE Office of Science.

    Collaborators on this work include: Mark Mace (now a post-doc at the University of Jyväskylä), Vladimir V. Skokov (RIKEN-BNL Research Center at Brookhaven Lab and North Carolina State University), and Prithwish Tribedy (Brookhaven Lab).

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 11:07 am on December 21, 2018 Permalink | Reply
    Tags: , , BNL, Brookhaven Lab's Computational Science Initiative, DOE-supported Energy Sciences Network (ESnet)—a DOE Office of Science User Facility, Lighting the Way to Centralized Computing Support for Photon Science, , Synchrotron light sources   

    From Brookhaven National Lab: “Lighting the Way to Centralized Computing Support for Photon Science” 

    From Brookhaven National Lab

    December 18, 2018
    Ariana Tantillo
    atantillo@bnl.gov

    Brookhaven Lab’s Computational Science Initiative hosted a workshop for scientists and information technology specialists to discuss best practices for managing and processing data generated at light source facilities

    1
    On Sept. 24, scientists and information technology specialists from various labs in the United States and Europe participated in a full-day workshop—hosted by the Scientific Data and Computing Center at Brookhaven Lab—to share challenges and solutions to providing centralized computing support for photon science. From left to right, seated: Eric Lancon, Ian Collier, Kevin Casella, Jamal Irving, Tony Wong, and Abe Singer. Standing: Yee-Ting Li, Shigeki Misawa, Amedeo Perazzo, David Yu, Hironori Ito, Krishna Muriki, Alex Zaytsev, John DeStefano, Stuart Campbell, Martin Gasthuber, Andrew Richards, and Wei Yang.

    Large particle accelerator–based facilities known as synchrotron light sources provide intense, highly focused photon beams in the infrared, visible, ultraviolet, and x-ray regions of the electromagnetic spectrum. The photons, or tiny bundles of light energy, can be used to probe the structure, chemical composition, and properties of a wide range of materials on the atomic scale. For example, scientists direct the brilliant light at batteries to resolve charge and discharge processes, at protein-drug complexes to understand how the molecules bind, and at soil samples to identify environmental contaminants.

    As these facilities continue to become more advanced through upgrades to light sources, detectors, optics, and other technologies, they are producing data at a higher rate and with increasing complexity. These big data present a challenge to facility users, who have to be able to quickly analyze the data in real time to make sure their experiments are functioning as they should be. Once they have concluded their experiments, users also need ways to store, retrieve, and distribute the data for further analysis. High-performance computing hardware and software are critical to supporting such immediate analysis and post-acquisition requirements.

    The U.S. Department of Energy’s (DOE) Brookhaven National Laboratory hosted a one-day workshop on Sept. 24 for information technology (IT) specialists and scientists from various labs around the world to discuss best practices and share experiences in providing centralized computing support to photon science. Many institutions provide limited computing resources (e.g., servers, disk/tape storage systems) within their respective light source facilities for data acquisition and a quick check and feedback on the quality of the collected data. Though these facilities have computing infrastructure (e.g., login access, network connectivity, data management software) to support usage, access to computing resources is often time-constrained because of the high number and frequency of experiments being conducted at any given time. For example, the Diamond Light Source in the United Kingdom hosts about 9,000 experiments in a single year. Because of the limited computing resources, extensive (or multiple attempts at) data reconstruction and analysis must typically be performed outside of the facilities. But centralized computing centers can provide the resources needed to manage and process data being generated by such experiments.

    Continuing a legacy of computing support

    Brookhaven Lab is home to the National Synchrotron Light Source II (NSLS-II) [see below], a DOE Office of Science User Facility that began operating in 2014 and is 10,000 times brighter than the original NSLS. Currently, 28 beamlines are in operation or commissioning, one beamline is under construction, and there is space to accommodate an additional 30 beamlines. NSLS-II is expected to generate tens of petabytes of data per year in the next decade (one petabyte is equivalent to a stack of CDs standing nearly 10,000 feet tall).

    Brookhaven is also home to the Scientific Data and Computing Center (SDCC), part of the Computational Science Initiative (CSI). The centralized data storage, computing, and networking infrastructure that SDCC provides has historically supported the RHIC and ATLAS Computing Facility (RACF). This facility provides the necessary resources to store, process, analyze, and distribute experimental data from the Relativistic Heavy Ion Collider (RHIC)—another DOE Office of Science User Facility at Brookhaven—and the ATLAS detector at CERN’s Large Hadron Collider in Europe.

    2
    The amount of data that need to be archived and retrieved from tape storage has significantly increased over the past decade, as seen in the above graph. “Hot” storage refers to storing data that are frequently accessed, while “cold” storage refers to storing data that are rarely used.

    “Brookhaven has a long tradition of providing centralized computing support to the nuclear and high-energy physics communities,” said workshop organizer Tony Wong, deputy director of SDCC. “A standard approach for dealing with their computing requirements has been developed for more than 50 years. New and advanced photon science facilities such as NSLS-II have very different requirements, and therefore we need to reconsider our approach. The purpose of the workshop was to gain insights from labs with a proven track record of providing centralized computing support for photon science, and to apply those insights at SDCC and other centralized computing centers. There are a lot of research organizations around the world who are similar to Brookhaven in the sense that they have a long history in data-intensive nuclear and high-energy physics experiments and are now branching out to newer data-intensive areas, such as photon science.”

    Nearly 30 scientists and IT specialists from several DOE national laboratories—Brookhaven, Argonne, Lawrence Berkeley, and SLAC—and research institutions in Europe, including the Diamond Light Source and Science and Technology Facilities Council in the United Kingdom and the PETRA III x-ray light source at the Deutsches Elektronen-Synchrotron (DESY) in Germany, participated in this first-of-its-kind workshop. They discussed common challenges in storing, archiving, retrieving, sharing, and analyzing photon science data, and techniques to overcome these challenges.

    Meeting different computing requirements

    One of the biggest differences in computing requirements between nuclear and high-energy physics and photon science is the speed with which the data must be analyzed upon collection.

    “In nuclear and high-energy physics, the data-taking period spans weeks, months, or even years, and the data are analyzed at a later date,” said Wong. “But in photon science, experiments sometimes only last a few hours to a couple of days. When your time at a beamline is this limited, every second counts. Therefore, it is vitally important for the users to be able to immediately check their data as it is collected to ensure it is of value. It is through these data checks that scientists can confirm whether the detectors and instruments are working properly.”

    Photon science also has unique networking requirements, both internally within the light sources and central computing centers, and externally across the internet and remote facilities. For example, in the past, scientists could load their experimental results onto portable storage devices such as removable drives. However, because of the proliferation of big data, this take-it-home approach is often not feasible. Instead, scientists are investigating cloud-based data storage and distribution technology. While the DOE-supported Energy Sciences Network (ESnet)—a DOE Office of Science User Facility stewarded by Lawrence Berkeley National Laboratory—provides high-bandwidth connections for national labs, universities, and research institutions to share their data, no such vehicle exists for private companies. Additionally, sending, storing, and accessing data over the internet can pose security concerns in cases where the data are proprietary or involve confidential information, as is often the case for corporate entities.
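
    To see why carrying data home on drives stops being practical, it helps to compare shipping against a high-bandwidth link. The dataset size and link speeds below are illustrative assumptions (and the times assume the link is fully utilized), not figures from the article:

```python
# Illustrative comparison: moving an experiment's dataset by network versus by portable drive.
dataset_TB = 200.0   # assumed dataset size
link_speeds_Gbps = {"10 Gb/s campus link": 10, "100 Gb/s ESnet-class link": 100}

dataset_bits = dataset_TB * 1e12 * 8
for name, gbps in link_speeds_Gbps.items():
    hours = dataset_bits / (gbps * 1e9) / 3600
    print(f"{name:28s} ~{hours:6.1f} hours for {dataset_TB:.0f} TB")

# Carrying the same dataset on assumed 4 TB portable drives means dozens of drives plus transit time.
print(f"Portable drives (4 TB each): {dataset_TB / 4:.0f} drives, plus shipping time")
```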

    Even nonproprietary academic research requires that some security measures be in place to ensure that only the appropriate personnel are accessing the computing resources and data. The workshop participants discussed authentication and authorization infrastructure and mechanisms to address these concerns.

    3
    ESnet provides network connections across the world to enable sharing of big data for scientific discovery.

    Identifying opportunities and challenges

    According to Wong, the workshop raised both concern and optimism. Many of the world’s light sources will be undergoing upgrades between 2020 and 2025 that will increase today’s data collection rates by three to 10 times.

    “If we are having trouble coping with data challenges today, even taking into account advancements in technology, we will continue to have problems in the future with respect to moving data from detectors to storage and performing real-time analysis on the data,” said Wong. “On the other hand, SDCC has extensive experience in providing software visualization, cloud computing, authentication and authorization, scalable disk storage, and other infrastructure for nuclear and high-energy physics research. This experience can be leveraged to tackle the unique challenges of managing and processing data for photon science.”

    Going forward, SDCC will continue to engage with the larger community of IT experts in scientific computing through existing information-exchange forums, such as HEPiX. Established in 1991, HEPiX comprises more than 500 scientists and IT system administrators, engineers, and managers who meet twice a year to discuss scientific computing and data challenges in nuclear and high-energy physics. Recently, HEPiX has been extending these discussions to other scientific areas, with scientists and IT professionals from various light sources in attendance. Several of the Brookhaven workshop participants attended the recent HEPiX Autumn/Fall 2018 Workshop in Barcelona, Spain.

    “The seeds have already been planted for interactions between the two communities,” said Wong. “It is our hope that the exchange of information will be mutually beneficial.”

    With this knowledge sharing, SDCC hopes to expand the amount of support provided to NSLS-II, as well as the Center for Functional Nanomaterials (CFN)—another DOE Office of Science User Facility at Brookhaven. In fact, several scientists from NSLS-II and CFN attended the workshop, providing a comprehensive view of their computing needs.

    “SDCC already supports these user facilities but we would like to make this support more encompassing,” said Wong. “For instance, we provide offline computing resources for post-data acquisition analysis but we are not yet providing a real-time data quality IT infrastructure. Events like this workshop are part of SDCC’s larger ongoing effort to provide adequate computing support to scientists, enabling them to carry out the world-class research that leads to scientific discoveries.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     