Tagged: LBNL

  • richardmitnick 1:01 pm on November 14, 2017
    Tags: Advanced Biofuels and Bioproducts Process Demonstration Unit (ABPDU), Here we’re cultivating an entire community of microbes to access enzymes that we couldn’t get from isolates, Joint BioEnergy Institute (JBEI) based at Lawrence Berkeley National Laboratory (Berkeley Lab), LBNL, Metagenomic analysis, New types of cellulases, enzymes that help break down plants into ingredients that can be used to make biofuels and bioproducts

    From LBNL: “To Find New Biofuel Enzymes, It Can Take a Microbial Village” 


    November 14, 2017
    Sarah Yang
    scyang@lbl.gov
    (510) 486-4575

    A new study led by researchers at the Department of Energy’s Joint BioEnergy Institute (JBEI), based at Lawrence Berkeley National Laboratory (Berkeley Lab), demonstrates the importance of microbial communities as a source of stable enzymes that could be used to convert plants to biofuels.

    This 50-milliliter flask contains a symbiotic mix of bacteria derived from compost that was maintained for three years. (Credit: Steve Singer/JBEI)

    The study, recently published in the journal Nature Microbiology, reports on the discovery of new types of cellulases, enzymes that help break down plants into ingredients that can be used to make biofuels and bioproducts. The cellulases were cultured from a microbiome, a departure from the typical approach of obtaining enzymes from isolated organisms.

    The scientists first studied the microbial menagerie present in a few cups of municipal compost. Metagenomic analysis of the microbiome, carried out at the DOE Joint Genome Institute (JGI), helped reveal that 70 percent of the enzymatic activity originated from cellulases produced by a cluster of uncultivated bacteria in the compost. They found that the enzymes easily broke down the cellulose in plant biomass into glucose at temperatures up to 80 degrees Celsius.

    This chart shows the bacterial composition of the community in the bioreactor after two weeks of culturing. (Credit: Sebastian Kolinko/JBEI)

    “Here we’re cultivating an entire community of microbes to access enzymes that we couldn’t get from isolates,” said study principal investigator Steve Singer, senior scientist in Berkeley Lab’s Biological Systems and Engineering Division and director of Microbial and Enzyme Discovery at JBEI. “Some microbes are difficult to culture in a lab. We are cultivating microbes living in communities, as they occur in the wild, which allows us to see things we don’t see when they are isolated. This opens up the opportunity to discover new types of enzymes that are only produced by microbes in communities.”

    The bacterial population, Candidatus Reconcilibacillus cellulovorans, yielded cellulases that were arranged in remarkably robust carbohydrate-protein complexes, a structure never before observed in isolates. The stability of the new cellulase complexes makes them attractive for applications in biofuels production, the study authors said.

    “The enzymes persist, even after a decline in bacterial abundance,” said Singer, who compared the microbial community with sourdough starters fermented from wild yeast and friendly bacteria. “We kept the microbial community cultivation going for more than three years in the lab.”

    A bioreactor at ABPDU was used to scale the growth of a mixture of bacteria from 50 milliliters to 300 liters. (Credit: Roy Kaltschmidt/Berkeley Lab).

    This stability is a key advantage over other cellulases that degrade more rapidly at high temperature, the researchers said.

    To determine whether the enzyme production can be scalable for industrial applications, JBEI scientists collaborated with researchers from the Advanced Biofuels and Bioproducts Process Demonstration Unit (ABPDU) at Berkeley Lab, a scale-up facility established by DOE to help accelerate the commercialization of biofuels research discoveries.

    Researchers at JBEI, a DOE Bioenergy Research Center, were able to produce 50-milliliter samples, but in about six weeks, the scientists at ABPDU scaled the cultures to a volume 6,000 times larger – 300 liters – in industrial bioreactors.

    The study’s lead author is Sebastian Kolinko, who worked on the study as a JBEI postdoctoral researcher.

    Other co-authors on this study include researchers from Taipei Medical University, the University of Georgia, the Mannheim University of Applied Sciences, and the Technical University of Braunschweig in Germany.

    JGI is a DOE Office of Science User Facility. This work was primarily supported by the DOE Office of Science and the DOE Office of Energy Efficiency and Renewable Energy.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California
     
  • richardmitnick 9:28 am on November 13, 2017
    Tags: Fuel Cell X-Ray Study Details Effects of Temperature and Moisture on Performance, LBNL

    From LBNL: “Fuel Cell X-Ray Study Details Effects of Temperature and Moisture on Performance” 


    November 13, 2017
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    This animated 3-D rendering (view larger size), generated by an X-ray-based imaging technique at Berkeley Lab’s Advanced Light Source, shows tiny pockets of water (blue) in a fibrous sample. The X-ray experiments showed how moisture and temperature can affect hydrogen fuel-cell performance. (Credit: Berkeley Lab)

    Like a well-tended greenhouse garden, a specialized type of hydrogen fuel cell – which shows promise as a clean, renewable next-generation power source for vehicles and other uses – requires precise temperature and moisture controls to be at its best. If the internal conditions are too dry or too wet, the fuel cell won’t function well.

    But seeing inside a working fuel cell at the tiny scales relevant to a fuel cell’s chemistry and physics is challenging, so scientists used X-ray-based imaging techniques at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and Argonne National Laboratory to study the inner workings of fuel-cell components subjected to a range of temperature and moisture conditions.

    The research team, led by Iryna Zenyuk, a former Berkeley Lab postdoctoral researcher now at Tufts University, included scientists from Berkeley Lab’s Energy Storage and Distributed Resources Division and the Advanced Light Source (ALS), an X-ray source known as a synchrotron.

    LBNL/ALS

    The ALS lets researchers image in 3-D at high resolution very quickly, allowing them to look inside working fuel cells in real-world conditions. The team created a test bed to mimic the temperature conditions of a working polymer-electrolyte fuel cell that is fed hydrogen and oxygen gases and produces water as a byproduct.

    “The water management and temperature are critical,” said Adam Weber, a staff scientist in the Energy Technologies Area at Berkeley Lab and deputy director for a multi-lab fuel cell research effort, the Fuel Cell Consortium for Performance and Durability (FC-PAD).

    The study has been published online in the journal Electrochimica Acta.

    Temperature-controlled X-ray experiments on fuel-cell components were conducted at Berkeley Lab’s Advanced Light Source (bottom left) and Argonne National Laboratory’s Advanced Photon Source (bottom right).

    ANL/APS

    The computer renderings (top) show the specialized sample holder, which included a heating element near the top and cooling coils at the base. (Credit: Berkeley Lab)

    The research aims to find the right balance of humidity and temperature within the cell, and how water moves out of the cell.

    Controlling how and where water vapor condenses in a cell, for example, is critical so that it doesn’t block incoming gases that facilitate chemical reactions.

    “Water, if you don’t remove it, can cover the catalyst and prevent oxygen from reaching the reaction sites,” Weber said. But there has to be some humidity to ensure that the central membrane in the cell can efficiently conduct ions.

    The research team used an X-ray technique known as micro X-ray computed tomography to record 3-D images of a sample fuel cell measuring about 3 to 4 millimeters in diameter.

    “The ALS lets us image in 3-D at high resolution very quickly, allowing us to look inside working fuel cells in real-world conditions,” said Dula Parkinson, a research scientist at the ALS who participated in the study.

    The sample cell included thin carbon-fiber layers, known as gas-diffusion layers, which in a working cell sandwich a central polymer-based membrane coated with catalyst layers on both sides. These gas-diffusion layers help to distribute the reactant chemicals and then remove the products from the reactions.

    Weber said that the study used materials that are relevant to commercial fuel cells. Some previous studies have explored how water wicks through and is shed from fuel-cell materials, and the new study added precise temperature controls and measurements to provide new insight on how water and temperature interact in these materials.

    Complementary experiments at the ALS and at Argonne’s Advanced Photon Source, a synchrotron that specializes in a different range of X-ray energies, provided detailed views of the water evaporation, condensation, and distribution in the cell during temperature changes.

    “It took the ALS to explore the physics of this,” Weber said, “so we can compare this to theoretical models and eventually optimize the water management process and thus the cell performance.”

    The experiments focused on average temperatures ranging from about 95 to 122 degrees Fahrenheit, with temperature variations of 60 to 80 degrees (hotter to colder) within the cell. Measurements were taken over the course of about four hours. The results provided key information to validate water and heat models that detail fuel-cell function.
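
    For reference, the same window in Celsius follows from the standard conversion:

    $$T_{\mathrm{C}} = \tfrac{5}{9}\,(T_{\mathrm{F}} - 32), \qquad \tfrac{5}{9}(95 - 32) = 35\ ^{\circ}\mathrm{C}, \qquad \tfrac{5}{9}(122 - 32) = 50\ ^{\circ}\mathrm{C},$$

    so the average cell temperatures studied ranged from roughly 35 to 50 degrees Celsius.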

    Water clusters in sample fuel-cell components shrink over time in this sequence of images, produced by a 3-D imaging technique known as micro X-ray computed tomography. The water clusters were contained in a fibrous membrane that was exposed to different temperatures. The mean temperature began at about 104 degrees Fahrenheit and was gradually increased to about 131 degrees Fahrenheit. The top side of the images was the hotter side of the sample, and the bottom of the images was the colder side. (Credit: Berkeley Lab)

    This test cell included a hot side designed to show how water evaporates at the site of the chemical reactions, and a cooler side to show how water vapor condenses and drives the bulk of the water movement in the cell.

    While the thermal conductivity of the carbon-fiber layers – their ability to transfer heat energy – decreased slightly as the moisture content declined, the study found that even the slightest degree of saturation produced nearly double the thermal conductivity of a completely dry carbon-fiber layer. Water evaporation within the cell appears to dramatically increase at about 120 degrees Fahrenheit, researchers found.

    The experiments showed water distribution with millionths-of-a-meter precision, and suggested that water transport is largely driven by two processes: the operation of the fuel cell and the purging of water from the cell.

    The study found that larger water clusters evaporate more rapidly than smaller clusters. It also found that water clusters in the fuel cell tend to resemble flattened spheres, while voids imaged in the carbon-fiber layers tend to be somewhat football-shaped.

    There are also some ongoing studies, Weber said, to use the X-ray-based imaging technique to look inside a full subscale fuel cell one section at a time.

    “There are ways to stitch together the imaging so that you get a much larger field of view,” he said. This process is being evaluated as a way to find the origin of failure sites in cells through imaging before and after testing. A typical working subscale fuel cell measures around 50 square centimeters, he added.

    Other researchers participating in this study were from Tufts University, Argonne National Laboratory, and the Norwegian University of Science and Technology. The work was supported by the U.S. Department of Energy’s Fuel Cell Technologies Office and Office of Energy Efficiency and Renewable Energy, and the National Science Foundation.

    The Advanced Light Source and the Advanced Photon Source are DOE Office of Science User Facilities that are open to visiting scientists from around the U.S. and world.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 11:26 am on November 8, 2017
    Tags: LBNL

    From LBNL: “New Study: Scientists Narrow Down the Search for Dark Photons Using Decade-Old Particle Collider Data” 


    November 8, 2017
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 520-0843

    The BaBar detector at SLAC National Accelerator Laboratory. (Credit: SLAC)

    SLAC BABAR

    In its final years of operation, a particle collider in Northern California was refocused to search for signs of new particles that might help fill in some big blanks in our understanding of the universe.

    A fresh analysis of this data, co-led by physicists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), limits some of the hiding places for one type of theorized particle – the dark photon, also known as the heavy photon – that was proposed to help explain the mystery of dark matter.

    The latest result, published in the journal Physical Review Letters by the roughly 240-member BaBar Collaboration, adds to results from a collection of previous experiments seeking, but not yet finding, the theorized dark photons.

    “Although it does not rule out the existence of dark photons, the BaBar results do limit where they can hide, and definitively rule out their explanation for another intriguing mystery associated with the property of the subatomic particle known as the muon,” said Michael Roney, BaBar spokesperson and University of Victoria professor.

    Dark matter, which accounts for an estimated 85 percent of the total mass of the universe, has only been observed by its gravitational interactions with normal matter. For example, the rotation rate of galaxies is much faster than expected based on their visible matter, suggesting there is “missing” mass that has so far remained invisible to us.

    So physicists have been working on theories and experiments to help explain what dark matter is made of – whether it is composed of undiscovered particles, for example, and whether there may be a hidden or “dark” force that governs the interactions of such particles among themselves and with visible matter. The dark photon, if it exists, has been put forward as a possible carrier of this dark force.

    Using data collected from 2006 to 2008 at SLAC National Accelerator Laboratory in Menlo Park, California, the analysis team scanned the recorded byproducts of particle collisions for signs of a single particle of light – a photon – devoid of associated particle processes.

    The BaBar experiment, which ran from 1999 to 2008 at SLAC, collected data from collisions of electrons with positrons, their positively charged antiparticles. The collider driving BaBar, called PEP-II, was built through a collaboration that included SLAC, Berkeley Lab, and Lawrence Livermore National Laboratory. At its peak, the BaBar Collaboration involved over 630 physicists from 13 countries.

    BaBar was originally designed to study the differences in the behavior between matter and antimatter involving a b-quark. Simultaneously with a competing experiment in Japan called Belle, BaBar confirmed the predictions of theorists and paved the way for the 2008 Nobel Prize.

    KEK Belle 2 detector, in Tsukuba, Ibaraki Prefecture, Japan

    Berkeley Lab physicist Pier Oddone proposed the idea for BaBar and Belle in 1987 while he was the Lab’s Physics division director.

    The latest analysis used about 10 percent of BaBar’s data – recorded in its final two years of operation. Its data collection was refocused on finding particles not accounted for in physics’ Standard Model – a sort of rulebook for what particles and forces make up the known universe.

    “BaBar performed an extensive campaign searching for dark sector particles, and this result will further constrain their existence,” said Bertrand Echenard, a research professor at Caltech who was instrumental in this effort.

    This chart shows the search area (green) explored in an analysis of BaBar data where dark photon particles have not been found, compared with other experiments’ search areas. The red band shows the favored search area to determine whether dark photons are causing the so-called “g-2 anomaly,” and the white areas are among the unexplored territories for dark photons. (Credit: Muon g-2 Collaboration)

    Yury Kolomensky, a physicist in the Nuclear Science Division at Berkeley Lab and a faculty member in the Department of Physics at UC Berkeley, said, “The signature (of a dark photon) in the detector would be extremely simple: one high-energy photon, without any other activity.”

    A number of the dark photon theories predict that the associated dark matter particles would be invisible to the detector. The single photon, radiated from a beam particle, signals that an electron-positron collision has occurred and that the invisible dark photon decayed to the dark matter particles, revealing itself in the absence of any other accompanying energy.

    When physicists proposed dark photons in 2009, it excited new interest in the physics community and prompted a fresh look at BaBar’s data. Kolomensky supervised the data analysis, performed by UC Berkeley undergraduates Mark Derdzinski and Alexander Giuffrida.

    “Dark photons could bridge this hidden divide between dark matter and our world, so it would be exciting if we had seen it,” Kolomensky said.

    The dark photon has also been postulated to explain a discrepancy between the observation of a property of the muon spin and the value predicted for it in the Standard Model. Measuring this property with unprecedented precision is the goal of the Muon g-2 (pronounced gee-minus-two) Experiment at Fermi National Accelerator Laboratory [FNAL].

    FNAL G-2 magnet from Brookhaven Lab finds a new home in the FNAL Muon G-2 experiment

    FNAL Muon g-2 studio

    Earlier measurements at Brookhaven National Laboratory had found that this property of muons – like a spinning top with a wobble that is ever so slightly off the norm – is off by about 0.0002 percent from what is expected. Dark photons were suggested as one possible particle candidate to explain this mystery, and a new round of experiments begun earlier this year should help to determine whether the anomaly is actually a discovery.
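
    In the standard notation of the field, the quantity being compared is the muon’s anomalous magnetic moment,

    $$a_\mu = \frac{g - 2}{2},$$

    and a deviation of about 0.0002 percent corresponds to a fractional difference of roughly $2 \times 10^{-6}$, or about two parts per million, between the measured value and the Standard Model expectation.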

    The latest BaBar result, Kolomensky said, largely “rules out these dark photon theories as an explanation for the g-2 anomaly, effectively closing this particular window, but it also means there is something else driving the g-2 anomaly if it’s a real effect.”

    It’s a common and constant interplay between theory and experiments, with theory adjusting to new constraints set by experiments, and experiments seeking inspiration from new and adjusted theories to find the next proving grounds for testing out those theories.

    Scientists have been actively mining BaBar’s data, Roney said, to take advantage of the well-understood experimental conditions and detector to test new theoretical ideas.

    “Finding an explanation for dark matter is one of the most important challenges in physics today, and looking for dark photons was a natural way for BaBar to contribute,” Roney said, adding that many experiments in operation or planned around the world are seeking to address this problem.

    An upgrade of an experiment in Japan that is similar to BaBar, called Belle II, turns on next year. “Eventually, Belle II will produce 100 times more statistics compared to BaBar,” Kolomensky said. “Experiments like this can probe new theories and more states, effectively opening new possibilities for additional tests and measurements.”

    “Until Belle II has accumulated significant amounts of data, BaBar will continue for the next several years to yield new impactful results like this one,” Roney said.

    The study featured participation by the international BaBar collaboration, which includes researchers from about 66 institutions in the U.S., Canada, France, Spain, Italy, Norway, Germany, Russia, India, Saudi Arabia, U.K., the Netherlands, and Israel. The work was supported by the U.S. Department of Energy’s Office of Science and the National Science Foundation; the Natural Sciences and Engineering Research Council in Canada; CEA and CNRS-IN2P3 in France; BMBF and DFG in Germany; INFN in Italy; FOM in the Netherlands; NFR in Norway; MES in Russia; MINECO in Spain; STFC in the U.K.; and BSF in Israel and the U.S. Individuals involved with this study have received support from the Marie Curie EIF in the European Union, and the Alfred P. Sloan Foundation in the U.S.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 10:08 am on October 23, 2017
    Tags: LBNL

    From LBNL: “Experiment Provides Deeper Look into the Nature of Neutrinos” 


    October 23, 2017
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    The first glimpse of data from the full array of a deeply chilled particle detector operating beneath a mountain in Italy sets the most precise limits yet on where scientists might find a theorized process to help explain why there is more matter than antimatter in the universe.

    This new result, submitted today to the journal Physical Review Letters, is based on two months of data collected from the full detector of the CUORE (Cryogenic Underground Observatory for Rare Events) experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) in Italy. CUORE means “heart” in Italian.

    The CUORE detector array, shown here in this rendering, is formed by 19 copper-framed “towers” that each house a matrix of 52 cube-shaped crystals. (Credit: CUORE collaboration)

    CUORE experiment UC Berkeley, experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS), a search for neutrinoless double beta decay

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    The Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) leads the U.S. nuclear physics effort for the international CUORE collaboration, which has about 150 members from 25 institutions. The U.S. nuclear physics program has made substantial contributions to the fabrication and scientific leadership of the CUORE detector.

    CUORE is considered one of the most promising efforts to determine whether tiny elementary particles called neutrinos, which interact only rarely with matter, are “Majorana particles” – identical to their own antiparticles. Most other particles are known to have antiparticles that have the same mass but a different charge, for example. CUORE could also help us home in on the exact masses of the three types, or “flavors,” of neutrinos – neutrinos have the unusual ability to morph into different forms.

    “This is the first preview of what an instrument this size is able to do,” said Oliviero Cremonesi, a senior faculty scientist at INFN and spokesperson for the CUORE collaboration. Already, the full detector array’s sensitivity has exceeded the precision of the measurements reported in April 2015 after a successful two-year test run that enlisted one detector tower. Over the next five years CUORE will collect about 100 times more data.

    Yury Kolomensky, a senior faculty scientist in the Nuclear Science Division at Lawrence Berkeley National Laboratory (Berkeley Lab) and U.S. spokesperson for the CUORE collaboration, said, “The detector is working exceptionally well and these two months of data are enough to exceed the previous limits.” Kolomensky is also a professor in the UC Berkeley Physics Department.

    The new data provide a narrow range in which scientists might expect to see any indication of the particle process CUORE is designed to find, known as neutrinoless double beta decay.

    “CUORE is, in essence, one of the world’s most sensitive thermometers,” said Carlo Bucci, technical coordinator of the experiment and Italian spokesperson for the CUORE collaboration. Its detectors, formed by 19 copper-framed “towers” that each house a matrix of 52 cube-shaped, highly purified tellurium dioxide crystals, are suspended within the innermost chamber of six nested tanks.

    Cooled by the most powerful refrigerator of its kind, the tanks subject the detector to the coldest known temperature recorded in a cubic meter volume in the entire universe: minus 459 degrees Fahrenheit (10 milliKelvin).
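
    As a check on that figure, converting kelvin to Fahrenheit with

    $$T_{\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{K}} - 459.67, \qquad \tfrac{9}{5}(0.010) - 459.67 \approx -459.65\ ^{\circ}\mathrm{F},$$

    shows that 10 millikelvin sits just a few hundredths of a degree Fahrenheit above absolute zero, which rounds to the minus 459 degrees quoted above.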

    The detector array was designed and assembled over a 10-year period. It is shielded from many outside particles, such as cosmic rays that constantly bombard the Earth, by the 1,400 meters of rock above it, and by thick lead shielding that includes a radiation-depleted form of lead rescued from an ancient Roman shipwreck. Other detector materials were also prepared in ultrapure conditions, and the detectors were assembled in nitrogen-filled, sealed glove boxes to prevent contamination from regular air.

    “Designing, building, and operating CUORE has been a long journey and a fantastic achievement,” said Ettore Fiorini, an Italian physicist who developed the concept of CUORE’s heat-sensitive detectors (tellurium dioxide bolometers), and the spokesperson-emeritus of the CUORE collaboration. “Employing thermal detectors to study neutrinos took several decades and led to the development of technologies that can now be applied in many fields of research.”

    Together weighing over 1,600 pounds, CUORE’s matrix of roughly fist-sized crystals is extremely sensitive to particle processes, especially at this extreme temperature. Associated instruments can precisely measure ever-slight temperature changes in the crystals resulting from these processes.

    Berkeley Lab and Lawrence Livermore National Laboratory scientists supplied roughly half of the crystals for the CUORE project. In addition, the Berkeley Lab team designed and fabricated the highly sensitive temperature sensors – called neutron transmutation doped thermistors – invented by Eugene Haller, a senior faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley faculty member.

    CUORE was assembled in this specially designed clean room to help protect it from contaminants. (Credit: CUORE collaboration)

    Berkeley Lab researchers also designed and built a specialized clean room supplied with air depleted of natural radioactivity, so that the CUORE detectors could be installed into the cryostat in ultraclean conditions. And Berkeley Lab scientists and engineers, under the leadership of UC Berkeley postdoc Vivek Singh, worked with Italian colleagues to commission the CUORE cryogenic systems, including a uniquely powerful cooling system called a dilution refrigerator.

    Former UC Berkeley postdoctoral researchers Tom Banks and Tommy O’Donnell, who also had joint appointments in the Nuclear Science Division at Berkeley Lab, led the international team of physicists, engineers, and technicians to assemble over 10,000 parts into towers in nitrogen-filled glove boxes. They bonded almost 8,000 gold wires, measuring just 25 microns in diameter, to 100-micron-sized pads on the temperature sensors, and on copper pads connected to detector wiring.

    CUORE measurements carry the telltale signature of specific types of particle interactions or particle decays – a spontaneous process by which a particle or particles transform into other particles.

    In double beta decay, which has been observed in previous experiments, two neutrons in the atomic nucleus of a radioactive element become two protons. Also, two electrons are emitted, along with two other particles called antineutrinos.

    Neutrinoless double beta decay, meanwhile – the specific process that CUORE is designed to find or to rule out – would not produce any antineutrinos. This would mean that neutrinos are their own antiparticles. During this decay process the two antineutrino particles would effectively wipe each other out, leaving no trace in the CUORE detector. Evidence for this type of decay process would also help scientists explain neutrinos’ role in the imbalance of matter vs. antimatter in our universe.

    Neutrinoless double beta decay is expected to be exceedingly rare, occurring at most (if at all) once every 100 septillion (1 followed by 26 zeros) years in a given atom’s nucleus. The large volume of detector crystals is intended to greatly increase the likelihood of recording such an event during the lifetime of the experiment.
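
    To see why sheer detector size matters for so slow a process, note that for an observation time $t$ much shorter than the half-life $T_{1/2}$, the expected number of decays among $N$ candidate nuclei is approximately

    $$N_{\mathrm{decays}} \approx N \, \frac{\ln 2}{T_{1/2}} \, t.$$

    As a purely illustrative plug-in (the nuclide count here is a round number chosen for easy arithmetic, not a CUORE specification), $N \sim 10^{27}$ tellurium-130 nuclei observed for one year against a half-life of $10^{26}$ years would yield roughly $10^{27} \times 0.69 / 10^{26} \approx 7$ expected decays, the kind of handful of events a detector of this scale can hope to catch.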

    There is growing competition from new and planned experiments to resolve whether this process exists using a variety of search techniques, and Kolomensky noted, “The competition always helps. It drives progress, and also we can verify each other’s results, and help each other with materials screening and data analysis techniques.”

    Lindley Winslow of the Massachusetts Institute of Technology, who coordinated the analysis of the CUORE data, said, “We are tantalizingly close to completely unexplored territory and there is great possibility for discovery. It is an exciting time to be on the experiment.”

    CUORE is supported jointly by the Istituto Nazionale di Fisica Nucleare (INFN), the Italian National Institute for Nuclear Physics, and, in the U.S., by the Department of Energy’s Office of Nuclear Physics, the National Science Foundation, and the Alfred P. Sloan Foundation. The CUORE collaboration includes about 150 scientists from Italy, the U.S., China, France, and Spain, and is based at the underground INFN Gran Sasso National Laboratories (LNGS) in Italy.

    CUORE collaboration members include: Italian National Institute for Nuclear Physics (INFN), University of Bologna, University of Genoa, University of Milano-Bicocca, and Sapienza University in Italy; California Polytechnic State University, San Luis Obispo; Berkeley Lab; Lawrence Livermore National Laboratory; Massachusetts Institute of Technology; University of California, Berkeley; University of California, Los Angeles; University of South Carolina; Virginia Polytechnic Institute and State University; and Yale University in the US; Saclay Nuclear Research Center (CEA) and the Center for Nuclear Science and Materials Science (CNRS/IN2P3) in France; and the Shanghai Institute of Applied Physics and Shanghai Jiao Tong University in China.

    The U.S. CUORE team was led by the late Prof. Stuart Freedman until his untimely passing in 2012. Other current and former Berkeley Lab members of the CUORE collaboration not previously mentioned include U.S. Contractor Project Manager Sergio Zimmermann (Engineering Division); former U.S. Contractor Project Manager Richard Kadel; staff scientists Jeffrey Beeman, Brian Fujikawa, Sarah Morgan, and Alan Smith; postdocs Giovanni Benato, Raul Hennings-Yeomans, Ke Han, Yuan Mei, Bradford Welliver, and Benjamin Schmidt; graduate students Adam Bryant, Alexey Drobizhev, Roger Huang, Laura Kogler, Jonathan Ouellet, and Sachi Wagaarachchi; and engineers David Biare, Luigi Cappelli, Lucio di Paolo, and Joseph Wallig.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 11:33 am on October 16, 2017
    Tags: LBNL

    From LBNL: “Scientists Decode the Origin of Universe’s Heavy Elements in the Light from a Neutron Star Merger” 


    October 16, 2017
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    When Neutron Stars Collide (Credit: Caltech)

    Sometimes – even in matters of science – you have to be lucky.

    On Aug. 17, scientists around the globe were treated to near-simultaneous observations by separate instruments: One set of Earth-based detectors measured the signature of a cataclysmic event sending ripples through the fabric of space-time, and a space-based detector measured the gamma-ray signature of a high-energy outburst emanating from the same region of the sky.

    These parallel detections led astronomers and astrophysicists on an all-out hunt for more detailed measurements explaining this confluence of signals, which would ultimately be confirmed as the first measurement of the merger of two neutron stars and its explosive aftermath.

    Just a week earlier, Daniel Kasen, a scientist in the Nuclear Science Division at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and an associate professor of physics and astronomy at UC Berkeley, was attending a science conference in Seattle.

    A hypothetical question was posed to attendees as to when astronomers would detect an astrophysical source that produced both a strong disruption in the space-time continuum – in the form of gravitational waves – and an associated burst of light.

    This image, from a simulation showing the formation of a cocoon-like release in the merger of two neutron stars, illustrates energy density nine seconds after the merger, with higher density shown in yellower shades. (Caltech)

    The likely target would be the violent merger of a neutron star, which is the ultradense remnant of an exploded star, with another neutron star or a black hole. Such events have been theorized to seed the universe with heavy elements like gold, platinum, and radioactive elements like uranium.

    Most scientists in the room expected that, based on the planned sensitivity of future instruments, and the presumed rarity of neutron star mergers, such a historic discovery might – with some luck – be more than a decade away.

    So Kasen, who had been working for years on models and simulations to help understand the likely signals from merging neutron stars, was stunned when data on a neutron star merger and its aftermath began to pour in just a week later.

    “It seemed too good to be true,” said Kasen. “Not only had they detected gravitational waves, but from a neutron star merger that was so close, it was practically in our backyard. Almost everybody on Earth with a telescope started pointing at the same part of the sky.”

    LIGO and VIRGO – a network of Earth-based gravitational wave detectors capable of observing some of the universe’s most violent events by detecting ever-so-slight changes in laser-measured distances caused by passing gravitational waves – had picked up an event.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    A couple of seconds later, a brief burst of gamma rays was detected by an instrument aboard the Fermi Gamma-ray Space Telescope. Less than 12 hours after that, astronomers spotted the first glimpse of visible light from the event.

    When Kasen saw the email alerts rolling in about the various observations, he couldn’t help but feel a sense of unease. “For years we had been studying what colliding neutron stars would look like, with nothing to go on but our theoretical imagination and computer modeling,” he said. “Now, real data was flooding in, and it was going to test everything we had predicted.”

    Over the following days and weeks, an influx of observations provided data confirming that the brilliant burst behaved remarkably like the theorized merger of two neutron stars.

    Computer simulations had suggested that, during such a merger, a small fraction of neutron star matter would be flung into surrounding space. Models predicted that this cloud of exotic debris would assemble into heavy elements and give off a radioactive glow over 10 million times brighter than the sun. The phenomenon is called a kilonova or macronova.

    An animation for a model of a kilonova associated with a neutron star merger (right), showing fast effects in blue and slower effects in red, and associated graph that shows how the model matches with data from the observed kilonova. (Credit: Daniel Kasen/Berkeley Lab, UC Berkeley)

    Jennifer Barnes, an Einstein postdoctoral fellow at Columbia University, who as a UC Berkeley graduate student worked with Kasen to compute some of the first detailed model predictions of kilonovae, said, “We expected from theory and simulations that kilonovae would be tinged red if heavy elements were produced, and would shine blue if they weren’t.”

    She added, “Understanding this relationship allowed us to more confidently interpret the emission from this event and diagnose the presence of heavy elements in the merger debris.”

    Kasen, Barnes, and two other Berkeley Lab scientists were among the co-authors of several papers published today in the journals Nature, Science, and The Astrophysical Journal. The publications detailed the discovery, follow-up observations, and theoretical interpretation of this event.

    See https://sciencesprings.wordpress.com/2017/10/16/from-hubble-nasa-missions-catch-first-light-from-a-gravitational-wave-event/ for science papers.

    Simulations related to the event were carried out at the Lab’s National Energy Research Scientific Computing Center (NERSC).

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    Peter Nugent, a senior staff scientist in the Computational Research Division at Berkeley Lab and an adjunct professor of astronomy at UC Berkeley, also closely followed the alerts related to the Aug. 17 observations.

    At the time, he was assisting with the final preparations for the startup of the Zwicky Transient Facility (ZTF) at the Palomar Observatory in Southern California. Berkeley Lab is a member of the collaboration for ZTF, which is designed to discover supernovae and also to search for rare and exotic events such as those that occur during the aftermath of neutron star mergers.

    The arrow in the left image points to light associated with matter expelled from a neutron star merger, as recorded by the Dark Energy Camera. (Credit: DECam/DES Collaboration)

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam at Cerro Tololo, Chile, at an altitude of 7,200 feet

    “This event happened too early by three months,” Nugent said, as the soon-to-launch ZTF is designed to quickly follow up on LIGO/VIRGO gravitational wave measurements to look for their visible counterparts in the sky.

    Nugent said that, at first, he thought that the multiple observations of the object (known as an optical transient) associated with the neutron star merger and gamma-ray burst was just a common supernova. But the object was evolving too quickly and had an incredibly blue light signature that pointed to a different type of event than the supernovae normally associated with the type of galaxy hosting this event.

    Also, Nugent said, “We didn’t expect an event this close. It’s almost akin to having a supernova blow up in Andromeda,” which is about 2.5 million light years away from our Milky Way galaxy. “We hope this means there are going to be more of these events. We now know the rate is not zero.”

    Nugent contributed to an analysis in one of the papers in the journal Science that concludes there may be “many more events” like the observed merger, and that neutron star mergers are likely “the main production sites” for heavy elements in the Milky Way. The observation could also provide valuable clues about how scientists might look for other neutron star mergers in optical surveys without a LIGO/VIRGO detection.

    “How the heaviest elements came to be has been one of the longest standing questions of our cosmic origins,” Kasen said. “Now we have for the first time directly witnessed a cloud of freshly made precious metals right at their production site.”

    The debris cloud from the merger mushroomed from about the size of a city shortly after the merger to about the size of a solar system after only one day, Kasen said. It is also likely that only a few percent of the matter in the merging neutron stars escaped the central site of the merger; the rest likely collapsed to form a black hole.

    It is expected that the escaping debris will be very long-lived, diffusing across the galaxy over a billion years and enriching stars and planets with heavy elements like those we find on Earth today.

    “For me, it is the astronomical event of a lifetime,” Kasen said. “It’s also an incredible moment for the field of scientific computing. Simulations succeeded in modeling what would happen in an incredibly complex phenomenon like a neutron star merger. Without the models, we probably all would have been mystified by exactly what we were seeing in the sky.”

    Future advances in computing, and new insights from the Facility for Rare Isotope Beams (FRIB) at Michigan State University on exotic reactions that produce heavy nuclei, should provide even more insight as to how the heavy elements came to be, and the extreme physics of matter and gravity that occurs in mergers.

    Kasen is also the lead investigator on a DOE Exascale Computing Project that is developing high-performance astrophysical simulation codes that will run on the next generation of U.S. supercomputers. He is also a member of a DOE-supported SciDAC (Scientific Discovery through Advanced Computing) collaboration that is using computing to simulate supernovae, neutron star mergers, and related high-energy events.

    “Before these observations, the signals from neutron star mergers were mainly theoretical speculation,” Kasen said. “Now, it has suddenly become a major new field of astrophysics.”

    The National Energy Research Scientific Computing Center is a DOE Office of Science User Facility.

    Berkeley Lab’s contributions to the simulations and observations were supported by the U.S. Department of Energy’s Office of Science.


    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 2:31 pm on October 11, 2017
    Tags: Injecting Electrons Jolts 2-D Structure Into New Atomic Pattern, LBNL, The researchers used molybdenum ditelluride (MoTe2), a typical 2-D semiconductor, and coated it with an ionic liquid (DEME-TFSI)

    From LBNL: “Injecting Electrons Jolts 2-D Structure Into New Atomic Pattern” 


    October 11, 2017
    Sarah Yang
    scyang@lbl.gov
    (510) 486-4575

    Berkeley Lab study is first to show potential of energy-efficient next-gen electronic memory.

    Schematic shows the configuration for structural phase transition on a molybdenum ditelluride monolayer (MoTe2, shown as yellow and blue spheres), which is anchored by metal electrodes (top gate and ground). The ionic liquid covering the monolayer and electrodes enables a high density of electrons to populate the monolayer, leading to changes in the structural lattice from a hexagonal (2H) to monoclinic (1T’) pattern. (Credit: Ying Wang/Berkeley Lab)

    The same electrostatic charge that can make hair stand on end and attach balloons to clothing could be an efficient way to drive atomically thin electronic memory devices of the future, according to a new study led by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab).

    In a study published today in the journal Nature, scientists have found a way to reversibly change the atomic structure of a 2-D material by injecting, or “doping,” it with electrons. The process uses far less energy than current methods for changing the configuration of a material’s structure.

    “We show, for the first time, that it is possible to inject electrons to drive structural phase changes in materials,” said study principal investigator Xiang Zhang, senior faculty scientist at Berkeley Lab’s Materials Sciences Division and a professor at UC Berkeley. “By adding electrons into a material, the overall energy goes up and will tip off the balance, resulting in the atomic structure rearranging to a new pattern that is more stable. Such electron doping-driven structural phase transitions at the 2-D limit are not only important in fundamental physics; they also open the door for new electronic memory and low-power switching in the next generation of ultra-thin devices.”

    Switching a material’s structural configuration from one phase to another is the fundamental, binary characteristic that underlies today’s digital circuitry. Electronic components capable of this phase transition have shrunk down to paper-thin sizes, but they are still considered to be bulk, 3-D layers by scientists. By comparison, 2-D monolayer materials are composed of a single layer of atoms or molecules whose thickness is roughly 100,000 times smaller than that of a human hair.

    “The idea of electron doping to alter a material’s atomic structure is unique to 2-D materials, which are much more electrically tunable compared with 3-D bulk materials,” said study co-lead author Jun Xiao, a graduate student in Zhang’s lab.

    The classic approach to driving the structural transition of materials involves heating to above 500 degrees Celsius. Such methods are energy-intensive and not feasible for practical applications. In addition, the excess heat can significantly reduce the life span of components in integrated circuits.

    A number of research groups have also investigated the use of chemicals to alter the configuration of atoms in semiconductor materials, but that process is still difficult to control and has not been widely adopted by industry.

    “Here we use electrostatic doping to control the atomic configuration of a two-dimensional material,” said study co-lead author Ying Wang, another graduate student in Zhang’s lab. “Compared to the use of chemicals, our method is reversible and free of impurities. It has greater potential for integration into the manufacturing of cell phones, computers, and other electronic devices.”

    The researchers used molybdenum ditelluride (MoTe2), a typical 2-D semiconductor, and coated it with an ionic liquid (DEME-TFSI), which has an ultra-high capacitance, or ability to store electric charges. The layer of ionic liquid allowed the researchers to inject the semiconductor with electrons at a density of a hundred trillion to a quadrillion per square centimeter. That electron density is one to two orders of magnitude higher than what could be achieved in 3-D bulk materials, the researchers said.
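
    In scientific notation, the quoted range of a hundred trillion to a quadrillion electrons per square centimeter is a sheet density of about $10^{14}$ to $10^{15}\ \mathrm{cm}^{-2}$.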

    Through spectroscopic analysis, the researchers determined that the injection of electrons changed the atomic arrangement of the molybdenum ditelluride from a hexagonal pattern to a monoclinic one, which has more of a slanted, cuboid shape. Once the electrons were retracted, the crystal structure returned to its original hexagonal pattern, showing that the phase transition is reversible. Moreover, these two types of atomic arrangement have very different symmetries, providing a large contrast for applications in optical components.

    “Such an atomically thin device could have dual functions, serving simultaneously as optical or electrical transistors, and hence broaden the functionalities of the electronics used in our daily lives,” said Wang.

    This work was supported by DOE’s Office of Science and by the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 1:19 pm on September 13, 2017
    Tags: LBNL, PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), TFIIH - Transcription factor IIH

    From LBNL: “Berkeley Lab Scientists Map Key DNA Protein Complex at Near-Atomic Resolution” 


    September 13, 2017
    Sarah Yang
    scyang@lbl.gov
    (510) 486-4575

    The cryo-EM structure of human transcription factor IIH (TFIIH). The atomic coordinate model, colored according to the different TFIIH subunits, is shown inside the semi-transparent cryo-EM map. (Credit: Basil Greber/Berkeley Lab and UC Berkeley)

    Chalking up another success for a new imaging technology that has energized the field of structural biology, researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) obtained the highest resolution map yet of a large assembly of human proteins that is critical to DNA function.

    The scientists are reporting their achievement today in an advanced online publication of the journal Nature. They used cryo-electron microscopy (cryo-EM) to resolve the 3-D structure of a protein complex called transcription factor IIH (TFIIH) at 4.4 angstroms, or near-atomic resolution. This protein complex is used to unzip the DNA double helix so that genes can be accessed and read during transcription or repair.

    “When TFIIH goes wrong, DNA repair can’t occur, and that malfunction is associated with severe cancer propensity, premature aging, and a variety of other defects,” said study principal investigator Eva Nogales, faculty scientist at Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division. “Using this structure, we can now begin to place mutations in context to better understand why they give rise to misbehavior in cells.”

    TFIIH’s critical role in DNA function has made it a prime target for research, but it is considered a difficult protein complex to study, especially in humans.

    ___________________________________________________________________
    How to Capture a Protein
    It takes a large store of patience and persistence to prepare specimens of human transcription factor IIH (TFIIH) for cryo-EM. Because TFIIH exists in such minute amounts in a cell, the researchers had to grow 50 liters of human cells in culture to yield a few micrograms of the purified protein.

    Human TFIIH is particularly fragile and prone to falling apart in the flash-freezing process, so researchers need to use an optimized buffer solution to help protect the protein structure.

    “These compounds that protect the proteins also work as antifreeze agents, but there’s a trade-off between protein stability and the ability to produce a transparent film of ice needed for cryo-EM,” said study lead author Basil Greber.

    Once Greber obtains a usable sample, he settles down for several days at the cryo-electron microscope at UC Berkeley’s Stanley Hall for imaging.

    “Once you have that sample inside the microscope, you keep collecting data as long as you can,” he said. “The process can take four days straight.”
    ___________________________________________________________________

    Mapping complex proteins

    “As organisms get more complex, these proteins do, too, taking on extra bits and pieces needed for regulatory functions at many different levels,” said Eva Nogales, who is also a UC Berkeley professor of molecular and cell biology and a Howard Hughes Medical Institute investigator. “The fact that we resolved this protein structure from human cells makes this even more relevant to disease research. There’s no need to extrapolate the protein’s function based upon how it works in other organisms.”

    Biomolecules such as proteins are typically imaged using X-ray crystallography, but that method requires a large amount of stable sample for the crystallization process to work. The challenge with TFIIH is that it is hard to produce and purify in large quantities, and once obtained, it may not form crystals suitable for X-ray diffraction.

    Enter cryo-EM, which can work even when sample amounts are very small. Electrons are sent through purified samples that have been flash-frozen at ultracold temperatures to prevent crystalline ice from forming.

    Cryo-EM has been around for decades, but major advances over the past five years have led to a quantum leap in the quality of high-resolution images achievable with this technique.

    “When your goal is to get resolutions down to a few angstroms, the problem is that any motion gets magnified,” said study lead author Basil Greber, a UC Berkeley postdoctoral fellow at the California Institute for Quantitative Biosciences (QB3). “At high magnifications, the slight movement of the specimen as electrons move through leads to a blurred image.”

    Making movies

    The researchers credit the explosive growth in cryo-EM to advanced detector technology that Berkeley Lab engineer Peter Denes helped develop. Instead of taking a single picture of each sample, the direct detector camera shoots multiple frames in a process akin to recording a movie. The frames are then aligned and combined to create a single high-resolution image. This approach corrects for the blur caused by sample movement. The improved images contain higher-quality data and allow researchers to study the sample in the multiple states in which it exists in the cell.
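    The frame-alignment idea can be sketched in a few lines of Python: estimate each frame's drift (here with a basic FFT cross-correlation against the first frame), undo it, and sum the frames. This is a minimal illustration of the concept under simplified assumptions, not the production motion-correction software used in cryo-EM pipelines.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (row, col) drift of `frame` relative to `ref`
    using FFT-based cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))

def align_and_sum(frames):
    """Undo each frame's drift and sum the movie into one image."""
    ref = frames[0].astype(float)
    total = ref.copy()
    for frame in frames[1:]:
        dy, dx = estimate_shift(ref, frame)
        total += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return total

# Synthetic example: a bright spot that drifts one pixel per frame under noise.
rng = np.random.default_rng(0)
movie = np.zeros((10, 64, 64))
for i in range(10):
    movie[i, 30 + i, 30 + i] = 50.0
    movie[i] += rng.normal(0.0, 1.0, (64, 64))
summed = align_and_sum(movie)
print(int(summed.max()))  # the aligned sum concentrates the signal in one pixel
```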

    Since shooting a movie generates far more data than a single frame, and thousands of movies are being collected during a microscopy session, the researchers needed the processing punch of supercomputers at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab.

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    The output from these computations was a 3-D map that required further interpretation.

    “When we began the data processing, we had 1.5 million images of individual molecules to sort through,” said Greber. “We needed to select particles that are representative of an intact complex. After 300,000 CPU hours at NERSC, we ended up with 120,000 images of individual particles that were used to compute the 3-D map of the protein.”
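    A heavily simplified picture of that selection step: score each candidate particle image against a set of reference averages and keep only the ones that match well. The scoring function and the threshold below are illustrative placeholders; the actual classification carried out at NERSC was far more sophisticated.

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized cross-correlation between two images of the same shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def select_particles(particle_images, reference_averages, threshold=0.1):
    """Return indices of particles whose best match to any reference exceeds
    `threshold` (an illustrative placeholder, not a value from the study)."""
    kept = []
    for i, particle in enumerate(particle_images):
        best = max(normalized_correlation(particle, ref) for ref in reference_averages)
        if best >= threshold:
            kept.append(i)
    return kept

# Tiny synthetic example: two images resembling the reference, one junk image.
rng = np.random.default_rng(2)
reference = np.zeros((32, 32))
reference[12:20, 12:20] = 1.0
good = reference + rng.normal(0.0, 0.3, (32, 32))
junk = rng.normal(0.0, 1.0, (32, 32))
print(select_particles([good, junk, reference], [reference]))  # expected: [0, 2]
```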

    To obtain an atomic model of the protein complex based on this 3-D map, the researchers used PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), a software program whose development is led by Paul Adams, director of Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division and a co-author of this study.

    Not only does this structure improve the basic understanding of DNA repair, but the information could also be used to help visualize how specific molecules bind to target proteins during drug development.

    “In studying the physics and chemistry of these biological molecules, we’re often able to determine what they do, but how they do it is unclear,” said Nogales. “This work is a prime example of what structural biologists do. We establish the framework for understanding how the molecules function. And with that information, researchers can develop finely targeted therapies with more predictive power.”

    Other co-authors on this study are Pavel Afonine and Thi Hoang Duong Nguyen, both of whom have joint appointments at Berkeley Lab and UC Berkeley; and Jie Fang, a researcher at the Howard Hughes Medical Institute.

    NERSC is a DOE Office of Science User Facility located at Berkeley Lab. In addition to NERSC, the researchers used the Lawrencium computing cluster at Berkeley Lab. This work was funded by the National Institute of General Medical Sciences and the Swiss National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 8:33 am on August 28, 2017 Permalink | Reply
    Tags: , , Berkeley Lab’s Molecular Foundry, , Exciton effect, LBNL, Moly sulfide, , New Results Reveal High Tunability of 2-D Material, Photoluminescence excitation (PLE) spectroscopy,   

    From LBNL: “New Results Reveal High Tunability of 2-D Material” 

    Berkeley Logo

    Berkeley Lab

    August 25, 2017
    Glenn Roberts Jr
    geroberts@lbl.gov
    (510) 486-5582

    1
    From left: Kaiyuan Yao, Nick Borys, and P. James Schuck, seen here at Berkeley Lab’s Molecular Foundry, measured a property in a 2-D material that could help realize new applications. (Credit: Marilyn Chung/Berkeley Lab)

    Two-dimensional materials are something of a rookie phenom in the scientific community. They are atomically thin and can exhibit radically different electronic and light-based properties from their thicker, more conventional forms, so researchers are flocking to this fledgling field to find ways to tap these exotic traits.

    Applications for 2-D materials range from microchip components to superthin and flexible solar panels and display screens, among a growing list of possible uses. But because their fundamental structure is inherently tiny, they can be tricky to manufacture and measure, and to match with other materials. So while 2-D materials R&D is on the rise, there are still many unknowns about how to isolate, enhance, and manipulate their most desirable qualities.

    Now, a science team at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has precisely measured some previously obscured properties of moly sulfide, a 2-D semiconducting material also known as molybdenum disulfide or MoS2. The team also revealed a powerful tuning mechanism and an interrelationship between its electronic and optical, or light-related, properties.

    To best incorporate such monolayer materials into electronic devices, engineers want to know the “band gap,” the minimum energy it takes to jolt electrons away from the atoms they are coupled to so that they flow freely through the material, much as electric current flows through a copper wire. Supplying sufficient energy to the electrons, for example through the absorption of light, converts the material into an electrically conducting state.

    As reported in the Aug. 25 issue of Physical Review Letters, researchers measured the band gap for a monolayer of moly sulfide, which has proved difficult to accurately predict theoretically, and found it to be about 30 percent higher than expected based on previous experiments. They also quantified how the band gap changes with electron density – a phenomenon known as “band gap renormalization.”
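    One way to picture how these quantities relate: the optical (exciton) peak sits below the true electronic band gap by the exciton binding energy, and adding carriers shrinks, or renormalizes, the band gap. The sketch below encodes that relationship with made-up numbers purely for illustration; none of the values are taken from the paper.

```python
# Illustrative relationship between the exciton peak, the electronic band gap,
# and band gap renormalization. All numbers are placeholders, not values
# reported in the study.

def band_gap_eV(electron_density_cm2,
                gap_at_zero_density_eV=2.5,
                renormalization_eV_per_1e13=0.05):
    """Band gap that shrinks linearly with carrier density (a crude model)."""
    return gap_at_zero_density_eV - renormalization_eV_per_1e13 * electron_density_cm2 / 1e13

def exciton_peak_eV(electron_density_cm2, binding_energy_eV=0.4):
    """The optical (exciton) peak sits below the band gap by the binding energy."""
    return band_gap_eV(electron_density_cm2) - binding_energy_eV

for n in [0.0, 1e13, 5e13]:
    print(f"n = {n:.0e} cm^-2: gap = {band_gap_eV(n):.2f} eV, "
          f"exciton peak = {exciton_peak_eV(n):.2f} eV")
```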

    “The most critical significance of this work was in finding the band gap,” said Kaiyuan Yao, a graduate student researcher at Berkeley Lab and the University of California, Berkeley, who served as the lead author of the research paper.

    2
    This diagram shows a triangular sample of monolayer moly sulfide (dark blue) on silicon-based layers (light blue and green) during an experimental technique known as photoluminescence excitation spectroscopy. (Credit: Berkeley Lab)

    “That provides very important guidance to all of the optoelectronic device engineers. They need to know what the band gap is” in order to properly connect the 2-D material with other materials and components in a device, Yao said.

    Obtaining a direct measurement of the band gap is complicated by the so-called “exciton effect” in 2-D materials, which is produced by a strong pairing between electrons and electron “holes,” vacant positions around an atom where an electron can exist. The strength of this effect can mask measurements of the band gap.

    Nicholas Borys, a project scientist at Berkeley Lab’s Molecular Foundry who also participated in the study, said the study also resolves how to tune optical and electronic properties in a 2-D material.

    “The real power of our technique, and an important milestone for the physics community, is to discern between these optical and electronic properties,” Borys said.

    The team used several tools at the Molecular Foundry, a facility that is open to the scientific community and specializes in the creation and exploration of nanoscale materials.

    The Molecular Foundry technique that researchers adapted for use in studying monolayer moly sulfide, known as photoluminescence excitation (PLE) spectroscopy, promises to bring new applications for the material within reach, such as ultrasensitive biosensors and tinier transistors, and also shows promise for similarly pinpointing and manipulating properties in other 2-D materials, researchers said.

    The research team measured both the exciton and band gap signals and then disentangled them. The scientists observed how light was absorbed by electrons in the moly sulfide sample as they adjusted the density of electrons crammed into the sample by changing the electrical voltage on a layer of charged silicon that sat below the moly sulfide monolayer.

    3
    This image shows a slight “bump” (red arrow) in charted experimental data that reveals the band gap measurement in a 2-D material known as moly sulfide. (Credit: Berkeley Lab)

    Researchers noticed a slight “bump” in their measurements that they realized was a direct signature of the band gap. Through a series of follow-up experiments, they used this discovery to show that the band gap can be readily tuned simply by adjusting the density of electrons in the material.
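    As a generic illustration of how a subtle onset can be pulled out of a measured curve, the sketch below smooths a synthetic spectrum and looks for the point where it rises fastest. This is a textbook signal-processing example, not the analysis the team actually used.

```python
import numpy as np

def find_onset_energy(energies_eV, signal, window=7):
    """Return the energy where the smoothed signal rises fastest --
    a crude stand-in for locating a weak absorption onset ("bump")."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")   # boxcar smoothing
    derivative = np.gradient(smoothed, energies_eV)
    return float(energies_eV[np.argmax(derivative)])

# Synthetic data: a gently sloping background plus a weak step near 2.10 eV.
e = np.linspace(1.9, 2.3, 400)
spectrum = 0.2 * (e - 1.9)                                # smooth background
spectrum += 0.05 / (1.0 + np.exp(-(e - 2.10) / 0.005))    # weak onset ("bump")
spectrum += np.random.default_rng(1).normal(0.0, 0.0005, e.size)
print(f"estimated onset: {find_onset_energy(e, spectrum):.2f} eV")
```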

    “The large degree of tunability really opens people’s eyes,” said P. James Schuck, who was director of the Imaging and Manipulation of Nanostructures facility at the Molecular Foundry during this study.

    “And because we could see both the band gap’s edge and the excitons simultaneously, we could understand each independently and also understand the relationship between them,” said Schuck, who is now at Columbia University. “It turns out all of these properties are dependent on one another.”

    4
    Kaiyuan Yao works with equipment at Berkeley Lab’s Molecular Foundry that was used to help measure a property in a 2-D material. (Credit: Marilyn Chung/Berkeley Lab)

    Moly sulfide, Schuck also noted, is “extremely sensitive to its local environment,” which makes it a prime candidate for use in a range of sensors. Because it is highly sensitive to both optical and electronic effects, it could translate incoming light into electronic signals and vice versa.

    Schuck said the team hopes to use a suite of techniques at the Molecular Foundry to create other types of monolayer materials and samples of stacked 2-D layers, and to obtain definitive band gap measurements for these, too. “It turns out no one yet knows the band gaps for some of these other materials,” he said.

    The team also has expertise in the use of a nanoscale probe to map the electronic behavior across a given sample.

    Borys added, “We certainly hope this work seeds further studies on other 2-D semiconductor systems.”

    The Molecular Foundry is a DOE Office of Science User Facility that provides free access to state-of-the-art equipment and multidisciplinary expertise in nanoscale science to visiting scientists.

    Researchers from the Kavli Energy NanoSciences Institute at UC Berkeley and Berkeley Lab, and from Arizona State University also participated in this study, which was supported by the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 12:09 pm on August 14, 2017 Permalink | Reply
    Tags: , , , , LBNL, New 3-D Simulations Show How Galactic Centers Cool Their Jets,   

    From LBNL: “New 3-D Simulations Show How Galactic Centers Cool Their Jets” 

    Berkeley Logo

    Berkeley Lab

    August 14, 2017
    Glenn Roberts Jr
    geroberts@lbl.gov
    (510) 486-5582

    1
    This rendering illustrates magnetic kink instability in simulated jets beaming from a galaxy’s center. The jets are believed to be associated with supermassive black holes. The magnetic field line (white) in each jet is twisted as the central object (black hole) rotates. As the jets contact higher-density matter the magnetic fields build up and become unstable. The irregular bends and asymmetries of the magnetic field lines are symptomatic of kink instability. The instability dissipates the magnetic fields into heat with the change in density, leading them to become less tightly wound. (Credit: Berkeley Lab, Purdue University, NASA).

    Some of the most extreme outbursts observed in the universe are the mysterious jets of energy and matter beaming from the centers of galaxies at nearly the speed of light. These narrow jets, which typically form in opposing pairs, are believed to be associated with supermassive black holes and other exotic objects, though the mechanisms that drive and dissipate them are not well understood.

    Now, a small team of researchers has developed theories supported by 3-D simulations to explain what’s at work.

    Finding common causes for instabilities in space jets

    “These jets are notoriously hard to explain,” said Alexander “Sasha” Tchekhovskoy, a former NASA Einstein fellow who co-led the new study as a member of the Nuclear Science Division at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), and the Astronomy and Physics departments and Theoretical Astrophysics Center at UC Berkeley. “Why are they so stable in some galaxies and in others they just fall apart?”

    As much as half of the jets’ energy can escape in the form of X-rays and stronger forms of radiation. The researchers showed how two different mechanisms, both related to the jets’ interaction with surrounding matter (known as the “ambient medium”), serve to dissipate about half of the energy of these powerful jets.

    “The exciting part of this research is that we are now coming to understand the full range of dissipation mechanisms that are working in the jet,” no matter the size or type of jet, he said.

    2
    An animation showing magnetic field instabilities in two jets of radiation and matter beaming from a supermassive black hole (center). The magnetic field (white) is twisted by the black hole’s spin. (Credit: Berkeley Lab, Purdue University)

    The study that Tchekhovskoy co-led with Purdue University scientists Rodolfo Barniol Duran and Dimitrios Giannios is published in the Aug. 21 edition of Monthly Notices of the Royal Astronomical Society. The study concludes that the ambient medium itself has a lot to do with how the jets release energy.

    “We were finally able to simulate jets that start from the black hole and propagate to very large distances – where they bump into the ambient medium,” said Duran, formerly a postdoctoral research associate at Purdue University who is now a faculty member at California State University, Sacramento.

    Tchekhovskoy, who has studied these jets for over a decade, said that an effect known as the magnetic kink instability, which causes a sudden bend in the direction of some jets, and another effect that triggers a series of shocks within other jets, appear to be the primary mechanisms for energy release. The density of the ambient medium that the jets encounter serves as the key trigger for each type of release mechanism.

    “For a long time, we have speculated that shocks and instabilities trigger the spectacular light displays from jets. Now these ideas and models can be cast on a much firmer theoretical ground,” said Giannios, assistant professor of physics and astronomy at Purdue.

    The length and intensity of the jets can illuminate the properties of their associated black holes, such as their age and size and whether they are actively “feeding” on surrounding matter. The longest jets extend for millions of light years into surrounding space.

    “When we look at black holes, the first things we notice are the central streaks of these jets. You can make images of these streaks and measure their lengths, widths, and speeds to get information from the very center of the black hole,” Tchekhovskoy noted. “Black holes tend to eat in binges of tens and hundreds of millions of years. These jets are like the ‘burps’ of black holes – they are determined by the black holes’ diet and frequency of feeding.”

    3
    This animation shows the propagation of a jet of high-energy radiation and matter from a black hole (at the base of the animation) in a simulation, at four different time points. The frames show what happens as the jet contacts denser matter as it reaches out into surrounding space. (Credit: Berkeley Lab, Purdue University)

    While nothing – not even light – can escape a black hole’s interior, the jets somehow manage to draw their energy from the black hole. The jets are driven by a sort of accounting trick, he explained, like writing a check for a negative amount and having money appear in your account. In the black hole’s case, it’s the laws of physics rather than a banking loophole that allow black holes to spew energy and matter even as they suck in surrounding matter.

    The incredible friction and heating of gases spiraling in toward the black hole cause extreme temperatures and compression in magnetic fields, resulting in an energetic backlash and an outflow of radiation that escapes the black hole’s strong pull.

    A tale of magnetic kinks and sequenced shocks

    Earlier studies had shown how magnetic instabilities (kinks) in the jets can occur when jets run into the ambient medium. This instability is like a magnetic spring. If you squish the spring from both ends between your fingers, the spring will fly sideways out of your hand. Likewise, a jet experiencing this instability can change direction when it rams into matter outside of the black hole’s reach.
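    The “magnetic spring” analogy can be made slightly more quantitative with the classic Kruskal-Shafranov-type criterion from fusion research: a magnetized column becomes prone to kinking roughly when its field lines wind around the axis more than once over the column’s length. The sketch below applies that textbook criterion with made-up numbers; it illustrates the general idea rather than the specific criterion used in the paper’s simulations.

```python
import math

def kink_unstable(radius_m, length_m, B_axial_T, B_toroidal_T):
    """Kruskal-Shafranov-style check: a magnetized column is prone to the kink
    mode when the field-line twist over its length exceeds ~2*pi, i.e. when the
    safety factor q drops below 1."""
    q = 2.0 * math.pi * radius_m * B_axial_T / (length_m * B_toroidal_T)
    return q < 1.0, q

# Illustrative numbers only (not from the study): a jet segment whose toroidal
# field has grown strong relative to the axial field as it piles into denser gas.
unstable, q = kink_unstable(radius_m=3.0e18, length_m=3.0e20,
                            B_axial_T=1e-9, B_toroidal_T=5e-9)
print(f"safety factor q = {q:.3f} -> {'kink-unstable' if unstable else 'stable'}")
```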

    The same type of instability frustrated scientists working on early machines that attempted to create and harness a superhot, charged state of matter known as a plasma in efforts to develop fusion energy, which powers the sun. The space jets, also known as active galactic nuclei (AGN) jets, also are a form of plasma.

    The latest study found that in cases where an earlier jet had “pre-drilled” a hole in the ambient medium surrounding a black hole and the matter impacted by the newly formed jet was less dense, a different process is at work in the form of “recollimation” shocks.

    These shocks form as matter and energy in the jet bounce off the sides of the hole. The jet, while losing energy from every shock, immediately reforms a narrow column until its energy eventually dissipates to the point that the beam loses its tight focus and spills out into a broad area.

    “With these shocks, the jet is like a phoenix. It comes out of the shock every time,” though with gradually lessening energy, Tchekhovskoy said. “This train of shocks cumulatively can dissipate quite a substantial amount of the total energy.”
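    Tchekhovskoy’s “phoenix” picture amounts to simple bookkeeping: if each recollimation shock dissipates a fixed fraction of whatever energy the jet still carries, the losses compound geometrically along the train of shocks. The fraction used below is an arbitrary placeholder chosen only to illustrate the compounding.

```python
def energy_after_shocks(initial_energy, fraction_lost_per_shock, n_shocks):
    """Remaining jet energy after a train of shocks, each dissipating a fixed
    fraction of whatever energy is left (a simple geometric model)."""
    return initial_energy * (1.0 - fraction_lost_per_shock) ** n_shocks

# With an assumed 10% loss per shock (placeholder value), seven shocks already
# dissipate about half of the jet's initial energy.
for n in range(8):
    remaining = energy_after_shocks(1.0, 0.10, n)
    print(f"after {n} shocks: {remaining:.2f} of the initial energy remains")
```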

    The researchers designed the models to smash against different densities of matter in the ambient medium to create instabilities in the jets that mimic astrophysical observations.

    Peering deeper into the source of jets

    New, higher-resolution images of regions in space where supermassive black holes are believed to exist – from the Event Horizon Telescope (EHT), for example – should help to inform and improve models and theories explaining jet behavior, Tchekhovskoy said, and future studies could also include more complexity in the jet models, such as a longer sequence of shocks.

    “It would be really interesting to include gravity into these models,” he said, “and to see the dynamics of buoyant cavities that the jet fills up with hot magnetized plasma as it drills a hole” in the ambient medium.

    4
    Side-by-side comparison of density “snapshots” produced in a 3-D simulation of jets beaming out from a black hole (at the base of images). Red shows higher density and blue shows lower density. The black directional lines show magnetic field streamlines. The perturbed magnetic lines reflect both the emergence of irregular magnetic fields in the jets and the large-scale deviations of the jets out of the image plane, both caused by the 3-D magnetic kink instability. (Credit: Berkeley Lab, Purdue University)

    He added, “Seeing deeper into where the jets come from – we think the jets start at the black hole’s event horizon (a point of no return for matter entering the black hole) – would be really helpful to see in nature these ‘bounces’ in repeating shocks, for example. The EHT could resolve this structure and provide a nice test of our work.”

    This work was supported by NASA through the Astrophysics Theory Program and Einstein Fellowship, the National Science Foundation through an XSEDE supercomputer allocation, the NASA High-End Computing Program through the NASA Advanced Supercomputing Division at Ames Research Center, Purdue University, and UC Berkeley through the Theoretical Astrophysics Center fellowship and access to the Savio supercomputer.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 5:22 pm on August 2, 2017 Permalink | Reply
    Tags: LBNL, , New Simulations Could Help in Hunt for Massive Mergers of Neutron Stars and Black Holes,   

    From LBNL: “New Simulations Could Help in Hunt for Massive Mergers of Neutron Stars, Black Holes” 

    Berkeley Logo

    Berkeley Lab

    August 2, 2017
    Glenn Roberts Jr
    geroberts@lbl.gov

    1
    This image, from a computerized simulation, shows the formation of an inner disk of matter and a wide, hot disk of matter 5.5 milliseconds after the merger of a neutron star and a black hole. (Credit: Classical and Quantum Gravity)

    Now that scientists can detect the wiggly distortions in space-time created by the merger of massive black holes, they are setting their sights on the dynamics and aftermath of other cosmic duos that unify in catastrophic collisions.

    Working with an international team, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed new computer models to explore what happens when a black hole joins with a neutron star – the superdense remnant of an exploded star.

    Using supercomputers to rip open neutron stars

    The simulations, carried out in part at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), are intended to help detectors home in on the gravitational-wave signals.

    NERSC

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    Telescopes, too, can search for the brilliant bursts of gamma-rays and the glow of the radioactive matter that these exotic events can spew into surrounding space.

    In separate papers published in a special edition of the scientific journal Classical and Quantum Gravity, Berkeley Lab and other researchers present the results of detailed simulations.

    One of the studies models the first milliseconds (thousandths of a second) in the merger of a black hole and neutron star, and the other details separate simulations that model the formation of a disk of material formed within seconds of the merger, and of the evolution of matter that is ejected in the merger.

    2
    Early “snapshots” from a simulation of a neutron star-black hole merger. This entire animated sequence occurs within 43 milliseconds (43 thousandths of a second). (Credit: Classical and Quantum Gravity)

    That ejected matter likely includes gold and platinum and a range of radioactive elements that are heavier than iron.

    Any new information scientists can gather about how neutron stars rip apart in these mergers can help to unlock their secrets, as their inner structure and their likely role in seeding the universe with heavy elements are still shrouded in mystery.

    “We are steadily adding more realistic physics to the simulations,” said Foucart, who served as a lead author for one of the studies as a postdoctoral researcher in Berkeley Lab’s Nuclear Science Division.

    “But we still don’t know what’s happening inside neutron stars. The complicated physics that we need to model make the simulations very computationally intensive.”

    Finding signs of a black hole–neutron star merger

    Foucart, who will soon be an assistant professor at the University of New Hampshire, added, “We are trying to move more toward actually making models of the gravitational-wave signals produced by these mergers,” which create a rippling in space-time that researchers hope can be detected with improvements in the sensitivity of experiments including Advanced LIGO, the Laser Interferometer Gravitational-Wave Observatory.


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    In February 2016, LIGO scientists confirmed the first detection of a gravitational wave, believed to be generated by the merger of two black holes, each with masses about 30 times larger than the sun.

    The merger of a neutron star with a black hole or another neutron star is expected to generate gravitational waves that are slightly weaker than, but similar to, those of black hole–black hole mergers, Foucart said.

    Radioactive ‘waste’ in space

    Daniel Kasen, a scientist in the Nuclear Science Division at Berkeley Lab and associate professor of physics and astronomy at UC Berkeley who participated in the research, said that inside neutron stars “there may be exotic states of matter unlike anything realized anywhere else in the universe.”

    In some computer simulations the neutron star was swallowed whole by the black hole, while in others a fraction of its matter was coughed up into space. This ejected matter is estimated to range up to about one-tenth of the mass of the sun.

    While much of the matter gets sucked into the larger black hole that forms from the merger, “the material that gets flung out eventually turns into a kind of radioactive ‘waste,’” he said. “You can see the radioactive glow of that material for a period of days or weeks, from more than a hundred million light years away.” Scientists refer to this observable radioactive glow as a “kilonova.”

    The simulations use different sets of calculations to help scientists visualize how matter escapes from these mergers. By modeling the speed, trajectory, amount and type of matter, and even the color of the light it gives off, astrophysicists can learn how to track down actual events.

    The weird world of neutron stars

    The size range of neutron stars is set by the ultimate limit on how densely matter can be compacted, and neutron stars are among the most superdense objects we know about in the universe.

    Neutron stars have been observed to have masses up to at least two times that of our sun but measure only about 12 miles in diameter, on average, while our own sun has a diameter of about 865,000 miles. At large enough masses, perhaps about three times the mass of the sun, scientists expect that neutron stars must collapse to form black holes.

    A cubic inch of matter from a neutron star is estimated to weigh up to 10 billion tons. As their name suggests, neutron stars are thought to be composed largely of the neutrally charged subatomic particles called neutrons, and some models expect them to contain long strands of matter – known as “nuclear pasta” – formed by atomic nuclei that bind together.
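    That figure can be sanity-checked from the numbers quoted above, roughly two solar masses packed into a sphere about 12 miles across; the short calculation below lands in the same ballpark (tens of billions of tons per cubic inch, depending on the assumed radius).

```python
import math

M_SUN_KG = 1.989e30
MILE_M = 1609.34
INCH_M = 0.0254
SHORT_TON_KG = 907.18

mass_kg = 2.0 * M_SUN_KG                      # roughly two solar masses
radius_m = 6.0 * MILE_M                       # roughly a 12-mile diameter
volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
density_kg_m3 = mass_kg / volume_m3

cubic_inch_kg = density_kg_m3 * INCH_M**3
print(f"average density ~ {density_kg_m3:.1e} kg/m^3")
print(f"one cubic inch ~ {cubic_inch_kg / SHORT_TON_KG:.1e} tons")
```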

    Neutron stars are also expected to be almost perfectly spherical, with a rigid and incredibly smooth crust and an ultrapowerful magnetic field. They can spin at a rate of about 43,000 revolutions per minute (RPMs), or about five times faster than a NASCAR race car engine’s RPMs.

    The aftermath of neutron star mergers

    The researchers’ simulations showed that the radioactive matter that first escapes the black hole mergers may be traveling at speeds of about 20,000 to 60,000 miles per second, or up to about one-third the speed of light, as it is swung away in a long “tidal tail.”

    “This would be strange material that is loaded with neutrons,” Kasen said. “As that expanding material cools and decompresses, the particles may be able to combine to build up into the heaviest elements.” This latest research shows how scientists might find these bright bundles of heavy elements.

    “If we can follow up LIGO detections with telescopes and catch a radioactive glow, we may finally witness the birthplace of the heaviest elements in the universe,” he said. “That would answer one of the longest-standing questions in astrophysics.”

    Most of the matter in a black hole–neutron star merger is expected to be sucked up by the black hole within a millisecond of the merger, and other matter that is not flung away in the merger is likely to form an extremely dense, thin, donut-shaped halo of matter.

    The thin, hot disk of matter that is bound by the black hole is expected to form within about 10 milliseconds of the merger, and to be concentrated within about 15 to 70 miles of it, the simulations showed. This first 10 milliseconds appears to be key in the long-term evolution of these disks.

    Over timescales ranging from tens of milliseconds to several seconds, the hot disk spreads out and launches more matter into space. “A number of physical processes – from magnetic fields to particle interactions and nuclear reactions – combine in complex ways to drive the evolution of the disk,” said Rodrigo Fernández, an assistant professor of physics at the University of Alberta in Canada who led one of the studies.

    Simulations carried out on NERSC’s Edison supercomputer were crucial in understanding how the disk ejects matter and in providing clues for how to observe this matter, said Fernández, a former UC Berkeley postdoctoral researcher.

    What’s next?

    Eventually, it may be possible for astronomers scanning the night sky to find the “needle in a haystack” of radioactive kilonovae from neutron star mergers that had been missed in the LIGO data, Kasen said.

    “With improved models, we are better able to tell the observers exactly which flashes of light are the signals they are looking for,” he said. Kasen is also working to build increasingly sophisticated models of neutron star mergers and supernovae through his involvement in the DOE Exascale Computing Project.

    As the sensitivity of gravitational-wave detectors improves, Foucart said, it may be possible to detect a continuous signal produced by even a tiny bump on the surface of a neutron star, for example, or signals from theorized one-dimensional objects known as cosmic strings.

    “This could also allow us to observe events that we have not even imagined,” he said.


    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     