Tagged: Physics

  • richardmitnick 2:56 pm on April 24, 2017 Permalink | Reply
    Tags: Change the color of assembled nanoparticles with an electrical stimulant, Color is dynamically tunable, Physics

    From LLNL: “Research comes through with flying colors” 


    Lawrence Livermore National Laboratory

    April 24, 2017
    Anne M Stark
    stark8@llnl.gov
    925-422-9799

    Dynamic color tunability of amorphous photonic structures in response to external electrical stimuli using electrophoretic deposition process. Image by Ryan Chen/LLNL

    Like a chameleon changing colors to blend into the environment, Lawrence Livermore researchers have created a technique to change the color of assembled nanoparticles with an electrical stimulant.

    The team used core/shell nanoparticles to improve color contrast and expand color schemes by using a combination of pigmentary color (from inherent properties) and structural color (from particle assemblies).

    “We were motivated by various examples in living organisms, such as birds, insects and plants,” said Jinkyu Han, lead author of a paper appearing on the cover of the April 3 edition of the journal Advanced Optical Materials. “The assemblies of core/shell nanoparticles can not only imitate interesting colors observed in living organisms, but can be applied in electronic paper displays and colored-reflective photonic displays.”

    Applications of electronic visual displays include electronic pricing labels in retail shops and digital signage, time tables at bus stations, electronic billboards, mobile phone displays and e-readers able to display digital versions of books and magazines.

    The resulting non-iridescent brilliant colors can be manipulated by shell thickness, particle concentration and external electrical stimuli using an electrophoretic deposition process.

    The technique is fully reversible with instantaneous color changes as well as noticeable differences between transmitted and reflected colors.

    Photographs of nanostructures in an electrophoretic deposition (EPD) cell in the absence (OFF state) and presence (ON state) of applied voltage under diffusive illumination. Black carbon tape (cut into the LLNL logo) and white paper were placed on the back of the cell to distinguish the reflected and transmitted colors more clearly.

    The particle arrangement in the system is neither perfectly ordered nor crystalline, a structure referred to as an “amorphous photonic crystal,” which produces color from reflected light that does not change with viewing angle.

    “The angle independence of the observed colors from the assemblies is quite a unique and interesting property of our system and is ideal for display applications,” Han said.

    The resulting color is dynamically tunable in response to electric stimuli since the nanoparticle arrangement (i.e., inter-particle distance, particle structures) is highly affected by the electric field.
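
    A rough feel for how particle spacing maps to color can be had from a first-order, Bragg-like estimate for a photonic structure, λ ≈ 2·n_eff·d. The sketch below is purely illustrative, assuming a hypothetical effective refractive index and spacings; it is not the model used in the LLNL study.

```python
# Illustrative first-order estimate of the reflected peak wavelength of a
# particle assembly, lambda ~ 2 * n_eff * d. The effective refractive index
# and spacings below are assumptions for illustration, not values from the study.

def peak_wavelength_nm(spacing_nm: float, n_eff: float = 1.33) -> float:
    """First-order Bragg-like estimate of the reflected peak wavelength."""
    return 2.0 * n_eff * spacing_nm

for d in (150, 180, 210, 240):  # hypothetical inter-particle spacings in nm
    print(f"spacing {d} nm -> reflected peak ~ {peak_wavelength_nm(d):.0f} nm")
```

    In this simple picture, squeezing the particles closer together (for example, with an applied field) shifts the reflected peak toward shorter, bluer wavelengths.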

    Contributing authors are Elaine Lee, Jessica Dudoff, Michael Bagge-Hansen, Jonathan Lee, Andrew Pascall, Joshua Kuntz, Trevor Willey, Marcus Worsley and T. Yong-Jin Han.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 9:54 am on April 23, 2017 Permalink | Reply
    Tags: Computer modelling, Modified Newtonian Dynamics or MOND, Physics, Simulating galaxies

    From Durham: “Simulated galaxies provide fresh evidence of dark matter” 

    Durham U bloc

    Durham University

    21 April 2017
    No writer credit

    A simulated galaxy is pictured, showing the main ingredients that make up a galaxy: the stars (blue), the gas from which the stars are born (red), and the dark matter halo that surrounds the galaxy (light grey). No image credit.

    Further evidence of the existence of dark matter – the mysterious substance that is believed to hold the Universe together – has been produced by cosmologists at Durham University.

    Using sophisticated computer modelling techniques, the research team simulated the formation of galaxies in the presence of dark matter and demonstrated that the galaxies’ size and rotation speed are linked to their brightness in much the same way as in astronomers’ observations.

    One of the simulations is pictured, showing the main ingredients that make up a galaxy: the stars (blue), the gas from which the stars are born (red), and the dark matter halo that surrounds the galaxy (light grey).

    Alternative theories

    Until now, theories of dark matter have predicted a much more complex relationship between the size, mass and brightness (or luminosity) of galaxies than is actually observed, which has led to dark matter sceptics proposing alternative theories that are seemingly a better fit with what we see.

    The research, led by Dr Aaron Ludlow of the Institute for Computational Cosmology, is published in the academic journal Physical Review Letters.

    Most cosmologists believe that more than 80 per cent of the total mass of the Universe is made up of dark matter – a mysterious particle that has so far not been detected but explains many of the properties of the Universe such as the microwave background measured by the Planck satellite.

    CMB per ESA/Planck

    ESA/Planck

    Convincing explanations

    Alternative theories include Modified Newtonian Dynamics, or MOND. While this does not explain some observations of the Universe as convincingly as dark matter theory, it has, until now, provided a simpler description of the coupling between brightness and rotation velocity observed in galaxies of all shapes and sizes.

    The Durham team used powerful supercomputers to model the formation of galaxies of various sizes, compressing billions of years of evolution into a few weeks, in order to demonstrate that the existence of dark matter is consistent with the observed relationship between mass, size and luminosity of galaxies.
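
    As a toy illustration of the kind of scaling being compared, a galaxy’s rotation speed follows from the mass enclosed within a given radius, while observed galaxies show a steep luminosity–rotation trend (roughly L ∝ v⁴, the Tully-Fisher relation). The numbers below are generic and illustrative, not outputs of the Durham simulations.

```python
# Toy check: circular (rotation) speed from the enclosed mass, v = sqrt(G*M/r).
# Values are generic and illustrative, not outputs of the Durham simulations.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m

def circular_velocity_kms(mass_msun: float, radius_kpc: float) -> float:
    return math.sqrt(G * mass_msun * M_SUN / (radius_kpc * KPC)) / 1e3

# A Milky-Way-like enclosed mass (stars + gas + dark halo) inside 20 kpc:
print(f"rotation speed ~ {circular_velocity_kms(2e11, 20):.0f} km/s")
```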

    Long-standing problem resolved

    Dr Ludlow said: “This solves a long-standing problem that has troubled the dark matter model for over a decade. The dark matter hypothesis remains the main explanation for the source of the gravity that binds galaxies. Although the particles are difficult to detect, physicists must persevere.”

    Durham University collaborated on the project with Leiden University, Netherlands; Liverpool John Moores University, England; and the University of Victoria, Canada. The research was funded by the European Research Council, the Science and Technology Facilities Council, the Netherlands Organisation for Scientific Research, COFUND and The Royal Society.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Durham U campus

    Durham University is distinctive – a residential collegiate university with long traditions and modern values. We seek the highest distinction in research and scholarship and are committed to excellence in all aspects of education and transmission of knowledge. Our research and scholarship affect every continent. We are proud to be an international scholarly community which reflects the ambitions of cultures from around the world. We promote individual participation, providing a rounded education in which students, staff and alumni gain both the academic and the personal skills required to flourish.

     
  • richardmitnick 11:20 am on April 19, 2017 Permalink | Reply
    Tags: OSU - Ohio State University, Physics, Tidal disruption event (TDE) known as iPTF16fnl

    From Ohio State via phys.org: “Ultraviolet spectroscopic evolution of a tidal disruption event investigated by astronomers” 

    OSU

    Ohio State University

    phys.org

    April 19, 2017
    Tomasz Nowakowski

    The UV evolution of iPTF16fnl as revealed by HST/STIS spectra and Swift photometry.
    The spectra have been smoothed with a 5 pixel boxcar and scaled by a constant factor to best match the Swift photometry for ease of comparison. The dashed lines show our blackbody fits to the host subtracted Swift fluxes. Prominent atomic transitions are marked with vertical dotted lines. The thin gray line shows our estimate of the UV spectrum of the host based on the SED model. Credit: Brown et al., 2017.

    NASA/ESA Hubble Telescope

    NASA/SWIFT Telescope

    An international team of astronomers led by Jonathan S. Brown of the Ohio State University in Columbus, Ohio, has studied the ultraviolet spectroscopic evolution of a nearby low-luminosity tidal disruption event (TDE) known as iPTF16fnl. The results of this study, published Apr. 7 on arXiv.org, offer new clues on the nature of this TDE.

    A TDE occurs when a star passes close enough to a supermassive black hole to be pulled apart by the black hole’s tidal forces. The tidally disrupted stellar debris then rains down on the black hole, and radiation emerges from the innermost region of the accreting debris, signaling the presence of a TDE.

    TDEs serve as invaluable probes of strong gravity and accretion physics, providing answers about the formation and evolution of supermassive black holes. While most such events are discovered in optical transient surveys, ultraviolet observations provide an opportunity to learn much more about the kinematics and ionization structure of tidally disrupted stellar debris.

    iPTF16fnl was discovered on Aug. 26, 2016 as a transient consistent with the center of the galaxy Mrk 0950. This transient was later classified as a rapidly evolving, low-luminosity TDE, located about 220 million light years away. It is the nearest TDE found so far and its black hole mass is estimated to be not greater than 5.5 million solar masses.
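
    The quoted black hole mass limit can be turned into an order-of-magnitude picture of the event using the standard tidal-radius estimate r_t ≈ R_*(M_BH/M_*)^(1/3). The sketch below assumes a Sun-like star and is not a calculation from the paper.

```python
# Order-of-magnitude tidal radius, r_t ~ R_star * (M_bh / M_star)**(1/3), for a
# Sun-like star and a 5.5 million solar-mass black hole (the upper limit quoted
# above), compared with the hole's Schwarzschild radius r_s = 2*G*M/c**2.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

m_bh = 5.5e6 * M_SUN
r_t = R_SUN * (m_bh / M_SUN) ** (1.0 / 3.0)
r_s = 2.0 * G * m_bh / C**2

print(f"tidal radius         ~ {r_t:.2e} m")
print(f"Schwarzschild radius ~ {r_s:.2e} m")
print(f"r_t / r_s            ~ {r_t / r_s:.0f}")  # disruption occurs outside the horizon
```

    Because the tidal radius lies well outside the event horizon for a black hole of this mass, the disrupted star produces a visible flare rather than being swallowed whole.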

    Due to the proximity of iPTF16fnl to Earth, Brown and his colleagues decided to initiate a follow-up observational campaign in order to study this event in detail. These observations were conducted using the Hubble Space Telescope’s (HST) Space Telescope Imaging Spectrograph (STIS) and the Ultraviolet/Optical Telescope (UVOT) onboard NASA’s Swift spacecraft. The researchers also employed several ground-based observatories in order to perform photometric and spectroscopic monitoring of this event. All these instruments allowed the team to spectroscopically observe the temporal evolution of a TDE in ultraviolet light for the first time ever.

    “We presented for the first time the UV spectroscopic evolution of a TDE using data from HST/STIS,” Brown’s team wrote in a research paper available on arXiv.org.

    The results show that the shape and velocity offset of the broad ultraviolet emission and absorption features in iPTF16fnl evolve with time.

    “There is significant evolution in the shape and central wavelength of the line profiles over the course of our observations, such that at early times, the lines are broad and redshifted, while at later times, the lines are significantly narrower and peak near the wavelengths of their corresponding atomic transitions,” the paper reads.

    The researchers found that the ultraviolet spectra of iPTF16fnl closely resemble those of ASASSN-14li (another nearby TDE) and of nitrogen-rich quasars. The optical spectra of iPTF16fnl, in turn, resemble those of several other optically discovered TDEs.

    “The dominant emission features closely resemble those seen in the UV spectra of the TDE ASASSN-14li and are also similar to those of N-rich quasars,” the authors wrote.

    All the data obtained by the various space- and ground-based telescopes allowed the scientists to conclude that iPTF16fnl is subluminous and evolves more rapidly than other optically detected TDEs.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:05 am on April 19, 2017 Permalink | Reply
    Tags: Physics

    From Ethan Siegel: “Why Does The Proton Spin? Physics Holds A Surprising Answer” 

    Ethan Siegel
    Apr 19, 2017

    The three valence quarks of a proton contribute to its spin, but so do the gluons, sea quarks and antiquarks, and orbital angular momentum as well. APS/Alan Stonebraker

    You can take any particle in the Universe and isolate it from everything else, yet there are some properties that can never be taken away. These are intrinsic, physical properties of the particle itself — properties like mass, charge, or angular momentum — and will always be the same for any single particle. Some particles are fundamental, like electrons, and their mass, charge and angular momentum are fundamental, too. But other particles are composite particles, like the proton. While the proton’s charge (of +1) is due to the sum of the three quarks that make it up (two up quarks of +2/3 and one down quark of -1/3), the story of its angular momentum is much more complicated. Even though it’s a spin = 1/2 particle, just like the electron, simply adding the spins of the three quarks that make it up together isn’t enough.
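
    The charge bookkeeping in that sentence is easy to verify with exact fractions; the angular momentum bookkeeping, as described below, is not nearly so simple.

```python
# Check that the quoted valence-quark charges sum to the proton's +1 (and, for
# comparison, to the neutron's 0).
from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)
print(2 * up + down)   # proton (uud):  1
print(up + 2 * down)   # neutron (udd): 0
```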

    The three valence quarks in the proton, two up and one down, were initially thought to constitute its spin of 1/2. But that simple idea didn’t conform to experiments. Arpad Horvath.

    There are two things that contribute to angular momentum: spin, which is the intrinsic angular momentum inherent to any fundamental particle, and orbital angular momentum, which is what you get from two or more fundamental particles that make up a composite particle. (Don’t be fooled: no particles are actually, physically spinning, but “spin” is the name we give to this property of intrinsic angular momentum.) A proton has two up quarks and one down quark, and they’re held together by gluons: massless, color-charged particles which mutually bind the three quarks together. Each quark has a spin of 1/2, so you might simply think that so long as one spins in the opposite direction of the other two, you’d get the proton’s spin. Up until the 1980s, that’s exactly how the standard reasoning went.

    The proton’s structure, modeled along with its attendant fields, shows that the three valence quarks alone cannot account for the proton’s spin, and instead account only for a fraction of it. Brookhaven National Laboratory

    With two up quarks — two identical particles — in the ground state, you’d expect that the Pauli exclusion principle would prevent these two identical particles from occupying the same state, and so one would have to be +1/2 while the other was -1/2. Therefore, you’d reason, that third quark (the down quark) would give you a total spin of 1/2. But then the experiments came, and there was quite a surprise at play: when you smashed high-energy particles into the proton, the three quarks inside (up, up, and down) only contributed about 30% to the proton’s spin.

    The internal structure of a proton, with quarks, gluons, and quark spin shown. Brookhaven National Laboratory

    There are three good reasons that these three components might not add up so simply.

    1. The quarks aren’t free, but are bound together inside a small structure: the proton. Confining an object can shift its spin, and all three quarks are very much confined.
    2. There are gluons inside, and gluons spin, too. The gluon spin can effectively “screen” the quark spin over the span of the proton, reducing its effects.
    3. And finally, there are quantum effects that delocalize the quarks, preventing them from being in exactly one place like particles and requiring a more wave-like analysis. These effects can also reduce or alter the proton’s overall spin.

    In other words, that missing 70% is real.

    As better experiments and theoretical calculations have come about, our understanding of the proton has gotten more sophisticated, with gluons, sea quarks, and orbital interactions coming into play. Brookhaven National Laboratory

    Maybe, you’d think, that those were just the three valence quarks, and that quantum mechanics, from the gluon field, could spontaneously create quark/antiquark pairs. That part is true, and makes important contributions to the proton’s mass. But as far as the proton’s angular momentum goes, these “sea quarks” are negligible.

    The fermions (quarks and leptons), antifermions (antiquarks and antileptons), all spin = 1/2, and the bosons (of integer spin) of the standard model, all shown together. E. Siegel

    Maybe, then, the gluons would be an important contributor? After all, the standard model of elementary particles is full of fermions (quarks and leptons) which are all spin = 1/2, and bosons like the photon, the W-and-Z, and the gluons, all of which are spin = 1. (Also, there’s the Higgs, of spin = 0, and if quantum gravity is real, the graviton, of spin = 2.) Given all the gluons inside the proton, perhaps they matter, too?

    Collisions of particles at high energies inside a sophisticated detector, like Brookhaven’s PHENIX detector at RHIC, have led the way in measuring the spin contributions of gluons. Brookhaven National Laboratory

    There are two ways to test that: experimentally and theoretically. From an experimental point of view, you can collide particles deep inside the proton, and measure how the gluons react. The gluons that contribute the most to the proton’s overall momentum are seen to contribute substantially to the proton’s angular momentum: about 40%, with an uncertainty of ±10%. With better experimental setups (which would require a new electron/ion collider), we could probe down to lower-momentum gluons, achieving even greater accuracies.

    When two protons collide, it isn’t just the quarks making them up that can collide, but the sea quarks, gluons, and beyond that, field interactions. All can provide insights into the spin of the individual components. CERN / CMS Collaboration

    But the theoretical calculations matter, too! A calculational technique known as Lattice QCD has been steadily improving over the past few decades, as the power of supercomputers has increased exponentially. Lattice QCD has now reached the point where it can predict that the gluon contribution to the proton’s spin is 50%, again with a few percent uncertainty. What’s most remarkable is that the calculations show that — with this contribution — the gluon screening of the quark spin is ineffective; the quarks must be screened from a different effect.

    As computational power and Lattice QCD techniques have improved over time, so has the accuracy to which various quantities about the proton, such as its component spin contributions, can be computed. Laboratoire de Physique de Clermont / ETM Collaboration

    The remaining 20% must come from orbital angular momentum, where gluons and even virtual pions surround the three quarks, since the “sea quarks” have a negligible contribution, both experimentally and theoretically.
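
    Pulling the numbers quoted above together gives a rough spin budget. The values below are rounded central values (the experimental gluon figure is about 40% ± 10%, the lattice-QCD value closer to 50%), so this is an illustrative tally rather than a precise decomposition.

```python
# Rough tally of the proton spin budget using the approximate figures quoted in
# the article. These are rounded central values with sizable uncertainties.
contributions = {
    "valence quark spin":       0.30,  # deep-inelastic-scattering experiments
    "gluon spin":               0.50,  # lattice QCD (~40% +/- 10% from RHIC data)
    "orbital angular momentum": 0.20,  # inferred remainder; sea quarks ~ negligible
}
for name, fraction in contributions.items():
    print(f"{name:26s} {fraction:.0%}")
print(f"{'total':26s} {sum(contributions.values()):.0%}")  # accounts for the full spin-1/2
```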

    A proton, more fully, is made up of spinning valence quarks, sea quarks and antiquarks, and spinning gluons, all of which mutually orbit one another. That is where its spin comes from. Zhong-Bo Kang, 2012, RIKEN, Japan

    It’s remarkable and fascinating that both theory and experiment agree, but most incredible of all is the fact that the simplest explanation for the proton’s spin — simply adding up the three quarks — gives you the right answer for the wrong reason! With 70% of the proton’s spin coming from gluons and orbital interactions, and with experiments and Lattice QCD calculations improving hand-in-hand, we’re finally closing in on exactly why the proton “spins” with the exact value that it has.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 1:45 pm on April 18, 2017 Permalink | Reply
    Tags: Gabriella Carini, How do you catch femtosecond light?, Physics

    From SLAC: “How do you catch femtosecond light?” 


    SLAC Lab

    Gabriella Carini
    Staff Scientist
    Joined SLAC: 2011
    Specialty: Developing detectors that capture light from X-ray sources
    Interviewed by: Amanda Solliday

    Gabriella Carini enjoys those little moments—after hours and hours of testing in clean rooms, labs and at X-ray beamlines—when she first sees an instrument work.

    She earned her PhD in electronic engineering at the University of Palermo in Italy and now heads the detectors department at the Linac Coherent Light Source (LCLS), the X-ray free-electron laser at SLAC.

    SLAC/LCLS

    Scientists from around the world use the laser to probe natural processes that occur in tiny slivers of time. To see on this timescale, they need a way to collect the light and convert it into data that can be examined and interpreted.

    It’s Carini’s job to make sure LCLS has the right detector equipment at hand to catch the “precious”, very intense laser pulses, which may last only a few femtoseconds.

    When the research heads in new directions, as it constantly does, she must look for fresh technology and turn these ideas into reality.

    When did you begin working with detectors?

    I moved to the United States as a doctoral student. My professor at the time suggested I join a collaboration at Brookhaven National Laboratory, where I started developing gamma ray detectors to catch radioactive materials.

    Radioactive materials give off gamma rays as they decay, and gamma rays are the most energetic photons, or particles of light. The detectors I worked on were made from cadmium zinc telluride, which has very good stopping power for highly energetic photons. These detectors can identify radioactive isotopes for security—such as the movement of nuclear materials—and contamination control, but also gamma rays for medical and astrophysical observations.

    We had some medical projects going on at the time, too, with detectors that scan for radioactive tracers used to map tissues and organs with positron emission tomography.

    From gamma ray detectors, I then moved to X-rays, and I began working on the earliest detectors for LCLS.

    How do you explain your job to someone outside the X-ray science community?

    I say, “There are three ingredients for an experiment—the source, the sample and the detector.”

    You need a source of light that illuminates your sample, which is the problem you want to solve or investigate. To understand what is happening, you have to be able to see the signal produced by the light as it interacts with the sample. That’s where the detector comes in. For us, the detector is like the “eyes” of the experimental set-up.

    What do you like most about your work?


    There’s always a way we can help researchers optimize their experiments, tweak some settings, do more analysis and correction.

    This is important because scientists are going to encounter a lot of different types of detectors if they work at various X-ray facilities.

    I like to have input from people who are running the experiments. Because I did experiments myself as a graduate student, I’m very sensitive to whether a system is user-friendly. If you don’t make something that researchers can take the best advantage of, then you didn’t do your job fully.

    And detectors are never perfect, no matter which one you buy or build.

    There are a lot of people who have to come together to make a detector system. It’s not one person’s work. It’s many, many people with lots of different expertise. You need to have lots of good interpersonal skills.

    What are some of the challenges of creating detectors for femtosecond science?

    In more traditional X-ray sources the photons arrive distributed over time, one after the other, but when you work with ultrafast laser pulses like the ones from LCLS, all your information about a sample arrives in a few femtoseconds. Your detector has to digest this entire signal at once, process the information and send it out before another pulse comes. This requires deep understanding of the detector physics and needs careful engineering. You need to optimize the whole signal chain from the sensor to the readout electronics to the data transmission.
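
    A back-of-the-envelope time budget makes the point concrete: the whole signal chain, from charge collection to readout, has to finish between pulses. The rates below are today’s LCLS repetition rate of 120 pulses per second (quoted elsewhere on this page) and the up-to-a-million-pulses-per-second goal for LCLS-II discussed later in this interview.

```python
# Time available to the detector between X-ray pulses at different repetition rates.
for rate_hz in (120, 1_000_000):   # today's LCLS vs. the LCLS-II goal
    budget_us = 1e6 / rate_hz
    print(f"{rate_hz:>9,d} pulses/s -> {budget_us:>8,.1f} microseconds per pulse")
```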

    We also have mechanical challenges because we have to operate in very unusual conditions: intense optical lasers, injectors with gas and liquids, etc. In many cases we need to use special filters to protect the detectors from these sources of contamination.

    And often, you work in vacuum. “Soft,” or low-energy, X-rays are absorbed very quickly in air, so your entire system has to be vacuum-compatible. With many of our substantial electronics, this requires some care.

    So there are lots of things to take into account. Those are just a few examples. It’s very complicated and can vary quite a bit from experiment to experiment.

    Is there a new project you are really excited about?

    All of LCLS-II—this fills my life! We’re coming up with new ideas and new technologies for SLAC’s next X-ray laser, which will have a higher firing rate—up to a million pulses per second. For me, this is a multidimensional puzzle. Every science case and every instrument has its own needs and we have to find a route through the many options and often-competing parameters to achieve our goals.

    X-ray free-electron lasers are a big driver for detector development. Ten years ago, no one would have talked about X-ray cameras delivering 10,000 pictures per second. The new X-ray lasers are really a game-changer in developing detectors for photon science, because they require detectors that are just not readily available.
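
    To see why these frame rates are such a driver for detector development, consider the raw data rate of an X-ray camera. The frame size assumed below (1 megapixel at 16 bits per pixel) is a hypothetical example, not an LCLS specification.

```python
# Illustrative raw data rates for an X-ray camera at different frame rates.
# The 1-megapixel, 16-bit frame size is an assumption for illustration only.
BYTES_PER_FRAME = 1_000_000 * 2   # 1 Mpixel x 2 bytes per pixel

for frames_per_s in (120, 10_000, 1_000_000):
    rate_gb_s = BYTES_PER_FRAME * frames_per_s / 1e9
    print(f"{frames_per_s:>9,d} frames/s -> {rate_gb_s:>7,.2f} GB/s")
```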

    LCLS-II will be challenging, but it’s exciting. For me, it’s thinking about what we can do now for the very first day of operation. And while doing that, we need to keep pushing the limits of what we have to do next to take full advantage of our new machine.


    SLAC LCLS-II

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 7:10 am on April 18, 2017 Permalink | Reply
    Tags: Agostino Marinelli, Physics

    From SLAC: “Seeing the World in Femtoseconds” 


    SLAC Lab

    Q&A series with SLAC scientists

    4.17.17
    Kathryn Jepsen

    Agostino Marinelli
    Accelerator Physicist

    Joined SLAC: 2012

    Specialty: Improving the capabilities of the Linac Coherent Light Source

    SLAC/LCLS

    Agostino “Ago” Marinelli first met pioneering accelerator physicist Claudio Pellegrini as an undergraduate student at the University of Rome. It was 2007, a couple of years before the Linac Coherent Light Source (LCLS) came online at SLAC, and people were abuzz about free-electron laser physics.

    Caught up in the excitement, Marinelli pursued a PhD in accelerator physics at the University of California, Los Angeles under Jamie Rosenzweig’s mentorship. Today he is involved in research and development related to femtosecond science at LCLS.

    Marinelli focuses on research at the femtosecond timescale because, he says, “it’s the fastest we can reach now with X-rays, and as an accelerator physicist, I get excited about technical things like that.”

    Why did you get involved in X-ray science?

    Claudio Pellegrini

    Part of it was Claudio—he’s a very charismatic character. He’s an inspiring character. The field was very interesting. I thought it was a good way to spend my PhD.
    LCLS was promising so much innovation: a laser 10 billion times brighter than we had then. That sounds like something that somebody who is 24 would love to get involved in. It just sounded like something that would change science in a positive way, and I wanted to be a part of it.

    What is a free-electron laser?
    Free-electron lasers were invented by John Madey at Stanford in 1971; later on in the ’90s Claudio Pellegrini and collaborators proposed to extend free-electron lasers to the X-ray regime. They were the next step after synchrotron light sources.

    Synchrotrons send electrons around in a circle. That gives you radiation you can use in experiments. The difference between a synchrotron and the free-electron laser is the same difference between this light [points to a ceiling light] and a laser. It’s the difference between a bunch of kids making noise and a choir.

    In a synchrotron, the electrons are all doing the same thing, going around in a circle, but they are unaware of each other. They are all emitting X-rays in a random way. What makes a free-electron laser a laser is that all the electrons are emitting radiation in a coherent way. They are all synchronized.
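
    The choir analogy can be made quantitative: N emitters radiating with random phases add in intensity (proportional to N), while N emitters radiating in phase add in amplitude, so the intensity scales as N². The sketch below is an idealized illustration of that scaling, not a model of the LCLS beam.

```python
# Idealized comparison of N emitters with random phases (synchrotron-like) versus
# N emitters locked in phase (FEL-like), summing their fields as unit phasors.
import cmath
import random

N = 10_000
incoherent = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi)) for _ in range(N))
coherent = sum(cmath.exp(0j) for _ in range(N))

print(f"incoherent intensity ~ {abs(incoherent) ** 2:.2e}   (of order N   = {N:.0e})")
print(f"coherent intensity   ~ {abs(coherent) ** 2:.2e}   (of order N^2 = {N ** 2:.0e})")
```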

    Also, since in an FEL you are using very intense and short electron bunches, the X-ray pulses will also be very short, down to the femtosecond level.

    What do you do with the free-electron laser?

    We talk to the users—they’re researchers that have some science they want to study with the machine. Then we “shape” the X-rays—set up the machine in a way that’s ideal for that experiment. The LCLS accelerator is very flexible. You can do all sorts of tricks with it—like arbitrarily changing the pulse duration, varying the X-ray polarization or making multiple pulses of different colors.

    Speaking of which, in 2014 the European Physical Society awarded you the Frank Sacherer Prize for your work using “two-color” beams with LCLS. What is that about?

    Normally LCLS shoots 120 X-ray pulses a second. But you can also make it send two pulses of different energies, separated by a few to 100 femtoseconds. You excite your sample with the first one and probe it with the second. You have to observe it within femtoseconds after you excite it because reactions happen that fast.

    Normally you would excite the sample with an external optical laser; that’s how pump-probe is done. But in molecular dynamics, if you can excite a molecule with X-rays instead of an optical laser, you can get atom specificity—you can target a specific atom in the molecule.

    Each atom has a core energy level. If you know that, you can shoot the X-ray and hit only the oxygen in a molecule; oxygen is the only thing that is going to react. With two pulses at separate energies, you can target different atoms in a molecule to see which one triggers a certain reaction.
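
    The atom specificity Marinelli describes comes from the fact that each element’s core (K-shell) binding energy is distinct, so the X-ray photon energy can be tuned to one element’s edge. The edge values below are rounded textbook numbers, given for illustration only.

```python
# Approximate K-edge (core-level) binding energies, illustrating element-specific
# excitation. Values are rounded textbook numbers in electronvolts.
HC_EV_NM = 1239.84   # Planck constant x speed of light, in eV*nm

k_edges_ev = {"carbon": 284, "nitrogen": 410, "oxygen": 543}

for element, edge_ev in k_edges_ev.items():
    print(f"{element:8s} K-edge ~ {edge_ev} eV  (photon wavelength ~ {HC_EV_NM / edge_ev:.1f} nm)")
```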

    What kinds of things do you study on the femtosecond scale?

    A femtosecond is close to the fundamental scale of atomic and molecular physics—so, things like chemistry.

    A chemical reaction is essentially two molecules or atoms interacting in some way and sharing charge and giving away energy. Ultimately to understand that, you have to understand how charge and energy flow in a molecule. You have to understand the very fundamental motion of electrons and ions in the molecule. On the femtosecond scale, you can see the positions of the atoms rearranging as it happens.

    Chemical reactions are a dynamic process. They start with something. They end with something. We want to know what happens in between.

    Why?

    If you want the reaction to end with something else, if you want it to end with something slightly different, you want to understand how it happens so you can make changes on purpose.

    What are you most excited about now?

    I’m really excited about what I’m about to do, which is this sub-femtosecond project called XLEAP. We will shape the LCLS electron beam with a high-power infrared laser and use it to generate pulses that are shorter than a femtosecond! What we will be looking at is energy and electrons moving around a molecule, which happens even faster than the atoms rearranging.

    Right now we’re really blind to all of this. To me, the way I understand it is, going to that timescale, you’re peeking into the very fundamental, quantum nature of the electrons in the molecule.

    If you ask me, “What is the ultimate problem it will solve for us?”—the answer is: I don’t know. In general when you’re blind to some fundamental process in nature and suddenly you can see it, my guess is something good is going to come of it.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 3:55 pm on April 17, 2017 Permalink | Reply
    Tags: Physics

    From SLAC: “SLAC’s X-ray Laser Glimpses How Electrons Dance with Atomic Nuclei in Materials” 


    SLAC Lab

    September 22, 2016

    Studies Could Help Design and Control Materials with Intriguing Properties, Including Novel Electronics, Solar Cells and Superconductors.

    From hard to malleable, from transparent to opaque, from channeling electricity to blocking it: Materials come in all types. A number of their intriguing properties originate in the way a material’s electrons “dance” with its lattice of atomic nuclei, which is also in constant motion due to vibrations known as phonons.

    This coupling between electrons and phonons determines how efficiently solar cells convert sunlight into electricity. It also plays key roles in superconductors that transfer electricity without losses, topological insulators that conduct electricity only on their surfaces, materials that drastically change their electrical resistance when exposed to a magnetic field, and more.

    At the Department of Energy’s SLAC National Accelerator Laboratory, scientists can study these coupled motions in unprecedented detail with the world’s most powerful X-ray laser, the Linac Coherent Light Source (LCLS). LCLS is a DOE Office of Science User Facility.

    SLAC/LCLS

    An illustration shows how laser light excites electrons (white spheres) in a solid material, creating vibrations in its lattice of atomic nuclei (black and blue spheres). SLAC’s LCLS X-ray laser reveals the ultrafast “dance” between electrons and vibrations that accounts for many important properties of materials. (SLAC National Accelerator Laboratory)

    “It has been a long-standing goal to understand, initiate and control these unusual behaviors,” says LCLS Director Mike Dunne. “With LCLS we are now able to see what happens in these materials and to model complex electron-phonon interactions. This ability is central to the lab’s mission of developing new materials for next-generation electronics and energy solutions.”

    LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe.
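
    That comparison checks out with one line of arithmetic, taking the age of the universe to be about 13.8 billion years:

```python
# 1 femtosecond : 1 second  versus  ~7 minutes : age of the universe.
AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds
print(f"1 fs / 1 s              = {1e-15:.1e}")
print(f"7 min / age of universe = {7 * 60 / AGE_OF_UNIVERSE_S:.1e}")   # also ~1e-15
```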

    Two recent studies made use of these capabilities to study electron-phonon interactions in lead telluride, a material that excels at converting heat into electricity, and chromium, which at low temperatures has peculiar properties similar to those of high-temperature superconductors.

    Turning Heat into Electricity and Vice Versa

    Lead telluride, a compound of the chemical elements lead and tellurium, is of interest because it is a good thermoelectric: It generates an electrical voltage when two opposite sides of the material have different temperatures.

    “This property is used to power NASA space missions like the Mars rover Curiosity and to convert waste heat into electricity in high-end cars,” says Mariano Trigo, a staff scientist at the Stanford PULSE Institute and the Stanford Institute for Materials and Energy Sciences (SIMES), both joint institutes of Stanford University and SLAC. “The effect also works in the opposite direction: An electrical voltage applied across the material creates a temperature difference, which can be exploited in thermoelectric cooling devices.”

    Mason Jiang, a recent graduate student at Stanford, PULSE and SIMES, says, “Lead telluride is exceptionally good at this. It has two important qualities: It’s a bad thermal conductor, so it keeps heat from flowing from one side to the other, and it’s also a good electrical conductor, so it can turn the temperature difference into an electric current. The coupling between lattice vibrations, caused by heat, and electron motions is therefore very important in this system. With our study at LCLS, we wanted to understand what’s naturally going on in this material.”
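
    The two qualities Jiang lists, poor thermal conduction and good electrical conduction, are usually rolled into the dimensionless thermoelectric figure of merit ZT = S²σT/κ. The numbers below are generic, PbTe-like illustrative values, not measurements from this study.

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa, evaluated with
# generic, illustrative values for a PbTe-like material (not data from this study).
def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_m_k, temperature_k):
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temperature_k / kappa_w_per_m_k

zt = figure_of_merit(seebeck_v_per_k=200e-6,   # Seebeck coefficient, 200 microvolts per kelvin
                     sigma_s_per_m=1e5,        # electrical conductivity
                     kappa_w_per_m_k=2.0,      # thermal conductivity
                     temperature_k=600)
print(f"ZT ~ {zt:.1f}")   # of order 1, typical of a good thermoelectric
```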

    In their experiment, the researchers excited electrons in a lead telluride sample with a brief pulse of infrared laser light, and then used LCLS’s X-rays to determine how this burst of energy stimulated lattice vibrations.

    This illustration shows the arrangement of lead and tellurium atoms in lead telluride, an excellent thermoelectric that efficiently converts heat into electricity and vice versa. In its normal state (left), lead telluride’s structure is distorted and has a relatively large degree of lattice vibrations (blurring). When scientists hit the sample with a laser pulse, the structure became more ordered (right). The results elucidate how electrons couple with these distortions – an interaction that is crucial for lead telluride’s thermoelectric properties. (SLAC National Accelerator Laboratory)

    “Lead telluride sits at the precipice of a coupled electronic and structural transformation,” says principal investigator David Reis from PULSE, SIMES and Stanford. “It has a tendency to distort without fully transforming – an instability that is thought to play an important role in its thermoelectric behavior. With our method we can study the forces involved and literally watch them change in response to the infrared laser pulse.”

    The scientists found that the light pulse excites particular electronic states that are responsible for this instability through electron-phonon coupling. The excited electrons stabilize the material by weakening certain long-range forces that were previously associated with the material’s low thermal conductivity.

    “The light pulse actually walks the material back from the brink of instability, making it a worse thermoelectric,” Reis says. “This implies that the reverse is also true – that stronger long-range forces lead to better thermoelectric behavior.”

    The researchers hope their results, published July 22 in Nature Communications, will help them find other thermoelectric materials that are more abundant and less toxic than lead telluride.

    Controlling Materials by Stimulating Charged Waves

    The second study looked at charge density waves – alternating areas of high and low electron density across the nuclear lattice – that occur in materials that abruptly change their behavior at a certain threshold. This includes transitions from insulator to conductor, normal conductor to superconductor, and from one magnetic state to another.

    These waves don’t actually travel through the material; they are stationary, like icy waves near the shoreline of a frozen lake.

    “Charge density waves have been observed in a number of interesting materials, and establishing their connection to material properties is a very hot research topic,” says Andrej Singer, a postdoctoral fellow in Oleg Shpyrko’s lab at the University of California, San Diego. “We’ve now shown that there is a way to enhance charge density waves in crystals of chromium using laser light, and this method could potentially also be used to tweak the properties of other materials.”

    This could mean, for example, that scientists might be able to switch a material from a normal conductor to a superconductor with a single flash of light. Singer and his colleagues reported their results on July 25 in Physical Review Letters.

    The research team used the chemical element chromium as a simple model system to study charge density waves, which form when the crystal is cooled to about minus 280 degrees Fahrenheit. They stimulated the chilled crystal with pulses of optical laser light and then used LCLS X-ray pulses to observe how this stimulation changed the amplitude, or height, of the charge density waves.

    “We found that the amplitude increased by up to 30 percent immediately after the laser pulse,” Singer says. “The amplitude then oscillated, becoming smaller and larger over a period of 450 femtoseconds, and it kept going when we kept hitting the sample with laser pulses. LCLS provides unique opportunities to study such processes because it allows us to take ultrafast movies of the related structural changes in the lattice.”
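
    For scale, a 450-femtosecond oscillation period corresponds to a frequency in the terahertz range:

```python
period_s = 450e-15
print(f"oscillation frequency ~ {1.0 / period_s / 1e12:.1f} THz")   # ~2.2 THz
```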

    Based on their results, the researchers suggested a mechanism for the amplitude enhancement: The light pulse interrupts the electron-phonon interactions in the material, causing the lattice to vibrate. Shortly after the pulse, these interactions form again, which boosts the amplitude of the vibrations, like a pendulum that swings farther out when it receives an extra push.

    A Bright Future for Studies of the Electron-Phonon Dance

    Studies like these have a high priority in solid-state physics and materials science because they could pave the way for new materials and provide new ways to control material properties.

    With its 120 ultrabright X-ray pulses per second, LCLS reveals the electron-phonon dance with unprecedented detail. More breakthroughs in the field are on the horizon with LCLS-II – a next-generation X-ray laser under construction at SLAC that will fire up to a million X-ray flashes per second and will be 10,000 times brighter than LCLS.

    “LCLS-II will drastically increase our chances of capturing these processes,” Dunne says. “Since it will also reveal subtle electron-phonon signals with much higher resolution, we’ll be able to study these interactions in much greater detail than we can now.”

    Other research institutions involved in the studies were University College Cork, Ireland; Imperial College London, UK; Duke University; Oak Ridge National Laboratory; RIKEN Spring-8 Center, Japan; University of Tokyo, Japan; University of Michigan; and University of Kiel, Germany. Funding sources included DOE Office of Science; Science Foundation Ireland; Volkswagen Foundation, Germany; and Federal Ministry of Education and Research, Germany. Preliminary X-ray studies on lead telluride were performed at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), a DOE Office of Science User Facility, and at the Spring-8 Angstrom Compact Free-electron Laser (SACLA), Japan.

    SLAC/SSRL

    SACLA Free-Electron Laser Riken Japan


    This movie introduces LCLS-II, a future light source at SLAC. It will generate over 8,000 times more light pulses per second than today’s most powerful X-ray laser, LCLS, and produce an almost continuous X-ray beam that on average will be 10,000 times brighter. (SLAC National Accelerator Laboratory)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 1:25 pm on April 14, 2017 Permalink | Reply
    Tags: Physics

    From Ethan Siegel: “Can muons — which live for microseconds — save experimental particle physics?” 

    Ethan Siegel

    Apr 14, 2017

    You lose whether you use protons or electrons in your collider, for different reasons. Could the unstable muon solve both problems?

    A four-muon candidate event in the ATLAS detector at the Large Hadron Collider. The muon/anti-muon tracks are highlighted in red, as the long-lived muons travel farther than any other unstable particle. Image credit: ATLAS Collaboration / CERN.

    “It does not matter how slowly you go as long as you do not stop.” -Confucius

    High-energy physics is facing its greatest crisis ever. The Standard Model is complete, as all the particles our most successful physics theories have predicted have been discovered.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The Large Hadron Collider at CERN, the most energetic particle collider ever developed (with more than six times the energies of any prior collider), discovered the long-sought-after Higgs boson, but nothing else.

    CERN/LHC Map

    CERN LHC Tube


    LHC at CERN

    Traditionally, the way to discover new particles has been to go to higher energies with one of two strategies:

    1. Collide electrons and positrons, getting a “clean” signal where 100% of the collider energy goes into producing new particles.
    2. Collide protons and either anti-protons or other protons, getting a messy signal but reaching higher energies due to the heavier mass of the proton.

    Both methods have their limitations, but one unstable particle might give us a third option to make the elusive breakthrough we desperately need: the muon.

    The known particles in the Standard Model. These are all the fundamental particles that have been directly discovered. Image credit: E. Siegel.

    The Standard Model is made up of all the fundamental particles and antiparticles we’ve ever discovered. They include six quarks and antiquarks, each in three colors, three charged leptons and three types of neutrino, along with their antiparticle counterparts, and the bosons: the photon, the weak bosons (W+, W-, Z0), the eight gluons (with color/anticolor combinations attached), and the Higgs boson. While countless different combinations of these particles exist in nature, only a precious few are stable. The electron, photon, proton (made of two up and one down quark), and, if they’re bound together in nuclei, the neutron (with two down and one up quark) are stable, along with their antimatter counterparts. That’s why all the normal matter we see in the Universe is made up of protons, neutrons, and electrons; nothing else with any significant interactions is stable.

    While many unstable particles, both fundamental and composite, can be produced in particle physics, only protons, neutrons (bound in nuclei) and the electron are stable, along with their antimatter counterparts and the photon. Everything else is short-lived. Image credit: Contemporary Physics Education Project (CPEP), U.S. Department of Energy / NSF / LBNL.

    The way you create these unstable particles is by colliding the stable ones together at high enough energies. Because of a fundamental principle of nature — mass/energy equivalence, given by Einstein’s E = mc² — you can turn pure energy into mass if you have enough of it. (So long as you obey all the other conservation laws.) This is exactly the way we’ve created almost all the other particles of the Standard Model: by colliding particles into one another at enough energy that the energy you get out (E) is high enough to create the new particles (of mass m) you’re attempting to discover.

    The particle tracks emanating from a high energy collision at the LHC in 2014 show the creation of many new particles. It’s only because of the high-energy nature of this collision that new masses can be created.

    We know there are almost certainly more particles beyond the ones we’ve discovered; we expect there to be particle explanations for mysteries like the baryon asymmetry (why there’s more matter than antimatter), the missing mass problem in the Universe (what we suspect will be solved by dark matter), the neutrino mass problem (why they’re so incredibly light), the quantum nature of gravity (i.e., there should be a force-carrying particle for the gravitational interaction, like the graviton), and the strong-CP problem (why certain decays don’t happen), among others. But our colliders haven’t reached the energies necessary to uncover those new particles, if they even exist. What’s even worse: both of the current methods have severe drawbacks that may prohibit us from building colliders that go to significantly higher energies.

    The Large Hadron Collider is the current record-holder, accelerating protons up to energies of 6.5 TeV apiece before smashing them together. The energy you can reach is directly proportional to two things only: the radius of your accelerator (R) and the strength of the magnetic field used to bend the protons into a circle (B). Collide those two protons together, and they hit with an energy of 13 TeV. But you’ll never make a 13 TeV particle colliding two protons at the LHC; only a fraction of that energy is available to create new particles via E = mc². The reason? A proton is made of multiple, composite particles — quarks, gluons, and even quark/antiquark pairs inside — meaning that only a tiny fraction of that energy goes into making new, massive particles.
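
    That proportionality can be made concrete with the standard rule of thumb for a singly charged particle on a circular orbit, p [GeV/c] ≈ 0.3·B[T]·R[m]. Plugging in roughly LHC-like numbers (an approximate 8.3-tesla dipole field and a bending radius of about 2,800 metres) recovers the few-TeV-per-beam scale; both inputs are approximations for illustration.

```python
# Rule of thumb: beam momentum p [GeV/c] ~ 0.3 * B [tesla] * R [metres] for a
# singly charged particle. The field and bending radius below are approximate.
def beam_energy_tev(b_tesla: float, bending_radius_m: float) -> float:
    return 0.3 * b_tesla * bending_radius_m / 1000.0   # GeV -> TeV

print(f"~{beam_energy_tev(8.3, 2800):.1f} TeV per beam")   # close to the LHC's design energy
```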

    A candidate Higgs event in the ATLAS detector. Note how even with the clear signatures and transverse tracks, there is a shower of other particles; this is due to the fact that protons are composite particles. Image credit: The ATLAS collaboration / CERN.

    CERN ATLAS Higgs Event

    CERN/ATLAS detector

    You might think to use fundamental particles instead, then, like electrons and positrons. If you were to put them in the same ring (with the same R) and subject them to the same magnetic field (the same B), you might think you could reach the same energies, only this time, 100% of the energy could make new particles. And that would be true, if it weren’t for one factor: synchrotron radiation. You see, when you accelerate a charged particle in a magnetic field, it gives off radiation. Because a proton is so massive compared to its electric charge, that radiation is negligible, and you can take protons up to the highest energies we’ve ever reached without worrying about it. But electrons and positrons are only 1/1836th of a proton’s mass, and synchrotron radiation would limit them to only about 0.114 TeV of energy under the same conditions.
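
    The reason the electron’s small mass is so punishing is that the energy radiated per turn grows as (E/m)⁴ at fixed radius. Comparing particles at the same energy and ring radius (an idealized comparison, ignoring everything else):

```python
# Synchrotron energy loss per turn scales as (E/m)^4 / R, so at fixed energy and
# radius the loss depends only on the particle mass. Losses relative to an electron:
masses_mev = {"electron": 0.511, "muon": 105.7, "proton": 938.3}

for name, mass in masses_mev.items():
    relative_loss = (masses_mev["electron"] / mass) ** 4
    print(f"{name:8s} radiates {relative_loss:.1e} x the electron's synchrotron loss")
```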

    Relativistic electrons and positrons can be accelerated to very high speeds, but will emit synchrotron radiation (blue) at high enough energies, preventing them from moving faster. Image credit: Chung-Li Dong, Jinghua Guo, Yang-Yuan Chen, and Chang Ching-Lin, ‘Soft-x-ray spectroscopy probes nanomaterial-based devices’.

    But there’s a third option that’s never been put into practice: use muons and anti-muons. A muon is just like an electron in the sense that it’s a fundamental particle, it’s charged, it’s a lepton, but it’s 206 times heavier than the electron. This is massive enough that synchrotron radiation doesn’t matter for muons or anti-muons, which is great! The only downside? The muon is unstable, with a mean lifetime of only 2.2 microseconds before decaying away.

    The prototype MICE 201-megahertz RF module, with the copper cavity mounted, is shown during assembly at Fermilab. This apparatus could focus and collimate a muon beam, enabling the muons to be accelerated and survive for much longer than 2.2 microseconds. Image credit: Y. Torun / IIT / Fermilab Today.

    That might be okay, though, because special relativity can rescue us! When you bring an unstable particle close to the speed of light, the amount of time that it lives increases dramatically, thanks to the relativistic phenomenon of time dilation. If you brought a muon all the way up to 6.5 TeV of energy, it would live for 135,000 microseconds: enough time to circle the Large Hadron Collider 1,500 times before decaying away. And this time, your hopes would be absolutely true: 100% of that energy, 6.5 TeV + 6.5 TeV = 13 TeV, would be available for particle creation.
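
    The figures quoted in that paragraph follow directly from time dilation, taking a muon rest energy of about 105.7 MeV and the LHC’s roughly 26.7-kilometre circumference:

```python
# Time-dilation check for a 6.5 TeV muon: lab-frame lifetime and how many laps
# of a 26.7 km ring it could complete before decaying.
MUON_REST_ENERGY_GEV = 0.1057
MUON_LIFETIME_S = 2.2e-6          # rest-frame mean lifetime
C = 2.998e8                       # speed of light, m/s
RING_CIRCUMFERENCE_M = 26_700

gamma = 6500.0 / MUON_REST_ENERGY_GEV             # Lorentz factor, ~61,000
lab_lifetime_s = gamma * MUON_LIFETIME_S          # ~0.135 s, i.e. ~135,000 microseconds
laps = C * lab_lifetime_s / RING_CIRCUMFERENCE_M  # ~1,500 laps

print(f"gamma              ~ {gamma:,.0f}")
print(f"lab-frame lifetime ~ {lab_lifetime_s * 1e6:,.0f} microseconds")
print(f"laps of the ring   ~ {laps:,.0f}")
```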

    A design plan for a full-scale muon-antimuon collider at Fermilab, the source of the world’s second-most powerful particle accelerator. Image credit: Fermilab.

    We can always build a bigger ring or invent stronger magnets, and we may well do exactly that. But there’s no cure for synchrotron radiation except to use heavier particles, and there’s no cure for energy spreading out among the components of composite particles other than not to use them at all. Muons are unstable and difficult to keep alive for a long time, but as we get to higher and higher energies, that task gets progressively easier. Muon colliders have long been touted as a mere pipe dream, but recent progress by the MICE collaboration — for Muon Ionization Cooling Experiment — has demonstrated that this may be possible after all. A circular muon/anti-muon collider may be the particle accelerator that takes us beyond the LHC’s reach, and, if we’re lucky, into the realm of the new physics we’re so desperately seeking.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 9:23 pm on April 11, 2017 Permalink | Reply
    Tags: Physics, Theory Institute for Materials and Energy Spectroscopies (TIMES), X-ray spectroscopy

    From SLAC: “New SLAC Theory Institute Aims to Speed Research on Exotic Materials at Light Sources” 


    SLAC Lab

    April 11, 2017
    Glennda Chui

    A new institute at the Department of Energy’s SLAC National Accelerator Laboratory is using the power of theory to search for new types of materials that could revolutionize society – by making it possible, for instance, to transmit electricity over power lines with no loss.

    The Theory Institute for Materials and Energy Spectroscopies (TIMES) focuses on improving experimental techniques and speeding the pace of discovery at West Coast X-ray facilities operated by SLAC and by Lawrence Berkeley National Laboratory, its DOE sister lab across the bay.

    But the institute aims to have a much broader impact on studies aimed at developing new materials for energy and other technological applications by making the tools it develops available to scientists around the world.

    TIMES opened in August 2016 as part of the Stanford Institute for Materials and Energy Sciences (SIMES), a DOE-funded institute operated jointly with Stanford.

    Materials that Surprise

    “We’re interested in materials with remarkable properties that seem to emerge out of nowhere when you arrange them in particular ways or squeeze them down into a single, two-dimensional layer,” says Thomas Devereaux, a SLAC professor of photon science who directs both TIMES and SIMES.

    This general class of materials is known as “quantum materials.” Some of the best-known examples are high-temperature superconductors, which conduct electricity with no loss; topological insulators, which conduct electricity only along their surfaces; and graphene, a form of pure carbon whose superior strength, electrical conductivity and other surprising qualities derive from the fact that it’s just one layer of atoms thick.

    In another research focus, Devereaux says, “We want to see what happens when you push materials far beyond their resting state – out of equilibrium, is the way we put it – by exciting them in various ways with pulses of X-ray light at facilities known as light sources.

    “This tells you how materials will behave under realistic operating conditions, for instance in a lightweight airplane or a new type of battery. Understanding and controlling out-of-equilibrium behavior and learning how novel properties emerge in complex materials are two of the scientific grand challenges in our field, and light sources are ideal places to do this work.”

    Joining Forces With Light Sources

    A key part of the institute’s work is to use theory and computation to improve experimental techniques – especially X-ray spectroscopy, which probes the chemical composition and electronic structure of materials – in order to make research at light sources more productive.

    “We are in a golden age of X-ray spectroscopy, in which many billions of dollars have been invested worldwide to develop new X-ray and neutron sources that allow us to study very small details and very fast processes in materials,” Devereaux says. “In fact, we are on the threshold of being able to control matter at a much deeper level than ever possible before.

    “But while X-ray spectroscopy has a long history of collaboration between experimentalists and theorists, there has not been a companion theory institute anywhere. TIMES fills this gap. It aims to solidify collaboration and development of new methods and tools for theory relevant to this new landscape.”

    Devereaux, a theorist who uses computation to study quantum materials, came to SLAC 10 years ago from the University of Waterloo in Canada to work more closely with researchers at three light sources – SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), Berkeley Lab’s Advanced Light Source (ALS) and the Linac Coherent Light Source (LCLS), the world’s first X-ray free-electron laser, which at the time was under construction at SLAC. Opened for research in 2009, LCLS gives scientists access to pulses a billion times brighter than any previously available, arriving up to 120 times per second, opening whole new avenues for research.

    SLAC/SSRL

    LBNL/ALS

    SLAC LCLS

    SLAC/LCLS II

    With LCLS, Devereaux says, “It became clear that we had an unprecedented opportunity to study materials that have been pushed farther away from equilibrium than was ever possible before.”

    Basic Questions and Practical Answers

    The DOE-funded theory institute has hired two staff scientists, Chunjing Jia and Das Pemmaraju, and works closely with SLAC staff scientists Brian Moritz and Hongchen Jiang and with a number of scientists at the three light sources.

    “We have two main goals,” Jia says. “One is to use X-ray spectroscopy and other techniques to look at practical materials, like the ones in batteries – to study the charging and discharging process and see how the structure of the battery changes, for instance. The second is to understand the fundamental underlying physics principles that govern the behavior of materials.”

    Eventually, she added, theorists want to understand those physics principles so well that they can predict the results of high-priority experiments at facilities that haven’t even been built yet – for instance at LCLS-II, a major upgrade to LCLS that will add a much brighter X-ray laser beam that fires up to a million pulses per second. These predictions have the potential to make experiments at new facilities much more productive and efficient.

    Running Experiments in Supercomputers

    Theoretical work can involve a lot of math and millions of hours of supercomputer time, as theorists struggle to clarify how the fundamental laws of quantum mechanics apply to the materials they are investigating, Pemmaraju says.

    “We use these laws in a form that can be simulated on a computer to make predictions about new materials and their properties,” he says. “The full richness and complexity of the theory are still being discovered, and its equations can only be solved approximately with the aid of supercomputers.”

    Jia adds that you can think of these computer simulations as numerical experiments – working “in silico,” rather than at a lab bench. By simulating what’s going on in a material, scientists can decide which of all the experimental options are the best ones, saving both time and money.

    The institute’s core research team includes theorists Joel Moore of the University of California, Berkeley and John Rehr of the University of Washington. Rehr is the developer of FEFF, an efficient and widely accessible software code that is used by the X-ray light source community worldwide. Devereaux says the plan is to establish a center for FEFF within the institute, which will serve as a home for its further development and for making those advances widely available to theorists and experimentalists at various levels of sophistication.

    TIMES and SIMES are funded by the DOE Office of Science, and the three light sources – ALS, SSRL and LCLS – are DOE Office of Science User Facilities.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.
    i1

     
  • richardmitnick 9:57 am on April 10, 2017 Permalink | Reply
    Tags: , , , , , , Physics   

    From Many Worlds: “What Scientists Expect to Learn From Cassini’s Upcoming Plunge Into Saturn” 

    NASA NExSS bloc

    NASA NExSS

    Many Worlds icon

    Many Worlds

    2017-04-10
    Marc Kaufman

    1
    Saturn as imaged from above by Cassini last year. Over the next five months, the spacecraft will orbit closer and closer to the planet and will finally plunge into its atmosphere. (NASA)

    NASA/ESA/ASI Cassini Spacecraft

    Seldom has the planned end of a NASA mission brought so much expectation and scientific high drama.

    The Cassini mission to Saturn has already been a huge success, sending back iconic images and breakthrough science of the planet and its system. Included in the haul have been the discovery of plumes of water vapor spurting from the moon Enceladus and the detection of liquid methane seas on Titan. But as members of the Cassini science team tell it, the end of the 13-year mission at Saturn may well be its most scientifically productive time.

    Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory (JPL) put it this way: “Cassini will make some of its most extraordinary observations at the end of its long life.”

    This news was first announced last week, but I thought it would be useful to go back to the story to learn more about what “extraordinary” science might be coming our way, with the help of Spilker and NASA headquarters Cassini program scientist Curt Niebur.

    And the very up close encounters with Saturn’s rings and its upper atmosphere — where Cassini is expected to ultimately lose contact with Earth — certainly do offer a trove of scientific riches about the basic composition and workings of the planet, as well as the long-debated age and origin of the rings. What’s more, everything we learn about Saturn will have implications for, and offer insights into, the vast menagerie of gas giant exoplanets out there.

    “The science potential here is just huge,” Niebur told me. “I could easily conceive of a billion dollar mission for the science we’ll get from the grand finale alone.”

    2
    The Cassini spacecraft will make 22 increasingly tight orbits of Saturn before it disappears into the planet’s atmosphere in mid-September, as shown in this artist rendering. (NASA/JPL-Caltech)

    The 20-year, $3.26 billion Cassini mission, a collaboration of NASA, the European Space Agency and the Italian Space Agency, is coming to an end because the spacecraft will soon run out of fuel. The agency could have just waited for that moment and let the spacecraft drift off into space, but decided instead on taking the big plunge.

    This was considered a better choice not only because of those expected scientific returns, but also because letting the dead spacecraft drift meant that theoretically it could be pulled towards Titan or Enceladus — moons that researchers now believe just might support life.

    Although the spacecraft was sterilized before launch, scientists didn’t want to take the chance that some bacteria might remain in the capsule that could possibly contaminate the moons with life from Earth.

    So instead Cassini will be sent on 22 closer and closer passes around Saturn, into the region between the innermost ring and the atmosphere where no spacecraft has ever gone. On April 26, Cassini will make the first of those dives through a 1,500-mile-wide gap between Saturn and its rings as part of the mission’s grand finale.

    As it makes those terminal orbits, the spacecraft will have to be maneuvered with precision so it doesn’t actually fly into one of the rings. They consist of water ice, small meteorites and dust, and are sufficiently dense to fatally damage Cassini.

    “Based on our best models, we expect the gap to be clear of particles large enough to damage the spacecraft. But we’re also being cautious by using our large antenna as a shield on the first pass, as we determine whether it’s safe to expose the science instruments to that environment on future passes,” said Earl Maize, Cassini project manager at the NASA Jet Propulsion Lab. “Certainly there are some unknowns, but that’s one of the reasons we’re doing this kind of daring exploration at the end of the mission.”

    Then in mid-September, following a distant encounter with Titan and its gravity, the spacecraft’s path will be bent so that it dives into the planet itself. During that final descent, Cassini will enter the atmosphere, soon begin to spin and tumble, lose radio contact with Earth, and ultimately be destroyed by the pressures created by the enormous planet.

    All the while it will be taking pioneering measurements, and sending back images predicted to be spectacular.

    3
    The age and origin of the rings of Saturn remains a subject of a great debate that may soon come to an end. Ring particle sizes range from tiny, dust-sized icy grains to a few particles as large as mountains. Two tiny moons orbit in gaps (Encke and Keeler gaps) in the rings and keep the gaps open. (NASA)

    While the Cassini team has to keep clear of the rings, the spacecraft is expected to get close enough to likely answer one of the longest-debated questions about Saturn: how old are those grand features, unique in our solar system?

    One school of thought says they date from the earliest formation of the planet, some 4.6 billion years ago. In other words, they’ve been there as long as the planet has been there.

    But another school says they are a potentially much newer addition. They could be the result of the break-up of a moon (of which Saturn has 53-plus) or a comet, or perhaps of several moons at different times. In this scenario, Saturn may have been ring-less for eons.

    As Niebur explained it, the key to dating the rings is a close view of, essentially, how dirty they are. Because small meteorites and dust are a ubiquitous feature of space, the rings would have accumulated significantly more of that dark, non-icy material if they have been there 4.6 billion years. But if they are determined to be relatively clean, then the age is likely younger, and perhaps much younger.

    “Space is a very dirty place, with dust and micro-meteorites hitting everything. Over significant time scales this stuff coats things. So if the rings are old, we should find very dirty ice. If there is little covering of the ice, then the rings must be young. We may well be coming to the end of a great debate.”
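    To make the dirtiness argument concrete, here is a deliberately simplified sketch, not the Cassini team’s actual model: treat the ring age as the time needed for a steady meteoritic infall to supply the non-icy material the rings contain. The ring mass, pollution fraction and infall rate below are placeholder values chosen only for illustration, not measured quantities.

    ```python
    def ring_age_years(ring_mass_kg, non_icy_fraction, infall_kg_per_year):
        """Crude age estimate: time for a constant meteoritic infall to
        deliver the non-icy (dark) mass observed in the rings, assuming
        all of the infalling material is retained."""
        return (ring_mass_kg * non_icy_fraction) / infall_kg_per_year

    # Placeholder inputs for illustration only -- not Cassini measurements.
    ring_mass = 1.5e19        # kg, a few tenths of the mass of the moon Mimas
    pollution = 0.01          # assume 1% of the ring mass is non-icy material
    infall = 1.3e8            # kg of meteoritic material swept up per year

    # Changing any of these inputs by a factor of a few swings the answer
    # by the same factor; the point is the scaling, not the number.
    print(f"implied ring age ~ {ring_age_years(ring_mass, pollution, infall):.1e} years")
    ```

    The value of the exercise is the scaling: for a given infall rate, dirtier or more massive rings imply an older system, while clean, low-mass rings imply a young one, which is exactly the comparison Cassini’s mass and composition measurements are meant to enable.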

    A corollary of the question of the age of Saturn’s rings is, naturally, how stable they are.

    4
    Curt Niebur, lead program scientist at NASA headquarters for the Cassini mission. (NASA)

    If they turn out to be as old as the planet, then they are certainly very stable. But if they are not old, then it is entirely plausible that they could be a passing phenomenon and will some day disappear — to perhaps re-appear after another moon is shattered or comet arrives.

    Another way of looking at the rings is that they may well have been formed at different times.

    As Spilker explained in an email, Cassini’s measurements of the mass of the rings will be key. “More massive rings could be as old as Saturn itself while less massive rings must be young. Perhaps a moon or comet got too close and was torn apart by Saturn’s gravity.”

    The voyage between the rings will also potentially provide some new insights into the workings of the disks present at the formation of all solar systems.

    “The rings can teach us about the physics of disks, which are huge rings floating majestically and with synchronicity around the new sun,” Niebur said. “That said, the rings of Saturn have a very active regime, with particles and meteorites and micrometeorites smacking into each other. It’s an amazing environment and has direct relevance to the nebular model of planetary formation.”

    5
    This recently released Cassini image shows the moon Daphnis, which is embedded within a ring. The moon kicks up waves as it orbits within what is called the Keeler gap. This mosaic combines several previous images to show more waves in the gap edges. (NASA/JPL-Caltech)

    Another open question that scientists hope will be answered during the plunge is how long, precisely, is a day on Saturn.

    The Saturnian day is often given as between 10.5 and 11 hours, but that lack of precision is unique in our solar system.

    The usual way to determine a planet’s rotation is to look for a distinctive point and watch to see how long it takes to reappear. But Saturn has thousands of miles of thick clouds between the rings and the core, and so no distinctive points have been found.

    The planet’s inner rocky core and outer core of metallic hydrogen create magnetic fields that potentially could be traced to measure a full rotation. But competing magnetic fields in the complex Saturn ring and moon system make that also difficult.

    “The truth is that we don’t know how long a day is on Saturn,” Niebur said. “But after the finale, we will finally know.”

    The answer will hopefully come by measuring the expected “wobble” of the magnetic field inside the rings. Since Cassini will pass beyond the magnetic interference of those rings, the probe should get the most precise magnetic readings ever taken.

    Project scientist Spilker is optimistic. “With the magnetic field we’ll be able to get, for the first time, the length of day for the interior of Saturn. If there’s just a slight tilt to the magnetic field, then it will wobble around and give us the length of a day.”
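    The underlying idea can be illustrated with a toy model; this is not the Cassini analysis pipeline. A dipole field tilted slightly from the spin axis modulates the field a nearby spacecraft measures at the planet’s rotation frequency, so the period can be read off the magnetometer time series. The tilt, period and noise level below are assumed values for illustration only.

    ```python
    import numpy as np

    # Toy illustration of the magnetic "wobble" idea -- not the Cassini pipeline.
    true_period_hr = 10.7                  # assumed rotation period (hypothetical)
    tilt_deg = 0.5                         # assumed dipole tilt (hypothetical)
    hours = np.arange(0.0, 1000.0, 0.1)    # 1,000 hours sampled every 6 minutes

    # Transverse field component: amplitude set by the tilt, plus measurement noise.
    signal = np.sin(np.radians(tilt_deg)) * np.cos(2 * np.pi * hours / true_period_hr)
    signal += np.random.default_rng(0).normal(0.0, 0.001, hours.size)

    # Pick out the dominant frequency and convert it back to a period.
    # The precision is set by the record length (frequency resolution ~ 1/duration).
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(hours.size, d=hours[1] - hours[0])   # cycles per hour
    recovered_period_hr = 1.0 / freqs[spectrum.argmax()]
    print(f"recovered rotation period ~ {recovered_period_hr:.2f} hours")
    ```

    In practice Saturn’s dipole is aligned with its spin axis to well under a degree, which is why readings taken inside the rings, away from competing magnetic sources, are needed to pick the tiny wobble out.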

    6
    Artist rendering of Cassini over Saturn’s north pole, with its huge hexagon-shaped storm. (NASA/JPL-Caltech)

    Perhaps the most consequential findings to come out of the Cassini finale are expected to involve the planet’s internal structure and composition.

    The atmosphere is known to contain hydrogen, helium, ammonia and methane, but Niebur said that other important trace elements are expected to be present. The probe will use its mass spectrometer to “taste” the chemistry of the gases on the outermost edge of Saturn’s atmosphere and return the most detailed information ever about Saturn’s high-altitude clouds, as well as about the ring material.

    Instruments will also measure Saturn’s powerful winds (which blow up to 1,000 miles an hour), and determine how deep they go in the atmosphere. Like much about Saturn, that basic fact falls in the “unknown” category.

    For both Spilker and Niebur, the biggest prize is probably determining the size and mass of Saturn’s rocky core, made up largely of iron and nickel. That core is estimated to be 9 to 22 times the mass of the Earth, and to have a diameter of perhaps 18,000 miles.

    But these are broad estimates, and neither the size nor mass is really known. Those thousands of miles of thick clouds atop the atmosphere and the planet’s chaotic magnetic fields have made the necessary readings impossible.

    The Cassini instruments, however, are expected to make those measurements during its final months. As Cassini makes its close-in passes and then enters the atmosphere for the final plunge, it will send back the data needed to make detailed maps of Saturn’s inner magnetic and gravitational fields. These are what scientists need to understand the core and other structures that lie beneath the planet’s atmosphere.

    This work will complement the parallel efforts underway at Jupiter, where the Juno mission is collecting data on that planet’s core as well. If scientists can measure the sizes and masses of both cores, they will be able to use that new information to answer many other questions about our solar system and beyond.

    “A better understanding of Saturn’s interior, coupled with what the Juno mission learns about the interior of Jupiter, will lead to (new insights into) how the planets in our solar system formed, and how our solar system itself formed,” Spilker said in an email.

    “This is then related to how exoplanets form around other stars. Studying our own giant planets will help us understand giant planets around other stars.”

    In other words, Saturn and Jupiter are planetary types expected to be found across the galaxies. And it’s our good fortune to be able to touch and learn from them, and to use that information to analyze distant planets that we can only indirectly detect or just barely see.

    NASA at Saturn: Cassini’s Grand Finale

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Many Worlds

    There are many worlds out there waiting to fire your imagination.

    Marc Kaufman is an experienced journalist, having spent three decades at The Washington Post and The Philadelphia Inquirer, and is the author of two books on searching for life and planetary habitability. While the “Many Worlds” column is supported by the Lunar Planetary Institute/USRA and informed by NASA’s NExSS initiative, any opinions expressed are the author’s alone.

    This site is for everyone interested in the burgeoning field of exoplanet detection and research, from the general public to scientists in the field. It will present columns, news stories and in-depth features, as well as the work of guest writers.

    About NExSS

    The Nexus for Exoplanet System Science (NExSS) is a NASA research coordination network dedicated to the study of planetary habitability. The goals of NExSS are to investigate the diversity of exoplanets and to learn how their history, geology, and climate interact to create the conditions for life. NExSS investigators also strive to put planets into an architectural context — as solar systems built over the eons through dynamical processes and sculpted by stars. Based on our understanding of our own solar system and habitable planet Earth, researchers in the network aim to identify where habitable niches are most likely to occur and which planets are most likely to be habitable. Leveraging current NASA investments in research and missions, NExSS will accelerate the discovery and characterization of other potentially life-bearing worlds in the galaxy, using a systems science approach.

    The National Aeronautics and Space Administration (NASA) is the agency of the United States government that is responsible for the nation’s civilian space program and for aeronautics and aerospace research.

    President Dwight D. Eisenhower established the National Aeronautics and Space Administration (NASA) in 1958 with a distinctly civilian (rather than military) orientation encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958.

    Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program (LSP) which provides oversight of launch operations and countdown management for unmanned NASA launches. Most recently, NASA announced a new Space Launch System that it said would take the agency’s astronauts farther into space than ever before and lay the cornerstone for future human space exploration efforts by the U.S.

    NASA science is focused on better understanding Earth through the Earth Observing System, advancing heliophysics through the efforts of the Science Mission Directorate’s Heliophysics Research Program, exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons, and researching astrophysics topics, such as the Big Bang, through the Great Observatories (Hubble, Chandra, Spitzer) and associated programs. NASA shares data with various national and international organizations, such as data from JAXA’s Greenhouse Gases Observing Satellite.

     