Tagged: Physics

  • richardmitnick 3:28 pm on February 21, 2018 Permalink | Reply
Tags: Physics, Proton booster

    From FNAL: “Fermilab’s Booster accelerator delivers record-setting proton beam” 

FNAL II photo

FNAL Art Image by Angela Gonzales

Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    February 21, 2018
    Bill Pellico

This plot shows the ramp-up of proton flux in the Proton Source under PIP.

    FNAL booster

On Jan. 29, Fermilab’s Booster accelerator achieved a record proton flux of 2.4×10^17 protons per hour. This milestone fulfills one of the most important requirements of the Proton Improvement Plan (PIP), which Fermilab has been implementing over the last five years.

    The main goal of the PIP project is to increase the proton beam output to meet Fermilab’s experimental needs, in particular for neutrino and muon experiments such as NOvA, MicroBooNE and Muon g-2. The Booster delivers beam to all of the lab’s experiments, and according to PIP, the Booster’s proton beam output, also known as proton flux, had to meet a certain minimum.

    FNAL NOvA Near Detector


    FNAL/NOvA experiment map

FNAL/MicroBooNE

    FNAL Muon g-2 studio

We delivered on that promise in January and have been operating the Booster at the new level since then. The record proton flux is more than double the 1.1×10^17 protons per hour the accelerator could deliver before the PIP upgrades. Now, with the Booster generating 2.4×10^17 protons per hour at 15 hertz, the NuMI beamline, the Booster Neutrino Beamline and the Muon Campus can all operate simultaneously. (Previously, we could operate only one at a time.)
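The quoted figures imply a per-pulse intensity that is easy to check with back-of-the-envelope arithmetic. This is an illustrative calculation, not an official Fermilab number:

```python
# Back-of-the-envelope check of the per-pulse intensity implied by the
# record flux: 2.4e17 protons per hour delivered at 15 Hz.
flux_per_hour = 2.4e17          # protons per hour (record under PIP)
rep_rate_hz = 15                # Booster cycles per second
pulses_per_hour = rep_rate_hz * 3600
protons_per_pulse = flux_per_hour / pulses_per_hour
print(f"{protons_per_pulse:.2e} protons per pulse")  # ~4.4e12
```

A few trillion protons per Booster cycle is the right order of magnitude for a machine of this class.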

    PIP started in 2012 to upgrade our aging Proton Source accelerators. Not only did we set out to increase the proton flux, we also aimed to provide a reliable source of protons for Fermilab’s scientific program. Reliability translates into “up time” — the fraction of time the accelerator is operating. PIP specified an up time of 85 percent, and we’ve exceeded that: We currently run at 92 percent up time, and we’re working to maintain this high performance level in the years to come.

    We could not have reached this milestone accelerator goal without the dedication of numerous people at the lab, who took on challenging engineering and beam physics problems and addressed other issues related to the viability and reliability of Fermilab’s Proton Source.

It is truly remarkable that the Booster and the Linac — the oldest machines at the lab — are performing at record levels almost 50 years after they were built, well above their design specifications and beyond what anyone could have hoped for at the lab’s founding.

Now we look to the next steps, working to achieve even higher proton flux levels. We’re also working to ensure that PIP delivers a viable beam source until its successor plan, PIP-II, is in place. The PIP-II project will replace the current Linac with a new superconducting linac — in time for the operation of our flagship experiment, LBNF/DUNE.

    The successful implementation of PIP ensures that the Proton Source can generate the beam needed to carry out Fermilab’s — and the nation’s — high-energy physics program. This was no small effort, and we congratulate and thank everyone involved for delivering world-class accelerators for fundamental science.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    FNAL Icon

Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a U.S. Department of Energy national laboratory specializing in high-energy particle physics, and America’s premier laboratory for particle physics and accelerator research. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 1:42 pm on February 21, 2018 Permalink | Reply
Tags: In a First Tiny Diamond Anvils Trigger Chemical Reactions by Squeezing, Physics

    From SLAC: “In a First, Tiny Diamond Anvils Trigger Chemical Reactions by Squeezing” 


    SLAC Lab

    February 21, 2018
    Glennda Chui

    Press Office Contact:
    Andy Freeberg
    afreeberg@slac.stanford.edu
    (650) 926-4359

    Experiments with ‘molecular anvils’ mark an important advance for mechanochemistry, which has the potential to make chemistry greener and more precise.

An illustration shows complexes of soft molecules (yellow and pink) attached to “molecular anvils” (red and blue) that are about to be squeezed between two diamonds in a diamond anvil cell. The molecular anvils distribute this pressure unevenly, breaking bonds and triggering other chemical reactions in the softer molecules. (Peter Allen/UC-Santa Barbara)

A disassembled diamond anvil cell. Each half contains a tiny diamond housed in stainless steel. Samples are placed between the diamond tips; then the cell is closed and the tips squeezed together by tightening screws. This small device can generate pressures in the gigapascal range – 10,000 times the atmospheric pressure at the Earth’s surface. (Dawn Harmer/SLAC National Accelerator Laboratory)

An animation shows how attaching molecular anvils (gray cages) to softer molecules (red and yellow balls) distributes the pressure from a bigger diamond anvil unevenly, so chemical bonds bend and eventually break around the atom that bears the largest deformation (circled red ball). (Greg Stewart/SLAC National Accelerator Laboratory)

    Scientists have turned the smallest possible bits of diamond and other super-hard specks into “molecular anvils” that squeeze and twist molecules until chemical bonds break and atoms exchange electrons. These are the first such chemical reactions triggered by mechanical pressure alone, and researchers say the method offers a new way to do chemistry at the molecular level that is greener, more efficient and much more precise.

    The research was led by scientists from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University, who reported their findings in Nature today.

    “Unlike other mechanical techniques, which basically pull molecules until they break apart, we show that pressure from molecular anvils can both break chemical bonds and trigger another type of reaction where electrons move from one atom to another,” said Hao Yan, a physical science research associate at SIMES, the Stanford Institute for Materials and Energy Sciences, and one of the lead authors of the study.

    “We can use molecular anvils to trigger changes at a specific point in a molecule while protecting the areas we don’t want to change,” he said, “and this creates a lot of new possibilities.”

    A reaction that’s mechanically driven has the potential to produce entirely different products from the same starting ingredients than one driven the conventional way by heat, light or electrical current, said study co-author Nicholas Melosh, a SIMES investigator and associate professor at SLAC and Stanford. It’s also much more energy efficient, and because it doesn’t need heat or solvents, it should be environmentally friendly.

    Putting the Squeeze on Materials with Diamonds

    The experiments were carried out with a diamond anvil cell about the size of an espresso cup in the laboratory of paper co-author Wendy Mao, an associate professor at SLAC and Stanford and an investigator with SIMES, which is a joint SLAC/Stanford institute.

    Diamond anvil cells squeeze materials between the flattened tips of two diamonds and can reach tremendous pressures – over 500 gigapascals, or about one and a half times the pressure at the center of the Earth. They’re used to explore what minerals deep inside the Earth are like and how materials under pressure develop unusual properties, among other things.

    These pressures are reached in a surprisingly straightforward way, by tightening screws to bring the diamonds closer together, Mao said. “Pressure is force per unit area, and we are compressing a tiny amount of sample between the tips of two small diamonds that each weigh only about a quarter of a carat,” she said, “so you only need a modest amount of force to reach high pressures.”
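Mao’s point that pressure is force per unit area can be made concrete with rough numbers. The culet diameter and force below are typical textbook values for a diamond anvil cell, not measurements from this experiment:

```python
import math

# Hypothetical but typical values: ~1 kN of screw force applied to the
# flat tip (culet) of a diamond 300 micrometers across.
force_n = 1000.0
culet_diameter_m = 300e-6
area_m2 = math.pi * (culet_diameter_m / 2) ** 2   # ~7.1e-8 m^2
pressure_gpa = force_n / area_m2 / 1e9
print(f"~{pressure_gpa:.0f} GPa")  # ~14 GPa from a modest 1 kN force
```

Because the contact area is so tiny, even a hand-tightened screw delivers pressures measured in gigapascals.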

    Since the diamonds are transparent, light can go through them and reach the sample, said Yu Lin, a SIMES associate staff scientist who led the high-pressure part of the experiment.

    “We can use a lot of experimental techniques to study the reaction while the sample is compressed,” she said. “For instance, when we shine an X-ray beam into the sample, the sample responds by scattering or absorbing the light, which travels back through the diamond into a detector. Analyzing the signal from that light tells you if a reaction has occurred.”

Illustration of a diamond anvil cell, where samples can be compressed to very high pressures between the flattened tips of two diamonds. (Argonne National Laboratory, Greg Stewart/SLAC National Accelerator Laboratory)

    What usually happens when you squeeze a sample is that it deforms uniformly, with all the bonds between atoms shrinking by the same amount, Melosh said.

    Yet this is not always the case, he said: “If you compress a material that has both hard and soft components, such as carbon fibers embedded in epoxy, the bonds in the soft epoxy will deform a whole lot more than the ones in the carbon fiber.”

    They wondered if they could harness that same principle to bend or break specific bonds in an individual molecule.

    What got them thinking along those lines was a series of experiments Melosh’s team had done with diamondoids, the smallest possible bits of diamond, which are invisible to the naked eye and weigh less than a billionth of a billionth of a carat. Melosh co-directs a joint SLAC-Stanford program that isolates diamondoids from petroleum fluid and looks for ways to put them to use. In a recent study, his team had attached diamondoids to smaller, softer molecules to create Lego-like blocks that assembled themselves into the thinnest possible electrical wires, with a conducting core of sulfur and copper.

    Like carbon fibers in epoxy, these building blocks contained hard and soft parts. If put into a diamond anvil, would the hard parts act as mini-anvils that squeeze and deform the soft parts in a non-uniform way?

    The answer, they discovered, was yes.


    Tiny Anvils Open New Possibilities

    For their first experiments, they used copper sulfur clusters – tiny particles consisting of eight atoms – attached to molecular anvils made of another rigid molecule called carborane. They put this combination into the diamond anvil cell and cranked up the pressure.

    When the pressure got high enough, atomic bonds in the cluster broke, but that’s not all. Electrons moved from its sulfur atoms to its copper atoms and pure crystals of copper formed, which would not have occurred in conventional reactions driven by heat, the researchers said. They discovered a point of no return where this change becomes irreversible. Below that pressure point, the cluster goes back to its original state when pressure is removed.

    Computational studies revealed what had happened: Pressure from the diamond anvil cell moved the molecular anvils, and they in turn squeezed chemical bonds in the clusters, compressing them at least 10 times more than their own bonds had been compressed. This compression was also uneven, Yan said, and it bent or twisted some of the cluster’s bonds in a way that caused bonds to break, electrons to move and copper crystals to form.

    Other experiments, this time with diamondoids as molecular anvils, showed that small changes in the sizes and positions of the tiny anvils can make the difference between triggering a reaction or protecting part of a molecule so it doesn’t bend or react.

    The scientists were able to observe these changes with several techniques, including electron microscopy at Stanford and X-ray measurements at two DOE Office of Science user facilities – the Advanced Light Source at Lawrence Berkeley National Laboratory and the Advanced Photon Source at Argonne National Laboratory.

    LBNL/ALS

    ANL/APS

Researchers in a SIMES lab with equipment used in the molecular anvil study. From left: Hao Yan, a physical science research associate at SIMES; Nicholas Melosh, a SIMES investigator and associate professor at SLAC and Stanford; and Yu Lin, a SIMES associate staff scientist. (Dawn Harmer/SLAC National Accelerator Laboratory)

    “This is exciting, and it opens up a whole new field,” Mao said. “From our side, we’re interested in looking at how pressure can affect a wide range of technologically interesting materials, from superconductors that transmit electricity with no loss to halide perovskites, which have a lot of potential for next-generation solar cells. Once we understand what’s possible from a very basic science point of view we can think about the more practical side.”

    Going forward, the researchers also want to use this technique to look at reactions that are hard to do in conventional ways and see if compression makes them easier, Yan said.

    “If we want to dream big, could compression help us turn carbon dioxide from the air into fuel, or nitrogen from the air into fertilizer?” he said. “These are some of the questions that molecular anvils will allow people to explore.”

    In addition to SLAC, Stanford, Berkeley Lab and Argonne, researchers who contributed to this study came from the National Autonomous University of Mexico (UNAM), Justus-Liebig University in Germany, Hong Kong University of Science and Technology and the University of Chicago. Major funding came from the DOE Office of Science.

See the full article here.


    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 4:39 pm on February 20, 2018 Permalink | Reply
Tags: Rare hyperon-decay anomaly under the spotlight, Physics

    From CERN Courier: “Rare hyperon-decay anomaly under the spotlight” 


    CERN Courier

    Feb 16, 2018

The invariant mass distribution

    The LHCb collaboration has shed light on a long-standing anomaly in the very rare hyperon decay Σ+ → pµ+µ– first observed in 2005 by Fermilab’s HyperCP experiment. The HyperCP team found that the branching fraction for this process is consistent with Standard Model (SM) predictions, but that the three signal events observed exhibited an interesting feature: all muon pairs had invariant masses very close to each other, instead of following a scattered distribution.

This suggested the existence of a new light particle, X0, with a mass of about 214 MeV/c^2, which would be produced in the Σ+ decay along with the proton and would subsequently decay to two muons. Although this particle has long been sought in various other decays and at several experiments, no experiment other than HyperCP has so far been able to perform searches using the same Σ+ decay mode.

    The large rate of hyperon production in proton–proton collisions at the LHC has recently allowed the LHCb collaboration to search for the Σ+ → pµ+µ– decay. Given the modest transverse momentum of the final-state particles, the probability that such a decay is able to pass the LHCb trigger requirements is very small. Consequently, events where the trigger is activated by particles produced in the collisions other than those in the decay under study are also employed.

This search was performed using the full Run 1 dataset, corresponding to an integrated luminosity of 3 fb^−1 and about 10^14 Σ+ hyperons. An excess of about 13 signal events is found with respect to the background-only expectation, with a significance of four standard deviations. The dimuon invariant-mass distribution of these events was examined and found to be consistent with the SM expectation, with no evidence of a cluster around 214 MeV/c^2. The signal yield was converted to a branching fraction of (2.1 +1.6/−1.2) × 10^−8 using the known Σ+ → pπ0 decay as a normalisation channel, in excellent agreement with the SM prediction. When restricting the sample explicitly to the case of a decay with the putative X0 particle as an intermediate state, no excess was found. This sets an upper limit on the branching fraction at 9.5 × 10^−9 at 90% CL, to be compared with the HyperCP result of (3.1 +2.4/−1.9 ± 1.5) × 10^−8.
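A rough consistency check is possible using only the numbers quoted above (about 10^14 Σ+ produced, a branching fraction of 2.1×10^−8, and roughly 13 signal events): one can back out the overall detection efficiency the analysis must have had. The tiny value is consistent with the article’s remark that these soft decays rarely pass the trigger:

```python
# Rough consistency check from the figures quoted in the text.
n_hyperons = 1e14            # Sigma+ produced in the Run 1 dataset
branching_fraction = 2.1e-8  # measured B(Sigma+ -> p mu+ mu-)
n_observed = 13              # approximate signal yield

n_decays = n_hyperons * branching_fraction   # ~2.1e6 such decays occurred
efficiency = n_observed / n_decays           # fraction LHCb actually recorded
print(f"implied overall efficiency ~ {efficiency:.0e}")  # ~6e-06
```

Only a few decays in every million are triggered, reconstructed and selected, which is why the enormous hyperon production rate at the LHC is essential for this measurement.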

This result, together with the recent search for the rare decay KS → μ+μ–, shows the potential of LHCb for performing challenging measurements with strange hadrons. As with a number of results in other areas reported recently, LHCb is demonstrating its power not only as a b-physics experiment but as a general-purpose one in the forward region. With current data, and in particular with the upgraded detector and its software trigger from Run 3 onwards, LHCb will be the dominant experiment for the study of both hyperons and KS mesons, exploiting their rare decays to provide a new perspective in the quest for physics beyond the SM.

See the full article here.

THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN LHC Map
    CERN LHC Grand Tunnel

    CERN LHC particles

     
  • richardmitnick 3:18 pm on February 20, 2018 Permalink | Reply
Tags: DarkMatter, Physics

    From Symmetry: “The secret life of Higgs bosons” 

    Symmetry Mag

    Symmetry

    02/20/18
    Sarah Charley

    Are these mass-giving particles hanging out with dark matter?

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event


The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The Higgs boson has existed since the earliest moments of our universe. Its directionless field permeates all of space and entices transient particles to slow down and burgeon with mass. Without the Higgs field, there could be no stable structures; the universe would be cold, dark and lifeless.

Many scientists are hoping that the Higgs boson will help them understand phenomena not predicted by the Standard Model, physicists’ field guide to the subatomic world. While the Standard Model is an ace at predicting the properties of all known subatomic particles, it falls short on things like gravity, the accelerating expansion of the universe, the surprisingly high speeds of spinning galaxies, the absurd excess of matter over antimatter, and beyond.

    “We can use the Higgs boson as a tool to look for new physics that might not readily interact with our standard set of particles,” says Darin Acosta, a physicist at the University of Florida.

    In particular, there’s hope that the Higgs boson might interact with dark matter, thought to be a widespread but never directly detected kind of matter that outnumbers regular matter five to one. This theoretical massive particle makes itself known through its gravitational attraction. Physicists see its fingerprint all over the cosmos in the rotational speed of galaxies, the movements of galaxy clusters and the bending of distant light. Even though dark matter appears to be everywhere, scientists have yet to find a tool that can bridge the light and dark sectors.

    Dark matter halo. Image credit: Virgo consortium / A. Amblard / ESA

    If the Higgs field is the only vendor of mass in the cosmos, then dark matter must be a client. This means that the Higgs boson, the spokesparticle of the Higgs field, must have some relationship with dark matter particles.

    “It could be that dark matter aids in the production of Higgs bosons, or that Higgs bosons can transform into dark matter particles as they decay,” Acosta says. “It’s simple on paper, but the challenge is finding evidence of it happening, especially when so many parts of the equation are completely invisible.”

    The particle that wasn’t there

    To find evidence of the Higgs boson flirting with dark matter, scientists must learn how to see the invisible. Scientists never see the Higgs boson directly; in fact, they discovered the Higgs boson by tracing the particles it produces as it decays. Now, they want to precisely measure how frequently the Higgs boson transforms into different types of particles. It’s not easy.

    “All we can see with our detector is the last step of the decay, which we call the final state,” says Will Buttinger, a CERN research fellow. “In many cases, the Higgs is not the parent of the particles we see in the final state, but the grandparent.”

    The Standard Model not only predicts all the different possible decays of Higgs bosons, but how favorable each decay is. For instance, it predicts that about 60 percent of Higgs bosons will transform into a pair of bottom quarks, whereas only 0.2 percent will transform into a pair of photons. If the experimental results show Higgs bosons decaying into certain particles more or less often than predicted, it could mean that a few Higgs bosons are sneaking off and transforming into dark matter.
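The logic of the precision test can be sketched numerically: if some fraction of Higgs decays were invisible, every visible branching fraction would be scaled down by the same factor. The Standard Model values below are approximate figures for a 125 GeV Higgs, and the 10 percent invisible fraction is purely hypothetical:

```python
# Approximate SM branching fractions for a 125 GeV Higgs boson.
sm_bf = {"bb": 0.58, "WW": 0.21, "gg": 0.08, "tautau": 0.06,
         "ZZ": 0.026, "gammagamma": 0.002}

def visible_bf(invisible_fraction):
    """Visible branching fractions if invisible_fraction of all Higgs
    decays went to undetectable particles such as dark matter."""
    scale = 1.0 - invisible_fraction
    return {ch: bf * scale for ch, bf in sm_bf.items()}

# A hypothetical 10% invisible width would pull B(H -> bb) from ~58%
# down to ~52% -- the kind of across-the-board deficit that precision
# branching-fraction measurements are designed to catch.
print(visible_bf(0.10)["bb"])
```

The experimental challenge is that current measurement uncertainties on most channels are still larger than the deficits a modest invisible width would produce.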

    Of course, these kinds of precision measurements cannot tell scientists if the Higgs is evolving into dark matter as part of its decay path—only that it is behaving strangely. To catch the Higgs in the act, scientists need irrefutable evidence of the Higgs schmoozing with dark matter.

    “How do we see invisible things?” asks Buttinger. “By the influence it has on what we can see.”

For example, humans cannot see the wind, but we can look outside our windows and immediately know if it’s windy based on whether or not the trees are swaying. Scientists can look for dark matter particles in a similar way.

    “For every action, there is an equal and opposite reaction,” Buttinger says. “If we see particles shooting off in one direction, we know that there must be something shooting off in the other direction.”

    If a Higgs boson transforms into a visible particle paired with a dark matter particle, the solitary tracks of the visible particles will have an odd and inexplicable trajectory—an indication that, perhaps, a dark matter particle is escaping.
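Buttinger’s recoil argument is how “missing transverse momentum” is computed in practice: sum the visible momenta in the plane transverse to the beam, and whatever is needed to balance them is attributed to invisible particles. The two visible particles below are hypothetical:

```python
import math

# Hypothetical visible final state: (pT in GeV, azimuthal angle in rad).
visible = [(30.0, 0.2), (25.0, 0.5)]

# Vector-sum the visible transverse momenta...
px = sum(pt * math.cos(phi) for pt, phi in visible)
py = sum(pt * math.sin(phi) for pt, phi in visible)

# ...and the missing transverse momentum is whatever balances them.
missing_pt = math.hypot(px, py)
print(f"missing pT ~ {missing_pt:.1f} GeV")
```

A large value of this quantity, with no detector explanation, is exactly the “odd and inexplicable trajectory” signature described above.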

    The Higgs boson is the newest tool scientists have to explore the uncharted terrain within and beyond the Standard Model. The continued research at the LHC and its future upgrades will enable scientists to characterize this reticent particle and learn its close-held secrets.

See the full article here.


    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 2:46 pm on February 19, 2018 Permalink | Reply
    Tags: , Edward Witten, Physics,   

    From Quanta Magazine: “A Physicist’s Physicist Ponders the Nature of Reality” Edward Witten 

    Quanta Magazine
    Quanta Magazine

    FOR L.Z. OF HP AND RUTGERS. I HOPE HE SEES IT.

    November 28, 2017 [Just found this. Did I miss it in November? I would not have skipped it.]
    Natalie Wolchover

Edward Witten in his office at the Institute for Advanced Study in Princeton, New Jersey.

    Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the leading candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

    During a visit this fall, I spotted Witten on the Institute’s central lawn and requested an interview; in his quick, alto voice, he said he couldn’t promise to be able to answer my questions but would try. Later, when I passed him on the stone paths, he often didn’t seem to see me.

    Physics luminaries since Albert Einstein, who lived out his days in the same intellectual haven, have sought to unify gravity with the other forces of nature by finding a more fundamental quantum theory to replace Einstein’s approximate picture of gravity as curves in the geometry of space-time. M-theory, which Witten proposed in 1995, could conceivably offer this deeper description, but only some aspects of the theory are known. M-theory incorporates within a single mathematical structure all five versions of string theory, which renders the elements of nature as minuscule vibrating strings. These five string theories connect to each other through “dualities,” or mathematical equivalences. Over the past 30 years, Witten and others have learned that the string theories are also mathematically dual to quantum field theories — descriptions of particles moving through electromagnetic and other fields that serve as the language of the reigning “Standard Model” of particle physics. While he’s best known as a string theorist, Witten has discovered many new quantum field theories and explored how all these different descriptions are connected. His physical insights have led time and again to deep mathematical discoveries.

    Researchers pore over his work and hope he’ll take an interest in theirs. But for all his scholarly influence, Witten, who is 66, does not often broadcast his views on the implications of modern theoretical discoveries. Even his close colleagues eagerly suggested questions they wanted me to ask him.

    When I arrived at his office at the appointed hour on a summery Thursday last month, Witten wasn’t there. His door was ajar. Papers covered his coffee table and desk — not stacks, but floods: text oriented every which way, some pages close to spilling onto the floor. (Research papers get lost in the maelstrom as he finishes with them, he later explained, and every so often he throws the heaps away.) Two girls smiled out from a framed photo on a shelf; children’s artwork decorated the walls, one celebrating Grandparents’ Day. When Witten arrived minutes later, we spoke for an hour and a half about the meaning of dualities in physics and math, the current prospects of M-theory, what he’s reading, what he’s looking for, and the nature of reality. The interview has been condensed and edited for clarity.

    Physicists are talking more than ever lately about dualities, but you’ve been studying them for decades. Why does the subject interest you?

    People keep finding new facets of dualities. Dualities are interesting because they frequently answer questions that are otherwise out of reach. For example, you might have spent years pondering a quantum theory and you understand what happens when the quantum effects are small, but textbooks don’t tell you what you do if the quantum effects are big; you’re generally in trouble if you want to know that. Frequently dualities answer such questions. They give you another description, and the questions you can answer in one description are different than the questions you can answer in a different description.

    What are some of these newfound facets of dualities?

    It’s open-ended because there are so many different kinds of dualities. There are dualities between a gauge theory [a theory, such as a quantum field theory, that respects certain symmetries] and another gauge theory, or between a string theory for weak coupling [describing strings that move almost independently from one another] and a string theory for strong coupling. Then there’s AdS/CFT duality, between a gauge theory and a gravitational description. That duality was discovered 20 years ago, and it’s amazing to what extent it’s still fruitful. And that’s largely because around 10 years ago, new ideas were introduced that rejuvenated it. People had new insights about entropy in quantum field theory — the whole story about “it from qubit.”

    That’s the idea that space-time and everything in it emerges like a hologram out of information stored in the entangled quantum states of particles.

    Yes. Then there are dualities in math, which can sometimes be interpreted physically as consequences of dualities between two quantum field theories. There are so many ways these things are interconnected that any simple statement I try to make on the fly, as soon as I’ve said it I realize it didn’t capture the whole reality. You have to imagine a web of different relationships, where the same physics has different descriptions, revealing different properties. In the simplest case, there are only two important descriptions, and that might be enough. If you ask me about a more complicated example, there might be many, many different ones.

Given this web of relationships and how hard it is to characterize all the dualities, do you feel that this reflects a lack of understanding of the structure, or is it that we’re seeing the structure, only it’s very complicated?

    I’m not certain what we should hope for. Traditionally, quantum field theory was constructed by starting with the classical picture [of a smooth field] and then quantizing it. Now we’ve learned that there are a lot of things that happen that that description doesn’t do justice to. And the same quantum theory can come from different classical theories. Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

    There’s another curious fact that you might want to consider, which is that quantum field theory is very central to physics, and it’s actually also clearly very important for math. But it’s extremely difficult for mathematicians to study; the way physicists define it is very hard for mathematicians to follow with a rigorous theory. That’s extremely strange, that the world is based so much on a mathematical structure that’s so difficult.

Jean Sweep for Quanta Magazine

    What do you see as the relationship between math and physics?

    I prefer not to give you a cosmic answer but to comment on where we are now. Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way. Physicists are able to come up with things that surprise the mathematicians. Because it’s hard to describe mathematically in the known formulation, the things you learn about quantum field theory you have to learn from physics.

    I find it hard to believe there’s a new formulation that’s universal. I think it’s too much to hope for. I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

    You can’t imagine it at all?

    No, I can’t. Traditionally it was thought that interacting quantum field theory couldn’t exist above four dimensions, and there was the interesting fact that that’s the dimension we live in. But one of the offshoots of the string dualities of the 1990s was that it was discovered that quantum field theories actually exist in five and six dimensions. And it’s amazing how much is known about their properties.

    I’ve heard about the mysterious (2,0) theory, a quantum field theory describing particles in six dimensions, which is dual to M-theory describing strings and gravity in seven-dimensional AdS space. Does this (2,0) theory play an important role in the web of dualities?

    Yes, that’s the pinnacle. In terms of conventional quantum field theory without gravity, there is nothing quite like it above six dimensions. From the (2,0) theory’s existence and main properties, you can deduce an incredible amount about what happens in lower dimensions. An awful lot of important dualities in four and fewer dimensions follow from this six-dimensional theory and its properties. However, whereas what we know about quantum field theory is normally from quantizing a classical field theory, there’s no reasonable classical starting point of the (2,0) theory. The (2,0) theory has properties [such as combinations of symmetries] that sound impossible when you first hear about them. So you can ask why dualities exist, but you can also ask why is there a 6-D theory with such and such properties? This seems to me a more fundamental restatement.

    Dualities sometimes make it hard to maintain a sense of what’s real in the world, given that there are radically different ways you can describe a single system. How would you describe what’s real or fundamental?

    What aspect of what’s real are you interested in? What does it mean that we exist? Or how do we fit into our mathematical descriptions?

    The latter.

    Well, one thing I’ll tell you is that in general, when you have dualities, things that are easy to see in one description can be hard to see in the other description. So you and I, for example, are fairly simple to describe in the usual approach to physics as developed by Newton and his successors. But if there’s a radically different dual description of the real world, maybe some things physicists worry about would be clearer, but the dual description might be one in which everyday life would be hard to describe.

    What would you say about the prospect of an even more optimistic idea that there could be one single quantum gravity description that really does help you in every case in the real world?

    Well, unfortunately, even if it’s correct I can’t guarantee it would help. Part of what makes it difficult to help is that the description we have now, even though it’s not complete, does explain an awful lot. And so it’s a little hard to say, even if you had a truly better description or a more complete description, whether it would help in practice.

    Are you speaking of M-theory?

    M-theory is the candidate for the better description.

    You proposed M-theory 22 years ago. What are its prospects today?

    Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

    3
    Jean Sweep for Quanta Magazine

    What’s an example of something else we might need?

    Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

    Are you willing to speculate about how it would be different?

    I really doubt I can say anything useful. I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. I tend to think that there isn’t a precise quantum description of space-time — except in the types of situations where we know that there is, such as in AdS space. I tend to think, otherwise, things are a little bit murkier than an exact quantum description. But I can’t say anything useful.

    The other night I was reading an old essay by the 20th-century Princeton physicist John Wheeler. He was a visionary, certainly. If you take what he says literally, it’s hopelessly vague. And therefore, if I had read this essay when it came out 30 years ago, which I may have done, I would have rejected it as being so vague that you couldn’t work on it, even if he was on the right track.

You’re referring to “Information, Physics, Quantum,” Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” Why were you reading it?

    I’m trying to learn about what people are trying to say with the phrase “it from qubit.” Wheeler talked about “it from bit,” but you have to remember that this essay was written probably before the term “qubit” was coined and certainly before it was in wide currency. Reading it, I really think he was talking about qubits, not bits, so “it from qubit” is actually just a modern translation.

    Don’t expect me to be able to tell you anything useful about it — about whether he was right. When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.

    Observing a quantum system irreversibly changes it, creating a distinction between past and future. So the observer issue seems possibly related to the question of time, which we also don’t understand. With the AdS/CFT duality, we’ve learned that new spatial dimensions can pop up like a hologram from quantum information on the boundary. Do you think time is also emergent — that it arises from a timeless complete description?

    I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay. As you’ll read, he thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.

    Do you consider Wheeler a hero?

    I wouldn’t call him a hero, necessarily, no. Really I just became curious what he meant by “it from bit,” and what he was saying. He definitely had visionary ideas, but they were too far ahead of their time. I think I was more patient in reading a vague but inspirational essay than I might have been 20 years ago. He’s also got roughly 100 interesting-sounding references in that essay. If you decided to read them all, you’d have to spend weeks doing it. I might decide to look at a few of them.

    Why do you have more patience for such things now?

    I think when I was younger I always thought the next thing I did might be the best thing in my life. But at this point in life I’m less persuaded of that. If I waste a little time reading somebody’s essay, it doesn’t seem that bad.

    Do you ever take your mind off physics and math?

    My favorite pastime is tennis. I am a very average but enthusiastic tennis player.

    In contrast to Wheeler, it seems like your working style is to come to the insights through the calculations, rather than chasing a vague vision.

    In my career I’ve only been able to take small jumps. Relatively small jumps. What Wheeler was talking about was an enormous jump. And he does say at the beginning of the essay that he has no idea if this will take 10, 100 or 1,000 years.

    And he was talking about explaining how physics arises from information.

    Yes. The way he phrases it is broader: He wants to explain the meaning of existence. That was actually why I thought you were asking if I wanted to explain the meaning of existence.

    I see. Does he have any hypotheses?

    No. He only talks about things you shouldn’t do and things you should do in trying to arrive at a more fundamental description of physics.

    Do you have any ideas about the meaning of existence?

    No. [Laughs.]

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:49 am on February 19, 2018 Permalink | Reply
    Tags: Advanced NMR spectroscopy, , , DNP-Dynamic Nuclear Polarization, DNP-NMR spectrometer, , MRI-Magnetic resonance imaging, Physics   

    From Ames Lab: “Seeing the future of new energy materials” 

    Ames Laboratory

    Using advanced NMR spectroscopy methods to guide materials design.

    1
    (l-r) Jason Goh, Takeshi Kobayashi, Linlin Wang, Wenyu Huang, Amrit Venkatesh, Aaron Rossini, Frederic Perras, Marek Pruski, Mike Hanrahan and Zhuoran Wang.

    How do small defects in the surface of solar cell material affect its ability to absorb and convert sunlight to electricity? How does the molecular structure of a porous material determine its ability to separate gases from one another? Understanding the structure and function of materials at the atomic scale is one of the frontiers of energy science.

    “Many new materials have been developed in the past decade to address needs for energy conversion and storage,” said Aaron Rossini, a scientist at the U.S. Department of Energy’s Ames Laboratory, and a professor of chemistry at Iowa State University. “However, there is still a lot we don’t know about how these materials function. We want to change that and bring new information to the table that will be used to optimize these materials.”

Ames Laboratory has recently received new funding to study such materials by developing and applying new techniques in solid-state nuclear magnetic resonance (NMR) spectroscopy. “NMR has a long and distinguished history at Ames Laboratory, in terms of both expertise and facilities, and this new research project is its latest chapter,” said Ames Laboratory scientist Marek Pruski. “Understanding the structure of materials is fundamentally important to many research groups here, and we will be collaborating with them at a new level to expand their insights.”

Most people associate NMR with magnetic resonance imaging (MRI), which is used as a diagnostic tool in medicine. Nuclear magnetic resonance probes the nuclei of atoms as they absorb and re-emit radio waves when they are placed in a magnetic field. Those nuclei resonate at measurable radio frequencies that depend precisely on the local structure of the material, the element being studied, and the strength of the magnetic field.
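
The field and nucleus dependence described above is the Larmor relation, f = (γ/2π)·B0. A minimal sketch using standard gyromagnetic ratios (the 9.4 T field is just an example, and the small local-structure correction, the chemical shift, is omitted):

```python
# Larmor relation: resonance frequency f = (gamma / 2*pi) * B0.
# gamma/2pi values in MHz per tesla for a few common NMR nuclei:
GYROMAGNETIC_MHZ_PER_T = {
    "1H": 42.577,    # proton
    "13C": 10.708,   # carbon-13
    "29Si": -8.465,  # silicon-29 (negative gyromagnetic ratio)
}

def larmor_frequency_mhz(nucleus: str, b0_tesla: float) -> float:
    """Magnitude of the resonance frequency (MHz) in a static field B0."""
    return abs(GYROMAGNETIC_MHZ_PER_T[nucleus]) * b0_tesla

# A "400 MHz" spectrometer is named for its proton frequency at 9.4 T:
print(round(larmor_frequency_mhz("1H", 9.4), 1))   # 400.2
print(round(larmor_frequency_mhz("13C", 9.4), 1))  # 100.7
```

Because each nucleus resonates at a distinct, field-proportional frequency, one spectrometer can address different elements simply by tuning to their frequencies.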

In late 2014, the spectroscopy experts at Ames Laboratory took their NMR capabilities a quantum leap forward with the acquisition of the first commercial DNP-NMR spectrometer used for materials research in North America. “DNP” stands for “Dynamic Nuclear Polarization,” a method which uses microwaves to excite unpaired electrons in radicals and transfer their high spin polarization to the nuclei in the sample being analyzed. It’s an ‘extra-oomph’ version of conventional NMR technology, offering drastically higher sensitivity and faster data acquisition—and it has already provided game-changing insight into the physical, chemical, and electronic properties of materials. For example, with DNP-enhanced NMR it is possible to measure distances between atoms with a precision of a trillionth of a meter, or to measure two-dimensional correlation spectra between rare nuclei, such as carbon-13.
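
The sensitivity gain behind DNP can be quantified: the theoretical ceiling on the enhancement is the ratio of the electron and nuclear gyromagnetic ratios, about 658 for protons. A back-of-envelope sketch (the "realistic" enhancement of 100 is an illustrative assumption, not a figure from the article):

```python
# The ceiling on DNP signal enhancement is the ratio of gyromagnetic
# ratios, epsilon_max = gamma_e / gamma_n (gamma/2pi, in MHz per tesla).
GAMMA_E = 28024.95   # free electron
GAMMA_1H = 42.577    # proton

epsilon_max = GAMMA_E / GAMMA_1H
print(round(epsilon_max))  # 658

# Signal averaging time scales as 1/epsilon^2, so even a partial
# enhancement of 100 (an illustrative value) cuts the time ~10,000-fold:
epsilon = 100
print(epsilon ** 2)  # 10000
```

That quadratic speedup is why experiments that once took weeks of averaging become feasible in hours.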

    “We‘ve had a ball here for the last two and a half years, publishing research findings at the rate of a journal paper per month since the DNP-NMR became operational,” said Pruski. “That’s really a very high pace for high-impact science.”

“It’s a perfect tool for this type of investigation. The properties of energy materials are governed by the structure of their surfaces and the interfaces, and DNP-NMR is especially well-adapted and sensitive to exploring these.”

Ames Laboratory will pair these rapidly expanding capabilities in DNP-NMR with a technique called ultrafast magic-angle spinning (UFMAS), which relies on spinning the sample at extremely high frequencies (more than 6 million RPM, or spinning rates above 100 kHz). UFMAS greatly improves NMR experiments by allowing signals from hydrogen to be well resolved in most solids.

    Theoretical physicists will be joining the efforts of the experimentalists, developing models that computationally verify or explain their results. Conversely, NMR experiments will guide the development of improved theoretical models.

    “Our work could have far-reaching impact on a lot of fields, in electronics, lighting, solar cells, nanoparticle design, materials with a variety of energy applications,” said Rossini. “If we are able to explain how structure and function are related, we can help direct intelligent materials design.”

See the full article here.


    Ames Laboratory is a government-owned, contractor-operated research facility of the U.S. Department of Energy that is run by Iowa State University.

    For more than 60 years, the Ames Laboratory has sought solutions to energy-related problems through the exploration of chemical, engineering, materials, mathematical and physical sciences. Established in the 1940s with the successful development of the most efficient process to produce high-quality uranium metal for atomic energy, the Lab now pursues a broad range of scientific priorities.

    Ames Laboratory is a U.S. Department of Energy Office of Science national laboratory operated by Iowa State University. Ames Laboratory creates innovative materials, technologies and energy solutions. We use our expertise, unique capabilities and interdisciplinary collaborations to solve global problems.

    Ames Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
    DOE Banner


     
  • richardmitnick 10:41 am on February 19, 2018 Permalink | Reply
    Tags: , , , Physics, Quantum computers, Topological superconductors   

    From phys.org: “Unconventional superconductor may be used to create quantum computers of the future” 

    physdotorg
    phys.org

    February 19, 2018

    1
    After an intensive period of analyses the research team led by Professor Floriana Lombardi, Chalmers University of Technology, was able to establish that they had probably succeeded in creating a topological superconductor. Credit: Johan Bodell/Chalmers University of Technology

    With their insensitivity to decoherence, Majorana particles could become stable building blocks of quantum computers. The problem is that they only occur under very special circumstances. Now, researchers at Chalmers University of Technology have succeeded in manufacturing a component that is able to host the sought-after particles.

    Researchers throughout the world are struggling to build quantum computers. One of the great challenges is to overcome the sensitivity of quantum systems to decoherence, the collapse of superpositions. One track within quantum computer research is therefore to make use of Majorana particles, which are also called Majorana fermions. Microsoft, among other organizations, is exploring this type of quantum computer.

    Majorana fermions are highly original particles, quite unlike those that make up the materials around us. In highly simplified terms, they can be seen as half-electron. In a quantum computer, the idea is to encode information in a pair of Majorana fermions separated in the material, which should, in principle, make the calculations immune to decoherence.

    So where do you find Majorana fermions? In solid state materials, they only appear to occur in what are known as topological superconductors. But a research team at Chalmers University of Technology is now among the first in the world to report that they have actually manufactured a topological superconductor.

    “Our experimental results are consistent with topological superconductivity,” says Floriana Lombardi, professor at the Quantum Device Physics Laboratory at Chalmers.

To create their unconventional superconductor, they started with what is called a topological insulator made of bismuth telluride, Bi2Te3. A topological insulator conducts current in a very special way on the surface. The researchers placed a layer of aluminum, a conventional superconductor, on top, which conducts current entirely without resistance at low temperatures.

    “The superconducting pair of electrons then leak into the topological insulator, which also becomes superconducting,” explains Thilo Bauch, associate professor in quantum device physics.

    However, the initial measurements all indicated that they only had standard superconductivity induced in the Bi2Te3 topological insulator. But when they cooled the component down again later, to routinely repeat some measurements, the situation suddenly changed—the characteristics of the superconducting pairs of electrons varied in different directions.

    “And that isn’t compatible at all with conventional superconductivity. Unexpected and exciting things occurred,” says Lombardi.

    “For practical applications, the material is mainly of interest to those attempting to build a topological quantum computer. We want to explore the new physics hidden in topological superconductors—this is a new chapter in physics,” Lombardi says.

    The results were recently published in Nature Communications in a study titled “Induced unconventional superconductivity on the surface states of Bi2Te3 topological insulator.”

See the full article here.


    About Phys.org in 100 Words

Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

     
  • richardmitnick 9:11 am on February 19, 2018 Permalink | Reply
    Tags: , , Electric Eels, Electrocytes, Physics,   

    From The Atlantic: “A New Kind of Soft Battery, Inspired by the Electric Eel” 

    Atlantic Magazine

    The Atlantic Magazine

    Dec 13, 2017
    Ed Yong

    1
    Thomas Schroeder / Anirvan Guha

    In 1799, the Italian scientist Alessandro Volta fashioned an arm-long stack of zinc and copper discs, separated by salt-soaked cardboard. This “voltaic pile” was the world’s first synthetic battery, but Volta based its design on something far older—the body of the electric eel.

This infamous fish makes its own electricity using an electric organ that makes up 80 percent of its two-meter length. The organ contains thousands of specialized muscle cells called electrocytes. Each only produces a small voltage, but together, they can generate up to 600 volts—enough to stun a human, or even a horse. They also provided Volta with ideas for his battery, turning him into a 19th-century celebrity.

Two centuries on, and batteries are everyday objects. But even now, the electric eel isn’t done inspiring scientists. A team of researchers led by Michael Mayer at the University of Fribourg has now created a new kind of power source [Nature] that ingeniously mimics the eel’s electric organ. It consists of blobs of multicolored gels, arranged in long rows much like the eel’s electrocytes. To turn this battery on, all you need to do is to press the gels together.

    Unlike conventional batteries, the team’s design is soft and flexible, and might be useful for powering the next generation of soft-bodied robots. And since it can be made from materials that are compatible with our bodies, it could potentially drive the next generation of pacemakers, prosthetics, and medical implants. Imagine contact lenses that generate electric power, or pacemakers that run on the fluids and salts within our own bodies—all inspired by a shocking fish.

    To create their unorthodox battery, the team members Tom Schroeder and Anirvan Guha began by reading up on how the eel’s electrocytes work. These cells are stacked in long rows with fluid-filled spaces between them. Picture a very tall tower of syrup-smothered pancakes, turned on its side, and you’ll get the idea.

    When the eel’s at rest, each electrocyte pumps positively charged ions out of both its front-facing and back-facing sides. This creates two opposing voltages that cancel each other out. But at the eel’s command, the back side of each electrocyte flips, and starts pumping positive ions in the opposite direction, creating a small voltage across the entire cell. And crucially, every electrocyte performs this flip at the same time, so their tiny voltages add up to something far more powerful. It’s as if the eel has thousands of small batteries in its tail; half are pointing in the wrong direction but it can flip them at a whim, so that all of them align. “It’s insanely specialized,” says Schroeder.
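
The series-addition described above can be sketched numerically. Both numbers below are illustrative assumptions (a per-cell voltage on the scale of a flipped membrane potential, and a few thousand cells in series), chosen only to show how tiny voltages compound to the eel's roughly 600 V:

```python
# Electrocytes in series add their voltages, like cells in a battery pack.
CELL_VOLTAGE = 0.15  # volts per electrocyte (assumed, membrane-potential scale)
N_CELLS = 4000       # electrocytes firing in unison (assumed)

total = CELL_VOLTAGE * N_CELLS
print(total)  # 600.0 -- the order of magnitude of the eel's discharge

# In the resting state, the opposing faces of each cell cancel pairwise,
# so the stack sums to zero:
resting = sum(+CELL_VOLTAGE - CELL_VOLTAGE for _ in range(N_CELLS))
print(resting)
```

The same arithmetic applies to the gel battery described below the image: many small per-gel voltages in a row add up to a useful total.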

    2
    How an electric eel’s electrocytes work (Schroeder et al. / Nature).

    He and his colleagues first thought about re-creating the entire electric organ in a lab, but soon realized that it’s far too complicated. Next, they considered setting up a massive series of membranes to mimic the stacks of electrocytes—but these are delicate materials that are hard to engineer in the thousands. If one broke, the whole series would shut down. “You’d run into the string-of-Christmas-lights problem,” says Schroeder.

    In the end, he and Guha opted for a much simpler setup, involving lumps of gel that are arranged on two separate sheets. Look at the image below, and focus on the bottom sheet. The red gels contain saltwater, while blue ones contain freshwater. Ions would flow from the former to the latter, but they can’t because the gels are separated. That changes when the green and yellow gels on the other sheet bridge the gaps between the blue and red ones, providing channels through which ions can travel.

Here’s the clever bit: The green gel lumps only allow positive ions to flow through them, while the yellow ones only let negative ions pass. This means (as the inset in the image shows) that positive ions flow into the blue gels from only one side, while negative ions flow in from the other. This creates a voltage across the blue gel, exactly as if it were an electrocyte. And just as in the electrocytes, each gel only produces a tiny voltage, but thousands of them, arranged in a row, can produce up to 110 volts.

    3
    Schroeder et al. / Nature.

    The eel’s electrocytes fire when they receive a signal from the animal’s neurons. But in Schroeder’s gels, the trigger is far simpler—all he needs to do is to press the gels together.

    It would be cumbersome to have incredibly large sheets of these gels. But Max Shtein, an engineer at the University of Michigan, suggested a clever solution—origami. Using a special folding pattern that’s also used to pack solar panels into satellites, he devised a way of folding a flat sheet of gels so the right colors come into contact in the right order. That allowed the team to generate the same amount of power in a much smaller space—in something like a contact lens, which might one day be realistically worn.

    For now, such batteries would have to be actively recharged. Once activated, they produce power for up to a few hours, until the levels of ions equalize across the various gels, and the battery goes flat. You then need to apply a current to reset the gels back to alternating rows of high-salt and low-salt. But Schroeder notes that our bodies constantly replenish reservoirs of fluid with varying levels of ions. He imagines that it might one day be possible to harness these reservoirs to create batteries.

    Essentially, that would turn humans into something closer to an electric eel. It’s unlikely that we’d ever be able to stun people, but we could conceivably use the ion gradients in our own bodies to power small implants. Of course, Schroeder says, that’s still more a flight of fancy than a goal he has an actual road map for. “Plenty of things don’t work for all sorts of reasons, so I don’t want to get too far ahead of myself,” he says.

    It’s not unreasonable to speculate, though, says Ken Catania from Vanderbilt University, who has spent years studying the biology of the eels. “Volta’s battery was not exactly something you could fit in a cellphone, but over time we have all come to depend on it,” he says. “Maybe history will repeat itself.”

    “I’m amazed at how much electric eels have contributed to science,” he adds. “It’s a good lesson in the value of basic science.” Schroeder, meanwhile, has only ever seen electric eels in zoos, and he’d like to encounter one in person. “I’ve never been shocked by one, but I feel like I should at some point,” he says.

See the full article here.


     
  • richardmitnick 1:10 pm on February 17, 2018 Permalink | Reply
    Tags: A new approach to rechargeable batteries, , , Physics   

    From MIT: “A new approach to rechargeable batteries” 

    MIT News

    MIT Widget

    MIT News

    January 22, 2018 [Just now in social media.]
    David L. Chandler


    A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. Illustration modified from an original image by Felice Frankel

    A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. The battery, based on electrodes made of sodium and nickel chloride and using a new type of metal mesh membrane, could be used for grid-scale installations to make intermittent power sources such as wind and solar capable of delivering reliable baseload electricity.

    The findings are being reported today in the journal Nature Energy, by a team led by MIT professor Donald Sadoway, postdocs Huayi Yin and Brice Chung, and four others.

    Although the basic battery chemistry the team used, based on a liquid sodium electrode material, was first described in 1968, the concept never caught on as a practical approach because of one significant drawback: It required the use of a thin membrane to separate its molten components, and the only known material with the needed properties for that membrane was a brittle and fragile ceramic. These paper-thin membranes made the batteries too easily damaged in real-world operating conditions, so apart from a few specialized industrial applications, the system has never been widely implemented.
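
For context, the sodium/nickel-chloride chemistry described above (the "ZEBRA"-type cell) runs the overall reaction 2 Na + NiCl2 ⇌ 2 NaCl + Ni, with an open-circuit voltage of about 2.58 V per cell at operating temperature. A quick sketch of the series stacking a grid installation would need (the 600 V string target is a hypothetical example, not a figure from the article):

```python
import math

# Sodium/nickel-chloride cell: 2 Na + NiCl2 <-> 2 NaCl + Ni.
# Open-circuit voltage is ~2.58 V per cell at operating temperature (~300 C).
CELL_OCV = 2.58  # volts

def cells_in_series(target_voltage: float) -> int:
    """Number of cells needed in series to reach a target DC string voltage."""
    return math.ceil(target_voltage / CELL_OCV)

# A hypothetical 600 V DC string for a grid-scale installation:
print(cells_in_series(600))  # 233 cells
```

With hundreds of cells per string, a single fragile ceramic separator failing anywhere in the series is costly, which is why a rugged membrane matters so much at this scale.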

    But Sadoway and his team took a different approach, realizing that the functions of that membrane could instead be performed by a specially coated metal mesh, a much stronger and more flexible material that could stand up to the rigors of use in industrial-scale storage systems.

    “I consider this a breakthrough,” Sadoway says, because for the first time in five decades, this type of battery — whose advantages include cheap, abundant raw materials, very safe operational characteristics, and an ability to go through many charge-discharge cycles without degradation — could finally become practical.

    While some companies have continued to make liquid-sodium batteries for specialized uses, “the cost was kept high because of the fragility of the ceramic membranes,” says Sadoway, the John F. Elliott Professor of Materials Chemistry. “Nobody’s really been able to make that process work,” including GE, which spent nearly 10 years working on the technology before abandoning the project.

    As Sadoway and his team explored various options for the different components in a molten-metal-based battery, they were surprised by the results of one of their tests using lead compounds. “We opened the cell and found droplets” inside the test chamber, which “would have to have been droplets of molten lead,” he says. But instead of acting as a membrane, as expected, the compound material “was acting as an electrode,” actively taking part in the battery’s electrochemical reaction.

    “That really opened our eyes to a completely different technology,” he says. The membrane had performed its role — selectively allowing certain molecules to pass through while blocking others — in an entirely different way, using its electrical properties rather than the typical mechanical sorting based on the sizes of pores in the material.

    In the end, after experimenting with various compounds, the team found that an ordinary steel mesh coated with a solution of titanium nitride could perform all the functions of the previously used ceramic membranes, but without the brittleness and fragility. The results could make possible a whole family of inexpensive and durable materials practical for large-scale rechargeable batteries.

    The use of the new type of membrane can be applied to a wide variety of molten-electrode battery chemistries, he says, and opens up new avenues for battery design. “The fact that you can build a sodium-sulfur type of battery, or a sodium/nickel-chloride type of battery, without resorting to the use of fragile, brittle ceramic — that changes everything,” he says.

    The work could lead to inexpensive batteries large enough to make intermittent, renewable power sources practical for grid-scale storage, and the same underlying technology could have other applications as well, such as for some kinds of metal production, Sadoway says.

    Sadoway cautions that such batteries would not be suitable for some major uses, such as cars or phones. Their strong point is in large, fixed installations where cost is paramount, but size and weight are not, such as utility-scale load leveling. In those applications, inexpensive battery technology could potentially enable a much greater percentage of intermittent renewable energy sources to take the place of baseload, always-available power sources, which are now dominated by fossil fuels.

    The research team included Fei Chen, a visiting scientist from Wuhan University of Technology; Nobuyuki Tanaka, a visiting scientist from the Japan Atomic Energy Agency; MIT research scientist Takanari Ouchi; and postdocs Huayi Yin, Brice Chung, and Ji Zhao. The work was supported by the French oil company Total S.A. through the MIT Energy Initiative.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 2:17 pm on February 16, 2018 Permalink | Reply
    Tags: Physics, PRIMA   

    From MIT: “Integrated simulations answer 20-year-old question in fusion research” 

    MIT News


    February 16, 2018
    Leda Zimmerman

    To make fusion energy a reality, scientists must harness fusion plasma, a fiery gaseous maelstrom in which light atomic nuclei collide and fuse, releasing the heat used to generate electricity. But the turbulence of fusion plasma can confront researchers with unruly behaviors that confound attempts to make predictions and develop models. In experiments over the past two decades, an especially vexing problem has emerged: In response to deliberate cooling at its edges, fusion plasma inexplicably undergoes abrupt increases in central temperature.

    These counterintuitive temperature spikes, which fly against the physics of heat transport models, have not found an explanation — until now.

    A team led by Anne White, the Cecil and Ida Green Associate Professor in the Department of Nuclear Science and Engineering, and Pablo Rodriguez Fernandez, a graduate student in the department, has conducted studies that offer a new take on the complex physics of plasma heat transport and point toward more robust models of fusion plasma behavior. The results of their work appear this week in the journal Physical Review Letters. Rodriguez Fernandez is first author on the paper.

    In experiments using MIT’s Alcator C-Mod tokamak (a toroidal-shaped device that deploys a magnetic field to contain the star-furnace heat of plasma), the White team focused on the problem of turbulence and its impact on heating and cooling.

    Alcator C-Mod tokamak at MIT, no longer in operation

    In tokamaks, heat transport is typically dominated by turbulent movement of plasma, driven by gradients in plasma pressure.
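    The gradient-driven character of this transport is often captured in reduced "critical gradient" models: turbulence switches on only once the normalized temperature gradient exceeds a threshold, and flux rises steeply above it. The sketch below is purely illustrative — the threshold and coefficients are invented numbers, not values from the paper.

    ```python
    import numpy as np

    def turbulent_heat_flux(R_over_LT, threshold=4.0, chi_stiff=2.0, chi_background=0.1):
        """Toy critical-gradient model of turbulent heat flux.

        Turbulence switches on only when the normalized temperature
        gradient R/L_T exceeds a critical threshold; above it, transport
        rises steeply ("stiff" transport).  All numbers are illustrative.
        """
        excess = np.maximum(R_over_LT - threshold, 0.0)
        return chi_background * R_over_LT + chi_stiff * excess

    print(turbulent_heat_flux(3.0))   # subcritical: only weak background transport
    print(turbulent_heat_flux(6.0))   # supercritical: stiff turbulent transport dominates
    ```

    The "stiffness" of the supercritical branch is what makes tokamak temperature profiles cling so closely to the critical gradient.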

    Hot and cold

    Scientists have a good grasp of turbulent transport of heat when the plasma is held at steady-state conditions. But when the plasma is intentionally perturbed, standard models of heat transport simply cannot capture plasma’s dynamic response.

    In one such case, the cold-pulse experiment, researchers perturb the plasma near its edge by injecting an impurity, which results in a rapid cooling of the edge.

    “Now, if I told you we cooled the edge of hot plasma, and I asked you what will happen at the center of the plasma, you would probably say that the center should cool down too,” says White. “But when scientists first did this experiment 20 years ago, they saw that edge cooling led to core heating in low-density plasmas, with the temperature in the core rising, and much faster than any standard transport model would predict.” Further mystifying researchers was the fact that at higher densities, the plasma core would cool down.

    Replicated many times, these cold-pulse experiments with their unlikely results defy what is called the standard local model for the turbulent transport of heat and particles in fusion devices. They also represent a major barrier to predictive modeling in high-performance fusion experiments such as ITER, the international nuclear fusion project, and MIT’s own proposed smaller-scale fusion reactor, ARC.

    MIT ARC Fusion Reactor

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    To achieve a new perspective on heat transport during cold-pulse experiments, White’s team developed a unique twist.

    “We knew that the plasma rotation, that is, how fast the plasma was spinning in the toroidal direction, would change during these cold-pulse experiments, which complicates the analysis quite a bit,” White notes. “This is because the coupling between momentum transport and heat transport in fusion plasmas is still not fully understood,” she explains. “We needed to unambiguously isolate one effect from the other.”

    As a first step, the team developed a new experiment that conclusively demonstrated how the cold-pulse phenomena associated with heat transport would occur irrespective of the plasma rotation state. With Rodriguez Fernandez as first author, White’s group reported this key result in the journal Nuclear Fusion in 2017.

    A new integrated simulation

    From there, a tour de force of modeling was needed to recreate the cold-pulse dynamics seen in the experiments. To tackle the problem, Rodriguez Fernandez built a new framework, called PRIMA, which allowed him to introduce cold-pulses in time-dependent simulations. Using special software that factored in the turbulence, radiation and heat transport physics inside a tokamak, PRIMA could model cold-pulse phenomena consistent with experimental measurements.

    “I spent a long time simulating the propagation of cold pulses by only using an increase in radiated power, which is the most intuitive effect of a cold-pulse injection,” Rodriguez Fernandez says.

    Because experimental data showed that the electron density increased with every cold pulse injection, Rodriguez Fernandez implemented an analogous effect in his simulations. He observed a very good match in amplitude and time-scales of the core temperature behavior. “That was an ‘aha!’ moment,” he recalls.

    Using PRIMA, Rodriguez Fernandez discovered that a competition between types of turbulent modes in the plasma could explain the cold-pulse experiments. These different modes, explains White, compete to become the dominant cause of the heat transport. “Whichever one wins will determine the temperature profile response, and determine whether the center heats up or cools down after the edge cooling,” she says.
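    The logic of that competition can be caricatured in a few lines. The toy below is not the PRIMA model — the growth-rate scalings, the crossover behavior, and the mapping from dominant mode to response sign are all hypothetical — but it illustrates how a density knob can flip which mode wins and hence the sign of the core response.

    ```python
    def dominant_mode(density, temp_gradient):
        # Hypothetical growth rates: mode A weakens with density (e.g. via
        # collisions) while mode B strengthens with it.  Purely illustrative.
        gamma_a = temp_gradient / (1.0 + density)
        gamma_b = 0.5 * density
        return "A" if gamma_a > gamma_b else "B"

    def core_response(density, temp_gradient=2.0):
        # In this toy, mode-A-dominated plasmas heat at the core after an
        # edge cold pulse; mode-B-dominated plasmas cool, mirroring the
        # low- vs. high-density trend seen in the experiments.
        return +1 if dominant_mode(density, temp_gradient) == "A" else -1

    print(core_response(0.5))   # low density: mode A wins, core heats (+1)
    print(core_response(3.0))   # high density: mode B wins, core cools (-1)
    ```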

    By determining the factors behind the center-heating phenomenon (the so-called nonlocal response) in cold-pulse experiments, White’s team has removed a central concern about limitations in the standard, predictive (local) model of plasma behavior. This means, says White, that “we are more confident that the local model can be used to predict plasma behavior in future high performance fusion plasma experiments — and eventually, in reactors.”

    “This work is of great significance for validating fundamental assumptions underpinning the standard model of core tokamak turbulence,” says Jonathan Citrin, Integrated Modelling and Transport Group leader at the Dutch Institute for Fundamental Energy Research (DIFFER), who was not involved in the research. “The work also validated the use of reduced models, which can be run without the need for supercomputers, making it possible to predict plasma evolution over longer timescales than full-physics simulations allow,” says Citrin. “This was key to deciphering the challenging experimental observations discussed in the paper.”

    The work isn’t over for the team. As part of a separate collaboration between MIT and General Atomics, Plasma Science and Fusion Center scientists are installing a new laser ablation system to facilitate cold-pulse experiments at the DIII-D tokamak in San Diego, California, with first data expected soon. Rodriguez Fernandez has used the integrated simulation tool PRIMA to predict the cold-pulse behavior at DIII-D, and he will perform an experimental test of the predictions later this year to complete his PhD research.

    The research team included Brian Grierson and Xingqiu Yuan, research scientists at Princeton Plasma Physics Laboratory; Gary Staebler, research scientist at General Atomics; Martin Greenwald, Nathan Howard, Amanda Hubbard, Jerry Hughes, Jim Irby and John Rice, research scientists from the MIT Plasma Science and Fusion Center; and MIT grad students Norman Cao, Alex Creely, and Francesco Sciortino. The work was supported by the US DOE Fusion Energy Sciences.

    See the full article here.

