Tagged: Physics

  • richardmitnick 1:40 pm on September 1, 2015
    Tags: Physics

    From Carnegie: “A distant planet’s interior chemistry may differ from our own” 

    Carnegie Institution for Science

    The crystal structure of magnesium peroxide, MgO2, courtesy of Sergey Lobanov, created using K. Momma’s program for drawing crystal structures.

    As astronomers continue finding new rocky planets around distant stars, high-pressure physicists are considering what the interiors of those planets might be like and how their chemistry could differ from that found on Earth. New work from a team including three Carnegie scientists demonstrates that magnesium compounds unlike those found inside Earth could be abundant inside other rocky planets. Their work is published in Scientific Reports.

    Oxygen and magnesium are the two most-abundant elements in Earth’s mantle. However, when scientists are predicting the chemical compositions of rocky, terrestrial planets outside of our own Solar System, they shouldn’t assume that other rocky planets would have Earth-like mantle mineralogy, according to a research team including Carnegie’s Sergey Lobanov, Nicholas Holtgrewe, and Alexander Goncharov.

    Stars that have rocky planets are known to vary in chemical composition. This means that the mineralogies of these rocky planets are probably different from each other and from our own Earth, as well. For example, elevated oxygen contents have been observed in stars that host rocky planets. As such, oxygen may be more abundant in the interiors of other rocky planets, because the chemical makeup of a star would affect the chemical makeups of the planets that formed around it. If a planet is more oxidized than Earth, then this could affect the composition of the compounds found in its interior, too, including the magnesium compounds that are the subject of this study.

    Magnesium oxide, MgO, is known to be remarkably stable, even under very high pressures, and it isn’t reactive under the conditions found in Earth’s lower mantle. Magnesium peroxide, MgO2, by contrast, can be formed in the laboratory under high oxygen concentrations, but it is highly unstable when heated, as it would be in a planetary interior.

    Previous theoretical calculations had indicated that magnesium peroxide would become stable under high-pressure conditions. Taking that idea one step further, the team set out to test whether stable magnesium peroxide could be synthesized under extreme conditions mimicking planetary interiors.

    Using a laser-heated, diamond-anvil cell, they brought very small samples of magnesium oxide and oxygen to different pressures meant to mimic planetary interiors, from ambient pressure to 1.6 million times normal atmospheric pressure (0-160 gigapascals), and heated them to temperatures above 3,140 degrees Fahrenheit (2,000 Kelvin). They found that under about 950,000 times normal atmospheric pressure (96 gigapascals) and at temperatures of 3,410 degrees Fahrenheit (2,150 Kelvin), magnesium oxide reacted with oxygen to form magnesium peroxide.
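    The unit conversions quoted above are easy to check. The sketch below (helper names are my own, not from the paper) converts gigapascals to multiples of standard atmospheric pressure and kelvin to degrees Fahrenheit:

```python
# Quick check of the unit conversions quoted in the text:
# GPa -> multiples of 1 standard atmosphere, and kelvin -> Fahrenheit.

ATM_PA = 101_325  # one standard atmosphere, in pascals

def gpa_to_atm(gpa):
    """Convert a pressure in gigapascals to multiples of 1 atm."""
    return gpa * 1e9 / ATM_PA

def kelvin_to_fahrenheit(k):
    """Convert an absolute temperature in kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# 160 GPa is roughly 1.6 million atmospheres; 96 GPa roughly 950,000.
print(round(gpa_to_atm(160)))  # ~1.58 million
print(round(gpa_to_atm(96)))   # ~947,000

# 2,000 K and 2,150 K match the Fahrenheit figures in the article.
print(round(kelvin_to_fahrenheit(2000)))  # 3140
print(round(kelvin_to_fahrenheit(2150)))  # 3410
```

    The figures in the article round these values to two significant digits.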

    “Our findings suggest that magnesium peroxide may be abundant in extremely oxidized mantles and cores of rocky planets outside our Solar System,” said Lobanov, the paper’s lead author. “When we develop theories about distant planets, it’s important that we don’t assume their chemistry and mineralogy is Earth-like.”

    “These findings provide yet another example of the ways that high-pressure laboratory experiments can teach us about not only our own planet, but potentially about distant ones as well,” added Goncharov.

    Because of its chemical inertness, MgO has also long been used as a conductor that transmits heat and pressure to an experimental sample. “But this new information about its chemical reactivity under high pressure means that such experimental uses of MgO need to be revised, because a material that is very stable at ambient conditions could be creating unwanted reactions at high pressures,” Goncharov added.

    The other co-authors are Qiang Zhu and Artem Oganov of Stony Brook University and Clemens Prescher and Vitali Prakapenka of the University of Chicago.

    This study was funded by the Deep Carbon Observatory, the National Science Foundation, DARPA, the Government of the Russian Federation, and the Foreign Talents Introduction and Academic Exchange Program. Calculations were performed on XSEDE facilities and on the cluster of the Center for Functional Nanomaterials at Brookhaven National Laboratory, which is supported by the DOE-BES.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Carnegie Institution of Washington Bldg

    Andrew Carnegie established a unique organization dedicated to scientific discovery “to encourage, in the broadest and most liberal manner, investigation, research, and discovery and the application of knowledge to the improvement of mankind…” The philosophy was and is to devote the institution’s resources to “exceptional” individuals so that they can explore the most intriguing scientific questions in an atmosphere of complete freedom. Carnegie and his trustees realized that flexibility and freedom were essential to the institution’s success. That tradition remains the foundation of the institution today as it supports research in the Earth, space, and life sciences.

     
  • richardmitnick 10:59 am on August 26, 2015
    Tags: Information Paradox, Physics

    From Discovery: “Has Stephen Hawking Just Solved a Huge Black-Hole Mystery?” 

    Discovery News

    Aug 26, 2015
    Mike Wall

    Stephen Hawking arrives at the stage in Beckman Auditorium, Caltech.

    Stephen Hawking may have just solved one of the most vexing mysteries in physics — the “information paradox.”

    [Albert] Einstein’s theory of general relativity predicts that the physical information about material gobbled up by a black hole is destroyed, but the laws of quantum mechanics stipulate that information is eternal. Therein lies the paradox.

    Hawking — working with Malcolm Perry of the University of Cambridge in England and Harvard University’s Andrew Strominger — has come up with a possible solution: The quantum-mechanical information about infalling particles doesn’t actually make it inside the black hole.

    “I propose that the information is stored not in the interior of the black hole, as one might expect, but on its boundary, the event horizon,” Stephen Hawking said during a talk Tuesday at the Hawking Radiation conference, which is being held at the KTH Royal Institute of Technology in Stockholm, Sweden.

    The information is stored at the boundary as two-dimensional holograms known as “supertranslations,” he explained. But you wouldn’t want supertranslations, which were first introduced as a concept in 1962, to back up your hard drive.

    “The information about ingoing particles is returned, but in a chaotic and useless form,” Hawking said. “For all practical purposes, the information is lost.”

    Hawking also discussed black holes — whose gravitational pull is so intense that nothing, not even light, can escape once it passes the event horizon — during a lecture Monday night in Stockholm.

    It’s possible that black holes could actually be portals to other universes, he said.

    “The hole would need to be large, and if it was rotating, it might have a passage to another universe. But you couldn’t come back to our universe,” Hawking said at the lecture, according to a KTH Royal Institute of Technology statement. “So, although I’m keen on spaceflight, I’m not going to try that.”

    See the full article here.


     
  • richardmitnick 5:02 pm on August 20, 2015
    Tags: Physics

    From Berkeley: “Experiment attempts to snare a dark energy ‘chameleon’” 

    UC Berkeley

    August 20, 2015
    Robert Sanders

    The vacuum chamber of the atom interferometer contains a one-inch diameter aluminum sphere. If chameleons exist, cesium atoms would fall toward the sphere with a slightly greater acceleration than their gravitational attraction would predict. (Holger Müller photo)

    If dark energy is hiding in our midst in the form of hypothetical particles called “chameleons,” Holger Müller and his team at UC Berkeley plan to flush them out.

    The results of an experiment reported in this week’s issue of Science narrow the search for chameleons a thousandfold compared with previous tests, and Müller, an assistant professor of physics, hopes that his next experiment will either expose chameleons or similar ultralight particles as the real dark energy, or prove they were a will-o’-the-wisp after all.

    Dark energy was first discovered in 1998 when scientists observed that the universe was expanding at an ever increasing rate, apparently pushed apart by an unseen pressure permeating all of space and making up about 68 percent of the energy in the cosmos. Several UC Berkeley scientists were members of the two teams that made that Nobel Prize-winning discovery, and physicist Saul Perlmutter shared the prize.

    Since then, theorists have proposed numerous theories to explain the still mysterious energy. It could be simply woven into the fabric of the universe, a cosmological constant [Λ] that Albert Einstein proposed in the equations of general relativity and then disavowed. Or it could be quintessence, represented by any number of hypothetical particles, including offspring of the Higgs boson.

    In 2004, theorist and co-author Justin Khoury of the University of Pennsylvania proposed one possible reason why dark energy particles haven’t been detected: they’re hiding from us.

    If chameleons exist, they would have a very small effect on the gravitational attraction between cesium atoms and an aluminum sphere.

    Specifically, Khoury proposed that dark energy particles, which he dubbed chameleons, vary in mass depending on the density of surrounding matter.

    In the emptiness of space, chameleons would have a small mass and exert force over long distances, able to push space apart. In a laboratory, however, with matter all around, they would have a large mass and extremely small reach. In physics, a low mass implies a long-range force, while a high mass implies a short-range force.
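    The mass-range rule invoked here is the standard one for a force carried by a massive particle: the range is roughly the reduced Compton wavelength, ħ/(mc). The sketch below illustrates the scaling; the sample masses are my own illustrations, not values from the chameleon paper.

```python
# Range of a force mediated by a particle of mass m: roughly hbar/(m*c),
# the reduced Compton wavelength. Expressed here via hbar*c in eV*m so the
# mass can be given directly in eV/c^2.

HBAR_C_EV_M = 197.3269804e-9  # hbar*c in eV*m (197.33 MeV*fm)

def force_range_m(mass_ev):
    """Approximate force range in meters for a mediator of mass mass_ev (eV/c^2)."""
    return HBAR_C_EV_M / mass_ev

# A very light mediator reaches over macroscopic distances...
print(force_range_m(1e-3))  # ~2e-4 m for a milli-eV mass
# ...while a heavy one is confined to subatomic scales.
print(force_range_m(1e9))   # ~2e-16 m for a GeV-scale mass
```

    A chameleon that becomes heavy inside dense matter therefore mediates only a vanishingly short-range force there, which is exactly the screening the article describes.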

    This would be one way to explain why the energy that dominates the universe is hard to detect in a lab.

    “The chameleon field is light in empty space but as soon as it enters an object it becomes very heavy and so couples only to the outermost layer of a big object, and not to the internal parts,” said Müller, who is also a faculty scientist at Lawrence Berkeley National Laboratory. “It would pull only on the outermost nanometer.”

    Lifting the camouflage

    When UC Berkeley post-doctoral fellow Paul Hamilton read an article by theorist Clare Burrage last August outlining a way to detect such a particle, he suspected that the atom interferometer he and Müller had built at UC Berkeley would be able to detect chameleons if they existed. Müller and his team have built some of the most sensitive detectors of forces anywhere, using them to search for slight gravitational anomalies that would indicate a problem with Einstein’s General Theory of Relativity. While the most sensitive of these are physically too large to sense the short-range chameleon force, the team immediately realized that one of their less sensitive atom interferometers would be ideal.

    The dark energy group: Holger Müller, Philipp Haslinger, Justin Khoury (on computer monitor), Matt Jaffe, Paul Hamilton. (Enar de Dios Rodriguez photo)

    Burrage suggested measuring the attraction caused by the chameleon field between an atom and a larger mass, instead of the attraction between two large masses, which would suppress the chameleon field to the point of being undetectable.

    That’s what Hamilton, Müller and his team did. They dropped cesium atoms above an inch-diameter aluminum sphere and used sensitive lasers to measure the forces on the atoms as they were in free fall for about 10 to 20 milliseconds. They detected no force other than Earth’s gravity, which rules out chameleon-induced forces down to a million times weaker than gravity. This eliminates a large range of possible energies for the particle.
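    Simple kinematics gives a feel for the scale of the drop described above (this is my back-of-envelope check, not the actual interferometry analysis): in 10 to 20 milliseconds an atom falls well under a centimeter, and a signal a million times weaker than gravity corresponds to an acceleration of order 10 microns per second squared.

```python
# Back-of-envelope scale of the cesium-drop experiment.

G = 9.81  # standard gravitational acceleration, m/s^2

def fall_distance(t):
    """Distance in meters fallen from rest after t seconds under gravity alone."""
    return 0.5 * G * t**2

print(fall_distance(0.010))  # ~0.5 mm fallen in 10 ms
print(fall_distance(0.020))  # ~2 mm fallen in 20 ms
print(G / 1e6)               # ~1e-5 m/s^2: a force a million times weaker than gravity
```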

    What about symmetrons?

    Experiments at CERN in Geneva and the Fermi National Accelerator Laboratory in Illinois, as well as other tests using neutron interferometers, also are searching for evidence of chameleons, so far without luck. Müller and his team are currently improving their experiment to rule out all other possible particle energies or, in the best-case scenario, discover evidence that chameleons really do exist.

    “Holger has ruled out chameleons that interact with normal matter more strongly than gravity, but he is now pushing his experiment into areas where chameleons interact on the same scale as gravity, where they are more likely to exist,” Khoury said.

    Their experiments may also help narrow the search for other hypothetical screened dark energy fields, such as symmetrons and forms of modified gravity, such as so-called f(R) gravity.

    “In the worst case, we will learn more of what dark energy is not. Hopefully, that gives us a better idea of what it might be,” Müller said. “One day, someone will be lucky and find it.”

    The work was funded by the David and Lucile Packard Foundation, the National Science Foundation and the National Aeronautics and Space Administration. Co-authors with Müller, Hamilton and Khoury are UC Berkeley physics graduate students Matt Jaffe and Quinn Simmons and post-doctoral fellow Philipp Haslinger.

    RELATED INFORMATION

    Atom-interferometry constraints on dark energy (preprint)
    Müller’s matter wave research group

    See the full article here.


    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.

    UC Berkeley Seal

     
  • richardmitnick 3:53 pm on August 20, 2015
    Tags: James Bullock, Physics

    From Quanta: “The Case for Complex Dark Matter” 

    Quanta Magazine

    August 20, 2015
    Liz Kruesi

    The physicist James Bullock explains how a complicated “dark sector” of interacting particles may illuminate some puzzling observations of the centers of galaxies.

    James Bullock, a physicist at the University of California, Irvine, imagines what the universe would look like if dark matter interacted with itself.

    Dark matter — the unseen 80 percent of the universe’s mass — doesn’t emit, absorb or reflect light. Astronomers know it exists only because it interacts with our slice of the ordinary universe through gravity. Hence the hunt for this missing mass has focused on so-called WIMPs — Weakly Interacting Massive Particles — which interact with each other as infrequently as they interact with normal matter.

    Physicists have reasons to look for alternatives to WIMPs. For two decades, astronomers have found less dark matter at the centers of galaxies than WIMP models predict. The discrepancy is even worse at the cores of the universe’s tiny dwarf galaxies, which have few ordinary stars but lots of dark matter.

    About four years ago, James Bullock, a professor of physics and astronomy at the University of California, Irvine, began to wonder whether the standard view of dark matter was failing important empirical tests. “This was the point where I really started thinking hard about alternatives,” he said.

    Bullock thinks that dark matter might instead be complex, interacting with itself strongly in the way that ordinary matter interacts with itself to form intricate structures like atoms and chemical elements. Such self-interacting dark matter, Bullock suspects, could exist in a “dark sector,” somewhat parallel to our own light sector, but detectable only through the way it affects gravity.

    He and his colleagues have created numerical simulations that predict what the universe would look like if dark matter feels strong interactions. They expected to see the model fail. Instead, they found that it was consistent with what astronomers observe.

    Quanta Magazine spoke with Bullock about complex dark matter, how this mysterious mass might behave, and the best places in the universe to find it. An edited and condensed version of the interview follows.

    QUANTA MAGAZINE: What do we know about dark matter?

    JAMES BULLOCK: We are confident that it’s there, that it has mass, and that it tugs on itself and on other things via gravity. That’s about it. While dark matter has a gravitational tug, it doesn’t interact with normal matter — the stuff that makes up you and me — in a very intense way. It doesn’t shine. It’s invisible. It’s transparent. It doesn’t glow when it gets hot. Unfortunately, those are the ways astronomers usually study the universe; we usually follow the light.

    So we don’t know what it’s made of?

    We’ve come to understand that we can describe the world that we experience by the Standard Model of particle physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    We think of the particles that make up you and me as being broken down into constituent things, like quarks, and those quarks combine into neutrons and protons. There is a complicated dance that allows these particles to interact in certain ways. It gives rise to the periodic table of elements and all of the vast complexity we see around us. Just 20 percent of the mass of the universe is all of this complexity.

    On the other hand, dark matter makes up something like 80 percent of the mass. First-guess models suggest that it is one particle that doesn’t really interact with much of anything — WIMPs. These are collisionless, meaning that when two dark matter particles come at each other, they basically pass through each other.

    Another possibility is this 80 percent of the universe is also complex. Maybe there’s something interesting going on in what’s called the dark sector. We know that whatever ties us to the dark matter is pretty weak or else we would have already seen it. This observation has led to the belief that all the interactions that could be going on with dark matter are weak. But there’s another possibility: When dark matter particles see themselves, there are complex and potentially very strong interactions. There even could be dark atoms and dark photons.

    Those two worlds — this dark sector and our own sector — only communicate by gravity and perhaps other weak processes, which haven’t yet been seen.

    How can you probe this dark sector if you can’t interact with it?

    Now what we’re talking about doing is not just looking at the gross properties of the dark matter but the very makeup of the dark matter, too. The most obvious place to see those effects is where dark matter is bunched up. We believe the centers of galaxies and galaxy clusters are densest. And so by studying the behavior of dark matter by indirect methods — basically by the dynamics of stars and gas and galaxies in galaxy clusters — we can start to understand how dark matter is distributed in space. To start to discriminate between models, we can compare differences in dark matter’s spatial clumpiness in simulations, for example, and then look for those differences in data.

    What does the data say?

    In models using cold, collisionless dark matter — WIMPs — the dark matter is very dense at the middle of galaxies. It appears that those predicted densities are much higher than what’s observed.

    What might be going on is that something a little more complex is happening in the dark sector, and that complexity is causing these slight disagreements between theory and observation at places where the dark matter is really clumped or starts congregating, like in the centers of galaxies or the centers of galaxy clusters.

    I’m interested in running cosmological simulations of how the universe should evolve from the very beginning until now. I look at what happens, when I run those simulations forward, if I allow cold dark matter to occasionally collide and exchange energy. The simulations start with a small, almost-smooth primordial universe and end with beautiful agreement with large-scale structure — galaxies stretched out across the universe in the way we observe them. But the hearts of galaxies are less dense in dark matter in my simulations than they are in simulations where the dark matter is cold and collisionless.

    How long have researchers known about these disagreements between the models and the data?

    We’ve known that there’s a bit of a problem at the centers of galaxies for about 20 years. At first it was thought maybe we’re interpreting the data wrong. And now the question comes down to: Does galaxy formation eject dark matter somehow, or do we need to modify our understanding of dark matter?

    Why did you start looking into self-interacting dark matter?

    The first paper exploring ideas that the dark matter might be more complex was in Physical Review Letters, April 2000, by David Spergel and Paul Steinhardt. I actually started working on this several years later when I began seeing papers from the particle physics community exploring these ideas. My initial reaction was, that couldn’t be true, because I had this prejudice that things work so well with collisionless dark matter.

    In the first set of simulations we ran, we gave dark matter a cross-section with itself. The bigger the cross-section is, the higher the probability that these particles are going to run into one another in any given amount of time. We set the value of the cross-section to something we were convinced would be ruled out [by the data], but when we ran our simulation we found that we couldn’t see any difference between that model and the classic one. And so we thought maybe we don’t know quite as much as we thought we knew.
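    The cross-section language above can be made concrete with the standard scattering-rate estimate: a particle moving at speed v through a medium of number density n, with cross-section sigma, collides at a rate n·sigma·v. The sketch below uses arbitrary illustrative numbers, not values from Bullock’s simulations.

```python
# Standard scattering-rate estimate: rate = n * sigma * v, so the expected
# number of collisions over a time t scales linearly with the cross-section,
# which is exactly the dial the simulators turn.

def collision_rate(n, sigma, v):
    """Collisions per unit time for number density n, cross-section sigma, speed v."""
    return n * sigma * v

def expected_collisions(n, sigma, v, t):
    """Mean number of collisions over a time t."""
    return collision_rate(n, sigma, v) * t

# Doubling the cross-section doubles the collision probability per unit time.
r1 = collision_rate(n=1.0, sigma=2.0, v=3.0)
r2 = collision_rate(n=1.0, sigma=4.0, v=3.0)
print(r2 / r1)  # 2.0
```

    This linear scaling is why raising the cross-section in a simulation directly raises how often dark matter particles exchange energy in dense regions.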

    Then, we dialed it up and looked at a strong interaction similar to if you threw two neutrons together. We saw something that looks really close to observations on large scales but does produce differences in the hearts of galaxies. Rather than the dark matter getting denser and denser as you approach the center of the galaxy, it reached a threshold density.

    Could it be that these little discrepancies we’ve been seeing in the observational data are actually a clue that there’s something interesting and fun going on in the dark sector that we weren’t thinking about before?

    How have these simulations evolved since the first ones you performed?

    We’ve been running very high cross-section values to see when this model starts to break compared to some observations. We’re also focusing energy on including all of the star-formation and galaxy-formation physics in these simulations. The hardest part with these simulations is that the universe isn’t just made of dark matter. There’s all of this other annoying normal stuff that we have to think about, too: gas that can turn into stars, some of them so massive that they blow up as supernovae. When they blow up as supernovae, they effectively jostle the gravitational field around them, and this jostling can potentially move the dark matter around. Could the discrepancies we’re seeing between the observed densities of dark matter and the predicted densities arise because the galaxy-formation process itself is changing things in a way that we don’t understand very well?

    Something else that I spend my time on is figuring out the cleanest and clearest cases for determining what comes from the physics of dark matter versus the physics of star formation and galaxy formation. We have to think hard about how clean our cosmological experiments are.

    Where is that cleanest cosmological laboratory?

    My opinion is that the cleanest sites are the teeniest, tiniest galaxies we know about — dwarf galaxies. They have very few stars but huge amounts of dark matter. In some cases they have 100 times as much dark matter within their visible extent as they have visible matter. (The Milky Way interior to the Sun is about half dark matter and half normal matter.) Dwarf galaxies have so much dark matter compared to their stars, they’re excellent laboratories for dark matter. They’re as clean as we have.

    But studying dark matter physics in something that doesn’t give off much light is pretty difficult.

    The nice thing about these objects is that a lot of them are really close by. They’re close enough that you can actually measure the velocities of individual stars. That allows you to build as precise a model as you can of the dark matter density at the centers of these galaxies. They’re close enough to study with great precision, but they’re chock full of dark matter so you don’t have to worry as much about what’s going on with the stars.

    There have been recent observational studies focusing on galaxy clusters. Are observations and theoretical models starting to move in a similar direction?

    Imagine a swarm of bees; a cluster of galaxies is sort of like that. Massive collisions, where two galaxy clusters have come at each other and pass through each other, are one place to look for complex dark matter. If the dark matter is strongly interacting, when those massive clusters come together, the galaxies will keep flying right on through, but the dark matter, because it’s strongly interacting with itself, will sort of bunch up in the middle.

    The Bullet Cluster shows the aftermath of a cosmic collision between two galaxy clusters. In this false-color image, the hot gas (pink) slowed down in the collision due to a drag force, while the dark matter (blue) appeared to keep passing through, as one would expect if dark matter is collisionless.

    In the famous example of the Bullet Cluster, astronomers used the effect of gravitational lensing to look at where the dark matter was. They found that the dark matter has moved right on through along with the galaxies, which is what you’d expect with collisionless dark matter. Because of this result, people said, “Well, there’s no way the dark matter is strongly interacting with itself.”

    That was a few years ago and a couple things have happened since then. We’ve realized that a lot of the first-order estimates people have used to determine how much the dark matter ought to drag on itself were overestimated. Also, several other clusters have less-clear results, and in some cases maybe there is more drag than we thought before. Richard Massey’s group found evidence that some kind of dark pressure, ram pressure, is ripping the dark matter out of a galaxy.

    We really aren’t at the point yet where I think we’ve done enough, though. We need to invest more effort into simulating the calculations properly with these various classes of dark matter to figure out what it is we know and what it is we don’t know. I think we’ve seen exciting hints, and they motivate us to try to do as well as we can to figure out what they mean.

    See the full article here.

    [James Bullock was featured in the NatGeo TV special Inside the Milky Way, which I have presented many times. But, hey, it might be the single greatest science video ever made, so, what is wrong with one more time? If you have not seen it, you are in for a treat. As I say often when presenting a video, watch, enjoy, learn.]


    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:27 am on August 11, 2015
    Tags: Physics

    From Nautilus: “Is It Time to Embrace Unverified Theories?” 

    Nautilus

    Aug 11, 2015
    Shannon Hall

    Juergen Faelchle

    In the world of modern physics, there is change afoot. Researchers are striving so hard to leap beyond the mostly settled science of the Standard Model that they’re daring to break from one of science’s crucial traditions.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    In pursuit of a definitive, unifying description of reality, some scientists are arguing that scientific theories may not require experimental proof to be accepted as truth.

    The philosophical tradition of defining scientific knowledge as inherently empirical, founded on observation, goes back centuries. And in the 20th century, Karl Popper, one of the few philosophers regarded as a hero among scientists, moved the paradigm one step further. He argued that a theory must be falsifiable to be scientific. So a scientist not only has to be able to support a theory with evidence; she also has to be able to show that there could be evidence that would prove it wrong.

    Any movement that deviates from this tradition alarms most scientists. And now that alarm is spilling into the pages of prominent journals.

    “[Last year], debates in physics circles took a worrying turn,” physicists George Ellis and Joseph Silk wrote in an essay in Nature. “Faced with difficulties in applying fundamental theories to the observed universe, some researchers called for a change in how theoretical physics is done. They began to argue—explicitly—that if a theory is sufficiently elegant and explanatory, it need not be tested experimentally.”

    In other words, some scientists are calling for new rules of the game. They’re asking the community to put as much faith in math as they historically have in evidence.

    The challenge arises from two ideas prominent in modern theoretical physics. The first is string theory, in which small, vibrating strings replace the point-like particles, such as electrons and quarks, that most physicists accept as accurate representations of the subatomic universe. The second is the so-called multiverse, which postulates that the Big Bang created not just one universe, but an infinite array of universes. Both ideas are beautiful; neither can, as far as we know, be tested.

    “If you look at the history of physics, we’ve always been able to find these more powerful, deep, and unified theories,” says Peter Woit, a mathematician at Columbia University, who is well-known for attacking string theory’s lack of evidence. Sometimes those deeper theories have come to fruition well before experiments were available that could test them. Take Albert Einstein’s theory of general relativity, for example. It was developed in 1915 based on his remarkable physical intuition, but it wasn’t stringently tested until the 1960s.

    Although Woit thinks a single unified theory exists, he remains doubtful that we’ll find one anytime soon. Few realize that in the latter decades of Einstein’s life, he pursued a different theory that simply didn’t pan out: a unified field theory to connect gravity and electromagnetism. “He spent the last 20 years of his life working down what most people have agreed is almost certainly a dead end,” says Woit. “That’s the danger of not having experiments.”

    So there’s a delicate dialogue between experimental data and physical theories. Theoretical physicists need experimental results to bring them back on track. “But if that doesn’t happen then I think it’s much more difficult to get people to admit that some things are not working and try to find more fruitful things to do,” says Woit.

    And the time gap between experimental data and physical theories is stretching out even further.

    What if that gap becomes infinite? Many physicists fear that string theory and the multiverse might in practice never be observable. Evidence for the six extra dimensions needed to make string theory work requires that physicists reach vastly higher experimental energies, far beyond the reach of the Large Hadron Collider.


    Physicists need an accelerator built to astronomical proportions.

    And the other universes that make up the multiverse seem to lie permanently beyond the observable horizon by definition. They simply can’t be seen.

    “There are two ways a theory can fail. It can fail by predicting something wrong, and it can fail by turning into being an empty idea that predicts nothing,” says Woit. He thinks that both string theory and the multiverse fail in that they predict nothing observable.

    As Ellis and Silk wrote in Nature, “the issue boils down to clarifying one question: What potential observational or experimental evidence is there that would persuade you that the theory is wrong and lead you to abandoning it? If there is none, it is not a scientific theory.”

    Have we reached the point in physics where it’s time to abandon string theory and the multiverse? David Albert, a philosopher at Columbia University, argues no. “It’s true there are worries about string theory that they’ll never make any contact with experience,” says Albert. But it’s a little too early to be defeated, or assume that “string theory will never produce anything empirically accessible to us.”

    Albert draws on the history of science, arguing that it is “full of situations where people thought that something was going to be difficult to get at with technologies envisioned, and then a certain amount of cleverness finds a back door.” Albert is careful to say, however, that he isn’t necessarily optimistic. He acknowledges the kinds of worries felt throughout the entire physics community but remains hopeful that physicists will find a new way, as they’ve always done.

    “A lot of people claim that you can never empirically test a claim like the multiverse because by definition you can only see what’s in our universe,” says Albert. “But I think that’s much too quick.” He argues that certain fundamental laws, which we can empirically prove in our own universe, might mathematically predict the existence of other universes. These laws would therefore be indirect but compelling evidence of the existence of other universes.

    So perhaps it isn’t accurate to conclusively say that we can’t empirically test these theories. “I think the right thing to say is that we don’t know yet how to test them,” says Albert. “People shouldn’t be so scared of that.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 9:30 am on August 11, 2015 Permalink | Reply
    Tags: , , , Physics   

    From LLNL: “Researchers reveal new electron ring formations” 


    Lawrence Livermore National Laboratory

    Aug. 11, 2015
    Breanna Bishop
    bishop33@llnl.gov
    925-423-9802

    Using the ultra-short-pulse Callisto laser system at LLNL’s Jupiter Laser Facility, a team of scientists from LLNL and UCLA revealed new, never-before-seen electron ring formations. Photo by Julie Russell/LLNL

    This image from the 3D simulation shows the laser pulse propagating to the right through the low-density plasma. The black region behind the laser contains background ions, which are responsible for accelerating electrons in this region to high energy. The white contours represent regions of high background electron density; the roughly triangular region they form between the two black regions is called the “pocket,” and it is able to guide electrons through the plasma and allow them to leave it with a ring-like structure.

    Laser wakefield acceleration, a process where electron acceleration is driven by high-powered lasers, is well-known for being able to produce high-energy beams of electrons in tabletop-scale distances. However, in recent experiments, a team of scientists from Lawrence Livermore National Laboratory (LLNL) and the University of California, Los Angeles (UCLA) revealed new, never-before-seen electron ring formations in addition to the typically observed beams.

    In a paper recently published in Physical Review Letters, the team described electron acceleration experiments performed at LLNL’s Jupiter Laser Facility. Using the ultra-short-pulse Callisto laser system, the researchers produced a plasma in a low-density gas cell target. The interaction of the high-intensity laser with the gas created a relativistic plasma wave, which then accelerated some of the electrons in the plasma to energies of more than 100 megaelectron volts (MeV).

    These electron beams are usually directed along the laser axis and have fairly low divergence. In these experiments, the typical beams were observed, but in certain cases were also accompanied by a second, off-axis beam that had a ring-like shape. This new feature had never before been reported, and its origin was unclear until the UCLA collaborators finished computationally intensive three-dimensional calculations of the experimental conditions.

    “The dynamics of the plasma wave are often calculated in simulations, but the small spatial scale and fast timescale of the wakefield process has made direct measurements of many effects difficult or impractical,” said lead author Brad Pollock. “The discovery of new features, such as the electron rings here, allows us to compare with simulations and infer what is going on in the experiments with much greater confidence.”

    In the simulations, a ring-like electron structure was produced during the wakefield acceleration process if the plasma was sufficiently long and the total number of electrons was large enough to perturb the plasma wave structure. Under these conditions, the plasma wave structure was modified in such a way as to force some electrons off of the laser axis and into a “pocket” outside of the plasma wave, which then guided some of these electrons through the remainder of the plasma.

    “In addition to the diagnostic implications of this particular feature, it may also be possible to tailor the parameters of electron ring-beams for their own applications, including accelerating positively charged particles – positrons, for example,” Pollock added.

    LLNL co-authors include Felicie Albert, Arthur Pak and Joseph Ralph, and UCLA co-authors include Frank Tsung, Jessica Shaw, Chris Clayton, Asher Davidson, Nuno Lemos, Ken Marsh, Warren Mori and Chan Joshi.

    See the full article here.


    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security
    Administration

     
  • richardmitnick 9:18 am on August 11, 2015 Permalink | Reply
    Tags: , , Physics   

    From MIT: “A new look at superfluidity” 


    MIT News

    August 10, 2015
    Jennifer Chu

    The Ketterle Group is working with lasers to create superfluids at MIT. Pictured, from left to right: graduate student Colin Kennedy, Professor Wolfgang Ketterle, and graduate students William Cody Burton and Woo Chang Chung. Photo: Bryce Vickmark

    MIT team creates a superfluid in a record-high magnetic field.

    MIT physicists have created a superfluid gas, the so-called Bose-Einstein condensate, for the first time in an extremely high magnetic field. The magnetic field is a synthetic magnetic field, generated using laser beams, and is 100 times stronger than that of the world’s strongest magnets. Within this magnetic field, the researchers could keep a gas superfluid for a tenth of a second — just long enough for the team to observe it. The researchers report their results this week in the journal Nature Physics.

    A superfluid is a phase of matter that only certain liquids or gases can assume, if they are cooled to extremely low temperatures. At temperatures approaching absolute zero, atoms cease their individual, energetic trajectories, and start to move collectively as one wave.

    Superfluids are thought to flow endlessly, without losing energy, similar to electrons in a superconductor. Observing the behavior of superfluids therefore may help scientists improve the quality of superconducting magnets and sensors, and develop energy-efficient methods for transporting electricity.

    But superfluids are temperamental, and can disappear in a flash if atoms cannot be kept cold or confined. The MIT team combined several techniques in generating ultracold temperatures, to create and maintain a superfluid gas long enough to observe it at ultrahigh synthetic magnetic fields.

    “Going to extremes is the way to make discoveries,” says team leader Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. “We use ultracold atoms to map out and understand the behavior of materials which have not yet been created. In this sense, we are ahead of nature.”

    Ketterle’s team members include graduate students Colin Kennedy, William Cody Burton, and Woo Chang Chung.

    A superfluid with loops

    The team first used a combination of laser cooling and evaporative cooling methods, originally co-developed by Ketterle, to cool atoms of rubidium to nanokelvin temperatures. Atoms of rubidium are known as bosons, for their even number of nucleons and electrons. When cooled to near absolute zero, bosons form what’s called a Bose-Einstein condensate — a superfluid state that was first co-discovered by Ketterle, and for which he was ultimately awarded the 2001 Nobel Prize in physics.

    After cooling the atoms, the researchers used a set of lasers to create a crystalline array of atoms, or optical lattice. The electric field of the laser beams creates what’s known as a periodic potential landscape, similar to an egg carton, which mimics the regular arrangement of particles in real crystalline materials.

    When charged particles are exposed to magnetic fields, their trajectories are bent into circular orbits, causing them to loop around and around. The higher the magnetic field, the tighter a particle’s orbit becomes. However, to confine electrons to the microscopic scale of a crystalline material, a magnetic field 100 times stronger than that of the strongest magnets in the world would be required.
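As a rough check on that claim, the classical cyclotron radius r = mv/(qB) can be inverted to estimate the field needed to bend an electron into an orbit the size of a lattice spacing. The speed and radius below are illustrative assumptions on our part, not values from the study:

```python
# Back-of-the-envelope estimate: the field B that gives cyclotron radius r,
# from r = m*v / (q*B)  =>  B = m*v / (q*r).
m_e = 9.109e-31   # electron mass, kg
q_e = 1.602e-19   # elementary charge, C
v = 1.0e6         # assumed electron speed, m/s (Fermi-velocity scale)
r = 0.5e-9        # assumed orbit radius, m (~one lattice spacing)

B = m_e * v / (q_e * r)
print(f"Field needed: ~{B:,.0f} T")
```

This lands around 10^4 tesla, versus roughly 10^2 tesla for the strongest pulsed laboratory magnets, consistent with the factor of 100 quoted above.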

    The group asked whether this could be done with ultracold atoms in an optical lattice. Since the ultracold atoms are not charged, as electrons are, but are instead neutral particles, their trajectories are normally unaffected by magnetic fields.

    Instead, the MIT group came up with a technique to generate a synthetic, ultrahigh magnetic field, using laser beams to push atoms around in tiny orbits, similar to the orbits of electrons under a real magnetic field. Ketterle and his colleagues, together with other researchers in Germany, demonstrated the technique in 2013; it uses a tilt of the optical lattice and two additional laser beams to control the motion of the atoms. On a flat lattice, atoms can easily move from site to site. In a tilted lattice, however, the atoms would have to work against gravity, and could move only with the help of the laser beams.

    “Now the laser beams could be used to make neutral atoms move around like electrons in a strong magnetic field,” added Kennedy.

    Using laser beams, the group could make the atoms orbit, or loop around, in a radius as small as two lattice squares, similar to how particles would move in an extremely high magnetic field.

    “Once we had the idea, we were really excited about it, because of its simplicity. All we had to do was take two suitable laser beams and carefully align them at specific angles, and then the atoms drastically change their behavior,” Kennedy says.

    “New perspectives to known physics”

    After developing the tilting technique to simulate a high magnetic field, the group worked for a year and a half to optimize the lasers and electronic controls to avoid any extraneous pushing of the atoms, which could make them lose their superfluid properties.

    “It’s a complicated experiment, with a lot of laser beams, electronics, and magnets, and we really had to get everything stable,” Burton says. “It took so long just to iron out all the details to eventually have this ultracold matter in the presence of these high fields, and keep them cold — some of it was painstaking work.”

    In the end, the researchers were able to keep the superfluid gas stable for a tenth of a second. During that time, the team took time-of-flight pictures of the distribution of atoms to capture the topology, or shape, of the superfluid. Those images also reveal the structure of the magnetic field — something that’s been known, but never directly visualized until now.

    “The main accomplishment is that we were able to verify and identify the superfluid state,” Ketterle says. “If we can get synthetic magnetic fields under even better control, our laboratory could do years of research on this topic. For the expert, what it opens up is a new window into the quantum world, where materials with new properties can be studied.”

    Going forward, the team plans to carry out similar experiments, but to add strong interactions between ultracold atoms, or to incorporate different quantum states, or spins. Ketterle says such experiments would connect the research to important frontiers in material research, including quantum Hall physics and topological insulators.

    “We are adding new perspectives to physics,” Ketterle says. “We are touching on the unknown, but also showing physics that in principle is known, but at a new level of clarity.”

    This research was funded by the National Science Foundation, the Air Force Office for Scientific Research, and the Army Research Office.

    See the full article here.


     
  • richardmitnick 4:34 pm on August 7, 2015 Permalink | Reply
    Tags: , Borexino, KamLAND, , Physics   

    From Physics: “Focus: Neutrinos Detected from the Earth’s Mantle” 

    Physics

    August 7, 2015
    Mark Buchanan

    Neutrino Eyes. The Borexino experiment in Italy reports detecting 24 neutrinos produced by radioactive decay in the Earth over a seven-year period.


    The steady decay of long-lived radioactive isotopes within the Earth heats the planet and also sends out streams of neutrinos, which can be observed by large detectors. The Borexino Collaboration now reports a new set of data for such “geoneutrinos” and indicates that at least some of them originate from the Earth’s mantle. The work could improve researchers’ understanding of how radioactive decays help drive internal geophysical processes, including the slow convection of rock in the Earth’s mantle.

    Neutrinos are notorious for interacting with matter extremely rarely—a light-year-thick wall of lead would only stop half of the neutrinos flying through—so detection is challenging. But using large detectors, both Borexino and KamLAND, another international collaboration, have previously detected geoneutrinos with very high confidence. With more data, researchers hope to gain further information about the distribution of radioactive isotopes in the Earth’s interior and about the amount of heat they deliver to various subterranean regions.
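The light-year-of-lead figure implies exponential attenuation: the fraction of neutrinos surviving a thickness d is 0.5^(d/L), with L one light-year. A quick sketch, where the Earth-diameter comparison is our illustration rather than a figure from the article:

```python
# Exponential attenuation of neutrinos in lead, taking the text's figure
# that one light-year of lead stops half of them.
L_ly = 1.0                   # half-stopping thickness, light-years (from the text)
m_per_ly = 9.461e15          # metres in one light-year
earth_diameter_m = 1.274e7   # illustrative comparison thickness, m

d_ly = earth_diameter_m / m_per_ly
surviving = 0.5 ** (d_ly / L_ly)
print(f"Fraction stopped by an Earth-diameter of lead: {1 - surviving:.1e}")
```

Only about one neutrino in a billion would be stopped by a wall of lead as thick as the Earth, which is why detectors this large still record so few events.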

    The Borexino detector, which contains 300 metric tons of a fluid that can emit light flashes in response to particles, operates at the underground Gran Sasso National Laboratory in Italy and detects electron antineutrinos, commonly created in nuclear decays. From December of 2007 through March of 2015, the detector recorded a total of 77 candidate geoneutrino events, compared with 46 events the team reported in 2013 [1].

    Of all known long-lived radioactive isotopes, only uranium-238 and thorium-232 are abundant enough and produce antineutrinos of sufficient energy to contribute significantly to detection events. However, nuclear reactors also generate antineutrinos. Using data from the International Atomic Energy Agency, the Borexino team calculated that about 53 of the 77 detected antineutrinos were likely to be from reactors, leaving about 24 true geoneutrinos. The certainty of this detection is the highest ever achieved for geoneutrinos; the chance that all of these particles come from reactors is less than one in a hundred million.
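The counting arithmetic in this paragraph can be sketched as follows; the uncertainty shown is a naive Poisson estimate for illustration only, not the collaboration's full likelihood analysis:

```python
import math

total_candidates = 77   # antineutrino candidates, Dec 2007 - Mar 2015
reactor_expected = 53   # estimated reactor background (from IAEA data)

geoneutrinos = total_candidates - reactor_expected
sigma = math.sqrt(total_candidates)   # naive counting uncertainty on the total
print(f"Geoneutrinos: ~{geoneutrinos} +/- {sigma:.0f}")
```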

    The Borexino collaboration estimated the number of geoneutrinos originating from the Earth’s mantle, rather than from the crust. Their previous estimate had a large uncertainty and not very high confidence that any of the detected geoneutrinos came from the mantle. With the larger dataset, the team reduced the error bars enough to say with 98% confidence that they have detected mantle neutrinos. To find the fraction from the mantle, as before, they estimated the number of neutrinos expected from the crust based on the measured abundance of uranium and thorium and then subtracted this number from the total. They found that about half of the geoneutrinos most likely originated from the mantle.

    The researchers also estimated the total amount of heat generated by radioactive decays. Geoscientists know that the Earth generates about 47 terawatts of power from its interior, some from “primordial heat” left over from the Earth’s formation and the rest from radioactive decays. The fraction of heat attributable to each of these sources remains largely unknown. The new Borexino analysis gives an estimate for the radiogenic component of heating of about 33 terawatts (with large error bars)—higher than earlier studies.

    Borexino team leader Aldo Ianni, of the Gran Sasso Laboratory, suggests that future studies conducted over longer periods of time will reduce uncertainties and allow accurate geoneutrino spectroscopy—distinguishing neutrinos according to the element from which they originated. Such data would provide information on the distribution of isotopes throughout the Earth’s interior. The current study could just barely distinguish between antineutrinos coming from uranium-238 decays and those from thorium-232 decays, based on the particles’ energies. However, the uncertainties remain too large to make definitive statements.

    “For those of us in the field, this is very impressive progress,” says Jason Detwiler of the University of California at Berkeley, a member of the KamLAND group. “Their spectrum is very clean and beautifully and incontrovertibly demonstrates the presence of the geoneutrino signal.” Detwiler says that the number of mantle neutrinos seen by Borexino may have geophysical significance. “The data are consistent with there being enough radiogenic heat to drive mantle convection,” he says, referring to the slow turnover of mantle material over geologic time.

    This research is published in Physical Review D.

    References

    1. G. Bellini et al. (Borexino Collaboration), Phys. Lett. B 722, 295 (2013).

    See the full article here.


    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

     
  • richardmitnick 4:03 pm on August 7, 2015 Permalink | Reply
    Tags: , , Physics   

    From Caltech: “Caltech Announces Discovery in Fundamental Physics” 

    Caltech

    08/07/2015
    Rod Pyle

    One of the metallic samples studied, niobium diselenide, is seen here–the square in the center–as prepared for an X-ray diffraction experiment.
    Credit: University of Chicago/Argonne National Laboratory

    This cutaway schematic shows the diamond anvil cell, a pressure vessel in which the experiments were conducted. The target material is situated between two diamonds, represented here in blue. For this study, the diamond anvil generated pressures up to 100,000 times atmospheric pressure at sea level. Credit: University of Chicago/Argonne National Laboratory

    When the transistor was invented in 1947 at Bell Labs, few could have foreseen the future impact of the device. This fundamental development in science and engineering was critical to the invention of handheld radios, led to modern computing, and enabled technologies such as the smartphone. This is one of the values of basic research.

    In a similar fashion, a branch of fundamental physics research, the study of so-called correlated electrons, focuses on interactions between the electrons in metals.

    The key to understanding these interactions and the unique properties they produce—information that could lead to the development of novel materials and technologies—is to experimentally verify their presence and physically probe the interactions at microscopic scales. To this end, Caltech’s Thomas F. Rosenbaum and colleagues at the University of Chicago and the Argonne National Laboratory recently used a synchrotron X-ray source to investigate the existence of instabilities in the arrangement of the electrons in metals as a function of both temperature and pressure, and to pinpoint, for the first time, how those instabilities arise. Rosenbaum, professor of physics and holder of the Sonja and William Davidow Presidential Chair, is the corresponding author on the paper that was published on July 27, 2015, in the journal Nature Physics.

    “We spent over 10 years developing the instrumentation to perform these studies,” says Yejun Feng of Argonne National Laboratory, a coauthor of the paper. “We now have a very unique capability that’s due to the long-term relationship between Dr. Rosenbaum and the facilities at the Argonne National Laboratory.”

    Within atoms, electrons are organized into orbital shells and subshells. Although they are often depicted as physical entities, orbitals actually represent probability distributions—regions of space where electrons have a certain likelihood of being found in a particular element at a particular energy. The characteristic electron configuration of a given element explains that element’s peculiar properties.

    The work on correlated electrons looks at a subset of electrons. Metals, for example, have an unfilled outermost orbital, and their electrons are free to move from atom to atom; metals are therefore good electrical conductors. When metal atoms are tightly packed into lattices (or crystals), these electrons mingle together into a “sea” of electrons. The metallic element mercury is liquid at room temperature and shows very little resistance to electric current, in part because of its electron configuration. At about 4 degrees above absolute zero (roughly -452 degrees Fahrenheit), mercury’s electron arrangement and other properties create communal electrons that show no resistance to electric current at all, a state known as superconductivity.

    Mercury’s superconductivity and similar phenomena are due to the existence of many pairs of correlated electrons. In superconducting states, correlated electrons pair to form an elastic, collective state through an excitation in the crystal lattice known as a phonon (specifically, a periodic, collective excitation of the atoms). The electrons are then able to move cooperatively in the elastic state through a material without energy loss.

    Electrons in crystals can interact in many ways with the periodic structure of the underlying atoms. Sometimes the electrons modulate themselves periodically in space. The question then arises as to whether this “charge order” derives from the interactions of the electrons with the atoms, a theory first proposed more than 60 years ago, or solely from interactions among the sea of electrons themselves. This question was the focus of the Nature Physics study. Electrons also behave as microscopic magnets and can demonstrate “spin order,” which raises similar questions about the origin of the local magnetism.

    To see where the charge order arises, the researchers turned to the Advanced Photon Source at Argonne [APS].


    The Photon Source is a synchrotron (a relative of the cyclotron, commonly known as an “atom-smasher”). These machines generate intense X-ray beams that can be used for X-ray diffraction studies. In X-ray diffraction, the patterns of scattered X-rays are used to provide information about repeating structures with wavelengths at the atomic scale.

    In the experiment, the researchers used the X-ray beams to investigate charge-order effects in two metals, chromium and niobium diselenide, at pressures ranging from 0 (a vacuum) to 100 kilobar (100,000 times normal atmospheric pressure) and at temperatures ranging from 3 to 300 K (or -454 to 80 degrees Fahrenheit). Niobium diselenide was selected because it has a high degree of charge order, while chromium, in contrast, has a high degree of spin order.

    The researchers found that there is a simple correlation between pressure and how the communal electrons organize themselves within the crystal. Materials with completely different types of crystal structures all behave similarly. “These sorts of charge- and spin-order phenomena have been known for a long time, but their underlying mechanisms have not been understood until now,” says Rosenbaum.

    Paper coauthors Jasper van Wezel, formerly of Argonne National Laboratory and presently of the Institute for Theoretical Physics at the University of Amsterdam, and Peter Littlewood, a professor at the University of Chicago and the director of Argonne National Laboratory, helped to provide a new theoretical perspective to explain the experimental results.

    Rosenbaum and colleagues point out that there are no immediate practical applications of the results. However, Rosenbaum notes, “This work should have applicability to new materials as well as to the kind of interactions that are useful to create magnetic states that are often the antecedents of superconductors.”

    “The attraction of this sort of research is to ask fundamental questions that are ubiquitous in nature,” says Rosenbaum. “I think it is very much a Caltech tradition to try to develop new tools that can interrogate materials in ways that illuminate the fundamental aspects of the problem.” He adds, “There is real power in being able to have general microscopic insights to develop the most powerful breakthroughs.”

    The coauthors on the paper, titled “Itinerant density wave instabilities at classical and quantum critical points,” are Yejun Feng and Peter Littlewood of the Argonne National Laboratory, Jasper van Wezel of the University of Amsterdam, Daniel M. Silevitch and Jiyang Wang of the University of Chicago, and Felix Flicker of the University of Bristol. Work performed at the Argonne National Laboratory was supported by the U.S. Department of Energy. Work performed at the University of Chicago was funded by the National Science Foundation. Additional support was received from the Netherlands Organization for Scientific Research.

    See the full article here.


    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

     
  • richardmitnick 6:40 am on August 7, 2015 Permalink | Reply
    Tags: , Friction, , Physics   

    From NOVA: “Friction Fighters” 

    NOVA

    05 Aug 2015
    Anna Lieb

    Is friction real? Once, with the quiet certainty of someone who just stayed up all night in the company of equations describing concrete, my college roommate told me that friction was made up.

    Now, I’m pondering her words as I stare at six ytterbium atoms. They are blue and dancing, projected on the wall of a small room off a long hallway at MIT. Lasers and electronics march all over a wide tabletop, climbing up into the ceiling and slithering down to the floor. I’m about to learn that everything I thought I knew about friction is a 14th-century work of fiction—and that the truth is stranger by far.

    I’m in the lab of Vladan Vuletic, a professor of physics here, where two of his graduate students are feeding electrical current through a circlet of aggressively coiled wires into a shoebox-sized, airless vault, instructing the ytterbium atoms to move in unison—in time with swing music, even.


    Ytterbium ions move on command in Vladan Vuletic’s lab at MIT.
    Video can be downloaded at the original article.

    Vuletic and his lab group spent years setting up this maze of a room in order to study technology so new we’re not quite sure if it really exists yet: quantum computing. But when he realized that their experiment could do much more, his curiosity sent him on an unexpected detour. “We could study friction in a way that was not possible before, namely, have direct access to looking at each atom individually.”

    Friction is a simple word that glosses over a complex phenomenon arising from a dizzying array of interactions. “Friction is a very elusive thing. It’s not something you can touch, but you always feel the effect,” says Ali Erdemir, a senior scientist at Argonne National Laboratory who has spent decades figuring out how to reduce friction losses in transportation. Friction gives and friction takes away. It ensures that our shoes don’t slip and our vehicles stop on command. But friction also eats up roughly one third of all the fuel we burn in our cars, and deep underground, friction between bits of the earth’s crust decides when and where an earthquake will occur.

    1
    Friction in engines and other mechanical parts wastes massive amounts of energy in transportation.

    Tribologists, the clan of scientists and engineers who study interacting surfaces, think on scales ranging from atoms to airplane wings, and their efforts have huge potential payoffs. In transportation alone, researchers think that reducing the energy lost by surfaces rubbing against each other in engines could save 1% of all the energy used in the U.S., says Robert Carpick, a professor of mechanical engineering at the University of Pennsylvania.

    For tribologists, the experiments going on right now in Vuletic’s lab could offer a fresh window into a force that’s almost as poorly understood as it is ubiquitous. “In some ways there’s more fundamental physics in our understanding of black holes light years away from us than there is about the friction between our feet and the ground,” Carpick says.

    Laws and Loopholes

    Most people’s first encounters with the scientific side of friction are brief and quickly forgotten. Carpick, for example, had no idea friction was a subject of active research until he started working in a tribology lab as a graduate student. Jaqueline Krim, who heads a nanotribology lab at North Carolina State University, says that just two basic laws about friction, wedged into an introductory physics course, comprised “almost 100% of what I learned up to my Ph.D.”

    In fact, those two basic laws go back a long way. “Leonardo da Vinci and the other guy—whose name I don’t actually remember—wrote down the laws by 1700,” says George Smith, a historian of science at Tufts University and mechanical engineer. Da Vinci worked out his rules 200 years before the other guy—Guillaume Amontons—but da Vinci never published. Amontons printed up his laws in 1699, died shortly thereafter at the age of 42, and all but disappeared from history.

    Here’s what Amontons’s laws say: Imagine dragging a reluctant elephant across a parking lot. Suppose this hypothetical pachyderm is stubborn enough to keep all four legs locked in place and all four feet touching the pavement. Once you overcome inertia to get the beast moving, all your effort goes into fighting the friction between the elephant’s hooves and the asphalt. Amontons’s first law says that the friction force is proportional to the force of the pavement pushing against the weight of the elephant. (In physics, this is called the “normal force” because it’s normal—that is, perpendicular—to the surfaces in question.) So if you stack a second elephant on top of the first, you get twice as much friction because you have twice the normal force. (Though the normalcy of the stacked elephant situation is admittedly debatable.) The second law states that friction doesn’t depend on how much area is in contact. So if your elephant daintily lifts up one of its front legs, and one of its back legs, the friction doesn’t change, even though there’s only half as much hoof area touching the ground.
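
    The elephant thought experiment above can be written down in a few lines. This is a minimal sketch of Amontons's laws, using a made-up coefficient of friction and illustrative masses; the key point is that contact area never appears in the formula.

```python
# A minimal sketch of Amontons's laws. MU is an assumed, illustrative
# coefficient of friction between hypothetical elephant foot and asphalt.
MU = 0.6
G = 9.81  # gravitational acceleration, m/s^2

def friction_force(mass_kg, mu=MU):
    """First law: friction is proportional to the normal force (here, the
    weight of the elephant). Second law: contact area never enters the
    formula, so lifting two feet changes nothing."""
    normal_force = mass_kg * G
    return mu * normal_force

one_elephant = friction_force(4000)   # a roughly 4-tonne elephant
two_elephants = friction_force(8000)  # stacked: twice the normal force

# Twice the load means twice the friction.
assert abs(two_elephants - 2 * one_elephant) < 1e-6
```

    Doubling the mass doubles the friction force; nothing else about the elephant matters in this idealized picture, which is exactly where the model starts to fray at small scales.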

    Amontons’s laws do a reasonably good job of describing many everyday situations, but they are nonetheless fiction. They fall short because they don’t really tell us anything about what’s going on between two sliding surfaces. The closer we look, the more loopholes tribologists are finding in Amontons’s laws.

    For example, let’s take a second look at the second law. If friction comes from interactions between two surfaces, then wouldn’t more surface mean more opportunities for things to catch and snag against each other and thus more friction? “This is something that always intrigued me, you know,” Vuletic says. “It turns out that even this is not perfectly well understood.”

    Or take this other example: Which is easier, dragging a box across an ice rink or a soccer field? You might expect that smoother surfaces like ice always slide more easily than rough ones like grass. But this is not always true. If you take two copper surfaces and polish them to perfection, then the copper refuses to slide at all. “When the atoms in contact are all of the same kind,” explained the physicist Richard Feynman in one of his lectures, “there is no way for the atoms to ‘know’ that they are in different pieces of copper.”

    Yet another unsolved tribology mystery involves a Soviet physicist named J.W. Obreimoff, who in 1929 was using a Gillette razor to slice rock the hard way. He cut into a thin sheet of mica, blade parallel to the glittery surface. As he sliced, Obreimoff saw what he described as a “splash of light.” To this day, neither Amontons’s laws nor any other description of interacting surfaces can explain the phenomenon, says Seth Putterman, a professor of physics at UCLA. Yet it’s everywhere. The same physics is at work when you crunch down on a wintergreen-flavored Lifesaver candy and see sparks or when a cat’s fur crackles with static electricity after it walks across carpet. “For sure we don’t understand the cat’s fur,” Putterman says.

    A Rough Place

    Our partial ignorance may really be an issue of scale. If you zoom in enough, the seemingly smooth surface of an ice sheet or mirror would resemble a mountain range. “Atomically speaking, there’s no such thing as a flat surface,” says Michael Strano, a professor of chemical engineering at MIT. When you slide one surface over another, it’s like you’ve turned the Himalayas upside down and started dragging them across the Rocky Mountains. The peaks, called “asperities” in tribology lingo, bump into each other. Each time they stretch, compress, or break off, they sap energy from the motion.

    The rough nature of smooth-looking surfaces could help explain why the second law of friction suffices for macroscopic objects but breaks down if we zoom close enough. Most of what we measure as an object’s surface area (say, the elephant foot) doesn’t interact with the other surface (say, the pavement). In fact, only a few atoms at the tops of the asperities in the foot get close to the tops of the asperities in the pavement. These are the only atoms that “actually see each other,” Erdemir says. “They are intimately interacting.”

    If we could master those interactions, we might be able to get rid of friction.

    3
    Up close, even the smoothest surfaces resemble mountain ranges.

    Researchers theorized in the late 1980s about how to eliminate one type of friction, known as stick-slip. Stick-slip friction happens when the peaks of one surface nestle down into the valleys of the other and get stuck—until you apply enough force to coax them up and out. In many cases, it’s the dominant frictional effect at atomic scales.

    The trick to overcoming stick-slip is to induce apathy, convincing the two surfaces not to give a damn if you move them across one another. Such surfaces are called “incommensurate.” To picture incommensurate surfaces, suppose we papier-mâchéd over one-inch round marbles spaced exactly one inch apart. To make the second surface incommensurate with the first, we papier-mâché over more marbles to make a surface that can’t mesh with the first one. That means making the space between the marbles in the second surface different. (Not just any spacing will do—if the new bumps are exactly two inches apart, then the surfaces will still fit together, with every other peak corresponding to a valley. For the surfaces to be incommensurate, the ratio of the spacing must be an irrational number, which cannot be written as a ratio of integers. A ratio of π would work, but ratios of 17 or ⅓ would not, because then every 17th atom or every third atom would line up with atoms in the other surface.)

    4
    When the marbles are equally spaced as in (a), or when one spacing is an integer multiple of the other as in (b), the surfaces can interlock. When the ratio of the spacing is an irrational number like pi (c), the surfaces are incommensurate.

    If you build two incommensurate surfaces, no matter how you shove them around, “you’ll always have some fitting and some not fitting,” says James Hone, a professor of mechanical engineering at Columbia University. If the surfaces can interlock, they’ll prefer the stuck-together arrangement. But for incommensurate surfaces, apathy sets in. “Then the system doesn’t care if it’s moving sideways,” he says, so you don’t lose energy as you move. If done right, the two incommensurate surfaces might slide past one another with vanishingly low friction. Such surfaces are especially intriguing to the materials scientists, physicists, and engineers who have spent the last 25 years trying to observe frictionless sliding, a phenomenon known as superlubricity.  Some argue they’ve already found it.
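
    You can see this “some fitting and some not fitting” cancellation with a toy calculation. The sketch below (an illustrative model, not the actual experiment) treats one surface as a rigid chain of atoms and the other as a sinusoidal landscape, then measures how much the total energy swings as the chain slides. A commensurate spacing gives a large swing (strong stick-slip); an irrational spacing ratio like the golden ratio gives almost none.

```python
import math

def corrugation_energy(spacing_ratio, n_atoms=1000, offset=0.0):
    """Total potential energy of a rigid chain of n_atoms sitting on a
    sinusoidal 'surface' of unit period, as a function of sliding offset.
    spacing_ratio = atom spacing / surface period."""
    return sum(math.cos(2 * math.pi * (i * spacing_ratio + offset))
               for i in range(n_atoms))

def energy_swing(spacing_ratio):
    """How much the energy varies as the chain slides: a rough proxy for
    how strongly the surfaces stick and slip."""
    energies = [corrugation_energy(spacing_ratio, offset=k / 50)
                for k in range(50)]
    return max(energies) - min(energies)

golden = (1 + 5 ** 0.5) / 2   # irrational spacing ratio: incommensurate

print(energy_swing(1.0))      # commensurate: huge swing, strong stick-slip
print(energy_swing(golden))   # incommensurate: tiny swing, easy sliding
```

    With matched spacing, every atom climbs in and out of a valley at the same time, so the energy barrier grows with the number of atoms; with golden-ratio spacing, the atoms' contributions cancel almost perfectly, which is the essence of structural superlubricity.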

    Vanishing Act

    In an airless chamber in the center of the lab, Alexei Bylinskii, a graduate student in Vuletic’s group, uses electric fields to corral a handful of ytterbium ions into a space the size of a matchbox. By changing the electric current flowing through the maze of wires, he can carefully pull these atoms over a surface below and measure how much friction the atoms feel.

    This lower surface, which is designed to be incommensurate with the string of ytterbium ions above, is called an optical lattice. It is made out of light, but that doesn’t mean the surface is an illusion. By bouncing light between two mirrors, the group creates a standing wave of light—imagine the peaks and troughs of a frozen ocean wave. These peaks and troughs correspond to points of higher and lower energy for the ytterbium ions, which want to move down into troughs and away from peaks. From the ion’s point of view, this landscape resembles the high and low points on the surface of a material like copper—but the scientists can control the shape and size of the optical lattice far more precisely than they can control the surface of a physical chunk of metal. When Vuletic’s lab rigged the spacing of the ions to be incommensurate with the spacing of the optical lattice below, they observed a dramatic reduction in friction.
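
    The standing wave itself is simple to sketch. Two counter-propagating beams of the same wavelength add up to a stationary intensity pattern whose peaks and troughs repeat every half wavelength; the snippet below (illustrative units, not the actual parameters of Vuletic’s apparatus) shows the resulting corrugation the ions feel.

```python
import math

WAVELENGTH = 1.0  # laser wavelength, arbitrary units (an assumed value)

def standing_wave_intensity(x):
    """Two counter-propagating beams E0*cos(kx - wt) and E0*cos(kx + wt)
    superpose to 2*E0*cos(kx)*cos(wt); the time-averaged intensity
    therefore varies as cos^2(kx) along the lattice."""
    k = 2 * math.pi / WAVELENGTH
    return math.cos(k * x) ** 2

# The pattern repeats every half wavelength: these are the "peaks" and
# "troughs" that act as the artificial surface for the ytterbium ions.
assert abs(standing_wave_intensity(0.0)
           - standing_wave_intensity(WAVELENGTH / 2)) < 1e-9
```

    Because the lattice period is set by the laser wavelength, the experimenters can tune it at will, which is how the ion spacing can be made deliberately commensurate or incommensurate with the surface below.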

    Even outside this pristine vacuum chamber, researchers have created systems with incredibly low friction. Ali Erdemir and colleagues at Argonne National Lab recently created a surface coating that resembles minuscule ball bearings. The “ball bearings” are actually tiny diamonds, wrapped up in a wispy layer of graphene to produce two incommensurate—and incredibly slippery—surfaces. Erdemir calls the work a clear example of superlubricity.

    But some tribologists argue the term is misapplied—“you might call it very good lubricity, not superlubricity,” Carpick says. In physics, the prefix “super-” typically applies only in extreme situations. Sokoloff explains that when researchers observe superconductivity, current flows unhindered because “electrical resistance really does go down to zero.” Similarly, when liquid helium exhibits superfluidity, its viscosity vanishes, allowing the stuff to eerily climb up and over the walls of its container. So far, superlubricity experiments have demonstrated very low—but not actually zero—friction.

    The terminology dispute hints at something deeper than a quibble over nomenclature. The theoretical picture of superconductivity relies on quantum mechanics. Superfluidity also defies classical physics. So will quantum mechanics help us understand where to look for “true” superlubricity?

    Not necessarily, Sokoloff argues. “Right now, according to the way we understand things…you’re probably not going to see truly zero friction,” Sokoloff says.

    Scientists don’t yet know what role quantum mechanics might play in friction on the atomic scale. Vuletic’s lab is working on cooling their experiment down to just a hair above absolute zero, where they hope to see ytterbium ions quantum tunneling—moving through the peaks, rather than over them. They want to see how this quantum tunneling affects friction, an observation that may help us understand friction at larger scales. But it’s not a done deal, Bylinskii says. “Whether friction in the real world depends on quantum mechanical effects, that’s an open question.”
    Sliding Forward

    If answering that and other questions would be helpful to traditional mechanical engineers, it would be a breakthrough for nano engineers. At large scales, we’ve come up with shortcuts to make friction less destructive. Take a car tire. On average, every revolution on pavement wears off one layer of atoms, Vuletic says. “For a tire it doesn’t matter, because there’s billions and billions of layers until you have a millimeter or centimeter of loss of profile.”

    But as researchers design devices that are only 100 or even ten atoms thick, losing even one layer of atoms is a pretty big deal. “Nanoscale stuff is all surface,” says Hone, the Columbia nano engineer, “and so once you contaminate the surface, you’ve changed what it is.”

    Back at MIT, Strano’s group is interested in scaling up nanoscale discoveries about friction and other phenomena to make exotic materials for safer, lighter cars and airplanes. The potential applications are huge: Ali Erdemir of Argonne estimates that mitigating friction losses in transportation alone could save $500 billion in fuel costs and 800 million tons of CO2 annually.

    Mastering friction could also help make cars safer. When a car crashes, several thousand pounds of mass that were moving suddenly aren’t anymore. As the energy of motion dissipates, the frame of the car—not to mention the occupants—often crumples up. “You’d like to flow that energy in a certain way, and you’d like it not to go to you,” Strano says. Controlling atomic-level friction could help design a material that’s rigid in most cases but bends easily when pushed from a certain direction, allowing designers to carefully orchestrate how the frame of a car deforms in the event of an accident.

    All of the researchers I talked to say they’re a long way from completely eliminating or even expertly controlling friction in the messy world outside the laboratory. Strano points out that, in general, researchers observe amazing properties on atomic scales, but they have made slower progress in advancing tantalizing technologies like ultra-efficient engines or futuristic airplane wings.

    But that hasn’t stopped them. “It used to be that friction was okay if your car wasn’t wearing away,” says Krim, the nanotribologist. “Our world is less tolerant now of waste.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     