Tagged: Physics

  • richardmitnick 3:16 pm on December 15, 2014
    Tags: Physics

    From SPACE.com: “Will We Ever Find Dark Matter?” Previously Covered Elsewhere, But K.T. is an Excellent Exponent of her Material 

    space-dot-com logo

    SPACE.com

    December 11, 2014
    Kelen Tuttle, The Kavli Foundation

    Scientists have long known about dark matter, a mysterious substance that neither emits nor absorbs light. But despite decades of searching, they have not yet detected dark matter particles.

    With ten times the sensitivity of previous detectors, three recently funded dark matter experiments — the Axion Dark Matter eXperiment Gen 2, LUX-ZEPLIN and the Super Cryogenic Dark Matter Search at the underground laboratory SNOLAB — have scientists crossing their fingers that they may finally glimpse these long-sought particles.

    ADMX
    University of Washington physicists Gray Rybka (right) and Leslie Rosenberg examine the primary components of the ADMX detector.
    Credit: Mary Levin, University of Washington

    LUX Dark matter
    LUX-ZEPLIN

    SUPER CDMS
    Super Cryogenic Dark Matter Search

    Late last month, The Kavli Foundation hosted a Google Hangout so that scientists on each of those experiments could discuss just how close we are to identifying dark matter. In the conversation below are three of the leading scientists in the dark matter hunt:

    Enectali Figueroa-Feliciano: Figueroa-Feliciano is a member of the SuperCDMS collaboration and an associate professor of physics at the MIT Kavli Institute for Astrophysics and Space Research.

    Harry Nelson: Nelson is the science lead for the LUX-ZEPLIN experiment and is a professor of physics at the University of California, Santa Barbara.

    Gray Rybka: Rybka leads the ADMX Gen 2 experiment as a co-spokesperson and is a research assistant professor of physics at the University of Washington.

    The SuperCDMS experiment at the Soudan Underground Laboratory uses five towers like the one shown here to search for WIMP dark matter particles.
    Credit: Reidar Hahn, Fermilab

    Below is a modified transcript of the discussion. Edits and changes have been made by the participants to clarify spoken comments recorded during the live webcast. To view and listen to the discussion with unmodified remarks, you can watch the original video.

    The Kavli Foundation: Let’s start with a very basic, yet far from simple question. One of our viewers asks how we know for sure that dark matter even exists. Enectali, I’m hoping you can start us off. How do you know that there’s something out there for you to find?

    E.F.F.: The primary evidence telling us dark matter is out there is from astronomical observations. In the 1930s, evidence first came in the observations of the velocities of galaxies inside galaxy clusters. Then, in the 1970s, it came in the velocities of stars inside galaxies. One way to explain this is if you imagine tying a string around a rock and twirling it around. The faster you twirl the rock on the string, the more force you have to use to hold onto that string. When people looked at the rotation velocities of galaxies, they noticed that stars were moving way too fast around the center of the galaxy to be explained by the gravitational force from the mass we knew was there from our observations. The implication was that if the stars are moving too fast for the visible matter’s gravity to hold them together, there must be more matter than we can see holding everything in place.
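    To make this concrete with the standard worked equation (my addition, not part of the transcript): a star on a circular orbit of radius r around the galactic center moves at a speed set by the mass enclosed inside that orbit,

        v(r) = \sqrt{ G \, M(<r) / r }

    If nearly all the mass were the visible matter concentrated near the center, v(r) would fall off at large radii roughly as 1/\sqrt{r}. The measured rotation curves instead stay roughly flat far out, which requires M(<r) to keep growing with r, that is, unseen mass extending well beyond the visible stars and gas.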

    Today, many different types of observations have been done at the very largest scales, using clusters of galaxies and what’s called the cosmic microwave background. Even when we look at the small scale of particle physics, we know that there are things about the Standard Model that aren’t quite right. We’re trying to find out what’s missing. That’s part of what’s being done at the Large Hadron Collider at CERN and other collider experiments. Some of the theories predict particles that would be good candidates for dark matter. So from the largest cosmic scales to the smallest particle physics scales there are reasons to believe that dark matter is there and there are candidates for what that dark matter can be.

    Cosmic Microwave Background  Planck
    CMB per ESA/Planck

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    TKF: Harry, I’m hoping that you can follow up on that a little bit. Your experiment and the one Enectali works on both look for the most promising type of theoretical particle, one that interacts so weakly with the matter in our world that it’s called the WIMP. In fact there are more than thirty dark matter experiments that are currently planned or underway, and the great majority of them search for this same type of particle. Why do all these experiments focus on the WIMP?

    H.N.: First I want to emphasize that WIMP is an acronym, W-I-M-P, which stands for weakly interacting massive particle. “Massive” means a mass that’s anywhere from a little smaller than the mass of a proton up to many times the mass of a proton. The WIMP is so popular in part because it’s easy to fit into descriptions of the Big Bang — maybe the easiest to fit. The concept to understand here is called thermal equilibrium, and that’s just like when you put something in the refrigerator: it ends up at the same temperature as the refrigerator. I had a leftover sandwich last night from when I went out to dinner and I put it in my refrigerator, and now it’s cold. In much the same way, with WIMPs we hypothesize that dark matter in the early universe was in thermal equilibrium with our matter. But after the Big Bang, the universe gradually cooled down and our matter fell out of equilibrium with the dark matter. Then the dark matter keeps finding itself and, through a process called annihilation, turning into our matter. But the reverse process can no longer go on because our matter doesn’t have enough thermal energy.

    To explain the current abundance of dark matter, the interaction between dark matter and us numerically must be about the same as the weak interaction. That’s W-I in WIMP: weakly interacting. It implies a numerical strength that is consistent with beta decay in radioactivity or, for example, the production of the Higgs particle at the Large Hadron Collider. That the weak interaction appears from this idea about the Big Bang appeals to people. Occam’s Razor, the idea that you don’t want to make things any more complicated than they need to be, makes it attractive. But it doesn’t prove it.
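    The standard back-of-the-envelope version of this argument, often called the “WIMP miracle,” goes as follows (my addition, not a formula quoted in the discussion): the relic abundance left over after freeze-out scales inversely with the annihilation cross section,

        \Omega_\chi h^2 \approx \frac{3 \times 10^{-27} \, \mathrm{cm^3\,s^{-1}}}{\langle \sigma_{\mathrm{ann}} v \rangle}

    so matching the observed \Omega_\chi h^2 \approx 0.1 requires \langle \sigma_{\mathrm{ann}} v \rangle \approx 3 \times 10^{-26} \, \mathrm{cm^3\,s^{-1}}, which is roughly what a particle with weak-scale mass and couplings naturally gives.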

    There are at least two other ways to detect the WIMP. One is at the Large Hadron Collider, as Enectali mentioned, and another is in these WIMP annihilations, where the WIMPs find each other and turn into our matter in certain places in the universe such as the center of stars or the center of our galaxy. If we get lucky, we could hit a trifecta; we could see the same particle with experiments like mine — LUX/LZ — or Enectali’s SuperCDMS, we could see it in the Large Hadron Collider, and we could also see it astrophysically. That would be the trifecta.

    Of course there’s a second reason why so many people are building these WIMP experiments, and that’s that we’ve made a lot of progress in how to build them. There’s been a lot of creativity using many techniques to look for these things. I will say that’s partly true because they’re a little easier to look for than the axion, for which you need really talented and expert people like Gray and Leslie Rosenberg.

    TKF: That’s a nice segue into Gray’s experiment. Gray, you don’t look for the WIMP; instead, you look for something called the axion. It’s a very lightweight particle, with no electric charge and no spin, that interacts with our world very rarely. Can you tell us a little bit more about your experiment and why you look for the axion?

    G.R.: I look for the axion because if I looked for WIMPs then I would have to compete with very smart people like Harry and Enectali! But there are other really good reasons as well. The axion is a very good dark matter candidate. We think it may exist because of how physics works inside nuclei. It’s different from the WIMP in that it’s extremely light and you look for it by coupling to photons or, say, a radio-frequency kind of energy. I got involved in this because I was looking at dark matter and saw that there are a lot of people looking for WIMPs and not many people looking for axions. It’s difficult to look for, but there have been some technical breakthroughs that help. For example, just about everyone has cell phones now and so a lot of work has been done at those frequencies — which just happen to be the right frequencies to use when looking for axions. Meanwhile, there’s a lot of work on quantum computers, which means that there’s also a lot of really nice low temperature radio frequency amplification. That too helps with these experiments. So the time is right to be looking for axions.

    TKF: Besides the WIMP and the axion, there are a lot of other theorized particles out there. One of our viewers wrote in and would like to know how likely it is that dark matter is in fact neither of the particles that your experiments look for, but rather is composed of super-heavy particles called WIMPzillas.

    H.N.: The WIMPzilla has WIMP in its name, so that means it’s weakly interacting, and the zilla part is that it’s just as massive as Godzilla. The way it works is that all of our astrophysical measurements tell us how much mass there is per unit volume – essentially, the cumulative total mass per unit volume of dark matter. But these measurements don’t tell us how to apportion that mass. Are there a great many light particles or just a few really heavy particles? We can’t tell from astrophysical data. So it could be that the dark matter consists of just a few super duper heavy things, like WIMPzillas. But because there wouldn’t be many of them out there, to detect them you’d have to build a gigantic detector. What we run into there is that nobody wants to give us billions of dollars to build that gigantic detector. It’s just too much money. I think that’s what keeps us from making progress on the idea of the WIMPzilla.

    LUX Dark matter
    The LUX detector before its large tank was filled with more than 70,000 gallons of ultra-pure water. The water shields the detector from background radiation.
    Credit: Matt Kapust, Sanford Underground Research Facility

    E.F.F.: There are many theoretical dark matter particles. We have to pick a combination of what we can look for with the experiments that we can build and what theory and our current understanding suggests are the best places to look. Now, not all of the theories have as good a foundation as others. Some would work but have different types of assumptions built into them and so we need to make a value judgment as experimentalists. We go to the “theory café” and choose which are the best courses on the menu, then we trim the list down to those that are the most feasible to detect, and then we look at which of those we can afford. That convolution of parameters is what prompts us to look for particular candidates. And if we don’t find dark matter in those places, we will look for them elsewhere. And of course there’s no reason why dark matter has to be one thing; it might be composed of several different particles. We might find WIMPs and axions and other things we don’t know of yet.

    TKF: One of our viewers points us to a press release issued last week by Case Western Reserve University that describes a theory in which dark matter is made up of macroscopic objects. This viewer would like to know whether there’s any reason why dark matter would be more likely to be made up of the individual exotic particles that you look for than it is to be made up of macroscopic particles.

    H.N.: Papers like that are one of the reasons this field is so exciting. There are just so many different ideas out there and there’s this big discussion going on all the time. New ideas come in, we discuss them and think about them, and sometimes the new idea is inconsistent, but other times people say, “Wow, we had no idea; that could be great.”

    This concept that the dark matter might consist of particles that coalesce into solid or massive objects has been around for a long time. In fact, there was a search 20 or so years ago where they looked for large objects in our galaxy that were creating gravitational lensing. When you look at stars out in our galaxy, if they suddenly become brighter that’s evidence of a massive object moving in front of them. You might wonder how an object moving in front of something would give it more light, but that’s the beauty of gravitational lensing — the light focuses around the object. So this idea has been out there, and this paper looks to be a very careful reanalysis.

    Another example is an idea that’s been around for a long time that maybe there is a different kind of nuclear matter out there. Our nuclear matter is made of up and down quarks and maybe there’s another type of nuclear matter that involves the strange quark. People have been searching for that for 30 or 40 years, but we’ve never been able to find it. Maybe it exists and maybe it’s the dark matter we’re searching for. I would say that in some estimate of probabilities it’s less likely, but we could be wrong. What’s great is to have the scientific discussion always going because the probabilities get reassessed all the time.

    G.R.: These massive objects have a very amusing acronym. They’re massive compact halo objects, MACHOs. So for a while it was MACHOs versus WIMPs.

    E.F.F.: One thing that I would add is that this paper and this whole idea of the variety of models really highlights how diverse the possibilities for looking for dark matter are. In that paper, they looked into mica samples that had been buried for many, many years, looking for tracks. When you have a candidate, the theoretical community starts scanning every possibility of a signal that might have been left — not just in our detectors, but also in the atmosphere, in meteorites, in stars and in the structure that we see in the universe. There are other detectors out there that are more indirect than the ones we’ve specifically designed for dark matter. That’s one of the things that makes it exciting: maybe we find dark matter in our detectors and we might also find traces of it in other things that we haven’t even thought about yet.

    TKF: In the history of particle physics, there have been a number of particles that we knew existed long before we were able to detect them — and in a lot of cases, we knew a lot about these particles’ characteristics before we found them. This seems very different from where we are now with dark matter. Why is that? What is fundamentally different here?

    G.R.: We know about dark matter from gravitational interactions, and we have a hard time fitting gravity in with the fundamental particles to begin with. I think that’s a big part of it. Would you all agree?

    H.N.: There are some analogues, but you have to go back in time quite a bit. One of the famous analogues is the discovery of the neutron. The proton was discovered in a fantastic series of experiments during World War I by [Ernest] Rutherford, but he had good intuition and thought there should be another particle that’s like the proton that is neutral, which they called the neutron. Even though they had a pretty good idea what it should be like, it took 12 or 15 years for them to detect one because it was just difficult. Then there was an experiment done by Frederic and Irene Joliot-Curie and their group in France and they interpreted the results in a very strange way. But a guy named James Chadwick looked at their data and said, “My God that’s it!” He repeated the experiment and proved the existence of the neutron.

    That story is so important because the neutron is the key to most uses of nuclear energy. I suspect with dark matter we’ll have some sort of rerun of that. We’re all looking and somewhere, maybe even now, there’s a little bit of data that will cause someone to have an “Ah ha!” moment.

    E.F.F.: We also have this nice framework of the Standard Model, but right now we don’t really have one single theory of what should come after it. The most popular possibility is supersymmetry, which is one of the things that a large number of physicists at the Large Hadron Collider are trying to find. But it’s not at all clear that this is the solution of what lies beyond the Standard Model. That ambiguity leads to a plethora of dark matter models because dark matter lies outside of the framework of the Standard Model and we don’t know in which direction this model will grow or how it will change. Physicists are looking at all the possibilities, many of which have good dark matter candidates. There’s this chasm between where we are now and where the light of understanding is, and we don’t yet know which direction to go to find it. So people are looking in all possible directions generating a lot of great ideas.

    Supersymmetry standard model
    Standard Model of Supersymmetry

    TKF: It seems that the results of your experiments will direct the search in one way or another. One of our viewers would like to know a little bit more about how you go about detecting dark matter in your experiments. Since dark matter really doesn’t interact with us very much, how do you go about seeing it?

    G.R.: Our experiments use very different techniques. My experiment looks for axions that every once in a while couple to photons. They do so in a way that the photons produced are of microwave frequencies. This is quite literally the frequency used by your cell phone or in your microwave oven. So we look for a very occasional transmutation of an axion from the dark matter around us into a microwave photon. We also help this process along using a strong magnetic field. Because the frequency of the photon coming from the axion is very specific, this ends up being a scanning experiment. It’s almost like tuning an AM radio; you know there’s a signal out there at a certain frequency, but you don’t know what the frequency is so you tune around, listening to hear a station. Only we’re looking for a signal that’s coming from dark matter turning into photons.
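    As a rough illustration of why cell-phone-band electronics are relevant here, the photon frequency is fixed by the axion mass through f = m_a c^2 / h. The short sketch below (my addition; the 4 micro-eV mass is an illustrative assumption, not a number from the discussion) converts an assumed axion mass into the microwave frequency a haloscope would tune to:

        # Rough sketch (not from the article): frequency of the photon an axion of a
        # given mass would convert into, f = m_a * c^2 / h
        h  = 6.626e-34    # Planck constant, J*s
        eV = 1.602e-19    # joules per electronvolt

        m_a_eV = 4e-6              # assumed axion mass, 4 micro-eV (illustrative only)
        f_hz   = m_a_eV * eV / h   # corresponding photon frequency in Hz

        print(f"{m_a_eV * 1e6:.0f} micro-eV axion -> {f_hz / 1e9:.2f} GHz photon")
        # ~0.97 GHz, i.e. in the same band as cell phones and microwave ovens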

    E.F.F.: Both Harry and I look for similar particles, these WIMPs. My experiment is particularly good at looking for WIMPs that are about the mass of a proton or a couple times heavier than that, while Harry’s experiment is better at looking for particles that are maybe a hundred to several hundred times heavier than the proton. But the idea is the same. As Harry mentioned before, we know the density of dark matter particles in our region of space in the galaxy, so we can calculate how many of these dark matter particles should be going through me, through you, through your room right now.

    If you stick out your hand and you assume that WIMPs are maybe sixty times the mass of the proton — I’m just picking a number here — you calculate that there should be about 20 million WIMPs going through your hand every second. Now these dark matter particles go straight through your hand and straight through the Earth, but perhaps very occasionally they interact with one of the atoms in the matter that the Earth is made of. So we build detectors that hope to catch some of those very, very rare interactions.
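    For readers who want to see where a number like that comes from, here is a back-of-the-envelope sketch (my addition; the local density, speed and hand area are typical assumed values, not numbers taken from the transcript):

        # Rough WIMP-flux estimate (illustrative assumptions, not from the article)
        rho_dm = 0.3      # local dark matter density, GeV per cm^3 (typical assumed value)
        m_wimp = 60.0     # assumed WIMP mass in GeV, as in the example above
        v_wimp = 2.3e7    # typical galactic speed, about 230 km/s, in cm/s
        area   = 150.0    # rough cross-sectional area of a hand, in cm^2

        n_wimp = rho_dm / m_wimp         # number density of WIMPs, per cm^3
        rate   = n_wimp * v_wimp * area  # WIMPs passing through the hand per second

        print(f"about {rate:.1e} WIMPs per second")
        # ~1.7e7, i.e. tens of millions per second, the same order as the figure quoted above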

    My experiment uses a crystal made out of germanium or silicon that we cool down to milli-kelvin temperatures: almost at absolute zero. If you remember your high school physics, atoms stop vibrating when they get very, very cold. So the atoms in this crystal are not vibrating much at all. If a dark matter particle interacts with one of the atoms in the crystal, the whole crystal starts vibrating and those vibrations are sensed by little microphones that we call phonon sensors. They also release charge and we measure that charge as well. Both of those help us to determine not only the energy that was imparted to the target but what type of interaction it was: Was it an interaction like the one you would expect from a photon or an electron, or was it an interaction you would expect from a WIMP or perhaps a neutron? That helps us to distinguish a dark matter signal from backgrounds coming from radioactivity in our environment. That’s very important when you’re looking for a very elusive signal.

    TKF: In fact you even go to the extent of working far underground to reduce this background noise, is that right?

    E.F.F.: That’s right. And I’ll actually let Harry take it from here.

    H.N.: Our experiments are going to be in two different mines. Ours is about a mile underground in western South Dakota in the Black Hills — the same Black Hills mentioned in the Beatles song “Rocky Raccoon.” Meanwhile, Enectali is up in Sudbury, Ontario, where there’s a heavy metal mine.

    One analogy I wanted to bring up is that what Enectali and I do is a microscopic version of billiards. The targets (xenon in my case, germanium and silicon in his) are like the colored balls on a pool table, and what we’re trying to detect is the cue ball: the dark matter particle we can’t see. But if the cue ball collides with the colored balls, they suddenly move. That’s what we detect.

    As Enectali said, the reason we go deep in a mine and the reason we build elaborate shields around these things is so that we aren’t fooled by radioactivity or neutrons or neutrinos moving the billiard balls. And there are a lot fewer of these fakes when we go deep. Plus, it’s an awful lot of fun to go in these mines. I’ve been working in them for ten or fifteen years now and it’s great to go a mile underground.

    TKF: If one of your experiments is successful in seeing dark matter, Enectali, you said in a previous conversation that the next steps would be to study the dark matter particle’s characteristics and use that knowledge to better understand the particle’s role in the universe. I’m hoping you can explain that last bit a little bit further. Just how far-reaching would such a discovery be?

    E.F.F.: We’re very lucky in that we get to ask these really big questions about what the universe is made of. We know that dark matter makes up about 25 or 26 percent of the universe, and through direct detection we’re trying to figure out what that is exactly.

    But even once we know the mass of the dark matter particle, we still need to understand a lot of other things: whether it has spin, whether it is its own anti-particle, all kinds of properties of the particle itself. But that’s not all that there is to it. This particle was produced some time ago. We want to know how it was produced, when it was produced, and what that did to the universe and to the formation of the universe. There’s a very complicated history of what happened in the universe between the Big Bang and today, and dark matter has a big role to play.

    Dark matter is the glue that holds all the galaxies, all the clusters of galaxies and all the super clusters together. So without dark matter, the universe would not look like it does today. The type of dark matter could change the way that structure formed. So that’s one very important thing that we would like to understand. Another thing is that we don’t really know how dark matter behaves here in our galaxy today. We know its density, but we don’t really know how it’s moving. We have some assumptions, but it will be very interesting to really understand the motion of dark matter – whether it’s clumpy, whether it has structures or streams, whether some of it is in a flat disk. The answers to these questions will have implications for the stars in our galaxy and beyond. All those things will be the next step in what we would love to be doing, which is dark matter astronomy.

    TKF: We have one last question from a viewer who identifies herself as “an interested artist.” Her question is: If you find dark matter, what are you going to call it? It won’t be dark anymore.

    G.R.: I can start with a bad idea. It was called dark matter originally because when you look up at the sky, there are things that produce light — like stars — and there are things that we know are out there because they interact gravitationally but they’re not producing light. They’re dark. But that name kind of implies that they absorb or block light, when in fact dark matter doesn’t. Light goes right through it. So you can call it clear matter, but dark matter at least sounds mysterious. Clear matter sounds rather boring.

    H.N.: I hope you get people better at language than physicists to answer this! If it’s physicists who name it, we’ll end up with a name like gluon. I’d prefer to have a better name than that. Since this viewer is an artist, I’ll point out a sculpture at the Tate in London by Cornelia Parker called Cold Dark Matter: An Exploded View. This idea that there’s something out there that we can’t sense yet is one of those things that sends chills down my spine. I think that scientists share that feeling of wonderment with artists.

    E.F.F.: I’d love to have a naming contest for this 20-some-odd percent of the universe. I think it would produce much better names than we would come up with on our own.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:47 pm on December 9, 2014
    Tags: Physics

    From WSJ: “The Perils of Romanticizing Physics” 

    Wall Street Journal

    The Wall Street Journal

    Opinion
    Dec. 8, 2014
    Ira Rothstein
    Mr. Rothstein is a professor of physics at Carnegie Mellon University.

    It is good to see movies such as Interstellar and The Theory of Everything achieving critical and box-office success—the latest evidence that the ideas involved in relativity and quantum mechanics can capture the imagination.

    Getty Images/Science Photo Library

    Physicists, such as myself, who work on these abstract subjects are funded predominantly by dwindling government grants and have an obligation to communicate these ideas to the public. But relating abstract mathematical ideas to those with less training is difficult, and it requires some pedagogical shortcuts that by necessity are oversimplifications. Quite often one must rely on metaphorical tools that, while vaguely capturing the idea, can often lead to false conclusions.

    The classic example of this comes from Albert Einstein. When trying to explain the concept of time dilation predicted in the theory of relativity, he said, “When you are courting a nice girl, an hour seems like a second. When you sit on a red-hot cinder, a second seems like an hour. That’s relativity.”

    The maestro’s explanation is romantic, but it is also misleading: What Einstein was referring to is a psychological phenomenon, while time dilation is physical, as wonderfully depicted in Interstellar when the protagonist, Cooper, is forced to spend time in the proximity of a black-hole horizon, where his clock slows down relative to the Earth’s clock. Upon returning to Earth, he finds that his daughter, who was a teenager when he left, is now elderly, while he is still a young man.
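    For readers who want the formula behind that scene (my addition, not part of Mr. Rothstein’s piece): in the simplest non-rotating case, a clock hovering at radius r outside a black hole with Schwarzschild radius r_s ticks slower than a distant clock by the factor

        \frac{d\tau}{dt} = \sqrt{ 1 - \frac{r_s}{r} }

    so the closer the clock sits to the horizon (r approaching r_s), the more dramatically it lags clocks far away. The film’s setup involves a rapidly spinning black hole, but the qualitative effect is the same.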

    Understanding the fundamental nature of space and time gives us an appreciation of our place in the universe. But we should be careful not to extrapolate these ideas improperly. The possible implications of quantum entanglement in particular have resonated in modern culture—whether in the physics-infused movies of the moment or in the poet and essayist Christian Wiman’s My Bright Abyss: Meditation of a Modern Believer (2013). He wrote: “If quantum entanglement is true, if related particles react in similar or opposite ways even when separated by tremendous distances, then it is obvious that the whole world is alive and communicating in ways we do not fully understand. And we are part of that life, part of that communication.”

    While this is lovely prose, the conclusion is misleading. Quantum entanglement is a phenomenon in which two particles—say, electrons—are produced in such a way that they are correlated. So, if we know that one of them is spinning in one direction, then we know the other is spinning in the opposite way. What is remarkable about quantum mechanics is that we don’t know which way either particle is spinning until we measure it.

    Moreover, the spin of each particle is “fuzzy”—in some sense, in multiple spin states at the same time—until this measurement is made. But once we measure one particle’s spin, the other particle’s spin is fixed instantaneously, even if they were on opposite sides of our galaxy. (This seems to violate the idea that nothing can travel faster than the speed of light, but that’s another story.)
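    In textbook notation (my addition), the correlated pair Mr. Rothstein describes is the two-particle spin singlet

        |\psi\rangle = \frac{1}{\sqrt{2}} \big( |\uparrow\rangle_1 |\downarrow\rangle_2 - |\downarrow\rangle_1 |\uparrow\rangle_2 \big)

    in which neither particle has a definite spin by itself, yet finding particle 1 “up” along some axis guarantees particle 2 will be found “down” along that same axis, however far apart the two have traveled.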

    Yet this quantum entanglement is extremely fragile and is destroyed by interactions with the surrounding environment. Trying to keep particles entangled at macroscopic-distance scales is a significant challenge for experimentalists. The truth is that humans are not interconnected by entanglement, at least not in the sense related by Mr. Wiman. I find it remarkable and inspiring that some of the discipline’s esoteric ideas have percolated into public consciousness, but we should be wary of applying them to matters that are better left to philosophers and theologians.

    With millions of moviegoers seeing Interstellar and The Theory of Everything, the temptation is stronger than ever to misapply modern ideas of physics in viewing the world. But we don’t need science to illuminate how we are interconnected—it is our humanity and our shared experiences, our joys and sorrows, not quantum mechanics and relativity, that bind us.

    Maybe that’s why the concept of time fascinates us so. After all, time is the fabric into which our lives are woven and in some sense defines the human condition. If we could only understand time better, maybe by finding the one “final equation”—as sought in “The Theory of Everything” by Stephen Hawking (Eddie Redmayne) and in “Interstellar” by Jessica Chastain’s astrophysicist character—we would find some underlying secret that would shed light on the nature of our existence.

    Personally, I can say that my research on black holes hasn’t helped me get any closer to the most effective answer to my children’s most profound question about time and space: “Are we there yet?”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 6:57 pm on December 8, 2014
    Tags: Physics

    From BNL: “Unusual Electronic State Found in New Class of Unconventional Superconductors” 

    Brookhaven Lab

    December 8, 2014
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Finding gives scientists a new group of materials to explore to unlock secrets of some materials’ ability to carry current with no energy loss

    A team of scientists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, Columbia Engineering, Columbia Physics and Kyoto University has discovered an unusual form of electronic order in a new family of unconventional superconductors. The finding, described in the journal Nature Communications, establishes an unexpected connection between this new group of titanium-oxypnictide superconductors and the more familiar cuprates and iron-pnictides, providing scientists with a whole new family of materials from which they can gain deeper insights into the mysteries of high-temperature superconductivity.

    Team members conducting research at Brookhaven Lab, led by Simon Billinge of Brookhaven and Columbia Engineering (seated), included (l to r) Columbia U graduate student Ben Frandsen and Weiguo Yin, Yimei Zhu, and Emil Bozin of Brookhaven’s Condensed Matter Physics and Materials Science Department. They used the aberration-corrected electron microscope in Zhu’s lab to conduct electron diffraction experiments that were a key component of this study. Collaborators not shown: Hefei Hu, formerly of Brookhaven Lab and now at Intel, Yasumasa Nozaki and Hiroshi Kageyama of Kyoto University, and Yasutomo Uemura of Columbia.

    “Finding this new material is a bit like an archeologist finding a new Egyptian pharaoh’s tomb,” said Simon Billinge, a physicist at Brookhaven Lab and Columbia University’s School of Engineering and Applied Science, who led the research team. “As we try and solve the mysteries behind unconventional superconductivity, we need to discover different but related systems to give us a more complete picture of what is going on—just as a new tomb will turn up treasures not found before, giving a more complete picture of ancient Egyptian society.”

    Harnessing the power of superconductivity, or the ability of certain materials to conduct electricity with zero energy loss, is one of the most exciting possibilities for creating a more energy-efficient future. But because most superconductors only work at very low temperatures—just a few degrees above absolute zero, or -273 degrees Celsius—they are not yet useful for everyday life. The discovery in the 1980s of “high-temperature” superconductors that work at warmer temperatures (though still not room temperature) was a giant step forward, offering scientists the hope that a complete understanding of what enables these materials to carry loss-free current would help them design new materials for everyday applications. Each new discovery of a common theme among these materials is helping scientists unlock pieces of the puzzle.

    One of the greatest mysteries is understanding how the electrons in high-temperature superconductors interact, sometimes trying to avoid each other and at other times pairing up—the crucial characteristic enabling them to carry current with no resistance. Scientists studying these materials at Brookhaven and elsewhere have discovered special types of electronic states, such as “charge density waves,” where charges huddle to form stripes, and checkerboard patterns of charge. Both of these break the “translational symmetry” of the material—the repetition of sameness as you move across the surface (e.g., moving across a checkerboard you move from white squares to black squares).

    Another pattern scientists have observed in the two most famous classes of high-temperature superconductors is broken rotational symmetry without a change in translational symmetry. In this case, called nematic order, every space on the checkerboard is white, but the shapes of the spaces are distorted from a square to a rectangle; as you turn round and round on one space, your neighboring space is nearer or farther depending on the direction you are facing. Having observed this unexpected state in the cuprates and iron-pnictides, scientists were eager to see whether this unusual electronic order would also be observed in a new class of titanium-oxypnictide high-temperature superconductors discovered in 2013.

    “These titanium-oxypnictide compounds are structurally similar to the other exotic superconductor systems, and they had all the telltale signs of a broken symmetry, such as anomalies in resistivity and thermodynamic measurements. But there was no sign of any kind of charge density wave in any previous measurement. It was a mystery,” said Emil Bozin, whose group at Brookhaven specializes in searching for hidden local broken symmetries. “It was a natural for us to jump on this problem.”

    Top: Ripples extending down the chain of atoms break translational symmetry (like a checkerboard with black and white squares), which would cause extra spots in the diffraction pattern (shown as red dots in the underlying diffraction pattern). Bottom: Stretching along one direction breaks rotational symmetry but not translational symmetry (like a checkerboard with identical squares but stretched in one of the directions), causing no additional diffraction spots. The experiments proved these new superconductors have the second type of electron density distribution, called a nematic. Image credit: Ben Frandsen

    The team searched for the broken rotational symmetry effect, a research question that had been raised by Tomo Uemura of Columbia, using samples provided by his collaborators in the group of Hiroshi Kageyama at Kyoto University. They conducted two kinds of diffraction studies: neutron scattering experiments at the Los Alamos Neutron Science Center (LANSCE) at DOE’s Los Alamos National Laboratory, and electron diffraction experiments using a transmission electron microscope at Brookhaven Lab.

    “We used these techniques to observe the pattern formed by beams of particles shot through powder samples of the superconductors under a range of temperatures and other conditions to see if there’s a structural change that corresponds to the formation of this special type of nematic state,” said Ben Frandsen, a graduate student in physics at Columbia and first author on the paper.

    The experiments revealed a telltale symmetry breaking distortion at low temperature. A collaborative effort among experimentalists and theorists established the particular nematic nature of the order.

    “Critical in this study was the fact that we could rapidly bring to bear multiple complementary experimental methods, together with crucial theoretical insights—something made easy by having most of the expertise in residence at Brookhaven Lab and wonderfully strong collaborations with colleagues at Columbia and beyond,” Billinge said.

    The discovery of nematicity in titanium-oxypnictides, together with the fact that their structural and chemical properties bridge those of the cuprate and iron-pnictide high-temperature superconductors, render these materials an important new system to help understand the role of electronic symmetry breaking in superconductivity.

    As Billinge noted, “This new pharaoh’s tomb indeed contained a treasure: nematicity.”

    This work was supported by the DOE Office of Science, the U.S. National Science Foundation (NSF, OISE-0968226), the Japan Society for the Promotion of Science, the Japan Atomic Energy Agency, and the Friends of Todai Inc.

    See the full article here.

    BNL Campus

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 6:28 pm on December 8, 2014
    Tags: Entropy, Physics

    From Quanta: “A New Physics Theory of Life” 

    Quanta Magazine
    Quanta Magazine

    January 22, 2014
    Natalie Wolchover

    Why does life exist?

    Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

    From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

    Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.

    Cells from the moss Plagiomnium affine with visible chloroplasts, organelles that conduct photosynthesis by capturing sunlight. Kristian Peters

    “You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

    England’s theory is meant to underlie, rather than replace, [Charles]Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

    His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

    England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

    “Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

    Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

    England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

    “He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

    A computer simulation by Jeremy England and colleagues shows a system of particles confined inside a viscous fluid in which the turquoise particles are driven by an oscillating force. Over time (from top to bottom), the force triggers the formation of more bonds among the particles. Courtesy of Jeremy England

    At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.

    Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

    Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry [corrected below]). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

    This situation changed in the late 1990s, due primarily to the work of Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.
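    Written out (my addition; this is the standard form of the detailed fluctuation theorem rather than a formula quoted in the article), the ratio reads

        \frac{P_{\mathrm{forward}}}{P_{\mathrm{reverse}}} = e^{\Delta S_{\mathrm{tot}} / k_B}

    where \Delta S_{\mathrm{tot}} is the total entropy produced by the process. The more entropy a process generates, the more exponentially unlikely its time-reverse becomes, which is the precise sense in which a system’s behavior grows “more and more irreversible.”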

    Using Jarzynski and Crooks’ formulation, he derived a generalization of the second law of thermodynamics that holds for systems of particles with certain characteristics: The systems are strongly driven by an external energy source such as an electromagnetic wave, and they can dump heat into a surrounding bath. This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. The finding makes intuitive sense: Particles tend to dissipate more energy when they resonate with a driving force, or move in the direction it is pushing them, and they are more likely to move in that direction than any other at any given moment.

    “This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.

    Self-Replicating Sphere Clusters: According to new research at Harvard, coating the surfaces of microspheres can cause them to spontaneously assemble into a chosen structure, such as a polytetrahedron (red), which then triggers nearby spheres into forming an identical structure. Courtesy of Michael Brenner/Proceedings of the National Academy of Sciences

    Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating. He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising.

    The chemistry of the primordial soup, random mutations, geography, catastrophic events and countless other factors have contributed to the fine details of Earth’s diverse flora and fauna. But according to England’s theory, the underlying principle driving the whole process is dissipation-driven adaptation of matter.

    This principle would apply to inanimate matter as well. “It is very tempting to speculate about what phenomena in nature we can now fit under this big tent of dissipation-driven adaptive organization,” England said. “Many examples could just be right under our nose, but because we haven’t been looking for them we haven’t noticed them.”

    Scientists have already observed self-replication in nonliving systems. According to new research led by Philip Marcus of the University of California, Berkeley, and reported in Physical Review Letters in August, vortices in turbulent fluids spontaneously replicate themselves by drawing energy from shear in the surrounding fluid. And in a paper appearing online this week in Proceedings of the National Academy of Sciences, Michael Brenner, a professor of applied mathematics and physics at Harvard, and his collaborators present theoretical models and simulations of microstructures that self-replicate. These clusters of specially coated microspheres dissipate energy by roping nearby spheres into forming identical clusters. “This connects very much to what Jeremy is saying,” Brenner said.

    Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said. Condensation, wind and viscous drag are the relevant processes in these particular cases.

    “He is making me think that the distinction between living and nonliving matter is not sharp,” said Carl Franck, a biological physicist at Cornell University, in an email. “I’m particularly impressed by this notion when one considers systems as small as chemical circuits involving a few biomolecules.”

    If a new theory is correct, the same physics it identifies as responsible for the origin of living things could explain the formation of many other patterned structures in nature. Snowflakes, sand dunes and self-replicating vortices in the protoplanetary disk may all be examples of dissipation-driven adaptation. Wilson Bentley

    England’s bold idea will likely face close scrutiny in the coming years. He is currently running computer simulations to test his theory that systems of particles adapt their structures to become better at dissipating energy. The next step will be to run experiments on living systems.

    Prentiss, who runs an experimental biophysics lab at Harvard, says England’s theory could be tested by comparing cells with different mutations and looking for a correlation between the amount of energy the cells dissipate and their replication rates. “One has to be careful because any mutation might do many things,” she said. “But if one kept doing many of these experiments on different systems and if [dissipation and replication success] are indeed correlated, that would suggest this is the correct organizing principle.”

    Brenner said he hopes to connect England’s theory to his own microsphere constructions and determine whether the theory correctly predicts which self-replication and self-assembly processes can occur — “a fundamental question in science,” he said.

    Having an overarching principle of life and evolution would give researchers a broader perspective on the emergence of structure and function in living things, many of the researchers said. “Natural selection doesn’t explain certain characteristics,” said Ard Louis, a biophysicist at Oxford University, in an email. These characteristics include a heritable change to gene expression called methylation, increases in complexity in the absence of natural selection, and certain molecular changes Louis has recently studied.

    If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve,” Louis said.

    “People often get stuck in thinking about individual problems,” Prentiss said. Whether or not England’s ideas turn out to be exactly right, she said, “thinking more broadly is where many scientific breakthroughs are made.”

    Correction: This article was revised on January 22, 2014, to reflect that Ilya Prigogine won the Nobel Prize in chemistry, not physics.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 6:17 pm on December 7, 2014
    Tags: Physics

    From Harvard: “The ever-smaller future of physics” 

    Harvard University

    Harvard University

    December 5, 2014
    Alvin Powell

    If physicists want to find their long-sought “theory of everything,” they have to get small. And Nobel Prize-winning theoretical physicist Steven Weinberg thinks he knows roughly how small.

    sw
    Nobel winner Steven Weinberg brought his thoughts on a “theory of everything” to the Physics Department’s Lee Historical Lecture. Jon Chase/Harvard Staff Photographer

    Weinberg, who spoke at a packed Geological Lecture Hall Monday evening, said there are hints that the answers to fundamental questions will reveal themselves at around a million billionths — between 10^-17 and 10^-19 — of the radius of the typical atomic nucleus.

    “It is in that range that we expect to find really new physics,” said Weinberg, a onetime Harvard professor now on the faculty at the University of Texas at Austin.

    Physicists understand that there are four fundamental forces of nature. Two are familiar in our everyday lives: those of gravity and electromagnetism. The two less-familiar forces operate at the atomic level. The strong force holds the nucleus together while the weak force is responsible for the radioactive decay that changes one type of particle to another and the nuclear fusion that powers the sun.

    For decades, physicists have toiled to create a single theory that explains how all four of these forces work, but without success, instead settling on one theory that explains how gravity acts on a macro scale and another to describe the other three forces and their interactions at the atomic level.

    Weinberg, who won the 1979 Nobel Prize in Physics, with Sheldon Glashow and Abdus Salam, for electroweak theory explaining how the weak force and electromagnetism are related, returned to Harvard to deliver the Physics Department’s annual David M. Lee Historical Lecture. He was introduced by department chair Masahiro Morii and by Andrew Strominger, the Gwill E. York Professor of Physics, who recalled taking Weinberg’s class on general relativity as a Harvard undergrad.

    “I wish I could say I remembered you in Physics 210,” Weinberg said to laughs as he took the podium.

    The event also recognized the outstanding work of four graduate students — two in experimental physics, Dennis Huang and Siyuan Sun, and two in theoretical physics, Shu-Heng Shao and Bo Liu — with the Gertrude and Maurice Goldhaber Prize.

    Weinberg pointed to several hints of something significant going on at the far extremes of tininess. One hint is that the strong force, which weakens at shorter scales, and the weak and electromagnetic forces, which get stronger across shorter distances, appear to converge at that scale.

    Gravity is so weak that it isn’t felt at the atomic scale, overpowered by the other forces that operate there. However, Weinberg said, if you calculate how much mass two protons or two electrons would need for gravity to balance their repulsive electrical force, it would have to be not just enormous, but on a scale similar to the other measurements: the equivalent of 1.04 x 10^18 gigaelectron volts.
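    As a back-of-the-envelope check of that figure (a sketch, not Weinberg's own calculation), setting the gravitational attraction between two identical charged particles equal to their Coulomb repulsion, G m^2 / r^2 = k e^2 / r^2, gives m = e * sqrt(k / G); the distance cancels out. The short snippet below evaluates it with textbook constants.

```python
import math

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
K = 8.988e9                    # Coulomb constant, N m^2 C^-2
E = 1.602e-19                  # elementary charge, C
GEV_PER_KG = 1.0 / 1.783e-27   # 1 GeV/c^2 is about 1.783e-27 kg

m_kg = E * math.sqrt(K / G)    # mass at which gravity balances electrical repulsion
print(f"balancing mass ~ {m_kg:.2e} kg ~ {m_kg * GEV_PER_KG:.2e} GeV/c^2")
# Prints roughly 1.9e-9 kg, i.e. about 1.0e18 GeV, in line with the value quoted above.
```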

    “There is a strong suggestion that gravity is somehow unified with those other forces at these scales,” Weinberg said.

    Weinberg also said there are experimental hints in the extremely small masses of neutrinos and in possible proton decay that the tiniest scales are significant in ways that are fundamental to physics.

    “This is a very crude estimate, but the mass of neutrinos which are being observed are in the same ballpark that you would expect from new physics associated with a fundamental length,” Weinberg said. “It all seems to hang together.”

    A major challenge for physicists is that the energy needed to probe what is actually going on at the smallest levels is far beyond current technology, something like 10 trillion times the highest energy we can harness now. And new technology to explore the problem experimentally is not on the horizon. Even with all the wealth in the world, scientists wouldn’t know where to begin, Weinberg said.

    But the experiment may have already been done, by nature, and there may be a way to look back at it, Weinberg said. During the inflationary period immediately after the Big Bang there was that kind of energy, he said, and it would be evident as gravitational waves in the cosmic microwave background, an echo of the Big Bang that astronomers study for hints of the early universe. In fact, astronomers announced they had found such waves earlier this year, though they are waiting for confirmation of the results.

    Gravitational Wave Background
    gravitational waves

    Cosmic Background Radiation Planck
    CMB per ESA/Planck

    ESA Planck
    ESA/Planck

    “The big question that we face … is, can we find a truly fundamental theory uniting all the forces, including gravitation … characterized by tiny lengths like 10^-17 to 10^-19 nuclear radii?” Weinberg said. “Is it a string theory? That seems like the most beautiful candidate, but we don’t have any direct evidence that it is a string theory. The only handle we have … on this to do further experiments is in cosmology.”

    See the full article here.

    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:13 pm on December 6, 2014 Permalink | Reply
    Tags: , , Physics,   

    From Stanford: “Stanford engineers take big step toward using light instead of wires inside computers” 

    Stanford University Name
    Stanford University

    December 2, 2014
    Chris Cesare

    Stanford engineers have designed and built a prism-like device that can split a beam of light into different colors and bend the light at right angles, a development that could eventually lead to computers that use optics, rather than electricity, to carry data.

    They describe what they call an “optical link” in an article in Scientific Reports.

    The optical link is a tiny slice of silicon etched with a pattern that resembles a bar code. When a beam of light is shined at the link, two different wavelengths (colors) of light split off at right angles to the input, forming a T shape. This is a big step toward creating a complete system for connecting computer components with light rather than wires.

    b
    This tiny slice of silicon, etched in Jelena Vuckovic’s lab at Stanford with a pattern that resembles a bar code, is one step on the way toward linking computer components with light instead of wires.

    “Light can carry more data than a wire, and it takes less energy to transmit photons than electrons,” said electrical engineering Professor Jelena Vuckovic, who led the research.

    In previous work her team developed an algorithm that did two things: It automated the process of designing optical structures and it enabled them to create previously unimaginable, nanoscale structures to control light.

    Now, she and lead author Alexander Piggott, a doctoral candidate in electrical engineering, have employed that algorithm to design, build and test a link compatible with current fiber optic networks.

    Creating a silicon prism

    The Stanford structure was made by etching a tiny bar code pattern into silicon that split waves of light like a small-scale prism. The team engineered the effect using a subtle understanding of how the speed of light changes as it moves through different materials.

    What we call the speed of light is how fast light travels in a vacuum. Light travels a bit more slowly in air and even more slowly in water. This speed difference is why a straw in a glass of water looks bent where it enters the water.

    A property of materials called the index of refraction characterizes the difference in speed. The higher the index, the more slowly light will travel in that material. Air has an index of refraction of nearly 1 and water of 1.3. Infrared light travels through silicon even more slowly: it has an index of refraction of 3.5.
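    As a tiny numerical companion to that definition (standard physics, not taken from the paper), the speed of light in a medium is simply v = c / n:

```python
C = 299_792_458.0              # speed of light in vacuum, m/s

for material, n in (("air", 1.0003), ("water", 1.33), ("silicon (infrared)", 3.5)):
    print(f"{material:20s} n = {n:<7} v = c/n ~ {C / n:.3e} m/s")
```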

    The Stanford algorithm designed a structure that alternated strips of silicon and gaps of air in a specific way. The device takes advantage of the fact that as light passes from one medium to the next, some light is reflected and some is transmitted. When light traveled through the silicon bar code, the reflected light interfered with the transmitted light in complicated ways.

    The algorithm designed the bar code to use this subtle interference to direct one wavelength to go left and a different wavelength to go right, all within a tiny silicon chip eight microns long.

    Both 1300-nanometer light and 1550-nanometer light, corresponding to the O-band and C-band wavelengths widely used in fiber optic networks, were beamed at the device from above. The bar code-like structure redirected C-band light one way and O-band light the other, right on the chip.
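    The sketch below is only a rough 1-D analogue of that behavior, not the Stanford design or algorithm: a standard transfer-matrix calculation for a stack of alternating silicon and air layers (with made-up thicknesses) showing how interference makes the transmitted power depend on wavelength, evaluated here at the O-band and C-band wavelengths mentioned above.

```python
import numpy as np

def transmission(wavelength_nm, layers):
    """Normal-incidence power transmission through a stack of (index, thickness_nm) layers
    surrounded by air, computed with the standard optical transfer-matrix method."""
    n0 = ns = 1.0                                  # air on both sides
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        phase = 2 * np.pi * n * d / wavelength_nm  # optical phase accumulated in the layer
        M = M @ np.array([[np.cos(phase), 1j * np.sin(phase) / n],
                          [1j * n * np.sin(phase), np.cos(phase)]])
    t = 2 * n0 / (n0 * M[0, 0] + n0 * ns * M[0, 1] + M[1, 0] + ns * M[1, 1])
    return (ns / n0) * abs(t) ** 2

stack = [(3.5, 180.0), (1.0, 220.0)] * 8           # alternating silicon/air strips (hypothetical)
for wl in (1300.0, 1550.0):                        # O-band and C-band test wavelengths, in nm
    print(f"{wl:.0f} nm: transmission ~ {transmission(wl, stack):.3f}")
```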

    Convex optimization

    The researchers designed these bar code patterns already knowing their desired function. Since they wanted C-band and O-band light routed in opposite directions, they let the algorithm design a structure to achieve it.

    “We wanted to be able to let the software design the structure of a particular size given only the desired inputs and outputs for the device,” Vuckovic said.

    To design their device they adapted concepts from convex optimization, a mathematical approach to solving complex problems such as stock market trading. With help from Stanford electrical engineering Professor Stephen Boyd, an expert in convex optimization, they discovered how to automatically create novel shapes at the nanoscale to cause light to behave in specific ways.

    “For many years, nanophotonics researchers made structures using simple geometries and regular shapes,” Vuckovic said. “The structures you see produced by this algorithm are nothing like what anyone has done before.”

    The algorithm began its work with a simple design of just silicon. Then, through hundreds of tiny adjustments, it found better and better bar code structures for producing the desired output light.

    Previous designs of nanophotonic structures were based on regular geometric patterns and the designer’s intuition; the Stanford algorithm, by contrast, can design this structure in just 15 minutes on a laptop computer.

    They have also used this algorithm to design a wide variety of other devices, like the super-compact “Swiss cheese” structures that route light beams to different outputs not based on their color, but based on their mode, i.e., based on how they look. For example, a light beam with a single lobe in the cross-section goes to one output, and a double lobed beam (looking like two rivers flowing side by side) goes to the other output. Such a mode router is just as important as the bar code color splitter, as different modes are also used in optical communications to transmit information.

    The algorithm is the key. It gives researchers a tool to create optical components to perform specific functions, and in many cases such components didn’t even exist before. “There’s no way to analytically design these kinds of devices,” Piggott said.

    Media Contact

    Tom Abate, School of Engineering: (650) 736-2245, tabate@stanford.edu

    Dan Stober, Stanford News Service: (650) 721-6965, dstober@stanford.edu

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 6:58 pm on December 6, 2014 Permalink | Reply
    Tags: , Magnetometer, , Physics   

    From MIT Tech Review: “Diamond Magnetometer Breaks Sensitivity Records” 

    MIT Technology Review
    M.I.T Technology Review

    December 5, 2014
    No Writer Credit

    Diamonds are a physicist’s best friend–when it comes to measuring the tiniest magnetic fields.

    Back in 1896, a young physicist called Pieter Zeeman was fired for carrying out an experiment against the specific wishes of his laboratory supervisor. Despite the consequences, the experiment led to a remarkable discovery that changed Zeeman’s life.

    The experiment involved measuring the light emitted by elements placed in a powerful magnetic field. When he did this, Zeeman discovered that the spectral lines were split by the field. In 1902, he was awarded the Nobel Prize in physics for this discovery which is now known as the Zeeman effect.

    It is particularly useful for measuring magnetic fields at a distance. For example, astrophysicists use it to map variations in the magnetic field on the sun. But it can also be used to measure fields on a much smaller scale. In theory, the effect could be used to observe the influence of a magnetic field on a single atom.

    While they have not got quite this far, Thomas Wolf at the University of Stuttgart in Germany and a few pals have come pretty close. These guys have used the spectra from nitrogen atoms embedded in diamond to build perhaps the most sensitive magnetometer ever made. They say their new device could soon be capable of measuring the magnetic field associated with protons.

    First, some background about magnetometers. In recent years, physicists have made increasingly sensitive magnetometers using a variety of different techniques. One problem they all come up against is that magnetic fields decay very quickly with distance, as 1/r^3.

    That means the size of the sensor has an important impact on what it can detect, since magnetic field can change significantly throughout the volume of the sensor. So an important task is to make magnetometers as small as possible.
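    To put a number on that falloff (a small sketch using textbook constants, not figures from the paper), consider the dipole field of a single proton at a few stand-off distances:

```python
MU0_OVER_4PI = 1e-7            # magnetic constant / 4*pi, in T*m/A
MU_PROTON = 1.41e-26           # proton magnetic moment, J/T

for r in (10e-9, 100e-9, 1e-6):                    # 10 nm, 100 nm and 1 micron (illustrative)
    b_on_axis = MU0_OVER_4PI * 2 * MU_PROTON / r**3
    print(f"r = {r:.0e} m  ->  on-axis dipole field ~ {b_on_axis:.2e} T")
# The field drops by a factor of a thousand for every factor of ten in distance,
# reaching the femtotesla range by about a micron -- hence the push for small sensors.
```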

    That’s where diamond comes in. Diamond is a three-dimensional crystal made of carbon. However, when a carbon atom in the structure is replaced with nitrogen, this produces an additional unbound electron.

    When this electron is excited with laser light, it then fluoresces at a frequency that depends on its environment. A magnetic field in particular can change this frequency, via the Zeeman effect, making nitrogen defects in diamond a promising type of magnetometer.
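    For a sense of scale (an order-of-magnitude sketch using the textbook electron-spin value of roughly 28 GHz per tesla, not a figure from the paper), the Zeeman shift for fields in the ranges discussed in this article looks like this:

```python
GAMMA_E_HZ_PER_T = 28.0e9      # electron-spin Zeeman shift, Hz per tesla (g ~ 2)

# Earth's field, one nanotesla, and the ~100 femtotesla sensitivity reported below
for b_tesla in (50e-6, 1e-9, 100e-15):
    print(f"B = {b_tesla:.0e} T  ->  Zeeman shift ~ {GAMMA_E_HZ_PER_T * b_tesla:.3g} Hz")
# At 100 femtotesla the shift is only a few millihertz, which is why billions of
# defects and aggressive noise rejection are needed to pull the signal out.
```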

    Of course, addressing a single atom in such a structure and recording its fluorescence accurately is a tricky business. So Wolf and co use an entire ensemble of nitrogen defects in a volume of diamond occupying just a fraction of a cubic millimetre. They estimate that this contains several billion nitrogen atoms.

    Although a centre of this size is many orders of magnitude larger than an individual atom, it produces a fluorescent signal that is much easier to measure. That makes the device practical. Even at this size, the magnetometer is one of the smallest ever made.

    To find out how sensitive it is, Wolf and co put the device through its paces, carefully eliminating noise at every step. The results are impressive. The team eventually measured a field strength of only 100 femtotesla. That’s comparable with the most sensitive magnetometers on the planet. And they think they can do even better, with relatively straightforward improvements that should increase the sensitivity by two orders of magnitude.

    But here’s the thing: what’s unique about this device is that it is both small and sensitive, a combination that has never been achieved before. That makes this device a kind of record breaker. It can measure magnetic field strengths in tiny volumes that have never been accessible before. In other words, it opens up magnetic field strength detection on an entirely new scale using a solid state device that works at room temperature.

    One goal in this area is to measure the magnetic fields of protons in water. The sensitivity of this device looks sufficient to make this possible. “This value itself allows for detection of proton spins in a microscopically resolvable volume in less than one second,” say Wolf and co.

    Magnetometers are used in a wide range of applications, ranging from mineral exploration and archaeology to weapon systems positioning and heartbeat monitors. So a robust, highly sensitive solid-state device that works at room temperature is likely to come in handy. Zeeman would have been impressed.

    Ref: arxiv.org/abs/1411.6553 A Subpicotesla Diamond Magnetometer

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 2:39 pm on December 5, 2014 Permalink | Reply
    Tags: , , , Physics   

    From FNAL: “Frontier Science Result: CMS Precisely measuring nothing”


    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Friday, Dec. 5, 2014

    FNAL Don Lincoln
    Don Lincoln

    The CMS detector is a technical tour de force. It can simultaneously measure the passage of electrons, pions, muons, photons and all manner of particles, both short-lived and long-lived.

    CERN CMS New
    CMS at The LHC at CERN

    However, there are some particles that simply don’t interact very much with matter. These include neutrinos and some hypothetical long-lived and weakly interacting particles that may appear in collisions that probe supersymmetry, extra dimensions of space and dark matter. The CMS detector simply doesn’t see those kinds of particles.

    m
    Collisions like these indicate the existence of invisible particles. The blob of color in the upper left hand corner shows where particles were knocked out of the collision to deposit energy in the detector. The fact that we see no balancing energy in the lower right hand corner means that an invisible particle has escaped the detector. As the number of simultaneous collisions in the LHC increases, it will become increasingly difficult to study this kind of physics.

    Supersymmetry standard model
    Standard Model of Supersymmetry

    That sounds like a terrible oversight, but the reality is more comforting. We can use physical principles of the kind taught in high school physics classes to identify collisions in which these particles are made. Essentially, we see them by not seeing them.

    In the first semester of physics, we learn about a quantity called momentum and how it is conserved, which means it doesn’t change. In the classical world, momentum is determined by multiplying an object’s mass and velocity. In the world of relativity and particles, the definition is a bit different, but the basic idea is the same and the principle that momentum is conserved still applies.

    Prior to a collision, particles travel exclusively along the beam direction. This means that before the beam particles collide, there is no momentum perpendicular to the beams, or what scientists call transverse momentum. According to the laws of momentum conservation, there should be zero net transverse momentum after the collision as well. If we sum the transverse momentum of all particles coming out of the collision, that’s what we find.

    However, when there are undetectable particles, the measured transverse momentum is unbalanced. Scientists call the unobserved transverse momentum missing transverse energy, or MET. MET is a clear signature of the existence of one or more invisible particles. Accordingly, it is important to measure carefully the transverse momentum of all observable particles.
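    A minimal sketch of that bookkeeping (not CMS's reconstruction code, and with an invented particle list) looks like this: add up the transverse momentum vectors of everything visible, and whatever is needed to balance the sum is attributed to the invisible particles.

```python
import math

# (pT in GeV, azimuthal angle phi in radians) for the visible particles in one event -- illustration only
visible = [(120.0, 0.40), (35.0, 0.55), (18.0, 2.90)]

px = sum(pt * math.cos(phi) for pt, phi in visible)
py = sum(pt * math.sin(phi) for pt, phi in visible)

met = math.hypot(px, py)                 # magnitude of the unbalanced transverse momentum
met_phi = math.atan2(-py, -px)           # direction the invisible particle(s) must have taken
print(f"MET ~ {met:.1f} GeV at phi ~ {met_phi:.2f} rad")
# Extra low-energy collisions ("pileup") add spurious entries to the visible list,
# which is exactly what the mitigation algorithms described below must correct for.
```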

    Particle experiments have been employing this technique for decades, but few experiments have operated in the challenging collision environment that exists at the LHC. Any time the beams pass through one another, typically dozens of collisions between beam particles occur. Most often, at most one of those collisions involves some “interesting” process, while the others are usually much lower in energy. However, those low-energy collisions still spray particles throughout the detector. These extra particles confuse the measurement of MET and make it tricky to know the exact momentum of the invisible particles.

    CMS scientists have worked long and hard to figure out how to mitigate these effects and recently submitted for publication a paper describing their algorithms. With the impending resumption of operations of the LHC in the spring of 2015 (which could involve as many as 200 simultaneous collisions), researchers will continually revise and improve their techniques.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.

     
  • richardmitnick 8:22 pm on December 3, 2014 Permalink | Reply
    Tags: , Physics, ,   

    From SLAC: “SLAC, RadiaBeam Build New Tool to Tweak Rainbows of X-ray Laser Light” 


    SLAC Lab

    December 3, 2014

    ‘Dechirper’ Will Give Scientists More Control Over ‘Color Spectrum’ of LCLS X-ray Pulses

    The Department of Energy’s SLAC National Accelerator Laboratory has teamed up with Santa Monica-based RadiaBeam Systems to develop a device known as a dechirper, which will provide a new way of adjusting the range of energies within single pulses from SLAC’s X-ray laser.

    d
    Design drawing of the outer structure of a dechirper that will be used to tweak the “color range” of light pulses from SLAC’s X-ray laser, LCLS. Two dechirpers will be lined up in front of the LCLS undulator—a magnetic structure that generates ultrabright, ultrafast X-rays from bunches of electrons. (RadiaBeam Systems)

    i
    The inside of the dechirper consists of two parallel, 2-meter-long, flat aluminum rails. Electron bunches will travel through a variable gap between the rails at nearly the speed of light. (RadiaBeam Systems)

    r
    The aluminum rails have comb-like grooves that are half a millimeter deep and a quarter millimeter wide. The electron bunches “sense” the grooves, leading to a change in the energy spread of the X-rays they produce. (RadiaBeam Systems)

    “For many experiments it is important to use a specific X-ray energy so that we can study specific chemical elements in our samples,” says LCLS scientist William Schlotter. “The narrower the energy bandwidth, the more precisely we can study those elements.”

    Tweaking the ‘Color Spectrum’ of X-ray Pulses

    LCLS generates ultrabright and ultrashort X-ray pulses from packets of electrons that travel through a magnetic structure, called an undulator, at almost the speed of light. The properties of the electron bunches determine the characteristics of the X-ray light that they produce.

    un
    Working of the undulator. 1: magnets, 2: electron beam entering from the upper left, 3: synchrotron radiation exiting to the lower right

    Many experiments demand X-ray pulses that last only a few quadrillionths of a second, but it is difficult to make electron bunches this short. Therefore, scientists have turned to nature and adopted a solution reminiscent of a bird’s chirp. They create a spread of energies in the electron bunch, with the tail having more energy than the head. When electron bunches pass through another magnetic device known as a chicane, this so-called “energy chirp” allows lagging electrons in the tail to catch up with the ones in the head, creating shorter electron bunches, and thus shorter X-ray pulses.
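    A minimal numerical sketch of that compression (not the LCLS machine model; sign conventions and all numbers are invented) treats the chicane as a map z -> z + R56 * delta, where delta is a particle's fractional energy deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(0.0, 50e-6, 10_000)                   # longitudinal positions, ~50 micron rms (invented)
chirp = -8.0                                         # fractional energy deviation per metre of z (invented)
delta = chirp * z + rng.normal(0.0, 1e-4, z.size)    # linear chirp plus a small uncorrelated energy spread

R56 = -1.0 / chirp                                   # chicane dispersion chosen to cancel the chirp
z_after = z + R56 * delta                            # longitudinal position after the chicane

print(f"rms bunch length: {z.std()*1e6:.1f} um before, {z_after.std()*1e6:.1f} um after")
# The residual length is set by the uncorrelated energy spread; a cleaner chirp
# (or a weaker spread) allows a shorter bunch and hence a shorter X-ray pulse.
```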

    However, since the chirp consists of a spectrum of energies, the X-rays also have multiple energies—a rainbow of X-ray “colors” known as the energy bandwidth. Depending on the type of experiment, this can be an advantage or disadvantage, and researchers would like to have new tools to adjust the energy bandwidth to match their needs.

    As the name suggests, the dechirper’s primary task will be to minimize the chirp, i.e. to make pulses with a smaller spread in X-ray energies. Additionally, the dechirper can do the opposite and make X-ray pulses with a broader energy spectrum. In fact, many users have had a desire for a wider bandwidth since LCLS started operations in 2009, as LCLS scientist Sébastien Boutet points out.

    Precision Tool to Manipulate Electron Bunches

    SLAC scientists first proposed the idea for a dechirper in 2012 and, together with researchers from Lawrence Berkeley National Laboratory, demonstrated its feasibility in a test experiment at the Pohang Accelerator Laboratory in South Korea.

    The LCLS device, whose final design review will take place on Dec. 4, will consist of two flat, parallel aluminum rails, each 2 meters long, with comb-like grooves that are half a millimeter deep and a quarter millimeter wide. Two of these devices will be lined up in front of the undulator, with the electron beam traveling through the gap between the rails.

    Even though the electron bunches will not touch the rails, they will “sense” the grooves. These “bumps” along the electrons’ flight path will create a wake at the tail of the bunch, similar to the wake behind a boat gliding over water. “In this process, the tail loses energy while the front stays the same,” explains accelerator physicist Richard Iverson, the project lead at SLAC, where the technical requirements for the dechirper were specified.

    Varying the gap between the rails changes the effect on the electrons, allowing scientists to adjust the chirp of the electron bunches and, consequently, the energy bandwidth of the X-ray pulses generated in the undulator.

    What may sound like a relatively simple setup poses significant challenges for the manufacturing process. “The dechirper’s grooves are only as wide as three or four human hairs,” says project manager Marcos Ruelas at RadiaBeam, where the device is being designed and constructed. “Moreover, the rails must be very flat and smooth. Over the entire length of 4 meters, their height can only differ by 50 micrometers.” To meet these requirements, each 2-meter rail will be manufactured in four smaller blocks.

    The new device is expected to be installed at SLAC in August 2015. It will not only start providing LCLS users with more flexibility for their experiments, but will also become the test bed for dechirpers at SLAC’s next-generation LCLS-II facility and other X-ray lasers worldwide.

    Other key personnel of the project include Karl Bane, Paul Emma, Timothy Maxwell, Zhirong Huang, Gennady Stupakov and Zhen Zhang from SLAC’s Accelerator Directorate, as well as RadiaBeam’s Pedro Frigola, Mark Harrison and David Martin.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 5:25 pm on December 3, 2014 Permalink | Reply
    Tags: , , , Physics   

    From NIF at LLNL: “Measuring NIF’s enormous shocks” 


    Lawrence Livermore National Laboratory

    Nov. 21, 2014

    Breanna Bishop
    bishop33@llnl.gov
    925-423-9802

    LLNL NIF Banner

    LLNL NIF

    NIF experiments generate enormous pressures—many millions of atmospheres—in a short time: just a few billionths of a second. When a pressure source of this type is applied to any material, the pressure wave in the material will quickly evolve into a shock front. One of NIF’s most versatile and frequently-used diagnostics, the Velocity Interferometer System for Any Reflector (VISAR), is used to measure these shocks, providing vital information for future experiment design and calibration.

    LLNL NIF VISAR
    Target Diagnostics Operator Andrew Wong sets up the Velocity Interferometer System for Any Reflector (VISAR) optical diagnostic system for a shock timing shot.

    Invented in the 1970s, VISAR was developed by Sandia National Laboratory scientists to study the motion of samples driven by shocks and other kinds of dynamic pressure loading. It has since become a standard measurement tool in many areas where dynamic pressure loading is applied to materials.

    “It is a big challenge to figure out how to apply these enormous pressures without immediately forming a shock wave,” said Peter Celliers, responsible scientist for VISAR. “Instead of trying to avoid forming shocks, many NIF experiments use a sequence of increasing shocks as a convenient way of monitoring the performance of the target as the pressure drive is increased—for example, during a (target) capsule implosion.”

    v
    Target Area Operator Mike Visaya aligns a VISAR transport mirror in preparation for an experiment.

    To measure these shocks, VISAR determines the speed of a moving object by measuring the Doppler shift in a reflected light beam. More specifically, it directs a temporally-coherent laser beam at the object, collects a returned reflection, and sends it through a specially-configured interferometer. The interferometer produces an interference pattern containing information about the Doppler shift.

    The Doppler shift provides information on how fast the reflecting part of the target is moving. In most cases the reflector is a shock front, which acts like a mirror moving through a transparent material (for example liquid deuterium, quartz or fused silica). In some cases the moving mirror is a physical surface on the back part of a package (called a free surface) that is accelerated during the experiment. In yet other scenarios, the moving mirror could be a reflecting interface embedded in the target behind a transparent window.

    After the light reflected from the target passes through the interferometers, it forms a fringe pattern. With the NIF VISAR design, this light is collected in the form of a two-dimensional image with an optical image relay system. The fringe pattern is superimposed on the image, then projected on the slit of a streak camera. Because the target image is spatially-resolved across the slit of the streak camera, this type of VISAR is called a line-imaging VISAR. The spatial and temporal arrangement of the fringe pattern in the output streak record reveals how different parts of the target move during the experiment.

    There is a very close connection between the velocities of the moving parts of the target and the pressure driving the motion. If the velocity is measured accurately, a highly accurate picture of the driving pressure can be formed. This information is vital for understanding the details of target performance.
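    In the textbook VISAR relation (a sketch with assumed numbers, not NIF's analysis chain), each fringe of shift corresponds to a fixed velocity increment set by the probe wavelength and the interferometer delay, ignoring window and dispersion corrections:

```python
WAVELENGTH_M = 532e-9          # probe laser wavelength (assumed value)
TAU_S = 1.0e-9                 # interferometer delay time (assumed value)

vpf = WAVELENGTH_M / (2.0 * TAU_S)                  # velocity per fringe, m/s
for fringes in (0.5, 2.0, 10.0):
    print(f"{fringes:>4} fringes  ->  velocity ~ {fringes * vpf:,.0f} m/s")
# With these assumed numbers one fringe is ~266 m/s, so the many-fringe shifts produced
# by strong shocks correspond to velocities of several kilometres per second.
```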

    “Our simulation models are not accurate enough to calculate the timing of the shocks that produces the best performance without some sort of calibration,” Celliers said. “But by monitoring the shocks with the VISAR, we have precise and detailed information that can be used to tune the laser pulse (the pressure drive) to achieve optimal target performance, and to calibrate the simulation codes.”

    Looking to the future, VISAR will see improvements to its streaked optical pyrometer (SOP), an instrument that can be used to infer the temperature of a hot object by measuring the heat radiated from the object in the form of visible light. The SOP is undergoing modifications to improve its imaging performance and to reduce background levels on the streak camera detectors. This will benefit future equation-of-state experiments where accurate thermal emission data is crucial. This upgrade will be complete in early 2015.

    men
    Physicists Dave Farley (left) and Peter Celliers and scientist Curtis Walter watch a live VISAR image as they monitor the deuterium fill of a keyhole capsule in the NIF Control Room during shock-timing experiments.

    Along with Celliers, the VISAR implementation team includes Stephen Azevedo, David Barker, Jeff Baron, Mark Bowers, Aaron Busby, Allen Casey, John Celeste, Hema Chandrasekaran, Kim Christensen, Philip Datte, Jon Eggert, Gene Frieders, Brad Golick, Robin Hibbard, Matthew Hutton, John Jackson, Dan Kalantar, Kenn Knittel, Kerry Krauter, Brandi Lechleiter, Tony Lee, Brendan Lyon, Brian MacGowan, Stacie Manuel, JoAnn Matone, Marius Millot, Jason Neumann, Ed Ng, Brian Pepmeier, Karl Pletcher, Lynn Seppala, Ray Smith, Zack Sober, Doug Speck, Bill Thompson, Gene Vergel de Dios, Abbie Warrick, Phil Watts, Eric Wen, Ziad Zeid and colleagues from National Security Technologies.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security
    Administration
    DOE Seal
    NNSA

     