Tagged: Physics

  • richardmitnick 4:06 pm on April 26, 2016 Permalink | Reply
    Tags: "Why Physics Needs Diamonds", Physics

    From Jlab via DOE: “Why Physics Needs Diamonds” 

    April 26, 2016
    Kandice Carter

    A detailed view of the diamond wafers scientists use to get a better measure of spinning electrons. | Photo courtesy of Jefferson Lab.

    Diamonds are one of the most coveted gemstones. But while some may want the perfect diamond for its sparkle, physicists covet the right diamonds to perfect their experiments. The gem is a key component in a novel system at Jefferson Lab that enables precision measurements to discover new physics in the sub-atomic realm — the domain of the particles and forces that build the nucleus of the atom.

    Explorations of this realm require unique probes with just the right characteristics, such as the electrons that are prepared for experiments inside the Continuous Electron Beam Accelerator Facility [CEBAF] at Jefferson Lab.

    Jlab CEBAF

    CEBAF is an atom smasher. It can take ordinary electrons and pack them with just the right energy, group them together in just the right number and set those groups to spinning in just the right way to probe the nucleus of the atom and get the information that physicists want.

    But to ensure that electrons with the correct characteristics have been dialed up for the job, nuclear physicists need to be able to measure the electrons before they are sent careening into the nucleus of the atom. That’s where the diamonds in a device called the Hall C Compton Polarimeter come in. The polarimeter measures the spins of the groups of electrons that CEBAF is about to use for experiments.

    This quantity, called the beam polarization, is a key input to many experiments. Physicists measure it by shining laser light on the electrons as they pass by on their way to an experiment. The light knocks some of the electrons off the path and into a detector, where they are counted; those counts yield the beam polarization.
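As a sketch of the arithmetic involved (with invented numbers, not actual Jefferson Lab values or its real analysis code): comparing the counting rates measured in the two laser helicity states gives an asymmetry, and dividing by the known analyzing power of Compton scattering yields the beam polarization.

```python
def beam_polarization(n_plus, n_minus, analyzing_power):
    """Beam polarization from counts taken in the two laser helicity states.

    The counting asymmetry A = (N+ - N-) / (N+ + N-) equals the beam
    polarization times the analyzing power of the Compton scattering
    process, so P = A / A_p.
    """
    asymmetry = (n_plus - n_minus) / (n_plus + n_minus)
    return asymmetry / analyzing_power

# Invented counts and an invented analyzing power of 0.04:
p = beam_polarization(n_plus=1_020_000, n_minus=980_000, analyzing_power=0.04)
print(f"measured beam polarization: {p:.1%}")  # → measured beam polarization: 50.0%
```

Real polarimetry also has to subtract backgrounds and propagate systematic uncertainties; this only shows the core ratio.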

    Ordinarily, this detector would be made of silicon, but silicon is relatively easily damaged when struck by too many particles. The physicists needed something a bit hardier, so they turned to diamond, hoping it could also be a physicist’s best friend.

    The Hall C Compton Polarimeter uses a novel detector system built of thin wafers of diamond. Specially lab-grown plates of diamond, measuring roughly three-quarters of an inch square and a mere two hundredths of an inch thick, are outfitted like computer chips, with 96 tiny electrodes stuck to them. The electrodes send a signal when the diamond detector counts an electron.

    This novel detector was recently put to the test, and it delivered. The detector provided the most direct and accurate measurement to date of electron beam polarization at high current in CEBAF.

    But the team isn’t resting on its laurels: New experiments for probing the subatomic realm will require even higher accuracies. Now, the physicists are focused on improving the polarimeter, so that its diamonds will be ready to sparkle for the next precision experiment.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Thomas Jefferson National Accelerator Facility is managed by Jefferson Science Associates, LLC for the U.S. Department of Energy

  • richardmitnick 12:07 pm on April 23, 2016 Permalink | Reply
    Tags: Physics

    From Physics: “Q&A: Keeping a Watchful Eye on Earth” 


    Anna Hogg

    Andrew Shepherd explains how he uses data from satellites to study polar ice and describes what it’s like to work in the politically charged field of climate science.

    From the baking hot savannahs of Africa to the icy cold wastelands of Greenland and Antarctica, Andrew Shepherd has worked in some of the most extreme environments on Earth. In college, he studied astrophysics, and flirted with the idea of pursuing it as a career. But a professor’s warning that few of his classmates would find a permanent job in that field turned him off. Instead he took advantage of a new department of Earth observation science at Leicester University in the UK to follow a career studying our planet’s climate. Now, rather than pointing satellites towards space to observe the stars, Shepherd flips them around to monitor the Earth. He has studied the arid land in Zimbabwe and the ice sheets at Earth’s poles. (From his fieldwork in these places, Shepherd has concluded that it is far better to bundle up warm for the cold than to boil in the heat.) As the director of the Centre for Polar Observation and Modeling in the UK and a professor at Leeds University, Shepherd also has a hand in designing and building new satellites. Physics spoke to Shepherd to learn more about his work.

    –Katherine Wright

    Your current focus is measuring changes in the amount of ice stored in Antarctica and Greenland. How did you get involved in that?

    There are dozens of estimates for how much ice is being lost from the polar ice sheets, some of which my group has produced. But climate scientists and policy makers don’t want to pick and choose between different estimates; they need a single, authoritative one. I worked with the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA), and the world’s leading experts, to pull together all the satellite measurements and deliver a single assessment of polar ice sheet losses. The project, called IMBIE—the Ice Sheet Mass Balance Inter-comparison Exercise—has been really well received. Now the space agencies want us to generate yearly assessments of ice sheet loss to chart its impact on global sea-level rise.

    What techniques are used to monitor polar ice?

    People have been exploring the polar regions for centuries, but Antarctica and Greenland are simply too large to track on foot. Satellites have solved this problem. We can now measure changes in the flow, thickness, and mass of the polar ice sheets routinely from space. These data have revolutionized our understanding of how Antarctica and Greenland interact with the climate system. Although most satellite orbits don’t cover the highest latitudes, some—such as ESA’s CryoSat—have been specially designed for that purpose.

    ESA/CryoSat 2

    Unfortunately, we can’t measure everything from space. For example, the radio frequencies that we use to survey the bedrock beneath ice sheets can interfere with satellite television and telecommunications, so instead we rely on aircraft measurements.

    What questions about polar ice are you trying to answer?

    The headline science question is, how much ice is being lost from Antarctica and Greenland? It’s an important question, but there are many other things that we are interested in finding out. For example, how fast can ice sheets flow? Ask a glaciologist today and they’ll tell you that some glaciers flow at speeds greater than 15 km per year—you can sit next to Greenland’s Jakobshavn Isbrae glacier during your lunch break and watch it move; it’s that quick. But 10 years ago we thought the maximum speed was only 4 or 5 km per year. The speed is a useful piece of information because it’s an indication of how much ice [is available to] contribute to a future rise in sea level.

    Your group is part of several international collaborations. What’s your experience of working with so many other people towards a common goal?

    I enjoy it. As scientists, we are able to rely on the expertise of other people; we don’t have to have the answer to every problem. In climate and Earth science, problems are often much larger than any one group, or even institution, can solve alone, so teamwork is important.

    What’s it like to work in a field that’s often in the political and media spotlight?

    This adds excitement to our work: it’s great to know that people are interested in what we do. But it also adds an element of caution. Science moves forward by people challenging what has come before them. It can be daunting to do that in climate science, because it’s easy to be labeled an extremist. If you discover glaciers that aren’t shrinking, people assume you are going against an immense body of science. If you find evidence that the future sea-level rise will be higher than the latest predictions, you get labeled an alarmist. But often the worst option is to adopt a central position. If we assume that everyone else is right, and that there is no need to change the way we look at a problem, then we can rapidly slip into a situation where our knowledge ceases to expand.

    See the full article here.


    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

  • richardmitnick 10:10 am on April 21, 2016 Permalink | Reply
    Tags: Physics, Three Ways Physics Could Help Save Humanity

    From PI: “Three Ways Physics Could Help Save Humanity” 

    Perimeter Institute

    April 21, 2016

    Technology has put our global environment in crisis. Could it also provide the solution?

    PROBLEM: Fossil fuels for power and transit
    SOLUTION: Superconductors


    Fossil fuels generate most of our electricity, which is then transported through wires and cables – a process that loses between 8 and 15 percent of the original power production. But exotic materials called superconductors could just save the day.

    Superconductors let electric current flow without resistance or loss, and allow movement with no friction. Today’s superconductors operate at extremely low temperatures and require supercooling. Creating – or finding – room-temperature superconductors is one of modern science’s great quests.

    High-temperature superconductors could be used to create extremely efficient rotating machines (think: steam-free turbines), and power networks with near-100-percent efficiency.
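A back-of-envelope comparison, using made-up round numbers rather than real grid data, shows what that 8-15 percent means in practice:

```python
def delivered_power(generated_mw, loss_fraction):
    """Power reaching consumers after transmission losses."""
    return generated_mw * (1.0 - loss_fraction)

generated = 1000.0  # MW leaving the plant (an illustrative round number)
for loss in (0.08, 0.15, 0.0):  # low and high resistive loss, then a lossless superconducting link
    print(f"loss {loss:4.0%}: {delivered_power(generated, loss):6.1f} MW delivered")
```

For a 1000 MW plant, the difference between a lossy line and a lossless one is a whole mid-sized power station's worth of output.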

    They could also revolutionize transit. Magnetic levitation (maglev) trains already use supercooled superconducting magnets to levitate the train above the tracks and propel it forward. High-temperature versions would do away with energy-guzzling cooling systems and pave the way for even-more-Earth-friendly commutes.

    PROBLEM: Gravity and inertia
    SOLUTION: Advanced materials


    Many resources devoted to overcoming the effects of gravity and inertia also contribute to climate change. Just think of the fuel used simply to get heavy vehicles to move. Cue the arrival of, and excitement about, graphene.

    Graphene is a sheet of carbon just one atom thick, and it’s the strongest material in the world. (If it were the thickness of cling wrap, it would take the force of a large car to puncture it with a pencil.)

    Experimentalists are currently working towards creating a graphene-composite material that would replace steel in aircraft and other vehicles, making them significantly more fuel-efficient.

    But some theorists are looking even further afield. Graphene could prove strong enough to fabricate long-theorized space elevators. These elevators could tether a satellite to the Earth, turning the satellite into a base station for mining natural resources on asteroids, among other possibilities.

    Advanced quantum materials are also expected to significantly improve our ability to create and store energy, from high-efficiency solar panels to high-performance batteries.

    PROBLEM: Humans
    SOLUTION: Artificial intelligence


    The Anthropocene is not an official epoch yet – the International Commission on Stratigraphy (the people who define geologic time scales) will decide this year whether to officially recognize it – but scientists have no doubt that human society has been, and continues to be, profoundly damaging to the Earth.

    So why not consider a non-human effort to ameliorate that impact? Powered by recent advances in neural networks and deep-learning algorithms, computers are becoming increasingly “human” in their abilities. (Google hit a milestone this year when its AlphaGo computer beat the world champion of the ancient Chinese board game of Go.)

    But artificial intelligence could do much more than play a mean board game. A combination of machine-learning algorithms and future supercomputer hardware – including quantum computers – could forge the new era of AI and help realize efficiencies in infrastructure design, conduct fundamental research projects, and even mediate arguments.


    BUT THAT’S NOT ALL: The physics of chaos theory, quantum information, and next-generation supercomputing could also help scientists understand and predict climate change.

    According to Tim Palmer, the Oxford University Royal Society Research Professor in Climate Physics, the emerging concept of inexact supercomputing could provide a powerful approach to assessing the chaotic, uncertain nature of our climate system.

    Tune in on May 4 to watch the live webcast of Dr. Palmer’s Perimeter Public Lecture “Climate Change, Chaos, and Inexact Computing.”

    Access mp4 video here.

    See the full article here.


    About Perimeter

    Perimeter Institute is a leading centre for scientific research, training and educational outreach in foundational theoretical physics. Founded in 1999 in Waterloo, Ontario, Canada, its mission is to advance our understanding of the universe at the most fundamental level, stimulating the breakthroughs that could transform our future. Perimeter also trains the next generation of physicists through innovative programs, and shares the excitement and wonder of science with students, teachers and the general public.

  • richardmitnick 6:48 am on April 21, 2016 Permalink | Reply
    Tags: Physics

    From Nautilus: “Why Physics Is Not a Discipline” 



    April 21, 2016
    Philip Ball

    Instructive: Phase transitions in physical systems, like that between water vapor and ice, can give insight into other scientific problems, including evolution. Wikipedia

    Have you heard the one about the biologist, the physicist, and the mathematician? They’re all sitting in a cafe watching people come and go from a house across the street. Two people enter, and then some time later, three emerge. The physicist says, “The measurement wasn’t accurate.” The biologist says, “They have reproduced.” The mathematician says, “If now exactly one person enters the house then it will be empty again.”

    Hilarious, no? You can find plenty of jokes like this—many invoke the notion of a spherical cow—but I’ve yet to find one that makes me laugh. Still, that’s not what they’re for. They’re designed to show us that these academic disciplines look at the world in very different, perhaps incompatible ways.

    There’s some truth in that. Many physicists, for example, will tell stories of how indifferent biologists are to their efforts in that field, regarding them as irrelevant and misconceived. It’s not just that the physicists were thought to be doing things wrong. Often the biologists’ view was that (outside perhaps of the well established but tightly defined discipline of biophysics) there simply wasn’t any place for physics in biology.

    But such objections (and jokes) conflate academic labels with scientific ones. Physics, properly understood, is not a subject taught at schools and university departments; it is a certain way of understanding how processes happen in the world. When Aristotle wrote his Physics in the fourth century B.C., he wasn’t describing an academic discipline, but a mode of philosophy: a way of thinking about nature. You might imagine that’s just an archaic usage, but it’s not. When physicists speak today (as they often do) about the “physics” of the problem, they mean something close to what Aristotle meant: neither a bare mathematical formalism nor a mere narrative, but a way of deriving process from fundamental principles.

    This is why there is a physics of biology just as there is a physics of chemistry, geology, and society. But it’s not necessarily “physicists” in the professional sense who will discover it.

    In the mid-20th century, the boundary between physics and biology was more porous than it is today. Several pioneers of 20th-century molecular biology, including Max Delbrück, Seymour Benzer, and Francis Crick, were trained as physicists. And the beginnings of the “information” perspective on genes and evolution that found substance in James Watson and Francis Crick’s 1953 discovery of the structure of DNA are usually attributed to physicist Erwin Schrödinger’s 1944 book What Is Life? (Some of his ideas were anticipated, however, by the biologist Hermann Muller.)

    A merging of physics and biology was welcomed by many leading biologists in the mid-century, including Conrad Hal Waddington, J. B. S. Haldane, and Joseph Needham, who convened the Theoretical Biology Club at Cambridge University. And an understanding of the “digital code” of DNA emerged at much the same time as applied mathematician Norbert Wiener was outlining the theory of cybernetics, which purported to explain how complex systems from machines to cells might be controlled and regulated by networks of feedback processes. In 1955 the physicist George Gamow published a prescient article in Scientific American called “Information transfer in the living cell,” and cybernetics gave biologists Jacques Monod and François Jacob a language for formulating their early theory of gene regulatory networks in the 1960s.

    But then this “physics of biology” program stalled. Despite the migration of physicists toward biologically related problems, there remains a void separating most of their efforts from the mainstream of genomic data-collection and detailed study of genetic and biochemical mechanisms in molecular and cell biology. What happened?

    Some of the key reasons for the divorce are summarized in Ernst Mayr’s 2004 book What Makes Biology Unique. Mayr was one of the most eminent evolutionary biologists of the modern age, and the title alone reflected a widely held conception of exceptionalism within the life sciences. In Mayr’s view, biology is too messy and complicated for the kind of general theories offered by physics to be of much help—the devil is always in the details.


    Scientific ideas developed in one field can turn out to be relevant in another.


    Mayr made perhaps the most concerted attempt by any biologist to draw clear disciplinary boundaries around his subject, smartly isolating it from other fields of science. In doing so, he supplies one of the clearest demonstrations of the folly of that endeavor.

    He identifies four fundamental features of physics that distinguish it from biology. It is essentialist (dividing the world into sharply delineated and unchanging categories, such as electrons and protons); it is deterministic (this always necessarily leads to that); it is reductionist (you understand a system by reducing it to its components); and it posits universal natural laws, which in biology are undermined by chance, randomness, and historical contingency. Any physicist will tell you that this characterization of physics is thoroughly flawed, as a passing familiarity with quantum theory, chaos, and complexity would reveal.

    The skeptic: Ernst Mayr argued that general theories from physics would be unlikely to be of great use in biology. Wikipedia

    But Mayr’s argument gets more interesting—if not actually more valid—when he claims that what makes biology truly unique is that it is concerned with purpose: with the designs ingeniously forged by blind mutation and selection during evolution. Particles bumping into one another on their random walks don’t have to do anything. But the genetic networks and protein molecules and complex architectures of cells are shaped by the exigencies of survival: they have a kind of goal. And physics doesn’t deal with goals, right? As Massimo Pigliucci of City University of New York, an evolutionary biologist turned philosopher, recently stated, “It makes no sense to ask what is the purpose or goal of an electron, a molecule, a planet or a mountain.”

    Purpose and teleology are difficult words in biology: They all too readily suggest a deterministic goal for evolution’s “blind watchmaker,” and lend themselves to creationist abuse. But there’s no escaping the compulsion to talk about function in biology: Its components and structures play a role in the survival of the organism and the propagation of genes.

    The thing is, physical scientists aren’t deterred by the word either. When Norbert Wiener wrote his 1943 paper “Behaviour, purpose and teleology,” he was being deliberately provocative. And the Teleological Society that Wiener formed two years later with Hungarian mathematical physicist John von Neumann announced as its mission the understanding of “how purpose is realised in human and animal conduct.” Von Neumann’s abiding interest in replication—an essential ingredient for evolving “biological function”—as a computational process laid the foundations of the theory of cellular automata, which are now widely used to study complex adaptive processes including Darwinian evolution (even Richard Dawkins has used them).

    Apparent purpose arises from Darwinian adaptation to the environment. But isn’t that then perfectly understood by Darwin’s random mutation and natural selection, without any appeal to a “physics” of adaptation?

    Actually, no. For one thing, it isn’t obvious that these two ingredients—random inheritable mutation between replicating organisms, and selective pressure from the environment—will necessarily produce adaptation, diversity, and innovation. How does this depend on, say, the rate of replication, the fidelity of the copying process and the level of random noise in the system, the strength of selective pressure, the relationship between the inheritable information and the traits they govern (genotype and phenotype), and so on? Evolutionary biologists have mathematical models to investigate these things, but doing calculations tells you little without a general framework to relate it to.

    That general framework is the physics of evolution. It might be mapped out in terms of, say, threshold values of the variables above which a qualitatively new kind of global behavior appears: what physicists call a phase diagram. The theoretical chemist Peter Schuster and his coworkers have found such a threshold in the error rate of genetic copying, below which the information contained in the replicating genome remains stable. In other words, above this error rate there can be no identifiable species as such: Their genetic identity “melts.” Schuster’s colleague, Nobel laureate chemist Manfred Eigen, argues that this switch is a phase transition entirely analogous to those like melting that physicists more traditionally study.
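Eigen's error threshold is often summarized by a simplified textbook relation (a sketch, not Schuster's actual calculation): a master sequence with selective advantage sigma, copied with per-base fidelity q, can maintain its information only if its length L stays below ln(sigma) / (1 − q). A few lines of code show how sharply copying fidelity caps genome size:

```python
import math

def max_genome_length(sigma, per_base_fidelity):
    """Longest genome whose information survives replication errors,
    using the simplified error-threshold relation L_max ~ ln(sigma) / (1 - q)."""
    return math.log(sigma) / (1.0 - per_base_fidelity)

# With a tenfold selective advantage for the master sequence:
for q in (0.99, 0.999, 0.999999):
    print(f"copying fidelity {q}: L_max ~ {max_genome_length(10, q):,.0f} bases")
```

Past that length, mutations accumulate faster than selection can purge them and the genome's identity "melts", which is why large genomes require the proofreading machinery that high-fidelity replication provides.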

    Meanwhile, evolutionary biologist Andreas Wagner has used computer models to show that the ability of Darwinian evolution to innovate and generate qualitatively new forms and structures rather than just minor variations on a theme doesn’t follow automatically from natural selection. Instead, it depends on there being a very special “shape” to the combinatorial space of possibilities which describes how function (the chemical effect of a protein, say) depends on the information that encodes it (such as the sequences of amino acids in the molecular chain). Here again is the “physics” underpinning evolutionary variety.

    And physicist Jeremy England of the Massachusetts Institute of Technology has argued that adaptation itself doesn’t have to rely on Darwinian natural selection and genetic inheritance, but may be embedded more deeply in the thermodynamics of complex systems. The very notions of fitness and adaptation have always been notoriously hard to pin down—they easily end up sounding circular. But England says that they might be regarded in their most basic form as an ability of a particular system to persist in the face of a constant throughput of energy by suppressing big fluctuations and dissipating that energy: you might say, by a capacity to keep calm and carry on.

    “Our starting assumptions are general physical ones, and they carry us forward to a claim about the general features of nonequilibrium evolution of which the Darwinian story becomes a special case that obtains in the event that your system contains self-replicating things,” says England. “The notion becomes that thermally fluctuating matter gets spontaneously beaten into shapes that are good at work absorption from the external fields in the environment.” What’s exciting about this, he says, is that “when we give a physical account of the origins of some of the ‘adapted’-looking structures we see, they don’t necessarily have to have had parents in the usual biological sense.” Already, some researchers are starting to suggest that England’s ideas offer the foundational physics for Darwin’s.

    Notice that there is really no telling where this “physics” of the biological phenomenon will come from—it could be from chemists and biologists as much as from “physicists” as such. There is nothing at all chauvinistic, from a disciplinary perspective, about calling these fundamental ideas and theories the physics of the problem. We just need to rescue the word from its departmental definition, and the academic turf wars that come with it.

    Familiar patterns: British mathematician Alan Turing proposed a general approach to pattern formation in chemical and biological systems. Both dots (top left) and stripes (top right) can be produced using “activators” and “inhibitors.” Some patterns have a striking resemblance to patterns found in nature, like the zebra’s.
    Top: Turing Patterns courtesy of Jacques Boissonade and Patrick De Kepper at Bordeaux University; Bottom: Zebra, Ishara Kodikara / Getty

    You could regard these incursions into biology of ideas more familiar within physics as just another example of the way in which scientific ideas developed in one field can turn out to be relevant in another.

    But the issue is deeper than that, and phrasing it as cross-talk (or border raids) between disciplines doesn’t capture the whole truth. We need to move beyond attempts like those of Mayr to demarcate and defend the boundaries.

    The habit of physicists to praise peers for their ability to see to the “physics of the problem” might sound odd. What else would a physicist do but think about the “physics of the problem?” But therein lies a misunderstanding. What is being articulated here is an ability to look beyond mathematical descriptions or details of this or that interaction, and to work out the underlying concepts involved—often very general ones that can be expressed concisely in non-mathematical, perhaps even colloquial, language. Physics in this sense is not a fixed set of procedures, nor does it alight on a particular class of subject matter. It is a way of thinking about the world: a scheme that organizes cause and effect.


    We don’t yet know quite what a physics of biology will consist of. But we won’t understand life without it.


    This kind of thinking can come from any scientist, whatever his or her academic label. It’s what Jacob and Monod displayed when they saw that feedback processes were the key to genetic regulation, and so forged a link with cybernetics and control theory. It’s what the developmental biologist Hans Meinhardt did in the 1970s when he and his colleague Alfred Gierer unlocked the physics of Turing structures. These are spontaneous patterns that arise in a mathematical model of diffusing chemicals, devised by mathematician Alan Turing in 1952 to account for the generation of form and order in embryos. Meinhardt and Gierer identified the physics underlying Turing’s maths: the interaction between a self-generating “activator” chemical and an ingredient that inhibits its behavior.
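The activator-inhibitor physics is concrete enough to sketch in code. Below is a minimal one-dimensional simulation in the spirit of the Gierer-Meinhardt equations (the parameter values and discretization are illustrative choices of mine, not taken from their papers): a self-enhancing activator that diffuses slowly, coupled to a fast-diffusing inhibitor it produces, turns a nearly uniform starting state into a spatial pattern of peaks and valleys.

```python
import random

def simulate(n=64, steps=4000, dt=0.05, d_act=0.01, d_inh=1.0, inh_decay=2.0):
    """Evolve a 1-D Gierer-Meinhardt-style activator-inhibitor system."""
    random.seed(1)
    # Uniform steady state (activator = inhibitor = inh_decay) plus tiny noise.
    act = [inh_decay + 0.01 * random.uniform(-1, 1) for _ in range(n)]
    inh = [float(inh_decay)] * n
    for _ in range(steps):
        new_act, new_inh = [], []
        for i in range(n):
            left, right = (i - 1) % n, (i + 1) % n  # periodic boundaries
            lap_a = act[left] - 2.0 * act[i] + act[right]
            lap_h = inh[left] - 2.0 * inh[i] + inh[right]
            # Activator: autocatalytic (a^2/h), damped by the inhibitor, slow diffusion.
            new_act.append(act[i] + dt * (act[i] ** 2 / inh[i] - act[i] + d_act * lap_a))
            # Inhibitor: produced by the activator, decays faster, diffuses fast.
            new_inh.append(inh[i] + dt * (act[i] ** 2 - inh_decay * inh[i] + d_inh * lap_h))
        act, inh = new_act, new_inh
    return act

pattern = simulate()
print(f"pattern amplitude: {max(pattern) - min(pattern):.2f}")
```

The key design feature is the ratio of diffusion rates: make the inhibitor diffuse as slowly as the activator and the instability disappears, leaving the uniform state intact.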

    Once we move past the departmental definition of physics, the walls around other disciplines become more porous, to positive effect. Mayr’s argument that biological agents are motivated by goals in ways that inanimate objects are not was closely tied to a crude interpretation of biological information springing from the view that everything begins with DNA. As Mayr puts it, “there is not a single phenomenon or a single process in the living world which is not controlled by a genetic program contained in the genome.”

    This “DNA chauvinism,” as it is sometimes now dubbed, leads to the very reductionism and determinism that Mayr wrongly ascribes to physics, and which the physics of biology is undermining. For even if we recognize (as we must) that DNA and genes really are central to the detailed particulars of how life evolves and survives, there’s a need for a broader picture in which information for maintaining life doesn’t just come from a DNA data bank. One of the key issues here is causation: In what directions does information flow? It’s now becoming possible to quantify these questions of causation—and that reveals the deficiencies of a universal bottom-up picture.

    Neuroscientist Giulio Tononi and colleagues at the University of Wisconsin-Madison have devised a generic model of a complex system of interacting components—which could conceivably be neurons or genes, say—and they find that sometimes the system’s behavior is caused not so much in a bottom-up way, but by higher levels of organization among the components.

    This picture is borne out in a recent analysis of information flow in yeast gene networks by Paul Davies and colleagues at Arizona State University in Tempe. The study reveals that indeed “downward” causation is involved in this case. Davies and colleagues believe that top-down causation might be a general feature of the physics of life, and that it could have played a key role in some major shifts in evolution, such as the appearance of the genetic code, the evolution of complex compartmentalized cells (eukaryotes), the development of multicellular organisms, and even the origin of life itself. At such pivotal points, they say, information flow may have switched direction so that processes at higher levels of organization affected and altered those at lower levels, rather than everything being “driven” by mutations at the level of genes.

    One thing this work, and that of Wagner, Schuster, and Eigen, suggests is that the way DNA and genetic networks connect to the maintenance and evolution of living organisms can only be fully understood once we have a better grasp of the physics of information itself.

    A case in point is the observation that biological systems often operate close to what physicists call a critical phase transition or critical point: a state poised on the brink of switching between two modes of organization, one of them orderly and the other disorderly. Critical points are well known in physical systems like magnetism, liquid mixtures, and superfluids. William Bialek, a physicist working on biological problems at Princeton University, and his colleague Thierry Mora at the École Normale Supérieure in Paris, proposed in 2010 that a wide variety of biological systems, from flocking birds to neural networks in the brain and the organization of amino-acid sequences in proteins, might also be close to a critical state.

    By operating close to a critical point, Bialek and Mora said, a system undergoes big fluctuations that give it access to a wide range of different configurations of its components. As a result, Mora says, “being critical may confer the necessary flexibility to deal with complex and unpredictable environments.” What’s more, a near-critical state is extremely responsive to disturbances in the environment, which can send rippling effects throughout the whole system. That can help a biological system to adapt very rapidly to change: A flock of birds or a school of fish can respond very quickly to the approach of a predator, say.

    Criticality can also provide an information-gathering mechanism. Physicist Amos Maritan at the University of Padova in Italy and coworkers have shown that a critical state in a collection of “cognitive agents”—they could be individual organisms, or neurons, for example—allows the system to “sense” what is going on around it: to encode a kind of ‘internal map’ of its environment and circumstances, rather like a river network encoding a map of the surrounding topography.9 “Being poised at criticality provides the system with optimal flexibility and evolutionary advantage to cope with and adapt to a highly variable and complex environment,” says Maritan. There’s mounting evidence that brains, gene networks, and flocks of animals really are organized this way. Criticality may be everywhere.
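    To get a feel for why fluctuations matter, consider the textbook critical system physicists have in mind: the two-dimensional Ising model of a magnet. The sketch below is a minimal Monte Carlo simulation (an illustrative toy, not code from any of the studies cited) showing that the magnetization fluctuates far more strongly near the critical temperature than deep in the ordered phase.

```python
import math
import random

def ising_fluctuations(T, L=16, equil=400, measure=400, seed=1):
    """Metropolis simulation of a 2D Ising ferromagnet (J = kB = 1).

    Returns the variance of the per-spin magnetization, a crude proxy
    for how strongly the system fluctuates at temperature T."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]              # start fully ordered
    samples = []
    for sweep in range(equil + measure):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum the four nearest neighbours (periodic boundaries).
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j] +
                  spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb                # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
        if sweep >= equil:                          # measure after equilibration
            samples.append(sum(sum(row) for row in spin) / (L * L))
    mean = sum(samples) / len(samples)
    return sum((m - mean) ** 2 for m in samples) / len(samples)

var_cold = ising_fluctuations(T=1.0)    # deep in the ordered phase
var_crit = ising_fluctuations(T=2.3)    # near the critical point (Tc ~ 2.27)
print(var_cold, var_crit)               # fluctuations are far larger near Tc
```

    The lattice size, sweep counts, and temperatures here are arbitrary choices for speed; the qualitative point is only that variance, and hence responsiveness, peaks near criticality.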

    Examples like these give us confidence that biology does have a physics to it. Bialek has no patience with the common refrain that biology is just too messy—that, as he puts it, “there might be some irreducible sloppiness that we’ll never get our arms around.”10 He is confident that there can be “a theoretical physics of biological systems that reaches the level of predictive power that has become the standard in other areas of physics.” Without it, biology risks becoming mere anecdote and contingency. And one thing we can be fairly sure about is that biology is not like that, because it would simply not work if it was.

    We don’t yet know quite what a physics of biology will consist of. But we won’t understand life without it. It will surely have something to say about how gene networks produce both robustness and adaptability in the face of a changing environment—why, for example, a defective gene need not be fatal and why cells can change their character in stable, reliable ways without altering their genomes. It should reveal why evolution itself is both possible at all and creative.

    Saying that physics knows no boundaries is not the same as saying that physicists can solve everything. They too have been brought up inside a discipline, and are as prone as any of us to blunder when they step outside. The issue is not who “owns” particular problems in science, but about developing useful tools for thinking about how things work—which is what Aristotle tried to do over two millennia ago. Physics is not what happens in the Department of Physics. The world really doesn’t care about labels, and if we want to understand it then neither should we.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 3:03 pm on April 19, 2016 Permalink | Reply
    Tags: , Physics, , The wonders of photons of light   

    From Symmetry: “Eight things you might not know about light” 

    Symmetry Mag


    Matthew R. Francis

    Light is all around us, but how much do you really know about the photons speeding past you?

    Illustration by Sandbox Studio, Chicago with Kimberly Boustead

    There’s more to light than meets the eye. Here are eight enlightening facts about photons:

    1. Photons can produce shock waves in water or air, similar to sonic booms.

    Nothing can travel faster than the speed of light in a vacuum. However, light slows down in air, water, glass and other materials as photons interact with atoms, which has some interesting consequences.

    The highest-energy gamma rays from space hit Earth’s atmosphere moving faster than the speed of light in air.

    Gamma rays from the Fermi Gamma-ray Space Telescope could be produced by proposed dark matter interactions.

    NASA/Fermi Telescope

    These photons produce shock waves in the air, much like a sonic boom, but the effect is to make more photons instead of sound. Observatories like VERITAS in Arizona look for those secondary photons, which are known as Cherenkov radiation. Nuclear reactors also exhibit Cherenkov light in the water surrounding the nuclear fuel.
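    The condition for Cherenkov light is simply that the particle outruns light in the medium, v > c/n. A short sketch (illustrative, using the electron rest energy and textbook refractive indices) turns that condition into a threshold kinetic energy:

```python
import math

def cherenkov_threshold_mev(n, rest_mass_mev=0.511):
    """Minimum kinetic energy (MeV) for a charged particle to emit
    Cherenkov light in a medium with refractive index n.

    The condition is v > c/n, i.e. beta > 1/n; relativistic kinematics
    then gives the kinetic energy via the Lorentz factor gamma."""
    beta_min = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta_min ** 2)
    return (gamma - 1.0) * rest_mass_mev

# Electron in water (n ~ 1.33), as in a reactor cooling pool:
print(round(cherenkov_threshold_mev(1.33), 2), "MeV")     # ~0.26 MeV
# Electron in air (n ~ 1.0003): a far higher bar, which is why only
# very energetic particles radiate in the atmosphere.
print(round(cherenkov_threshold_mev(1.0003), 1), "MeV")   # ~20 MeV
```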

    CfA/VERITAS Cherenkov telescope installation

    2. Most types of light are invisible to our eyes.

    Colors are our brains’ way of interpreting the wavelength of light: how far the light travels before the wave pattern repeats itself. But the colors we see—called “visible” or “optical” light—are only a small sample of the total electromagnetic spectrum.

    Red is the longest wavelength light we see, but stretch the waves more and you get infrared, microwaves (including the stuff you cook with) and radio waves. Wavelengths shorter than violet span ultraviolet, X-rays and gamma rays. Wavelength is also a stand-in for energy: The long wavelengths of radio light have low energy, and the short-wavelength gamma rays have the highest energy, a major reason they’re so dangerous to living tissue.
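    The wavelength-to-energy relationship in the last paragraph is just E = hc/λ, which is easy to put into numbers. A quick illustrative sketch using CODATA values for the constants:

```python
# Photon energy from wavelength: E = h*c / lambda.
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light in vacuum, m/s
EV = 1.602176634e-19     # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy in electronvolts of a photon with the given wavelength."""
    return H * C / wavelength_m / EV

# Long wavelengths carry little energy; short wavelengths carry a lot.
for name, wl in [("radio (1 m)", 1.0),
                 ("red light (700 nm)", 700e-9),
                 ("violet light (400 nm)", 400e-9),
                 ("X-ray (0.1 nm)", 0.1e-9)]:
    print(f"{name}: {photon_energy_ev(wl):.3g} eV")
```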

    3. Scientists can perform measurements on single photons.

    Light is made of particles called photons, bundles of the electromagnetic field that carry a specific amount of energy. With sufficiently sensitive experiments, you can count photons or even perform measurements on a single one. Researchers have even frozen light temporarily.

    But don’t think of photons like they are pool balls. They’re also wave-like: they can interfere with each other to produce patterns of light and darkness. The photon model was one of the first triumphs of quantum physics; later work showed that electrons and other particles of matter also have wave-like properties.
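    That wave character shows up in the classic two-slit experiment, where the relative intensity on a distant screen goes as the cosine squared of the path-difference phase. A minimal sketch with hypothetical numbers (500 nm light, slits 2 micrometres apart):

```python
import math

def two_slit_intensity(theta, d, wavelength):
    """Relative intensity of the ideal two-slit interference pattern:
    I(theta) = cos^2(pi * d * sin(theta) / lambda)."""
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

# Hypothetical setup: 500 nm light, slits 2 micrometres apart.
wl, d = 500e-9, 2e-6
print(two_slit_intensity(0.0, d, wl))                    # central bright fringe
# First dark fringe, where d * sin(theta) = lambda / 2:
theta_dark = math.asin(wl / (2 * d))
print(round(two_slit_intensity(theta_dark, d, wl), 6))   # destructive minimum
```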

    4. Photons from particle accelerators are used in chemistry and biology.

    Visible light’s wavelengths are larger than atoms and molecules, so we literally can’t see the components of matter. However, the short wavelengths of X-rays and ultraviolet light are suited to revealing such small structures. With methods to detect these high-energy types of light, scientists get a glimpse of the atomic world.

    Particle accelerators can make photons of specific wavelengths by accelerating electrons using magnetic fields; this is called “synchrotron radiation.” Researchers use particle accelerators to make X-rays and ultraviolet light to study the structure of molecules and viruses and even make movies of chemical reactions.

    CERN Proton Synchrotron

    5. Light is the manifestation of one of the four fundamental forces of nature.

    Photons carry the electromagnetic force, one of the four fundamental forces (along with the weak force, the strong force, and gravity). As an electron moves through space, other charged particles feel it thanks to electrical attraction or repulsion. Because the effect is limited by the speed of light, other particles react to where the electron was, rather than where it is now. Quantum physics explains this by describing empty space as a seething soup of virtual particles. Electrons kick up virtual photons, which travel at the speed of light and hit other particles, exchanging energy and momentum.

    6. Photons are easily created and destroyed.

    Unlike matter, all sorts of things can make or destroy photons. If you’re reading this on a computer screen, the backlight is making photons that travel to your eye, where they are absorbed—and destroyed.

    The movement of electrons is responsible for both the creation and destruction of the photons, and that’s the case for a lot of light production and absorption. An electron moving in a strong magnetic field will generate photons just from its acceleration.

    Similarly, when a photon of the right wavelength strikes an atom, it disappears and imparts all its energy to kicking the electron into a new energy level. A new photon is created and emitted when the electron falls back to its original energy level. This absorption and emission are responsible for the unique spectrum of light each type of atom or molecule has, which is a major way chemists, physicists, and astronomers identify chemical substances.
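    For hydrogen, the wavelengths of that unique spectrum follow from the Rydberg formula. A brief illustrative sketch, computing the Balmer series, whose lines fall in the visible range:

```python
R_INF = 1.0973731568e7  # Rydberg constant, 1/m

def emission_wavelength_nm(n_upper, n_lower):
    """Wavelength (nm) of the photon emitted when a hydrogen electron
    falls from level n_upper to n_lower, via the Rydberg formula:
    1/lambda = R * (1/n_lower^2 - 1/n_upper^2)."""
    inv_lambda = R_INF * (1.0 / n_lower ** 2 - 1.0 / n_upper ** 2)
    return 1e9 / inv_lambda

# Balmer series: transitions ending on n = 2 give visible lines.
for n in (3, 4, 5):
    print(f"{n} -> 2: {emission_wavelength_nm(n, 2):.1f} nm")
# The 3 -> 2 transition is the familiar red H-alpha line near 656 nm.
```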

    7. When matter and antimatter annihilate, light is a byproduct.

    An electron and a positron have the same mass, but opposite quantum properties such as electric charge. When they meet, those opposites cancel each other, converting the masses of the particles into energy in the form of a pair of gamma ray photons.

    8. You can collide photons to make particles.

    Photons are their own antiparticles. But here’s the fun bit: the laws of physics governing photons are symmetric in time. That means if we can collide an electron and a positron to get two gamma ray photons, we should be able to collide two photons of the right energy and get an electron-positron pair.
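    Running annihilation backwards is just energy bookkeeping: each photon from annihilation at rest carries 511 keV, and two head-on photons can pair-produce only if their centre-of-mass energy reaches twice the electron rest energy. A small sketch with illustrative numbers:

```python
ELECTRON_MASS_MEV = 0.511   # electron rest energy, m_e * c^2, in MeV

# Annihilation at rest: the two gamma rays split the rest energy of the
# pair equally, so each photon carries m_e * c^2 = 0.511 MeV.

def pair_production_possible(e1_mev, e2_mev):
    """For two head-on photons, the centre-of-mass energy is
    sqrt(4 * E1 * E2); pair production needs that to reach
    2 * m_e * c^2, i.e. E1 * E2 >= (m_e * c^2)^2."""
    return e1_mev * e2_mev >= ELECTRON_MASS_MEV ** 2

print(pair_production_possible(0.511, 0.511))   # True: exactly at threshold
print(pair_production_possible(0.511, 0.100))   # False: not enough energy
```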

    In practice that’s hard to do: successful experiments generally involve particles other than just light. However, inside the LHC, the sheer number of photons produced during collisions of protons means that some of them occasionally hit each other.

    LHC at CERN

    Some physicists are thinking about building a photon-photon collider, which would fire beams of photons into a cavity full of other photons to study the particles that come out of collisions.

    See the full article here.


    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 4:37 pm on April 17, 2016 Permalink | Reply
    Tags: , , Physics   

    From INVERSE: “Physicists Built a Super Tiny Engine Powered by a Single Calcium Atom” 



    Physicists have developed an engine that you can’t see with the naked eye.

    In a paper published* today in the journal Science, a research team from the University of Mainz and the University of Kassel in Germany describes an electromagnetic system that traps a single charged calcium-40 atom and oscillates it, producing mechanical work just as steam-locomotive and car engines do. And because of the quantum mechanical nature of this tiny engine, the physicists believe that the system performs on the same level as, and can even be more efficient than, the average car engine.

    “There has been a lot of theoretical explanations and investigations in quantum properties of engines since the late ‘50s,” Johannes Roßnagel, lead researcher and experimental physicist at the University of Mainz, tells Inverse. “We have now shown that it’s possible.”

    Roßnagel and his team embarked on the project four years ago when they wanted to explore quantum effects in thermodynamics, and thought the best way to experiment would be to create an engine. They had to build everything from scratch on a small budget of a few hundred Euros, he says. They had to construct personalized electronics and a system that could control the ion at a very precise level. It took them an entire year just to develop the technique to obtain temperature measurements — common methods being too slow or inaccurate for their engine.

    What they ended up with was an eight millimeter-long and four-millimeter-in-diameter ion trap with gold plates and electrodes that sequestered the lone calcium atom (but any charged atom could do the job) inside an electromagnetic field. Two lasers point at the ends of the trap, one heating up the atom and the other cooling it down. This fluctuation in temperature drives the ion to create an ever-increasing harmonic oscillation — like a sound wave. It’s the same idea as larger thermal engines that rely on the gas or liquids to generate mechanical work, except in this case there’s just one particle, Roßnagel explains.
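    Like any heat engine running between a hot and a cold reservoir, such a device is bounded by the Carnot efficiency, 1 minus the ratio of cold to hot reservoir temperature. A one-line sketch with purely hypothetical temperatures (the article does not quote the trap's actual values):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on the efficiency of any heat engine operating
    between reservoirs at t_hot_k and t_cold_k (in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Hypothetical reservoir temperatures, for illustration only:
print(f"{carnot_efficiency(600.0, 300.0):.0%}")   # 50%
```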

    The single-atom engine and ion trap apparatus. Johannes Roßnagel/University of Mainz.

    While the single-atom engine produces only about 10⁻²² watts, it is actually comparable to a car engine, says Roßnagel: if you calculate the power carried by the individual gas particles in an average car engine, the motor’s per-particle power is of the same order of magnitude as the single-atom engine’s.

    “This was very surprising to us,” Roßnagel says. “This means that when you scale such a system down to a single particle, it’s still performing on the same level as macroscopic engines do.”

    He and his colleagues believe this level of efficiency is due to quantum effects, unique properties that can only be generated through single atoms and particles. Roßnagel explains that thermodynamic quantum effects would make it so engines do not have to rely on temperature as a sole energy source, and that a single-atom engine’s quantum properties have the potential to generate even more power than a thermal engine. However, outside physicists are not so certain.

    “I wouldn’t accept this efficiency is just from ‘the weirdness of quantum mechanics,’” Hartmut Häffner, a theoretical physicist at the University of California, Berkeley, who was not involved in the experiment, told Popular Mechanics in 2014 when Roßnagel wrote a proposal paper about the engine. Häffner adds that the potential single-atom engine itself “is very interesting and very well-described. It’s trying to push the boundaries of what we know about thermodynamics into a new regime.”

    The single-atom engine Roßnagel and his team built is the smallest engine he knows of today. It would be possible to create an even smaller one with a single electron, but he doesn’t believe there is much interest in pursuing it: “We have a single particle which is running in the engine, and whether this is a calcium atom or an electron, from our research point of view, it doesn’t make a difference.”

    Next, Roßnagel wants to build tiny refrigerators with the technology. By turning the thermodynamic cycle around, the single-atom engine would run exactly like a refrigerator, he explains. The system generates a temperature difference, creating a side that is heated and a side that is cooled, like our food storage appliances. In the far future, he can also see these nanoscale engines improving chips and single-atom transistors.

    “The heat that’s produced during an operation is a very huge problem for [the chip industry]. I think to have additional cooling systems at hand would be very helpful,” he says.

    Regardless of what exactly comes out of their initial single-atom engine, Roßnagel believes “this will find some greater applications some day.”

    *Science paper:
    A single-atom heat engine

    See the full article here.


  • richardmitnick 2:42 pm on April 15, 2016 Permalink | Reply
    Tags: , , , Physics,   

    From SLAC: “SLAC Researchers Recreate the Extreme Universe in the Lab” 

    SLAC Lab

    April 15, 2016

    Three Recent Studies Reveal Details about Meteor Impacts, Giant Planets and Cosmic Particle Accelerators

    Conditions in the vast universe can be quite extreme: Violent collisions scar the surfaces of planets. Nuclear reactions in bright stars generate tremendous amounts of energy. Gigantic explosions catapult matter far out into space. But how exactly do processes like these unfold? What do they tell us about the universe? And could their power be harnessed for the benefit of humankind?

    To find out, researchers from the Department of Energy’s SLAC National Accelerator Laboratory perform sophisticated experiments and computer simulations that recreate violent cosmic conditions on a small scale in the lab.

    “The field of laboratory astrophysics is growing very rapidly, fueled by a number of technological breakthroughs,” says Siegfried Glenzer, head of SLAC’s High Energy Density Science Division. “We now have high-power lasers to create extreme states of matter, cutting-edge X-ray sources to analyze these states at the atomic level, and high-performance supercomputers to run complex simulations that guide and help explain our experiments. With its outstanding capabilities in these areas, SLAC is a particularly fertile ground for this type of research.”

    Three recent studies exemplify this approach, shining light on meteor impacts, the cores of giant planets and cosmic particle accelerators a million times more powerful than the Large Hadron Collider, the largest particle racetrack on Earth.

    Artist representation of laboratory astrophysics experiments. By mimicking fundamental physics aspects in the lab, researchers hope to better understand violent cosmic phenomena. (SLAC National Accelerator Laboratory)

    Cosmic ‘Bling’ as Marker for Meteor Impacts

    High pressure can turn a soft form of carbon – graphite, used as pencil lead – into an extremely hard form of carbon, diamond. Could the same thing happen when a meteor hits graphite in the ground? Scientists have predicted that it could, and that these impacts, in fact, might be powerful enough to produce a form of diamond, called lonsdaleite, that is even harder than regular diamond.

    “The existence of lonsdaleite has been disputed, but we’ve now found compelling evidence for it,” says Glenzer, the co-principal investigator of a study* published March 14 in Nature Communications.

    The team heated the surface of graphite with a powerful optical laser pulse that set off a shock wave inside the sample and rapidly compressed it. By shining bright, ultrafast X-rays from SLAC’s X-ray laser Linac Coherent Light Source (LCLS) through the sample, the researchers were able to see how the shock changed the graphite’s atomic structure. LCLS is a DOE Office of Science User Facility.


    “We saw that lonsdaleite formed for certain graphite samples within a few billionths of a second and at a pressure of about 200 gigapascals – 2 million times the atmospheric pressure at sea level,” says lead author Dominik Kraus from the German Helmholtz Center Dresden-Rossendorf, who was a postdoctoral researcher at the University of California, Berkeley at the time of the study. “These results strongly support the idea that violent impacts can synthesize this form of diamond, and that traces of it in the ground could help identify meteor impact sites.”

    Meteor impacts generate shock waves so powerful that they turn graphite into diamond. (NASA/D. Davis)
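    The "2 million times atmospheric pressure" figure follows directly from the quoted 200 gigapascals, as a one-line check shows:

```python
ATMOSPHERE_PA = 101325.0    # standard atmosphere, in pascals

pressure_pa = 200e9         # 200 gigapascals, as quoted for the shock
ratio = pressure_pa / ATMOSPHERE_PA
print(f"{ratio:.2e} atmospheres")   # ~1.97e6, i.e. about 2 million
```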

    Giant Planets Turn Hydrogen into Metal

    A second study**, published today in Nature Communications, looked at another peculiar transformation that might occur inside giant gas planets like Jupiter, whose interior is largely made of liquid hydrogen: At high pressure and temperature, this material is believed to switch from its “normal,” electrically insulating state into a metallic, conducting one.

    “Understanding this process provides new details about planet formation and the evolution of the solar system,” says Glenzer, who was also the co-principal investigator of this study. “Although the transition had already been predicted in the 1930s, we’ve never had a direct window into the atomic processes.”

    That is, not until Glenzer and his fellow scientists performed an experiment at Lawrence Livermore National Laboratory (LLNL), where they used the high-power Janus laser to rapidly compress and heat a sample of liquid deuterium, a heavy form of hydrogen, and to create a burst of X-rays that probed subsequent structural changes in the sample.

    The team saw that above a pressure of 250,000 atmospheres and a temperature of 7,000 degrees Fahrenheit, deuterium indeed changed from a neutral, insulating fluid to an ionized, metallic one.

    “Computer simulations suggest that the transition coincides with the separation of the two atoms normally bound together in deuterium molecules,” says lead author Paul Davis, who was a graduate student at the University of California, Berkeley and LLNL at the time of the study. “It appears that as the pressure and temperature of the laser-induced shock wave rip the molecules apart, their electrons become unbound and are able to conduct electricity.”
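    For readers who think in SI units, the quoted transition conditions convert readily (a quick sketch):

```python
def atm_to_gpa(atmospheres):
    """Convert standard atmospheres to gigapascals."""
    return atmospheres * 101325.0 / 1e9

def fahrenheit_to_kelvin(deg_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (deg_f - 32.0) * 5.0 / 9.0 + 273.15

# The transition conditions quoted in the article, in SI-friendly units:
print(f"{atm_to_gpa(250000):.1f} GPa")         # ~25.3 GPa
print(f"{fahrenheit_to_kelvin(7000):.0f} K")   # ~4144 K
```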

    In addition to planetary science, the study could also inform energy research aimed at using deuterium as nuclear fuel for fusion reactions that replicate analogous processes inside the sun and other stars.

    The interior of giant gas planets like Jupiter is so hot and dense that hydrogen turns into a metal. (NASA; ESA; A. Simon/Goddard Space Flight Center)

    How to Build a Cosmic Accelerator

    In a third example of the extreme universe, tremendously powerful cosmic particle accelerators – near supermassive black holes, for instance – propel streams of ionized gas, called plasma, hundreds of thousands of light-years into space. The energy stored in these streams and in their electromagnetic fields can convert into a few extremely energetic particles, which produce very brief but intense bursts of gamma rays that can be detected on Earth.

    Scientists want to know how these energy boosters work because it would help them better understand the universe. It could also give them fresh ideas for building better accelerators – particle racetracks that are at the heart of a large number of fundamental physics experiments and medical devices.

    Researchers believe one of the main driving forces behind cosmic accelerators could be “magnetic reconnection” – a process in which the magnetic field lines in plasmas break and reconnect in a different way, releasing magnetic energy.

    “Magnetic reconnection has been observed in the lab before, for instance in experiments with two colliding plasmas that were created with high-power lasers,” says Frederico Fiúza, a researcher from SLAC’s High Energy Density Science Division and the principal investigator of a theoretical study*** published March 3 in Physical Review Letters. “However, none of these laser experiments have seen non-thermal particle acceleration – an acceleration not just related to the heating of the plasma. But our work demonstrates that with the right design, current experiments should be able to see it.”

    His team ran a number of computer simulations that predicted how plasma particles would behave in such experiments. The most demanding calculations, with about 100 billion particles, took more than a million CPU hours and more than a terabyte of memory on Argonne National Laboratory’s Mira supercomputer.

    “We determined key parameters for the required detectors, including the energy range they should operate in, the energy resolution they should have, and where they must be located in the experiment,” says the study’s lead author, Samuel Totorica, a PhD student in Tom Abel’s group at Stanford University’s and SLAC’s Kavli Institute for Particle Astrophysics and Cosmology (KIPAC). “Our results are a recipe for the design of future experiments that want to study how particles gain energy through magnetic reconnection.”

    Cosmic particle accelerators, for instance near supermassive black holes, propel streams of ionized gas, called plasma, hundreds of thousands of light-years into space. (NASA/JPL-Caltech)

    Meteor impacts, planetary science and cosmic accelerators are just three of a large number of laboratory astrophysics topics that will be discussed at the 11th International Conference on High Energy Density Laboratory Astrophysics (HEDLA2016), to be held May 16-20 at SLAC.

    Other contributions to the projects described in this feature came from researchers at the GSI Helmholtz Center for Heavy Ion Research, Germany; the Max Planck Institute for the Physics of Complex Systems, Germany; Sandia National Laboratories, Albuquerque; the Technical University Darmstadt, Germany; the University of California, Los Angeles; the University of Oxford, UK; the University of Rostock, Germany; and the University of Warwick, UK. Funding was received from the DOE Office of Science and its Fusion Energy Sciences program. Other funding sources included the Department of Defense; the German Ministry for Education and Research (BMBF); the German Research Foundation (DFG); the National Center for Supercomputing Alliance (NCSA); the National Nuclear Security Administration (NNSA); and the National Science Foundation (NSF).

    *D. Kraus et al., Nature Communications, 14 March 2016 (10.1038/ncomms10970):
    Nanosecond formation of diamond and lonsdaleite by shock compression of graphite
    Science team and affiliations:


    Department of Physics, University of California, Berkeley, California 94720, USA
    D. Kraus, B. Barbrel & R. W. Falcone
    SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA
    A. Ravasio, M. Gauthier, L. B. Fletcher, B. Nagler, E. J. Gamboa, S. Göde, E. Granados, H. J. Lee, W. Schumaker & S. H. Glenzer
    Centre for Fusion, Space and Astrophysics, Department of Physics, University of Warwick, Coventry CV4 7AL, UK
    D. O. Gericke
    Max-Planck-Institut für Physik Komplexer Systeme, Nöthnitzer Strasse 38, 01187 Dresden, Germany
    J. Vorberger
    Institute of Radiation Physics, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstrasse 400, 01328 Dresden, Germany
    J. Vorberger
    Institut für Kernphysik, Technische Universität Darmstadt, Schlossgartenstrasse 9, 64289 Darmstadt, Germany
    S. Frydrych, J. Helfrich, G. Schaumann & M. Roth
    Lawrence Livermore National Laboratory, Livermore, California 94550, USA
    B. Bachmann & T. Döppner
    Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, UK
    G. Gregori
    GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt, Germany
    P. Neumayer


    D.K., R.W.F., B.N., H.J.L., T.D., S.H.G., G.S., D.O.G., J.V., G.G., P.N. and M.R. were involved in the project planning. D.K., A.R., M.G., S.F., J.H., L.B.F., B.N., B. Barbrel, B. Bachmann, E.J.G., S.G., E.G., H.J.L., W.S. and T.D. carried out the experiment. G.S., J.H., S.F., M.R. and D.K. designed and built the samples. Experimental data were analysed and discussed by D.K., S.H.G., A.R., M.G., D.O.G., J.V. and T.D. The manuscript was written by D.K., S.H.G. and D.O.G.

    **P. Davis et al., Nature Communications, 15 April 2016 (10.1038/ncomms11189)
    X-ray scattering measurements of dissociation-induced metallization of dynamically compressed deuterium

    Science team and affiliations:


    University of California, Berkeley, California 94720, USA
    P. Davis & R. W. Falcone
    Lawrence Livermore National Laboratory, PO Box 808, Livermore, California 94551, USA
    P. Davis, T. Döppner, J. R. Rygg, C. Fortmann, L. Divol, A. Pak, P. Celliers, G. W. Collins & O. L. Landen
    University of California, Los Angeles, California 90095, USA
    C. Fortmann
    SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA
    L. Fletcher & S. H. Glenzer
    Institut für Physik, Universität Rostock, D-18051 Rostock, Germany
    A. Becker, B. Holst, P. Sperling & R. Redmer
    Sandia National Laboratories, Albuquerque, New Mexico 87185, USA
    M. P. Desjarlais


    P.D., T.D., J.R.R., A.P. and L.F. performed the experiments. P.D., J.R.R. and S.H.G. analysed the data. C.F., A.B., B.H., P.S. and R.R. performed simulations of ionization, dissociation, reflectivity and conductivity; L.D. performed hydrodynamic simulations. P.C., G.W.C., M.P.D., O.L.L., R.W.F., R.R. and S.H.G. provided additional support for experiment design, analysis and interpretation. P.D. and S.H.G. wrote the paper.

    ***S. Totorica et al., Physical Review Letters, 3 March 2016 (10.1103/PhysRevLett.116.095003).
    Nonthermal Electron Energization from Magnetic Reconnection in Laser-Driven Plasmas

    Science team:
    Samuel R. Totorica1,2,3, Tom Abel1,2,4, and Frederico Fiuza3,*

    1Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California 94305, USA
    2Department of Physics, Stanford University, Stanford, California 94305, USA
    3High Energy Density Science Division, SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA
    4SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA


    See the full article here.


    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

  • richardmitnick 11:16 am on April 15, 2016 Permalink | Reply
    Tags: , , Magnetic remanence, , Physics   

    From EPFL: “A single-atom magnet breaks new ground for future data storage” 

    EPFL bloc

    École Polytechnique Fédérale de Lausanne EPFL

    Nik Papageorgiou

    Scientists at EPFL and ETH Zürich have built a single-atom magnet that is the most stable to-date. The breakthrough paves the way for the scalable production of miniature magnetic storage devices.

    Magnetic storage devices such as computer hard drives or memory cards are widespread today. But as computer technology grows smaller, there is a need to also miniaturize data storage. This is epitomized by the effort to build magnets the size of a single atom. However, a magnet that small is very hard to keep “magnetized,” which means that it would be unable to retain information for a meaningful amount of time. In a breakthrough study published in Science*, researchers led by EPFL and ETH Zürich have now built a single-atom magnet that, although working at around 40 kelvin (−233.15 °C), is the smallest and most stable to date.

    Magnets work because of electron spin, which is a complicated motion best imagined as a spinning top. Electrons can spin up or down (something like clockwise or anti-clockwise), which creates a tiny magnetic field. In an atom, electrons usually come in pairs with opposite spins, thus cancelling out each other’s magnetic field. But in a magnet, atoms have unpaired electrons, and their spins create an overall magnetic field.

    A challenge today is to build smaller and smaller magnets that can be implemented in data storage devices. The problem is something called “magnetic remanence”, which describes the ability of a magnet to remain magnetized. Remanence is very difficult to observe from a single atom, because environmental fluctuations can flip its magnetic fields. In terms of technology, a limited remanence would mean limited information storage for atom-sized magnets.

    A team of scientists led by Harald Brune at EPFL and Pietro Gambardella at ETH Zürich has built a prototypical single-atom magnet based on atoms of the rare-earth element holmium. The researchers placed single holmium atoms on ultrathin films of magnesium oxide, which were previously grown on a surface of silver. This method allows the formation of single-atom magnets with robust remanence. The reason is that the electron structure of holmium atoms protects the magnetic field from being flipped.

    The magnetic remanence of the holmium atoms is stable at temperatures around 40 Kelvin (−233.15 °C), which, though far from room temperature, is the highest ever achieved. The scientists’ calculations demonstrate that the remanence of single holmium atoms at these temperatures is much higher than the remanence seen in previous magnets, which were made up of 3 to 12 atoms. This makes the new single-atom magnet a worldwide record in terms of both size and stability.

    This project involved a collaboration of EPFL’s Institute of Condensed Matter Physics with ETH Zürich, Swiss Light Source (PSI), Vinča Institute of Nuclear Sciences (Belgrade), the Texas A&M University at Qatar and the European Synchrotron Radiation Facility (Grenoble).

    It was funded by the Swiss National Science Foundation, the Swiss Competence Centre for Materials Science and Technology (CCMX), the ETH Zurich, EPFL and the Marie Curie Institute, and the Serbian Ministry of Education and Science.


    Donati F, Rusponi S, Stepanow S, Wäckerlin C, Singha A, Persichetti L, Baltic R, Diller K, Patthey F, Fernandes E, Dreiser J, Šljivančanin Ž, Kummer K, Nistor C, Gambardella P, Brune H. Magnetic remanence in single atoms. Science 14 April 2016. DOI: 10.1126/science.aad9898

    *Science paper:
    Magnetic remanence in single atoms

    Science team:
    F. Donati1, S. Rusponi1, S. Stepanow2, C. Wäckerlin1, A. Singha1, L. Persichetti2, R. Baltic1, K. Diller1, F. Patthey1, E. Fernandes1, J. Dreiser1,3, Ž. Šljivančanin4,5, K. Kummer6, C. Nistor2, P. Gambardella2,*, H. Brune1,*

    1Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Station 3, CH-1015 Lausanne, Switzerland.
    2Department of Materials, ETH Zürich, Hönggerbergring 64, CH-8093 Zürich, Switzerland.
    3Swiss Light Source, Paul Scherrer Institute, CH-5232 Villigen PSI, Switzerland.
    4Vinča Institute of Nuclear Sciences (020), Post Office Box 522, 11001 Belgrade, Serbia.
    5Texas A&M University at Qatar, Doha, Qatar.
    6European Synchrotron Radiation Facility (ESRF), F-38043 Grenoble, France.

    *Corresponding author. E-mail: pietro.gambardella@mat.ethz.ch (P.G.); harald.brune@epfl.ch (H.B.)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


    EPFL is Europe’s most cosmopolitan technical university with students, professors and staff from over 120 nations. A dynamic environment, open to Switzerland and the world, EPFL is centered on its three missions: teaching, research and technology transfer. EPFL works together with an extensive network of partners including other universities and institutes of technology, developing and emerging countries, secondary schools and colleges, industry and economy, political circles and the general public, to bring about real impact for society.

  • richardmitnick 4:50 pm on April 8, 2016 Permalink | Reply
    Tags: , , , Physics   

    From New Scientist: “Physics adventures down the superfluid supersonic black hole” 



    9 April 2016
    Anil Ananthaswamy

    What happens to the stuff a black hole sucks in? Jan Kornstaedt/Gallerystock

    IMAGINE lying in a giant bathtub when someone pulls the plug. Sliding towards a watery exit from this world, it gets worse. The fluid gathers pace to supersonic speed and you realise no one can even hear you scream. Your sounds are transported with you down the drain, lost to the bathtub for all time.

    It is the stuff of surrealist nightmares – and a pretty fair description of what happens to an atom or a photon of light as it crosses a black hole’s event horizon. Black holes famously devour anything that comes too close: light, matter, information. In doing so, they cause some almighty headaches for our best theories of physical reality.

    Or do they? Although we are pretty certain black holes exist, we’ve never observed one directly, let alone got up close and personal. That’s where the bathtub analogy is now coming into serious play. Get fully to grips with it, and we could have a new way not just to fathom black holes, but also to crack some of cosmology’s other toughest nuts – from why the expansion of the universe is accelerating to how it all began.

    There’s a catch, naturally. To make the analogy real, we can’t use any old water from the tap. It takes a fluid so extreme and bizarre that it was fabricated for the first time just 20 years ago, and only exists within a whisker of absolute zero, the lowest temperature there is. With that magic ingredient, you can begin to make a superfluid sonic black hole.

    Black holes are the most mysterious of the many predictions made by general relativity, Einstein’s theory of gravity that he formulated just over a century ago. General relativity is a peerless guide to the workings of gravity, but puts gravity at odds with the other known forces of nature. Unlike them, gravity is not caused by the exchange of quantum particles; instead, massive bodies bend space and time around them, creating dents in the fabric of the universe that dictate how other bodies move.

    The world according to general relativity contains some shady spectres – invisible dark matter to explain why galaxies whirl at the speeds they do, and dark energy to explain why the expansion of the universe is accelerating. The theory also fails completely when you wind the universe back to its first instants and the big bang. Here, it predicts a seemingly nonsensical “singularity” of infinite temperature and density.

    Still, black holes take the biscuit. We now think these impossibly dense scrunchings of mass exist across the cosmos – where massive stars have collapsed in on themselves, and at the heart of galaxies including our own.

    For all their heft, however, black holes seem strangely tenuous, at least in theory. In 1974, physicist Stephen Hawking used quantum rules to show that all black holes must eventually evaporate, apparently destroying any information they might have swallowed, a physical no-no.

    According to quantum physics, space-time is a roiling broth of particles and their antiparticles that pop up spontaneously in pairs, disappearing again almost instantaneously. But when such a pair pops up at the edge of a black hole’s event horizon – the point beyond which nothing can escape its gravity – sometimes one will have the energy to whizz away, while the other falls in. By the law of conservation of energy, this second particle must have negative energy, causing the black hole to slowly lose its oomph and evaporate. The signal this is happening is a faint stream of escaping partner particles – Hawking radiation.

    In theory at least, Hawking radiation has a temperature: the smaller the black hole, the warmer it is. For a black hole 30 times the mass of our sun, it is a titchy nanokelvin or so, impossible to measure in the chaotic surroundings of an astrophysical black hole. Hopes were high that the Large Hadron Collider at CERN near Geneva, Switzerland, might produce mini black holes with measurable Hawking radiation – but not a peep.
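The “titchy nanokelvin or so” figure follows directly from Hawking’s temperature formula, T = ħc³/(8πGMk_B), which also shows why smaller black holes are warmer: the mass sits in the denominator. A quick back-of-the-envelope check in SI units:

```python
import math

# Physical constants (SI units)
HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.9979e8        # speed of light, m/s
G = 6.6743e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.3807e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a black hole: T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

T = hawking_temperature(30 * M_SUN)
print(f"{T:.2e} K")  # ~2e-9 K: a couple of nanokelvin, as quoted
```

A 30-solar-mass hole radiates at about two nanokelvin, hopelessly faint against the cosmic microwave background at 2.7 K, which is why an astrophysical detection is out of the question.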


    “This is a pity, because if they had, I would have got a Nobel prize,” Hawking said in a BBC lecture this February.

    Bose-Einstein condensates arise when supercooled gases of atoms all enter the same quantum state. Pascal Goetgheluck/Science Photo Library

    Wind back to 1981, however, and physicist William Unruh of the University of British Columbia in Vancouver, Canada, was thinking of ways to study Hawking radiation. It led him to some strange parallels between the “metric” – a mathematical construction in general relativity that expresses the geometry of space-time – and equations used to describe superfluid flow.

    Unruh showed that the equations governing such flow at supersonic speed mimicked the metric of space-time around a black hole. This implied a superfluid could create a black hole that would trap “phonons” of sound, just as an astrophysical black hole traps photons of light. It’s the surrealist nightmare, with an added twist. Just as with an astrophysical black hole, quantum fluctuations would make a sonic black hole emit Hawking radiation – but made of phonons, not photons.

    Unruh realised this could be just the thing to test Hawking’s idea. Prove the radiation exists in one situation, and the mathematical mirror provides a pretty good indication that it does in the other.

    He was rather ahead of the times. Although the first superfluid state was created in liquid helium in the late 1930s, for a sonic black hole the fluid had to be flowing faster than the speed of sound in that fluid – in superfluid helium, that’s hundreds of metres a second. Experimental verification of Unruh’s idea would have to wait.

    Then, in 1995, came a Nobel-prizewinning development: the creation of the first Bose-Einstein condensates (BECs). These are an entirely different state of matter beyond solid, liquid and gas, made up of collections of atoms cooled to temperatures so low, sometimes a few nanokelvin above absolute zero, that the individual atoms lose their identity. They occupy the same quantum state, and behave and flow as one.

    Sonic event horizon

    Creating this extreme, bizarre form of superfluid was an experimental tour de force, and came with an important detail as far as the black hole story was concerned: in a BEC, the speed of sound is just millimetres a second. Sonic black holes suddenly looked feasible.
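The millimetres-per-second figure can be checked with the Bogoliubov speed of sound in a dilute condensate, c = sqrt(gn/m) with interaction strength g = 4πħ²a/m. The rubidium-87 scattering length and density below are typical textbook values, assumed for illustration rather than taken from the experiments described here.

```python
import math

HBAR = 1.0546e-34    # reduced Planck constant, J*s
M_RB87 = 1.443e-25   # mass of a rubidium-87 atom, kg
A_SCATT = 5.3e-9     # s-wave scattering length of Rb-87, m (~100 Bohr radii)
DENSITY = 1e20       # typical condensate density, atoms per m^3

def bogoliubov_sound_speed(n, a, m):
    """Speed of sound c = sqrt(g*n/m), with contact interaction g = 4*pi*hbar^2*a/m."""
    g = 4 * math.pi * HBAR**2 * a / m
    return math.sqrt(g * n / m)

c_s = bogoliubov_sound_speed(DENSITY, A_SCATT, M_RB87)
print(f"{c_s * 1e3:.1f} mm/s")  # ~2 mm/s: millimetres per second, as stated
```

Compare that with hundreds of metres per second in superfluid helium: a condensate only has to be nudged to a gentle walking-pace-of-an-ant flow to go “supersonic”.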

    Iacopo Carusotto, a theorist at the BEC Center in Trento, Italy, was initially a sceptic. But returning from holiday in 2005, he found himself sitting next to a college friend on a train who turned out to be a gravitational physicist, and the two got talking shop. The friend introduced Carusotto to Roberto Balbinot, an expert on general relativity at the University of Bologna. These two started to build computer models of sonic black holes that took into account factors such as how the speed of sound varies according to how a fluid is moving, its temperature, the wavelength of the phonons and so on.

    Carusotto still has the first image spewed out by the simulation in 2008 hanging on the wall in his office. “I jumped off my chair,” he says. It shows that as a Bose-Einstein condensate starts flowing at supersonic speeds, a sonic event horizon forms and phonons of Hawking radiation spontaneously appear. “To see it so precisely in agreement with theory was a great surprise, and a great success,” says Carusotto.

    Jeff Steinhauer, an atomic physicist at the Technion-Israel Institute of Technology in Haifa, was the man who could make the analogy an experimental reality. He had developed some crucial tools: a way of measuring a condensate’s temperature to an accuracy of a nanokelvin, and complex systems of adjustable magnetic fields to stop condensates sagging and being disrupted under the effect of real gravity. By 2009, he was able to use lasers to accelerate a long, thin stretch of condensate to supersonic speed. The result was the first sonic black hole with an event horizon (Physical Review Letters*, vol 105, p 240401).

    Measuring individual phonons to verify the existence of Hawking radiation proved more tricky. In 2014, Steinhauer reached a halfway house by accelerating a thin condensate stream to supersonic speed and then allowing it to slow again. This created the equivalent of two event horizons – a black-hole horizon from which no sound could escape, and a “white-hole” horizon into which no sound could enter. In such a situation, Hawking phonons produced by the black hole bounce between the two horizons, producing more and more Hawking radiation in a similar way to how light is amplified in a laser.

    Bose-Einstein condensates may hold the answer. National Institute of Standards and Technology/Science Photo Library

    And amplified radiation is certainly what Steinhauer saw. “It was very exciting to suddenly see this effect,” he says. “It was very gratifying to think that the physics Hawking predicted was creating it.” The question raised by Carusotto and others since is how to tell for certain whether the initial phonon was created by spontaneous, random quantum fluctuation rather than some classical process. Final confirmation could be coming soon: Steinhauer currently has a paper** under peer review in which he reports seeing unadorned Hawking radiation from a single sonic horizon (arxiv.org/abs/1510.00621).

    Steinhauer himself wouldn’t discuss this work further, but theorist Stefano Liberati of the International School for Advanced Studies in Trieste, Italy, is excited. “If this result is confirmed, it’s definitely a major breakthrough,” he says. “It would be the first experimental detection of Hawking radiation.”

    Whether it’s enough for Hawking to get his Nobel prize remains to be seen, but Liberati thinks this work is just the beginning. Not only might sonic black holes illuminate further mysteries of the real thing (see “What are black holes made of?”), but get superfluids flowing in different ways and you can create other space-time geometries that equate to other cosmological problems. One is the exponential expansion of the universe in the period known as inflation, thought to have occurred immediately after the big bang. Current cosmological theories predict that during this phase, the quantum fluctuations of space-time also got stretched, eventually giving rise to the particles we see everywhere today. We can’t test this idea directly, but Liberati and his colleagues have shown how a similar situation implemented using a condensate should give rise to phonons. “You should be able to reproduce the salient features of cosmological particle creation,” he says.

    One way of doing this involves using lasers or magnetic fields to suddenly compress a condensate, thus changing the speed of sound within it. This creates an analogy to the change in light’s travel time between two points in space as the universe expands. In 2012, Christoph Westbrook and his colleagues at the Charles Fabry Laboratory at the University of Paris-Sud in France did just that and saw indirect effects of phonon creation – although the experimental temperature of 200 nanokelvin was still too high to rule out thermal fluctuations as the source.

    Liberati suggests that a similar analogy could provide clues to another huge cosmological conundrum, dark energy. The peculiar problem of dark energy is not so much that it exists. General relativity allows for a “cosmological constant” that represents the energy of empty space and whose effect would be to expand space ever faster, just as dark energy is thought to do. But calculating the value of this constant from observations gives a number 10^120 times smaller than the value you get from quantum field theory.

    Again, Bose-Einstein condensates could hold the answer. In a condensate, not all the atoms that you cool down end up in the lowest-energy condensate state: you never get a perfect condensate. What’s more, these stragglers “backreact” with the condensate, an interaction that appears in the equations in a similar way to the cosmological constant.

    “The superfluid analogy might provide clues to the cosmic conundrum of dark energy”

    To Liberati, this is suggestive of the real nature of dark energy, and space-time itself. What if the fabric of the universe, and hence gravity, emerge from some as-yet-unknown “atoms” of space-time, just as a superfluid state emerges from normal atoms when they are cooled? If some of these atoms are left over and do not form the basis of space-time, then their backreaction with those that do could reduce the value of the cosmological constant to match what astronomers find.

    In this view, the equations of general relativity might just be a high-level picture that emerges from a more fundamental description. In fluid dynamics, the set of equations known as Euler’s equations similarly describes the flow as a whole, but not the molecular interactions that underlie it. “It’s teaching you a very important lesson,” says Liberati. “If gravity is emergent, the only way you can calculate the cosmological constant is by knowing the fundamental system from which gravity emerges.”

    The quest for a more fundamental picture of gravity is central to the search for a “theory of everything” that will finally unite all the forces of nature, gravity included. So far, convincing answers have been thin on the ground – in part because we have lacked any way to test ideas experimentally. In that sense, listening carefully to sounds swirling through superfluids could be the stuff of physicists’ dreams, rather than their nightmares. “It’s a success story,” says Liberati. “It’s a case in which theoretical physics finally made connection with experiments.”


    What are black holes made of?

    For all we know, it could be snails and puppy-dog tails. There is no microscopic theory of a black hole’s innards, but Georgi Dvali of the Ludwig Maximilian University of Munich, Germany, thinks we might find clues in parallels between how black holes and Bose-Einstein condensates process information.

    Black holes are careless stewards of information, apparently dribbling it away as they evaporate (see main story), but they are efficient stores of it. It would take about 10^-5 electronvolts of energy to stuff one quantum bit of information into a cubic-centimetre box. To stuff that qubit into a black hole of the same size – which would have the mass of Earth – would take 10^66 times less energy, says Dvali.

    Intriguingly, Dvali and his colleagues have shown that Bose-Einstein condensates seem to process information similarly to black holes. “There is a one-to-one correspondence,” he says. “In particular, the system delivers very cheap qubits for storing information.”

    Bose-Einstein condensates exist in a so-called quantum-critical state, transitioning from a normal state to one in which all the atoms act as a coherent quantum whole. Dvali speculates that the parallels indicate that black holes are quantum-critical states too – albeit not of atoms, but of quantum particles of gravity known as gravitons.


    *Realization of a sonic black hole analogue in a Bose-Einstein condensate
    Oren Lahav, Amir Itah, Alex Blumkin, Carmit Gordon, Shahar Rinott,
    Alona Zayats, and Jeff Steinhauer
    Technion – Israel Institute of Technology, Haifa, Israel

    **Observation of thermal Hawking radiation and its entanglement in an analogue black hole
    Jeff Steinhauer
    Department of Physics, Technion—Israel Institute of Technology, Technion City, Haifa 32000, Israel

    See the full article here.


  • richardmitnick 8:18 am on April 8, 2016 Permalink | Reply
    Tags: , Astrophysical gasdynamics, Physics   

    From astrobites: “Expected or not – It’s all the same physics” 



    Apr 8, 2016
    Michael Küffmeier

    Title: Surprises in astrophysical gasdynamics
    Authors: Steven A. Balbus & William J. Potter

    Authors’ Institutions: Department of Physics, Astrophysics, University of Oxford; Laboratoire de Radioastronomie, École Normale Supérieure, Paris; Institut universitaire de France, Maison des Universités, Paris

    Status: to appear in Rep. Prog. Phys.

    Why astrophysics is beautiful

    A fascinating part of astrophysics is the following: regardless of the enormous range of scales occurring in astrophysics, the essential part of a phenomenon is often determined by one underlying fundamental process. Even more remarkably, such a key process can be closely analogous to the predominant process known from totally unrelated phenomena. That’s the beauty of doing physics – in the end it all comes down to a few fundamental equations.

    Today’s astrobite deals exactly with this underlying beauty and presents some astonishing analogies between astrophysical processes and everyday-life phenomena. In fact, this astrobite is more a reading recommendation than a summary and presents only a small subset of the content of the paper. The featured article gives an overview of several fascinating astrophysical phenomena that can be explained with only four fundamental equations (namely the equations of magnetohydrodynamics, MHD).

    To quicken your appetite

    The paper discusses examples of idealized physical setups that may be or become unstable when one particular parameter is changed. These instabilities can be expressed mathematically by solving the underlying equations analytically. Thinking through and solving the given examples provides a lot of deeper insight into processes one needs to be aware of in order to comprehend a more complex (astro)physical situation. However, our website is not called astrofeast, which is why this astrobite consists only of a brief heuristic description of two interesting, well-studied situations – evaporation in a cloudy medium and the magnetorotational instability. No equations occur in this astrobite, but to understand the underlying effects properly, you have to dig deeper into the maths.

    Figure 1: An illustration of the setup in the first example. A spherical cloud of cold and dense gas evaporates with an expansion velocity into the hot and low-density surroundings due to the heat flux pointing towards the center of the cloud. [This figure is a slightly modified version of figure 1 in the featured article.]

    Evaporation in a cloudy medium

    As a first example, the authors discuss the setup of a cool interstellar cloud, which they approximate as a sphere filled with cold and dense gas, sitting in a hot medium of low density (figure 1). The pressure is approximately the same everywhere and gravity is negligible. In such a scenario the outer part of the cloud will evaporate, and you might intuitively think that the mass loss rate is proportional to the surface area and thus to the square of the cloud’s radius. However, solving the problem properly reveals that the mass loss rate scales only with the radius, not with its square. The reason is astonishing: it turns out that the mass loss rate is proportional to the capacitance of the cloud, a quantity that otherwise occurs in the very different context of a conductor with zero surface potential and a given potential at infinity.
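The linear scaling can be verified directly. In the conduction-limited regime, the steady heat flux onto the cold sphere solves Laplace's equation outside it, which is exactly the mathematics behind the capacitance of a conducting sphere. The sketch below (my own illustration in arbitrary units, not the paper's calculation) evaluates the flux by numerical quadrature and shows it doubling, rather than quadrupling, when the radius doubles:

```python
import numpy as np

def evaporative_flux(R, R_out=1000.0, T_hot=1.0, kappa=1.0, n=200_000):
    """Steady conduction onto a cold sphere of radius R in a hot medium.

    Solving d/dr(r^2 dT/dr) = 0 with T(R) = 0 and T(R_out) = T_hot gives a
    constant inward flux F = 4*pi*kappa*T_hot / integral_R^R_out dr/r^2,
    the same form as the capacitance of a conducting sphere.
    """
    r = np.linspace(R, R_out, n)
    f = 1.0 / r**2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule
    return 4 * np.pi * kappa * T_hot / integral

ratio = evaporative_flux(2.0) / evaporative_flux(1.0)
print(f"{ratio:.2f}")  # ~2.0, not 4.0: the mass-loss rate scales with R, not R^2
```

Since the cloud mass scales as R³ while the loss rate scales as R, small clouds evaporate disproportionately fast, with lifetimes growing as R².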

    Magnetorotational instability

    Probably the most actively investigated instability in astrophysics during the last 25 years or so – especially among researchers working on accretion disks – has been the magnetorotational instability, or MRI for short. Interestingly enough, a specific case of the MRI was already discovered by Velikhov in 1959 and independently by Chandrasekhar in 1960, but it did not attract much attention before its rediscovery by Balbus (one of the authors of the featured paper) & Hawley in 1991.

    One reason why the MRI has been studied so intensively since then is that it transports angular momentum radially outward and thus provides a possible mechanism for reducing the angular momentum in accretion disks around protostars and black holes. Regardless of whether the MRI actually is the main contributor to angular momentum transport (recent studies indicate that magnetic braking is more likely the main mechanism around young protostars), the authors point out a very fascinating feature that magnetic fields can have: the MRI sets in as soon as a magnetic field is present, but none of its characteristics (instability criterion, maximum growth rate, most unstable displacement eigenvector) depends on any property of the magnetic field, neither its shape nor its strength!

    That is why the MRI can be understood heuristically with the following analogy (figure 2): imagine two point masses on two different orbits in a rotating disk, connected by a spring (the spring acts as an analog for the magnetic field). From Kepler’s third law, we know that the inner point mass rotates faster than the outer one. Now consider what will happen. The spring between the two point masses is under tension and forces the inner point mass to move more slowly, while the outer point mass gets accelerated. But what happens next? The inner point mass, now rotating slower than the local Keplerian rate, falls inwards, and the outer point mass, moving faster than the Keplerian rate, moves outwards.

    Figure 2: A schematic illustration of the magnetorotational instability. Two fluid elements are connected by a spring, and due to this connection the inner element transfers its angular momentum to the outer one, causing the two to move apart from each other.
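The spring analogy can be put into a few lines of numerical integration (a toy sketch of the analogy only, not the MRI analysis in the paper): two particles on nearby circular orbits around a unit central mass, joined by a weak spring. As the orbits shear apart, the spring tension torques the inner particle backwards and the outer one forwards, transferring angular momentum outward exactly as described above.

```python
import numpy as np

GM = 1.0   # gravitational parameter of the central mass
K = 0.05   # weak spring constant (illustrative value)

def accel(p, rest_len):
    """Accelerations of two unit masses: central gravity plus the connecting spring."""
    a = np.empty_like(p)
    for i in range(2):
        r = np.linalg.norm(p[i])
        a[i] = -GM * p[i] / r**3              # Keplerian gravity
    d = p[1] - p[0]
    dist = np.linalg.norm(d)
    f = K * (dist - rest_len) * d / dist      # Hooke tension along the spring
    a[0] += f
    a[1] -= f
    return a

def ang_mom(p, v):
    """Specific angular momenta (z-components) of both particles."""
    return p[:, 0] * v[:, 1] - p[:, 1] * v[:, 0]

# Circular orbits at r = 1.0 (inner) and r = 1.05 (outer); spring starts unstretched.
pos = np.array([[1.0, 0.0], [1.05, 0.0]])
vel = np.array([[0.0, np.sqrt(GM / 1.0)], [0.0, np.sqrt(GM / 1.05)]])
rest_len, dt = 0.05, 1e-3

L0 = ang_mom(pos, vel)
for _ in range(10_000):                       # velocity-Verlet integration to t = 10
    vel = vel + 0.5 * dt * accel(pos, rest_len)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accel(pos, rest_len)

dL = ang_mom(pos, vel) - L0
print(dL)   # inner particle loses angular momentum, outer particle gains it
```

The same torque that slows the inner particle makes it fall further in and orbit even faster, steepening the shear between the pair: that runaway is the instability.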

    In summary: Be careful and don’t get fooled by intuition!

    Today’s astrobite briefly presented two astrophysical phenomena with remarkable physical analogs that are not immediately obvious. The article itself contains additional examples that are even more counter-intuitive and provides useful references to very detailed studies of the presented phenomena. These results can also be understood as a warning, which might even be the biggest take-home message of the article: intuition is important, but it can be misleading. As shown in the article, it is crucial to carefully consider the validity and effects of a model’s assumptions before setting up your own model or before praising the results of others.

    Science Paper:
    Surprises in astrophysical gasdynamics

    Science team:
    Steven A. Balbus (1,2,3) and William J. Potter (1)

    1 Department of Physics, Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH
    E-mail: steven.balbus@physics.ox.ac.uk
    2 Laboratoire de Radioastronomie, École Normale Supérieure, 24 rue Lhomond, 75231 Paris CEDEX 05, France
    3 Institut universitaire de France, Maison des Universités, 103 blvd. Saint-Michel, 75005 Paris, France

    See the full article here.


    What do we do?

    Astrobites is a daily astrophysical literature journal written by graduate students in astronomy. Our goal is to present one interesting paper per day in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.
    Why read Astrobites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.
    Our goal is to solve this problem, one paper at a time. In 5 minutes a day reading Astrobites, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in a new area of astronomy.
