Updates from May, 2016

  • richardmitnick 3:10 pm on May 10, 2016
    Tags: Tiny Tests Seek the Universe’s Big Mysteries

    From Quanta: “Tiny Tests Seek the Universe’s Big Mysteries” 

    Quanta Magazine

    May 3, 2016
    Joshua Sokol

    Huge supercolliders aren’t the only way to search for new physical phenomena. A new generation of experiments that can fit on a tabletop are probing the nature of dark matter and dark energy and searching for evidence of extra dimensions.

    Access the mp4 video here.
    Video: David Moore of Stanford University describes how, inside this chamber, silica spheres probe for distortions of gravity. Peter DaSilva for Quanta Magazine

    To answer some of the biggest unsolved questions in the cosmos, you might not need a supercollider. For decades, theorists have been dreaming up a Wild West of exotic physics that could be visible at scales just below the thickness of a dollar bill — provided you build a clever-enough experiment, one small enough to fit on a tabletop. Over distances of a few dozen microns — a little thinner than that dollar — known forces like gravity could get weird, or, even more exciting, previously unknown forces could pop up. Now a new generation of tabletop experiments is coming online to look into these phenomena.

    One such experiment uses levitated spheres of silica — “basically a glass bead that we hold up using light,” according to Andrew Geraci, the lead investigator — to search for hidden forces far weaker than anything we can imagine. In a paper* uploaded to the scientific preprint site arxiv.org in early March, his team announced that they had achieved sensitivities of a few zeptonewtons — a level of force 21 orders of magnitude below a newton, which is about what is needed to depress a computer key.

    “A bathroom scale might be able to tell your weight to maybe 0.1 newtons if it was very accurate,” said Geraci, a physicist at the University of Nevada, Reno. “If you had a single virus land on you, that would be about 10^–19 newtons, so we’re about two orders of magnitude below that.”
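
    The comparison Geraci is making is pure order-of-magnitude arithmetic. A minimal sketch, using only the figures quoted in the article (the virus weight is his rough estimate, not a measured value):

```python
import math

# Force scales quoted in the article, in newtons. The virus figure is
# Geraci's own order-of-magnitude estimate, not a measured value.
keypress = 1.0          # ~1 N to depress a computer key
bathroom_scale = 0.1    # resolution of a very accurate bathroom scale
virus_weight = 1e-19    # weight of a single virus landing on you
zeptonewton = 1e-21     # force sensitivity reported by Geraci's team

def orders_below(reference, value):
    """How many powers of ten `value` sits below `reference`."""
    return math.log10(reference / value)

print(round(orders_below(keypress, zeptonewton)))      # 21
print(round(orders_below(virus_weight, zeptonewton)))  # 2
```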

    The targets of these searches feature in some of the most compelling questions in physics, including those that center on the nature of gravity, dark matter and dark energy. “There’s a whole panoply of things these experiments could look for,” said Nima Arkani-Hamed, a physicist at the Institute for Advanced Study in Princeton, N.J. For example, dark matter, the massive stuff whose existence has been inferred only on astronomical scales, might leave faint electric charges behind when it interacts with ordinary particles. Dark energy, the pressure powering the accelerating expansion of the universe, might make itself felt through so-called “chameleon” particles that a tabletop experiment could theoretically be able to spot. And certain theories predict that gravity will be much weaker than expected at short range, while others predict that it will be stronger. If the extra dimensions posited by string theory exist, the tug of gravity between objects separated by a micron might exceed what Isaac Newton’s law predicts by a factor of 10 billion.

    Janet Conrad, a physicist at the Massachusetts Institute of Technology who is not directly involved with any of these small-scale searches, thinks that they complement the work done at massive accelerators such as the Large Hadron Collider.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    “We are like dinosaurs. We have gotten bigger, and bigger, and bigger,” she said. But experiments like these offer the chance for a more agile kind of fundamental physics, in which individual researchers with small devices can make a big impact. “I really do believe that this is a new field,” she said.

    For theorists like Arkani-Hamed, what happens just beyond the limits of our vision is interesting because of a curious numerical connection. The Planck scale, the infinitesimal size scale at which quantum gravity is thought to rule, is 16 orders of magnitude smaller than the weak scale, the neighborhood of particle physics explored at the Large Hadron Collider.

    Theories that blend these length scales often compare the two. (Physicists will take the length of the weak scale, square it, then divide this number by the length of the Planck scale.) The result of the comparison yields a range of distances matching what may be another fundamental scale: one that runs between a micron and a millimeter. Here, Arkani-Hamed suspects, new forces and particles may arise.
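
    That recipe can be checked numerically. A back-of-the-envelope sketch, using approximate textbook values for both lengths (my numbers, not the article’s; only the exponent matters):

```python
# Rough check of the scale comparison described above. Both lengths are
# order-of-magnitude textbook values.
weak_length = 2e-19      # m, roughly hbar*c divided by ~1 TeV
planck_length = 1.6e-35  # m, the Planck length

# Square the weak-scale length, then divide by the Planck length:
new_scale = weak_length**2 / planck_length
print(new_scale)  # ~2.5e-3 m, i.e. of order a millimeter
```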

    Similar sizes arise when physicists consider the dark energy that fills empty space throughout the universe. When that energy density is associated with a length scale on which particles may be acting, it turns out to be about 100 microns — again suggesting this neighborhood would be an auspicious place to look for signs of new physics.
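
    The 100-micron figure follows from dimensional analysis: take the measured dark-energy density and convert it into a length via (ħc/ρ)^(1/4). A hedged back-of-envelope version, with approximate constants:

```python
# Convert the dark-energy density into a length scale. Values are
# approximate; the point is the ~100-micron order of magnitude.
hbar_c = 3.16e-26        # J*m (reduced Planck constant times c)
rho_dark_energy = 6e-10  # J/m^3, approximate measured density

length = (hbar_c / rho_dark_energy) ** 0.25
print(length)  # ~8.5e-5 m, roughly 100 microns
```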

    One such search started in the late 1990s, after Arkani-Hamed and two colleagues suggested that gravity may be leaking into extra dimensions of space, a process that would explain why gravity is far weaker than the other forces known to physics. At scales smaller than the extra dimensions, before gravity had a chance to leak away, its attraction would be stronger than expected. The researchers calculated that these dimensions could be as big as a millimeter in size.
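
    In that picture, gravity’s inverse-square law steepens to a 1/r^(2+n) falloff at separations below the size R of the n extra dimensions, so short-range gravity exceeds the Newtonian extrapolation by a factor of roughly (R/r)^n. An illustrative sketch (the specific numbers are hypothetical, chosen only to show the scaling):

```python
# Illustrative only: how much stronger short-range gravity would be than
# the Newtonian 1/r^2 extrapolation if n extra dimensions of size R
# exist. Below R the force law steepens to 1/r^(2+n), so the
# enhancement factor relative to Newton is (R/r)**n.
def enhancement(r, R, n):
    """Hypothetical short-range enhancement over Newtonian gravity."""
    return (R / r) ** n

# Two masses 1 micron apart, one millimeter-sized extra dimension:
print(enhancement(r=1e-6, R=1e-3, n=1))  # ~1000x stronger than Newton
```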

    This inspired Eric Adelberger and his colleagues to search for those dimensions. They already had the device to do it. In the 1980s, Adelberger and the so-called Eöt-Wash group at the University of Washington had built a device called a “torsion balance” that would twist in response to small forces. At first the group used the balance to search for a “fifth force” that had been proposed based on century-old experimental results. They failed to find it. “We built an apparatus, and we found that this thing wasn’t true,” Adelberger said. “It was so much fun, and it was much easier than we thought it would be.”

    Now they set out to work on Arkani-Hamed’s prediction that gravity would be much stronger at small distances — before it has a chance to leak into extra dimensions — than when objects are farther away.

    Since 2001, the team has published results from four torsion balances, each more sensitive than the last. So far, any diminutive dimensions haven’t revealed themselves. The team first reported that gravity acts normally at a distance of 218 microns. Then they reduced this number to 197 microns, then 56, and finally 42, as reported in a 2013 study. Today, their data come from two different instruments with pendulums. One pendulum twists at a rate determined by the strength of gravity; the other should stay still unless gravity behaves unexpectedly.

    But they haven’t been able to shrink their measurements much beyond 42 microns. Currently, they’re tweaking the 2013 analysis, and they hope to publish updated numbers soon. While Adelberger is hesitant to cite the new limit they’re pushing for, he said it’s unlikely to be under 20 microns. “When you first do something, the bar is relatively low,” he said. “It gets so much harder when you make the distances shorter.”

    Techniques borrowed from atomic physics may indicate another way down the ladder, even to nanoscopic scales.

    In 2010, Geraci, then a physicist at the National Institute of Standards and Technology in Boulder, Colo., suggested a scheme**** to probe hidden forces at tiny scales. Instead of using the pendulums at Washington, small-force hunters could use spheres of silica levitated by lasers. By measuring how nearby objects change the position of a floating bead, this kind of experiment can look at the forces spanning just a few microns.

    The experiment can probe smaller length scales, but there’s a catch. Gravity is most easily measured using massive objects. Geraci’s design, now built, uses spheres just 0.3 microns in size. David Moore, a physicist at Stanford University who works in the lab of Giorgio Gratta, has his own working version that uses larger silica spheres about five microns in diameter. Compared with the Eöt-Wash team’s torsion balances, which are a few centimeters wide, both experiments trade away larger gravitational signals for more precision at close range.

    Geraci’s and Moore’s masses are so light that the teams are not yet able to directly measure the gravitational pull of nearby objects; they can only see it if it turns out to be stronger than predicted by Newton’s law. That may make it harder to determine whether gravity or something else is behind anything strange they might see. “One thing we always like to point out about gravity is that having the force sensitivity to see gravity is basically table stakes to play the game,” said Charlie Hagedorn, a postdoc at Washington. Adelberger adds, “If you want to know what gravity does, you’ve got to be able to see it.”

    But to Geraci and Moore, the levitated beads are a general platform they can use to investigate small physics beyond just gravity. “The vision here is that once you’re able to measure these tiny forces, there’s a lot you can do,” Moore said. At the end of 2014, Moore conducted a search for particles with electric charges much smaller than one electron. Some models of dark matter suggest these “millicharged” particles could have formed in the early universe, and could still be lurking in ordinary matter.

    To try to find these particles, Moore held positively charged spheres between a pair of electrodes. He then zapped the entire apparatus with flashes of ultraviolet light to knock electrons off the electrodes. These electrons then attached to the positively charged spheres, turning them neutral. Then he applied an electric field. If any millicharged particles were still stuck on the spheres, they would impart a small force. Moore didn’t see any effects, which means that any millicharged particles must have an exceedingly small charge, or the particles themselves must be rare, or both.

    In a more recent test published** in April, Moore, working with his colleagues Alex Rider and Charles Blakemore, also used the microspheres to look for so-called “chameleon” particles that may explain dark energy. They didn’t find any, a result that echoed one published*** last year in the journal Science by a team at the University of California, Berkeley.

    “These small-scale experiments are — I don’t know what it’s called in English — ‘wild goose chase’?” said Savas Dimopoulos, a physicist at Stanford who was a co-author of the paper with Arkani-Hamed that proposed the search for millimeter-size extra dimensions. “You don’t really know where to look, but you look wherever you can.”

    For Dimopoulos, these tabletop searches are an appealing cottage industry. They offer a cheap alternative way to study provocative theories. “These ideas have been proposed over the last 40 years, but they’ve been staying on the back burner, because the main focus of fundamental physics has been accelerators,” he said.

    It’s a pitch Dimopoulos has been honing in talks over the last three years. Several experiments like those aimed at short-range forces are in the works, but they’re underfunded and underappreciated. “The field doesn’t even have a proper name,” he said.

    What might help is what Dimopoulos calls a “super lab” — a facility that would bring many such tabletop experiments together under one roof, like the research communities that have built up around high-energy projects like the Large Hadron Collider. Conrad, for her part, would like these endeavors to be better supported while still remaining at universities.

    Either way, both argue that more effort is warranted in the search for lower-energy particles, especially those predicted to lurk at scales only a little smaller than the width of a human hair. “There is a whole zoo of these things,” Dimopoulos said. “High energy is not the only frontier that exists.”

    *Science paper:
    Zeptonewton force sensing with nanospheres in an optical lattice

    **Science paper:
    Search for Screened Interactions Below the Dark Energy Length Scale Using Optically Levitated Microspheres

    ***Science paper
    Atom-interferometry constraints on dark energy

    ****Science paper:
    Short-range force detection using optically-cooled levitated microspheres

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 7:02 pm on May 9, 2016
    Tags: Researchers find unexpected magnetic effect of two thin films

    From MIT: “Researchers find unexpected magnetic effect” 

    MIT News

    May 9, 2016
    David L. Chandler

    Arrows indicate the spin direction in the ferromagnetic insulator (EuS, shown in red) and topological insulator (Bi2Se3, shown in blue) at the interface between the two materials. Image: Ferhat Katmis

    A new and unexpected magnetic effect has taken researchers by surprise, and could open up a new pathway to advanced electronic devices and even robust quantum computer architecture.

    The finding is based on a family of materials called topological insulators (TIs) that has drawn much interest in recent years. The novel electronic properties of TIs might ultimately lead to new generations of electronic, spintronic, or quantum computing devices. The materials behave like ordinary insulators throughout their interiors, blocking electrons from flowing, but their outermost surfaces are nearly perfect conductors, allowing electrons to move freely. The confinement of electrons to this vanishingly thin surface makes them behave in unique ways.

    But harnessing the materials’ promise still faces numerous obstacles, one of which is to find a way of combining a TI with a material that has controllable magnetic properties. Now, researchers at MIT and elsewhere say they have found a way to overcome that hurdle.

    The team at MIT, led by Jagadeesh Moodera of the Department of Physics and postdoc Ferhat Katmis, was able to bond together several molecular layers of a topological insulator material called bismuth selenide (Bi2Se3) with an ultrathin layer of a magnetic material, europium sulfide (EuS). The resulting bilayer material retains all the exotic electronic properties of a TI and the full magnetization capabilities of the EuS.

    But the big surprise was the stability of that effect. While EuS itself is known to retain its ability to hold a magnetic state only at extremely low temperatures, just 17 degrees above absolute zero (17 Kelvin), the combined material keeps those characteristics all the way up to ordinary room temperature. That could make all the difference for developing devices that are practical to operate, and could open up new avenues of device design as well as research into a new area of basic physical phenomena.

    The findings are being reported* in the journal Nature, in a paper by Katmis, Moodera, and 10 others at MIT, along with a multinational, multidisciplinary team from the Oak Ridge and Argonne national laboratories and institutions in Germany, France, and India.

    The room-temperature magnetic effect seen in this work, Moodera says, was something that “wasn’t in anybody’s wildest expectations. This is what astonished us.” Research like this, he says, is still so near the frontiers of scientific knowledge that the phenomena are impossible to predict. “You can’t tell what you’re going to see next week or what’s going to happen” in the next experiment, he says.

    In particular, he says, novel combinations of two materials with very different properties are “an area with very little depth of research.” And getting clear and repeatable results depends on a high degree of precision in the preparation of the surfaces and the joining of the two materials; any contamination or imperfections at the interface between the two – even down to the level of an individual atomic layer – can throw off the results, Moodera says. “What happens, happens where they meet,” he says, and the careful and persistent effort of Katmis in making these materials was key to the new discovery.

    The finding could be a step toward new kinds of magnetic interactions at the interfaces between materials, with stability that could result in magnetic memory devices which could store information at the level of individual molecules, the team says.

    The effect, which the researchers call proximity-induced magnetism, could also enable a new variety of “spintronic” devices based on a property of electrons called spin, rather than on their electrical charge. It might also provide the first practical way of producing a kind of particle called Majorana fermions, predicted by physicists but not yet observed convincingly. That in turn could help in the development of quantum computers, they say.

    “A nice thing about this is that it shows both very fundamental physics and also takes us forward to many possible applications,” Katmis says. He says the effect is somewhat similar to unexpected findings made a decade ago at the interfaces between some oxide materials, which triggered a decade of intensive research.

    This new finding, coupled with other recent quantum behavior observed in TIs, can lead to many possibilities for future electronics and spintronics, the team says.

    “This beautiful work from Moodera’s group is a very exciting demonstration that the whole is greater than the sum of its parts,” says Philip Kim, a professor of physics at Harvard University, who was not involved in this work. “Topological insulators and magnetic insulators are two completely dissimilar materials. Yet they produce very unusual emergent effects at their atomically clean interface,” he adds. “The enhanced interfacial magnetism shown in this work can be very relevant to building up novel spintronics devices that can process information with low energy consumption.”

    The team also included associate professor of physics Pablo Jarillo-Herrero and postdoc Peng Wei at MIT, and researchers at the Institute for Theoretical Physics in Bochum and the Institute for Theoretical Solid State Physics in Dresden, both in Germany; the Ecole Normale Superieure in Paris; and the Institute of Nuclear Physics, in Kolkata, India. The work was supported by the National Science Foundation, Office of Naval Research, and the U.S. Department of Energy.

    *Science paper:
    A high-temperature ferromagnetic topological insulating phase by proximity coupling

    See the full article here.

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 1:18 pm on May 9, 2016
    Tags: Nobel laureate Frank Wilczek joins ASU

    From ASU: “Nobel laureate Frank Wilczek joins ASU” 

    ASU Bloc




    Frank Wilczek, a theoretical physicist and mathematician who shared the Nobel Prize in Physics in 2004, is joining Arizona State University as a professor in the physics department.

    Wilczek will work on a variety of important issues in theoretical physics. He will also be organizing workshops to gather the best and brightest physicists worldwide at ASU to help propel the advancement of the discipline. He is the second Nobel Prize-winning professor to join ASU in the last week.

    “At a minimum I will be giving lectures to advanced students on frontier topics, basically what I’m working on,” he said. “It’s also quite possible I will try to involve students at earlier stages in some of the more practical work, where they don’t need as much theoretical background.”

    Wilczek said he’s looking for a new adventure, and his move to Arizona State will be another step in the evolution of an existing relationship.

    “(My wife) Betsy and I have visited ASU regularly for the past several years,” he said. “We’ve had many great experiences in the Tempe and Phoenix community already. We’ve been impressed with the visionary ambition and dynamism of the university in general, and with its encouragement of new scientific and cross-disciplinary initiatives in particular. I’m looking forward to exciting adventures in advancing the frontiers of science, sharing it and putting it to use in coming years.”

    Wilczek received his bachelor of science in mathematics at the University of Chicago in 1970, a master of arts in mathematics at Princeton University in 1972, and a PhD in physics at Princeton University in 1974. Currently he is the Herman Feshbach professor of physics at the Massachusetts Institute of Technology.

    Wilczek, along with David Gross and H. David Politzer, was awarded the Nobel for their discovery of asymptotic freedom in the theory of the strong interaction.

    Theoretical physicist and cosmologist Lawrence Krauss, a Foundation Professor in the School of Earth and Space Exploration in the College of Liberal Arts and Sciences and director of its Origins Project, called Wilczek the pre-eminent theoretical physicist of his generation.

    “Yes, he won the Nobel Prize for work he did as a graduate student when he was 21, but that just tells a small part of the story,” Krauss said. “He is a true polymath, working in and mastering almost every area of physics, but his interests range far more broadly. …

    “What has interested Frank in ASU in particular is the breadth of work being done here, the highly interactive transdisciplinary atmosphere — which Origins in particular benefits from — and the openness of the university, from the president on down, to new ideas.”

    Ferran Garcia-Pichel, dean of natural sciences in the College of Liberal Arts and Sciences, said he looked forward to Wilczek’s contributions to the university.

    “He is sure to contribute seminally to the development of theoretical physics at ASU and to the teaching and mentoring of our students, as he has already done during previous stays as a visiting professor,” Garcia-Pichel said. “He will definitely help us attract the field’s center of gravity closer to home.”

    Garcia-Pichel announced Wednesday that Sidney Altman, who won the Nobel Prize in Chemistry in 1989, will join the School of Life Sciences at ASU.

    On a trip to Arizona this past January, Wilczek toured an art installation called “Field of Lights” at the Desert Botanical Garden in Phoenix.

    The display, by the artist Bruce Munro, consists of thousands of spheres of colored light, slowly pulsating and strewn across the desert.

    Wilczek wrote in a column for the Wall Street Journal that as he walked among the lights, “I felt I’d gotten an inkling of what thought looks like.”

    That experience, he wrote, changed the way he thinks about the brain, and himself, and it helped him conceive of a potentially innovative way of teaching the complexity of the brain.

    “Frank Wilczek’s quest for different ways of examining some of the most complicated questions and ideas fits perfectly with ASU’s distinguished faculty and the university’s principles of finding your own path to discovery, both in learning and research,” said Mark Searle, ASU’s executive vice president and university provost.

    Wilczek’s Nobel Prize-winning work focused on the strong force, one of the four fundamental forces in nature, together with gravity, electromagnetism and the weak force.

    “At the early part of the 20th century, when people looked at the interior of atoms, they found that the classic forces — gravity and electromagnetism — were inadequate,” Wilczek said. “Two new forces were required — the strong and the weak force. … There was a long period of exploration. It’s not easy to access the new forces, because nuclei are so small. [In the Nobel Prize-winning work] we put together some key experimental observations, together with the principles of quantum mechanics and relativity, to propose a complete, precise set of equations for the strong force: the theory known as quantum chromodynamics, or QCD. We made many predictions based on this work, which proved to be correct.”

    Wilczek and his colleagues discovered, theoretically, new subatomic particles called color gluons, which, he said, “hold atomic nuclei together.” Color gluons were subsequently observed experimentally.

    His current research strikes a balance between theoretical ideas and observable phenomena, like applying particle physics to cosmology and the application of field theory techniques to condensed matter physics. He described it as “more nitty-gritty experimental realities.”

    “If anything, in recent years my work has gotten more down to earth,” Wilczek said. “As time has gone on, my interests have expanded. I haven’t lost my interest in fundamental cosmology. … What I hope to accomplish is to continue the same sort of thing I’ve always done, which is look for new opportunities.

    “I’m a theorist, not an experimentalist, so it doesn’t take me long to change from one subject to another. … I’m going to be looking into applications that are driven by our increasing control of the quantum world. I’ve also been exploring classical applications, like the physical basis of perception.”

    See the full article here.

    ASU is the largest public university by enrollment in the United States. Founded in 1885 as the Territorial Normal School at Tempe, the school underwent a series of changes in name and curriculum. In 1945 it was placed under control of the Arizona Board of Regents and was renamed Arizona State College. A 1958 statewide ballot measure gave the university its present name.
    ASU is classified as a research university with very high research activity (RU/VH) by the Carnegie Classification of Institutions of Higher Education, one of 78 U.S. public universities with that designation. Since 2005 ASU has been ranked among the top 50 research universities, public and private, in the U.S. based on research output, innovation, development, research expenditures, number of awarded patents and awarded research grant proposals. The Center for Measuring University Performance currently ranks ASU 31st among top U.S. public research universities.

    ASU awards bachelor’s, master’s and doctoral degrees in 16 colleges and schools at five locations: the original Tempe campus, the West campus in northwest Phoenix, the Polytechnic campus in eastern Mesa, the Downtown Phoenix campus and the Colleges at Lake Havasu City. ASU’s “Online campus” offers 41 undergraduate degrees, 37 graduate degrees and 14 graduate or undergraduate certificates, earning ASU a Top 10 rating for Best Online Programs. ASU also offers international academic program partnerships in Mexico, Europe and China. ASU is accredited as a single institution by The Higher Learning Commission.

    ASU Tempe Campus

  • richardmitnick 9:16 am on May 8, 2016
    Tags: New mathematical model maps the expansion of the early Universe better than ever before

    From Science Alert: “New mathematical model maps the expansion of the early Universe better than ever before” 


    Science Alert

    9 MAR 2016


    This is where we came from.

    Physicists in Switzerland are using new code called ‘gevolution’ together with Einstein’s theory of general relativity to map the expansion of the early Universe more accurately than ever before. The new model factors in the rotation of space-time and the amplitude of gravitational waves – the existence of which was confirmed just last month.

    It’s more accurate than previous software simulations, its developers say, because it takes into account the high-speed movements of particles and the fluctuations of dark energy. In line with Einstein’s general relativity theory, the aim was to predict the amplitude and impact of gravitational waves, and the unique rotation of space-time to map the growth of the Universe.

    To achieve their target, the University of Geneva team analysed a cubic portion in space, consisting of 60 billion zones, each containing a particle (a portion of a galaxy). This enabled them to study the way these particles moved in relation to their close neighbours.

    By plugging in data from Einstein’s equations, and using the UNIGE LATfield2 library and a Swiss supercomputer, the model could compute the metric, which encodes the distances and times separating any two galaxies in the Universe.

    Previously, scientists have studied the formation of large-scale cosmological structures using the gravitational law set down by Isaac Newton: that the attraction between two bodies is directly related to their mass and the distance between them.

    While Einstein’s general relativity theory has since superseded it, linking gravity with acceleration and providing a more accurate method of tracking a constantly changing Universe, the ideas set down by Newton are still extensively used to model the effects of gravity and large masses.
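
    The Newtonian approximation those structure-formation codes rely on is just the inverse-square law. A minimal sketch (the constants are standard approximate values; the megaparsec example is purely illustrative):

```python
# Newton's law of universal gravitation, the approximation still used in
# most large-scale structure simulations: F = G * m1 * m2 / r**2.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    """Attractive force in newtons between masses m1 and m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r**2

# Illustrative: two Sun-mass bodies separated by one megaparsec.
sun_mass = 1.989e30    # kg
megaparsec = 3.086e22  # m
print(newton_force(sun_mass, sun_mass, megaparsec))  # ~2.8e5 N
```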

    And that brings us back to the gevolution model, which is able to map the latest theories in scientific thinking and celestial movements against Newtonian codes.

    What comes out the other end is a mathematical model that provides a more accurate and more complex look at how the Universe expanded at the beginning of its history. It should also help us understand more about gravitational waves and dark energy (thought to make up about 70 percent of the Universe’s energy content).

    “This conceptually clean approach is very general and can be applied to various settings where the Newtonian approximation fails or becomes inaccurate, ranging from simulations of models with dynamical dark energy or warm/hot dark matter to core collapse supernova explosions,” explains the new paper*, published in the journal Nature Physics.

    The new code will also make it possible to test the theory of general relativity on a larger scale than ever before. And to help foster further research, the team plans to make the gevolution code open to the public in the near future.

    *Science paper:
    General relativity and cosmic structure formation

    See the full article here.

  • richardmitnick 9:05 am on May 8, 2016
    Tags: Why the Universe ended up with three dimensions

    From Science Alert: “New paper explains why the Universe ended up with three dimensions” 


    Science Alert

    6 MAY 2016


    And there’s no string theory in sight.

    It’s probably not news to you that as residents of this fine Universe we call home, we can only move left or right, up or down, backwards or forwards. That’s it. There aren’t any other possible directions that aren’t some combination of those three.

    These are our Universe’s three spatial dimensions, and why we have exactly three of them (not just one or two, five or 80) is still something of a mystery.

    Not that physicists haven’t been searching for an answer – explaining the fundamental nature of reality is just a really hard nut to crack. But a new paper* argues that a universe governed by our laws of thermodynamics (which describe how energy moves around) will always get stuck with exactly three spatial dimensions. So basically, this paper just explained the Universe.

    The researchers, from the University of Salamanca in Spain and the National Polytechnic Institute of Mexico, explained it with the first and second laws of thermodynamics.

    For our purposes, these laws say that a system – whether it’s a universe, a human, or a rock – can’t do anything that requires more energy than it starts with, unless more energy is added from outside. And if the system gets bigger without gaining energy, like we think our Universe has, then, on average, there’s less energy available in any particular place.

    Put those together, and it means that once the Universe stopped having enough energy to complete the same action everywhere, the whole Universe could never do that thing again – though certain parts of it might be able to if they could concentrate enough energy.

    We’ll get back to that shortly, but the above description probably irks some of my fellow physicists. Take a deep breath. It’ll be okay.

    Thermodynamics works in any number of dimensions. It works in our 3D Universe, of course, but it also would work in two spatial dimensions, where the only possible directions to travel were left-right and up-down. In a two-dimensional universe, it would be physically impossible to move backward or forward, because that direction just wouldn’t exist.

    But, as the authors of this new paper, published in Europhysics Letters, explain, a universe could also have four dimensions: left-right, up-down, backward-forward, and ‘flirp-flarp’ – or whatever you want to call the new direction.

    In that universe, it would be possible to travel in a direction that’s completely impossible in our Universe. And, similarly, in such a universe, the laws of thermodynamics could work perfectly well.

    In such a universe, energy could still move from one place to another, but it would remain impossible for a system to use more energy than it has available. The same goes for five, or six, or 30 dimensions.

    The physicists decided to see what happens if you start a universe with a completely undefined number of dimensions – a universe where it’s unclear how many directions you can move in. As Lisa Zyga reports** for Phys.org, they found something interesting.

    In our incredibly early Universe – like, millionths of a trillionth of a trillionth of a trillionth of a trillionth of a second after the Big Bang – everything was really, really hot, and there were huge amounts of energy in every tiny part of space. Any number of dimensions could have worked equally well at this point; there wasn’t really any way to tell the difference between a universe with one dimension and a universe with seven.

    But very quickly afterwards, as the energy spread out, the Universe got caught in a kind of rut and didn’t have enough energy everywhere to get out. And remember: once the Universe doesn’t have enough energy to get out of somewhere, it’s never going to.

    The rut that everywhere in the Universe settled into was one with three spatial dimensions – exactly the Universe that we see today, says the team. The paper makes it clear that among all of the possible numbers of dimensions, our lowly three was inevitable.

    Oh and by the way, the researchers also propose that it’s possible, in theory, to pack enough energy into a tiny bit of space that – in that one spot – the Universe momentarily escapes its rut. It might take a particle accelerator the size of the Solar System, but in principle, it’s doable.

    If we ever do get something like that running, maybe we’ll see a proton, for the most fleeting of moments, move flirp for a few trillionths of a metre before returning to the boring old left.

    *Science paper:
    Is the (3 + 1)-d nature of the universe a thermodynamic necessity?

    **Science paper:
    Why is space three-dimensional?

    See the full article here .


  • richardmitnick 8:53 am on May 5, 2016 Permalink | Reply
    Tags: , , , FNAL G-2,   

    From Don Lincoln at FNAL: “The physics of g-2” 

    FNAL II photo

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    FNAL Don Lincoln
    Don Lincoln

    At any given time in history, a few scientific measurements have disagreed with the best theoretical predictions of the day. Currently, one such discrepancy involves the measurement of the strength of the magnetic field of a subatomic particle called the muon. In this video, Fermilab’s Dr. Don Lincoln explains this mystery and sketches ongoing efforts to determine whether the disagreement signifies a discovery. If it does, the textbooks will have to be rewritten.

    Access the mp4 video here .

    Watch, enjoy, learn.

    FNAL G-2

    FNAL Muon g-2 studio

    See the full article here .


    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. It is America’s premier laboratory for particle physics and accelerator research. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

  • richardmitnick 4:06 pm on April 26, 2016 Permalink | Reply
    Tags: "Why Physics Needs Diamonds", , ,   

    From Jlab via DOE: “Why Physics Needs Diamonds” 

    April 26, 2016
    Kandice Carter

    A detailed view of the diamond wafers scientists use to get a better measure of spinning electrons. | Photo courtesy of Jefferson Lab.

    Diamonds are one of the most coveted gemstones. But while some may want the perfect diamond for its sparkle, physicists covet the right diamonds to perfect their experiments. The gem is a key component in a novel system at Jefferson Lab that enables precision measurements to discover new physics in the sub-atomic realm — the domain of the particles and forces that build the nucleus of the atom.

    Explorations of this realm require unique probes with just the right characteristics, such as the electrons that are prepared for experiments inside the Continuous Electron Beam Accelerator Facility [CEBAF] at Jefferson Lab.

    Jlab CEBAF

    CEBAF is an atom smasher. It can take ordinary electrons and pack them with just the right energy, group them together in just the right number and set those groups to spinning in just the right way to probe the nucleus of the atom and get the information that physicists want.

    But to ensure that electrons with the correct characteristics have been dialed up for the job, nuclear physicists need to be able to measure the electrons before they are sent careening into the nucleus of the atom. That’s where the diamonds in a device called the Hall C Compton Polarimeter come in. The polarimeter measures the spins of the groups of electrons that CEBAF is about to use for experiments.

    This quantity, called the beam polarization, is a key input to many experiments. Physicists can measure it by shining laser light on the electrons as they pass by on their way to an experiment. The light knocks some of the electrons off the path, and those electrons are gathered up into a detector to be counted, a procedure that yields the beam polarization.
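    The counting procedure described above can be sketched numerically. This is a hedged illustration, not Jefferson Lab's actual analysis: it assumes the standard Compton-polarimetry relation in which the measured counting asymmetry between the two laser helicity states equals the product of the beam polarization, the laser polarization, and the analyzing power. All counts and parameters below are invented for illustration.

    ```python
    # Hypothetical sketch of extracting beam polarization from Compton
    # scattering counts. The relation A_measured = P_beam * P_laser * A_analyzing
    # is standard; the counts and parameters below are invented examples.
    def beam_polarization(n_plus, n_minus, p_laser, analyzing_power):
        """P_beam = A_measured / (P_laser * A_analyzing)."""
        a_measured = (n_plus - n_minus) / (n_plus + n_minus)
        return a_measured / (p_laser * analyzing_power)

    # Invented counts for the two laser helicity states:
    p_e = beam_polarization(n_plus=1_020_000, n_minus=980_000,
                            p_laser=0.99, analyzing_power=0.024)
    print(f"beam polarization ~ {p_e:.0%}")  # ~84%
    ```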

    Ordinarily, this detector would be made of silicon, but silicon is relatively easily damaged when struck by too many particles. The physicists needed something a bit hardier, so they turned to diamond, hoping it could also be a physicist’s best friend.

    The Hall C Compton Polarimeter uses a novel detector system built of thin wafers of diamond. Specially lab-grown plates of diamond, measuring roughly three-quarters of an inch square and a mere two hundredths of an inch thick, are outfitted like computer chips, with 96 tiny electrodes stuck to them. The electrodes send a signal when the diamond detector counts an electron.

    This novel detector was recently put to the test, and it delivered. The detector provided the most direct and accurate measurement to date of electron beam polarization at high current in CEBAF.

    But the team isn’t resting on its laurels: New experiments for probing the subatomic realm will require even higher accuracies. Now, the physicists are focused on improving the polarimeter, so that its diamonds will be ready to sparkle for the next precision experiment.

    See the full article here .


    Thomas Jefferson National Accelerator Facility is managed by Jefferson Science Associates, LLC for the U.S. Department of Energy

  • richardmitnick 12:07 pm on April 23, 2016 Permalink | Reply
    Tags: , ,   

    From Physics: “Q&A: Keeping a Watchful Eye on Earth” 



    Anna Hogg

    Andrew Shepherd explains how he uses data from satellites to study polar ice and describes what it’s like to work in the politically charged field of climate science.

    From the baking hot savannahs of Africa to the icy cold wastelands of Greenland and Antarctica, Andrew Shepherd has worked in some of the most extreme environments on Earth. In college, he studied astrophysics and flirted with the idea of pursuing it as a career. But a professor’s warning that few of his classmates would find a permanent job in that field turned him off. Instead he took advantage of a new department of Earth observation science at Leicester University in the UK to follow a career studying our planet’s climate. Now, rather than pointing satellites towards space to observe the stars, Shepherd flips them around to monitor the Earth. He has studied the arid land in Zimbabwe and the ice sheets at Earth’s poles. (From his fieldwork in these places, Shepherd has concluded that it is far better to bundle up warm for the cold than to boil in the heat.) As the director of the Centre for Polar Observation and Modelling in the UK and a professor at Leeds University, Shepherd also has a hand in designing and building new satellites. Physics spoke to Shepherd to learn more about his work.

    –Katherine Wright

    Your current focus is measuring changes in the amount of ice stored in Antarctica and Greenland. How did you get involved in that?

    There are dozens of estimates for how much ice is being lost from the polar ice sheets, some of which my group has produced. But climate scientists and policy makers don’t want to pick and choose between different estimates; they need a single, authoritative one. I worked with the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA), and the world’s leading experts, to pull together all the satellite measurements and deliver a single assessment of polar ice sheet losses. The project, called IMBIE—the Ice Sheet Mass Balance Inter-comparison Exercise—has been really well received. Now the space agencies want us to generate yearly assessments of ice sheet loss to chart its impact on global sea-level rise.

    What techniques are used to monitor polar ice?

    People have been exploring the polar regions for centuries, but Antarctica and Greenland are simply too large to track on foot. Satellites have solved this problem. We can now measure changes in the flow, thickness, and mass of the polar ice sheets routinely from space. These data have revolutionized our understanding of how Antarctica and Greenland interact with the climate system. Although most satellite orbits don’t cover the highest latitudes, some—such as ESA’s CryoSat—have been specially designed for that purpose.

    ESA/CryoSat 2

    Unfortunately, we can’t measure everything from space. For example, the radio frequencies that we use to survey the bedrock beneath ice sheets can interfere with satellite television and telecommunications, so instead we rely on aircraft measurements.

    What questions about polar ice are you trying to answer?

    The headline science question is, how much ice is being lost from Antarctica and Greenland? It’s an important question, but there are many other things that we are interested in finding out. For example, how fast can ice sheets flow? Ask a glaciologist today and they’ll tell you that some glaciers flow at speeds greater than 15 km per year—you can sit next to Greenland’s Jacobshavn Isbrae glacier during your lunch break and watch it move; it’s that quick. But 10 years ago we thought the maximum speed was only 4 or 5 km per year. The speed is a useful piece of information because it’s an indication of how much ice [is available to] contribute to a future rise in sea level.
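    A quick back-of-envelope check of that lunch-break claim: converting 15 km per year into everyday units shows the motion really is watchable.

    ```python
    # Convert a glacier flow speed of 15 km/yr into cm per minute.
    speed_km_per_year = 15.0

    minutes_per_year = 365.25 * 24 * 60          # ~525,960 minutes
    speed_cm_per_min = speed_km_per_year * 1e5 / minutes_per_year

    print(f"{speed_cm_per_min:.1f} cm per minute")  # ~2.9 cm/min
    # Over a half-hour lunch break that is close to a metre of motion.
    ```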
    Your group is part of several international collaborations. What’s your experience of working with so many other people towards a common goal?

    I enjoy it. As scientists, we are able to rely on the expertise of other people; we don’t have to have the answer to every problem. In climate and Earth science, problems are often much larger than any one group, or even institution, can solve alone, so teamwork is important.

    What’s it like to work in a field that’s often in the political and media spotlight?

    This adds excitement to our work: it’s great to know that people are interested in what we do. But it also adds an element of caution. Science moves forward by people challenging what has come before them. It can be daunting to do that in climate science, because it’s easy to be labeled an extremist. If you discover glaciers that aren’t shrinking, people assume you are going against an immense body of science. If you find evidence that the future sea-level rise will be higher than the latest predictions, you get labeled an alarmist. But often the worst option is to adopt a central position. If we assume that everyone else is right, and that there is no need to change the way we look at a problem, then we can rapidly slip into a situation where our knowledge ceases to expand.

    See the full article here .


    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

  • richardmitnick 10:10 am on April 21, 2016 Permalink | Reply
    Tags: , , , Three Ways Physics Could Help Save Humanity   

    From PI: “Three Ways Physics Could Help Save Humanity” 

    Perimeter Institute

    April 21, 2016

    Technology has put our global environment in crisis. Could it also provide the solution?

    Problem: Fossil fuels for power and transit
    Solution: Superconductors


    Fossil fuels generate most of our electricity, which is then transported through wires and cables – a process that loses between 8 and 15 percent of the original power. But exotic materials called superconductors could save the day.

    Superconductors let electric current flow without resistance or loss, and allow movement with no friction. Today’s superconductors operate only at extremely low temperatures and require supercooling. Creating – or finding – room-temperature superconductors is one of modern science’s great quests.
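    To see where a loss figure in that range comes from, here is a rough resistive-loss estimate. All numbers are illustrative assumptions, not data from the article: transmission loss is I²R, and for a fixed generated power the line current is P/V.

    ```python
    # Rough sketch of resistive transmission loss, loss = I^2 * R with I = P / V.
    # All values are illustrative assumptions.
    P_plant = 500e6    # 500 MW of generated power (assumed)
    V_line = 345e3     # typical high-voltage transmission line (assumed)
    R_line = 20.0      # total line resistance in ohms (assumed)

    I = P_plant / V_line                     # ~1450 A
    loss_fraction = I**2 * R_line / P_plant
    print(f"{loss_fraction:.1%} of the power is lost as heat")

    # A superconducting line (R = 0) would reduce this resistive loss to zero.
    ```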

    High-temperature superconductors could be used to create extremely efficient rotating machines (think: steam-free turbines), and power networks with near-100-percent efficiency.

    They could also revolutionize transit. Magnetic levitation (maglev) trains already use supercooled superconducting magnets to levitate the train above the tracks and propel it forward. High-temperature versions would do away with energy-guzzling cooling systems and pave the way for even-more-Earth-friendly commutes.

    Problem: Gravity and inertia
    Solution: Advanced materials


    Many resources devoted to overcoming the effects of gravity and inertia also contribute to climate change. Just think of the fuel used simply to get heavy vehicles to move. Cue the arrival of, and excitement about, graphene.

    Graphene is a sheet of carbon just one atom thick, and it’s the strongest material in the world. (If it were the thickness of cling wrap, it would take the force of a large car to puncture it with a pencil.)

    Experimentalists are currently working towards creating a graphene-composite material that would replace steel in aircraft and other vehicles, making them significantly more fuel-efficient.

    But some theorists are looking even further afield. Graphene could prove strong enough to fabricate long-theorized space elevators. These elevators could tether a satellite to the Earth, turning the satellite into a base station for mining natural resources on asteroids, among other possibilities.

    Advanced quantum materials are also expected to significantly improve our ability to create and store energy, from high-efficiency solar panels to high-performance batteries.

    Problem: Humans
    Solution: Artificial intelligence


    The Anthropocene is not an official epoch yet – the International Commission on Stratigraphy (the people who define geologic time scales) will decide this year whether to officially recognize it – but scientists have no doubt that human society has been, and continues to be, profoundly damaging to the Earth.

    So why not consider a non-human effort to ameliorate that impact? Powered by recent advances in neural networks and deep-learning algorithms, computers are becoming increasingly “human” in their abilities. (Google hit a milestone this year when its AlphaGo computer beat the world champion of the ancient Chinese board game of Go.)

    But artificial intelligence could do much more than play a mean board game. A combination of machine-learning algorithms and future supercomputer hardware – including quantum computers – could forge the new era of AI and help realize efficiencies in infrastructure design, conduct fundamental research projects, and even mediate arguments.


    BUT THAT’S NOT ALL: The physics of chaos theory, quantum information, and next-generation supercomputing could also help scientists understand and predict climate change.

    According to Tim Palmer, the Oxford University Royal Society Research Professor in Climate Physics, the emerging concept of inexact supercomputing could provide a powerful approach to assessing the chaotic, uncertain nature of our climate system.

    Tune in on May 4 to watch the live webcast of Dr. Palmer’s Perimeter Public Lecture “Climate Change, Chaos, and Inexact Computing.”

    Access mp4 video here .

    See the full article here .


    About Perimeter

    Perimeter Institute is a leading centre for scientific research, training and educational outreach in foundational theoretical physics. Founded in 1999 in Waterloo, Ontario, Canada, its mission is to advance our understanding of the universe at the most fundamental level, stimulating the breakthroughs that could transform our future. Perimeter also trains the next generation of physicists through innovative programs, and shares the excitement and wonder of science with students, teachers and the general public.

  • richardmitnick 6:48 am on April 21, 2016 Permalink | Reply
    Tags: , ,   

    From Nautilus: “Why Physics Is Not a Discipline” 



    April 21, 2016
    Philip Ball

    Instructive: Phase transitions in physical systems, like that between water vapor and ice, can give insight into other scientific problems, including evolution. Wikipedia

    Have you heard the one about the biologist, the physicist, and the mathematician? They’re all sitting in a cafe watching people come and go from a house across the street. Two people enter, and then some time later, three emerge. The physicist says, “The measurement wasn’t accurate.” The biologist says, “They have reproduced.” The mathematician says, “If now exactly one person enters the house then it will be empty again.”

    Hilarious, no? You can find plenty of jokes like this—many invoke the notion of a spherical cow—but I’ve yet to find one that makes me laugh. Still, that’s not what they’re for. They’re designed to show us that these academic disciplines look at the world in very different, perhaps incompatible ways.

    There’s some truth in that. Many physicists, for example, will tell stories of how indifferent biologists are to their efforts in that field, regarding them as irrelevant and misconceived. It’s not just that the physicists were thought to be doing things wrong. Often the biologists’ view was that (outside perhaps of the well established but tightly defined discipline of biophysics) there simply wasn’t any place for physics in biology.

    But such objections (and jokes) conflate academic labels with scientific ones. Physics, properly understood, is not a subject taught at schools and university departments; it is a certain way of understanding how processes happen in the world. When Aristotle wrote his Physics in the fourth century B.C., he wasn’t describing an academic discipline, but a mode of philosophy: a way of thinking about nature. You might imagine that’s just an archaic usage, but it’s not. When physicists speak today (as they often do) about the “physics” of the problem, they mean something close to what Aristotle meant: neither a bare mathematical formalism nor a mere narrative, but a way of deriving process from fundamental principles.

    This is why there is a physics of biology just as there is a physics of chemistry, geology, and society. But it’s not necessarily “physicists” in the professional sense who will discover it.

    In the mid-20th century, the boundary between physics and biology was more porous than it is today. Several pioneers of 20th-century molecular biology, including Max Delbrück, Seymour Benzer, and Francis Crick, were trained as physicists. And the beginnings of the “information” perspective on genes and evolution that found substance in James Watson and Francis Crick’s 1953 discovery of the structure of DNA are usually attributed to physicist Erwin Schrödinger’s 1944 book What Is Life? (Some of his ideas were anticipated, however, by the biologist Hermann Muller.)

    A merging of physics and biology was welcomed by many leading biologists in the mid-century, including Conrad Hal Waddington, J. B. S. Haldane, and Joseph Needham, who convened the Theoretical Biology Club at Cambridge University. And an understanding of the “digital code” of DNA emerged at much the same time as applied mathematician Norbert Wiener was outlining the theory of cybernetics, which purported to explain how complex systems from machines to cells might be controlled and regulated by networks of feedback processes. In 1955 the physicist George Gamow published a prescient article in Scientific American called “Information Transfer in the Living Cell,” and cybernetics gave biologists Jacques Monod and François Jacob a language for formulating their early theory of gene regulatory networks in the 1960s.

    But then this “physics of biology” program stalled. Despite the migration of physicists toward biologically related problems, there remains a void separating most of their efforts from the mainstream of genomic data-collection and detailed study of genetic and biochemical mechanisms in molecular and cell biology. What happened?

    Some of the key reasons for the divorce are summarized in Ernst Mayr’s 2004 book What Makes Biology Unique. Mayr was one of the most eminent evolutionary biologists of the modern age, and the title alone reflected a widely held conception of exceptionalism within the life sciences. In Mayr’s view, biology is too messy and complicated for the kind of general theories offered by physics to be of much help—the devil is always in the details.


    Scientific ideas developed in one field can turn out to be relevant in another.


    Mayr made perhaps the most concerted attempt by any biologist to draw clear disciplinary boundaries around his subject, smartly isolating it from other fields of science. In doing so, he supplies one of the clearest demonstrations of the folly of that endeavor.

    He identifies four fundamental features of physics that distinguish it from biology. It is essentialist (dividing the world into sharply delineated and unchanging categories, such as electrons and protons); it is deterministic (this always necessarily leads to that); it is reductionist (you understand a system by reducing it to its components); and it posits universal natural laws, which in biology are undermined by chance, randomness, and historical contingency. Any physicist will tell you that this characterization of physics is thoroughly flawed, as a passing familiarity with quantum theory, chaos, and complexity would reveal.

    The skeptic: Ernst Mayr argued that general theories from physics would be unlikely to be of great use in biology. Wikipedia

    But Mayr’s argument gets more interesting—if not actually more valid—when he claims that what makes biology truly unique is that it is concerned with purpose: with the designs ingeniously forged by blind mutation and selection during evolution. Particles bumping into one another on their random walks don’t have to do anything. But the genetic networks and protein molecules and complex architectures of cells are shaped by the exigencies of survival: they have a kind of goal. And physics doesn’t deal with goals, right? As Massimo Pigliucci of City University of New York, an evolutionary biologist turned philosopher, recently stated, “It makes no sense to ask what is the purpose or goal of an electron, a molecule, a planet or a mountain.”

    “Purpose” and “teleology” are difficult words in biology: they all too readily suggest a deterministic goal for evolution’s “blind watchmaker,” and lend themselves to creationist abuse. But there’s no escaping the compulsion to talk about function in biology: its components and structures play a role in the survival of the organism and the propagation of genes.

    The thing is, physical scientists aren’t deterred by the word either. When Norbert Wiener wrote his 1943 paper “Behaviour, purpose and teleology,” he was being deliberately provocative. And the Teleological Society that Wiener formed two years later with Hungarian mathematical physicist John von Neumann announced as its mission the understanding of “how purpose is realised in human and animal conduct.” Von Neumann’s abiding interest in replication—an essential ingredient for evolving “biological function”—as a computational process laid the foundations of the theory of cellular automata, which are now widely used to study complex adaptive processes including Darwinian evolution (even Richard Dawkins has used them).

    Apparent purpose arises from Darwinian adaptation to the environment. But isn’t that then perfectly understood by Darwin’s random mutation and natural selection, without any appeal to a “physics” of adaptation?

    Actually, no. For one thing, it isn’t obvious that these two ingredients—random inheritable mutation between replicating organisms, and selective pressure from the environment—will necessarily produce adaptation, diversity, and innovation. How does this depend on, say, the rate of replication, the fidelity of the copying process and the level of random noise in the system, the strength of selective pressure, the relationship between the inheritable information and the traits it governs (genotype and phenotype), and so on? Evolutionary biologists have mathematical models to investigate these things, but doing calculations tells you little without a general framework to relate them to.

    That general framework is the physics of evolution. It might be mapped out in terms of, say, threshold values of the variables above which a qualitatively new kind of global behavior appears: what physicists call a phase diagram. The theoretical chemist Peter Schuster and his coworkers have found such a threshold in the error rate of genetic copying, below which the information contained in the replicating genome remains stable. In other words, above this error rate there can be no identifiable species as such: Their genetic identity “melts.” Schuster’s colleague, Nobel laureate chemist Manfred Eigen, argues that this switch is a phase transition entirely analogous to those like melting that physicists more traditionally study.
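    Schuster and Eigen's threshold can be stated compactly. In quasispecies theory, the textbook approximation is that a genome of length L keeps its identity only if the per-base error rate stays below ln(σ)/L, where σ is the selective advantage of the master sequence. The sketch below uses that standard approximation with illustrative numbers; it is not the authors' own calculation.

    ```python
    import math

    # Approximate Eigen error threshold: a genome of length L (bases) with
    # per-base copying error rate mu keeps its identity only if, roughly,
    # mu < ln(sigma) / L, where sigma is the master sequence's selective
    # advantage. Values below are illustrative.
    def error_threshold(L, sigma):
        """Maximum sustainable per-base error rate (approximation)."""
        return math.log(sigma) / L

    mu_max = error_threshold(L=1_000_000, sigma=10.0)
    print(f"max error rate ~ {mu_max:.1e} per base per replication")  # ~2.3e-06
    ```

    Above this rate, copying errors accumulate faster than selection can purge them, and the genetic identity of the population "melts" in the sense described in the text.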

    Meanwhile, evolutionary biologist Andreas Wagner has used computer models to show that the ability of Darwinian evolution to innovate and generate qualitatively new forms and structures rather than just minor variations on a theme doesn’t follow automatically from natural selection. Instead, it depends on there being a very special “shape” to the combinatorial space of possibilities which describes how function (the chemical effect of a protein, say) depends on the information that encodes it (such as the sequences of amino acids in the molecular chain). Here again is the “physics” underpinning evolutionary variety.

    And physicist Jeremy England of the Massachusetts Institute of Technology has argued that adaptation itself doesn’t have to rely on Darwinian natural selection and genetic inheritance, but may be embedded more deeply in the thermodynamics of complex systems. The very notions of fitness and adaptation have always been notoriously hard to pin down—they easily end up sounding circular. But England says that they might be regarded in their most basic form as an ability of a particular system to persist in the face of a constant throughput of energy by suppressing big fluctuations and dissipating that energy: you might say, by a capacity to keep calm and carry on.

    “Our starting assumptions are general physical ones, and they carry us forward to a claim about the general features of nonequilibrium evolution of which the Darwinian story becomes a special case that obtains in the event that your system contains self-replicating things,” says England. “The notion becomes that thermally fluctuating matter gets spontaneously beaten into shapes that are good at work absorption from the external fields in the environment.” What’s exciting about this, he says, is that “when we give a physical account of the origins of some of the ‘adapted’-looking structures we see, they don’t necessarily have to have had parents in the usual biological sense.” Already, some researchers are starting to suggest that England’s ideas offer the foundational physics for Darwin’s.

    Notice that there is really no telling where this “physics” of the biological phenomenon will come from—it could be from chemists and biologists as much as from “physicists” as such. There is nothing at all chauvinistic, from a disciplinary perspective, about calling these fundamental ideas and theories the physics of the problem. We just need to rescue the word from its departmental definition, and the academic turf wars that come with it.

    Familiar patterns: British mathematician Alan Turing proposed a general approach to pattern formation in chemical and biological systems. Both dots (top left) and stripes (top right) can be produced by the interplay of “activators” and “inhibitors.” Some of these patterns bear a striking resemblance to patterns found in nature, such as the zebra’s stripes.
    Top: Turing Patterns courtesy of Jacques Boissonade and Patrick De Kepper at Bordeaux University; Bottom: Zebra, Ishara Kodikara / Getty

    You could regard these incursions into biology of ideas more familiar within physics as just another example of the way in which scientific ideas developed in one field can turn out to be relevant in another.

    But the issue is deeper than that, and phrasing it as cross-talk (or border raids) between disciplines doesn’t capture the whole truth. We need to move beyond attempts like those of Mayr to demarcate and defend the boundaries.

    The habit physicists have of praising peers for their ability to see to the “physics of the problem” might sound odd. What else would a physicist do but think about the “physics of the problem”? But therein lies a misunderstanding. What is being articulated here is an ability to look beyond mathematical descriptions or details of this or that interaction, and to work out the underlying concepts involved—often very general ones that can be expressed concisely in non-mathematical, perhaps even colloquial, language. Physics in this sense is not a fixed set of procedures, nor does it alight on a particular class of subject matter. It is a way of thinking about the world: a scheme that organizes cause and effect.

    This kind of thinking can come from any scientist, whatever his or her academic label. It’s what Jacob and Monod displayed when they saw that feedback processes were the key to genetic regulation, and so forged a link with cybernetics and control theory. It’s what the developmental biologist Hans Meinhardt did in the 1970s when he and his colleague Alfred Gierer unlocked the physics of Turing structures. These are spontaneous patterns that arise in a mathematical model of diffusing chemicals, devised by mathematician Alan Turing in 1952 to account for the generation of form and order in embryos. Meinhardt and Gierer identified the physics underlying Turing’s maths: the interaction between a self-generating “activator” chemical and an ingredient that inhibits its behavior.
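    Turing’s insight can be made concrete with a short linear-stability calculation. The sketch below is a toy illustration, not taken from the article: it uses the classic Gierer-Meinhardt activator-inhibitor equations, with illustrative parameter values, and evaluates the dispersion relation—the growth rate of a small spatial ripple as a function of its wavenumber. The uniform state is stable on its own, but once the inhibitor diffuses a hundred times faster than the activator, ripples of intermediate wavelength grow into a pattern.

```python
import math

# Linearized Gierer-Meinhardt model around its uniform steady state.
# Reaction terms: da/dt = a**2/h - a, dh/dt = a**2 - 2*h,
# which gives the steady state a* = h* = 2 and the Jacobian below.
FA, FH = 1.0, -1.0   # d(activator rate)/da, .../dh at the steady state
GA, GH = 4.0, -2.0   # d(inhibitor rate)/da, .../dh at the steady state

def growth_rate(k2, d_act=0.01, d_inh=1.0):
    """Largest real part of the eigenvalues for a perturbation
    with squared wavenumber k2 (the Turing dispersion relation)."""
    a11 = FA - d_act * k2
    a22 = GH - d_inh * k2
    trace = a11 + a22
    det = a11 * a22 - FH * GA
    disc = trace * trace - 4.0 * det
    if disc < 0:          # complex eigenvalue pair: real part is trace/2
        return trace / 2.0
    return (trace + math.sqrt(disc)) / 2.0

# Uniform perturbations (k = 0) decay: no pattern without diffusion.
# Intermediate wavelengths grow; very short ones are smoothed away.
for k2 in (0.0, 4.0, 200.0):
    print(k2, growth_rate(k2))
```

    Setting d_act equal to d_inh makes the growth rate negative at every wavenumber, which is why the fast-diffusing inhibitor is essential to the mechanism: patterns appear only when inhibition spreads faster than activation.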

    Once we move past the departmental definition of physics, the walls around other disciplines become more porous, to positive effect. Mayr’s argument that biological agents are motivated by goals in ways that inanimate objects are not was closely tied to a crude interpretation of biological information springing from the view that everything begins with DNA. As Mayr puts it, “there is not a single phenomenon or a single process in the living world which is not controlled by a genetic program contained in the genome.”

    This “DNA chauvinism,” as it is sometimes now dubbed, leads to the very reductionism and determinism that Mayr wrongly ascribes to physics, and which the physics of biology is undermining. For even if we recognize (as we must) that DNA and genes really are central to the detailed particulars of how life evolves and survives, there’s a need for a broader picture in which information for maintaining life doesn’t just come from a DNA data bank. One of the key issues here is causation: In what directions does information flow? It’s now becoming possible to quantify these questions of causation—and that reveals the deficiencies of a universal bottom-up picture.

    Neuroscientist Giulio Tononi and colleagues at the University of Wisconsin-Madison have devised a generic model of a complex system of interacting components—which could conceivably be neurons or genes, say—and they find that the system’s behavior is sometimes caused not so much in a bottom-up way as by higher levels of organization among the components.

    This picture is borne out in a recent analysis of information flow in yeast gene networks by Paul Davies and colleagues at Arizona State University in Tempe. The study reveals that indeed “downward” causation is involved in this case. Davies and colleagues believe that top-down causation might be a general feature of the physics of life, and that it could have played a key role in some major shifts in evolution, such as the appearance of the genetic code, the evolution of complex compartmentalized cells (eukaryotes), the development of multicellular organisms, and even the origin of life itself. At such pivotal points, they say, information flow may have switched direction so that processes at higher levels of organization affected and altered those at lower levels, rather than everything being “driven” by mutations at the level of genes.

    One thing this work, and that of Wagner, Schuster, and Eigen, suggests is that the way DNA and genetic networks connect to the maintenance and evolution of living organisms can only be fully understood once we have a better grasp of the physics of information itself.

    A case in point is the observation that biological systems often operate close to what physicists call a critical phase transition or critical point: a state poised on the brink of switching between two modes of organization, one of them orderly and the other disorderly. Critical points are well known in physical systems like magnetism, liquid mixtures, and superfluids. William Bialek, a physicist working on biological problems at Princeton University, and his colleague Thierry Mora at the École Normale Supérieure in Paris, proposed in 2010 that a wide variety of biological systems, from flocking birds to neural networks in the brain and the organization of amino-acid sequences in proteins, might also be close to a critical state.8

    By operating close to a critical point, Bialek and Mora said, a system undergoes big fluctuations that give it access to a wide range of different configurations of its components. As a result, Mora says, “being critical may confer the necessary flexibility to deal with complex and unpredictable environments.” What’s more, a near-critical state is extremely responsive to disturbances in the environment, which can send rippling effects throughout the whole system. That can help a biological system to adapt very rapidly to change: A flock of birds or a school of fish can respond very quickly to the approach of a predator, say.
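    A standard toy picture of this responsiveness, offered here as an illustration rather than anything from the article, is a branching process: a disturbance activates one unit, each active unit activates sigma others on average, and the expected size of the resulting cascade is a geometric sum that blows up as the branching ratio sigma approaches its critical value of 1.

```python
def expected_cascade_size(sigma, generations=10_000):
    """Expected number of activations when one unit is disturbed and
    each active unit triggers `sigma` others on average (sigma < 1)."""
    total, active = 0.0, 1.0
    for _ in range(generations):
        total += active
        active *= sigma     # expected activity in the next generation
    return total

# Far below criticality, a disturbance stays local...
print(expected_cascade_size(0.5))    # ~2 activations
# ...but near the critical branching ratio it sweeps the whole system.
print(expected_cascade_size(0.99))   # ~100 activations
```

    The closed form of the sum is 1/(1 - sigma), so the system’s response to a single perturbation diverges exactly at the critical point—a crude cartoon of why a near-critical flock or network can react to a disturbance as a whole.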

    Criticality can also provide an information-gathering mechanism. Physicist Amos Maritan at the University of Padova in Italy and coworkers have shown that a critical state in a collection of “cognitive agents”—they could be individual organisms, or neurons, for example—allows the system to “sense” what is going on around it: to encode a kind of ‘internal map’ of its environment and circumstances, rather like a river network encoding a map of the surrounding topography.9 “Being poised at criticality provides the system with optimal flexibility and evolutionary advantage to cope with and adapt to a highly variable and complex environment,” says Maritan. There’s mounting evidence that brains, gene networks, and flocks of animals really are organized this way. Criticality may be everywhere.

    Examples like these give us confidence that biology does have a physics to it. Bialek has no patience with the common refrain that biology is just too messy—that, as he puts it, “there might be some irreducible sloppiness that we’ll never get our arms around.”10 He is confident that there can be “a theoretical physics of biological systems that reaches the level of predictive power that has become the standard in other areas of physics.” Without it, biology risks becoming mere anecdote and contingency. And one thing we can be fairly sure about is that biology is not like that, because it would simply not work if it were.

    We don’t yet know quite what a physics of biology will consist of. But we won’t understand life without it. It will surely have something to say about how gene networks produce both robustness and adaptability in the face of a changing environment—why, for example, a defective gene need not be fatal and why cells can change their character in stable, reliable ways without altering their genomes. It should reveal why evolution itself is both possible at all and creative.

    Saying that physics knows no boundaries is not the same as saying that physicists can solve everything. They too have been brought up inside a discipline, and are as prone as any of us to blunder when they step outside. The issue is not who “owns” particular problems in science, but about developing useful tools for thinking about how things work—which is what Aristotle tried to do over two millennia ago. Physics is not what happens in the Department of Physics. The world really doesn’t care about labels, and if we want to understand it then neither should we.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.
