Tagged: Wired Science

  • richardmitnick 5:29 pm on January 31, 2016 Permalink | Reply
    Tags: , , The Arrow of Time, Wired Science   

    From wired.com: “Following Time’s Arrow to the Universe’s Biggest Mystery” 

    Wired logo

    Wired

    01.31.16
    Frank Wilczek

    Destiny The Arrow of Time
    DESTINY – The Arrow of Time from BBC Wonders of the Universe with Brian Cox

    Few facts of experience are as obvious and pervasive as the distinction between past and future. We remember one, but anticipate the other. If you run a movie backwards, it doesn’t look realistic. We say there is an arrow of time, which points from past to future.

    One might expect that a fact as basic as the existence of time’s arrow would be embedded in the fundamental laws of physics. But the opposite is true. If you could take a movie of subatomic events, you’d find that the backward-in-time version looks perfectly reasonable. Or, put more precisely: The fundamental laws of physics—up to some tiny, esoteric exceptions, as we’ll soon discuss—will look to be obeyed, whether we follow the flow of time forward or backward. In the fundamental laws, time’s arrow is reversible.
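
    To see what that reversibility means in practice, here is a small Python toy (a classical illustration with arbitrary numbers, a mass on a spring rather than subatomic physics): run it forward, flip the sign of the velocity, run it again for the same time, and it retraces its history back to the start. The step size and starting values are arbitrary choices.

        # Time-reversal in a toy Newtonian system (illustrative sketch only).
        # Integrate a mass on a spring forward in time, reverse the velocity,
        # integrate again for the same number of steps, and the system retraces
        # its path: the backward-run "movie" is also a perfectly valid history.

        def velocity_verlet(x, v, accel, dt, steps):
            for _ in range(steps):
                a = accel(x)
                x = x + v * dt + 0.5 * a * dt * dt
                v = v + 0.5 * (a + accel(x)) * dt
            return x, v

        accel = lambda x: -x                # F = -kx with k = m = 1 (arbitrary toy choice)
        x0, v0 = 1.0, 0.0                   # arbitrary starting point
        x1, v1 = velocity_verlet(x0, v0, accel, dt=0.01, steps=5000)    # the "movie" run forward
        xr, vr = velocity_verlet(x1, -v1, accel, dt=0.01, steps=5000)   # flipped velocity = run backward
        print(abs(xr - x0), abs(-vr - v0))  # both are ~0: we are back where we started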

    Logically speaking, the transformation that reverses the direction of time might have changed the fundamental laws. Common sense would suggest that it should. But it does not. Physicists use convenient shorthand—also called jargon—to describe that fact. They call the transformation that reverses the arrow of time “time reversal,” or simply T. And they refer to the (approximate) fact that T does not change the fundamental laws as T invariance, or T symmetry.

    Everyday experience violates T invariance, while the fundamental laws respect it. That blatant mismatch raises challenging questions. How does the actual world, whose fundamental laws respect T symmetry, manage to look so asymmetric? Is it possible that someday we’ll encounter beings with the opposite flow—beings who grow younger as we grow older? Might we, through some physical process, turn around our own body’s arrow of time?

    Those are great questions, and I hope to write about them in a future posting. Here, however, I want to consider a complementary question. It arises when we start from the other end, in the facts of common experience. From that perspective, the puzzle is this:

    Why should the fundamental laws have that bizarre and problem-posing property, T invariance?

    The answer we can offer today is incomparably deeper and more sophisticated than the one we could offer 50 years ago. Today’s understanding emerged from a brilliant interplay of experimental discovery and theoretical analysis, which yielded several Nobel prizes. Yet our answer still contains a serious loophole. As I’ll explain, closing that loophole may well lead us, as an unexpected bonus, to identify the cosmological dark matter.

    II.

    The modern history of T invariance begins in 1956. In that year, T. D. Lee and C. N. Yang questioned a different but related feature of physical law, which until then had been taken for granted. Lee and Yang were not concerned with T itself, but with its spatial analogue, the parity transformation, “P.” Whereas T involves looking at movies run backward in time, P involves looking at movies reflected in a mirror. Parity invariance is the hypothesis that the events you see in the reflected movies follow the same laws as the originals. Lee and Yang identified circumstantial evidence against that hypothesis and suggested critical experiments to test it. Within a few months, experiments proved that P invariance fails in many circumstances. (P invariance holds for gravitational, electromagnetic, and strong interactions, but generally fails in the so-called weak interactions.)

    Those dramatic developments around P (non)invariance stimulated physicists to question T invariance, a kindred assumption they had also once taken for granted. But the hypothesis of T invariance survived close scrutiny for several years. It was only in 1964 that a group led by James Cronin and Valentine Fitch discovered a peculiar, tiny effect in the decays of K mesons that violates T invariance.

    III.

    The wisdom of Joni Mitchell’s insight—that “you don’t know what you’ve got ‘til it’s gone”—was proven in the aftermath.

    If, like small children, we keep asking, “Why?” we may get deeper answers for a while, but eventually we will hit bottom, when we arrive at a truth that we can’t explain in terms of anything simpler. At that point we must call a halt, in effect declaring victory: “That’s just the way it is.” But if we later find exceptions to our supposed truth, that answer will no longer do. We will have to keep going.

    As long as T invariance appeared to be a universal truth, it wasn’t clear that our italicized question was a useful one. Why was the universe T invariant? It just was. But after Cronin and Fitch, the mystery of T invariance could not be avoided.

    Many theoretical physicists struggled with the vexing challenge of understanding how T invariance could be extremely accurate, yet not quite exact. Here the work of Makoto Kobayashi and Toshihide Maskawa proved decisive. In 1973, they proposed that approximate T invariance is an accidental consequence of other, more-profound principles.

    The time was ripe. Not long before, the outlines of the modern Standard Model of particle physics had emerged and with it a new level of clarity about fundamental interactions.

    Standard model with Higgs New
    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    By 1973 there was a powerful—and empirically successful!—theoretical framework, based on a few “sacred principles.” Those principles are relativity, quantum mechanics and a mathematical rule of uniformity called “gauge symmetry.”

    It turns out to be quite challenging to get all those ideas to cooperate. Together, they greatly constrain the possibilities for basic interactions.

    Kobayashi and Maskawa, in a few brief paragraphs, did two things. First they showed that if physics were restricted to the particles then known (for experts: if there were just two families of quarks and leptons), then all the interactions allowed by the sacred principles also respect T invariance. If Cronin and Fitch had never made their discovery, that result would have been an unalloyed triumph. But they had, so Kobayashi and Maskawa went a crucial step further. They showed that if one introduces a very specific set of new particles (a third family), then those particles bring in new interactions that lead to a tiny violation of T invariance. It looked, on the face of it, to be just what the doctor ordered.

    In subsequent years, their brilliant piece of theoretical detective work was fully vindicated. The new particles whose existence Kobayashi and Maskawa inferred have all been observed, and their interactions are just what Kobayashi and Maskawa proposed they should be.

    Before ending this section, I’d like to add a philosophical coda. Are the sacred principles really sacred? Of course not. If experiments force scientists to modify those principles, they will do so. But at the moment, the sacred principles look awfully good. And evidently it’s been fruitful to take them very seriously indeed.

    IV.

    So far I’ve told a story of triumph. Our italicized question, one of the most striking puzzles about how the world works, has received an answer that is deep, beautiful and fruitful.

    But there’s a worm in the rose.

    A few years after Kobayashi and Maskawa’s work, Gerard ’t Hooft discovered a loophole in their explanation of T invariance. The sacred principles allow an additional kind of interaction. The possible new interaction is quite subtle, and ’t Hooft’s discovery was a big surprise to most theoretical physicists.

    The new interaction, were it present with substantial strength, would violate T invariance in ways that are much more obvious than the effect that Cronin, Fitch and their colleagues discovered. Specifically, it would allow the spin of a neutron to generate an electric field, in addition to the magnetic field it is observed to cause. (The magnetic field of a spinning neutron is broadly analogous to that of our rotating Earth, though of course on an entirely different scale.) Experimenters have looked hard for such electric fields, but so far they’ve come up empty.

    Nature does not choose to exploit ’t Hooft’s loophole. That is her prerogative, of course, but it raises our italicized question anew: Why does Nature enforce T invariance so accurately?

    Several explanations have been put forward, but only one has stood the test of time. The central idea is due to Roberto Peccei and Helen Quinn. Their proposal, like that of Kobayashi and Maskawa, involves expanding the standard model in a fairly specific way. One introduces a neutralizing field, whose behavior is especially sensitive to ’t Hooft’s new interaction. Indeed if that new interaction is present, then the neutralizing field will adjust its own value, so as to cancel that interaction’s influence. (This adjustment process is broadly similar to how negatively charged electrons in a solid will congregate around a positively charged impurity and thereby screen its influence.) The neutralizing field thereby closes our loophole.
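
    To picture that adjustment, here is a deliberately crude toy in Python with arbitrary numbers (a cartoon of the idea, not Peccei and Quinn’s actual field equations): a single number stands in for the neutralizing field, and it simply rolls downhill on an effective energy that depends on the combination of ’t Hooft’s interaction strength and the field’s own value. At the bottom of the hill that combination is zero, so the interaction’s influence is cancelled.

        import math

        # Toy "neutralizing field" (illustrative only, arbitrary units): the field value a
        # relaxes to the minimum of an effective energy that depends on the combination
        # (theta + a / f). At the minimum that combination vanishes, so the effect
        # parameterized by theta is screened away, as described above.

        theta, f = 0.7, 1.0                       # hypothetical interaction strength and field scale
        energy = lambda a: 1.0 - math.cos(theta + a / f)

        a, step = 0.0, 0.1
        for _ in range(2000):                     # crude gradient descent toward the minimum
            a -= step * math.sin(theta + a / f) / f

        print(theta + a / f)                      # ~0: the field has adjusted to cancel the effect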

    Peccei and Quinn overlooked an important, testable consequence of their idea. The particles produced by their neutralizing field—its quanta—are predicted to have remarkable properties. Since they didn’t take note of these particles, they also didn’t name them. That gave me an opportunity to fulfill a dream of my adolescence.

    A few years before, a supermarket display of brightly colored boxes of a laundry detergent named Axion had caught my eye. It occurred to me that “axion” sounded like the name of a particle and really ought to be one. So when I noticed a new particle that “cleaned up” a problem with an “axial” current, I saw my chance. (I soon learned that Steven Weinberg had also noticed this particle, independently. He had been calling it the “Higglet.” He graciously, and I think wisely, agreed to abandon that name.) Thus began a saga whose conclusion remains to be written.

    In the chronicles of the Particle Data Group you will find several pages, covering dozens of experiments, describing unsuccessful axion searches.

    Yet there are grounds for optimism.

    The theory of axions predicts, in a general way, that axions should be very light, very long-lived particles whose interactions with ordinary matter are very feeble. But to compare theory and experiment we need to be quantitative. And here we meet ambiguity, because existing theory does not fix the value of the axion’s mass. If we know the axion’s mass we can predict all its other properties. But the mass itself can vary over a wide range. (The same basic problem arose for the charmed quark, the Higgs particle, the top quark and several others. Before each of those particles was discovered, theory predicted all of its properties except for the value of its mass.) It turns out that the strength of the axion’s interactions is proportional to its mass. So as the assumed value for axion mass decreases, the axion becomes more elusive.

    In the early days physicists focused on models in which the axion is closely related to the Higgs particle. Those ideas suggested that the axion mass should be about 10 keV—that is, about one-fiftieth of an electron’s mass. Most of the experiments I alluded to earlier searched for axions of that character. By now we can be confident such axions don’t exist.
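
    Two quick numerical asides on the figures in this section, spelled out in Python (illustrative arithmetic only; the lighter benchmark mass is an assumption chosen for contrast, not a prediction):

        electron_mass_keV = 511.0
        print(electron_mass_keV / 50)       # ~10 keV: "about one-fiftieth of an electron's mass"

        # The scaling quoted above: interaction strength is proportional to the axion mass,
        # so an axion a million times lighter than the old 10 keV estimate (an assumed
        # value for illustration) would couple a million times more feebly.
        m_early_keV, m_light_keV = 10.0, 10.0e-6
        print(m_light_keV / m_early_keV)    # 1e-6: far more elusive, as described above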

    Attention turned, therefore, toward much smaller values of the axion mass (and in consequence feebler couplings), which are not excluded by experiment. Axions of this sort arise very naturally in models that unify the interactions of the standard model. They also arise in string theory.

    Axions, we calculate, should have been abundantly produced during the earliest moments of the Big Bang. If axions exist at all, then an axion fluid will pervade the universe. The origin of the axion fluid is very roughly similar to the origin of the famous cosmic microwave background (CMB) radiation, but there are three major differences between those two entities.

    Cosmic Background Radiation Planck
    CMB per ESA/Planck

    ESA Planck
    ESA/Planck

    First: The microwave background has been observed, while the axion fluid is still hypothetical. Second: Because axions have mass, their fluid contributes significantly to the overall mass density of the universe. In fact, we calculate that they contribute roughly the amount of mass astronomers have identified as dark matter! Third: Because axions interact so feebly, they are much more difficult to observe than photons from the CMB.

    The experimental search for axions continues on several fronts. Two of the most promising experiments are aimed at detecting the axion fluid. One of them, ADMX (Axion Dark Matter eXperiment) uses specially crafted, ultrasensitive antennas to convert background axions into electromagnetic pulses.

    ADMX Axion Dark Matter Experiment
    U Washington ADMX experiment

    The other, CASPEr (Cosmic Axion Spin Precession Experiment) looks for tiny wiggles in the motion of nuclear spins, which would be induced by the axion fluid. Between them, these difficult experiments promise to cover almost the entire range of possible axion masses.

    CASPEr Experiment
    CASPEr

    Do axions exist? We still don’t know for sure. Their existence would bring the story of time’s reversible arrow to a dramatic, satisfying conclusion, and very possibly solve the riddle of the dark matter, to boot. The game is afoot.

    Frank Wilczek is a Nobel Prize-winning physicist at the Massachusetts Institute of Technology.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 12:28 pm on December 15, 2015 Permalink | Reply
    Tags: , California gas leak, Wired Science   

    From WIRED: “California Has a Huge Gas Leak, and Crews Can’t Stop It Yet” 

    Wired logo

    Wired

    12.15.15
    Sarah Zhang

    1
    Crews from SoCalGas and outside experts work on a relief well at the Aliso Canyon facility above the Porter Ranch area of Los Angeles, on December 9, 2015. Dean Musgrove/Los Angeles Daily News/AP/Pool

    While the world was hammering out a historic agreement to curb carbon emissions—urged along by California, no less—the state was dealing with an embarrassing belch of its own. Methane, a greenhouse gas 70 times more potent than carbon dioxide, has been leaking out of a natural gas storage site in southern California for nearly two months, and a fix won’t arrive until spring.

    The site is leaking up to 145,000 pounds per hour, according to the California Air Resources Board. In just the first month, that’s added up to 80,000 tons, or about a quarter of the state’s ordinary methane emissions over the same period. The Federal Aviation Administration recently banned low-flying planes from flying over the site, since engines plus combustible gas equals kaboom.
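
    Using only the figures quoted above (80,000 tons of methane in the first month, and the roughly 70-fold potency relative to carbon dioxide), a back-of-the-envelope conversion in Python gives a sense of the climate impact in CO2-equivalent terms. It is an illustration, not an official estimate:

        # Rough CO2-equivalent of the leak's first month, from the article's own numbers.
        methane_tons = 80_000                   # first-month estimate quoted above
        potency_vs_co2 = 70                     # the article's methane-vs-CO2 multiplier
        print(methane_tons * potency_vs_co2)    # ~5.6 million tons of CO2-equivalent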

    Steve Bohlen, who until recently was state oil and gas supervisor, can’t remember the last time California had to deal with a gas leak this big. “I asked this question of our staff of 30 years,” says Bohlen. “This is unique in the last three or four decades. This is an unusual event, period.”

    Families living downwind of the site have also noticed the leak—boy, have they noticed. Methane itself is odorless, but the mercaptan added to natural gas gives it a characteristic sulfurous smell. Over 700 households have at least temporarily relocated, and one family has filed a lawsuit against the Southern California Gas Company alleging health problems from the gas. The gas levels are too low for long-term health effects, according to health officials, but the odor is hard to ignore.

    Given both the local and global effects of the gas leak, why is it taking so long to stop? The answer has to do with the site at Aliso Canyon, an abandoned oil field. Yes, that’s right, natural gas is stored underground in old oil fields. It’s common practice in the US, but largely unique to this country. The idea goes that geological sites that were good at keeping in oil for millions of years would also be good at keeping in gas.

    Across the US, over 300 depleted oil fields, of which a dozen are in California, are now natural gas storage sites. “We have the largest natural gas storage system in the world,” says Chris McGill, a vice president of the American Gas Association. And the site at Aliso Canyon is one of the largest in the country, with a capacity of 86 billion cubic feet. Aliso became a natural gas storage site in the 1970s. Each summer, SoCalGas pumps natural gas into the field, and each winter, it pumps it out. The sites are basically giant underground reserves for winter heating.

    On October 23, workers noticed the leak at a 40-year-old well in Aliso Canyon. Small leaks are routine, says Bohlen, and SoCalGas did what it routinely does: put fluid down the well to stop the leak and tinker with the well head. It didn’t work. The company tried it five more times, and the gas kept leaking. At this point, it was clear the leak was far from routine, and the problem was deeper underground.

    2
    How crews will drill the relief well at Aliso Canyon. SoCalGas.

    Here’s the new plan: SoCalGas began drilling a relief well on December 4. The relief well will intercept the steel pipe of the original well—all of seven inches in diameter—thousands of feet below ground. Crews will then pour in cement to seal the wells off permanently. “Relief wells are a proven approach to shutting down oil and gas wells,” said SoCalGas in a statement.

    As if finding a skinny pipe hundreds of feet below ground weren’t hard enough, the presence of all that explosive natural gas adds an extra layer of complication. A tiny spark and everything can go boom. So at the leaking well site, work is restricted to daylight, says Bohlen, as lighting equipment could produce stray sparks. (The relief well is far enough away that drilling there can proceed 24/7.) Back in 1975, a well at Aliso Canyon caught fire because of sparks from sand flying up the well.

    And crews can’t set a deliberate fire, also known as flaring, which they often do at other remote areas with excess gas. The leak is so big and the flare would be so hot that it could make the mess even harder to contain.

    “There is no stone being left unturned to get this well closed. It’s our top priority,” says Bohlen. But even that is slow, with months of drilling to come as methane continues to billow into the air.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:32 pm on September 28, 2015 Permalink | Reply
    Tags: , , , Wired Science   

    From WIRED: “The Other Way A Quantum Computer Could Revive Moore’s Law” 

    Wired logo

    Wired

    09.28.15
    Cade Metz

    1
    D-Wave’s quantum chip. Google

    Google is upgrading its quantum computer. Known as the D-Wave, Google’s machine is making the leap from 512 qubits—the fundamental building block of a quantum computer—to more than 1,000 qubits. And according to the company that built the system, this leap doesn’t require a significant increase in power, something that could augur well for the progress of quantum machines.

    Together with NASA and the Universities Space Research Association, or USRA, Google operates its quantum machine at the NASA Ames Research center not far from its Mountain View, California headquarters. Today, D-Wave Systems, the Canadian company that built the machine, said it has agreed to provide regular upgrades to the system—keeping it “state-of-the-art”—for the next seven years. Colin Williams, director of business development and strategic partnerships for D-Wave, calls this “the biggest deal in the company’s history.” The system is also used by defense giant Lockheed Martin, among others.

    Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

    Over the life of Google’s contract, if all goes according to plan, the performance of the system will continue to improve. But there’s another characteristic to consider. Williams says that as D-Wave expands the number of qubits, the amount of power needed to operate the system stays roughly the same. “We can increase performance with constant power consumption,” he says. At a time when today’s computer chip makers are struggling to get more performance out of the same power envelope, the D-Wave goes against the trend.

    The Qubit

    A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

    Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
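
    The bookkeeping behind those two paragraphs can be sketched in a few lines of Python (an illustration, not D-Wave’s software): two qubits span the four basis values listed above, and every added qubit doubles the count, so describing n qubits classically takes 2**n numbers.

        from itertools import product

        # Enumerate the basis states of two qubits, then show how fast the state space grows.
        print(["".join(bits) for bits in product("01", repeat=2)])   # ['00', '01', '10', '11']
        for n in (2, 10, 512, 1000):
            print(n, "qubits ->", len(str(2 ** n)), "decimal digits in the state-space count")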

    D-Wave believes it has found a way around this problem. It released its first machine, spanning 16 qubits, in 2007. Together with NASA, Google started testing the machine when it reached 512 qubits a few years back. Each qubit, D-Wave says, is a superconducting circuit—a tiny loop of flowing current—and these circuits are dropped to extremely low temperatures so that the current flows in both directions at once. The machine then performs calculations using algorithms that, in essence, determine the probability that a collection of circuits will emerge in a particular pattern when the temperature is raised.

    Reversing the Trend

    Some have questioned whether the system truly exhibits quantum properties. But researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.
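
    The flavor of an “annealing” search for the best option can be sketched classically. The Python toy below is plain simulated annealing on a tiny, made-up Ising-style problem (an analogy to the optimization tasks described above, not a model of D-Wave’s quantum hardware): it looks for the +1/-1 spin assignment with the lowest interaction energy, accepting uphill moves less and less often as the “temperature” falls.

        import math, random

        random.seed(0)
        n = 8
        # Random +1/-1 couplings between every pair of spins (arbitrary toy problem).
        J = {(i, k): random.choice([-1.0, 1.0]) for i in range(n) for k in range(i + 1, n)}
        energy = lambda s: sum(Jik * s[i] * s[k] for (i, k), Jik in J.items())

        spins = [random.choice([-1, 1]) for _ in range(n)]
        current = energy(spins)
        steps = 20000
        for step in range(steps):
            T = 5.0 * (1 - step / steps) + 1e-3      # slowly lower the temperature
            i = random.randrange(n)
            spins[i] *= -1                           # propose flipping one spin
            new = energy(spins)
            if new <= current or random.random() < math.exp((current - new) / T):
                current = new                        # accept the move
            else:
                spins[i] *= -1                       # reject it: flip the spin back

        print(spins, current)                        # a low-energy configuration and its energy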

    D-Wave says that most of the power needed to run the system is related to the extreme cooling. The entire system consumes about 15 kilowatts of power, while the quantum chip itself uses a fraction of a microwatt. “Most of the power,” Williams says, “is being used to run the refrigerator.” This means that the company can continue to improve its performance without significantly expanding the power it has to use. At the moment, that’s not hugely important. But in a world where classical computers are approaching their limits, it at least provides some hope that the trend can be reversed.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:42 am on September 18, 2015 Permalink | Reply
    Tags: , Environment, Wired Science   

    From wired: “This Tower Purifies a Million Cubic Feet of Air an Hour” 

    Wired logo

    Wired

    09.18.15
    Liz Stinson

    1
    Daan Roosegaarde worked with scientist Bob Ursem and European Nano Solutions to create the Smog Free Tower. This one is in Rotterdam.

    There’s a massive vacuum cleaner in the middle of a Rotterdam park and it’s sucking all the smog out of the air. A decent portion of it, anyway. And it isn’t a vacuum, exactly. It looks nothing like a Dyson or a Hoover. It’s probably more accurate to describe it as the world’s largest air purifier.

    The Smog Free Tower, as it’s called, is a collaboration between Dutch designer Daan Roosegaarde, Delft Technology University researcher Bob Ursem, and European Nano Solutions, a green tech company in the Netherlands. The metal tower, nearly 23 feet tall, can purify up to 1 million cubic feet of air every hour. To put that in perspective, the Smog Free Tower would need just 10 hours to purify enough air to fill Madison Square Garden. “When this baby is up and running for the day you can clean a small neighborhood,” says Roosegaarde.

    It does this by ionizing airborne smog particles. Particles smaller than 10 micrometers in diameter (about the width of a cotton fiber) are tiny enough to inhale and can be harmful to the heart and lungs. Ursem, who has been researching ionization since the early 2000s, says a radial ventilation system at the top of the tower (powered by wind energy) draws in dirty air, which enters a chamber where particles smaller than 15 micrometers are given a positive charge. Like iron shavings drawn to a magnet, the positively charged particles attach themselves to a grounded counter electrode in the chamber. The clean air is then expelled through vents in the lower part of the tower, surrounding the structure in a bubble of clean air. Ursem notes that this process doesn’t produce ozone the way many other ionic air purifiers do, because the particles are charged with a positive voltage rather than a negative one.

    Ursem has used the same technique in hospital purification systems, parking garages, and along roadsides, but the tower is by far the biggest and prettiest application of his technology. Indeed, it’s meant to be a design object as much as a technological innovation. Roosegaarde is known for wacky, socially conscious design projects—he’s the same guy who did the glowing Smart Highway in the Netherlands. He says making the tower beautiful brings widespread attention to a problem typically hidden behind bureaucracy. “I’m tired of design being about chairs, tables, lamps, new cars, and new watches,” he says. “It’s boring, we have enough of this stuff. Let’s focus on the real issues in life.”

    Roosegaarde has been working with Ursem and ENS, the company that fabricated the tower, for two years to bring it into existence, and now that it’s up and running, he says people are intrigued. He just returned from Mumbai where he spoke to city officials about installing a similar tower in a park, and officials in Mexico City, Paris, and Beijing (the smoggy city that inspired the project) also are interested. “We’ve gotten a lot of requests from property developers who want to place it in a few filthy rich neighborhoods of course, and I tend to say no to these right now,” he says. “I think that it should be in a public space.”

    Roosegaarde has plans to take the tower on a “smog-free tour” in the coming year so he can demonstrate the tower’s abilities in cities around the world. It’s a little bit of showmanship that he hopes will garner even more attention for the machine, which he calls a “shrine-like temple of clean air.” Roosegaarde admits that his tower isn’t a final solution for cleaning a city’s air. “The real solution everybody knows,” he says, adding that it’s more systematic than clearing a hole of clean air in the sky. He views the Smog Free tower as an initial step in a bottom-up approach to cleaner air, with citizens acting as the driving force. “How can we create a city where in 10 years these towers aren’t necessary anymore?” he says. “This is the bridge towards the solution.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:51 am on August 13, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From WIRED: “The Way We Measure Earthquakes Is Stupid” 

    Wired logo

    Wired

    08.13.15
    Sarah Zhang

    1
    Barisonal/Getty Images

    This weekend, a 3.3-magnitude earthquake rattled San Francisco ever so slightly. The small quake, like so many before it, passed, and San Franciscans went back to conveniently ignoring their seismic reality. Magnitude 3.3 earthquakes are clearly no big deal, and the city survived a 6.9-magnitude earthquake in 1989 mostly fine—so how much bigger will the Big One, at 8.0, be than the 1989 quake?

    Ten times! Or so the smarty-pants among you who understand logarithms may be thinking. But… that’s wrong. On the current logarithmic earthquake scale, a whole number increase, like from 7.0 to 8.0, actually means a 32-fold increase in earthquake energy. Even if you can mentally do that math—and feel smug doing it—the logarithmic scale for earthquakes is terrible for intuitively communicating risk. “It’s arbitrary,” says Lucy Jones, a seismologist with the US Geological Survey. “I’ve never particularly liked it.”

    Just how arbitrary? Oh, let us count the ways.

    First, there’s the matter of starlight, which has no relevance to earthquakes except that Charles Richter was once an amateur astronomer. When Richter and Beno Gutenberg were developing what would become the Richter scale in 1935, they took inspiration from magnitude, the logarithmic measure of the brightness of stars. They defined earthquake magnitude as the logarithm of shaking amplitude recorded on a particular seismograph in southern California.

    Now, logarithms might make sense for stars a million, billion, or gazillion miles away whose brightnesses vary widely—but back on Earth, the rationale is shakier. Understanding the severity of earthquakes is important for millions of people, and the logarithmic scale is hard to grok: 8 seems only marginally larger than 6, but on our earthquake logarithmic scale, it’s roughly a 1,000-fold difference in energy released. Seismologists have to unpack it every time they use it with non-experts. “We just undo the logarithm when we try to tell people,” says Thomas Heaton, a seismologist at Caltech.

    A better way to measure earthquakes does exist—at least among scientists. That would be seismic moment, equal to (take a breath) the area of rupture along a fault multiplied by the average displacement multiplied by the rigidity of the earth—which boils down to the amount of energy released in a quake. The Richter scale uses surface shaking amplitude as a proxy of energy, but seismologists can now get at energy more directly and accurately. The moment of the largest earthquake ever recorded came to 2.5 × 10^23 joules.

    A big number but meaningless without context, right? (That biggest earthquake ever was a 9.6 in Chile in 1960.) Seismologists now have a tortured formula (below) to convert seismic moment (Mo) to the familiar old logarithmic magnitude scale (M). That gets us the aptly named moment magnitude scale, which supplanted the Richter scale in popular use in the 1970s. The Richter scale may be obsolete now, but its logarithm definition of magnitude remains. “Through the years, seismologists have tried to be consistent,” says Heaton. “It’s been confusing ever since.”

    M = (2/3) * log10(Mo) - 10.7

    That formula explains why going up one unit in magnitude actually means a 32-fold increase in energy. (Thirty-two is approximately equal to 10^(3/2).) But many of us learned, at one point, that a magnitude-6 earthquake is 10 times worse than a 5. Where did this misconception come from? Richter, of course. Richter looked at his data and calculated that a 10-fold increase in shaking amplitude on his instrument correlated with a 32-fold increase in energy released. He literally did it by drawing a line. (See the diagram below.) But Richter was only working on earthquakes in southern California.

    2
    USGS
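
    A short numerical check in Python ties the last few paragraphs together. It uses the conversion formula above (with the seismic moment Mo expressed in dyne-centimeters, the unit conventionally paired with the constant 10.7) and the definition of seismic moment as rigidity times rupture area times average slip. The fault parameters at the end are assumed, illustrative values, not measurements of any particular quake.

        import math

        magnitude = lambda Mo_dyne_cm: (2.0 / 3.0) * math.log10(Mo_dyne_cm) - 10.7
        moment    = lambda M: 10 ** (1.5 * (M + 10.7))      # inverse of the formula above

        print(moment(8.0) / moment(7.0))   # ~32: one magnitude unit is a ~32-fold jump in moment
        print(moment(8.0) / moment(6.0))   # ~1,000: two units, the 8-versus-6 comparison above
        print(10 ** 1.5)                   # ~31.6, i.e. "thirty-two is approximately 10^(3/2)"

        # The largest quake ever recorded: 2.5e23 joules, converted to dyne-cm (1 J = 1e7 dyne-cm).
        print(magnitude(2.5e23 * 1e7))     # ~9.6, matching the 1960 Chilean quake mentioned below

        # Seismic moment from its definition (rigidity * rupture area * average slip),
        # using assumed, illustrative fault parameters.
        rigidity_Pa = 3.0e10               # ~30 GPa, a typical crustal rigidity (assumption)
        area_m2     = 20_000.0 * 10_000.0  # a 20 km x 10 km rupture patch (assumption)
        slip_m      = 1.0                  # 1 m of average slip (assumption)
        print(magnitude(rigidity_Pa * area_m2 * slip_m * 1e7))   # ~6.5 for these assumed numbers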

    Seismologists now understand that many variables, like the type of soil, affect the intensity of surface shaking from earthquakes. In other words, the relationship between shaking amplitude and earthquake energy from Richter doesn’t hold for all earthquakes, and moment magnitude doesn’t easily translate to earthquake intensity. Seismologists use seismic moment—which, remember, is basically energy released—to compare earthquakes because it does get at the totality of an earthquake rather than the shaking at just one particular place in the ground.

    Back in 2000, Jones wrote an article in Seismological Research Letters suggesting a new earthquake scale. “I hate the Richter scale,” she began the piece. “It feels almost sacrilegious, but I have to say it.” Instead, she proposed a scale based on seismic moment, measured in Akis, named after the inventor of the seismic moment, Keiiti Aki. A 5.0 earthquake might be equivalent to 400 Akis—so that a tiny 2.0 could be measured in milli-Akis and a devastating 9.0 in billions of Akis. It’s more logical than Richter but also more layperson-friendly than seismic moment. “We got a lot of pushback,” says Jones. “Seismologists were saying people understand magnitude. I was like, ‘No they don’t. Have you tried to explain it to people?’”

    “This confusion over earthquake magnitude seems to be creating a lot of confusion in the design of buildings,” says Heaton. New and ugly things happen when earthquakes get past 8.0 or 8.5. Tsunamis are one. Another is that tall buildings are more vulnerable to long, slow lurches of the ground that only occur in huge quakes. And since 1906, San Francisco has built a lot more tall buildings downtown. At this point, I mentioned to Heaton that I was in fact speaking to him from an office building in downtown San Francisco. His parting words? “Good luck to you.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 2:16 pm on July 24, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From WIRED: “There’s a Volcano Called Kick ‘Em Jenny, and It’s Angry” 

    Wired logo

    Wired

    07.24.15
    Erik Klemetti

    1
    A bathymetric map of the seafloor off northern Grenada showing the volcanic cluster surrounding Kick ‘Em Jenny. NOAA and Seismic Research Institute, 2003 (published in GVN Bulletin).

    A submarine volcano near the coast of Grenada in the West Indies (Lesser Antilles) looks like it might be headed towards a new eruption. A new swarm of earthquakes has begun in the area of Kick ‘Em Jenny (one of the best volcano names on Earth) and locals have noticed more bubbles in the ocean above the volcano (which reaches within ~180 meters of the surface). The intensity of this degassing and earthquake swarm is enough to have the volcano moved to “Orange” alert status by the Seismic Research Centre at the University of the West Indies, meaning they expect an eruption soon. A 5 kilometer (3 mile) exclusion zone has also been set up for boat traffic around the volcano.

    Kick ‘Em Jenny doesn’t pose a threat to Grenada itself, even though it’s only 8 kilometers from the island. The biggest hazard is to boats that frequent the area, as the release of volcanic gases and debris into the water could heat up the water and make it tumultuous. In 1939, the volcano did produce an eruption plume that breached the surface of the ocean, so there is a small chance that any new eruption could do the same. However, eruptions since 1939, including the most recent in 2001, have been minor and had no surface expression — think of something like the 2010 eruptions at El Hierro in the Canary Islands.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:20 pm on April 27, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From WIRED: “Turns Out Satellites Work Great for Mapping Earthquakes” 

    Wired logo

    Wired

    1
    Satellite radar image of the magnitude 6.0 South Napa earthquake. European Space Agency

    The Nepal earthquake on Saturday devastated the region and killed over 2,500 people, with more casualties mounting across four different countries. The first 24 hours of a disaster are the most important, and first-responders scramble to get as much information about the energy and geological effects of earthquakes as they can. Seismometers can help illustrate the location and magnitude of earthquakes around the world, but for more precise detail, you need to look at three-dimensional models of the ground’s physical displacement.

    The easiest way to characterize that moving and shaking is with GPS and satellite data, together called geodetic data. That information is already used by earthquake researchers and geologists around the world to study the earth’s tectonic plate movements—long-term trends that establish themselves over years.

    4
    The tectonic plates of the world were mapped in the second half of the 20th century.

    But now, researchers at the University of Iowa and the U.S. Geological Survey (USGS) have shown a faster way to use geodetic data to assess fault lines, turning over reports in as little as a day to help guide rapid responses to catastrophic quakes.

    2
    A radar interferogram of the August 2014 South Napa earthquake. A single cycle of color represents about a half inch of surface displacement. Jet Propulsion Laboratory

    Normally, earthquake disaster aid and emergency response requires detailed information about surface movements: If responders know how much ground is displaced, they’ll know better what kind of infrastructure damage to expect, or what areas pose the greatest risk to citizens. Yet emergency response agencies don’t use geodetic data immediately, choosing instead to wait several days or even weeks before finally processing the data, says University of Iowa geologist William Barnhart. By then, the damage has been done and crews are already on the ground, with relief efforts well underway.

    The new results are evidence that first responders can get satellite data fast enough to inform how they should respond. Barnhart and his team used geodetic data to measure small deformations in the surface caused by a 6.0-magnitude quake that hit Napa Valley in August 2014 (the biggest the Bay Area had seen in 25 years). By analyzing those measurements, the geologists determined how much the ground moved with relation to the fault plane, which helps describe the exact location, orientation, and dimensions of the entire fault.

    3
    A 3D slip map of the Napa quake generated from GPS surface displacements. Jet Propulsion Laboratory

    Then they created the Technicolor map above, showing just how much the ground shifted. In this so-called interferogram of the Napa earthquake epicenter, the cycles of color represent vertical ground displacement, where every full cycle indicates 6 centimeters of movement (between one green band and the next, for example, the ground shifted 6 cm vertically).

    According to Barnhart, this is the first demonstration of geodetic data being acquired and analyzed on the same day as an earthquake. John Langbein, a geologist at the USGS, finds the results very encouraging, and hopes to see geodetic data used regularly as a tool to make earthquake responses faster and more efficient.

    Barnhart is quick to point out that this method is most useful for moderate earthquakes (between magnitudes of 5.5 and 7.0). Although the Nepal earthquake had a magnitude of 7.8, over 35 aftershocks continued to rock the region, including one as high as 6.7 on Sunday. The earthquake itself flattened broad swaths of the capital city of Kathmandu, and caused avalanches across the Himalayan mountains (including Mount Everest), killing and stranding many climbers. But the aftershocks are stymieing relief efforts, paralyzing citizens with immobilizing fear, and creating new avalanches in nearby mountains.

    It’s also worth remembering that the 2010 earthquake that devastated Haiti—and killed about 316,000 people—had a magnitude of 7.0. Most areas of the world, especially developing nations, aren’t equipped to withstand even small tremors in the earth. It’s those places that are also likely to have fewer seismometers, making the satellite information even more helpful.

    As the situation in Nepal moves forward, the aftermath might hopefully speed up plans to make geodetic data available just hours after an earthquake occurs. Satellite systems could be integral in allowing first responders to move swiftly in the face of unpredictable, unpreventable events.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:58 am on January 27, 2015 Permalink | Reply
    Tags: , , , Wired Science   

    From Wired: “Why We’re Looking for Alien Life on Moons, Not Just Planets” 

    Wired logo

    Wired

    01.27.15
    Marcus Woo

    1
    An artist’s depiction of what a moon around another planet may look like. JPL-Caltech/NASA

    Think “moon” and you probably envision a desolate, cratered landscape, maybe with an American flag and some old astronaut footprints. Earth’s moon is no place for living things. But that isn’t necessarily true for every moon. Whirling around Saturn, Enceladus spits out geysers of water from an underground ocean. Around Jupiter, Europa has a salty, subsurface sea, and back at Saturn, Titan has lakes of ethane and methane. A handful of the roughly 150 moons in the solar system have atmospheres, organic compounds, ice, and maybe even liquid water. They all seem like places where something could live—albeit something weird.

    e
    Enceladus

    2
    Europa

    3
    Titan

    So now that the Kepler space telescope has found more than 1,000 planets—data that suggest the Milky Way galaxy could contain a hundred billion worlds—it makes sense to some alien-hunters to concentrate not on them but on their moons.

    NASA Kepler Telescope
    Kepler

    The odds for life on these so-called exoplanets look a lot better—multiply that hundred billion by 150 and you get a lot of places to look for ET. “Because there are so many more moons than planets, if life can get started on moons, then that’s going to be a lot of lively moons,” says Seth Shostak, an astronomer at the SETI Institute.

    Even better, more of those moons might be in the habitable zone, the region around a star where liquid water can exist. That’s one reason Harvard astronomer David Kipping got interested in exomoons. He says about 1.7 percent of all stars similar to the sun have a rocky planet in their habitable zones. But if you’re talking about planets made out of gas, like Saturn and Jupiter, that number goes up to 9.2 percent. Gaseous planets don’t have the solid surfaces that astronomers think life needs, but their moons might.
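
    Spelled out in Python, the counting argument of the last few paragraphs looks like this (purely illustrative arithmetic, using only the figures quoted above):

        planets_in_galaxy = 100e9     # the "hundred billion worlds" suggested by Kepler's data
        moons_per_system  = 150       # roughly the number of moons in our own solar system
        print(planets_in_galaxy * moons_per_system)    # 1.5e13 places to look, per the article's multiplication

        rocky_hz_fraction = 0.017     # 1.7% of sun-like stars: a rocky planet in the habitable zone
        gas_hz_fraction   = 0.092     # 9.2%: a gas-giant planet in the habitable zone
        print(gas_hz_fraction / rocky_hz_fraction)     # ~5.4x more habitable-zone gas giants to host moons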

    So far, no one has found a moon outside the solar system. But people like Kipping are looking hard. He leads a project called the Hunt for Exomoons with Kepler, the only survey project dedicated to finding moons in other planetary systems. The team has looked at 55 systems, and this year they plan to add 300 more. “It’s going to be a very big year for us,” Kipping says.

    Finding moons isn’t easy. Kepler was designed to find planets—the telescope watches for dips in starlight when a planet passes in front of its star. But if a moon accompanies that planet, it can dim the starlight a little further, leaving its own imprint on the record of the star’s brightness over time, called a light curve. A moon’s gravitational tug also causes the planet to wobble, a subtle motion that scientists can measure.

    In their search, Kipping’s team sifts through more than 4,000 potential planets in Kepler’s database, identifying 400 that have the best chances of hosting a detectable moon. They then use a supercomputer to simulate how a hypothetical moon of every possible size and orientation would orbit each of the 400 planets. The computer simulations produce hypothetical light curves that the astronomers can then compare to the real Kepler data. The real question, Kipping says, isn’t whether moons exist—he’s pretty sure they do—but how big they are. If the galaxy is filled with big moons about the same size as Earth or larger, then the researchers might find a dozen such moons in the Kepler data. But if it turns out that the universe doesn’t make moons that big, and they’re as small as the moons in our solar system, then the chances of detecting a moon drop.
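
    As a cartoon of what the extra dimming from a moon looks like, here is a minimal Python sketch (an idealized illustration with assumed sizes, not the team’s actual modeling, which also handles limb darkening, orbital dynamics, and timing wobbles): the depth of a transit dip is roughly the squared ratio of the transiting body’s radius to the star’s, so a moon crossing alongside its planet deepens the dip by a tiny, telltale amount.

        # Idealized transit depths: dip ~= (radius of transiting body / radius of star)**2.
        # All sizes are approximate solar-system values used purely for illustration.
        R_SUN, R_JUPITER, R_EARTH = 696_000.0, 71_492.0, 6_371.0    # radii in km

        depth = lambda r_body, r_star=R_SUN: (r_body / r_star) ** 2

        planet_only = depth(R_JUPITER)
        with_moon   = depth(R_JUPITER) + depth(R_EARTH)             # both silhouettes block light

        print(f"planet alone : {planet_only:.4%} of the starlight blocked")   # ~1.06%
        print(f"planet + moon: {with_moon:.4%}")                              # slightly deeper
        print(f"extra dimming from the moon: {depth(R_EARTH):.4%}")           # ~0.008%, the tiny signal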

    According to astronomer Gregory Laughlin of the University of California, Santa Cruz, the latter case may be more likely. “My gut feeling is that because the moon formation process seems so robust in our solar system, I would expect a similar thing is going on in an exoplanetary system,” he says. Which means it’ll be tough for Kipping’s team to find anything, even though they’re getting better at detecting the teeny ones—in one case, down to slightly less than twice the mass of the solar system’s largest moon, Ganymede.

    5
    Ganymede

    Whether anything can live on those moons is a whole other story. Even if astronomers eventually detect a moon, determining whether it’s habitable (with an atmosphere, water, and organic compounds)—let alone actually inhabited—would be extremely difficult. The starlight reflected off the planet would be overwhelming. Current and near-future telescopes won’t be able to discern much of anything in detail at all—which is why some researchers aren’t optimistic about Kipping’s ideas. “I just don’t see any great path to characterize the moons,” says Jonathan Fortney, an astronomer at UC Santa Cruz.

    Even Kipping acknowledges that it’s impossible to place any odds on whether he’ll actually find an exomoon. Still, thanks to improvements in detecting smaller moons and the 300 additional planets to analyze, Kipping says he’s optimistic. “It would be kind of surprising if we don’t find anything at all,” he says.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 5:18 pm on January 23, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From Wired: “How Three Guys With $10K and Decades-Old Data Almost Found the Higgs Boson First” An Absolutely Great Story 

    Wired logo

    Wired

    1
    The Large Electron-Positron collider’s ALEPH detector was disassembled in 2001 to make room for the Large Hadron Collider. ALEPH collaboration/CERN

    On a fall morning in 2009, a team of three young physicists huddled around a computer screen in a small office overlooking Broadway in New York. They were dressed for success—even the graduate student’s shirt had buttons—and a bottle of champagne was at the ready. With a click of the mouse, they hoped to unmask a fundamental particle that had eluded physicists for decades: the Higgs boson.

    Of course, these men weren’t the only physicists in pursuit of the Higgs boson. In Geneva, a team of hundreds of physicists with an $8 billion machine called the Large Hadron Collider, and the world’s attention, was also in the hunt. But shortly after starting up for the first time, the LHC malfunctioned and went offline for repairs, opening a window that three guys at NYU hoped to take advantage of.

    The key to their strategy was a particle collider that had been dismantled in 2001 to make room for the more powerful LHC. For $10,000 in computer time, they would attempt to show that the Large Electron-Positron collider had been making dozens of Higgs bosons without anybody noticing.

    “Two possible worlds stood before us then,” said physicist Kyle Cranmer, the leader of the NYU group. “In one, we discover the Higgs and a physics fairy tale comes true. Maybe the three of us share a Nobel prize. In the other, the Higgs is still hiding, and instead of beating the LHC, we have to go back to working on the LHC.”

    Cranmer had spent years working on both colliders, beginning as a graduate student at the Large Electron-Positron collider. He had been part of a 100-person statistical team that combed through terabytes of LEP data for evidence of new particles. “Everyone thought we had been very thorough,” he said. “But our worldview was colored by the ideas that were popular at the time.” A few years later, he realized the old data might look very different through the lens of a new theory.

    So, like detectives poring through evidence in a cold case, the researchers aimed to prove that the Higgs, and some supersymmetric partners in crime, had been at the scene in disguise.

    Dreaming up the Higgs

    The Higgs boson is now viewed as an essential component of the Standard Model of physics, a theory that describes all known particles and their interactions. But back in the 1960s, before the Standard Model had coalesced, the Higgs was part of a theoretical fix for a radioactive problem.

    2
    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Here’s the predicament they faced. Sometimes an atom of one element will suddenly transform into an atom of a different element in a process called radioactive decay. For example, an atom of carbon can decay into an atom of nitrogen by emitting two light subatomic particles. (The carbon dating of fossils is a clever use of this ubiquitous process.) Physicists trying to describe the decay using equations ran into trouble—the math predicted that a sufficiently hot atom would decay infinitely quickly, which isn’t physically possible.

    To fix this, they introduced a theoretical intermediate step into the decay process, involving a never-before-seen particle that blinks into existence for just a trillionth of a trillionth of a second. As if that weren’t far-fetched enough, in order for the math to work, the particle—called the W boson—would need to weigh 10 times as much as the carbon atom that kicked off the process.

    To explain the bizarrely large mass of the W boson, three teams of physicists independently came up with the same idea: a new physical field. Just as your legs feel sluggish and heavy when you wade through deep water, the W boson seems heavy because it travels through what became known as the Higgs field (named after physicist Peter Higgs, who was a member of one of the three teams). The waves kicked up by the motion of this field, by way of a principle known as wave-particle duality, become particles called Higgs bosons.

    Their solution boiled down to this: Radioactive decay requires a heavy W boson, and a heavy W boson requires the Higgs field, and disturbances in the Higgs field produce Higgs bosons. “Explaining” radioactive decay in terms of one undetected field and two undiscovered particles may seem ridiculous. But physicists are conspiracy theorists with a very good track record.

    Forensic physics

    How do you find out if a theoretical particle is real? By the time Cranmer came of age, there was an established procedure. To produce evidence of new particles, you smash old ones together really, really hard. This works because E = mc^2 means energy can be exchanged for matter; in other words, energy is the fungible currency of the subatomic world. Concentrate enough energy in one place and even the most exotic, heavy particles can be made to appear. But they explode almost immediately. The only way to figure out that they were there is to catch and analyze the detritus.

    How particle detectors work

    The innermost layer of a modern detector is made of thin silicon strips, like in a camera. A zooming particle, such as an electron, leaves a track of activated pixels. The track curves slightly, thanks to a magnetic field, and the degree of curvature reveals the electron’s momentum. Next, the electron enters a series of chambers of excitable gas, where it ionizes little trails behind it. An electric field pulls the charged trails over to an array of wire sensors. Finally, the electron enters an iron or steel calorimeter, which slows the particle to a halt, gathering and recording all of its energy.
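
    One piece of that reconstruction lends itself to a worked example: the curvature of a charged particle’s track in a known magnetic field gives its momentum. For a singly charged particle the standard rule of thumb is p in GeV/c of roughly 0.3 times B in tesla times r in meters; the field strength and radii in the Python sketch below are illustrative values, not the ALEPH detector’s actual numbers.

        # Momentum from track curvature (rule of thumb for a singly charged particle):
        #   p [GeV/c] ~= 0.3 * B [tesla] * r [meters]
        # Field and radii below are illustrative, not a specific detector's values.

        def momentum_gev(b_tesla, radius_m):
            return 0.3 * b_tesla * radius_m

        print(momentum_gev(1.5, 1.0))    # a tightly curved 1 m radius track in a 1.5 T field -> ~0.45 GeV/c
        print(momentum_gev(1.5, 10.0))   # a straighter 10 m radius track -> ~4.5 GeV/c, i.e. higher momentum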

    Modern particle accelerators like the LEP and LHC are like high-tech surveillance states. Thousands of electronic sensors, photoreceptors, and gas chambers monitor the collision site. Particle physics has become a forensic science.

    It’s also a messy science. “Figuring out what happened in a collider is like trying to figure out what your dog ate at the park yesterday,” said Jesse Thaler, the MIT physicist who first told me of Cranmer’s quest. “You can find out, but you have to sort through a lot of shit to do it.”

    The situation may be even worse than that. Reasoning backward from the particles that live long enough to detect to the short-lived, undetected ones requires detailed knowledge of each intermediate decay—almost like an exact description of all the chemical reactions in the dog’s gut. Complicating matters further, small changes in the theory you’re working with can affect the whole chain of reasoning, causing big changes in what you conclude really happened.

    The fine-tuning problem

    While the LEP was running, the Standard Model was the theory used to interpret its data. A panoply of particles were made, from the beauty quark to the W boson, but Cranmer and others had found no sign of a Higgs. They started to get worried: If the Higgs wasn’t real, how much of the rest of the Standard Model was also a convenient fiction?

    The model had at least one troubling feature beyond a missing Higgs: For matter to be capable of forming planets and stars, for the fundamental forces to be strong enough to hold things together but weak enough to avoid total collapse, an absurdly lucky cancellation (where two equivalent units of opposite sign combine to make zero) had to occur in some foundational formulas. This degree of what’s known as “fine-tuning” has a snowball’s chance in hell of happening by coincidence, according to physicist Flip Tanedo of the University of California, Irvine. It’s like a snowball never melting because every molecule of scorching hot air whizzing through hell just happens to avoid it by chance.

    So Cranmer was quite excited when he got wind of a new model that could explain both the fine-tuning problem and the hiding Higgs. The Nearly-Minimal Supersymmetric Standard Model has a host of new fundamental particles. The cancellation which seemed so lucky before is explained in this model by new terms corresponding to some of the new particles. Other new particles would interact with the Higgs, giving it a covert way to decay that would have gone unnoticed at the LEP.

    Supersymmetry standard model
    Standard Model of Supersymmetry

    If this new theory was correct, evidence for the Higgs boson was likely just sitting there in the old LEP data. And Cranmer had just the right tools to find it: He had experience with the old collider, and he had two ambitious apprentices. So he sent his graduate student James Beacham to retrieve the data from magnetic tapes sitting in a warehouse outside Geneva, and tasked NYU postdoctoral researcher Itay Yavin with working out the details of the new model. After laboriously deciphering dusty FORTRAN code from the original experiment and loading and cleaning information from the tapes, they brought the data back to life.

    This is what the team hoped to see evidence of in the LEP data:

    First, an electron and positron smash into each other, and their energy converts into the matter of a Higgs boson. The Higgs then decays into two ‘a’ particles—predicted by supersymmetry but never before seen—which fly in opposite directions. After a fraction of a second, each of the two ‘a’ particles decays into two tau particles. Finally each of the four tau particles decays into lighter particles, like electrons and pions, which survive long enough to strike the detector.

    As light particles hurtled through the detector’s many layers, detailed information on their trajectory was gathered (see sidebar). A tau particle would appear in the data as a common origin for a few of those trails. Like a firework shot into the sky, a tau particle can be identified by the brilliant arcs traced by its shrapnel. A Higgs, in turn, would appear as a constellation of light particles indicating the simultaneous explosion of four taus.
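    To make the “common origin” idea concrete, here is a deliberately simplified Python toy (not the ALEPH reconstruction software): it groups charged tracks whose reconstructed origins lie close together and flags groups of one or three tracks as tau candidates, since a decaying tau typically leaves one or three charged tracks. An event with four such candidates would be a four-tau candidate of the kind described above.

```python
import math

# Toy tau-candidate finder -- an illustration only, not the ALEPH reconstruction.
# Each "track" is reduced to its reconstructed origin point (x, y, z), in centimeters.

def cluster_tracks(origins, tol_cm=0.5):
    """Greedily group track origins that all lie within tol_cm of one another."""
    clusters = []
    for point in origins:
        for cluster in clusters:
            if all(math.dist(point, other) < tol_cm for other in cluster):
                cluster.append(point)
                break
        else:
            clusters.append([point])
    return clusters

def count_tau_candidates(origins):
    """A decaying tau usually leaves one or three charged tracks from a common point."""
    return sum(1 for c in cluster_tracks(origins) if len(c) in (1, 3))

# Hypothetical event: eight tracks emerging from four well-separated points, in
# groups of 3, 1, 3 and 1, would yield four tau candidates -- the four-tau signature.
```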

    Unfortunately, there are almost guaranteed to be false positives. For example, if an electron and a positron collide glancingly, they could create a quark with some of their energy. The quark could explode into pions, mimicking the behavior of a tau that came from a Higgs.

    A computer simulation of a Higgs decaying into more elementary particles. The colored tracks show what the detector would see. ALEPH Collaboration/CERN

    To claim that a genuine Higgs had been made, rather than a few impostors, Beacham and Yavin needed to be extremely careful. Electronics sensitive enough to measure a single particle will often misfire, so there are countless decisions about which events to count and which to discard as noise. Confirmation bias makes it too dangerous to set those thresholds while looking at actual data from the LEP, as Beacham and Yavin would have been tempted to shade things in favor of a Higgs discovery. Instead, they decided to build two simulations of the LEP. In one, collisions took place in a universe governed by the Standard Model; in the other, the universe followed the rules of the Nearly-Minimal Supersymmetric Standard Model. After carefully tuning their code on the simulated data, the team concluded that they had enough statistical power to proceed: If the Higgs had been made by the LEP, they would detect significantly more four-tau events than if it had not.
    Moment of theoretical truth

    The team was hopeful and nervous as the moment of truth approached. Yavin had hardly been sleeping, checking and re-checking the code. A bottle of champagne was ready. With one click, the count of four-tau events at the LEP would come onscreen. If the Standard Model was correct, there would be around six, the expected number of false positives. If the Nearly-Minimal Supersymmetric Standard Model was correct, there would be around 30, a big enough excess to conclude that there really had been a Higgs.
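    The statistics behind those two numbers can be sketched with simple Poisson counting, a rough stand-in for the team’s actual statistical machinery, which also had to account for systematic uncertainties. A minimal Python sketch using SciPy, assuming a plain Poisson background of six events, asks how improbable a given observed count would be if the Standard Model alone were at work:

```python
from scipy.stats import poisson

# Plain Poisson counting -- a rough stand-in for the analysis' real statistical
# treatment, which handled systematic uncertainties far more carefully.

BACKGROUND = 6.0  # four-tau events expected from the Standard Model alone

def excess_p_value(observed: int, background: float = BACKGROUND) -> float:
    """Probability of counting `observed` or more events from background alone."""
    return poisson.sf(observed - 1, background)

print(excess_p_value(30))  # roughly 3e-12: an unmistakable excess over background
print(excess_p_value(2))   # roughly 0.98: two events look exactly like background
```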

    “I had done my job,” Cranmer said. “Now it was up to nature.”

    Kyle Cranmer clicks for the Higgs! Also pictured: Itay Yavin (standing), James Beacham (sitting), and Veuve Clicquot (boxed). Courtesy Particle Fever

    There were just two tau quartets.

    “Honey, we didn’t find the Higgs,” Cranmer told his wife on the phone. Yavin collapsed in his chair. Beacham was thrilled the code had worked at all, and drank the champagne anyway.

    If Cranmer’s little team had found the Higgs boson before the multi-billion-dollar LHC and unseated the Standard Model, if the count had been 32 instead of 2, their story would have been front-page news. Instead, it was a typical success for the scientific method: A theory was carefully developed, rigorously tested, and found to be false.

    “With one keystroke, we rendered over a hundred theory papers null and void,” Beacham said.

    Three years later, a huge team of physicists at the LHC announced they had found the Higgs and that it was entirely consistent with the Standard Model. This was certainly a victory—for massive engineering projects, for international collaborations, for the theorists who dreamt up the Higgs field and boson 50 years ago. But the Standard Model probably won’t stand forever. It still has problems with fine-tuning and with integrating general relativity, problems that many physicists hope some new model will resolve. The question is, which one?

    “There are a lot of possibilities for how nature works,” said physicist Matt Strassler, a visiting scholar at Harvard University. “Once you go beyond the Standard Model, there are a gazillion ways to try to fix the fine-tuning problem.” Each proposed model has to be tested against nature, and each test invariably requires months or years of labor to do right, even if you’re cleverly reusing old data. The adrenaline builds until the moment of truth—will this be the new law of physics? But the vast number of possible models means that almost every test ends with the same answer: No. Try again.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 12:18 pm on December 16, 2014 Permalink | Reply
    Tags: , , , , Wired Science   

    From WIRED via James Webb: “The Fastest Stars in the Universe May Approach Light Speed” 

    Wired logo

    Wired

    Via James Webb
    NASA Webb Telescope
    James Webb Space Telescope

    Our sun orbits the Milky Way’s center at an impressive 450,000 mph. Recently, scientists have discovered stars hurtling out of our galaxy at a couple million miles per hour. Could there be stars moving even faster somewhere out there?

    After doing some calculations, Harvard University astrophysicists Avi Loeb and James Guillochon realized that yes, stars could go faster. Much faster. According to their analysis, which they describe in two papers recently posted online, stars can approach light speed. The results are theoretical, so no one will know definitively if this happens until astronomers detect such stellar speedsters—which, Loeb says, will be possible using next-generation telescopes.

    But it’s not just speed these astronomers are after. If these superfast stars are found, they could help astronomers understand the evolution of the universe. In particular, they give scientists another tool to measure how fast the cosmos is expanding. Moreover, Loeb says, if the conditions are right, planets could orbit the stars, tagging along for an intergalactic ride. And if those planets happen to have life, he speculates, such stars could be a way to carry life from one galaxy to another.

    It all started in 2005 when a star was discovered speeding away from our galaxy fast enough to escape the gravitational grasp of the Milky Way. Over the next few years, astronomers would find several more of what became known as hypervelocity stars. Such stars were cast out by the supermassive black hole at the center of the Milky Way. When a pair of stars orbiting each other gets close to the central black hole, which weighs about four million times as much as the sun, the three objects engage in a brief gravitational dance that ejects one of the stars. The other remains in orbit around the black hole.

    Loeb and Guillochon realized that if instead you had two supermassive black holes on the verge of colliding, with a star orbiting around one of the black holes, the gravitational interactions could catapult the star into intergalactic space at speeds reaching hundreds of times those of hypervelocity stars. Papers describing their analysis have been submitted to the Astrophysical Journal and the journal Physical Review Letters.

    The galaxy known as Markarian 739 is actually two galaxies in the midst of merging. The two bright spots at the center are the cores of the two original galaxies, each of which harbors a supermassive black hole. SDSS

    This appears to be the most likely scenario that would produce the fastest stars in the universe, Loeb says. After all, supermassive black holes collide more often than you might think. Nearly all galaxies have supermassive black holes at their centers, and nearly all galaxies were the product of two smaller galaxies merging. When galaxies combine, so do their central black holes.

    Loeb and Guillochon calculated that merging supermassive black holes would eject stars at a wide range of speeds. Only some would reach near light speed, but many of the rest would still be plenty fast. For example, Loeb says, the observable universe could have more than a trillion stars moving at a tenth of light speed, about 67 million miles per hour.

    Because a single, isolated star streaking through intergalactic space would be so faint, only powerful future telescopes like the James Webb Space Telescope, planned for launch in 2018, would be able to detect one. Even then, telescopes would likely see only the stars that have reached our galactic neighborhood. Many of the ejected stars probably formed near the centers of their galaxies and were thrown out soon after their birth, meaning they have been traveling for the vast majority of their lifetimes. A star’s age therefore approximates how long it has been traveling; combining that travel time with its measured speed, astronomers can determine the distance between the star’s home galaxy and our galactic neighborhood.

    If astronomers can find stars that were kicked out of the same galaxy at different times, they can use them to measure the distance to that galaxy at different points in the past. By seeing how the distance has changed over time, astronomers can measure how fast the universe is expanding.
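    A quick back-of-the-envelope check makes the scales involved concrete. The Python sketch below converts a tenth of light speed into miles per hour and estimates how far a star of a given age would have drifted from its home galaxy; the five-billion-year age is an invented example, and cosmic expansion is ignored for simplicity.

```python
# Back-of-the-envelope arithmetic for an ejected star (illustrative numbers only).

C_M_PER_S = 299_792_458     # speed of light in meters per second
MPS_TO_MPH = 2.236936       # meters per second -> miles per hour

speed_fraction = 0.1        # a tenth of light speed
speed_mph = speed_fraction * C_M_PER_S * MPS_TO_MPH
print(f"0.1c is about {speed_mph / 1e6:.0f} million mph")   # ~67 million mph, as quoted above

# If the star has been traveling for essentially its whole life, distance ~ speed x age.
# Hypothetical 5-billion-year-old escapee (cosmic expansion ignored):
age_years = 5e9
distance_ly = speed_fraction * age_years    # 0.1 light-year covered per year of travel
print(f"Distance covered: about {distance_ly / 1e9:.1f} billion light-years")
```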

    These superfast rogue stars could have another use as well. When supermassive black holes smash into each other, they generate ripples in space and time called gravitational waves, which reveal the intimate details of how the black holes coalesced. A space telescope called eLISA, scheduled to launch in 2028, is designed to detect gravitational waves. Because the superfast stars are produced when black holes are just about to merge, they would act as a sort of bat signal pointing eLISA to possible gravitational wave sources.

    The bottom part of this illustration shows the scale of the universe versus time. Specific events are shown, such as the formation of neutral hydrogen 380,000 years after the Big Bang. Prior to this time, the constant interaction between matter (electrons) and light (photons) made the universe opaque. After this time, the photons we now call the CMB started streaming freely. The fluctuations (differences from place to place) in the matter distribution left their imprint on the CMB photons. The density waves appear as temperature and “E-mode” polarization, while the gravitational waves leave a characteristic signature in the CMB polarization: the “B-modes”. Both density and gravitational waves come from quantum fluctuations which were magnified by inflation to be present at the time the CMB photons were emitted.
    National Science Foundation (NASA, JPL, Keck Foundation, Moore Foundation, related) – Funded BICEP2 Program
    http://bicepkeck.org/faq.html
    http://bicepkeck.org/visuals.html

    Cosmic Microwave Background  Planck
    CMB per ESA/Planck

    ESA Planck
    ESA Planck schematic
    ESA/Planck

    The existence of these stars would be one of the clearest signals that two supermassive black holes are on the verge of merging, says astrophysicist Enrico Ramirez-Ruiz of the University of California, Santa Cruz. Although they may be hard to detect, he adds, they will provide a completely novel tool for learning about the universe.

    In about 4 billion years, our own Milky Way Galaxy will crash into the Andromeda Galaxy.

    The Andromeda Galaxy is a spiral galaxy approximately 2.5 million light-years away in the constellation Andromeda. The image also shows Messier Objects 32 and 110, as well as NGC 206 (a bright star cloud in the Andromeda Galaxy) and the star Nu Andromedae. This image was taken using a hydrogen-alpha filter.
    Adam Evans

    The two supermassive black holes at their centers will merge, and stars could be thrown out. Our own sun is a bit too far from the galaxy’s center to get tossed, but one of the ejected stars might harbor a habitable planet. And if humans are still around, Loeb muses, they could potentially hitch a ride on that planet and travel to another galaxy. Who needs warp drive anyway?

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     