Tagged: Wired Science

  • richardmitnick 8:32 pm on September 28, 2015 Permalink | Reply
    Tags: , , , Wired Science   

    From WIRED: “The Other Way A Quantum Computer Could Revive Moore’s Law” 

    Wired logo


    Cade Metz

    D-Wave’s quantum chip. Google

    Google is upgrading its quantum computer. Known as the D-Wave, Google’s machine is making the leap from 512 qubits—the fundamental building blocks of a quantum computer—to more than 1,000 qubits. And according to the company that built the system, this leap doesn’t require a significant increase in power, something that could augur well for the progress of quantum machines.

    Together with NASA and the Universities Space Research Association, or USRA, Google operates its quantum machine at the NASA Ames Research center not far from its Mountain View, California headquarters. Today, D-Wave Systems, the Canadian company that built the machine, said it has agreed to provide regular upgrades to the system—keeping it “state-of-the-art”—for the next seven years. Colin Williams, director of business development and strategic partnerships for D-Wave, calls this “the biggest deal in the company’s history.” The system is also used by defense giant Lockheed Martin, among others.

    Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

    Over the life of Google’s contract, if all goes according to plan, the performance of the system will continue to improve. But there’s another characteristic to consider. Williams says that as D-Wave expands the number of qubits, the amount of power needed to operate the system stays roughly the same. “We can increase performance with constant power consumption,” he says. At a time when today’s computer chip makers are struggling to get more performance out of the same power envelope, the D-Wave goes against the trend.

    The Qubit

    A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

    Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
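    The doubling described above is easy to see in code. A minimal Python sketch (illustrative only—nothing D-Wave or Google actually runs): an n-qubit register spans 2^n classical basis states, and a quantum state holds an amplitude for every one of them at once.

```python
# Illustrative sketch: an n-qubit register spans 2**n classical basis
# states; in superposition, a quantum state holds an amplitude for each.
from itertools import product

def basis_states(n_qubits):
    """Enumerate the classical basis states of an n-qubit register."""
    return ["".join(bits) for bits in product("01", repeat=n_qubits)]

print(basis_states(2))  # ['00', '01', '10', '11'] -- the four values above

# The count doubles with every added qubit:
for n in (1, 2, 10):
    print(n, "qubits ->", len(basis_states(n)), "basis states")
```

    In this state-counting sense, going from 512 to 1,000-plus qubits is an astronomically larger space, which is what the article means by an exponential improvement.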

    D-Wave believes it has found a way around this problem. It released its first machine, spanning 16 qubits, in 2007. Together with NASA, Google started testing the machine when it reached 512 qubits a few years back. Each qubit, D-Wave says, is a superconducting circuit—a tiny loop of flowing current—and these circuits are dropped to extremely low temperatures so that the current flows in both directions at once. The machine then performs calculations using algorithms that, in essence, determine the probability that a collection of circuits will emerge in a particular pattern when the temperature is raised.

    Reversing the Trend

    Some have questioned whether the system truly exhibits quantum properties. But researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

    D-Wave says that most of the power needed to run the system is related to the extreme cooling. The entire system consumes about 15 kilowatts of power, while the quantum chip itself uses a fraction of a microwatt. “Most of the power,” Williams says, “is being used to run the refrigerator.” This means that the company can continue to improve its performance without significantly expanding the power it has to use. At the moment, that’s not hugely important. But in a world where classical computers are approaching their limits, it at least provides some hope that the trend can be reversed.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 11:42 am on September 18, 2015 Permalink | Reply
    Tags: , Environment, Wired Science   

    From wired: “This Tower Purifies a Million Cubic Feet of Air an Hour” 

    Wired logo


    Liz Stinson

    Daan Roosegaarde worked with scientist Bob Ursem and European Nano Solutions to create the Smog Free Tower. This one is in Rotterdam.

    There’s a massive vacuum cleaner in the middle of a Rotterdam park and it’s sucking all the smog out of the air. A decent portion of it, anyway. And it isn’t a vacuum, exactly. It looks nothing like a Dyson or a Hoover. It’s probably more accurate to describe it as the world’s largest air purifier.

    The Smog Free Tower, as it’s called, is a collaboration between Dutch designer Daan Roosegaarde, Delft University of Technology researcher Bob Ursem, and European Nano Solutions, a green tech company in the Netherlands. The metal tower, nearly 23 feet tall, can purify up to 1 million cubic feet of air every hour. To put that in perspective, the Smog Free Tower would need just 10 hours to purify enough air to fill Madison Square Garden. “When this baby is up and running for the day you can clean a small neighborhood,” says Roosegaarde.
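    A quick back-of-the-envelope check of that comparison. The Madison Square Garden air volume used here is an assumption inferred from the article’s own numbers, not a quoted figure:

```python
# Sanity check on the article's Madison Square Garden comparison.
# The arena volume (~10 million cubic feet of air) is an ASSUMPTION
# implied by the article's "10 hours" claim, not a quoted fact.
TOWER_RATE_FT3_PER_HOUR = 1_000_000   # "up to 1 million cubic feet ... every hour"
MSG_VOLUME_FT3 = 10_000_000           # assumed arena air volume

hours = MSG_VOLUME_FT3 / TOWER_RATE_FT3_PER_HOUR
print(f"{hours:.0f} hours")  # matches the article's "just 10 hours"
```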

    It does this by ionizing airborne smog particles. Particles smaller than 10 micrometers in diameter (about the width of a cotton fiber) are tiny enough to inhale and can be harmful to the heart and lungs. Ursem, who has been researching ionization since the early 2000s, says a radial ventilation system at the top of the tower (powered by wind energy) draws in dirty air, which enters a chamber where particles smaller than 15 micrometers are given a positive charge. Like iron shavings drawn to a magnet, the positively charged particles attach themselves to a grounded counter electrode in the chamber. The clean air is then expelled through vents in the lower part of the tower, surrounding the structure in a bubble of clean air. Ursem notes that this process doesn’t produce ozone, as many other ionic air purifiers do, because the particles are charged with a positive voltage rather than a negative one.

    Ursem has used the same technique in hospital purification systems, parking garages, and along roadsides, but the tower is by far the biggest and prettiest application of his technology. Indeed, it’s meant to be a design object as much as a technological innovation. Roosegaarde is known for wacky, socially conscious design projects—he’s the same guy who did the glowing Smart Highway in the Netherlands. He says making the tower beautiful brings widespread attention to a problem typically hidden behind bureaucracy. “I’m tired of design being about chairs, tables, lamps, new cars, and new watches,” he says. “It’s boring, we have enough of this stuff. Let’s focus on the real issues in life.”

    Roosegaarde has been working with Ursem and ENS, the company that fabricated the tower, for two years to bring it into existence, and now that it’s up and running, he says people are intrigued. He just returned from Mumbai where he spoke to city officials about installing a similar tower in a park, and officials in Mexico City, Paris, and Beijing (the smoggy city that inspired the project) also are interested. “We’ve gotten a lot of requests from property developers who want to place it in a few filthy rich neighborhoods of course, and I tend to say no to these right now,” he says. “I think that it should be in a public space.”

    Roosegaarde has plans to take the tower on a “smog-free tour” in the coming year so he can demonstrate the tower’s abilities in cities around the world. It’s a little bit of showmanship that he hopes will garner even more attention for the machine, which he calls a “shrine-like temple of clean air.” Roosegaarde admits that his tower isn’t a final solution for cleaning a city’s air. “The real solution everybody knows,” he says, adding that it’s more systematic than clearing a hole of clean air in the sky. He views the Smog Free tower as an initial step in a bottom-up approach to cleaner air, with citizens acting as the driving force. “How can we create a city where in 10 years these towers aren’t necessary anymore?” he says. “This is the bridge towards the solution.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 10:51 am on August 13, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From WIRED: “The Way We Measure Earthquakes Is Stupid” 

    Wired logo


    Sarah Zhang

    Barisonal/Getty Images

    This weekend, a 3.3-magnitude earthquake rattled San Francisco ever so slightly. The small quake, like so many before it, passed, and San Franciscans went back to conveniently ignoring their seismic reality. Magnitude 3.3 earthquakes are clearly no big deal, and the city survived a 6.9-magnitude earthquake in 1989 mostly fine—so how much bigger will the Big One, at 8.0, be than the 1989 quake?

    Ten times! Or so the smarty-pants among you who understand logarithms may be thinking. But…that’s wrong. On the current logarithmic earthquake scale, a whole number increase, like from 7.0 to 8.0, actually means a 32-fold increase in earthquake energy. Even if you can mentally do that math—and feel smug doing it—the logarithmic scale for earthquakes is terrible for intuitively communicating risk. “It’s arbitrary,” says Lucy Jones, a seismologist with the US Geological Survey. “I’ve never particularly liked it.”

    Just how arbitrary? Oh, let us count the ways.

    First, there’s the matter of starlight, which has no relevance to earthquakes except that Charles Richter was once an amateur astronomer. When Richter and Beno Gutenberg were developing what would become the Richter scale in 1935, they took inspiration from magnitude, the logarithmic measure of the brightness of stars. They defined earthquake magnitude as the logarithm of shaking amplitude recorded on a particular seismograph in southern California.

    Now, logarithms might make sense for stars a million, billion, or gazillion miles away whose brightnesses varied widely—but back on Earth, the rationale is shakier. Understanding the severity of earthquakes is important for millions of people, and the logarithmic scale is hard to grok: 8 seems only marginally larger than 6, but on our logarithmic earthquake scale, it’s roughly a 1,000-fold difference in energy. Seismologists have to unpack it every time they use it with non-experts. “We just undo the logarithm when we try to tell people,” says Thomas Heaton, a seismologist at Caltech.

    A better way to measure earthquakes does exist—at least among scientists. That would be seismic moment, equal to (take a breath) the area of rupture along a fault multiplied by the average displacement multiplied by the rigidity of the earth—which boils down to the amount of energy released in a quake. The Richter scale uses surface shaking amplitude as a proxy for energy, but seismologists can now get at energy more directly and accurately. The moment of the largest earthquake ever recorded came to 2.5 × 10²³ joules.

    A big number but meaningless without context, right? (That biggest earthquake ever was a 9.6 in Chile in 1960.) Seismologists now have a tortured formula (below) to convert seismic moment (Mo) to the familiar old logarithmic magnitude scale (M). That gets us the aptly named moment magnitude scale, which supplanted the Richter scale in popular use in the 1970s. The Richter scale may be obsolete now, but its logarithmic definition of magnitude remains. “Through the years, seismologists have tried to be consistent,” says Heaton. “It’s been confusing ever since.”

    M = (2/3) log10(Mo) − 10.7, with Mo measured in dyne·cm

    That formula explains why going up one unit in magnitude actually means a 32-fold increase in energy. (Thirty-two is approximately equal to 10^(3/2).) But many of us learned, at one point, that a magnitude-6 earthquake is 10 times worse than a 5. Where did this misconception come from? Richter, of course. Richter looked at his data and calculated that a 10-fold increase in shaking amplitude on his instrument correlated with a 32-fold increase in energy released. He literally did it by drawing a line. (See the diagram below.) But Richter was only working on earthquakes in southern California.


    Seismologists now understand that many variables, like the type of soil, affect the intensity of surface shaking from earthquakes. In other words, the relationship between shaking amplitude and earthquake energy from Richter doesn’t hold for all earthquakes, and moment magnitude doesn’t easily translate to earthquake intensity. Seismologists use seismic moment—which, remember, is basically energy released—to compare earthquakes because it does get at the totality of an earthquake rather than the shaking at just one particular place in the ground.
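    The moment-to-magnitude bookkeeping above can be sketched in a few lines of Python, using the conversion formula quoted above. The conventional assumption here is that Mo is expressed in dyne·cm; the article’s joule figure is the N·m equivalent (1 N·m = 10^7 dyne·cm):

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Moment magnitude from seismic moment: M = (2/3) log10(Mo) - 10.7."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

def moment_from_magnitude(m):
    """Invert the formula: seismic moment (dyne*cm) for a given magnitude."""
    return 10 ** (1.5 * (m + 10.7))

# The record 1960 Chile quake: 2.5e23 N*m, i.e. 2.5e30 dyne*cm.
print(round(moment_magnitude(2.5e30), 1))  # 9.6, matching the article

# One whole unit of magnitude is a 10**1.5 ~ 32-fold jump in moment:
print(round(moment_from_magnitude(8.0) / moment_from_magnitude(7.0), 1))  # 31.6
```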

    Back in 2000, Jones wrote an article in Seismological Research Letters suggesting a new earthquake scale. “I hate the Richter scale,” she began the piece. “It feels almost sacrilegious, but I have to say it.” Instead, she proposed a scale based on seismic moment using Akis, named after the inventor of the seismic moment, Keiiti Aki. A 5.0 earthquake might be equivalent to 400 Akis—so that a tiny 2.0 could be measured in milli-Akis and a devastating 9.0 in billions of Akis. It’s more logical than Richter but also more layperson-friendly than seismic moment. “We got a lot of pushback,” says Jones. “Seismologists were saying people understand magnitude. I was like, ‘No they don’t. Have you tried to explain it to people?’”
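    For what it’s worth, the Aki arithmetic can be sketched too. The article gives only one calibration point (a 5.0 quake ≈ 400 Akis), so the code below assumes Akis are simply proportional to seismic moment—a hypothetical reading the piece doesn’t spell out:

```python
# HYPOTHETICAL sketch of Jones's proposed scale. Assumes Akis scale
# linearly with seismic moment, pinned to the article's one data point:
# a magnitude-5.0 quake ~ 400 Akis. (10**1.5 per magnitude unit, the
# same ~32-fold moment scaling discussed above.)

def akis(magnitude, akis_at_5=400.0):
    return akis_at_5 * 10 ** (1.5 * (magnitude - 5.0))

print(akis(2.0) * 1000)  # a tiny 2.0 comes out around a dozen milli-Akis
print(akis(9.0) / 1e9)   # a devastating 9.0, a sizable fraction of a billion Akis
```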

    “This confusion over earthquake magnitude seems to be creating a lot of confusion in the design of buildings,” says Heaton. New and ugly things happen when earthquakes get past 8.0 or 8.5. Tsunamis are one. Another is that tall buildings are more vulnerable to long, slow lurches of the ground that only occur in huge quakes. And since 1906, San Francisco has built a lot more tall buildings downtown. At this point, I mentioned to Heaton that I was in fact speaking to him from an office building in downtown San Francisco. His parting words? “Good luck to you.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 2:16 pm on July 24, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From WIRED: “There’s a Volcano Called Kick ‘Em Jenny, and It’s Angry” 

    Wired logo


    Erik Klemetti

    A bathymetric map of the seafloor off northern Grenada showing the volcanic cluster surrounding Kick ‘Em Jenny. NOAA and Seismic Research Institute, 2003 (published in GVN Bulletin).

    A submarine volcano near the coast of Grenada in the West Indies (Lesser Antilles) looks like it might be headed towards a new eruption. A new swarm of earthquakes has begun in the area of Kick ‘Em Jenny (one of the best volcano names on Earth) and locals have noticed more bubbles in the ocean above the volcano (which reaches within ~180 meters of the surface). The intensity of this degassing and earthquake swarm is enough that the Seismic Research Centre at the University of the West Indies has moved the volcano to “Orange” alert status, meaning they expect an eruption soon. A 5 kilometer (3 mile) exclusion zone has also been set up for boat traffic around the volcano.

    Kick ‘Em Jenny doesn’t pose a threat to Grenada itself even though it’s only 8 kilometers from the island. The biggest hazard is to boats that frequent the area, as the release of volcanic gases and debris into the water could heat up the water and make it tumultuous. In 1939, the volcano also produced an eruption plume that breached the surface of the ocean, so there is a small chance that any new eruption could do the same. However, eruptions since 1939, including the most recent in 2001, have been minor and had no surface expression — think of something like the 2011 eruptions at El Hierro in the Canary Islands.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:20 pm on April 27, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From WIRED: “Turns Out Satellites Work Great for Mapping Earthquakes” 

    Wired logo


    Satellite radar image of the magnitude 6.0 South Napa earthquake. European Space Agency

    The Nepal earthquake on Saturday devastated the region and killed over 2,500 people, with casualties mounting across four countries. The first 24 hours of a disaster are the most important, and first responders scramble to get as much information about the energy and geological effects of earthquakes as they can. Seismometers can help illustrate the location and magnitude of earthquakes around the world, but for more precise detail, you need to look at three-dimensional models of the ground’s physical displacement.

    The easiest way to characterize that moving and shaking is with GPS and satellite data, together called geodetic data. That information is already used by earthquake researchers and geologists around the world to study the earth’s tectonic plate movements—long-term trends that establish themselves over years.

    The tectonic plates of the world were mapped in the second half of the 20th century.

    But now, researchers at the University of Iowa and the U.S. Geological Survey (USGS) have shown a faster way to use geodetic data to assess fault lines, turning over reports in as little as a day to help guide rapid responses to catastrophic quakes.

    A radar interferogram of the August 2014 South Napa earthquake. A single cycle of color represents about a half inch of surface displacement. Jet Propulsion Laboratory

    Normally, earthquake disaster aid and emergency response requires detailed information about surface movements: If responders know how much ground is displaced, they’ll know better what kind of infrastructure damage to expect, or what areas pose the greatest risk to citizens. Yet emergency response agencies don’t use geodetic data immediately, choosing instead to wait several days or even weeks before finally processing the data, says University of Iowa geologist William Barnhart. By then, the damage has been done and crews are already on the ground, with relief efforts well underway.

    The new results are evidence that first responders can get satellite data fast enough to inform how they should respond. Barnhart and his team used geodetic data to measure small deformations in the surface caused by a 6.0-magnitude quake that hit Napa Valley in August 2014 (the biggest the Bay Area had seen in 25 years). By analyzing those measurements, the geologists determined how much the ground moved with relation to the fault plane, which helps describe the exact location, orientation, and dimensions of the entire fault.

    A 3D slip map of the Napa quake generated from GPS surface displacements. Jet Propulsion Laboratory

    Then they created the Technicolor map above, showing just how much the ground shifted. In this so-called interferogram of the Napa earthquake epicenter, the cycles of color represent vertical ground displacement, where every full cycle indicates 6 centimeters (e.g. between every green band is 6 cm of vertical ground).
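    Reading fringes off such a map is simple counting. A toy sketch (the fringe count is invented for illustration; the centimeters-per-cycle figure is the article’s):

```python
# Each full color cycle (fringe) on an interferogram represents a fixed
# ground displacement -- about 6 cm per cycle on this map, per the article.
CM_PER_FRINGE = 6.0

def displacement_cm(fringe_count):
    """Vertical displacement implied by counting full color cycles."""
    return fringe_count * CM_PER_FRINGE

# A point lying three full color cycles from an undeformed area:
print(displacement_cm(3), "cm")  # 18.0 cm
```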

    According to Barnhart, this is the first demonstration of geodetic data being acquired and analyzed the same day as an earthquake. John Langbein, a geologist at the USGS, finds the results very encouraging, and hopes to see geodetic data used regularly as a tool to make earthquake responses faster and more efficient.

    Barnhart is quick to point out that this method is most useful for moderate earthquakes (between magnitudes of 5.5 and 7.0). Although the Nepal earthquake had a magnitude of 7.8, over 35 aftershocks continued to rock the region, including one as high as 6.7 on Sunday. The earthquake itself flattened broad swaths of the capital city of Kathmandu, and caused avalanches across the Himalayan mountains (including Mount Everest), killing and stranding many climbers. But the aftershocks are stymieing relief efforts, paralyzing citizens with immobilizing fear, and creating new avalanches in nearby mountains.

    It’s also worth remembering that the 2010 earthquake that devastated Haiti—and killed about 316,000 people—had a magnitude of 7.0. Most areas of the world, especially developing nations, aren’t equipped to withstand even small tremors in the earth. It’s those places that are also likely to have fewer seismometers, making the satellite information even more helpful.

    As the situation in Nepal moves forward, the aftermath might hopefully speed up plans to make geodetic data available just hours after an earthquake occurs. Satellite systems could be integral in allowing first responders to move swiftly in the face of unpredictable, unpreventable events.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:58 am on January 27, 2015 Permalink | Reply
    Tags: , , , Wired Science   

    From Wired: “Why We’re Looking for Alien Life on Moons, Not Just Planets” 

    Wired logo


    Marcus Woo

    An artist’s depiction of what a moon around another planet may look like. JPL-Caltech/NASA

    Think “moon” and you probably envision a desolate, cratered landscape, maybe with an American flag and some old astronaut footprints. Earth’s moon is no place for living things. But that isn’t necessarily true for every moon. Whirling around Saturn, Enceladus spits out geysers of water from an underground ocean. Around Jupiter, Europa has a salty, subsurface sea, and back at Saturn, Titan has lakes of ethane and methane. A handful of the roughly 150 moons in the solar system have atmospheres, organic compounds, ice, and maybe even liquid water. They all seem like places where something could live—albeit something weird.




    So now that the Kepler space telescope has found more than 1,000 planets—data that suggest the Milky Way galaxy could contain a hundred billion worlds—it makes sense to some alien-hunters to concentrate not on them but on their moons.

    NASA Kepler Telescope

    The odds of finding life look a lot better when you count the moons of these so-called exoplanets—multiply that hundred billion by 150 and you get a lot of places to look for ET. “Because there are so many more moons than planets, if life can get started on moons, then that’s going to be a lot of lively moons,” says Seth Shostak, an astronomer at the SETI Institute.

    Even better, more of those moons might be in the habitable zone, the region around a star where liquid water can exist. That’s one reason Harvard astronomer David Kipping got interested in exomoons. He says about 1.7 percent of all stars similar to the sun have a rocky planet in their habitable zones. But if you’re talking about planets made out of gas, like Saturn and Jupiter, that number goes up to 9.2 percent. Gaseous planets don’t have the solid surfaces that astronomers think life needs, but their moons might.

    So far, no one has found a moon outside the solar system. But people like Kipping are looking hard. He leads a project called the Hunt for Exomoons with Kepler, the only survey project dedicated to finding moons in other planetary systems. The team has looked at 55 systems, and this year they plan to add 300 more. “It’s going to be a very big year for us,” Kipping says.

    Finding moons isn’t easy. Kepler was designed to find planets—the telescope watches for dips in a star’s brightness, recorded over time in what’s called a light curve, when a planet passes in front of its star. But if a moon accompanies that planet, it can dim the starlight a little further. A moon’s gravitational tug also causes the planet to wobble, a subtle motion that scientists can measure.
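    The dip the transit method looks for is, to first order, just a ratio of areas. A toy sketch (radii are in units of the stellar radius and are invented for illustration—this is not Kipping’s actual pipeline):

```python
# Toy transit model: the fractional dip in a light curve is roughly the
# occulting body's projected disk area divided by the star's disk area.
def transit_depth(radius_ratio):
    """Fractional flux lost for a body with radius = radius_ratio * R_star."""
    return radius_ratio ** 2

planet = transit_depth(0.10)   # Jupiter-sized planet around a Sun-like star
moon = transit_depth(0.009)    # roughly Earth-sized moon

print(f"planet alone:  {planet:.4%}")        # ~1% dip
print(f"planet + moon: {planet + moon:.4%}")  # slightly deeper dip
```

    That extra sliver of depth, plus the wobble, is the kind of signal the team hunts for in the noise.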

    In their search, Kipping’s team sifts through more than 4,000 potential planets in Kepler’s database, identifying 400 that have the best chances of hosting a detectable moon. They then use a supercomputer to simulate how a hypothetical moon of every possible size and orientation would orbit each of the 400 planets. The computer simulations produce hypothetical light curves that the astronomers can then compare to the real Kepler data. The real question, Kipping says, isn’t whether moons exist—he’s pretty sure they do—but how big they are. If the galaxy is filled with big moons about the same size as Earth or larger, then the researchers might find a dozen such moons in the Kepler data. But if it turns out that the universe doesn’t make moons that big, and they’re as small as the moons in our solar system, then the chances of detecting a moon drop.

    According to astronomer Gregory Laughlin of the University of California, Santa Cruz, the latter case may be more likely. “My gut feeling is that because the moon formation process seems so robust in our solar system, I would expect a similar thing is going on in an exoplanetary system,” he says. Which means it’ll be tough for Kipping’s team to find anything, even though they’re getting better at detecting the teeny ones—in one case, down to slightly less than twice the mass of the solar system’s largest moon, Ganymede.


    Whether anything can live on those moons is a whole other story. Even if astronomers eventually detect a moon, determining whether it’s habitable (with an atmosphere, water, and organic compounds)—let alone actually inhabited—would be extremely difficult. The starlight reflected off the planet would be overwhelming. Current and near-future telescopes won’t be able to discern much of anything in detail at all—which is why some researchers aren’t optimistic about Kipping’s ideas. “I just don’t see any great path to characterize the moons,” says Jonathan Fortney, an astronomer at UC Santa Cruz.

    Even Kipping acknowledges that it’s impossible to place any odds on whether he’ll actually find an exomoon. Still, thanks to improvements in detecting smaller moons and the 300 additional planets to analyze, Kipping says he’s optimistic. “It would be kind of surprising if we don’t find anything at all,” he says.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 5:18 pm on January 23, 2015 Permalink | Reply
    Tags: , , Wired Science   

    From Wired: “How Three Guys With $10K and Decades-Old Data Almost Found the Higgs Boson First” An Absolutely Great Story 

    Wired logo


    The Large Electron-Positron collider’s ALEPH detector was disassembled in 2001 to make room for the Large Hadron Collider. ALEPH collaboration/CERN

    On a fall morning in 2009, a team of three young physicists huddled around a computer screen in a small office overlooking Broadway in New York. They were dressed for success—even the graduate student’s shirt had buttons—and a bottle of champagne was at the ready. With a click of the mouse, they hoped to unmask a fundamental particle that had eluded physicists for decades: the Higgs boson.

    Of course, these men weren’t the only physicists in pursuit of the Higgs boson. In Geneva, a team of hundreds of physicists with an $8 billion machine called the Large Hadron Collider, and the world’s attention, was also in the hunt. But shortly after starting up for the first time, the LHC malfunctioned and went offline for repairs, opening a window the three guys at NYU hoped to take advantage of.

    The key to their strategy was a particle collider that had been dismantled in 2001 to make room for the more powerful LHC. For $10,000 in computer time, they would attempt to show that the Large Electron-Positron collider had been making dozens of Higgs bosons without anybody noticing.

    “Two possible worlds stood before us then,” said physicist Kyle Cranmer, the leader of the NYU group. “In one, we discover the Higgs and a physics fairy tale comes true. Maybe the three of us share a Nobel prize. In the other, the Higgs is still hiding, and instead of beating the LHC, we have to go back to working on the LHC.”

    Cranmer had spent years working on both colliders, beginning as a graduate student at the Large Electron-Positron collider. He had been part of a 100-person statistical team that combed through terabytes of LEP data for evidence of new particles. “Everyone thought we had been very thorough,” he said. “But our worldview was colored by the ideas that were popular at the time.” A few years later, he realized the old data might look very different through the lens of a new theory.

    So, like detectives poring through evidence in a cold case, the researchers aimed to prove that the Higgs, and some supersymmetric partners in crime, had been at the scene in disguise.

    Dreaming up the Higgs

    The Higgs boson is now viewed as an essential component of the Standard Model of physics, a theory that describes all known particles and their interactions. But back in the 1960s, before the Standard Model had coalesced, the Higgs was part of a theoretical fix for a radioactive problem.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Here’s the predicament they faced. Sometimes an atom of one element will suddenly transform into an atom of a different element in a process called radioactive decay. For example, an atom of carbon can decay into an atom of nitrogen by emitting two light subatomic particles. (The carbon dating of fossils is a clever use of this ubiquitous process.) Physicists trying to describe the decay using equations ran into trouble—the math predicted that a sufficiently hot atom would decay infinitely quickly, which isn’t physically possible.

    To fix this, they introduced a theoretical intermediate step into the decay process, involving a never-before-seen particle that blinks into existence for just a trillionth of a trillionth of a second. As if that weren’t far-fetched enough, in order for the math to work, the particle—called the W boson—would need to weigh 10 times as much as the carbon atom that kicked off the process.

    To explain the bizarrely large mass of the W boson, three teams of physicists independently came up with the same idea: a new physical field. Just as your legs feel sluggish and heavy when you wade through deep water, the W boson seems heavy because it travels through what became known as the Higgs field (named after physicist Peter Higgs, who was a member of one of the three teams). The waves kicked up by the motion of this field, by way of a principle known as wave-particle duality, become particles called Higgs bosons.

    Their solution boiled down to this: Radioactive decay requires a heavy W boson, and a heavy W boson requires the Higgs field, and disturbances in the Higgs field produce Higgs bosons. “Explaining” radioactive decay in terms of one undetected field and two undiscovered particles may seem ridiculous. But physicists are conspiracy theorists with a very good track record.

    Forensic physics

    How do you find out if a theoretical particle is real? By the time Cranmer came of age, there was an established procedure. To produce evidence of new particles, you smash old ones together really, really hard. This works because E = mc2 means energy can be exchanged for matter; in other words, energy is the fungible currency of the subatomic world. Concentrate enough energy in one place and even the most exotic, heavy particles can be made to appear. But they explode almost immediately. The only way to figure out they were there is to catch and analyze the detritus.
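    The energy-for-matter exchange is just E = mc² run in reverse. A minimal sketch, with an illustrative mass of my own choosing rather than any figure from the article:

```python
C = 299_792_458.0  # speed of light, m/s

def rest_energy_joules(mass_kg: float) -> float:
    """E = m c^2: the energy that must be concentrated to create the mass."""
    return mass_kg * C**2

# Even a single gram of matter corresponds to an enormous energy budget:
print(rest_energy_joules(1e-3))  # ≈ 9.0e13 joules
```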

    How particle detectors work

    The innermost layer of a modern detector is made of thin silicon strips, like the sensor in a camera. A zooming particle, such as an electron, leaves a track of activated pixels. The track curves slightly, thanks to a magnetic field, and the degree of curvature reveals the electron’s momentum. Next the electron enters a series of chambers of excitable gas, where it ionizes little trails behind it. An electric field pulls the charged trails over to an array of wire sensors. Finally, the electron enters an iron or steel calorimeter, which slows the particle to a halt, gathering and recording all of its energy.
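    The curvature-to-momentum step can be sketched numerically. For a singly charged particle, a standard rule of thumb gives transverse momentum in GeV/c as 0.3 × B × r, with the field B in tesla and the track radius r in meters; the specific values below are illustrative, not taken from any particular detector:

```python
def transverse_momentum_gev(b_field_tesla: float, radius_m: float) -> float:
    """Transverse momentum (GeV/c) of a unit-charge particle from its
    track radius in a uniform magnetic field: p_T ≈ 0.3 * B * r."""
    return 0.3 * b_field_tesla * radius_m

# An electron bending with a 1.5 m radius of curvature in a 1.5 T field:
print(transverse_momentum_gev(1.5, 1.5))  # 0.675 GeV/c
```

The tighter the curl, the lower the momentum — which is why the fastest particles leave nearly straight tracks.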

    Modern particle accelerators like the LEP and LHC are like high-tech surveillance states. Thousands of electronic sensors, photoreceptors, and gas chambers monitor the collision site. Particle physics has become a forensic science.

    It’s also a messy science. “Figuring out what happened in a collider is like trying to figure out what your dog ate at the park yesterday,” said Jesse Thaler, the MIT physicist who first told me of Cranmer’s quest. “You can find out, but you have to sort through a lot of shit to do it.”

    The situation may be even worse than that. Reasoning backward from the particles that live long enough to detect to the short-lived, undetected ones requires detailed knowledge of each intermediate decay—almost like an exact description of all the chemical reactions in the dog’s gut. Complicating matters further, small changes in the theory you’re working with can affect the whole chain of reasoning, causing big changes in what you conclude really happened.

    The fine-tuning problem

    While the LEP was running, the Standard Model was the theory used to interpret its data. A panoply of particles was produced, from the beauty quark to the W boson, but Cranmer and others had found no sign of a Higgs. They started to get worried: If the Higgs wasn’t real, how much of the rest of the Standard Model was also a convenient fiction?

    The model had at least one troubling feature beyond a missing Higgs: For matter to be capable of forming planets and stars, for the fundamental forces to be strong enough to hold things together but weak enough to avoid total collapse, an absurdly lucky cancellation (where two equivalent units of opposite sign combine to make zero) had to occur in some foundational formulas. This degree of what’s known as “fine-tuning” has a snowball’s chance in hell of happening by coincidence, according to physicist Flip Tanedo of the University of California, Irvine. It’s like a snowball never melting because every molecule of scorching hot air whizzing through hell just happens to avoid it by chance.

    So Cranmer was quite excited when he got wind of a new model that could explain both the fine-tuning problem and the hiding Higgs. The Nearly-Minimal Supersymmetric Standard Model has a host of new fundamental particles. The cancellation which seemed so lucky before is explained in this model by new terms corresponding to some of the new particles. Other new particles would interact with the Higgs, giving it a covert way to decay that would have gone unnoticed at the LEP.

    Standard Model of Supersymmetry

    If this new theory was correct, evidence for the Higgs boson was likely just sitting there in the old LEP data. And Cranmer had just the right tools to find it: He had experience with the old collider, and he had two ambitious apprentices. So he sent his graduate student James Beacham to retrieve the data from magnetic tapes sitting in a warehouse outside Geneva, and tasked NYU postdoctoral researcher Itay Yavin with working out the details of the new model. After laboriously deciphering dusty FORTRAN code from the original experiment and loading and cleaning information from the tapes, they brought the data back to life.

    This is what the team hoped to see evidence of in the LEP data:

    First, an electron and positron smash into each other, and their energy converts into the matter of a Higgs boson. The Higgs then decays into two ‘a’ particles—predicted by supersymmetry but never before seen—which fly in opposite directions. After a fraction of a second, each of the two ‘a’ particles decays into two tau particles. Finally each of the four tau particles decays into lighter particles, like electrons and pions, which survive long enough to strike the detector.

    As light particles hurtled through the detector’s many layers, detailed information on their trajectory was gathered (see sidebar). A tau particle would appear in the data as a common origin for a few of those trails. Like a firework shot into the sky, a tau particle can be identified by the brilliant arcs traced by its shrapnel. A Higgs, in turn, would appear as a constellation of light particles indicating the simultaneous explosion of four taus.

    Unfortunately, there are almost guaranteed to be false positives. For example, if an electron and a positron collide glancingly, they could create a quark with some of their energy. The quark could explode into pions, mimicking the behavior of a tau that came from a Higgs.

    A computer simulation of a Higgs decaying into more elementary particles. The colored tracks show what the detector would see. ALEPH Collaboration/CERN

    To claim that a genuine Higgs had been made, rather than a few impostors, Beacham and Yavin needed to be extremely careful. Electronics sensitive enough to measure a single particle will often misfire, so there are countless decisions about which events to count and which to discard as noise. Confirmation bias made it too dangerous to set those thresholds while looking at actual data from the LEP, as Beacham and Yavin would have been tempted to shade things in favor of a Higgs discovery. Instead, they decided to build two simulations of the LEP. In one, collisions took place in a universe governed by the Standard Model; in the other, the universe followed the rules of the Nearly-Minimal Supersymmetric Standard Model. After carefully tuning their code on the simulated data, the team concluded that they had enough power to proceed: If the Higgs had been made by the LEP, they would detect significantly more four-tau events than if it had not.

    Moment of theoretical truth

    The team was hopeful and nervous as the moment of truth approached. Yavin had hardly been sleeping, checking and re-checking the code. A bottle of champagne was ready. With one click, the count of four-tau events at the LEP would come onscreen. If the Standard Model was correct, there would be around six, an expected number of false positives. If the Nearly-Minimal Supersymmetric Standard Model was correct, there would be around 30, a big enough excess to conclude that there really had been a Higgs.
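    The statistical logic behind those two numbers can be sketched with simple Poisson counting (the team’s actual analysis was more sophisticated; the sketch below only uses the expected background of about six false positives quoted above):

```python
import math

def poisson_at_least(k: int, mu: float) -> float:
    """P(N >= k) when N follows a Poisson distribution with mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

background = 6.0  # expected four-tau false positives under the Standard Model

# Seeing 30 or more events from background alone would be wildly improbable:
print(poisson_at_least(30, background))
# A count as low as 2, on the other hand, is entirely consistent with no Higgs:
print(poisson_at_least(2, background))  # ≈ 0.98
```

A count of 30 would therefore have been decisive evidence for the supersymmetric model, while a single-digit count says nothing beyond the Standard Model.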

    “I had done my job,” Cranmer said. “Now it was up to nature.”

    Kyle Cranmer clicks for the Higgs! Also pictured: Itay Yavin (standing), James Beacham (sitting), and Veuve Clicquot (boxed). Courtesy Particle Fever

    There were just two tau quartets.

    “Honey, we didn’t find the Higgs,” Cranmer told his wife on the phone. Yavin collapsed in his chair. Beacham was thrilled the code had worked at all, and drank the champagne anyway.

    If Cranmer’s little team had found the Higgs boson before the multi-billion-dollar LHC and unseated the Standard Model, if the count had been 32 instead of 2, their story would have been front-page news. Instead, it was a typical success for the scientific method: A theory was carefully developed, rigorously tested, and found to be false.

    “With one keystroke, we rendered over a hundred theory papers null and void,” Beacham said.

    Three years later, a huge team of physicists at the LHC announced they had found the Higgs and that it was entirely consistent with the Standard Model. This was certainly a victory—for massive engineering projects, for international collaborations, for the theorists who dreamt up the Higgs field and boson 50 years ago. But the Standard Model probably won’t stand forever. It still has problems with fine-tuning and with integrating general relativity, problems that many physicists hope some new model will resolve. The question is, which one?

    “There are a lot of possibilities for how nature works,” said physicist Matt Strassler, a visiting scholar at Harvard University. “Once you go beyond the Standard Model, there are a gazillion ways to try to fix the fine-tuning problem.” Each proposed model has to be tested against nature, and each test invariably requires months or years of labor to do right, even if you’re cleverly reusing old data. The adrenaline builds until the moment of truth—will this be the new law of physics? But the vast number of possible models means that almost every test ends with the same answer: No. Try again.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 12:18 pm on December 16, 2014 Permalink | Reply
    Tags: Wired Science

    From WIRED via James Webb: “The Fastest Stars in the Universe May Approach Light Speed” 

    Wired logo


    Via James Webb
    NASA James Webb Space Telescope

    Our sun orbits the Milky Way’s center at an impressive 450,000 mph. Recently, scientists have discovered stars hurtling out of our galaxy at a couple million miles per hour. Could there be stars moving even faster somewhere out there?

    After doing some calculations, Harvard University astrophysicists Avi Loeb and James Guillochon realized that yes, stars could go faster. Much faster. According to their analysis, which they describe in two papers recently posted online, stars can approach light speed. The results are theoretical, so no one will know definitively if this happens until astronomers detect such stellar speedsters—which, Loeb says, will be possible using next-generation telescopes.

    But it’s not just speed these astronomers are after. If these superfast stars are found, they could help astronomers understand the evolution of the universe. In particular, they give scientists another tool to measure how fast the cosmos is expanding. Moreover, Loeb says, if the conditions are right, planets could orbit the stars, tagging along for an intergalactic ride. And if those planets happen to have life, he speculates, such stars could be a way to carry life from one galaxy to another.

    It all started in 2005 when a star was discovered speeding away from our galaxy fast enough to escape the gravitational grasp of the Milky Way. Over the next few years, astronomers would find several more of what became known as hypervelocity stars. Such stars were cast out by the supermassive black hole at the center of the Milky Way. When a pair of stars orbiting each other gets close to the central black hole, which weighs about four million times as much as the sun, the three objects engage in a brief gravitational dance that ejects one of the stars. The other remains in orbit around the black hole.

    Loeb and Guillochon realized that if instead you had two supermassive black holes on the verge of colliding, with a star orbiting around one of the black holes, the gravitational interactions could catapult the star into intergalactic space at speeds reaching hundreds of times those of hypervelocity stars. Papers describing their analysis have been submitted to the Astrophysical Journal and the journal Physical Review Letters.

    The galaxy known as Markarian 739 is actually two galaxies in the midst of merging. The two bright spots at the center are the cores of the two original galaxies, each of which harbors a supermassive black hole. SDSS

    This appears to be the most likely scenario that would produce the fastest stars in the universe, Loeb says. After all, supermassive black holes collide more often than you might think. Nearly all galaxies have supermassive black holes at their centers, and nearly all galaxies were the product of two smaller galaxies merging. When galaxies combine, so do their central black holes.

    Loeb and Guillochon calculated that merging supermassive black holes would eject stars at a wide range of speeds. Only some would reach near light speed, but many of the rest would still be plenty fast. For example, Loeb says, the observable universe could have more than a trillion stars moving at a tenth of light speed, about 67 million miles per hour.

    Because a single, isolated star streaking through intergalactic space would be so faint, only powerful future telescopes like the James Webb Space Telescope, planned for launch in 2018, would be able to detect one. Even then, telescopes would likely see only the stars that have reached our galactic neighborhood. Many of the ejected stars probably formed near the centers of their galaxies and were thrown out soon after their birth, which means they would have been traveling for the vast majority of their lifetimes. A star’s age therefore approximates how long it has been traveling, and combining travel time with measured speed gives the distance between the star’s home galaxy and our galactic neighborhood.
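    The arithmetic is simple enough to sketch. The billion-year travel time below is a hypothetical figure for illustration, not one from the article:

```python
C_MILES_PER_SEC = 186_282          # speed of light, miles per second
C_MPH = C_MILES_PER_SEC * 3600     # ≈ 6.7e8 miles per hour

# A star at a tenth of light speed, as in the estimate above:
speed_mph = 0.1 * C_MPH
print(round(speed_mph / 1e6))      # 67 (million mph), matching the quoted figure

# If the star's age says it has traveled for, say, 1 billion years,
# its distance from home in light-years is just speed (as a fraction
# of c) times travel time in years:
travel_years = 1e9
distance_ly = 0.1 * travel_years
print(distance_ly)                 # 1e8 light-years
```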

    If astronomers can find stars that were kicked out of the same galaxy at different times, they can use them to measure the distance to that galaxy at different points in the past. By seeing how the distance has changed over time, astronomers can measure how fast the universe is expanding.

    These superfast rogue stars could have another use as well. When supermassive black holes smash into each other, they generate ripples in space and time called gravitational waves, which reveal the intimate details of how the black holes coalesced. A space telescope called eLISA, scheduled to launch in 2028, is designed to detect gravitational waves. Because the superfast stars are produced when black holes are just about to merge, they would act as a sort of bat signal pointing eLISA to possible gravitational wave sources.

    The bottom part of this illustration shows the scale of the universe versus time. Specific events are shown, such as the formation of neutral hydrogen 380,000 years after the big bang. Prior to this time, the constant interaction between matter (electrons) and light (photons) made the universe opaque. After this time, the photons we now call the CMB started streaming freely. The fluctuations (differences from place to place) in the matter distribution left their imprint on the CMB photons. The density waves appear as temperature and “E-mode” polarization. The gravitational waves leave a characteristic signature in the CMB polarization: the “B-modes”. Both density and gravitational waves come from quantum fluctuations magnified by inflation to be present when the CMB photons were emitted.
    National Science Foundation (NASA, JPL, Keck Foundation, Moore Foundation, and related)–funded BICEP2 program

    CMB map per ESA/Planck

    ESA Planck spacecraft schematic

    The existence of these stars would be one of the clearest signals that two supermassive black holes are on the verge of merging, says astrophysicist Enrico Ramirez-Ruiz of the University of California, Santa Cruz. Although they may be hard to detect, he adds, they will provide a completely novel tool for learning about the universe.

    In about 4 billion years, our own Milky Way Galaxy will crash into the Andromeda Galaxy.

    The Andromeda Galaxy is a spiral galaxy approximately 2.5 million light-years away in the constellation Andromeda. The image also shows Messier Objects 32 and 110, as well as NGC 206 (a bright star cloud in the Andromeda Galaxy) and the star Nu Andromedae. This image was taken using a hydrogen-alpha filter.
    Adam Evans

    The two supermassive black holes at their centers will merge, and stars could be thrown out. Our own sun is a bit too far from the galaxy’s center to get tossed, but one of the ejected stars might harbor a habitable planet. And if humans are still around, Loeb muses, they could potentially hitch a ride on that planet and travel to another galaxy. Who needs warp drive anyway?

    See the full article here.


  • richardmitnick 11:46 am on November 20, 2014 Permalink | Reply
    Tags: Wired Science

    From WIRED: “War of the Worlds” 

    Wired logo


    Lee Billings

    Two teams of astronomers may have found the first Earth-like planet outside our solar system. So who truly discovered Gliese 667Cc?

    Artist’s impression of Gliese 667 Cb with the Gliese 667 A/B binary in the background

    No one knows what the planet Gliese 667Cc looks like. We know that it is about 22 light-years from Earth, a journey of lifetimes upon lifetimes. But no one can say whether it is a world like ours, with oceans and life, cities and single-malt Scotch. Only a hint of a to-and-fro oscillation in the star it orbits, detectable by Earth’s most sensitive telescopes and spectrographs, lets astronomers say the planet exists at all. The planet is bigger than our world, perhaps made of rocks instead of gas, and within its star’s “habitable zone”—at a Goldilocks distance that ensures enough starlight to make liquid water possible but not so much as to nuke the planet clean.

    That’s enough to fill the scientists who hunt for worlds outside our own solar system—so-called exoplanets—with wonder. Gliese 667Cc is, if not a sibling to our world, at least a cousin out there amid the stars. No one knows if it is a place we humans could someday live, breathe, and watch triple sunsets. No one knows whether barely imagined natives are right now pointing their most sensitive and far-seeing technology at Earth, wondering the same things. Yet regardless, to be the person who found Gliese 667Cc is to be the person who changes the quest for life beyond our world, to be remembered as long as humans exist to remember—by the light of the sun or a distant, unknown star.

    Which is a problem. Because another thing no one knows about Gliese 667Cc is who should get credit for discovering it.

    Gliese 667Cc is at the center of an epic controversy in astronomy—a fight over the validity of data, the nature of scientific discovery, and the ever-important question of who got there first.

    In late 1995 Swiss astronomer Michel Mayor and his student Didier Queloz found 51 Pegasi b, the first known exoplanet orbiting a sunlike star. It was orbiting far too close to its sun to allow the formation of water, but the discovery made Mayor’s European team world famous anyway.

    Soon, though, they lost their lead in the planet-hunting race to a pair of American researchers, Geoff Marcy and Paul Butler. The two men had been looking for exoplanets for almost a decade; they bagged their first two worlds a couple of months after Mayor’s announcement.

    The two teams evolved into fiercely competitive dynasties, fighting to have the most—and most tantalizing—worlds to their names. Their rivalry was good for science; within a decade, each had found on the order of a hundred planets around a wide variety of stars. Soon the hunt narrowed to a bigger prize. The teams went searching for smaller, rocky planets they could crown “Earth-like.”

    The spectral fluctuations of a star with an exoplanet create a sine wave.

    Most planet hunters aren’t looking for exoplanets, per se. Those worlds are too small and dim to easily see. They’re looking instead for telltale shifts in the light of a star, “wobbles” in its spectral identity caused by the gravitational pull of an unseen orbiting exoplanet. When that force tugs a star toward Earth, the Doppler effect ever so slightly compresses the waves of light it emits, shifting them toward the blue end of the spectrum. When the star moves away from Earth, its waves of starlight stretch to reach us, shifting toward the red. You can’t see those shifts with the naked eye. Only a spectrograph can, and the more stable and precise it is, the smaller the wobbles—and planets—you can find.
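    The shift-to-velocity conversion is the non-relativistic Doppler formula, v = c·Δλ/λ. A minimal sketch, using an illustrative 500 nm spectral line:

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(lambda_rest_nm: float, lambda_observed_nm: float) -> float:
    """Non-relativistic radial velocity (m/s) from a spectral line shift.
    Positive = redshift (star receding); negative = blueshift (approaching)."""
    return C * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# A 1 m/s wobble displaces a 500 nm line by only ~1.7e-6 nm —
# a fractional shift of one part in 300 million:
shift_nm = 500.0 * (1.0 / C)
print(radial_velocity(500.0, 500.0 + shift_nm))  # ≈ 1.0 m/s
```

Recovering that fractional pixel shift, night after night, is the whole game of precision radial-velocity work.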

    By late 2003 the European team had a very precise instrument, the [ESO] High Accuracy Radial velocity Planet Searcher, or Harps. Mounted to a 3.6-meter telescope on a mountaintop in Chile, Harps could detect wobbles of less than a meter per second. (Earth moves the sun just a tenth that amount.) The Americans had to make do with an older instrument called the [Keck] High Resolution Echelle Spectrometer, or Hires—less precise but paired with a more powerful telescope.

    ESO HARPS at the ESO La Silla 3.6m telescope

    HIRES at Keck Observatory

    As the two teams continued to fight for preeminence, trouble was brewing among the Americans. Marcy, a natural showman as well as a brilliant scientist, regularly appeared on magazine covers, newspaper front pages, and even David Letterman’s late-night show. His far more taciturn partner, Butler, preferred the gritty tasks of refining data pipelines and calibration techniques. Having devoted years of their lives to the planet-hunting cause, Butler and another member of the team, Marcy’s PhD adviser, Steve Vogt (the mastermind behind Hires), began to feel marginalized and diminished by Marcy’s growing fame. The relationships hit a low in 2005, when Marcy split a $1 million award with their archrival Mayor. Marcy credited Butler and Vogt in his acceptance speech and donated most of the money to his home institutions, the University of California and San Francisco State University, but the damage was done. Two years later, the relationship disintegrated. Butler and Vogt formed their own splinter group; Butler and Marcy have barely spoken since.

    It was a risky move. Harps and Hires remained the best planet-hunting spectrographs available, and Butler and Vogt now lacked easy access to fresh data from either one. The American dynasty was shattered, and Marcy was forced to find new collaborators. Meanwhile, the ever-expanding European team continued to wring planets from Harps even though Mayor had formally retired in 2007. The search for Earth 2.0, long seen as a struggle between two teams, became a more crowded and open contest.

    Then a seeming breakthrough: In the spring of 2007, the Europeans announced that they’d spotted a potentially habitable world, Gliese 581d.[1] It was a blockbuster—a “super-Earth”—on the outer edge of the habitable zone, eight times more massive than our own world.

    Three years later, in 2010, Butler and Vogt scored their own big find around the same star—Gliese 581g. It was smack in the center of the habitable zone and only three or four times the bulk of Earth, so idyllic-seeming that Vogt poetically called it Zarmina’s World, after his wife, and said he thought the chances for life there were “100 percent.” Butler beamed too, in his own subdued way, saying “the planet is the right distance from the star to have water and the right mass to hold an atmosphere.” They had beaten Marcy, laid some claim to the first potentially Earth-like world, and bested their European competitors.

    But to a chorus of skeptics, Zarmina’s World seemed too good to be true. The European group said the signals the Americans had seen were too weak to be taken seriously. The fight was getting ugly; entire worlds were at stake.

    Plotted on a computer screen, a stellar wobble caused by a single planet looks like a sine wave, though real measurements are rarely so clear. A centimeters-per-second wobble in a million-kilometer-wide ball of seething, roiling plasma isn’t exactly a bright beacon across light-years. Spotting it takes hundreds to thousands of observations, spanning years, and even then it registers as a fractional offset of a single pixel in a detector. Sometimes a signal in one state-of-the-art spectrograph will fail to manifest in another. Researchers can chase promising blips for years, only to see their planetary dreams evaporate. Finding a stellar wobble caused by a habitable world requires a volatile mix of scientific acumen and slow-simmering personal obsession.

    A Spanish astronomer named Guillem Anglada-Escudé certainly meets that description. Now a lecturer at Queen Mary University of London, he began working with the American breakaways Butler (a friend and collaborator) and Vogt not long after they announced Gliese 581g.

    Today, Anglada-Escudé’s name is on the books next to between 20 and 30 exoplanets, many found by scraping public archives in search of weak, borderline wobbles. The European Southern Observatory, which funds Harps, mandates that the spectrograph’s overlords release its data after a proprietary period of a year or two. That gives other researchers access to high-quality observations and potential discoveries that the Harps team might have missed. Scavenging scraps from the European table, it turns out, can be almost as worthwhile as being invited to the meal.

    In the summer of 2011, Anglada-Escudé was a 32-year-old postdoc at the end of a fellowship, looking for a steady research position in academia. With Butler’s help he had developed alternative analytic techniques that he used to scour public Harps data. In fact, Anglada-Escudé argued that his approach treated planetary data sets more thoroughly and efficiently, harvesting more significant signals from the noise.

    One late night that August, he picked a new target: nearly 150 observations of a star called Gliese 667C[2] taken by the Harps team between 2004 and 2008. He sat before his laptop in a darkened room, waiting impatiently as his custom software slowly crunched through possible physically stable configurations of planets within the data.

    The first wobble to appear suggested a world in a seven-day orbit—the faster the orbit, the closer to the star the planet must be. A weeklong year is about enough time to get roasted to an inhospitable cinder—and anyway the Harps team had announced that one in 2009, as the planet Gliese 667Cb. But Anglada-Escudé spied what looked suspiciously like structure in the residuals of the stellar sine wave snaking across his screen. He ran his software again and another signal emerged, a strong oscillation with a 91-day period—possibly a planet, possibly a pulsation related to the estimated 105-day rotation period of the star itself.
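    The period-to-distance reasoning is Kepler’s third law: a³ = GMP²/4π². A sketch of why a seven-day orbit means a roasted planet, assuming a red-dwarf mass of roughly 0.3 solar masses for the host star (that mass is an illustrative assumption, not a figure from the article):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m

def semi_major_axis_au(period_days: float, star_mass_solar: float) -> float:
    """Kepler's third law: orbital distance (AU) from period and stellar mass,
    a = (G M P^2 / 4 pi^2)^(1/3), neglecting the planet's own mass."""
    p_sec = period_days * 86400.0
    a_m = (G * star_mass_solar * M_SUN * p_sec**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a_m / AU

# A 7-day orbit around a ~0.3 solar-mass red dwarf:
print(semi_major_axis_au(7.0, 0.3))  # ≈ 0.05 AU — far inside Mercury's orbit
```

The shorter the period, the tighter the orbit, which is why fast wobbles are both the easiest signals to find and the least promising for habitability.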

    [1] Astronomer Wilhelm Gliese cataloged hundreds of stars in the 1950s. The lowercase letter marks the order in which astronomers discovered the planets orbiting a star.

    [2] The capital C indicates the third star in a triple system, alongside the A and B stars.

    See the full article here.


    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 12:47 pm on May 23, 2011 Permalink | Reply
    Tags: Wired Science

    From Wired Science: “Milky Way Galaxy Has Mirrorlike Symmetry” 

    This article is copyright protected, so here’s just a glimpse.

    By Ron Cowen, Science News
    May 21, 2011

    “A new study suggests the Milky Way doesn’t need a makeover: It’s already just about perfect.

    Astronomers base that assertion on their discovery of a vast section of a spiral, star-forming arm at the Milky Way’s outskirts. The finding suggests that the galaxy is a rare beauty with an uncommon symmetry — one half of the Milky Way is essentially the mirror image of the other half.”


    See the full article here.
