From ars technica: “Finally some answers on dark energy, the mysterious master of the Universe”

Ars Technica

Nov 5, 2015
Eric Berger

U Texas McDonald Observatory Hobby-Eberly 9.1 meter Telescope
U Texas McDonald Observatory Hobby-Eberly 9.1 meter Telescope Interior

Unless you’re an astrophysicist, you probably don’t sit around thinking about dark energy all that often. That’s understandable, as dark energy doesn’t really affect anyone’s life. But when you stop to ponder dark energy, it’s really rather remarkable. This mysterious force, which makes up the bulk of the Universe but was only discovered 17 years ago, somehow is blasting the vast cosmos apart at ever-increasing rates.

Astrophysicists do sit around and think about dark energy a lot. And they’re desperate for more information about it as, right now, they have essentially two data points. One shows the Universe in its infancy, at 380,000 years old, thanks to observations of the cosmic microwave background radiation. And by pointing their telescopes into the sky and looking about, they can measure the present expansion rate of the Universe.

But astronomers would desperately like to know what happened in between the Big Bang and now. Is dark energy constant, or does it change over time? Or, more crazily still, might it be about to undergo some kind of phase change and turn everything into ice, as ice-nine did in Kurt Vonnegut’s novel Cat’s Cradle? Probably not, but really, no one knows.

The Plan

Fortunately astronomers in West Texas have a $42 million plan to use the world’s fourth largest optical telescope to get some answers. Until now, the 9-meter Hobby-Eberly telescope at McDonald Observatory has excelled at observing very distant objects, but this has necessitated a narrow field of view. However, with a clever new optical system, astronomers have expanded the telescope’s field of view by a factor of 120, to nearly the size of a full Moon. The next step is to build a suite of spectrographs and, using 34,000 optical fibers, wire them into the focal plane of the telescope.

“We’re going to make this 3-D map of the Universe,” Karl Gebhardt, a professor of astronomy at the University of Texas at Austin, told Ars. “On this giant map, for every image that we take, we’ll get that many spectra. No other telescope can touch this kind of information.”

With this detailed information about the location and age of objects in the sky, astronomers hope to gain an understanding of how dark energy affected the expansion rate of the Universe 5 billion to 10 billion years ago. There are many theories about what dark energy might be and how the expansion rate has changed over time. Those theories make predictions that can now be tested with actual data.
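
For a sense of how those predictions differ, here is a minimal, illustrative sketch (a generic textbook Friedmann-equation calculation, not the HETDEX analysis; the parameter values are assumed round numbers): the expansion rate H(z) depends on the dark-energy equation-of-state parameter w, where w = -1 corresponds to a cosmological constant.

```python
# Illustrative only: a flat-universe Friedmann equation showing how different
# dark-energy models (constant equation-of-state parameter w) predict different
# expansion histories H(z). Parameter values are rough assumptions, not HETDEX results.
import math

H0 = 70.0         # Hubble constant today, km/s/Mpc (assumed)
omega_m = 0.3     # matter density fraction (assumed)
omega_de = 0.7    # dark-energy density fraction (assumed, flat universe)

def hubble(z, w=-1.0):
    """Expansion rate H(z) in km/s/Mpc for a constant equation of state w."""
    matter = omega_m * (1.0 + z) ** 3
    dark_energy = omega_de * (1.0 + z) ** (3.0 * (1.0 + w))
    return H0 * math.sqrt(matter + dark_energy)

# Compare a cosmological constant (w = -1) with an evolving alternative (w = -0.8)
for z in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(f"z={z:3.1f}  H(w=-1.0)={hubble(z, -1.0):6.1f}  H(w=-0.8)={hubble(z, -0.8):6.1f}")
```

Differences of a few percent in H(z) over those epochs are the sort of signal such a survey is after, which is why it needs precise positions for enormous numbers of galaxies.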

In Texas, there’s a fierce sporting rivalry between the Longhorns in Austin and Texas A&M Aggies in College Station. But in the field of astronomy and astrophysics the two universities have worked closely together. And perhaps no one is more excited than A&M’s Nick Suntzeff about the new data that will come down over the next four years from the Hobby-Eberly telescope.

Suntzeff is best known for co-founding, with Brian Schmidt, the High-Z Supernova Search Team, one of the two research groups that discovered dark energy in 1998. This startling observation that the expansion rate of the Universe was in fact accelerating upended physicists’ understanding of the cosmos. They continue to grapple with understanding the mysterious force—hence the enigmatic appellation dark energy—that could be causing this acceleration.

Dawn of the cosmos

When scientists observe quantum systems, they see tiny energy fluctuations. They think these same fluctuations occurred at the very dawn of the Universe, Suntzeff explained to Ars. And as the early Universe expanded, so did these fluctuations. Then, at about 1 second, when the temperature of the Universe was about 10 billion kelvins, these fluctuations were essentially imprinted onto dark matter. From then on, this dark matter (whatever it actually is) responded only to the force of gravity.

Meanwhile, normal matter and light were also filling the Universe, and they were more strongly affected by electromagnetism than gravity. As the Universe expanded, this light and matter rippled outward at the speed of sound. Then, at 380,000 years, Suntzeff said these sound waves “froze,” leaving the cosmic microwave background.

These ripples, frozen with respect to one another, expanded outward as the Universe likewise grew. They can still be faintly seen today—many galaxies are spaced apart by about 500 million light years, the size of the largest ripples. But what happened between this freezing long ago, and what astronomers see today, is a mystery.

The Texas experiment will allow astronomers to fill in some of that gap. They should be able to tease apart the two forces acting upon the expansion of the Universe. There’s the gravitational clumping, due to dark matter, which is holding back expansion. Then there’s the acceleration due to dark energy. Because the Universe’s expansion rate is now accelerating, dark energy appears to be dominating now. But is it constant? And when did it overtake dark matter’s gravitational pull?

“I like to think of it sort of as a flag,” Suntzeff said. “We don’t see the wind, but we know the strength of the wind by the way the flag ripples in the breeze. The same with the ripples. We don’t see dark energy and dark matter, but we see how they push and pull the ripples over time, and therefore we can measure their strengths over time.”

The Universe’s end?

Funding for the $42 million experiment at McDonald Observatory, called HETDEX for Hobby-Eberly Telescope Dark Energy Experiment, will come from three sources: one-third from the state of Texas, one-third from the federal government, and one-third from private foundations.

The telescope is in the Davis Mountains of West Texas, which provide some of the darkest and clearest skies in the continental United States. The upgraded version took its first image on July 29. Completing the experiment will take three or four years, but astronomers expect to have a pretty good idea about their findings within the first year.

If dark energy is constant, then our Universe has a dark, lonely future, as most of what we can now observe will eventually disappear over the horizon at speeds faster than that of light. But if dark energy changes over time, then it is hard to know what will happen, Suntzeff said. One unlikely scenario—among many, he said—is a phase transition. Dark energy might go through some kind of catalytic change that would propagate through the Universe. Then it might be game over, which would be a nice thing to know about in advance.

Or perhaps not.

See the full article here.

Please help promote STEM in your local schools.

STEM Icon
STEM Education Coalition
Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

From ars technica: “A deathblow to the Death Star: The rise and fall of NASA’s Shuttle-Centaur”

Ars Technica

Oct 9, 2015
Emily Carney

In January 1986, astronaut Rick Hauck approached his STS-61F crew four months before their mission was scheduled to launch. The shuttle Challenger was set to deploy the Ulysses solar probe on a trajectory to Jupiter, utilizing a liquid-fueled Centaur G-Prime stage. While an upcoming launch should be an exciting time for any astronaut, Hauck’s mood was anything but optimistic. As he spoke to his crew, his tone was grave. Recalling the moment in a 2003 Johnson Space Center (JSC) oral history, he couldn’t remember his exact words, but the message was clear.

“NASA is doing business different from the way it has in the past. Safety is being compromised, and if any of you want to take yourself off this flight, I will support you.”

Hauck wasn’t just spooked by the lax approach that eventually led to the Challenger explosion. Layered on top of that concern was the planned method of sending Ulysses away from Earth. The Centaur was fueled by a combustible mix of liquid hydrogen and oxygen, and it would be carried to orbit inside the shuttle’s payload bay.

The unstoppable shuttle

Hauck’s words may have seemed shocking, but they were prescient. In the early 1980s, the space shuttle seemed unstoppable. Technically called the US Space Transportation System program, the shuttle was on the verge of entering what was being called its “Golden Age” in 1984. The idea of disaster seemed remote. As experience with the craft grew, nothing seemed to have gone wrong (at least nothing the public was aware of). It seemed nothing could go wrong.

In 1985, the program enjoyed a record nine successful spaceflights, and NASA was expected to launch a staggering 15 missions in 1986. The manifest for 1986 was beyond ambitious, including but not limited to a Department of Defense mission into a polar orbit from Vandenberg Air Force Base, the deployment of the Hubble telescope to low Earth orbit, and the delivery of two craft destined for deep space: Galileo and Ulysses.

The space shuttle had been touted as part space vehicle and part “cargo bus,” something that would make traveling to orbit routine. The intense schedule suggested it would finally fulfill the promise that had faded during the wait for its long-delayed maiden flight in April 1981. As astronaut John Young, who commanded that historic first flight, stated in his book Forever Young, “When we finished STS-1, it was clear we had to make the space shuttle what we hoped it could be—a routine access-to-space vehicle.”

To meet strict deadlines, however, safety was starting to slide. Following the last test flight (STS-4, completed in July 1982), crews no longer wore pressure suits during launch and reentry, making shuttle flights look as “routine” as airplane rides. The shuttle had no ejection capability at the time, so its occupants were committed to the launch through the bitter end.

Yet by mid-1985, the space shuttle program had already experienced several near-disasters. Critics of the program had long fretted over the design of the system, which boasted two segmented solid rocket boosters and an external tank. The boosters were already noted to have experienced “blow by” in the O-rings of their joints, which could leak hot exhaust out the sides of the structure. It was an issue that would later come to the forefront in a horrific display during the Challenger disaster.

But there were other close calls that the public was largely unaware of. In late July 1985, the program had experienced an “Abort to Orbit” condition during the launch of STS-51F, commanded by Gordon Fullerton. A center engine had failed en route to space, which should normally call for the shuttle’s immediate return. Instead, a quick call was made by Booster Systems Engineer Jenny Howard to “inhibit main engine limits,” which may have prevented another engine from failing, possibly saving the orbiter Challenger and its seven-man crew. (The mission did reach orbit, but a lower one than planned.)

Howard makes the call to push the engines past their assigned limits.

People who followed things closely recognized the problems. The “Space Shuttle” section of Jane’s Spaceflight Directory 1986 (which was largely written the year before) underscored the risky nature of the early program: “The narrow safety margins and near disasters during the launch phase are already nearly forgotten, save by those responsible for averting actual disaster.”

The push for Shuttle-Centaur

All of those risks existed when the shuttle was simply carrying an inert cargo to orbit. Shuttle-Centaur, the high-energy solution intended to propel Galileo and Ulysses into space, was anything but inert.

Shuttle-Centaur was born from a desire to send heavier payloads on a direct trajectory to deep space targets from America’s flagship space vehicles.

Centaur-2A upper stage of an Atlas IIA

The Centaur rocket was older than NASA itself. According to a 2012 NASA History article, the US Air Force teamed up with General Dynamics/Astronautics Corp. to develop a rocket stage that could be carried to orbit and then ignite to propel heavier loads into space. In 1958 the proposal was accepted by the government’s Advanced Research Projects Agency, and the upper stage that would become Centaur began its development.

The first successful flight of a Centaur (married to an Atlas booster) was made on November 27, 1963. While the launch vehicle carried no payload, it did demonstrate that a liquid hydrogen/liquid oxygen upper stage worked. In the years since, the Centaur has helped propel a wide variety of spacecraft to deep-space destinations. Both Voyagers 1 and 2 received a much-needed boost from their Centaur stages en route to the Solar System’s outer planets and beyond.

NASA Voyager 1
Voyager 1

General Dynamics was tasked with adapting the rocket stage so it could be taken to orbit on the shuttle. A Convair/General Dynamics poster from this period read enthusiastically, “In 1986, we’re going to Jupiter…and we need your help.” The artwork on the poster appeared retro-futuristic, boasting a spacecraft propelled by a silvery rocket stage that looked like something out of a sci-fi fantasy novel or Omni magazine. In the distance, a space shuttle—payload bay doors open—hovered over an exquisite Earth-scape.

General Dynamics’ artistic rendering of Shuttle-Centaur, with optimistic text about a 1986 target date for launch.
The San Diego Air & Space Museum Archives on Flickr.

The verbiage from a 1984 paper titled Shuttle Centaur Project Perspective, written by Edwin T. Muckley of NASA’s Lewis (now Glenn) Research Center, suggested that Jupiter would be the first of many deep-space destinations. Muckley optimistically described the technology: “It’s expected to meet the demands of a wide range of users including NASA, the DOD, private industry, and the European Space Agency (ESA).”

The paper went on to describe the two different versions of the liquid-fueled rocket, meant to be cradled inside the orbiters’ payload bays. “The initial version, designated G-Prime, is the larger of the two, with a length of 9.1 m (30 ft.). This vehicle will be used to launch the Galileo and International Solar Polar Missions (ISPM) [later called Ulysses] to Jupiter in May 1986.”

According to Muckley, the shorter version, Centaur G, was to be used to launch DOD payloads, the Magellan spacecraft to Venus, and TDRSS [tracking and data relay satellite system] missions. He added optimistically, “…[It] is expected to provide launch services well into the 1990s.”

NASA Magellan
Magellan

Dennis Jenkins’ book Space Shuttle: The History of the National Space Transportation System, the First 100 Missions discussed why Centaur became seen as desirable for use on the shuttle in the 1970s and early 1980s. A booster designed specifically for the shuttle called the Inertial Upper Stage (developed by Boeing) did not have enough power to directly deliver deep-space payloads (this solid stage would be used for smaller satellites such as TDRSS hardware). As the author explained, “First and most important was that Centaur was more powerful and had the ability to propel a payload directly to another planet. Second, Centaur was ‘gentler’—solid rockets had a harsh initial thrust that had the potential to damage the sensitive instruments aboard a planetary payload.”

However, the Centaur aboard the shuttle also had its drawbacks. First, it required changes in the way the shuttle operated. The crew had to be reduced to four in order to fit a heavier payload and a perilously thin-skinned, liquid-fueled rocket stage inside a space shuttle’s payload bay. And the added weight meant that the shuttle could only be sent to its lowest possible orbit.

In addition, during launch, the space shuttles’ main engines (SSMEs) would be taxed unlike any other time in program history. Even with smaller crews and the food-prep galley removed from the mid-deck, the shuttle’s main engines would have to be throttled up to an unheard-of 109-percent thrust level to deliver the shuttle, payload, and crew to orbit. The previous “maximum” had been 104 percent.

But the risks of the shuttle launch were only a secondary concern. “The perceived advantage of the IUS [Inertial Upper Stage] over the Centaur was safety—LH2 [liquid hydrogen] presented a significant challenge,” Jenkins noted. “Nevertheless, NASA decided to accept the risk and go with the Centaur.”

While a host of unknowns remained concerning launching a volatile, liquid-fueled rocket stage on the back of a space shuttle armed with a liquid-filled tank and two solid rocket boosters, NASA and its contractors galloped full speed toward a May 1986 launch deadline for both spacecraft. The project would be helmed by NASA’s Lewis. It was decided that the orbiters Challenger and Discovery would be modified to carry Centaur (the then-new orbiter Atlantis was delivered with Centaur capability) with launch pad modifications taking place at the Kennedy Space Center and Vandenberg.

The “Death Star” launches

The launch plan was dramatic: two shuttles, Challenger and Atlantis, were to be on Pads 39B and 39A in mid-1986, carrying Ulysses and Galileo, each linked to the Shuttle-Centaur. The turnaround was also to be especially quick: these launches would take place within five days of one another.

The commander of the first shuttle mission, John Young, was known for his laconic sense of humor. He began to refer to missions 61F (Ulysses) and 61G (Galileo) as the “Death Star” missions. He wasn’t entirely joking.

The thin-skinned Centaur posed a host of risks to the crews. In an AmericaSpace article, space historian Ben Evans pointed out that gaseous hydrogen would periodically have to be “bled off” to keep its tank within pressure limits. However, if too much hydrogen was vented, the Centaur stages would not have enough propellant to send their payloads on the trek to Jupiter. Time was of the essence, and the crews would be under considerable stress. Their first deployment opportunities would occur a mere seven hours post-launch, and three deployment “windows” were scheduled.

The venting itself posed its own problems. There was a concern about the position of the stage’s vents, which were located near the exhaust ports for the shuttles’ Auxiliary Power Units—close enough that some worried venting could cause an explosion.

Another big concern involved what would happen if the shuttle had to dump the stage’s liquid fuel prior to performing a Return-to-Launch-Site (RTLS) abort or a Transatlantic (TAL) abort. There was worry that the fuel would “slosh” around in the payload bay, rendering the shuttle uncontrollable. (There were also worries about the feasibility of these abort plans with a normal shuttle cargo, but that’s another story.)

These concerns filtered down to the crews. According to Evans, astronaut John Fabian was originally meant to be on the crew of 61G, but he resigned partly due to safety concerns surrounding Shuttle-Centaur. “He spent enough time with the 61G crew to see a technician clambering onto the Centaur with an untethered wrench in his back pocket and another smoothing out a weld, then accidentally scarring the booster’s thin skin with a tool,” the historian wrote. “In Fabian’s mind, it was bad enough that the Shuttle was carrying a volatile booster with limited redundancy, without adding new worries about poor quality control oversight and a lax attitude towards safety.”

Astronauts John Fabian and Dave Walker pose in front of what almost became their “ride” during a Shuttle-Centaur rollout ceremony in mid-1985.
NASA/Glenn Research Center

STS-61F’s commander, Hauck, had also developed misgivings about Shuttle-Centaur. In the 2003 JSC oral history, he bluntly discussed the unforgiving nature of his mission:

“…[If] you’ve got a return-to-launch-site abort or a transatlantic abort and you’ve got to land, and you’ve got a rocket filled with liquid oxygen, liquid hydrogen in the cargo bay, you’ve got to get rid of the liquid oxygen and liquid hydrogen, so that means you’ve got to dump it while you’re flying through this contingency abort. And to make sure that it can dump safely, you need to have redundant parallel dump valves, helium systems that control the dump valves, software that makes sure that contingencies can be taken care of. And then when you land, here you’re sitting with the Shuttle-Centaur in the cargo bay that you haven’t been able to dump all of it, so you’re venting gaseous hydrogen out this side, gaseous oxygen out that side, and this is just not a good idea.”

Even as late as January 1986, Hauck and his crew were still working out issues with the system’s helium-actuated dump valves. He related, “…[It] was clear that the program was willing to compromise on the margins in the propulsive force being provided by the pressurized helium… I think it was conceded this was going to be the riskiest mission the Shuttle would have flown up to that point.”

Saved by disaster

Within weeks, the potential crisis was derailed dramatically by an actual crisis, one that was etched all over the skies of central Florida on an uncharacteristically cold morning. On January 28, 1986, Challenger—meant to hoist Hauck, his crew, Ulysses, and its Shuttle-Centaur in May—was destroyed shortly after launch, killing its crew of seven. On that ill-fated mission, safety had been dangerously compromised, with the shuttle launching following a brutal cold snap that made the boosters’ O-rings inflexible and primed to fail.

It became clear NASA had to develop a different attitude toward risk management. Keeping risks as low as possible meant putting Shuttle-Centaur on the chopping block. In June 1986, a Los Angeles Times article announced the death-blow to the Death Star.

“The National Aeronautics and Space Administration Thursday canceled development of a modified Centaur rocket that it had planned to carry into orbit aboard the space shuttle and then use to fire scientific payloads to Jupiter and the Sun. NASA Administrator James C. Fletcher said the Centaur ‘would not meet safety criteria being applied to other cargo or elements of the space shuttle system.’ His decision came after urgent NASA and congressional investigations of potential safety problems following the Jan. 28 destruction of the shuttle Challenger 73 seconds after launch.”

Astronauts Rick Hauck, John Fabian, and Dave Walker pose by a Shuttle-Centaur stage in mid-1985 during a rollout ceremony. Hauck and Fabian both had misgivings about Shuttle-Centaur. The San Diego Air & Space Museum Archives on Flickr.

After a long investigation and many ensuing changes, the space shuttle made its return to flight with STS-26 (helmed by Hauck) in September 1988. Discovery and the rest of the fleet boasted redesigned solid rocket boosters with added redundancy. In addition, crews had a “bailout” option if something went wrong during launch, and they wore pressure suits during ascent and reentry for the first time since 1982.

Galileo was successfully deployed from Atlantis (STS-34) using an IUS in October 1989, while Ulysses utilized an IUS and PAM-S (Payload Assist Module) to begin its journey following its deployment from Discovery (STS-41) in October 1990.

NASA Galileo
Galileo

As for Shuttle-Centaur? Relegated to the history books as a “what if,” a model now exists at the US Space and Rocket Center in Huntsville, Alabama. It still looks every inch the shiny, sci-fi dream depicted in posters and artists’ renderings back in the 1980s. However, this “Death Star” remains on terra firma, representing what Jim Banke described as the “naive arrogance” of the space shuttle’s Golden Age.

Additional sources

Hitt, D., & Smith, H. (2014). Bold they rise: The space shuttle early years, 1972 – 1986. Lincoln, NE: University of Nebraska Press.
Jenkins, D. R. (2012). Space shuttle: The history of the national space transportation system, the first 100 missions. Cape Canaveral, FL: Published by author.
Turnill, R. (Ed.). (1986). Jane’s spaceflight directory (2nd ed.). London, England: Jane’s Publishing Company Limited.
Young, J. W., & Hansen, J. R. (2012). Forever young: A life of adventure in air and space. Gainesville, FL: University Press of Florida.
Dawson, V., & Bowles, M.D. (2004). Taming liquid hydrogen: The Centaur upper stage rocket, 1958 – 2002. Washington, D.C.: National Aeronautics and Space Administration.

See the full article here.

Please help promote STEM in your local schools.

STEM Icon
STEM Education Coalition

From ars technica: “Quantum dots may be key to turning windows into photovoltaics”

Ars Technica

Aug 26, 2015
John Timmer

Some day, this might generate electricity. Flickr user Ricardo Wang

While wind may be one of the most economical power sources out there, photovoltaic solar energy has a big advantage: it can go small. While wind gets cheaper as turbines grow larger, the PV hardware scales down to fit wherever we have infrastructure. In fact, simply throwing solar on our existing building stock could generate a very large amount of carbon-free electricity.

But that also highlights solar’s weakness: we have to install it after the infrastructure is in place, and that installation adds considerably to its cost. Now, some researchers have come up with some hardware that could allow photovoltaics to be incorporated into a basic building component: windows. The solar windows would filter out a small chunk of the solar spectrum and convert roughly a third of it to electricity.

As you’re probably aware, photovoltaic hardware has to absorb light in order to work, and a typical silicon panel appears black. So, to put any of that hardware (and its supporting wiring) into a window that doesn’t block the view is rather challenging. One option is to use materials that only capture a part of the solar spectrum, but these tend to leave the light that enters the building with a distinctive tint.

The new hardware takes a very different approach. The entire window is filled with a diffuse cloud of quantum dots that absorb light from across almost the entire solar spectrum. As a result, the “glass” portion of things simply dims the light passing through the window slightly. (The quantum dots are actually embedded in a transparent polymer, but that could be embedded in or coat glass.) The end result is what optics people call a neutral density filter, something often used in photography. In fact, tests with the glass show that the light it transmits meets the highest standards for indoor lighting.

Of course, simply absorbing the light doesn’t help generate electricity. And, in fact, the quantum dots aren’t used to generate the electricity. Instead, the authors generated quantum dots made of copper, indium, and selenium, covered in a layer of zinc sulfide. (The authors note that there are no toxic metals involved here.) These dots absorb light across a broad band of spectrum, but re-emit it at a specific wavelength in the infrared. The polymer they’re embedded in acts as a waveguide to take many of the photons to the thin edge of the glass.

And here’s where things get interesting: the wavelength of infrared the quantum dots emit happens to be very efficiently absorbed by a silicon photovoltaic device. So, if you simply place these devices along the edges of the glass, they’ll be fed a steady diet of photons.

The authors model the device’s behavior and find that nearly half the infrared photons end up being fed to the photovoltaic devices (equal amounts get converted to heat or escape the window entirely). It’s notable that the devices are small, though (squares about 12 cm on a side)—larger panes would presumably allow even more photons to escape.

The authors tested a few of the devices, one that filtered out 20 percent of the incident sunlight and one that captured only 10 percent. The lighter filter sent about one percent of the incident light to the sides, while the darker one sent over three percent.
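
For a rough sense of scale, here is a back-of-envelope sketch of what that three percent edge delivery could mean in electrical terms. The irradiance, window area, and cell efficiency below are assumed illustrative values, not numbers from the paper.

```python
# Back-of-envelope estimate, illustrative only. The ~3% edge-delivery figure is
# from the article; the irradiance, window area, and PV efficiency are assumptions.
incident_irradiance = 1000.0   # W per square meter, bright direct sunlight (assumed)
window_area = 1.0              # square meters of window (assumed)
fraction_to_edges = 0.03       # the darker prototype sent ~3% of incident light to its edges
pv_efficiency = 0.20           # assumed conversion efficiency of the edge-mounted silicon cells

power_at_edges = incident_irradiance * window_area * fraction_to_edges
electrical_power = power_at_edges * pv_efficiency
print(f"Light reaching edge cells: {power_at_edges:.0f} W per square meter of window")
print(f"Electrical output:         {electrical_power:.0f} W per square meter of window")
# Roughly 6 W per square meter of window under these assumptions.
```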

There will be losses in the conversion to electricity as well, so this isn’t going to come close to competing with a dedicated panel on a sunny roof. Which is fine, because it’s simply not meant to. Any visit to a major city will serve as a good reminder that we’re regularly building giant walls of glass that currently reflect vast amounts of sunlight, blinding or baking (or both!) the city’s inhabitants on a sunny day. If we could cheaply harvest a bit of that instead, we’re ahead of the game.

Nature Nanotechnology, 2015. DOI: 10.1038/NNANO.2015.178 (About DOIs).

See the full article here.

Please help promote STEM in your local schools.

STEM Icon
STEM Education Coalition

From ars technica: “Huge population of “Ultra-Dark Galaxies” discovered”

Ars Technica

Jul 11, 2015
Xaq Rzetelny


About 321 million light-years away from us is the Coma Cluster, a massive grouping of more than 1,000 galaxies.

A Sloan Digital Sky Survey/Spitzer Space Telescope mosaic of the Coma Cluster in long-wavelength infrared (red), short-wavelength infrared (green), and visible light. The many faint green smudges are dwarf galaxies in the cluster.
Credit: NASA/JPL-Caltech/GSFC/SDSS

Some of its galaxies are a little unusual, however: they’re incredibly dim. So dim, in fact, that they have earned the title of “Ultra-Dark Galaxies” (UDGs). (The term is actually “Ultra-Diffuse Galaxies,” as their visible matter is thinly spread, though “ultra-dark” has been used by some sources and, let’s face it, sounds a lot better.) These objects first drew attention earlier this year in a study that identified 47 of them.

Dimness isn’t necessarily unusual in a galaxy. Most of a galaxy’s light comes from its stars, so the smaller a galaxy is (and thus the fewer stars it has), the dimmer it will be. We’ve found many dwarf galaxies that are significantly dimmer than their larger cousins.

What was so unusual about these 47 is that they’re not small enough to account for their dimness. In fact, many of them are roughly the size of our own Milky Way (ranging in radius from 1.5 to 4.6 kiloparsecs, compared with the Milky Way’s roughly 3.6) but have only roughly one thousandth of the Milky Way’s stars. The authors of the recent study interpret this to mean that these galaxies must be even more dominated by dark matter than are ordinary galaxies.
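
To see why that combination earns the label “diffuse,” a quick calculation with the standard astronomical magnitude relation helps (this is a generic illustration, not a number from the study): a galaxy with a Milky Way-like footprint but one-thousandth the stars is about 7.5 magnitudes fainter per unit area.

```python
# Illustrative only: how much fainter per unit area a Milky Way-sized galaxy with
# 1/1000th of the stars appears, using delta_m = -2.5 * log10(brightness ratio).
import math

star_fraction = 1.0 / 1000.0   # roughly one thousandth of the Milky Way's stars
area_ratio = 1.0               # roughly the same extent on the sky
surface_brightness_ratio = star_fraction / area_ratio
delta_mag = -2.5 * math.log10(surface_brightness_ratio)
print(f"Surface brightness deficit: about {delta_mag:.1f} magnitudes")  # ~7.5 mag
```

A factor of a thousand in surface brightness helps explain how galaxies this large could go largely unnoticed in ordinary survey images.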

Finding the dark

Intrigued by this tantalizing observation, a group of researchers constructed a more detailed study. Using archival data from the 8.2-meter Subaru telescope, they examined the sky region in question and discovered more UDGs—854 of them. Given that the images they were working with don’t cover the full cluster, the researchers estimated that there should be roughly 1,000 UDGs visible in the cluster altogether.

NAOJ Subaru Telescope
NAOJ Subaru Telescope interior
NAOJ/Subaru

There are a lot of small caveats to this conclusion. First of all, it’s not certain that all these galaxies are actually in the Coma Cluster, as some might just be along the same line of sight. However, it’s very likely that most of them do lie within the cluster. If the UDGs aren’t part of the cluster, then they’re probably a typical sample of what we’d observe in any patch of sky the same size as the Subaru observation. If that’s true, then the Universe has an absurdly high number of UDGs, and we should have seen more of them already.

In this particular patch of sky, the concentration of UDGs is stronger towards the center of the Coma Cluster. While that doesn’t prove they’re part of the cluster, it’s strongly suggestive.

Dark tug-of-war

The dim galaxies’ relationship to the cluster probably has something to do with the mechanism that made the UDGs so dark in the first place. These galaxies would have had an ample supply of gas with which to make stars, so something must have prevented that from happening. This could be because the gas was somehow stripped from its galaxy or because something cut off a supply of gas from elsewhere.

The dense environment in the cluster might be responsible for this. Gravitational interactions can pull the galaxies apart or strip them of their gas. These encounters can also deplete the gas near the galaxies, cutting off the inflow of new material. Since there are plenty of galaxies swirling around in the dense cluster, there are plenty of opportunities for this to happen to an unfortunate galaxy. The victims of these vampiric attacks might become dark, losing their ability to form stars. Neither living nor dead, these bodies still roam the Universe, perhaps waiting to strip unsuspecting galaxies of their gas.

But unlike those bitten by movie vampires, the galaxies have a way to fight back. Rather than letting their blood (or in this case gas) get sucked away, the galaxy’s own gravity can hang onto it. And since most of a galaxy’s mass comes in the form of dark matter, the mysterious substance is pretty important in the tug-of-war over the galaxy’s star-forming material. The more dark matter a galaxy’s got, the more likely it will be able to hold onto its material when other galaxies pass by.

“We believe that something invisible must be protecting the fragile star systems of these galaxies, something with a high mass,” said Jin Koda, an astrophysicist with Stony Brook University and the paper’s lead author. “That ‘something’ is very likely an excessive amount of dark matter.”

The role dark matter plays in this struggle is useful for researchers here on Earth. If they want to find out how much dark matter one of these UDGs has, all they have to do is look at how much material the galaxy has held onto. While the results of an encounter between galaxies are complicated and dependent on many factors, this technique can at least give them a rough idea.

Close to the core

Near the core of the Coma Cluster, there’s a higher density of galaxies, and so many more opportunities for galaxies to lose their gas in encounters. Tidal forces are much stronger there, and as such it takes more dark matter to continue to hold onto material.

The earlier study’s smaller sample of UDGs didn’t see any of them very close to the core, and it seemed safe to assume any potential UDGs deeper in had been ripped apart entirely. That provided a clue as to the amount of dark matter these galaxies contain: not enough to hold them together in the core. The authors of that study used this information to put an upper limit on the percentage of dark matter in the UDGs, but it was very high—up to 98 percent. But even galaxies with 98 percent dark matter shouldn’t survive in the rough center of the cluster.

Thus, in the new study, researchers didn’t expect to find UDGs any closer to the core. But they did. These galaxies are less clearly resolved because, in the cluster’s center, more interference from background objects mucks up the view. But assuming they have been correctly identified, they’ve got even more dark matter than the previous estimate: greater than 99 percent. There can be no doubt these UDGs live up to their (unofficial) name, as everything else the UDG includes—stars, black holes, planets, gas—makes up less than one percent of the galaxy’s mass.

Into the dark

The discovery of so many dark galaxies in the Coma Cluster is a stride forward in the exploration of these objects. (Note: some of the objects included in the study had been previously discovered and were included in galaxy catalogs, but they were inconsistently classified, with many of them not identified as UDGs at all.) The study’s large sample size, compared with the earlier one, strengthens its conclusions and also provides a more detailed picture of how these dark galaxies come to be.

Many questions remain for future work to address, however. It’s still not known exactly how many of the objects identified in the study are actually part of the Coma Cluster, though it is likely that most are. Another question is whether the Coma Cluster’s UDG distribution is typical of other clusters, which will determine how well the findings of this study can be extrapolated elsewhere in the Universe. Modeling should also provide a more detailed look into the complex interactions of galaxies in the cluster, including the exact mechanisms responsible for the creation of UDGs.

And crucially, UDGs offer an excellent opportunity to observe and study dark matter. Situations like this one, where dark matter’s interactions with baryonic (ordinary) matter can be observed, are ripe for study.

“This discovery of dark galaxies may be the tip of the iceberg,” said Dr. Koda. “We may find more if we look for fainter galaxies embedded in a large amount of dark matter with the Subaru Telescope, and additional observations may expose this hidden side of the Universe.”

The Astrophysical Journal Letters, 2015. DOI: 10.1088/2041-8205/807/1/L2 (About DOIs)

Surprisingly, the institution responsible for this research is not named, nor are we given the names of the team members and their affiliations.

See the full article here.

Please help promote STEM in your local schools.

STEM Icon
STEM Education Coalition

From ars technica: “Shining an X-Ray torch on quantum gravity”

Ars Technica

Mar 17, 2015
Chris Lee

This free electron laser could eventually provide a test of quantum gravity. BNL

Quantum mechanics has been successful beyond the wildest dreams of its founders. The lives and times of atoms, governed by quantum mechanics, play out before us on the grand stage of space and time. And the stage is an integral part of the show, bending and warping around the actors according to the rules of general relativity. The actors—atoms and molecules—respond to this shifting stage, but they have no influence on how it warps and flows around them.

This is puzzling to us. Why is it such a one-directional thing: general relativity influences quantum mechanics, but quantum mechanics has no influence on general relativity? It’s a puzzle that is born of human expectation rather than evidence. We expect that, since quantum mechanics is punctuated by sharp jumps, somehow space and time should do the same.

There’s also the expectation that, if space and time acted a bit more quantum-ish, then the equations of general relativity would be better behaved. In general relativity, it is possible to bend space and time infinitely sharply. This is something we simply cannot understand: what would infinitely bent space look like? To most physicists, it looks like something that cannot actually be real, indicating a problem with the theory. Might this be where the actors influence the stage?

Quantum mechanics and relativity on the clock

To try and catch the actors modifying the stage requires the most precise experiments ever devised. Nothing we have so far will get us close, so a new idea from a pair of German physicists is very welcome. They focus on what’s perhaps the most promising avenue for detecting quantum influences on space-time: time-dilation experiments. Modern clocks rely on the quantum nature of atoms to measure time. And the flow of time depends on relative speed and gravitational acceleration. Hence, we can test general relativity, special relativity, and quantum mechanics all in the same experiment.

To get an idea of how this works, let’s take a look at the traditional atomic clock. In an atomic clock, we carefully prepare some atoms in a predefined superposition state: that is, the atom is prepared such that it has a fifty percent chance of being in state A and a fifty percent chance of being in state B. As time passes, the environment around the atom forces the superposition state to change. At some later point, it will have a seventy-five percent chance of being in state A; even later, it will certainly be in state A. Keep on going, however, and the chance of being in state A starts to shrink, and it continues to do so until the atom is certainly in state B. Provided that the atom is undisturbed, these oscillations will continue.
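
Here is a minimal sketch of that cycle, using a generic two-state oscillation rather than any particular atom (illustrative only, not the paper’s model): the probability of finding the atom in state A swings periodically, and one full swing defines a tick.

```python
# Illustrative two-state clock oscillation, not taken from the paper. Starting from
# an equal superposition, the probability of measuring state A swings periodically;
# one full swing is one "tick" of the clock.
import math

def prob_state_A(t, period):
    """Probability of measuring state A at time t, starting from a 50/50 superposition."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))

period = 1.0  # one oscillation = one tick, in arbitrary units
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t = {t:4.2f} ticks -> P(A) = {prob_state_A(t, period):.2f}")
# Prints 0.50, 1.00, 0.50, 0.00, 0.50: fifty-fifty, certainly A, back to fifty-fifty,
# certainly B, and around again, the cycle described above.
```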

These periodic oscillations provide the perfect ticking clock. We simply define the period of an oscillation to be our base unit of time. To couple this to general relativity measurements is, in principle, rather simple. Build two clocks and place them beside each other. At a certain moment, we start counting ticks from both clocks. When one clock reaches a thousand (for instance), we compare the number of ticks from the two clocks. If we have done our job right, both clocks should have reached a thousand ticks.

If we shoot one into space, however, and perform the same experiment, relativity demands that the clock in orbit record more ticks than the clock on Earth. The way we record the passing of time is via a phenomenon that is purely quantum in nature, while the passing of time is modified by gravity. These experiments work really well. But at present, they are not sensitive enough to detect any deviation from either quantum mechanics or general relativity.
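
To get a feel for the sizes involved, here is a rough calculation with GPS-like numbers (the orbital radius and constants are assumed for illustration; they are not from the article): the orbiting clock gains because it sits higher in Earth’s gravity well and loses a little because it is moving, and at that altitude the gravitational term wins.

```python
# Rough scale of the effect for a GPS-like orbit (assumed numbers, not from the article).
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
r_ground = 6.371e6   # Earth's radius, m
r_orbit = 2.66e7     # GPS-like orbital radius, m (assumed)

grav_term = GM / c**2 * (1.0 / r_ground - 1.0 / r_orbit)  # higher clock ticks faster
vel_term = (GM / r_orbit) / (2.0 * c**2)                  # orbital speed slows it down
net = grav_term - vel_term

seconds_per_day = 86400.0
print(f"gravitational gain: {grav_term * seconds_per_day * 1e6:5.1f} microseconds/day")
print(f"velocity loss:      {vel_term * seconds_per_day * 1e6:5.1f} microseconds/day")
print(f"net:                {net * seconds_per_day * 1e6:5.1f} microseconds/day")
# Roughly +46, -7, and +38 microseconds per day: the orbiting clock records more ticks.
```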

Going nuclear

That’s where the new ideas come in. The researchers propose, essentially, to create something similar to an atomic clock, but instead of tracking the oscillation of atomic states, they want to track nuclear states. Usually, when I discuss atoms, I ignore the nucleus entirely. Yes, it is there, but I only really care about the influence the nucleus has on the energetic states of the electrons that surround it. However, in one key way the nucleus is just like the electron cloud that surrounds it: it has its own set of energetic states. It is possible to excite nuclear states (using X-Ray radiation) and, afterwards, they will return to the ground state by emitting an X-Ray.

So let’s imagine that we have a crystal of silver sitting on the surface of the Earth. The silver atoms all experience a slightly different flow of time because the atoms at the top of the crystal are further away from the center of the Earth compared to the atoms at the bottom of the crystal.

To kick things off, we send in a single X-Ray photon, which is absorbed by the crystal. This is where the awesomeness of quantum mechanics puts on sunglasses and starts dancing. We don’t know which silver atom absorbed the photon, so we have to consider that all of them absorbed a tiny fraction of the photon. This shared absorption now means that all of the silver atoms enter a superposition state of having absorbed and not absorbed a photon. This superposition state changes with time, just like in an atomic clock.

In the absence of an outside environment, all the silver atoms will change in lockstep. And when the photon is re-emitted from the crystal, all the atoms will contribute to that emission. So each atom behaves as if it is emitting a partial photon. These photons add together, and a single photon flies off in the same direction as the absorbed photon had been traveling. Essentially because all the atoms are in lockstep, the charge oscillations that emit the photon add up in phase only in the direction that the absorbed photon was flying.

Gravity, though, causes the atoms to fall out of lockstep. So when the time comes to emit, the charge oscillations are all slightly out of phase with each other. But they are not random: those at the top of the crystal are just slightly ahead of those at the bottom of the crystal. As a result, the direction for which the individual contributions add up in phase is not in the same direction as the flight path of the absorbed photon, but at a very slight angle.

How big is this angle? That depends on the size of the crystal and how long it takes the environment to randomize the emission process. For a crystal of silver atoms that is less than 1mm thick, the angle could be as large as 100 micro-degrees, which is small but probably measurable.

Spinning crystals

That, however, is only the beginning of a seam of clever. If the crystal is placed on the outside of a cylinder and rotated during the experiment, then the top atoms of the crystal are moving faster than the bottom, meaning that the time-dilation experienced at the top of the crystal is greater than that at the bottom. This has exactly the same effect as placing the crystal in a gravitational field, but now the strength of that field is governed by the rate of rotation.

In any case, by spinning a 10mm diameter cylinder very fast (70,000 revolutions per second), the angular deflection is vastly increased. For silver, for instance, it reaches 90 degrees. With such a large signal, even smaller deviations from the predictions of general relativity should be detectable in the lab. Importantly, these deviations happen on very small length scales, where we would normally start thinking about quantum effects in matter. Experiments like these may even be sensitive enough to see the influence of quantum mechanics on space and time.
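
A rough scaling comparison shows why spinning helps so much. This sketch uses the numbers quoted above but ignores the coherence details that set the actual deflection angles, so treat it as an order-of-magnitude illustration only.

```python
# Order-of-magnitude comparison of the fractional time-dilation difference across
# a ~1 mm crystal in the two setups described above. Illustrative scaling only.
import math

c = 2.998e8            # speed of light, m/s
thickness = 1.0e-3     # crystal thickness, m (the "less than 1 mm" case)

# Case 1: crystal sitting still in Earth's gravity
g = 9.81               # m/s^2
static_shift = g * thickness / c**2

# Case 2: crystal on the rim of a 10 mm diameter cylinder spinning at 70,000 rev/s
radius = 0.005                           # m
omega = 2.0 * math.pi * 70000.0          # rad/s
rim_speed = omega * radius               # ~2.2 km/s
spin_shift = (omega**2) * radius * thickness / c**2   # differential of v^2/(2c^2) across the crystal

print(f"gravity, across 1 mm:  {static_shift:.1e}")                 # ~1e-19
print(f"rim speed:             {rim_speed:.0f} m/s")
print(f"rotation, across 1 mm: {spin_shift:.1e}")                   # ~1e-11
print(f"enhancement:           {spin_shift / static_shift:.0e}x")   # ~1e8
```

A boost of roughly eight orders of magnitude in this crude estimate helps explain why the predicted deflection grows from micro-degrees to tens of degrees.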

A physical implementation of this experiment will be challenging but not impossible. The biggest issue is probably the X-Ray source and doing single photon experiments in the X-Ray regime. Following that, the crystals need to be extremely pure, and something called a coherent state needs to be created within them. This is certainly not trivial. Given that it took atomic physicists a long time to achieve this for electronic transitions, I think it will take a lot more work to make it happen at X-Ray frequencies.

On the upside, free electron lasers have come a very long way, and they offer much better control over beam intensities and stability. This is, hopefully, the sort of challenge that beam-line scientists live for.

See the full article here.

Please help promote STEM in your local schools.

STEM Icon
STEM Education Coalition

From ars technica: “Imaging a supernova with neutrinos”

Ars Technica

Mar 4, 2015
John Timmer

Two men in a rubber raft inspect the wall of photodetectors of the partly filled Super-Kamiokande neutrino detector. (BNL)

There are lots of ways to describe how rarely neutrinos interact with normal matter. Duke’s Kate Scholberg, who works on them, provided yet another. A 10 mega-electron-volt (MeV) gamma ray will, on average, go through 20 centimeters of carbon before it’s absorbed; a 10 MeV neutrino will go a light year. “It’s called the weak interaction for a reason,” she quipped, referring to the weak-force-generated processes that produce and absorb these particles.
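
One way to make that comparison concrete is to take the ratio of the two path lengths; the 20 centimeters and one light year are from the talk, the unit conversion is standard.

```python
# Ratio of the two absorption lengths quoted above (a simple unit conversion).
light_year_m = 9.461e15         # meters in one light year
gamma_path_m = 0.20             # ~20 cm for a 10 MeV gamma ray in carbon
neutrino_path_m = light_year_m  # ~1 light year for a 10 MeV neutrino

ratio = neutrino_path_m / gamma_path_m
print(f"The neutrino travels roughly {ratio:.0e} times farther before interacting")
# ~5e16: per atom encountered, the neutrino is that much less likely to interact.
```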

But there’s one type of event that produces so many of these elusive particles that we can’t miss it: a core-collapse supernova, which occurs when a star can no longer produce enough energy to counteract the pull of gravity. We typically spot these through the copious amounts of light they produce, but in energetic terms, that’s just a rounding error: Scholberg said that 99 percent of the gravitational energy of the supernova goes into producing neutrinos.
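
A rough sense of what that 99 percent implies (using typical textbook values assumed here, not figures from the talk): the collapse releases on the order of 3×10^46 joules, and an individual supernova neutrino carries only around 15 MeV, so the number of neutrinos produced is enormous.

```python
# Order-of-magnitude estimate with assumed textbook values, not numbers from the talk.
E_total_joules = 3e46          # rough gravitational binding energy released (assumed)
neutrino_fraction = 0.99       # ~99% carried away by neutrinos (from the article)
E_per_neutrino_MeV = 15.0      # assumed average supernova neutrino energy
joules_per_MeV = 1.602e-13

n_neutrinos = E_total_joules * neutrino_fraction / (E_per_neutrino_MeV * joules_per_MeV)
print(f"Roughly {n_neutrinos:.0e} neutrinos per core-collapse supernova")  # ~1e58
```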

Within instants of the start of the collapse, gravity forces electrons and protons to fuse, producing neutrons and releasing neutrinos. While the energy that goes into producing light gets held up by complicated interactions with the outer shells of the collapsing star, neutrinos pass right through any intervening matter. Most of them do, at least; there are so many produced that their rare interactions collectively matter, though our supernova models haven’t quite settled on how yet.

But our models do say that, if we could detect them all, we’d see their flavors (neutrinos come in three of them) change over time, and distinct patterns of emission during the star’s infall, accretion of matter, and then post-supernova cooling. Black hole formation would create a sudden stop to their emission, so they could provide a unique window into the events. Unfortunately, there’s the issue of too few of them interacting with our detectors to learn much.

The last nearby supernova, SN 1987A, produced a burst of electron antineutrinos, 20 of which were detected about 2.5 hours before the light from the explosion became visible.

Remnant of SN 1987A seen in light overlays of different spectra: ALMA data (radio, in red) shows newly formed dust in the center of the remnant; Hubble (visible, in green) and Chandra (X-ray, in blue) data show the expanding shock wave.

(Scholberg quipped that the Super-Kamiokande detector “generated orders of magnitude more papers than neutrinos.”) But researchers weren’t looking for this, so the burst was only recognized after the fact.

Super-Kamiokande detector, Japan

That’s changed now. Researchers can go to a Web page hosted by Brookhaven National Lab and have an alert sent to them if any of a handful of detectors picks up a burst of neutrinos. (The Daya Bay, IceCube, and Super-Kamiokande detectors are all part of this program.) When the next burst of neutrinos arrives, astronomers will be alert and searching for the source.

Daya Bay neutrino detector

IceCube neutrino detector

“The neutrinos are coming!” Scholberg said. “The supernovae have already happened, their wavefronts are on their way.” She said estimates are that there are three core collapse supernovae in our neighborhood each century and, by that measure, “we’re due.”
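To see where the “we’re due” intuition comes from, here is a minimal sketch that treats Galactic core-collapse supernovae as a Poisson process at the quoted rate of roughly three per century; the 30-year observing window is our own illustrative choice, not a number from the article.

```python
import math

rate_per_year = 3 / 100          # ~3 core-collapse supernovae per century
window_years = 30                # an illustrative observing window

expected = rate_per_year * window_years
p_at_least_one = 1 - math.exp(-expected)   # Poisson: P(N >= 1) = 1 - e^(-lambda)
print(f"Expected events in {window_years} yr: {expected:.1f}")
print(f"Chance of at least one: {p_at_least_one:.0%}")   # ~59%
```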

If that supernova occurs near the galactic core, it will put on quite a show. Rather than registering individual events, the entire volume of ice monitored by the IceCube detector will end up glowing. The Super-Kamiokande detector will see 10,000 individual neutrinos. “It will light up like a Christmas tree,” Scholberg said.

It’ll be an impressive show, and it’s one that I’m sure most physicists (along with me) hope happens in their lifetimes. But if it takes a little longer to arrive, the show may be even better. There are apparently plans afoot to build a “Hyper-Kamiokande,” which would be able to detect 100,000 neutrinos from a galactic core supernova. Imagine how many papers that would produce.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

From ars technica: “If dark matter is really axions, we could find out soon”

Ars Technica
ars technica

Jan 23 2015
Xaq Rzetelny

Lawrence Livermore National Lab

From observations of the Milky Way galaxy, we’ve learned that in any given cubic meter of space, even the particular cubic meter that snugly fits your seated form as you read this article, there’s a small amount of matter—only about 50 proton masses’ worth—passing through at any given moment. But unlike the particles that make up your seated form, this matter doesn’t interact. It doesn’t reflect light, it isn’t repelled by solid objects, it passes right through walls. This mysterious substance is known as dark matter.

Since there’s so little of it in each cubic meter, you would never notice its presence. But over the vast distances of space, there are a lot of cubic meters, and all that dark matter adds up. It’s only when you zoom out and look at the big picture that dark matter’s gravitational influence becomes apparent. It’s the main source of gravity holding every galaxy together; it binds galaxies to one another in clusters; and it warps space around galaxy clusters, creating a lensing effect.

But despite its importance to the large-scale structure of the Universe, we still don’t know what dark matter really is. Currently, the best candidates are WIMPs, or Weakly Interacting Massive Particles (which makes sense, now that we know it’s not MAssive Compact Halo Objects, or MACHOs). But WIMPs are not the only option—there are quite a few other possibilities being investigated. Some of them are other kinds of massive particles, which would constitute cold dark matter, while others aren’t particles at all.

Axions, theoretical particles that were originally predicted to solve a tricky problem involving the strong nuclear force, happen to have just the right properties to be a good candidate for dark matter. Leslie Rosenberg, a physicist at the University of Washington, Seattle, recently wrote an overview of the experiments being done to investigate the possibility of axions being dark matter for the journal PNAS.

Hot or Cold?

Among the various models of dark matter, there are two overarching categories: hot dark matter (HDM) and cold dark matter (CDM). The hot variety gets its name because its particles would be whipping around at incredibly high speeds, up to significant fractions of the speed of light. But hot dark matter seems to be a dead end as a possibility. If particles were traveling that fast, most of them would be able to escape the gravitational pull of their host galaxy. Instead, dark matter forms nice, spherical halos around every galaxy—which means that it’s probably cold.

The physical difference between HDM and CDM is mass. If dark matter is composed of low-mass particles, then it would be easy for the particles to accelerate, and since the particles interact so little with other particles, it would be very hard to slow them down; hence the relativistic speeds of HDM. CDM, then, would have to be a higher-mass particle, because those aren’t as easy to accelerate. WIMPs would fall into this category.
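One crude way to see that mass dependence is to compare a particle’s rest-mass energy with the thermal energy of the early Universe: if kT is much larger than mc², the particle moves relativistically and counts as hot; if much smaller, it is cold. The sketch below uses an illustrative temperature and representative masses of our own choosing.

```python
# Crude hot-vs-cold comparison: a particle is relativistic ("hot") while the
# thermal energy kT exceeds its rest-mass energy m*c^2. Values are illustrative.
kT_eV = 1.0                      # assume a ~1 eV thermal bath for the comparison
candidates_eV = {
    "light neutrino (~0.1 eV)": 0.1,     # classic hot dark matter
    "axion (~1e-5 eV)": 1e-5,            # light, but born cold (see text below)
    "WIMP (~100 GeV)": 100e9,            # heavy cold dark matter
}

for name, mass_eV in candidates_eV.items():
    regime = "relativistic (hot)" if kT_eV > mass_eV else "non-relativistic (cold)"
    print(f"{name:28s} kT/mc^2 = {kT_eV / mass_eV:8.2e} -> {regime}")
# The axion line is the odd one out: by this thermal argument it "should" be hot,
# but as the text explains, it was never thermalized, so it behaves as cold dark matter.
```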

Axions, meanwhile, occupy a unique sort of middle ground between HDM and CDM. They are low-mass particles, low enough that they might have been HDM, except that they would have been slowed down gravitationally in the very early Universe. In effect, they now behave like CDM, moving slowly and thus potentially forming the dark matter halos we observe, even though they have the mass of HDM. Crucially, axions interact weakly enough with light and other matter that they fulfill the ‘dark’ part of dark matter.

One advantage to axions as dark matter is that there’s only a very specific mass range of axions that would be consistent with the dark matter we observe. If the axions were much lighter or much heavier, they would produce observable differences—sufficiently observable that we would have already seen them. For example, the supernova explosion SN 1987A would have lost energy as axions carried it out of the exploding star, which would have resulted in a noticeably different neutrino flash than the one recorded on Earth.

This image shows the remnant of Supernova 1987A seen in light of very different wavelengths: ALMA data (in red) shows newly formed dust in the centre of the remnant; Hubble (in green) and Chandra (in blue) data show the expanding shock wave.

That narrow range of possibilities makes the axion hypothesis very easy to conclusively test. Since it’s such a narrow range, a test that turns up negative could rule out axions as a possibility altogether. (They might still exist, but they would be ruled out as a dark matter candidate). And in science, testability makes a hypothesis very attractive (at least until the test rules out your favorite model).

So how do we find it?

Another advantage of axions is that they can spontaneously decay into things that might be observed. An axion can turn into two photons, and that light could hypothetically be detected. The reverse process, light turning into an axion, is also possible—and it may even play a role in the propagation of light. The light would briefly become an axion, which would then decay back into two photons, with the briefly-existing axion being considered a virtual axion.

Another effect axions could have would be on the Sun—its seismic activity and energy output could be affected by the interactions of axions. And those Solar axions could scatter off a germanium crystal, producing X-rays that could be observed. Additionally, the dark matter axions in the halos around astronomical objects, like other galaxies, could spontaneously decay and produce photons that we might see in telescopes.

Unfortunately, none of these tests are sensitive enough to detect the expected mass range of axions that would be dark matter. To find axions in the right range, there are a few methods that might work—and some of them are being tried in experiments right now.

Astronomical axions

Astronomical objects can provide an opportunity to observe axions. Supernovae should produce them (as noted above), as should other astronomical objects such as the Sun.

In the core of the Sun, light scatters off of the particles it encounters there, bouncing around from particle to particle until its random path allows it to escape the Sun (some 170,000 years after the light was produced). As the light scatters in this process, it can be converted into an axion. That axion might then turn back into two photons while still inside the Sun. Since the axion was produced in the Sun’s hot core, the photons ultimately observed here on Earth would be in the form of X-rays. Alternatively, we could potentially detect the axions themselves, should they escape the Sun.
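That escape time follows from a random-walk argument: a photon taking N steps of mean free path ℓ drifts only about ℓ√N from its starting point, so leaving a region of radius R takes on the order of R²/(ℓc). Here is a minimal sketch; the millimeter-scale mean free path is an assumed, order-of-magnitude value rather than a figure from the article.

```python
# Random-walk estimate of how long light takes to escape the Sun's interior.
# time ~ R^2 / (mean_free_path * c); the mean free path is an assumed value.
R_SUN_M = 6.96e8        # solar radius in meters
C_M_S = 3.0e8           # speed of light in m/s
mfp_m = 1e-3            # assumed ~1 mm average photon mean free path

steps = (R_SUN_M / mfp_m) ** 2            # N ~ (R / l)^2 scattering steps
escape_s = steps * mfp_m / C_M_S          # equivalently R^2 / (l * c)
escape_years = escape_s / 3.15e7
print(f"~{escape_years:.0f} years")       # ~50,000 yr for l = 1 mm; a somewhat
# smaller mean free path gives the ~170,000-year figure quoted in the article.
```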

But it would be difficult to distinguish whether the axions detected this way are dark matter or simply part of a normal physics process. More energetic events, like supernovae, would also fall short of producing unambiguously detectable dark matter axions.

The best experiment using this method right now is the CERN Axion Solar Telescope. Using a dipole magnet from the Large Hadron Collider on a steerable mount, this device could achieve good sensitivity to axions escaping the Sun—but it’s just barely more sensitive to dark matter axions than observations of the supernova SN 1987A were. So, while this experiment could not rule out axions by itself, it might further constrain the properties of axion dark matter.

A more sensitive version is being conceived, however, which might provide better insight.

Shining light through walls!

Another technique with a chance of detecting dark matter axions is the “Shining light through walls” technique, which is just what it sounds like. (A name we didn’t make up, in case you were wondering). As we’ve seen, light can convert into axions and axions can be converted into light. So if researchers wanted to create axions in the lab, they might start with some light.

By sending some polarized light through a dipole magnet, some of the light can be converted into axions.

Magnetic field of a simple dipole bar magnet

The axions would then be able to pass right through a wall, as though it weren’t there, and appear on the other side. If they encounter a second dipole magnet, it will convert the axions back into photons, which are then detected. To be fair, this isn’t a measurement of pre-existing axions, so it doesn’t demonstrate that the dark matter we’re observing is composed of axions—only that axions in the right mass range exist. But that by itself would provide a strong argument that dark matter is axions.

The problem with this technique is that the process happens very infrequently—so infrequently that it would be very hard to tell such a light burst from the surrounding noise. As a result, the technique wouldn’t be sensitive enough to detect axions in the dark matter mass range.

But there are some experiments being constructed that have addressed that problem by adding devices called Fabry-Perot optical resonators to both sides of the wall. These have the effect of increasing the number of photons that convert into axions and vice versa, which should produce a vastly stronger signal—strong enough to stand out from the noise. But despite the improvements, these experiments probably still won’t be sensitive enough to detect axion dark matter, though they might be able to find other forms of axions.
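To get a feel for why the bare signal is so weak and why resonators help, here is a rough order-of-magnitude sketch. It uses the standard coherent, massless-axion approximation, in which the photon-to-axion conversion probability in a magnet of field B and length L is roughly (gBL/2)² in natural units, and the through-the-wall probability is that quantity squared. The coupling, field strength, magnet length, and cavity build-up factors below are illustrative assumptions rather than the parameters of any particular experiment.

```python
# Order-of-magnitude sketch of a light-shining-through-walls (LSW) setup.
# Conversion probability (massless-axion, coherent limit): P ~ (g * B * L / 2)^2,
# evaluated in natural units. All numbers below are illustrative assumptions.
EV2_PER_TESLA = 195.35        # 1 tesla in natural units (eV^2)
INV_EV_PER_METER = 5.068e6    # 1 meter in natural units (eV^-1)

g = 1e-10 * 1e-9              # axion-photon coupling of 1e-10 GeV^-1, in eV^-1
B = 5.0 * EV2_PER_TESLA       # assumed 5 T magnet
L = 10.0 * INV_EV_PER_METER   # assumed 10 m magnetic length

p_single = (g * B * L / 2) ** 2       # photon -> axion (or axion -> photon)
p_through_wall = p_single ** 2        # need both conversions to see a photon again
print(f"single conversion: {p_single:.1e}")        # ~6e-18
print(f"through the wall:  {p_through_wall:.1e}")  # ~4e-35

# Fabry-Perot cavities on both sides effectively multiply the rate by the
# product of their power build-up factors (assumed ~10,000 each here).
boost = 1e4 * 1e4
print(f"with resonators:   {p_through_wall * boost:.1e}")
```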

Catching axions

Another approach is known as the Radio Frequency (RF) technique. This relies on an axion’s ability to decay into light, and it could allow researchers to catch one. Axions that are part of the Milky Way’s dark matter halo should be passing through the Earth at all times, putting them within reach. The only thing that’s needed is the right catcher’s mitt. Like other dark matter candidates, axions pass right through solid matter, so it’s tricky to devise a way to catch one. But unlike other dark matter candidates, axions might interact with a magnetic field. If so, the axion could be stimulated to decay into microwave photons. Those photons could then be detected.

The catcher’s mitt, in this case, is a device called an RF cavity, a metal cylinder which serves as a resonator, keeping the electromagnetic waves it catches inside.
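The cavity has to be tuned so that its resonant frequency matches the energy of the photon an axion of a given mass would deposit, which is just E = hf. The quick conversion below (the micro-electron-volt masses are representative of the range such searches scan, not values from the article) shows why the signal lands in the microwave band.

```python
# Convert a candidate axion mass (its rest energy) to the photon frequency an
# RF cavity would need to resonate at, via E = h * f. Masses are illustrative.
H_EV_S = 4.1357e-15             # Planck's constant in eV*s

for mass_ueV in (1, 3, 10, 40):                 # micro-eV-scale candidates
    freq_hz = (mass_ueV * 1e-6) / H_EV_S        # f = E / h
    print(f"{mass_ueV:3d} micro-eV axion -> ~{freq_hz / 1e9:5.2f} GHz cavity")
# 1 micro-eV corresponds to roughly 0.24 GHz, so this mass range maps onto
# microwave frequencies from a few hundred megahertz up to roughly ten gigahertz.
```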

This approach has been taken by the Axion Dark Matter eXperiment (ADMX).

ADMX (Axion Dark Matter eXperiment)

That RF cavity device is four meters tall, but the actual cavity itself, the part where the axion’s photons will be caught, is only about half a meter tall, and surrounded by a powerful, wrap-around magnet. The main difficulty with this experiment, as with so many experiments in astronomy, is reducing noise. Axions that are part of the Milky Way’s halo should produce some extremely weak photons, which are very difficult to distinguish from the background noise.

To deal with this issue, the ADMX device has recently been refitted, replacing its transistor amplifiers with Superconducting QUantum Interference Devices (SQUIDs). The SQUIDs are more effective at amplifying the signal of the microwave photons the device catches, helping them to stand out from the noise. The ADMX, enhanced with the SQUIDs, is sensitive enough that it should be able to detect axions from the Milky Way’s dark matter halo with a high degree of certainty. Over the next few years, this experiment could conclusively rule out axions as the identity of dark matter—or it could confirm this hypothesis.

Conclusions

The possibilities raised by these experiments—especially by ADMX—are exciting, as they represent clear progress toward solving the puzzle that is dark matter. And that’s no trivial puzzle, as an understanding of dark matter is important to our understandings of the Universe as a whole.

But in science, things are often more complicated than they seem at first, as the author cautions in the paper. “it may be that the relation between axion mass and couplings is loosened. In such a case, there could well be surprises,” he writes. Nonetheless, he doesn’t downplay the potential significance of ADMX: “sensitivity to dark matter QCD axions has at last been achieved with the RF cavity technique, and we may know soon whether the dark matter is made of axions.”

If dark matter does turn out to be axions, it will be good news in one sense at least: physicists will be able to directly detect and experiment with dark matter, a boon for cosmology. Considering that it’s not yet certain that dark matter interacts at all—and it would be essentially impossible to directly observe if it doesn’t—that would be good news indeed.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

From ars technica: “Exploring the monstrous creatures at the edges of the dark matter map”

Ars Technica
ars technica

Sept 30 2014
Matthew Francis

So far, we’ve focused on the simplest dark matter models, consisting of one type of object and minimal interactions among individual dark matter particles. However, that’s not how ordinary matter behaves: the interactions among different particle types enable the existence of atoms, molecules, and us. Maybe the same sort of thing is true for dark matter, which could be subject to new forces acting primarily between particles.

Some theories describe a kind of “dark electromagnetism” where particles carry charges like electricity, but they’re governed by a force that doesn’t influence electrons and the like. Just as normal electromagnetism describes light, these models include “dark photons,” which sound like something from the last season of Star Trek: The Next Generation (after the writers ran out of ideas).

Diagram of a solenoid and its magnetic field lines. The shape of all lines is correct according to the laws of electrodynamics.

Like many warm dark matter (WDM) candidates, dark photons would be difficult—if not impossible—to detect directly, but if they exist, they would carry energy away from interacting dark matter systems. That would be detectable through its effect on things like the structure of neutron stars and other compact astronomical bodies. Observations of these objects would let researchers place some stringent limits on the strength of dark forces. Another consequence is that dark forces would tend to turn spherical galactic halos into flatter, more disk-like structures. Since we don’t see that in real galaxies, there are strong constraints on how much dark forces can affect dark matter motion.

The “Sombrero” galaxy shows that matter interacting with itself flattens into disks. Dark matter doesn’t seem to do that, limiting the strength of possible interactions between particles.
NASA, ESA, and The Hubble Heritage Team (STScI/AURA)

Another side effect of dark forces is that there should be dark antimatter and dark matter-antimatter annihilation. The results of such interactions could include ordinary photons, another intriguing hint in the wake of observations of excess gamma-rays, possibly due to dark matter annihilation in the Milky Way and other galaxies.

What’s cooler than cold dark matter?

While most low-mass particles are “hot,” a hypothetical particle known as the axion is an exception. Axions were first predicted as a solution to a thorny problem in the physics of the strong nuclear force, but certain properties make them appealing as dark matter candidates. Mainly, they are electrically neutral and don’t interact directly with ordinary matter except through gravity.

Axions are also very low-mass (at least in one proposed version), but unlike hot dark matter, they “condensed” in the early Universe into a slow, thick soup. In other words, they behave much like cold dark matter, but without the large mass usually implied by the term.

Axions aren’t part of the Standard Model, but in a sense they’re a minimally invasive addition. Unlike supersymmetry, which involves adding one particle for each type in the Standard Model, axions are just one particle type, albeit one with some unique properties. (To be fair, these aren’t mutually exclusive concepts: it’s possible both SUSY particles and axions are real, and some versions of SUSY even include a hypothetical partner for axions.)

The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

Standard Model of Supersymmetry

Like WDM, axions don’t interact directly with ordinary matter. But according to theory, in a strong magnetic field, axions and photons can oscillate into each other, switching smoothly between particle types. That means axions could be created all the time near black holes, neutron stars, or other places with intense magnetic fields—possibly including superconducting circuits here on Earth. This is how experiments hunt for axions, most notably the Axion Dark Matter eXperiment (ADMX).

So far, no experiment has turned up axions, at least of the type we’d expect to see. Particle physics has a lot of wiggle-room for possibilities, so it’s too soon to say no axions exist, but axion partisans are disappointed. A universe with axions makes more sense than one without, but it wouldn’t be the first time something that really seemed to be a good idea didn’t quite work out.

A physicist’s fear

Long as it is becoming, this list is far from complete. We’ve excluded exotic particles with sufficiently tiny electric charges to be nearly invisible, weird (but unlikely) interactions that change the character of known particles under special circumstances, plus a number of other possibilities. One interesting candidate is jokingly known as a WIMPzilla, which consists of one or more particle types more than a trillion times the mass of a proton. These would have been born at a much earlier era than WIMPs, when the Universe was even hotter. Because they are so much heavier, WIMPzillas can be rarer and interact more readily with normal matter, but—as with other more exotic candidates—they aren’t really considered to be a strong possibility.

If the leading ideas for dark matter don’t hold up to experimental scrutiny, then we’ve definitely sailed off the map into the unknown.
Castle Gallery, College of New Rochelle

And more non-WIMP dark matter candidates seem to crop up every year, though many are implausible enough they won’t garner much attention even from other theorists. However, each guess—even unlikely ones—can help us understand what dark matter can be, and what it can’t.

We’ve also omitted a whole other can of worms known as “modified gravity”—the proposition that the matter we see is all there is, and that the observational phenomena that don’t make sense can be explained by a different theory of gravity. So far, no modified gravity model has reproduced all the observed phenomena attributed to dark matter, though of course that doesn’t mean it can never happen.

To put it another way: most astronomers and cosmologists accept that dark matter exists because it’s the simplest explanation that accounts for all the observational data. If you want a more grumpy description, you could say that dark matter is the worst idea, except for all the other options.

Of course, Nature is sly. Perhaps more than one of these dark matter candidates is out there. A world with both axions and WIMPs—motivated as they are by different problems arising from the Standard Model—would be confounding but not beyond reason. Given the unexpected zoo of normal particles discovered in the 20th century, maybe we’ll be pleasantly surprised; after all, wouldn’t it be nice if several of our hypotheses were simultaneously correct for once? (I’m a both/and kind of guy.) More than one type might also help explain why we have yet to see any dark matter in our detectors. If a substantial fraction of dark matter is made of axions, then the density of WIMPs or WDM must be correspondingly lower, and vice versa.

But a bigger worry lurks in the minds of many researchers. Maybe dark matter doesn’t interact with ordinary matter at all, and it doesn’t annihilate in a way we can detect easily. Then the “dark sector” is removed from anything we can probe experimentally, and that’s an upsetting thought. Researchers would have a hard time explaining how such particles came to be after the Big Bang, but worse: without a way to study their properties in the lab, we would be stuck with the kind of phenomenology we have now. Dark matter would be perpetually assigned to placeholder status.

In old maps made by European cartographers, distant lands were sometimes shown populated by monstrous beings. Today of course, everyone knows that those lands are inhabited by other human beings and creatures that, while sometimes strange, aren’t the monsters of our imagination. Our hope is that the monstrous beings of our theoretical space imaginings will some day seem ordinary, too, and “dark matter” will be part of physics as we know it.

See the full article here.



From ARS Technica: “Dark matter makes up 80% of the Universe—but where is it all?”

Ars Technica
ARS Technica

July 27 2014
Matthew Francis

It’s in the room with you now. It’s more subtle than the surveillance state, more transparent than air, more pervasive than light. We may not be aware of the dark matter around us (at least without the ingestion of strong hallucinogens), but it’s there nevertheless.

Composite image of X-ray (pink) and weak gravitational lensing (blue) of the famous Bullet Cluster of galaxies.
X-ray: NASA/CXC/CfA/M.Markevitch et al.; Lensing Map: NASA/STScI, ESO WFI, Magellan/U.Arizona/D.Clowe et al.; Optical: NASA/STScI, Magellan/U.Arizona/D.Clowe et al.

Although we can’t see dark matter, we know a bit about how much there is and where it’s located. Measurement of the cosmic microwave background shows that 80 percent of the total mass of the Universe is made of dark matter, but this can’t tell us exactly where that matter is distributed. From theoretical considerations, we expect some regions—the cosmic voids—to have little or none of the stuff, while the central regions of galaxies have high density. As with so many things involving dark matter, though, it’s hard to pin down the details.

Cosmic microwave background as mapped by ESA/Planck

Unlike ordinary matter, we can’t see where dark matter is by using the light it emits or absorbs. Astronomers can only map dark matter’s distribution using its gravitational effects. That’s especially complicated in the denser parts of galaxies, where the chaotic stew of gas, stars, and other forms of ordinary matter can mask or mimic the presence of dark matter. Even in the galactic suburbs or intergalactic space, dark matter’s transparency to all forms of light makes it hard to locate with precision.

Despite that difficulty, astronomers are making significant progress. While individual galaxies are messy, analyzing surveys of huge numbers of them can provide a gravitational map of the cosmos. Astronomers also hope to overcome the messiness of galaxies and estimate how much dark matter must be in the central regions using careful observation of the motion of stars and gas.

There’s also been a tantalizing hint of dark matter particles themselves in the form of a signal that may come from their annihilation near the center of the Milky Way. If this is borne out by other observations, it could constrain dark matter’s properties while avoiding messy gravitational considerations. Adding it all up, it’s a promising time for mapping the location of dark matter, even as researchers still build particle detectors to identify what it is.

A (very) brief history of dark matter

In the 1930s, Fritz Zwicky measured the motion of galaxies within the Coma galaxy cluster. Based on simple gravitational calculations, he found that they shouldn’t move as they did unless the cluster contained a lot more mass than he could see. Zwicky’s estimate of how much matter there was turned out to be too large by a huge factor. Still, he was correct in the broader picture: more than 80 percent of a galaxy cluster’s mass isn’t in the form of atoms.

Zwicky’s work didn’t get a lot of attention at the time, but Vera Rubin’s later observations of spiral galaxies were another matter. She found that the combined stars and gas had too little mass to explain the rotation rates she measured. Between Rubin’s work and subsequent measurements, astronomers established that every spiral galaxy is engulfed by a roughly spherical halo (as it is called) of matter—matter that’s transparent to every form of light.

The Bullet Cluster

That leads us to the “Bullet Cluster,” one of the most important systems in astronomy.

X-ray photo of the Bullet Cluster (1E 0657-56) taken by the Chandra X-ray Observatory. Exposure time was 0.5 million seconds (~140 hours), and the scale is shown in megaparsecs. Redshift (z) = 0.3, meaning its light has wavelengths stretched by a factor of 1.3; based on today’s theories, that puts the cluster about 4 billion light-years away. In this photograph, a rapidly moving galaxy cluster with a shock wave trailing behind it appears to have hit another cluster at high speed. The gases collide, and the gravitational fields of the stars and galaxies interact. Based on black-body temperature readings, the collision heated the gas to 160 million degrees, emitting X-rays so intensely that the system claims the title of hottest known galaxy cluster. Studies of the Bullet Cluster, announced in August 2006, provide the best evidence to date for the existence of dark matter.

First described in 2006, it’s actually a pair of galaxy clusters observed in the act of colliding. Researchers mapped it in visible and X-ray light, finding that it consists of two clumps of galaxies. But it’s the stuff they couldn’t image directly that ensured the Bullet Cluster is rightfully cited as one of the best pieces of evidence for dark matter’s existence (the title of the paper announcing the discovery even calls it “direct empirical proof”).

Galaxy clusters are the biggest individual objects in the Universe. They can contain thousands of galaxies bound to each other by mutual gravity. However, the stuff within those galaxies—stars, gas, dust—is outweighed by an extremely hot, gaseous plasma between them, which shines brightly in X-rays. In the Bullet Cluster, the collision between the two clusters created a shock wave in the plasma (the shape of this shock wave gives the structure its name).

More dramatically, though, the astronomers who described the cluster used gravitational lensing—the distortion of light from more distant galaxies by the mass within the cluster—to map the distribution of most of the material in the Bullet Cluster. That method is known as “weak gravitational lensing.” Unlike the sexier strong lensing, weak lensing doesn’t create multiple images of the more distant galaxies. Instead, it slightly warps the light from background objects in a small but measurable way, depending on the amount and concentration of mass in the “lens”—in this case, the cluster.
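In practice, the measurement comes down to averaging the shapes of many faint background galaxies around the lens. A common estimator is the mean tangential ellipticity relative to the cluster center; the sketch below shows the bookkeeping on a purely synthetic catalog (the positions and ellipticities are random placeholders, so the expected signal here is zero).

```python
import numpy as np

# Schematic weak-lensing bookkeeping: average the tangential component of
# background-galaxy ellipticities around a lens center. Inputs are placeholders.
rng = np.random.default_rng(0)
n = 100_000
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)     # positions around the lens
e1, e2 = rng.normal(0, 0.3, (2, n))                     # measured ellipticity components
                                                        # (intrinsic "shape noise")

phi = np.arctan2(y, x)                                  # position angle about the lens
e_tan = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))  # tangential ellipticity

# Averaging over many sources beats down the shape noise; any residual mean
# tangential ellipticity estimates the lensing shear from the foreground mass.
print(f"mean tangential ellipticity: {e_tan.mean():.4f} +/- {e_tan.std() / np.sqrt(n):.4f}")
# With purely random shapes (as here) the mean is consistent with zero; a real
# lens imprints a small positive signal that grows toward the cluster center.
```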

Astronomers found that the shocked plasma, which represents most of the ordinary matter in the Bullet Cluster, was almost entirely in the region between the two clumps of galaxies. The bulk of the mass measured by lensing, however, was concentrated around the galaxies themselves. This enabled a clear, independent measurement of the amount of dark matter, separate from the mass of the gas.

The results also confirmed some predictions about the behavior of dark matter. Thanks to the shock of the collision, the plasma stayed in the region between the two clusters. Since the dark matter doesn’t interact much with either itself or normal matter, it passed right through the collision without any noticeable change.

It’s a phenomenal discovery, but it’s only one galaxy cluster, and that ain’t enough. Science is inherently greedy for evidence (as it should be). A single example of anything tells us very little in a Universe full of possibilities. We want to know if dark matter always clusters around galaxies or if it can be more widely dispersed. We want to know where all the dark matter is, in all galaxy clusters and beyond, throughout the entire cosmos.

A dark matter census

Weak gravitational lensing provides a method to search for dark matter in other galaxy clusters, too, as well as even larger and smaller structures. Princeton University astronomers Neta Bahcall and Andrea Kulier took a weak lensing census of 132,473 galaxy groups and clusters, all within a well-defined patch of the sky but at a range of distances from the Milky Way. (“Groups” are smaller associations of galaxies; for example, the Milky Way is the second largest galaxy in the Local Group, after the Andromeda galaxy.) While individual galaxy clusters usually can’t tell us much, a large sample allowed the astronomers to treat the problem statistically—weak lensing effects that were too small to spot for a single cluster became obvious when looking at hundreds of thousands.

For example, a typical quantity used in studying galaxies is the mass-to-light ratio. To measure this statistically, Bahcall and Kulier looked at the cumulative amount of light (mostly emitted by stars) and weak lensing (mostly from dark matter), starting from the centers of each cluster and working outward. They found something intriguing: the amount of mass and light increased in tandem and then leveled off together. That means neither the dark matter nor the light extends farther than the other: the stars inside these groups and clusters were a very good tracer for the dark matter. That’s surprising because stars are typically less than two percent of the mass in a cluster, with the balance of ordinary matter made up by gas and dust.

As Kulier told Ars, “The total amount of dark matter in galaxy groups and clusters might be accounted for entirely by the amount of dark matter in the halos of their constituent galaxies.” That’s an average result, though; the details could look quite different. “This does not necessarily imply that the halos are still ‘attached’ to the galaxies,” Kulier said. In other words, when galaxies came together to form clusters, the stronger forces acting on galaxies and their stars could in principle separate them from their dark matter but leave everything inside the cluster thanks to mutual gravity.

Kulier pointed out that these results provide strong support for the “hierarchical” model of structure formation: “smaller structures collapse earlier than larger ones, so that galaxies form first and then merge together to form larger structures like clusters.” The Bullet Cluster is an archetypical example of this, but things could be otherwise. For instance, dark matter could have ended up in the center of clusters, separate from the galaxies and their individual halos.

But that’s not what astronomers see. In their analysis, Bahcall and Kulier also calculated that the total ratio of dark matter to ordinary matter in galaxy clusters matches that of the Universe as a whole. That’s another strong piece of evidence in favor of the standard model in cosmology: maybe most of the dark matter everywhere is in galactic halos.

Every galaxy wears a halo

Computer reconstruction of the location of mass in terms of how it affects the image of distant galaxies through weak lensing.
S. Colombi (IAP), CFHT Team

So what about the halos themselves and the galaxies that wear them? Historically, dark matter was first recognized for its role in spiral galaxies. However, it’s one thing to say that dark matter is present. It’s another to map out where it is—especially in the dense, star-choked inner parts of galaxies.

Spiral galaxies consist of three basic parts: the disk, the bulge, and the halo. The disk is a thin region containing the spiral arms and most of the bright stars. The bulge is the central, densest part, with large populations of older stars and (at its very heart) a supermassive black hole. The halo is a more or less spherical region containing a smattering of stars; it envelops the other regions, extending several times beyond the limit of the disk. To provide an example, the Milky Way’s disk is about 100,000 light-years in diameter, but its halo is between 300,000 and 1 million light-years across.

Because of the relative sizes of the different regions, most of a galaxy’s dark matter is in the halo. Relatively little is in the disk; Jo Bovy and Scott Tremaine showed that the disk and halo contain less than the equivalent mass of 100 Earths in a cube one light-year across. That may sound like a lot, but Earth isn’t that large, and a light-year defines a big volume. That amount isn’t enough to affect the Sun’s orbit around the galactic center strongly. (It’s still enough for a few particles to drift through detectors like LUX, though.)
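That bound is easier to compare with other dark matter numbers once it is converted into the units particle physicists usually quote. Here is a quick sketch, taking the 100-Earth-masses-per-cubic-light-year figure at face value.

```python
# Convert "100 Earth masses per cubic light-year" into GeV per cubic centimeter,
# the unit usually quoted for the local dark matter density.
M_EARTH_KG = 5.972e24
LIGHT_YEAR_CM = 9.461e17
KG_PER_GEV = 1.783e-27          # mass of 1 GeV/c^2 in kilograms

mass_gev = 100 * M_EARTH_KG / KG_PER_GEV
volume_cm3 = LIGHT_YEAR_CM ** 3
print(f"~{mass_gev / volume_cm3:.2f} GeV/cm^3")
# ~0.4 GeV/cm^3, i.e. roughly one proton mass in every two or three cubic
# centimeters, in line with commonly quoted estimates of the dark matter
# density near the Sun -- which is why a detector like LUX still sees some flux.
```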

By contrast, the amount of dark matter increases toward the galaxy’s center, so the density should be much higher in the bulge than anywhere else. For that reason, a number of astronomers look to the central part of the Milky Way for indications of dark matter annihilation, which (under some models) would produce gamma rays. This would occur if dark matter particles are their own antimatter partners, so that their (very rare) collisions result in mutual destruction and some high-energy photons. This winter, a group of researchers announced a possible detection of excess gamma rays originating in the Milky Way’s core, based on data from the orbiting [NASA] Fermi gamma ray observatory.

NASA Fermi Gamma-ray Space Telescope

However, the bulge also has the highest density of stars, making it a tangled mess. Many things in that region could produce an excess of gamma rays. As University of Melbourne cosmologist Katherine Mack told me, “The Galactic Center is a really messy place, and the analysis of the signal is complicated. It’ll take a lot to show that the signal has to be dark matter annihilation rather than some un-accounted-for astrophysical source.” We can’t rule out the possibility of dark matter annihilation, but it’s definitely too soon to break out the champagne.

The difference between the ease of calculating an average density and detecting the presence of dark matter is illustrative of the general problem with mapping dark matter inside galaxies. It’s relatively simple to put limits on how much there is in the disk, since that’s a small fraction of the total volume of a galaxy. The tougher questions include how steeply the density falls off from the galactic center, how far the halo actually extends, and how lumpy the halo is.

For instance, our galaxy’s halo is big enough to encompass its satellite galaxies, including the Magellanic Clouds and a host of smaller objects. But these galaxies also have their own halos in accordance with the hierarchical model. Because they’re denser dark matter lumps inside the Milky Way’s larger halo, the satellites’ halos create a substructure.

Our dark matter models predict how much substructure should be present. However, dwarf galaxies are very faint, so astronomers have difficulty determining if there are enough of them to account for all the predicted substructure. This is known as the “missing satellite problem,” but many astronomers suspect the problem will evaporate as they get better at finding these faint objects.

A hopeful conclusion

So where is the dark matter? Based on both theory and observation, it looks like most of it is in galactic halos. Surveys using weak gravitational lensing are ongoing, with many more planned for the future. These surveys will show where most of the mass in the Universe is located in unprecedented detail.

How dark matter is distributed within those halos is still a bit mysterious, but there are several hopeful approaches. By looking for “dark galaxies”—small satellites with few stars but high dark matter concentrations—astronomers can determine the substructure within larger halos. The [ESA] Gaia mission is working to produce a three-dimensional map of a billion stars and their motions, which will provide information about the structure of the Milky Way and its surrounding satellites. That in turn will allow researchers to work backward, determining the gravitational field dictating the motion of these stars. With that data in hand, we should have a good map of the dark matter in many regions that are currently difficult to study.

Dark matter may be subtle and invisible, but we’re much closer than ever to knowing exactly where it hides.


ScienceSprings is powered by MAINGEAR computers