From “ars technica”: “Black holes can’t trash info about what they swallow—and that’s a problem”

Paul Sutter

Solving the information paradox could unlock quantum gravity and unification of forces.

Aaron Horowitz/Getty Images.

Three numbers.

Just three numbers—that’s all it takes to completely, unequivocally, 100 percent describe a black hole in general relativity. If I tell you the mass, electric charge, and spin (i.e., angular momentum) of a black hole, we’re done. That’s all we’ll ever know about it and all we’ll ever need to describe its features.

Those three numbers allow us to calculate everything about how a black hole will interact with its environment, how objects around it will respond to it, and how the black hole will evolve in the future.

For all their ferocious gravitational abilities and their unholy exotic natures, black holes are surprisingly simple. If I give you two black holes with the exact same mass, charge, and spin, you wouldn’t be able to tell them apart. If I swapped their places without you looking, you wouldn’t know that I did it.

This also means that when you see a fully formed black hole, you have no idea what made it. Any combination of mass squeezed into a sufficiently small volume could have done the job. It could have been the ultra-dense core of a dying star. It could have been an extremely dense litter of adorable kittens squashed into oblivion.

As long as the mass, charge, and spin are the same, the history is irrelevant. No information about the original material that created the black hole survives. Or does it?

Founding charters

“Information” is a bit of a loaded term; it can take on various definitions depending on who you ask and what mood they’re in. In physics, the concept of information is tightly linked to our understanding of how physical systems evolve and how we construct our theories of physics.

We like to think that physics is a relatively useful paradigm for understanding the Universe we live in. One of the ways that physics is useful is its power of prediction. If I give you a list of all the information about a system, I should be able to apply my laws and theories of physics to tell you how that system will evolve. The reverse is also true. If I tell you the state of a system now, you can run all the math backward to figure out how the system got to its present state.

These two concepts are known as determinism (I can predict the future) and reversibility (I can read the past) and are pretty much the foundational core of physics. If our theories of physics didn’t have these properties, we wouldn’t be able to get much work done.

These two concepts also apply to quantum mechanics. Yes, quantum mechanics puts strict limits on what we can measure about the Universe, but that doesn’t mean all bets are off. Instead, we can simply replace a sharply defined classical state with a fuzzier quantum state and move on with our lives; the quantum state evolves according to the Schrödinger equation, which upholds both determinism and reversibility, so we’re all good.
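The reversibility of quantum evolution can be made concrete with a toy computation. The sketch below (a hypothetical two-state system; the particular matrix and angles are illustrative, not from the article) evolves a state with a unitary matrix and then undoes the evolution with the conjugate transpose, recovering the original state exactly—no information is lost, only rearranged.

```python
import cmath
import math

def mat_vec(m, v):
    """Multiply a 2x2 complex matrix by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def dagger(m):
    """Conjugate transpose of a 2x2 complex matrix."""
    return [[m[0][0].conjugate(), m[1][0].conjugate()],
            [m[0][1].conjugate(), m[1][1].conjugate()]]

# A unitary "time-evolution" matrix for a two-state system:
# a rotation combined with a phase (any U with U†U = I works).
theta = 0.7
phase = cmath.exp(1j * 0.3)
U = [[phase * math.cos(theta), -phase * math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]

# Start in a definite state and evolve it forward in time...
psi0 = [1 + 0j, 0 + 0j]
psi1 = mat_vec(U, psi0)

# ...then run the evolution backward with the conjugate transpose.
# The original state comes back up to floating-point rounding.
psi_back = mat_vec(dagger(U), psi1)
print(abs(psi_back[0] - psi0[0]) < 1e-12)
print(abs(psi_back[1] - psi0[1]) < 1e-12)
```

Because the Schrödinger equation always generates unitary evolution, this forward-then-backward trick works in principle for any closed quantum system, however large.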

This one-two punch of determinism and reversibility means that, in terms of physics, information must be preserved during any process. It can’t be either created or destroyed—if we were to add or remove information willy-nilly, we wouldn’t be able to predict the future or read the past. Any loss or gain means there would either be missing information or extra information, so all of physics would crumble to dust.

There are many processes that appear to destroy information, but that’s only because we’re not keeping careful enough track. Take, for example, the burning of a book. If I gave you a pile of ashes, this would appear to be irreversible: There’s no way you could put the book back together. But if you have a sufficiently powerful microscope at your disposal (and a lot of patience) and got to watch me in the act of burning the book, you could—in principle at least, which is good enough—watch and track the motion of every single molecule in the process. You could then reverse all those motions and all those interactions to reconstruct the book. Information is not lost when you burn a book; it’s merely scrambled.

In the traditional, classical view of black holes, all this business about information is not a problem at all. The information that went into building the black hole is simply locked away behind the event horizon—the one-way boundary at the black hole’s surface that makes it so unique. Once there, the information will never be seen in this Universe again. Whether the black hole was formed from dying stars or squashed kittens, it doesn’t practically matter. The information may not be destroyed, but it’s permanently hidden from our prying eyes.

Hawking’s surprise

At least, that’s what we thought until the mid-1970s, when famed theoretical physicist Stephen Hawking discovered that black holes aren’t entirely… well, black.

Hawking was exploring the nature of quantum fields near the event horizons of black holes when he discovered an unusual property. The interaction of the event horizon with the quantum fields triggered the emission of radiation; light and particles could escape from the otherwise inescapable event horizons, causing the black holes to lose mass and eventually evaporate.

Curiously, Hawking found that the radiation emitted by a black hole is perfectly thermal, meaning it carries no information whatsoever beyond the mass, charge, and spin of the black hole. Thus was born the black hole information paradox. Unlike with a burned book, if a book were to fall into a black hole, there would be no way to reconstruct its words from the radiation that came out. After the black hole radiated away all its mass and disappeared in a poof of particles, all the information about all the objects (books, stars, kittens, etc.) that fell in to create it would disappear along with it.
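The standard formula for the temperature of the Hawking radiation, T = ħc³/(8πGMk_B), shows just how faint this effect is for astrophysical black holes. A rough back-of-the-envelope sketch (physical constants truncated to a few digits; the numbers are illustrative):

```python
import math

# Physical constants (SI units, truncated)
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3/(kg*s^2)
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

def hawking_temperature(mass_kg):
    """Temperature of the thermal Hawking spectrum:
    T = hbar * c^3 / (8 * pi * G * M * k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# A solar-mass black hole is astonishingly cold -- tens of nanokelvin,
# far below the ~2.7 K cosmic microwave background, so today it absorbs
# more energy than it radiates. Smaller black holes are hotter.
T = hawking_temperature(M_sun)
print(f"{T:.2e} K")
```

Note the inverse dependence on mass: as the black hole evaporates and shrinks, it gets hotter and radiates faster, which is why the process ends in a "poof" rather than a slow fade.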

But as we went over earlier, information can’t just disappear, so this was a bit of a puzzle.

The problem languished for decades, with physicists arguing back and forth (and even changing their minds!) about how to fix it. Hawking’s calculations could be wrong, but that would mean we were missing something important about the nature of quantum field theory—which was well-tested. Or our understanding of gravity could be wrong, although that was well-tested, too. Or we needed to give up our cherished notions of the conservation of information… which was also well-tested.

It won’t be spoiling the rest of this article to tell you that we still do not have a solution to the paradox. But in studying this troubling problem, physicists have come up with several interesting clues that are helping us move in what’s hopefully the right direction.

Information wants to be free

The first major clue came in the mid-1990s, when theoretical physicists Andrew Strominger and Cumrun Vafa used string theory to count the microscopic states of a black hole, reproducing the entropy formula that Jacob Bekenstein and Stephen Hawking had derived decades earlier. In a nutshell, this entropy is a count of all the missing information that gets locked behind an event horizon. The amount of entropy of a black hole is proportional to the radius squared, and thus to the surface area of the event horizon (in contrast to the radius cubed, which would make it proportional to the volume).

For example, if you take a standard black hole and add one single bit of information to it (as encoded by, say, a single photon with a wavelength equal to the radius of the black hole), its surface area will increase by about one square Planck length.
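The area scaling can be checked numerically with the Bekenstein–Hawking entropy formula, S = k_B·A/(4·l_p²), where A is the horizon area and l_p is the Planck length. A sketch for a non-rotating, uncharged black hole (constants truncated; numbers illustrative):

```python
import math

# Physical constants (SI units, truncated)
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3/(kg*s^2)
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

l_p2 = hbar * G / c**3   # Planck length squared, m^2

def horizon_area(mass_kg):
    """Event-horizon area of a non-rotating, uncharged black hole."""
    r_s = 2 * G * mass_kg / c**2   # Schwarzschild radius
    return 4 * math.pi * r_s**2

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy: S = k_B * A / (4 * l_p^2)."""
    return k_B * horizon_area(mass_kg) / (4 * l_p2)

# In units of k_B, a solar-mass black hole carries entropy of order
# 1e77 -- vastly more than the star that collapsed to form it.
print(f"{bh_entropy(M_sun) / k_B:.2e}")

# The area law in action: doubling the mass doubles the radius,
# so it quadruples the area and hence the entropy.
ratio = bh_entropy(2 * M_sun) / bh_entropy(M_sun)
print(ratio)
```

The fact that the count scales with area rather than volume is exactly what makes the horizon, not the singularity, look like the natural home of the missing information.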

Building on Hawking’s insight, this result suggested that the most important property of a black hole—the place where we should focus our attention and efforts—was not the infinitely dense singularity in the center but the surface of the event horizon, which separates the insides of a black hole from the Universe outside.

The relationship between a black hole’s surface and its entropy also dovetailed nicely with another concept evolving out of the string theory community at the time, something known as the “holographic principle.”

String theory is an attempt to develop a theory of everything, a complete description of all the forces of nature under a single unifying framework. That attempt hasn’t seen a lot of success because nobody has been able to use string theory to develop a quantum theory of gravity—all the math just gets too complex to solve. So several physicists in the ’90s wondered if there was a way to simplify the problem. Instead of trying to work through the nasty problem of quantum gravity in our normal four-dimensional Universe, maybe we could encode all the information contained in the Universe onto an imaginary three-dimensional boundary and get an easier version of the math.

Maldacena was able to provide a realization of that idea via what’s called the AdS/CFT correspondence. It works like this. You start by trying to solve a problem involving quantum gravity in a particular kind of Universe called anti-de Sitter space (AdS, which has no matter or radiation inside it but does have a negative cosmological constant). Mathematically, you can project all the information in that Universe onto its boundary. Once you make that transformation, your impossible-to-solve quantum gravity problem turns into a merely very-difficult-to-solve problem in conformal field theory (that’s the CFT part), which is a kind of quantum field theory that doesn’t include gravity at all. You can then solve your problem and translate the solution back into the full-dimensional Universe and move on with your life.

This correspondence between the information within a volume and the information present on that volume’s surface is the holographic principle (so named because holograms store 3D information on a 2D surface). The correspondence has yet to be proven mathematically, but it has turned out to be useful for solving various kinds of specialized problems in the realm of high-energy physics.

What does this have to do with black holes? The fact that a black hole’s information content is related directly to its surface and not its volume seems to be a major clue that the resolution to the paradox may come from using the AdS/CFT correspondence, which recasts problems involving extended objects with gravity into surface-layer problems without gravity. Leaving aside the slightly uncomfortable fact that our Universe is definitely not an anti-de Sitter space, perhaps the black holes are trying to tell us something fundamental not just about the nature of gravity but about reality itself.

It was based on this correspondence that Hawking declared a winner in the love-it-or-leave-it debate regarding the preservation of information. Based on the AdS/CFT holographic picture of the Universe, information must be preserved (somehow) on the surface of a black hole and end up leaving the black hole (somehow) via Hawking radiation. If you threw a book into a black hole and kept careful track of the particles emitted over the next few trillion years, you should be able to put the book back together again.


The “promised land” of quantum gravity

The “how” part of this story has been keeping some physicists up late at night for the past 20 years. One particular line of thinking has been to closely examine the nature of spacetime near the event horizon. In Hawking’s original approach, he assumed that a large enough black hole would curve space in the region of the horizon, but only mildly so. But we know from our (limited and incomplete) forays into quantum gravity that we may have to account for a more dramatic curvature. To fully answer the question of “what’s gravity up to around an event horizon?” we may also have to fold in the same kind of quantum fuzziness that underlies theories of subatomic particles.

When we do that, however, we typically get uncontrollable infinities popping up everywhere in the math because such theories need to account for every possible exotic shape that spacetime can take. This is generally why we don’t have a theory of quantum gravity. That said, some brave theorists have dared to venture into those uncharted waters and have discovered some clever tricks (really hardcore stuff, too, like imaginary wormholes threading together in a complex mathematical space) to untangle some of the equations, showing that it may be possible to create scenarios where information can leak into the Hawking process.

Still other theorists have rejected this string-theory-driven approach to black holes and focus instead on the nature of spacetime at the singularity. Their approaches consider whether space and time might come in discrete little chunks, the same way that energy levels and angular momentum do. In this view, the singularity is not an infinitely dense point but merely a really tiny one. And when the black hole evaporates, it doesn’t disappear completely—instead, it leaves behind a nugget of information-rich material. But those approaches run into major hurdles of their own, like having to figure out how to make the transition from a black hole with an inescapable horizon to a lump of matter existing bare naked in the Universe.

Ultimately, physicists remain intrigued by the information paradox because it potentially exposes a feature of quantum gravity and makes it available to our examination. Quantum gravity is usually the domain of the ultra-exotic: the initial moments of the Big Bang or unachievable particle collider energies. But black holes are real things in the real Universe; with enough determination, we could reach out and dip a toe into an event horizon.

If we can solve the information paradox, we just might be able to unlock quantum gravity, the unification of the forces, and more.

See the full article here.


Please help promote STEM in your local schools.

STEM Education Coalition

Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

Ars Technica innovates by listening to its core readership. Readers have come to demand devotion to accuracy and integrity, paired with a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

And thanks to its readership, Ars Technica also accomplished a number of industry-leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long-form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).