Friday, June 07, 2019

Penington vs information loss

MmanuF has pointed out that I forgot to discuss an impressive preprint
Entanglement Wedge Reconstruction and the Information Paradox
by Geoffrey Penington. He's still a Stanford graduate student but it's totally plausible that he understands the subtle ways in which AdS/CFT, at least, resolves the information loss paradox better than any other physicist in the world.



After a long search for the most illuminating video to embed, I chose Angelique de Peyrac because MmanuF is French while Peyrac was also Geoffrey. Too bad that things like Angelique are no longer sufficiently politically correct in California.

The 73-page-long paper has everything to convince me that the man knows everything that is really important for the resolution of the information loss paradox – and the related firewall paradox: the paper agrees with all the qualitative assumptions that I have convinced myself to be irrevocable; and it contains an additional 164 equations that seem totally relevant, and a dozen of somewhat randomly chosen ones have passed my nontrivial examination. ;-)



OK, first, Penington agrees that the information isn't lost and there's no firewall. He agrees with a key assumption needed to avoid these two paradoxes – information loss and firewalls – which is the state dependence first clearly articulated by Papadodimas and Raju. He understands the error correction scheme and may actually arrange the system so that the description is as state-independent as possible, given an entanglement wedge. And he knows that exponentially small, nonperturbatively tiny corrections to the reconstruction of the geometry need to be there and are also sufficient to carry the information away.
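To quantify what "exponentially small" means here (my paraphrase of the standard lore, not a formula copied from the paper): the corrections to the semiclassical reconstruction are of the order

\[ \epsilon \sim e^{-S_{BH}} = e^{-A/4G_N}, \]

which is nonperturbative in \(G_N\) and therefore invisible at any finite order of the gravitational loop expansion. But because there are \(e^{S_{BH}}\) microstates, such tiny corrections to exponentially many matrix elements are enough to restore the purity of the outgoing radiation.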



Penington works with an entanglement wedge. Pick a region on the boundary, and the corresponding wedge in the bulk that is glued to it. Imagine that you allow all perturbations in the wedge but you also want to keep the smooth geometry. These conditions pick a subspace of the Hilbert space – or the space of the corresponding density matrices. And for every state that is close enough in a suitable counting, there is an error correction scheme that gives you the right state in the subspace.
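To show the flavor of this error correction language, let me recall a standard toy model which isn't Penington's own but the usual pedagogical example, the three-qutrit code used by Almheiri, Dong, and Harlow to model entanglement wedge reconstruction. One logical "bulk" qutrit is encoded into three "boundary" qutrits as

\[ |\tilde 0\rangle = \frac{1}{\sqrt 3}\bigl(|000\rangle+|111\rangle+|222\rangle\bigr),\quad |\tilde 1\rangle = \frac{1}{\sqrt 3}\bigl(|012\rangle+|120\rangle+|201\rangle\bigr),\quad |\tilde 2\rangle = \frac{1}{\sqrt 3}\bigl(|021\rangle+|102\rangle+|210\rangle\bigr). \]

Any single boundary qutrit may be erased and the logical qutrit can still be recovered from the remaining two, which is the toy analogue of reconstructing a bulk operator from any boundary region whose entanglement wedge contains it.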

He considers black holes in AdS/CFT with absorbing boundary conditions – meaning that the infalling modes are assumed to be in the ground state, so the outgoing Hawking radiation is effectively absorbed at the boundary rather than reflected back and the black hole can actually evaporate – and shows, at a rather microscopic level, that the Ryu-Takayanagi (more precisely, quantum extremal) surface ends up slightly inside the black hole, which means that some information about the interior is carried away by the early Hawking radiation. The Page time and Page curve are basically derived microscopically and the errors in the error correction are quantified. All paradoxes are resolved and all the descriptions look rather local – assuming simply that the subspace of the CFT Hilbert space gives rise to a nearly local theory in the wedge.
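To remind you what the Page curve is (again my summary of Page's old 1993 result, not an equation lifted from Penington's paper): for a typical pure state on a bipartite Hilbert space with dimensions \(1 \ll d_R \le d_{BH}\), the average entanglement entropy of the radiation factor is

\[ \langle S_R\rangle \approx \ln d_R - \frac{d_R}{2\,d_{BH}}, \]

so the entropy of the radiation first grows roughly thermally and then, once the radiation's Hilbert space becomes the larger factor, it must turn around and track the shrinking Bekenstein-Hawking entropy of the remaining black hole,

\[ S_{rad}(t) \approx \min\bigl(S_{thermal}(t),\, S_{BH}(t)\bigr). \]

Penington's quantum extremal surface computation reproduces this shape, with the turnover at the Page time.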

In the acknowledgements, he makes it clear that he talks to all the real experts – the list of people he thanks is understandably California-centric. Some people on the East Coast are admitted to exist – especially Maldacena and Witten, a coach of Penington's who pointed out that a previous paper by Penington may be relevant for the information loss problem. Europe and Asia (the continents inhabited by the likes of Papadodimas and Raju, respectively) don't really exist, as far as I can see, but this particular bias could be due to the Trump trade wars, of course.

A typical crackpot, let's call him Saddam al Newton Al Lah, sometimes writes his acknowledgements as follows:
I am grateful to Glashow, Salam, Weinberg, and especially Einstein who praised me when I was 3 in 1954 and correctly predicted that I would become the world's greatest physicist.
And so on. A funny aspect of acknowledgements in papers such as Penington's is that they are somewhat analogous and you need some expertise to distinguish Al Lah's acknowledgements from Penington's. But with that expertise, you may become sure that Penington is clearly one of the biggest shots and that the leaders of the field largely recognize him as such.

There are lots of technical things I am only gradually starting to understand – and the more time I spend with the paper, the more sense things seem to make, which is a good sign. For example, the infalling observer's observations are reconstructed in a state-independent way within a given scheme – which is a good feature because the perceptions of the people inside black holes aren't "arbitrary". I feel that the choice of the error correction scheme actually replaces the choice of the state while making the identification of the operators less arbitrary and less subjective. Papadodimas and Raju have had discussions about this – at least I was pushing them to talk about such things – and this subtlety about the arbitrariness of the choice of operators was never quite settled.

It's probably better to look at the operators as being not state-dependent but dependent on the choice of the error correcting subspace and representation. Such an approach is probably sufficient to resolve the firewall and information loss paradoxes, as Penington seems to have proven; but it is somewhat more constraining than the rather arbitrary identification of operators with a good enough algebra, as envisioned by Papadodimas and Raju.
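For comparison, and as far as I remember it (my paraphrase, not something from Penington's paper), the Papadodimas-Raju mirror operators \(\tilde{\mathcal{O}}_\omega\) are defined only through their action on the "little Hilbert space" built on a given state \(|\Psi\rangle\),

\[ \tilde{\mathcal{O}}_\omega \, A_\alpha |\Psi\rangle = A_\alpha \, e^{-\beta\omega/2}\, \mathcal{O}_\omega^\dagger |\Psi\rangle, \]

where \(A_\alpha\) runs over the small algebra of simple operators, which is why the construction is state-dependent. The error correction language instead fixes the analogous interior operators once and for all on a whole code subspace.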

While the error correction codes seem to impose stricter constraints on the embedding of the operators within whole regions of the AdS bulk, there seems to be something more precise about the Papadodimas-Raju rules: they say that sufficiently long products of the operators aren't represented properly. I don't see whether this claim about the "trick to deform" the embedding, or about the "character of the errors" of the embedding, is equivalent to the tricks of the error correction or whether the two are inequivalent. This should be resolved. They may be mathematically inequivalent but have the same physical implications – much like e.g. the microcanonical and grand canonical ensembles in statistical physics. Unfortunately, the error correction approach depends on some "computer science" expertise a bit more than Papadodimas-Raju does, so the folks around Papadodimas and Raju probably need to learn more about the error correction codes and the related approach to quantum gravity. And I think that they should, because I believe that at least at some moral level, the whole error correction business in quantum gravity is just a particular reincarnation of Papadodimas-Raju.

As I said, Penington's paper has 164 sophisticated displayed equations. The qualitative lessons about his solution of the paradoxes surely don't require that many equations and I think that he may want to summarize the work in a less technical way, too.
