**...the vacuum can't be excited only if one assumes that it can't be excited...**

When I was living my days in the "physics establishment", it was pretty much true that there was a connected high-energy theoretical physics community including professors, postdocs, and students that worked hard to learn everything it should learn, that cared about the important new findings, and that cared whether the papers they wrote were correct. You could have taken the arXiv papers from that community pretty seriously, and when a paper was wrong, chances were that it would be corrected or withdrawn. A serious enough blunder would be found, especially if the paper were sold as an important one, and experts would quickly learn about it and reduce the attention given to the authors of the wrong paper appropriately.

You could have said that the people around loop quantum gravity and similar "approaches" didn't belong because they have never respected any quality standards worth mentioning. Everything was clear, but the "pure status" of the community began to be blurred with the arrival of the anthropic papers after the year 2000 that suddenly made it legitimate to write down very lousy, unsubstantiated, non-quantitative claims, often contradicting some hard knowledge. I tended to think that this decrease in quality expectations and the propagation of philosophically preconceived and otherwise irrational papers was a temporary fluke connected with the anthropic philosophy – because it's so "philosophically sensitive".

However, it ain't the case. When one looks at the literature about the black hole information issues, i.e. a big topic that saw tremendous progress in the 1990s, one finds that a very large portion of completely wrong literature has begun to develop. Raphael Bousso just released his 4-page preprint

Frozen Vacuum

and it's just so incredibly bad – and far from the first preprint written by a similarly well-known name that is just awful.

Bousso correctly lumps together two paradigms that claim that no black hole firewalls exist: the newer ER-EPR correspondence by Maldacena and Susskind; and the \(A=R_B\) interpretation of the original black hole complementarity principle (the interior's degrees of freedom aren't independent from the exterior ones), a more general approach taken in various earlier papers such as Papadodimas-Raju (who are not cited by Bousso, which is a pity, because their paper is better on these BH information topics than anything that Bousso has ever written on that topic himself) and advocated on this blog for more than a year, from the beginning of the AMPS provocations.

Indeed, \(ER=EPR\) is just a more specific and more geometric way to think about the lack of independence between the internal and external degrees of freedom – and about the reasons why the independence disappeared (because wormholes connecting the interior with the distant regions are mass-produced and getting longer, starting from short wormholes representing the entangled Hawking pairs produced near the horizon).

I think it's important to point out that the ER-EPR correspondence wasn't "essential" to show that the AMPS firewall arguments were invalid. Many previous papers have offered valid arguments why AMPS didn't have a solid proof.

Unfortunately, it's the last positive thing I can say about the new paper by Bousso.

He believes that either \(ER=EPR\) or \(A=R_B\) – which are approaches that claim to preserve the equivalence principle, at least much more accurately than in the firewall picture by AMPS – ultimately have to violate the equivalence principle as well because an infalling observer isn't allowed to see any particle excitations near the event horizon.

Needless to say, this claim is entirely preposterous and the arguments backing it are flawed. There can't possibly be any "logical contradiction" hiding in \(ER=EPR\) – it's just ordinary quantum mechanics of degrees of freedom that may be approximately visualized as field theory modes on the background of an Einstein-Rosen bridge. What could possibly go wrong? There can't be any contradiction in the very assumption \(A=R_B\) itself because it's even more general than \(ER=EPR\).

The main reason why Bousso's arguments don't hold water will appear later. But I just can't resist pointing out many – and there are indeed very many – other, perhaps more minor things in the paper that just drive me up the wall and that would be enough to throw the paper into the trash bin in the good old times when quality mattered.

The first column on the first page ends with this paragraph:

> In order to avoid a firewall at the horizon, one could identify the interior partner \(\tilde b\) with some \(e_b\) or with the exact purification \(\hat b\). This reduces the inconsistent double entanglement to a consistent single entanglement. An out-state-dependent mapping is necessary to ensure that \(b \tilde b\) will not just be entangled, but in a particular entangled state, the vacuum state. This type of map is called \(A = R_B\) [3, 6-8] or \(ER=EPR\) [9], or "donkey map" [4]. It is nonstandard [6, 10], and so already faces a number of challenges; moreover, no donkey map exists for out-states where \(b\) is less than thermally entangled [3, 4, 11].

Some relationship is "non-standard" according to two papers, one of which was written by the present author. What a heresy. Clearly, the word "non-standard" means that Bousso wants to spit on these claims without having any specific counter-arguments.

Of course there can't be any contradiction if we're forced to look at the out state whenever we want to restrict our attention to microstates that see the vacuum near the horizon. The field theory modes in a black hole background just don't preserve the exact rules of a local quantum field theory, so the Hilbert space can't be written as an exact tensor product of Hilbert spaces from individual "subregions". All these subregions are correlated because they are required to combine into a black hole of a fixed size. That black hole has a finite entropy and its limited number of microstates has to efficiently incorporate various field-theoretical degrees of freedom from the regions, and other, non-field-theoretical degrees of freedom as well.

Imagine that you have 1 MB of disk space to compress a 10 MB text file – and this is what the black hole is doing with the "regional" information. The algorithm that achieves such a high degree of compression of the information simply has to depend on the whole 10 MB long text. In the analogy, it has to be out-state-dependent. Moreover, the requirement that there would be an empty space near the horizon at some point isn't a natural constraint on the initial state of the star that collapsed into a black hole. We're just not guaranteed to get an empty black hole if we start with any initial state. It's likely but it's not guaranteed.
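The compression analogy can be made concrete with a purely illustrative toy sketch (no physics in it, just an analogy I'm adding here): any compressor that achieves a high compression ratio must adapt its encoding to the whole input, which is the analogue of the out-state-dependence of the map.

```python
import zlib

# Toy illustration only (no physics): a high-ratio compressor must
# adapt its encoding to the *whole* input, which is the analogue of
# the "out-state-dependent" map discussed in the text.
text_a = b"abcdefgh" * 125_000   # ~1 MB built from one pattern
text_b = b"zyxwvuts" * 125_000   # ~1 MB built from another pattern

comp_a = zlib.compress(text_a, level=9)
comp_b = zlib.compress(text_b, level=9)

# Both inputs compress enormously, but the compressed streams differ:
# the encoding depends on the input it is asked to compress.
print(len(comp_a) < len(text_a) // 100)  # True: huge compression ratio
print(comp_a != comp_b)                  # True: input-dependent encoding
```

The point of the toy is only the qualitative one: there is no single, input-independent "dictionary" that achieves such a compression for every possible input, just as there is no out-state-independent donkey map.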

So one can't be surprised that the identification of the degrees of freedom with their purification has to be out-state-dependent. It has to be out-state-dependent already because the procedure is required to break down if the initial state actually doesn't produce the vacuum in the region where we want to see the vacuum.

The observation that one can't invent a canonical "donkey map" for a non-maximally entangled state is more or less fine but there's no physical reason why such a "donkey map" should always exist.

The first equation of the paper that makes me slightly upset is equation (1):\[

{\ket\psi}_{b e_b p} \propto {\ket 0}_p \otimes \sum_{n=0}^\infty x^n {\ket n}_b {\ket n}_{e_b}

\] This is problematic at so many levels.

First, the pointer state \({\ket 0}_p\) is "added" to the formula as a simple tensor product which means that Bousso implicitly assumes the exact locality or clustering property for all the degrees of freedom.

Second, the tensor factor in the state that is related to the \(b\) and \(e_b\) degrees of freedom has a completely particular form – only the entangled degrees of freedom are present. Moreover, their coefficients are written as \(x^n\), a very particular function of \(n\), the occupation number.

Such a simple dependence on \(n\) may only be justified in a free quantum field theory on the curved background. In a general interacting setup, and a black hole is strongly interacting and "reshuffles" all the information extremely efficiently, there's no reason to expect that the complex amplitudes for the states \({\ket n}_b\otimes {\ket n}_{e_b}\) should scale like \(x^n\) although \(x^{2n}\) is the scaling of the density matrix eigenvalues in the mixed ensemble (but we're dealing with general microstates here which are more variable). Moreover, when Bousso writes
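To make the \(x^n\) versus \(x^{2n}\) point explicit, here is the standard free-field computation (a derivation I'm adding, using the same notation as above): for the normalized thermally entangled pair state, tracing over \(e_b\) produces a thermal density matrix whose eigenvalues scale like \(x^{2n}\),

\[
{\ket\psi}_{b e_b} = \sqrt{1-x^2} \sum_{n=0}^\infty x^n {\ket n}_b {\ket n}_{e_b}, \qquad x = e^{-\beta\omega/2},
\]

\[
\rho_b = {\rm Tr}_{e_b} \ket\psi \bra\psi = (1-x^2) \sum_{n=0}^\infty x^{2n} {\ket n}_b {\bra n}_b.
\]

Only a free field on the curved background guarantees this simple geometric form of the amplitudes; a general interacting microstate has no reason to keep the \(x^n\) pattern.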

> For modes with Killing frequency of order the Hawking temperature, \(x=\exp(-\beta\omega/2)\) is of order one.

He doesn't seem to realize that \(x\) is never "really close" to one. This \(x\) may indeed be comparable to one for modes with a Killing frequency as low as the Hawking temperature, but there are just several such modes and these modes are heavily delocalized – by the uncertainty principle, their spatial size is comparable to the black hole radius as well – which means that these modes can't really be helpful to test the equivalence principle in a region near the horizon that is much smaller than the black hole size (and this condition is needed for the non-uniformities of the gravitational field to be negligible and for the gravitational field to be really indistinguishable from acceleration in a flat space, and this equivalence is what the equivalence principle is all about). These warnings against "wrong, long-distance tests of the equivalence principle" were raised especially by Mathur and Turton, whose papers aren't cited by Bousso, either. This fact alone would be enough to be sure that the rest of the new Bousso paper can't be a valid argument against the equivalence principle.
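A quick numerical sanity check (my own sketch, natural units \(k_B=\hbar=c=1\) so that \(\beta=1/T_H\), using the \(x=\exp(-\beta\omega/2)\) from the quoted sentence) shows why only the few near-Hawking-temperature modes have \(x\) of order one, while modes short enough to fit inside a small near-horizon region have an exponentially suppressed \(x\):

```python
import math

# Sketch in natural units (k_B = hbar = c = 1), so beta = 1/T_H.
# x = exp(-beta * omega / 2), as in the quoted sentence.
def x_of(omega_over_TH: float) -> float:
    return math.exp(-omega_over_TH / 2.0)

# Killing frequency comparable to the Hawking temperature:
# x is indeed "of order one"...
print(round(x_of(1.0), 3))    # 0.607

# ...but a mode localized within a region much smaller than the black
# hole (wavelength ~ 1/omega << radius ~ 1/T_H) needs omega >> T_H,
# and then x is exponentially small:
print(round(x_of(10.0), 4))   # 0.0067
```

The second case is the relevant one for a local test of the equivalence principle near the horizon, which is exactly where \(x\) stops being of order one.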

The very fact that Bousso assumes a particular state and calls it "the state" although he hasn't defined any special properties of the state indicates that he just doesn't understand the superposition principle of quantum mechanics. In quantum mechanics, any and every complex superposition of allowed ket vectors is equally allowed. A black hole doesn't have a single "the state". It has exponentially many microstates. You may only talk about "a" state. And there are many of them.

It's clear that Bousso is trying to build the microstate of the black hole from the low-energy field-theoretical occupation modes. But this can't be done. A black hole has an exponentially huge entropy and a vast majority of these microstates corresponds to black hole configurations that look pretty much empty inside the horizon. Similarly, a vast majority of the microstates of the Hawking radiation are microstates that closely resemble the thermal mixed state of the radiation. There are no "the" states of either the black hole or its radiation.

For a page or so, Bousso transforms the pointer state and employs some broken, not-so-quantum terminology (and perhaps not only terminology) such as a "collapse" of the wave function. There is no physical process that could be called the "collapse" of the wave function. Moreover, this whole discussion about the pointers and measurements is completely redundant and only adds confusion to the text.

Finally, in the second column of the second page, we read:

> Nine years later, a clueless Alice happens to fall through the zone without encountering Bob or the pointers. She does encounter the mode \(b\), ten light-years from the horizon, as well as \(\tilde b\), inside the horizon. She makes no particular measurement but just enjoys the vacuum. After all, her theory of black holes says that \(\tilde b\) must be identified with whatever purifies \(b\), whether or not Alice controls the purifying system or has any idea where it is. By Eq. (3), the purification happens to be a subspace of \(ep\). The associated donkey map is Eq. (5), and the result is the infalling vacuum (6).

You read it once, twice, thrice. You try to understand what the argument for the contradiction could possibly be. No matter how carefully you think, you will fail. It makes no sense whatsoever. The first reason why it makes no sense is that Alice doesn't measure anything, she just "enjoys" the flat space. But if she's just on vacation and measures nothing, her work can't be used to derive any paradox, either. The word "enjoy" sounds like a joke, except that it seems to be an important part of Bousso's thinking.

Bousso seems to claim that he has found two derivations of the value \(N\) of an occupation number. One of them gives you \(N=0\) and the other gives you \(N=4\). That would indeed be a paradox except that a necessary condition for him to derive that \(N=4\) is to assume that \(N=4\) in the experiment. And a necessary condition for \(N=0\) is to assume \(N=0\). These assumptions can't hold simultaneously because \(N\) is a well-defined operator on the Hilbert space, or at least on the subspace of the Hilbert space that respects a macroscopic appearance of the black hole from an incoming observer's viewpoint. So there can't be any paradox.

Just try to answer the question: Why does Bousso think that Alice enjoys the vacuum? It's likely that an old black hole has a lot of vacuum with \(N=0\) (the occupation number is measured in a freely falling frame; by the Bogoliubov transformation, an observer trying to sit at a constant \(R\) will see lots of quasi-thermal Unruh/Hawking radiation with nonzero values of \(N\)) near its event horizon, on both sides, because it has already devoured what it could have devoured, but it's just not guaranteed. There's a nonzero probability that \(N=4\) and if \(N=4\), then the measurements with one pointer or 50 pointers etc. will imply \(N=4\). What do the pointers have to do with this simple thing?
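The parenthetical Bogoliubov remark can be made quantitative with a hedged sketch (my own addition, natural units again): a static, i.e. accelerated, observer attributes a thermal Bose-Einstein mean occupation to each mode, while the freely falling observer sees \(N=0\) in the vacuum.

```python
import math

# Mean occupation a static (accelerated) observer attributes to a
# bosonic mode of Killing frequency omega at Hawking temperature T_H,
# in natural units: the Bose-Einstein distribution.
def mean_occupation(omega: float, T_H: float) -> float:
    return 1.0 / math.expm1(omega / T_H)   # 1 / (exp(omega/T_H) - 1)

print(round(mean_occupation(1.0, 1.0), 3))   # 0.582  (omega ~ T_H)
print(mean_occupation(10.0, 1.0) < 1e-4)     # True   (omega >> T_H)
```

This is the sense in which "lots of quasi-thermal Unruh/Hawking radiation with nonzero values of \(N\)" is seen by the static observer even when the freely falling one sees nothing.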

There can't be any ambiguity for the value of \(N\). One may imagine that \(N\) is an occupation number of a field-theoretical mode on a curved spacetime resembling the Einstein-Rosen bridge, if we pick the particular terminology of \(ER=EPR\). These modes just evolve according to the usual field (Heisenberg) equations. Operators on one slice are functionals of operators on another slice. Those relationships are calculable from the field (Heisenberg) equations. If the observables are related in some way, they are related and the measurements will agree. If they are not related, they are not related and the measurements may disagree. The answer is always unambiguous.
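The "operators on one slice are functionals of operators on another slice" statement can be illustrated with a deliberately minimal toy (a qubit rather than a field mode, my own hypothetical example): in the Heisenberg picture the evolved operator is an explicit, unambiguous function of the initial operators.

```python
import cmath
import math

# Minimal Heisenberg-picture toy (a qubit, NOT a field mode): with
# H = sigma_z (hbar = 1), the Heisenberg equations give the explicit
# relation between "slices":
#     sigma_x(t) = cos(2t) sigma_x - sin(2t) sigma_y.
# We verify it against the exact U(t)^dagger sigma_x U(t).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

t = 0.37
# U(t) = exp(-i H t) is diagonal for H = sigma_z:
U = [[cmath.exp(-1j * t), 0], [0, cmath.exp(1j * t)]]

evolved = mat_mul(dagger(U), mat_mul(sx, U))  # sigma_x(t), exactly
claimed = [[math.cos(2 * t) * sx[i][j] - math.sin(2 * t) * sy[i][j]
            for j in range(2)] for i in range(2)]

ok = all(abs(evolved[i][j] - claimed[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True: the operator relation between slices is unambiguous
```

Nothing here depends on which states (ket vectors) one later feeds in: the operator relation, and hence every measurement prediction, is fixed by the dynamics alone.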

Also, the Hawking radiation modes are linked (with some extra scrambling transformation – geometrically interpreted as a complicated "twisting" of the Einstein-Rosen bridges) – to some of the modes in the black hole interior. In the ER-EPR correspondence, this simply results from their proximity. The regions may look distant in the ordinary black hole spacetimes but because there are wormholes, there is also a sense in which they are very close to each other. So the field operators in these two regions may be seen to be equal, up to differences proportional to the very high-energy modes (that are approximately set to their ground state).

There can't possibly be any paradox.

Moreover, the whole game with the ket vectors is a proof that Bousso is just spreading confusing fog. The "accent" of this text reminds me of the people who haven't learned quantum mechanics well and who believe that it can only be formulated in Schrödinger's picture (with "collapses" that they imagine "materialistically"). If he believes that there are arguments showing \(N=0\) and \(N=4\) at the same moment – two different values of an observable – it must be possible to formulate the proof without any ket vectors. He wants to prove a strange claim about the observables, so the proof must be based on observables and relationships between them (especially the Heisenberg equations of motion, the spectra of operators, and so on). There's no point in writing the explicit ket vectors – except if he wants to obscure the situation and introduce lots of wrong assumptions to the game, such as the precise tensor factorization of the Hilbert space, which doesn't hold – and fails *especially* when we consider the physics of black holes.

On the remaining pages, Bousso tries to make his arguments with the pointers etc. – which have been a redundant source of fog from the beginning – even more complicated and confusing. The first two pages may at least be classified as a spectacularly wrong segment of a paper. But the rest is a case of unspectacularly wrong, excessive babbling.

I can't understand why this whole culture of "the equivalence principle has to be totally wrong" has spread in the quantum gravity literature. It's so self-evidently wrong and it's been discussed for decades. For two decades, we have known why similar would-be arguments that lead to paradoxes don't really work. Every expert was saying these things. Why didn't they protest 15 or 20 years ago?

It seems to me that the community of the quantum gravity or high-energy theoretical physics experts is really decaying away and within a few years, you will face a violent backlash even if you write down that \(1+1=2\). Raphael, Joe, others, can't you just stop posting this increasingly awful rubbish to the arXiv?

## snail feedback (21) :

A lot of papers are overly complex, it's as if the author is deliberately out to distract and confuse, while promoting how intelligent they are due to the complexity of their work.

But if they stop posting it, we will lose your instructive and entertaining commentary/criticism. Maybe they should just send it straight to you? ;-)

Dear lucretius, believe me that rather than the excess material for criticism, I would be excited about further advances that would be e.g. comparable to ER-EPR every other day. ;-)

agreed and liked!

Are there any more comments or explanations about how entanglement is a topological charge?

I was trying to see if I can give a sort of "topological interpretation" to quantum mechanics... now that everyone is interested in "interpretations"... apparently others did it first but the field is not exhausted

Yep, they should send it for peer review to Lumo, and Lumo posts the review here on TRF :-)

Indeed, this article reads like a peer review that says "reject" at the end ;-)


Are you referring to anything in Bousso's paper? I can't see any mention of anything topological. However, I think Kauffman has done some work on the relationship between knots, braids, quantum entanglement and quantum computing.

There is a certain madness to the amount of ads in your posts. Just saying.

Please explain this: "Of course that there can't be any contradiction if we're forced to look at the out state whenever we want to restrict our attention to microstates that see the vacuum near the horizon." How are these microstates measurable by Alice?

Apologies in advance for my naive question, Lubos.

Given that we don't yet have a sufficiently well-understood theory of quantum gravity and are relying on semiclassical gravity in a lot of these black hole discussions, how do we know for sure that black holes truly develop exact event horizons in the first place? At the classical level, sure, black holes have nice event horizons, but how do we know, once we start taking quantum-gravity effects into account, that event horizons of black holes (as opposed to, say, de Sitter horizons) are really, exactly information-trapping and don't leak information, beyond Hawking's original prediction of thermal radiation? And if black holes, as quantum objects, don't have exact event horizons, then don't a lot of these puzzles go away?

I'm hoping you can tell me (or point me to a paper with) the most trustworthy argument in favor of the view that event horizons can be trusted to be exact even once we open the door to (not fully understood) quantum-gravity effects.

Thanks!

Alice - I suppose you mean an infalling observer - doesn't have access to all the microscopic degrees of freedom. The limited time left in her life clearly prevents her from measuring every detailed microscopic property of the black hole.

That doesn't mean that those degrees of freedom can't affect her and the rest of her life. Of course they can. The ER bridge picture makes the reason manifest. The black hole interior is connected by wormholes to distant parts of the Hawking radiation, so the operations done with the Hawking radiation do influence the black hole interior seen by Alice.

Hi, well, this is a good question, the same fundamental question that underlies this very controversy.

My view is that we know that the event horizons are formed even with QG effects included because they are a classical GR phenomenon and they are unambiguously implied by the evolution in GR whose terms in the equations have been validated experimentally. Only when some well-defined observables reach extreme values incompatible with the classical GR approximation may the quantum effects kick in.

One may perhaps argue that the black hole interior - and even vicinity of the horizon - is already transgressing to the quantum regime although such a view is a violation of the equivalence principle which allows us to use a flat-space-like description for regions that are nearly flat even though they are "naturally" described by singular coordinate systems as well.

But if someone claims that there are new effects near the would-be horizon of a young black hole, and Bousso does, it is an extra problem: the location of the event horizon isn't even determined before all the evolution ends. A point on the surface of a neutron star may suddenly turn out to be in the black hole interior although it looks very similar to the point just a nanosecond earlier which was outside. So claiming that some quantum effects appear in between them violates not only the equivalence principle but also causality - such a distinction would have to know the future history in advance.

One could perhaps change the moment when the quantum effects become substantial to another surface near the event horizon such as the apparent horizon etc. Nature may be hiding surprises but I personally don't think that She will force us to abandon the equivalence principle near the event horizon completely because the interior and exterior start as connected through ordinary nearly flat space and there's no reason why the nature of the connection should qualitatively change over time - and I even think that the normal continuation of the space past a surface (horizon) is the only consistent way to continue physics. There are all kinds of microscopic correlations but those are exponentially tiny so that they become invisible to a low-energy observer.

Would this tell us things about the "back action" of measuring the Hawking radiation? It seems that it should due to monogamy. Just wondering.

Thanks for the clear response!

I am perfectly comfortable believing in classical GR all the way down to exp(-S) corrections, which are obviously extremely tiny for all but the tiniest black holes.

But how can we trust that the classical solution for the gravitational field is correct all the way down to that level of exp(-S) precision? And if the event horizon is "broken" at the level of such tiny corrections, then won't it leak just enough information to ensure the outgoing radiation is unitary?

I know that event horizons are global concepts, but the real question is whether there are gravitational fields that are strong enough and consistent enough all the way down to those tiny corrections that they can keep ahold of every bit of information and prevent it from escaping, and how do we know we can trust quantum gravitational fields to do that?

The only precise UV complete theory of quantum gravity we have is from AdS/CFT, and the CFT duals to black holes leak information, and are consistent with unitarity. So isn't that evidence that the quantum black holes in the bulk are slightly leaky too?

Thanks again!

It tells us something but the information is probably hopelessly scrambled.

Yes, it's monogamous because those measurements on the Hawking radiation far away and some of the properties of the black hole interior are the *same* degrees of freedom - geometrically, they are functionals of field modes on two regions of spacetime that are actually very close to each other if one walks over the bridge.

Dear Anonreader,

the classical equations and their solutions are surely not precise descriptions of reality up to the precision exp(-S). They're just the leading approximation and already the first power-law corrections, of which R^2 squared-curvature terms are an example, represent quantum corrections that surely do appear in the reality.

What is precisely said about the exponentially tiny corrections is that corrections of size comparable to exp(-S) etc. are those that violate not only the precise form of the classical equations but even the *principles* of the classical theory - locality - that are apparently preserved by all conceivable higher-order corrections.

Yes, this (exponentially tinily) leaking horizon is exactly what may be credited for the preservation of the information. The Hawking radiation arises from the tunneling of a sort. So when one talks about the microscopic details, every piece of information can leak. But when one observes low-energy phenomena, the horizon should behave classically.

It is not true that AdS/CFT is the only precise UV complete description of QG we have. At least Matrix theory is another example.

All black holes preserve information - as far as these qualitative properties go, the properties of all black holes in all quantum theories of gravity are the same.

Best wishes

LM

Umm, I'm not so sure that it is hopeless, as there should be a way to compute the relative entropy of the two regions. That is my question for now. Thanks.

Thanks again -- this is very helpful. And sorry about forgetting about Matrix theory!

I guess my question boils down to how circular some of these arguments are. If we assume that there's a horizon, then obviously any exp(-S) effects that allow information to go back through the horizon to the outside world must be nonlocal. But if the horizon isn't exactly a horizon in the first place, then why must leaking information be nonlocal?

As for the equivalence principle, that presupposes some given gravitational field, and says that the gravitational field should look like nothing to a free-falling observer. But if the gravitational field is corrected by fluctuating exp(-S) effects, then the necessary free-fall frame keeps changing, and so a smoothly traveling observer won't be able to stay in exactly the right free fall frame all the time, and should see fluctuations. But that's not really a violation of the equivalence principle, right, because the field itself keeps fluctuating. Or is this incorrect?

Thanks so much for humoring my confusions!

Dear Anonreader,

I am used to Matrix theory's being forgotten. As long as there are at least some people who don't forget all of science, the situation isn't hopeless.

It is not clear to me what you mean by "in the first place" when you talk about the existence of the horizons. What does it mean? If it means that they're there in an approximation, the answer is Yes. If it means that they were there first historically when gravity started to be studied as the curved spacetime, it's Yes. There may be other "in the first place" approaches - starting from other starting points in the first place - that don't have the horizon in the "first place". It's not an exact notion if the black hole is evaporating.

Yes, the equivalence principle also presupposes that the space near the large black hole horizon has all the properties expected from an empty flat space because the space is supposed to be nearly flat according to classical GR (plus small corrections). If the fluctuations near the horizons become larger or qualitatively different from those you could see in empty space, I would say that the equivalence principle is violated, too - simply because they would allow you to detect whether there is a horizon gravitational field or whether it's due to acceleration, and that's what the equivalence principle is supposed not to allow you to distinguish.
