**...because the "basis" of coherent states is overcomplete...**

Let me begin with something simple. John Preskill asked you "What's inside a black hole?" and offered you four options:

- (A) An unlimited amount of stuff.
- (B) Nothing at all.
- (C) A huge but finite amount of stuff, which is also outside the black hole.
- (D) None of the above.

Well, (B) may also be interpreted as a claim allowing a firewall, in which case it's wrong in general (the firewall isn't necessary or generic), but of course there are rare black hole microstates that contain something that burns you near the horizon, much like there are rare black hole microstates with a bunny in the interior.

This point is simple but often misunderstood. A black hole is defined by its event horizon but it doesn't follow that the interior has to be empty. There can be a bunny in it. However, among microstates of localized matter, a black hole with a bunny belongs to an exponentially rarer class of microstates. Most black hole microstates with mass in the window \(M\pm \delta M/2\) look empty – that's why the entropy-increasing evolution converges towards these states as the black hole keeps on devouring the surrounding matter to clean its interior (and vicinity). But make no mistake about it: a bunny in a black hole (or a nonzero occupation number of freely falling field operator modes) is unlikely yet possible.

But let me switch to a more complicated question.

During Suvrat Raju's talk at a recent Fuzz-Or-Fire workshop in Santa Barbara, a core of the pro-firewall/anti-firewall conflict became rather visible. On one hand, the Papadodimas-Raju "state-dependence" of the definition of the black hole interior field operators seems unacceptable to the firewall champions although it looks pretty much inevitable to many of us.

On the other hand, this disagreement may be described as a criticism of Polchinski's and pals' alternative: They believe that the bulk field operators and especially their location in a quantum theory with a dynamical geometry may be defined by a recipe described operationally in a "background-independent way". For example, start at the AdS boundary that must be close to the empty AdS geometry, pick a direction as if it were an empty AdS, and go in this direction for a certain proper time or proper length. Then you turn in the direction of the greatest curvature (defined in some other way) and walk for 5 meters of proper distance or 2 microseconds of proper time, and so on. You get to a point and there is a scalar field at that point that you may call \(\phi(x,y,z,t)\) and ask about its eigenvalues or what it does when it acts on a state \(\ket\chi\) etc.

The classical counterpart of such a prescription sounds totally OK in classical general relativity. You may imagine that one particular spacetime geometry is the "right one" and whatever the spacetime geometry is, the operational procedure involving the proper times, proper distances, and angles may be followed and the right value of the field at the point we just found becomes a well-defined \(c\)-number (let's talk about scalar functions of local tensor fields and their derivatives only).

Joe Polchinski and others believe that the same background-independent operational definition of field operators at various points may be used in quantum gravity, too. This belief is incorrect. There are several ways to see why. They seem very different but ultimately they are rooted in the same general properties or at least "spirit" of quantum gravity that is imposed on us by consistency.

To maintain their belief that the background-independent localization of field operators (and therefore state-independence) is possible, the firewall advocates must assume that

- the metric tensor is a good and precisely and uniquely defined degree of freedom (quantum observable) at arbitrarily short distances
- every ket vector in a quantum gravity theory may be uniquely rewritten as a sum of ket vectors each of which comes with a well-defined classical geometry

**The metric tensor isn't any good at (sub)Planckian distances**

The first point has to be true because they want to determine proper distances, and you need a metric tensor for that. The definition must work even in rather general, potentially extreme environments near collapsing and other black holes, where we often need an exponential precision to locate the events (note the coordinate singularity at the horizon etc.) while we have to withstand high matter densities etc., so the definition of the metric tensor has to be really exact for Joe's and pals' background-independent operational definitions of the points in a general spacetime to make any sense.

However, quantum gravity doesn't allow you anything of the sort. The metric tensor is only good and well-defined in an effective description of quantum gravity. At shorter distances, it just ceases to be a good observable. Well-defined observables in quantum gravity are different; the gauge fields in the \(\NNN=4\) Yang-Mills theory involved in the most famous example of the AdS/CFT correspondence are an example. The matrices \(X,P,\Theta\) in Matrix Theory are another example.

Even if you had something like a "closed string field theory" that would apparently contain the metric tensor "everywhere", you would have to solve the problems of the mixing of its field modes with some other modes of fields arising from heavy excited string states (with the same charges and spin). To make the procedure well-defined, you would have to overcome the problem that there are many ways (related by field redefinitions involving all the massive string fields) to define the metric tensor. They may be thought of as different "renormalization schemes". You may imagine that a different "renormalization scheme" amounts to switching the metric from something like the string frame to something like the Einstein frame but the rescaling depends on the massive scalar fields \(h\) in string/M-theory rather than the dilaton \(\phi_D\). Classically, \(h\) is constant so this rescaling doesn't change much. However, quantum mechanically, \(h\) is a dynamical, fluctuating field so an \(h\)-dependent redefinition of the metric tensor does matter.

But even if your procedure directing someone to walk over some proper distances in a general spacetime etc. did specify a particular "renormalization scheme", it would still be no good because at very short, near-Planckian distances, the geometry becomes brutally fluctuating and the proper distances and times, when accurately measured over the violent landscape of the quantum foam, are probably divergent and/or ill-defined. So Joe's prescription would break down.

My point is that whatever "renormalization scheme" you pick, \(g_{\mu\nu}(x,y,z,t)\) is a fluctuating degree of freedom whose probability amplitudes for substantial deviations are nonzero even in the vacuum state of the spacetime. By dimensional analysis, the magnitude of the contribution \(\delta L\) of these fluctuations to a proper distance \(L\) comparable to the Planck length is itself comparable to the Planck length, i.e. a 100-percent relative error; I believe that this dimensional analysis, assuming \(g_s=O(1)\), is OK even in string theory despite its ability to "calm down" the quantum foam. You simply shouldn't assume that the flat and peaceful spacetime offers you good expectations about the behavior of proper distances, times, and angles near/below the Planck scale. Try to follow Joe's algorithms on the quantum foam (the picture at the bottom):
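The dimensional analysis may be made slightly more explicit. A back-of-the-envelope sketch – just the standard zero-point estimate for a linearized graviton mode of wavelength \(L\), nothing string-specific: the vacuum fluctuation of the dimensionless metric perturbation averaged over a region of size \(L\) is\[

\langle h^2\rangle_L \sim \frac{\hbar G}{c^3 L^2} = \frac{\ell_P^2}{L^2},\qquad \delta L \sim \sqrt{\langle h^2\rangle_L}\cdot L \sim \ell_P.

\] The absolute fuzziness \(\delta L\) is of order \(\ell_P\) regardless of \(L\), so the relative error \(\delta L/L\) reaches 100 percent precisely when \(L\sim\ell_P\) – the quantum foam regime in which the proper-distance bookkeeping collapses.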

It's pretty obvious that you get caught in the weird tunnels and valleys of this quantum foam whatever recipe you choose. What you actually need is a geometric prescription that is allowed to use the smooth, nearly flat spacetime similar to the upper part of the figure. But using the proper distances and proper times calculated from the dynamical metric tensor just doesn't give you anything like a flat space even in the vacuum-like ket vectors. The quantum foam at the bottom of the figure above is an eigenstate of \(g_{\mu\nu}(x,y,z,0)\) and even the Minkowski-like vacuum state in quantum gravity is a superposition of states whose geometry looks like this. You won't really get anywhere with the background-independent protocols to isolate a location in the spacetime.

**Non-uniqueness of a "geometry" associated with a ket vector in QG**

But it's the second complaint against Joe's paradigm, if you allow me to call it that, which seems more damning and conceptual. You could imagine that for some unknown reasons, string theory calms down the quantum foam so nicely that the sub-Planckian terrain may still be imagined as a smooth space rather than the quantum foam and the procedure could get through with a potentially natural choice of the "renormalization scheme".

However, the procedure will still fail due to some facts that don't depend on the short-distance, Planckian physics. What are these general problems with the background-independent approach to the location of points in a dynamically curved quantum spacetime?

For the sake of simplicity, let's assume that the procedure "go here for 5 meters, turn left etc." is only used to move through a slice of the spacetime at a fixed value of the coordinate \(t\), whatever it is. If we considered trajectories deviating from the slice, we would open yet another can of worms because the metric tensor doesn't commute with its time derivatives (the uncertainty principle!) so it's just downright impossible to imagine that these behave classically in any ket vector (this assumption is as wrong as the assumption that arbitrarily sharp trajectories in the quantum phase space make sense).
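For concreteness, the non-commutativity invoked above is just the canonical structure of gravity in, say, ADM variables; schematically (with \(\pi^{kl}\) the momentum conjugate to the spatial metric, built from the extrinsic curvature, i.e. essentially from \(\partial_t g_{ij}\)):\[

[\hat g_{ij}(\vec x),\, \hat\pi^{kl}(\vec y)] = i\hbar\,\delta^{(k}_i \delta^{l)}_j\, \delta^{(3)}(\vec x - \vec y).

\] A hypothetical eigenstate of \(g_{ij}\) everywhere on the slice is therefore maximally uncertain in \(\partial_t g_{ij}\), exactly like an \(x\)-eigenstate of a particle is maximally uncertain in \(p\).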

Fine. Polchinski's procedure is meant to tell you what is the action of an operator \(\phi(P)\) on a general quantum gravity ket vector \(\ket\psi\). The point \(P\) is specified by an operational, background-independent procedure of the type "go for 5 meters, turn left, do this and that". Now, Joe believes that the action\[

\phi(P)\ket\psi

\] is another well-defined ket vector. We can see that this can't be the case. Why? Well, the vector \(\ket\psi\) isn't an eigenstate of the metric tensor operators \(g_{\mu\nu}(Q)\) at the relevant points \(Q\) that may appear along the trajectory. To avoid the immediate ill-definedness of a recipe based on proper distances, we must decompose \(\ket\psi\) into eigenstates of the metric tensor variables \(g_{\mu\nu}(Q)\):\[

\ket\psi = \sum_j \ket{\gamma_j}

\] Well, the sum could actually be an integral and normal people would tend to normalize \(\ket{\gamma_j}\) to unity and write the normalization factor as a special coefficient, and so on, but the equation above is good enough. In the previous section, I discussed the problems resulting from the violent character of the geometry in a \(g_{\mu\nu}\)-eigenstate. But even if you forget about these short-distance troubles and ambiguities and you assume that the proper distances through the apparent quantum foam behave just like your long-distance intuition suggests (up to a universal renormalization coefficient for the distances), you face insurmountable problems, even at long distances. They're related to the short-distance problems discussed previously but the arguments below hopefully make their independence of the UV physics more obvious.

Imagine that we want to apply the procedure to the most peaceful yet nontrivial state we can imagine, a smooth macroscopic gravitational wave in an otherwise empty spacetime. This state containing a gravitational wave may be written as a coherent state\[

\ket\psi = \exp\left[\int d^d k\,\alpha(k) c^\dagger(k)\right] \ket 0.

\] It's the exponential of a superposition of creation operators for some graviton states. As a homework exercise ;-), add sums over the polarizations and other indices and everything else you like or need. Now, additional particles may be created on top of the state \(\ket\psi\) and I think that Polchinski would say that the right way to apply his procedure to states containing a few particles on top of the curved spacetime \(\ket\psi\) is to use the geometry of this curved spacetime when we try to follow the steps that "find the location in a general spacetime".
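The structure of such a coherent state is easy to verify numerically in a toy model where the gravitational field is reduced to a single mode, i.e. an ordinary harmonic oscillator. A minimal sketch (the truncation dimension and the amplitude are arbitrary choices for illustration): acting with the exponential of a creation operator on the Fock vacuum and normalizing reproduces the textbook coherent-state amplitudes \(e^{-|\alpha|^2/2}\alpha^n/\sqrt{n!}\).

```python
import numpy as np
from math import factorial

DIM = 40      # Fock-space truncation (arbitrary, large enough for ALPHA below)
ALPHA = 1.5   # toy single-mode stand-in for the smeared amplitude alpha(k)

# creation operator on the truncated Fock space: adag|n> = sqrt(n+1) |n+1>
adag = np.zeros((DIM, DIM))
for n in range(DIM - 1):
    adag[n + 1, n] = np.sqrt(n + 1)

# build exp(ALPHA * adag)|0> by summing the power series (it terminates,
# since adag^DIM = 0 in the truncated space)
vac = np.zeros(DIM); vac[0] = 1.0
psi, term = vac.copy(), vac.copy()
for k in range(1, DIM):
    term = ALPHA * (adag @ term) / k
    psi = psi + term
psi /= np.linalg.norm(psi)   # normalization supplies the factor exp(-|alpha|^2/2)

# compare with the closed-form coherent-state amplitudes e^{-a^2/2} a^n / sqrt(n!)
n = np.arange(DIM)
closed = np.exp(-ALPHA**2 / 2) * ALPHA**n / np.sqrt([float(factorial(k)) for k in n])
print(np.allclose(psi, closed))   # True
```

The graviton-field coherent state of the text is the same construction with a continuum of modes labeled by \(k\) instead of one oscillator.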

You should already feel uncomfortable at this point because the state \(\ket\psi\) is an excitation of the Minkowski vacuum state, too. Rewrite the exponential as a Taylor expansion if you want to make the point more suggestive. Gravitons are particles, too. You might say that using the flat spacetime's metric to locate points in the spacetime couldn't be a hopeless idea, except that it should then also be obvious that the relationship between the local operators on top of the excited coherent, curved space \(\ket\psi\) and the local operators on top of the Minkowski space \(\ket 0\) is extremely convoluted.

So let me assume that Polchinski et al. really want to use the curved geometry from the coherent state \(\ket\psi\) when they follow their background-independent procedure. It means that to find the action of a local operator \(\phi(P)\) on \(\ket\chi\), they need to decompose \(\ket\chi\) into "matter-like" (and therefore geometry-unchanging) excitations of coherent states of the type \(\ket\psi\) above for which the metric tensor is known.

*The trouble with this background-independent physics is that the "basis" of the harmonic oscillator Hilbert space consisting of the coherent states is overcomplete.*

See basic introductions to coherent states if you have any doubt about the statement. So even if you restrict your calculations to ket vectors \(\ket\chi\) that only contain purely gravitational excitations, you will need "the" decomposition of such states into coherent states to identify \(\phi(P)\), but "the" decomposition actually isn't unique.
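Both halves of the statement – no two coherent states are orthogonal, yet they resolve the identity – can be checked in a few lines for a single mode. A sketch, with the Fock truncation and the integration grid being arbitrary numerical choices:

```python
import numpy as np
from math import factorial

DIM = 30   # Fock truncation, enough for |alpha| up to ~3 below

def coherent(alpha):
    """Closed-form coherent state |alpha> in the truncated Fock basis."""
    n = np.arange(DIM)
    norms = np.sqrt(np.array([float(factorial(k)) for k in n]))
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / norms

# 1) no two coherent states are orthogonal: |<a|b>|^2 = exp(-|a-b|^2) != 0
a, b = coherent(1.0), coherent(2.0 + 1.0j)
print(abs(np.vdot(a, b))**2, np.exp(-abs(1.0 - (2.0 + 1.0j))**2))  # both ~0.1353

# 2) yet they resolve the identity with the measure d^2alpha / pi: a Riemann
#    sum of |alpha><alpha| reproduces 1 on the low-lying Fock states, so the
#    set is "too big" to be a basis -- it is overcomplete
step = 0.25
grid = np.arange(-5, 5, step)
I = np.zeros((DIM, DIM), dtype=complex)
for x in grid:
    for y in grid:
        v = coherent(x + 1j * y)
        I += np.outer(v, v.conj()) * step**2 / np.pi
print(np.round(I.diagonal()[:5].real, 4))   # ~[1. 1. 1. 1. 1.]
```

An overcomplete set with nonzero mutual overlaps is exactly what makes "the" expansion coefficients non-unique.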

This is a problem that makes your background-independent procedure break down even for states \(\ket\chi\) that are as simple as a low-energy, single-graviton excitation of the Minkowski vacuum state. On one hand, you could consider this excitation to only change the background geometry infinitesimally and use the Minkowski geometry to follow the procedure. The first excited state of a harmonic oscillator is proportional to a superposition of coherent states weighted by \(\delta'(\alpha)\), all of which are infinitesimally close to the origin of the phase space (interpreted as a flat space in the Fock space of gravitons). On the other hand, you may rewrite this first excited state of the harmonic oscillator as a linear superposition of coherent states centered elsewhere, even very far from the origin (effectively a linear superposition of highly curved spacetimes). It's clear that the point \(P\) you reach by following these spacetimes will depend on how you decompose your states into the coherent states. This decomposition isn't unique and the infinitely many choices differ by amounts that are unbounded from above.
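The ambiguity can even be displayed explicitly. For a single mode, the identity \(\int_0^{2\pi} d\theta\, e^{-i\theta}\ket{r e^{i\theta}} \propto \ket 1\) holds for *every* radius \(r>0\), so the one-quantum state may be written as a superposition of coherent states hugging the origin or of coherent states centered arbitrarily far away, as one pleases. A numerical sketch (the truncation, radii, and sample counts are arbitrary illustration choices):

```python
import numpy as np
from math import factorial

DIM = 80   # Fock truncation; comfortably holds coherent states with |alpha| <= 4
N = np.arange(DIM)
NORMS = np.sqrt(np.array([float(factorial(k)) for k in N]))

def coherent(alpha):
    return np.exp(-abs(alpha)**2 / 2) * alpha**N / NORMS

def one_quantum_from_circle(r, samples=400):
    """|1> rebuilt from coherent states on the circle |alpha| = r:
    sum_theta e^{-i theta} |r e^{i theta}>  is proportional to |1> for ANY r > 0."""
    thetas = 2 * np.pi * np.arange(samples) / samples
    state = sum(np.exp(-1j * t) * coherent(r * np.exp(1j * t)) for t in thetas)
    return state / np.linalg.norm(state)

fock1 = np.zeros(DIM, dtype=complex); fock1[1] = 1.0
for r in (0.3, 1.0, 4.0):   # circles hugging the origin or far from it
    fidelity = abs(np.vdot(fock1, one_quantum_from_circle(r)))**2
    print(f"r = {r}: |<1|psi>|^2 = {fidelity:.10f}")   # ~1 for every radius
```

In the gravitational dictionary, the small-\(r\) decomposition says "almost Minkowski", while the large-\(r\) one says "a superposition of strongly curved backgrounds" – yet they are the same ket vector.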

If the procedure doesn't work for single-graviton states, you may be sure that the problems become exponentially worse if you try to apply the procedure to a black hole spacetime with a significant density of mass, coordinate singularities, and many other things. It's completely hopeless.

Incidentally, if you tried to replace the decomposition into coherent states by a decomposition into \(g_{\mu\nu}\)-eigenstates – in the harmonic oscillator analogy, \(x\)-eigenstates – discussed at the beginning, you could cure the overcompleteness problem of the basis but you would also totally delocalize the vectors in the values of \(\partial_t g_{\mu\nu}\), which means that the time-like geodesics of the recipe would probably become infinitely singular (the coherent states naturally balance the needs of the metric in the spatial and temporal directions); you wouldn't be guaranteed that the proper distances are well-behaved and finite at short distances. In the end, any attempt to define the recipe will fail because all of them contradict the equivalence principle: they assume that the spacetime geometry is classical enough for the proper length/time of some generic trajectories going in many directions to be accurately measurable, which isn't so.
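The trade-off invoked here – sharpening the metric (the \(x\)-analogue) completely delocalizes its time derivative (the \(p\)-analogue) – is just the ordinary uncertainty relation, and it is easy to watch numerically as Gaussian wavepackets are squeezed toward an \(x\)-eigenstate. A sketch in oscillator units with \(\hbar=1\) (widths and grid are arbitrary illustration choices; \(\sigma = 1/\sqrt 2\) corresponds to the coherent state):

```python
import numpy as np

# position-space toy model, hbar = 1
dx = 0.002
x = np.arange(-6, 6, dx)

def spreads(sigma):
    """Return (Delta x, Delta p) for a Gaussian wavepacket of width sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(psi**2) * dx)                 # normalize
    dX = np.sqrt(np.sum(x**2 * psi**2) * dx)            # sqrt(<x^2>), since <x> = 0
    dP = np.sqrt(np.sum(np.gradient(psi, dx)**2) * dx)  # <p^2> = int |psi'|^2 dx
    return dX, dP

for sigma in (1 / np.sqrt(2), 0.1, 0.02):   # squeezing toward an x-eigenstate
    dX, dP = spreads(sigma)
    print(f"sigma = {sigma:.3f}:  dx = {dX:.3f}, dp = {dP:.2f}, dx*dp = {dX*dP:.3f}")
# dp grows like 1/(2 sigma) as the state approaches an x-eigenstate,
# while dx*dp stays pinned at the minimal value 1/2
```

The coherent state sits at the balanced point of this trade-off, which is why it is the natural candidate for a "classical geometry" – at the price of overcompleteness.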

**An alternative for the background-independent operational localization protocols**

Once I have shown that the background-independent way of identifying locations of operators isn't possible, it may seem polite for me to tell you what a legitimate replacement is. We could be saying that no calculations based on strictly local operators attached to "points" are possible in quantum gravity. Except that I think that they are possible. However, you have to assume (manually and, whenever possible, cleverly choose) a background – a particular "curved space" vacuum-like state of the quantum gravitational theory, which may also be obtained as a coherent state built from other vacuum-like states – and construct many other microstates out of this vacuum-like state by the action of a "finite" number of field operators (a number not scaling with various parameters called \(N\) that would be increasing functions of the curvature radius etc.), where these field operators behave much like they behave in the flat space, at least locally in regions where the curvature may be neglected. Papadodimas and Raju explain these conditions more quantitatively. In some sense, I believe that the ER-EPR correspondence with its ER bridges is a special visualizable "Ansatz" for solutions of such constraints.

Here I must say that people like Lee Smolin have been saying totally idiotic things about "background independence" for years. They would even criticize string theory for being able to write the Hilbert space of quantum gravity as a de facto Fock space built upon a particular background. Remember all the silly demagogy that no backgrounds can ever be talked about because GR imposes a democracy between all of them, and all this rubbish.

Feel free to impose a ban on talking about backgrounds but then you will be unable to make any calculations that may be compared with the experiments, too. The adjective "background-independent" may be given many meanings and some of them are respectable, at least in some contexts, but be sure that if your interpretation is that "we can't use any backgrounds in calculations at all", then you are throwing the baby out with the bath water.

Because I properly learned many of the computational techniques that existentially depend on the choice of a background (in the spacetime or the world sheet) from Joe Polchinski, I wouldn't have believed 14 years ago that he would ever be saying things "remotely similar" to the Smolinian rubbish on the background independence.

If we want to organize a Hilbert space (or, more typically, its subspace) as some collection of states with a spatial interpretation (states that tell us what is being observed here or there), then we simply need to associate the microstates with a background. We also need to gauge-fix the diffeomorphism gauge symmetry or redundancy, if you wish. Only when that's done is it possible to define how local field operators act in between the states in this subspace of the Hilbert space. It's clear that if you create too many things in your background, or if you deform the geometry by too many gravitons, to be more specific, the added gravitons or the backreaction to the added matter make the original background's geometry an unnatural (or perhaps more accurately, practically not too useful) way to measure distances and times. You had better pick a different background to parameterize the relevant portion of the Hilbert space if you consider states whose geometry is too different from the original background. But you must choose

*a* background, because trying to leave the "job to measure the geometry" to the microstates without a choice of background requires a decomposition of the gravitons' Fock space states into coherent states which isn't unique.

**ER-EPR's definitions of operators are clearly background-dependent, too**

The state-dependence – well, really background-dependence – of the definitions of the black hole interior (and perhaps all other) local field operators is something most tightly associated with the insights by Papadodimas and Raju. But I believe that the Maldacena-Susskind ER-EPR correspondence makes this inevitable background dependence equally if not more self-evident.

Why?

It's simple. They say that the Hilbert space of one Einstein-Rosen bridge (a pair of black holes geometrically connected by a non-traversable wormhole) is the same Hilbert space as the Hilbert space of two faraway black holes (that are allowed to be entangled). Clearly, these two pictures of the same Hilbert space envision completely different background spacetimes – the spacetimes have different topologies, in fact. So the definitions of field operators in the black hole interior(s) are clearly different in these two pictures. In other words, the definition of local field operators depends on whether you describe the same Hilbert space as two black holes that can get entangled later (but you're "expanding" around the microstates for which the entanglement is low and the black holes are assumed to be independent to start with) or as the Hilbert space of a single Einstein-Rosen bridge with just "one interior" (you're expanding around a particular microstate for which the entanglement entropy is maximized; note that there can be many such maximally entangled microstates for which the bridge is correspondingly "twisted").

In other words, the definition of the local field operators is background-dependent, i.e. dependent on the choice of the spacetime background you have to make manually and subjectively before you start your calculations. It's clear because the local operators depend even on the topology, which is totally different in the two choices. The two black holes have two interiors while the Einstein-Rosen bridge only has one component of the interior. For various situations or classes of microstates, one of the two descriptions is more convenient or practical than the other description, but there can't be a universal law that would make one description more correct than the other one a priori. You must predecide how many components the interior(s) has (have) before you start to talk about the field operators in the interior(s).

Finally, I must say that I believe that most of what I wrote above isn't my exclusive original insight but just a reinterpretation, in different words, of some insights made by Papadodimas and Raju. If this is not a legit way to describe what they concluded, they will tell me and I will inform you, too.

I like to think about the ER-EPR correspondence but again, I believe it is just a more specific, visualizable "Ansatz" for how to write the field operators at different places, and the general, non-visualized principles for the operators were already found by Raju and Papadodimas (and perhaps others whom I may have slightly overlooked). The Raju-Papadodimas conditions for the mutual relations between the field operators start to break down once you arrive at short enough distances where the Einstein-Rosen bridges associated with the Hawking radiation become visible.

## snail feedback (8):

like! I will comment on this! With LM's permission, I might have some questions in 1-2 days... :)

This is a nice inspiring post that gives me something to think about, I'll certainly have to reread it another day before midnight :-)

In particular, the renormalization analogy for how Polchinski and colleagues try to obtain a background-independent definition of local operators inside a black hole piques my interest. So could one also see that what they try does not work from the fact that gravity is not renormalizable?

Sorry if I am off the mark ... :-/

I think you must be right - this procedure must be provably impossible from GR's non-renormalizability, too, although my attempted proof would probably sound chaotic and incomplete at this point. But the non-renormalizability is a technical way to see that the metric tensor can't be a good variable up to arbitrarily short distances, one of the requirements for Polchinski's prescription to be doable with the fine precision that I pointed out.

Hi Lubos,

Kind of off-topic: do you find the following article in Nature

balanced, reflecting the current status of ideas? Personally I don’t. Theories in the fringe of physics research, attracting little attention, are overrepresented.

http://www.nature.com/news/theoretical-physics-the-origins-of-space-and-time-1.13613

Dear Giotis, I read it yesterday and found it much better than the average. It's not about all of theoretical physics, of course, but I think that fringe theories and the core of respectable current research are given about 50% each, which is a much higher percentage for the credible physics research than average articles about similar topics offer. Mark van Raamsdonk may smile as loop quantum gravity and CDT - which manifestly have nothing to say about the entanglement in QG or thermodynamics in QG - were placed in the middle of his topics as if they were solving some problem at the heart of his thinking.

Now, I don't think that Mark is the only or unchallenged researcher in similar matters but I surely do find - and have found for years - his papers sensible, original, and careful

http://motls.blogspot.com/2009/07/mark-van-raamsdonk-entanglement-glue.html?m=1

and I think he's pretty much the forefather of most recent papers in the whole field that talk about the "origin of spacetime" and its relationships with quantum information (those papers) that are not obviously wrong.

hear hear! :D

Ok, my comments:

First, let me note that I think analytic continuation has a nice collateral effect when analyzing the above problems. Mathematically speaking, the definition of a metric is somehow more restrictive when dealing with topological spaces. The same is not true for the definition of continuity, which can be constructed easily without the use of notions like distance or "subtraction of positions". In this sense the use of continuity and continuous mappings is essential. Next, of course, the holographic principle states relatively clearly that field theories over-count the degrees of freedom. I may wonder if other dualities may have some interesting effects on the problems described... This being stated, I understand the BH horizon as a surface that encodes an amount of information (the whole of it, by that means, but in a different, more compact way). The holographic principle assures us (more or less) that information must be representable in that way but it doesn't say it is the only way one can represent it. One can, in the end, represent it quite "inefficiently", giving the "image" of an N-dimensional world as we see it around us or as we may see it inside a BH.

Of course I appreciate the fact that you finally understood what I mean by "geometrical uncertainty principles" and used it in the argument. Of course I agree with my idea ;)

I also have some inclination for the beauty of the arguments related to how one could infer fundamental restrictions on knowledge from apparent engineering type problems...

I would like to understand more about this whole idea of "background independence". If I get it right it cannot have the interpretation that one "cannot use a background"... Of course one can and the problem appears to me analogous to the connections between geometry and topology. Some aspects of differential geometry can be related to topology, others not (but my analogy may be excessive)...

Excellent post! I especially liked the part about changing the background by perturbing the metric.
