I've been aware of the unmatched internal consistency, richness, and predictive power of string/M-theory for something like two decades, and my understanding of the theory's unique features has kept growing pretty much every year since then.
Bohemian [not Czech] Gravity [not Rhapsody]. (Alex Maloney's student) Tim Blais' Queen-based music-video edition of Joe Polchinski's string theory textbook (1,000+ pages compressed to 8 minutes). 70,000+ views in 2 days (and 370,000+ in 3 days); I guess that the master's thesis he was writing at the same time will be read by "somewhat" fewer folks.
But as recently as a decade ago, I was also immensely impressed by (and, later, proud of) the intellectual strength of the string theory community. What I mean is that I wasn't aware of a similarly large, comparably well-defined group of people with the same or higher education, IQ, and especially the integrity and patience to evaluate an intellectual problem in depth and to avoid easy but superficial and ultimately wrong answers and solutions. It was the only "community" I knew that would avoid groupthink – or that would only accept well-established theses as the basis of anything that others could call groupthink.
It's not clear whether I continue to enjoy the latter excitement today. A decade ago, many people began to write lots of self-evidently stupid, soft-science-styled, and scientifically indefensible things about the anthropic principle and "typicality". It was annoying but it could have been a downward fluke. In the last five years or so, it has become so tolerated – and we should perhaps say "fashionable" – to write and say so many fundamentally wrong things about so many topics that I would no longer claim that it's a "community" that is in charge of the fantastic insights that have been accumulated. The insanities haven't avoided some famous names in the field.
Matt Strassler wrote about the 2013 Kallosh-Shenker-fest (congratulations to Steve and Renata!) and some memes spread at the conference – especially those by Leonard Susskind, to be discussed below – remind me of the comments Richard Feynman sent to his wife from a similar 1962 Gravity Conference in Warsaw.
Feynman would write:
“I am not getting anything out of the meeting. I am learning nothing. Because there are no experiments, this field is not an active one, so few of the best men are doing work in it. The result is that there are hosts of dopes here (126) and it is not good for my blood pressure. Remind me not to come to any more gravity conferences!”

The constant comparison of hypotheses with detailed data is important for the health of a science. In the context of the theoretical work in string theory, the role of the detailed experimental data was largely played by lots of calculations of many detailed quantities in diverse situations. What I mean is that the boring calculations of detailed properties (masses, entropy etc.) of particular objects in particular backgrounds (ersatz experimental data) were deciding the fate of "big hypotheses" such as dualities.
To a large extent, the "ersatz experimental data" work even more reliably than the ordinary experimental data because their "error margin" is much smaller. While string theory has never shown us any "totally universal definition" of what She is, any "social security card", it has allowed us to calculate lots of things with an amazing precision and these results could be used to induce "big hypotheses" such as the dualities of many kinds (S-duality, T-duality, U-duality, the AdS/CFT correspondence, the Matrix models' equivalence to the vacua of string/M-theory, and so on) or the existence and smoothness of topological transitions, among other things.
Certain people, perhaps especially younger people aged 25–45 who are old enough to have been trained in everything they need to know (in this environment increasingly drowning in nonsense) but young enough not to replace hard work and precision calculations with philosophical guesses and "bogus revolutions they may already afford to spark", tend to carefully compare various general statements about quantum gravity with the things they know from the "data" – their detailed AdS/CFT calculations – and they just seem to know when something is wrong.
When people begin to ignore the data and focus on overly philosophical arguments, it's probably easy for them to go astray.
Lenny Susskind's most eye-catching comments were summarized by Matt Strassler as follows:
In fact, we heard this from none other than Lenny Susskind (famous for his efforts, along with those of ‘t Hooft and many others, to oppose Hawking’s view that black holes require no revision of quantum mechanics, but now himself deeply puzzled by the firewall problem — the failure of what Susskind called `complementarity’). Susskind stated clearly his view that string theory, as currently understood, does not appear to provide a complete picture of how quantum gravity works.

Bizarre. And it's not just Susskind; Joe Polchinski said very similar things recently. I can't think of any interpretation of this comment except for interpretations that make the statement either self-evidently wrong or a vacuous tautology.
The inserted words "as currently understood" soften the otherwise hopelessly weird proposition. But they soften it in a way that doesn't really make the sentence any more valid – just a bit more blurred. The words "as currently understood" may mean that we haven't yet understood and (using string theory) answered all questions that could be asked. That's indeed a true statement, a tautology. We will probably never answer "really all conceivable questions", so if the answer to all conceivable questions is your definition of "completeness", such completeness will never be achieved.
Alternatively, the words "as currently understood" are meant to express the speaker's idea that our current understanding of string theory isn't just incomplete in the sense that we haven't answered all questions we could invent; instead, there is a qualitative "defect" or "completely missing pieces" in the system of ideas and laws of physics that we currently call "string theory", and this defect will have to be repaired or the hole will have to be filled, thus producing a de facto different theory that could be complete.
This statement isn't vacuous but I think it is indefensible given the available evidence.
Well, don't get me wrong. Our current knowledge of string theory is "incomplete" in the sense that (even in principle) we can't calculate the observables "nonperturbatively exactly" on any background we may find – typically, we have expansions and/or complete descriptions that are valid for superselection sectors only. But this is not the incompleteness that could be relevant for the black hole information puzzle (also, the currently discussed incompleteness has nothing to do with the hypothetical incompleteness of our present string-theoretical predictions about the initial state of the Universe or vacuum selection if a more complete picture of these matters exists at all). What Susskind's sentence clearly refers to is a hypothetical incompleteness that affects all sectors of string theory, including e.g. those that have a description in terms of Matrix Theory or AdS/CFT.
But these descriptions are demonstrably totally exact and complete.
The BFSS matrix model (and yes, SS stands for Shenker-Susskind rather than Schutzstaffel) is the clearest example of this completeness. For a finite \(N\), it's no more mysterious or able to store "subtleties" than the usual undergraduate non-relativistic models of quantum mechanics. The wave function is a set of functions of \(9N^2\) bosonic variables. No renormalization is needed. The theory is clearly as well-defined and complete as those non-relativistic models of quantum mechanics. Still, the evidence is overwhelming that it describes all the dynamics of objects we expect in the 11-dimensional vacuum of string theory, namely M-theory. It has the graviton supermultiplet, its correct interactions, branes, black holes, everything you can think of, and the model behaves correctly under compactification etc. The black holes evaporate in the model and the evaporation process is unitary.
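To fix ideas, here is the schematic form of the BFSS Hamiltonian – with signs and normalizations that vary across the literature, so take this as a sketch of the structure rather than a canonical convention:

\[
H = \mathrm{Tr}\left(\frac{1}{2}\,\Pi_i \Pi_i - \frac{1}{4}\,[X_i, X_j][X_i, X_j] + \frac{1}{2}\,\theta^T \gamma_i [X_i, \theta]\right), \qquad i, j = 1, \dots, 9,
\]

where the \(X_i\) are nine Hermitian \(N\times N\) matrices – which is exactly where the \(9N^2\) real bosonic variables come from – the \(\Pi_i\) are their conjugate momenta, and \(\theta\) is a 16-component matrix fermion. Structurally, this is just quantum mechanics of finitely many degrees of freedom with a quartic potential, which is why no renormalization is needed.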
The AdS/CFT examples are analogous except that they're field theories so some renormalization treatment may be needed. You could try to put these theories on the lattice, assuming that you will guarantee that SUSY is recovered in the continuum limit, and so on. Again, assuming that quantum field theory is "complete" in the usual sense, the CFT is a complete description of the particular AdS superselection sector of string theory which is a theory of quantum gravity.
That doesn't mean that we are currently able to answer any question about what observers will observe at any place in the bulk. We may answer some questions but not others. But even if some questions of this sort couldn't be answered even after lots of work, it wouldn't mean that the AdS/CFT description of this superselection sector of quantum gravity is incomplete. Instead, string theory authoritatively tells us that if we want a complete, exact description of this superselection sector, we shouldn't be satisfied with a field theory located in the bulk. Instead, we should use the boundary CFT. This boundary CFT makes the physical properties of objects localized in the bulk "hard to read", especially if these objects are inside black holes, but this is not a sign of incompleteness. Instead, it is a lesson that string theory has clearly taught us. If we want to talk about exact observables only, it's simply not true that physics may be thought of as being composed of sharply localized objects in a predetermined classical geometric background, at least not in a uniquely specified way.
The very assumption that there exists a black hole interior (and the strict event horizon that separates it from the rest of the Universe) is an approximation. Exact stringy rules of evolution don't allow such a conclusion to be 100% certain, ever. The probability may converge to 100% for a star collapse but the remaining deviations from 100% are always necessary to restore the unitarity and guarantee other high-precision consistency conditions.
Much like in similar situations in physics where people are confused and talk about "incompleteness", what they actually see is that the theory is teaching us something that is incompatible with their prejudices. They don't want to discard their invalid prejudices, so they start to invent negative-sounding adjectives to describe their discomfort with the theory. None of the adjectives correctly captures what is actually the source of the problem – just some subjective emotional discomfort – but they try to present the discomfort as an objective vice of the theory, anyway.
I am primarily referring to the tradition of anti-quantum-mechanics pronouncements that was pioneered by Albert Einstein. Einstein often said that quantum mechanics was "incomplete" but it was demonstrably just a diplomatic codeword for his true belief, namely that quantum mechanics must be "fundamentally wrong". This claim of mine is easy to verify if you read the EPR paper. They were assuming "local realism", so they believed that the photon pairs just couldn't be correlated in several different quantities that may be measured on them after they're separated (e.g. polarizations with respect to various axes that are chosen the same for both photons in the pair). Because quantum mechanics predicts the correlations in all of them – because entanglement is what quantifies all correlations in quantum mechanics and entanglement is able to produce "more and stronger correlations" than any classical model of correlations – they believed that quantum mechanics had to be falsified once the EPR experiments were performed.
They actually believed that quantum mechanics was wrong and would be falsified once such experiments were performed. The adjective "incomplete" was just a diplomatic codeword that was muddying the waters. Needless to say, quantum mechanics wasn't wrong. The experiments beautifully confirmed its predictions. All the big shots who had mastered quantum mechanics much better than Einstein had were free of doubts – already at the time when the EPR paper was published and arguably years before that, too – that quantum mechanics would work correctly in those EPR experiments as well. This fact unambiguously followed from all the evidence.
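The gap between classical and quantum correlations that decided this debate can be checked numerically in a few lines. Below is a minimal sketch (the `spin_op` helper and the choice of angles are mine, for illustration only): for the singlet state, the CHSH combination of four correlations reaches \(2\sqrt{2}\approx 2.83\), strictly above the bound of 2 obeyed by any classical ("local realist") model of correlations.

```python
import numpy as np

# Pauli matrices; a spin measurement along an angle in the x-z plane
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2): the EPR pair
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum correlation <psi| sigma_a (x) sigma_b |psi> = -cos(a - b)."""
    op = np.kron(spin_op(a), spin_op(b))
    return (psi.conj() @ op @ psi).real

# Standard CHSH measurement angles
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), beating the classical bound of 2
```

The value \(2\sqrt{2}\) is, in fact, the maximum that quantum mechanics allows (Tsirelson's bound) – "more and stronger correlations" than any classical model can produce, exactly as stated above.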
Susskind's and Polchinski's claims about the incompleteness of the stringy description of quantum gravity are analogous to Einstein's claims about the incompleteness of quantum mechanics. In fact, it's not just an analogy; Susskind's and Polchinski's confused remarks about the incompleteness of string theory or AdS/CFT are a special case of Einstein's confusing remarks about quantum mechanics. Why?
I have already said that "incomplete" was just a code for "wrong". One may see that many things that they – and others – have recently said about quantum gravity directly clash with some insights we can derive from AdS/CFT. But even the fundamental drivers behind these "incompleteness" thoughts are the same as they were in Einstein's case: it's the idea that locality has to work, and work in the way described by the intuition of classical physics. But this assumption just isn't true.
To be more specific, there are two basic, mutually related, and invalid assumptions made by the proponents of the firewall "problem". The first one is that they seem to assume that the black hole interior is described by degrees of freedom that are completely independent of the degrees of freedom describing the exterior. In reality, this precise independence is linked to the perfect vanishing of the commutators of fields at spacelike-separated points. However, this perfect vanishing of the spacelike commutators is justified by the Lorentz symmetry.
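In equations, the flat-space assumption being smuggled in is microcausality: for any two local field operators,

\[
[\hat{\phi}(x), \hat{\phi}(y)] = 0 \quad \text{whenever } x \text{ and } y \text{ are spacelike-separated,}
\]

which is an exact statement in a Lorentz-invariant quantum field theory on a fixed flat background. The point being made here is that on a black hole background, which breaks that symmetry, only an approximate version of this vanishing survives, with corrections that are tiny but nonzero.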
Because the black hole background spontaneously breaks the Lorentz symmetry of the spacetime – and the breaking is strongest at the distance scales comparable to the black hole (curvature) radius – the usual consequences derived from special relativity don't have a reason to hold. As experts have at least believed for two decades, although the evidence wasn't always as clear and strong as it is today, the Hawking radiation involves physical phenomena that look slightly nonlocal if pictured on the pre-existing, fixed, classical, black-hole-containing background.
The nonlocality may be visualized in many ways – and all of them are arguably right in some way: First, Hawking radiation is a quantum tunneling of a sort. The information may get out for the same reason why alpha particles may penetrate a classically forbidden barrier during the alpha-decay. The "absolute ban" only becomes valid in a "classical limit" when the barrier (in the case of the black hole, the size of the black hole) becomes infinitely large. But whenever it's finite, the classical conclusion that the process is strictly prohibited is simply incorrect.
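The analogy with alpha decay can be made quantitative with the textbook WKB estimate for a rectangular barrier – a toy model in natural units, of course, not a black hole calculation. The transmission probability is exponentially suppressed by the barrier width but remains strictly nonzero for any finite barrier; the classical "absolute ban" only emerges in the infinite-barrier limit.

```python
import numpy as np

def transmission(V, E, L, m=1.0, hbar=1.0):
    """WKB estimate T ~ exp(-2*kappa*L) for tunneling through a
    rectangular barrier of height V > E and width L (natural units)."""
    kappa = np.sqrt(2.0 * m * (V - E)) / hbar
    return np.exp(-2.0 * kappa * L)

# The suppression is exponential in the barrier size, but never exactly zero
for L in [1.0, 5.0, 20.0]:
    print(f"L = {L:5.1f}:  T ~ {transmission(V=2.0, E=1.0, L=L):.3e}")
```

In the black hole analogy, the role of \(L\) is played by the black hole radius: the larger the hole, the more suppressed – but never strictly forbidden – the escape of the information.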
Since 2005, Hawking has argued that the unitarity of his evaporation process is preserved due to the intermediate histories that don't look like the pre-existing black hole at all. The final states of the Hawking radiation may be reached through non-black-hole (or different-black-hole and elsewhere-black-hole) intermediate states as well and these other intermediate histories are demonstrably necessary for the exact unitarity, too. It's not shocking that if one tries to visualize these "totally differently looking intermediate histories" as states on a fixed classical background, they will look nonlocal and/or superluminal. These effects are enough to get the information out. In some sense, it's guaranteed that these effects are equivalent to the quantum tunneling.
The ER-EPR correspondence offers one more visualization of the nonlocality: the production of Hawking pairs also produces some thin Einstein-Rosen bridges between the outgoing radiation and the black hole interior. These bridges guarantee that the degrees of freedom at these two faraway regions are really not "that far from each other". They imply that what is done with the Hawking radiation at infinity may subtly affect the observations done inside the black hole. This correspondence seems to invert the question "who affects whom" – it's the exterior events that influence the black hole interior (rather than "the interior affects the Hawking radiation"). But there is no sharp discrepancy here because the bridges connecting the two places don't look like a simple evolution operator by a positive or negative time. They're highly scrambled, so one can't really say which of the regions supports the "cause" and which of them harbors the "effect". The only invariant claim is that the field operators in the two regions don't quite commute with each other. (No observer may perceive the ordering of the two regions in time – no observer may measure both the late Hawking radiation and the interior of the still large black hole. There's no asymmetry similar to Gell-Mann's thoughts about whether he should first try MIT or suicide. This fact could be derived from the consistency of the theory, the consistency of the "usual or decohered histories", and the non-vanishing commutator, too. I have believed, perhaps due to the lessons taught to me by my ex-adviser Tom Banks, that all those remarks – not contradicting any of the AdS/CFT data etc. – have belonged to the rudimentary lore since the black hole complementarity papers; clearly, when it comes to the most famous co-father of complementarity, I had to be wrong.)
I have already implicitly mentioned the other mistake made by the firewall champions and the "AdS/CFT is incomplete" folks: they're just incorrectly assuming that it's always enough to imagine that the quantum state is a "tiny" perturbation of a classical background. The major history proceeds classically while the quantum corrections are "obliged" to be tiny. But this assumption isn't true, either. The intermediate histories that look very different even macroscopically are critical for the exact unitarity, as argued by Hsu and others.
You see that both mistakes are related and they effectively mean that the people claiming that there exists a "firewall problem" are treating the background geometry as a classical object. They think that it "objectively" (in the classical sense) has to look like a particular classical black hole geometry. This classical background imposes speed limits "exactly", they incorrectly think, and it also implies that the spacelike separated commutators with respect to this geometry "exactly" vanish, the spacelike separated field operators are "completely independent", and all degrees of freedom and signals must have a simple, not-faster-than-light description as small perturbations of one classical geometry.
All these assumptions are invalid – which also means that all theorems derived from these assumptions are inconsequential for the physics of quantum gravity. Even the macroscopic appearance of the history is a quantum degree of freedom, intermediate histories where the macroscopic appearance differs have to be Feynman-summed over (see and don't mess with the path integral), and if we can describe these histories as perturbations of one background at all, it's inevitable that there will be contributions that look causality-violating. If the benchmark classical background geometry is suitably chosen, they will be small enough that the semiclassical calculation doesn't produce any excessively large errors for coarse-grained observations that a real-world observer may perform. But they will be large enough to guarantee unitarity.
Again, let me point out that the error that Susskind, Polchinski, and others have been making in recent years is a special case of Einstein's error in the EPR papers. Einstein was assuming that once the two parts of an entangled photon pair separate, they must have objective properties in the classical sense, which, via Bell's-theorem-like reasoning (if I use the equivalent "newer" toolkit), implies that the correlations can't be too large or too universal. In the same way, the "firewall problem" advocates think that the properties of a black hole, such as its position, may be treated as classical observables once the black hole is created. While Einstein (and EPR) were thinking about small systems and pairs of particles, their point was much more general, and the "firewall problem" champions' mistake (or two related mistakes, in the counting done above) is not just analogous to Einstein's error; it is a special case of it.
As you can see, I think that most of these misunderstandings, especially by the big shots, boil down to their subtle (?) misunderstandings of quantum mechanics, more precisely attempts to treat certain questions classically even though it is totally paramount to treat them quantum mechanically to avoid "paradoxes" they want (?) to avoid in their final understanding of quantum gravity.
But I want to close this discussion of the "anti-quantum zeal". There's another general point I find bizarre, namely the eagerness of some folks to suggest that there is something fundamentally wrong with AdS/CFT. It is such an extraordinary yet indefensible claim that I can't understand why there's nothing in their minds that would stop them from saying things that are this stupid.
The AdS/CFT correspondence has been shown to work in something like 10 thousand papers. We know that it doesn't just reproduce supergravity in one background and in the planar limit. The correspondence has worked for dozens of (classes of) backgrounds. It reproduces the excited string modes, their interactions, wrapped branes of all kinds we can think of (Witten: via baryons), black holes and their thermodynamics (entropy, temperature), and many other things that have to be present in a consistent theory of quantum gravity (also known as string/M-theory). It's clear that the bulk gravity respects the equivalence principle outside event horizons – it has to, by consistency (after all, a simple argument shows that spin-two fields can't couple differently; when investigated in detail, this condition is very strong, however, and singles out string/M-theory as the only solution to the consistency criteria). So claiming that suddenly there has to be a problem, a wrong prediction by AdS/CFT about some quantity, sounds as likely as the claim that the Sun won't rise tomorrow. Why would you believe such a thing after billions of years when the Sun rose just fine, especially if you have models indicating that the Sun should keep on working for 7.5 billion more years (and AdS/CFT is valid forever)?
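To put numbers on the "black holes and their thermodynamics" entry in the list above, here is a back-of-the-envelope evaluation (in SI units, for a Schwarzschild black hole of one solar mass) of the Bekenstein-Hawking entropy \(S = k_B A c^3 / 4G\hbar\) and the Hawking temperature – the kinds of quantities whose microscopic counting serves as the "ersatz experimental data" for the correspondence:

```python
import math

# SI constants (approximate values)
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M = 1.989e30  # one solar mass in kg

r_s = 2 * G * M / c**2                # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2              # horizon area
S_over_k = A * c**3 / (4 * G * hbar)  # Bekenstein-Hawking entropy / k_B
T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)  # Hawking temperature

print(f"S/k_B ~ {S_over_k:.2e}")  # ~ 1e77: an enormous number of microstates
print(f"T_H   ~ {T_H:.2e} K")     # ~ 6e-8 K: the evaporation is absurdly slow
```

The huge entropy is exactly what the boundary CFT has to account for microstate by microstate – and in the cases where the counting has been done, it does.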
My point is that what AdS/CFT in particular or string theory in general says about the black hole interior (and the mechanisms that allow the information to get out during the Hawking evaporation) may look confusing. In different descriptions, the Hawking evaporation requires quantum tunneling, superluminal trajectories, superpositions with a different spacetime, complicated maps relating no-longer-mutually-independent-or-commuting internal and external operators, thin and/or heavily twisted Einstein-Rosen bridges, dependence of field operators on the microstate, freedom to choose topologically different backgrounds that nevertheless allow one to describe the same Hilbert space (the Hilbert space for an ER bridge is the same as the Hilbert space of two black holes, just recommending a different basis), and so on.
The correct entries in the list are ultimately equivalent to each other and there can't be any contradiction between them. But even if it were impossible to design any "localized yet unitary way" to think about the dynamics of the Hawking radiation from the bulk viewpoint, it wouldn't imply that there is any incompleteness in AdS/CFT or string theory as a quantum gravity theory. Instead, such a situation would imply that there is an incompleteness of any local description of quantum gravity which is pretty much a problem of all the theories that are not string theory (or AdS/CFT). So it's really ironic if and when defects of this "local QFT approach" are blamed on an approach that is as different as the consistency allows. Among other lessons, string theory teaches us that the most exact microscopic description of quantum gravity in the AdS space is the boundary CFT (there may exist other, equivalent, totally exact descriptions as well, but this is not guaranteed). You may try to decode a local description – assuming a particular classical background – of the bulk dynamics from string theory but if you fail, it's either your failure or something that you just objectively shouldn't be doing, not a failure of string theory.
String theory is also carefully teaching us which questions are good to ask and which are problematic. For example, people quickly learned that the Lorentz-covariant amplitudes calculable from string theory aren't the local Green's functions; string theory directly leads us to calculate the on-shell S-matrix only. We now know (well, people rather quickly realized) that this is not a defect of string theory but an important lesson. There can't be covariant Green's functions in quantum gravity because the diffeomorphism symmetry mixes field modes at some momenta with those at different momenta. Only if the geometry may be "glued" to some classical value – and this may be done at infinity (in the Minkowski, AdS, or perhaps similar spaces only) – may the ambiguities produced by the diff symmetry be neglected. That's why the S-matrix amplitudes are well-defined but their off-shell generalizations are not.
Analogously, if string theory tells us that the infalling observer may only predict his or her observations in some approximation or his or her predictions and observations depend on some conventions, choices about the coarse-graining, or they have some irreducible error margin or risk to be wrong, it's a lesson we should learn, not a reason to consider the string-theoretical description incomplete.
I found Lenny's comments so weird and contradicting so many insights he helped to find in his half-a-century career that I can't resist asking him: When will be your new paper with Lee Smolin out, Lenny? ;-)