- It is desirable to find a background independent formulation of string/M-theory
- Such a formulation would be likely to answer the questions whether the landscape approach to string/M-theory is correct; if it is not, why; and what it should be replaced with.
These are the anthropic topics, and I have explained many times why I agree with Smolin and others about them. However, I can't agree with the other points about background independence, especially Lee's opinions that
- We should try to revive Leibniz's relationism or Mach's principle
- Philosophical reasoning about background independence is relevant for the derivation of the physics of a particular background
- It's better if your theory contains no space (possibly not even an emergent one) rather than if it does
- Quantum mechanics should be replaced by something else that goes "beyond it"
and many others that will be discussed in this text.
See also: Background independence in AdS spaces

OK, let me start with the questions about relationism and Mach's principle. I highly recommend the second popular book by Brian Greene, "The Fabric of the Cosmos", where the relative vs. absolute debate is covered in the first chapters. And the presentation is very nice.
What do I think about these issues? Unlike others, I have never been impressed by the relationist ideas and Mach's principle.
It's easy to agree with Leibniz's "principle of sufficient reason": one must rationally justify every choice made in the description of Nature. However, this principle must be interpreted properly (justification: a proper interpretation is always a better choice than a wrong interpretation). A proper interpretation always allows you to make a choice because this choice leads to an agreement with this interesting world. More concretely, the coordinates and similar things are useful concepts to describe reality despite some redundancies and symmetries, as we will argue below. The success of Newton's theories is undoubtedly a sufficient reason to justify every single piece of his theory.
On the other hand, Leibniz's "identity of the indiscernible" - which says that the objects with the same properties must be identified - is technically wrong in all theories we've been using in the last 300 years. If two objects/states A,B are related by a global symmetry transformation, they have the same properties but they must still be considered as two distinct objects (configurations or states in the Hilbert space) - two objects that are not equal (=) - otherwise the mathematics would break down. Leibniz's approach to the "identity question" may be attractive philosophically, but in my opinion, it is more important that it is technically untrue in all successful theories of reality and we reasonably expect that it will remain untrue in the future theories, too. The only allowed interpretation is that Leibniz has defined another relation (equivalence) that is different from the usual mathematical identity.
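A toy illustration of this point (my own example with an invented lattice model, nothing from the book): two quantum states related by a global symmetry share all measurable properties, yet they are distinct vectors in the Hilbert space and must not be identified.

```python
import numpy as np

n = 8
# A translation-invariant hopping Hamiltonian on a ring of n sites.
H = -(np.eye(n, k=1) + np.eye(n, k=-1))
H[0, -1] = H[-1, 0] = -1.0   # periodic boundary: site n-1 couples to site 0

# A state spread over sites 0 and 1, and the same state translated by one site.
psi = np.zeros(n)
psi[0] = psi[1] = 1.0 / np.sqrt(2.0)
phi = np.roll(psi, 1)

energy = lambda s: s @ H @ s   # expectation value of the energy

# Same physical properties (the translation commutes with H) ...
print(np.isclose(energy(psi), energy(phi)))   # True
# ... but they are NOT equal as elements of the Hilbert space.
print(np.allclose(psi, phi))                  # False
```

The two states are "equivalent" under the symmetry, but treating them as literally identical (=) would break the linear algebra: superpositions of psi and phi, for example, are perfectly good and distinct states.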
Obviously, not all physicists share my viewpoint that the verifiable truth is more important than the philosophical prejudices; in other words, the philosophical prejudices may be incorrect even if their defenders sell them as deep ideas (and these chaps sell themselves as smart scientists and philosophers). But let us return to the question "Does space objectively exist?". There are aspects of "relationism" that are part of the standard physics canon; and aspects that are an unnecessary philosophical addition that usually puts us on a wrong track.
In classical physics, the Galilean symmetry and the principle of relativity are alright. It's a well-known fact that you can't determine whether your train is moving or not. This fact is reflected by a mathematical property of the equations of motion: they only involve the acceleration and the differences between positions of objects; they don't directly depend on velocities. (Such a dependence may appear when you consider friction; friction typically picks a privileged reference frame such as the frame associated with water in the ocean.)
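This boost invariance can be checked numerically. The snippet below is my own sketch, not anything from the text: two particles joined by a spring obey equations that depend only on their separation, so adding the same constant velocity to both bodies (a Galilean boost) leaves the relative motion unchanged.

```python
def simulate(x1, x2, v1, v2, k=1.0, m=1.0, dt=1e-3, steps=2000):
    """Symplectic-Euler integration of two unit masses joined by a spring.
    The force depends only on the difference x2 - x1, never on velocities."""
    for _ in range(steps):
        f = k * (x2 - x1)        # Hooke's law on the relative coordinate
        v1 += (f / m) * dt
        v2 += (-f / m) * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, x2

# Rest frame vs. a frame boosted by v = 0.7 (both initial velocities shifted).
xa, xb = simulate(0.0, 1.0, 0.0, 0.0)
ya, yb = simulate(0.0, 1.0, 0.7, 0.7)

# The separation -- a quantity the equations actually constrain -- agrees.
print(abs((xb - xa) - (yb - ya)) < 1e-9)   # True
```

The individual coordinates differ between the two frames, of course, but only by the overall drift vt; the dynamics of the difference is frame-independent, which is exactly what the principle of relativity demands.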
Special relativity - an ideal amount of a deformation
In special relativity, the Galilean symmetry is deformed into the Lorentz symmetry which is in a sense deeper, as argued in the article about the depth. The Lorentz symmetry respects the same "beauty" as the Galilean symmetry (the beauty means, in this case, the equivalence of inertial observers) but it is more general because a parameter - equivalent to the speed of light - has a more generic, finite value. The Lorentz symmetry is a strong constraint on the possible form of the physical laws. And every principle that has a chance to be true (or at least deep) and that has the capacity to constrain the equations of physics is welcome. The finiteness of the speed of light links mass and energy by Einstein's most famous formula and it reduces the number of independent physical fundamental concepts in many other cases, too.
The Galilean spacetime implied that the space and time were independent quantities (much like the energy and mass, and others). The Lorentz deformation is an ideal one (a golden compromise): it essentially identifies space and time (and mass and energy) with a power of the speed of light as the conversion factor. However, any choice of the speed of light leads to a physically equivalent theory (with a different choice of units). Had we made a larger deformation, there would be new undetermined dimensionless parameters. Special relativity shows the "optimum of beauty". Quantum mechanics is doing something similar and identifies time with energy (with hbar being the conversion factor), again without introducing new dimensionless parameters. Quantum gravity ideally sets another constant (Newton's constant or an equivalent) equal to one, which allows us to calculate all dimensionless numbers.
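The "deformation" can be made explicit. The standard boost along x (my notation, nothing specific to the book) reads:

```latex
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
x' = \gamma\,(x - vt), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} .
```

As $c \to \infty$, $\gamma \to 1$ and the $vx/c^2$ term drops out, recovering the Galilean $t' = t$, $x' = x - vt$. Any finite value of $c$ gives a physically equivalent theory because $c$ can be absorbed into the choice of units - which is the sense in which the deformation introduces no new dimensionless parameter.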
Coordinates: synthetic vs. analytical geometry
Also, as a kid, I was very impressed by coordinates and the possibility to analyze geometrical questions analytically, by looking at the equations involving coordinates. I don't know whether some of you have had the same feelings, but the mathematical tasks to solve a geometrical problem by "synthetic geometry" without the coordinates always looked like a useless childish exercise to me (which does not mean that one can't get good at it); it was recreational mathematics for children. If you can find the truth by using the coordinates, why shouldn't you use them? Coordinates are great and they are, to some extent, real. They are real modulo translations, rotations, and the Galilean (or Lorentzian) boosts. But there are only 10 parameters for this Poincare group in the whole Universe while the coordinates of the objects you want to study may be counted in thousands. No doubt, most of them are physical. We can't live without them or something more or less equivalent. The Cartesian coordinates, for example, look more fundamental than the angle between a bucket, Mercury, and a Mercedes, so why shouldn't we use them?
The relationist approach seemed to be an attempt to fight against the concept of coordinates; an attempt to pretend that they don't exist or they are unphysical; an attempt that must fail unless the coordinates are replaced by some assumptions that are equally powerful and essentially equivalent (but perhaps more awkward than the coordinates) because the space simply exists, to some extent, and you can't hide it. Also, the relationist approach did not look like a mathematical constraint on the possible form of physical laws. Instead, it was a way to make the questions quantitatively ill-defined. The relationist principles never looked like well-defined symmetries of a physical system; they were a method to show that no choice of the degrees of freedom is good enough for a sufficiently dogmatic person. We may summarize the situation: there was nothing that I would naturally like about the relationist approach.
Don't get me wrong: self-contained "bootstrap" systems that look non-quantitative and uncalculable at the beginning may be fine and very deep in physics but only if they're temporarily uncalculable. Relationism seemed to be a direction that wanted to make things permanently uncalculable.
This description also holds for Mach's principle. According to Mach, the reason why you feel the centrifugal force when you rotate is the distant stars in the Universe; your rotation may be defined with respect to these static stars. If you remove all the stars, you can't distinguish whether you are rotating or at rest, and the centrifugal force disappears. While this idea was presented as a profound one by many popular books, it has always seemed obvious to me that it was a philosopher's nonsense. There can't exist any natural way to transform this paradigm into a set of mathematical objects and equations. The very goal of this approach is to show that everything about the coordinates is always unphysical. We simply know that it's not true: ideal solid bodies follow the rules of the Euclidean geometry (in a good approximation) and all the coordinates in this geometry are meaningful modulo a few symmetries we know. When we add dynamics, it's only the Galilean or Lorentz symmetry that must be subtracted from the reality of the coordinates. The fraction of the unphysical coordinates becomes arbitrarily small as we study ever more complex systems; and I just don't see anything wrong about auxiliary variables in physics.
Also, the Machian approach seemed like one of the confusions about units. If the number of stars in the Universe were smaller, Mach's disciples argue, the inertial force would be smaller by the same factor. But in which units? It seemed clear that one can always use units in which the inertial force is independent of the number of stars. The exception is the case in which the number of stars is zero, but that's a singular case that can't agree with our Cosmos, and that could already be enough to make it uninteresting for a physicist.
Another technical problem with Mach's principle was the following.
Imagine that there are only two stars in the Universe. One is above you and one is below you. You will be able to distinguish rotational motion - except for the motion around the axis connecting the two stars. Does it mean that the magnitude of the centrifugal force should depend on the direction? Such a dependence would break the rotational symmetry of some laws of mechanics. Such a breaking does not seem to occur in reality. Moreover, it is ugly. Also, the argument seems to depend discontinuously on the size of the stars because if the stars have nonzero size (or if they have planets), you should be able to distinguish rotating objects once again. Let me summarize: Mach's principle always looked cheap, ill-defined, and nonsensical to me.
Mach's principle in GR
Einstein was strongly impressed by Mach's principle. It was one of the motivations why Einstein was developing General Relativity. And the resulting theory predicts some phenomena - such as frame dragging - that smell of the Machian flavor if you look at it from the right (or, equivalently, wrong) way.
But eventually, General Relativity killed Mach's principle.
Mach's principle has not only been challenged: it became one of the weird prejudices that often lead you to wrong conclusions. Mach's principle was the main reason why so many people in the 1960s thought that the gravitational waves could not exist in GR; they thought that all such solutions always had to be pure gauge which means that they could be transformed into flat space by a coordinate transformation.
Of course, it's not true. If you analyze the equations in the linearized approximation and if you describe the diffeomorphism invariance using the same language as other gauge invariances - in other words, if you apply the rational and reliable tools of field theory - you will get the right counting of the physical polarizations in a few minutes.
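The counting is the standard linearized-gravity bookkeeping (spelled out here for completeness, in my conventions). In $d=4$ the symmetric perturbation has 10 components, and the gauge symmetry and its residual part each remove 4:

```latex
% linearized diffeomorphisms act as an ordinary gauge symmetry:
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad
\delta h_{\mu\nu} = \partial_\mu \xi_\nu + \partial_\nu \xi_\mu .
% choosing the harmonic gauge \partial^\mu \bar h_{\mu\nu} = 0 removes 4
% components; residual transformations with \Box \xi^\mu = 0 remove 4 more:
10 - 4 - 4 = 2
```

Two transverse physical polarizations survive, so the waves are not pure gauge - exactly parallel to the photon's counting $4 - 1 - 1 = 2$ in electromagnetism.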
A Nobel prize has been given in the 1990s for indirect experimental evidence of the existence of gravitational waves (a binary pulsar is losing energy as it emits the gravitational waves). And I don't think it was an erroneous prize. LIGO, VIRGO, and LISA can tell us much more than just the answer to the simple question about their existence. Gravitational waves simply exist; they are a prediction of GR that is as much justified by the very basic principles of GR as any other phenomenon (such as Mercury's perihelion precession, bending of light, or gravitational red shift). The existence of gravitational waves implies that there is something in the empty space that can oscillate; the "fabric of the cosmos" does exist, indeed. Any theory that wants to reproduce the successes of GR must agree with the existence of gravitational waves. In the low energy limit, they must be described by the same mathematical expressions. Moreover, in a quantum theory, gravitational waves must be coherent states of quanta called "gravitons" that are analogous to photons, quanta of the electromagnetic field.
An attempt to revive Mach's principle means to argue that the gravitational waves do not exist. It is a struggle to return us not only before General Relativity; it is a program to return humankind to the pre-Newtonian era and the dark Middle Ages. Some people may be permanently impressed by Mach's principle and some people may find it shallow after a closer scrutiny. These two groups may be composed of equally nice people. But the difference is that the critics of Mach's principle have a good physical intuition; its advocates are philosophers who are unable to think analytically and quantitatively and they prefer to insist on prejudices that can be shown flawed by a five-minute-long quantitative argument.
Objects vs. relations
There is a lot of other philosophical waste around these questions that does not impress me much. On page 9, Lee starts with his theses. R1 says that "There is no background". We have discussed this one already. Using my language, R2 says that "The degrees of freedom should always be described as relations between the objects, not the objects themselves". I view it as a silly philosophical prejudice. There are great theories where at least some of the degrees of freedom are associated with the objects themselves, not just their relations. These are equally good degrees of freedom; the statement that relational degrees of freedom are "better" is unjustified (it violates a principle of Leibniz above, for example) and most likely incorrect, too. Also, you may view a property of the object A as a quantity describing the relation A-G between it and God, or whatever. There is no real difference.
Moreover, when there are some symmetries, the degrees of freedom assigned to the objects themselves may behave much like the degrees of freedom connected with their relations. R3 says that the degrees of freedom (the "relationships" in this case) evolve with time and the role of time is to order them; well, that's what time usually does, except that Lee seems to mod out by the time reparameterization symmetry from the very beginning, which I find awkward, especially because a good absolute measure of time - including the reparameterization (not just the ordering) - may be provided by gadgets known as "clocks".
Let us agree that if background independence means to return us to the Machian dogmas that the space (and gravitational waves) can't exist, then it's a medieval silliness that we should no longer discuss in the 21st century. The assertion that "the relationist approach is powerful" is ridiculous simply because there does not exist a single relational theory that would describe anything in the real world, at least approximately. The field-theoretical revolution at the end of the 19th century has led us to exactly the opposite way of thinking: even in empty space, there can be fields whose existence is as real as the existence of material objects. They just describe other, field-theoretical degrees of freedom. The gravitational field is one of these fields that is connected with geometry of spacetime and with gravity; it exists much like the electromagnetic and other fields. Quantum field theory has shown that the positions of objects are not only on par with the values of fields in empty space; in fact, the former (particles) emerge from the latter (fields) once you quantize the fields.
Atoms of space
Another weird attempt is to say that the gravitational field only reflects long-distance properties of a complex system of "atoms of space" that can be found in very many different configurations. Well, such an opinion is nothing else than the gravitational counterpart of the luminiferous aether behind electromagnetism, i.e. the gravitational aether. It does not really matter whether one uses new terms such as "spin network", "spin foam", "background independence": these things mean nothing else than new versions of the gravitational aether and they are as discredited as luminiferous aether. If the "atoms of space" had many possible non-equivalent states in which they could be found - but all of these states would look like empty space - then the empty space would carry a (large) nonzero entropy density. Such entropy density is a four-vector whose nonzero value (probably the huge Planckian density!) would massively break Lorentz invariance of the vacuum.
It's clear that no such massive breaking is allowed in a realistic theory - not even a semi-realistic theory. In other words, the Minkowski vacuum must be locally unique. The requirement of the Lorentz invariance - even an approximate, local one - hugely constrains possible types of "compositeness" of space, and it definitely rules out all "chaotic models" of the vacuum. It is incredible that one may obtain something that looks like the empty space (or a gravitational wave) by making a condensate of seemingly complicated objects such as the closed strings with the graviton excitation. Nevertheless, we may show that these closed strings behave just like the metric. (At stronger coupling, the graviton is not composed "purely" of closed strings because they are not the only fundamental objects. For example, at very strong coupling in type IIB, the graviton is almost entirely composed of a D1-brane.) In virtually all other cases, we may show that the presence of some objects in space simply destroys the ability of the space to act as locally Lorentz-symmetric, empty space.
Comparison of aethers
Incidentally, the luminiferous aether in the 19th century was much more developed than the current versions of the gravitational aether. Maxwell designed several models and FitzGerald even produced a working model of aether based on wheels and gears. Be sure that it is not easy to create a system where only transverse waves propagate, but Maxwell and FitzGerald were able to do it and imitate Maxwell's equations. (No model of gravitational aether that would mimic Einstein's equations has been constructed as of 2005.)
Although the 19th century luminiferous aether was much ahead of the current proposals for gravitational aether, it was exactly luminiferous aether whose complete destruction and humiliation became one of the most important symbols of the Einsteinian revolution in physics. Einstein updated and clarified Lorentz's observations that only one electric and one magnetic vector exist in the vacuum and they are not made out of anything more fundamental; he found a symmetry - namely the Lorentz symmetry - relating the electric and magnetic phenomena and showed that this valuable, precious symmetry would be broken by any system of wheels or anything else that you imagine to occupy the empty space. Wheels in empty space have been moved to the trash bin of physics. They have been superseded by a powerful, beautiful, and restrictive symmetry that tells you that no aether is possible. Undoubtedly, this conclusion holds for the luminiferous aether much like the gravitational aether.
I hope that most readers are intelligent enough not to be manipulated by a slightly fashionable philosophical term "background independence" into believing these patently false and discredited ideas about the origin of electromagnetic and gravitational fields. No doubt, many proponents of loop quantum gravity have tendencies to revive Mach's principle, aether, and other things. But these are not debates that should excite 21st-century physicists since they are completely unrelated to any experiments, observations, and conceptual problems with the current theories that explain the cosmos; these are debates about whether we should forget everything we have learned in the last 100 or 300 years and return to the era when philosophical and religious dogmas, not experiments and their lessons, should dictate what is the truth. I guess that we should not.
Background independence of GR
Lee is trying to argue that GR is a "partly relational theory". The "partial relationism" stems from the diffeomorphism invariance. Well, from a rational viewpoint, diffeomorphism invariance is nothing else than an example of a gauge symmetry; an example that prevents us from defining local gauge-invariant fields/operators. It seems that it's the absence of local gauge-invariant fields/operators that Lee sees as the source of his "partial relationism". I don't care about these fancy words; the diffeomorphism group is nothing else than a gauge symmetry, much like Yang-Mills gauge symmetry. In fact, we know that Yang-Mills and diffeomorphism symmetries may be dual to each other in the Kaluza-Klein theory and string/M-theory; in the latter case, they can also transmute into each other. It is definitely a misunderstanding to assign the diffeomorphism invariance a philosophically deeper role than the Yang-Mills symmetry has, for example. Both of them are local symmetries - redundancies of the description.
Also, if someone says that the diffeomorphism symmetry is qualitatively different from Yang-Mills symmetry and its philosophical implications are different and more far-reaching, he or she shows the flawed opinion that the space and the degrees of freedom associated with it are special. They are not special and in string/M-theory, we know that all degrees of freedom - gravitational as well as matter fields - are generated from the same fundamental starting point. This unification is an important philosophical paradigm: all degrees of freedom in a completely satisfactory theory should stem from the same starting point; the existence of one group of degrees of freedom should be deducible from others via consistency requirements and symmetries. Why is my principle deeper than Lee's or Leibniz's principles? It's because it's respected by a theory that is capable of describing the real world at a very fundamental level. Leibniz's principles are just words that are - as far as we can say - simply wrong, and the evidence for his words is purely sociological.
On page 13, Lee also argues that there is no kinematics without dynamics in GR. Well, it's because the diffeomorphisms are able to mix space (kinematics) and time (dynamics) almost arbitrarily. Note that this important rule is also violated in loop quantum gravity. Its proponents argue that it is perfectly OK to study kinematics first, and pretend that the results are independent of dynamics (about which they know absolutely nothing). It's not OK in a generally covariant theory as explained above; the ability of loop quantum gravity to separate kinematics from dynamics reflects its struggle with the time-like diffeomorphisms that will probably never be well-defined in the theory.
Lee makes many statements that can easily be seen to be flawed. On page 13, he says that if we construct a physical description of GR, Leibniz's "identity of the indiscernible" will be respected. However, light-cone gauge string theory in the Minkowski space is a completely physical picture, but it still has non-trivial global symmetries that can transform a state into an analogous, but different state. A similar statement may also be said about the AdS/CFT correspondence or Matrix theory (which is similar to the light-cone gauge example); in both cases, the diffeomorphisms are also removed but global symmetries survive.
On page 15, Lee argues against perturbative quantum general relativity but his arguments actually seem to be directed against perturbative general relativity itself (without the word "quantum") because the word "quantum" plays no role in his arguments. The reality is that perturbation theory is the most important technical tool to study general relativity - one that was instrumental in deriving all major predictions of GR such as the gravitational waves, Mercury's perihelion precession, bending of light, gravitational redshift, and others. The same perturbative techniques are also critical in the investigation of the quantum theory; they allowed Hawking to calculate his radiation and the details of his and Bekenstein's black hole thermodynamics; they tell us that gravitons must exist; they inform us that gravitons become strongly coupled at the Planck scale where a UV complete theory (string/M-theory) must take over. Once again, virtually all tested and reliable conclusions of GR at the classical and quantum level could not have been derived without the help of the perturbative method.
On the other hand, there are no successes whatsoever of the approaches that Lee wants to call "non-perturbative approaches". The main problem is that they don't care about physics, experiments, and the new principles that are revealed by them; they prefer philosophical dogmas from the 16th century. It is a waste of time to discuss these "non-perturbative" speculations in detail. In all cases (causal set theory, loop quantum gravity, triangulation models), the speculations are based on the naive picture of space as being composed of infinitely sharp points - like in the classical theory - which are moreover exactly discrete. All these approaches make incredibly strong assumptions about the physics at the Planck scale whose probability to be incorrect safely exceeds 99.9999999999%; all of them belong to the discredited category of "gravitational aether theories" and no 16th century philosophical principle is strong enough to transform this intellectual waste into a topic for a meaningful physical debate in the 21st century.
Background independence in string/M-theory
This subsection starts with the statement of the physicists that physics around a particular background of string theory may be studied without a background-independent formulation of the theory. This statement is obviously correct as shown by Matrix theory and especially the AdS/CFT correspondence. Lee responds with a quip (or a serious assertion??) that it is not clear whether the AdS/CFT correspondence is a valid conjecture. After 4000 papers of agreements, "no comment" seems to be the only plausible reaction to Lee's speculation.
There is no doubt today that many superselection sectors of the string/M-theory Hilbert space admit a description in terms of theories that are completely well-defined. Locally, we may move into different places of the configuration space (or the landscape). Physics of "other backgrounds" is always, to some extent, encoded in every background dependent description of string/M-theory.
What we don't like is that the descriptions depend on the starting point too strongly. It would be much better to have a universal description that treats all places in the landscape on par with others. Such a description is likely to make the spacetime locality and causality more manifest, too. When a background-independent description of string/M-theory is found, it will provide us with a global view on the landscape. If there are any special places in the landscape, we should see them. The transitions between the different places of the landscape during the cosmological evolution would probably become well-defined mathematically. A background-independent formulation would also be more powerful in revealing new subtle inconsistencies or instabilities of particular backgrounds.
Most of us dream about a background-independent formulation of string theory; but once again, we don't need it to study a particular background (superselection sector) of string theory. If Lee really predicted that one cannot deduce physics of a larger class of backgrounds if we start from a background-dependent description, then his prediction has already been safely falsified.
Relationism and reductionism
I don't understand the logical flow of the discussion that starts on page 25. Lee mentions the importance of emergent phenomena for complex systems; he says that it is not a contradiction to reductionism but rather a "deepening" of it. There is no justification. Common sense dictates that the bigger the role that emergent phenomena play, the less powerful reductionism becomes. While I believe that reductionism is a generally valid idea and it is always just a matter of approximation to rely on emergent phenomena, it's impossible to agree that Lee has justified that the emergent phenomena are "deepening" reductionism.
For string/M-theory, Lee postulates three principles that - as he believes - are widely believed: unification, uniqueness, maximal symmetry. Unification means that all elementary degrees of freedom in the theory are manifestations of the same elementary entity; one group of the degrees of freedom can't be removed without destroying the structure. Lee says that the elementary entity in string theory "is" a string - which is a somewhat perturbative, obsolete interpretation - but otherwise he's right. I have already discussed this principle above.
The second principle is uniqueness: the right description of all the interactions and particles is unique. Although Lee connects this principle with insults such as the adjective "postmodern", there is no doubt that there can't be two fully correct but different theories that describe reality. Two theories may be exactly equivalent; then we call them one theory. If they're not, there must be a difference between them, and an accurate enough measurement is sufficient to distinguish which of them is correct. Lee's doubts about uniqueness suggest that even if we found the ultimate description of string/M-theory that accurately predicts the particle masses etc., Lee would object that the correct theory can still be a different one. In this hypothetical situation, I would find any doubts to be a bizarre kind of craziness. Also, our experience suggests that it becomes increasingly easier to decide whether a theory is the correct one as we approach the more fundamental layers of reality. And the theories become more unique.
The third principle of maximal symmetry has not become a component in the active, successful research yet. One thing is grand unification: the gauge group in Yang-Mills theory should be as simple (technically) as possible. Another thing is its generalization to the whole of physics; this has not led anywhere so far. One must distinguish gauge symmetries and global symmetries. A gauge symmetry is a redundancy of a description, not a property of a physical system. It depends on the description and equivalent descriptions of the same physics may come with different gauge symmetries (AdS has diffeomorphisms; CFT may have a Yang-Mills symmetry). The representation theory of a gauge symmetry is physically irrelevant because the trivial singlets are the only allowed physical states.
Global symmetries are different (although they can arise as a subgroup of "large" transformations in the group of gauge symmetries). Their representation theory is very important because the physical states form their representations that don't have to be trivial. But I don't know a reasonable physicist who is trying to maximize the global symmetries. We know what they can be - for example, the Poincare group. The possibilities for global symmetries are very limited in string theory because a global symmetry is typically extended into a local symmetry. For example, a symmetry current on the worldsheet may be multiplied by "del X" to create a vertex operator for a gauge boson. This gauge boson implies that the symmetry was actually a local one, at least perturbatively. For rotations and translations, this means that string theory always contains spacetime diffeomorphisms and gravity. For other global symmetries, the prescription also works. Whether or not non-singlet states are allowed depends on dynamics (whether the gauge field is confining and whether it is spontaneously broken, for example). In a sense, I agree with Lee (page 27) that the precise identification of the (global) symmetry group depends on the background; it is a background-dependent question.
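Schematically (in my conventions; the details differ between heterotic and other constructions), the recipe pairs a worldsheet current $j^a$ with a derivative of $X^\mu$:

```latex
V^{a}_{\mu}(k) \;\sim\; \int d^2z \;\; j^a(z)\,\bar\partial X^{\mu}(\bar z)\,
e^{i k \cdot X}, \qquad k^2 = 0 ,
```

which is the vertex operator of a massless spacetime gauge boson $A^a_\mu$; the would-be global symmetry generated by $j^a$ is thereby promoted to a local one. Replacing $j^a$ by the translation current $\partial X^\nu$ yields the graviton vertex operator instead, which is the sense in which string theory always contains spacetime diffeomorphisms and gravity.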
As we move through the moduli space, the natural symmetry that we imagine to be "fundamental" is changing. Heterotic strings on a circle may start with an E8 x E8 symmetry; one may adjust the Wilson lines and get to a point with an SO(32) symmetry. Both of them are 496-dimensional groups that are not contained in one another; they are equally profound, in a sense. There does not seem to be any natural finite-dimensional group that contains both; the full "stringy gauge invariance" seems to be the only conceivable unifying framework.
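As a quick sanity check on the group theory above, both groups indeed have 496 generators: dim E8 = 248, so dim(E8 x E8) = 2 x 248 = 496, and dim SO(n) = n(n-1)/2, so dim SO(32) = 32 x 31/2 = 496. A minimal arithmetic sketch:

```python
# Sanity check: E8 x E8 and SO(32) have the same dimension, namely 496.
# dim E8 = 248 is a standard Lie-theory fact; dim SO(n) = n*(n-1)/2.

def dim_so(n):
    """Dimension of the special orthogonal group SO(n)."""
    return n * (n - 1) // 2

DIM_E8 = 248
dim_e8_x_e8 = 2 * DIM_E8      # dimensions add for a product group
dim_so32 = dim_so(32)

assert dim_e8_x_e8 == dim_so32 == 496
print(dim_e8_x_e8, dim_so32)  # 496 496
```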
The comments about global vs. local symmetries above are valid for the theories as we know them today. If a very large symmetry is relevant for the theory of everything, something about the separation into local and global symmetries must be generalized. Morally, it is true that the unified structure of string theory also unifies its symmetries, but it is harder to see technically how a particular large group could be relevant for the whole picture and why it would be exactly this group and not others. The intriguing idea to get "all of physics" from one very large group (such as one of the groups beloved by Thomas Larsson) remains an unsuccessful speculation.
There are errors in Lee's reasoning on page 30 and thereabouts that are too numerous to enumerate. Lee does not distinguish effective theories from UV complete theories; he claims that there are no good interacting quantum field theories above 4 dimensions (what about the (2,0) theory in d=6?); and so on. I don't know how anything reliable can arise from this philosophizing if half of the input is simply wrong. In my opinion, it is enough to overlook one error in order to destroy an argument. Whether or not a 6-dimensional quantum field theory may be UV complete and interacting is an important question, and the answer is Yes. It may not be an expansion around a Gaussian fixed point; but it is a consistent theory with operators and their correlators nevertheless.
Fortunately, these technical points are completely independent of the general discussion about the anthropic hope, which is why I can easily agree with Lee's comments about the anthropic hope once again.
In the following section, Lee unifies relationism not only with reductionism but also with Darwin's theory. While I also enjoy these deep ideas about "metaunification", let me admit that similar constructions proposed by others usually look weird to me. Natural selection could conceivably share something with relationism, but the analogy is definitely too vague to be of any use. Incidentally, Lee's prediction that the parameters of our Universe are optimized for black hole production has been safely falsified. It is easy to adjust some parameters in such a way that the Universe would produce many more black holes than we see around us.
Cosmological constant puzzle
It's interesting to see a debate about the C.C. problem that is based on four speculations, all of which seem completely vacuous and flawed. Statements such as "the C.C. problem is just an artifact of the evil background-dependent thinking" look ridiculous. No doubt, background dependence is also responsible for the latest terrorist attacks. But such an emotionally loaded combination of words does not show how to calculate the right (tiny) value of the cosmological constant in a theory that contains the known particle physics - which is the true content of the C.C. problem. Some people apparently think that a solution to the C.C. problem means to construct a grammatically correct English sentence that contains a quote by a 16th century philosopher as well as the happy ending that the C.C. problem is eventually solved. I beg to differ.
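To make the "tiny value" quantitative: the standard order-of-magnitude statement of the problem compares the naive quartically divergent vacuum-energy estimate, cut off at the Planck scale, with the observed dark-energy density. The numbers below are the usual textbook values, not a calculation from any particular theory:

```python
import math

# Standard order-of-magnitude inputs (assumed values, not derived here):
m_planck = 1.2e19        # Planck mass, in GeV
dark_energy = 2.3e-12    # observed dark-energy scale ~ 2.3 meV, in GeV

rho_naive = m_planck ** 4        # naive QFT vacuum-energy estimate, GeV^4
rho_observed = dark_energy ** 4  # observed vacuum-energy density, GeV^4

mismatch = math.log10(rho_naive / rho_observed)
print(f"mismatch: about {mismatch:.0f} orders of magnitude")  # about 123
```

This discrepancy of roughly 120 orders of magnitude is what any purported "solution" must actually explain; renaming the problem does not shrink the exponent.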
"Relational quantum theory"
On page 37 Lee starts an attack against quantum theory itself. It's hard for me to read this kind of material. There is one valid point: Bohr argued that the boundary between the "classical observer" and the "quantum observed object" may be drawn more or less anywhere, which was not satisfactory. Today we address this question with decoherence, which may be used to calculate the scale at which classical concepts become a good approximation and quantum coherence and interference disappear because of interactions with the environment. Decoherence is a part of the modern neo-Copenhagen interpretations, especially the picture based on Consistent Histories.
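The mechanism is easy to illustrate with a toy model (this is only a pedagogical sketch of decoherence, not a full consistent-histories computation). A qubit entangled with an environment, |psi> = (|0>|E0> + |1>|E1>)/sqrt(2), has a reduced density matrix whose off-diagonal (interference) terms are multiplied by the overlap <E0|E1>; as the environment states become orthogonal, the interference is gone and only a classical mixture remains:

```python
import numpy as np

def reduced_density_matrix(overlap):
    """Reduced density matrix of a qubit entangled with an environment,
    |psi> = (|0>|E0> + |1>|E1>)/sqrt(2), where overlap = <E0|E1>.
    Tracing out the environment multiplies the off-diagonal
    (interference) terms by the overlap."""
    return 0.5 * np.array([[1.0, overlap],
                           [np.conjugate(overlap), 1.0]])

# Coherent superposition: the environment carries no record of the qubit.
print(reduced_density_matrix(1.0))  # off-diagonal entries equal 0.5

# Full decoherence: orthogonal environment states, classical mixture.
print(reduced_density_matrix(0.0))  # off-diagonal entries equal 0
```

The "scale at which classical concepts become a good approximation" is, in realistic models, the timescale on which this overlap decays to zero through interactions with the environment.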
The emergence of the classical world from the universal framework of quantum mechanics was a well-defined puzzle associated with the interpretation of quantum mechanics - one that has been solved. It is much harder to see which problems Lee is trying to solve now but I suspect that they are not my problems.
Lee combines objections against various interpretations of quantum mechanics - occasionally valid, but usually invalid - with relationism and cosmology. Because the length scale above which the meaning of the text seems to evaporate is around 1-3 sentences, I unfortunately can't say anything nice about these comments. As far as I can tell, "relational quantum theory" is an incoherent conglomerate of weird assertions about quantum theory from people who have never understood it and who confuse the lessons of relativity with the lessons of quantum mechanics. Concerning the "relational approaches to go beyond quantum theory", let me just state that as far as I can tell, there exist no approaches to go beyond quantum theory (certainly not "relational" ones), and all statements I have seen claiming the opposite are rubbish.
While the word "rubbish" may sound harsh, you should not forget that if you rearrange the electrons and nucleons in rubbish properly, you may obtain anything, including a piece of gold. This is what many of us should try to do with these questions.
See also Moshe Rozali's comments about background independence that are pretty much equivalent to mine.