Saturday, July 21, 2007

Phenomenology of quantum gravity

David Goss has sent me a flawless article from the July 19th issue of Nature, pages 297-301. Everyone who prefers articles about physics itself over fourth-class porn about physicists sleeping with other physicists - the kind of junk that numerous Woits and Smolins offer to their highly undemanding readers - will enjoy it. John Ellis (CERN) explains how powerful the LHC is and why it must see the Higgs-Brout-Englert boson (what an interesting new name!). A picture presents the particle as a rumor about a famous physicist that spreads inside the Higgs ocean, represented by physicists who fill the room.

Ellis is skeptical about technicolor models and analyzes models with extra dimensions and SUSY, motivated by the hierarchy problem. He recalls that string theory, the hierarchy problem, the LSP as a dark matter candidate, and precise gauge coupling unification are four independent reasons to take SUSY broken around 1 TeV seriously. Extra dimensions are justified by string theory and by the lightest Kaluza-Klein particle (LKP) that may be stable in some models. Their phenomenology includes Kaluza-Klein particles and perhaps small black holes. The LHCb experiment will carefully look at b-quarks and s-quarks and try to find new sources of CP violation, i.e. violation of the matter-antimatter symmetry. The penguin diagram is the most concrete mechanism by which this additional violation may be generated, and Ellis dedicates a sidebar, with this diagram in the middle, to the history of and facts about CP violation.

He ends with the prospects for upgrading the LHC if more accurate tests of certain kinds are needed. But let me get to the main topic of this article.




Sabine's musings

I consider Sabine's (and Stefan's) blog to be one of the three most inspiring physics blogs in the world. Her openness about various topics is refreshing and it is also helpful that we share certain influences of the Central European cultural space.

However, her opinions about a majority of physics questions that go beyond the college material look scarily uninformed to me, if not downright dumb.

Her article "Phenomenological quantum gravity" is no exception. I guess that because of political correctness and her sex, no one ever tells her why her constructs are physically nonsensical. At "Loops 2007", there was one more reason why she wasn't told so - namely that all participants except for Moshe Rozali were cranks.

The first oxymoron of the article is its title. No characteristic phenomena based on quantum gravity can be observed today and, unless extra dimensions are very large or strongly warped, this situation will continue for quite some time. The assertion in the previous sentence is not a random guess because extra dimensions are necessary to change the dimensional analysis. Every honest person who understands both reality and theoretical physics and who has ever started to study quantum gravity must have known very well that quantum gravity has been a theoretical enterprise.

In fact, today, it is a purely theoretical enterprise. Anyone who is doing quantum gravity and who says that she is doing so by a more physical analysis of experiments than others is cheating herself and the rest of the world, too. It is impossible because no such experiments are available and careful calculations show that in the likely scenarios, they will continue to be out of reach. They have been impossible for the 40 years during which people have been talking about quantum gravity, and frankly speaking, I estimate that they will continue to be impossible for at least the next few decades. Anyone who promises that he or she will surely transform these phenomena into observational science in the foreseeable future is a liar.

Quantum gravity is about doing the theory and mathematics carefully and right. It has never been motivated by easily doable experiments, it is not motivated by them today, and it will probably not be motivated by them in the foreseeable future.

Questions to be answered

At the beginning, Sabine outlines some of the questions that should be answered by the research and they make sense:

But [the Standard Model] has also left us with several unsolved problems, questions that cannot be answered - that cannot even be addressed within the SM. There are the mysterious whys: why three families, three generations, three interactions, three spatial dimensions? Why these interactions, why these masses, and these couplings? There are the cosmological puzzles, there is dark matter and dark energy. And then there is the holy grail of quantum gravity.
If you neglect the subtlety that families and generations are the same thing :-) - she might have mentioned three colors - everything is justifiable and somewhat conventional.

However, you may notice that what she actually writes about later has absolutely nothing to do with any of these big questions about the Standard Model. And I will argue that it has nothing to do with quantum gravity either.

She introduces the top-down and bottom-up approaches. That would be fine except that all of her additions are strange. Let me start with the terminological issues. First, she gives a new name to the top-down approach: she turns it into a "reductionist approach". That's a very bizarre identification. In reality, particle phenomenologists and model builders are pretty much as staunch believers in reductionism as string theorists. The top-down vs bottom-up dichotomy is not about reductionism. Reductionism is a rational belief that a theoretical tunnel can be built to connect relatively complex perceptions with pure, elementary forms of existence - with fundamental forces and particles. The top-down and bottom-up approaches differ in the strategy for digging this tunnel. If you wanted to find an approach that questions reductionism, you would have to talk to some condensed matter physicists such as Robert Laughlin who would like to talk about the fundamental physics of space but who have no idea what they're talking about.

Principled vs constructivist theories

Sabine's own name for the bottom-up approach is "constructivist approach". That's less inaccurate than the "reductionist approach" but it is still a historically and logically misleading term. Recall that Albert Einstein divided physical theories into principled and constructivist theories. Principled theories start with a grand principle and then mathematically derive its consequences for our observations. Einstein named thermodynamics (the non-existence of perpetual motion machines of various kinds being its principles) and both theories of relativity (with their postulates) as examples. On the other hand, constructivist theories, represented by statistical physics or quantum mechanics, build their new insights by grouping several known phenomena and quantitatively abstracting their common features.

Most string theorists surely prefer the first approach even though the actual history of string theory has been a wiggly, phenomenological one. While we understand the theory pretty well these days, we still don't know what the universal principle behind all of it is or whether such a principle exists at all.

You might think that Einstein's principled theories are results of top-down work while the constructivist theories are bottom-up, phenomenological models. However, the adjectives top-down and bottom-up contain something that Einstein couldn't have understood well: the scales. When we talk about top-down and bottom-up things today, we literally talk about the pyramid of energy scales, with the top represented by the huge Planck energy and the bottom represented by the 0.1 TeV scale that is accessible to current experiments. These simple insights about the renormalization group have allowed us to organize our ignorance in a logical and visually satisfactory way. The long-distance physics is pretty much independent of the short-distance details, which is both good and bad news. It's good because we can learn long-distance physics without knowing the short-distance details. It's bad for the same reason, namely because we can't directly learn short-distance physics from its long-distance manifestations.
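The decoupling can be illustrated quantitatively - a schematic estimate, not a statement about any particular model. A dimensionless observable measured at energy E only feels physics at a much higher scale M through corrections suppressed by powers of E/M,

\[ A(E) = A_{\rm low}(E) \left[ 1 + c_1 \frac{E^2}{M^2} + c_2 \frac{E^4}{M^4} + \dots \right], \]

where the c_i are order-one coefficients. With E of order 0.1 TeV at the bottom of the pyramid and M near the Planck scale, E²/M² is of order 10⁻³⁴, which quantifies both the good and the bad news above.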

Concerning the "reductionist" approach, Sabine writes that
The difficulty with this approach is that not only one needs that 'promising candidate for the fundamental theory', but most often one also has to come up with a whole new mathematical framework to deal with it.
Well, developing and understanding mathematical frameworks has always been a difficulty for most people. I just wonder when theoretical physicists started to consider this thing a difficulty, too. Developing new mathematical frameworks has been the most important and most exciting part of theoretical physics at least since the era of Isaac Newton.

Middle-of-nowhere approach

All these things were just details. The real fun starts when Sabine tries to describe what her own approach is: does she prefer top-down over bottom-up? We learn that she can't pronounce "phenomenology" so instead,
I picture myself somewhere in the middle. People have called that 'effective models' or 'test theories'. Others have called it 'cute' or 'nonsense'. I like to call it 'top-down inspired bottom-up approaches'. That is to say, I take some specific features that promising candidates for fundamental theories have, add them to the standard model and examine the phenomenology.
I like to call it nonsense, too.

Whenever we consider new physics, we either believe that it could be real, or we don't believe it could be real. If we believe that it is not real, we shouldn't talk about it. If we believe that it is real, we want to answer more detailed and refined questions about it, in order to make progress. There are two known classes of methods for doing so. One of them is to carefully analyze the internal structure of the new phenomena at their typical scale, which is usually inaccessible to existing experiments because it is too high. This is the top-down approach. The other approach, the bottom-up approach, is to study the consequences of the new phenomena for physics at accessible scales. Sabine's words clearly sound more like the bottom-up approach than the top-down approach: so what does the extra fog mean?

I think that her bizarre "compromise" of the two approaches is meant to allow one to be more sloppy than phenomenologists in their work but simultaneously pretend to be as fundamental as top-down theorists. In other words, the middle-of-nowhere approach is a systematic algorithm to create confusion. One of the main conceptual results of the last 35 years in theoretical physics has been the renormalization group - especially its insight that our knowledge can be organized according to scales. As far as I can say, if someone claims that there can be a middle-of-nowhere approach, she misunderstands this fundamental insight due to Ken Wilson et al.

Sabine's examples of her middle-of-nowhere approach are extremely diverse in character and require separate discussions.

What does the presence of extra dimensions lead to?

This is of course a fair question but it is a standard one for bottom-up phenomenology, investigated in thousands of papers. The fact that extra dimensions have recently been associated with string theory, a top-down theory, is completely irrelevant. Phenomenologists can study and do study extra dimensions with their own bottom-up tools and logic. As phenomenological models, extra-dimensional models are analyzed much like many other models. The only difference is that one can mention that their starting point may also be justified by a known top-down theory, which arguably makes them more appealing and more likely. But this fact doesn't much influence the rules of the game or what physicists actually do with these models.

Presence of a minimal length

The Planck length is surely a minimal length below which the usual intuition about geometry is no longer applicable. This fact - a part of the general lore in the field - has been known for decades. In this sense, the Planck scale is a minimal length. But such a proclamation is extremely vague and can only be used to vaguely derive other vague proclamations. What the statement exactly means and how things start to change when you approach the Planck length is pretty much the whole "meat" of quantum gravity. All papers written as of 2007 that have claimed to derive something more out of the "minimal length" while avoiding string theory belong to the crackpot category. All of them imagine a kindergarten model of the Planck scale physics - where the length remains a good degree of freedom whose eigenvalues are moreover discrete - that is flagrantly incompatible with numerous fundamental features of this Universe.

In other words, the consequences of such a version of "minimal length" include a violation of the rules of the renormalization group, a conflict with rotational symmetry and Lorentz symmetry, an unphysically huge entropy density of the vacuum, and the absence of light particles and non-gravitational forces. They contradict pretty much everything in physics and because I have written dozens of articles explaining why all existing "discrete quantum gravity" proponents are crackpots, I don't want to write another one.
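The conflict with Lorentz symmetry can be seen in one line - a schematic argument, of course, not a substitute for the detailed analyses. A boost with velocity v contracts a length as

\[ L' = L \sqrt{1 - v^2/c^2}, \]

a continuous function of v. A discrete spectrum of allowed lengths, such as L_n = n·l_P, cannot be mapped to itself by this continuous family of transformations, so a theory with such a spectrum must either break the Lorentz symmetry or deform it - and both options then have to face the extremely accurate existing tests of special relativity.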

I apologize for repeating this important point so many times but if someone only wants to look at "specific features" of some ideas and neglect whether models with these features can be compatible with very basic features of the Universe we live in, she is a crackpot.

Preferred reference frame

If you believe Google, there is only one preferred reference frame in the world and you are just reading it. ;-) According to this preferred reference frame, preferred reference frames can be studied in the bottom-up approach but don't have much justification in any known top-down approach. While there exist ways to break the Lorentz symmetry in string theory, such a breaking can always be viewed as spontaneous symmetry breaking, and this kind of breaking is moreover impossible in all known realistic classes of string vacua.

In the effective field theory approach, a preferred reference frame - which is equivalent to a Lorentz symmetry breaking that preserves the rotational symmetry - produces additional sets of operators of various dimensions that were classified years ago. Their coefficients must be small - the adjective is quantified by existing observations - which is why they can be viewed as perturbations, and the bottom-up approach can't say much more, except for optimizing the methods by which the hypothetical terms could be seen most easily. The effects of these new terms only become significant or non-perturbatively large at some much higher energy scales where a top-down theory may be needed anyway.
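For concreteness, here is what such a classification looks like in the photon sector - the notation below follows the conventions commonly used for the so-called Standard Model Extension:

\[ \mathcal{L} = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu} - \frac{1}{4} (k_F)_{\mu\nu\rho\sigma} F^{\mu\nu} F^{\rho\sigma} + \frac{1}{2} (k_{AF})^{\mu} \epsilon_{\mu\nu\rho\sigma} A^{\nu} F^{\rho\sigma}. \]

The fixed tensors (k_F) and (k_AF) pick preferred directions in spacetime; observations such as the absence of vacuum birefringence of light from distant sources constrain their components to be extremely tiny, which is the quantitative meaning of the word "small" above.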

What I want to say is that Lorentz symmetry breaking has been studied for decades, and surely any person who can't say anything concrete but who claims to "know some new important consequences of a preferred reference frame" has produced pure nonsense. Nothing like that exists and even if it did, it wouldn't be new. Don't forget that theories with preferred reference frames were studied throughout the 19th century. As far as I can say, they became pure anachronisms in 1905 and you shouldn't expect anything valuable to ever come out of this kind of research.

Holographic principle

Sorry, Sabine, but I don't believe that there is any bottom-up analysis of the holographic principle or entropy bounds and their consequences: the holographic principle always belongs to top-down physics. From the bottom-up approach, the Planck area is effectively zero and the entropy bounds say that the entropy is smaller than infinity which is not hard to satisfy. At low energies, you can't really think about doable experiments that would try to violate these bounds. The only way to test the entropy bounds - besides realizing that they are safely satisfied by all known objects - is to do Planck scale experiments. In that regime, you can indirectly test them but then you really probe the full top-down theory.
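A back-of-the-envelope estimate shows just how safely the bounds are satisfied - the numbers below are order-of-magnitude only. The entropy inside a sphere of radius R is bounded by its area in Planck units,

\[ S \leq \frac{A}{4\, l_P^2} = \frac{\pi R^2}{l_P^2}, \]

which for R = 1 m gives roughly 10⁷⁰ (in units of Boltzmann's constant), while a cubic meter of air at room temperature carries an entropy of order 10²⁷. A gap of more than forty orders of magnitude is not something a low-energy experiment is going to close.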

So far I have talked about the entropy bounds. What is the difference between entropy bounds and the holographic principle? The difference, if any, is somewhat subtle. The holographic principle says not only that the maximum information in a given volume is bounded by its surface area - i.e. that a finite-dimensional Hilbert space is adequate - but also that the full system can be expressed in terms of degrees of freedom that live on the boundary. You can pretty much say that any theory on a finite-dimensional Hilbert space can be phrased in this way, which would lead you to the conclusion that the entropy bounds and the holographic principle are equivalent. However, we usually require the theory on the boundary to be special - local, in fact - which is far more constraining. The holographic principle in this strong, local form is true for AdS-like spaces but it is unlikely that for spaces where the warp factor remains finite at the boundary, the boundary theory could be exactly local.

Consequently, we can't really make the phrase "holographic principle" too precise for finite volumes - and only those are relevant for observations. Operationally, holography for finite volumes is equivalent to entropy bounds that have already been discussed. There's really nothing waiting for us here. It may sound nice to define these "research projects" but be almost sure that any paper written about this realistically observable phenomenology of holography is going to be a stupidity.

Stochastic fluctuations

Sabine asks "whether stochastic fluctuations of the background geometry would have observable consequences". I am not sure whether she means the regular quantum fluctuations or something else. If she means something else - namely classical stochastic fluctuations - then it is nonsense because empty spacetime can't have any such fluctuations because they would require the vacuum entropy density to be nonzero which would be a deadly catastrophe that would, among hundreds of other things, heat up everything to huge temperatures and destroy all interference patterns in all experiments.

If she means the conventional quantum fluctuations of the background geometry, there are many ways to see them. They are, among other things, responsible for the anisotropy of the cosmic microwave background and the primordial structure formation - and perhaps some gravitational waves. Does she talk about these standard cosmological questions? It doesn't look so. Does she talk about the uncertainty of the measurements of distances, whose error always exceeds the Planck length, as a simple calculation shows in the four-dimensional case? Be sure you can't measure lengths that accurately in practice. At any rate, she seems to be talking about some new insights that resemble the formation of the first galaxies and that are equally important but different. Well, such an extraordinary statement about a completely new phenomenon is bound to be either spectacular or, much more likely, insane. But Sabine presents it as everyday physics. Does she understand what she's saying? Have the words lost all of their meaning?
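The simple calculation mentioned above can be sketched as follows - this is the standard heuristic argument of the Salecker-Wigner type, not a rigorous theorem. To measure a distance L, you need a device of mass M whose position, according to quantum mechanics, spreads during the light-crossing time t ~ L/c by

\[ \delta x_{\rm QM} \gtrsim \sqrt{\frac{\hbar L}{M c}}, \]

which favors a heavy device; but general relativity forbids the device to be smaller than its Schwarzschild radius,

\[ \delta x_{\rm GR} \gtrsim \frac{G M}{c^2}, \]

which favors a light one. Minimizing the total error over M gives

\[ \delta x \gtrsim \left( l_P^2\, L \right)^{1/3} \geq l_P, \]

so the error not only exceeds the Planck length but actually grows (slowly) with the measured distance.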

We are told that
These approaches do not claim to be a fundamental theory of their own. Instead, they are simplified scenarios, suitable to examine certain features as to whether their realization would be compatible with reality.
In other words, one is allowed to be superficial and arbitrarily choose whatever "features" she likes.

But such a comment can't define a legitimate approach to science: it defines scientific misconduct. Every scientist - including top-down physicists and bottom-up physicists - may find it useful to be superficial, heuristic, or qualitative at certain points. They may neglect certain features or predictions of a model. But a moment later, someone else may perform a more detailed analysis or check other features. If the results of a more detailed analysis show that the results of the superficial approach were wrong or that the neglected "features" significantly change the conclusions about the validity of a theory, the superficial analysis becomes superseded and irrelevant.

It is not legitimate to be cherry-picking "certain features" when the validity of a model is being tested. One must always consider all features that a model is supposed to be able to explain. This is about the very basic rules of the game. No diversity of approaches is allowed here as long as we talk about science rather than falsification of evidence.

The scientific approach dictates that we study things as accurately and deeply as needed to find the right answers at the required level of confidence. If you follow the insights carefully and sensitively, you are being told not only what the answers could be but also how likely various answers actually are. People may disagree about the probabilities of unknown things but what's more important is that figuring out these probabilities - at least qualitatively - is a part of the scientific method. When one uses it properly, she knows how many details should be looked at before some "big answer" is considered seriously or even established as a fact. The only way I am able to interpret Sabine's middle-of-nowhere approach is as a call to lower the standards for how many things are analyzed before big results are announced with a lot of self-confidence.

I can't agree with that. What Sabine wants to do is not a new scientific strategy. It is a rejection of the scientific method itself.

The following comments make it clear that this is what she wants:
These models have their limitations, they are only approximations to a full theory. But to me, in a certain sense physics is the art of approximation. It is the art of figuring out what can be neglected, it is the art of building models, and the art of simplification.
Except that she doesn't seem to know the art at all. Had she known it, she would know that there can't be any middle-of-nowhere approach. One can dig a tunnel from France or from Britain or both, but one can't start in the middle.

Whenever a physical theory is an approximation and we realize that certain aspects were neglected, we must understand why the neglected aspects were small: otherwise the approximation breaks down.

For example, we neglect higher-derivative operators in effective field theories because their coefficients are inversely proportional to a positive power of M, a mass scale that is supposed to be huge because the effective theory should be valid up to this large scale.
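Schematically, for a single scalar field - a generic illustration rather than a model of anything in particular:

\[ \mathcal{L}_{\rm eff} = \frac{1}{2} (\partial \phi)^2 - \frac{1}{2} m^2 \phi^2 - \frac{\lambda}{4!} \phi^4 - \frac{c_6}{M^2}\, \phi^6 - \frac{c_6'}{M^2}\, \phi^2 (\partial \phi)^2 + \dots \]

At energies E ≪ M, the dimension-six terms contribute relative corrections of order E²/M², so neglecting them is a justified, controlled approximation - which is exactly the kind of justification that the middle-of-nowhere approach wants to skip.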

In top-down theories, we make frequent approximations, too. The most typical approximation assumes that a coupling constant - such as the string coupling - is small. That allows one to trust the results of the leading orders of perturbation theory. Analogously, we can assume that N - e.g. the number of colors - is large, i.e. that 1/N is small. The approximation from the previous paragraph is, in some sense, a special example of the approximations from this paragraph.
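In the large-N case, the counting is the standard 't Hooft one: the free energy (or a similar quantity) of a U(N) gauge theory organizes itself as a sum over two-dimensional surfaces of genus g,

\[ F = \sum_{g=0}^{\infty} N^{2-2g}\, f_g(\lambda), \qquad \lambda = g_{\rm YM}^2 N, \]

so at large N with the 't Hooft coupling λ held fixed, the higher-genus terms are suppressed by powers of 1/N² - in a precise parallel to the suppression of higher orders in the string coupling g_s.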

Science and art

But justifying why certain things can be neglected is exactly what Sabine doesn't want to do: she wants to replace justifications by art. That's how I understand why Sabine chose Karl Popper's bizarre quote that "science may be described as the art of systematic over-simplification". Incidentally, I would say that Popper's philosophy was the art of systematic over-simplification of the scientific method - one that makes him famous enough among the semi-stupid people while not making scientists upset enough to put an end to his fame.

Is science an art - an art of approximation? Yes and no. It is an art in the sense that finding the right zeroth approximations, which is one of the most spectacular tasks in science, requires scientists to have special skills, much like artists. However, science differs from arts in one essential aspect: the quality of the scientific art is evaluated by very different methods, namely by a comparison of the artwork with experiments, observations, or calculations based on other experiments or observations.

If someone makes an approximation that seems to be illegitimate because it is not based on any good scientific argument - such as the smallness of couplings, gaps between energy scales, or an approximate symmetry - then you can call it art but I prefer to call it bad science. Laughlin's wavefunction behind the fractional quantum Hall effect was good art and good science, too. But one could only establish it by seeing that it predicts some required phenomena that were also observed in many labs. In condensed matter physics, it is not hard to produce a lot of experimental data. Just apply for another one-million-dollar grant. Easily accessible experiments make condensed matter physicists more superficial and they don't usually ask "Why". Why is the wavefunction a good approximation of reality? Condensed matter theorists usually don't have to answer much because the experiments do this work for them.

In quantum gravity, we don't have the luxury of seeing most of the effects directly, but this doesn't allow us to eliminate the step of judgement and replace it by art, personal aesthetic preferences, or hateful media campaigns. We must continue to judge the approximations scientifically: the only difference is that the arguments inevitably get more indirect, theoretical, and based on increasingly accurate and abstract mathematics. Any other approach to dividing ideas into right and wrong in the absence of direct experiments is unscientific.

Incidentally, I find Popper's statement - that science is the art of systematic over-simplification - kind of dumb because of the last word, too. Over-simplification is, by definition, a simplification that is more severe than what would be optimal for getting the best results. It is a bad thing. A more famous guy, Albert Einstein, said that science should be as simple as possible but not simpler. You can see that at least at the linguistic level, these two sentences are in flagrant contradiction with one another and you don't have to think twice about whether I agree with the philosopher or the physicist.

Quantized gravity

Sabine's text gets ever more unreasonable as she gets closer to more concrete questions. We learn that
To be honest though, we don't even know that gravity is quantized at all.
Well, most crackpots don't know it but we certainly do. By "we", I mean the people who are familiar with the inner workings of quantum mechanics, a well-established theoretical framework of physics. The myth that classical gravity may be coupled to quantum mechanical matter has been discussed in my text about myths on quantum gravity.

Sabine's "arguments" in favor of the myth are nothing else than the usual foggy unscientific gibberish:
I carefully state we don't 'know' because we've no observational evidence for gravity to be quantized whatsoever.
That's very nice from a person who says that she studies quantum gravity. The situation is exactly isomorphic to that of a hypothetical person who studies evolutionary biology but who says that we have no observational evidence that life has been evolving during the billions of years because we were not there. It's just breathtakingly silly.

All natural sciences have been learning for many centuries that most of the correct explanations are more indirect and abstract than most of the simple-minded people would prefer. As physics makes further progress, we "see" things ever more indirectly and the required ideas are increasingly abstract. Bigots such as Peter Woit will never understand these basic facts about science but it is very sad that Sabine doesn't understand them either.

Not "seeing" gravitons is certainly not the first time in history when we know about something that can't be seen directly. Many other concepts have been known as real to good scientists long before most people could "see" them directly, including atoms, electrons, positrons, pions, charm quarks; numerous germs and viruses causing diseases; new planets (from their gravitational influence), black holes (from equations of GR); many missing links in evolution, genes responsible for XY, and let me stop. When one looks more carefully than others, he may "see" much more than others do. A forced reduction of science to the lowest common denominators - those Woits who "see" the least - would mean to kill science because science requires just the opposite situation: it is led by those who can "see" the most.

Sabine also writes:
The fact that we don't understand how a quantized field can be coupled to an unquantized gravitational field doesn't mean it's impossible.
She turns the whole situation upside down. Understanding becomes misunderstanding and vice versa. Quite on the contrary, Sabine: we do understand why classical gravity can't be coupled to quantum mechanical matter - for example, the gravitational field around the Schrödinger cat is inevitably found in a linear superposition of two profiles after she's "half-killed" by the decaying nucleus. This is not a speculation but a result of an overwhelmingly tested theory within its domain of validity.
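To make the contradiction explicit: the only known way to couple a classical metric to quantum matter is the so-called semiclassical prescription in which the expectation value sources the geometry,

\[ G_{\mu\nu} = \frac{8\pi G}{c^4} \langle \psi | \hat{T}_{\mu\nu} | \psi \rangle. \]

For the cat state, the right-hand side is the average of the "alive here" and "dead there" mass distributions, so the classical gravitational field would point to a place in between where the cat is never found; and once the cat is observed, the expectation value jumps, violating the local conservation law that the left-hand side automatically obeys. The Page-Geilker experiment of 1981 tested exactly this averaged behavior with a torsion balance and ruled it out.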

The fact that you don't understand why classical gravity coupled to quantum matter is impossible doesn't mean that it is less impossible than it actually is. More generally, understanding is always more important for science than misunderstanding.

I think that Freeman Dyson's quote about "two separate worlds" is equally unreasonable; however, it is less painful coming from this eminent scholar because he doesn't present himself as a quantum gravity researcher at the same moment. One can easily see that some of the effects of quantum gravity are surely observable in principle. A Planckian collider that would fit our Universe could do a lot and there exist better ways to look at things. Whether these things are observable in practice is surely secondary for any person who was ever really interested in quantum gravity. If someone only cares about things that can be seen in practice (and quickly), he or she should have been baking bread from the very beginning. Doing quantum gravity for money - while believing that practice is more important than principles - is a form of intellectual prostitution.

I am surprised that both Freeman Dyson and the person inside him who studies intelligent plant life near Saturn care so much more about practice than about the principles.

Sabine continues with a discussion of the inverse problem, the hard task of reconstructing a fundamental theory from the low-energy observations. I think that all the details in the text and her graph are partially wrong and confused. For example, take her graph. There are a lot of arrows. Let's assume that they mean "determine". Quantum gravity in the top layer determines three things in the second layer from the top: semiclassical quantum gravity, quantum gravity X,Y,Z (which is probably a recursive feature of the graph), and modifications of general relativity. All of these things are supposed to determine "effective models".

Her list of effective models has nothing to do with what we call quantum gravity. DSR and MDR are unmotivated, confused twists of classical (...) non-gravitational (...) kinematics in Minkowski space, and spacetime foams are incompatible with gravity (and the existence of smooth space). Lorentz violation has nothing to do with gravity and, if we are strict, it really contradicts it, because by a theory of gravity we really mean a theory that respects the principles of general relativity, and the Lorentz invariance of local physics is one of them (it becomes a part of its gauge symmetry).

Finally, extra dimensions and quintessence should be counted as serious physics but they're not really aspects of quantum gravity per se because the quantum character of space plays no role there. Extra dimensions are about a classical geometric background on which other fields may propagate while quintessence is about an additional scalar field.

So you can see that most of the arrows from the second layer to the third layer are just wrong because the second layer lists three out of dozens of important aspects of quantum gravity (95% of which are apparently completely unknown to Sabine as I will discuss at the very end) while the third layer, the "effective models", has nothing to do with quantum gravity per se. Nevertheless, according to the graph, you can only get from quantum gravity to phenomenology in the bottom layer (astrophysics, cosmology, terrestrial labs) through the layer of effective models. Because all the arrows above are really broken, you can't get there.

What fascinates me about these pictures is their combination of a complete disregard for the key questions of a given field and an extreme eclecticism with which the less important, mutually incompatible aspects and theories are mixed with each other. To "improve" her picture, she wants to include a new, entirely meaningless paper about "macroscopic non-locality" - one that we discussed on this blog - into her graph. Doesn't she realize that this whole structure is a pile of stupidities, broken links, missing key entries, and misunderstanding? I can't believe she doesn't. It's all so incredibly stupid.

Finally, we also learn an interpretation of the history of physics that I can't agree with. Sabine wants the history of quantum gravity to mimic what happened with general relativity. We learn that general relativity was in a certain situation at "the beginning of the last century" and later, it was settled by the light deflection by the Sun. Well, general relativity was written down in 1915, published in its final form in 1916, and the light bending was tested in 1919. There was just a small delay there. But more importantly, the light bending observation was more important for the media than it was for the real science. All good theoretical physicists knew that general relativity was the correct theory of gravity by the end of 1916. Moreover, it seems that the 1919 measurements could have been faked because they couldn't have had the necessary accuracy. Even if they were correct, the theoretical considerations based on the combination of special relativity and the equivalence principle were much stronger an argument in favor of general relativity than the particular expedition.
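For reference, the numbers behind my comment about the accuracy: general relativity predicts a bending of light grazing the solar limb by

\[ \delta \phi = \frac{4 G M_\odot}{c^2 R_\odot} \approx 1.75'' , \]

exactly twice the value of about 0.87'' that follows from the equivalence-principle (or Newtonian) reasoning alone. The 1919 expeditions thus had to distinguish two sub-arcsecond predictions with the photographic plates of that era, which is why the claimed accuracy has been questioned.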

At any rate, the London Times announced on November 7th, 1919, exactly two years after the not-so-great not-October not-revolution, that the Newtonian ideas were overthrown - and some people care about the London Times more than about scientific arguments.

Sabine also says that it "doesn't matter where one starts" because Heisenberg's and Schrödinger's pictures are equivalent, too. It's nice that two formulations of quantum mechanics are equivalent but it doesn't mean that any two ideas in physics are equivalent. Be sure that more than 3/4 of the entries in the graphs and text are not equivalent to any valuable idea in science or elsewhere, and it does matter a lot whether scientists study serious science by using sound scientific arguments, or whether they prefer the art of mixing 50 incoherent ways to leave your lover, as recommended by Sabine Hossenfelder and Paul Simon.

And that's the memo.

P.S. Let me copy a list of items that I consider to be the most representative questions of quantum gravity. Note that even though there is no "string theory" in this list, these questions don't seem to overlap with what Sabine considers to be quantum gravity. Be sure that one of us is extremely confused about completely fundamental issues and it is not your humble correspondent. Your humble correspondent thinks that selling stupid ideas unrelated to quantum gravity (and physics) under the mysteriously exciting brand of "quantum gravity" is a form of parasitism.
  • Has the Universe ever been infinitely small?
  • Did the concepts of geometry such as the distances actually make sense when the Universe was a newborn baby?
    If they didn’t, how should we generalize the tools of geometry so that they don’t collapse in these extreme conditions?
  • In other words, what quantities and new kinds of questions should replace the usual concepts of the geometry of space if we want to probe the very early history of the Cosmos accurately?
  • Did the rules of the game simplify near the beginning? Was the newborn Universe similar to a computer in any way? If it was, how exactly did it work?
  • When you try to multiply the quantities that describe the Cosmos, does it matter how you order the factors? When does the ordering matter and when doesn't it?
  • Is it allowed for the Universe to suddenly begin its existence or did the moment of creation violate some laws? Was there any “creation of information” when the Universe got started? If there was, what laws did it obey?
  • Is the number of the dimensions of space exactly equal to three? Can it be more or less?
  • Can there be some uncertainty about the number of dimensions of space?
  • Is it possible to create holes in the Cosmos or to discontinuously change its shape?
  • Is there any small chance for an observer inside the black hole to send some information about himself to the people who live outside the black hole even though the general theory of relativity forbids that? Or is the information about the troubled person lost forever? These questions underlie the so-called information loss paradox.
    Where do the black holes store the information about the matter that has created them?
  • Can an observer detect that he has just crossed the surface of the black hole – the so-called horizon – or is this question impossible for him to answer, just like the general theory of relativity seems to imply?
  • Can a black hole be continuously transformed or melted into an elementary particle and vice versa? What does the transition look like?
  • What happens if we collide particles whose total energy – and thus the total mass – is almost enough to produce a black hole but not quite?
  • Did the Universe have a well-defined temperature at the beginning? How high was it? Is there a universal upper limit on the temperature? What happens if you approach this limit?
  • The special theory of relativity unifies space and time but it still allows us to distinguish space-like and time-like separations between two events. Can this last difference between the space and time – technically called the “signature of spacetime” – be removed?
  • Should the distances and times be always real numbers that most of us know or does it make sense to imagine that they are more complicated numbers known as complex numbers? If they can be complex, why do we seem to know real distances and times only?
  • Are all types of elementary particles and forces manifestations of a more fundamental, unified type of matter? What is this primordial substance and what kind of mathematics and physics should we learn to understand it really well?
  • Can there exist Universes that include the force of gravity but whose repertoire of other elementary forces and particles differs from ours? How many possibilities are there? In other words, how many siblings does our Universe have? Are there any numbers controlling the character of our Universe that can be continuously adjusted?
  • If there are many possibilities for how the Universe could a priori look, is there some explanation why we live in this Universe and not another one?
  • Is there a fundamental difference between the past and the future or is the difference a matter of conventions and psychology?
