Tuesday, May 12, 2009

Confidence levels in theoretical physics

This text is a moral continuation of the article about

Probabilities of various theories in physics
where I listed some estimates of the likelihood that various statements in theoretical physics are valid.

Today, Oswaldo Zapata presented himself as a historian of science with his preprint
On Facts in Superstring Theory. A Case Study: The AdS/CFT Correspondence (PDF)
He tries to semi-quantitatively analyze how the theoretical physicists' belief in various hypotheses - such as the AdS/CFT correspondence - changes as new evidence emerges.

Oswaldo believes that the acceptance of facts depends on the existence of a community. In principle, I disagree with this conclusion. Every physicist is (or should be: but yes, I think that even in the real world, he or she mostly is) using rational arguments to determine which hypotheses should be taken seriously, which of them are likely enough to be considered facts, and which of them are only likely enough to be studied as mere possibilities.

In reality, some people - or most of them - interact with the community so intensely that they adjust their vocabulary to the average member of the group that is expected to read a given paper.

David Berenstein is the first co-author of the so-called BMN minirevolution (2002) that has considerably increased the probability that Maldacena's AdS/CFT correspondence is exactly correct. That's why the BMN paper plays an important role in Zapata's analysis, too.

Black, gray, and white in maths and sciences

David wrote a sensible essay on his blog, arguing that only mathematics knows the "p=0" or "p=1" certainty: it occurs whenever a rigorous proof is available. At the same time, strict mathematics considers all probabilities between 0 and 1 as qualitatively isomorphic "ignorance".

On the other hand, natural sciences such as physics only know probabilities greater than 0 but smaller than 1. Certainty doesn't exist and how far the probabilities are from 0 or 1 is very important. Physicists keep on modifying their ideas about the probability of various hypotheses according to their a priori beliefs and the new evidence - the ability of the conjectures to pass new tests. The process of refining the probabilities could be modeled as Bayesian inference. However, a lot of partially subjective decisions and estimates are needed to "calculate" the probabilities numerically - so the precise numerical values don't really have any objective meaning.
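
To make the Bayesian logic above concrete, here is a minimal sketch in Python. The prior, the likelihoods, and the assumption that the tests are independent are purely illustrative choices of mine, not numbers anyone has actually computed:

```python
# Minimal sketch of Bayesian updating of the probability of a hypothesis.
# All numbers below (prior, likelihoods) are invented for illustration.

def bayes_update(prior, p_pass_if_true, p_pass_if_false, passed):
    """Return the posterior probability of the hypothesis after one test."""
    if passed:
        likelihood_true, likelihood_false = p_pass_if_true, p_pass_if_false
    else:
        likelihood_true, likelihood_false = 1 - p_pass_if_true, 1 - p_pass_if_false
    numerator = prior * likelihood_true
    return numerator / (numerator + (1 - prior) * likelihood_false)

p = 0.5                      # subjective prior that the duality is exact
for test in range(10):       # ten successful, (assumed) independent tests
    # a correct duality passes for sure; a wrong one passes by luck 30% of the time
    p = bayes_update(p, p_pass_if_true=1.0, p_pass_if_false=0.3, passed=True)
    print(f"after test {test + 1}: p = {p:.6f}")
```

The qualitative lesson survives even if the made-up numbers are changed: the posterior climbs towards one quickly, but its precise value always depends on subjective inputs.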

Words encoding certainty

A paper of our Boston group has also helped to increase the probability that the BMN hypothesis is valid, so it appears in a footnote of Zapata's paper, too. ;-)

A linguistically unfortunate formulation from our paper was way too tempting for Zapata, who quoted it:
More recently the Maldacena conjecture has established a duality between a conformal gauge theory (with a fixed line of couplings) and string theories on an AdS background. However these dualities are well understood only at large values of the gauge coupling [supergravity limit in the bulk]. [33] (Italics added.)
The only problem I have with this formulation is an aesthetic one: if we say that the Maldacena conjecture has established something, we shouldn't really call it a "conjecture" anymore - even though "conjecture" was a popular term to describe Maldacena's discovery at that time.

So the combination of the words "conjecture" and "established" is an imperfection.

But you can imagine, we didn't think about a historian who would analyze a particular verb on page 7 exactly 7 years later. Our group of 7 authors gave me 7 minutes to submit the preprint - composed out of 7 distinct dialects of LaTeX that were slightly incompatible with Ginsparg's LaTeX - to the arXiv before the deadline, in order to compete in speed with a group of German competitors who would later become leaders in the integrability business - the field that the BMN minirevolution has transformed into, as David Berenstein kindly formulated it for me around 2004. By the way, I didn't manage to fix the 3 independent types of errors before the deadline but the extra 24 hours allowed us to improve the paper detectably.

I wrote that the combination of those two words looked like an oxymoron. But the content reflected the individually reached opinions of all the authors. None of us really had any detectable doubts that the AdS/CFT correspondence was correct. Today, we know that our certainty was even more justifiable than we used to think back in 2002. Still, the correspondence - and its BMN ramification - looked nontrivial enough for us and others to be pleased whenever a qualitatively new test was passed.

As the number of different tests that work perfectly surpasses many thousands, you start to be bored by the success. And if you see many new tests, you also start to realize new relationships between them: they no longer look quite independent to you. Even though no truly rigorous "full mathematical proof" exists in the literature (although Berkovits et al. may have already found one, their formalism remains difficult for others to check), physicists are certain that the correspondence works.

When the Sun sets one billion times in a row, you begin to expect that it may do the same thing again tomorrow. In the case of the Sun, such an expectation will actually fail sometime in the future but inherently mathematical propositions don't "burn" so it is more sensible to expect that they are completely and eternally correct if they have passed 5,000 different tests.
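
One textbook - and admittedly oversimplified - way to quantify this kind of induction is Laplace's rule of succession. The sketch below just evaluates that formula; it does not pretend that the tests of a duality (or the sunsets) are independent coin flips:

```python
# Laplace's rule of succession: after n successes in n trials (with a
# uniform prior), the probability that the next trial also succeeds is
# (n + 1) / (n + 2).  Purely illustrative.

def rule_of_succession(successes):
    return (successes + 1) / (successes + 2)

print(rule_of_succession(5_000))          # ~0.9998 after 5,000 passed tests
print(rule_of_succession(1_000_000_000))  # ~1 - 1e-9 after a billion sunsets
```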

Because the typical expert in the field thinks that the probability that the AdS/CFT correspondence fully holds exceeds 99.9% - and virtually all peers think that the figure is above 99%, anyway - it makes sense to choose a language that simply treats the duality as a fact. In natural science, there always exists a risk that such an assumption collapses in the future. But if that's the case, the paper based on the failed assumption will become useless, anyway. This uselessness can't be fixed by making the paper more diplomatic or fuzzy.

Outside the community of experts, there are many people who may be irritated by the certainty implicit in the vocabulary - and who (probably incorrectly) think that the probability of various assumptions is much smaller than 99%. But they're not the expected readers of the BMN papers, anyway. For example, if someone thinks that the AdS/CFT correspondence is almost certainly incorrect, he or she is unlikely to study some detailed 2-loop evidence that the map works well for interactions of four long operators. He or she will either dismiss any evidence, or selectively look (or hope) for evidence that the map doesn't work (and he or she will probably never find any).

Penetration of AdS/CFT into the broader public

Zapata's main question is how the correspondence became accepted by the broader scientific public - and maybe parts of the ordinary public. Well, I think it's important to realize that this fact has no consequences for the actual science - and minimal impact on the true experts, too. On the other hand, it may be an interesting sociological question.

In November 1997, Jaume Gomis was the first guy who told me about the map. Being focused on some flat-space issues concerning fivebranes and Matrix theory, I completely ignored his words for a month. In fact, I wasn't even told that the relevant background had a Ramond-Ramond flux, so I concluded that the background didn't solve Einstein's equations and the correspondence was wrong. It took a month before I really sat down with this issue. After some time, Witten gave a lecture about it at Princeton, and so forth.

The mathematical operations were somewhat unusual and different from what we learn in our basic (and advanced) QFT courses. So it took months (and years) for many people to learn what's going on. Only once they learned it did their opinions start to matter. And of course, almost everyone who has learned it at the technical level thinks that the correspondence works exactly.

At some point, it was sensible to think that the duality could break down beyond certain limits, approximations, or supersymmetric truncations. However, every sensible enough scenario - how the duality could break down "somewhere" and still pass all the tests that have already been done - has been ruled out by another successful test.

After all, an independent definition of type IIB string theory for generic couplings and radii is not really known. So in this background, it makes sense to define the type IIB string theory in terms of the gauge theory. You only have to check that this definition of type IIB string theory is compatible with all the older (e.g. perturbative) definitions of type IIB string theory. This process is essentially finite and as far as I can tell, it has been done.

The hierarchies of certainty are interesting. One must realize that it only makes sense to study extremely complex calculations if we're damn sure about the steps and the assumptions - otherwise the papers would be just piles of random gibberish. So the physicists must be pretty sure about their assumptions and about the validity of their crucial steps.

Natural science is not as strict - and cannot be as strict - as mathematics. But we could say that it is sensible to keep the expectation value of the "critical errors" that have the potential to seriously invalidate the basic qualitative picture resulting from the paper below one or so - and maybe well below one. The corresponding expectation value is around 50 in average loop quantum gravity papers, so what you get is pure gibberish, indeed. Whenever such a situation occurs, those people had better work on something that is simpler for them (e.g. as cooks) instead of trying to behave as new Einsteins.
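
As a toy illustration of this error budget - all the per-step numbers below are invented - the expected number of critical errors is roughly the number of crucial steps times the per-step error probability, so a long argument only stays trustworthy if each step is individually very reliable:

```python
# Toy model: a paper whose argument rests on N crucial steps, each with a
# small probability of containing a fatal error.  Numbers are invented.

n_steps = 100

careful = 0.002    # assumed per-step error probability for careful work
sloppy = 0.5       # assumed per-step error probability for sloppy work

print("expected critical errors (careful):", n_steps * careful)  # 0.2, below one
print("expected critical errors (sloppy): ", n_steps * sloppy)   # 50, gibberish
```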

Complex theories, hypotheses, and calculations require very careful work, indeed. This fact dramatically reduces the number of people who can do these things right. However, these selected people can be much more certain about various things than more average people.

Different people belong to differently large groups. In most topics that Edward Witten has written papers about, he has between 5 and 50 peers in the world. It's not surprising that he doesn't give a damn about what's happening in the broader scientific community - and whether a stuttering pampered liar harbored by Columbia University increases the number of stupid haters of theoretical physics in average colleges by one or two orders of magnitude. (David Berenstein doesn't seem to care either, as he shamelessly promotes the vitriolic blog.) It doesn't matter for Witten: the number of people who will follow e.g. his work on rigid surface operators will be of order one, anyway.

Then you have other people who are more strongly affected by these broader communities. They may belong to the top 10 to top 1,000 in the world in a few of their very specialized favorite topics, and to the generic professionals' "top 5,000" of theoretical physics. These people begin to be affected by excessive doubts about statements that are known to be certain by a more selective group. And sometimes they reach a completely incorrect conclusion.

As you continue towards the "top 6 billion" people, the signal-to-noise ratio plummets. The noise is especially devastating if you try to settle very complex questions. Only about 5,000 people in the world have the genuine intellectual tools to independently say something about the validity of string theory or its alternatives as the right theory of quantum gravity. That's less than one part per million (1 ppm). If you don't choose the right people, or if you compute the average of a group that includes many more than the top 5,000 people (and be sure that Peter Woit is not among the top 50,000 people), you will obtain pure gibberish. It's completely obvious that you can't possibly get any signal.

See Insiders and outsiders: sociological arguments in science for some estimates of the number of experts.
