Tuesday, September 02, 2008

Locally predictive landscape

Dienes and Lennek assume that the set of all stringy vacua doesn't exhibit any particularly strong correlation that would allow non-trivial predictions.



However, they realize that it is counterproductive to look at the whole landscape simultaneously. Instead, you should imagine that the landscape is an (overlapping) union of regions, and each region displays some (generally different) correlations, i.e. it makes its own predictions.

Once you determine which region our Universe belongs to, you can clearly predict things. They try to quantify the probability P_n(x) that "x" vacua can be divided into "n" classes, each of which exhibits some correlations. The values of P_n(x) for small values of "n" matter and quantify the predictive power: the larger P_n(x) for small "n" is, the more predictive the landscape becomes.

A high P_n(x) for "n=1" implies that we could make predictions by treating the whole landscape as one set: there would be universal predictions. That's unlikely to occur but for slightly higher values of "n", P_n(x) could already be large.




Paper

At the beginning, they argue that we should expect the string theory "framework" to predict more things than the quantum field theory "framework" because the elementary building blocks and parameters cannot be freely chosen in string theory, unlike in quantum field theory.

However, it is unlikely that the set of all vacua has some universal, highly non-trivial properties. On the other hand, it doesn't mean that the predictive power must be zero. They choose a scenario "in between": the different regions imply predictions but there are still many of them.

This is clearly along the old-fashioned approach to science in which we are looking for the truth "step by step": we first try to identify the correct "region" and then we focus on smaller regions inside it, hopefully ending with the individual model at the end. This gradual method of learning has been a part of science for quite some time and there's no good reason to assume that it should be any different with the landscape.

A priori, you could assume that the "regions" are given by the construction methodologies (heterotic vs braneworlds) but they emphasize that the correct division should follow the low-energy predictions, and not the methods to construct the vacua.

Once again, they quantify the situation using P_n(x), the probability that one can divide "x" vacua into "n" regions so that correlations exist in each region. For fixed "x", this probability increases with "n", and another interesting quantity is "n*", the minimum value of "n" for which P_n(x) equals one (or essentially one, if the condition "correlations exist" becomes fuzzy); the rise of P_n(x) to one is very fast around the critical value "n=n*", anyway. The value of "n*" is a characteristic feature of the landscape. The smaller it is, the more predictive the landscape is.
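To make these definitions concrete, here is a minimal Monte Carlo sketch. All the specifics are my own toy assumptions, not anything from the paper: a "vacuum" is a short vector of binary low-energy properties, and a class of vacua "exhibits a correlation" if all of its members agree on at least one property. With those choices, P_n(x) and "n*" can be estimated by brute force for tiny landscapes.

```python
# Toy Monte Carlo estimate of P_n(x) and n*. All modeling choices are
# illustrative assumptions of mine: a "vacuum" is a vector of k binary
# low-energy properties, and a class "exhibits a correlation" if all of
# its members agree on at least one property.
import itertools
import random

def correlated(cls):
    """True if some property takes the same value across the whole class."""
    return any(len({v[i] for v in cls}) == 1 for i in range(len(cls[0])))

def partitionable(vacua, n):
    """True if the vacua can be split into at most n classes, each correlated."""
    x = len(vacua)
    for labels in itertools.product(range(n), repeat=x):  # brute force over n^x assignments
        classes = [[vacua[j] for j in range(x) if labels[j] == c] for c in range(n)]
        if all(correlated(cls) for cls in classes if cls):
            return True
    return False

def estimate_P(n, x, k=3, trials=200):
    """Estimate P_n(x): the fraction of random toy landscapes of x vacua
    that admit a partition into at most n correlated classes."""
    hits = 0
    for _ in range(trials):
        vacua = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(x)]
        hits += partitionable(vacua, n)
    return hits / trials

x = 6
for n in range(1, x + 1):
    p = estimate_P(n, x)
    print(f"P_{n}({x}) ~ {p:.2f}")
    if p > 0.99:  # crude proxy for n*: the first n at which P_n(x) saturates
        print(f"n* ~ {n} for this toy landscape")
        break
```

In this toy model, P_1(x) is tiny (all vacua would have to share a property by accident) while P_n(x) quickly saturates at one as "n" grows, which is the qualitative behavior described above.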

As I have already said, what this exercise really means is that we don't quite know where the real Universe sits in the landscape. But wherever it is, the neighborhood of this place (defined by the low-energy properties) is either ordered (predictive, with many correlations) or fine-tuned (all features are random and uncorrelated).

Their picture is kind of oversimplified (in many respects) because we would also like to talk about the amount of correlations and the multiplicity and accuracy of the quantities that can be predicted in various regions. But it is an interesting first step, anyway.

They want to believe that the value of "n*" associated with the stringy landscape is comparable to ten (or less), otherwise they would say that the theory is not predictive. Well, this is probably just an emotional convention. Even if "n*" were 1,000, we could realistically identify the correct region and divide the region into subregions later (several times), ultimately finding the right vacuum: their method, originally designed for the whole landscape, can be used for the regions, too.
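Just to put a rough number on that claim (the round size of 1,000 is my own made-up figure, used only for illustration): repeatedly choosing one subregion out of about a thousand reaches a single vacuum among 10^{500} after only a couple of hundred rounds.

```python
import math

# Rough illustration with a made-up round size: how many successive
# 1,000-way refinements single out one vacuum among 10^500?
total_bits = 500 * math.log2(10)      # ~1661 bits to pin down one vacuum
bits_per_round = math.log2(1000)      # ~10 bits supplied per refinement round
print(math.ceil(total_bits / bits_per_round))   # ~167 rounds
```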

Comparing the information going in and out

They don't write the following explicit point - I am not sure whether they realize it - so let me do it for them.

If we have a particular value of "n*", it means that we need to supplement string theory with the input information "ln(n*)" - measured in nats, i.e. "log_2(n*)" bits - that identifies the region. What is needed for the theory to be predictive is that the information that can be extracted from the correlations - those that are universal for the typical (or the least predictive) class - is greater than "ln(n*)". So if the correlations you can extract are more non-trivial, detailed, or accurate (for continuous quantities), you can clearly afford a higher value of "n*" and still claim that the theory is predictive (in practice).

With this quantitative definition, it is easy to see that in principle, string theory is infinitely predictive. We may divide the landscape completely, into classes that contain one vacuum each. Then, using the popular figure for the number of vacua, we need to specify the input information whose amount is "ln(10^{500}) = 1151.3 nats = 1661 bits = 208 bytes", but the predictions can be arbitrarily accurate, giving us an infinite amount of output information.
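For the record, the unit conversions behind those figures (nothing model-specific here, just arithmetic):

```python
import math

# Information needed to single out one vacuum among 10^500, in various units.
nats = 500 * math.log(10)       # ln(10^500) ~ 1151.3 nats
bits = 500 * math.log2(10)      # ~ 1661 bits
nbytes = bits / 8               # ~ 208 bytes
print(round(nats, 1), round(bits), round(nbytes))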

Of course, this reasoning only holds in principle because we probably can't measure all the quantities so accurately that we could extract 500 decimal digits of information from the observed low-energy data. To realize the predictive power in practice, we need many fewer classes than 10^{500}, but I disagree that the number has to be as low as O(10). Even if there are "n=1,000" classes, we only need 3 decimal digits of input information, and it is very plausible that the predicted information from each class may easily exceed 3 decimal digits (9.97 bits).
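Put in the language of the previous paragraphs, the bookkeeping looks like this; the "output" side is of course my own illustrative assumption (a few continuous quantities, each predicted to roughly three significant digits):

```python
import math

# Input: picking one region out of n = 1,000 classes.
input_bits = math.log2(1000)         # ~ 9.97 bits, i.e. 3 decimal digits

# Output (illustrative assumption): the region predicts, say, 4 low-energy
# quantities, each to a relative precision of 10^-3, worth ~log2(10^3) bits each.
output_bits = 4 * math.log2(10**3)   # ~ 39.9 bits

print(output_bits > input_bits)      # the "predictive in practice" criterion from above
```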

More generally, I am not sure why they don't use Shannon's concept of "information" (measured in bits) systematically.
