Tuesday, July 31, 2007

Don Page defends typicality

Hartle and Srednicki wrote a crisp paper that sketches the Bayesian methods for evaluating the probability that a theory is correct and argues that the anthropic concept of "typicality" shouldn't influence our evaluation of theories.

Don Page responds. He agrees with many of Hartle and Srednicki's statements and rules but still wants to end up with different conclusions. That's a pretty difficult task, and the detailed flow of his paper doesn't make sense to me.

I find Page's criticism illegitimate in these respects:

First, Page seems to argue that Hartle and Srednicki were considering theories in which the probabilities of the alternatives don't sum to one; I don't see that Hartle and Srednicki ever did so.

Second, Page defines a notion of typicality for an observed dataset within a given theory by dividing its probability by the probability of a "median dataset". This is strange for several reasons.
  1. It is not justified why he picks the median rather than some other kind of average - the arithmetic or geometric mean, for example.
  2. It is not justified why he considers any kind of average at all. Bayes' formula already contains everything needed to calculate the a posteriori probabilities of the theories, so any ad hoc addition to these rules is clearly wrong.
  3. It is not possible to sort the probabilities of the different $D_j$ in the first place, because the probabilities of the $D_j$ depend on how finely we divide the space of possible outcomes into boxes. For discrete pure microstates we could count individual states, but that is clearly impossible in the general case, where either a continuum of pure states or mixed states (a density matrix) must be considered.
  4. This whole concept - a ratio involving an arbitrarily chosen "mean" - looks like a bureaucratic sleight-of-hand, and I see no way that this ugly rule, appended to Bayesian inference, could ever influence rational considerations.
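To make the objection concrete, here is a minimal Python sketch - with entirely made-up likelihoods, priors, and datasets, purely for illustration - contrasting the ordinary Bayesian posterior with the median-normalized "typicality" ratio. The posterior already uses all of the available information; the extra ratio is an independent number that Bayes' formula never asks for, and it shifts as soon as the boxes $D_j$ are redrawn.

```python
# Illustrative only: two toy theories, three possible datasets.
# The likelihoods P(D_j | T_i) and priors P(T_i) are invented numbers.

likelihood = {
    "T1": {"D1": 0.7, "D2": 0.2, "D3": 0.1},
    "T2": {"D1": 0.1, "D2": 0.3, "D3": 0.6},
}
prior = {"T1": 0.5, "T2": 0.5}

observed = "D1"

# Standard Bayesian posterior: this already uses everything.
evidence = sum(likelihood[t][observed] * prior[t] for t in prior)
posterior = {t: likelihood[t][observed] * prior[t] / evidence for t in prior}

# A Page-style "typicality" ratio: divide the observed dataset's
# probability by the probability of the median dataset within each
# theory.  This number depends on how the outcome space is cut into
# boxes D_j, and Bayes' formula above makes no use of it.
def typicality(theory):
    probs = sorted(likelihood[theory].values())
    median = probs[len(probs) // 2]
    return likelihood[theory][observed] / median

print(posterior)                           # T1 favored: {'T1': 0.875, 'T2': 0.125}
print({t: typicality(t) for t in prior})   # an unrelated extra number per theory
```

Merging the three boxes into two, or splitting one into ten, leaves the posterior for a fixed observation well-defined but changes which dataset is the "median" one - which is the content of objection 3 above.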
Third, Page doesn't seem to discuss the prior probabilities $P(T_i)$ at all, even though this is precisely where the whole dispute is hiding.

These prior probabilities should be assigned wisely, reflecting our state of ignorance. For example, if several discrete theories are available to explain the data, they should be given equal priors. However, when the possible theories come in large classes or spaces, things become subtle. Do you assign one voice to each member or one voice to the whole class?
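The subtlety can be shown with a toy Python calculation (the class size and likelihoods are invented for illustration): whether a class of a million near-identical theories receives one collective prior or one prior per member changes the posterior odds by six orders of magnitude, even though no likelihood changes at all.

```python
# Toy illustration with invented numbers: theory A is a single theory,
# while class B contains 1,000,000 members, each fitting the observed
# data exactly as well as A does.

n_members_B = 1_000_000
likelihood_A = 0.5
likelihood_B = 0.5   # each member of B fits the data equally well

# Convention 1: one voice per *class* -> A and B split the prior evenly.
prior_A, prior_B_total = 0.5, 0.5
odds_per_class = (likelihood_A * prior_A) / (likelihood_B * prior_B_total)

# Convention 2: one voice per *member* -> class B soaks up almost the
# entire prior simply by having many members.
total = 1 + n_members_B
prior_A = 1 / total
prior_B_total = n_members_B / total
odds_per_member = (likelihood_A * prior_A) / (likelihood_B * prior_B_total)

print(odds_per_class)    # 1.0   -> A and the class are equally likely
print(odds_per_member)   # 1e-06 -> A is crushed by B's head count alone
```

The data are identical in both conventions; only the bookkeeping of the priors differs, which is why this choice is where the real dispute lives.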

Anthropic reasoning, once again, wants to assign voices to the individual members of these classes. It's the same approach as that of Lee Smolin, who sells his crackpot discrete models not as one theory but as eight theories, assuming that this will increase their appeal by a factor of eight.

This approach reminds me of the blonde woman who is asked whether her pizza should be cut into four or eight pieces: "Only four, eight would be too much for me to eat!" ;-)

The choice of priors is a somewhat philosophical question, but Hartle and Srednicki convincingly argue that classes of theories with a huge number of elements shouldn't be given too high a weight. I haven't noticed that Page discusses this issue.

What is Page's answer to their Jovian thought experiment?

In this setup, two theories, T1 and T2, give identically good and accurate predictions of the data that we could test on Earth. We may imagine that their predictions are actually indistinguishable whenever quantities such as the temperature and the gravitational acceleration resemble those on Earth.

However, T2 happens to differ from T1 in a subtle way: it predicts a new set of molecules or other bound states (made of new particles, for example) that will form life in environments such as Jupiter's atmosphere. When you calculate it, T2 predicts roughly six trillion intelligent beings living inside that atmosphere. Assume that we can't yet observe whether this life in Jupiter's atmosphere exists - which is a pretty realistic assumption anyway.

The question is which theory, T1 or T2, is more likely according to the available data.

Hartle, Srednicki, rational thinking, and common sense all dictate that the two theories are equally likely: they are two discrete possibilities that should be assigned the same priors, and they give indistinguishable predictions for the quantities we could have measured. It follows that all of the numbers entering Bayes' formula are identical for T1 and T2, and the two theories are thus in equally good shape.
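Spelled out in Bayes' formula: with equal priors $P(T_1) = P(T_2)$ and indistinguishable likelihoods $P(D|T_1) = P(D|T_2)$ for the terrestrial data $D$, the posteriors must coincide,

```latex
P(T_i|D) \;=\; \frac{P(D|T_i)\,P(T_i)}{\sum_j P(D|T_j)\,P(T_j)}
\quad\Longrightarrow\quad
P(T_1|D) \;=\; P(T_2|D) \;=\; \frac{1}{2}.
```

Every factor on the right-hand side is identical for $i = 1, 2$, so no observation made so far can separate the two theories.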

According to the defenders of typicality, T2 is disfavored by a factor of roughly 1,000 relative to T1, because according to T2 combined with typicality, we should probably be living inside Jupiter's atmosphere, which we don't.
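The factor of roughly 1,000 comes from simple observer counting under the self-sampling assumption - a short Python sketch, taking the human population as a round 6 billion (an illustrative figure; only the 6 trillion Jovians are from the thought experiment): if T2 is true and "we" are a randomly selected observer, the chance of finding ourselves on Earth is about one in a thousand.

```python
# Observer counting behind the alleged "factor of 1000" penalty on T2.
# 6e9 humans is a round illustrative figure; 6e12 Jovians is the number
# quoted in the thought experiment.

humans = 6e9
jovians = 6e12

# Under T1, all observers are terrestrial.
p_terrestrial_T1 = 1.0

# Under T2 plus typicality, a "random observer" is terrestrial with
# probability humans / (humans + jovians).
p_terrestrial_T2 = humans / (humans + jovians)

penalty = p_terrestrial_T1 / p_terrestrial_T2
print(round(penalty))   # ~1001, i.e. roughly a factor of 1,000 against T2
```

This is exactly the step the text below rejects: the penalty only exists if one grants that "we" could have been one of the Jovian observers in the first place.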

I view the latter argument as irrational. It is based on an inherently political notion that some dirty colorful creatures who may live inside Jupiter's atmosphere should have the same voice as we do in some kind of crucial counting that determines how we evaluate theories. Such a "democratic" assumption could only follow from some kind of equilibrium or an "egalitarian" law enforced both on Earth and on Jupiter. I hope that, at least so far, no such law covering the Solar System exists. More importantly, it didn't exist in the past when both civilizations evolved - which shows that any conclusion based on the assumption that this law operates is unjustified.

Moreover, the "typicality" approach rests on another assumption, namely that "we" could be the Jovian guys after all. I think this assumption is clearly incorrect, too. "We" couldn't be the Jovian guys because "we" are defined as those who live at around 300 kelvins and 9.8 N/kg of gravity. The meaningful question that enters Bayes' formula is whether a theory predicts the right life under these conditions, not how it interprets the word "we". T2 doesn't predict that we are probably Jovians. No theory can predict such a thing because it is tautologically untrue: the only way to make the word "we" meaningful is to define it as the people who live at 300 kelvins and 9.8 N/kg. Punishing T2 for this "prediction" that it can't really make is a fraud.

It would be equally wrong to say that T2 is, on the contrary, 1000 times more likely than T1 because it predicts 1000 times more intelligent beings.

Incidentally, I didn't tell you that the Jovian beings have ten brains each and that leftist activists demand that each being get ten votes in the General Polls of the Solar System (GPSS) instead of one, thus reducing the influence of Earth from 0.1% to 0.01%. Is that the right thing to do? Clearly, these questions are purely political, and in the real world the outcome would depend on a comparison of political and military forces rather than on some ad hoc bureaucratic egalitarian rules. Spoiler: the Earth will win the war.

Summary & challenge

The first argument, which implies that T1 and T2 are equally likely, sounds solid. If two individual theories predict the same results for the situations that we have tested and only differ in their predictions for situations that we haven't yet observed - where we don't know what the right answers are - then these two theories must have the same probability, regardless of any egalitarian ideology. Could Don Page or another champion of typicality kindly address this particular thought experiment?
