Wednesday, May 26, 2010

Misanthropic principle

Why it could be true, how it could be exactly defined, and what it could imply

The typical application of the anthropic principle in the literature is based on the assumption that our Universe - and our species - is among the most typical universes or species in the multiverse, at least among the universes that admit some intelligent life.



Of course, this assumption is not rationally justified. I have always believed and I still believe that it is not only unjustified but that its negation is more likely to be true - and if it is ever mathematically refined, it may even become a deep principle.

Fine. So the misanthropic principle is the assumption that
our Universe is among the most special vacua in the landscape.
Of course, it's almost as hard to define "special vacua" as it is to define the "typical vacua": you can just say that the two notions are opposite to one another, at least morally. However, defining the special vacua could be a better-defined task because the very idea that we should think about the special vacua leads us to think about distributions that are supported in more special and more finite regions: such a measure is more likely to be normalizable.




A justification using the second law

The assumption that our world is "special", at least in some respects, hasn't always turned out to be true, but it has worked on numerous occasions in the history of science. People who believe it tend to think that there is something "fundamental" or "beautiful" or "inevitable" about this world. Such feelings are largely emotional and don't have to be true.

However, there's one argument why the assumption could be true in the context of the vacuum selection. This new argument of mine is rooted in thermodynamics.



Off-topic: Steven Weinberg received a lunch from Google, which is why he likes them and why he gave a talk related to his new book about general topics, Lake [Austin] Views.

According to the second law of thermodynamics, entropy keeps on increasing with time. In the past, it had to be lower. The further we look into the past, the lower entropy we have to get. That means that whenever the past entropy is positive, we may imagine that there was an earlier moment when the entropy was even lower.

By repeating this strategy, we may ultimately believe that the entropy at the very beginning was essentially zero. It's the only value of entropy that prevents us from going further to the past - an appropriate value for the initial state.

However, if you imagine that our Universe started as an empty state at some "generic" point of the dense regions of the landscape - those regions that contain those popular or notorious "10^{500}" vacua - the "initial" state would still contain the information about the precise identity of the vacuum.

Because you may select it out of "10^{500}" elements, and these elements (vacua) are "indistinguishable" from each other when you use just some generic criteria allowed in quantum cosmology, you need "ln(10^{500})" of information, which is about 1150 nats ("e-bits"), or 1660 binary bits of information.
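Just to check the arithmetic (a trivial sketch; "10^{500}" is the usual order-of-magnitude count, not a precise number):

```python
import math

# information needed to pick one vacuum out of 10^500 a priori
# indistinguishable candidates
nats = 500 * math.log(10)    # natural units (nats, i.e. "e-bits")
bits = 500 * math.log2(10)   # the same amount in binary bits

print(f"{nats:.0f} nats ~ {bits:.0f} bits")   # roughly 1151 nats ~ 1661 bits
```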

That's of course much less than the entropy or information carried by the Universe today - or at any moment of its macroscopic life. However, 1660 bits is still a significant amount of information. When the entropy was that high, you may argue that the Universe was already "somewhat old" and it had to evolve from something more fundamental - from a state whose entropy had been lower.

Now, it's pretty difficult to define which states or backgrounds are "macroscopically indistinguishable" from a given one. But it may be the case that you want the right Universe to be located in regions of the configuration space or the landscape where the density of the vacua is very low. The vacua that are isolated, lonely, and relatively distant from their neighbors and from the big "bunches" of the generic vacua may be the most likely ones to describe the initial state of the Universe because their "cosmological entropy" is minimized.

Spectrum of these special vacua

How can you identify such vacua? Are the observable phenomena in these universes correlated with the vacua's being special in some way? Well, there should probably be a lot of volume in the configuration space around them. The configuration space includes the moduli. But there are no exact moduli in realistic (stabilized) vacua.

Moreover, it can't be the case that only "strictly massless" fields matter. The light fields should influence the counting in a similar way as the massless fields because their mass is similar to the mass of the moduli - it is close to zero. Imagine that one light bosonic field is generating one direction in the configuration space.

Its mass term in the Lagrangian is
L = -m^2 phi^2 / 2.
Imagine that the "appropriate neighborhood" of your vacuum is obtained approximately in the region in which "L" remains smaller than some energy density "rho". The maximum value of "phi" is therefore of order
phi = sqrt(rho) / m.
It is inversely proportional to the mass. Moreover, you want to compute the dimensionless volume, so that you can compare it between the vacua, regardless of their number of light fields. So "sqrt(rho)" should be supplemented with "1/M" involving some typical, e.g. Planckian, energy scale "M". I don't know what the right value of "sqrt(rho)/M" for such counting should be.
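A minimal numerical sketch of this estimate, with purely illustrative placeholder values for "rho", "M", and the masses (none of these numbers is derived from anything):

```python
import math

# purely illustrative numbers: "M" is a reference (e.g. Planckian) scale set
# to one, "rho" is a hypothetical energy-density cutoff in units of M^4
M = 1.0
rho = 1e-8
m_values = [1e-3, 1e-2]   # hypothetical masses of two light bosonic fields

# phi_max = sqrt(rho)/m is the field range where the magnitude of the mass
# term m^2 phi^2 / 2 stays below rho; dividing by M makes it dimensionless
factors = [math.sqrt(rho) / (M * m) for m in m_values]
volume = math.prod(factors)   # dimensionless volume around the vacuum

for m, f in zip(m_values, factors):
    print(f"m = {m:g}: sqrt(rho)/(M*m) = {f:g}")
print(f"product over light fields = {volume:g}")
```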

But this way of looking at the measure already offers us many potentially interesting selection rules - and maybe also some explanations for the interrelations between the Planck mass, the SUSY breaking scale, and the mass of the lightest massive particles (neutrinos), which happens to coincide with the 4th root of the cosmological constant.

For example, if you want to maximize the volume around the vacuum, believing the product formula above, you may want to
  1. minimize the number of light fields because each light field contributes a factor "sqrt(rho)/(Mm)" which is arguably smaller than one
  2. among the models with a small number of light fields, you want to prefer the models for which the masses "m" are small because "1/m" is large. More precisely, the product of these masses should be as small as possible.
You want to have a small number of relatively light fields but those that survive should be as light as possible. These are pretty interesting possible consequences of the assumption that the initial entropy should be minimized.
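A toy ranking of a few invented spectra according to this measure (everything here is a made-up illustration, not real landscape data) shows how the two rules act together:

```python
import math

M = 1.0        # reference scale set to one, as above
rho = 1e-8     # the same hypothetical cutoff, so sqrt(rho)/M = 1e-4

def dimensionless_volume(masses):
    """Product of sqrt(rho)/(M*m) over the light bosonic fields of a vacuum."""
    return math.prod(math.sqrt(rho) / (M * m) for m in masses)

# invented spectra: (label, masses of the light fields in units of M)
toy_vacua = [
    ("one field, m = 3e-4",  [3e-4]),
    ("one field, m = 1e-3",  [1e-3]),
    ("ten fields, m = 1e-3", [1e-3] * 10),
]

for label, masses in sorted(toy_vacua, key=lambda v: dimensionless_volume(v[1]), reverse=True):
    print(f"{label:22s} volume = {dimensionless_volume(masses):.3g}")
```

The single vacuum with the lightest surviving field wins (rule 2), and the ten-field vacuum is heavily suppressed because each factor is smaller than one (rule 1).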

They seem pretty viable, too: the first insight seems to prefer "simple" vacua with a small number of light fields, e.g. Calabi-Yau manifolds with small Hodge numbers (and therefore small numbers of generations, and so on). The models with small numbers of light fields could be "classifiable" in some sense.

The second rule would seem to give us an explanation why Nature likes hierarchies - families of particles that are much lighter than others. In reality, the masses of various known elementary particles seem to be "almost" uniformly distributed on the logarithmic scale. This fact - which also seems pretty useful for life, so could be used by the anthropic people - could also follow from some basic principles of physics such as the minimization of the initial entropy.

Needless to say, all these ideas are just sketches and even if the basic strategy is correct, the right details may differ from those above.

Some aspects - such as the contributions from fermions (whose masses could go to the numerator and cancel the bosons' contributions if SUSY is unbroken; the rule could then prefer fermions that are lighter or heavier than their bosonic counterparts) - are completely missing so far. Also, you could say that the ideology above leads you to "look for the keys under the lamppost only".
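If one wanted to incorporate the fermions at all, one speculative guess - a guess on top of a guess, nothing derived - would place the fermionic masses in the numerator, so that the factors cancel exactly in the supersymmetric limit:

```python
import math

sqrt_rho_over_M = 1e-4   # the same hypothetical scale as in the sketches above

def volume_with_fermions(boson_masses, fermion_masses):
    """Speculative variant of the measure: bosonic masses in the denominator,
    fermionic masses in the numerator, so unbroken SUSY cancels exactly."""
    boson_part = math.prod(sqrt_rho_over_M / m for m in boson_masses)
    fermion_part = math.prod(m / sqrt_rho_over_M for m in fermion_masses)
    return boson_part * fermion_part

print(volume_with_fermions([1e-3], [1e-3]))   # unbroken SUSY: exactly 1.0
print(volume_with_fermions([1e-3], [1e-5]))   # broken SUSY, lighter fermion: 0.01
```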

However, there could exist objective and rationally justifiable reasons why the keys actually have to be under the lamppost, i.e. outside the "hopeless" chunks of the numerous random vacua in the landscape.

And the right vacuum selection principles could discriminate the vacua according to their actual physics - their spectrum and interactions. This likely possibility has been completely absent from the research on vacuum selection so far. Every vacuum would usually get "one vote" regardless of its properties and spectrum, regardless of the volume of its neighborhood in the configuration space, and so on.

I believe that if there exists a "measure" that answers the vacuum selection puzzles and tells us something about the initial state, then this assumption about the "blindness of the selection to physical properties" is indefensible. The Hartle-Hawking-like initial state has to care about the dynamics, about the spectra of the backgrounds, and about their physics in general. After all, if there exist rational principles of the vacuum selection, they have to be connected with the dynamics - much like decoherence is connected with the Hamiltonian - and not just with some dumb, superficial, and kinematic counting of possibilities.

And that's the memo.
