Thursday, December 17, 2009

Why naturalness should be expected

Sabine Hossenfelder has revealed another incredible fact - namely that she doesn't understand why parameters of theories should be expected to be natural.

Holy cow!

She correctly explains that naturalness means that parameters are expected to be of order one (e.g. 3.14 or 1 or 0.618 but not 19171107 or 0.00000025021948) and that this rule can only be applied to dimensionless parameters, because the numerical magnitude of a dimensionful quantity depends on the choice of units: by choosing the units, it can be made much greater than one, much smaller than one, comparable to one, or equal to one.

But everything else she writes is just unbelievably dumb.

She thinks that there's no reason for anything to be natural - that the only reason why people expect parameters of order one is that they like mediocre things and they like the book about women from Venus and men from Mars. And she claims that she doesn't care about these things (I thought that she was the ultimate champion of mediocrity!), which means that she thinks that naturalness is bogus.

However, the reason why naturalness is an important principle in physics has nothing to do with the caricature used by Hossenfelder to make fun of it, or with Mars, Venus, or sex. And the fact that Swedish universities are ready to employ people who don't have the slightest idea about the basic principles of theoretical physics doesn't mean that these principles are wrong or unimportant.

If an applicant to a grad school wrote a similar essay against naturalness, I would instantly turn him or her down. It's just too bad. Weak knowledge, insufficient intelligence, and poor intuition are needed to believe what Hossenfelder does.

What does naturalness mean?

Naturalness means that the dimensionless parameters in physics are likely to be comparable to one - and very unlikely to be much greater or much smaller than one - unless an explanation why they're very different from one exists.

Why is it true?

When Bayesian inference is taken into account, the statement above is true almost tautologically. Imagine that you study the value of a dimensionless parameter in your theory called X, describing an object Y. You don't quite know what the value of X is. However, you assume that Y is kind of fundamental and doesn't hide additional layers of an onion to be learned about.

What can you say about X in this case?

Well, you may only have some probabilistic expectations about X. And because you assume that you have gotten to the very bottom of the pyramid of explanations, the full theory of the physical system must produce X as a solution to some equation - yes, I do assume that there is actually *a* theory that explains any feature of the Universe. Imagine that the equation is "Cos(X)=X". In reality, the equations are much more complicated and involve path integrals and other complex objects.

It's very clear that unless the equation contains some very awkward patterns, X will be of order one. It is of order one - around 0.739 - in the example above. Any sufficiently concise equation of this kind will produce solutions that are of order one, unless you're lucky and X=0 is an exact solution (in which case there is usually a rigorous argument, typically based on symmetry). Try it.
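
If you want to check the number, here is a minimal sketch in Python (the equation Cos(X)=X is taken from the text; the fixed-point iteration is just one convenient way to solve it):

import math

# Solve Cos(X) = X by the fixed-point iteration X -> cos(X).
# The map is a contraction near the root, so the iteration converges quickly.
x = 1.0
for _ in range(100):
    x = math.cos(x)

print(x)  # ~0.7390851, i.e. a number of order one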

If we're ignorant about the value of X, it really means that we imagine that the value of X is distributed according to a statistical distribution. And so is Ln(X). Because the distribution cannot be uniform (that couldn't be normalized), Ln(X) must be centered somewhere. And the only place where it can plausibly be centered - once you average over all kinds of simple enough equations - is near Ln(X) = 0.

What is the width? How dramatically may X deviate from 1, or Ln(X) from 0? Well, once again, this width W is another parameter or meta-parameter, and the same reasoning as above tells you that it should be of order one. At any rate, you won't get too far from 1 unless special, awkward, unlikely, unusual things appear in the solution.
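
A toy illustration of this logic - not a derivation; the choice of a standard normal for Ln(X) is purely an assumed example of a distribution centered at 0 with a width of order one:

import math, random

# If Ln(X) is centered at 0 with a width of order one (here: a standard
# normal, which is just an assumption for the sake of illustration),
# then X itself almost never strays from 1 by many orders of magnitude.
random.seed(0)
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(100_000)]
extreme = sum(1 for x in samples if x < 1e-4 or x > 1e4)
print(extreme / len(samples))  # ~0: values like 10^{-9} are absurdly unlikely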

Don't get me wrong. You may generate equations with solutions that are very different from 1. For example, the equation

Ln(Ln(Ln(X))) = Cos(Ln(Ln(Ln(X))))
will have the "first" solution at X=3352.5. But the only reason it's large is that we actually had an equation for Z=Ln(Ln(Ln(X))), so X was naturally written as Exp(Exp(Exp(Z))) for a natural Z. If we find a reason why X is the exponential (or triple exponential) of a more natural parameter, then we can understand why X is much larger than one or much smaller than one. But otherwise we can't.
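
The same two-step structure can be checked numerically - again a minimal Python sketch, with the numbers quoted in the text:

import math

# Step 1: solve the natural equation Z = Cos(Z); Z is of order one.
z = 1.0
for _ in range(100):
    z = math.cos(z)        # z -> ~0.739

# Step 2: undo the substitution Z = Ln(Ln(Ln(X))), i.e. X = Exp(Exp(Exp(Z))).
x = math.exp(math.exp(math.exp(z)))
print(x)  # ~3352.5 - large only because of the stacked exponentials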

The strong CP-problem

This example is probably more transparent.

Consider an angular parameter of your theory, for example the strong theta angle of QCD, the coefficient in front of the "trace of F wedge F" term in QCD. I won't really explain what it means - except that I will tell the experts that we will be interested in the value at the high-energy or unification scale. The only thing that matters here is that it influences physics and the values that differ by multiples of 2.pi are equivalent to each other. So it is an angular parameter that effectively lives in the interval (0,2.pi). How big is it?
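
For readers who want the explicit expression, the term has the schematic form (in one common normalization convention; factors and signs vary between references):

S_\theta = \frac{\theta}{32\pi^2} \int d^4x \, \epsilon^{\mu\nu\rho\sigma} \, \mathrm{Tr}\, F_{\mu\nu} F_{\rho\sigma} = \frac{\theta}{8\pi^2} \int \mathrm{Tr}\, F \wedge F

Because the integral multiplying theta is an integer (the instanton number) on topologically nontrivial configurations, shifting theta by 2.pi leaves the path-integral weight unchanged - that's why theta behaves as an angle.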

Well, if you could find a qualitative reason why it is (or it could be) exactly zero - a kind of symmetry argument - you could expect it to be zero (or likely to be zero). But let's assume that no such qualitative argument exists. In fact, theta can't be universally zero because it runs - it depends on the renormalization scale.

If no argument exists, theta is equally likely to be any number. How likely is it that it is smaller than 10^{-9}?

Because we don't know anything about theta, we must imagine that it is an unknown variable with a statistical distribution. And because of the additive shifts that don't change physics, we should expect that theta has a uniform distribution - with constant density 1/(2.pi) - between 0 and 2.pi. This measure is also natural because it's the volume form on the configuration space of scalar fields, if the parameter becomes dynamical.

With this distribution, what is the probability that it is smaller than 10^{-9}? Well, it's clearly comparable to 10^{-9} itself. Only a very small portion of the interval (0,2.pi) belongs to the interval (0,10^{-9}) and it is very unlikely that a random number from the big interval falls into the shorter interval.
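
The arithmetic is a one-liner - once more a minimal sketch, with the bound 10^{-9} taken from the text:

import math

# With theta uniform on (0, 2*pi), the chance that it lands below 10^{-9}
# is just the ratio of the interval lengths.
prob = 1e-9 / (2 * math.pi)
print(prob)  # ~1.6e-10, comparable to 10^{-9} itself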

Is this argument exact or canonical? No, it's just a matter of logical inference, an estimate of probabilities. But a damn good one. It seems extremely unlikely for its conclusions to be violated by Nature.

Needless to say, the theta in the real world actually is smaller than 10^{-9}. It's not a proof that physics is contradictory because the argument above was just probabilistic. But it does show that it is likely that something is wrong with the argument. It seems that theta is just *not* a random, uniformly distributed number between 0 and 2.pi. It likes to be close to zero. And a reason should exist.

What do I mean by a reason? Well, the full theory of everything contains not only an explanation why theta is small. It also produces its exact numerical value.

But if you have a calculation that produces a value that is much smaller than one, there must almost inevitably exist a simplified, rough, qualitative version of the calculation that doesn't quite give you the exact value but that manages to explain why the value is so tiny.

For example, we found the reason when we were solving the equation
Ln(Ln(Ln(X))) = Cos(Ln(Ln(Ln(X)))).
It would be hard to immediately see that X=3352.5 is a solution. But it's not hard to see that the solution to the equation above will be large because X is actually Exp(Exp(Exp(Z))) where Z is a solution to a natural equation, Z=Cos(Z). For pretty natural values of Z, it is normal to get pretty big values of X.

The analogous qualitative explanation must exist in the case of all unnatural parameters. Many of these naturalness problems have been solved. The only unsolved ones, in the state-of-the-art picture of physics, are
  • the cosmological constant problem: why is the C.C. so tiny in the Planck units?
  • the hierarchy problem: why is the Higgs boson (and therefore W/Z bosons) so much lighter than the Planck scale or other fundamental high-energy scales?
  • why is the QCD theta angle so small?
  • why are some Yukawa couplings (and lepton/quark masses) so much smaller than others?
The last problem, the Yukawa coupling hierarchy, is the least serious one - its hierarchies are the closest to one - and it also has some promising preliminary solutions in various scenarios of string theory. The strong CP-problem is often "solved" by the axions - although we can't be sure that it's the right solution - and the hierarchy problem is solved by SUSY - which is more likely than not to be the right solution - while the cosmological constant problem remains the toughest one.

Eventually, physics should be able to calculate these things. But the values of the parameters above are not merely unknown to high accuracy. They're unnatural - which means that they're not even approximately close to your expectations for solutions to a generic messy equation. They're very different. And because they're so different from one, there must exist a simplified version of the calculation that tells you why they're so different from one, even though it doesn't quite allow you to calculate the exact values.

That's why people are looking for it.

One of the postmodern approaches to the naturalness problems, the anthropic principle, is not a really satisfactory solution - from a physics viewpoint - but it is a kind of solution, anyway. It says that these constants are so different from one because such unnatural values are actually necessary for intelligent life, or lots of it, or a high probability of it, or something like that. So the loophole in the argument above is that the assumed probability distribution centered around one is wrong: the right distribution is actually centered elsewhere, around the big numbers needed for life to emerge.

Sabine Hossenfelder simply denies that there are any problems to solve. That's because she's not curious about physics, about the values of observable quantities, or about the logic and patterns that underlie them. She doesn't want to understand how the world works. She prefers to find insults directed against established pillars of physics and logic, and excuses for why her theories don't have to solve any actual problems and why it's equally OK to write crackpot papers that don't make any sense.
