Friday, June 30, 2017

Backreaction: the same stunning misunderstanding of naturalness as in 2009

Sabine Hossenfelder has some non-convex reproductive organs and the European Union's sexist bureaucrats make sure that it's enough and she doesn't have to understand any science while pretending to be a physicist. But her new text

To understand the foundations of physics, study numerology
is amazing because it's more than just a repetition of her complete cluelessness about the logic of naturalness that she presented in 2009. Her new text is actually more explicit, and therefore dumber, than the text from the last decade.

From the beginning, she boldly tells us that any argument referring to naturalness must be wrong – including the argument that the Universe had better have undergone an inflationary epoch, or another epoch, that explains its almost perfect flatness.




OK, let's copy a piece of her diatribe:
...This means for the curvature-parameter to be smaller than 0.005 today, it must have been smaller than \(10^{-60}\) or so briefly after the Big Bang.

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is \(10^{-60}\) any worse than, say, 1.778, or \(\exp(67\pi)\)? ...
\(10^{-60}\) is worse for a simple reason: it's 60 or roughly 151 f*cking orders of magnitude smaller than the other two numbers above. And the probability that a number ends up this small, assuming any natural, normalizable distribution covering numbers of order one, is comparable to the number itself.

If you consider \(x\) which is uniformly and randomly distributed in the interval from 0 to 1 or from 0 to 3, the probability that it ends up being \(10^{-60}\) or smaller is about \(10^{-60}\). The probability is virtually zero. On the other hand, the probability that a number distributed between 0 and 3 ends up being smaller than 1.778 is about 59% – and larger, about 41% – so either outcome may perfectly well happen.
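The uniform-distribution estimate can be checked with a quick Monte Carlo sketch (illustrative only – the interval and sample size here are arbitrary choices, not anything from the original argument):

```python
import random

def prob_below(threshold, lo=0.0, hi=3.0, n=1_000_000):
    """Monte Carlo estimate of P(x <= threshold) for x ~ Uniform(lo, hi)."""
    hits = sum(1 for _ in range(n) if random.uniform(lo, hi) <= threshold)
    return hits / n

# The exact values follow from the CDF: P(x <= t) = (t - lo) / (hi - lo).
print(prob_below(1.778))   # ~0.59: a number like 1.778 is completely ordinary
print(prob_below(1e-60))   # 0.0 in any feasible sample: the true P is ~3e-61
```

No feasible number of samples will ever land below \(10^{-60}\), which is exactly the point: a theory assigning such a distribution to the observed value is dead on arrival.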

If the parameter were \(\exp(67\pi)\), we would have to work with its inverse instead – the choice of which parameter is the fundamental one would have to be qualitatively changed – and that inverse parameter, \(\exp(-67\pi)\), would be even more unnatural than \(10^{-60}\) because \(\exp(-67\pi)\) is about \(10^{-91}\).




I have used a probability distribution that covered the interval from 0 to 1 or 0 to 3 and was uniform. But the point is that for any distribution that produces numbers naturally, the qualitative conclusion will be the same. Consider the normal distribution around 0 with the standard deviation \(7\pi^2\). Some detailed numbers will change but the qualitative conclusion won't: it is insanely unlikely that the number ends up being this small.
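For the normal distribution this probability can be computed exactly with the error function; a short sketch (the standard deviation \(7\pi^2\) follows the text, everything else is illustrative):

```python
import math

sigma = 7 * math.pi ** 2      # the standard deviation used in the text, ~69.1

def prob_within(eps, sigma):
    """P(|X| <= eps) for X ~ Normal(0, sigma), computed via the error function."""
    return math.erf(eps / (sigma * math.sqrt(2)))

# Sanity check: about 68% of the mass lies within one standard deviation.
print(prob_within(sigma, sigma))    # ~0.68
# A band of half-width 1e-60 around zero carries essentially no probability:
print(prob_within(1e-60, sigma))    # ~1.2e-62
```

As claimed, the detailed number changes (here \(\sim 10^{-62}\) instead of \(\sim 10^{-60}\)) but the qualitative verdict is identical.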

Any theory equipped with any distribution like that will predict that the required tiny initial value of the parameter is basically impossible, so the theory will be f*cking falsified.

This doesn't mean that there is a contradiction in physics or mathematics. Instead, what the argument above means is that the right distribution that you get from a qualitative approximate sketch of the final theory has to be totally and radically different from any of the distributions that I mentioned above. So there must be some reason why an approximate sketch of the more complete theory has e.g. the standard deviation comparable to \(10^{-60}\). Or the distribution must contain a peak near zero, or many delta-function peaks near zero, or something else, so that the probability that you get a qualitatively tiny number like that will no longer be negligible.
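A toy illustration of what such a "radically different" distribution buys you – a narrow peak near zero mixed into an otherwise natural distribution (the weights and widths below are invented purely for the example):

```python
import math

def prob_tiny(eps, weight_peak, sigma_peak, sigma_broad):
    """P(|X| <= eps) for a two-component zero-mean normal mixture:
    a narrow spike near zero plus a broad O(1) component."""
    def within(s):
        return math.erf(eps / (s * math.sqrt(2)))
    return weight_peak * within(sigma_peak) + (1 - weight_peak) * within(sigma_broad)

# A broad O(1) distribution alone: hopeless.
print(prob_tiny(1e-60, weight_peak=0.0, sigma_peak=1e-60, sigma_broad=1.0))
# Add a narrow peak at zero (width ~1e-60) carrying just 10% of the weight,
# and the observed tiny value stops being a miracle:
print(prob_tiny(1e-60, weight_peak=0.1, sigma_peak=1e-60, sigma_broad=1.0))  # ~0.07
```

The sketch of the deeper theory is precisely what justifies the peak – without it, assuming the peak would itself be fine-tuning.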

This is what inflation does. It exponentially expands the Universe, diluting any initial curvature, so that space ends up almost exactly flat at the end.

Even for women who only care about their appearance, this thinking must be totally trivial. Imagine that woman A meets woman B and looks at her stomach. A has a random, fat belly but she sees that B has a flat stomach. The curvature of B's stomach is 0.000 plus or minus 0.005, if not plus or minus \(10^{-60}\). Woman A will think: How is it possible that B's stomach is this exactly flat? Well, A will offer a theory after a while: B has undergone a plastic surgery to make the stomach perfectly flat.



Ivan Mládek's musical version of the dialogue of these women, Ms Dáša Nováková and her pals.

A third woman, C, will immediately respond that A is just jealous of B's flat stomach. And you know, A is often jealous. And yes, she is arguably a fat obnoxious bitch, too. But in this case, she's obviously right. B has indeed undergone a plastic surgery and the unnaturally flat stomach is basically a proof. Natural women without any medical help are born with bellies whose curvature radius is comparable to the height of the woman. They are of the same order – roughly a meter. If someone's measured curvature radius is larger (i.e. the curvature is smaller) by several orders of magnitude, it proves that something unnatural has taken place.

It's only "unnatural" relative to the simplest theory. In the end, when you figure out some new qualitative idea of what has happened – e.g. the rough concept of the plastic surgery – the extreme flatness may become rather natural. After all, the plastic surgeon is a result of natural selection and Darwin's evolution – despite his precision, he has evolved from apes. But the point is that something big is missing in your calculation. Even before you find a hypothetical complete precise theory that exactly predicts the tiny curvature of B's belly, you must be capable of finding a qualitative sketch of the theory. This qualitative sketch doesn't produce the precise numerical value of the curvature. But it does explain why the curvature is so tiny. The complete theory is a relatively "minor" refinement of this qualitative sketch.

Now, I talked about fat and skinny women but the very same story applies to the Universe. The tiny value of the initial curvature means that the "minimal story" – claiming that this is how the Universe began, without any other big events – is simply wrong. Such a normal beginning of the Universe would predict a curvature of order one, and the probability that it would be 60 orders of magnitude smaller is basically zero. Theories that predict very tiny probabilities for effects that we actually observe are in trouble – this is how we exclude theories in physics, and in science generally. So there had better be something that says that the beginning of the Universe was not a "normal" structureless beginning. Something like the plastic surgery – namely the inflationary epoch – had to take place.
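The "plastic surgery" has a standard quantitative counterpart: during inflation, \(|\Omega-1|\sim 1/(aH)^2\) decays roughly like \(e^{-2N}\) over \(N\) e-folds (with \(H\) roughly constant), so one can estimate how much inflation is needed. A back-of-the-envelope sketch:

```python
import math

def efolds_to_flatten(suppression):
    """E-folds of inflation needed to shrink |Omega - 1| by the given factor,
    using the scaling |Omega - 1| ~ exp(-2N) with H roughly constant."""
    return math.log(suppression) / 2

# Flattening an O(1) initial curvature down to ~1e-60:
print(efolds_to_flatten(1e60))   # ~69, in line with the usual 60-70 e-folds
```

That ~60-70 e-folds is exactly the amount of inflation conventionally quoted as the minimum needed to solve the flatness problem.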
Ah, you might say, but clearly there are more numbers smaller than \(10^{197}\) than there are numbers smaller than \(10^{-60}\), so isn't that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?
She's just so breathtakingly stupid yet self-confident.

Whether the "number of numbers" in a continuum is finite or infinite has nothing to do with the existence of the problem. A problem for a hypothesis exists when it predicts a tiny probability for a proposition that is actually observed to hold. Science is all about f*cking probabilities, not about some counting of numbers in a set. And the probability that the Big Bang theory passes the test of the rough value of the flatness is essentially zero if there's nothing special before the Big Bang expansion, and of order 1 if there is an exponentially expanding epoch before the Big Bang cosmology. That's why the cosmology with the inflationary epoch passes the test while the inflation-less Big Bang fails it. What the hell is so difficult about this trivial thing?
And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.
A distribution on a theory's parameter space should actually be a part of every modern usable theory. It's pretty much the same statement as the statement that if you measure the value of a quantity, you should specify the error margin. If you don't specify the error margin, the information about the measured value – which will almost certainly not be exact if it is a real number – may be said to be useless.

A usable theory must make at least some modest statement about the error margins, and therefore about the distribution of its parameters. For example, QED, to be usable with its fine-structure constant of 1/137.036, must say at least that this number may be determined from the experiment XY and should be trusted at some precision – e.g. 15 decimal places. If you didn't know how ambitious QED is concerning the precision, you wouldn't know which tests are vital.

If the theory implicitly stated that the number 1/137.036 is exact, the theory would be immediately falsified because there are surely some deviations in the 16th significant figure. Conversely, if the theory implicitly stated that the fine-structure constant is 1/137.036 plus or minus 30%, then QED would only be a qualitative sketch of some phenomena and wouldn't encourage you to try to do high-precision tests. So the physicists using QED or any other theory simply must have some idea about the distribution of the parameters.
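The role of the implied error margin can be phrased as an ordinary significance test. In this sketch the measured value and all uncertainties are illustrative placeholders, not current experimental numbers:

```python
def tension(pred, pred_err, meas, meas_err):
    """Deviation between a prediction and a measurement, in standard deviations."""
    return abs(pred - meas) / (pred_err ** 2 + meas_err ** 2) ** 0.5

alpha_inv = 137.035999   # inverse fine-structure constant (rounded, illustrative)

# A theory claiming "exactly 137.036" (zero claimed error) is excluded by
# ~100 sigma once the measurement reaches the 1e-8 level...
print(tension(137.036, 0.0, alpha_inv, 1e-8))
# ...while one claiming "137.036 plus or minus 30%" passes trivially and
# therefore motivates no precision test at all:
print(tension(137.036, 0.3 * 137.036, alpha_inv, 1e-8))
```

Both extremes are useless; a meaningful theory sits in between, with a stated (even if fuzzy) precision.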

The distribution of the parameters may be measured experimentally if the theory is assumed to describe the experiments – and we get the usual experimental error margins, statistical and systematic errors, and so on. But the distribution of the parameters must be used by a theory as well. A theory by itself doesn't allow you to calculate the value of a parameter precisely, but you must always supplement it with some sketch of a more complete theory that does say something about the value of the parameters, even though this statement is fuzzy. Otherwise the theory would be absolutely vacuous.
If you have trouble seeing the trouble, let me ask the question differently. Suppose we’d manage to measure the curvature parameter today to a precision of 60 digits after the point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!
It's f*cking not the same at all. A generic parameter that is of order one isn't fine-tuned while a tiny one that is comparable to \(10^{-60}\) is fine-tuned.
Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits.
You really need to cancel 30 digits because the coefficient in the Lagrangian is the squared mass, not the mass itself.
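The 30-digit cancellation in the squared mass can be made tangible with arbitrary-precision arithmetic. Everything below is schematic – units of GeV², and the "correction" is an arbitrary Planck-scale value, not an actual loop computation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # enough digits to watch the cancellation happen

# Schematic: the physical Higgs mass squared is a bare term minus a quantum
# correction, both of order the Planck scale squared.
m_planck_sq = Decimal(10) ** 38                  # (~1e19 GeV)^2
correction  = m_planck_sq * Decimal("0.314159265358979323846264338327950288")
bare        = correction + Decimal(125) ** 2     # tuned by hand against it
m_higgs_sq  = bare - correction
print(m_higgs_sq)   # 15625, i.e. (125 GeV)^2 — it survives only because
                    # ~33 leading digits of two Planck-scale numbers cancel
```

With ordinary 64-bit floats (about 16 significant digits) the 125 GeV would vanish entirely in the rounding – which is the numerical face of the hierarchy problem.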
That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.
It's f*cking unlikely according to any natural distribution – as I explained at the top as well as 8 years ago. That is why someone's "demand" to see one particular distribution only shows her complete incompetence. There is no single preferred distribution in these discussions. The whole power of the naturalness reasoning boils down to the fact that it holds for all (or almost all) natural enough distributions.
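The robustness claim is easy to probe numerically. Floating-point sampling cannot even resolve a threshold of \(10^{-60}\), so this sketch uses the far more generous threshold \(10^{-10}\) – and still, none of these natural O(1) distributions ever lands there (the particular distributions and sample size are my arbitrary choices):

```python
import math
import random

def frac_below(sampler, eps=1e-10, n=200_000):
    """Fraction of samples whose magnitude falls below eps."""
    return sum(1 for _ in range(n) if abs(sampler()) < eps) / n

# A few "natural" O(1) distributions: uniform, normal, exponential, Cauchy.
samplers = {
    "uniform(-3, 3)": lambda: random.uniform(-3, 3),
    "normal(0, 1)":   lambda: random.gauss(0, 1),
    "exponential(1)": lambda: random.expovariate(1),
    "cauchy":         lambda: math.tan(math.pi * (random.random() - 0.5)),
}
for name, sample in samplers.items():
    print(name, frac_below(sample))   # 0.0 for every distribution tried
```

Swapping in any other distribution whose scale is of order one gives the same verdict, which is why the naturalness argument never depended on one preferred distribution.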
Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The problem is, once again, that some of the observed parameters are vastly smaller than one, and they're therefore unnatural. The number 1.1370982612166126 times the Planck mass would be absolutely natural and would lead to no fine-tuning problem or a conflict with naturalness. If one produces two random numbers of order one, it's unlikely that both of them will be exactly equal, or equal at the 16-digit precision. But this "two random numbers" exercise doesn't quantify whether there's fine-tuning.

The point is that the number 1.1370982612166126 doesn't violate any natural, qualitative property that would be extremely likely according to any natural distribution. By a natural, qualitative property, I mean a property that can be formulated without too many digits you would have to remember (a property that isn't fine-tuned by itself), but one that holds for a random number selected according to a natural distribution with a probability approaching one. On the other hand, \(1.1370982612166126\times 10^{-60}\) does violate a natural property. It violates the requirement that a parameter's absolute value should be greater than \(10^{-50}\).
Do my colleagues deliberately lie when they claim...
Nice tough words. They are not your colleagues, Ms Hossenfelder. They are physicists while you are just a f*cking stupid incompetent fraudster pretending that you know something about research-level physics. You're exactly the type of aggressive junk that has spread like locusts as a consequence of political correctness and affirmative action.

