Sunday, November 17, 2013

Naturalness and JFK conspiracy theories

Among the 89 episodes of the classic show Penn & Teller: Bullshit!, the 29th was dedicated to conspiracy theories, namely 9/11 trutherism, the Moon landing, and JFK conspiracy theories.

I recommend that you find all the episodes and watch them – that's 45 hours of intelligent fun!

Just to be sure, JFK was assassinated in Dallas on November 22nd, 1963; it will have been 50 years next Friday. The apparent sniper was Lee Harvey Oswald, an American commie (believed to be a "lone gunman") who loved Cuba and who had emigrated to the Soviet Union. Yesterday, CNN listed a dozen conspiracy theories about the assassination and suggested that one of them could be right, although I didn't quite understand which scenario they endorsed.

In their show, Penn and Teller have primarily been making fun of many kinds of nutcases. And as the number of episodes, 89, suggests, even the number of basic types of nuts is really, really large, and all of them have many subtypes as well as billions of human examples.

Equally importantly, they present actual evidence that the conspiracy theories (and other crazy beliefs discussed in other episodes) are wrong – mundane, plausible, or outright demonstrated explanations that easily defeat the contrived interpretations of the evidence used by the conspiracy theorists.

The show is insightful and entertaining but sometimes they discuss deeper points. Why do some people – in some cases people who are intelligent according to other benchmarks – love to believe such stuff?

A lady (12:12) proposes an explanation (see also a man at 23:10). People want to see "a big overriding story", a story with sufficiently far-reaching philosophical or moral implications, as an explanation of every big enough event. (It's possible that I am improving her quote a little bit but I won't claim the whole credit.) People want the explanations and the events that they explain to be commensurable or comparable in magnitude.

They just don't want to believe that something so grand as JFK, the most powerful man on the planet, or the World Trade Center could be terminated by something or someone as tiny, stinky, generic, and irrelevant as an angry Arab man or a mediocre American communist who preferred to read paperback trash over Marx's tirades.

(Even if some other commies were helping Oswald, e.g. some folks in the USSR, I wouldn't be stunned. I don't really care how many commies participated in a crime and I don't think that the Soviet commies were "qualitatively different" from some of their Western counterparts. If the USSR had participated, it would still have limited consequences for the relationship with today's Russia, which isn't responsible for everything that was ever done by a Russian national.)

But that's how the world often works. Many great people died of some infection, i.e. of petty, stupid microorganisms that were much less sophisticated than the humans they killed. And many other events or phenomena in Nature have seemingly mundane, low-key, disappointing (for a conspiracy theorist expecting a great story) explanations. The commensurability of the demolished buildings or terminated human lives with the stature of the killers isn't something implied by actual logic or the actual laws of physics and society. But some people incorrectly believe that this commensurability is a part of rational reasoning.

Because of our Friday and Saturday discussions on naturalness, especially with Giotis, I couldn't overlook the apparent similarity of the sentiment of the conspiracy theorists and those who take the naturalness arguments too seriously or strictly. Why are those attitudes similar?

Well, because the strict naturalness fans identify a pattern in Nature, and the lightness of the Higgs boson is the most important example, and they expect or demand some far-reaching, paradigm-shifting, philosophically deep explanation, perhaps one with huge moral consequences or at least consequences for the character of the future research. (I generally agree with almost everything that Nima Arkani-Hamed says about physics but yes, I am talking about him in this case a little bit, too, and at least our "accent" was very different when we debated these issues.)

But let me tell you something. Just like in the case of JFK, seemingly "clear patterns" may have convoluted or uninteresting explanations. I believe there's really no solid evidence that the explanation of why the Higgs mass is so much smaller than the GUT scale has to be a "grand idea". More precisely, the explanation for this hierarchy probably is a grand idea, namely supersymmetry, but what I wanted to say is that the explanation of why the superpartners are 10 times heavier than the Higgs boson doesn't have to be another "grand idea" anymore.

Don't get me wrong. I do use reasoning based on naturalness. After all, all reasoning in science is ultimately probabilistic. See e.g. Why naturalness should be expected for the most pro-naturalness perspective by your humble correspondent. However, what I do not believe is the idea that the probabilistic distributions on the spaces of parameters are the most important or most rock-solid considerations we have in science. I do not believe that similar references to naturalness have dictated or will dictate most of the insights of science. I don't believe such considerations have or should have the last word, either. There are much "harder", more reliable theoretical arguments, and I think that experimental evidence (if checked not to be flawed) always beats philosophical arguments such as those based on naturalness.

I am somewhat open-minded whether the "existence of life" (or something like that) could be used as a "part of the explanation" why the Higgs boson is so light – and why other features of the vacuum surrounding us have the qualitative properties we know, properties that seem necessary for life of our type. And this open-mindedness – again, I prefer explanations that are non-anthropic but I am not 100% certain that those will be found for every question – is something that isn't really changing qualitatively once the lower bound on the scale of new physics gets doubled, for example.

Supersymmetry seems to be the only major physics paradigm we know that is capable of explaining the apparently weakly self-interacting, moderately light Higgs boson. The cancellations resulting from SUSY guarantee that the expected residual Higgs boson mass is comparable to the mass of the top squark, higgsinos, and perhaps gauginos. Those may be below a \(\TeV\) or at several \(\TeV\)s etc. so the degree of fine-tuning of \(m_h^2\) (it's the squared mass that appears in the Lagrangian and that naturally gets "almost additive contributions") gets improved from \(1\) in \(10^{30}\) to \(1\) in \(100\) or \(1,000\) or so in the SUSY models that remain viable.
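The arithmetic behind these numbers can be sketched in a few lines. This is only a toy estimate: the superpartner mass values below are illustrative, and the quadratic scaling is the crudest possible measure of the tuning of \(m_h^2\).

```python
# Toy estimate of fine-tuning: m_h^2 receives additive corrections of
# order M_SUSY^2, so the residual tuning is roughly (M_SUSY / m_h)^2.
m_h = 125.0  # GeV, the observed Higgs boson mass

for m_susy_gev in (1000.0, 4000.0):  # illustrative superpartner scales
    tuning = (m_susy_gev / m_h) ** 2
    print(f"M_SUSY = {m_susy_gev / 1000:.0f} TeV -> tuned to ~1 part in {tuning:.0f}")
```

Superpartners at \(1\TeV\) give a tuning of roughly 1 in 64, and at \(4\TeV\) roughly 1 in 1,000, which is where the "1 in 100 or 1,000 or so" figures come from.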

But what does it "exactly" mean that the Higgs mass is predicted "not to be much smaller"? How much smaller may it be? Well, there is clearly no "exact" answer. It depends on how much tuning or fine-tuning you're ready to tolerate – effectively, how unlikely an event or selection you're ready to allow in the foundations of physics. I am perfectly OK with \(1\) in \(100\) and even \(1\) in \(1,000\). I believe that the number of questions in physics comparably important to the Higgs boson's lightness is comparable to 100, so it is totally normal to expect roughly one of these questions whose answer is apparently 1-in-100 fine-tuned. But such answers may exist even if the chances are a bit lower.
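The claim that one 1-in-100 coincidence among roughly 100 comparable questions is unsurprising can be checked directly. The counts are of course only order-of-magnitude guesses, and independence of the "questions" is assumed:

```python
# Probability that at least one of n independent "big questions" turns out
# to be fine-tuned at the level p (assuming independence).
p = 0.01   # a 1-in-100 tuning
n = 100    # rough number of comparably important questions in physics

p_at_least_one = 1 - (1 - p) ** n
print(f"chance of at least one such coincidence: {p_at_least_one:.0%}")
```

With these inputs the chance of at least one such coincidence is about 63%, i.e. more likely than not.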

It's important to notice that the degree of fine-tuning isn't necessarily a simple function of the mass ratios. Some models with new fields and interactions may reduce the amount of fine-tuning even if the mass ratios are much larger. For example, models with \(5\TeV\) Dirac gluinos may actually be highly natural. Because we don't know the field content and the list of interaction terms, we can't "calculate" the degree of fine-tuning with any precision.

But even if we could, the absence of new physics at the LHC (even at the \(13-14\TeV\) run) would still be a weak argument against naturalness. It wouldn't settle the question in one way or another. Why?

Imagine that the LHC establishes that there is no gluino etc. up to \(5\TeV\) sometime in the foreseeable future. Imagine that this means that \(m_h^2\) is fine-tuned to \(1\) part in \(1,000\). So the existence of the world as we know it, with the parameters we have measured, has depended on a "good luck" that only had the probability \(1/1,000\) to proceed in the right way. Is that unacceptable?

I don't think so. Well, I would kindly argue that because of the results that keep on agreeing with the Standard Model, the LHC has already excluded the idea that a \(1\) in \(10\) and perhaps \(1\) in \(100\) fine-tuning is "unacceptable". Even if you view this \(1/1,000\) fine-tuning of the squared mass as the probability, as a \(p\)-value, its magnitude is still \(1/1,000\). That's not extremely tiny. In fact, we commonly translate this \(p\)-value, using the maths of the normal distribution, to something slightly more than 3 standard deviations.
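The translation between a \(p\)-value and standard deviations can be checked with the inverse normal CDF; a one-sided convention is assumed in this sketch:

```python
from statistics import NormalDist

p = 1 / 1000  # the fine-tuning probability read as a p-value
z = NormalDist().inv_cdf(1 - p)  # one-sided number of standard deviations
print(f"p = {p} corresponds to {z:.2f} sigma")
```

The result is about 3.09 sigma, i.e. "slightly more than 3 standard deviations" (the two-sided convention would give about 3.29).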

Even if you view this absence of new particles near the Higgs mass scale as the evidence falsifying the "null hypothesis which is naturalness", and even if you ignore the aforementioned disclaimers that a modified particle content may render much heavier superpartners natural, the null hypothesis has only been contradicted by a 3-sigma bump or so! In the case of other 3-sigma bumps, we would say that it fails to reach the usual standard of particle physics for a discovery. We know why we use these standards: 3-sigma bumps may be and often are due to chance. They often go away.

For a normal proper discovery, particle physicists demand 5 sigma, which is equivalent to a \(p\)-value comparable to \(1\) part in \(1,000,000\). In the counting (or analogy) above, this would occur if the new particles (stop, higgsino etc.) responsible for the Higgs boson's lightness were roughly \(1,000\) times heavier than the Higgs boson, i.e. around \(100\TeV\). Only if you excluded superpartners up to \(100\TeV\) or so – something that even the SSC would have been incapable of achieving – could you claim to have the equivalent of 5-sigma evidence against the null hypothesis (naturalness).
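For completeness, the exact one-sided tail probability of a 5-sigma fluctuation is somewhat smaller than the round "one in a million" (closer to one in 3.5 million), though the order of magnitude is the same:

```python
from statistics import NormalDist

p_5sigma = NormalDist().cdf(-5.0)  # one-sided tail probability beyond 5 sigma
print(f"p = {p_5sigma:.2e}, i.e. about 1 in {1 / p_5sigma:,.0f}")
```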

Because naturalness is such a natural thing to believe, at least to a certain extent, I would argue that the claim that it is completely wrong is so extraordinary that we should demand extraordinary evidence i.e. an even higher confidence level than 5 standard deviations. And again, let me repeat that because some non-minimal adjustments to the physics may tolerate even larger gaps and keep them natural, the tolerable gap increases further.

If you summarize the arguments and views outlined above, it's very clear that I won't qualitatively change my mind about the "big questions" such as the "relevance of the counting of intelligent observers" even after the \(13-14\TeV\) LHC run, regardless of its results. The LHC may be expensive but from the viewpoint of "all the physics", it's just another minor step, an improvement of the energy scale by an order of magnitude. There are still approximately 15 orders of magnitude that separate us from the GUT or Planck scale.

So the reasons why superpartners are 10 times and perhaps 100 or 1,000 times heavier than the Higgs boson may be "a bit convoluted". The collection of reasons may be composed of some issues that are studied in some unknown papers today – or that are being completely overlooked. The neutron lifetime (roughly 15 minutes) is vastly longer than the lifetime you could naively expect – the nuclear time scale of around \(10^{-22}\,{\rm seconds}\). We sort of understand why today. But we couldn't have understood those things before the neutron's interior was sufficiently understood. Our order-of-magnitude estimate for the neutron's lifetime could have been wrong by 25 orders of magnitude if we were sufficiently naive.
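The neutron arithmetic can be checked directly; 880 seconds is the measured free-neutron mean lifetime, and \(10^{-22}\) seconds is the naive nuclear time scale used above:

```python
import math

tau_neutron = 880.0   # s, measured free-neutron lifetime (~15 minutes)
tau_nuclear = 1e-22   # s, naive characteristic nuclear time scale

gap = math.log10(tau_neutron / tau_nuclear)
print(f"the lifetime exceeds the naive estimate by ~{gap:.0f} orders of magnitude")
```

The gap indeed comes out to roughly 25 orders of magnitude.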

(Incidentally, would you say that with the hindsight we have today, the failure of dimensional analysis to estimate the neutron's lifetime – or, more physically, the unexpected length of the neutron's lifetime – was due to anthropic considerations? Is a long-lived neutron really needed for life etc.? I don't think we organize our explanations of the neutron's longevity in this way. In the same way, I don't think it's guaranteed that the explanation for the lightness of the Higgs believed in 2100 AD will employ some anthropic ideas. It's just not necessary even if the ideas about naturalness from a particular era are shown to be wrong.)

If someone has a particular idea of how (and how strictly) naturalness should work and this idea has just been falsified by the experiment, he shouldn't claim that he has everything he needs to say all the right things about naturalness in Nature. Instead, he should be more humble because he has just lost a battle with the experiments. You don't want to believe such a person if he tells you that he knows what the "only other alternative" must be. There are lots of possible alternatives. Only when the more complete theory is understood more fully will we understand why the superpartners (or whatever new particles exist) are \(X\) times heavier than the Higgs boson – much like we need some precision knowledge and arguments to understand why the neutron's decay rate is 25 orders of magnitude smaller than the most naive nuclear-physics estimates.

In the text above, I discussed the belief of the conspiracy theorists in the "commensurability" of the big events and patterns on one side and the big stories or far-reaching theories that explain them on the other side. A proper, hard-scientific reasoning just doesn't imply that this commensurability is a general law. This belief in commensurability is clearly not justifiable by solid mathematical or scientific evidence; it is partly ideological in character. I believe that this commensurability is intrinsically a left-wing belief, a form of ideological egalitarianism.

But there's one more aspect or interpretation of the egalitarian ideology that leads some people (and I really mean Nima in this case) to say that null results from the LHC high-energy run would be a great discovery (because they would falsify naturalness as a general tool – and would even perhaps prove the anthropic bullshitting). What is it? It's the implicit assumption that an experiment adds the same amount of information per unit time regardless of the results. I don't claim that this is really the reason why Nima says the things about the "two roads" that he does, but I do think that many other physicists implicitly want to impose this "quota".

But this "equivalence" is completely wrong. Of course the importance of an experiment depends on what it actually discovered – the importance of an experiment always partially depends on luck. If an experiment finds "nothing new" and only improves some lower bounds on masses or upper bounds on probabilities or interaction constants, it's naturally disappointing for the experimenters (and others).

It doesn't mean that we learn nothing from an experiment that continues to produce null results. We learn something. Every time the experimental bounds are improved, and even when some previous bounds are confirmed by a somewhat independent method, we learn something or at least become more confident about something. We may exclude some models and parts of the parameter spaces of other models, too. But the information we gain is far less groundbreaking than a positive discovery! That's just how it works. It is silly to deny it.

We don't know what the LHC will see in the \(13-14\TeV\) run. I still tend to bet that the likelihood is comparable to 50% that new physics will be discovered (it doesn't make sense to try to quantify such subjective probabilities more accurately than that because there's nothing objective or high-precision about Bayesian probabilities). But of course I find it conceivable that no new physics will be found. None was found in 2012, either (unless some not-yet-released paper stuns us).

It's my feeling that some people try to get a "verbal insurance" that would guarantee that regardless of what the LHC finds, it will be viewed as an important experiment. An equally important experiment. They want some ultimate hedge. But nothing like that exists because the importance of the LHC will clearly be greater if some new physics (aside from the Higgs boson that was already found) is discovered. It makes no sense to question this correlation between the importance and positive discoveries.

Of course the discovery of some new physics would open a completely new chapter in physics. It would be exciting. A continuation of the null results would move physics in the "opposite direction", so to speak, but this shift would be much smaller anyway. A continuation of negative results would really change nothing about the qualitative framework of physics. You may invent New Year's resolutions for yourself – that if nothing new is found before some artificial deadline, you will stop doing A and spend more time with B. But the fact that people may invent New Year's resolutions doesn't imply that the resolutions are good science, not even if the people are employed as scientists, not even if they're top scientists.

Even in the "most pro-naturalness" counting above, one in which I ignored the dependence of the "degree of fine-tuning" on the (unknown) BSM particle spectrum, it was argued that the absence of any new particles up to \(5\TeV\) would only be equivalent to a single "3-sigma bump" mildly contradicting naturalness. It's too little. If the LHC discovers new particles, on the other hand, it will rather quickly be able to pump those 5-sigma "positive bumps" up to 10 sigma and discover new, equally strong signals in other channels, and so on.

Positive discoveries at the LHC would bring us far more information and would be far more groundbreaking than the continuation of the null results. It's just wrong to invent ideologies and hype that would attempt to contradict these self-evident facts.

And that's the memo.

Bonus: naturalness vs renormalizability

A comment about the cutoffs by Giotis unmasked something in the "strict naturalness beliefs" that I consider not just "not sharply right" but, in fact, more wrong than right. Their proponents want to say that one should expect the cutoff scale to be "naturally" of the same order as the characteristic scale of the phenomena in your effective theory.

I would say that this question cannot have a universally valid answer but if I had to pick an answer, I would surely pick exactly the opposite one! On the contrary, it's natural to consider or demand theories that allow a vastly greater cutoff scale than the scales of their characteristic phenomena (e.g. masses of particles they predict). These theories are nothing else than the renormalizable theories! Renormalizable theories are those that allow us to set the cutoff scale vastly above the characteristic energy scale.

In my opinion, there is formidable evidence, both of the "aesthetic" and the empirical kind, in favor of the dominance of renormalizable theories. Whenever we were living in a jungle of chaotic, seemingly strongly coupled phenomena – e.g. the chaotic zoo of hadrons in the 1960s – it was just a temporary situation that would soon be replaced by a renormalizable theory – QCD with quarks, or a weakly coupled elementary Higgs scalar field. And renormalizable theories may be extrapolated to much higher cutoffs. (If they're just perturbatively renormalizable, like the electroweak theory, they may be extended up to an exponentially high cutoff scale near the Landau pole.)

The actual accumulated empirical evidence in favor of the proclamation "renormalizable theories (= theories that allow the extrapolation to vastly higher energies) are more natural to expect than non-renormalizable ones" is much stronger, I believe, than the evidence for naturalness in the sense of "everything is of the same order"! Hadrons and the electroweak symmetry breaking didn't have to admit renormalizable descriptions, and many people actually expected the right explanation to be some strongly coupled mess. But the right explanation was renormalizable in the end, it seems. For many questions, these two beliefs (naturalness vs. renormalizability) almost directly contradict one another.

Of course we may get to another scale of new physics which will look like a "strongly coupled chaotic zoo" to us for a while. (The string scale or the Planck scale make such an impression inevitable.) But once the dust settles, the resulting winning theory will be able to make big leaps to higher energies again. In the case of perturbative string theory, once we get past the initial floors of the Hagedorn tower and their inner organization, we will be able to extrapolate the theory to "all energies comparable to the string scale", which may mean up to the Planck scale – another multiplicative gap of order \(1/g_s\) or \(1/g_s^2\) or another power.

There's no reason to expect "lots of physics at every scale". This would be a sort of fine-tuning, too. Gaps are bound to occur and if we look at the energy scales involved in the Standard Model (and its effective theories at even lower energies), we know that they do occur. We empirically know that they exist. So at most, I would be ready to adopt a more balanced yin-and-yang philosophy. Everything-at-the-same-scale mushy reasoning linked to the dogmatic naturalness has to co-exist with the boldly-extrapolate-your-theories-as-far-as-you-can paradigm favoring renormalizable field theories and favoring the values of parameters that actually do create such deserts.

The final theory surely must allow the existence of gaps and dimensionless numbers that are "substantially" different from one because we know with certainty that those occur in Nature. So I would surely say that those who decide to believe that "everything must be of the same order" are making an empirically indefensible assumption about Nature. And if they "derive" this philosophy from the effective field theory framework, they're using the framework beyond its domain of validity to derive a skewed assumption that the full theory simply cannot back up. Only the full theory (and I don't have to provoke anyone with the phrase "string theory" even though I believe it's the same thing because none of these claims of mine depends on its "stringiness" in any technical way) may decide where the whole framework of "effective field theory" breaks down – and be sure that it does break down somewhere.

Any particular effective field theory is OK to study the "effective phenomena" and knows about the limits where this particular effective field theory ceases to hold. But it doesn't know about the place where all effective field theories cease to hold!


snail feedback (36):

reader Haelfix said...

There is also a trial factor or a look-elsewhere effect that frequently goes unmentioned. If you assume a uniform prior distribution over the set of real numbers to be a measure of naturalness, you need to know how many trials (or degrees of freedom) there are that can take values for your observation. So for instance, in supersymmetry, where you might have a large number of degrees of freedom and/or theories with a lot of bound states, it is perhaps not so unreasonable to expect one or more of those to take on a naively unnatural value.

One of the subtleties of this game, then, is how to properly use Bayesian reasoning and how to put a measure on the 'theory space' where the degree-of-freedom counting takes place.

As an aside, for the hierarchy problem, pure naturalness arguments like the above are less convincing to me than what I would call the issue of UV sensitivity – the conspiracy between unrelated physical quantities in any putative theory that might actually explain the Higgs mass from first principles. It's just very difficult to think of a simple effective field theory at high energies that would spit out such relationships.

reader W.A. Zajc said...

I like the idea of relating the 5 sigma discovery criterion used in HEP to establishing a “natural” scale for evaluating claims of naturalness. Curiously, in English the phrase “one in a million” is the most “natural” expression of something that is understood to be rare but actually occurred.

Since the SSC was mentioned in this post, I'll tie that to the other topic of conspiracy theories. In 1992 I attended ICHEP (the International Conference on High Energy Physics) in Dallas, TX. The conference was being held in Dallas to celebrate the SSC, then under construction in nearby Waxahachie. I arrived late in the evening. Opening the drapes in my hotel room the next morning, I was shocked to see that it overlooked the Texas School Book Depository, the building from which Lee Harvey Oswald fired the shots that killed JFK. Many of the conspiracy theorists start from the low probability (unnaturalness) of being able to strike a moving target from the 'great' distance of the book depository. I of course walked over to the plaza at some point, and was surprised to see how close it all was. I fired rifles enough as a kid to know that the shot was makable, particularly for a trained marksman. My Bayesian prior on all conspiracy theories is to discount them, for another reason – the unlikelihood of maintaining complete secrecy over a long period of time. This of course has to be conditioned on the length of time and the nature of the particular society in question, but it's a good place to start. Seeing the actual site of the JFK assassination just reinforced this belief.

I am not completely doctrinaire on this - another trigger for the conspiracy theorists is the all too convenient murder of Oswald by Jack Ruby, someone with known ties to organized crime. But again, the mundane may provide the more probable explanation. It is likely Ruby knew he was terminally ill, and simply decided to act on the impulse that so many felt.

None of this discounts the profound impact of JFK's death on those of us old enough to remember it. I was in grade school, and vividly remember the announcement coming over the public address system. They were feeding the CBS news broadcast through it, so we heard the news from the most authoritative broadcaster of that era, Walter Cronkite.

Still stunning 50 years later.

reader Uncle Al said...

theory is arbitrarily asymptotic to its data interval but has zero predictive power forward and backward. Economics' publication of theory is an unlimited gush of wonder independent of empirical standards. Pray that physics is not Chile's Augusto Pinochet embracing the "boys from Chicago" (elegantly modeling Gran Sasso's "superluminal" muon neutrinos). There is a(n MSSM) more intensely parameterized macroeconomic model (Pol Pot), a larger one (USSR), and a bolder one (Obamacare). They needed bigger budgets and higher energies to succeed.

reader cynholt said...

Penn and Teller are just two well placed cogs in the mass media disinformation machine determined to shove the Warren Commission Report down our throats. All the cable shows are pro-commission and use the 'lone gunman' theory to explain JFK's assassination. Occam's razor is useful as a guide, but it falls far short of being right most of the time. I have not seen one TV program this time attacking the Report. All we hear about is Oswald, Oswald, Oswald. Yet up to 70% of the public does not believe in this report. We are supposed to live in a democracy where the majority rules, but not in the case of JFK's assassination. The Warren Commission is complete fiction, from end to end of its 26 books.

I believe Kennedy was our last real president. After him, we are ruled by the military-industrial complex, who choose the president.

Just Google up Robert Dallek's recent article in the Atlantic, "JFK vs. the Generals," and you'll see that JFK was constantly hounded by the military to go to war. The generals seriously proposed nuking the USSR, China, and Vietnam. Had JFK not been President, it might well have happened. And Dallek, a respected historian, is no real fan of Kennedy. But as he and others have learned from declassified files, Kennedy was the only barrier we had to truly insane militarism. He was going to withdraw the troops from Vietnam, as well as dismantle the CIA. It's not hard to work out why he was killed or who killed him. Who benefits? That's all you need to ask.

I submit to you that JFK paid for his bullheadedness with his life.

reader Gene Day said...

In our everyday lives we experience improbable events. I can think of more than one occurrence in my life that had a probability of roughly one in a million. Such events are infrequent but they do happen.
I agree completely with Lubos.

reader cynholt said...

Focusing on the physics is all well and good, Uncle Al, but I'm cautious about using the JFK research as a model for proving a case against the official story. While the magic bullet theory (and along with it the entire notion that Lee Harvey Oswald acted alone) might be thoroughly discredited at this point, this in and of itself hasn't done anything to bring justice in the case, or even to establish a commonly agreed upon counter-theory to the official story. Personally, I'm convinced that it was the CIA who killed Kennedy, but as far as I can tell the alternative conspiracy theories about the mafia or Castro are just as widely held.

I wouldn't be surprised if a similar dynamic started playing out with 9/11. The more the physics of the official story is disproved, the more you will start hearing people say, "OK, there might have been pre-placed explosives, and there was probably a cover-up, but that doesn't mean it was an inside job." You'll probably start hearing theories that the government covered up the controlled-demolition aspect "because they were embarrassed" that al-Qaeda could pull such a thing off, or because a foreign government was involved that they wanted to avoid war with (Saudi Arabia perhaps). Once the physics is established, the government apologists will devise new ways of attacking the "truthers."

reader Luboš Motl said...

Dear Bill, thanks for your feedback! We use "one in a million" ("jeden z milionu"), too. This isn't necessarily a contradiction with the particle physics' claim that "one in a million doesn't normally happen" because the look-elsewhere correction makes it more likely for "one in a million" to occasionally occur. So I think that the English "one in a million" is without the actual look-elsewhere reduction of the significance. ;-)

reader Luboš Motl said...

LOL, Cynthia, you bring us a refreshing reminder that the core of the conspiracy theorists isn't divided by any artificial left-right ideological Iron Curtain. ;-)

reader Luboš Motl said...

Thanks, Gene, it was really one of my main points.

Just to be sure, the evidence of "luck" behind the lightness of the Higgs in a SUSY scenario is nowhere near 1-in-a-million so far – it's at most 1 in 100 or perhaps 1 in 1,000. Because the number of questions of an importance comparable to the Higgs lightness that we have seen in the history of physics is arguably comparable to 100 or 1,000, it wouldn't be shocking if one such question – which happens to be the Higgs – turned out to depend on a 1-in-100 or 1-in-1,000 "good luck".

reader cynholt said...

JFK's assassination has too many links to the CIA, Lubos, too many instances of the government destroying or altering evidence, to regard the totality of the Oswald theory as anything other than a coverup. If there was a coverup, there was a conspiracy. You can conjecture exactly what happened, who might have been involved, and what the many motives may have been, but you have to willfully turn a blind eye to what happened to accept the government story.

Lee Harvey Oswald said he was a patsy and then he was murdered.

I believe there was a coup d'etat on November 22, 1963. The succeeding events show a marked change of course from the Kennedy administration's, one that escalated the cold war.

As far as 9/11 is concerned, I believe it might be a clear case of the "Shock Doctrine." I believe
the government chose to ignore the intelligence to provide a reason to
implement their militaristic aims in Iraq and the Middle East, knowing
that the public would not be supportive of unprovoked war on Iraq.

While I doubt it was a conspiracy, I do find the evidence for controlled demolition pretty compelling. A pancake collapse of a 110-story building takes 96 seconds, not 10. Those buildings came down at free-fall speed, which means they were blown up. Look at pictures of pancake collapses: they don't look anything like the moonscape you see at "Ground Zero." All objects, regardless of their mass, fall with the same acceleration - about 10 m/s². Do the math. None of them were pancake collapses. A building only falls through air at free-fall speed if explosives are used.

Whether this theory is valid or
not, it still points to the fact that a lot of Americans have lost faith
in our public institutions.
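For what it's worth, the "do the math" step in the comment above is easy to reproduce; a minimal sketch of the free-fall kinematics only (the tower height of roughly 417 m is an assumed figure, and this checks nothing beyond the arithmetic):

```python
import math

# Free-fall time from rest through height h, ignoring air resistance:
#   t = sqrt(2 * h / g)
g = 9.81     # m/s^2, standard gravity (the comment's "10 m/s^2")
h = 417.0    # m, approximate roof height of the towers (assumed)
t = math.sqrt(2 * h / g)
print(round(t, 1))   # about 9.2 seconds
```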

reader Shannon said...

The NSA's scale of spying would have remained a conspiracy theory had Snowden not revealed the truth, don't you think?

reader Mark Thomas said...

Sometimes the difference between natural and unnatural can be very, very small: exp(pi sqrt 163) = 262537412640768743.99999999999925... Martin Gardner's hoax was a conspiracy of sorts, in which he claimed that this number is an integer. Some computer experts then thought that floating-point error was corrupting the calculation, and the joke was on part of the community. Had this number been an integer, it would have been unnatural. The transcendental number 262537412640768743.99999999999925... is a very natural number, and its closeness to an integer is no coincidence: it is explained through complex multiplication and the q-expansion of the j-invariant.
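The near-integer can be checked without any floating-point doubts by redoing the arithmetic in decimal with extra digits; a sketch using only Python's standard library (Machin's formula for pi is just one convenient choice here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # work with 60 significant digits

def arctan_inv(x):
    # arctan(1/x) = 1/x - 1/(3 x^3) + 1/(5 x^5) - ...  (integer x > 1)
    power = Decimal(1) / x
    total = power
    k = 0
    while power > Decimal(10) ** -65:
        k += 1
        power /= x * x
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term
    return total

# Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239)
pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)

ramanujan = (pi * Decimal(163).sqrt()).exp()
print(ramanujan)           # 262537412640768743.99999999999925...
print(640320 ** 3 + 744)   # = 262537412640768744, larger by ~7.5e-13
```

The last line is the standard explanation of "which integer" it almost is: 640320³ + 744 comes straight from the q-expansion of the j-invariant mentioned in the comment.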

reader Giotis said...

This is indeed a very nice summary of the different ideas on naturalness, although I admit I have my objections to the bonus chapter.

Renormalizability represents the old picture, before Wilson and his EFT approach. After Wilson, physicists know that non-renormalizability is not really an issue; every field theory is an effective one, applicable up to a certain scale, and thus there is a cut-off to protect you; you don’t have to extrapolate to infinity.

Renormalizability just means that the low-energy EFT depends on the UV one essentially only via a small number of relevant or marginal terms, but you can always add non-renormalizable irrelevant terms to your action if these are allowed by symmetry. The point is to know the domain of validity of your theory, and non-renormalizable terms and naturalness can guide you. You just take the effective dimensionless coupling and demand that it be of order one; this will give you pretty much the cut-off, which more or less is the characteristic energy scale.

This picture, as I understand it, is so “natural” and beautiful in my mind that, to tell you the truth, I don’t like to distort it…
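Giotis's "demand that the effective dimensionless coupling be of order one" recipe has a textbook illustration in the Fermi theory of weak interactions; a minimal sketch (the numbers are the standard ones, and the inference is only order-of-magnitude):

```python
# The four-fermion (Fermi) operator is irrelevant: its coupling G_F has
# mass dimension -2, so the dimensionless coupling at energy E is
# roughly G_F * E**2.  Demanding that it be of order one estimates the
# cut-off of the effective theory.
G_F = 1.166e-5          # Fermi constant, in GeV^-2
cutoff = G_F ** -0.5    # energy where G_F * E**2 ~ 1
print(round(cutoff))    # ~293 GeV: indeed the electroweak scale
```

The estimate lands near the W/Z/Higgs masses, which is exactly the "characteristic energy scale" reading of the recipe.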

reader Luboš Motl said...

LOL, not quite an integer. This 163-linked issue was discussed e.g. at

reader Luboš Motl said...

Dear Giotis,

I feel that your (and not just your, of course, many physicists say it) comments that "renormalizability isn't an issue" are ethical in essence, and I don't quite share this kind of ethics.

One may indeed say that Wilson's approach allows us to treat renormalizable and nonrenormalizable theories "by the same tools", together, I agree with that. But I disagree with the claim that it should mean that we should treat both of them as equally likeable.

The empirical evidence is that renormalizable theories are very important in Nature. This is, via the Wilson dictionary, *equivalent* to saying that it's a good idea to search for (initially effective, then not just effective) field theories that may be extrapolated to vastly higher energy scales without introducing new physics or parameters. Don't you agree this is a legitimate conclusion we can make from the success of the Standard Model?

Best wishes

reader Giotis said...

Well, yes, OK… Trying to find out whether the continuum limit of a theory exists is always a good thing to do and teaches you a lot.

Nevertheless I don’t really have a problem with the fact that QED, although renormalizable, has a Landau pole; I know it’s just an effective theory. Similarly, QCD, despite the fact that it has a UV fixed point and is thus UV-complete, is still an effective field theory.

reader Gene Day said...

Of course, Lubos, 1 in 100 or 1 in 1000 things happen every day and one should not attribute anything special to them.

reader kashyap vasavada said...

This is a very interesting discussion between Lubos and Giotis. Question: I understand the electroweak theory has been proved to be renormalizable by 't Hooft, enough to get a Nobel Prize. Would you put this as an effective theory valid only up to a certain energy, or up to infinity?

reader Dilaton said...

Dear Lumo, sorry for the off-topic to this nice TRF article but this is annoying:

I always thought that "Spektrum der Wissenschaft", the German analogue of "Scientific American" (which you can really throw into the trash bin as far as fundamental physics topics are concerned), is slightly better, but now they adopt exactly the same low-level trolling attitudes. My friend and colleague has just sent me an article (unfortunately written in German) inflating this experimental atomic physics paper

about measurements of the electric dipole moment into a large, annoying, unjustified, low-level hype (they explain absolutely nothing from a physics point of view),

claiming that the new limits on the electric dipole moment of the electron exclude supersymmetry, throw all of beyond-the-Standard-Model physics out of the window, bla bla bla etc..., you know what I mean: today's "science journalism" at its worst ... :-(. I can tell you that I usually don't read such stuff, and my colleague knows this, so I'll have to have a serious word with him tomorrow ...

Why are (some or all) supersymmetric models expected to feature an electric dipole moment of the electron in the first place (if they really do)?

I am 100% sure that the paper is largely overhyped ...

reader kashyap vasavada said...

Interesting debate between Lubos and Giotis. I have a simple question. 't Hooft proved that the electroweak theory is renormalizable and got the Nobel Prize. Is this theory renormalizable in the effective-field-theory sense up to a certain energy, or up to infinity?

reader Eugene S said...

Who benefits? That's all you need to ask.
Oh, that's brilliant. So, every time someone dies in a fatal car crash on the speedway, it was not an accident but murder. At least this is so if he had life insurance and the wife collects on the policy.

reader Werdna said...

I'm not sure what you mean, Lubos. I remember when Cynthia was a hard-core left-wing communist. I vaguely recall her changing some of her ideas, but not in a very meaningful way. Do you mean this illustrates that conspiracy theorists aren't generically right-wing? Of course not. Conspiracy theorists range from generically left-wing to so politically muddled in their thoughts that it isn't worth classifying them.

reader anna v said...

I want to comment on the non-appearance, as yet, of a supersymmetric candidate at the LHC.

Back when the quark model was forming out of the chaotic hadron resonance population, and the GIM mechanism based on symmetries was proposed, there existed an elegant e+e- machine in Frascati I think, whose energy stopped at 1.5 GeV, just below the predicted PSI from the GIM mechanism. Null results on the prediction.

So, patience, children, patience. All is not lost :)

reader Luboš Motl said...

Dear Werdna, I meant something more than that - namely that Cynthia's conspiracy theory doesn't look intrinsically left-wing or right-wing and she is even explicitly embracing a fraction of the population (70%) much of which simply has to vote for right-wing politicians, to say the least. I agree that many of them are actually heavily confused leftists ;-) but there's some sense - other contexts - in which they may be called right-wing.

reader Luboš Motl said...

Dear Dilaton, the German coverage is annoying but I've seen so much of this junk in recent years that I no longer care.

Of course, if a class of theories has the *potential* to make a quantity nonzero, it doesn't imply that the quantity has to be large, or that the theories lack the potential to make it small. It's really the same discussion as the one in this article.

reader Luboš Motl said...

Dear Kashyap, the word "renormalizable" [theory] is a word that existed before the Renormalization Group so it is *not* organized according to scales. Your phrase "renormalizable up to XY" really combines two different terminologies, the old renormalization and the Renormalization Group, inconsistently.

The Standard Model is perturbatively renormalizable, period - it means that the infinities cancel at every order of the perturbative expansion. This implicitly means that to this precision - organization according to powers of the coupling constant - they just cancel to all orders.

However, because the U(1) factor in the Standard Model is getting more strongly coupled, there is actually a scale of order exp(K/g^2)*M_{electroweak} - an exponentially far one - where the theory gets inconsistent.

The QCD part has a coupling that is getting weaker at higher energies, so it's renormalizable and consistent even non-perturbatively - it doesn't have to break down at any arbitrarily high energy. In the real world, it breaks down anyway because QCD isn't the whole story (there's also quantum gravity etc.) even though it *could* be extrapolated.
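The "exponentially far scale" exp(K/g²)·M for the U(1) factor can be estimated at one loop; a toy sketch for pure QED with a single electron flavor (the Standard Model's actual hypercharge coefficient differs, so this only shows the shape of the estimate):

```python
import math

# One-loop QED running with one charged lepton:
#   1/alpha(mu) = 1/alpha(m) - (2 / (3*pi)) * ln(mu / m)
# The Landau pole is where 1/alpha hits zero, i.e.
#   ln(Lambda / m) = (3*pi / 2) / alpha(m)   -- the exp(K/g^2) scaling.
alpha = 1 / 137.036                        # low-energy fine-structure constant
log_lambda_over_m = (3 * math.pi / 2) / alpha
print(round(log_lambda_over_m))            # ~646, i.e. Lambda ~ m_e * e^646
```

Since e^646 of an electron mass is vastly beyond the Planck scale, the inconsistency is of no practical concern, which is the point of calling it "exponentially far".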

reader Shannon said...

Gene, you make me want to play the lottery again ;-)

reader Eugene S said...

100% off-topic but this is pretty funny / sad / funny.

And speaking as a conference interpreter myself, yes, it happens. We have a little control panel in the booth that includes a MUTE button prominently marked. Comes in handy when you want to clear your throat or ask your colleague for assistance. And there's also a little red light at the top of the microphone to tell you it's switched on.

On (fortunately very rare) occasion, two wires in your brain short-circuit and make you think you pushed the button when you haven't. Hugely embarrassing, but once said and heard, there's nothing you can do to take back those words.

I don't know what kind of job security UN interpreters have, but I don't think she will get fired. Unless Mullah Omar al-Sheitan al-Arabi is in charge of interpreting services, which knowing the UN is not out of the question.

reader Peter F. said...

A strong, generally suspicious attitude may well serve a common good in ways similar to how a genetically endowed and heritable anxious attentiveness (~= awareness and jumpiness) served survival in the phylogeny of brainy fauna.

I imagine it can do so by keeping people extra vigilant and on guard against the always present possibility of oppression orchestrated from democratically appointed or otherwise established economical/political/technological/theological positions of power - positions easily misused by the people or “overlords” that hold them.

reader kashyap vasavada said...

Thanks. Nice explanation. So if I understand correctly, neglecting neutrino oscillations and, of course, future unforeseen discoveries, the SM has no renormalization problems.

reader Casper said...

The Penn & Teller shows are well named. They're bullshit.

reader Casper said...

It's hard to know what Lubos is babbling on about here. Is he saying that JFK conspiracy theories have something to do with particle physics? Perhaps he should join forces with Professor Loondowsky and write a joint paper. Loondowsky, by all accounts, could do with some mathematical expertise, which I'm sure Lubos could provide.

reader Luboš Motl said...

What can you possibly dislike about them? I would sign every sentence in these 89 episodes, as far as I remember.

reader Luboš Motl said...

Unlike Lewandowsky, I am not saying that the particle physicists obsessed with naturalness also believe the specific everyday-life conspiracy theories like JFK. They don't. But that doesn't prevent me from pointing out similarities in the ways of thinking about certain issues.

reader Casper said...

Okay, on further reading, the mystery of the article has resolved itself. I can see now what it is on about. Lubos has taken on the problem of the JFK assassination with the approach of a classically trained physicist. Using the sharp mathematical tool of "naturalness", he has reduced this complex political issue to simple general principles.

What 'naturalness' actually is, is beyond my capability to understand, but I'm sure it will bring positive and proven results. It appears to be a highly novel and innovative approach. Well done, Lubos.

reader Casper said...

Yes, it's true. Penn & Teller are intellectual giants.