You know that I like to use the term "crackpot" but I believe that I have actually learned the word from quite some serious, uncontroversial people who avoid expletives at all times: as far as I know, Dr Jiří Langer, an important physics instructor at my undergraduate Alma Mater, was the one who introduced us to the term "crackpots".
For years, the canonical crackpots – compatible with this flavor of the terminology – were haters of Einstein's theory of relativity. Once upon a time, one of them – the author of the far-reaching ;-) book above – came to our department in Prague, along with his buddy, the chairman of Mensa Czechoslovakia, and wanted to persuade the physicists that he had disproven the theory of relativity and, ideally, the professors should nominate him for the Nobel Prize in physics.
OK, he showed some wrong solution to an idiosyncratic version of the twin paradox, and I was apparently the only one who knew exactly what was wrong with his reasoning; I explained the flaw to everybody, including the senior physicists. At that time, I had already accumulated quite some experience in interactions with crackpots.
Incidentally, his real (Slovak) name is Artúr Bolčo – which rhymes with lečo rather than Einstein. At any rate, I found it amazing how someone who is so stupid can be so self-confident at the same time – to publish books such as "How I Outsmarted Einstein".
Incidentally, Bolstein was accompanied by a buddy who was careful to frame himself as a "guy in between" – he didn't have the courage to openly agree with Bolstein at that time – a guy who thinks that he is a genius because he has trained himself how to solve the IQ tests. In reality, as I found out later, that buddy was about as stupid as Bolstein himself, and they misunderstood relativity equally completely.
IQ tests include questions such as this one:
Needless to say, you're supposed to spot, decompose, and extrapolate patterns. If I get it well ;-), in the picture above, you should notice that the top colors alternate as blue-brown-green, blue-brown-green etc., so the top should be brown, while the lower colors are red-yellow, red-yellow, so you should have red, and the answer "3 of 4" is OK. Do you agree?
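For fun, this kind of pattern extrapolation is trivial to automate. A toy sketch – the two color rows of the (hypothetical) picture are my assumption, chosen so that the answer matches the brown-and-red reading above:

```python
# Toy IQ-test pattern extrapolation (hypothetical sketch; the color
# sequences below are assumed to match the missing picture).

def smallest_period(seq):
    """Return the smallest p such that seq is consistent with period p."""
    for p in range(1, len(seq) + 1):
        if all(seq[i] == seq[i % p] for i in range(len(seq))):
            return p
    return len(seq)

def predict_next(seq):
    """Extrapolate the next element, assuming the smallest consistent period."""
    return seq[len(seq) % smallest_period(seq)]

top = ["blue", "brown", "green", "blue"]      # period 3
bottom = ["red", "yellow", "red", "yellow"]   # period 2

print(predict_next(top))     # → brown
print(predict_next(bottom))  # → red
```

The "smallest consistent period" rule is exactly the kind of Occam-like bias the test assumes you share with its authors.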
It's rather intellectually limited, repetitive, and it's obvious that by learning how to solve this particular kind of problem well, you can't prove that your intelligence exceeds some amazing threshold such as 150 or 180. More realistically, you may become increasingly certain that your IQ is above 90 or so if you can learn to deal with similar problems, because people below 90 or so can't be taught to do such things well. "Learning how to solve the problems" is really a type of cheating, and the IQ tests are only meaningful tools for comparisons if the tested people were equally "untrained" to start with. Then you may surely extract some information, at least statistical information about the average IQ and standard deviations in large enough groups of people.
Ideally, the IQ tests should measure the innate intelligence only – so you shouldn't even be trained in any analogous problems. But that condition is hard to fulfill in practice so the IQ tests always quantify some combination of innate intelligence and the intellectual culture of the person's environment.
While the IQ tests are not terribly smart, it's clear that the people who can't deal with patterns at all are stupider still. And those who spend years if not decades thinking about matters related to relativity – but who still determine that relativity or its consequences such as locality must be fundamentally wrong – simply are very stupid. Einstein wasn't the smartest man in history but he was great – he had to discover the theory before everyone else did. Those who can't even understand this completed theory, not even after many years of efforts, are obviously several levels beneath him.
To understand why special relativity has to be right is an example of the IQ-test-like pattern recognition, too. Take a simple statement that follows from relativity: "You can't send information faster than light". Is that right? Is that wrong? Can we completely ignore the principle?
An intelligent person who tries to evaluate this statement looks at situations in which one tries to maximize the speed of something. Look at the particle accelerator. It just seems to be true that almost all the particles in the LHC tunnel travel at almost exactly the speed of light (99.9999% of it). That was the case for the electrons and positrons at LEP, which used to sit in the same tunnel – and it's true for the protons and lead nuclei, too. Light travels at the same speed. The international bodies have redefined the meter so that the speed of light has been exactly 299,792,458 m/s since 1983.
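You may check the "99.9999% of it" figure yourself from \(E=\gamma m c^2\). A minimal sketch, using the Run 2 beam energy of 6.5 TeV per proton:

```python
import math

# How close to c a 6.5 TeV LHC proton travels.
# From E = gamma * m * c^2, we get beta = sqrt(1 - (m c^2 / E)^2).
m_proton = 0.938272  # GeV, rest energy of the proton
E_beam = 6500.0      # GeV, one proton in an LHC Run 2 beam

gamma = E_beam / m_proton
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(f"gamma = {gamma:.0f}")        # ~6928
print(f"1 - beta = {1 - beta:.1e}")  # ~1e-8, i.e. 99.999999% of c
```

Eight nines, in fact – even better than the figure quoted above.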
Can you learn something from these facts? I surely hope so. This agreement is extremely strong evidence in favor of the statement that the speed of light is really the maximum speed that any kind of particle – and field disturbances are ultimately equivalent to particles as well – may achieve. You can't send information faster than c; if it were possible, it would probably be rather easy to see some hint of such a superluminal propagation of information.
If the constancy of the maximum speed of anything weren't an important principle, surely the different particles – and their bound states such as the lead nuclei – would have different maximum speeds they may achieve, like different car models have different maximum speeds. It would be virtually 100% certain that the maximum speeds would differ if there weren't a good explanation why they're the same. The previous sentence uses the reasoning of Gell-Mann's totalitarian principle: "Everything that is not forbidden is mandatory." If there are no laws or deep reasons why cars' maximum speeds are equal to those of other cars, the probability that they will be equal is basically zero.
Gell-Mann's reasoning is an example of Bayesian inference – and so are the arguments revolving around naturalness. If the qualitative traits of a theory imply that a number will be a boring random number in some interval, it probably will be boring and won't be special (or extremely close to a special number such as zero). Even the modest 4-box alternation of red-yellow-red-yellow is supposed to be a source of knowledge for you if you solve the IQ tests – the patterns in Nature that lead us to believe in important principles of physics are vastly stronger than the colors of 4 boxes.
Instead, by assuming the two postulates (equal form of laws of physics in all inertial frames; and the constancy of the speed of light), relativity correctly explains this fact and all related facts, tells us exactly how to describe the same phenomena from different inertial frames' perspectives, and how objects become heavier – and therefore harder to accelerate – as their speed approaches the speed of light (and lots of other things). The assumptions of relativity are highly constraining, but their consequences nontrivially agree with many experiments – indeed, with all experiments – so that's huge evidence that the assumptions are actually correct and important! That's how good science works. Science looks for nontrivial laws and principles; most generic ideas of this kind don't work, but some of them nontrivially do, and those become important, are studied more closely, and get generalized. Someone who decides to completely ignore these deduced principles of relativity is equivalent to a person who tries to solve the IQ test but who is just totally incapable of seeing any patterns and regularities. Ever.
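The "heavier and harder to accelerate" claim is quantified by the Lorentz factor \(\gamma = 1/\sqrt{1-v^2/c^2}\), which multiplies the rest energy. A minimal sketch:

```python
import math

# The Lorentz factor gamma = 1/sqrt(1 - v^2/c^2): objects become
# effectively heavier (E = gamma * m * c^2) as v approaches c,
# which is why no finite push ever reaches the speed of light.
def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

for beta in (0.1, 0.9, 0.99, 0.9999):
    print(f"v = {beta}c  ->  gamma = {gamma(beta):.2f}")
```

The factor diverges as \(v \to c\), so each extra "nine" in the speed costs ever more energy – exactly the pattern the accelerators confirm.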
You simply can't do any meaningful science – or at least not theoretical physics – if your brain is this weak because science, and theoretical physics in particular, is all about the identification of increasingly complex and abstract regularities that describe or predict an increasingly universal collection of phenomena in Nature; it's all about the discovery, memorization, and extension of relationships between empirical observations and aspects of theories – and between various aspects of theories mutually.
Take Bell's inequality. Two spin-1/2 particles are prepared with spins in the singlet state so they are completely anticorrelated. The correlation \(-1\) between them (when you measure them along the same axis) is generalized by quantum mechanics to be \(-\cos\alpha\) if you measure the two spins along axes with the angle \(\alpha\) in between them. This simple and elegant prediction of quantum mechanics is perfectly confirmed by the experiments.
Bell has shown that any local classical theory – where the randomness of the results is due to some hidden variables but all variables, including the hidden ones, already have some objective values independently of (or before) any measurements – predicts the correlation \(P(\alpha)\) to be such that the function \(P(\alpha)\) obeys inequalities that the correct QM result \(-\cos\alpha\) violates. Because the QM result is experimentally verified, it follows that every local classical theory is wrong.
So far so good. A local classical theory is hopeless. It has no reason to predict an elegant result such as \(-\cos\alpha\) and not too surprisingly, no local classical theory makes this prediction. Even without Bell's theorem, you could figure out that without the principles of quantum mechanics that seem needed to imply the result \(-\cos\alpha\) (it's really a matrix element of the spin in a basis rotated by an angle, so you need the superposition principle for the spin's Hilbert space to get this result, and that's almost all of QM), this result (as a whole function) would be infinitely unlikely. And "the infinitely low probability of getting the right function \(P(\alpha)\)" is really a sufficient argument against non-quantum theories – which is another reason why the added value of Bell's proof is close to zero.
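The violation is easy to verify with Bell's original inequality, \(|P(a,b)-P(a,c)| \le 1 + P(b,c)\), which every local classical theory must obey. A minimal check, plugging the quantum correlation \(P(\alpha)=-\cos\alpha\) into axes at 0°, 45°, 90°:

```python
import math

# Check Bell's original inequality |P(a,b) - P(a,c)| <= 1 + P(b,c)
# against the quantum singlet correlation P(alpha) = -cos(alpha).
def P(alpha):
    """QM correlation of two singlet spins measured along axes at angle alpha."""
    return -math.cos(alpha)

a, b, c = 0.0, math.pi / 4, math.pi / 2   # axes at 0, 45, 90 degrees

lhs = abs(P(b - a) - P(c - a))  # |P(45 deg) - P(90 deg)|
rhs = 1 + P(c - b)              # 1 + P(45 deg)
print(f"LHS = {lhs:.4f}, RHS = {rhs:.4f}")     # 0.7071 vs 0.2929
print("Bell inequality violated:", lhs > rhs)  # True
```

Since the \(-\cos\alpha\) correlation is what the experiments actually measure, every local classical theory is dead on arrival.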
The correct refined commentary "why the local classical theory is wrong" is that the Universe is local but it is not classical. It's local because it follows from special relativity – and relativity is almost unavoidable given the observations of relativistic phenomena (the universality of the maximum speed would really be enough, much like some other experiments could be enough by themselves, too).
For totally silly ideological reasons that have nothing to do with any valid arguments or any observations, some people decide that the right commentary should be the exactly opposite one. (John Bell was really the first well-known man who was deluded in this way – and he added lots of invalid junk to the correct core of his theorem.) "The world is classical but non-local," they shout. These people boldly ignore all empirical evidence in favor of quantum mechanics – which is something they want to do, anyway – but to do so, they also need to ignore all empirical evidence in favor of relativity. Because modern physics is primarily built on quantum mechanics and relativity, they deny all foundations of modern physics. But it's fine with them!
Imagine that, as they assume, the laws of physics fundamentally allow the information to be sent superluminally. Let's find a particle or a quantum of a field that propagates faster than the speed of light, e.g. at the speed \(2c\), and let's call it a "speedon". I deliberately avoided "tachyon" because "tachyon" should be used for particles with a negative squared mass in otherwise Lorentz-covariant theories only – but we want to assume that the fundamental theory is non-local and therefore Lorentz-breaking.
OK, so first of all, no one has seen such a speedon – or any manifestation of a superluminal propagation of anything tangible. But if such a particle or an effect is important enough in the laws of physics, it must exist somewhere. Some kind of a flavor or a trace of it must be included in various other material objects. For example, lead nuclei might contain 0.0001% of the speedon, in some form, perhaps "virtual speedons", and that should change the maximum allowed speed of the lead nuclei by something comparable to 0.0001%, in one direction or another.
But that clearly doesn't occur because the lead nuclei have the same maximum speed as light, electrons, positrons, and protons. So something is seriously wrong with the theory of "speedons", right? It follows that the laws of physics had better rigorously respect the principles of relativity, otherwise they almost unavoidably predict phenomena that are clearly not seen in Nature. The different maximum speeds of different particle species are just one clear example I randomly chose. We could parameterize the violations of relativity in many different ways – e.g. by a different taste of food in a quickly moving train! ;-) If relativity is fundamentally violated, the number of phenomena that are predicted to be seen – but are not seen in the world around us – is infinite.
For example, there exist infinitely many species of bound states of elementary particles (nuclei, atoms, molecules) and each of them would have a different maximum speed in a fundamentally non-relativistic theory. These speeds would be uncorrelated with each other in a completely generic non-relativistic theory (because the bound states are more than the sum of their parts – actually less, if you count the mass/energy, thanks to the binding energy). The assumption that all these speeds are equal to each other amounts to infinitely many real-valued equations, and if your fundamentally non-relativistic theory is predictive at all, i.e. if it only has a finite number of parameters (even if it has hundreds of them), you just won't be able to fine-tune it to obey the infinite number of conditions (the maximum speed is the same for all bound states)!
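The mass deficit of bound states is easy to check numerically. A minimal sketch with standard rounded rest energies in MeV, using the deuteron as the simplest nucleus:

```python
# Bound states weigh less than the sum of their parts: deuteron example.
# Standard rest energies in MeV (values rounded).
m_proton = 938.272
m_neutron = 939.565
m_deuteron = 1875.613

# The missing mass escaped as the binding energy when the deuteron formed.
binding_energy = m_proton + m_neutron - m_deuteron
print(f"binding energy = {binding_energy:.3f} MeV")  # ~2.224 MeV
```

So a deuteron is a new object with its own mass – and in a generic non-relativistic theory, nothing would force its maximum speed to coincide with its constituents'.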
Laymen are often being brainwashed by the claim that with some fine-tuning, they can fulfill any conditions – and a von Neumann quote involving an elephant is often cited as a "proof". But it's simply not true at all. A random theory with finitely many parameters is virtually guaranteed to violate deep principles (such as the principle of relativity, even if you just mean "a class of experimentally doable tests of relativity") for every single value of the parameters! You simply don't have a sufficient number of parameters to "fake" the validity of infinitely many conditions that the principle predicts.
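The counting argument may be illustrated numerically – a hypothetical toy model, not any specific theory: a model with 4 adjustable parameters asked to satisfy 50 generic conditions exactly. Even the optimal least-squares "fine-tuning" leaves a large residual:

```python
import numpy as np

# Toy fine-tuning obstruction: a model with k parameters generically
# cannot satisfy many more than k independent conditions exactly.
# Here: a degree-3 polynomial (4 parameters) vs 50 random constraints.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
targets = rng.normal(size=50)   # 50 generic "conditions" to satisfy

A = np.vander(x, 4)             # design matrix: only 4 parameters
params, residuals, rank, _ = np.linalg.lstsq(A, targets, rcond=None)

print(f"best-fit residual: {residuals[0]:.3f}")
# far from zero: even the optimal fine-tuning leaves most conditions violated
```

With infinitely many conditions (equal maximum speeds for all bound states), the mismatch only gets worse – no finite-parameter fake can mimic an exact principle.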
Nature must clearly respect the locality and Lorentz invariance exactly or almost exactly – at most, you might imagine that the laws of physics are obtained from relativistic ones by the addition of tiny Lorentz-breaking terms. But because there has to be a starting point that is exactly Lorentz-covariant, there is really no point in adding the deformation if the effects are unobserved. Occam's razor tells you that you shouldn't add them – the purer theory is nicer.
Quantum field theory or string theory respect the postulates of relativity as well as the principles of quantum mechanics. The right theory obviously has to be like that – and if quantum gravity is included, you really need string/M-theory. The theory must tell you that hidden variables that are the cause of the random outcomes fundamentally don't exist – if they existed, Bell's theorem would imply that either relativity is broken (and it would almost certainly be broken big time, and the LHC would see different maximum speeds for different species) or the theory predicts spin correlations that are too low and experimentally ruled out.
Why would someone believe all these wrong ideas? Why would you place some totally indefensible philosophical assumptions (the measured values must be already determined a picosecond before the observation) above the precise experimental tests of the postulates of relativity, principles of quantum mechanics, and their consequences? It's due to a combination of bigotry and low intelligence – the latter may also be described as the absence of the "pattern recognition" abilities.
Recently I discovered an incredible fallacy that helps these intellectually incapable people to sustain their indefensible beliefs – and it's the following "clever idea" of theirs:
We don't need to worry about the experimental tests showing that the information never seems to propagate faster than the speed of light. Why? Because we may simply assume that the superluminal signals can't be observed in practice.

Too bad, these people haven't yet realized that this clever method of theirs may be used outside physics as well – everywhere – e.g. to revive creationism.
I needed some time to understand – and verify – that these people actually mean what I just wrote because the stupidity of such a belief is just utterly mind-boggling. They have discovered a whole new method to produce correct theories of Nature:
Invent any ideas or theories you want and make sure that they are OK by adding the assumption that no unwanted prediction of the theory may actually be observed!

It sounds like a parody but some people really think it is possible. Start with any garbage – and everything they believe about science is garbage, which is unsurprising because they protect their garbage from any criticisms – and say that it's good enough because you may also assume that it is not garbage and that no contradictions between your beliefs and the observations can ever be spotted in experiments. It's so ingenious, isn't it?
These people believe that the relationship between the observations and the predictions of their theory can be and should be completely cut – simply by dividing your beliefs about physics into two parts. One part is whatever you want to believe. And the other part is the assumption – an additional axiom – that your first belief is compatible with all the observations! ;-)
Needless to say, the problem with all such "combined theories" – composed of "random garbage" plus "the belief that the garbage doesn't contradict observations" – is that these "combined theories" are internally logically inconsistent: the garbage simply does imply predictions that are experimentally refuted, despite all your fine-tuning efforts, so the assumption that it doesn't is just incompatible with the rules describing the garbage.
Whether a theory predicts superluminal signals isn't a matter of assumptions. It's a matter of theorems. Any well-defined theory of yours (and some general traits of the theory are usually enough to decide) either predicts that superluminal signals don't exist in which case the theory has passed a single but important test; or it implies that superluminal signals will appear in which case the theory is dead because it contradicts the observations.
How hopeless must someone's brain be if he or she cannot figure out that you just can't add pure wishful thinking such as "the theory won't contradict tests of relativity" to a theory that clearly does say something about the validity of relativistic predictions? Incredibly hopeless, indeed. They just don't even get the elementary point that theories actually have implications that you can't change by adding "further assumptions". But such people are all around us and because of the brutally deteriorating quality of the education systems and universities in particular, many of them have been accepted to colleges.
I think that the belief "I can just easily solve a problem by assuming that the problem doesn't arise" partly boils down to the culture of entitlement and spoiled brats. Some folks were simply trained to receive whatever they ask for, even if it is absolutely ludicrous or immoral, and the validity of "the theory is fine" when it's clearly not fine is something else they expect to get easily.
In the 1990s, people like Bolstein couldn't make it to colleges – or at least not to physics departments of graduate schools. But the quality has hugely dropped and these days, you find the people whose thinking is as catastrophic as Bolstein's, and sometimes isomorphic to Bolstein's, in some (so far lousy) graduate schools or universities, too.
If you're a famous quantum physicist, get ready to be screamed at by one of these people who believe that you may simply add the "assumption that no problems with experiments may be seen" to his or her theories.