Thursday, September 08, 2011

Is there too much theory in science?

That's the question that Jon Butterworth asked on his Guardian blog four days ago:

The laws of physics. Or are they more like guidelines?
Make no mistake about it. I think there is too little theory in science. This gap is one of the things that ultimately make contemporary science slow, inefficient, and vulnerable to politicization and the propagation of superstitions. Lots of theory is needed.

By "lots of theory", I surely don't mean assorted speculations that try to invent as many new things and effects as possible, regardless of rational arguments suggesting that the number of new things and effects to be discovered in the foreseeable future is ultimately going to be pretty low.

Theory as rational conservatism derived from the experience in action

By the "lots of theory" that is needed, I mean primarily those considerations that try to extrapolate our knowledge as far as it can go, that produce a picture that is internally consistent and compatible with observations, and that separate the things that are established from those that aren't, as reliably as possible.

In science, lots of mathematics is needed. Whoever thinks that professional engineering or 21st-century physics can be done without maths is unreasonable. The opposite claim is of course a very popular theme with the public because most of the public hates maths. Their average IQ is just 100 – moreover, this sad fact is true by the definition of IQ. ;-) But the most shameful populists of this kind should be constantly identified as anti-science, anti-civilization weeds.

By the way, did you know that Google Trends constantly rewrites and deletes its history? Right now, a comparison of the searches for Motl, Smolin, Woit shows that almost no one has ever searched for the two aggressive, dishonest cranks, so historians of science could incorrectly conclude, using Google Trends, that my anxiety about the influence of these two populist scumbags was unjustified by the data – that no one ever gave a damn about them. But those who played with this service a few years ago know that it wasn't always 1.00:0.00:0.00.

There's been lots of hype about new physics, new miraculous supernatural effects that were discovered or that were about to be discovered. But most of these effects turned out to be either "not new" or "completely non-existent". In the latter case, they were often "demonstrably impossible" at the time when they were proposed. This bias boils down to the lack of theory, too. People just don't understand the phenomena that actually exist. That's what leads them to envision many new phenomena that actually don't exist or that are not new. Most of the speculations about such new phenomena may be excluded by pure thought.

This is another unpopular point. We often hear that Lord Kelvin claimed that airplanes would be impossible. Well, he also said that X-rays were a hoax, that radio had no future, and that sunspots had no relationship to magnetic storms – the apparent relationship was just a coincidence. ;-)

That's a correct observation. Lord Kelvin was wrong: that's not shocking; scientists are not built to be infallible. But he was still an excellent scientist. And what these four examples do is cherry-pick episodes from the history of science to support a preconceived conclusion. While the episodes are real and scientists do fail to be infallible, a much higher number of people have claimed something to be possible (or done) even though it later became clear – and it was often clear to many contemporaries as well – that it wasn't really possible. Some people want to actively hide such lessons of the history of science, which is too bad. They think in the "Yes, we can" way. They consider it politically incorrect to say things that violate the "Yes, we can" paradigm. But we can only do the things that are allowed by the laws of Nature, and "Yes, She can" prevent us from doing many things we could imagine. ;-)

If people spent more time educating themselves in theory and in sufficiently deep, quantitative, and comprehensive thinking about Nature, they would know that it's very hard to come up with genuinely new fundamental phenomena because physics (mostly) just works. New effects, if they exist, must be pretty subtle – weak, requiring extraordinary energies, or something else that makes them pretty inaccessible. People wouldn't expect as many "constant revolutions" as they do today. On the other hand, they would be much more excited if a breakthrough actually occurred. Their thinking would be much more correlated with reality.

They would know that the Standard Model and general relativity describe almost everything we have ever observed (with the exception of the dark matter hints that the DMIS friends are seeing, if they are really seeing it, haha); that something like the Higgs sector has to be a part of this story even though it hasn't been seen yet; that supersymmetry is likely to kick in at some scale; and that all these partial, approximate laws must almost certainly arise from a compactification of string theory, whether or not we will ever be able to observe some "purely stringy" phenomena. Even though some people incorrectly say that She is a bitch, Nature isn't obliged to pose for Playboy. Whether we learn something new about Her in the next XY years depends both on our skills and on Her laws.

The value of thought experiments

Thought experiments are a typical component of a theorist's reasoning. They're not real experiments but they help us to figure out what's going on and how the laws work so that they're compatible with each other. In many cases, it doesn't really matter that these experiments are only "gedanken" because we have some principles that tell us what should happen.

When thought experiments are useful, we may actually answer what would happen in such an experiment by applying principles robustly extracted from actual experiments.

Thought experiments are merely one aspect of our thinking about the real world. Similar modes of thinking allow us to catch errors in our own reasoning, discover limitations of some of our assumptions, and – as mentioned previously – invalidate conjectures about the existence (or absence!) of some very new phenomena.

Purely theoretical analyses of black holes

Butterworth explicitly mentions the analysis of black hole interiors, which is a purely theoretically driven exercise. No one wants to visit the interior of a black hole whose radius is a few light minutes or (usually much) smaller because you could be sure that your life wouldn't last more than a few more minutes.

Still, there are good reasons to think it is a legitimate question to ask: what would observers who fall through the event horizon of a black hole experience, see, feel, and perceive? Basic principles of approximate locality make it pretty clear that the infalling observers shouldn't feel anything special when they cross the event horizon. It's a generic place in space. In fact, you may only identify the location of the event horizon "retroactively", once you know what the whole spacetime, including the future stages of the black hole's life, looks like. If the laws of physics allowed the observer to easily see that he has already crossed the event horizon, they would have to involve some amazing discontinuity, moreover one based on a huge non-locality if not retrocausality.

This leads us to the picture of black holes painted by classical general relativity. You don't feel anything when you cross the horizon; it's only the ƒuçkiñg collision with the singularity, with its huge tidal forces, that kills you.
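One can quantify why the horizon crossing itself is uneventful for a large black hole with a back-of-the-envelope tidal estimate. The sketch below uses the standard Newtonian tidal formula 2GM·Δr/r³ evaluated at the Schwarzschild radius – an approximation only (full GR corrections are ignored), with textbook constant values assumed:

```python
# Rough Newtonian estimate of the tidal acceleration across a ~2 m body
# at the horizon r_s = 2GM/c^2. This is a sketch, not a GR calculation:
# the scaling 2*G*M*dr/r^3 is the usual back-of-the-envelope tidal formula.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def tidal_at_horizon(M, dr=2.0):
    """Tidal acceleration (m/s^2) across a length dr at r = r_s."""
    r_s = 2 * G * M / c**2
    return 2 * G * M * dr / r_s**3

print(tidal_at_horizon(M_sun))        # ~2e10 m/s^2: lethal long before the horizon
print(tidal_at_horizon(1e9 * M_sun))  # ~2e-8 m/s^2: unnoticeable for a giant hole
```

Since the tidal stress at the horizon scales like 1/M², an observer falling into a billion-solar-mass black hole notices nothing at the crossing; only closer to the singularity do the tidal forces blow up.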

At the same time, quantum considerations guarantee that the black hole must have a huge entropy – a huge number of new degrees of freedom that remember its state – and those degrees of freedom could perhaps even be "imagined" to live in the bulk of the black hole interior, as the fuzzball program proposes. Quantum mechanics requires that there exist some degree of non-locality that allows the information about the infalling observer (and the star that collapsed into the black hole in the first place) to get out in the form of the Hawking radiation.
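The "huge entropy" is the Bekenstein–Hawking entropy, S = k_B·A·c³/(4Għ), one quarter of the horizon area in Planck units. A minimal numerical sketch, with approximate constant values assumed:

```python
import math

# Bekenstein-Hawking entropy S/kB = A * c^3 / (4 * G * hbar) for a
# Schwarzschild black hole of mass M; constants are approximate values.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J s
M_sun = 1.989e30   # kg

def bh_entropy_in_kB(M):
    """Horizon entropy in units of Boltzmann's constant."""
    r_s = 2 * G * M / c**2          # Schwarzschild radius
    area = 4 * math.pi * r_s**2     # horizon area
    return area * c**3 / (4 * G * hbar)

print(f"{bh_entropy_in_kB(M_sun):.2e}")  # ~1e77
```

A solar-mass black hole carries about 10⁷⁷ units of entropy – some twenty orders of magnitude more than the star that collapsed into it, which is why so many hidden degrees of freedom are needed.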

Those aspects of the physics research are surely "impractical".

People who only like "applied science" meant to liberate ordinary people from poverty etc. will surely be unimpressed. Because these exercises about "what may happen inside the black hole" and "how the black hole organizes the information about itself" are purely theoretical, you may even say that they belong to philosophy. I have nothing against that. I think that all the people who study this business seriously should proudly count themselves as modern philosophers. However, the logic by which this "philosophy" looks for the truth is still the pure, full-fledged scientific method. It eliminates wrong hypotheses when they disagree with the empirical evidence – even though most of the actual work is of a theoretical character. So it's a different philosophy from the philosophy of the times when it was pursued irrationally. This new philosophy is also called pure science – whether the populist critics of theorists (and theory) like it or not.

Too little theory in climate science

I don't want to focus on climatology in this text, but I also think that there's too little old-fashioned proper theory in climate science. By proper theory, I mean knowledge of the type that grad students are tested on during their oral qualifying exams, if you wish. You must use your knowledge of general physical phenomena, apply the right ones to a given problem, and end up with a good order-of-magnitude estimate of some effect – or perhaps a more accurate result if you're asked for one.

In most of this business, you don't need a computer. A theorist should always be able to estimate important things off the top of her head. This point is often misunderstood by laymen, who frequently think that a computer is needed all the time. It's not, and it shouldn't be. The point is that a good theorist must always be able to approximately reproduce the computer's operations – and in simple enough cases, exactly reproduce them – and come up with approximately the right result.
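An example of the kind of qualifying-exam estimate meant here, chosen for illustration: the zero-greenhouse effective temperature of the Earth, obtained by balancing absorbed sunlight against Stefan–Boltzmann emission. The input numbers are standard textbook values assumed for the sketch:

```python
# Back-of-the-envelope energy balance: S * (1 - albedo) / 4 = sigma * T^4.
# Solving for T gives the Earth's effective (no-greenhouse) temperature.
S = 1361.0        # solar constant, W/m^2 (textbook value)
albedo = 0.30     # Earth's Bond albedo, approximate
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(round(T_eff))  # 255
```

The result, about 255 K, against the observed ~288 K mean surface temperature, is the classic one-line argument for an overall greenhouse effect of roughly 33 K – exactly the kind of number a theorist should carry in her head without a climate model.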

If a system is analytically unsolvable and fails to be weakly coupled, you will need a computer to produce exact numerical results. But a good theorist without any computer should still be able to arrive at approximate results, and whenever one contribution is vastly greater than another, he should be able to see it without a computer. In some cases, he may even develop a "non-mathematical intuition" that allows him to "imagine" things and guess pretty accurate results without any "conscious" calculations: this is possible, believe me. Even if some insights of this kind were originally obtained by a computer, they become part of the knowledge of a good theorist, who may then use this knowledge in many other situations, too.

So the reliance on climate models is due to a shortage of proper theory, not an excess of it. Those people just don't understand the things themselves. But they think that if they have access to expensive computers, the computers may compensate for their personal ignorance. Except that they can't. The machines aren't miraculous and the programs were written by people. If you can't do certain calculations without a computer, not even approximately, you won't even be able to design the tests that decide whether the models behave properly (at least not when you only claim that your models reproduce some overall properties of a chaotic system). A religious belief that the model is omniscient won't help you. If the model is wrong, other people – better theorists than you – may ultimately see that it is wrong, regardless of the strength of your beliefs. And if you believe some things despite the evidence, then you are a demonstrable bigot.

LHC as a gift to theorists

Butterworth addresses the complaint that the LHC is just too big a gift to the theorists. The cost was $10 billion, and because its particular results are only comprehensible and relevant to a community of fewer than 10,000 theorists, it's clear that each of these people – like your humble correspondent (and especially the real experts in phenomenology) – was de facto given a gift of more than a million dollars. :-)

I think that this reasoning is kind of right from some viewpoint. (However, you could also say that the investment in building the LHC was a gift to the 10 billion people in the future who will be fully familiar with state-of-the-art particle physics, so each of them only received a $1 gift.) But yes, I do think that the people doing state-of-the-art particle physics deserve a million-dollar gift that doesn't actually go into their wallets.

At the same time, what is ironic about the complaint is that it is usually raised by people who don't worship theorists as much as they should, to put it euphemistically. ;-) But the LHC was built – and is being run – largely by the experimenters, and the would-be friends of the experimenters completely forget about the experimenters' existence! So even though the theorists are surely interested in the results of the LHC experiments, the gift primarily goes to the experimenters. Just to be sure, their number is also comparable to 10,000, so each of them got a million dollars, too! :-)

The experimenters are trying to find whatever can be found by the LHC – and the LHC indisputably probes new regimes where new phenomena may be hiding. The hype has been (and, to some extent, still is) huge: the LHC must find new phenomena, and so on. However, there don't have to be any new phenomena at the LHC except for something that plays the role of the Higgs sector which may end up being very boring and simple.

This bias – I mean the huge thirst for new phenomena – isn't new. In fact, if you read David Gross's article celebrating his smaller colleague, Oskar Klein, you will see that the greatest physicists of the late 1930s (except for Oskar Klein himself, who was really an early string theorist and therefore had a good idea about the big picture) were willing to abandon all of their knowledge of quantum mechanics (which they had found themselves) at distances only slightly shorter than those where those laws had been established – at the Compton wavelength of the electron.

This was crazy. These days, we know that non-relativistic quantum mechanics works nicely all the way down to the nuclear distance scale, and the general postulates of quantum mechanics almost certainly hold universally, without any limitation. But in some sense, some people's expectation that the LHC has to invalidate the Standard Model and find lots of new things is similar to the belief of Heisenberg et al. in the late 1930s that quantum mechanics was likely to collapse "right around the corner". People never learn.

To be sure, at some points, we do find new phenomena. But we should have realistic expectations about the frequency of such events. When you look at the masses of particles on a logarithmic energy scale, they're "approximately" uniform: only a few new particles (plus or minus a few) appear every time you increase your energy 10-fold. So you will get something between zero and "two times a few" particles when you multiply your energies by 10.

The LHC has only multiplied the center-of-mass energy of the Tevatron, 2 TeV, by a factor of 3.5, which is close to the square root of ten. So you should expect something between zero and "one times a few" new particles by now. And indeed, the exact result so far sits at the lower end of this interval: zero. ;-)
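The arithmetic of the last two paragraphs can be sketched in a few lines. The "few per decade" rate is the text's assumption; the choice of 3 for "a few" below is purely illustrative:

```python
import math

# If new particles appear roughly uniformly per decade of energy (the
# assumption above), the expected haul scales with log10 of the energy
# ratio. Tevatron: 2 TeV center-of-mass; early LHC: 7 TeV.
ratio = 7.0 / 2.0
decades = math.log10(ratio)
print(round(ratio, 2), round(decades, 2))  # 3.5 0.54 -- about half a decade

few = 3  # "a few" new particles per decade; an illustrative number
expected = few * decades
print(round(expected, 1))  # 1.6 -- i.e. between zero and "one times a few"
```

Half a decade of new energy reach, times a few particles per decade, gives an expectation of order one – so finding zero new particles so far is entirely compatible with the log-uniform picture.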

So please don't get carried away. There has never been a very good reason to expect a "huge amount of new physics" in the early stages of the LHC's running. It could have happened, and many people secretly or openly hoped it would (I did so mostly secretly, but I won't pretend that this wouldn't have been my preferred scenario when it comes to the happy hormones), but it didn't happen. We will see how this changes in the future. Of course, it can change. You still have the expectation that new things often appear when you multiply the energy by 10.

This absence of new physics at the LHC so far shouldn't prevent people from thinking about possibilities – because thinking is not too expensive, after all. The amount of money spent on theorists in high energy physics is probably vastly lower than the amount spent on the technical equipment of the experiments, maybe by orders of magnitude. So theorists have the right to think about "what if" questions.

However, one must always be cautious about this bias – about the inherent inclination of the theoretical community to invent many new things, because its members make a living out of them. If the papers in the literature (in any discipline) typically talk about many new, interesting, and exciting phenomena that should already have been seen by the LHC (or in another experiment or process), it doesn't imply that the scientific community seriously thinks there will be lots of such phenomena. This "irrational exuberance" occurs because it's easy to think up and invent new things (especially if they're allowed to be largely independent of the old things) and because scientists are financially motivated to think about theories that predict lots of new things – you may write many more papers about the new things if you assume that they exist. ;-)

This pressure of course affects not just climate science but every scientific discipline, including particle physics; only the way researchers deal with this clear bias may depend on the quality of the discipline.

The scientific community itself should never get carried away, either. And it shouldn't have gotten carried away – but maybe it has. Despite the large model-building activity, experts should always have kept a reasonable opinion about what would actually happen: what the probability is that some new physics would be observed, and how many new particles etc. would have been observed by now. Among 1,000 different models of sub-TeV new physics, at least 999 were guaranteed to be wrong. I am afraid that the (phenomenological) particle physics community has partially failed in this task – it has mistaken the composition of its own theoretical literature for reality.

We live in refreshing times when the LHC has already killed a large portion of the detailed phenomenological papers that wanted to get to the gold too quickly. Suddenly, we see the truth and not just what some phenomenologists wanted to sell as the truth, even though their main argument was often hype or a financial interest. Let's hope that people will learn a lesson. At the same time, don't forget that the absence of new physics at the LHC so far isn't a proof that there isn't any new physics at all.


But don't forget the main point: it's very important for modern science and technology to have very good theorists, because a very good theorist may replace lots of experimental work much more cheaply – and for many other reasons, too. Thinking about the real world around us and going beneath the surface, even if the amount of food we can eat doesn't depend on these excursions, is one of the main things that arguably put us above the other animals on Earth (apologies to all dog Americans, kitty Americans, and dolphin Americans who have read this text and whose sensibilities have just been offended).


snail feedback (1) :

reader Frogwatch said...

I have an ancient anecdote concerning the disconnect between the theory behind atmospheric models and experiment. Although I advertised myself as an experimentalist, I got hired by a defense contractor to model the experiments – specifically, the infrared atmospheric brightness (radiance) produced by the high-altitude nuclear explosions of the early 60s. This info was critical to those who were designing IR sensing systems to see incoming Soviet nuke missiles for Reagan's Strategic Defense Initiative (Star Wars). Models using existing knowledge of the atmosphere were consistently 2 orders of magnitude off, predicting lower brightness than measured. The modelers insisted the primitive experiments of the 60s must have been flawed.
Later, when aurora measurements made from sounding rockets were used as a proxy for atmospheric nuclear bursts, the same thing happened. After 4 sounding rockets showed the same phenomenon, the modelers still insisted the experiments were flawed. The brightness was apparently being produced by water vapor at extreme altitude, and the modelers insisted there was no way there could be enough such vapor that high up. I do not know how this was ever resolved, but I think the experimentalists were correct.
The atmospheric model was the most advanced of its time and is the basis of the ones being used today in climatology. The modelers still think theory trumps experiment.
