Tuesday, October 22, 2019

Theories with special properties are more valuable and more likely than their generic cousins

In an interesting conversation, someone complained about the numerous recently published swampland constraints by saying that they "go too far".

While I don't know whether these constraints are valid, I believe that the complaint that a scientific hypothesis "goes too far" simply isn't a valid scientific argument – and it is downright harmful. Why? Because those of us who like science and progress in science actually want the new theories to be as far-reaching as possible – we just want the new results to go very far! The idea that they "shouldn't go too far" is equivalent to saying that the speaker "doesn't want much progress in physics"!

If some hypothesis "goes too far" in the sense that its proposition is too strong, it should be easier, not harder, to disprove it – assuming that the statement is indeed wrong. In that case, we should be more demanding and indeed expect an actual disproof, not just some emotional or prejudiced complaints that something "goes too far", right?



New, revolutionary theories in physics have often invalidated some assumptions of the previous theories. In this sense, they indeed went "too far" – people had to think outside the box. But even before the invalid assumptions were identified, people had to focus their attention on some "special subclasses of theories or hypotheses" where something unusual was true, where something rare was happening, where there was a potential to find a previously overlooked hidden gem.



Let me give a few examples of progress in physics. Galileo Galilei was throwing eggs on the tourists from the Tower of Pisa – a favorite hobby of his – and he already knew that the speed of the eggs tends to increase with time as they're freely falling. The function \(v(t)\) could be a generic increasing function and most people were satisfied with the qualitative understanding of "accelerated motion". But Galileo wanted to know more – by how much the eggs were accelerating.

Although Newton's and Leibniz's calculus wasn't known yet, Galileo had to do something similar using the more primitive tools of his time. So he formulated two hypotheses:

The speed increases by a universal amount (in meters per second) every second.

The speed increases by a universal amount (in meters per second) every meter.
We know – and he found experimentally – that the first answer is correct. It leads to the parabolic motion. The second option would lead to the differential equation\[

\frac{dv}{dt} = Kv.

\] There's an extra factor of \(v\) on the right-hand side because in each second, the object travels a distance proportional to \(v\), so the number of "per meter" increments of the speed it collects in that second is also proportional to \(v\) – that's why the acceleration is proportional to the speed. Needless to say, we get\[

\frac{dv}{v} = Kdt

\] and the solution – by integration – is \(\log v = Kt + {\rm const}\), i.e. \(v=v_0\exp(Kt)\). The speed would increase exponentially. The Earth demonstrably isn't this good at achieving exponentially high velocities. Even more obviously, it wouldn't be possible to accelerate anything starting from \(v=0\) because zero isn't the exponential of any finite number. Galileo was interested in the quantitative aspect of the acceleration in Earth's gravity – and he found the "constant acceleration" and the parabolic paths that follow from it.
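If you want to play with Galileo's two hypotheses yourself, here is a minimal numerical sketch in Python – the values \(g=K=9.8\), the time step, and the five-second duration are my illustrative assumptions – showing that the "per second" law grows linearly while the "per meter" law can never start from rest and otherwise explodes exponentially:

```python
# Forward-Euler comparison of Galileo's two candidate laws.
# The constants g, K and the step size are illustrative assumptions.
g, K = 9.8, 9.8          # m/s^2 and 1/s, respectively
dt, T = 1e-4, 5.0        # time step and total fall time in seconds
steps = int(T / dt)

def final_speed(accel, v0):
    """Integrate dv/dt = accel(v) from v(0) = v0 for T seconds."""
    v = v0
    for _ in range(steps):
        v += accel(v) * dt
    return v

print(final_speed(lambda v: g, 0.0))       # ~49 m/s: linear growth, v = g*t
print(final_speed(lambda v: K * v, 0.0))   # exactly 0: can't start from rest
print(final_speed(lambda v: K * v, 1e-6))  # enormous: v grows like exp(K*t)
```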

Similarly, Ptolemy and Kepler were interested in the motion of the planets and – aside from the obvious circles that never quite worked – they proposed epicycles (Ptolemy's composition of two circular motions) and ellipses (Kepler's). Some closed curves were singled out.

Newton derived all these things from unified, more fundamental equations including \(F=ma\) and\[

F_{\rm gravity} = \frac{Gm_1 m_2}{r^2}.

\] Cool, it's the gravitational inverse square law. Someone could say that \(1/r^2\) is arbitrary. Why isn't there \(1/r\) or \(1/r^3\) or any other decreasing function of \(r\) over there? There is something special about \(1/r^2\) in three spatial dimensions, however. If you integrate the flux of the field lines over a sphere\[

\int \vec F \cdot \vec{dS},

\] you get a constant result regardless of the radius of the sphere because \(1/r^2\) from the force cancels against \(r^2\) from the area of the sphere. Of course, this has to be so because the gravitational field may also be written in terms of the gravitational potential \(\Phi\) that obeys something like\[

\Delta \Phi = G \rho_{\rm mass}.

\] Add the correct numerical prefactor – it is \(4\pi\)! Using Gauss' law and other things, it makes perfect sense why \(1/r^2\) is the right dependence of the force on the distance. But Gauss lived in 1777–1855. He formulated Gauss' law in 1813, just 206 years ago – although yes, Lagrange basically had it in 1773. But long before these things were fully understood, people could have noticed that there was a nice special heuristic explanation involving the integral of the field lines.
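The field-line counting may be checked in one line per force law. A trivial sketch, assuming a spherically symmetric radial field so that the surface integral collapses to \(F(r)\cdot 4\pi r^2\):

```python
import math

# Flux of a radial field F(r) through a centred sphere of radius r.
# The integrand is constant on the sphere, so the integral is F(r)*4*pi*r^2.
def flux(force_law, r):
    return force_law(r) * 4.0 * math.pi * r**2

laws = {"1/r":   lambda r: 1 / r,
        "1/r^2": lambda r: 1 / r**2,
        "1/r^3": lambda r: 1 / r**3}
for name, law in laws.items():
    print(name, [round(flux(law, r), 3) for r in (1.0, 2.0, 5.0)])
# Only the 1/r^2 row prints the same number (4*pi) for every radius.
```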

They could have believed that something special was going on for the \(1/r^2\) force – and this feeling was surely right, wasn't it? It is often right. There is always an increased chance that something important and clear will be discovered about the laws that look special. Good theoretical physicists are instinctively aware of this fact – because they have seen it work in so many examples. That's why they naturally focus on the special places, the special forms of the laws, and the special values of exponents and other adjustable quantities – the places where some special identities hold, where something special is going on.

It's an aspect of the "beauty in physics" that every good physicist greatly appreciates. In some sense, these two comments are exactly equivalent.

Aside from the special role of hyperbolae and ellipses as deformations of arcs and circles, and aside from the constant integrals of the field lines, a special type of trait was becoming important in the late 19th and early 20th century, namely symmetries. Well, the laws always had the \(SO(3)\) rotational symmetry of the Euclidean space but people found it obvious and they weren't trying to hype it because this symmetry looked like the only one and there seemed to be nothing similar of the same kind to be discovered.

But that was wrong. Later, many more interesting symmetries were found. In Maxwell's equations, when one adds the magnetic monopoles (or sets the electric charges and currents to zero), there is a symmetry between the electric and magnetic fields\[

\vec E \to \vec B, \quad \vec B \to -\vec E.

\] Maxwell was aware of that nice symmetry and it helped him to add the missing term – the displacement current – to the equations, which is why we name the equations after him – and why he could derive the electromagnetic waves (and explain light in terms of electrodynamics) using his equations! In the 19th century, it wasn't clear "why" there should be a symmetry between electricity and magnetism. Now, in 2019, it's still somewhat unclear "why" there are such symmetries except that we know that they surely are omnipresent in the interesting theories we have found, including string theory. These duality symmetries are connected with each other.
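One may verify the duality on a concrete solution. A sketch, in units with \(c=1\) and with a plane wave as my assumed test solution, that symbolically checks that the dualized fields \((\vec B, -\vec E)\) solve the vacuum Maxwell equations whenever \((\vec E, \vec B)\) do:

```python
import sympy as sp

# Units with c = 1. A plane wave moving along +z: E along x, B along y.
t, x, y, z, k = sp.symbols('t x y z k', real=True)
E = sp.Matrix([sp.cos(k * (z - t)), 0, 0])
B = sp.Matrix([0, sp.cos(k * (z - t)), 0])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def vacuum_maxwell_residuals(E, B):
    # Faraday: curl E = -dB/dt.  Ampere with no sources (c = 1): curl B = dE/dt.
    return (sp.simplify(curl(E) + sp.diff(B, t)),
            sp.simplify(curl(B) - sp.diff(E, t)))

print(vacuum_maxwell_residuals(E, B))   # (zero vector, zero vector)
print(vacuum_maxwell_residuals(B, -E))  # the dual fields solve them, too
```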

Albert Einstein made a big deal out of the symmetries. The Lorentz symmetry was the principle that reconciled the existence of many inertial frames – where the laws of physics have the same form – with the laws of mechanics as well as electromagnetism. That symmetry unifies space with time; energy with momentum (and, via the rest frame, it also equates energy and mass); and it merges the electric and magnetic fields into the same tensor. And by clumping the physical variables into a smaller number of tensors that must act as "wholes", it greatly constrains the form of the laws of physics.
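Operationally, the "unification" means that a boost mixes \(t\) with \(x\) while preserving the Minkowski metric, \(\Lambda^T\eta\Lambda=\eta\), so every interval (and every tensorial law) keeps its form. A minimal SymPy check, with my assumed signature choice \(\eta={\rm diag}(-1,1,1,1)\):

```python
import sympy as sp

# A boost with rapidity phi mixing t and x, acting on (t, x, y, z).
phi = sp.symbols('phi', real=True)
ch, sh = sp.cosh(phi), sp.sinh(phi)
Lam = sp.Matrix([[ch, -sh, 0, 0],
                 [-sh, ch, 0, 0],
                 [0,   0,  1, 0],
                 [0,   0,  0, 1]])
eta = sp.diag(-1, 1, 1, 1)

# Lambda^T * eta * Lambda - eta simplifies to the zero matrix,
# i.e. the boost preserves the spacetime interval.
print(sp.simplify(Lam.T * eta * Lam - eta))
```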

General relativity does something similar – at a higher level. There, all diffeomorphisms are postulated to be a local symmetry, which is sufficient to explain gravity once one assumes that the spacetime may be curved. Einstein's equations involving the Ricci curvature tensor end up being unique – well, "unique up to some order" in a decomposition of similar equations according to the number of derivatives.

New interesting physics is often found in the research of extreme situations. Extremely strong gravitational fields lead to black holes – with all the amazing phenomena, including the quantum ones such as the Hawking radiation and the information puzzles.

Quantum mechanics focuses on the extremely small and finds new symmetries and special values of many things. The harmonic oscillator has an equally spaced spectrum. The hydrogen problem reproduces the \(-1/n^2\) spectrum of Bohr's naive model of the atom. Quantum mechanics has an \(x\leftrightarrow p\) symmetry that is accompanied by the Fourier transform of the wave function. More generally, it postulates a symmetry or democracy between all Hermitian operators – the allowed observables. It says that the cells of the phase space whose volume is \((2\pi\hbar)^N\) are indivisible – they are the "elementary building blocks" of the phase space, although it's only the effective volume, and not the shape, of the brick that is determined by the theory.
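The equally spaced spectrum is easy to reproduce numerically. A sketch that diagonalizes a discretized harmonic-oscillator Hamiltonian, in units with \(\hbar=m=\omega=1\) (so the exact levels are \(n+1/2\)); the grid size and box length are my illustrative assumptions:

```python
import numpy as np

# H = p^2/2 + x^2/2 on a grid; -(1/2) d^2/dx^2 becomes a tridiagonal
# finite-difference matrix, x^2/2 a diagonal one.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

diag = 1.0 / dx**2 + 0.5 * x**2      # kinetic diagonal + potential
off = np.full(n - 1, -0.5 / dx**2)   # kinetic off-diagonal
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

levels = np.linalg.eigvalsh(H)[:6]
print(levels)            # ~ [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
print(np.diff(levels))   # ~ 1.0 everywhere: the equal spacing
```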

Quantum field theory combines the constraints and symmetries of quantum mechanics with those of the special theory of relativity. One finds out some general predictions of this union – particle production, antimatter, and crossing symmetry – and also subtleties in the calculations – UV and IR divergences. Those lead us to understand that "not all classical theories" are equally good starting points to be quantized. One divides the interactions into renormalizable and non-renormalizable ones, and so on. The renormalization group has made it understandable what's "better" about the renormalizable interactions: they're those that survive at energy scales much lower than the energy scale where the "typical new physics" takes place. But even before the renormalization group was fully understood, people already knew that there was something "better" about renormalizable interactions.

And then there's string theory beyond quantum field theory. For quantum gravity, we clearly need some "generalization of a QFT with infinitely many particle species" in some counting, e.g. because the black hole microstates are exponentially numerous (and they are effectively independent new particle species). In a generic theory of this kind, there would be an infinite number of unknown parameters to be added. String theory is a realization of the expectation that not all choices are created equal. The microstates must be derived as oscillation states of a string (resulting from a conformal symmetry of the world sheet etc.) or another extended object, or from some mathematical continuation of these theories to a stronger coupling. String theory has allowed us to organize all the ideas more tightly with numerous dualities, holography, the ER-EPR correspondence, perhaps some stringy uncertainty principles, and the swampland constraints, among other things.

There are symmetries, patterns, constraints (inequalities saying that quantities can't get too small or too large; and various no-go theorems), and new phenomena in extreme conditions everywhere in physics. Before we know what the new laws, constraints, symmetries, exceptions, and generalizations exactly say, we must see some fuzzy pictures of this new physics. That's surely what the intuitive "magicians" of physics such as Vafa often do. The real point is that such physicists who are changing the paradigm often see "that" something special is happening somewhere – in some conditions, for some special choices of the theories and their parameters – before they see exactly "what" is happening and why. And they see constraints before they may prove them.

So I am extremely irritated whenever I hear some people saying that physicists shouldn't think about new possible constraints; or that they shouldn't look at special examples of existing theories where something rare is happening; or that they shouldn't look at extreme conditions and at what the existing theories predict there; and so on. I am extremely irritated because, in combination, these things really are responsible for a majority of progress in physics. The rest is some "more routine work inside the box" but it is very clear that the "more routine work inside the box" wouldn't be sufficient to get where physics is today – and it won't be sufficient to bring physics where it will be in 2200 AD, either.

A sociological remark: I was more terrified when the person wrote me that – because of the "threats" posed by the swampland program – the "community of string theorists should be carefully handled". What? Handled like Greta Thunberg is handled by her handlers? Sorry, if some activity is real scientific research, researchers cannot be handled or constrained by any "boss", aside from a visible boss who has the superior expertise to see the big picture of the discipline. At the very top of such a hierarchy, there must be a real expert. But "handling" researchers by someone who isn't really their professional superior – but who still wants to seriously constrain what the scientists may and may not do – means the end of science. You can't "beat" this essential principle by some whining about some particular inconclusive or incorrect papers, about the (totally unsurprising) imperfections of average researchers, or about some "threats".

I am really terrified by the percentage of people who don't get this fundamental point about the scientific method – namely the point that scientists mustn't be controlled by someone who decides about the important conclusions according to political or otherwise non-scientific criteria.

