In science, confirmations are always ultimately empirical in character, but science has always been more than just naive empiricism.
A year ago, the philosopher (and trained theoretical physicist) Richard Dawid wrote a book titled String Theory and the Scientific Method in which he essentially argued that science is becoming less dependent on empirical observations.
Off-topic: a huge black hole (diameter 80 meters) was discovered at the Yamal Peninsula, Siberia.
Two amazon.com reviewers admitted that they haven't read the book but, as homeless losers, they don't like its price. I have trouble with this kind of "review". If they are homeless losers who can't afford to buy a product, why don't they just keep their mouths shut? Reviews should be written by someone who knows what he is reviewing. If they haven't even seen the book, they can't say whether the price is appropriate.
About a week ago, Richard Dawid was interviewed by the 3 a.m. magazine:
People like Sabine Hossenfelder along with assorted über-šitheads whose names are banned on this blog (and who should be banned in the Solar System, too) disagreed with Dawid.
I am tired of these debates. The basic philosophical framework is so clear.
Science always ultimately derives its validity from observations and experiments – from the empirical data. That has been true for centuries, it is true now, and it will remain true in the future. On the other hand, the experiments and observations needed to decide the truth value of scientific questions have almost never been fully direct, and they have been getting increasingly indirect.
Über-šitheads and probably Sabine Hossenfelder have no idea about the structure, character, and inner workings of the evidence that is actually relevant in 21st-century physics. So they would like to reduce the scientific method to the work of a mindless bureaucrat who sits, performs some mechanical procedures, and directly derives the validity of scientific theories out of them. But science can't be done like that. The empirical data are the ultimate judges, but one must be able to think cleverly and learn an increasingly sophisticated and complex language to translate between the seemingly disordered, boring, raw data on one side and far-reaching, conceptual, general scientific claims on the other side.
Richard Dawid realizes many important things but he is wrong to talk about a "qualitative transformation" that the scientific method is allegedly undergoing. Complicated theoretical arguments have had to be added on top of the "raw experimental data" to decide about the validity of many claims since the very beginning – since the moments when Galileo Galilei established the scientific method. I could tell you lots of examples from Galileo's life (for example, whether one was allowed to use a telescope as a source of empirical data was controversial for many years!). The chain connecting the empirical data on one side and the conceptual foundations of theories on the other side has been getting longer and subtler since Galileo's time, and 20th-century physics – and string theory in particular – made the chain even longer.
I also disagree with Dawid's claims such as "string theory hasn't been confirmed" or "the confirmations of string theory we have are non-empirical in character". That's just not the case. We have dozens of very important confirmations of string theory and they are ultimately of a perfectly empirical character. Even the "pure consistency" checks may be classified as being empirical. The fact that the probability is never negative (or greater than 100 percent) may be derived from the empirical data, too.
Moreover, the "directly" empirical confirmations of string theory are exactly as strong as the "directly" empirical confirmations of quantum field theory. One may demonstrate that string theory predicts pretty much the same things for low-energy experiments as QFT – the only difference is that the data needed to construct the right theory/vacuum are parameterized differently in QFT and in string theory. In QFT, you build a theory by adding fields one by one; later, you adjust the renormalizable couplings. In string theory, both steps are replaced by a single discrete choice of the compactification – one stabilized solution of string theory is picked from finitely or countably many. From a purely empirical viewpoint, these two frameworks – QFT and string theory – are equivalent, even though we prefer to use QFT to describe the collider data because QFT is "less abstract" and "more directly connected with the raw data" than string theory. But at the theoretical level, the more abstract character of string theory is really an advantage. What we cannot do is perform a feasible, direct empirical test that would discriminate between QFT and string theory. But you can't selectively use this as an argument against string theory; that is a completely logically invalid way to argue. Using it as an argument in favor of string theory and against QFT would be equally (il)logical.
QFT was historically found before string theory but that's just a fact about history or sociology, not a fact about science. In science, one theory can't be considered "more correct" or "more empirically rooted" just because it is older. The existence of black holes wasn't affected by the death of Karl Schwarzschild, who contracted a deadly disease while calculating trajectories of projectiles in the Great War.
Competent high-energy theoretical physicists find it important to pay attention to what string theory says about various problems not merely because string theory is a remarkably consistent, rigid, predictive, unifying mathematical structure and these adjectives sound "cool". These properties are actually needed for the theoretical research to have any added value – and they are why the competent folks know that string theory is a "more correct" framework than QFT.
What do I mean? If you work within quantum field theory, a particular quantum field theory makes lots of predictions about particles, fields, and phenomena in general. But aside from the Standard Model low-energy approximation, we don't know the right quantum field theory to use for "all of Nature" (just like we don't know the right string theory compactification – yet). On the other hand, theoretical physicists have gotten so good at analyzing quantum field theories that the translation between "assumptions about the field content and parameters of a QFT" and "the implications of that QFT" has become "almost straightforward".
For this reason, when we are thinking about implications of particular quantum field theories, we aren't really learning much about Nature. We are just translating the "assumptions about new physics formulated in one way" (in terms of the directly observed data) to the "assumptions about new physics formulated in another way" (the field content and interactions in a QFT).
While some clever phenomenological papers might be said to add some small value after all, string theory provides us with the only known (and, quite likely, the only mathematically possible) collection of underlying principles that still produce effective quantum field theories at low energies but that systematically constrain them in new ways – so that we can actually learn something new rather than assuming the same thing in different words! String theory gives us new principles implying that one particle spectrum, one set of low-energy interactions, or one pattern of values of the parameters is more justifiable than another.
Even though it must undoubtedly sound surprising to the ears brainwashed by a decade of hostile anti-scientific propaganda by the Shmoitian scum, string theory is being studied exactly because it is more predictive than the framework of QFT that string theory has superseded.
There is a sense in which I even disagree with the statement that "it has become very difficult to test theories experimentally". Whether this sentence is true or false depends on the theories you want to test. If you cherry-pick theories that have been carefully adjusted to agree with all the known empirical facts and that are decorated with an additional, possibly unmotivated cherry on top of the pie (new phenomena at very high energies added on top of the Standard Model), then indeed, it may be difficult to test such theories because you may need a collider that is too big or too expensive.
On the other hand, if you talk about sufficiently generic theories that naturally follow from some simple enough principles and that someone unfamiliar with the pyramid of known experimental facts could propose, it has become much easier to test such theories simply because the amount of experimental data that scientists have collected is huge and it is still getting larger.
It is therefore extremely easy and fast to falsify more or less every theory of this kind. A random theory constructed from scratch is pretty much guaranteed to contradict some empirically established facts! For example, it takes a few minutes to prove that everything that any of the professional string theory critics has ever proposed as a new theory is wrong. For a theory to be viable in 2014, it must be extremely similar to the Standard Model (or, more generally, a QFT) in many ways; otherwise it contradicts the data. We also want theories that are "original" at least in some respects; otherwise we're not adding much to physics. Theories obeying both conditions in the previous two sentences are very rare, and string theory is really the ultimate representative and perhaps the only "comprehensive prototype" of such theories.
There are theories like "the Standard Model with an added heavy particle" etc. that can't be falsified that easily but it's really because these theories were constructed to resemble the known "minimal" viable theories (the Standard Model in particular) and to deviate minimally, in aspects that are not really independently justified by anything. So such (often) unoriginal "new theories" or "potential competitors of the Standard Model" really violate Occam's razor in most cases. The added stuff doesn't really make the theory any prettier, more sensible, coherent, consistent, or universally applicable.
The frameworks, theories, and principles that naturally produce predictions that agree with the empirical data – i.e. contain quantum field theories or the Standard Model as a good approximation – but that also give us a new, prettier, more unified, more robust perspective on the natural phenomena must be taken very seriously because they're really the only alternatives that may be meaningfully investigated before one experimentally observes their consequences.
Such valuable frameworks, theories, and principles include string theory, supersymmetry, grand unification, and – to a lesser extent – several other interesting "conceptual enough" ideas in modern theoretical high-energy physics.
But if you think about possible new theories without this coherence and new formidable foundations, e.g. about a theory that just adds a single heavy particle to the Standard Model, there is really nothing much to study over there. You may just stop this research and wait for the moment when the new heavy particle is (hypothetically) observed. When this occurs, you may quickly do the research that you're doing now – and you're doing it along with hundreds of similar analyses that are bound to be useless because they assume the existence of other particles that don't exist.
String theory, grand unification, supersymmetry, and other "broad enough" frameworks drive many science haters up the wall because they're sophisticated theoretical structures and the science haters don't have good enough brains to understand the internal logic of these theories or frameworks. There's just so much to learn before you can make meaningful contributions. But this is actually exactly one of the key reasons why the research into these things is justified. Even if or when we discover experimental signatures of SUSY, grand unification, or string theory, there will still be lots of difficult questions to settle. That's the real reason why people are doing "preemptive" research on these questions now. The predictions are more than guesswork. They have some robust internal logic and it takes some time and energy to unmask this logic.
At this general philosophical level, the debate is meaningless, and most of the participants don't really know string theory (or another, similarly "assaulted" theory) at the technical level, so they're just worthlessly bullšitting about something they don't really know.
To make similar debates more meaningful, we would have to talk about more particular assumptions or preliminary conclusions that decide about theorists' research. When you do so, you will be forced to notice that string theory predicts tons of wonderfully general, universal, yet conceptual things about the observable properties of Nature. It predicts string, brane, or black hole microstates with densities that behave in some ways, with interactions that also behave in some ways, phase transitions and gradual transformations between objects and geometries of various kinds, and so on. Many of these patterns are similar but many are very different from what one could guess based on the quantum field theory expectations.
The aspects that are the same are examples of the perspectives that make quantum field theory and string theory "basically equivalent" from an empirical viewpoint. And the aspects in which quantum field theory and string theory differ are capable of discriminating the two frameworks. Physicists have to choose which of the views seems more likely to them and they must use some arguments or ways of thinking to determine their subjective probabilities.
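Dawid's framework for weighing such arguments is explicitly Bayesian: each piece of evidence (empirical or theoretical) updates a physicist's subjective probability for one framework over a rival. A minimal sketch of such an update, with entirely made-up illustrative numbers (the priors and likelihoods below are hypothetical, not measured quantities), might look like:

```python
def bayes_update(prior_a, p_obs_given_a, p_obs_given_b):
    """Posterior probability of framework A after one piece of evidence,
    assuming B is the only alternative, so P(B) = 1 - P(A)."""
    # Total probability of seeing the evidence under either framework
    p_obs = prior_a * p_obs_given_a + (1 - prior_a) * p_obs_given_b
    # Bayes' theorem: P(A | obs) = P(obs | A) * P(A) / P(obs)
    return prior_a * p_obs_given_a / p_obs

# Hypothetical numbers: start agnostic (prior 0.5); the observed feature
# (say, a consistency property that A accounts for naturally) is likely
# under A (0.9) but unlikely under B (0.2).
posterior = bayes_update(0.5, 0.9, 0.2)
print(round(posterior, 3))  # → 0.818
```

The point of the sketch is only structural: a theoretical argument can legitimately shift the probabilities as long as the likelihoods it assigns are themselves ultimately anchored in empirical facts.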
For example, string theory adds its extremely powerful voice to the debate about almost any sufficiently deep question in quantum gravity. In quantum field theory, you could – and Hawking did – expect that the information has to be lost when a black hole evaporates. String theory seems to clearly imply that all the arguments making the information loss "unavoidable" were artifacts of approximations and string theory is actually capable of circumventing all these "would-be proofs" of the information loss, and in many well-defined superselection sectors, string theory actually demonstrably does preserve the information! The tricks that allow string theory to preserve the information seem very clever, delicate, yet natural. They're surely not something that a wise and professional physicist in the field could ignore.
Now, imagine a quantum gravity researcher. Will he be affected by the incorporation of all these questions within string theory? Unless he is an incompetent sloppy moron, he clearly should be. String theory really proves that the previous arguments "proving" the information loss are flawed. Stephen Hawking, who pioneered those "information loss is inevitable" arguments, has understood that despite their apparent cleverness within the approximate QFT context, these arguments were shown by string theory to be flawed a decade or so ago.
This is just one general example of a more particular technical question where string theory actually affects what people believe. Because the theory is all about predicting some in-principle observable data, and because the theory – with some assumptions about the right vacuum – actually produces empirical predictions that agree with everything we have observed, these arguments based on string theory must be considered empirical in character. And they still matter and affect competent people's beliefs about the existence of the "information loss" and hundreds of other, sometimes much more technical, questions in theoretical or high-energy physics.
String theory has the power to change these opinions (about the "information loss" and hundreds of other questions) because it is capable of reorganizing the known empirical data in a way that makes much more sense than the previous picture that string theory has superseded. In some sense, it has found previously unnoticed patterns in the experimental data, something that can be extracted from the empirical data even though people had not noticed this pattern or derivation before string theory.
When string theory affects the "information loss" debate or another debate, should we be saying that it is an example of a "post-empirical science"? I don't know and I don't care – although I would choose "No". "Post-empirical science" is just a sloppy philosophical cliché, a cliché that someone uses as a compliment while others use it as an expletive. In one way or another, it is a demagogic label. But compliments and expletives don't really matter in science. What matters is whether a well-defined statement is right or wrong. And be sure that the information isn't fundamentally getting lost when the black hole evaporates!
Be sure that any general answer to similar conceptual questions that string theory brings us – whether it agrees with quantum field theory's preliminary answer or differs from it – is right. But to show why it's right – and why credible physicists believe it is right – you must actually penetrate into the technical beef of the given physical question. General bullšitting about "post-empiricism" – whether you use the word as a compliment or as an insult – isn't enough. In many cases, it is spectacularly clear that it would be utterly foolish to ignore the insights that string theory has already brought us.
So science is always ultimately deriving its authority from the empirical data but the raw empirical data have to be processed by layers of transformations – an increasing number of increasingly structured layers – and this fact has been an inseparable part of science since the very beginning. The character of the layers may be gradually changing with time. It is clearly wrong to dismiss a new science just because it is doing things differently than we are used to!
The relative importance of "old experiments" and "new experiments" is changing, too. Of course, once it becomes very hard to increase the collision energy of protons at colliders, scientists will have to pay relatively more attention to the older data and the (known or overlooked) patterns in them, simply because totally new data may become scarce. But if the inflow of "really new" empirical data slows down, that in no way means that the discipline is becoming "non-empirical" or "post-empirical".
Old empirical data are as empirical as the new empirical data! And we arguably know more than enough about Nature – so that with this knowledge, a smart enough person or civilization should perhaps be able to complete the final theory. We still want some new data but we should be ready to appreciate that our thirst may have gotten silly and we should perhaps think more carefully about the data we have already collected.