"Science has a problem. The present organization of academia discourages research that has tangible outcomes, and this wastes a lot of money..."

We basically learn that she just hates pure science or basic research; she always did and she always will. So you may think it's ironic that she was hired as a theoretical physicist, a worker in a field that she absolutely deplores and has no talent for. How is that possible? Well, it's all about political correctness. By placing folks like Hossenfelder into positions for which they have absolutely no prerequisites and no passion, you not only hurt science and its effectiveness. You also hurt the people whom you claimed to help. She really suffers.
"...However, using the scientific method is suboptimal for a scientist's career if they are rewarded for research papers that are cited by as many of their peers as possible..."

Everything in the real world is "suboptimal" or "imperfect". It always was so, it always will be so, and it has to be so. But doing the scientific method well and being praised by sufficiently competent real-world scientists – scientists who have been selected meritocratically and who have a real passion for the scientific truth – is as good an approximation to the optimal state of affairs as you can get in the real world.
All thinkable alternatives are demonstrably vastly worse. For example, if a researcher considered professionally lousy by her actual colleagues tries to get points by writing the cheapest possible anti-scientific diatribes addressed to the most gullible morons who are willing to read such diatribes at Backreaction, the resulting effect on science is bound to be worse than just suboptimal.
Instead of writing big words about optimal science, she could start by trying to become an average scientist – an outcome that subpar researchers such as herself can only dream about.
"To the end of producing popular papers, the best tactic is to work on what already is popular, and to write papers that allow others to quickly produce further papers on the same topic."

Papers are popular among smart and productive researchers mostly because they bring something interesting or new, or because they make our picture of the world more meaningful, logical, beautiful, or coherent than it was before the paper. A great paper makes a physicist say "wow". He wants to work on a similar topic because it just seems exciting. He thinks that there is more to be found if we search around the previous great, successful, or at least intriguing ideas, and that's why physicists write papers that may be classified as followups of older papers. This logic makes perfect sense and has been successful many times.
Also, ambulance chasing is often a sound strategy to achieve some results in science. And fads, in the sense of minirevolutions, are extremely healthy and tangibly increase scientists' ability to do productive work, because during those episodes they are genuinely more excited than at other moments. Even if their work remains mostly derivative, they train their abilities and they don't forget how to calculate things. It's extremely bad that such minirevolutions have almost been delegitimized.
Even though Isaac Newton was arguably the smartest or most impactful scholar in history whose name is known to us – and he really began quantitative natural science, i.e. physics in the contemporary sense – he was also the guy who once modestly said:
"If I have seen further, it is by standing upon the shoulders of giants."

That quote has two meanings. The wise, deep meaning that Newton wanted everyone to understand quickly is that every physicist, including Isaac Newton, was building on previous discoveries by other people. Physics was never built completely from scratch, not even in the lifetime of Isaac Newton. The second meaning, which Newton wanted to be understood by insiders, was a humiliation of Robert Hooke – a life-long foe of Newton's who was also being "thanked" – because Robert Hooke was short, not a giant. ;-)
Too bad, Ms Hossenfelder just doesn't get any of that. It's completely normal and largely unavoidable for physicists, as well as all other scientists (and all people who create something), to build on previous insights and discoveries by other physicists (and, sometimes, on their own older insights). It's normal for papers to be followups. Every paper is a followup of some other papers – those in its list of references – and that's true even for papers that become more important or more famous than the papers they reference. Every discovery is made in some context.
"This means it is much preferable to work on hypotheses that are vague or difficult to falsify, and stick to topics that stay inside academia."

No, it doesn't mean that at all. This claimed implication is a pure falsehood. As I wrote in the title, papers in theoretical physics that give us new methods to calculate something, to find some patterns, to make predictions, and so on generally (or on average) get many more citations than papers that remain vague. This is just an unquestionable fact, and everyone who is actually familiar with theoretical physics – and who has done some theoretical physics that makes sense – knows it very well.
Take the most cited contemporary paper in theoretical physics, Maldacena's groundbreaking AdS/CFT paper. Google Scholar says it has over 16,000 citations. I do think it's somewhat overcited, and one can enumerate papers that should be comparable in their impact but are vastly less famous. On the other hand, it's a great and groundbreaking paper and an excellent example of how utterly silly Hossenfelder's proposition is.
Maldacena's paper has induced so many followups largely because it is a hypothesis that would have been extremely easy to falsify if it were really wrong. Things just wouldn't work. One could calculate various results beyond the simplest quantities, or in examples beyond the simplest special cases that Maldacena checked; they wouldn't agree, and Maldacena's AdS holography would be falsified. That has never happened. Instead, the conjecture has been generalized and verified in many different situations and for many quantities, checked and semi-proven from many perspectives, and so on.
Also, Maldacena's paper wasn't vague. The conjecture is extremely sharp. At least its specialization to particular AdS backgrounds in string theory has a completely well-defined and intensely studied quantum field theory on one side (the boundary); and a vacuum of quantum gravity, i.e. string theory, on the other side (the AdS bulk), which may be approached by effective field theory, perturbative string theory, and other methods. And Maldacena's duality just works. Infinitely many functions of infinitely many variables are predicted to agree on both sides in infinitely many examples. And a huge subset of these functions – still infinite families of examples involving functions of many variables – has been verified or semi-proven.
On the other hand, look at a buzzword that is more reasonably linked to vague papers: the multiverse. Papers with that word in the title (or in other important places) generally have hundreds (below 1,000) of citations. Some of the hits are books or papers from other disciplines, and if you look at the most successful theoretical physics papers prominently using the "multiverse", their citation counts are close to 100, not 1,000. And you can check that the most cited papers about the multiverse are more interesting, more quantitative, more novel – simply better – than the average ones. Scientists just don't write many followups to vague papers, about the multiverse or anything else, because those papers don't really make much sense, they don't make anyone too excited, no reader really knows what he has learned or how he should use it, and he also expects all potential followers of his followup to be even more confused. Scientists just naturally try to remain as sharp – non-vague – as possible. It doesn't mean that physicists are always 100% rigorous; that requirement would be utterly lethal for physics, too. But if all other things are equal, they surely prefer to read, write, or follow a well-defined paper over a vague one, and such papers therefore get a higher number of citations, too.
It's easy to argue that it should be so, everyone who has been an actual researcher knows that it's also true in practice, and Hossenfelder's statement is simply false.
The citation count isn't a perfect, God-given measure of the quality of a paper. But it's damn good. It's vastly better than what all the fanatical critics similar to Hossenfelder are trying to suggest. In particular, the total citation count of a physicist (or the h-index, which also encourages production at a certain higher rate) is way better than the total number of papers. It's relatively easy to write 400 papers and beat Edward Witten. But it's just hard to collect 130,000 citations. You just won't do it by writing vague, meaningless papers that have no implications. Just try it. Every strategy to reach 130,000 citations by some simple algorithm will fail. And even if it didn't fail, everyone would know you are a trickster, so you wouldn't get hired.
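For readers unfamiliar with the h-index mentioned above, its definition is simple: a researcher has index h if h of her papers have at least h citations each. A minimal sketch of the computation (the function name and the sample numbers are mine, just for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    # Rank the papers from the most cited down.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still supports h = rank
            h = rank
        else:
            break
    return h

# A researcher with papers cited [10, 8, 5, 4, 3] times has h = 4:
# four papers with at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

This also shows why the measure resists the cheap-volume strategy: 400 papers with one citation each still yield h = 1.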
You could try to replace the "validation" by a high number of citations with another validation recipe, like a signature from 5 colleagues, or 10 positive articles in popular magazines or blogs, or anything of the sort. Be sure that any criterion like that would lead to dramatically worse scientific results than the (hypothesized – it's not really universal) struggle for a high number of citations. All these alternative ways to judge scientists' work would be far easier to corrupt. It would be far easier to collect the "points" from some special people who aren't really good scientists, or who aren't honest, or who have amplified their numbers and influence disproportionately, and so on.
Because I often write about Bitcoin, let me use a buzzword (with some unusually positive implications here). A citation is linked to a followup paper – someone had to do some work to write a new paper, and in practice the paper has a limited number of slots in its list of references. So there is some "proof of work", as in Bitcoin mining, that actually gives value to the citation. It isn't anything that could be considered cheap – like a "like" from several colleagues, let alone journalists or laymen. The citation is a good "point" in a meritocratic tally because it's linked to some nontrivial work that takes time – but at the same time, it doesn't depend on any complete additional waste of time.
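The proof-of-work analogy can be made concrete with a toy miner. This is only a hedged sketch, not Bitcoin's actual protocol – real mining hashes a block header against a 256-bit numeric target, while here we merely demand a few leading zero hex digits, and the payload string and difficulty are made up for illustration:

```python
import hashlib

def mine(payload: str, difficulty: int) -> int:
    """Search for a nonce making SHA-256(payload + nonce) start with
    `difficulty` zero hex digits -- a toy analogue of Bitcoin mining."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{payload}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Producing a valid nonce takes roughly 16**difficulty hashes on average,
# but verifying it takes a single hash: expensive to earn, cheap to check --
# the same asymmetry that makes a citation a credible "point".
nonce = mine("followup-paper", 3)
assert hashlib.sha256(f"followup-paper{nonce}".encode()).hexdigest().startswith("000")
```

The asymmetry is the whole point of the analogy: a "like" costs its author nothing, while a citation, like a valid nonce, certifies that real work was done.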
Just like democracy is the worst system except for all the others that have been tried, so is the system that cares about the number of citations.
After having obnoxiously whined that similar catastrophes plague all of science, not just theoretical physics, she writes:
"Because they are the tactics that keep researchers in the job."

Well, if there exist remarkable tactics that can keep researchers in a job despite their maximal incompetence and lack of creativity and significant results, then they are exactly what the likes of Lee Smolin and Sabine Hossenfelder have been practicing for years. The survival has been incredible. Just be lame, never write anything that is usable or creative, and just impress the total idiots in the public with how smart you are and how you're being discriminated against by the dinosaurs. Even though every expert knows that you are a crank – in the case of Lee Smolin – or a subpar, derivative copier of stuff extremely far from any cutting edge – Sabine Hossenfelder – they will be afraid of the tons of brainwashed idiots in the public and keep you in some job.
But there exist no straightforward strategies to keep research jobs that avoid Smolin-like conspiracies with the idiotic laymen or various purely political powers in academia. Everyone who is really writing papers has to depend on the expert judgment of the work, and that expert judgment is simply made by the most competent people you may find on Earth, at least in theoretical physics. They're not perfect, but they're far better than any alternative "judges" you could propose.
"What we witness here is a failure of science to self-correct."

No, what we witness is an example of business as usual in science – the elimination of papers that seem worthless and the loss of credibility of their authors, such as Sabine Hossenfelder. The fact that physicists agree that Sabine Hossenfelder is worthless as a physicist is a textbook example of science's ability to self-correct. Others accept it. But Hossenfelder, spoiled by kilotons of affirmative action, just doesn't like that science actually corrects itself.
"It's a serious problem."

The actual serious problem is that worthless and fraudulent researchers such as Sabine Hossenfelder are increasingly circumventing all of scientific meritocracy by flattering complete morons in the public and the mainstream media and persuading them that they would surely pick better science than what the likes of Edward Witten could find. Sorry, no science can really be built out of populist tirades driven by the anger of the most hopeless morons. And if the likes of Hossenfelder build an anti-science that boils down to anti-meritocracy, there is a risk that it will annihilate against science with its meritocracy, so that science will cease to exist. (And I haven't even mentioned that, unlike theoretical physics, much of Hossenfelder's anti-science would be done by anti-Semites.) It's vital for science not to be annihilated and for its procedures not to be "compensated" by some adjacent environments.
"But then I go and read things like that Chinese scientists are paid bonuses for publishing in high impact journals. Seriously."

Bonuses for Chinese scientists who manage to publish in high-impact journals are an absolutely good idea, because China publishes too much mediocre stuff that isn't too valuable, and China simply needs to increase the quality – while the quantity may already be enough (not too surprising given the population of China). In other words, China needs to improve the selection and competitive struggle, which have been absent. I would claim that this is true not just in theoretical physics. In many other disciplines and industries, China should focus on incentives to increase quality even if that leads to a decrease in quantity.
"That has begun to have an impact on the behavior of some scientists. Wei and co report that plagiarism, academic dishonesty, ghost-written papers, and fake peer-review scandals are on the increase in China, as is the number of mistakes."

Well, it's not shocking that when rewards are distributed, some people try to cheat the system. But most of them will eventually be caught. In Western journals, they would be caught quickly. And the fact that some people try to cheat the system to get rewards doesn't imply that the incentives are a bad idea. The situation is totally analogous to doping in sports and many other examples. East German and Soviet athletes were motivated to win Olympic medals, and this contributed to their tendency to take forbidden substances – the behavior was widespread.
What does that actually mean? It means that the fight against clearly unprofessional, and potentially illegal, behavior was rather lousy in sports, at least in those two countries. In the U.S., which has also won lots of medals, doping was never too bad. But does it mean that the East German or Soviet athletes shouldn't have been encouraged at all to be better than average? No – that would-be argument is just bogus. The frequency of scientific misconduct will almost certainly be higher in China, which is simply not quite as civilized as the Western countries – at least so far. But you can't solve this problem by eliminating incentives for the Chinese to be better than average. You need such incentives precisely because the Chinese are near or below the average way too often.
Her proposal to eliminate all such incentives for higher quality is also analogous to the leftists' general criticism of liberalization, privatization, and so on. Some people acquired companies too easily or unethically, and therefore the privatization in Czechoslovakia was bad, evil, etc. Great. What was the better alternative? Keeping communism surely wouldn't have been one. By now, we would have been poorer by a factor of five. Private business, competition, incentives to increase quality, etc. are essential for decent conditions even if they also lead to some unwelcome consequences aside from the desirable ones. The unwelcome consequences may be gradually reduced – and they have been reduced in Czechia, too. But you need a correct starting point in the zeroth-order approximation, and the absence of competition and incentives just can't be an approximately good starting point.
Hossenfelder also complains against similar policies in Hungary:
"The programme is modelled on European Research Council grants, but with a twist: only those who have published a paper in the past five years that counted among the top 10% most-cited papers in their discipline are eligible to apply."

She asks what you would do with such a grant. Grants may be used for the personal pleasure of the scientists, as salaries, and/or for doing even better research in the future. At any rate, incentives such as the Chinese and Hungarian ones are vital. Too bad that their number in Czechia is probably too low. Scientists from post-communist countries simply have a lower quality than Western ones. Throughout communism, it was "enough to be an average guy" and to be OK with the bosses and the communist party etc. It can't be surprising that in such circumstances, people almost always end up being average. They're not motivated to be better – and they're also not being properly selected, and the good ones aren't properly rewarded. In the socialist countries – and much of our academia still runs in the socialist rhythms – one almost always got promoted in certain ways. That's different from a job at a school in Massachusetts, which I shouldn't have accepted – but it's still true that I had 65 competitors. Competition does have good consequences, statistically. When mere participation is enough, mediocrity and stagnation are the only guaranteed outcomes.
Hossenfelder hates any meritocratic policy because she fails by all meritocratic criteria. She can only whine and invent conspiracy theories that impress complete idiots. That's what she's good at, and she has arguably gotten even better at it in the most recent 10 years.
"Surely in some areas of research – those which are closely tied to technological applications – this works. Doing more of what successful people are doing isn't generally a bad idea. But it's not a path to discover useful new knowledge."

Doing things similar to what successful people are doing is the only way to be successful. In Western physics, success is evaluated by real-world – but otherwise as good as you can get – scientists. In the existing system, indeed, they have to recognize something as valuable if not groundbreaking. But there's no better way to choose the winners well. In the end, good scientists don't really like someone who is just a follower. They love someone who stuns them, who makes them breathless while they are reading his paper. I've met lots of accomplished physicists – and served on lots of admission committees – and I just know this to be the case.
Competent people at leading universities understand that something is great even if it differs from their own work in detail. This ability is partly responsible for those places remaining great. Sabine Hossenfelder hates the very good and especially the great physicists – and all criteria based on their feedback or views – because she has never been considered good by any very good or great physicist. She can only get a positive rating from stupid laymen and politically corrupt journalists and apparatchiks, and that is the audience she builds upon.
The main problem with academia is that the number of parasitic people who actually fight against what they should be building is increasing. It's much less bad in physics than in other fields. But this cancer is spreading from other fields, especially the humanities and social sciences, to the rest of academia.
Hossenfelder also proposes some hostile systemic interpretations and conspiracy theories "explaining" why we can't cure breast cancer yet. If she thinks that they're doing it wrong, why doesn't she publish how to cure breast cancer herself? What makes this arrogant, mediocre, subpar lady self-confident enough to place herself above the cancer researchers? I've known quite a lot of them and I am absolutely sure that she couldn't do better than they do. A cure for breast cancer simply is a difficult enough task that researchers haven't found a solution as of 2017. In the absence of a proof that one can do better – which would probably have to be an actual cure – it is just pure arrogant bullšiting for someone to claim that the cancer researchers are bad.
OK, what's her solution?
"His solution? Don't let scientists decide for themselves what research is interesting, but force them to solve problems defined by others."

Obviously, only theoretical physicists really understand modern theoretical physics, so making them "answer questions invented by those who are not theoretical physicists" would mean destroying all of theoretical physics. Others just can't even phrase or envision such questions, and 99% of the questions they would invent would be pure garbage destined to waste people's time. More generally:
"In the future, the most valuable science institutions […] will link research agendas to the quest for improved solutions — often technological ones — rather than to understanding for its own sake."

This proposal is nothing else than an invitation to kill or ban pure science in general. Science is about understanding for its own sake. The fact, following from her rant, that she has absolutely no respect for pure science proves that she shouldn't be doing what she's doing at all, because her mind isn't of high enough quality for that, and she's only doing it because affirmative action has pushed this subpar lady into places she was guaranteed to abhor.