## Monday, September 14, 2015

### A recipe to flood physics literature with garbage

Sabine Hossenfelder is one of the people who have written lots of papers and, as far as I know (and I have spent quite some time looking at them, because she is visible as a blogger), none of them has any value worth mentioning. Most of them are wrong, and those that are not wrong are mostly vacuous and lack any originality. She is certainly not the only one whose "research" may be described in this way.

You may ask what procedures fill the physics literature with similar material lacking value. Well, she has published an algorithm for everyone to follow,

How to publish your first scientific paper,
and it is a pretty stunning text. It's like one of those spam web pages telling you "How to fool the people around you and become a billionaire in 72 hours".

First, her text is more than 11 kB long and it is about writing a scientific paper. You would expect her to discuss some of the issues you encounter during the research. However, the following is the complete copy of her wisdom concerning the research:
2. Getting the research done

Do it.
If I don't count the title, six characters were enough for her wisdom about ways to do research. 99.95% of her recipe for writing your first scientific paper clearly has nothing to do with the research at all! And if you happened to think that the simple "do it" recipe is at least original, it's not original, either. She has plagiarized Donald Trump, who has interviewed himself. ;-)

If "do it" is enough to do the research, what is the rest of her recipe about? It's about choices that help you manipulate the editors and reviewers, ways to argue with them so that they forget that your paper is worthless (as a referee, I have encountered several authors of bad science who must have heard Hossenfelder's recommendations on "how to deal with a referee who actually demands at least some quality"), and about taking a walk and planning your next trip. No kidding.

The recommendation "do it" concerning the scientific research is short, but it is still longer than the space she devotes to the essential steps that a good scientist performs all the time: she doesn't mention them at all. In particular, right after "she picks the topic" (Section 1) and "she does it [the research]" (Section 2), she is already preparing the paper for publication (Section 3).

The reason why she's so remarkably candid about this is that she apparently doesn't even realize what is so stunningly pathological about her algorithm. She has "worked" among pseudoscientists such as Lee Smolin who have been publishing an uninterrupted stream of junk for more than 30 years, and she doesn't even have a clue what genuine research could look like.

What is missing in her algorithm?

Well, what is completely missing is any quality control, any filter, any verification. A good scientist does lots of research on the topics he picks, but on average, 90% of it leads nowhere – except to more confusion – and has to be de facto thrown into the trash bin. It is complete nonsense that a good theorist automatically "prepares a paper for publication" after "he or she does the research".

Hossenfelder's algorithm makes it clear that she thinks she may predetermine the topic and be guaranteed to create a paper that is worth publishing. But that's simply not how proper science ever works. A theorist can't know in advance what interesting or important insight he or she will discover, and in most cases, the answer is "nothing". Ask a good or great theorist. He may have 300 papers, but he will still agree that a majority of the things he thinks about don't lead to any published paper.

The existence of this "filter" – the insight has to pass some nontrivial tests before it deserves to be published – is arguably the main characteristic by which proper scientific research differs from the pseudosciences. This is nicely shown in the classic joke about the poor university:
A university council votes to establish a new department but it's a poor university that can't pay too much for any expensive equipment. Someone suggests that they should build a new institute of theoretical physics because theoretical physicists only need pens and paper to do their work.

However, after a discussion, they establish a new department of philosophy [feel free to change to any social science or pseudoscience] because, as someone has observed, theoretical physicists need a trash bin, too.
This is cute and it's what the main difference is all about. In the pseudosciences, one simply publishes whatever comes to one's tongue – it is a straightforward exercise to write a paper on anything (just like a homework exercise for a schoolkid that is guaranteed to be solvable by a mechanical procedure) – while the true sciences eliminate ideas and hypotheses most of the time. To find a scientific result is harder and less likely than to write a few sentences that respect the rules of grammar.

Sabine Hossenfelder – but also Lee Smolin, 95% of their co-authors, and 85% of their co-authors' co-authors, among many others – are simply not doing science. They are doing pseudoscience. Their papers are all about the form, but they never actually put any ideas to the test. Their ideas and claims have never passed any test. They publish on every topic that they have predecided to be a good topic for a paper. You can't be surprised that everything they end up writing is junk. This outcome is pretty much guaranteed from the beginning by their unscientific method. Genuine science just can't be created this easily.

There is one extra important observation about the "topics of papers" that Hossenfelder totally misses. When a scientist does actual research, he just doesn't quite know where the work will lead him. And the outcome – perhaps an important one – may be, and often is, very different from the expectations he started with. In fact, the outcome may – and often does – address a completely different kind of question than the one that kickstarted the project at the beginning. Hossenfelder is totally unfamiliar with these "twists in science" because she has never done any actual scientific research. She has only written essays on predecided topics with predecided conclusions.

Hossenfelder's goal has never been to do genuine science, however. She is absolutely open about the fact that the actual goal is to have some papers published. Like good scientists themselves, good journals have some filters as well – they don't publish everything that arrives in the mailbox, either. However, it's always possible that a wrong or worthless paper gets printed by accident, or thanks to a "clever" strategy of the authors.

It's spectacularly clear that methods to increase the chance that a worthless text will be seen as one worth publishing are what over 99% of Hossenfelder's – and many similar people's – thinking about papers is all about. You can see that theme in pretty much everything she writes about this meta-topic. Let me add a discussion of some particular assertions.
But simple silence leaves me feeling guilty for contributing to the exclusivity myth of academia, the fable of the privileged elitists who smugly grin behind the locked doors of the ivory tower. It’s a myth I don’t want to contribute to.
The word "elitist" may mean various things, usually associated with some people's unjustified belief in their own exclusivity. However, when it comes to science – as well as many other "hard enough" enterprises – there simply has to exist some actual exclusivity among the people who are doing it. And the places that have remained meaningful only hire people who are exclusive to one extent or another, who are visibly far from an average person – or even an average PhD.

If a scientific institution employs an average person who does something that "everyone can do", it is just wasting the money.

Another cute comment:
Before we start. These days you can publish literally any nonsense in a scam journal, usually for a “small fee” (which might only be mentioned at a late stage in the process, oops). Stay clear of such shady business, it will only damage your reputation.
"Just stay clear of such shady business," she says. But the funny thing is that she is talking to the same author whether or not he chooses to send texts to scam journals. And you know, the main reason that ultimately decides where an author sends his paper is the quality or character of his work.

Certain authors send papers to scam journals mostly because no one else would publish them! A pretty good reason. By construction, Hossenfelder is talking to an author who is willing to send a paper to a scam journal. In other words, she is almost certainly talking to a junk author. And she is telling him: don't send your work to scam journals, send it to a more reputable journal.

Clearly, the result of that recommendation – if followed – is that lots of additional junk will be sent to more reputable (or previously reputable) journals. It's more or less the main purpose of Hossenfelder's guide.

The right recommendation is the opposite one: If you are only creating bogus "research", something that is guaranteed to be written and whose conclusions are pretty much decided before the "research" begins, i.e. "research" of Hossenfelder's type, please don't send it to any true journals. It just doesn't belong there. You may increase your apparent reputation by associating yourself with reputable journals, but you are damaging these journals, wasting the time of their editors and reviewers, and whenever your paper gets in (and some percentage of the low-quality papers unavoidably gets in), you are damaging the readers and science in general.

Another remarkable piece of advice concerns the novelty – or apparent novelty – of the paper that her readers have predecided to get published:
As a rule of thumb, I therefore recommend you stay far away from everything older than a century. Nothing reeks of crackpottery as badly as a supposedly interesting find in special relativity or classical electrodynamics or the foundations of quantum mechanics.
You may see that she cares whether the paper reeks of crackpottery – not so much whether it is crackpottery. The problem, as she sees it, is that whenever something "reeks", someone may "smell" it. So the readers of her guide are told to work hard to mask the stench.

People know that most of the papers revising relativity or other discoveries that are over 100 years old are written by crackpots. So you should avoid those topics, the crackpots who listen to the advice of their successful colleague are told. What can their paper be about if it shouldn't be about a topic that is over 100 years old?

This question is hard because there is another constraint: these authors – including Hossenfelder herself – know virtually nothing about the theoretical physics of the last 40, maybe 50 or 60 years. It's just too hard for them. For example, all of the criticisms of supersymmetry or string theory are written by people who are simply not good enough to become competitive high-energy theoretical physicists of the last 40 years. So they have to write papers about "semi-modern" topics that are some 40-100 years old – in combination with rants that attack the physics of recent decades.

In other words, they are living on the margins of the scientific epoch whose revision may be guessed to be crackpottery. If you revise insights that are 110 years old, it's too suspicious and your untrustworthiness will seem too manifest. So you should focus on topics that are at most 90 years old – like criticisms of the 1925 theory of quantum mechanics – and in those cases, you have a chance to "survive".

These guidelines about "moderate novelty" are clearly meant to let crackpots and unoriginal authors look like mere "marginal crackpots" and "marginally innovative authors", which gives them a significant chance of getting published even though they never find anything of value. Her focus is on the "appearances", regardless of the actual content. It improves the "appearances" if you avoid revising topics that are over 100 years old. But in reality, there is no "rigorous" threshold of this kind. It is possible for someone to make a truly revolutionary insight that changes opinions that have been believed to be true for more than 100 years. Such revolutions have taken place many times before.

And on the contrary, even revisions of insights that have been believed to be true for 30 or 40 years – like some basics of string theory and its relevance – are sufficiently unlikely to be valid, too. Hossenfelder recommends to everyone the window that improves the apparent chances that the research is meaningful. But in reality, if the research has been pre-programmed in this way, the rules in no way increase the actual chances that the research is right and valuable.
At first, you will not find it easy to come up with a new topic at the edge of current research. A good way to get ideas is to attend conferences. This will give you an overview on the currently open questions, and an impression where your contribution would be valuable. Every time someone answers a question with “I don’t know,” listen up.
The sentence "I don't know" is an extremely weak reason to write a paper. At the bad conferences that Hossenfelder attends most of the time, people don't know anything important, so the sentence "I don't know" carries no useful information at all. But even if you hear "I don't know" from a good scientist, it may be because
1. he just doesn't know something that his colleagues know, because he's not good at everything or happens to be ignorant about it by accident,
2. neither he nor others know the answer because nobody is terribly interested in the question, since it doesn't look important,
3. neither he nor his colleagues know the answer, and the question looks interesting, but the answer hasn't been found yet because it seems out of reach at the current level of progress.
In all these cases, it would be a pretty bad idea to work on a question that someone else cannot answer right now. Finding a topic with a high enough probability of a decent outcome is much harder than listening for "I don't know". One has to choose the right question: one that seems interesting but hasn't been answered – because other physicists haven't looked at it in the right way or, for random reasons, haven't spent enough time on it – yet one that you can answer because it's not hopelessly hard or impossible.

These situations are rare. The simple reason is that if something has been hard for hundreds of other physicists, it may very well be hard for you, too. If it turns out to be easier for you, it's either because you are willing to spend more time on it, considering the problem more important than others did, or because you're more hard-working, or smarter, or have some other comparative advantage over the other researchers.

There can obviously be no mechanistic algorithm for choosing the right questions and lines of research. If such a useful recipe existed, everyone would be following it. Theoretical physics is a creative business. The existence of the filters – and the criteria according to which these filters work – distinguishes it from the arts and other things. But the creativity and intuition needed in theoretical physics make it analogous to the arts, too. You can't become a good physicist by following mechanical rules or by listening to the sentence "I don't know" pronounced by others, especially if those others are not good physicists themselves.
1.2 Modesty

Yes, I know, you really, really want to solve one of the Big Problems. But don’t claim in your first paper that you did, it’s like trying to break a world record first time you run a mile. Except that in science you don’t only have to break the record, you also have to convince others you did.
Again, these recommendations are about ways to manipulate other people, not about the substance. The most typical reason why people overstate the importance (and validity) of their claims and papers is that they lack the knowledge, integrity, intelligence, or experience to evaluate those claims and papers correctly.

But you can't fix these problems by telling them "be modest".

In the end, the most important insights in science are not modest. When Hossenfelder says that you can't break a world record the first time you run a mile, it's a proposition that is deeply and importantly untrue in the most important situations in science. The most important discoveries in science are world records. And some researchers, pretty much including Albert Einstein, broke these records the first time they ran. Sometimes several of them in a row.

Hossenfelder's claim that "it's impossible" is completely wrong precisely in the rare situations that matter most for the fate of science. A correct claim similar to hers would be that "most people never break a world record" or "most people don't break a world record the first time they run". But that doesn't mean one can neglect the situations in which a world record is broken (or broken during the first run). One simply cannot. Scientific progress is largely composed of these events! By assuming that they don't exist, Hossenfelder is throwing the baby out with the bathwater. She guarantees that her text will only be read by people who only add noise.

One should be as modest or as confident about an insight as its actual value, validity, and importance justify! The more important an insight is, the less modest a presentation it deserves. But if you find something, others will probably understand it as well and you don't have to praise your own insights; others will probably do the job for you. Well, unless they don't. It may happen that you will be the only one who understands something correct and important for quite some time. In that case, you obviously have to fight for it.

You may also be wrong and excessively immodest about claims that are not true. If so, it may be because you will be wrong throughout your life, and there is no reason to give you recommendations in that case; the system will hopefully eliminate you and your wrong contributions. But you must also consider the possibility that you are only wrong now and will be smarter or more experienced in the future. In that case, it makes sense to tell you to protect your good name.

But if you make a mistake, it's not the end of the story. You may still regain your influence if you make a genuine and correct discovery in the future. Physicists are not so stupid as to eternally ignore everyone just for making a mistake or two. Everyone makes mistakes. Those are the reasons why there is no useful yet universally valid advice on these questions. The better you are, the more correct claims you will be making, and the more they (and you) will be appreciated by the good physicists of the world. What you need is creativity, talent, motivation, and hard work (and good luck may often help, too), not guidelines on how to manipulate other people's opinions and how to make yourself look like something other than what you are.

After she does the research using the "do it" method, she begins
3. Preparing the manuscript

Many scientists dislike the process of writing up their results, thinking it only takes time away from real science. They could not be more mistaken. Science is all about the communication of knowledge – a result not shared is a result that doesn’t matter. But how to get started?
Sorry, but science and the communication of knowledge are two entirely different beasts. And quite generally, the former is much harder than the latter. A scientific result that is not shared is a scientific result that is not shared. Science may be known to just one person or a tiny percentage of mankind – in reality, modern science almost always is known to a tiny percentage of mankind only – but it is still science. Whether something is correct science is determined by its intrinsic properties, by checks of its evidence, logic, and argumentation – internal checks. It is not determined by the existence or amount of communication.

It's helpful for the scientific community – and mankind – to learn about important enough scientific results. But if it doesn't, it's a problem for the scientific community and mankind (and perhaps a reason why the discoverer won't get enough credit, fame, or money). If a result doesn't get publicized, that can't imply it is not science or not good science. And conversely, if something gets a lot of press (or good press), it doesn't mean that it's good science. Sadly, we are shown examples of that on a daily basis.

After some technicalities, she discusses the review process:
...The reviewers’ task is to read the paper, send back comments on it, and to assign it one of four categories: publish, minor changes, major changes, reject. I have never heard of any paper that was accepted without changes.
It's probably because, having been surrounded by subpar researchers and pseudoscientists whose main activity is the "struggle" to get garbage published somewhere, she has never heard of a good paper written by a very good author. It's rather normal for papers to be published without changes. When I was writing papers, a fraction of mine were accepted without changes, and as a reviewer, I recommended a significant fraction of papers to be published without changes, too (while rejecting about 1/2 of them).

If someone knows how to do things well, it's completely plausible for his or her papers to be "just right". In those cases, there is no good reason for the reviewer to become visible "at any cost". There is no good justification for censorship etc. There is no reason for the referee to try to make the paper copy the referee's idiosyncrasies instead of the author's. After all, the referee isn't really a co-author of the paper.

Thankfully, she also says
Never submit a paper to several journals at the same time.
DocG, a reader, disagrees and recommends sending papers to many journals. Quite generally, the authors who write garbage that has a very low probability of being accepted like to send their papers to many journals. I think that good physicists are more or less familiar with the authors who pursue this policy. More or less everyone knows that the existence of a published paper by such an author is a fluke that has to take place at some frequency, and this publication doesn't imply that the author has done anything other than the garbage he has been associated with.
Peer review can be annoying, frustrating, infuriating even. To keep your sanity and to maximize your chance of passing try the following:

Keep perspective, stay polite and professional, revise, revise, revise, repeat as necessary [shortened].
This discussion has been very long. The focus on these things makes it even more obvious that the main goal of the guidelines is to have some paper published, not to contribute something genuine to science.

It's good to be polite and professional, but if an argument erupts over a paper, either the author or the critical reviewer may be right. The reviewer isn't infallible, either, and the result of the refereeing process isn't necessarily "perfectly accurate science". The referee may raise bogus objections, and he may fail to raise important objections, too.

But it's mainly the "revise, revise, revise" dictum that shows the actual main goal of Hossenfelder's guidelines.

It's right to revise, but a quality paper by a careful author who wrote it in order to be right is very likely to need (almost) no revisions. Agreeing with all modifications proposed by the referee may be a good strategy to increase the probability of publication. But it is not a sign that the author is actually a good researcher.

A good and important researcher may very well face – and almost by definition, he sometimes has to face – some opposition from his colleagues, and he must insist on what he is saying because it's right, even though it's not immediately and perfectly understood by everyone else. With her "revise, revise, revise" recommendation, Hossenfelder doesn't take such key situations into account at all. She doesn't because she is not familiar with the event of an actual breakthrough that advances science. That's why her recommendation is all about adaptation to the groupthink in the environment. No significant scientific advance can ever take place under these assumptions.
6. And then... ...it’s time to update your CV! Take a walk, and then make plans for the next trip.
I had to copy this conclusion because it's hilarious. To summarize her whole essay: in order to add an entry to your CV, invent a random topic that superficially looks like the topics discussed by authors who have a chance of not being crackpots; write something, whatever the outcome of this "research" is; send it to a journal; use all conceivable tricks and diplomatic skills to convince the editors and reviewers that the paper isn't wrong or meaningless; agree with all their proposals to increase the chances that they will okay the publication; and then celebrate, edit your CV, and make plans for the next trip.

This is what scientific research has degenerated into at Nordita, an institute that was founded in 1957, with the help of a Swedish minister and especially Niels Bohr, as a center of excellence – the modern replacement of the Copenhagen school that created the most important revolution in 20th century physics.