Someone asked the following question on Quora:
But he – and others – almost uniformly dismiss the worry that is implicit in the question: lots of people are reading the papers, they say. I just don't think so. I would love to know the numbers – how many readers a median paper in one discipline or another has – but I've followed some trends, and the approximate figures make it rather obvious that the number of papers grows faster than the man-hours that people dedicate to reading them. It is then unavoidable that every page is read by substantially fewer eyes than it was years or decades ago.
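The arithmetic behind this claim is simple compound growth. As a toy illustration – with growth rates that are pure assumptions for the sake of the example, not measured figures – one can compute how the attention per paper shrinks when the paper count outgrows the total reading time:

```python
# Toy model: if the number of papers grows faster than the total time
# spent reading them, the reading time available per paper must shrink.
# Both growth rates below are hypothetical, chosen only for illustration.

PAPER_GROWTH = 0.05      # assumed: papers published per year grow 5% annually
READING_GROWTH = 0.02    # assumed: total reading man-hours grow 2% annually

def readers_per_paper_ratio(years: int) -> float:
    """Reading time per paper after `years`, relative to today (today = 1.0)."""
    papers = (1 + PAPER_GROWTH) ** years
    reading = (1 + READING_GROWTH) ** years
    return reading / papers

for decades in (1, 2, 3):
    r = readers_per_paper_ratio(10 * decades)
    print(f"After {decades} decade(s): each paper gets {r:.2f}x the attention")
```

Even a modest three-percentage-point gap compounds into each paper receiving only about three quarters of the per-paper attention after a decade, and well under half after three decades – which is the sign and rough magnitude of the trend described above.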
Aside from the "observational" estimate – the fact that the number of papers really goes up but the apparent interest of the readers doesn't – I also find it obvious that there are "theoretical" reasons why one should expect the trend to have this sign, and probably a large magnitude. Why?
Well, people pursuing careers in research are rewarded only for writing papers, not for reading them. So one should expect this to have consequences – they will tend to write papers rather than read them.
How many people have read today's paper of ours with Polchinski, Higgs, Sundrum, and Aharonov-Bohm, for example? And it's so much fun! ;-)
To some extent, it's obvious why writing should be rewarded. One might say that the writer of a paper, the active researcher, is analogous to the producer of a product – let's pick a hamburger – while the reader is the consumer, and sometimes he even has to pay for reading (for the journals or for access to their websites).
That's all legitimate, and the statement that the creative people are generally doing more non-trivial work than the readers, the passive players, seems self-evident. But in the research context, there is a problem: if the author of a paper is to be fairly rewarded, one also wants the paper to have some significant value or quality. And that value or quality can only be judged by someone who actually reads the paper.
For this reason, the reading is an active and important part of the research process as well: Someone has to read the papers for them to "really become" a part of science understood as a "process making lots of scientists aware of some facts and laws". The people who do lots of this work basically remain uncompensated – in particular, that applies to most reviewers and anonymous referees etc. – and I think it's a problem.
It's a problem because
- the evaluation of the papers' value and quality becomes less accurate and sometimes completely detached from any substance
- people are ignorant of other people's papers which is why much of the work is redundant
- people aren't aware of other researchers' work, which is why they may keep writing wrong papers, often for years or decades – papers that depend on pretending that the research refuting them doesn't exist
Concerning the first problem, much of the work – and funding – is being decided by people who haven't read the papers, or who can't even read them because they're not experts. The decision makers are increasingly likely to be laymen brainwashed by some P.R. There is a stunning number of papers and grant proposals that get a huge amount of hype in the media read by the laymen but that ultimately end up with 0 or 1 citation after many years. Too many people can do well just by screaming "we're the geniuses who can ignore everyone else and you should pay us". It may occasionally be true, but in an overwhelming majority of cases, it just isn't. To have any systematic chance of picking the real geniuses, you simply need some fair competitive struggle, some interactions, some reading of other people's work! Sometimes, some researchers' groupthink may affect the choices. But when the laymen's or P.R. agencies' groupthink affects the outcomes, it's much worse!
Concerning the second point, there's a lot of redundancy – mostly repeated results. It's great for a researcher to rediscover something, but when the journals are filled with research that isn't really new, it reduces their value per page and makes it harder to find the genuinely new stuff. People should have some tools to signal to the system or to sponsors that they could have done such things as well – without actually having to do them. The system shouldn't force people to do too much work that will ultimately be found unimportant or insufficiently novel.
Concerning the third point, I think that the whole existence of some wrong sub-industries – let me pick loop quantum gravity as an obvious example – may be blamed on the problem of people's not reading the work that should be a "must" in their occupation.
There is a lot of hard research showing general results about symmetries in physics, to pick a pretty example. In a quantum gravitational theory, global symmetries shouldn't be exact, while gauge symmetries should be allowed to be emergent, and the identity of the group should depend on the point in the configuration space. Lots of the papers in the literature contradict all these major findings.
What's wrong and what is the logic that should fix it?
Well, I think it's obvious that the core of the problem is that the researchers in loop quantum gravity and other corners of would-be theoretical physics are simply not up to their job – they are not reading the relevant papers, perhaps not even the bulk of the fundamental papers, in a discipline that they pretend is theirs. They're not reading these papers because they actually can't; because it's more convenient for such people to ignore papers; and because there aren't any real pressures that would push them to read those papers and deal with them in some way.
Now, people may disagree. But science isn't just some "fancy opinions", as the plastic troll recently wrote on Twitter. Science is the elaboration of hypotheses using arguments and evidence. When a paper presents reasons why purely global symmetries in a quantum gravitational theory shouldn't be exact and your paper contradicts this point, you simply shouldn't be allowed to ignore the paper that contradicts your work. You should address it.
In the context of some institutionalized science, you should be understood to have some kind of a duty to do so. If you believe someone is wrong in some statements that are considered vital, you should present evidence that the argument you disagree with is wrong. If you don't present such evidence, you're not behaving quite honestly. You are really obscuring and hiding evidence.
So for example, string theorists generally think that loop quantum gravity is on a completely wrong track. And there are (TRF blog posts and) papers such as this paper by Nicolai et al. that actually discuss loop quantum gravity as a serious proposal, technically, just as they would approach an apparently wrong paper in string theory. And they conclude that loop quantum gravity doesn't solve any of the non-renormalizability problems of quantum gravity – because the number of ambiguities remains as infinite as it was before – and that there are other, quite specific problems.
You could say that similar "mirror image" papers exist that criticize string theory. Except that they're very different. All those papers are just populist tirades addressed to laymen, not experts. They're not research-level papers of the kind actually produced by top-tier researchers. So the situations aren't symmetric. The technical and often deep conceptual results of string theory, and of serious quantum gravity in general, are simply being ignored.
I used quantum gravity research as the example, but I had dozens of similar "movements" in theoretical physics in mind, and similar problems are growing in most scientific disciplines. Researchers are increasingly fragmented into cliques that don't interact with others – and that increasingly fight through P.R. agencies and friends in the media instead of through legitimate arguments in research-level journals. That's wrong and that's unhealthy, because they should interact. To do so, they have to read other people's papers and address them whenever they're relevant. There should be incentives that encourage this part of the work.
More generally, I also think that the huge sets of papers in some disciplines could be and should be dramatically "compactified". For example, the evaluation of the ATLAS and CMS experiments at the LHC requires some deep expertise, but it's extremely similar in almost all the papers. So I think that hundreds of papers should really be merged into "one big paper" with a shared background describing the methods, and with various results presented as "applications of the methods in many cases" in a much more concise way. Or someone should create a much better system to organize the papers and find what you want. This kind of "smart secretary" work could be more useful than dozens of new papers of a certain kind. I mention this example because I am afraid that the number of readers of an LHC paper is actually much lower than the number of authors.