In this research article, published in PLOS ONE last week (), V. Larivière and R. Costas analysed the publication and citation records of more than 28 million researchers who published at least one paper between 1980 and 2013. Using this database, the authors tried to understand the relationship between research productivity and scientific impact. They addressed the question of whether incentives for scientists to publish as many papers as possible lead to higher-quality work – or just to more publications. They found that, in general, an increasing number of scientific articles per author did not yield lower shares of highly cited publications, or, as Larivière and Costas put it: ‘the higher the number of papers a researcher publishes, the higher the proportion of these papers are amongst the most cited’.
There are two reasons why we find this paper very interesting and worth reading:
On the one hand, here at PAASP, we are very much interested in the reverse relationship – whether quality has an impact on productivity. Indeed, some colleagues worry that introducing and maintaining higher quality standards in research could reduce the number of papers published, limit the possibilities to publish in high-impact-factor journals, or prolong the duration of student projects (e.g. for PhD students).
On the other hand, this paper reminds us that using citation numbers as an index of quality is a dangerous approach. For example, we have used data generated by the Reproducibility Project: Psychology (Open Science Framework) to plot citations for papers whose research findings were replicated versus papers with findings that could not be replicated (an Excel table with the raw data is available upon request).
As the graph below illustrates, it does not matter how often the senior or first authors have been cited during their careers or how many times a particular paper has been cited: there are no differences between the publications whose findings could and could not be replicated!
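As a minimal sketch of this kind of comparison, one could group citation counts by replication outcome and compare simple summary statistics. Note that the numbers below are invented for illustration only – they are not the actual Reproducibility Project: Psychology data (which, as mentioned above, is available upon request).

```python
from statistics import median

# Hypothetical citation counts per paper -- illustrative values only,
# NOT the real Reproducibility Project: Psychology dataset.
replicated = [12, 45, 8, 30, 22, 17, 60, 9]
not_replicated = [15, 40, 11, 28, 25, 14, 55, 10]

def summarize(label, counts):
    # Print group size and median citations for one replication group.
    print(f"{label}: n={len(counts)}, median citations={median(counts)}")

summarize("Replicated", replicated)
summarize("Not replicated", not_replicated)
```

With distributions like these, the medians of the two groups are nearly identical, which is exactly the pattern described above: citation counts do not separate replicable from non-replicable findings.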