Uncited Papers Equal Meaningless Research?

January 24, 2018

On my first ‘learning’ day at university in 1982, the professor inducting us into the fraternity (used loosely) of science explained that science without communication was pointless and that we should always aim to share our observations, positive and negative, with the rest of the scientific community. This ethos aligns with sociological analyses of science that emphasise ‘communalism’ as a core norm of scientific practice, whereby findings must be publicly shared to become part of certified knowledge [1]. Then, as now, the scientific literature was the principal medium for communicating research results, and the aim of researchers was to add as much of their work as possible to the permanent record of the collective achievements of the scientific community.

The size and scope of the literature is huge and almost impossible to conceive. Guesstimates suggest that more than 1.5 million papers are published in over 25,000 peer-reviewed journals annually, and that over 50 million have been published since the first scientific journal appeared in 1665. Bibliometric analyses have estimated exponential growth in scientific output since the seventeenth century, with doubling times of 9–15 years in recent decades [2,3]. It is perhaps unsurprising, therefore, to find that no small proportion of the literature goes unrecognised, in the sense of never being cited. The extent and value of the uncited literature has been the subject of some speculation and discussion. Early citation analyses suggested substantial proportions of papers remained uncited within five years of publication [4]. Researchers often worry that high numbers of uncited papers indicate a useless or irrelevant direction of research, although citation behaviour is known to be shaped by social and field-specific factors rather than intrinsic merit alone [5].
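
To put those doubling times in perspective, here is a back-of-the-envelope sketch (mine, not taken from the cited studies) converting a doubling time into the annual growth rate it implies:

```python
def annual_growth_rate(doubling_time_years: float) -> float:
    """Annual growth rate r implied by exponential growth with the
    given doubling time: solves (1 + r) ** T = 2 for r."""
    return 2 ** (1 / doubling_time_years) - 1

# Doubling times of 9 and 15 years [2,3]:
for t in (9, 15):
    print(f"doubling every {t} years -> {annual_growth_rate(t):.1%} more papers per year")
# doubling every 9 years -> 8.0% more papers per year
# doubling every 15 years -> 4.7% more papers per year
```

On those figures the literature grows by roughly 5–8% a year, which helps explain why no individual can hope to keep pace with it.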

Previous estimates have suggested that up to half of all papers have yet to receive their first citation some 5 years after publication [4], and that even 10% of papers by Nobel Prize winners go uncited, though later large-scale analyses indicate both figures are far lower [6]. To get a better idea of the extent to which published research goes uncited, the journal Nature recently performed its own analysis. Their article notes how difficult it can be to pin down accurate numbers. In this case, the researchers reviewed data on a core group of 12,000 journals catalogued in the Web of Science. They found evidence that previous estimates are far too high, consistent with broader re-analyses of citation distributions across disciplines [6]. Looking at recent papers, it appeared that fewer than 10% remain uncited. Even this figure may overstate the true rate, as the Web of Science does not record citations appearing in journals outside its database, or in books, patents and the like. By way of comparison, work published by Nobel laureates goes uncited just 0.3% of the time [6].

A closer look at the data suggests that the proportion of uncited papers was higher in the past and has been falling steadily. For example, 20% of papers published in 1980 still don't have a single citation today. It seems likely that this reflects the revolution in the way we undertake background research over the last 30 years, following the emergence of electronic literature search engines. We have also seen growth in the number of references cited per paper, a trend that has been documented empirically across multiple fields [7].

A new field of study has exploded onto the scientific landscape over the last decade: bibliometrics. Foundational work on citation indicators and research evaluation, including the development of the h-index, has shaped contemporary assessment practices [8]. Quantitative analysis of the scientific literature is changing rapidly with the creation of new evaluation tools, parameters and normative data. These parameters can be categorised into author-focused or journal-focused metrics, such as the impact factor and article-level indicators [9]. Once you have an understanding of the field it quickly becomes obvious that publication metrics fail to give a true assessment of research ‘quality’, whatever that may be; it is not yet possible to derive a simple indication of the value of a piece of work. Concerns about the misuse of journal-based metrics have been widely articulated in the research community [10]. You still have to determine that for yourself.
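
For readers unfamiliar with the h-index [8], it is simple enough to compute by hand: an author has index h if h of their papers have each been cited at least h times. A minimal sketch in Python, using invented citation counts purely for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still at least `rank` papers with >= rank citations
        else:
            break
    return h

# Invented citation counts, purely for illustration.
print(h_index([21, 10, 8, 6, 6, 3, 1, 0, 0]))  # -> 5
```

Simplicity of this kind is part of the appeal of such metrics, and also part of the problem: nothing in the calculation speaks to the quality of the underlying work.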

Reviewing my own track record over the last 25 years suggests that the reception of my work has been pretty average, with 11.5% of my manuscripts having zero citations. At least my h-index scores indicate that I haven’t spent my career citing myself.

In the spirit of generosity, the author of Nature’s article noted that we shouldn’t assume that any of our work that goes uncited is worthless. It is still possible for the work to affect researchers and practitioners. Empirical studies of ‘sleeping beauties’ in science demonstrate that some papers remain uncited for years before later recognition, illustrating that citation trajectories can be delayed and nonlinear [11]. Long-dormant work can still influence readers, who may change the way they work or think without necessarily commenting in the literature on the influence the work has had. To be fair, they may not even know that they have been influenced.

In this respect, the arrival of the digital era has had a significant impact on science. Not only has it changed the way we search the literature, it has also democratised and re-valued every single research report. The rise of online databases, open access publishing and article-level metrics has diversified the ways in which impact can be observed, including downloads, social media attention and policy citations [9]. Databases can just as easily regurgitate the details of a manuscript from 1980 as they can one from 2018. Younger researchers, born into these systems, now adopt a learn-as-they-go approach, adapting their searching strategy ‘on the fly’. As such, what they see informs their decision trees, and thus it is the whole body of our research work, positive, negative and revolutionary, that empowers their progress. The growing recognition of publication bias and the need to report negative or null findings has further underscored the importance of communicating all results, not only those that achieve high citation counts [12]. Although it may go uncited, even the most uninspiring paper can influence the way we think, though perhaps you don't want to be seen as one of the ten ‘lucky’ Ig Nobel Prize winners announced annually.

Finally, for me there is one lasting consequence of my early research, which included several zero-citation projects. Each introduced me to other researchers in the wider community who were working in similar fields. Social network analyses of science confirm that collaboration and informal exchange between scientists are central drivers of knowledge production and innovation [13]. Thanks to these interactions I formed several life-long collaborations, friendships and publishing partnerships that would otherwise not have happened. Even today, as Managing Director of a business, Niche Science & Technology, I still see it as my duty to communicate about science. And I benefit massively from these chances to interact with others.

References

  1. Merton RK. The normative structure of science. In: Merton RK. The sociology of science. Chicago: University of Chicago Press; 1973.
  2. de Solla Price DJ. Networks of scientific papers. Science. 1965;149(3683):510–5.
  3. Bornmann L, Mutz R. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. J Assoc Inf Sci Technol. 2015;66(11):2215–22.
  4. Hamilton DP. Publishing by—and for?—the numbers. Science. 1990;250(4986):1331–2.
  5. MacRoberts MH, MacRoberts BR. Problems of citation analysis. Scientometrics. 1996;36(3):435–44.
  6. Van Noorden R. The science that’s never been cited. Nature. 2017;552:162–4.
  7. Hyland K. Self-citation and self-reference: Credibility and promotion in academic publication. J Am Soc Inf Sci Technol. 2003;54(3):251–9.
  8. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.
  9. Priem J, Taraborelli D, Groth P, Neylon C. Altmetrics: A manifesto. 2010.
  10. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):498–502.
  11. van Raan AFJ. Sleeping beauties in science. Scientometrics. 2004;59(3):467–72.
  12. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: An updated review of related biases. Health Technol Assess. 2010;14(8):1–193.
  13. Newman MEJ. The structure of scientific collaboration networks. Proc Natl Acad Sci U S A. 2001;98(2):404–9.

About the author

Tim Hardman
Managing Director
Dr Tim Hardman is Managing Director of Niche Science & Technology Ltd., a bespoke services CRO based in the UK. He also serves as Managing Director at Thromboserin Ltd., an early-stage biotechnology company. Dr Hardman is a keen scientist and an occasional commentator on all aspects of medicine, business and the process of drug development.
