Can it really be true that 50% of research is unpublished?

Paul Glasziou and Iain Chalmers recently published the above article on the BMJ Blog.  As you’d expect from these authors, it’s a great read.  I’d like to highlight one section that is particularly relevant to the issue of rapid reviews (note my emphasis):

Whether the precise non-publication rate is 30%, 40%, or 50%, it is still a serious waste of the roughly $180 billion annually invested in health and medical research globally. Non-publication means that researchers cannot replicate or learn from the results found by others—particularly the disappointments, which are less likely to be published. Funders deciding on the gain from new research cannot base that decision on all previous research. Reviewers trying to summarize all the research addressing a particular question are limited by access only to a biased subsample of what has been done.

So, what they are reporting is that relying on published journal articles means working from a biased subsample of the total evidence.  Surely it’s not just Iain, Paul and me who think this is problematic.  Indeed, when the issue is explored the evidence is fairly clear: systematic reviews (SRs) based only on published articles frequently differ, often profoundly, from those based on ALL trials (published and unpublished) [1, 2].

Bottom line: SRs based on published journal articles (“a biased subsample”) are problematic and cannot be relied upon to be accurate.
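If you want to see how this plays out numerically, here’s a quick toy simulation (entirely my own illustration – the effect size, trial counts and the “publish if positive” rule are invented, not taken from Turner or Hart).  It pools twenty hypothetical trials with and without the unpublished ones; the published-only estimate will usually sit above the estimate from the full set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Twenty hypothetical trials of the same intervention, true standardised
# effect 0.20, with varying sizes (all numbers invented for illustration).
true_effect, n_trials = 0.20, 20
n_per_arm = rng.integers(30, 200, n_trials)
se = np.sqrt(2 / n_per_arm)               # rough SE of a standardised mean difference
observed = rng.normal(true_effect, se)    # what each trial actually reports

# Crude publication rule: trials that look 'positive' (z > 1) always get
# published; the rest make it into a journal only half the time.
published = (observed / se > 1) | (rng.random(n_trials) < 0.5)

def pooled(effects, ses):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    w = 1 / ses**2
    return np.sum(w * effects) / np.sum(w)

print(f"True effect:            {true_effect:.2f}")
print(f"Pooled, all trials:     {pooled(observed, se):.2f}")
print(f"Pooled, published only: {pooled(observed[published], se[published]):.2f} "
      f"({published.sum()}/{n_trials} trials)")
```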

So, do SRs typically rely on all trials or just the published ones?  Again, this question can be answered with evidence [3], and the answer is no.  Typically, SRs (even those from the better SR production organisations) do not include all trials, and their attempts at obtaining unpublished studies are poor and unsystematic.

So, we’ve established that most SRs are potentially problematic and probably biased.  Some will be accurate – but we have no way of knowing which are accurate and which are not.  As such, I’ve long felt that SRs can only be relied upon to give a ballpark estimate of the ‘worth’ of an intervention.

But what’s all this got to do with rapid reviews?

My broad view is that there is a piece of research to be done that unpicks the obsession with locating every published journal article.  People put so much effort, resource and cost into finding them all – why?  The oft-told myth is that it’s to reduce bias.  Iain and Paul seem to have exposed that as a nonsense.  I’ve written previously that setting a high methodological bar acts as a barrier to entry [4] – I can think of no other reason (apart from ignorance).

The current approach to producing SRs gives a ballpark estimate (nothing more), so if you’re happy with a ballpark, do it quickly.  A traditional SR might find 50% of the trials and a rapid review might find 40% – is that really a problem?  Possibly in some cases, but most of the time I suspect it makes no difference (but let’s get some research going).

A few years ago I was chatting with Tom Jefferson, and I highlighted that all the evidence comparing rapid reviews with SRs finds broadly similar results.  He said he was unsurprised, as he views journal articles as, typically, pharmaceutical company marketing pieces.  So, to him, a sample of the marketing message is still the marketing message!

References

  1. Turner EH, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008 Jan 17;358(3):252-60.
  2. Hart B, et al. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012 Jan 3;344:d7202.
  3. Schroll JB, et al. Searching for unpublished data for Cochrane reviews: cross sectional study. BMJ. 2013 Apr 23;346:f2231.
  4. Economics and EBM. Liberating the Literature, 2014.
