Bottom line: Systematic reviews based on published journal articles cannot be relied upon to be accurate.
Turner EH et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008 Jan 17;358(3):252-60.
Hart B et al. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012 Jan 3;344:d7202.
These are extremely important papers and deserve to be discussed more widely. They highlight the potentially substantial impact of unpublished trials, a danger that is both real and demonstrable. From the AllTrials campaign we know that a vast number of trials go unpublished, with estimates that between 30% and 50% of all trials are hidden from all but the most assiduous researcher. It is also important to understand that the majority of systematic reviews do not use unpublished trials, and those that do tend to use them in a haphazard, non-systematic way (see Schroll JB et al. BMJ 2013;346). So the ramifications apply to the vast majority of systematic reviews you come across.
The Turner paper examined the publication status of trials registered with the FDA (a regulatory requirement) for a variety of antidepressants. Of the 38 studies with results favouring an antidepressant, 37 were published. Of the 36 with negative results, only 3 were published. It doesn't take much imagination to see the implications of relying on published trials for subsequent systematic reviews and meta-analyses.
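To make the distortion concrete, here is a minimal sketch in Python using only the counts reported above (the arithmetic is mine, not a calculation from the Turner paper): roughly half of the registered trials were positive, yet the published literature would suggest almost all of them were.

```python
# Counts from Turner et al. (NEJM 2008): 74 FDA-registered antidepressant trials.
positive_total, positive_published = 38, 37
negative_total, negative_published = 36, 3

all_trials = positive_total + negative_total            # 74 registered trials
published = positive_published + negative_published     # 40 published trials

true_positive_rate = positive_total / all_trials          # share of positive trials overall
apparent_positive_rate = positive_published / published   # share of positive trials in print

print(f"Positive among all registered trials: {true_positive_rate:.0%}")   # ~51%
print(f"Positive among published trials:      {apparent_positive_rate:.0%}")  # ~93%
```

In other words, a reader of the journals would see positive trials making up over 90% of the evidence when they were in fact barely half of it.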
The Hart paper demonstrated that this is routine. Hart compared meta-analyses of published trials with meta-analyses of the FDA data (that is, published and unpublished studies together) across many more drugs and drug classes than Turner: 41 interventions in total. Hart reported:
“Overall, addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy.”
This can be visualised:
So, if you look at a collection of systematic reviews, what conclusions can you draw about the accuracy of their effect sizes? Based on the Hart paper you could draw the following conclusions:
- 25% will overestimate the effect size by more than 10%
- 50% will be within 10% of the effect size
- 25% will underestimate the effect size by more than 10%
But what is profoundly important is that the results are unpredictable. There is no way of knowing whether the result of a meta-analysis based on published trials is likely to be roughly accurate, an underestimate, or an overestimate of the true effect size.
Now, some space to reflect on that last paragraph.
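To see why the direction of the error is unknowable in advance, here is a small simulation sketch; all numbers are invented for illustration and it is not taken from Hart or Turner. It pools hypothetical trial results with a simple fixed-effect, inverse-variance weighted average, then repeats the pooling with roughly a third of the trials hidden, as a reviewer limited to the published literature would be.

```python
import numpy as np

rng = np.random.default_rng(1)

def pooled(effects, ses):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = 1.0 / ses ** 2
    return float(np.sum(weights * effects) / np.sum(weights))

true_effect, n_trials = 0.30, 9  # hypothetical values

for run in range(8):
    # Hypothetical per-trial effect estimates and standard errors.
    effects = rng.normal(true_effect, 0.20, n_trials)
    ses = rng.uniform(0.08, 0.25, n_trials)

    # The reviewer only sees the published two-thirds of trials; which third
    # is hidden is unknown to them, so model it here as an arbitrary subset.
    published = rng.permutation(n_trials)[: 2 * n_trials // 3]

    all_trials = pooled(effects, ses)
    published_only = pooled(effects[published], ses[published])
    print(f"run {run}: all trials {all_trials:+.3f}   "
          f"published only {published_only:+.3f}   "
          f"shift {published_only - all_trials:+.3f}")
```

Across runs, the published-only estimate sometimes lands above and sometimes below the all-trials estimate, and a reviewer has no way of telling which situation they are in.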
So, where does that leave us? It leaves us with the need to be transparent about the shortcomings of systematic reviews that rely on published trials. I also tend to view the results of such reviews as, at best, ballpark estimates. And that realisation can be liberating: if you are happy with ballpark estimates, do the review rapidly, not slowly. For all the methodological rigour of many systematic reviews, they still do not deliver what many assume that rigour brings: accuracy. That rigour also comes at a high cost. Since high cost does not guarantee high quality, cutting costs (and thereby increasing value) seems entirely sensible.
One final thought, based on the notion of a third of trials being missing. I initially created this image as a bit of mischief in my Cochrane-bashing days. I have since moved on from that (and now class myself as a critical friend of Cochrane; thank you David Tovey for that accolade), but the image still carries a powerful message.
Note the insertion of two new trials (assuming 9 trials in total, of which 7 were published and used for the Cochrane logo) and the effect they might have. Both versions have an equal claim to 'truth'.
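The Cochrane logo's forest plot summarises seven published trials. The toy calculation below (every effect size and standard error is invented; it is not the real corticosteroid data behind the logo) shows how adding two previously unpublished, unfavourable trials to a seven-trial fixed-effect meta-analysis can shift the pooled odds ratio enough to change the conclusion.

```python
import math

def pooled_ci(log_ors, ses):
    """Fixed-effect pooled odds ratio with a 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), math.exp(lo), math.exp(hi)

# Seven hypothetical published trials, mostly favouring treatment (OR < 1).
published_log_ors = [-0.5, -0.3, -0.6, -0.2, -0.4, 0.1, -0.35]
published_ses     = [0.30, 0.25, 0.40, 0.35, 0.30, 0.45, 0.28]

# Two hypothetical unpublished trials with null or unfavourable results.
all_log_ors = published_log_ors + [0.3, 0.25]
all_ses     = published_ses     + [0.22, 0.20]

print("7 published trials: OR %.2f (95%% CI %.2f to %.2f)" % pooled_ci(published_log_ors, published_ses))
print("all 9 trials:       OR %.2f (95%% CI %.2f to %.2f)" % pooled_ci(all_log_ors, all_ses))
```

In this made-up example the seven-trial pooled odds ratio sits clearly below 1, while the nine-trial confidence interval crosses 1: the same body of evidence, read two different ways.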