Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. Hartling L et al. BMC Medical Research Methodology 2017 17:64
Conclusion: The majority of SRs searched for non-English and unpublished studies; however, these represented a small proportion of included studies and rarely impacted the results and conclusions of the review. Inclusion of these study types may have an impact in situations where there are few relevant studies, or where there are questionable vested interests in the published literature. We found substantial variation in whether SRs searched for dissertations; in most reviews that included dissertations, these had little impact on results.
But an important section in the results:
“Most SRs searched for unpublished studies but the majority did not include these (only 6%) and they represented 2% of included studies. In most cases the impact of including unpublished studies was small; a substantial impact was observed in one case that relied solely on unpublished data.”
I highlight this as it shows the prevalent SR paradigm and why it is, frankly, nonsensical!
I’ve touched upon this previously (Why do we do systematic reviews? Part 4). That post reports that, when you include unpublished studies, the effects can be significant. The Hart paper demonstrated that the inclusion of unpublished studies altered the estimate of effect size by over 10% in half of the SRs examined (sometimes by over 100%). So, without unpublished studies you cannot be confident in the result. This new study shows that most SRs don’t actually include them (only 6% do) and that those included contributed very little. I suspect this discrepancy comes down to the fact that searching for unpublished trials is incredibly difficult, and that (as this paper shows) such searching tends to find an unsystematic sample of them – the easier ones to locate. That could well explain why, in this study, they make little difference. To be clear: when you search PROPERLY for unpublished studies, the difference can be profound. As such I profoundly disagree with this section of the conclusion:
“This study provides quantitative data regarding the potential impact on meta-analysis results of excluding studies published in non-English languages, as well as unpublished studies and dissertations. We found that the vast majority of SRs searched for non-English and unpublished studies; however, these represented a very small proportion of included studies and rarely impacted the results and conclusions of the review.”
From reading the article it might appear that there is little to be gained from adding unpublished studies. This is demonstrably not the case and, to me, misleading.
I challenge any ardent systematic reviewer to demonstrate that SRs based on published journal articles can be counted on to be ‘accurate’. I’m not saying they can’t be, but if you look at the Hart paper (and others) you’ll see that you simply cannot guarantee it, as you never know which unpublished studies you’ve missed or their likely effect on the estimate of effect size.
Instead of the systematic review/EBM world coming out with faith-based pronouncements on SRs and their ‘power’ I really wish they would be more transparent about the shortcomings of the journal-article based SR.
Bottom line: Most SRs rely on published journal articles, and this makes them unreliable – the evidence is pretty clear.
Note: usual caveat that most of the above refers to SRs based on RCTs!