Two new papers of interest:
Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. Hartling L et al. BMC Medical Research Methodology 2017 17:64
Conclusions: The majority of SRs searched for non-English and unpublished studies; however, these represented a small proportion of included studies and rarely impacted the results and conclusions of the review. Inclusion of these study types may have an impact in situations where there are few relevant studies, or where there are questionable vested interests in the published literature. We found substantial variation in whether SRs searched for dissertations; in most reviews that included dissertations, these had little impact on results.
Until this paper, only a couple of studies had explored language restrictions and their effect on SRs, and their results were mixed. This paper adds further weight to the notion that non-English papers add little. So, if speed is your priority, restricting the search to English appears to be the way to go.
For die-hard systematic reviewers it raises awkward questions. In a recent post I explored the notion of cost-benefit in evidence synthesis, and this paper adds some evidence to that discussion. Searching non-English language databases adds cost, which can be considerable once translation is factored in, yet the evidence is that these studies add relatively little. Is it ethical to spend extra resource for little to no gain?
Search for unpublished data by systematic reviewers: an audit. Ziai H et al. BMJ Open 2017;7:e017737
Conclusion: A significant fraction of systematic reviews included in our study did not search for unpublished data. Publication bias may be present in almost half the published systematic reviews that assessed for it. Exclusion of unpublished data may lead to biased estimates of efficacy or safety in systematic reviews.
This paper complements the Schroll paper, which I frequently refer to. In the Ziai paper we see that most systematic reviews attempted to find unpublished data, but a substantial fraction did not. The important question for me is not so much how many attempted to search, but how many searched the unpublished data adequately. The Schroll paper answers that: not many. I worry that a biased search of the unpublished data might do more harm than good.
Searching for unpublished data is highly problematic with no clear solutions – alas no words of wisdom to add!
Final comment: by definition, SRs include ‘all trials’. Yet both papers highlight a tension: should we include non-English papers and/or unpublished trials? How do you square that circle? It appears systematic review methods rest on all sorts of compromises that somehow, someone has made.
SR methodologists seem comfortable taking some shortcuts (unpublished data being the obvious one), yet at the same time they spend lots of time and effort squeezing ever-decreasing amounts of bias from other parts of the system, often at considerable cost. To me it makes no sense, unless you take the view that pushing the methodological bar so high inhibits competition. I’ve written previously about this, quoting the Wikipedia article on barriers to entry; the article has since been altered but remains relevant. It states:
“In theories of competition in economics, a barrier to entry, or an economic barrier to entry, is a cost that must be incurred by a new entrant into a market that incumbents do not have or have not had to incur.
Because barriers to entry protect incumbent firms and restrict competition in a market, they can contribute to distortionary prices and are therefore most important when discussing antitrust policy. Barriers to entry often cause or aid the existence of monopolies or give companies market power.”
Is this me being cynical or realistic? Either way, we really do need to explore cost-benefit in evidence synthesis.