Searching practices and inclusion of unpublished studies in systematic reviews of diagnostic accuracy. Korevaar DA et al. Res Synth Methods, 2020
The paper above is not really about RRs, but it has implications for the philosophical basis of evidence synthesis and the tension between ‘systematic’ and ‘rapid’ reviews.
In this paper the authors report:
“To prevent the potential bias from relying only on published evidence in systematic reviews, guidance documents invite reviewers to search for studies that are not reported in peer reviewed journals, but may be identifiable in, for example, proceedings of scientific conferences or prospective trial registries.”
This highlights the fact that many studies remain unpublished. In trials of pharmaceutical drugs, AllTrials reports that “Half of all clinical trials have never reported results”. The corresponding figure for diagnostic studies is unknown.
In the conclusion the authors report:
“The fact that only 1.9% of all primary studies included in the non-Cochrane systematic reviews were unpublished, and only 2.3% of those included in the Cochrane systematic reviews, indicates that it is highly likely that such reviews fail to include a considerable amount of completed diagnostic accuracy studies.”
They also report on other types of SRs (i.e. non-diagnostic) and state:
“Sources of unpublished data, however, were rarely searched; for example, only 19% of systematic reviewers screened trial registries. Another evaluation of grey literature in systematic reviews in child-relevant Cochrane systematic reviews found that only 5.6% were able to include an unpublished study, and such studies only represented 1.9% of all included studies.”
Bottom line: Systematic reviews (diagnostic or otherwise) rarely include all trials and therefore rely on a “biased subsample” (the words of Glasziou and Chalmers) of studies.
So, what has this got to do with RRs? Two things:
- SRs enjoy a privileged status based, in part, on the assumption that they include all trials (this was part of an earlier Cochrane definition of a systematic review). That assumption is false, and I do not see many efforts by SR producers to dispel the myth. This is to the detriment of rapid methods.
- Sampling. SRs don’t capture 100% of trials; they possibly capture 50%. An RR that is much quicker and cheaper might capture 45%, yet many get agitated by that (ironically, many of the harder-to-find trials in the published literature are likely to be of lower quality). There is remarkably little discussion of what sample is required to produce an adequate review.
My concluding observation is that evidence synthesis is a continuum: there is no sudden move from ‘rapid’ to ‘systematic’. We have yet to untangle the relationship between cost and benefit in synthesis methods. This suits SR producers, not those needing high-quality evidence to support their decision making.