Single-reviewer abstract screening missed 13 percent of relevant studies: a crowd-based, randomized controlled trial
Journal of Clinical Epidemiology, 2020
This is an important topic and it’s always good to receive evidence about evidence reviewing. However, I take issue with two aspects:
- The outcome measure used – articles found.
- The denominator – comparison with systematic reviews.
An evidence review’s primary purpose is to provide a robust answer to a clinical question. So the primary outcome might be ‘Drug A is effective in condition B’ or ‘We found no evidence for Drug C in condition D’. What this paper found was that single reviewers missed 13% of relevant studies while dual screening missed 3%. This gives us zero information on the main outcome, i.e. did these omissions result in any change to the answer? The authors acknowledge this in the discussion: “future research needs to explore the impact of such studies on the results of meta-analyses and conclusions”. That acknowledgement doesn’t alter the fact that missing some studies may be meaningless, which leads me to the next criticism:
What I find curious (possibly even absurd) is that, given the chosen outcome is missed articles, they use as the comparator what might be found in a systematic review. What is wrong with that? Well, we know that systematic reviews typically focus on published trials, yet roughly 50% of all trials go unpublished. So a better denominator would be all trials, not just those captured in a systematic review. Let’s illustrate this with an example, using an intervention with 50 trials (25 published, 25 unpublished): even a perfect dual-screened systematic review would capture only the 25 published trials, i.e. half of the evidence.
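To make the arithmetic concrete, here is a quick sketch of that hypothetical 50-trial scenario. The trial counts are my illustrative assumptions; only the 13% and 3% miss rates come from the paper, and I apply them to the published trials alone, since screening can only miss what a search can retrieve:

```python
# Hypothetical worked example: 50 trials, 25 published, 25 unpublished.
# The 13% (single reviewer) and 3% (dual screening) miss rates are the
# paper's figures; everything else is an assumption for illustration.

total_trials = 50
published = 25

single_found = published * (1 - 0.13)  # published trials a single reviewer keeps
dual_found = published * (1 - 0.03)    # published trials dual screening keeps
perfect_sr_found = published           # even a flawless review stops at published

for label, found in [("single reviewer", single_found),
                     ("dual screening", dual_found),
                     ("perfect systematic review", perfect_sr_found)]:
    print(f"{label}: {found:.2f}/{published} published "
          f"= {found / total_trials:.1%} of all {total_trials} trials")
```

On these assumptions, the gap between single and dual screening is about three published trials (6.5 versus 1.5 percentage points of the full evidence base), which is dwarfed by the 25 unpublished trials that neither approach touches.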
This paper is about highlighting a methodological issue that results in missed trials. Yet the concern seems to be only with some trials, not all trials.
So, back to the old issue: do we want all trials or some trials? If the latter, what’s an appropriate sample? And this paper (and others like it) has me conjuring up the phrase reductio ad absurdum.