One of the main criticisms of ‘rapid reviews’ is that they cut corners (relative to systematic reviews) and are therefore likely to be – in some way – ‘wrong’ (however that is defined). This criticism is often made from the perspective that a full systematic review is – in some way – ‘right’ (again, however that is defined).
What is increasingly clear to me is that for any given intervention there is a totality of evidence, and whichever review method you choose merely samples this totality – it never captures 100% of it.
For the sake of argument, say there are twenty trials for an intervention, so there will be twenty clinical study reports (those vast, unwieldy documents that contain significantly more data than a journal article). Now, we know that around 30-50% of trials are unpublished, so let’s say there are fourteen published trials.
The vast majority of systematic reviews will only locate published trials. So, most systematic reviews will not include all trials; they will include all published trials – a massively important detail. And, as well as missing the six unpublished trials, they will also miss the extra detail in the clinical study reports, as well as any additional regulatory data.
A ‘rapid review’ might take a short-cut approach to searching and only locate twelve of the published trials. So, the reality would be:
- Systematic review: 14/20 trials = 70% of the data
- Rapid review: 12/20 trials = 60% of the data
And if we factor in the data missed by ignoring clinical study reports, that might reduce the amount of data included by a further 25% (a speculative figure). So, a systematic review might include only 52.5% of the data versus 45% for a rapid review.
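The arithmetic above can be sketched as a quick back-of-envelope calculation (the 25% clinical-study-report adjustment is the post’s speculative figure, not an empirical one):

```python
# Sample fractions from the post's worked example.
total_trials = 20
published = 14        # roughly 30% of the trials are unpublished
rapid_found = 12      # a short-cut search locates two fewer published trials

csr_penalty = 0.25    # speculative: further data lost by ignoring CSRs

systematic = published / total_trials            # 0.70
rapid = rapid_found / total_trials               # 0.60

systematic_adj = systematic * (1 - csr_penalty)  # 0.525
rapid_adj = rapid * (1 - csr_penalty)            # 0.45

print(f"Systematic review: {systematic:.0%} of trials, ~{systematic_adj:.1%} of the data")
print(f"Rapid review:      {rapid:.0%} of trials, ~{rapid_adj:.1%} of the data")
```

Either way, both methods end up with a sample – the difference is only in how large a sample.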
The point of this post is to highlight that, even with a long-winded and thorough methodology, all you’ll ever get is a sample of the data.
Is 70% of the data better than 60% (or 52.5% versus 45%)? It’s certainly not ‘evidence based’ to say that 70% is good and 60% is bad (or 52.5% versus 45%). There is no arbitrary cut-off point between ‘good’ and ‘bad’ in evidence synthesis. An evidence synthesis does not suddenly become ‘good’ when all the published trials are found. It’s disingenuous to claim that, and it’s dangerous to assume it.
One thought on “Sampling in evidence synthesis”
For interest, an article that shows the comparison between CSRs and journal articles:
Clinical study reports of randomised controlled trials: an exploratory review of previously confidential industry reports
“Report synopses had a median length of 5 pages, efficacy evaluation 13.5 pages, safety evaluation 17 pages, attached tables 337 pages, trial protocol 62 pages, statistical analysis plan 15 pages and individual efficacy and safety listings had a median length of 447 and 109.5 pages, respectively.”