I recently had the pleasure of talking about rapid reviews in Liverpool. One point that got raised in the discussion was that of methodological shortcuts.
Typically, rapid reviews are portrayed as ‘cut down’ systematic reviews, i.e. the starting point is the systematic review process and steps are then reduced or removed to arrive at your rapid review methodology (BTW, my post Different approaches to rapidity discusses this and suggests there are approaches to rapid reviews other than process ‘reduction’). In other words, shortcuts are taken.
The problem I have with this framing is that it treats the standard systematic review as the gold standard. In reality, systematic reviews take shortcuts! For example:
- Systematic reviews typically take the shortcut of only using published journal articles, even though we know 30–50% of trials are unpublished.
- Systematic reviews typically rely on journal articles rather than fuller documents (such as clinical study reports).
- Systematic reviews will not search all databases; they restrict themselves to a handful.
Now, all the above are pragmatic and understandable shortcuts – but they are shortcuts nonetheless. These shortcuts are not based on evidence; they are based on the ‘collective wisdom’ that they are reasonable. They are based on faith (ironically, the antithesis of evidence-based practice).
To show this graphically, take this image (adapted from a Tom Jefferson diagram):
The standard systematic review doesn’t take all the data from all the trials, it takes the journal articles from published trials. The effect is that there is a huge amount of missing data. And this effect is down to the shortcuts!
The shortcuts from ‘all data’ to a systematic review result in a massively greater data loss than the shortcut from a systematic review to a rapid review.
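To put rough numbers on this argument, here is a back-of-the-envelope sketch. Only the 30–50% unpublished-trials figure comes from the text above; every other proportion is a hypothetical assumption, chosen purely to illustrate where the bulk of the data loss occurs:

```python
# Back-of-the-envelope illustration of cumulative data loss.
# Only the ~30-50% unpublished-trials figure comes from the post;
# all other proportions below are hypothetical assumptions.

published_fraction = 0.6       # assume ~40% of trials are unpublished
journal_vs_csr_fraction = 0.5  # assume a journal article carries ~half the data of a clinical study report
db_coverage_fraction = 0.9     # assume a handful of databases finds ~90% of published articles

# Data reaching a standard systematic review, as a fraction of all trial data
systematic_review_data = published_fraction * journal_vs_csr_fraction * db_coverage_fraction

# Assume the rapid review's extra shortcuts lose a further ~10%
rapid_review_data = systematic_review_data * 0.9

print(f"All trial data:    100%")
print(f"Systematic review: {systematic_review_data:.0%}")
print(f"Rapid review:      {rapid_review_data:.0%}")
```

Under these (made-up) assumptions, the systematic review already sees only about a quarter of the available data, while the additional step down to a rapid review costs just a few percentage points more – which is the shape of the argument in the diagram.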
Bottom line: if someone mentions methodological shortcuts, remember that systematic reviews employ them to much greater effect than rapid reviews.