Yesterday I saw a tweet that read:
The risk rapid reviews is if we get it wrong – how much wrong are we ready to risk
This is an important point. It is not the first time I’ve heard people discuss rapid reviews and frame them as a question of how often you are prepared to be wrong.
I always find it slightly annoying as – unintentionally I’m sure – it comes across as sanctimonious. On every occasion I’ve seen it, it has come from people heavily involved in systematic reviews. They appear to assume a sense of moral superiority: that their way is the right way, and that the success, or not, of a rapid review is to be judged by their standards. But it is fairly clear, as there is lots of evidence, that most systematic reviews are liable to ‘problems’ due to reporting bias. Every study I’ve seen shows that relying on published journal articles is problematic (in comparison to fuller systematic reviews that include unpublished studies and/or additional data such as individual patient data or clinical study reports). One could even go so far as to say that there is more evidence that systematic reviews are problematic than there is for so-called rapid reviews!
An apposite quote seems to be “Those that chose the lesser of two evils soon forgot they chose evil at all”. In short, any synthesis method that relies on journal articles is likely to be problematic (principally due to reporting bias), and that should be remembered!
Bringing it back round to the initial tweet – it’s a reasonable question to ask about being wrong, but it is fairly clear that the question is equally relevant to standard systematic reviews.
But a bigger problem that concerns me is what does ‘wrong’ mean? Is it that you say Drug A is ‘effective’ when it’s not? Is it saying Drug A is better than Drug B when it’s not? Is it quantifying the effectiveness of a drug (via a meta-analysis) and being a long way off?
The nature of ‘wrong’ is surely context specific. If you’re a doctor seeing a patient and there are two treatment options – both broadly the same in relation to cost and adverse effect profiles – you may simply want to know that A is better than B. However, if A is more effective yet has a higher risk of adverse events, you may want to quantify both the scale of benefit and of adverse events to support the clinical decision. That moves us towards Value of Information, whereby the value of further information is tied to the support decision makers require. Do they need more information? If so, what is the cost and benefit?
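To make the Value of Information idea concrete, here is a minimal sketch of the expected value of perfect information (EVPI) for a two-treatment decision. All the numbers (the means and spreads of the net benefits of ‘Treatment A’ and ‘Treatment B’) are invented for illustration; the point is only that EVPI is the gap between deciding with perfect knowledge and deciding on current expectations.

```python
import random

random.seed(1)

# Hypothetical net-benefit distributions for two treatments.
# These parameters are made up purely for illustration.
def net_benefit_a():
    # Treatment A: higher expected benefit, but more uncertain
    return random.gauss(10.0, 4.0)

def net_benefit_b():
    # Treatment B: lower expected benefit, but better characterised
    return random.gauss(8.0, 1.0)

def evpi(n_sims=100_000):
    draws = [(net_benefit_a(), net_benefit_b()) for _ in range(n_sims)]
    # Deciding now: commit to whichever option has the best *expected* benefit
    mean_a = sum(a for a, _ in draws) / n_sims
    mean_b = sum(b for _, b in draws) / n_sims
    value_current_info = max(mean_a, mean_b)
    # With perfect information: pick the best option in every scenario
    value_perfect_info = sum(max(a, b) for a, b in draws) / n_sims
    # EVPI = what perfect information would be worth, on average
    return value_perfect_info - value_current_info

print(f"EVPI: {evpi():.2f}")
```

If the EVPI exceeds the cost of gathering further evidence (say, a fuller review), more synthesis is worth doing; if not, the decision can reasonably be made on the evidence in hand – which is exactly the context-sensitivity argued for above.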
I really do think we are pretty backward in our thinking about evidence synthesis, in that we rarely articulate the context of the synthesis and the decision-making process it will support. The current default (thank heavens it’s changing) is to unthinkingly undertake an expensive, long-winded SR – in certain contexts this appears unethical…!