I was at the wonderful Evidence Live and presented on rapid reviews (RRs). One question came from Iain Chalmers, who asked about the potential for harm if health professionals followed the advice of an RR that was subsequently shown to be wrong. Later, in conversation, it became clear that ‘wrong’ meant a reversal of conclusion – so the systematic review (SR) might say the intervention is ineffective while the RR says it’s effective (or vice versa).
Harm is a really important consideration and – clearly – any RR method needs to be robust enough to minimise it. Iain did well to raise it. We (perhaps I mean ‘I’) can so easily get lost in the methodology and comparison side of things that we lose sight of what’s important.
But, in thinking of harm, there is a counter-balance: opportunity cost – that is, what you miss out on doing by choosing to do one piece of work rather than another.
We do SRs (and RRs) to inform decisions and to make those decisions better. Without SRs (or RRs) you are more likely to rely on single studies, or perhaps a biased sample of single studies – so you have the potential for a wrong answer and therefore associated harm.
Say you can do 5 RRs for the price of 1 SR (with my approaches it’s more like 50–100, but that’s perhaps another point). So, which is more harmful:
- 1 SR done which is unlikely to get a wrong answer but ignores 4 areas that could be covered by RRs – meaning an increased risk of harm in these 4 areas
- 5 RRs done, each with a small (but real) chance of getting a wrong answer
I’m biased and think it’s a no-brainer. I’ve yet to see a robust RR method that gets an answer ‘wrong’ in the sense described above. But for the sake of argument, say it could happen. Perhaps that should be part of the process of deciding which review method to use – weighing the likelihood and consequences of a wrong answer.
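The tradeoff above can be made concrete as a back-of-the-envelope expected-harm calculation. This is only a sketch: every probability below is an assumption I have invented for illustration, not an estimate from any study.

```python
# Illustrative expected-harm comparison: 1 SR vs 5 RRs for the same budget.
# ALL probabilities here are made-up assumptions, purely for illustration.

p_wrong_sr = 0.01    # assumed chance an SR reverses the true conclusion
p_wrong_rr = 0.05    # assumed (higher) chance an RR does so
p_wrong_none = 0.30  # assumed chance of a wrong answer with no review at all
n_questions = 5      # the budget covers 1 SR or 5 RRs

# Option A: 1 SR done well, but 4 questions left with no review
expected_wrong_sr = p_wrong_sr + (n_questions - 1) * p_wrong_none

# Option B: 5 RRs, each with a small chance of a wrong answer
expected_wrong_rr = n_questions * p_wrong_rr

print(f"Expected wrong answers, 1 SR + 4 gaps: {expected_wrong_sr:.2f}")
print(f"Expected wrong answers, 5 RRs:         {expected_wrong_rr:.2f}")
```

Under these (invented) numbers the RR strategy comes out well ahead, because the harm of leaving four questions unreviewed dominates. The point of the sketch is not the specific figures but that the comparison hinges on the relative error rates and on how harmful an unreviewed question is.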
I just wanted to put this post out while it was fresh in my mind – perhaps it’s thinking aloud! It’s a topic I want to come back to, but the struggle I have is understanding how we can answer this question.
For me it raises the possibility, from an opportunity cost perspective, that systematic reviews might cause harm.