I posted a post-Evidence Live blog last week which explored the notion of harms associated with doing rapid reviews (RRs). There is overlap with that post, but I’ve had time to reflect and hopefully this version is better written. I’ve also added a poll! It may need re-writing again, so if you think anything needs clarification, please let me know!
The question I was asked was about the harm of potentially getting a RR wrong. In other words, if the RR said ‘drug A’ was effective and a systematic review (SR) said it was ineffective – there’s your harm: patients receiving an ineffective treatment. The degree of harm would vary with the seriousness of the condition being treated. For the treatment of a graze the harm would be minimal, but for cancer it could be catastrophic.
I believe the meaning behind the question was to suggest that RRs are inferior and therefore only a SR will do – defending that position on the grounds of harm reduction.
As posted previously, the counter-balance to this is the opportunity cost. Say you can do one SR for the same cost as five RRs. For the SR to be the better choice, the reduction in harm in the single area it covers must exceed the combined reduction in harm across the five areas the RRs would have covered – otherwise the SR does more harm than the RRs. So the comparison is:
Reduction in harm due to one SR PLUS the harm associated with completely unsynthesised care in the four uncovered areas VERSUS the reduction in harm across five areas via RRs.
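To make the trade-off concrete, here is a minimal sketch of that comparison in Python. All the numbers are entirely hypothetical assumptions chosen for illustration – the point is the shape of the arithmetic, not the values:

```python
# Hypothetical comparison: one SR vs five RRs for the same cost.
# Every number below is an illustrative assumption, not data.

def net_benefit_sr(sr_reduction, unsynthesised_harm_per_area, uncovered_areas=4):
    """Harm reduction from the single SR, minus the harm left
    unaddressed in the areas that got no review at all."""
    return sr_reduction - unsynthesised_harm_per_area * uncovered_areas

def net_benefit_rrs(rr_reduction_per_area, areas=5):
    """Combined harm reduction from doing a rapid review in each area."""
    return rr_reduction_per_area * areas

# Suppose (hypothetically) an SR reduces harm by 10 units in its area,
# an RR by 8 units per area, and each unreviewed area carries 3 units
# of avoidable harm from unsynthesised care.
sr_total = net_benefit_sr(10, 3)   # 10 - 3*4 = -2
rr_total = net_benefit_rrs(8)      # 8 * 5 = 40
print(sr_total, rr_total)
```

Under these made-up numbers the five RRs come out well ahead; the SR would only win if its extra harm reduction in one area were large enough to outweigh leaving four areas unreviewed.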
I have yet to see any evidence that reasonable RRs cause a reversal in advice. Most published studies show very similar results, while – in a soon-to-be-published study I’m involved in – the worst that happened was either a change in significance (from significance to non-significance or vice versa) or a loss of data (where the RR would report no evidence, as opposed to the wrong answer).
So, the poll:
Please vote, comment, make suggestions to improve the post or the argument (one way or the other)!