Today marks a milestone. NICE has just released 'COVID-19 rapid evidence summary: acute use of nonsteroidal anti-inflammatory drugs (NSAIDs) for people with or at risk of COVID-19' and Cochrane have released 'Quarantine Alone or in Combination With Other Public Health Measures to Control COVID-19: A Rapid Review'.
Two significant producers of evidence reviews have now, after years of resistance, embraced the rapid review. Has the evidence changed to demonstrate rapid methods are acceptable? Clearly not. So, what has changed?
- Covid-19 is a global health threat and the decisions that need to be taken cannot be put off for months, let alone years.
- Both NICE and Cochrane want to support these decisions and have committed considerable resource to the task. Without this enthusiasm both would have risked being criticised for not helping!
But what about post-Covid-19 (assuming there is one)? What then? Will both retreat to the old ways, the slow ways?
I have long argued that rapid reviews provide ‘good enough’ evidence to support the majority of decisions. The desire/need/indulgence to ‘gold plate’ reviews seems a waste of resources. Which is better: 10 rapid reviews or 1 systematic review?
The main argument against rapid methods was: what if you’re wrong? Clearly, that’s an important issue and one that can’t be dismissed. Again, I have long argued that there is scant evidence that rapid methods ever get things wrong (although there are some issues around effect size estimates fluctuating, sometimes in and out of significance – but that is rare)! But by all means research the issue to help reassure decision makers.
But the concern about rapid methods, for now at least, is surely still an issue – as mentioned the evidence around rapid methods hasn’t changed. So, what gives? I can see two explanations:
- The concern about being wrong was simply a smokescreen, one that was hyped up and can now be conveniently jettisoned.
- The concerns are still real, but the concern about being wrong is diminished by the desire to be relevant.
But will the old ways, the slow ways return? I imagine they will, but this crisis will hopefully make people realise we can have a fresh approach to evidence synthesis, one that is more pragmatic and provides greater value. My long-favoured approach is to answer every question with a rapid method and then only use further resource (to make it systematic) should some threshold be reached, one linked to likely benefit. In other words, by committing extra resource (i.e. taking it from other potential reviews) are we sure it will help the decision-making process sufficiently to justify the extra ‘cost’?
If the threshold is breached, the decision then becomes: do you do a standard SR or a more complex one, perhaps based on clinical study reports or individual patient data? By having a more nuanced approach than a one-size-fits-all ‘let’s do a 1-2 year SR’ we can produce many more reviews, ensuring practice is more likely to be evidence based.
One thing’s for sure: systematic reviewers can no longer hold on to the view that rapid reviews are ‘dodgy’ – otherwise, why do them!?