Trading certainty for speed – how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews. Wagner G et al. BMC Medical Research Methodology 2017 17:121
The abstract is below, with my comment underneath:
Decisionmakers and guideline developers demand rapid syntheses of the evidence when time-sensitive, evidence-informed decisions are required. A potential trade-off of such rapid reviews is that their results can be less reliable than those of systematic reviews, which can lead to an increased risk of making incorrect decisions or recommendations. We sought to determine how much incremental uncertainty about the correctness of an answer guideline developers and health policy decisionmakers are willing to accept in exchange for a rapid evidence synthesis.
Employing a purposive sample, we conducted an international web-based, anonymous survey of decisionmakers and guideline developers. Based on a clinical treatment, a public health, and a clinical prevention scenario, participants indicated the maximum risk of getting an incorrect answer from a rapid review that they would be willing to accept. We carefully reviewed data and performed descriptive statistical analyses.
In total, 325 (58.5%) of 556 participants completed our survey and were eligible for analysis. The median acceptable incremental risk for getting an incorrect answer from a rapid review across all three scenarios was 10.0% (interquartile range [IQR] 5.0–15.0). Acceptable risks were similar for the clinical treatment (n = 313, median 10.0% [IQR 5.0–15.0]) and the public health scenarios (n = 320, median 10.0% [IQR 5.0–15.0]) and lower for the clinical prevention scenario (n = 312, median 6.5% [IQR 5.0–10.5]).
Findings suggest that decisionmakers are willing to accept some trade-off in validity in exchange for a rapid review. Nevertheless, they expect the validity of rapid reviews to come close to that of systematic reviews.
My major comment on this is that it assumes that systematic reviews can be relied upon to give accurate answers in the first place. The authors acknowledge this by calling it ‘a hypothetical assumption‘. We know this is not the case; I have discussed it frequently (for instance in this article), but the two main references are by Turner and Hart.
The results are really important – how often are users prepared to get a wrong answer? As the authors state:
“Participants of our survey, on average, viewed 10% as the maximum tolerable risk of getting an incorrect answer from a rapid review. In other words, respondents of our survey expect rapid reviews to provide answers similar to systematic reviews in at least nine out of ten cases.”
If the authors removed the contrast between ‘systematic‘ and ‘rapid‘ and simply asked how often respondents were willing to receive a ‘wrong’ answer from an evidence synthesis, that would be more meaningful. I’d also be interested to explore other factors such as cost (financial and time). For instance, would policy-makers want one systematic review in one year, or five rapid reviews in six months?
I’d also probe what constitutes ‘wrong’. Is it saying Drug A is better than Drug B when it’s the other way round? Or is it underestimating or overestimating an intervention’s worth? I’ve emailed the main author and will post the reply here when I receive it.
Nevertheless, this is another important paper, one that teases out key points in finding the appropriate place for rapid reviews in the evidence-synthesis landscape.