In the recent post on trading certainty for speed, I highlighted that the authors stated:
“Participants of our survey, on average, viewed 10% as the maximum tolerable risk of getting an incorrect answer from a rapid review”
My issue with this is that there was no definition of what ‘incorrect’ was. So, I emailed one of the authors:
“A fascinating paper, thank you. One thing I’m unsure about, how was the notion of an incorrect answer defined/explained?
“For instance, it might be that a rapid review says drug A is better than drug B but a systematic review says it’s the other way round. Alternatively, an RR might give an estimate of effect that is out by 10, 20, or 50%. It may get the correct drug (as in A is better than B) but it might under/over estimate the effect. Arguably this is also wrong!…”
The reply came quickly and he was happy for me to share his response:
“You are absolutely right, “incorrect answer” can mean various degrees of getting it wrong from drawing a wrong conclusion as a consequence of an incorrect answer to a mathematically incorrect answer because the relative risk might be just a little bit off. Because there are so many different ways to get “incorrect answers”, for the survey, we left the definition open. We stated in the beginning of the survey in general terms, however, that “Incorrect answers might lead to an increased risk of making incorrect decisions or recommendations”.
The goal of this survey was to establish a non-inferiority margin for a larger methods study that determined the impact of 13 different abbreviated search strategies on conclusions of Cochrane Reviews (protocol of the study is attached).”
For interest, the protocol mentioned is Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and metaepidemiologic study.
While I appreciate the pragmatism of leaving the definition of ‘incorrect answer’ open, the ambiguity also troubles me, as it creates uncertainty. Were the respondents worried about a wrong conclusion, or simply about being out a bit on the estimate of effect? These have different implications with regard to studying ‘rapid’ versus ‘systematic’ reviews. Simply agreeing with the direction of effect is a much lower standard than being, say, within 10% of the SR’s estimate of effect. I suspect the respondents had the former in mind rather than the latter, but I’ve no evidence for that!
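To make the two standards concrete, here is a minimal sketch with made-up numbers (the function names, thresholds, and example effect sizes are my own illustration, not anything from the paper):

```python
# Hypothetical illustration of two standards for calling a rapid review (RR)
# "incorrect" relative to a systematic review (SR). All numbers are invented.

def same_direction(rr_effect, sr_effect, null=1.0):
    """Lower standard: the RR agrees with the SR on the direction of effect,
    i.e. both estimates fall on the same side of the null value
    (e.g. a relative risk of 1)."""
    return (rr_effect > null) == (sr_effect > null)

def within_tolerance(rr_effect, sr_effect, tolerance=0.10):
    """Higher standard: the RR's estimate is within `tolerance`
    (here 10%) of the SR's estimate."""
    return abs(rr_effect - sr_effect) / abs(sr_effect) <= tolerance

# Suppose the SR finds a relative risk of 0.70 (drug A better than B)
# while the RR finds 0.85 (also favouring A, but a weaker effect).
sr, rr = 0.70, 0.85
print(same_direction(rr, sr))    # True: same conclusion (both below 1)
print(within_tolerance(rr, sr))  # False: |0.85 - 0.70| / 0.70 is about 21%
```

On the first standard this RR is ‘correct’; on the second it is ‘incorrect’, which is exactly why the choice of definition matters when judging a 10% tolerable risk.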
But using either definition of ‘incorrect’ gives those interested in rapid reviews a useful ‘target’ to aim for!