Straw man and the accuracy of rapid reviews

Straw man: A logical fallacy involving the purposeful misrepresentation of an argument in order to strike it down.

I was sent a paper to review earlier this week and was quite strong in my feedback, so I thought I should share my frustration and write a blog post – so here it is…

A central tenet of the paper was that rapid reviews (RRs) can be useful and can get close to systematic reviews (SRs) in terms of accuracy.  It was the comparison with SRs that irked me: it grants SRs a privileged status – one they don’t deserve – and that’s the straw man…

The perception of SRs as the most accurate form of evidence synthesis, the gold standard, is not evidence-based; it’s habit, it’s opinion – nothing more.  And, to be clear, by SR I mean the type that relies on published journal articles, the Cochrane-style ones – these represent the vast majority.  Yet we know there are more rigorous ones – I’ll call them SR+ – which are much more resource intensive: they use all trials (not just those published) and may use individual patient data or clinical study reports.  Due to the cost, these are not routine.  So, out of convenience/cost, we ‘settle’ for SRs; it’s a compromise – a compromise that is typically hidden and not discussed.

I’ve written extensively in this blog about comparisons between SRs and SR+ (e.g. The unreliability of systematic reviews).  This paper is a classic: Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses (Hart B et al. BMJ. 2012 Jan 3;344:d7202). It used FDA data to supplement the data found in published journal articles, reporting:

“Overall, addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy.”

These deviations can be charted:

This can be summarised as: 50% of the SRs are within 10% of the SR+ effect size, and the other 50% over- or under-estimate it by more than 10% (with an equal split between over- and under-estimation). Now – and here’s some speculation – if we assume that RRs are as inaccurate relative to SRs as SRs are relative to SR+, we can have a bit of fun. (Note: I actually think RRs are, in most cases, likely to be much closer – but that’s for another day, and a paper just submitted!)

So, based on the above scenario, we’re saying that 50% of RRs will over- or under-estimate the effect size by more than 10% compared with an SR, and 50% will be pretty close.  In other words, 50% of RRs will be pretty much as accurate as SRs – simple!

But what of the other 50% (out by more than 10%)?  If we assume, as in the Hart paper, that the under- and over-estimates are equally split, we can say the same for RRs.  This leads to this scenario:

For the SRs that are already out by more than 10%, the RR’s deviation from the SR is just as likely to pull it back towards the SR+ estimate as to push it further away. So, half the time the RRs will be more accurate than the SRs and half the time less so.
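To make the speculation concrete, here is a minimal Monte Carlo sketch (my own construction, not from the Hart paper or the reviewed manuscript). It assumes the SR+ estimate is the reference, that an SR’s error versus SR+ follows the 50/25/25 split above – with the out-by-more-than-10% cases spread arbitrarily between 10% and 30% – and that an RR’s error versus the SR it shadows follows the same distribution, independently.

```python
import random

# Error model assumed for this sketch: half of reviews land within +/-10%
# of the reference estimate; the other half are out by 10-30%, split evenly
# between over- and under-estimation. The 10-30% spread is an arbitrary
# illustrative choice, not taken from the Hart data.
def relative_error():
    """Draw a relative error: 50% within +/-10%, 25% over, 25% under."""
    if random.random() < 0.5:
        return random.uniform(-0.10, 0.10)
    sign = 1 if random.random() < 0.5 else -1
    return sign * random.uniform(0.10, 0.30)

random.seed(1)
runs = 100_000
rr_closer_overall = 0      # RR nearer to SR+ than its SR, across all runs
rr_closer_when_sr_off = 0  # ...counting only runs where the SR is out by >10%
sr_off = 0

for _ in range(runs):
    truth = 1.0                          # SR+ effect size, normalised
    sr = truth * (1 + relative_error())  # SR deviates from SR+
    rr = sr * (1 + relative_error())     # RR deviates from the SR it shadows
    closer = abs(rr - truth) < abs(sr - truth)
    rr_closer_overall += closer
    if abs(sr - truth) > 0.10:
        sr_off += 1
        rr_closer_when_sr_off += closer

print(f"RR closer to SR+ overall:                 {rr_closer_overall / runs:.0%}")
print(f"RR closer to SR+ when the SR is out >10%: {rr_closer_when_sr_off / sr_off:.0%}")
```

With these particular (assumed) numbers the second figure comes out close to a coin toss, which is the intuition behind the claim above; the overall figure depends heavily on how wide you assume the errors to be.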

To be clear, I’m not reporting this as a truth. I am speculating that there is a real possibility that, in some situations, RRs can be more accurate than SRs!

But the take-home messages from this post are:

  • When exploring the accuracy of RRs, comparing them to SRs is problematic and feels lazy. The truer test is how they compare to SR+.
  • In many cases RRs will be more accurate than SRs.
