To what extent does adding poor-quality ingredients to the review ‘bake’ mean we get a bad cake?

Marshall I, Marshall R, Wallace B, Brassey J, Thomas J. Rapid reviews may produce different results to systematic reviews: a meta-epidemiological study. J Clin Epidemiol. 2018 Dec 24.

I was delighted to be part of this study (which is open access, so the full text is here), which simulated the effects of various rapid review ‘shortcuts’ and the implications for the effect size estimates relative to the full systematic review. As the process was automated, it was able to explore thousands of reviews: “2,512 systematic reviews (16,088 studies) were included.”
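
To give a flavour of what simulating a ‘shortcut’ involves (my own sketch, not the authors’ code): take the studies included in a completed systematic review, keep only those a given shortcut would have retrieved, re-pool, and compare the two effect estimates. A minimal fixed-effect version in Python, with entirely hypothetical data:

```python
import math

def pool(log_effects, ses):
    """Fixed-effect inverse-variance pooling of log effect sizes."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Hypothetical review: (log odds ratio, standard error) per study.
studies = [(-0.40, 0.20), (-0.10, 0.15), (-0.35, 0.25), (-0.05, 0.10)]

# Full systematic review: pool everything.
full, _ = pool([e for e, _ in studies], [se for _, se in studies])

# One possible 'shortcut': keep only the two most precise (largest) studies.
kept = sorted(studies, key=lambda s: s[1])[:2]
rapid, _ = pool([e for e, _ in kept], [se for _, se in kept])

print(f"full OR {math.exp(full):.2f} vs shortcut OR {math.exp(rapid):.2f}")
```

Repeat that across a few thousand reviews and you can count how often the shortcut estimate differs from the full one by a moderate or greater amount.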

It confirmed what previous studies have shown: that, generally, the results are the same. [NOTE: Iain (the lead author) has a different perspective and prefers this wording: “It confirmed what previous studies have shown: that, generally, the results are similar in most cases.” He also wanted me to add that “this is for the best performing methods; the worst performing were way worse.” I think this gives balance and reflects the different perspectives.]

With the best-performing rapid review method, a difference counted as ‘moderate or greater’ occurred in 10% of cases. Depending on your perspective, this might be good or bad:

  • SR perspective: this is catastrophic; think of the potential harm.
  • RR perspective: that’s really encouraging. We need to understand the 10% better: what proportion are clinically important differences, and can we predict which reviews fall into the 10%? Also, does this mean the 90%, if done as a full SR, equates to waste?

One passage is particularly interesting:

Studies ‘found’ by the rapid methods were significantly more likely to have a low risk of bias than those ‘lost’

This mirrors what Egger found back in 2003:

“Trials that are difficult to locate tend to be smaller and of lower methodological quality than those that are easier to find and published in English.” There are other examples, on this blog, where this sort of thing is discussed.

Dechartres explored this in the JAMA paper Association Between Analytic Strategy and Estimates of Treatment Outcomes in Meta-analyses, where they compared treatment outcomes estimated by meta-analysis of all trials with several alternative analytic strategies, one of which was “meta-analysis restricted to trials at low overall risk of bias.” In the results section they report:

Meta-analysis of All Trials vs Meta-analysis Restricted to Trials at Low Overall Risk of Bias
This analysis is based on 41 meta-analyses of subjective outcomes and 40 of objective outcomes, including at least 1 trial at low overall risk of bias. Overall, we found no significant difference between treatment outcomes from meta-analysis of all trials and from meta-analysis restricted to trials at low overall risk of bias for subjective outcomes (ROR, 0.94 [95% CI, 0.86-1.04], P = .23) and a significant difference for objective outcomes (ROR, 1.03 [95% CI, 1.00-1.06], P = .048).

However, in the conclusion they report: “In our study, treatment outcomes were larger for trials at high or unclear risk of bias than for trials at low risk for sequence generation, allocation concealment, and blinding, which is consistent with the BRANDO study combining data from several metaepidemiologic studies. However, we did not find any differences in treatment outcomes by overall risk of bias.”
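
As an aside, for readers unfamiliar with the ratio of odds ratios (ROR) reported in the results above: it is simply the pooled odds ratio from one analytic strategy divided by the pooled odds ratio from the other, handled on the log scale. A minimal Python sketch, using hypothetical numbers (not figures from either paper) and the simplifying assumption that the two pooled estimates are independent, when in reality the restricted analysis shares trials with the full one:

```python
import math

def ror_with_ci(or_all, se_all, or_restricted, se_restricted, z=1.96):
    """Ratio of odds ratios (ROR) comparing two pooled estimates.

    Computed on the log scale. The CI assumes the two estimates are
    independent, which is a simplification: a restricted meta-analysis
    shares trials with the full one, so they are correlated.
    """
    log_ror = math.log(or_all) - math.log(or_restricted)
    se = math.sqrt(se_all**2 + se_restricted**2)  # independence assumption
    return tuple(math.exp(x)
                 for x in (log_ror, log_ror - z * se, log_ror + z * se))

# Hypothetical inputs for illustration only:
ror, low, high = ror_with_ci(or_all=0.62, se_all=0.05,
                             or_restricted=0.66, se_restricted=0.08)
print(f"ROR {ror:.2f} (95% CI {low:.2f} to {high:.2f})")
```

An ROR close to 1, with a CI that includes 1, is what “no significant difference” means in the passage above; the further the ROR sits from 1, the more the analytic choice shifts the apparent treatment effect.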

I’m not sure where that leaves us, but I do wonder: to what extent does adding poor-quality ingredients to the review ‘bake’ mean we get a bad cake?
