Small trials in evidence synthesis

Bottom line: The inclusion of small studies introduces a whole host of problems with little obvious gain. So don’t waste time or money trying to locate them all. In most cases, less can be more!

A recent letter in The Lancet [1] caused some controversy by suggesting that systematic reviews can sometimes increase research waste by promoting underpowered trials. The authors report:

Efforts by Cochrane and others to locate all trials have meant that many low-quality, single-centre trials, often with inaccuracies, are easily accessible. Most meta-analyses are dominated by such trials. The median number of trials in Cochrane reviews is six to 16, and the median number of patients per trial is about 80. Inclusion of such trials in meta-analyses results in inflated treatment effects. Small trials are prone to publication and other selection biases, are often low quality, and, because single-centre trials have less oversight than multicentre trials, they are more susceptible to misconduct.

They end with this:

Employing legions of reviewers to pick through the detritus in the hope of extracting morsels of useful information might not be the best use of resources.

This letter provoked a response from Cochrane [2], in which they state:

We suggest that there is considerable awareness of the challenges of using small trials and that adherence to standard Cochrane methods helps counter the concerns surrounding the inclusion of small trials.

The authors then discuss a Cochrane review on proximal humeral fractures, which they say highlighted the lack of evidence up until the inclusion of “…evidence from a sufficiently powered, good-quality, multicentre randomised trial (the ProFHER trial)”. Cochrane argue that the inclusion of small trials highlighted the poor state of the evidence and helped prompt the large trial to be funded. They use a similar argument in the case of a review of electromechanical arm training after stroke: that a number of poor studies, included in the review, helped prompt a fuller trial.

The authors conclude that “…such reviews still serve a crucial role by highlighting the evidence deficiency for key questions that merit the investment of substantive research“. I find myself scratching my head at this conclusion. If they had ignored (or had not found) the small trials, would the conclusion have been any different? Would finding 15 small trials, or none at all, have made any difference to the recommendation that a large trial is needed? No! So it’s hardly a compelling reason to include small trials; is that really the best argument on offer?

Moving on, two papers from 2013 explore the effects of small trials on meta-analyses. The first, ‘The Impact of Study Size on Meta-analyses: Examination of Underpowered Studies in Cochrane Reviews’ [3], starts with the statement:

Most meta-analyses include data from one or more small studies that, individually, do not have power to detect an intervention effect.

The authors highlight the problems with small studies and report on previous work:

Some researchers argue for excluding small studies from meta-analyses. Specifically to reduce the effects of publication bias, Stanley suggested discarding 90% of the study estimates, so that conclusions are based on only the most precise 10% of studies. Earlier, Kraemer proposed including only adequately powered studies in meta-analysis, both to remove publication bias and to discourage future researchers from carrying out small studies. In teaching, Bird has long advocated that trials should not be started unless they could deliver at least 50% power in respect of a priori plausible, worthwhile effect sizes.
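
To make the notion of an “adequately powered” trial more concrete, here is a minimal sketch in Python (using statsmodels) of how one might check a two-arm trial’s power against an a priori plausible effect size. The standardised effect size of 0.3 and the sample sizes are my own illustrative assumptions, not figures taken from the papers discussed here:

    # Illustrative only: power of a two-arm trial for an assumed standardised
    # effect size (Cohen's d). Effect size and sample sizes are made up.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    assumed_effect = 0.3  # an "a priori plausible, worthwhile" effect (assumed)
    alpha = 0.05

    for n_per_arm in (40, 100, 250):  # 40 per arm roughly matches the ~80-patient median trial
        power = analysis.power(effect_size=assumed_effect, nobs1=n_per_arm,
                               ratio=1.0, alpha=alpha)
        print(f"n = {n_per_arm} per arm -> power = {power:.2f}")

    # Sample size per arm needed for 80% power under the same assumptions
    n_needed = analysis.solve_power(effect_size=assumed_effect, power=0.8, alpha=alpha)
    print(f"~{n_needed:.0f} patients per arm needed for 80% power")

On these assumed numbers, a trial of around 80 patients in total has well under 50% power, which is the sense in which such trials are described as underpowered.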

In the paper the authors studied the contribution of underpowered trials to nearly 15,000 meta-analyses from 1,991 Cochrane systematic reviews.  They reported:

In 10,492 (70%) of 14,886 meta-analyses, all included studies were underpowered; only 2,588 (17%) included at least two adequately powered studies. 34% of the meta-analyses themselves were adequately powered.

The conclusions were the most interesting:

In conclusion, we found that underpowered studies play a very substantial role in meta-analyses reported by Cochrane reviews, since the majority of meta-analyses include no adequately powered studies. In meta-analyses including two or more adequately powered studies, the remaining underpowered studies often contributed little information to the combined results, and could be left out if a rapid review of the evidence is required.

But they highlight that the purpose of the systematic review matters. If you want to include ALL trials, then “…the objective of meta-analysis is to resolve uncertainty by combining all available evidence and investigating reasons for between-study heterogeneity, and it would be inappropriate to leave out smaller studies”.

The final paper, ‘Influence of trial sample size on treatment effect estimates: meta-epidemiological study’ [4], sought to “…assess the influence of trial sample size on treatment effect estimates within meta-analyses”. As with the other analyses, they report that small studies give larger treatment effects; the smallest trials had treatment effects, on average, 32% larger than those in the biggest trials. They conclude:

Effect estimates differed within meta-analyses solely based on trial sample size, with, on average, stronger effect estimates in small to moderately sized trials than in the largest trials. These stronger effects might not reflect the true treatment effect; therefore, robustness of the conclusions of a meta-analysis…

They finish with:

Reviewers and readers can easily check whether the result for the overall meta-analysis agrees with the results for the largest trials (that is, those in quarter 4 of sample size). Interpretation of the pooled result should be cautioned when this is not the case. More generally, our results raise questions about how meta-analyses are currently performed, especially whether all available evidence should be included in meta-analyses because it could lead to more beneficial results.
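
As a rough sketch of what that check might look like in practice, here is a minimal fixed-effect (inverse-variance) pooling of some invented trial results, comparing the estimate from all trials with the estimate from the largest quartile (“quarter 4” of sample size) only. All of the numbers below are hypothetical:

    # Illustrative sketch only: compare a pooled estimate from all trials with one
    # restricted to the largest quartile of trials (by sample size). The trial
    # data below are invented purely for demonstration.
    import numpy as np

    # (sample size, log odds ratio, standard error) for each hypothetical trial
    trials = [
        (40,  -0.60, 0.45), (55,  -0.55, 0.40), (70,  -0.50, 0.35),
        (90,  -0.40, 0.30), (150, -0.25, 0.22), (220, -0.20, 0.18),
        (600, -0.10, 0.10), (900, -0.08, 0.08),
    ]

    def pooled_log_or(subset):
        """Fixed-effect (inverse-variance) pooled log odds ratio."""
        weights = np.array([1 / se ** 2 for _, _, se in subset])
        effects = np.array([est for _, est, _ in subset])
        return np.sum(weights * effects) / np.sum(weights)

    sizes = np.array([n for n, _, _ in trials])
    q4_cutoff = np.quantile(sizes, 0.75)  # threshold for "quarter 4" of sample size
    largest = [t for t in trials if t[0] >= q4_cutoff]

    print(f"All trials:       pooled OR = {np.exp(pooled_log_or(trials)):.2f}")
    print(f"Largest quartile: pooled OR = {np.exp(pooled_log_or(largest)):.2f}")

On these made-up numbers the small trials pull the pooled odds ratio further from the null than the largest trials alone would suggest, which is exactly the sort of discrepancy the authors say should prompt a cautious interpretation.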

So, should we include small trials in rapid and/or systematic reviews? Is this a case of garbage in, garbage out? In other words, is lumping together lots of small, unreliable trials really a good idea? Just because you can statistically combine the results doesn’t make it the right thing to do.

Finding and reporting on small trials adds, as far as I can tell:

  • Data that is likely to be unreliable.
  • Considerable extra workload/cost – summed up wonderfully – “Employing legions of reviewers to pick through the detritus in the hope of extracting morsels of useful information might not be the best use of resources.”
  • Unknown benefit. I have yet to see a compelling reason for including small studies, other than from those who desire to find and synthesise ALL trials (which seems a philosophically absurd position). Although, pragmatically, it may be that summarising multiple small trials is beneficial, if there is reasonable homogeneity of results.

An important issue is the notion of transparency – being clear about the reasons for including or not including small studies.  It does seem clear that, if there are large trials, little can be gained from including smaller trials.  This also fits in with a comment by Caroline Struthers on this blog [5] where she stated:

If I ruled the world, meta-analysis would be restricted to trials at overall low risk of bias

I’m not sure whether small trials – although unreliable – are necessarily biased (I’m no expert on the semantics of these issues), but surely the point is to rely only on robust studies, which typically means larger ones. NOTE: I appreciate that large trials can still show considerable bias.

This also links in nicely with papers such as those by Glasziou and Tam, which report on the frequency of agreement between the meta-analysis and either the largest trial [6] or the first (and sometimes last) significant trial [7]. These reinforce the view that larger trials are inherently better and can typically be used as a good proxy for a ‘full-scale’ (AKA long-winded and costly) systematic review.

In summary, and relating this back to rapid reviews, I don’t think there’s much justification for going after small-scale studies, especially if locating them is particularly troublesome (costly). This is based on the – reasonable – assumption that larger trials are easier to find; they should certainly be more visible. If small studies are found during a search, there needs to be a clear reason for including them in any synthesis.

References

  1. Roberts I et al. How systematic reviews cause research waste [correspondence]. The Lancet. Vol 386, 17 October 2015.
  2. Handoll HHG et al. In defence of reviews of small trials: underpinning the generation of evidence to inform practice [editorial]. Cochrane Database of Systematic Reviews 2015;(11).
  3. Turner RM et al. The Impact of Study Size on Meta-analyses: Examination of Underpowered Studies in Cochrane Reviews. PLoS ONE 2013;8(3):e59202.
  4. Dechartres A et al. Influence of trial sample size on treatment effect estimates: meta-epidemiological study. BMJ 2013;346:f2304.
  5. Struthers C. Comment on “List of articles: exploring trial variables and effect on meta-analysis” (this blog). 2015.
  6. Glasziou PP et al. Can we rely on the best trial? A comparison of individual trials and systematic reviews. BMC Med Res Methodol 2010;10:23.
  7. Tam WWS et al. How Often Does an Individual Trial Agree with Its Corresponding Meta-Analysis? A Meta-Epidemiologic Study. PLoS ONE;9(12):e113994.

3 thoughts on “Small trials in evidence synthesis”

  1. It may be appropriate to exclude small trials, and especially those which are grossly under-powered, from meta-analysis but one of the strengths of systematic review is surely the attempt to find ALL trials using an open and defined methodology. Yes it may be philosophically impossible to achieve this, and there’s plenty of evidence that even well conducted searches miss a lot, but at least the current standard methodology imposes a baseline degree of thoroughness in finding what is out there. Once you have the trials it’s then impossible to tell which should be included/excluded without at least a superficial scan by reviewers – I don’t think it would be fair to simply discard papers automatically using a sample size threshold.

    What I think we could do as reviewers is be rather more ruthless in excluding from meta-analysis all but the most methodologically reliable trials and where there are none we should be a bit more blunt in saying the available evidence is useless – standard Cochrane wording, in my view at least, is a bit too generous in writing about ‘Limited and poor quality evidence supports….’ in summary paragraphs.

