Turner RM, Bird SM, Higgins JP. PLoS One. 2013;8(3):e59202.
It’s an old paper, but one someone pointed out to me recently, and it mentions rapid reviews in a number of places, notably:
“When at least two adequately powered studies are available in meta-analyses reported by Cochrane reviews, underpowered studies often contribute little information, and could be left out if a rapid review of the evidence is required.”
“Where a rapid review of the evidence is required and if several large, high-quality studies have been found in initial searches, it may be justifiable to truncate the searching and perform the synthesis, since inclusion of more obscure, smaller studies is unlikely to change the conclusions of the review.”
This is related to work I did with Paul Glasziou a while back: “Can we rely on the best trial? A comparison of individual trials and systematic reviews”. In that study we found that if the largest trial was positive and significant, then the subsequent meta-analysis was, nearly all the time, also positive and significant.
Surely it shouldn’t be too complex to write some heuristics that let you restrict an evidence review so you don’t, unethically, waste resources hunting down extra trials that will bring little or no benefit. Something like: perform a search for RCTs for the chosen intervention/condition and then sort the results by sample size; these two steps can be automated. The next bit is fuzzier for me, and it could look like one of these:
- Pick the largest ‘n’ trials, where ‘n’ is defined by some rule relating to the trials accounting for a certain percentage of the total sample. For example, the top 3 trials might account for, say, 80% of the total sample size across all trials, and you conclude that the remaining 20% is unlikely to alter the results by any appreciable amount. This step can be empirically tested.
- Pick the biggest trial and take that as the likely answer of any meta-analysis (especially if it’s positive and significant), then check whether the next ‘n’ largest trials agree. If they do, stop; if not, explore the disagreement and decide (somehow) whether it’s worth digging deeper down the list.
The above is a bit vague, but you get the point. As mentioned, the first steps (finding RCTs and sorting by sample size) are already pretty much automated, so I can’t help wondering how hard it can be…
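To make the two heuristics above a little more concrete, here is a minimal sketch in Python. Everything here is an assumption for illustration: the `Trial` record, the 80% coverage default, and the idea of checking only the direction of effect for agreement are all placeholders that would need empirical tuning, not a worked-out method.

```python
# Hypothetical sketch of the two stopping heuristics described above.
# A trial is reduced to its sample size plus the direction and
# significance of its result; real data would need an effect size too.
from dataclasses import dataclass

@dataclass
class Trial:
    n: int              # sample size
    positive: bool      # effect favours the intervention
    significant: bool   # statistically significant in the trial's own analysis

def top_trials_by_sample(trials, coverage=0.8):
    """Heuristic 1: keep the largest trials until they account for
    `coverage` of the total sample size; drop the remainder."""
    ordered = sorted(trials, key=lambda t: t.n, reverse=True)
    total = sum(t.n for t in ordered)
    kept, running = [], 0
    for t in ordered:
        kept.append(t)
        running += t.n
        if running >= coverage * total:
            break
    return kept

def largest_trial_heuristic(trials, n_check=2):
    """Heuristic 2: take the largest trial's result as the provisional
    answer; report whether the next `n_check` largest trials agree in
    direction. Disagreement flags the review for deeper searching."""
    ordered = sorted(trials, key=lambda t: t.n, reverse=True)
    biggest, rest = ordered[0], ordered[1:1 + n_check]
    agree = all(t.positive == biggest.positive for t in rest)
    return biggest, agree

# Toy example: four trials sorted-to-be by size.
trials = [
    Trial(n=1000, positive=True, significant=True),
    Trial(n=400, positive=True, significant=False),
    Trial(n=100, positive=False, significant=False),
    Trial(n=50, positive=True, significant=False),
]
kept = top_trials_by_sample(trials)          # top 2 trials cover >80% of 1550
biggest, agree = largest_trial_heuristic(trials)
```

The “(somehow)” in the second heuristic is the genuinely hard part: when `agree` comes back false, something still has to decide whether the disagreement matters, which is exactly the fuzzy step the post identifies.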