Why do we do systematic reviews? Part 5

Links to the previous four articles in this series can be found in the reference section below [1-4].

The series explores the reasons for undertaking a systematic review, of which four main reasons seem popular.  The second most popular reason, with 24% of the votes at the time of writing, is ‘To know if an intervention has any worth’.  Looking back, I regret the ambiguity of the wording of the question.  It was intended to contrast with the third most popular reason (‘To quantify, quite tightly, how good the intervention is’).  That question relates to a very precise measure of the effectiveness of an intervention, while this question – simply asking about ‘worth’ – is broader.

So, how do we know if an intervention is any good?  How do we know if the intervention (call it Drug A) is more effective than, say, placebo?  Do we really need a long-winded systematic review?  I would say that in most cases we absolutely do not.  And, when you do need a more long-winded solution, is a systematic review (based on published journal articles) really the most appropriate approach?

I think the issue here relates to being systematic and having an unbiased search.  From the start, if you’re relying on published journal articles, then you’ve got a bias of published versus unpublished, and from the previous article in this series [4] you can understand the implications.  ‘Unbiased’ and ‘all trials’ are not synonyms.  You can have an unbiased sample of published trials; it just requires the searching mechanism not to favour a particular trial with a particular result.  Most of biomedicine relies on samples of a population, yet typical systematic reviews seem to avoid this and rely on all trials (which we know actually means ‘all published trials’).  Again, from the previous article [4] we know that the evidence to date indicates that using a sample of trials, when updating or undertaking a systematic review, seems to have no effect on the results.  So, why not sample?  Those wishing to find all trials surely need to justify expending more resource when the gain is little or none!
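As a rough illustration of the point about unbiased sampling (this is a hypothetical sketch, not a tool mentioned in the article): the key property is that the selection mechanism never looks at a trial's result, so it cannot favour positive or negative findings. In Python that might look like:

```python
import random

def sample_trials(published_trials, k, seed=42):
    """Draw an unbiased random sample of k published trials.

    'Unbiased' here means every trial has the same chance of selection
    regardless of its result -- the sampling never sees the outcome,
    so it cannot favour trials with a particular finding.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(published_trials, k)

# Hypothetical trial identifiers; real work would use registry IDs.
trials = [f"trial_{i:03d}" for i in range(1, 51)]
subset = sample_trials(trials, k=10)
```

The same logic applies whatever the sampling frame: the argument in the paragraph above is about the mechanism being outcome-blind, not about any particular implementation.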

So, for Drug A – we want to know if it has any worth.  Using the Trip Rapid Review system [5] will get you a pretty reasonable result in five minutes.  While this system is experimental, new iterations are likely to see dramatic improvements.  Alternatively, various manual methods can be used to undertake a very specific search (as opposed to a sensitive search) and find most trials really quickly.  A quick reading of the abstracts can start to build a reasonable picture.  Critical appraisal of the articles will add another dimension, and using something like RobotReviewer [6] can make the assessment of bias almost instant.

If you add in an element of pragmatism (short-hand for using heuristics), you can really rapidly demonstrate the worth of an intervention.  For instance, if the largest trial is significant and has a low risk of bias, then you can be reasonably sure that a subsequent review (rapid or systematic) will find the same result [7].
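The largest-trial heuristic above can be sketched in a few lines of code (a hypothetical illustration only; the trial records and field names are invented, not from [7]):

```python
def largest_trial_heuristic(trials):
    """Pragmatic short-cut: inspect only the largest trial.

    If the largest trial shows a statistically significant effect and
    has a low risk of bias, a subsequent review (rapid or systematic)
    is likely to reach the same conclusion, so a quick 'worth'
    judgement can be made without a full review.
    """
    largest = max(trials, key=lambda t: t["n"])  # biggest sample size
    if largest["significant"] and largest["risk_of_bias"] == "low":
        return "likely worthwhile"
    return "inconclusive - fuller assessment needed"

# Hypothetical trial records: sample size, significance, bias rating.
trials = [
    {"n": 120,  "significant": False, "risk_of_bias": "high"},
    {"n": 2400, "significant": True,  "risk_of_bias": "low"},
    {"n": 310,  "significant": True,  "risk_of_bias": "unclear"},
]
verdict = largest_trial_heuristic(trials)
```

Note the heuristic is deliberately conservative: anything other than a significant, low-risk-of-bias largest trial falls through to "needs a fuller assessment".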

Bottom line: It really doesn’t need to be complicated or long-winded, in most situations, to understand if an intervention has any ‘worth’.


  1. Why do we do systematic reviews?
  2. Why do we do systematic reviews? Part 2
  3. Why do we do systematic reviews? Part 3
  4. Why do we do systematic reviews? Part 4
  5. Trip Rapid Review System
  6. RobotReviewer: auto-assessment of bias in clinical trials
  7. Small trials in evidence synthesis