New post: Testing the effectiveness of simplified search strategies for updating systematic reviews

Testing the effectiveness of simplified search strategies for updating systematic reviews. Rice M et al. J Clin Epidemiol. 2017 Aug;88:148-153

Premise: To test the overall effectiveness of a simplified search strategy (SSS) for updating 9 systematic reviews.

Conclusion: SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly considering the budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review.


Comment: My take-home is that you can save an awful lot of time using a simple search strategy.  Interestingly, the authors draw a distinction between focused clinical topics and broader clinical questions.  For the former, the simplified strategy returned virtually identical results (based on returned citations) with significantly less effort.  For the latter, sensitivity dropped to around 80%, which they comment may be too low for a systematic review.

I get frustrated with this framing, for two main reasons:

  • I’m not overly bothered about the proportion of papers retrieved; I care about the answer and getting to it quickly.  So the real issue is whether the missing 20% changes the result or conclusion, and that is not addressed.
  • This focus on 100% recall is bizarre when you consider that around 50% of trials are never published (see this post for instance).  What the authors are, in effect, saying is that 80% of the published studies (roughly 40% of all studies) may be too low, compared with 100% of the published studies (which is still only around 50% of all studies).  100% versus 80% is really 50% versus 40%, as the sketch after this list makes explicit.  Which returns me to the point above: does it make any difference?  We don’t know, so we cling, like a security blanket, to this 100% myth!
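To make that arithmetic explicit, here is a minimal sketch. The 50% publication rate is the figure cited above; the function name and figures are purely illustrative, not taken from the paper:

```python
# Minimal sketch of "effective recall": the fraction of ALL relevant studies
# (published or not) that a search can ever find. The 50% publication rate is
# the figure cited above; names and numbers are illustrative only.

def effective_recall(search_sensitivity: float, publication_rate: float = 0.5) -> float:
    """Sensitivity against published studies x proportion of studies actually published."""
    return search_sensitivity * publication_rate

comprehensive = effective_recall(1.0)  # "100% recall" search -> 50% of all studies
simplified = effective_recall(0.8)     # 80%-sensitive SSS    -> 40% of all studies

print(f"Comprehensive search: {comprehensive:.0%} of all relevant studies")
print(f"Simplified search:    {simplified:.0%} of all relevant studies")
```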

But this is a really useful paper, demonstrating that pragmatic searches should be the future and that traditional literature searches for systematic reviews might even be unethical, as they potentially cause significant waste.

One thought on “New post: Testing the effectiveness of simplified search strategies for updating systematic reviews”

  1. Yes! In my view, all aspects of clinical research should be pragmatic. It is indeed unethical to spend time and money on a comprehensive search of the published literature when 50% of research is never published. Of the research that is published, much is reported badly and is biased through selective reporting, non-reporting of harms and so on. There is, of course, also bias in published research that has been badly designed, even if it is well reported. If it’s a quick, accurate answer you’re after, it would be quicker to find a well-conducted and fully reported primary study that fits your question and population.

