Créquit P, et al. Wasted research when systematic reviews fail to provide a complete and up-to-date evidence synthesis: the example of lung cancer. BMC Medicine 2016;14:8.
Should I admit to liking this article? I enjoyed it because it highlights the folly of relying on high-cost systematic reviews that are not fit for purpose.
The background of the paper is:
“Multiple treatments are frequently available for a given condition, and clinicians and patients need a comprehensive, up-to-date synthesis of evidence for all competing treatments. We aimed to quantify the waste of research related to the failure of systematic reviews to provide a complete and up-to-date evidence synthesis over time.”
And the authors concluded:
“We illustrate how systematic reviews of a given condition provide a fragmented, out-of-date panorama of the evidence for all treatments. This waste of research might be reduced by the development of live cumulative network meta-analyses.”
Equally damning was this passage from the discussion:
“Our comparison of the amount of randomized evidence covered by systematic reviews and all randomized trials available for inclusion revealed a substantial waste related to the failure of systematic reviews to accumulate evidence scientifically: the evidence covered by existing systematic reviews on the topic was always substantially incomplete, with 40 % or more of treatments, treatment comparisons, and trials missing.”
The authors discuss the reasons for this waste and put forward the notion of living systematic reviews, while also acknowledging the challenges that such an approach creates.
Overall, this is a rich and thoughtful article. It is challenging for anybody interested in evidence synthesis – whether you favour rapid or long-winded approaches.