A simple experiment: searching PubMed for mentions of rapid reviews over time, and doing the same for systematic reviews. For my own sake I lumped (technical term) some earlier dates together, giving the following results for rapid reviews: 1980–89: 10; 1990–1999: 25; 2000–2004: 32; 2005–2009: 31; 2010: 10; 2011: 14; 2012: 11; 2013: 18; 2014: … Continue reading The rise of rapid reviews? Growth compared with systematic reviews
We’ve had 150 votes as to why we do systematic reviews (see this article for details) and the results are: to see what has been done before, to see if new research is needed – 25.33%; to know if an intervention has any ‘worth’ – 24.67%; to quantify, quite tightly, how good the intervention is – 23.33%; to understand the adverse events associated with the intervention … Continue reading Why do we do systematic reviews? Part 2.
Reviews: Rapid! Rapid! Rapid! …and systematic. Schünemann HJ et al. Systematic Reviews 2015, 4:4 This editorial introduced the series Advances in Rapid Reviews in the journal Systematic Reviews. It gives a nice introduction to the challenges facing systematic reviewers who want to undertake reviews more rapidly. The authors highlight the importance of using strategies to reduce bias and random error and the need for transparency. Transparency … Continue reading Article review: Reviews: Rapid! Rapid! Rapid! …and systematic
This might seem a really obvious question but it’s one I really struggle with. So, this post is a request for help! Note: the post relates to systematic reviews of individual interventions as opposed to the broader outcome-focussed systematic reviews (e.g. what’s effective in helping people quit smoking?) I get the impression that people embark on systematic reviews with little thought to the reasons behind the review; … Continue reading Why do we do systematic reviews?
David Moher, Adrienne Stevens and Chantelle Garritty give a nice overview and examples of rapid reviews. Continue reading Video: Introduction to Rapid Reviews
Bottom line: Systematic reviews, based on published journal articles, cannot be relied upon to be accurate. Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy Turner EH et al. N Engl J Med. 2008 Jan 17;358(3):252-60. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses Hart B et al. BMJ. 2012 Jan 3;344:d7202. These are extremely important papers … Continue reading The unreliability of systematic reviews
Yesterday was the last day of a really interesting two-day symposium on automation and systematic reviews in Bristol. The main participants were computer scientists and systematic reviewers; I belonged in the relatively small ‘other’ group. It struck me that the focus was on breaking down the steps of systematic reviews (as seen in a few papers, one reviewed on this blog – click here) and … Continue reading Two main fronts on the speeding up of systematic reviews
In 2014 Guy Tsafnat, Paul Glasziou and others wrote the paper Systematic review automation technologies, in which the authors “…surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review.” It is an excellent overview of the potential of automation in the systematic review process. They do this by breaking down the systematic review … Continue reading Article review: Systematic review automation technologies
This review links to two articles published on the Trip Database’s ‘Liberating the literature’ blog. Conflict of interest: I wrote both of them. The first article was published in April 2012 and was a list of articles comparing rapid versus systematic methods. The second article (published shortly afterwards) was a list of lessons learned, which I reproduce below: Lesson 1: The notion of … Continue reading Rapid versus systematic reviews