Why do we do systematic reviews? Part 4

This is the 4th article in the ‘Why do we do systematic reviews?’ series (see references below for previous articles [1, 2, 3]). The series explores the reasons for undertaking a systematic review, with four main reasons proving popular. The third most popular reason, with 23.6% of the votes at the time of writing, is ‘To quantify, quite tightly, how good the intervention is’. …

Why do we do systematic reviews? Part 3

This is the 3rd article in the ‘Why do we do systematic reviews?’ series (see references below for numbers 1 and 2). The series explores the reasons for undertaking a systematic review in the first place, with four main reasons proving popular. Number four (with 19.5% of the votes at the time of writing) is ‘To understand the adverse events associated with the …

Why do we do systematic reviews? Part 2

We’ve had 150 votes as to why we do systematic reviews (see this article for details) and the results are:

- To see what has been done before, to see if new research is needed – 25.33%
- To know if an intervention has any ‘worth’ – 24.67%
- To quantify, quite tightly, how good the intervention is – 23.33%
- To understand the adverse events associated with the intervention …

Why do we do systematic reviews?

This might seem a really obvious question but it’s one I really struggle with. So, this post is a request for help! Note: the post relates to systematic reviews of individual interventions, as opposed to broader outcome-focussed systematic reviews (e.g. what’s effective in helping people quit smoking?). I get the impression that people embark on systematic reviews with little thought to the reasons behind the review; …

Small trials in evidence synthesis

Bottom line: the inclusion of small studies introduces a whole host of problems with little obvious gain. So, don’t waste time/money trying to locate them all. In most cases less can be more! A recent article in the Lancet [1] caused some controversy by suggesting that systematic reviews can sometimes increase waste by promoting underpowered trials. The authors report: “Efforts by Cochrane and others to locate all …

An important point – not all systematic reviews are the same

The Evidence Synthesis Team at PenCLAHRC highlighted an important point when they replied to a tweet of mine with the following: ‘Are you just thinking about reviews of effectiveness? Or more complex research questions?’ I am frequently guilty of treating all systematic reviews as the same, when they’re clearly not. I all too often fall into the trap of seeing all systematic reviews as being …

Heuristics

Two papers, two years apart, covering closely related territory:

- Can we rely on the best trial? A comparison of individual trials and systematic reviews. Glasziou PP et al. BMC Med Res Methodol. 2010 Mar 18;10:23.
- How Often Does an Individual Trial Agree with Its Corresponding Meta-Analysis? A Meta-Epidemiologic Study. Tam WWS et al. PLoS ONE 9(12): e113994.

Heuristics (from Wikipedia): “A heuristic technique, often called …

The unreliability of systematic reviews

Bottom line: systematic reviews, based on published journal articles, cannot be relied upon to be accurate.

- Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. Turner EH et al. N Engl J Med. 2008 Jan 17;358(3):252-60.
- Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. Hart B et al. BMJ. 2012 Jan 3;344:d7202.

These are extremely important papers …

Two main fronts on the speeding up of systematic reviews

Yesterday was the last day of a really interesting two-day symposium on automation and systematic reviews in Bristol. The main participants were computer scientists and systematic reviewers; I belonged to the relatively small ‘other’ group. It struck me that the focus was on breaking down the steps of systematic reviews (as seen in a few papers, one reviewed on this blog – click here) and …