This paper assesses the effect of various biases on effect estimates in randomised controlled trials, and considers the implications for systematic review producers. This alone makes it pertinent for the site, as assessment of bias is a time-consuming (and therefore costly) process, although we should note that automated methods are making great strides (e.g. RobotReviewer or the Systematic Review Assistant).
The review explored a number of different types of bias, as outlined in the table below (copied directly from the report):
In each case it explored the evidence base for the likely effects of the bias on trial results. Each bias is discussed in depth and individual conclusions drawn.
This paper is not an easy read, and I'm not convinced I've fully appreciated all the points raised. I've posted the article here in part because it's clearly an important paper that deserves highlighting.
Within the discussion there is a section titled 'Implications for Reviewers', and I'd like to highlight two passages:
“Of greatest concern are the implications of our findings for systematic reviewers. Although we only found statistically significant differences by studies employing a minority of protections against bias, we do not conclude that reviewers should abandon these evaluations.”
“This may imply a value in more cautiously evaluating studies that lack protections for multiple sources of bias, including biases without consistent or precise evidence of having an individual effect.”
If others read this article and have thoughts on how it relates to evidence synthesis please let me know!