The contribution of databases to the results of systematic reviews: a cross-sectional study. Hartling L et al. BMC Medical Research Methodology. 2016 16:127
How much searching should be undertaken when performing a systematic review? There have been a number of other articles exploring this, many captured on this site (Restricting the databases (or language) for a search) but as the authors of this paper point out “One important gap is the modest amount of empirical evidence demonstrating the impact on results and conclusions from different approaches to searching.”
In their study the authors examined the effect of including trials from only a limited number of databases and compared the results to those of fuller systematic reviews. This passage is the most important one for me:
“Our results show that the vast majority of relevant studies appear within a limited number of databases. Further, the results of meta-analyses based on the majority of studies (that appear within a limited number of databases) do not differ in the majority of cases. In particular, there were very few cases of results changing in statistical significance. The effect estimates changed in a minority of meta-analyses but in the majority of these they changed to a small extent. Finally, results do not appear to change in a systematic manner (i.e., regularly over- or underestimating treatment effects), suggesting that searching select databases may not introduce bias in terms of effect estimates.”
So, is searching multiple databases wasteful and therefore unethical? It fits into my own interest in Value of Information, which weighs the cost of acquiring extra information against the value it brings. Each additional database beyond Medline adds a cost, but is the benefit worthwhile? In most cases it would appear not, and once Medline and Embase have been searched I think one would need to make a very strong case for adding other databases on top of these.
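The Value of Information argument above can be made concrete with a back-of-envelope calculation: the marginal cost (searcher and screener hours) per extra unique study that each additional database contributes. A minimal sketch, using entirely invented figures for illustration (the database order, hours, and yields are assumptions, not data from the paper):

```python
# Hypothetical Value of Information sketch: marginal cost per extra unique
# study for each additional database searched. All numbers are invented
# purely for illustration.

search_plan = [
    # (database, hours spent searching and screening, relevant studies
    #  found here that no earlier database in the list already contained)
    ("MEDLINE", 10, 18),
    ("Embase", 12, 3),
    ("CENTRAL", 8, 1),
    ("CINAHL", 8, 0),
]

def marginal_costs(plan):
    """Return (database, cumulative unique studies, hours per extra study)."""
    rows, total = [], 0
    for name, hours, unique in plan:
        total += unique
        # Infinite cost-per-study when a database adds nothing new.
        cost = hours / unique if unique else float("inf")
        rows.append((name, total, cost))
    return rows

for name, total, cost in marginal_costs(search_plan):
    print(f"{name}: {total} studies so far, {cost:.1f}h per extra study")
```

With these made-up numbers the pattern mirrors the paper's finding: the first database or two capture almost everything, and the cost per additional study climbs steeply after that.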
But the biggest issue for me is the pedestal the paper places mainstream systematic reviews on. The paper appears to assume that systematic reviews based on published journal articles are the gold standard. This is demonstrably not the case, and it’s an issue that crops up frequently in my thinking. Due to reporting bias, systematic reviews that rely on journal articles miss vast amounts of data (unpublished studies, clinical study report data etc.).
A classic example is the 2008 Turner paper, Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. This showed that reporting bias severely affected the results of systematic reviews; see this tweet from Erick Turner himself for a nice illustration of the clear problems of relying on journal articles.
So, in this study the authors were comparing their results to the results of the ‘usual’ systematic reviews, i.e. biased and probably inaccurate ones. I’ve struggled with this in my work on rapid reviews. If I think we can get close to a systematic review’s accuracy quickly, is that good? Superficially you’d say so, but then the comparison is flawed. How close do you want to get to an answer that’s potentially wrong?
This significant problem aside, you really need to read this paper; it’s important.