New article: The contribution of databases to the results of systematic reviews

The contribution of databases to the results of systematic reviews: a cross-sectional study. Hartling L, et al. BMC Medical Research Methodology 2016;16:127.

How much searching should be undertaken when performing a systematic review? There have been a number of other articles exploring this, many captured on this site (Restricting the databases (or language) for a search), but as the authors of this paper point out: “One important gap is the modest amount of empirical evidence demonstrating the impact on results and conclusions from different approaches to searching.”

In their study they examined the effect of including trials from only a limited number of databases and compared the results with those of fuller systematic reviews. This passage is the most important one for me:

Our results show that the vast majority of relevant studies appear within a limited number of databases. Further, the results of meta-analyses based on the majority of studies (that appear within a limited number of databases) do not differ in the majority of cases. In particular, there were very few cases of results changing in statistical significance. The effect estimates changed in a minority of meta-analyses but in the majority of these they changed to a small extent. Finally, results do not appear to change in a systematic manner (i.e., regularly over- or underestimating treatment effects), suggesting that searching select databases may not introduce bias in terms of effect estimates.
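To make the quoted claim concrete: whether dropping the studies that only the extra databases surface actually moves a pooled estimate is just arithmetic on study weights. Here is a minimal sketch of that comparison using standard inverse-variance fixed-effect pooling; the numbers are entirely invented for illustration and are not from the paper:

```python
import math

# (log odds ratio, standard error, indexed in Medline/Embase?) -- all invented
studies = [
    (-0.35, 0.15, True),
    (-0.20, 0.20, True),
    (-0.45, 0.25, True),
    (-0.10, 0.30, True),
    (-0.60, 0.40, False),  # a study surfaced only by an additional database
]

def pool(rows):
    """Fixed-effect inverse-variance pooled log OR with a 95% CI."""
    weights = [1.0 / se ** 2 for _, se, _ in rows]
    est = sum(w * lor for w, (lor, _, _) in zip(weights, rows)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return est, est - 1.96 * pooled_se, est + 1.96 * pooled_se

for label, rows in [("full search", studies),
                    ("Medline/Embase only", [s for s in studies if s[2]])]:
    est, low, high = pool(rows)
    print(f"{label:20s} OR {math.exp(est):.2f} "
          f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")
```

With these made-up numbers the extra study carries little weight, so the pooled odds ratio and its confidence interval barely move, which is the pattern the paper reports for the majority of meta-analyses.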

So, is searching multiple databases wasteful and therefore unethical? It fits into my own interest in Value of Information, which weighs the cost of acquiring extra information against the value it brings. Each additional database beyond Medline adds a cost, but is the benefit worthwhile? In most cases it would appear not, and once Medline and Embase have been searched I think one would need to make a very strong case for including other databases on top of these.
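That trade-off can be made explicit as a back-of-envelope calculation: the marginal cost of each extra database against the unique relevant studies it contributes. A minimal sketch, with every figure invented purely to show the shape of the diminishing returns:

```python
# Back-of-envelope Value of Information sketch. All figures are invented:
# hours to search and screen each database, and the unique relevant
# studies it adds beyond the databases already searched.
searches = [
    ("Medline", 10, 18),
    ("Embase", 8, 3),
    ("CENTRAL", 6, 1),
    ("CINAHL", 6, 0),
]

found = 0
for name, hours, unique in searches:
    found += unique
    cost_per_study = f"{hours / unique:.1f} h/study" if unique else "no yield"
    print(f"{name:8s} {hours:2d} h, +{unique} unique studies "
          f"(total {found}) -> {cost_per_study}")
```

Once the cost per additional unique study climbs past what that study could plausibly add to the review, the Value of Information argument says stop searching.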

But the biggest issue for me is the pedestal the paper places mainstream systematic reviews on. The assumption of the paper appears to be that systematic reviews based on published journal articles are the gold standard. This is demonstrably not the case, and it’s an issue that crops up frequently in my thinking. Due to reporting bias, systematic reviews that rely on journal articles miss vast amounts of data (unpublished studies, clinical study report data, etc.).

A classic example is the 2008 Turner paper, Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. This showed that reporting bias severely affected the results of systematic reviews; see this tweet from Erick Turner himself for a nice illustration of the clear problems of relying on journal articles.

So, in this study the authors would be comparing their results with the results of the ‘usual’ systematic reviews, i.e. biased and probably inaccurate ones. I’ve struggled with this in my work on rapid reviews. If I think we can get close to a systematic review’s accuracy quickly, is that good? Superficially you’d say so, but then the comparison is flawed. How close do you want to get to an answer that’s potentially wrong?

This significant problem aside, you really need to read this paper; it’s important.

2 thoughts on “New article: The contribution of databases to the results of systematic reviews”

  1. Thanks for a great blog Jon. Perhaps the focus of rapid and traditional reviews should shift to designing new and better research, which is what they are uniquely well designed to do. As we know, any primary research needs to be built on existing knowledge. A systematic review is that knowledge. It will include flawed as well as good research. A review may contain no research at all, in which case there is even more need to design and conduct primary research – assuming the review question is an important one to patients. Much of the published evidence is flawed and usually biased in favour of interventions, with little information about harms or whether the outcomes are important to patients or measured sensibly. The dogged, time-consuming pursuit and analysis of any published data, just so you’ve got something to put in a meta-analysis that is likely to give an inaccurate estimate of effect and not enough information about harms or the appropriateness of outcome measures, is incredibly wasteful. Much better to spend the time designing and/or reviewing primary study protocols which can answer important questions properly than to spend it on searching, data extraction and meta-analysis of published research. As Jon says, is it right to accept the results of a “gold standard” review as closest to the truth when the fact that it contains only published data makes it more likely to be wrong?
    PS – Erick Turner’s Twitter gif is the best I’ve ever seen!
