Ewald H et al. Journal of Clinical Epidemiology
Conclusion: Abbreviated literature searches often led to identical or very similar effect estimates to those from comprehensive searches, with slightly wider confidence intervals. Relevant deviations may occur.
Apart from the conclusion, some key observations:
- “Searching multiple data sources may increase the number of studies, study participants, and observed events contributing to meta-analyses but abbreviated literature searches often give identical or very similar treatment effect estimates”. In the systematic review world there is a pathological ‘need’ to get ‘all’ articles. This is problematic because (a) beyond a certain point additional articles add nothing but cost, and (b) it is impossible to know whether you have ‘all’ articles anyway; most SRs focus on published articles, so potentially miss many unpublished studies.
- “However, in up to one in seven reviews (ie 6-13%), the direction of the effect estimate changed or it was not possible anymore to provide a result at all when relying on abbreviated searches. This may have a substantial impact on decision making but which may also be an acceptable tradeoff for users of rapid reviews.” This is why we need to carry out research in this area. If we can reliably identify when RRs ‘fail’, we can undertake more resource-intensive reviews in those areas while using RRs in the roughly 90% of cases where the abbreviated search appears to make little or no difference.
- “For all abbreviated searches, there was one meta-analysis where the abbreviated search gave results deviating substantially (absolute deviation to original meta-analysis up to 2.39-fold; outcome: withdrawal due to adverse events: two of the five RCTs could not be found with any of the search strategies as they are unpublished data from industry sponsor and constitute 82.5% of the weight of the meta-analysis).”
This last point is really important and identifies one potential source of error: missing unpublished studies. It shows that unpublished studies can be vital in an evidence synthesis. In this analysis the authors assume that the SR is the gold standard and is ‘true’. But, given the potential importance of unpublished studies, how do we know the other SRs hadn’t missed unpublished studies that could account for a similarly large share of the weight, such as the 82.5% above?
So, what does the evidence tell us about how good SRs are at tracking down unpublished studies? These are the two studies I found (there may be others; if I’ve missed any please post in the comments):
Searching for unpublished data for Cochrane reviews: cross sectional study Schroll JB et al. BMJ. 2013 Apr 23;346:f2231
Search for unpublished data by systematic reviewers: an audit. Ziai H et al. BMJ Open 2017;7:e017737.
The first study highlights how variable review authors were in obtaining unpublished data: many did not try, and many of those who did were unsuccessful. The second study had the following conclusion:
“A significant fraction of systematic reviews included in our study did not search for unpublished data. Publication bias may be present in almost half the published systematic reviews that assessed for it. Exclusion of unpublished data may lead to biased estimates of efficacy or safety in systematic reviews.”
This is acknowledged by Ewald in the list of study limitations: “Finally, we determined the impact of abbreviating searches compared with the complex search strategies of Cochrane reviews as gold standard, reflecting a typical application of abbreviated searches. Since even Cochrane reviews may not retrieve all available evidence, we are not able to quantify the impact of abbreviating searches compared with theoretically perfect searches that would identify all existing evidence.”
Bottom line: In some situations, missing (a) unpublished studies or (b) some key references can be problematic. Unfortunately, we do not yet understand when this is the case. And, as a result, we continue to potentially waste huge resources undertaking systematic reviews when a rapid review would suffice.