How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Egger M et al. Health Technol Assess. 2003;7(1):1-76.
Over ten years old but still a really important paper. They report a number of important findings that could form part of a list of heuristics, for instance:
- The importance of trials that are difficult to find varies by speciality.
- Unpublished trials show less beneficial effects than published trials.
- Trials that are difficult to locate tend to be smaller and of lower methodological quality than those that are easier to find and published in English.
The conclusion starts with this passage:
“Systematic reviews that are based on a search of English language literature that is accessible in the major bibliographic databases will often produce results that are close to those obtained from reviews based on more comprehensive searches that are free of language restrictions.”
They also raise the prospect that extensive searches could introduce bias by the inclusion of poorer quality trials. They later state:
“…We believe that in situations where resources are limited, thorough quality assessments should take precedent over extensive literature searches and translations of articles”
The important take-home message for me is that sometimes ‘less is more’: focussing on easily accessible studies could actually mean better results. So, can rapid reviews – which focus on a single bibliographic database (perhaps two) – actually produce better results?
3 thoughts on “Article review: How important are comprehensive literature searches and the assessment of trial quality in systematic reviews?”
I agree that reviews including hard-to-find, low-quality trials might not be any better than rapid reviews, and possibly worse. But you are ignoring the bias caused by the missing data from good quality unpublished or selectively reported trials. This data will not be included in either rapid reviews or comprehensive ones, so the results of the rapid review will still be biased. At least with comprehensive reviews you are then aware of the full extent of substandard research practices in a particular area, and are then in a position to help researchers do better in future. Unfortunately, at present, the forensic knowledge of bad research practices painstakingly gathered and reported by systematic reviewers (particularly Cochrane reviewers) is not used for that purpose, which is a great shame.
Thank you for the comment. The article was principally an overview of the Egger paper and not a broad overview of rapid versus systematic reviews. My commentary at the end was focussed on how I see it sitting in the broader context, but was not meant to be exhaustive.
My broad view is that current systematic review methods, as typified by Cochrane, cannot be fully relied upon to give a ‘full’ answer to the question they are exploring. This is due to, as you point out, missing data. Systematic reviews typically only use published journal articles and therefore miss the 30-50% of trials that go unpublished and the wealth of data contained within Clinical Study Reports. My view is that such a review can, at best, give a ballpark estimate of an intervention’s worth. So, if you’re happy with ballpark, then do it quickly!
I see current systematic review methods as falling between two stools: in most situations they are too slow, and in others they are not rigorous enough (e.g. Tamiflu).
As for the substandard research, I have no doubt that many trials are poor (possibly reason enough to focus on larger, better-quality trials), and shining a torch on these bad practices has got to be a good thing.