Database selection in systematic reviews: an insight through clinical neurology
Vassar M et al. Health Info Libr J, 34: 156–164.
Unfortunately, it’s behind a paywall, so here’s the abstract:
Failure to perform a comprehensive search when designing a systematic review (SR) can lead to bias, reducing the validity of the review’s conclusions.
We examined the frequency and choice of databases used by reviewers in clinical neurology.
Ninety-five SRs and/or meta-analyses were located across five prominent neurology journals between 2008 and 2014. Methods sections were reviewed, and all bibliographic databases were coded.
On average, 2.59 databases were used in SR searches. Seven reviews included an information specialist, and these reviews reported a greater number of information sources used during the search process. Thirty-nine databases were reported across studies. PubMed/MEDLINE® and EMBASE were cited most frequently.
Searching too few databases may reduce the validity and generalisability of SR results. We found that the majority of systematic reviewers in clinical neurology do not search an adequate number of databases, which may yield a biased sample of primary studies and, thus, may influence the accuracy of summary effects.
Systematic reviewers should aim to search a sufficient number of databases to minimise selection bias. Additionally, systematic reviewers should include information specialists in designing SR methodology, as this may improve systematic review quality.
I’d love to see the evidence behind some of the statements. For instance, what is the evidence that the majority of SRs do not search an adequate number of databases? That presumes there is an adequate number – have I missed something?
Also, people tend to look at increasing the number of databases (or adding any other methodological burden) and easily throw it out there, with disregard for the potential harm. One could easily increase the cost for no gain (or a marginal gain) – is that ethical or unethical?
Bias is bad, but waste is also bad. Is doing one review with a small bias ‘better’ than doing ten reviews with a bit more bias?
No idea of the answer, or how you’d answer it. But in the absence of evidence, the prevailing, faith-based response gets adopted by the uncritical. Surely science is about scepticism – and that works both ways.
2 thoughts on “Database selection in systematic reviews: an insight through clinical neurology”
Agree – this is interesting, but it is the WRONG study! The question is not how many databases review teams THINK they need but how many they actually needed. The authors should analyse all these reviews and see what percentage of included studies could have been retrieved from MEDLINE alone; that would add to the “how few can you get away with?” debate. They cite background evidence that searching one or two databases ONLY retrieves 60%–80% of included studies. This glass is more than half full! The true role of an information specialist is not to identify how many databases they can possibly think of (what I have called elsewhere a “p-ing up a wall contest”) but how few they need to get the same results, saving everyone a lot of unnecessary searching and screening work (this is where my analogy breaks down!).
And I suspect those figures for included studies refer to published studies. So they seem happy at 100% of published studies and unhappy at 60–80%.
Are the unpublished studies of no interest? There is plenty of evidence that unpublished studies can be massively important. I suspect the narrative of SRs has hoodwinked the authors.
Bottom line: the published studies missed by searching only MEDLINE/PubMed will invariably introduce a much smaller bias than the unpublished studies.