Reading ‘Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews’ (posted here recently) got me thinking! It seems to me that what they were doing was very similar to a highly specific search of the literature (in this case, matching keywords to words in document titles). Typically, in evidence synthesis, the opposite approach (a sensitive search) tries to find as many potential matches as possible – so as not to miss any. The problem with this approach is that it returns hundreds, thousands, even tens of thousands of results that all need screening to find the relevant ones. From a pool of a thousand candidate articles you might end up with a handful of includes. The good/noise ratio is ridiculous.
That is why a sensitive search always seems so inefficient.
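To make the contrast concrete, here is a minimal sketch of what title-only keyword screening amounts to. The records, keywords, and matching rule are all hypothetical illustrations, not the method from the Rathbone et al. paper:

```python
# A toy sketch of title-only keyword screening (hypothetical data, not the
# authors' actual method): keep only records whose title contains at least
# one PICo-derived keyword, using case-insensitive substring matching.

def title_screen(records, keywords):
    """Return the records whose title matches at least one keyword."""
    keywords = [k.lower() for k in keywords]
    return [r for r in records
            if any(k in r["title"].lower() for k in keywords)]

# Hypothetical candidates returned by a broad database search.
records = [
    {"id": 1, "title": "Exercise therapy for chronic low back pain: a trial"},
    {"id": 2, "title": "A survey of hospital catering services"},
    {"id": 3, "title": "Chronic pain management in primary care"},
]

# Hypothetical PICo-style keywords (population / phenomenon of interest).
keywords = ["chronic pain", "low back pain"]

hits = title_screen(records, keywords)
print([r["id"] for r in hits])  # → [1, 3]
```

A sensitive search would instead hand a screener all three records (and thousands more like record 2); the title-only filter trades some recall for a far better good/noise ratio.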
The results from Rathbone et al. give me hope that specific searches should be attempted and evaluated. The good/noise ratio should improve dramatically and, because it's so much quicker, you can address another problem that affects evidence synthesis – the number of databases to search! For an evidence synthesis, do you search 1, 3, 7 or 15 databases? We've written about this previously (see here and here) and my take-home is a quote from one of the papers:
“Our results show that the vast majority of relevant studies appear within a limited number of databases.”
But why not do a specific search (perhaps even a title-only search) over multiple databases – more than you would if you were doing a sensitive search? As a working hypothesis, it appears to be the most efficient way of returning relevant documents with a much reduced level of noise. And the studies that are missed are (extrapolating from the Hartling paper) likely to have limited or no impact on any subsequent review's conclusions!
All we need now is some empirical evidence…