I recently had the pleasure of talking about rapid reviews in Liverpool. One point raised in the discussion was that of methodological shortcuts. Typically, rapid reviews are portrayed as ‘cut down’ systematic reviews, i.e. the starting point is the systematic review process and you then reduce/remove steps to arrive at your rapid review methodology (BTW my post Different approaches to rapidity discusses this and … Continue reading Shortcuts
Reading the ‘Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews’ (posted here recently) got me thinking! It seems to me that what they were doing was very similar to a very specific search of the literature (in this case matching keywords to words in document titles). Typically, in evidence synthesis, the opposite (a sensitive search) tries to … Continue reading Sensitive searching of few or specific search of many…?
As far as I can tell we undertake evidence synthesis to better understand the effectiveness of an intervention. The rationale is that the greater the accumulation of evidence, the greater the understanding of how good an intervention is. This is typically characterised by a narrowing of the confidence intervals in meta-analyses. Put another way, we attempt to be as certain as … Continue reading Theorising about evidence synthesis – is it about the cost, language or other?
In the recent post relating to trading certainty for speed I highlighted that the authors stated: “Participants of our survey, on average, viewed 10% as the maximum tolerable risk of getting an incorrect answer from a rapid review” My issue with this is that there was no definition of what ‘incorrect’ was. So, I emailed one of the authors: “A fascinating paper, thank you. One … Continue reading Update from author: Trading certainty for speed
I’m just back from Evidence Live where I ran a workshop on the community rapid review idea. I spoke to many people about rapid reviews, and it’s interesting how the tide is turning, as shown by the rise in interest in RRs. During one discussion the absurdity struck me. Systematic reviews: Fantasy = you include all trials; Reality = as 50% of trials (on average) are unpublished … Continue reading Does this even make sense?
Paul Glasziou and Iain Chalmers recently published the above article on the BMJ Blog. As you’d expect with these authors it’s a great read. I’d like to highlight one section – that’s particularly relevant to the issue of rapid reviews (Note my emphasis): Whether the precise non-publication rate is 30%, 40%, or 50%, it is still a serious waste of the roughly $180 billion annually … Continue reading Can it really be true that 50% of research is unpublished?
One category on the Trip Database is ‘ongoing systematic reviews’. This content is taken from the PROSPERO database of ongoing systematic reviews. If you’re not familiar with PROSPERO this is how the site describes itself: “PROSPERO is an international database of prospectively registered systematic reviews in health and social care, welfare, public health, education, crime, justice, and international development, where there is a health related … Continue reading Registering rapid reviews
I spotted an interesting tweet earlier and replied; the exchange is below: The paper in question is: Responsible Translation of Stem Cell Research: An Assessment of Clinical Trial Registration and Publications. At the risk of being repetitive, reporting bias is hugely problematic. Omitting unpublished trials can massively affect a systematic review [1, 2]. Yet Cochrane, arguably the ‘gold standard’ for systematic review production, has an unsystematic … Continue reading Unpublished studies in stem cells
Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. Hartling L et al. BMC Medical Research Methodology 2017 17:64 Conclusion: The majority of SRs searched for non-English and unpublished studies; however, these represented a small proportion of included studies and rarely impacted the results and conclusions of the review. … Continue reading Grey literature in systematic reviews
Yesterday I saw a tweet that read: The risk rapid reviews is if we get it wrong – how much wrong are we ready to risk This is an important point. It is not the first time I’ve heard people discuss rapid reviews and frame them as a question of how often you are prepared to be wrong. I always find it slightly annoying as – unintentionally I’m … Continue reading On being wrong and/or sanctimonious