Trading certainty for speed – how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews. Wagner G et al. BMC Medical Research Methodology 2017 17:121. Abstract below, with my comment beneath it: Background Decisionmakers and guideline developers demand rapid syntheses of the evidence when time-sensitive evidence-informed decisions are required. A potential trade-off of such rapid reviews is that … Continue reading Trading certainty for speed – how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews
The following was posted on the Trip Database blog: As part of the KConnect work (an EU-funded Horizon 2020 project) we have been doing a fair bit of work exploring the automatic extraction of various elements from RCTs and systematic reviews. If we can automatically understand what a paper is about, it can open up all sorts of avenues with regard to search and evidence synthesis. … Continue reading Can you do evidence synthesis automatically?
Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1. Taylor-Phillips S et al. Res Synth Methods. 2017 Jul 13. This is exactly the sort of thing I want to see: a comparison of systematic versus rapid reviews. A couple of points: Much of the analysis focuses on process outcomes (e.g. the RR missing papers). … Continue reading Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1
Does knowledge brokering improve the quality of rapid review proposals? A before and after study. Moore G et al. Systematic Reviews 2017 6:23 Conclusions: This study found that knowledge brokering increased the perceived clarity of information provided in Evidence Check rapid review proposals and the confidence of reviewers that they could meet policy makers’ needs. Further research is needed to identify how the knowledge brokering process … Continue reading Two new articles on rapid reviews
I was at Evidence Live last week to discuss the Community Rapid Review idea. It was good to see a number of sessions on rapid reviews, and in one of those (where I was in the audience) a question was asked about comparisons between ‘rapid’ and ‘systematic’ reviews. I suggested that, for Evidence Live 2018, there should be an RR ‘hack’! At the start … Continue reading Evidence Live 2018: Rapid review hackathon
I’m just back from Evidence Live where I ran a workshop on the community rapid review idea. I spoke to many people about rapid reviews, and it’s interesting how the tide was turning (judging by the rise in interest in RRs). During one discussion the absurdity struck me. Systematic reviews: fantasy = you include all trials; reality = as 50% of trials (on average) are unpublished … Continue reading Does this even make sense?
The Community Rapid Review idea has been discussed for a while now, and the final stage before we move to production is coming very soon. Next week I will be running a workshop at Evidence Live on the idea. It’ll be an interactive exploration of the thinking behind it and will hopefully gather some final constructive criticism to guide the finished product. If you’re … Continue reading Evidence Live: Community Rapid Review
Paul Glasziou and Iain Chalmers recently published the above article on the BMJ Blog. As you’d expect with these authors, it’s a great read. I’d like to highlight one section that’s particularly relevant to the issue of rapid reviews (note my emphasis): Whether the precise non-publication rate is 30%, 40%, or 50%, it is still a serious waste of the roughly $180 billion annually … Continue reading Can it really be true that 50% of research is unpublished?
Database selection in systematic reviews: an insight through clinical neurology. Vassar M et al. Health Info Libr J, 34: 156–164. Unfortunately, it’s behind a paywall, so here’s the abstract: Background Failure to perform a comprehensive search when designing a systematic review (SR) can lead to bias, reducing the validity of the review’s conclusions. Objective We examined the frequency and choice of databases used by reviewers in … Continue reading Database selection in systematic reviews: an insight through clinical neurology
One category on the Trip Database is ‘ongoing systematic reviews’. This content is taken from the PROSPERO database of ongoing systematic reviews. If you’re not familiar with PROSPERO this is how the site describes itself: “PROSPERO is an international database of prospectively registered systematic reviews in health and social care, welfare, public health, education, crime, justice, and international development, where there is a health related … Continue reading Registering rapid reviews