This review links to two articles published on the Trip Database's 'Liberating the literature' blog. Conflict of interest – I wrote both of them. The first article, published in April 2012, was a list of articles comparing rapid versus systematic methods. The second article (published shortly afterwards) was a list of lessons learned, which I reproduce below:
Lesson 1: The notion of a rapid review is ill-defined. However, introducing a single methodology isn't necessarily appropriate. What is important is transparency about the process.
Observation 1: The methodology behind systematic reviews varies a great deal as well. Also, what constitutes rapid? In the literature it was typically less than 5 weeks. A lot of my work is undertaken in less than 5 hours. So, I’m very supportive of the notion of transparency.
Lesson 2: The tension between speed and accuracy is a common theme.
Observation 2: While it may appear obvious it’s important that it’s made explicit.
Lesson 3: Rapid reviews tend to look at a focused question while systematic reviews will typically look at broader topics. Also, rapid reviews tend to focus on efficacy or effectiveness and are less often used to examine safety, economics or ethics.
Observation 3: I’m not sure how accurate this statement is. However, I do know that the broader the question, the less likely it is to be answerable quickly.
Lesson 4: Meta-analyses are often not undertaken in rapid reviews, so no effect sizes are given – typically just the direction of an intervention’s effect. Any results are less generalisable and less certain.
Observation 4: While a rapid review might be able to say whether one treatment is likely to be better than another, it’s less able to say how much better it is. This may or may not be important.
Lesson 5: Trial quality assessment is important: poor-quality studies are likely to overestimate the benefits of a therapy or the value of a test.
Observation 5: Again, this is linked to the time factor. If you only have two days to return a response what should you do? For our ultra-rapid reviews it seems sensible to be transparent and make explicit the short-cuts and possible effects. In our ultra-rapid reviews we aim to use secondary studies but we will use abstracts of primary research as well. One paper suggested that a moderately robust summary of the evidence is better than no evidence.
Lesson 6: The conclusions of a rapid review and a systematic review do not – typically – differ. The extra effort involved in carrying out a systematic review may not greatly affect the final conclusions.
Observation 6: Unsurprising, but this needs to be taken in the context of the points raised above. Also, where the two do not agree, an understanding of why is needed.
Lesson 7: Rapid reviews, when compared with systematic reviews, occasionally reach different conclusions. In the papers that compared the two, the rates of difference were 4/39, 1/14 and 1/6.
The study that reported 4 differences in conclusion out of 39 reviews compared NICE and BUPA judgements around funding. These may have reflected semantic differences (ie BUPA used a different classification system from NICE), differences in the year the review was undertaken (BUPA typically published its reviews earlier than NICE) and genuine differences of judgement, e.g. for percutaneous vertebroplasty for osteoporosis BUPA said it should be used in ‘trial only’ while NICE said ‘evidence adequate’ (but added caveats).
The same paper reported another study showing differences in 1 of 14 reviews, but I was unable to ascertain the reason for the difference due to poor referencing.
In the 1/6 case the rapid review reported that the intervention was experimental, while the large cost-effectiveness study indicated that the intervention was safe and efficacious. No reason was given for the discrepancy.
Observation 7: Clearly more research is needed to understand differences and I’d be very keen to see how ultra-rapid (less than 1 day) reviews compare with rapid and systematic reviews.
The conclusion: ‘Transparency is the key message for me’
Focus on Rapid Reviews Comment: Nothing major seems to have changed and I think the lessons are still broadly appropriate. Lesson 1 (the ill-defined nature of rapid reviews) is still very much the case and is part of the motivation for this blog. This links nicely with Lesson 2 and the tension between speed and accuracy. I have tried to draw the relationship as I see it (below):
In short, I feel you can get pretty close to the ‘correct’ answer relatively quickly, and the rest of the time is spent trying to remove as much bias as possible. It’s a clear case of the law of diminishing returns, and the elephant in the room is the missing trials/data. If you rely on published articles (as most systematic reviews do, see Schroll et al 2013) you cannot assume you have an accurate assessment of the worth of an intervention (see Some additional thoughts on systematic reviews). I feel there is a cost-benefit discussion that needs to be had, somewhere, sometime soon. Perhaps at Evidence Live 2016.
The other notable lessons are 6 (conclusions of rapid and systematic reviews don’t really differ) and 7 (more research is needed). We need to explore the reasons for doing systematic reviews and, instead of simply saying, when faced with uncertainty, that we need a ‘systematic review’, we need to better understand what the implications are (financial and opportunity costs, time possibly wasted etc). In many situations a rapid methodology will suffice. But we need to be informed as to when it’s ‘safe’ to use rapid methods and when it’s not. Equally, when is it not safe to rely on ‘standard’ systematic review methods (e.g. in the case of Tamiflu)?
As a non-academic I don’t feel conflicted in saying we ‘need more research’.