A while back I wrote a piece, Different approaches to rapidity, which suggested there are two ways of doing a rapid review:
- Process – take the systematic review and apply shortcuts
- Outcome – what's the optimal way of getting to the desired outcome?
I’m increasingly concerned that all the focus is on the former and not the latter. My concern is based on a variety of thoughts:
- The systematic review process is flawed (mainly due to reporting bias), yet we take it as the starting point and degrade it to get to the rapid review. In some situations this might be good, in others less so.
- Much of the work around rapid reviews is being undertaken by systematic review people/organisations. On one hand that's reasonable; after all, the interest is in evidence synthesis. But on the other, I see it as a way of maintaining control of the domain. I've written about economic barriers to entry previously (Economics and EBM), and I can't help feeling that post is still relevant, including to rapid reviews.
- Linked to the above, why are no RR methods using data from the likes of the FDA? It’s because they’re not used in systematic reviews! By relying on the SR framework we limit our imagination and potential for innovation. See my post on a rapid review for Brexpiprazole from 2016.
- It’s being led by academia and that’s sucking the fun out of things!
- I don't feel we're any better at articulating the reasons for doing a rapid review and how those reasons should modify the methodology. Are we interested in the efficacy of an intervention? The adverse events? Seeing what's been done before? These are different reasons, and the question 'What RR method should we use?' never starts with the prior question: why are we doing the review?
I'm not sure that there is a clear narrative to describe my disquiet, my frustrations.
But if I go back to first principles, my main interest in EBM is in clinical question answering. I had/have problems with groups like Cochrane as they were, broadly, rubbish at answering a broad range of clinical questions (I told Iain Chalmers I'd like Cochrane when they answered 5 or 10% (I forget which) of the clinical questions I was answering at the time – they were easily less than 1%, and I doubt that's changed). So rapidity has the potential to increase the number of syntheses that can be used to answer clinical questions. But, given the trajectory of RRs, I can't help fearing that they'll just fall into the same trap as SRs.
A quick leap to this equation – Slawson and Shaughnessy's 'usefulness' equation:

Usefulness = (Relevance × Validity) / Work
In my mind I always read the 'work' (which, from the perspective of someone answering clinical questions, means 'work to access') as 'work to produce', as I think that works well. RRs reduce the 'work' (relative to an SR) and possibly reduce the 'validity', but they don't touch the relevance.
SRs and RRs are simply products to answer clinical questions – and, within that, a relatively small number of clinical questions. So the frustration is my own: have I concentrated too much on a relatively narrow part of clinical question answering? Have I been sucked into some academic exercise at the expense of the bigger picture?
2 thoughts on “Where are we going with rapid reviews? #frustrating”
I think the optimal way of getting to the outcome you want (an answer to a clinical question) is by looking at the results of the individual well-designed, well-conducted, and well-reported primary study which most closely matches the question you have. Good luck finding that – but I think you have more chance than with a systematic review, where you are one step removed – often behind a smoke screen. If Cochrane-style systematic reviews can only answer clinical questions 1% or even 10% of the time, why are the same academics leading the way to try and get there quicker, and indeed sucking the joy out of it? Case in point: that horrific and deeply unjoyful academic-led row over the HPV vaccine.