Last week I presented at the JBI European Symposium in Cardiff, where one part of the discussion related to rapid reviews. Following that, a Twitter conversation started:
There were other messages in the exchanges but you get the picture! A few observations:
- I’m still unsure if James believes we need all the data or not when doing an evidence synthesis.
- Assuming we don’t need all the data, I wonder what the rationale is for relying on journal articles, other than convenience and the fact that we’ve always done it that way. After all, when systematic reviews (SRs) first rose to prominence, reporting bias was far less well understood and described in the literature.
- To repeat the message from above, because it’s really important: this point was made by Iain Chalmers (who founded Cochrane) and Paul Glasziou in this BMJ blog: “Reviewers trying to summarize all the research addressing a particular question are limited by access only to a biased subsample of what has been done.”
- I’ve typically said that if you rely on published articles you can only ever be confident you’ll get in the right ballpark. Caroline, rightly, pointed out that this may not actually be true!
So, where does that leave me/us?
When assessing an intervention for effectiveness, which is better: a large, well-conducted RCT, or an SR based on a biased subsample of what has been done? When framed like that, the former seems preferable!
We are also still far away from understanding the evidence base for evidence synthesis! We need to better understand the interplay between effort and results. When might:
- the largest trial suffice?
- a rapid review suffice?
- a systematic review suffice?
- a full systematic review be needed, using all the data (including unpublished data such as clinical study reports (CSRs), as seen in the Tamiflu work of Tom Jefferson)?
The problem, from my perspective, is that these issues are hard and inconvenient. The current systematic review system suits lots of people and any large change would be like turkeys voting for Christmas! One thing I’m fairly certain of is that we’ll look back at the way we do evidence synthesis and have regrets! We’ll certainly regret not spending more effort understanding the evidence base for evidence synthesis.
The last word, actually a tweet, is from Erick Turner. Erick wrote the seminal paper on reporting bias and antidepressants, over ten years ago now, in the NEJM. He joined the Twitter ‘conversation’ and suggested we need to do things more quickly, before bad habits become embedded: