This is the 3rd article in the ‘Why do we do systematic reviews?’ series (see references below for numbers 1 and 2). The series explores the reasons for undertaking a systematic review in the first place, with four main reasons proving popular. Number four (with 19.5% of the votes at the time of writing) is ‘To understand the adverse events associated with the intervention’.
I will use this article to highlight the clear, demonstrable failings of standard systematic review methods in answering these types of questions, and then bring it round to the case for rapid reviews.
Firstly, it is clear that most systematic reviews, as exemplified by Cochrane, rely heavily on published journal articles [3]. Yet we know that up to 50% of all trials are unpublished [4], so standard systematic reviews miss up to half the trials. Do they therefore miss half the data? No, they miss far more than that.
In 2001 the legendary John Ioannidis published ‘Completeness of Safety Reporting in Randomized Trials: An Evaluation of 7 Medical Areas’ [5], concluding:
“The quality and quantity of safety reporting vary across medical areas, study designs, and settings but they are largely inadequate. Current standards for safety reporting in randomized trials should be revised to address this inadequacy.”
In 2012 Peter Doshi, drawing on his work on the Tamiflu saga, published ‘Rethinking credible evidence synthesis’ [6]. Doshi and colleagues repeat the assertion that most academic systematic reviewers rely on published journal articles, and later state:
“…discrepancies between the reporting of trials in clinical study reports sent to regulators versus published papers led us to lose confidence in the journal reports. In one case, the published version of a trial unambiguously states that “there were no drug-related serious adverse events,” while the clinical study report lists three that were possibly related to oseltamivir.”
A 2013 paper in PLOS Medicine [7] was published by authors based at IQWiG (which I’ve seen described as the muscular, German version of NICE). They concluded:
“Information on patient-relevant outcomes investigated in clinical trials is insufficient in publicly available sources; considerably more information can be gained from CSRs.”
The article reports significant differences in the adverse events and serious adverse events recorded in journal articles compared with the corresponding clinical study reports.
The final paper highlighting the point that systematic reviews based on published journal articles are unreliable for identifying adverse events is from Paul Glasziou. ‘Reducing waste from incomplete or unusable reports of biomedical research’ [8] was published in the Lancet in 2014 and reports:
“Information about adverse effects (harms) is especially poorly reported in reports of randomised controlled trials. Chowers and colleagues reviewed 49 randomised controlled trials of highly active antiretroviral therapy. Only 16 of 49 trials (33%) had all adverse events (AEs) reported; for the remainder only some events were reported (eg, the most frequent, those with p<0·05, or selected adverse events). The investigators stated that “These facts obstruct our ability to choose [highly active anti-retroviral therapy] based on currently published data”. Reporting of adverse effects is also poor in systematic reviews.”
So, it is clear: systematic reviews based on published journal articles are not a reliable way of finding complete adverse event data for a given intervention. Where does that leave us in relation to rapid reviews?
The situation seems to be:
- If you need ALL the adverse event data, you need to go beyond the published journal articles: you need the clinical study reports.
- If you don’t need ALL the adverse event data, just a reasonable sample, what method is appropriate?
It is the second situation which lends itself to rapid methods. We know from various papers (e.g. [9]) that smaller studies tend to be harder to find, so, broadly, rapid methods are likely to find the larger trials. It stands to reason that trials with larger populations are likely to generate more adverse event data: if you have 1 adverse event per 5 patients, a trial of 25 patients would yield 5 adverse events, while a trial of 1,000 patients would yield 200. So focussing on larger trials, assuming they are well reported, seems reasonable.
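To make that scaling argument concrete, here is a minimal, hypothetical sketch in Python. It is not taken from any of the papers cited here; the trial sizes and the one-in-five adverse event rate are illustrative assumptions carried over from the worked example above.

```python
# A rough sketch of the scaling argument: with a roughly constant adverse
# event (AE) rate per patient, most of the AE data sits in the largest
# trials, which rapid searches are most likely to retrieve.
# The trial sizes and the 1-in-5 AE rate are illustrative assumptions only.

AE_RATE = 1 / 5  # assumed adverse events per patient (from the worked example)

# Hypothetical set of trial sizes for one intervention (not real data)
trial_sizes = [25, 40, 60, 150, 400, 1000]

expected_aes = [round(n * AE_RATE) for n in trial_sizes]
total_aes = sum(expected_aes)

# Suppose a rapid search only retrieves trials with >= 100 participants
found = [(n, ae) for n, ae in zip(trial_sizes, expected_aes) if n >= 100]
found_aes = sum(ae for _, ae in found)

print(f"Expected AEs per trial: {dict(zip(trial_sizes, expected_aes))}")
print(f"Rapid search captures {found_aes}/{total_aes} expected AEs "
      f"({found_aes / total_aes:.0%}) from {len(found)}/{len(trial_sizes)} trials")
```

Under those illustrative assumptions, the three largest trials account for over 90% of the expected adverse events, which is the intuition behind prioritising larger studies in a rapid review.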
There is a finite resource for undertaking evidence reviews, and therefore you need a really strong reason to expend extra time, money and other resources on extending your review methods. If you’re after adverse events, there is NO evidence that spending extra time on a Cochrane-style review will yield any extra adverse event data over a more rapid method that picks up most of the larger studies.
One could make the case that you might miss a few adverse events by missing the smaller studies. But, as we have already established, by not using clinical study reports you’re already conceding that you’re happy not to have all the adverse event data. So missing some does not seem to be an overriding concern.
I have two main points to finish off:
- Arbitrariness: The methodology used for looking for adverse event data is not evidence-based; it is completely arbitrary. We use current systematic review methods because that is what is expected, nothing more.
- Lack of transparency: It concerns me that people view systematic reviews based on published journal articles as the gold standard, and therefore as something that can be relied upon to give robust adverse event information. Systematic reviews (and rapid reviews) should clearly state that the methods employed are likely to miss potentially significant adverse event data. Let the consumer of the information be aware, and they can act accordingly. Ignorance, in this situation, is not bliss.
Conclusion: Expending extra time doing a full systematic review (based on journal articles) to highlight adverse events is unjustifiable. The methodology employed is not evidence-based, it is arbitrary, and the lack of transparency needs to be urgently addressed. Using expedited, rapid methods seems entirely reasonable.
References
1. Why do we do systematic reviews?
2. Why do we do systematic reviews? Part 2
3. Searching for unpublished data for Cochrane reviews: cross sectional study. Schroll JB et al. BMJ. 2013;346:f2231
4. Half of all clinical trials have never reported results
5. Completeness of Safety Reporting in Randomized Trials: An Evaluation of 7 Medical Areas. Ioannidis JPA et al. JAMA. 2001;285(4):437-443
6. Rethinking credible evidence synthesis. Doshi P et al. BMJ. 2012;344
7. Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data. Wieseler B et al. PLoS Med. 2013;10(10):e1001526
8. Reducing waste from incomplete or unusable reports of biomedical research. Glasziou P et al. Lancet. 2014;383(9913):267-76
9. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Egger M et al. Health Technol Assess. 2003;7(1):1-76
Interesting article. I agree that systematic reviews and rapid reviews should not be used as a way of discovering adverse events associated with interventions. The literature is still too unreliable. You could probably use them to highlight patterns of poor conduct and reporting of adverse event data and demonstrate its negative impact on patients.