“The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias” by Lars Jørgensen, Peter C Gøtzsche and Tom Jefferson (BMJ Evidence-Based Medicine, 2018)
This is a highly critical article pointing out that the review missed nearly half of the eligible trials. The authors also land some heavy punches, with comments such as:
- “Cochrane’s public relations of the review were uncritical”
- “In our view, this is not balanced and people with conflicts of interest in relation to the manufacturers should not be quoted in relation to a Cochrane review.”
- “Part of the Cochrane Collaboration’s motto is ‘Trusted evidence’. We do not find the Cochrane HPV vaccine review to be ‘Trusted evidence’, as it was influenced by reporting bias and biased trial designs.”
The above is certainly not an isolated example of problems with systematic reviews (SRs); I have posted many other examples previously (the Turner and Hart papers are good starting points).
This post is not about Cochrane; it is about a serious issue with rapid reviews (RRs), and with evidence synthesis generally: no synthesis can be viewed as a synthesis of all the evidence for an intervention; every synthesis is a sample of the total data. SRs typically take a bigger sample than RRs, but that’s about it. I’ve tried to capture the continuum of evidence synthesis below:
This situation troubles me in a number of ways, for instance:
- People rarely acknowledge it’s a continuum. There is no cliff edge between ‘rapid’ and ‘systematic’.
- As RRs attract more academic scrutiny they are always compared against SRs, and the comparisons NEVER question whether the SR itself might be compromised (the Hart paper suggests around 50% of SRs are). These comparisons reinforce the primacy of SRs, and the big question for me is: what if the SR is ‘wrong’?
- Most RRs are built around shortcuts to the SR process. For the reasons above this is problematic (and, from my perspective, it shows a lack of imagination); I’ve written about this before (see Different approaches to rapidity). Sometimes an SR sample will be enough; sometimes an RR sample will be enough. But we lack the evidence to underpin such judgements, and we lack the intellectual framework to discuss them.
- Innovations (in evidence synthesis or anywhere else) can be categorised in two ways: iterative and revolutionary. Iterative innovation builds on the work of others and typically makes small changes. Revolutionary innovation comes from ‘way out there’ and makes large changes to the way we do things. I see ‘RRs as shortcuts of SRs’ in the iterative camp, but my interest is in the revolutionary one (hence I find much of the current RR work boring). I also feel it has the most scope to bring about meaningful change, yet little effort is spent in this domain.
Bottom line: Evidence synthesis is about pulling together a sample of the total data for an intervention. At present we do not know what sample, for a given context, is appropriate.
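To make the ‘samples of the total data’ point concrete, here is a minimal sketch (mine, not from the post, and with entirely hypothetical numbers) that simulates a universe of 60 eligible trials and pools random subsamples of different sizes using a fixed-effect inverse-variance model. The trial counts, effect sizes and labels are arbitrary assumptions; the only point is that an RR, an SR and even a ‘complete’ review all produce estimates from samples of the evidence base.

```python
# Toy illustration (hypothetical numbers): every synthesis is a sample
# of the total evidence, differing only in how big a sample it takes.
import numpy as np

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.30   # assumed true log odds ratio across the whole evidence base
N_TRIALS = 60        # hypothetical total number of eligible trials

# Each trial reports an effect estimate with its own standard error.
se = rng.uniform(0.05, 0.40, size=N_TRIALS)
effects = rng.normal(TRUE_EFFECT, se)

def pooled_estimate(effects, se):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    w = 1.0 / se**2
    est = np.sum(w * effects) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

for label, n in [("rapid review (n=10)", 10),
                 ("systematic review (n=30)", 30),
                 ("'all' trials (n=60)", N_TRIALS)]:
    idx = rng.choice(N_TRIALS, size=n, replace=False)
    est, pooled_se = pooled_estimate(effects[idx], se[idx])
    print(f"{label:26s} pooled effect = {est:.3f} (SE {pooled_se:.3f})")
```

A bigger sample shrinks the standard error, but only if the sampling is unbiased; if the available trials are themselves distorted by reporting bias or biased designs (the Jørgensen et al point above), taking a larger sample does not rescue the estimate.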
A great blog by Hilda Bastian, digging deeper into the HPV vaccine discussion: https://blogs.plos.org/absolutely-maybe/2018/08/25/the-hpv-vaccine-a-critique-of-a-critique-of-a-meta-analysis/