I spotted this news via the BMJ and wanted to share it, as these (CSRs) are an important component of the debate around rapid versus systematic reviews.
I have long argued that terms such as ‘rapid’ and ‘systematic’ are misleading, and the CSR helps illustrate this point.
Rapid – is a relative term and open to interpretation. I would see rapid as taking a day or so; others might see it as two months. In short, I think it’s broadly used to mean anything less than a systematic review!
Systematic – suggests, to me, that you get all the evidence. However, we know this is not the case: most SRs will miss the unpublished trials (so missing ~50% of all trials) and will miss the data that is not in the journal article (e.g. data contained in CSRs). In short, systematic reviews are only systematic in relation to published journal articles. So, if you define systematic as getting all the evidence then – ironically – they’re not systematic in any meaningful sense.
For those of you who are not aware, CSRs are standardised documents representing the most complete record of the planning, execution, and results of clinical trials. They are submitted by industry to government drug regulators, and are typically summarised to create the journal article.
So, back to the CSR news story: it points to the FDA’s release of the CSR for a single trial (NCT # 01946204), and they link to the CSR via this holding page. I will also link to the actual CSR, but before I do that, think about how long a typical journal article is – 10-12 pages perhaps? Now, if you click on this link it’ll take you to the actual CSR and it’s 891 pages of evidence. Peter Doshi and Tom Jefferson have highlighted this compression (from CSR >> journal article) previously (see Clinical study reports of randomised controlled trials: an exploratory review of previously confidential industry reports).
891 pages summarised to – say – 12 pages. That’s a lot of data/information/evidence missing.
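To make the scale of that compression concrete, here is a quick back-of-the-envelope calculation using the figures above (891 CSR pages; the 12-page article length is my assumption of a typical upper bound):

```python
# Rough compression from a CSR to its journal article, using the
# figures in this post. The 12-page article length is an assumption.
csr_pages = 891
article_pages = 12

compression_ratio = csr_pages / article_pages          # ~74-fold compression
fraction_unreported = (csr_pages - article_pages) / csr_pages

print(f"~{compression_ratio:.0f}-fold compression")
print(f"~{fraction_unreported:.0%} of CSR pages not carried into the article")
```

Pages are a crude proxy for information, of course, but even allowing for tables and boilerplate, roughly 99% of the documented record never reaches the journal reader.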
If you did a full systematic review using ALL the CSRs for ALL the trials, then what we currently consider systematic reviews would actually be rapid reviews (as they would take a fraction of the time, relative to a CSR-based systematic review).
In my experience people – particularly those from a systematic review background – seem concerned about the potential for rapid reviews to be somehow ‘wrong’ by missing important data. However, they seem considerably less concerned about the data missing from CSRs and from unpublished trials. I have to question why that is, and I can think of two possible explanations (there could be others):
- Self-interest: They do well from their work in the SR world and do not want to rock the boat and potentially lose their privilege.
- Ideology: Their view on SRs is ideologically driven and they are immune to counter-arguments/evidence. Their adherence to the cause is based on faith, not evidence.
I end the post with a metaphor that I’ve adopted from Tom Jefferson: