Evidence Live has come and gone, and I had a wonderful chat with Iain Chalmers. Iain is a marvel, and in the course of the conversation I had a ‘light bulb’ moment about the nature of rapid versus systematic reviews. I’m increasingly unhappy with the distinction, and I’m of the view that the debate should not be ‘rapid’ versus ‘systematic’ but how, for a given context, we can most efficiently synthesise the available evidence.
For a given intervention there is a totality of evidence/data generated in clinical trials. This data is available, to varying degrees, through:
- Journal articles
- Regulatory data (e.g. FDA and EMA reports)
- Clinical Study Reports (CSRs)
The issue is: when do we have enough of that data to be happy? We will never get 100% of the evidence, so we are, in effect, working with a sample. Are we working with 10% of the evidence, or 75%?
Currently we typically rely on published journal articles. This means we lack two important sources of evidence: unpublished studies and the additional data held in regulatory reports and/or CSRs. So a systematic review based only on published journal articles might synthesise something like 35% of the available evidence (my estimate, but it rests on the fact that 30-50% of trials are unpublished and that CSRs contain a great deal of evidence missing from journal articles).
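To show where a figure like 35% could come from, here is a rough back-of-envelope sketch. The numbers are purely illustrative assumptions of mine (a 40% unpublished rate, taken as the mid-point of 30-50%, and a guess that a journal article reports about 60% of the data in the corresponding CSR), not measured values:

```python
# Back-of-envelope estimate of how much of the total evidence a
# journal-article-only review might capture. All inputs are
# illustrative assumptions, not measured quantities.

published_fraction = 0.60      # assume ~40% of trials are unpublished (mid-point of 30-50%)
journal_vs_csr_share = 0.60    # assume a journal article carries ~60% of the data in the CSR

coverage = published_fraction * journal_vs_csr_share
print(f"Estimated share of total evidence captured: {coverage:.0%}")  # roughly a third
```

Change either assumption and the estimate moves, but under anything like these inputs a journal-article-only review is sampling well under half of the evidence.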
So, is this enough? If it is, where’s the evidence that it’s enough?