Does this even make sense?

I’m just back from Evidence Live, where I ran a workshop on the community rapid review idea.  I spoke to many people about rapid reviews, and it’s interesting how the tide is turning, judging by the rise in interest in RRs.  During one of these discussions the absurdity of the current position struck me.

Systematic reviews

Fantasy = you include all trials

Reality = as around 50% of trials (on average) are unpublished, you end up with a sample (“a biased subsample”) covering roughly 50% of all trials.

Rapid reviews might quickly capture 70-80% of the published trials, which works out at 35-40% of all trials (say 37.5% on average).

Therefore, the systematic review position appears to be that the difference between 37.5% and 50% is crucial and worth huge extra resources to close, while the gap from 50% to 100% is trivial, of little interest, and not worth bothering about.
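To make the arithmetic explicit, here is a minimal sketch of the coverage numbers, assuming (as above) that roughly 50% of trials are ever published and that a rapid review retrieves 70-80% of those published trials. The figures and variable names are illustrative only, not real data.

```python
# Illustrative coverage arithmetic (assumed figures from the post, not real data).

publication_rate = 0.50          # assumed: ~50% of all trials are ever published
rapid_retrieval = (0.70, 0.80)   # assumed: a rapid review finds 70-80% of published trials

# Share of ALL trials captured by a rapid review.
rapid_coverage = [r * publication_rate for r in rapid_retrieval]   # 0.35 to 0.40
rapid_midpoint = sum(rapid_coverage) / 2                           # 0.375

# Share of ALL trials captured by a typical systematic review
# (published trials only).
sr_coverage = publication_rate                                     # 0.50

print(f"Rapid review:      {rapid_coverage[0]:.0%}-{rapid_coverage[1]:.0%} of all trials")
print(f"Rapid midpoint:    {rapid_midpoint:.1%}")
print(f"Systematic review: {sr_coverage:.0%} of all trials")

# The gap the extra systematic review effort closes, versus the gap nobody closes.
print(f"Gap closed by SR effort:    {sr_coverage - rapid_midpoint:.1%}")  # 12.5%
print(f"Gap left by missing trials: {1.0 - sr_coverage:.0%}")             # 50%
```

On these assumed numbers, the extra systematic review effort buys about 12.5 percentage points of coverage, while 50 percentage points remain missing either way.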

Where’s the sense in that?  Why is that even considered ethical?

Need all the evidence? Then get all the evidence (published and unpublished).

Happy with a sample? Then use the smallest sample possible to reduce costs and allow more to be done.

The reality is that we’re currently sampling, most people seem oblivious to it, and it’s perpetuating massive waste. The SR organisations, which have set the barrier to entry for competitors very high and continue to push it higher, must be hoping the myth continues.  It seems like a cabal to me, with a coterie of apparatchiks helping to support the status quo.
