I asked the following question on the EBHC mailing list:
“I’m wondering how one could test the following, so I would welcome advice.
Question: Assuming we have a finite resource for evidence synthesis, which is better: one systematic review or, say, 5-10 rapid reviews?
Context: There is an opportunity cost associated with doing labour-intensive systematic reviews, so how do we know we are using this scarce evidence-synthesis resource optimally? In the studies of rapid reviews (RRs) versus systematic reviews (SRs) I have yet to see an example where an RR has got a ‘wrong’ answer (i.e. the SR says the intervention is good while the RR says it is bad – a reversal in conclusion), but there is sometimes variation in the estimated effect size. This variation is frequently small, but sometimes it can move the effect from significant to non-significant or vice versa.
So, what method would you use to assess which gives the most benefit for the limited amount of resource?”
I posted it earlier today and so far there have been no concrete suggestions. Is there anyone out there who can offer some?
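In the meantime, to make the ‘significance flip’ concern concrete, here is a minimal sketch of how one might flag whether a rapid review’s estimate reverses direction or merely crosses the conventional significance threshold relative to the full systematic review. It assumes effects are reported on a log risk-ratio scale with standard errors, and every number in it is made up purely for illustration, not drawn from any actual comparison of RRs and SRs.

```python
# Hypothetical sketch: compare a systematic review (SR) estimate with a rapid
# review (RR) estimate of the same intervention, and flag whether they differ
# in direction (a "reversal") or only in statistical significance.
# All numbers are invented for illustration; they are not from any study.


def is_significant(log_effect: float, se: float) -> bool:
    """True if the effect is significant at the two-sided 5% level (z > 1.96)."""
    return abs(log_effect) / se > 1.959963984540054


def compare_reviews(sr_log_effect, sr_se, rr_log_effect, rr_se):
    sr_sig = is_significant(sr_log_effect, sr_se)
    rr_sig = is_significant(rr_log_effect, rr_se)
    reversal = (sr_log_effect * rr_log_effect) < 0   # opposite directions of effect
    significance_flip = sr_sig != rr_sig             # one significant, the other not
    return {"reversal": reversal, "significance_flip": significance_flip}


if __name__ == "__main__":
    # Illustrative values: the RR's point estimate is close to the SR's, but its
    # slightly wider standard error flips the result to non-significant.
    print(compare_reviews(sr_log_effect=-0.22, sr_se=0.10,
                          rr_log_effect=-0.18, rr_se=0.12))
    # -> {'reversal': False, 'significance_flip': True}
```

Run over a set of published RR/SR pairs, something like this could at least quantify how often the cheaper reviews change the conclusion rather than just the point estimate, which is one input into judging whether 5-10 RRs buy more than one SR for the same resource.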