I posted a post-Evidence Live blog last week which explored the notion of harms associated with doing rapid reviews (RRs). There is overlap with that post, but I’ve had time to reflect and hopefully this version is better written. I’ve also added a vote!! It may need re-writing again, so if you think it needs clarification then please let me know! The question I was asked … Continue reading Value of Information to help with the SR v RR debate?
I was at the wonderful Evidence Live and presented on rapid reviews. One question came from the wonderful Iain Chalmers who asked about the potential for harm if health professionals followed the advice of a RR that was subsequently shown to be wrong. Later, in conversation, it became clear that ‘wrong’ meant a reversal of conclusion – so the SR might say the intervention is … Continue reading Systematic versus rapid reviews – what about harms?
I had the pleasure of presenting at the HTAi 2018 conference in Vancouver which ended yesterday. Here is a picture from the event, shared as (a) the unplanned colour co-ordination is impeccable and (b) people have commented I look like a game show host. I talked about, you guessed it, rapid reviews. My emphasis was on the fact that, whatever the review type, you never … Continue reading HTAi 2018
Straw man: A logical fallacy involving the purposeful misrepresentation of an argument in order to strike it down. I was sent a paper to review earlier this week and I was quite strong in my feedback, so I thought I should share my frustration and write a blog – so here it is… A central tenet of the paper was that rapid reviews (RR) can be … Continue reading Straw man and the accuracy of rapid reviews
I recently had the pleasure of talking about rapid reviews in Liverpool. One point that got raised in the discussion was that of methodological shortcuts. Typically, rapid reviews are portrayed as ‘cut down’ systematic reviews, i.e. the starting point is the systematic review process and you then reduce/remove steps to arrive at your rapid review methodology (BTW my post Different approaches to rapidity discusses this and … Continue reading Shortcuts
Reading the ‘Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews’ (posted here recently) got me thinking! It seems to me that what they were doing was very similar to a very specific search of the literature (in this case matching keywords to words in document titles). Typically, in evidence synthesis, the opposite (a sensitive search) tries to … Continue reading Sensitive searching of few or specific search of many…?
As far as I can tell we undertake evidence synthesis to better understand the effectiveness of an intervention. The rationale is that the greater the accumulation of evidence, the greater the understanding of how good an intervention is. This is typically characterised by a reduction in the size of the confidence intervals in meta-analyses. Put another way, we attempt to be as certain as … Continue reading Theorising about evidence synthesis – is it about the cost, language or other?
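The narrowing of confidence intervals as evidence accumulates can be illustrated with a minimal sketch of fixed-effect inverse-variance pooling, the standard weighting used in many meta-analyses. The study effects and variances below are invented for illustration only, not taken from any real review:

```python
import math

def pooled_ci(effects, variances, z=1.96):
    """Fixed-effect inverse-variance pooled estimate and 95% CI.

    Each study is weighted by the inverse of its variance; the
    pooled standard error shrinks as studies are added, so the
    confidence interval narrows.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - z * se, pooled + z * se)

# Hypothetical studies, all estimating a true effect near 0.4
effects = [0.35, 0.45, 0.40, 0.38, 0.42]
variances = [0.04] * 5

for k in (2, 5):
    est, (lo, hi) = pooled_ci(effects[:k], variances[:k])
    print(f"{k} studies: estimate={est:.3f}, CI width={hi - lo:.3f}")
```

With equal study variances the CI width falls in proportion to 1/√k, which is the quantitative sense in which more evidence means more certainty.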
In the recent post relating to trading certainty for speed I highlighted that the authors stated: “Participants of our survey, on average, viewed 10% as the maximum tolerable risk of getting an incorrect answer from a rapid review” My issue with this is that there was no definition of what ‘incorrect’ was. So, I emailed one of the authors: “A fascinating paper, thank you. One … Continue reading Update from author: Trading certainty for speed
I’m just back from Evidence Live where I ran a workshop on the community rapid review idea. I spoke to many people about rapid reviews, and it’s interesting how the tide is turning (judging by the rise in interest in RRs). During one discussion the absurdity struck me. Systematic reviews: Fantasy = you include all trials. Reality = as 50% of trials (on average) are unpublished … Continue reading Does this even make sense?
Paul Glasziou and Iain Chalmers recently published the above article on the BMJ Blog. As you’d expect with these authors, it’s a great read. I’d like to highlight one section that’s particularly relevant to the issue of rapid reviews (note my emphasis): Whether the precise non-publication rate is 30%, 40%, or 50%, it is still a serious waste of the roughly $180 billion annually … Continue reading Can it really be true that 50% of research is unpublished?