Value of Information to help with the SR v RR debate?

I posted a post-Evidence Live blog last week which explored the notion of harms associated with doing rapid reviews (RRs). There is overlap with that post, but I’ve had time to reflect and hopefully this will be better written. I’ve also added a vote! It may need re-writing again; if you think it needs clarification then please let me know! The question I was asked … Continue reading Value of Information to help with the SR v RR debate?

Systematic versus rapid reviews – what about harms?

I was at the wonderful Evidence Live conference and presented on rapid reviews. One question came from Iain Chalmers, who asked about the potential for harm if health professionals followed the advice of a RR that was subsequently shown to be wrong. Later, in conversation, it became clear that ‘wrong’ meant a reversal of conclusion – so the SR might say the intervention is … Continue reading Systematic versus rapid reviews – what about harms?

HTAi 2018

I had the pleasure of presenting at the HTAi 2018 conference in Vancouver, which ended yesterday. Here is a picture from the event, shared because (a) the unplanned colour co-ordination is impeccable and (b) people have commented that I look like a game show host. I talked about, you guessed it, rapid reviews. My emphasis was on the fact that, whatever the review type, you never … Continue reading HTAi 2018

Shortcuts

I recently had the pleasure of talking about rapid reviews in Liverpool. One point raised in the discussion was that of methodological shortcuts. Typically, rapid reviews are portrayed as ‘cut down’ systematic reviews, i.e. the starting point is the systematic review process and you then reduce/remove steps to arrive at your rapid review methodology (BTW my post Different approaches to rapidity discusses this and … Continue reading Shortcuts

Sensitive searching of few or specific search of many…?

Reading ‘Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews’ (posted here recently) got me thinking! It seems to me that what they were doing was very similar to a highly specific search of the literature (in this case matching keywords to words in document titles). Typically, in evidence synthesis, the opposite (a sensitive search) tries to … Continue reading Sensitive searching of few or specific search of many…?

Theorising about evidence synthesis – is it about the cost, language or other?

As far as I can tell, we undertake evidence synthesis to better understand the effectiveness of an intervention. The rationale is that the greater the accumulation of evidence, the greater the understanding of how good an intervention is. This is typically characterised by a reduction in the size of the confidence intervals in meta-analyses. Put another way, we attempt to be as certain as … Continue reading Theorising about evidence synthesis – is it about the cost, language or other?
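The narrowing of confidence intervals as evidence accumulates can be illustrated with a minimal sketch. This is not from the post itself: it assumes a standard fixed-effect inverse-variance meta-analysis with five hypothetical equal-variance trials, purely to show that the pooled interval width shrinks in proportion to 1/√k as the k-th study is added.

```python
import math

def pooled_ci_width(variances, z=1.96):
    """Fixed-effect inverse-variance pooling: each study is weighted by
    1/variance, and the pooled standard error is sqrt(1 / sum(weights)).
    Returns the full width of the (approximate) 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled_se = math.sqrt(1.0 / sum(weights))
    return 2 * z * pooled_se

# Five hypothetical trials, each with effect-estimate variance 0.04 (SE = 0.2)
variances = [0.04] * 5
widths = [pooled_ci_width(variances[:k]) for k in range(1, 6)]
# Each added study narrows the interval: width is proportional to 1/sqrt(k)
```

With these made-up numbers, one trial gives a CI width of 2 × 1.96 × 0.2 ≈ 0.78, and four trials halve it to ≈ 0.39 – the "greater accumulation, greater certainty" pattern the post describes.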

Update from author: Trading certainty for speed

In the recent post relating to trading certainty for speed, I highlighted that the authors stated: “Participants of our survey, on average, viewed 10% as the maximum tolerable risk of getting an incorrect answer from a rapid review”. My issue with this is that there was no definition of what ‘incorrect’ meant. So, I emailed one of the authors: “A fascinating paper, thank you.  One … Continue reading Update from author: Trading certainty for speed

Can it really be true that 50% of research is unpublished?

Paul Glasziou and Iain Chalmers recently published the above article on the BMJ Blog. As you’d expect from these authors, it’s a great read. I’d like to highlight one section that’s particularly relevant to the issue of rapid reviews (note my emphasis): “Whether the precise non-publication rate is 30%, 40%, or 50%, it is still a serious waste of the roughly $180 billion annually … Continue reading Can it really be true that 50% of research is unpublished?