Using regulatory data

In my recent post I expressed frustration with the direction of travel of rapid reviews, and one thing I highlighted was the lack of work on using regulatory data. This prompted two responses highlighting two separate papers: 'How to use FDA drug approval documents for evidence syntheses' (BMJ, 2018) and 'Practical guidance for using multiple data sources in systematic reviews and meta-analyses (with examples from the …'

Where are we going with rapid reviews? #frustrating

A while back I wrote a piece, Different approaches to rapidity, which suggested there are two ways of doing a rapid review:

Process – start with the systematic review and take shortcuts
Outcome – ask what the optimal way is of getting to the desired outcome

I'm increasingly concerned that all the focus is on the former and not the latter. My concern is based on a variety …

The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias

The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias, by Lars Jørgensen, Peter C Gøtzsche and Tom Jefferson (BMJ Evidence-Based Medicine, 2018). This is a highly critical article pointing out that the review missed nearly half of the eligible trials. They also land some heavy punches with comments such as: “Cochrane’s public relations of the review were uncritical” and “In our view, this is not …

Value of Information to help with the SR v RR debate?

I posted a post-Evidence Live blog last week which explored the notion of harms associated with doing rapid reviews (RRs). There is overlap with that post, but I’ve had time to reflect and hopefully this will be better written. I’ve also added a vote! It may need re-writing again; if you think it needs clarification then please let me know! The question I was asked …

Systematic versus rapid reviews – what about harms?

I was at the wonderful Evidence Live and presented on rapid reviews. One question came from the wonderful Iain Chalmers, who asked about the potential for harm if health professionals followed the advice of an RR that was subsequently shown to be wrong. Later, in conversation, it became clear that ‘wrong’ meant a reversal of conclusion – so the SR might say the intervention is …

HTAi 2018

I had the pleasure of presenting at the HTAi 2018 conference in Vancouver, which ended yesterday. Here is a picture from the event, shared because (a) the unplanned colour co-ordination is impeccable and (b) people have commented that I look like a game show host. I talked about, you guessed it, rapid reviews. My emphasis was on the fact that, whatever the review type, you never …

Shortcuts

I recently had the pleasure of talking about rapid reviews in Liverpool. One point raised in the discussion was that of methodological shortcuts. Typically, rapid reviews are portrayed as ‘cut down’ systematic reviews, i.e. the starting point is the systematic review process and you then reduce or remove steps to arrive at your rapid review methodology (BTW my post Different approaches to rapidity discusses this and …

Sensitive searching of few or specific search of many…?

Reading the paper ‘Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews‘ (posted here recently) got me thinking! It seems to me that what they were doing was very similar to a very specific search of the literature (in this case, matching keywords to words in document titles). Typically, in evidence synthesis, the opposite (a sensitive search) tries to …
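To make the 'specific search of many' idea concrete, here is a minimal sketch of title-only keyword screening. The keyword list, record format and simple substring matching rule are illustrative assumptions of mine, not the PICo-based method the paper actually uses.

    # Illustrative sketch only: keep a citation if any screening keyword
    # appears in its title (case-insensitive substring match).
    # The keywords and records below are invented for this example.
    keywords = ["rapid review", "systematic review", "screening"]

    records = [
        {"id": 1, "title": "Expediting citation screening using title-only screening"},
        {"id": 2, "title": "A rapid review of vitamin D supplementation"},
        {"id": 3, "title": "Qualitative study of patient experience"},
    ]

    def title_matches(title, keywords):
        """Return True if any keyword occurs in the title."""
        title_lower = title.lower()
        return any(kw.lower() in title_lower for kw in keywords)

    included = [r for r in records if title_matches(r["title"], keywords)]
    print([r["id"] for r in included])  # -> [1, 2]; record 3 is screened out

The obvious trade-off, as with any highly specific search, is that relevant records whose titles happen not to contain the chosen terms are missed.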

Theorising about evidence synthesis – is it about the cost, language or other?

As far as I can tell, we undertake evidence synthesis to better understand the effectiveness of an intervention. The rationale is that the greater the accumulation of evidence, the greater the understanding of how good an intervention is. This is typically characterised by a reduction in the size of the confidence intervals in meta-analyses. Put another way, we attempt to be as certain as …
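As a rough sketch of why the intervals narrow, assume a fixed-effect model (my notation, not from the post): each study i contributes an estimate and a within-study variance, and the pooled estimate and its standard error are

    \hat\theta \;=\; \frac{\sum_i w_i \hat\theta_i}{\sum_i w_i},
    \qquad w_i = \frac{1}{v_i},
    \qquad \operatorname{SE}(\hat\theta) = \frac{1}{\sqrt{\sum_i w_i}},
    \qquad \text{95\% CI} \;=\; \hat\theta \pm 1.96\,\operatorname{SE}(\hat\theta).

Each additional study adds a positive weight to the sum in the denominator, so under this model the standard error, and with it the width of the confidence interval, can only shrink as evidence accumulates.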