I asked the following question on the EBHC mailing list: “I’m wondering how one could test the following, so would welcome advice. Question: Assuming we have a finite resource for evidence synthesis, which is better: 1 systematic review or, say, 5-10 rapid reviews? Context: There is an opportunity cost associated with doing labour-intensive systematic reviews; how do we know we are using this … Continue reading Methods Q: Rapid versus systematic reviews – assessing which generates most benefit/least harm
In my recent post I expressed frustration with the direction of travel of rapid reviews, and one thing I highlighted was the lack of work on using regulatory data. This prompted two responses, each highlighting a separate paper: How to use FDA drug approval documents for evidence syntheses, BMJ 2018; Practical guidance for using multiple data sources in systematic reviews and meta‐analyses (with examples from the … Continue reading Using regulatory data
A while back I wrote a piece, Different approaches to rapidity, which suggested there are two ways of doing a rapid review: Process – take the systematic review process and apply shortcuts; Outcome – work out the optimal way of getting to the desired outcome. I’m increasingly concerned that all the focus is on the former and not the latter. My concern is based on a variety … Continue reading Where are we going with rapid reviews? #frustrating
The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias by Lars Jørgensen, Peter C Gøtzsche and Tom Jefferson (BMJ Evidence-Based Medicine 2018). This is a highly critical article pointing out that the review missed nearly half of the eligible trials. They also land some heavy punches with comments such as: “Cochrane’s public relations of the review were uncritical” and “In our view, this is not … Continue reading The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias
I posted a post-Evidence Live blog last week which explored the notion of harms associated with doing rapid reviews (RRs). There is overlap with that post, but I’ve had time to reflect and hopefully this version is better written. I’ve also added a vote! It may need re-writing again; if you think it needs clarification then please let me know! The question I was asked … Continue reading Value of Information to help with the SR v RR debate?
I was at the wonderful Evidence Live and presented on rapid reviews. One question came from the wonderful Iain Chalmers, who asked about the potential for harm if health professionals followed the advice of a RR that was subsequently shown to be wrong. Later, in conversation, it became clear that ‘wrong’ meant a reversal of conclusion – so the SR might say the intervention is … Continue reading Systematic versus rapid reviews – what about harms?
I had the pleasure of presenting at the HTAi 2018 conference in Vancouver, which ended yesterday. Here is a picture from the event, shared because (a) the unplanned colour co-ordination is impeccable and (b) people have commented that I look like a game show host. I talked about, you guessed it, rapid reviews. My emphasis was on the fact that, whatever the review type, you never … Continue reading HTAi 2018
Straw man: a logical fallacy involving the purposeful misrepresentation of an argument in order to strike it down. I was sent a paper to review earlier this week and was quite strong in my feedback, so I thought I should share my frustration and write a blog – so here it is… A central tenet of the paper was that rapid reviews (RRs) can be … Continue reading Straw man and the accuracy of rapid reviews
I recently had the pleasure of talking about rapid reviews in Liverpool. One point raised in the discussion was that of methodological shortcuts. Typically, rapid reviews are portrayed as ‘cut down’ systematic reviews, i.e. the starting point is the systematic review process and you then reduce/remove steps to arrive at your rapid review methodology (BTW my post Different approaches to rapidity discusses this and … Continue reading Shortcuts
Reading ‘Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews’ (posted here recently) got me thinking! It seems to me that what they were doing was very similar to a very specific search of the literature (in this case, matching keywords to words in document titles). Typically, in evidence synthesis, the opposite approach (a sensitive search) tries to … Continue reading Sensitive searching of few or specific search of many…?