Community Rapid Review v3

This is the latest iteration of our thinking around a community rapid review system. Two important things to stress:

Firstly, the notion of being nurturing, supportive and educational is being seen as overwhelmingly positive – something I’m delighted with.

Secondly, the idea is not to compete with systematic reviews or to produce ‘cut-down’ systematic reviews. The idea is that there is a continuum of quality in evidence synthesis: at one end a really poor, biased review carried out by someone inexperienced, and at the other a full systematic review. So a primary aim is to support users in moving from the poor-quality end towards the high-quality end.

Anyway, onto the current high-level outline (a rough sketch of how the pieces might fit together follows the list):

  1. We create a Rapid Review ‘wizard’ where users enter their clinical question and then the PICO elements. This helps structure the search.
  2. The search returns a list of candidate articles and the user selects the ones they think look good.
  3. Using various automated techniques (such as our SmartSearch and PubMed’s related articles) we suggest other potential articles, and the user selects those that are pertinent. We also give the user the opportunity to add documents from other sources (e.g. Google Scholar, Embase, etc.).
  4. We then ask them to extract the relevant content from each document, creating an evidence table with a row of text for each article they’ve selected. We are working on a number of automation techniques to support this.
  5. They then create a narrative summary and a bottom line.
  6. It’s published as ‘pending’ and remains so until enough external ‘viewers’ endorse it (post-publication peer review).
  7. Others can get involved and add to the initial review.
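
To make the outline a little more concrete, here is a minimal sketch of how a review record and the related-articles lookup might hang together. All class names, fields and the endorsement threshold are illustrative assumptions rather than our actual design; the related-articles helper calls NCBI’s public E-utilities ELink service, which is how PubMed’s related-citations data is exposed.

```python
# A rough sketch only: names, fields and the endorsement threshold below are
# illustrative assumptions, not an actual Trip data model.

from dataclasses import dataclass, field
from enum import Enum
from typing import ClassVar, List
from urllib.parse import urlencode
from urllib.request import urlopen
import json


class ReviewStatus(Enum):
    PENDING = "pending"    # published but awaiting endorsements (step 6)
    ENDORSED = "endorsed"  # enough external viewers have endorsed it


@dataclass
class PICOQuestion:
    """Step 1: the clinical question broken into its PICO elements."""
    population: str
    intervention: str
    comparison: str
    outcome: str


@dataclass
class EvidenceRow:
    """Step 4: one row of the evidence table per selected article."""
    pmid: str
    citation: str
    extracted_findings: str


@dataclass
class RapidReview:
    question: PICOQuestion
    evidence_table: List[EvidenceRow] = field(default_factory=list)
    narrative_summary: str = ""   # step 5
    bottom_line: str = ""         # step 5
    endorsements: int = 0

    # Assumed value; how many endorsements count as 'enough' is still open.
    ENDORSEMENT_THRESHOLD: ClassVar[int] = 3

    @property
    def status(self) -> ReviewStatus:
        """Step 6: the review stays 'pending' until enough endorsements arrive."""
        if self.endorsements >= self.ENDORSEMENT_THRESHOLD:
            return ReviewStatus.ENDORSED
        return ReviewStatus.PENDING


def pubmed_related_articles(pmid: str, limit: int = 10) -> List[str]:
    """Step 3: fetch related-article PMIDs via NCBI E-utilities ELink."""
    params = urlencode({
        "dbfrom": "pubmed",
        "db": "pubmed",
        "id": pmid,
        "linkname": "pubmed_pubmed",
        "retmode": "json",
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?{params}"
    with urlopen(url) as response:
        data = json.load(response)
    links = data["linksets"][0]["linksetdbs"][0]["links"]
    return [str(p) for p in links[:limit]]
```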

In many ways we see it as having similar content to, say, a BestBETs or UTHSCSA CAT.

We are currently writing a larger document outlining the above in more detail. Much of the development focus will be on ensuring we create a quality product (e.g. minimising bias) and on creating a nurturing environment (e.g. having volunteers available to mentor users who are uncertain and to answer specific questions).

Comments very welcome to help improve the above. Roll on v4!

11 thoughts on “Community Rapid Review v3”

  1. The process sounds good – there are indeed a lot of evidence or review or synopsis (or whatever…!) services out there. The key issue for any of these is demonstrating QUALITY of evidence and synthesis, and I think having a platform with TRIP that can LINK both search and evidence synthesis would be a big plus. Looking forward to seeing how it develops!

  2. I do like the potential of this idea – to be flexible, and to allow someone to ‘ask a question’ that then generates a ‘review lite’ that has been peer-reviewed. It needs a bit of work to make sure the flow of quality is maintained (a collection of good papers doesn’t necessarily make a good systematic review unless you know how to bring all the evidence together well and critique it), but I’m very interested in the affordances this offers for people creating ‘personalised systematic reviews’ – especially patient groups, charities, those not necessarily embedded within the clinical, NHS or research worlds etc…
