Kelly SE et al. PeerJ 4:e2522
A lovely paper that explores attitudes towards rapid reviews. The abstract is as follows:
Rapid reviews expedite the knowledge synthesis process with the goal of providing timely information to healthcare decision-makers who want to use evidence-informed policy and practice approaches. A range of opinions and viewpoints on rapid reviews is thought to exist; however, no research to date has formally captured these views. This paper aims to explore evidence producer and knowledge user attitudes and perceptions towards rapid reviews.
A Q methodology study was conducted to identify central viewpoints about rapid reviews based on a broad topic discourse. Participants rank-ordered 50 text statements and explained their Q-sort in free-text comments. Individual Q-sorts were analysed using Q-Assessor (statistical method: factor analysis with varimax rotation). Factors, or salient viewpoints on rapid reviews, were identified, interpreted and described.
Analysis of the 11 individual Q sorts identified three prominent viewpoints: Factor A cautions against the use of study design labels to make judgements. Factor B maintains that rapid reviews should be the exception and not the rule. Factor C focuses on the practical needs of the end-user over the review process.
Results show that there are opposing viewpoints on rapid reviews, yet some unity exists. The three factors described offer insight into how and why various stakeholders act as they do and what issues may need to be resolved before increased uptake of the evidence from rapid reviews can be realized in healthcare decision-making environments.
The three factors mentioned in the abstract are worth expanding on (pertinent passages are in italics):
Factor A. “Don’t judge a book by its cover”: They had strong agreement (+3) with two statements in particular: “All evidence synthesis products, including rapid reviews (systematic reviews, or health technology assessments), can be conducted very well or very poorly” and “A well-conducted rapid review may produce better evidence than a poorly conducted systematic review.” They similarly disagreed (−2) with the statement that “a rapid review cannot be a systematic review.” Participants also agreed that value and quality were not tied to the length of time taken to complete a review, no matter how long or short. They agreed (+2) with the statement “A good quality review of evidence is determined by the methods used, not by the speed at which it is completed” and disagreed (−3) with the statement “The more time spent conducting the review of the evidence, the more valid the results of the review will be.”
Factor B. “Gold standard or bust”: This group strongly believed in the gold standard systematic review to meet the needs of knowledge users, and that use of rapid reviews should be the exception, not the rule. They firmly held the belief (+3) that “deviating from accepted systematic review methods may introduce bias and impact the validity of the resulting rapid review, which may be an unacceptable risk for some knowledge users” and that “rapid reviews cannot be systematic reviews.” Participants also endorsed the view (+1) that “Rapid reviews are ‘quick and dirty’ systematic reviews,” which participants in Factors A and C both disagreed with (−2). This sentiment is repeated in their disagreement (−2) with the principle suggested by “A well-conducted rapid review may produce better evidence than a poorly conducted systematic review.”
Factor C. “The pragmatist”: This factor was characterized by a focus on the pragmatic needs of the knowledge user, balanced with the value of tailored rapid reviews and the inherent risk of bias that may accompany their use in decision-making processes. In opposition to those in Factor B, participants felt strongly (+3) that “Knowledge users don’t always need all of the evidence, they just need the best evidence to support their decision, and what is ‘best evidence’ is specific to the knowledge user.”
In the conclusion the authors highlight transparency as a pressing factor to be addressed, and go on to mention PRISMA and AMSTAR. In writing this blog, transparency and reproducibility are, by a considerable distance, the themes that keep reappearing, and they clearly need addressing as a matter of some urgency.
Another important point relates to the need for more research to better understand the implications of various aspects of so-called ‘rapid reviews’. I love the comment: “Simply put, at this time we do not know when a systematic review becomes ‘unsystematic.’”
One final point, I truly despair at the attitudes displayed in Factor B!