I’ve been aware of these for a while, but was asked not to share them. Now that they’ve been published elsewhere, I thought I’d share them here! In October 2015, members of the International Collaboration for the Automation of Systematic Reviews (ICASR) met and drafted the following:
- Systematic reviews involve multiple tasks, each with different issues, but all must be improved.
- Automation may assist with all tasks, from scoping reviews and identifying research gaps, through protocol development, to writing and dissemination of the review.
- The processes for each task can and should be continuously improved, to be more efficient and more accurate.
- Automation can and should facilitate the production of systematic reviews that adhere to high standards for the reporting, conduct and updating of rigorous reviews.
- Developments should also provide flexibility in how steps are combined and used, e.g. subdividing or merging steps, and allow different users to use different interfaces.
- Different groups with different expertise are working on different parts of the problem; to improve reviews as a whole will require collaboration between these groups.
- Every automation technique should be shared, preferably by making code, evaluation data and corpora available for free.
- All automation techniques and tools should be evaluated using a recommended and replicable method with results and data reported.
I was not at the Vienna meeting, so I’m unsure who was involved. I’ve been invited to the next meeting and will find out more then. In the meantime, if anyone has any information, it’d be great to hear who’s involved.
A few comments:
- The principles follow the traditional approach to speeding up the review process. I’ve written previously about there being two main approaches: traditional (process focussed) and outcome focussed.
- The term ‘more accurate’ in point 3 is interesting. As has been reported extensively (for example, in my own post Some additional thoughts on systematic reviews), systematic reviews that rely on published journal articles cannot be relied upon to be accurate, due to publication bias. So I’m never sure of the best way to describe an appropriate outcome for a review.
But it’s great to see these written down; the longest march begins with the first step.