In 2014 Guy Tsafnat, Paul Glasziou and others wrote the paper “Systematic review automation technologies”, in which the authors “…surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review.”
It is an excellent overview of the potential for automation in the systematic review process. The authors break the systematic review process down into 15 steps (see Figure 1). For each step they describe the task and discuss the potential for automation.
They also suggest a possible change in the order in which tasks are done, stating:
“Tasks are reordered so that necessarily manual tasks are shifted to the start of the review, and automatic tasks follow. As an example, consider risk of bias assessment which sometimes requires judgment depending on the outcome measures, the intervention and the research question. During the review protocol preparation, a reviewer would train the system to make the required specific judgment heuristics for that systematic review. A classification engine will use these judgments later in the review to appraise papers. Updating a review becomes a matter of executing the review at the time that it is needed. This frees systematic reviewers to shift their focus from the tedious tasks which are automatable, to the creative tasks of developing the review protocol where human intuition, expertise, and common sense are needed and providing intelligent interpretations of the collected studies. In addition, the reviewers will monitor and assure the quality of the execution to ensure the overall standard of the review.”
The above paragraph contains a few bits of techie speak that need explaining:
Train the system – this is machine-learning terminology. Machine learning relies on a training set of data: you feed the system positive and negative examples of whatever you are interested in. So, you might input documents whose text contains good and bad examples of – say – allocation concealment. The machine’s algorithms learn ‘rules’ by which to make judgements. A separate testing set of data is then used to check that the algorithm is working.
Classification engine – this applies the rules/algorithm learned during the ‘train the system’ stage to actually make the assessment in practice.
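To make the train-then-classify idea concrete, here is a minimal sketch in Python using a tiny Naive Bayes text classifier built from the standard library only. The sentences, labels and function names are all invented for illustration – real systems use far larger training sets and more sophisticated features – but the shape is the same: learn from labelled examples, then apply the learned model to new text.

```python
from collections import Counter
import math

# Hypothetical training set: sentences from trial reports, labelled 1 if they
# describe adequate allocation concealment, 0 otherwise. All examples invented.
train = [
    ("allocation was concealed using sealed opaque envelopes", 1),
    ("central telephone randomisation concealed the allocation", 1),
    ("an independent pharmacy held the concealed allocation schedule", 1),
    ("allocation used sequentially numbered opaque sealed envelopes", 1),
    ("treatment was assigned alternately by date of admission", 0),
    ("the clinician chose the treatment group for each patient", 0),
    ("group assignment was based on an open randomisation list", 0),
    ("patients were allocated by hospital record number", 0),
]

def tokenize(text):
    return text.lower().split()

# "Train the system": count word frequencies per label (multinomial Naive Bayes).
word_counts = {0: Counter(), 1: Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set(word_counts[0]) | set(word_counts[1])

def classify(text):
    """The "classification engine": apply the learned counts to a new sentence."""
    scores = {}
    for label in (0, 1):
        total = sum(word_counts[label].values())
        # Log prior plus log likelihood with add-one smoothing.
        score = math.log(label_counts[label] / len(train))
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# A held-out "testing set" checks the learned rules before they are trusted.
print(classify("randomisation was concealed in sealed envelopes"))  # expect 1
print(classify("the doctor assigned each patient to a group"))      # expect 0
```

In a real review-automation tool the reviewer would supply the labelled judgements during protocol preparation, and the engine would then appraise papers retrieved later in the review, with the reviewer monitoring its output for quality.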
The authors finish with the following conclusion:
“Synthesis for evidence based medicine is quickly becoming unfeasible because of the exponential growth in evidence production. Limited resources can be better utilized with computational assistance and automation to dramatically improve the process. Health informaticians are uniquely positioned to take the lead on the endeavor to transform evidence-based medicine through lessons learned from systems and software engineering. The limited resources dedicated to evidence synthesis can be better utilized with computational assistance and automation.
Conducting systematic reviews faster, with fewer resources, will produce more reviews to answer more clinical questions, keep them up to date, and require less training. Together, advances in the automation of systematic reviews will provide clinicians with more evidence-based answers and thus allow them to provide higher quality care.”