Where to start with this marvel? Perhaps the RobotReviewer website, which lets you upload PDFs of clinical trials free of charge and have them automatically assessed for bias. Alternatively, there is the recent peer-reviewed journal article RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials. Or the many tweets it has prompted from a multitude of people.
But what is RobotReviewer? The aims are laid out in the objectives of the above paper:
“…a system to automate the assessment of bias of randomized controlled trials using the Cochrane Risk of Bias (RoB) Tool. For each domain in the Cochrane RoB tool, the system should reliably perform two tasks: 1. Determine whether a trial is at low risk of bias (document classification of low v high or unclear risk of bias), and 2. Identify text from the trial report that supports these bias judgments (sentence classification of relevant v irrelevant).”
Without going into too much detail, the authors have used machine learning to automate the process. They highlight that they use distant supervision as opposed to the more conventional supervised learning. For more information, and why this matters, see the full text (via the link above). Essentially, the authors took risk of bias assessments and supporting annotations directly from Cochrane reviews (i.e. produced by human reviewers), and the system learned which textual features predict those judgments.
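To make the idea concrete, here is a minimal sketch of that distantly supervised setup. It is not RobotReviewer's actual model: the training sentences, labels, and the naive bag-of-words classifier below are all hypothetical stand-ins, but they illustrate the principle of learning from existing review judgments rather than purpose-made annotations.

```python
import math
from collections import Counter

# Toy "distantly supervised" training data: the low/high labels come from
# existing review-level risk of bias judgments (hypothetical examples here),
# not from sentences hand-labelled for this task.
TRAIN = [
    ("patients were randomized using a computer generated sequence", "low"),
    ("allocation was concealed using sealed opaque envelopes", "low"),
    ("participants were assigned alternately to each group", "high"),
    ("the method of randomization was not described", "high"),
]

def tokens(text):
    return text.lower().split()

def train(data):
    # Word counts per class, for a simple Naive Bayes-style score.
    counts = {"low": Counter(), "high": Counter()}
    for sentence, label in data:
        counts[label].update(tokens(sentence))
    return counts

def classify(counts, sentence):
    # Score each class by smoothed log word frequency; return the higher.
    def score(label):
        total = sum(counts[label].values())
        return sum(
            math.log((counts[label][w] + 1) / (total + 1))
            for w in tokens(sentence)
        )
    return max(("low", "high"), key=score)

model = train(TRAIN)
print(classify(model, "randomization sequence was computer generated"))  # low
print(classify(model, "the randomization method was not described"))     # high
```

The real system works at a far larger scale and with a more sophisticated model, but the shape is the same: judgments that humans already recorded for another purpose become the training signal.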
As part of the paper the authors test their system, reporting:
“Automatic document judgments were of reasonable accuracy, lagging our estimate of human reviewer accuracy by around 7%.”
“Future methodological improvements are likely to close the gap between the algorithm and human performance, and may eventually be able to replace manual risk of bias assessment altogether.”
To use RobotReviewer you simply go to the website and upload a PDF of a trial, and you get a result like this:
On the left-hand side is the paper and on the right-hand side are the results. As you'll see, there are six risk of bias measures, e.g. Random Sequence Generation and Allocation Concealment. The eye symbol with the number three indicates that the system found three sentences pertinent to that potential bias. Clicking on each measure reveals more information, including the actual assessment:
In the above example it assesses the Allocation Concealment as having a low risk of bias and the Blinding Of Participants And Personnel as having high/unclear risk of bias.
While not quite perfect, it's remarkably good, and in the discussion section of the paper the authors consider its potential to support and streamline the systematic review process. Tools like this, as they're adopted, should transform the speed and cost of systematic (and rapid) review production.