“Perfect is the enemy of good” – Voltaire
Bottom line: Only consider a large-scale, resource-intensive systematic review if you believe it can add value over and above a rapid method.
I would describe my discovery of Value of Information (VoI or VOI) as a personal eureka moment! It gave me a theoretical framework to help explain my increasing certainty that things need to dramatically change in the world of systematic review production. In the course of this article I hope I can convince at least some of you that VoI is a key component of moving forward with a more nuanced and sophisticated approach to evidence synthesis.
But what is VoI?
“Value-of-information (VOI) methods determine the worth of acquiring extra information to help the decision-maker. From a decision analysis perspective, acquiring extra information is only useful if it has a significant probability of changing the decision-maker’s currently preferred strategy. The penalty of acquiring more information is usually valued as the cost of that extra information, and sometimes also the delay incurred in waiting for the information.” [1]
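The core quantity here is the expected value of perfect information (EVPI): how much better off, on average, a decision maker would be if all uncertainty could be resolved before deciding, compared with deciding on current evidence. Here is a minimal sketch in Python; the two strategies, the scenarios and all the probabilities and payoffs are hypothetical numbers invented purely for illustration.

```python
# Minimal expected-value-of-perfect-information (EVPI) sketch.
# Strategies, scenarios, probabilities and payoffs are all hypothetical.

# Payoff of each strategy under each possible "state of the world".
payoffs = {
    "treat":      {"drug_works": 100, "drug_fails": -50},
    "dont_treat": {"drug_works": 0,   "drug_fails": 0},
}
# Current beliefs about which state is true.
p = {"drug_works": 0.6, "drug_fails": 0.4}

# Decide NOW: pick the strategy with the best expected payoff.
def expected(strategy):
    return sum(p[s] * payoffs[strategy][s] for s in p)

value_now = max(expected(s) for s in payoffs)

# Decide with PERFECT information: in each state we would pick the
# best strategy for that state, so average the per-state maxima.
value_perfect = sum(p[s] * max(payoffs[st][s] for st in payoffs) for s in p)

evpi = value_perfect - value_now
print(f"Decide now: {value_now:.1f}, with perfect info: {value_perfect:.1f}, EVPI: {evpi:.1f}")
# Further research is only worth commissioning if its cost is below the EVPI.
```

EVPI is useful as an upper bound: no study, however rigorous, can be worth more than perfect information, so if even the EVPI is below the cost of a proposed review, the review cannot be good value.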
VoI has been used in the medical world, for instance in research prioritisation [2], in healthcare decision making [3] and even in deciding when to update Cochrane systematic reviews [4].
I am increasingly drawn to it as a ‘structure’ for discussing ‘rapid reviews’. While the formal application of VoI can be quite mathematical, I am mainly interested in the principles: it links the cost of acquiring information to the value of the decision support that information provides. Obtaining extra information often comes at significant cost, for example:
- Financial costs – acquiring information requires staff to search for and process it, plus database subscriptions, full-text copies and so on. The more information you acquire, the higher the costs.
- Opportunity costs – while staff are acquiring and processing the information they are not doing something else, and are potentially being kept from more useful work.
- Time costs – if a decision maker is waiting for an evidence synthesis before deciding, every extra month the synthesis takes delays the decision, and that delay can itself carry a significant cost.
These costs need to be weighed against the benefit the extra information brings. Does using the extra resource add value, or is it waste?
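To make those three cost lines concrete, here is a back-of-the-envelope tally in Python. Every figure is invented purely for illustration; in practice each line would need a local estimate.

```python
# Back-of-the-envelope costing of "going beyond a rapid review".
# All figures are hypothetical, for illustration only.

staff_months = 9                 # extra effort a full systematic review needs
cost_per_staff_month = 6_000     # salary, databases, full-text copies (GBP)
financial_cost = staff_months * cost_per_staff_month

opportunity_cost = 25_000        # value of the work those staff would otherwise do
delay_months = 6                 # extra calendar time the decision maker must wait
cost_of_delay_per_month = 10_000
delay_cost = delay_months * cost_of_delay_per_month

total_cost = financial_cost + opportunity_cost + delay_cost
expected_benefit = 80_000        # e.g. an EVPI-style estimate of the information's worth

print(f"Total cost of extra information: £{total_cost:,}")
print(f"Expected benefit:                £{expected_benefit:,}")
print("Worth it." if expected_benefit > total_cost else "Not worth it.")
```

Note that in this invented example the delay cost alone nearly matches the expected benefit, which is exactly the intuition behind preferring rapid methods when decisions are time-pressured.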
In the typical narrative of systematic reviews/evidence synthesis, much of the emphasis is on trying to acquire perfect information (another VoI concept) and little attention is paid to the costs of acquiring it. Much effort goes into removing ever smaller sources of bias with no eye on the additional methodological cost this brings. While bias is undesirable, so is waste. In healthcare, where resources are constrained, this looks like a glaring oversight. Do we always need to do a costly, Cochrane-style systematic review? Is it ethical to spend more resource than is necessary to support decision makers?
A clear conclusion is that VoI is context-dependent. In some situations the cost of acquiring additional information can be justified; in others, less so. In the case of the review of neuraminidase inhibitors (e.g. Tamiflu) carried out by Tom Jefferson et al under the auspices of Cochrane [5, 6], the UK’s decision to purchase Tamiflu, taken before Tom’s review was published, alone cost £500m [7] and was not based on robust evidence. As Ben Goldacre states [7]:
“Today we found out that Tamiflu doesn’t work so well after all. Roche, the drug company behind it, withheld vital information on its clinical trials for half a decade, but the Cochrane Collaboration, a global not-for-profit organisation of 14,000 academics, finally obtained all the information. Putting the evidence together, it has found that Tamiflu has little or no impact on complications of flu infection, such as pneumonia.
That is a scandal because the UK government spent £0.5bn stockpiling this drug in the hope that it would help prevent serious side-effects from flu infection.”
If we had had perfect (or near perfect) information before the decision was made to spend £500m we might have avoided such shocking waste. So, in hindsight, this scenario is an example of when the cost of acquiring (near) perfect information would have been of clear benefit. An important caveat, though, is that the standard Cochrane methodology was not sufficient: for his later reviews Tom ignored published journal articles and relied on regulatory data and clinical study reports. In this case the (near) perfect information came at a much higher price than a standard Cochrane systematic review.
So, how does this relate to rapid reviews? In ‘A Practical Guide to Value of Information Analysis’ [3] Wilson states:
“The expected value of a research project is the expected reduction in the probability of making the ‘wrong’ decision multiplied by the average consequence of being ‘wrong’ (the ‘opportunity loss’ of the decision, defined in Sect. 2.1 below). This is compared with the expected cost of the research project. If the expected value exceeds the (expected) cost then the project should be undertaken. If not, then the project should not be undertaken: the (expected) value of the resources consumed by the project exceeds the (expected) value of the information yielded.”
An obvious starting point is a rapid review, which gives an initial estimate of the ‘worth’/’value’ of an intervention. One can then weigh the cost of acquiring new data (via a fuller systematic review) against the potential benefit. If the extra work is unlikely to bring any benefit, it makes no sense to undertake it. And any benefit that is realised needs to be weighed against the likely cost: a small benefit for a major cost suggests the extra work is not worthwhile.
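Wilson’s rule translates directly into arithmetic. In the sketch below the probabilities of a ‘wrong’ decision, the opportunity loss and the cost of the fuller review are all hypothetical figures chosen only to show the mechanics.

```python
# Wilson's decision rule: expected value of a research project =
# (reduction in probability of a wrong decision) x (opportunity loss),
# compared with the project's expected cost. All numbers hypothetical.

def value_of_further_research(p_wrong_before, p_wrong_after, opportunity_loss):
    """Expected value of the extra research, per Wilson's formulation."""
    return (p_wrong_before - p_wrong_after) * opportunity_loss

p_wrong_after_rapid = 0.10    # chance the rapid review points us the wrong way
p_wrong_after_full = 0.04     # chance after a full systematic review
opportunity_loss = 2_000_000  # cost (GBP) of acting on the wrong answer

value = value_of_further_research(p_wrong_after_rapid, p_wrong_after_full,
                                  opportunity_loss)
cost_of_full_review = 150_000

print(f"Expected value of full review: £{value:,.0f}")
print(f"Expected cost of full review:  £{cost_of_full_review:,.0f}")
print("Commission it." if value > cost_of_full_review
      else "Stick with the rapid review.")
```

Notice how sensitive the answer is to the opportunity loss: at Tamiflu scale, with £500m at stake, even a tiny reduction in the probability of being wrong would justify a very expensive review. That is the context-dependence argued for above.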
So, what factors need to be considered when taking the decision to allocate more resource to a review? A few examples might be:
- Outcomes – are they appropriate? If the research is not focussing on useful patient or clinical outcomes then there is no benefit to pursuing extra information.
- Effect size – is it clinically significant? If the initial estimate suggests the intervention is unlikely to be clinically significant it is unlikely to alter clinical practice (all other things, such as cost and adverse events, being equal), so there is little value to be gained from additional resource/information.
- High NNT – will the reduction in uncertainty make much difference? If a rapid method produces an NNT range of 50-80, will narrowing that uncertainty to, say, an NNT of 65-70 make any appreciable difference? (NOTE: this is here as an example; I’m far from convinced a systematic review – in most cases – will make an appreciable difference in estimating effect size over rapid methods [see 9, 10]).
- Too early in the research cycle – are there enough trials to give a reliable decision? If there are only a limited number of small trials, substantial uncertainty will remain even if an extra trial or two were found using more robust methods [11, 12].
- Significance already demonstrated – is it known that the intervention works or doesn’t? If the review is to inform the planning of primary research and a rapid review clearly demonstrates significance what extra value could be generated by expending more resource?
- Ongoing clinical trials – are there any due? If a clinical trial is likely to report in the near future there is little to be gained from exerting appreciable effort as the result will be outdated almost immediately.
- Large-scale purchasing decisions – is accurate information needed to support a major commissioning decision, as in the case of Tamiflu?
Only if a clear benefit can be demonstrated should extra resource be used; the sketch below pulls these factors together into a single triage step.
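The factors above resist the clean arithmetic of EVPI, but they can still be written down explicitly. The sketch below encodes them as yes/no questions; the wording of the questions and the simple “all must point to benefit” rule are my own illustrative choices, not a validated instrument.

```python
# Toy triage checklist for "should we escalate a rapid review to a
# full systematic review?". The questions mirror the factors above;
# the all-or-nothing logic is a deliberate simplification.

ESCALATION_CHECKLIST = [
    "Does the research focus on useful patient or clinical outcomes?",
    "Is the estimated effect size likely to be clinically significant?",
    "Would reducing uncertainty (e.g. narrowing an NNT range) change practice?",
    "Are there enough trials for extra searching to reduce uncertainty?",
    "Is the intervention's (in)effectiveness still genuinely in question?",
    "Is the answer needed before any imminent trial reports?",
    "Would the result support a large-scale purchasing decision?",
]

def should_escalate(answers):
    """Escalate only if every factor points to a clear benefit."""
    return all(answers)

answers = [True, True, False, True, True, True, False]  # hypothetical review
for question, answer in zip(ESCALATION_CHECKLIST, answers):
    print(f"{'yes' if answer else 'no':>3}  {question}")
print("Escalate to full review." if should_escalate(answers)
      else "A rapid review is sufficient.")
```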
This approach, as well as offering greater value, means that more controlled trials can be brought into reviews, thanks to the lower cost of undertaking rapid reviews. It also moves us closer to Archie Cochrane’s wish that medicine produce a “critical summary, by speciality or subspeciality, adapted periodically, of all relevant randomised controlled trials” [13].
References
1. Value-of-information. VOSE Software, 2007.
2. Value of Information: A Tool to Improve Research Prioritization and Reduce Waste. Minelli C et al. PLoS Med 2015;12(9):e1001882.
3. A Practical Guide to Value of Information Analysis. Wilson ECF. PharmacoEconomics 2015;33(2):105-121.
4. Efficient Updating of Cochrane Reviews: Using Value of Information Analysis to Prioritise Reviews for Review. Wilson ECF. Cochrane UK & Ireland Symposium 2014.
5. Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Protocol 2011.
6. Neuraminidase inhibitors for preventing and treating influenza in adults and children. Jefferson T et al. Cochrane Database of Systematic Reviews 2014, Issue 4. Art. No.: CD008965.
7. What the Tamiflu saga tells us about drug trials and big pharma. Goldacre B. The Guardian, 10 April 2014.
8. Searching for unpublished data for Cochrane reviews: cross sectional study. Schroll JB et al. BMJ 2013;346:f2231.
9. McMaster Premium LiteratUre Service (PLUS) performed well for identifying new studies for updated Cochrane reviews. Hemens BJ et al. J Clin Epidemiol 2012;65(1):62-72.e1.
10. A pragmatic strategy for the review of clinical evidence. Sagliocca L et al. J Eval Clin Pract 2013. doi: 10.1111/jep.1202
11. Meta-analyses of small numbers of trials often agree with longer-term results. Herbison P et al. J Clin Epidemiol 2011;64(2):145-53.
12. Effect sizes in cumulative meta-analyses of mental health randomized trials evolved over time. Trikalinos TA et al. J Clin Epidemiol 2004;57(11):1124-30.
13. Cochrane AL. 1931-1971: A critical review, with particular reference to the medical profession. In: Teeling-Smith G, Wells N, editors. Medicines for the year 2000. London: Office of Health Economics; 1979. pp. 1-11.