Two papers, published two years apart, covering closely related territory:

Heuristics (from Wikipedia):

“A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution.”

In the context of rapid reviews, heuristics relate to the quickest way to find a ‘good’ answer in the absence of a resource-intensive, time-consuming systematic review. This article explores two linked papers that could serve as the basis of a reasonable heuristic.

The conclusion of the first study was:

“Single most precise trials provided similar estimates of effects to those of the meta-analyses to which they contributed, and statistically significant results are generally in agreement. However, “negative” results were less reliable, as may be expected from single underpowered trials. For systematic reviewers we suggest that: (1) key trial(s) in a review deserve greater attention (2) systematic reviewers should check agreement of the most precise trial and the meta analysis. For clinicians using trials we suggest that when a meta-analysis is not available, a focus on the most precise trial is reasonable provided it is adequately powered.”
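The caveat about “negative” results from underpowered trials can be illustrated with a quick simulation (my own sketch, not from either paper): even when a treatment has a genuine effect, a small trial frequently fails to reach statistical significance, so a single non-significant result is weak evidence of no effect. The effect size and sample sizes below are illustrative assumptions.

```python
import math
import random
import statistics

random.seed(1)

def power_estimate(n, effect=0.3, trials=2000):
    """Fraction of simulated two-arm trials (n patients per arm) that
    reach p < 0.05, given a true standardized effect size `effect`.
    Outcomes are normal with unit variance, so a z-test applies."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = math.sqrt(2 / n)  # standard error of the difference in means
        if abs(diff / se) > 1.96:
            hits += 1
    return hits / trials

# Small trials usually miss a real but modest effect; large trials catch it.
for n in (20, 50, 200):
    print(f"n={n} per arm: power ≈ {power_estimate(n):.2f}")
```

With a modest true effect, the small trial is “negative” most of the time, which is exactly why the first paper recommends trusting the single most precise trial only when it is adequately powered.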

The second study concludes:

“The conclusion of the first trial that the treatment is effective or harmful is mostly likely correct. A statistically significant trial agrees more often with its corresponding meta-analysis than a large trial. These findings imply that particularly in some urgent, life-saving or other critical circumstances for which no other effective methods are available, cautious recommendation based on the significant result of the first trial seems justifiable and could start use of an effective intervention by 5–8 years earlier.”

Both papers indicate that relying on a single trial, particularly one that is positive and statistically significant, can be a reasonable proxy for any subsequent meta-analysis. However, they differ in focus: the most precise (largest) trial versus the first significant trial.

A drawback of the first study, acknowledged by the authors, is the difficulty of identifying the best trial. Heuristics such as looking for trials in major journals, or for multi-centred trials, are likely to be problematic and are currently untested. However, identifying the first study can also be problematic (how do you know you’ve not missed an earlier study?). Linked to that problem was an interesting passage from the second paper:

“Furthermore, often what the physician has got in hand is the most recently published trial (which we call the last trial). Again, if the last trial conveniently available showed a statistically significant result, the conclusion that the treatment is effective or harmful is very likely to be as good as that from the meta-analysis that combines all the relevant studies. In such circumstances, the clinician should feel almost as confident as having a meta-analysis in hand, which will save the effort of finding the meta-analysis or more trials.”

An interesting distinction between the two studies was that the first study focussed on Cochrane systematic reviews, while the second study was not so selective – this could explain the different conclusions (whereby the authors of the second paper report “A statistically significant trial agrees more often with its corresponding meta-analysis than a large trial”). The authors of the second paper highlight that non-Cochrane reviews tend to have more trials, perhaps due to looser inclusion criteria in relation to poor trial quality.

So, where does that leave us?  A few reflections:

  • The research discussed in the two papers is aimed at supporting decision making, not speeding up systematic review methods.  But, in that context, it is trying to replicate the confidence one can get from a systematic review and meta-analysis with a single trial.
  • Is this a tool to prioritise systematic reviews?  If there are two suggested systematic review topics, one prioritisation criterion could be the degree of uncertainty.
  • I wonder how often any single significant trial disagrees with a meta-analysis.  If it does, what does that mean, and what might explain it?  For instance, the trial might be answering a slightly different question or using different inclusion criteria.
  • Single trials, particularly when large and significant, seem to be pretty good proxies for subsequent meta-analyses.

I have little doubt that the area of heuristics is an untapped opportunity, and I imagine this site will feature more articles on heuristics over time.  But if you have any ideas for other ‘short-cuts’, do let me know.

Conflict of Interest: I am one of the authors of the 2010 paper.


2 thoughts on “Heuristics”

  1. A good short cut would be to take more notice of any trial in a Cochrane systematic review with a low risk of bias rating across the board (14% of included studies achieve this). And if this unbiased trial is under-powered, repeat it with more power. If there are no included studies at low risk of bias, or no eligible studies at all, then write a protocol for an adequately powered trial designed to achieve low risk of bias. This would be a very constructive use of the vast methodological expertise of Cochrane contributors, and of course would mandate the use of the SPIRIT guidelines http://www.spirit-statement.org/

