Review Article
A critical review of methods used to determine the smallest worthwhile effect of interventions for low back pain

https://doi.org/10.1016/j.jclinepi.2011.06.018

Abstract

Objective

To critically and systematically review methods used to estimate the smallest worthwhile effect of interventions for nonspecific low back pain.

Study Design and Setting

A computerized search was conducted of MEDLINE, CINAHL, LILACS, and EMBASE up to May 2011. Studies were included if they were primary reports intended to measure the smallest worthwhile effect of a health intervention (although they did not need to use this terminology) for nonspecific low back pain.

Results

The search located 31 studies, which provided a total of 129 estimates of the smallest worthwhile effect. The estimates were given a variety of names, including the Minimum Clinically Important Difference, Minimum Important Difference, Minimum Worthwhile Reductions, and Minimum Important Change. Most estimates were obtained using anchor- or distribution-based methods. These methods are not (or not directly) based on patients’ perceptions, are not intervention-specific, and are not formulated in terms of differences in outcomes with and without intervention.

Conclusion

The methods used to estimate the smallest worthwhile effect of interventions for low back pain have important limitations. We recommend that the benefit–harm trade-off method be used to estimate the smallest worthwhile effects of intervention because it overcomes these limitations.

Introduction

What is new?

Key finding

  1. Most methods used to ascertain the smallest worthwhile effect of interventions for low back pain do not reflect patients' opinions, do not weigh the costs and benefits of intervention, and are not expressed in terms of differences in outcomes between intervention and control groups.

What is the implication and what should change now?
  1. The benefit–harm trade-off method should be used to elicit estimates of the smallest worthwhile effect of health interventions. The method could be used routinely to inform the design and interpretation of randomized trials.

Although there is a high degree of consensus about many aspects of how randomized trials should be conducted (e.g., [1]), several important methodological issues remain unresolved. One of the most persistent issues concerns how to estimate the smallest worthwhile effect of intervention [2].

The smallest worthwhile effect of an intervention is the smallest beneficial effect of intervention that justifies the costs, risks, and inconveniences of that intervention. It defines a threshold effect above which intervention might be indicated. There are at least two important uses of estimates of the smallest worthwhile effect in design and analysis of randomized trials. First, in the planning of randomized trials, information about the smallest worthwhile effect can be used to inform sample size calculations. Trials can, and arguably should, be powered to detect the smallest worthwhile effect of intervention. Second, once a trial has been completed, interpretation of the trial’s findings should involve consideration of whether the estimated effects of intervention are large enough to justify use of the intervention in clinical practice. In large part, this involves determining if the estimated effect of intervention exceeds the smallest worthwhile effect [3].
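To illustrate the first use, a sample size calculation can be anchored directly to the smallest worthwhile effect. The sketch below uses the standard two-sample normal approximation; the numbers (a smallest worthwhile effect of 20 points and a between-patient SD of 25 points on a 0–100 pain scale) are hypothetical and chosen only for illustration, not taken from this review.

```python
from math import ceil
from statistics import NormalDist


def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-group comparison of means (normal
    approximation), powered to detect a between-group difference
    `delta` -- here, the smallest worthwhile effect."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)


# Hypothetical values: smallest worthwhile effect of 20 points on a
# 0-100 pain scale, between-patient SD of 25 points.
n = sample_size_per_group(delta=20, sd=25)  # -> 25 per group
```

Powering the trial to a smaller, clinically unimportant difference would inflate the required sample size (halving `delta` roughly quadruples `n`), which is why the choice of smallest worthwhile effect matters at the design stage.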

In an important article published in 1989, Jaeschke et al. [4] defined the Minimum Clinically Important Difference as “the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient’s management.” They explained that their interest in this construct was motivated by the desire to evaluate the “clinical importance” of effects of interventions estimated in particular randomized trials. The article was significant because it was one of the first attempts to obtain empirical estimates of the smallest worthwhile (“clinically important”) effects of intervention.

Since the publication of Jaeschke et al.’s article, many reports have described measurement of the Minimum Clinically Important Difference. Other reports describe measurement of quantities with similar names, such as the Minimal Clinically Important Difference [5], Minimum Important Difference [6], Minimum Worthwhile Reductions [7], or Minimal Important Change [8], [9]. It is not always clear what construct these measurements are intended to capture. However, the authors of the reports often indicate that they are interested in identifying “clinically important” or “clinically meaningful” effects of intervention, suggesting that these estimates could be used for sample size calculations or to interpret the findings of clinical trials.

Barrett et al. [10] have carefully reviewed methods used to estimate the smallest worthwhile effect (or “clinical significance”) of interventions. They argued convincingly that such estimates must satisfy two conditions. First, decisions about what constitutes a worthwhile effect must involve weighing the benefits of the intervention against its costs, risks, and inconvenience. An important implication is that the smallest worthwhile effect must be intervention-specific. Thus the smallest worthwhile effect is not a property of the outcome measure. Second, judgments about whether the benefits of intervention outweigh costs, risks, and inconvenience must be based on the perspective of patients who are to receive the intervention. It will usually not be reasonable to claim that the effects of intervention are worthwhile unless the patient judges that the intervention is worthwhile. Therefore, judgments about whether the effect of a particular intervention is large enough to be worthwhile must be made by potential recipients of the intervention (patients), not by clinicians or researchers.

We would add one further criterion: if an estimate of the smallest worthwhile effect is to be used to inform the design and interpretation of clinical trials, it must be expressed in terms of an effect rather than an outcome [11], [12]. The effect of an intervention on an individual is the difference in outcomes that would occur with and without intervention (alternatively, the effect of an intervention could be the difference in outcomes that would occur with two competing interventions). It is a hypothetical value because individual patients do not simultaneously experience and not experience the intervention. This means that the precise effect of intervention on an individual cannot usually be known. Nonetheless, it is possible, in randomized trials, to estimate the mean effect of intervention because the difference in the mean outcomes of the intervention and control groups is equal to the mean effect of intervention [13], [14]. In contrast, treatment outcomes, or changes in outcome that occur over the course of treatment, do not provide a satisfactory measure of the effect of intervention: although they might be influenced by intervention, they might also be influenced by natural recovery, statistical regression, and placebo effects [15]. Thus, estimates of the smallest worthwhile effect of intervention must be conceived in terms of the hypothetical difference in outcomes with and without intervention, rather than in terms of outcomes or changes in outcome over the course of treatment. The same point has been made by researchers associated with the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) [11].
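The distinction between an effect and an outcome can be written compactly in standard counterfactual notation (a conventional formulation added here for illustration; the symbols are not from the article):

```latex
% Individual effect of intervention: the difference between the
% outcome with intervention, Y_i(1), and without it, Y_i(0)
\Delta_i = Y_i(1) - Y_i(0)

% Only one of Y_i(1), Y_i(0) is ever observed, so \Delta_i is
% unknowable; randomization nonetheless identifies the mean effect
% as the difference in group mean outcomes:
\mathrm{E}[\Delta] = \mathrm{E}[Y(1)] - \mathrm{E}[Y(0)]
```

A change score, by contrast, compares a patient's outcome only to their own baseline, so it mixes the effect of intervention with natural recovery and the other influences noted above.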

Our impression, before conducting this review, was that although many studies use language that suggests they are interested in measuring the smallest worthwhile effect of an intervention, few studies use methods that would enable them to do so. Specifically, our impression was that most such measurements are not directly based on patients’ perceptions, are not intervention-specific, and are not formulated in terms of effects of intervention. Consequently we conducted a systematic review to explore how the smallest worthwhile effect of interventions has been measured. We focused on research into low back pain because many relevant studies have been conducted in this field. We sought to determine whether estimates of the smallest worthwhile effect were based on the opinions of patients, were intervention-specific, and were expressed in terms of effects of intervention.

Data sources and searches

A computerized search was conducted of MEDLINE up to May 2011 and of CINAHL, LILACS, and EMBASE up to August 2010. The search strategies can be found in Appendix A. Reference lists were also screened. Searches were not restricted by language. Two reviewers independently assessed study eligibility and quality and extracted data. Standardized data extraction forms were used. Disagreement between reviewers was resolved by consensus or, where necessary, by a third reviewer.

Study selection

Studies were included in the review if they were primary reports intended to measure the smallest worthwhile effect of a health intervention (although they did not need to use this terminology) for nonspecific low back pain.

Search strategy

A total of 265 potentially relevant titles were identified in MEDLINE, 333 in EMBASE, 173 in CINAHL, and 39 in LILACS. After screening for eligibility and removing duplicates, a total of 29 studies [5], [7], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43] were included in the review. Two additional articles were included as a result of screening reference lists [8], [9], yielding a total of 31 included studies.

Discussion

More than half of the estimates of the smallest worthwhile effect of interventions for low back pain were obtained using the anchor-based approach developed by Guyatt et al. [46], Jaeschke et al. [4], Juniper and Guyatt [47], and Juniper et al. [48]. With the anchor-based approach, the smallest worthwhile effect (usually referred to as the Minimum Clinically Important Change) is determined by comparing changes in health-related outcome measures to a threshold global rating of change score.
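As an illustration of the anchor-based approach described above, the sketch below computes the mean change score among patients whose global rating of change equals a chosen anchor category (e.g., "slightly improved"). The data, scale, and threshold are hypothetical; published studies vary in how the anchor is dichotomized and in which summary statistic is taken.

```python
from statistics import mean


def anchor_based_mcid(change_scores, global_ratings,
                      threshold="slightly improved"):
    """Mean change in outcome among patients whose global rating of
    change equals the chosen anchor category -- one common (but not
    the only) anchor-based definition."""
    anchored = [c for c, g in zip(change_scores, global_ratings)
                if g == threshold]
    return mean(anchored)


# Hypothetical data: change in pain (0-10 scale) and self-rated
# global change for six patients.
changes = [3.0, 1.5, 0.5, 4.0, 2.0, 0.0]
ratings = ["much improved", "slightly improved", "unchanged",
           "much improved", "slightly improved", "unchanged"]
mcid = anchor_based_mcid(changes, ratings)  # mean of 1.5 and 2.0 -> 1.75
```

Note what the calculation does not involve: the intervention received, its costs and risks, or a comparison against outcomes without intervention, which is precisely the limitation the review identifies.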

Conclusion

A range of methods have been proposed for eliciting estimates of the smallest worthwhile effect of interventions for nonspecific low back pain. The most commonly used anchor-based and distribution-based methods are based on the opinions of researchers, do not account for the risks and costs of treatment, and rarely define effects of intervention in terms of the difference in outcome with and without intervention.

We recommend the use of the benefit–harm trade-off method for determining the smallest worthwhile effect of interventions for low back pain because it overcomes these limitations.

References (63)

  • A.G. Copay et al. Minimum clinically important difference in lumbar spine surgery patients: a choice of methods using the Oswestry Disability Index, Medical Outcomes Study questionnaire Short Form 36, and pain scales. Spine J (2008)
  • J.T. Farrar et al. Clinical importance of changes in chronic pain intensity measured on an 11-point numerical pain rating scale. Pain (2001)
  • M.L. Ferreira et al. People with low back pain typically need to feel ‘much better’ to consider intervention worthwhile: an observational study. Aust J Physiother (2009)
  • K. Jordan et al. A minimal clinically important difference was derived for the Roland-Morris Disability Questionnaire for low back pain. J Clin Epidemiol (2006)
  • R.W. Ostelo et al. 24-item Roland-Morris Disability Questionnaire was preferred out of six functional status questionnaires for post-lumbar disc surgery. J Clin Epidemiol (2004)
  • V.C. Oliveira et al. People with low back pain who have externalised beliefs need to see greater improvements in symptoms to consider exercises worthwhile: an observational study. Aust J Physiother (2009)
  • G.H. Guyatt et al. Measuring change over time: assessing the usefulness of evaluative instruments. J Chronic Dis (1987)
  • E.F. Juniper et al. Determining a minimal important change in a disease-specific quality of life questionnaire. J Clin Epidemiol (1994)
  • R.M. Bremnes et al. Cancer patients, doctors and nurses vary in their willingness to undertake cancer chemotherapy. Eur J Cancer (1995)
  • V. Duric et al. Patients’ preferences for adjuvant chemotherapy in early breast cancer: what makes AC and CMF worthwhile now? Ann Oncol (2005)
  • V.M. Duric et al. Comparing patients’ and their partners’ preferences for adjuvant chemotherapy in early breast cancer. Patient Educ Couns (2008)
  • C.D. Naylor et al. Can there be a more patient-centred approach to determining clinically important effect sizes for randomized treatment trials? J Clin Epidemiol (1994)
  • R.H. Dworkin et al. Interpreting the clinical importance of treatment outcomes in chronic pain clinical trials: IMMPACT recommendations. J Pain (2008)
  • International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use,...
  • R.J. Gatchel et al. Minimal clinically important difference. Spine (2010)
  • M.J. Yelland et al. Defining worthwhile and desired responses to treatment of chronic low back pain. Pain Med (2006)
  • B. Barrett et al. Sufficiently important difference: expanding the framework of clinical significance. Med Decis Making (2005)
  • J.J. Heckman. The scientific model of causality. Sociol Methodol (2005)
  • B. Barrett et al. Comparison of anchor-based and distributional approaches in estimating important difference in common cold. Qual Life Res (2008)
  • W. Chansirinukor et al. Comparison of the functional rating index and the 18-item Roland-Morris Disability Questionnaire: responsiveness and reliability. Spine (2005)