
Editorials

Absence of evidence is not evidence of absence

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7438.476 (Published 26 February 2004) Cite this as: BMJ 2004;328:476
Phil Alderson (palderson{at}cochrane.co.uk), associate director, UK Cochrane Centre, Oxford OX2 7LG

    We need to report uncertain results and do it clearly

    The title of this editorial is not new. For example, it was used nearly a decade ago for an article in the BMJ's Statistics Notes series.1 Altman and Bland considered the dangers of misinterpreting differences that do not reach significance, criticising use of the term “negative” to describe studies that had not found statistically significant differences. Such studies may not have been large enough to exclude important differences. To leave the impression that they have proved that no effect or no difference exists is misleading.

As an example, a randomised trial of behavioural and specific sexually transmitted infection interventions for reducing transmission of HIV-1 was published in the Lancet.2 The incidence rate ratios for the outcome of HIV-1 infection were 0.94 (95% confidence interval 0.60 to 1.45) and 1.00 (0.63 to 1.58) for the two intervention groups compared with control. In the abstract, the interpretation is: “The interventions we used were insufficient to reduce HIV-1 incidence…” But, looking again at the confidence intervals, the results in both treatment arms are compatible with a wide range of effects, from a 40% reduction in the incidence of HIV-1 to an increase of almost 60%. A summary that gives the impression that this study has shown these interventions to be incapable of reducing HIV-1 incidence is therefore misleading. What might be the implications for people at risk of HIV-1 infection? An intervention that does in fact protect against infection may not be widely used; equally, an intervention that actually harms people by increasing HIV-1 infection may come to be viewed as one with “no effect.” The truth of these situations can be established only by collecting more evidence, and statements implying that an intervention has no effect may discourage further studies by giving the impression that the question has been answered.
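To make the arithmetic behind that reading explicit, the short Python sketch below simply re-expresses a rate ratio and its confidence limits as percentage changes in incidence. The function name and the printed wording are illustrative choices, not anything taken from the trial report.

```python
# Illustrative only: re-expressing the incidence rate ratios and 95% confidence
# intervals quoted above as percentage changes in HIV-1 incidence, to show the
# range of effects with which each result is compatible.

def describe_rate_ratio(point, lower, upper):
    """Describe a rate ratio and its 95% confidence limits as percentage changes."""
    def as_percent(ratio):
        change = (ratio - 1) * 100
        if abs(change) < 0.5:
            return "no change"
        direction = "increase" if change > 0 else "reduction"
        return f"{abs(change):.0f}% {direction}"

    return (f"point estimate: {as_percent(point)}; "
            f"compatible with anything from a {as_percent(lower)} "
            f"to a {as_percent(upper)}")

# The two intervention arms versus control, as reported in the trial
print(describe_rate_ratio(0.94, 0.60, 1.45))  # 6% reduction; 40% reduction to 45% increase
print(describe_rate_ratio(1.00, 0.63, 1.58))  # no change; 37% reduction to 58% increase
```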

When is it reasonable to claim that a study has proved that no effect or no difference exists? The correct answer is “never,” because some uncertainty will always exist. However, we need some rules for deciding when we are fairly sure that we have excluded an important benefit or harm. This implies that a threshold must be decided, in advance, for what size of effect is clinically important in that situation. The concept is not new and is used in designing equivalence studies, which set out to show whether one intervention is as good as another.3 Thresholds, often called limits of equivalence, are set between which an effect is designated too small to be important. The results of studies of effectiveness, for example, can then be interpreted in relation to these thresholds. This is shown in the figure, where the confidence interval from a study is interpreted in the context of predefined limits of equivalence.

Figure 1: Relation between confidence interval, line of no effect, and thresholds for important differences (adapted from Armitage, Berry, and Matthews4)
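A minimal sketch of the decision logic that the figure illustrates is given below, assuming purely illustrative limits of equivalence of 0.8 and 1.25 for a rate ratio; the editorial does not propose specific values, so these limits and the function name are hypothetical.

```python
# A sketch of interpreting a confidence interval for a rate ratio against
# predefined limits of equivalence. The limits 0.8 and 1.25 are illustrative
# assumptions only; in practice they would be chosen, in advance, on clinical
# grounds.

def interpret(ci_lower, ci_upper, equiv_lower=0.8, equiv_upper=1.25):
    """Classify a rate ratio confidence interval against equivalence thresholds."""
    if equiv_lower <= ci_lower and ci_upper <= equiv_upper:
        return "any true effect is too small to be important"
    if ci_upper < equiv_lower:
        return "an important reduction has been shown"
    if ci_lower > equiv_upper:
        return "an important increase has been shown"
    return "inconclusive: compatible with both important and unimportant effects"

# The first intervention arm from the trial discussed above
print(interpret(0.60, 1.45))  # inconclusive: compatible with both important and unimportant effects
```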

    Of course, setting such thresholds is not straightforward. How big a reduction in the incidence of HIV-1 infection is important? How large an increase in incidence is important? Who should decide? How different should the thresholds be for different groups of patients and different outcomes? These are difficult questions, and although we may not be able to find easy answers to them, we can at least be more explicit in reporting what we have found in our research. Wording such as “our results are compatible with a decrease of this much or an increase of this much” would be more informative.

What can we do to help ensure that in another decade we will be closer to heeding the advice of Altman and Bland? Firstly, considering the results of a particular study in the context of all other research addressing the same question can increase statistical power, reduce uncertainty, and thus reduce the confusing reporting of underpowered studies. Such an approach might have clarified the implications of a recent study of passive smoking published in the BMJ.5 Secondly, researchers need to be precise in their interpretation and language, and avoid the temptation to save words by compressing the summary of a study to the point where its correct meaning is lost. Thirdly, journals need to be willing to publish uncertain results and thus reduce the pressure on researchers to report their results as definitive.6 We need to create a culture that is comfortable with estimating and discussing uncertainty.
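As a rough illustration of the first point, the sketch below pools several hypothetical rate ratios by inverse-variance weighting of their logarithms, a standard fixed-effect approach; all the study values are invented, and a real synthesis would need a systematic review and attention to heterogeneity.

```python
# A sketch of how combining studies narrows a confidence interval: fixed-effect,
# inverse-variance pooling of log rate ratios. All study values below are
# invented for illustration.
import math

def pool(studies):
    """Pooled rate ratio and 95% CI from (rate ratio, standard error of log ratio) pairs."""
    weights = [1 / se ** 2 for _, se in studies]
    log_pooled = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# Three hypothetical small studies, each inconclusive on its own
studies = [(0.94, 0.23), (0.88, 0.25), (0.91, 0.22)]
ratio, low, high = pool(studies)
print(f"pooled rate ratio {ratio:.2f} (95% CI {low:.2f} to {high:.2f})")
```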

    Acknowledgments

    I thank Iain Chalmers and Mike Clarke for comments on draft versions.

    Footnotes

    • Competing interests None declared.

    References
