Catalogue of bias: attrition bias
David Nunan¹, Jeffrey Aronson², Clare Bankhead¹

¹Nuffield Department of Primary Care Health Sciences, Centre for Evidence-Based Medicine, University of Oxford, Oxford, UK
²Centre for Evidence-Based Medicine, University of Oxford, Oxford, UK

Correspondence to Dr David Nunan, Nuffield Department of Primary Care Health Sciences, Centre for Evidence-Based Medicine, University of Oxford, Oxford OX2 6GG, UK; david.nunan{at}phc.ox.ac.uk

Background to attrition bias

Attrition bias refers to bias that arises from systematic differences in the way participants are lost from a study.

Two definitions from the Dictionary of Epidemiology 1 are useful here: attrition and attrition bias.

Attrition: ‘Reduction in the number of participants in a study as it progresses (ie, during follow-up of a cohort study or a randomized controlled trial). Losses may be due to withdrawals, dropouts, or protocol deviations’.

Attrition bias: ‘A type of selection bias due to systematic differences between study groups in the quantitative and qualitative characteristics of the processes of loss of their members during study conduct; that is, due to attrition among subjects in the study. Different rates of losses to follow-up in the exposure groups may change the characteristics of these groups irrespective of the studied exposure or intervention, or losses may be influenced by the positive or adverse effects of the exposures’.

Attrition is the loss of participants during a study, and it almost always occurs to some extent. When participants are lost, it may not be known whether they continued or discontinued the intervention, and there may be no outcome data for them after they drop out. If there are systematic differences between people who leave the study and those who continue, attrition bias can be introduced; it is a form of selection bias.

Those who leave a study are likely to differ from those who continue. For instance, in an intervention study of diet and depression, those who are more severely depressed may find it more difficult to adhere to a dietary regimen; in an observational study of quitting smoking, those who are unsuccessful may be more likely to leave the study.

Impact

In a longitudinal study of psychosocial factors among patients with cardiac conditions, those who completed the study differed in clinical and psychosocial features from those who dropped out before the study ended.2 Given the relationships between clinical and psychological factors and outcomes in people with cardiovascular conditions, it is likely that such attrition could bias the study’s results.

Random attrition reduces the amount of data available for analysis, but attrition bias arises when attrition is not random, that is, when there are differences between those who leave and those who remain in the study. Schulz and Grimes3 suggested that a loss to follow-up of 5% or less is unlikely to introduce bias, a loss of 20% or more gives concern about the possibility of bias, and a loss of between 5% and 20% might be a source of bias. However, Hewitt and coauthors4 point out that it is important to distinguish between overall attrition rates and whether those without follow-up data differ from those with data.
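
As a rough illustration, the sketch below (a minimal Python example with hypothetical counts) applies the thresholds described above to per-arm attrition rates; as Hewitt and coauthors emphasise, such overall rates say nothing about whether those lost differ from those retained.

```python
# Minimal sketch, using hypothetical counts, of per-arm attrition rates and
# the rough thresholds described above (Schulz and Grimes).

def attrition_rate(n_randomised, n_analysed):
    """Proportion of randomised participants lost before analysis."""
    return (n_randomised - n_analysed) / n_randomised

def interpret(rate):
    """Rule of thumb only: it says nothing about *who* was lost."""
    if rate <= 0.05:
        return "unlikely to introduce bias"
    if rate <= 0.20:
        return "might be a source of bias"
    return "gives concern about the possibility of bias"

# Hypothetical trial arms: (number randomised, number analysed)
arms = {"intervention": (250, 232), "control": (250, 198)}
for arm, (randomised, analysed) in arms.items():
    rate = attrition_rate(randomised, analysed)
    print(f"{arm}: {rate:.1%} lost to follow-up -> {interpret(rate)}")
```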

When losses to follow-up occur differentially across exposure groups and are associated with prognostic factors, the estimated effect of the exposure on the outcome will be biased, owing to attrition bias.
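
The toy simulation below illustrates the mechanism (it is not drawn from any of the cited studies): the intervention has no true effect, but participants with a poor prognosis drop out of the intervention arm more often, so the observed event rate is lower in that arm and the intervention appears beneficial.

```python
# Illustrative simulation only: identical true risks in both arms, but
# high-risk participants leave the intervention arm more often.
import random

random.seed(1)
N = 10_000  # participants per arm

def observed_event_rates(differential_attrition):
    rates = {}
    for arm in ("intervention", "control"):
        events, retained = 0, 0
        for _ in range(N):
            high_risk = random.random() < 0.5                        # prognostic factor
            event = random.random() < (0.30 if high_risk else 0.10)  # same true risk in both arms
            # Differential attrition: high-risk participants drop out of
            # the intervention arm with probability 0.40 instead of 0.05.
            p_drop = 0.40 if (differential_attrition and arm == "intervention" and high_risk) else 0.05
            if random.random() >= p_drop:
                retained += 1
                events += event
        rates[arm] = round(events / retained, 3)
    return rates

print("non-differential attrition:", observed_event_rates(False))
print("differential attrition:   ", observed_event_rates(True))
```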

In a meta-analysis of eight trials of tranexamic acid for upper gastrointestinal bleeding, mortality was lower in the intervention group than in the control group (relative risk (RR) 0.60, 95% CI 0.42 to 0.87; P=0.007; I²=0%). However, when trials at high or unclear risk of attrition bias were removed from the analysis, the association disappeared.5
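
For readers unfamiliar with how such summary figures arise, the short sketch below shows how a relative risk and its 95% confidence interval are calculated from event counts in two groups; the counts are invented and are not those of the meta-analysis.

```python
# Hypothetical counts only: relative risk and an approximate 95% CI
# calculated on the log scale.
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl):
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Approximate standard error of log(RR).
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

rr, lower, upper = relative_risk(events_tx=30, n_tx=500, events_ctrl=50, n_ctrl=500)
print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```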

Dumville and coworkers examined the impact of attrition bias in a trial of hip protectors for preventing hip fracture.6 7 They compared rates of loss to follow-up between the arms of the trial and examined how these related to the baseline characteristics of the participants. They found a small difference in attrition rates between the arms, and that the between-group differences in the characteristics of those lost to follow-up were greater than could be explained by chance differences at baseline. More people with poor or fair (as opposed to good) health and more people who had had a previous bone fracture were lost from the control group than from the intervention group.

Preventive steps

Every effort should be made to reduce attrition, and therefore attrition bias, but some attrition is likely to remain.

Over-recruitment beyond the numbers originally calculated may be helpful.8

When there is knowledge of systematic differences between study participants who leave and those who do not, efforts can be made to counteract them. Post hoc analytical strategies can help to reduce the impact of attrition bias. For example, Zethof and coworkers9 showed that sampling weights and tailored replenishment samples can help to compensate for its effects.
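
As a generic illustration of the sampling-weight idea (not the specific procedure used by Zethof and coworkers), the sketch below up-weights completers by the inverse of their estimated probability of completing follow-up, so that strata with heavy attrition are not under-represented in the analysis.

```python
# Generic sketch of sampling weights for attrition, with invented data.

# Hypothetical completers: (outcome, baseline stratum)
completers = [(1, "severe"), (1, "severe"), (0, "mild"), (0, "mild"), (0, "mild"), (1, "mild")]

# Estimated probability of completing follow-up by baseline stratum
# (in practice estimated from the data, eg with a logistic regression model).
p_complete = {"severe": 0.5, "mild": 0.9}

# Each completer stands in for 1 / p_complete members of the original sample.
weights = [1 / p_complete[stratum] for _, stratum in completers]

naive_rate = sum(y for y, _ in completers) / len(completers)
weighted_rate = sum(w * y for w, (y, _) in zip(weights, completers)) / sum(weights)

print(f"naive outcome rate: {naive_rate:.2f}")          # under-represents the severe stratum
print(f"attrition-weighted rate: {weighted_rate:.2f}")  # severe stratum restored to its share
```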

Discussion

Attrition bias is an important source of bias that can affect both observational and interventional studies. It is often suspected in circumstances in which there is insufficient information to deal with its effects.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.

Footnotes

  • Contributors All authors contributed equally to manuscript drafting and revision.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.