Equivalence of electronic and paper-and-pencil administration of patient-reported outcome measures: a meta-analytic review

Value Health. 2008 Mar-Apr;11(2):322-33. doi: 10.1111/j.1524-4733.2007.00231.x.

Abstract

Objectives: Patient-reported outcomes (PROs; self-report assessments) are increasingly important in evaluating medical care and treatment efficacy. Electronic administration of PROs via computer is becoming widespread. This article reviews the literature addressing whether computer-administered tests are equivalent to their paper-and-pencil forms.

Methods: Meta-analysis was used to synthesize 65 studies that directly assessed the equivalence of computer versus paper versions of PROs used in clinical trials. A total of 46 unique studies, evaluating 278 scales, provided sufficient detail to allow quantitative analysis.

Results: Among 233 direct comparisons, the mean difference between modes averaged 0.2% of the scale range (e.g., 0.02 points on a 10-point scale), and 93% of differences were within ±5% of the scale range. Among 207 correlation coefficients between paper and computer instruments (typically intraclass correlation coefficients), the average weighted correlation was 0.90, and 94% of correlations were at least 0.75. Because the cross-mode correlation (paper vs. computer) is also a test-retest correlation, and therefore subject to retest variability, we compared it with the within-mode (paper vs. paper) test-retest correlation. In the four comparisons that evaluated both, the average cross-mode paper-to-computer correlation was nearly identical to the within-mode correlation for readministration of a paper measure (0.88 vs. 0.91).
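
To make the reported metrics concrete, the short sketch below shows how a paper-versus-computer difference can be expressed as a percentage of the scale range and how study-level correlations might be pooled. It is an illustration only: the abstract does not state the authors' weighting scheme, so sample-size weighting of Fisher z-transformed correlations is assumed here, and all input numbers are hypothetical rather than study data.

    import math

    def mean_diff_percent_of_range(mean_paper, mean_computer, scale_min, scale_max):
        # Express the computer-minus-paper mean difference as a percentage of the scale range.
        return 100.0 * (mean_computer - mean_paper) / (scale_max - scale_min)

    def pooled_correlation(correlations, sample_sizes):
        # Assumed pooling method: sample-size-weighted average of Fisher z-transformed
        # correlations (weight n - 3), back-transformed to r. This is a common
        # meta-analytic choice, not necessarily the one used in the article.
        zs = [math.atanh(r) for r in correlations]
        weights = [n - 3 for n in sample_sizes]
        z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
        return math.tanh(z_bar)

    # Hypothetical example: a 0.02-point mode difference on a 0-10 scale is 0.2% of the
    # range, the same magnitude as the average difference reported in the abstract.
    print(round(mean_diff_percent_of_range(5.00, 5.02, 0, 10), 2))

    # Hypothetical pooling of three study-level cross-mode correlations, giving a
    # weighted average near 0.9.
    print(round(pooled_correlation([0.88, 0.92, 0.90], [60, 120, 80]), 3))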

Conclusions: Extensive evidence indicates that paper- and computer-administered PROs are equivalent.

Publication types

  • Meta-Analysis

MeSH terms

  • Computers, Handheld
  • Data Collection / methods*
  • Humans
  • Outcome Assessment, Health Care / methods*
  • Patient Satisfaction*
  • Statistics as Topic