Table 3

Test–retest reliability and agreement of the PRO algorithm between test 1 and test 2, overall and by method of administration

| PRO algorithm | n | Perfect agreement, % (95% CI) | Disagreement, improved status, % (95% CI) | Disagreement, worsening status, % (95% CI) | Kappa* (95% CI) |
| --- | --- | --- | --- | --- | --- |
| Pooled | 554 | 82 (78 to 85) | 7 (5 to 9) | 11 (9 to 14) | 0.67 (0.60 to 0.74) |
| Web–web | 166 | 87 (80 to 92) | 5 (2 to 9) | 8 (5 to 14) | 0.78 (0.67 to 0.86) |
| Paper–paper | 112 | 82 (74 to 89) | 8 (4 to 15) | 10 (5 to 17) | 0.69 (0.57 to 0.81) |
| Mixed† | 276 | 79 (74 to 84) | 8 (5 to 12) | 13 (9 to 18) | 0.59 (0.48 to 0.69) |
  • *Weighted Kappa with squared weights.

  • †Web–paper and paper–web.

  • PRO, patient-reported outcome.
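The kappa column above uses weighted kappa with squared (quadratic) weights, which penalizes disagreements between ordered categories in proportion to the squared distance between them. As an illustrative sketch only (the ratings and category coding here are invented, not the study's data), the statistic can be computed from paired test 1/test 2 ratings like so:

```python
def quadratic_weighted_kappa(r1, r2, k):
    """Weighted kappa with squared weights for k ordered categories coded
    0..k-1. r1 and r2 are the paired ratings from test 1 and test 2."""
    n = len(r1)
    # observed confusion matrix of test 1 vs test 2 ratings
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0
    # marginal category frequencies for each test occasion
    p1 = [r1.count(c) / n for c in range(k)]
    p2 = [r2.count(c) / n for c in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2  # squared-distance weight
            num += w * obs[i][j] / n         # observed weighted disagreement
            den += w * p1[i] * p2[j]         # chance-expected disagreement
    return 1.0 - num / den

# Identical ratings at both tests give perfect agreement, kappa = 1
print(quadratic_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))  # → 1.0
```

Kappa values near 1 indicate near-perfect test–retest agreement beyond chance; the pooled estimate of 0.67 in the table corresponds to substantial agreement on common interpretive scales.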