
How well do health professionals interpret diagnostic information? A systematic review
  1. Penny F Whiting1,2
  2. Clare Davenport3
  3. Catherine Jameson1
  4. Margaret Burke1
  5. Jonathan A C Sterne1
  6. Chris Hyde4
  7. Yoav Ben-Shlomo1

  1. School of Social and Community Medicine, University of Bristol, Bristol, UK
  2. The National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care West at University Hospitals Bristol NHS Foundation Trust
  3. Unit of Public Health, Epidemiology and Biostatistics, School of Health and Population Sciences, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, UK
  4. Peninsula Technology Assessment Group, Peninsula College of Medicine & Dentistry, Exeter, UK

  Correspondence to Dr Penny Whiting; penny.whiting{at}bristol.ac.uk

Abstract

Objective To evaluate whether clinicians differ in how they evaluate and interpret diagnostic test information.

Design Systematic review.

Data sources MEDLINE, EMBASE and PsycINFO from inception to September 2013; bibliographies of retrieved studies, contact with experts, and citation searches of key included studies.

Eligibility criteria for selecting studies Primary studies that provided information on the accuracy of any diagnostic test (eg, sensitivity, specificity, likelihood ratios) to health professionals and that reported outcomes relating to their understanding of information on, or the implications of, test accuracy.

Results We included 24 studies. 6 assessed ability to define accuracy metrics: health professionals were less likely to identify the correct definition of likelihood ratios than of sensitivity and specificity. Studies assessing Bayesian reasoning mostly examined the influence of a positive test result on the probability of disease; they generally found health professionals’ estimation of post-test probability to be poor, with a tendency to overestimation. 3 studies found that approaches based on likelihood ratios resulted in more accurate estimates of post-test probability than approaches based on estimates of sensitivity and specificity alone, while 3 found less accurate estimates. 5 studies found that presenting natural frequencies rather than probabilities improved post-test probability estimation and speed of calculations.

Conclusions Commonly used measures of test accuracy are poorly understood by health professionals. Reporting test accuracy using natural frequencies and visual aids may facilitate improved understanding and better estimation of the post-test probability of disease.
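
As a worked illustration of the quantities discussed in the abstract (the numbers are hypothetical and are not drawn from the included studies): suppose a test has sensitivity 90% and specificity 80%, and the pre-test probability of disease is 10%. The positive likelihood ratio is sensitivity/(1 − specificity) = 0.9/0.2 = 4.5. Converting the pre-test probability to odds (0.1/0.9 ≈ 0.11), multiplying by the likelihood ratio (0.11 × 4.5 = 0.5) and converting back to a probability (0.5/1.5) gives a post-test probability of about 33%. Expressed as natural frequencies, the same information reads: of 1000 patients, 100 have the disease and 90 of them test positive, while 180 of the 900 without the disease also test positive, so 90 of the 270 positive results (about 33%) come from patients who have the disease.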
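
The same conversion can also be sketched computationally. The minimal Python example below is an illustration only (the function name, parameters and example values are assumptions, not something reported in the review) of how sensitivity, specificity and prevalence translate into a natural-frequency statement about a positive result.

```python
# Illustrative sketch: convert test accuracy and prevalence into natural frequencies.

def natural_frequencies(prevalence, sensitivity, specificity, population=1000):
    """Counts of true and false positives in a hypothetical population,
    plus the post-test probability of disease given a positive result."""
    diseased = prevalence * population
    true_positives = sensitivity * diseased
    false_positives = (1 - specificity) * (population - diseased)
    post_test_probability = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, post_test_probability

# Example with assumed values: prevalence 10%, sensitivity 90%, specificity 80%.
tp, fp, ptp = natural_frequencies(0.10, 0.90, 0.80)
print(f"Of 1000 patients, about {tp:.0f} with the disease test positive and "
      f"{fp:.0f} without the disease test positive, so roughly {ptp:.0%} "
      f"of positive results indicate disease.")
```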

  • EPIDEMIOLOGY
  • MEDICAL EDUCATION & TRAINING
  • STATISTICS & RESEARCH METHODS

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/

