A systematic review finds that diagnostic reviews fail to incorporate quality despite available tools

J Clin Epidemiol. 2005 Jan;58(1):1-12. doi: 10.1016/j.jclinepi.2004.04.008.

Abstract

Background and objective: To review existing quality assessment tools for diagnostic accuracy studies and to examine the extent to which quality was assessed and incorporated into diagnostic systematic reviews.

Methods: Electronic databases were searched for tools to assess the quality of diagnostic accuracy studies and for guides to conducting, reporting, or interpreting such studies. The Database of Abstracts of Reviews of Effects (DARE; 1995-2001) was used to identify systematic reviews of diagnostic studies and to examine how they assessed the quality of primary studies.

Results: Ninety-one quality assessment tools were identified. Only two provided details of tool development, and only a small proportion gave any indication of the aspects of quality they aimed to assess. None of the tools had been systematically evaluated. We identified 114 systematic reviews, of which 58 (51%) had performed an explicit quality assessment; these were examined further. Most of these reviews used more than one method of incorporating quality.

Conclusion: Most tools for assessing the quality of diagnostic accuracy studies do not start from an explicit definition of quality, and none has been systematically evaluated. The majority of existing systematic reviews fail to take differences in study quality into account. Reviewers should consider quality as a possible source of heterogeneity.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review
  • Systematic Review

MeSH terms

  • Bias
  • Data Interpretation, Statistical
  • Diagnostic Tests, Routine / standards*
  • Humans
  • Quality Assurance, Health Care*
  • Research Design
  • Review Literature as Topic*