A simple and valid tool distinguished efficacy from effectiveness studies

J Clin Epidemiol. 2006 Oct;59(10):1040-8. doi: 10.1016/j.jclinepi.2006.01.011. Epub 2006 Aug 4.

Abstract

Objective: To propose and test a simple instrument based on seven criteria of study design to distinguish effectiveness (pragmatic) from efficacy (explanatory) trials.

Study design: Currently, no validated definition of effectiveness studies exists. We asked the directors of 12 Evidence-based Practice Centers to select six studies each: four that they considered examples of effectiveness trials and two that they considered efficacy studies. We then applied our proposed criteria to the selected studies, treating the directors' classifications as if they were a gold standard, to test construct validity.

Results: Because our rationale was to identify effectiveness studies reliably with minimal false positives (i.e., with high specificity), a cutoff of six of the seven criteria produced the most desirable balance between sensitivity and specificity, yielding a specificity of 0.83 and a sensitivity of 0.72.
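As a rough illustration of how such a criteria-count cutoff translates into these operating characteristics, the short Python sketch below scores hypothetical studies against seven yes/no design criteria and computes sensitivity and specificity against a reference classification. The scores, labels, and counts are invented for illustration and are not the study's data.

    # Illustrative sketch only: hypothetical criteria counts and reference labels.
    # Each tuple: (number of the seven design criteria met, reference label)
    # reference label: True = effectiveness trial, False = efficacy trial
    studies = [
        (7, True), (6, True), (6, True), (5, True), (4, True),
        (6, False), (3, False), (2, False), (5, False), (1, False),
    ]

    CUTOFF = 6  # classify as an effectiveness study if >= 6 criteria are met

    tp = sum(1 for score, is_eff in studies if score >= CUTOFF and is_eff)
    fn = sum(1 for score, is_eff in studies if score < CUTOFF and is_eff)
    tn = sum(1 for score, is_eff in studies if score < CUTOFF and not is_eff)
    fp = sum(1 for score, is_eff in studies if score >= CUTOFF and not is_eff)

    sensitivity = tp / (tp + fn)  # effectiveness studies correctly flagged
    specificity = tn / (tn + fp)  # efficacy studies correctly excluded

    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")

Raising the cutoff trades sensitivity for specificity, which is the balance the abstract describes when it favors minimizing false positives.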

Conclusion: When applied in a standardized manner, our proposed criteria can provide a valid and simple tool to distinguish effectiveness from efficacy studies. The applicability of systematic reviews can improve when analysts place more emphasis on the generalizability of included studies. Clinicians can also use our criteria to determine the external validity of individual studies for an appropriate population of interest.

Publication types

  • Research Support, U.S. Gov't, P.H.S.
  • Validation Study

MeSH terms

  • Drug Therapy*
  • Drug-Related Side Effects and Adverse Reactions
  • Evidence-Based Medicine / methods
  • Humans
  • Randomized Controlled Trials as Topic / methods
  • Research Design
  • Review Literature as Topic
  • Sensitivity and Specificity
  • Treatment Outcome*