How reliable is peer review? An examination of operating grant proposals simultaneously submitted to two similar peer review systems

J Clin Epidemiol. 1997 Nov;50(11):1189-95. doi: 10.1016/s0895-4356(97)00167-4.

Abstract

To determine the level of agreement and correlation between two similar but separate peer review systems, proposals simultaneously submitted during the same funding year to two agencies using the same scoring system were identified and analyzed (n = 248). There was a positive linear relationship between the scores of the two agencies (r = 0.592, p < 0.001). Raw agreement within whole-digit ranges was moderate (53%), but Cohen's kappa indicated that agreement beyond chance was only fair (kappa = 0.29; 95% CI: 0.198-0.382). When proposals were arbitrarily categorized as "clearly fundable" (score ≥ 3.0 on a 0-5 scale) or "not clearly fundable" (score < 3.0), raw agreement was 73% and agreement beyond chance was moderate (kappa = 0.444; 95% CI: 0.382-0.552). Where the agencies disagreed on a proposal's fundability, the difference between their scores was larger than where they agreed. In a subsample of 128 pairs, variables describing the application and the applicant (i.e., the principal investigator) were coded, but none explained inter-agency agreement on the "fundability" of proposals.
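
The agreement statistics reported above can in principle be reproduced from paired proposal scores. The following Python sketch is not from the paper and uses simulated scores and variable names of my own; it only illustrates the general approach: Pearson correlation on the raw scores, Cohen's kappa on whole-digit score bands, and raw agreement plus kappa after dichotomizing at the 3.0 "clearly fundable" threshold.

```python
# Minimal sketch (not from the paper): agreement statistics for paired
# proposal scores on a 0-5 scale. All score values here are simulated.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 248  # number of proposal pairs in the study

# Hypothetical paired scores from agency A and agency B (0-5 scale).
agency_a = np.clip(rng.normal(3.0, 0.8, n), 0, 5)
agency_b = np.clip(agency_a + rng.normal(0.0, 0.7, n), 0, 5)

# Pearson correlation between the two agencies' scores.
r, p = pearsonr(agency_a, agency_b)

# Agreement within whole-digit ranges: compare the integer part of each score.
bins_a = np.floor(agency_a).astype(int)
bins_b = np.floor(agency_b).astype(int)
raw_agreement_bins = np.mean(bins_a == bins_b)
kappa_bins = cohen_kappa_score(bins_a, bins_b)

# Dichotomize at 3.0: "clearly fundable" (>= 3.0) vs "not clearly fundable".
fundable_a = (agency_a >= 3.0).astype(int)
fundable_b = (agency_b >= 3.0).astype(int)
raw_agreement_fund = np.mean(fundable_a == fundable_b)
kappa_fund = cohen_kappa_score(fundable_a, fundable_b)

print(f"Pearson r = {r:.3f} (p = {p:.3g})")
print(f"Whole-digit agreement = {raw_agreement_bins:.0%}, kappa = {kappa_bins:.3f}")
print(f"Fundability agreement = {raw_agreement_fund:.0%}, kappa = {kappa_fund:.3f}")
```

On simulated data the exact values will of course differ from those reported in the study; the sketch only shows how raw agreement and chance-corrected agreement (kappa) answer different questions about the same pairs of scores.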

MeSH terms

  • Humans
  • Peer Review, Research / standards*
  • Reproducibility of Results
  • Research Support as Topic / standards*