Original Article
A Peer Review Intervention for Monitoring and Evaluating sites (PRIME) that improved randomized controlled trial conduct and performance

https://doi.org/10.1016/j.jclinepi.2010.10.003

Abstract

Objective

Good clinical practice (GCP) guidelines emphasize trial site monitoring, although the implementation is unspecified and evidence for benefit is sparse. We aimed to develop a site monitoring process using peer reviewers to improve staff training, site performance, data collection, and GCP compliance.

Study Design and Setting

The Peer Review Intervention for Monitoring and Evaluating sites (PRIME) team observed and gave feedback on trial recruitment and follow-up appointments, held staff meetings, and examined documentation during annual 2-day site visits. The intervention was evaluated in the ProtecT trial, a UK randomized controlled trial of localized prostate cancer treatments (ISRCTN20141297). The ProtecT coordinator and senior nurses conducted three monitoring rounds at eight sites (2004–2007). The process evaluation used PRIME report findings, trial databases, resource use, and a site nurse survey.

Results

Adverse findings decreased across all sites, from 44 in round 1 to 19 in round 3. Most findings related to protocol adherence or site organizational issues, and reviews prompted improvements in the application of eligibility criteria and in data collection. Staff found site monitoring acceptable and made changes after reviews.

Conclusion

The PRIME process used observation by peer reviewers to improve protocol adherence and train site staff, which increased trial performance and consistency.

Introduction

What is new?

  • Trial data are often verified against source data during site monitoring visits, although there are no standard monitoring methods and little evidence of benefit

  • In this study, annual site monitoring visits by peer reviewers (Peer Review Intervention for Monitoring and Evaluating sites [PRIME] process) focused on observing and improving trial conduct

  • Trial performance, compliance with good clinical practice (GCP) guidelines, data collection, and study cohesion were improved by the PRIME process, which should now be evaluated more widely in other trials

Pragmatic multicenter randomized controlled trials are recommended for the evaluation of health care interventions because they enhance recruitment and dissemination and increase the external validity of trial results. However, multiple sites generate logistical and scientific challenges for trial conduct, particularly in ensuring uniform adherence to the protocol [1], [2]. There is now a much greater emphasis on trials of comparative effectiveness, but quality control systems are less well established for these complex and difficult trial designs [3].

Clinical trials of investigational medicinal products (IMP) conducted in Canada, Japan, and the USA are regulated by the International Conference on Harmonisation (ICH) GCP guidelines, which are “a standard for the design, conduct, performance, monitoring, auditing, recording, analyses and reporting of clinical trials that provides assurance that the data and reported results are credible and accurate, and that the rights, integrity and confidentiality of trial subjects are protected” [4]. The ICH–GCP guidelines state that “in general there is a need for on-site monitoring, before, during and after the trial,” with the frequency and content to be determined by the trial design and complexity. IMP trials conducted in the European Union are regulated by the Clinical Trials Directives (2001/20/EC and 2005/28/EC), which are based on the ICH–GCP guidelines.

IMP trials designed for regulatory approval usually include site initiation visits by the sponsors and subsequent visits every 6–8 weeks until closure. Monitors predominantly verify trial case report forms (CRFs) against source medical records but may sometimes check site files, consent forms, or site progress [5]. Site monitoring for other trials ranges from systems comparable to those used in IMP regulatory trials to programs that select sites with inexperienced staff or data problems [6], [7]. The Veterans Affairs Cooperative Studies Program (VACSP) devised a Site Monitoring and Review Team (SMART) system for trials that it sponsored in the USA [8]. VACSP monitors completed a GCP-based checklist through examination of study documents over 1- to 2-day visits to every site once during a trial. GCP adherence scores for the most critical checklist items increased significantly in a follow-up phase of monitoring [8]. All US National Cancer Institute (NCI)–sponsored trials include monitoring of sites every 3 years by clinical investigators and data managers, with NCI observers present at up to 20% of reviews [9]. However, the value of intensive site monitoring for trials not evaluating IMPs was recently questioned at a conference aimed at improving trial conduct [3], [10], and a systematic review of on-site monitoring systems found little evidence of benefit to trial conduct or patients [11]. European trialists have recently proposed a risk-based approach that determines the frequency and intensity of on-site monitoring by assessing the potential risks to patients and data before the trial commences [12].

We aimed to devise a site monitoring system based on the observation of study conduct by peer reviewers, in addition to the more established assessment of study documentation and staff meetings. The peer review–based site monitoring process focused on GCP compliance, site conduct and performance, and staff training. This article describes the design of the site monitoring process and its initial evaluation in a large pragmatic randomized controlled trial.

Section snippets

ProtecT study

The ProtecT study is a randomized controlled trial of the effectiveness of treatments for clinically localized prostate cancer preceded by community-based prostate specific antigen (PSA) testing (ISRCTN20141297). Unselected men aged 50–69 years registered in randomly selected general practices in and around nine UK cities were invited to participate by letter (for details see Donovan et al. [13]). In brief, men attended recruitment clinics held in primary care where the risks and benefits of a

Results

The PRIME process reviewed eight trial sites over three rounds from 2004 to 2007. Evaluation of the PRIME process included the impact of site monitoring on protocol compliance, study performance, monitoring costs, and its acceptability to staff.

Discussion

The PRIME process was designed around annual peer review visits with a focus on the observation of trial appointments to ensure ongoing quality and consistency at clinical trial sites. The initial evaluation of the PRIME process in a large pragmatic trial was observational and showed that adverse review findings decreased markedly across all sites over three rounds. Site monitoring identified procedures that were then used more widely across the trial and rectified practices, which threatened

Acknowledgments

Department of Health disclaimer: The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the Department of Health. The authors would like to acknowledge the tremendous contribution of all members of the ProtecT study research group, including Christine Croker for arranging the reviews and those involved in this research: Prasad Bolinas, Debbie Cooper, Michael Davis, Andrew Double, Alan Doherty, Emma Elliott, David Gillett, Poppa Herbert, Joanne

References (31)

  • I. de Salis et al. Qualitative research to improve RCT recruitment: issues arising in establishing research collaborations. Contemp Clin Trials (2008)

  • L. Howard et al. Why is recruitment to trials difficult? An investigation into recruitment difficulties in an RCT of supported employment in patients with severe mental illness. Contemp Clin Trials (2009)

  • C. Warlow. Organise a multicentre trial. BMJ (1990)

  • M. Weinberger et al. Multisite randomized controlled trials in health services research: scientific challenges and operational issues. Med Care (2001)

  • L. Duley et al. Specific barriers to the conduct of randomized trials. Clin Trials (2008)

Conflict of interest: None declared.

Role of the funding source: The UK Department of Health funded the study through the UK NIHR Health Technology Assessment Program. The sponsor oversaw the conduct of the trial but had no role in the study design; the collection, analysis, and interpretation of the data; or the decision to submit the paper for publication.

Dr. Lane and Professors Donovan, Neal, and Hamdy had full access to all the data in the study and take final responsibility for the decision to submit for publication.
