
Original research
Methods and results used in the development of a consensus-driven extension to the Consolidated Standards of Reporting Trials (CONSORT) statement for trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE)
  1. Mahrukh Imran1,
  2. Linda Kwakkenbos2,
  3. Stephen J McCall3,4,5,
  4. Kimberly A McCord6,
  5. Ole Fröbert7,
  6. Lars G Hemkens6,
  7. Merrick Zwarenstein8,9,
  8. Clare Relton10,
  9. Danielle B Rice1,11,
  10. Sinéad M Langan12,
  11. Eric I Benchimol13,14,15,
  12. Lehana Thabane16,
  13. Marion K Campbell17,
  14. Margaret Sampson18,
  15. David Erlinge19,
  16. Helena M Verkooijen20,21,
  17. David Moher22,
  18. Isabelle Boutron23,24,25,
  19. Philippe Ravaud23,24,25,
  20. Jon Nicholl26,
  21. Rudolf Uher27,
  22. Maureen Sauvé28,29,
  23. John Fletcher30,
  24. David Torgerson31,
  25. Chris Gale32,
  26. Edmund Juszczak3,33,
  27. Brett D Thombs1,34
  1. 1Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Québec, Canada
  2. 2Behavioural Science Institute, Clinical Psychology, Radboud University, Nijmegen, Netherlands
  3. 3National Perinatal Epidemiology Unit Clinical Trials Unit, Nuffield Department of Population Health, University of Oxford, Oxford, UK
  4. 4Center for Research on Population and Health, Faculty of Health Sciences, American University of Beirut, Ras Beirut, Lebanon
  5. 5Institute of Applied Health Sciences, School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
  6. 6Basel Institute for Clinical Epidemiology and Biostatistics, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
  7. 7Faculty of Health, Department of Cardiology, Örebro University, Örebro, Sweden
  8. 8Department of Family Medicine, Western University, London, Ontario, Canada
  9. 9IC/ES Western, London, Ontario, Canada
  10. 10Centre for Clinical Trials and Methodology, Barts Institute of Population Health Science, Queen Mary University, London, UK
  11. 11Department of Psychology, McGill University, Montreal, Quebec, Canada
  12. 12Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
  13. 13Department of Pediatrics and School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
  14. 14ICES uOttawa, Ottawa, Ontario, Canada
  15. 15Children's Hospital of Eastern Ontario Research Institute, Ottawa, Ontario, Canada
  16. 16Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  17. 17Health Services Research Unit, University of Aberdeen, Aberdeen, UK
  18. 18Library Services, Children’s Hospital of Eastern Ontario, Ottawa, Ontario, Canada
  19. 19Department of Cardiology, Clinical Sciences, Lund University, Lund, Sweden
  20. 20University Medical Center Utrecht, Utrecht, Netherlands
  21. 21University of Utrecht, Utrecht, Netherlands
  22. 22Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  23. 23INSERM, Paris, France
  24. 24Centre d’Épidémiologie Clinique, Hôpital Hôtel Dieu, Assistance Publique–Hôpitaux de Paris, Paris, France
  25. 25Faculté de Médecine, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
  26. 26School of Health and Related Research, University of Sheffield, Sheffield, UK
  27. 27Department of Psychiatry, Dalhousie University, Halifax, Nova Scotia, Canada
  28. 28Scleroderma Society of Ontario, Hamilton, Ontario, Canada
  29. 29Scleroderma Canada, Hamilton, Ontario, Canada
  30. 30British Medical Journal, London, UK
  31. 31York Trials Unit, Department of Health Sciences, University of York, York, UK
  32. 32Neonatal Medicine, School of Public Health, Faculty of Medicine, Imperial College London, London, UK
  33. 33Nottingham Clinical Trials Unit, University of Nottingham, Nottingham, UK
  34. 34Departments of Psychiatry; Epidemiology, Biostatistics and Occupational Health; Medicine; and Educational and Counselling Psychology; and Biomedical Ethics Unit, McGill University, Montreal, Quebec, Canada
  1. Correspondence to Dr Brett D Thombs; brett.thombs@mcgill.ca

Abstract

Objectives Randomised controlled trials conducted using cohorts and routinely collected data, including registries, electronic health records and administrative databases, are increasingly used in healthcare intervention research. A Consolidated Standards of Reporting Trials (CONSORT) statement extension for trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE) has been developed with the goal of improving reporting quality. This article describes the processes and methods used to develop the extension and decisions made to arrive at the final checklist.

Methods The development process involved five stages: (1) identification of the need for a reporting guideline and project launch; (2) conduct of a scoping review to identify possible modifications to CONSORT 2010 checklist items and possible new extension items; (3) a three-round modified Delphi study involving key stakeholders to gather feedback on the checklist; (4) a consensus meeting to finalise items to be included in the extension, followed by stakeholder piloting of the checklist; and (5) publication, dissemination and implementation of the final checklist.

Results 27 items were initially developed and rated in Delphi round 1, 13 items were rated in round 2 and 11 items were rated in round 3. Response rates for the Delphi study were 92 of 125 (74%) invited participants in round 1, 77 of 92 (84%) round 1 completers in round 2 and 62 of 77 (81%) round 2 completers in round 3. Twenty-seven members of the project team representing a variety of stakeholder groups attended the in-person consensus meeting. The final checklist includes five new items and eight modified items. The extension Explanation & Elaboration document further clarifies aspects that are important to report.

Conclusion Uptake of CONSORT-ROUTINE and accompanying Explanation & Elaboration document will improve conduct of trials, as well as the transparency and completeness of reporting of trials conducted using cohorts and routinely collected data.

  • statistics & research methods
  • general medicine (see internal medicine)
  • clinical trials

Data availability statement

All data relevant to the study are included in the article or uploaded as supplemental information.


This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.


Strengths and limitations of this study

  • We followed a five-step process to develop the Consolidated Standards of Reporting Trials (CONSORT) statement extension for trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE), consistent with Enhancing the QUAlity and Transparency Of health Research guidance.

  • Items were informed by reporting guidelines on similar research designs, a scoping review, a three-round Delphi process and expert members of the guideline development team.

  • CONSORT-ROUTINE was reviewed and tested at various stages of the development by project team members and key stakeholders.

  • The scarcity of methodological literature on trials conducted using cohorts and routinely collected data was a limitation in developing the extension.

  • Similar to other reporting guidelines, CONSORT-ROUTINE will require re-evaluation and revisions over time to ensure that it is kept up to date with evolving methodology and practice of trials using cohorts and routinely collected data.

Background

The use of reporting guidelines, including the Consolidated Standards of Reporting Trials (CONSORT) statement, improves the transparency and completeness of reports of results from randomised controlled trials (RCTs).1–4 The CONSORT statement helps to facilitate critical appraisal and interpretation of RCTs by providing guidance to authors on a minimal set of items that should be reported for all trials.5 The CONSORT 2010 guideline aimed to improve the reporting of two-arm parallel group RCTs. Extensions of the CONSORT statement have been developed to encourage better reporting of other trial designs, including, for instance, multiarm parallel group randomised trials, cluster trials, pilot and feasibility trials and pragmatic trials.6–9

There is a growing interest in RCTs conducted using cohorts or routinely collected data, including registries, electronic health records (EHRs) and administrative databases.10–14 In a cohort, a group of individuals is gathered for the purpose of conducting research, whereas routinely collected data refer to data initially collected for purposes other than research or without specific a priori research questions developed before collection.15 16 Trials may use a cohort or routinely collected data for (1) identification of eligible participants, (2) outcome ascertainment, (3) implementation of an intervention or a combination of these purposes. For example, in registry-based RCTs, a registry could be used to identify eligible participants for a trial, for the collection of participant baseline characteristics and as the source of outcome data; some registries have used interactive technology to actively flag participants for RCT enrolment as patient data are entered into the registry.12 In some EHR trials, the EHR itself is used to implement an intervention. For example, one RCT tested an intervention to reduce antibiotic prescribing by feeding back personalised antibiotic prescription data to primary care physicians.17

The use of cohorts and routinely collected data may make RCTs easier and more feasible to perform by reducing cost, time and other resources.18 19 It may also facilitate the conduct of trials that more closely replicate real-world clinical practice. These trial designs, however, are relatively recent innovations, and published RCT reports may not describe important aspects of their methodology in a standardised way. Trials conducted using cohorts and routinely collected data share certain elements with conventional RCTs, but there are also distinctive elements to report that are not covered in the CONSORT 2010 statement. The REporting of studies Conducted using Observational Routinely-collected Data (RECORD) statement provides guidance on reporting of studies conducted using routinely collected data but does not address RCT-specific methodological and reporting considerations.20 Research conducted using routinely collected data presents unique methodological challenges that are often insufficiently reported, but there is scant guidance on methods and reporting of trials conducted using routinely collected data or cohorts.21 22

An extension to the CONSORT statement for RCTs conducted using cohorts and routinely collected data was developed using methods recommended for developing reporting guidelines.23 This article describes, in detail, the consensus-based development process. The main aims of this article are to: (1) describe the methods and processes used in the development of the CONSORT Extension for Trials Conducted Using Cohorts and Routinely Collected Data (CONSORT-ROUTINE)24 and (2) describe decisions made to arrive at the final checklist and the accompanying Explanation & Elaboration statement.

Methods

The project was registered with the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network.25 We followed the EQUATOR network’s guidelines for recommended methods and processes for developing, disseminating and implementing healthcare reporting guidelines.23 These methods have been used in the development of other similar EQUATOR guidelines. Figure 1 illustrates the five parts of the development process for this guideline.

Project phase 1: project launch, establishment of team and funding

Figure 1

Development process of the CONSORT Extension for Trials Conducted Using Cohorts and Routinely Collected Data (CONSORT-ROUTINE).

Need for the guideline and literature review

An initial informal review by BDT and LK of published protocols and reports of trials using cohorts and routinely collected data suggested deficiencies in the reporting of such trials. For instance, many reports did not adequately describe the cohort or database from which trial participants were recruited, processes used to link participants across databases were not always described and it was sometimes unclear whether trial outcomes were assessed by the triallists or ascertained via existing databases used to conduct the trial. A review of the EQUATOR website and published literature indicated that there was no existing reporting guideline for these types of trials. The RECORD statement addresses reporting issues related to routinely collected data but does not include guidance on reporting of trials. Many trials conducted using routinely collected data are pragmatic or use cluster designs, for instance, but CONSORT extensions for those types of trials do not address issues germane to the use of cohorts or routinely collected data to conduct trials.7 9

Project launch and identification of CONSORT-ROUTINE project members

Initial discussions on developing a CONSORT extension for RCTs conducted using cohorts occurred in November 2016 at the Trials within Cohorts symposium in London, UK (LK, MZ, CR and BDT).26 Discussions continued virtually, and key people involved in cohort-embedded trials or the EQUATOR network were approached during December 2016 (HMV, DM, IB, PR, JN, RU and DT). It was suggested that trials conducted in registries had many characteristics similar to those in cohorts, and there was agreement to include registry-based trials in the extension. People with expertise in registry-based trials were approached in March 2017 (OF, LT, MKC and DE), and an experienced librarian (MSam) and a patient representative familiar with trials conducted using cohorts (MSau) were also included in the group at that point.

The project was registered on the EQUATOR website in April 2017. During the preparatory phase, while developing searches and reviewing example publications, we became aware that trials conducted using EHRs and administrative databases also shared similar characteristics with trials in cohorts and registries, and it was decided to expand the scope to trials conducted using cohorts and routinely collected data. In July 2017, triallists who were leading the development of a reporting guideline for EHRs joined the project group (EJ and CG). Given the relevance of their previous work and their expertise, authors who had been involved in the development of the RECORD statement were invited to join the team (LH, SL, DM and EIB).20 Several doctoral students also joined the project team (SJM, KAM and DBR). A steering committee comprising 10 members with key expertise for consultation was established. A research coordinator (MI) was hired in April 2018 to manage the project, and an experienced journal editor was invited to join (JF). The group communicated regularly throughout the process via virtual meetings held on an online teleconferencing platform and through email discussions.

Rationale for developing one checklist versus four different checklists for trials conducted using cohorts, registries, EHRs and administrative databases

Team members discussed the advantages and disadvantages of creating individual checklists for each of the four types of data versus a single checklist for all four. It was determined that, although there are some differences in the implementation of trials across the different types of data sources, the methodological principles are similar, and there is substantial overlap in the design, conduct and factors that may influence interpretability. Thus, the steering committee reached consensus to develop a single statement, addressing any differences by including ‘if applicable’ to items in the checklist that may not apply to all trial designs and to clarify differences in the Explanation & Elaboration publication as deemed necessary.

Funding

The project team obtained its main source of funding from a grant from the Canadian Institutes of Health Research (CIHR) to support the development of the guideline (BDT, OF, EJ, LK, CR; Grant #PJT-156172). EJ and CG also obtained funding from the UK National Institute for Health Research (NIHR) Clinical Trials Unit Support Funding – Supporting efficient/innovative delivery of NIHR research. In addition, funding to hold the face-to-face meeting was provided by a Planning and Dissemination Grant from CIHR (BDT and LK; Grant #PCS-161863) and by contributions from Queen Mary University of London, the University of Sheffield, McGill University and the Lady Davis Institute for Medical Research of the Jewish General Hospital in Montreal, Canada.

A project protocol was developed and published.22

Project phase 2: scoping review

A preliminary ‘long list’ of possible reporting items was formulated by LK and KAM based on review of the CONSORT 2010 statement items, the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)27 and the RECORD statements,20 as well as discussions with steering committee members. The STROBE and RECORD statements were considered the most relevant to this project because of their focus on reporting of observational studies and non-interventional studies using routinely collected data.

A scoping review was conducted to identify: (1) articles on the methodology or reporting of RCTs conducted using cohorts or routinely collected data that could inform the development of new items or modification of existing CONSORT items; and (2) trial reports to identify aspects of reporting that need improvement and examples of good reporting of potential checklist items that could be used to support CONSORT-ROUTINE.28 We searched for relevant articles on trials conducted using cohorts, registries, EHRs and administrative databases from 2007 to 2018. After articles were screened for inclusion and exclusion at the abstract and full-text level, 10 people from the team independently reviewed the included papers and provided suggestions for modifications or additional reporting guideline items until no new ideas emerged (saturation). Suggestions were recorded in a standardised, shared spreadsheet. At the same time, team members provided examples of good reporting for each proposed item or item modification. Additionally, the review helped us to create a list of authors with experience in these trial designs as potential participants for the Delphi study. Search terms used in the scoping review are shown in online supplemental file 1.

Project phase 3: Delphi study

The objectives of our Delphi study were: (A) to obtain feedback on the importance of including each candidate item in CONSORT-ROUTINE; (B) to improve the wording of items considered important; and (C) to elicit suggestions for additional items not in the existing list. We aimed to engage key stakeholders across different sectors and backgrounds. There are no fixed guidelines on the sample size of Delphi studies, and the ideal number of participants may depend on the complexity of the topic, the likely heterogeneity of relevant experiences and viewpoints, and the resources available to manage the data generated.29–31 Many studies use small groups of experts (eg, <20), but we believed that a larger group with diverse expertise would best complement the knowledge of the project team. Thus, we sent invitations to reporting guideline developers (including those involved in previous CONSORT extensions), funders, journal editors, patient representatives, trial methodologists, epidemiologists, meta-research authors, ethicists, biostatisticians and clinical triallists who were identified by members of the project team. We also encouraged recipients to forward the invitation to other potentially interested stakeholders.

The Delphi surveys were built and hosted on the Qualtrics online survey platform. During registration, we gathered demographic and professional background characteristics of participants, including geographical location, self-identified stakeholder group (eg, clinical trials user, clinical triallist and methodologist), employment sector, years of experience in trials research and research experience in trials conducted using cohorts or routinely collected data.

Registered participants received a link to access each of the three rounds of the Delphi survey. In each round, we asked participants to rate the importance of each suggested reporting item, indicating how essential it is for reporting on a 1–5 Likert scale (1=not essential; 5=essential). There is no consensus on the ideal number of Likert categories or groupings for decision-making, but it is common to use scales of between 4 and 7 points.30

Responses were categorised as follows:

1–2=low score (item should not be part of CONSORT-ROUTINE checklist).

3=moderate (item should be discussed).

4–5=high score (item should be part of CONSORT-ROUTINE checklist).

Participants also had the option to select ‘Not my expertise’ if they believed that they did not have the appropriate expertise to rate an item. Figure 2 shows a screenshot of an example proposed modification item from the survey.

Items from the CONSORT 2010 statement for which modifications were not initially proposed were also included in the survey so that participants could provide comments or recommend modifications to these items. For all items (proposed modifications and CONSORT 2010 items), participants could give open-ended feedback using free-text boxes at the bottom of each survey page and at the end of the survey, where they were also asked to suggest any additional items that they believed would be important for reporting in trials conducted using cohorts and routinely collected data but that had not been included in the proposed set of new and modified items.

Figure 2

Example of a round 1 Delphi survey item as presented in the online survey. CONSORT, Consolidated Standards of Reporting Trials.

We launched round 1 of the survey on 4 February 2019, with 2 weeks to provide responses. Round 2 was launched on 4 March 2019, and round 3 was launched on 1 April 2019. After each round, the Qualtrics built-in analysis software was used to generate a distribution of scores and to aggregate group results for each item (mean score, maximum and minimum score, SD, variance and the percentage of responders selecting each rating from 1 to 5), and summary statistics were circulated among all participants. Individual responses were not fed back. In addition, a bar chart with the ratings and counts for each item was created. Following each round of the survey, the CONSORT-ROUTINE steering committee members reviewed the survey results independently and then met via teleconference to discuss and analyse them. During these meetings, decisions were made on how to address comments from participants by modifying, adding or combining items. Notes were also made on comments that reflected a need for explanation in the Explanation & Elaboration companion to the checklist.
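As a rough illustration of the per-item aggregation described above, the following minimal Python sketch computes the summary statistics named in the text from a list of 1–5 ratings. It is a hypothetical example, not the project's actual Qualtrics analysis; the function name and input format are assumptions.

```python
from collections import Counter
from statistics import mean, pstdev, pvariance

def summarise_item(responses):
    """Summarise one Delphi item (hypothetical sketch of the aggregation described in the text)."""
    ratings = [r for r in responses if isinstance(r, int)]  # drop 'Not my expertise' responses
    counts = Counter(ratings)
    n = len(ratings)
    return {
        "n": n,
        "mean": mean(ratings),
        "min": min(ratings),
        "max": max(ratings),
        "sd": pstdev(ratings),
        "variance": pvariance(ratings),
        # percentage of responders selecting each rating from 1 to 5
        "pct_per_rating": {k: 100 * counts.get(k, 0) / n for k in range(1, 6)},
    }

# Made-up ratings for a single item from seven participants
print(summarise_item([5, 4, 4, 3, "Not my expertise", 5, 2]))
```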

We predefined consensus as at least two-thirds of responders rating the importance of an item as ‘high’ (a rating of 4 or 5). Items that reached consensus for inclusion were not rated again in the next round. For some items that did not reach consensus, the wording was revised based on participants’ suggestions; items that did not reach consensus were rated again in the next round in their original or revised form. Reports summarising the Delphi results, including summary statistics such as counts, means, SDs and variances for the responses on each item, were circulated after each round. Reminder emails were sent 1 week before each deadline, and extensions were provided on request in all three rounds to maximise participation.
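To make the decision rule concrete, the sketch below applies the rating categories listed earlier and checks the two-thirds threshold. It is illustrative only, assuming that ‘high’ corresponds to a rating of 4 or 5, and the function names are hypothetical rather than taken from the project.

```python
def categorise(rating):
    """Map a 1-5 rating to the categories used in the Delphi study."""
    if rating <= 2:
        return "low"       # item should not be part of the checklist
    if rating == 3:
        return "moderate"  # item should be discussed
    return "high"          # item should be part of the checklist

def reaches_consensus(ratings, threshold=2 / 3):
    """Consensus: at least two-thirds of responders rate the item 'high' (4 or 5)."""
    numeric = [r for r in ratings if isinstance(r, int)]  # 'Not my expertise' excluded
    high = sum(1 for r in numeric if categorise(r) == "high")
    return high / len(numeric) >= threshold

# Items reaching consensus are retained and not rated again in the next round;
# the remainder are revised where suggested and carried forward.
print(reaches_consensus([5, 4, 4, 3, 5, 2, 4]))  # True: 5 of 7 ratings are 'high'
```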

Because the Delphi study was advisory, all items were reviewed and vetted again at the in-person consensus meeting, and comments provided by Delphi participants were taken into consideration when deciding whether to include or exclude items.

Project phase 4: in-person consensus meeting and development of checklist publication

A 2-day in-person consensus meeting was held on 13–14 May 2019 in London, UK. The purpose of the meeting was to discuss the Delphi results, make decisions on items to retain in the final checklist, make any necessary modifications to items and suggest reporting aspects that should be addressed in the Explanation & Elaboration documentation supporting the checklist. The meeting was attended by 26 members of the CONSORT-ROUTINE Group.

We used approaches similar to those used in previous consensus meetings for other guidelines. Participants were provided with the results of the initial long-list generation and the Delphi study in advance of the meeting. At the meeting, steering committee members first presented the background and an update on work done to date in order to facilitate the discussions. Session chairs then separately presented items from the preliminary checklist, results of the Delphi study and feedback from stakeholders, after which the group discussed the items in an open forum. Decisions on items to be modified or added were based on the following criteria: (1) whether they addressed elements unique to trials conducted using cohorts or routinely collected data versus elements applicable to any trial and (2) whether they reflected information that should be included in a minimum set of reporting items. Notes were taken, and the discussions were audio-recorded to ensure that the content was accurately captured.

Following the consensus meeting, refinement of the content and wording of the items was continued through online group discussions with CONSORT-ROUTINE project team members. The initial version of the checklist was pilot-tested by circulating it among stakeholders in order to assess its usability and to identify any challenges that might arise while applying the checklist. Pilot-testing the checklist also provided insight into issues that should be addressed in detail in the Explanation & Elaboration statement.

Project phase 5: publication, dissemination and implementation

As with several previous CONSORT extensions, it was decided to publish the reporting checklist with a detailed Explanation & Elaboration statement in the same document.6–9 The Explanation & Elaboration statement is intended to provide an in-depth explanation of the scientific rationale for each recommendation, together with an example of clear reporting for each item.

In addition to publication of the reporting guideline checklist and Explanation & Elaboration material, we will undertake additional dissemination activities to maximise uptake, including presentations and workshops at conferences and other venues. We also plan to seek endorsement of the guideline by journal editors; research has shown that formal endorsement and adoption of the CONSORT statement by journals is associated with improved quality of reporting.2 Studies conducted by members of our team have benchmarked pre-extension reporting completeness in trials conducted using registries, EHRs and administrative databases.32–34 There were not enough examples of completed cohort-embedded trials to benchmark reporting for that design.

The final CONSORT-ROUTINE checklist has been published.24

Patient and public involvement

One of the members of our CONSORT-ROUTINE team, MSau, is a patient organisation leader. She has worked with researchers to establish a cohort of patients living with the rare disease scleroderma, which supports RCTs of online rehabilitation, self-management and psychological intervention programmes.

Results

Stage 2: scoping review and initial long list of potential items

The scoping review sought methods articles and reports of trials conducted using cohorts, registries, EHRs or administrative databases.

Cohorts

The database search identified 1185 publications, of which 1062 were excluded after title and abstract screening and 37 after full-text review. A total of 86 studies were included in the scoping review, including 15 papers on methodological considerations of using cohorts for conducting RCTs. All trials used the cohort for both identification of patients and outcome ascertainment.

Registries

The search identified 234 publications, of which 143 received full-text review. A total of 106 publications were eligible, including 95 trial reports or protocols (both identification of patients and outcome ascertainment (n=27); identification of patients only (n=28); outcome ascertainment only (n=40)) and 11 papers on methodological considerations.

Electronic health records

The search identified 2085 citations, of which 548 were reviewed at the full-text level. A total of 289 publications were eligible, including 263 trial protocols or reports (both identification of patients and outcome ascertainment (n=169); identification of patients only (n=38); outcome ascertainment only (n=56)) and 26 articles that described methodological considerations.

Administrative databases

The search identified 663 citations, of which 151 full texts were reviewed. A total of 117 trial protocols or reports were included (both identification of patients and outcome ascertainment (n=57); identification of patients only (n=1); outcome ascertainment only (n=58)), along with one paper on methodological considerations.
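As a small consistency check on the screening counts reported above (an illustrative tally only, using the numbers quoted in the text), the cohort flow and two of the per-source totals can be reproduced as follows:

```python
# Screening counts as reported in the scoping review results (illustrative tally only).
cohorts = {"identified": 1185, "excluded_title_abstract": 1062, "excluded_full_text": 37}
included_cohort_studies = (cohorts["identified"]
                           - cohorts["excluded_title_abstract"]
                           - cohorts["excluded_full_text"])
print(included_cohort_studies)  # 86, matching the 86 included cohort studies

# Eligible publications per data source: trial reports/protocols plus methods papers.
eligible = {
    "registries": 95 + 11,                  # 106 eligible publications
    "electronic_health_records": 263 + 26,  # 289 eligible publications
}
print(eligible)
```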

Delphi study results

Of 125 people invited to take part in the Delphi study, 115 registered via an online survey and 92 (74%) provided responses on the items in round 1. Figures 3 and 4 present the stakeholder groups of participants who completed round 1 of the Delphi study and the types of trials conducted using cohorts or routinely collected databases with which they had familiarity. Participants belonging to more than one category could select multiple options in the survey.

Figure 3

Professional roles reported by participants who completed round 1 of the CONSORT-ROUTINE Delphi study (%). Participants could report more than one role. CONSORT-ROUTINE, Consolidated Standards of Reporting Trials Extension for Trials Conducted Using Cohorts and Routinely Collected Data.

Figure 4

Participants of round 1 of the CONSORT-ROUTINE Delphi study by type of cohort or routinely collected database with which they had familiarity (%). Participants could report more than one. CONSORT-ROUTINE, Consolidated Standards of Reporting Trials Extension for Trials Conducted Using Cohorts and Routinely Collected Data.

Round 1

Of the 92 participants who completed the round 1 survey, 90 provided valid ratings and two provided comments but not ratings. Of the 27 items rated in round 1, 14 reached consensus to be included in discussions at the consensus meeting; the other 13 did not reach consensus and were included in round 2. Based on round 1 feedback, a total of 11 items were modified for review in round 2, including two items that were combined into one. No items were excluded from the checklist.

Round 2

Of the 92 participants who completed round 1, 77 (84%) completed the round 2 survey. Of the 13 items rated, 2 reached consensus for inclusion in consensus meeting discussions, and 11 did not reach consensus in round 2. Based on round 2 feedback, eight items were modified prior to round 3.

Round 3

Of the 77 people who completed round 2, 62 (81%) completed round 3. Of the 11 items rated in round 3, five reached consensus. The remaining six items did not reach consensus after the three rounds.
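As a brief worked check of the participation and item-flow figures reported for the three rounds (illustrative only, using the counts given in the text):

```python
# Round-by-round Delphi participation, as reported in the text (illustrative check).
invited, round1, round2, round3 = 125, 92, 77, 62
rates = {
    "round 1": round1 / invited,  # 92/125 -> 74%
    "round 2": round2 / round1,   # 77/92  -> 84%
    "round 3": round3 / round2,   # 62/77  -> 81%
}
for label, rate in rates.items():
    print(f"{label}: {rate:.0%}")

# Item flow: 27 items rated in round 1 (14 reached consensus), 13 in round 2
# (2 reached consensus), 11 in round 3 (5 reached consensus); 6 items did not
# reach consensus after three rounds.
assert 14 + 2 + 5 + 6 == 27
```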

Several new items were suggested via the Delphi process but not added to the potential item list. The main reasons for not incorporating a suggestion were:

  1. The suggestion was encapsulated in CONSORT 2010 items, was already captured by proposed new or modified items or could be captured by further modifying new or modified items.

  2. The suggestion was not specific to trials conducted using cohorts and routinely collected data and, thus, was recommending a change to the CONSORT 2010 checklist, which was not the task of the CONSORT-ROUTINE group.

Summary results of the three rounds can be accessed at: https://osf.io/4zh6f/

In-person consensus meeting

Table 1 summarises the CONSORT-ROUTINE group’s discussions and advisory decisions for each item discussed during the in-person meeting. If there were differing opinions on the inclusion or exclusion of items and consensus could not be reached, the session chair implemented a vote, with an 80% threshold for inclusion in the checklist as part of the minimal set of recommended reporting items. The key recommendations that emerged were as follows:

  • Proposed modifications to CONSORT 2010 items: it was recommended to retain proposed modifications to seven CONSORT 2010 items. These modifications pertained to differences in mechanisms used to conduct trials using cohorts or routinely collected databases. As in previous CONSORT extensions, some of the recommended changes end with ‘if applicable’ to indicate that some information that authors are asked to report might not be relevant or applicable to their particular RCT or to the particular type of data used in the RCT.

  • Proposed additional items: consensus was reached to include six additional items and to add a new subheading, ‘Cohort or routinely collected database’, to the checklist.

Table 1

Consensus meeting discussions and advisory decisions for the checklist items

A recurrent discussion point was the need, given the word limits imposed by journals, to avoid adding new items to the abstract unless they were essential. A suggestion was made to expand the explanatory text of the Explanation & Elaboration document for nine unchanged CONSORT 2010 items to clarify additional reporting requirements without modifying the items themselves: item 1a (identification as a randomised trial in the title), item 4b (settings and locations where the data were collected), item 5 (interventions), item 13b (losses and exclusions after randomisation), item 14a (dates of recruitment/follow-up), item 15 (baseline data), item 20 (limitations), item 21 (generalisability) and item 24 (study protocol). For the abstract, there was agreement to add an item for naming the cohort or routinely collected database (item 1c); this item was later merged with item 1b from the CONSORT 2010 checklist after discussion with the project team (table 1). Thus, the final extension checklist included eight modified items and five new items.24

CONSORT-ROUTINE pilot test

The preliminary version of the checklist was pilot-tested by 17 people who had previously been involved in conducting trials using cohorts and routinely collected data. Based on feedback from the pilot test, minor modifications were made to the wording of two items (item 1b and item 9) for clarity in the final checklist.24

Discussion

We have developed a consensus-driven extension to the CONSORT 2010 Statement for RCTs conducted using cohorts and routinely collected data.24 CONSORT-ROUTINE contains minimum reporting requirements with appropriate flexibility, as described in the Explanation & Elaboration part of our checklist document. This article describes how we reached the final checklist and Explanation & Elaboration text and provides information on the decision-making process. We anticipate that this paper will help others learn from our experiences and apply them to the development of future guidelines or extensions.

There were several important strengths to our approach. A consensus-driven Delphi methodology, which is recommended when developing healthcare reporting guidelines by the EQUATOR network, was used to develop the extension.23 We engaged with key stakeholders in trials research and potential end-users of the resultant CONSORT-ROUTINE reporting guideline throughout the development process. The process involved participants from a wide range of scientific disciplines and with diverse experience in conducting trials using different cohorts and routinely collected databases. As with other CONSORT-related guidelines, the inclusion of CONSORT Group members (IB, DM and PR) was intended to ensure consistency in the use of recommended methods in the development, dissemination and implementation of the extension. We recorded high response rates of 74% (92 respondents), 84% (77 respondents) and 81% (62 respondents) in Delphi rounds 1, 2 and 3, respectively. In addition, the number of registered participants and responders is larger than in most Delphi surveys used to develop healthcare reporting guidelines.8 35 36 Finally, we achieved a high degree of consensus that was consistent across Delphi survey rounds for the majority of the items.

There are also limitations to consider. One is that most participants were academic researchers with primary roles in trials research, and despite our broad engagement efforts, the number of participants from some stakeholder groups was small. One patient was included as a member of the reporting guideline development team, but no patients participated in the Delphi exercise. It is possible that perceptions about the importance of items differed across stakeholder groups, which might have favoured the inclusion or exclusion of certain items. Nonetheless, our project group included people from diverse backgrounds with expertise in using different types of data sources, who oversaw the development process to ensure that the checklist was equally applicable to, and representative of, all four types of data sources. A second is that our scoping review was not designed to capture every trial conducted using routinely collected data, in part because of the lack of accepted, specific Medical Subject Headings terms to identify these studies, or any research using routinely collected data, and the limited number of completed trials and methodological articles on these trial designs. For our purposes, it was not necessary to capture all trials that had been conducted using cohorts or routinely collected data, and we believe that we captured a substantial number of important trial reports and methodology papers that served as a basis for the development of our extension. A third is that the CONSORT-ROUTINE group predominantly consisted of members from high-income countries, which might have reduced the applicability of the checklist for trials conducted in other settings. Finally, as with all reporting guidelines, ours will require re-evaluation and revision over time to ensure that it is kept up to date with evolving research and knowledge on these trial designs.

Conclusion

CONSORT-ROUTINE has now been developed and can be used to support comprehensive reporting of RCTs conducted using cohorts or routinely collected data. The extension contains a minimum set of reporting requirements that we encourage researchers to follow. A baseline assessment of the completeness of reporting of these trial designs is being conducted, and the impact of the extension will be assessed in the coming years. While we anticipate that CONSORT-ROUTINE may need to be updated as research methods evolve, we hope the guideline will improve the reporting of RCTs conducted using cohorts and routinely collected data, enhance the interpretability and credibility of their results, improve their reproducibility, indirectly facilitate their robust design and conduct and lead to improved patient care.


References

Supplementary materials

  • Supplementary Data


  • Supplementary Data


Footnotes

  • Twitter @LGHemkens, @ericbenchimol, @msampso, @dmoher, @DrCGale

  • Correction notice This article has been corrected since it was published. Details regarding the final CONSORT-ROUTINE checklist under the section ‘Project phase 5: publication, dissemination and implementation’ have been corrected.

  • Contributors MI, LK, OF, LGH, MZ, CR, SML, DM, MSam, CG, EJ and BDT were involved in the initial phases of study conception, design of the search strategy and development of conceptual frameworks. SJM, KAM, DBR, EIB, LT, MKC, DE, HMV, IB, PR, JN, RU, MSau, JF and DT provided regular feedback on each of these steps. MI wrote the first draft with LK and BDT. All authors provided critical revisions to the manuscript and approved the final version.

  • Funding The development of CONSORT-ROUTINE was supported by the Canadian Institutes of Health Research (CIHR; PJT-156172; PCS-161863), and the UK National Institute for Health Research (NIHR) Clinical Trials Unit Support Funding – Supporting efficient/innovative delivery of NIHR research (Principal Investigator (PI): EJ, co-PI: CG). DBR was supported by a Vanier CIHR Graduate Scholarship; SML was supported by a Wellcome Senior Clinical Fellowship in Science (205039/Z/16/Z); EIB was supported by a New Investigator Award from CIHR, the Canadian Association of Gastroenterology and Crohn’s and Colitis Canada, and the Career Enhancement Program of the Canadian Child Health Clinician Scientist Program; RU was supported by the Canada Research Chairs Program (Award #231397); CG was supported by the UK Medical Research Council through a Clinician Scientist Fellowship; and BDT was supported by a Tier 1 Canada Research Chair, all outside of the present work.

  • Disclaimer The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.