Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design

Abstract

Background

Adequate reporting of adaptive designs (ADs) maximises their potential benefits in the conduct of clinical trials. Transparent reporting can help address some obstacles and concerns relating to the use of ADs. Currently, there are deficiencies in the reporting of AD trials. To overcome this, we have developed a consensus-driven extension to the CONSORT statement for randomised trials using an AD. This paper describes the processes and methods used to develop this extension, rather than providing a detailed explanation of the guideline.

Methods

We developed the guideline in seven overlapping stages:

  1. Building on prior research to inform the need for a guideline;

  2. A scoping literature review to inform future stages;

  3. Drafting the first checklist version involving an External Expert Panel;

  4. A two-round Delphi process involving international, multidisciplinary, and cross-sector key stakeholders;

  5. A consensus meeting to advise which reporting items to retain through voting, and to discuss the structure of what to include in the supporting explanation and elaboration (E&E) document;

  6. Refining and finalising the checklist; and

  7. Writing up and dissemination of the E&E document.

The CONSORT Executive Group oversaw the entire development process.

Results

Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in rounds 1 and 2 and across both rounds, respectively. Twenty-seven delegates from Europe, the USA, and Asia attended the consensus meeting. The main checklist has seven new and nine modified items, plus six unchanged items with expanded E&E text to clarify further considerations for ADs. The abstract checklist has one new and one modified item, together with an unchanged item with expanded E&E text. The E&E document will describe the scope of the guideline, the definition of an AD, and some types of ADs and trial adaptations, and will explain each reporting item in detail, including case studies.

Conclusions

We hope that making the development processes, methods, and all supporting information that aided decision-making transparent will enhance the acceptability and rapid uptake of the guideline. This will also help other groups when developing similar CONSORT extensions. The guideline is applicable to all randomised trials with an AD and contains minimum reporting requirements.

Introduction

Clinical trials are expected to adhere to high ethical and scientific standards and answer research questions robustly, as quickly as possible to benefit patients, and use no more research resources than necessary. The need to streamline the conduct of trials is a cross-sector (public and private sector) and regulatory priority [1,2,3,4,5,6]. Well-designed and properly conducted adaptive design (AD) trials can improve the efficiency of clinical trials and help achieve these objectives.

There is a growing interest in ADs across sectors to address the shortcomings of trials with a fixed design. Furthermore, there is considerable statistical methodological literature on ADs [7, 8] and new methods continue to be developed. Discussions on opportunities to use ADs across trial phases and advice on their robust design and conduct are growing [9,10,11,12,13,14,15,16,17]. Different types of ADs are increasingly used, or at least considered at the design stage, across sectors [18,19,20,21,22,23,24,25]. However, ADs bring a number of issues and challenges. There is a lack of practical knowledge of ADs, and obstacles and concerns about some types of ADs are impeding their use [22, 26,27,28,29,30,31,32]. Access to case studies of AD trials may help alleviate some of these problems [28, 33]. Consequently, authors have reviewed real-life AD case studies to build knowledge resources [18, 19, 34, 35]. Although these reviews found a number of AD case studies, especially in oncology, many of these trials are inadequately reported and thus may not address some of the concerns about ADs [18, 33, 36]. Adequate reporting will improve the credibility and interpretability of ADs and increase their application [28, 34].

The Consolidated Standards of Reporting Trials (CONSORT) framework has been instrumental in promoting transparent reporting of randomised trials. Increased complexity of trial design and conduct, as is common in AD trials, comes with additional transparency and reporting demands. The CONSORT 2010 statement [37] includes the concept of changes to the trial design and methods after commencement, without differentiating between planned adaptations and unplanned changes (item 3b), and interim stopping rules (item 7b). It does not, however, specifically address the general reporting needs of randomised trials that use an AD. As noted above, reporting deficiencies of AD trials have been highlighted [18, 23, 33,34,35], and it has been suggested that additional reporting considerations are needed to address this [33,34,35, 38]. However, these papers lack a grounded methodological approach to developing comprehensive reporting guidance. Thus, the suggested piecemeal recommendations are likely to be incomplete and unlikely to be accepted or to influence practice, because they lack input from important stakeholders gathered through a robust process. This project therefore aimed to address this limitation by using a recommended consensus-driven framework [39] to develop an official reporting guideline, the Adaptive designs CONSORT Extension (ACE), for randomised trials that use ADs.

In the spirit of good reporting practice, this paper describes the processes and methods that the ACE Steering Committee (SC) used to develop a consensus-driven ACE reporting guideline. We provide justification for the decisions made to arrive at the final checklist and explain the structure of the forthcoming ACE explanation and elaboration (E&E) document. Box 1 lists the long-term objectives of the ACE project.

Methods

Ethical approval for this study was granted by the Research Ethics Committee (REC) of the School of Health and Related Research (ScHARR) at the University of Sheffield (ref: 012041). The guideline development process adhered to a consensus-driven methodological framework for developing healthcare reporting guidelines recommended by the CONSORT Executive Group [39]. An a priori registered protocol accessible via the EQUATOR Network [40] guided the conduct of this research, and Fig. 1 summarises the development process.

Fig. 1 Development process of the Adaptive designs CONSORT Extension (ACE) guideline for randomised trials

Study management and group composition

A multidisciplinary SC of 19 members from industry and the public sector, including the CONSORT Executive Group representative (DA) and members of the MRC Network of Hubs for Trials Methodology Research (HTMR) Adaptive Designs Working Group (ADWG), led the guideline development process. Members were based in Europe, the USA, and Asia. The professional experience of members included methodology and conduct of AD trials, management and conduct of randomised trials, regulatory assessment and approval, reviewing research grant applications and decision-making on research funding panels, systematic reviewing of evidence, and development of reporting guidelines. This composition was motivated by the need to capture diverse views of experts across sectors with multidisciplinary roles in trials research covering wide geographical locations.

A Study Management Group (SMG) comprising thirteen SC members oversaw the day-to-day project activities in consultation with the SC. For quality control, we sought advice from an External Expert Panel of four members based in the USA, UK, and Australia, all with practical and methodological expertise in AD trials, while drafting the version of the checklist to be included in the Delphi surveys. Additional file 1 summarises the project activities undertaken throughout the development process.

Prior work to inform the need for a CONSORT extension

The findings from a National Institute for Health Research (NIHR) Doctoral Research Fellowship (DRF-2012-05-182) led by MD and supervised by SJ, ST, and JN informed the need for this research [33]. The idea was presented, discussed, and contextualised at the 2016 annual workshop of the MRC HTMR ADWG attended by six members of the ACE SC (MD, TJ, PP, JW, AM, and CW). In summary, research prior to 2016 investigated obstacles and potential facilitators to the use of AD trials [22, 26, 28,29,30,31,32, 41] as well as deficiencies in their reporting [18, 23, 33, 34]. Further research highlighted the overwhelming need for a tailored reporting guideline for AD trials with literature suggesting some reporting principles [26, 28, 33,34,35, 38].

We approached the CONSORT Executive Group in 2016 to inform them about our plans for the ACE guideline, and they agreed to oversee the development process. Before the research began, we also performed a scoping free-text search on 10 October 2016 using the term ‘adaptive’ on the EQUATOR Network database [42], but found no reporting guideline on ADs, nor any related guideline under development.

Scoping literature review

The objectives of the scoping narrative review were to collate any concerns about AD trials or considerations that may influence their reporting, to identify any suggestions on how AD trials should be reported, and to establish definitions of technical terms. The aim was to guide the preliminary drafting of the reporting items and working definitions for the extension checklist. The review also helped us to create a list of authors who had published AD trials or methodology research as potential participants for the Delphi surveys.

The literature search was not intended to be exhaustive but to provide a good foundation for the guidance development process. We searched the MEDLINE database via PubMed on 17 November 2016 for any articles about randomised AD trials written in English using this combination of terms: ((“adaptive design”) OR (“adaptive clinical trial”) OR (“adaptive trial”) OR (“adaptive interim”) OR (“flexible design”)) AND (reporting OR recommendation* OR (“best practice”) OR (“good practice”) OR (“panel discussion*”) OR guidance OR guideline* OR interpretation OR bias OR (“expert opinion”) OR (“expert panel”)). We retrieved 237 articles, from which we excluded 51: 33 were ineligible (irrelevant to the subject or about non-randomised studies), 16 inaccessible, one duplicate, and one had an English abstract but was written in Chinese. We narratively reviewed 186 eligible publications, and key ones are cited in relevant sections. We also reviewed some additional key documents that we were aware of but that were not retrieved by the search strategy, such as regulatory reflection guidance [4,5,6]. We summarised the findings and drafted a preliminary checklist in preparation for our first face-to-face SC meeting.
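
For illustration of reproducibility, a search of this kind can be re-run programmatically. The sketch below is illustrative only and not part of the original methods: it submits the stated term combination to the NCBI E-utilities esearch endpoint; the retrieval limit and result handling are our assumptions.

```python
# Illustrative sketch (not part of the original methods): re-running the
# stated PubMed search via the NCBI E-utilities `esearch` endpoint.
import requests

# Term combination copied from the text above.
QUERY = (
    '(("adaptive design") OR ("adaptive clinical trial") OR ("adaptive trial") '
    'OR ("adaptive interim") OR ("flexible design")) AND (reporting OR '
    'recommendation* OR ("best practice") OR ("good practice") OR '
    '("panel discussion*") OR guidance OR guideline* OR interpretation OR '
    'bias OR ("expert opinion") OR ("expert panel"))'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": QUERY, "retmax": 300, "retmode": "json"},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print(f"{result['count']} records; first PMIDs: {result['idlist'][:5]}")
```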

Checklist drafting process

On 29 January 2017, the SC met in Sheffield for a full day to discuss the findings from the scoping review, agree a working definition of an AD trial, and discuss the preliminary extension checklist in the context of the concerns about AD trials and the changes needed to the CONSORT 2010 checklist.

What do we consider an adaptive design trial?

We found several references that provide definitions of an AD and related technical terms [5, 6, 16, 43,44,45,46]. Our review showed that what is considered an AD trial is inconsistently defined and often creates confusion [26, 41, 43]. However, there are three common themes in the definitions [5, 6, 16, 43, 46]: ‘use of accruing trial data’, ‘opportunity to make changes to aspects of the trial’, and ‘need to preserve trial validity and integrity’. After a lengthy discussion, the SC agreed to define an AD as:

A clinical trial design that offers pre-planned opportunities to use accumulating trial data to modify aspects of an ongoing trial while preserving the validity and integrity of that trial

By pre-planned, we mean that trial changes or adaptations are specified at the design stage, or at least before any unblinded review of the accumulating trial data, and that they are documented in an auditable trial-related document such as the trial protocol. We acknowledged the existence of flexible statistical methods to cope with unplanned trial changes under specific conditions [7]. However, we strongly believe that pre-planning is one of the necessary conditions to preserve the integrity of the trial, a view shared with regulatory guidance [4,5,6]. Thus, this guideline is not meant for trials with unplanned changes only (no planned adaptations).

Changes to aspects of an ongoing trial that depend solely on external information, rather than on accumulating trial data, fall outside what we consider an AD trial. Furthermore, we specifically exclude the use of accruing trial data to make changes that relate only to the feasibility and process aspects of conducting a trial, which form part of almost every trial. We refer to these changes as operational adaptations [47]. The types of ADs and trial aspects that can be modified are discussed in the literature [3, 9, 11, 15, 16, 24, 41, 48,49,50,51,52,53].

By validity, we meant the ability to provide correct statistical inference to establish the effects of the study interventions and produce accurate estimates of the effects (such as point estimates and associated uncertainty) to give results that are convincing to research consumers. Finally, the use of the word integrity pertains to minimisation of operational bias, maintenance of data confidentiality, and consistency in trial conduct for credibility, interpretability, and persuasiveness of trial results. Our definitions of terms relating to ADs are listed in Additional file 2.

What are the concerns for adaptive design trials?

The review found some key publications that discussed why the reporting of AD trials requires special consideration, as well as reporting suggestions or recommendations for particular types of AD trials [23, 25, 33, 34, 38, 45, 51, 53,54,55,56,57,58,59,60,61]. Despite their appealing nature and promising benefits, ADs are not immune to potential biases and limitations [9, 50, 53].

Box 2 summarises the concerns or considerations that influence the reporting of ADs into eight themes that may depend on the type of the AD and scope of the trial adaptations used. These themes explain why the reporting of AD trials requires special consideration, and they influenced the development of the ACE guideline.

Drafting of the first extension checklist

The SC then discussed the preliminary extension checklist drafted during the scoping literature review, focusing on what changes needed to be made, how they should be structured, and the justification for them. We classified items as ‘no changes proposed’, ‘modifications proposed’, and ‘new item suggested’. A report summarising the discussions is accessible online (see download at https://doi.org/10.15131/shef.data.6139631). Following the first face-to-face meeting in Sheffield, the checklist was redrafted and refined in an iterative process through subsequent face-to-face and teleconference meetings and email correspondence involving the SMG and the SC.

The External Expert Panel reviewed the draft checklist and working definitions of technical terms. We added two specific items suggested by the panel on how to deal with overrunning participants (12e) and multiple outcomes or multiple treatment comparisons (12f) (see download at https://doi.org/10.15131/shef.data.6198290). The panel also suggested rewording some items for clarification and identified specific aspects that should be addressed in the E&E document. In addition, independent experts were consulted to review the draft checklist to identify major problems with the content and wording of items.

On 5 May 2017, the SC finalised the official first draft of the extension checklist with a total of 58 items. This list included 22 new items, 15 modified items, and 21 items unchanged from the CONSORT 2010 checklist. This draft checklist is accessible online (see download at https://doi.org/10.15131/shef.data.6198290).

The sampling frame for the Delphi surveys

We aimed to engage key stakeholders across sectors and over wide geographical locations. We targeted those with AD-related experience including clinical trialists, clinical investigators, statisticians, trial methodologists, and health economists; those interested in using ADs; and consumers of research findings, decision makers, and policy-makers in clinical trials research including journal editors, systematic reviewers, research funders, regulators, research ethicists, and patient representative groups.

We created a list of 468 authors of AD-related publications (trials or methodology) from our review and known case studies [18, 34]. This list formed the majority of the survey sampling frame. The details of organisations or professional groups we also approached are accessible online (see download at https://doi.org/10.15131/shef.data.6291050). We used a wide range of platforms to reach out to key stakeholders of interest, such as targeted mailing lists, social media, and personal communications (see Additional file 3).

The Delphi process

The National Perinatal Epidemiology Unit (NPEU, University of Oxford) built and hosted the online Delphi surveys and provided administrative support to maintain the anonymity of participants’ responses. The SC, including the lead investigator and study coordinator, did not have access to any information that could link participants to their responses during or after the survey.

Number of survey rounds

The objective of the Delphi process was to assess the stability of opinions, viewed as consistency in importance ratings between rounds, and not merely to reach consensus. We expected two survey rounds to suffice to reach stability in perceptions, based on recent similar studies [62, 63]. However, the methodology permitted the SC the flexibility to undertake a third round if necessary, based on the results and feedback received in round 2.

Scoring system

We used an importance rating scale of 0 to 9 adopted in related Delphi surveys [62,63,64]: ‘not important’ (score 1 to 3), ‘important but not critical’ (score 4 to 6), ‘critically important’ (score 7 to 9), and ‘do not know’ (unsure). We used the same scoring system across rounds and indicated whether items were new (N), modified (M), or remained unchanged (U) from the CONSORT 2010 checklist [65]. See Fig. 2 for a screenshot.

Fig. 2 Snapshot of the online round 1 Delphi survey. [N] and [M] represent new and modified reporting items
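
To make the banding concrete, the following minimal sketch (illustrative only) maps a numeric rating to the categories above; treating any non-rating response, including the ‘do not know’ option, as a single residual category is our assumption about encoding, not the survey’s.

```python
# Minimal sketch of the rating bands described above; handling of anything
# outside 1-9 is an assumption, not the survey's actual encoding.
def importance_band(score) -> str:
    if isinstance(score, int):
        if 1 <= score <= 3:
            return "not important"
        if 4 <= score <= 6:
            return "important but not critical"
        if 7 <= score <= 9:
            return "critically important"
    return "do not know"

assert importance_band(8) == "critically important"
assert importance_band("unsure") == "do not know"
```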

Delphi round 1

We registered stakeholders who were willing to take part via a bespoke web-based platform. During registration, we obtained informed consent and gathered participants’ demographics and characteristics, such as geographical location, self-identified stakeholder group (clinical trial user, clinical trialist, or methodologist), employment sector, years of experience in trials research, and AD-related research experience.

Registered participants were sent personalised emails with a link to the round 1 survey. The survey landing page stated the ACE project aims, the contextual definition of an AD trial, and the scope of the guidance. We asked participants to rate their perceptions of the importance of the suggested reporting items. Unchanged items were included so that participants could comment on them and assess the completeness of the proposed extension checklist when completing the survey. We provided participants with the opportunity to give item-specific and general open-ended feedback, such as any potentially overlooked modifications or clarity issues. We activated the round 1 survey on 31 May 2017 and gave participants approximately 3 weeks to complete it.

Delphi round 2

Between rounds 1 and 2, we re-opened registration and extended recruitment to specifically target journal editors, using a similar process as described for round 1. All registered participants were eligible to complete round 2 unless they withdrew consent. In round 2, participants who completed the round 1 survey were presented with their own previous item rating scores and the distribution of the item ratings as displayed in Fig. 3 (including medians and interquartile ranges (IQRs) of all participants (green) and their self-identified stakeholder group at registration (blue)). We did not display previous data for participants who only completed the round 2 survey. We asked participants to rate the importance of 38 new or modified items as compared to the CONSORT 2010 checklist. Item 21 (generalisability) from round 1 was unintentionally omitted from the round 2 survey due to a technical error (see download at https://doi.org/10.15131/shef.data.6198290). Items 14a (dates defining the periods of recruitment) and 14b (unexpected termination/why the trial ended or stopped) were modified for reasons stated in Additional file 4. We asked participants to give open-ended feedback, including any reasons for changing their ratings where applicable. The survey also displayed unchanged items from the CONSORT 2010 checklist and asked participants to provide any additional feedback without rating these items. The main and abstract draft checklists used for round 2 are accessible online (see download at https://doi.org/10.15131/shef.data.6198347). We launched the round 2 Delphi survey on 15 September 2017 and gave participants approximately 4 weeks to complete it.

Fig. 3 Snapshot of the online Delphi survey for round 2 among round 1 completers. Green shows the responses of all participants; blue shows the responses of the participant’s self-identified stakeholder group at registration (clinical trialist, clinical trial user, or methodologist)

Consensus decision-making criteria

We predefined consensus as at least 70% of responders rating an item as ‘critically important’ in the round 2 Delphi survey [40, 66]. Prior to the consensus meeting, we specified that the decision to retain an item should be based on at least 50% of delegates voting to ‘keep’ it [40]. These criteria, considered together with the feedback gathered, informed the SC’s final decisions about which reporting items to include in the ACE guideline.
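
Expressed as code, these two pre-specified thresholds amount to the following minimal sketch (the function names and interfaces are illustrative, not from the study materials):

```python
# Sketch of the two pre-specified decision rules; thresholds are from the
# text above, function names are hypothetical.
def delphi_consensus(n_critical: int, n_responders: int) -> bool:
    """Consensus: at least 70% of responders rate the item 'critically important'."""
    return n_responders > 0 and n_critical / n_responders >= 0.70

def retain_item(keep_votes: int, total_votes: int) -> bool:
    """Consensus-meeting retention: at least 50% of voting delegates choose 'keep'."""
    return total_votes > 0 and keep_votes / total_votes >= 0.50

assert delphi_consensus(n_critical=70, n_responders=100)
assert retain_item(keep_votes=14, total_votes=26)  # 53.8%, as reported for some items
```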

Analysis methods

We summarised the distribution of characteristics and demographics of registered participants and responders for each Delphi round. Item rating scores were descriptively analysed using the number of responders, the median (IQR), and mean (standard deviation, SD). We explored whether the ratings of participants differed by specific characteristics of interest using clustered boxplots stratified by:

  • Self-selected key stakeholder group (clinical trial user, clinical trialist, or methodologist);

  • Current employment sector (public sector or industry);

  • Self-reported regulatory assessment experience (yes or no); and

  • Primary role in clinical trials research as a statistician (yes or no).

We summarised the number and proportion of participants who rated an item as ‘not important’, ‘important but not critical’, and ‘critically important’, including the ‘do not know’ category. We analysed qualitative feedback gathered during the Delphi surveys using a simple thematic analysis [67] to identify common comments, elucidate feedback on suggested (new or modified) items, and gather additional content suggestions for the checklist.
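
For concreteness, the following toy sketch (pandas assumed; the ratings and item name are invented) reproduces these descriptive summaries for a single item:

```python
# Toy sketch of the descriptive summaries described above (pandas assumed;
# ratings and item name are invented for illustration).
import pandas as pd

ratings = pd.Series([8, 9, 7, 5, 9, 3, 7, 8, 6, 9], name="item_3c")

summary = {
    "n": int(ratings.count()),
    "median": ratings.median(),
    "IQR": (ratings.quantile(0.25), ratings.quantile(0.75)),
    "mean": round(ratings.mean(), 2),
    "SD": round(ratings.std(), 2),
}

# Proportion of responders per importance band (1-3, 4-6, 7-9).
bands = pd.cut(
    ratings,
    bins=[0, 3, 6, 9],
    labels=["not important", "important but not critical", "critically important"],
)
print(summary)
print(bands.value_counts(normalize=True))
```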

We assessed the stability and consistency of individual ratings of item importance across rounds using the following measures (illustrated in the sketch after the list):

  1. Percentage agreement, assessed as the proportion of responders whose ratings were the same in both rounds;

  2. Weighted Cohen’s kappa with absolute error weights [68], with confidence intervals calculated using bootstrapping [69]; and

  3. Bland-Altman plots [70] and histograms of changes in the scores between rounds.
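
The following toy sketch illustrates the three measures on invented paired ratings; scikit-learn’s ‘linear’ weighting corresponds to absolute error weights, and the simple percentile bootstrap is our assumption about implementation detail:

```python
# Toy sketch of the three stability measures listed above; scikit-learn's
# linear weighting corresponds to absolute-error weights.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
r1 = rng.integers(1, 10, size=80)                      # round 1 ratings (toy)
r2 = np.clip(r1 + rng.integers(-1, 2, size=80), 1, 9)  # round 2 ratings (toy)

# 1) Percentage agreement: identical ratings in both rounds.
agreement = np.mean(r1 == r2)

# 2) Weighted Cohen's kappa with a bootstrap percentile confidence interval.
kappa = cohen_kappa_score(r1, r2, weights="linear")
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(r1), len(r1))  # resample rating pairs
    boot.append(cohen_kappa_score(r1[idx], r2[idx], weights="linear"))
lo, hi = np.percentile(boot, [2.5, 97.5])

# 3) Bland-Altman quantities: plot (r1 + r2) / 2 against r2 - r1 and
#    histogram the differences to show between-round changes.
diffs = r2 - r1

print(f"agreement={agreement:.1%}, kappa={kappa:.2f} "
      f"(95% CI {lo:.2f} to {hi:.2f}), mean change={diffs.mean():+.2f}")
```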

Decision-making process

Feedback-based adaptation process

The SC reviewed the open-ended feedback received to inform the development process, for example by modifying items for clarification and testing the wording of items. For instance, in round 1, we tested preferences between two additional versions of item 14c (adaptation decisions): 14d (pre-planned adaptation decisions) and 14e (deviations from pre-planned adaptation decisions) (see download at https://doi.org/10.15131/shef.data.6198290). The wording of items and the structure of the checklist evolved during the process.

Consensus meeting and onwards

The aim of the consensus meeting was to discuss the round 2 Delphi survey results; to make advisory decisions, through voting, on items to retain in the guideline, including reasons supporting those decisions; and to suggest reporting aspects that should be addressed in the supporting E&E document. We held a full-day meeting on 8 November 2017 in London, attended by 27 delegates from the UK, USA, Europe, and Asia. Delegates from the public sector and industry included clinical investigators, trial statisticians, journal editors, systematic reviewers, funding panel members, methodologists, and the CONSORT Executive Group representative. Professor Deborah Ashby was the independent chair of the meeting. We took notes during the meeting and audio-recorded and transcribed the discussions to ensure that the content was accurately captured. Following the discussion of each checklist item or group of items, we asked delegates to vote anonymously on the item’s inclusion: ‘keep’, ‘drop’, or ‘unsure or no opinion’. We also included the item-voting preferences of a 28th delegate who was unable to attend in person but provided their ratings of checklist items remotely; the project support administrator voted on their behalf. Twenty-six delegates voted, with EC and the independent chair excluded from voting to maintain the independence of the process.

Results

Response rates across rounds

In round 1, we registered 143 participants, of whom 94 (65.7%) completed the survey. Of these 94, 86 (91.5%) rated all 58 items and the remaining 8 (8.5%) rated 45 items or fewer. We registered an additional 13 participants after round 1, bringing the total number of registered participants in round 2 to 156. The round 2 response rate was 114/156 (73.1%). Of these 114, 110 (96.5%) rated all 38 items and the remaining 4 (3.5%) rated 22 items or fewer.

Excluding 13 participants who were only registered after round 1, 79/143 (55.2%) completed both round 1 and 2 surveys. Of the 114 round 2 responders, 35 (30.7%) did not complete the round 1 survey.

Characteristics of registered participants and responders

Additional file 5 presents the demographics and characteristics of registered participants and responders (completers of at least one reporting item in at least one round). Registered participants and responders were very similar across rounds. Responders in rounds 1 and 2 were based in 19 and 21 countries, respectively; the majority were from the UK, other European countries, and the USA. The majority of responders identified themselves as statisticians in their primary role in trials research; other prominent roles were clinical investigators and trial methodologists. However, the secondary roles in trials research were more diverse. Some stakeholder groups including regulatory assessors, health economists, and research ethicists were underrepresented. Over two thirds of responders were from the public sector. Responders had diverse AD-related experience, and most identified themselves as clinical trialists or methodologists.

Delphi round 1

Perceptions of proposed items

Additional file 6 summarises the distribution of the responders’ perceptions of the importance of reporting items. Detailed item descriptors are accessible online (see download at https://doi.org/10.15131/shef.data.6198290). Of the 22 new items, 11 (50.0%) and 17 (77.3%) were perceived as critical for inclusion by at least 70% and 50% of responders, respectively. Except for one modified item (15a—appropriate baseline data for comparability), which was rated as critical by only 62.9% of responders, the remaining 14 modified items were rated as critical by at least 70% of responders.

The perceptions of responders about the importance of suggested reporting items were broadly consistent across self-identified stakeholder groups, employment sectors, regulatory assessment experience, and statistical primary role. Figures 4 and 5 display these response patterns for two reporting items selected for illustration. The remaining clustered boxplots for the new or modified items are accessible online (see download at https://doi.org/10.15131/shef.data.6139721.v1).

Fig. 4
figure 4

Round 1 perceptions about the importance of specifying pre-planned adaptations (item 3c). Item descriptor is downloadable at https://doi.org/10.15131/shef.data.6198290

Fig. 5
figure 5

Round 1 perceptions about the importance of decision-making criteria to guide adaptation (item 7b). Item descriptor is downloadable at https://doi.org/10.15131/shef.data.6198290

Open-ended feedback from participants and Steering Committee decisions

On 3 July 2017, the SC met face-to-face to discuss the round 1 Delphi survey results. The summary of the open-ended feedback we received is accessible online (see download at https://doi.org/10.15131/shef.data.6139631). Some responders highlighted that the guideline does not cover ADs used in non-randomised studies. However, we intentionally restricted the scope of the guideline to randomised trials to conform to the scope of the CONSORT 2010 framework and to avoid additional complexities. We suggest a separate reporting guideline specific to non-randomised ADs, which are commonly applied in phase 1 trials.

In the feedback, some responders were concerned that the draft checklist included little about aspects relating to Bayesian AD trials. The SC had thought about this at the planning stage and decided to make this guideline as general as possible and applicable to all AD randomised trials regardless of whether they were designed and analysed using frequentist, Bayesian, or both statistical paradigms. The E&E document will further discuss the scope of the guidance and illustrate reporting using examples of various frequentist and Bayesian randomised trials that use an AD.

In general, the qualitative feedback acknowledged that the first checklist draft was comprehensive. However, some responders felt that there were too many items, which might impede the use of ADs. The feasibility of reporting all aspects within limited journal space was questioned, although the availability of online repositories means this should no longer be a barrier to complete reporting. The SC had deliberately included a large number of draft items at this stage of the Delphi survey to gather perceptions about their importance; the aim of the Delphi process and the subsequent consensus meeting was then to help the SC decide on the essential items to retain.

Some responders suggested the need to include aspects of an estimand of interest, such as under item 2b (specific objectives and hypotheses). The SC acknowledged that the importance of estimands is growing [71,72,73]. However, it was felt that estimands are applicable to every trial, and we therefore recommended, via the CONSORT Executive Group representative, that such a modification be considered as a general amendment to the standard CONSORT 2010 when it is revised.

Based on the findings and feedback gathered, the SC made the following key decisions:

  • Open registration of new participants prior to round 2 specifically targeting journal editors to improve their participation;

  • Exclude the rating of unchanged items in round 2 to shorten completion time but include these items in the survey only to gather any qualitative feedback;

  • Terminate the Delphi survey after round 2 because the ratings suggested it was unlikely that additional valuable feedback would be gathered after this stage;

  • Submit an ethics amendment to increase the number of survey reminders sent to non-responders to six and to extend the completion period by 1 to 2 weeks, in order to improve the response rate; and

  • Provide general and itemised feedback to responders summarising their feedback and the SC’s response (what you said and what we did/will do).

Additional file 4 summarises some of the SC’s responses to responders’ qualitative feedback.

Delphi round 2

Additional file 7 presents the summary of item ratings of round 2 survey responders for new and modified items. See download at https://doi.org/10.15131/shef.data.6198347, for the detailed description of items for the main and abstract draft checklists used in round 2.

Perceptions of proposed items

For the abstract checklist, 65.8% of responders rated a new item on ‘adaptation decisions made’ as critical for inclusion (Additional file 7). The remaining four modified abstract items were rated as critical by at least 70% of responders. The overall distributions of ratings were relatively similar across these five abstract items.

For the main checklist items (Additional file 7), more than 70% of responders perceived 25/33 (75.8%) of the new or modified items as critical for inclusion, including 18/33 (54.5%) that were rated as critical by more than 90% of responders. Only 4/33 (12.1%) items received less than 50% of votes for being critical: contribution to future research (22b), simulation protocol and report (24d), data monitoring committee charter (24e), and statistical code (24f). However, these items were perceived as at least important by more than 80% of responders. The remaining four items were perceived as critical by between 60% and 68% of responders: dealing with overrun trial participants (12e), representativeness of patient population (15b), access to intentionally withheld information during trial conduct (24b), and access to the statistical analysis plan (24c).

As in round 1, the perceptions of responders about the importance of suggested reporting items were broadly consistent across self-identified stakeholder groups, employment sectors, regulatory assessment experience, and statistical primary role. Clustered boxplots showing response patterns in item ratings are accessible online (see download https://doi.org/10.15131/shef.data.6139721.v1).

For each item, we calculated the proportion of responders who did not change their item ratings between rounds. The median (IQR) of these proportions was 54.1% (48.6% to 57.1%), with a range of 38.7% to 61.6%. Individual item ratings between rounds were broadly consistent (Additional files 8 and 9). In addition, most responders who changed their rating in round 2 increased their scores from round 1, except for items 22b (contribution to future research) and 24e (data monitoring committee charter) (Additional file 9).

Open-ended feedback from participants

A summary of the open-ended feedback received in round 2 of the Delphi survey that was reviewed during the consensus meeting is accessible online (see download at https://doi.org/10.15131/shef.data.6139631). Two responders queried whether it was important to identify a trial as ‘adaptive’ in the title. We agreed on the importance of indexing an AD trial as adaptive. However, due to the increasing number of guidelines, it is impractical to mandate keywords in the title for every trial publication. Instead, we decided to recommend the inclusion of the word ‘adaptive’ in the trial abstract or at least as a keyword. This simplifies the search for AD trials in literature databases. A new item 3c (specification of pre-planned adaptation) then captures the details about the AD used.

Consensus meeting discussions

For the main checklist, Table 1 summarises the ACE Consensus Group discussions and advisory decisions, with suggestions of related issues to address. Delegates voted on whether to keep or discard each item, or whether they were unsure. There was consensus (≥ 70% of votes) to include ten AD-specific items in the main checklist, of which five were new and five were modified items. A further five items were favoured by at least 50% of delegates: AD properties (50.0%), sample size (65.6%), and, with 53.8% each, randomisation updates after trial commencement (8c), dates defining periods of recruitment (14a), and inclusion of the statistical analysis plan (24c). A suggestion was made to expand the explanatory text of the E&E document for six items, to clarify additional requirements for some ADs without modifying the items: 14b (unexpected termination/why the trial ended or was stopped), 15 (appropriate baseline data for comparability), 16 (numbers analysed at interim and final analysis), 17a (primary outcome results), 20 (limitations, sources of bias, imprecision, and deviations), and 21 (generalisability) (Table 1). It was apparent after the meeting that modified item 6b (unplanned changes to outcomes) and new item 14c (adaptation decisions), each supported by 46.2% of votes for inclusion, needed further discussion by the SC (Table 1).

Table 1 Consensus meeting discussions and advisory decisions for the main checklist reporting items

For the abstract (Table 2), there was agreement to include two modified items (description of trial design and clearly defined outcome for this report) and one new item (adaptive decisions made). A recurrent discussion point was the need to minimise adding new items to the abstract, unless essential, because of the word limits imposed by journals.

Table 2 Consensus meeting discussions and advisory decision for the abstract checklist reporting items

Finalisation of the checklist

On 1 February 2018, the SMG met to discuss advisory decisions and suggestions made at the consensus meeting. The group discussed each item reflecting on the consensus report and agreed on the items to retain and structural changes required in the guidance.

The advisory decisions and suggestions from the consensus meeting were taken on board. The rationale for an AD (item 3b, Table 1) was dropped as a compromise but will be discussed in the E&E text under item 3c (pre-planned adaptations) and linked to the scientific background and explanation of the rationale (item 2a). We merged items 3e (AD properties) and 7b (sample size) because they are connected; as a result, we renamed the ‘sample size’ subheading ‘sample size and operating characteristics’. The modified item 6b (unplanned changes to outcomes), despite its borderline vote, was included for clarification purposes. In addition, item 14c (adaptation decisions) was regarded in discussion as very important and was also included, for consistency with the abstract decisions. For items 24b to 24f (Table 1), we decided to keep the statistical analysis plan (24c) as an important standalone item and to merge the other items (24b intentionally withheld information, 24d simulation protocol and report, 24e data monitoring committee charter, and 24f statistical code) for discussion in the E&E document as good practice.

For the abstract, we acknowledged the importance of including a clearly defined outcome used to inform adaptation if different from the primary outcome (1c, Table 2). However, for parsimony, given the word limits imposed on abstracts, we dropped the modified item and will instead expand the E&E text to discuss circumstances in which this information is desirable in the abstract.

Following the meeting, the checklist was revised, including rewording and reordering of some items (such as item 3c ‘specification of pre-planned adaptation’ becoming 3b ‘pre-planned adaptive design features’), in consultation with the SC. On 13 March 2018, we shared the revised checklist with the ACE Consensus Group for their final feedback on the changes made. On 18 April 2018, we finalised the ACE main and abstract checklists, which were signed off by the ACE Consensus Group and will be presented in the forthcoming E&E document. The ACE main checklist contains seven new and nine modified items, as well as six unchanged items that were recommended for inclusion in the expanded text of the E&E document for clarification. The other 21 items remain unchanged from the CONSORT 2010 Statement. The ACE abstract checklist has one new item, one modified item, and an unchanged item with expanded text, as well as 15 unchanged items. Table 3 presents the finalised modifications to the abstract and main report checklists excluding unchanged items.

Table 3 Finalised CONSORT extension for adaptive design randomised trials (only new and modified items and those with expanded E&E text)

Discussion

Main results or outputs

We have developed a consensus-driven extension to the CONSORT 2010 Statement for randomised trials using an AD to enhance transparency and adequate reporting. In the spirit of transparency, we have described in this paper the process for the development of the ACE checklist and provided all supporting information that aided the decision-making process. We hope that our experiences can help others in the development of other guidelines or extensions.

The guideline aims to promote transparency and adequate reporting of randomised trials that use ADs, not to stifle design innovation or the application of ADs. The ACE checklist provides the minimum requirements that we encourage researchers to report. It is good scientific practice to present additional information beyond this guideline if it helps the interpretation of AD trial results. In principle, we are not advocating the inclusion of details of every trial aspect in a single journal publication. We believe that what matters most is access to the details relating to the identified reporting items. For example, researchers can cite other accessible sources of information such as the protocol, simulation protocol and report, a prior publication detailing study design and rationale, methodology publications, and supplementary materials. In addition, the publishing landscape is rapidly changing to meet the needs for more transparency and adequate reporting.

During the development process, the SC came across a few reporting aspects that could be changed or added, such as estimands [71, 72] and data transparency, but decided not to do so. This is because we felt that changes to reporting aspects that apply to every trial should be managed via universal amendments to the CONSORT 2010 Statement. We did not want ACE to selectively impose additional reporting hurdles on ADs for aspects that apply equally to fixed designs. We have communicated this decision to the CONSORT Executive Group through its SC representative.

The ACE reporting guideline is applicable to all randomised AD trials regardless of the statistical framework used to design and analyse the trials (frequentist, Bayesian, or both). The supporting E&E document, to be accessed via the CONSORT [74] and EQUATOR Network [42] websites, will explain the checklist items in detail with the aid of examples and discussion. The E&E document will guide authors in determining which minimum AD aspects warrant reporting, and at what level of detail under different circumstances, aided by examples. We hope this ACE reporting guideline will address some concerns about certain AD trials and, consequently, indirectly improve their design, conduct, and the interpretability of results. We encourage researchers to use the guideline, and journal editors and reviewers to enforce compliance as part of their publication policy. The usefulness of reporting guidelines is maximised when there are adequate processes in place to enforce compliance [75].

Main strengths

We used a consensus-driven Delphi methodology recommended for developing healthcare reporting guidelines [39]. We engaged with key stakeholders in trials research and potential end-users of the resultant ACE reporting guideline throughout the development process, which involved participants from a wide range of scientific disciplines, employment sectors, and nationalities with diverse AD-related experiences. Throughout the checklist drafting process, an External Expert Panel provided quality control assurances. Fittingly, given the topic of the guideline, we adapted the development process in response to the feedback gathered. The CONSORT Executive Group, through its representative (DA), oversaw the development process throughout and endorsed the robust approach used to develop this CONSORT extension for AD randomised trials.

We recorded high response rates of 94/143 (66%), 114/156 (73%), and 79/143 (55%) in round 1, round 2, and across both rounds of the Delphi survey, respectively. The characteristics and demographics of registered participants and responders were very similar across Delphi survey rounds. The number of registered participants and responders is larger than in other similar Delphi surveys [62, 76, 77] and in most Delphi surveys used to develop healthcare reporting guidelines [78, 79], and comparable to that of the latest guideline on pilot and feasibility studies [80, 81]. We also improved the participation of key end-users of the guideline (journal editors) in round 2 by reopening registration after round 1. Finally, we achieved a high degree of consensus that was consistent across Delphi survey rounds for the majority of the items. Additional supplementary materials are publicly accessible (Additional file 10), including the participants who took part (Additional file 11).

Main limitations

Despite the highlighted strengths of this study, we also identified a number of limitations. First, over half of the survey participants were statisticians in their primary trials research role and, even though industry currently contributes a large proportion of ADs [18,19,20, 28, 82], over two thirds of participants were employed in the public sector. However, the secondary roles of participants in trials research were more diverse, including clinical investigators and trial methodologists. Nonetheless, perceptions about the importance of items were broadly consistent regardless of participants’ primary roles, self-identified stakeholder groups, and employment sectors.

Second, despite our broad engagement efforts, the number of participants from some stakeholder groups, such as health economists, regulatory assessors, and research ethicists, was small. Research on obstacles to AD trials has also reported poor uptake among these stakeholder groups [26, 28]. The implications for the guideline development are unclear. Paradoxically, although few participants identified themselves as regulatory assessors, about 43% stated that they had AD-related regulatory assessment experience. These could include researchers with regulatory experience gained through regulatory engagements or submissions of their trials, previous employees of regulatory agencies, or current regulatory assessors who did not want to identify themselves as employees of regulatory agencies during the surveys due to contractual issues. However, the perceptions of responders were consistent regardless of stated AD regulatory assessment experience. It should also be noted that only a small number of regulatory assessors were available for the sampling frame.

Finally, for practical purposes in line with the CONSORT 2010 statement, the ACE reporting guideline applies to randomised trials that use ADs. Hence, the guideline does not specifically address reporting aspects of non-randomised AD studies that are also applied in early phase trials. Nevertheless, the basic principles of the ACE reporting guideline may still be applicable to these interventional studies and are consistent with some researcher good practice propositions for writing early-phase AD study protocols [83]. We believe there is scope for a consensus-driven approach to develop a reporting guideline for non-randomised AD studies.

Conclusions

We have developed a consensus-driven CONSORT extension for AD randomised trials. This paper transparently describes how we reached the final ACE reporting checklist and the forthcoming E&E document, and provides all supporting information that aided the decision-making process. The process we described is not specific to ADs, so we hope our experiences will help researchers developing future guidelines or extensions. The ACE reporting guideline is applicable to all AD randomised trials and contains minimum reporting requirements, with appropriate flexibility to be described in the E&E document. We hope the guideline will improve the reporting of AD randomised trials, enhance the interpretability and credibility of their results, improve their reproducibility, and indirectly facilitate their robust design and conduct.

Abbreviations

ACE:

Adaptive designs CONSORT Extension

AD:

Adaptive design

ADMTP:

Adaptive Designs and Multiple Testing Procedures

ADWG:

Adaptive Designs Working Group

CONSORT:

Consolidated Standards of Reporting Trials

CTU:

Clinical Trials Unit

DIA:

Drive Insights to Action

DRF:

Doctoral Research Fellowship

E&E:

Explanation and elaboration

HTMR:

Hubs for Trials Methodology Research

IQR:

Interquartile range

ISCB:

International Society for Clinical Biostatistics

MRC:

Medical Research Council

NIHR:

National Institute for Health Research

PhRMA:

Pharmaceutical Research and Manufacturers of America

PSI:

Statisticians in the Pharmaceutical Industry

REC:

Research Ethics Committee

SC:

Steering Committee

ScHARR:

School of Health and Related Research

SCT:

Society for Clinical Trials

SD:

Standard deviation

SMG:

Study Management Group

UKCRC:

United Kingdom Clinical Research Collaboration

References

  1. Lauer MS, Gordon D, Wei G, Pearson G. Efficient design of clinical trials and epidemiological research: is it possible? Nat Rev Cardiol. 2017;14(8):493-501.

  2. O’Neill RT. FDA’s critical path initiative: a perspective on contributions of biostatistics. Biom J. 2006;48:559–64.

  3. Chow S-C. Adaptive clinical trial design. Annu Rev Med. Annual Reviews. 2014;65:405–15.

  4. CHMP. Reflection paper on methodological issues in confirmatory clinical trials planned with an adaptive design. 2007.

  5. FDA. Guidance for industry: adaptive design clinical trials for drugs and biologics. 2010.

  6. FDA. Adaptive designs for medical device clinical studies: draft guidance for industry and Food and Drug Administration staff. 2015.

  7. Bauer P, Bretz F, Dragalin V, König F, Wassmer G. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med. 2016;35:325–47.

  8. Bretz F, Koenig F, Brannath W, Glimm E, Posch M. Adaptive designs for confirmatory clinical trials. Stat Med. 2009;28:1181–217.

  9. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. BioMed Central. 2018;16:29.

  10. Guetterman TC, Fetters MD, Legocki LJ, Mawocha S, Barsan WG, Lewis RJ, et al. Reflections on the adaptive designs accelerating promising trials into treatments (ADAPT-IT) process-findings from a qualitative study. Clin Res Regul Aff. 2015;32:121–30.

  11. Rong Y. Regulations on adaptive design clinical trials. Pharm Regul Aff Open Access. OMICS International; 2014;03.

  12. Quinlan J, Krams M. Implementing adaptive designs: logistical and operational considerations. Drug Inf J. 2006;40:437–44.

  13. Gaydos B, Anderson KM, Berry D, Burnham N, Chuang-Stein C, Dudinak J, et al. Good practices for adaptive clinical trials in pharmaceutical product development. Drug Inf J. 2009;43:539–56.

  14. Thorlund K, Haggstrom J, Park JJ, Mills EJ. Key design considerations for adaptive clinical trials: a primer for clinicians. BMJ. British Medical Journal Publishing Group. 2018;360:k698.

  15. Park JJ, Thorlund K, Mills EJ. Critical concepts in adaptive clinical trials. Clin Epidemiol. Dove Press. 2018;10:343–51.

  16. Chow S-C, Chang M. Adaptive design methods in clinical trials - a review. Orphanet J Rare Dis. 2008;3:11.

  17. Jaki T. Designing multi-arm multi-stage clinical studies. Developments in Statistical Evaluation of Clinical Trials. Springer; 2014. p. 51–69. Available from: http://link.springer.com/10.1007/978-3-642-55345-5_3.

  18. Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. BioMed Central. 2016;17:150.

  19. Sato A, Shimura M, Gosho M. Practical characteristics of adaptive design in phase 2 and 3 clinical trials. J Clin Pharm Ther. 2018;43(2):170-80.

  20. Lin M, Lee S, Zhen B, Scott J, Horne A, Solomon G, et al. CBER’s experience with adaptive design clinical trials. Ther Innov Regul Sci. 2015;50:195–203.

  21. Yang X, Thompson L, Chu J, Liu S, Lu H, Zhou J, et al. Adaptive design practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. SAGE Publications. 2016;50:710–7.

  22. Morgan CC, Huyck S, Jenkins M, Chen L, Bedding A, Coffey CS, et al. Adaptive design: results of 2012 survey on perception and use. Ther Innov Regul Sci. 2014;48:473–81.

  23. Bauer P, Einfalt J. Application of adaptive designs – a review. Biom J. 2006;48:493–506.

  24. Curtin F, Heritier S. The role of adaptive trial designs in drug development. Expert Rev Clin Pharmacol. 2017;10(7):727-36.

  25. Elsäßer A, Regnstrom J, Vetter T, Koenig F, Hemmings RJ, Greco M, et al. Adaptive clinical trial designs for European marketing authorization: a survey of scientific advice letters from the European Medicines Agency. Trials. 2014;15:383.

  26. Dimairo M, Boote J, Julious SA, Nicholl JP, Todd S. Missing steps in a staircase: a qualitative study of the perspectives of key stakeholders on the use of adaptive designs in confirmatory trials. Trials. BioMed Central Ltd. 2015;16:430.

  27. Meurer WJ, Legocki L, Mawocha S, Frederiksen SM, Guetterman TC, Barsan W, et al. Attitudes and opinions regarding confirmatory adaptive clinical trials: a mixed methods analysis from the Adaptive Designs Accelerating Promising Trials into Treatments (ADAPT-IT) project. Trials. BioMed Central. 2016;17:373.

  28. Dimairo M, Julious SA, Todd S, Nicholl JP, Boote J. Cross-sector surveys assessing perceptions of key stakeholders towards barriers, concerns and facilitators to the appropriate use of adaptive designs in confirmatory trials. Trials. BioMed Central Ltd. 2015;16:585.

  29. Love SB, Brown S, Weir CJ, Harbron C, Yap C, Gaschler-Markefski B, et al. Embracing model-based designs for dose-finding trials. Br J Cancer. Nature Publishing Group. 2017;117:332–9.

  30. Jaki T. Uptake of novel statistical methods for early-phase clinical studies in the UK public sector. Clin Trials. 2013;10:344–6.

  31. Quinlan J, Gaydos B, Maca J, Krams M. Barriers and opportunities for implementation of adaptive designs in pharmaceutical product development. Clin Trials. 2010;7:167–73.

  32. Coffey CS, Levin B, Clark C, Timmerman C, Wittes J, Gilbert P, et al. Overview, hurdles, and future work in adaptive designs: perspectives from a National Institutes of Health-funded workshop. Clin Trials. 2012;9:671–80.

  33. Dimairo M. The utility of adaptive designs in publicly funded confirmatory trials. 2016. http://etheses.whiterose.ac.uk/13981. Accessed 7 July 2017.

  34. Stevely A, Dimairo M, Todd S, Julious SA, Nicholl J, Hind D, et al. An investigation of the shortcomings of the CONSORT 2010 statement for the reporting of group sequential randomised controlled trials: a methodological systematic review. PLoS One. 2015;10:e0141104.

  35. Mistry P, Dunn JA, Marshall A. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines. BMC Med Res Methodol. 2017;17:108.

  36. Stevely A, Dimairo M, Todd S, Julious SA, Nicholl J, Hind D, et al. An investigation of the shortcomings of the CONSORT 2010 statement for the reporting of group sequential randomised controlled trials: a methodological systematic review. PLoS One. 2015;10:e0141104.

  37. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152:726–32.

  38. Detry MA, Lewis RJ, Broglio KR, Connor JT, Berry SM, Berry DA. Standards for the design, conduct, and evaluation of adaptive randomized clinical trials. Washington: Patient-Centered Outcomes Research Institute; 2012. http://www.pcori.org/assets/Standards-for-the-Design-Conduct-and-Evaluation-of-Adaptive-Randomized-Clinical-Trials.pdf. Accessed 7 July 2017.

  39. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7:e1000217.

  40. Dimairo M, Todd S, Julious S, Jaki T, Wason J, Hind D, et al. ACE Project Protocol Version 2.3: development of a CONSORT Extension for adaptive clinical trials. EQUATOR Network; 2016. http://www.equator-network.org/wp-content/uploads/2017/12/ACE-Project-Protocol-v2.3.pdf. Accessed 25 January 2018.

  41. Kairalla JA, Coffey CS, Thomann MA, Muller KE. Adaptive trial designs: a review of barriers and opportunities. Trials. 2012;13:145.

  42. The EQUATOR Network. http://www.equator-network.org/reporting-guidelines/. Accessed 10 October 2016.

  43. Dragalin V. Adaptive designs: terminology and classification. Drug Inf J. 2006;40:425–35.

  44. Cook T, DeMets DL. Review of draft FDA adaptive design guidance. J Biopharm Stat. 2010;20:1132–42.

  45. Chow S-C, Corey R. Benefits, challenges and obstacles of adaptive clinical trial designs. Orphanet J Rare Dis. 2011;6:79.

  46. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J. Adaptive designs in clinical drug development – an executive summary of the PhRMA Working Group. J Biopharm Stat. 2006;16:275–83; discussion 285–91, 293–8, 311–2.

  47. Rosenberg MJ. The agile approach to adaptive research: optimizing efficiency in clinical development. 1st ed. Hoboken: John Wiley & Sons, Inc; 2010.

  48. Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, et al. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25.

  49. Wang SJ, Hung HM, O'Neill R. Adaptive design clinical trials and trial logistics models in CNS drug development. Eur Neuropsychopharmacol. 2011;21(2):159–66.

  50. Elman SA, Ware JH, Gottlieb AB, Merola JF. Adaptive clinical trial design: an overview and potential applications in dermatology. J Invest Dermatol. 2016;136:1325–9.

  51. Porcher R, Lecocq B, Vray M, D’Andon A, Bassompierre F, Béhier J-M, et al. Adaptive methods: when and how should they be used in clinical trials? Therapie. 2011;66:319–26.

  52. Maca J, Dragalin V, Gallo P. Adaptive clinical trials: overview of phase III designs and challenges. Ther Innov Regul Sci. 2014;48:31–40.

  53. Bauer P, Brannath W. The advantages and disadvantages of adaptive designs for clinical trials. Drug Discov Today. 2004;9:351–7.

  54. Coffey CS, Kairalla JA. Adaptive clinical trials: progress and challenges. Drugs R D. 2008;9:229–42.

  55. Gallo P. Operational challenges in adaptive design implementation. Pharm Stat. 2006;5:119–24.

  56. Gaydos B, Anderson KM, Berry D, Burnham N, Chuang-Stein C, Dudinak J, et al. Good practices for adaptive clinical trials in pharmaceutical product development. Ther Innov Regul Sci. 2009;43:539–56.

  57. Phillips AJ, Keene ON. Adaptive designs for pivotal trials: discussion points from the PSI Adaptive Design Expert Group. Pharm Stat. 2006;5:61–6.

  58. Gould AL. How practical are adaptive designs likely to be for confirmatory trials? Biom J. 2006;48:644–9.

  59. Spencer K, Colvin K, Braunecker B, Brackman M, Ripley J, Hines P, et al. Operational challenges and solutions with implementation of an adaptive seamless phase 2/3 study. J Diabetes Sci Technol. 2012;6:1296–304.

  60. Koch A. Confirmatory clinical trials with an adaptive design. Biom J. 2006;48:574–85.

  61. Chuang-Stein C, Beltangady M. FDA draft guidance on adaptive design clinical trials: Pfizer’s perspective. J Biopharm Stat. 2010;20:1143–9.

  62. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64.

  63. Kirkham JJ, Gorst S, Altman DG, Blazeby J, Clarke M, Devane D, et al. COS-STAR: a reporting guideline for studies developing core outcome sets (protocol). Trials. 2015;16:373.

  64. Gamble C, Krishan A, Stocken D, Lewis S, Juszczak E, Doré C, et al. Guidelines for the content of statistical analysis plans in clinical trials. JAMA. 2017;318:2337–43.

  65. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.

  66. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67:401–9.

  67. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

  68. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.

  69. Efron B. Bootstrap methods: another look at the jackknife. Ann Stat. 1979;7:1–26.

  70. Bland M, Altman D. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;327:307–10.

  71. Akacha M, Bretz F, Ruberg S. Estimands in clinical trials – broadening the perspective. Stat Med. 2017;36:5–19.

  72. Akacha M, Bretz F, Ohlssen D, Rosenkranz G, Schmidli H. Estimands and their role in clinical trials. Stat Biopharm Res. 2017;9:268–71.

  73. Phillips A, Abellan-Andres J, Soren A, Bretz F, Fletcher C, France L, et al. Estimands: discussion points from the PSI estimands and sensitivity expert group. Pharm Stat. 2017;16:6–11.

  74. Extensions of the CONSORT Statement. http://www.consort-statement.org/extensions. Accessed 21 May 2018.

  75. Blanco D, Biggane AM, Cobo E. Are CONSORT checklists submitted by authors adequately reflecting what information is actually reported in published papers? Trials. 2018;19:80.

  76. Agha RA, Fowler AJ, Rajmohan S, Barai I, Orgill DP, Afifi R, et al. Preferred reporting of case series in surgery; the PROCESS guidelines. Int J Surg. 2016;36:319–23.

  77. Vohra S, Shamseer L, Sampson M, Bukutu C, Schmid CH, Tate R, et al.; CENT Group. CONSORT extension for reporting N-of-1 trials (CENT) 2015 statement. J Clin Epidemiol. 2016;76:9–17.

  78. Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D, et al. The CARE guidelines: consensus-based clinical case reporting guideline development. Glob Adv Health Med. 2013;2:38–43.

  79. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS)—explanation and elaboration: a report of the ISPOR health economic evaluation publication guidelines good reporting practices task force. Value Health. 2013;16:231–50.

  80. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. BMJ. 2016;355:i5239.

  81. Stevens GA, Alkema L, Black RE, Boerma JT, Collins GS, Ezzati M, et al. Guidelines for accurate and transparent health estimates reporting: the GATHER statement. PLoS Med. 2016;13:e1002056.

  82. Bothwell LE, Avorn J, Khan NF, Kesselheim AS. Adaptive design clinical trials: a review of the literature and ClinicalTrials.gov. BMJ Open. 2018;8:e018320.

  83. Lorch U, O’Kane M, Taubel J. Three steps to writing adaptive study protocols in the early phase clinical development of new medicines. BMC Med Res Methodol. 2014;14:1–9.

  84. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med. 2008;5:e20.

  85. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet. 2008;371:281–3.

Acknowledgements

The SC valued the administrative support and additional coordination provided by Sarah Gonzalez throughout the project. The authors thank the Sheffield Clinical Trials Research Unit for its support, particularly Mike Bradburn and Cindy Cooper for providing protected time to ensure the progress of this project; Benjamin Allin and Anja Hollowell for technical and administrative support with the Delphi surveys; and Peter Bauer and Martin Posch for their helpful review feedback on the draft checklist.

ACE Steering Committee: Munyaradzi Dimairo, Elizabeth Coates, Philip Pallmann, Susan Todd, Steven A. Julious, Thomas Jaki, James Wason, Adrian P. Mander, Christopher J. Weir, Franz Koenig, Marc K. Walton, Katie Biggs, Jon Nicholl, Toshimitsu Hamasaki, Michael A. Proschan, John A. Scott, Yuki Ando, Daniel Hind, and Douglas G. Altman

ACE Study Management Group: Munyaradzi Dimairo, Elizabeth Coates, Philip Pallmann, Susan Todd, Steven A. Julious, Thomas Jaki, James Wason, Adrian Mander, Christopher J. Weir, Franz Koenig, Katie Biggs, Jon Nicholl, and Daniel Hind

The External Expert Panel

We would like to thank the following members for their invaluable contributions in reviewing the checklist and the working definitions of technical terms used in the Delphi surveys: William Meurer a, Yannis Jemiai b, Stephane Heritier c, and Christina Yap d.

a University of Michigan, Taubman Center, USA

b Cytel, Cambridge, USA

c Monash University, Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Australia

d Cancer Research UK Clinical Trials Unit, Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK

The ACE Consensus Group

We are very grateful to the participants who contributed to a very successful consensus meeting that influenced the decision-making process of the SC. All members, including the SC, signed off on the final ACE checklist.

ACE Consensus Group: Munyaradzi Dimairo 1, Toshimitsu Hamasaki 2, Susan Todd 3, Christopher J Weir 4, Adrian P. Mander 5, James Wason 5, 6, Franz Koenig 7, Steven A. Julious 8, Daniel Hind 1, Jon Nicholl 1, Douglas G Altman 9, William J. Meurer 10, Christopher Cates 11, Matthew Sydes 12, Yannis Jemiai 13, Deborah Ashby 14 (Chair, non-voting member), Christina Yap 15, Frank Waldron-Lynch 16, James Roger 17, Joan Marsh 18, Trish Groves 19, Olivier Collignon 20, David J. Lawrence 21, Catey Bunce 22, Tom Parke 23, Gus Gazzard 24, Elizabeth Coates 1 (non-voting member), and Marc K Walton 25

1School of Health and Related Research, University of Sheffield, Sheffield, UK

2National Cerebral and Cardiovascular Center, Osaka, Japan

3Department of Mathematics and Statistics, University of Reading, Reading, UK

4Edinburgh Clinical Trials Unit, Centre for Population Health Sciences, Usher Institute of Population Health Sciences & Informatics, The University of Edinburgh, Edinburgh, UK

5MRC Biostatistics Unit, University of Cambridge, School of Clinical Medicine, Cambridge Institute of Public Health, Cambridge, UK

6Institute of Health and Society, Newcastle University, UK

7Medical University of Vienna, Center for Medical Statistics, Informatics, and Intelligent Systems, Vienna, Austria

8Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK

9Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK

10University of Michigan, Taubman Center, USA

11Cochrane Airways, PHRI, SGUL, London, UK

12MRC Clinical Trials Unit, UCL, Institute of Clinical Trials & Methodology, London, UK

13Cytel, Cambridge, USA

14Imperial College London, St. Mary’s Campus, London, UK

15Cancer Research UK Clinical Trials Unit, Institute of Cancer and Genomic Sciences, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK

16Novartis Institutes for Biomedical Research, Basel, Switzerland

17Institutional address not applicable

18The Lancet Psychiatry, London, UK

19BMJ, BMA House, London, UK

20Luxembourg Institute of Health, Strassen, Luxembourg

21Novartis Campus, Basel, Switzerland

22School of Population Health and Environmental Sciences, Faculty of Life Sciences and Medicine, King’s College London, London, UK

23Berry Consultants, Merchant House, Abingdon, UK

24NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, UK

25Janssen Research and Development, Ashton, USA

Delphi survey participants

We would like to thank all the participants who took part in the Delphi surveys. This was time-consuming, and we acknowledge their immense contribution to the guideline development process. Additional file 11 lists the registered participants who did not opt out of having their names publicly acknowledged.

Funding

This paper summarises independent research jointly funded by the NIHR CTU Support Funding programme and the MRC HTMR. The views expressed are those of the authors and not necessarily those of the National Health Service, the NIHR, the MRC, or the Department of Health.

The funders had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.

This work reflects the views of the authors and should not be construed to represent FDA’s views or policies.

MD, JW, TJ, AM, ST, SJ, FK, CW, DH, and JN were co-applicants who sourced funding from the NIHR CTU Support Funding programme. MD, JW, and TJ sourced additional funding from the MRC HTMR. TJ's contribution was funded in part by his NIHR Senior Research Fellowship (NIHR-SRF-2015-08-001). NHS Lothian, via the Edinburgh Clinical Trials Unit, supported CW in this work. The University of Sheffield, via the Sheffield Clinical Trials Unit, and the NIHR CTU Support Funding programme supported MD.

Availability of data and materials

Additional file 10 details further supplementary material accessible via the University of Sheffield ORDA repository, including the anonymised individual-level datasets generated from the Delphi surveys reported in this study. Please contact the lead author with any related queries.

Author information

Author notes

  1. Douglas G. Altman is deceased. This paper is dedicated to his memory.

Contributions

The idea originated from an NIHR Doctoral Research Fellowship (DRF-2012-05-182) led by MD and supervised by SJ, ST, and JN. The idea was presented, discussed, and contextualised at the 2016 annual workshop of the MRC HTMR ADWG attended by six members of the SC (MD, TJ, PP, JW, AM, and CW). MD, JW, TJ, AM, ST, SJ, FK, CW, DH, and JN conceptualised the study design and applied for funding. All authors contributed to the conduct of the study and interpretation of the results. MD analysed the quantitative data. EC analysed the qualitative data, with MD assisting in the technical interpretation. MD, EC, ST, PP, CW, TJ, JW, and SJ led the write-up of the first draft. DA, on behalf of the CONSORT Executive Group, oversaw the whole development process. All authors contributed to the write-up and reviewed and approved the final manuscript version, with the exception of DA, who sadly passed away before he had the opportunity to approve the final manuscript. In memory of his immense contribution to the ACE project, medical statistics, good scientific research practice and reporting, and humanity, we dedicate this work to him.

Corresponding author

Correspondence to Munyaradzi Dimairo.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for the project was granted by the Research Ethics Committee of the School of Health and Related Research (ScHARR) at the University of Sheffield (ref: 012041). All Delphi participants provided consent online during registration.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

ACE project management activities. Summary of ACE project management related activities during the development process. (DOCX 21 kb)

Additional file 2:

Definitions of technical terms relating to adaptive designs. List of adaptive designs technical terms and definitions agreed by the ACE Steering Committee. (DOCX 26 kb)

Additional file 3:

Platforms used to reach out to key stakeholders for Delphi surveys. (DOCX 20 kb)

Additional file 4:

Qualitative feedback from round 1 Delphi survey and our response. Qualitative feedback and Steering Committee responses. (DOCX 20 kb)

Additional file 5:

Demographics and characteristics of registered participants and responders. Summaries of Delphi survey participants. (DOCX 23 kb)

Additional file 6:

Round 1 summary of perceptions about the importance of reporting items. Summaries of round 1 perceptions about the importance of reporting items. (DOCX 28 kb)

Additional file 7:

Round 2 summary of perceptions about the importance of reporting items. Summaries of round 2 perceptions about the importance of reporting items. (DOCX 29 kb)

Additional file 8:

Measures of agreement in rating scores between Delphi survey rounds 1 and 2. Summaries of measures of agreement in participants' rating scores between rounds 1 and 2 of the Delphi surveys. (DOCX 24 kb)

Additional file 9:

Distributions of the change in rating scores from round 1 and Bland-Altman plots. Bland-Altman plots and histograms showing the distribution of change in scores from round 1. (DOCX 1205 kb)

Additional file 10:

Accessible supplementary material hosted within the University of Sheffield ORDA repository. Summary reports; draft checklists used in the round 1 and round 2 Delphi surveys; registration and Delphi survey rounds datasets; figures (clustered boxplots) displaying responders' perceptions of reporting items stratified by key characteristics. (DOCX 22 kb)

Additional file 11:

Registered participants for the Delphi surveys. List of participants who registered to take part in the Delphi surveys, including only those who did not opt out of being publicly acknowledged. (DOCX 19 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Dimairo, M., Coates, E., Pallmann, P. et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med 16, 210 (2018). https://doi.org/10.1186/s12916-018-1196-2
