Introduction

The randomized controlled trial (RCT) is preferred for testing cause–effect relationships between treatments and outcomes [1], but its validity depends upon several important elements. Two key components of an RCT are the primary outcome—the principal issue being tested—and the anticipated sample size. A larger sample size is required in studies where the between-group difference in primary outcome (i.e. the effect size) is small, or where its variance is large; thus these two elements are interrelated [2].

Changing a primary endpoint after enrolment has begun changes the principal purpose of the research; it also invalidates the original assumptions used to calculate the required sample size. Similarly, changing the sample size while the trial is underway suggests an inability to enrol adequate numbers of patients, or that the anticipated effect size was incorrect. Nonetheless, while changing the primary endpoint or sample size after enrolment has commenced can undermine the validity of a study, transparent reporting facilitates detection and understanding of such changes.

Recognizing such concerns, the National Institutes of Health (NIH) and other agencies created databases for trial pre-registration. Supporting this initiative, the International Committee of Medical Journal Editors (ICMJE) agreed that RCTs would not be considered for publication unless they had been registered with a minimum dataset [3, 4]. Using such registration databases, journal reviewers and readers can determine the originally recorded intentions of the study and whether (and when) such plans had been changed. In addition, pre-registration enables better identification of publication bias, protecting against the non-publication of ‘negative’ studies [5].

Understanding the timing of trial registration may provide important insight into the significance of alterations in the study plan. Changes that occurred between trial completion and publication may have been prompted by the final results, whereas changes occurring during patient enrolment may have been triggered by the accumulating study data. In contrast, the combination of registration before commencement of patient enrolment coupled with no subsequent design changes (or restriction to well-justified changes) ensures that neither emerging nor completed study data would have influenced the study design as reported in the final manuscript. While the original requirement was for registration before commencement of patient enrolment [3, 4], ‘appropriate’ registration has more recently been characterized as that occurring before completion of the trial [6].

Changes in study design may have particular impact on RCTs of critically ill patients, in whom multiple therapies and co-morbidities are common [7–9]. In addition, because of the high mortality, morbidity and economic cost associated with critical illness, it is especially important to design studies with sufficient power to determine with certainty whether therapies investigated in trials are truly effective—or not [10]. Altering the intended outcome during trial conduct can introduce similar barriers to understanding the effectiveness of a tested intervention. These concerns are heightened in the critically ill because a large majority (over 90 %) of multicenter RCTs in critical care medicine (CCM) that specified mortality as a primary endpoint reported that the tested intervention was non-beneficial [11].

Because pre-registration can facilitate insight into the validity of an RCT, and because this may be especially important in the critically ill, we investigated trial registration and post-registration trial alterations in published RCTs of treatments in ICU patients.

Methods

Search and selection criteria

We searched the MEDLINE database (August 2011) using OVID Medline to identify RCTs published in the discipline of CCM using the following MeSH terms: “Critical Care”, “Critical Illness”, “Intensive Care Units”, “Respiratory Distress Syndrome, Adult”, “Sepsis”, “Multiple Organ Failure”, and “Respiration, Artificial”. We limited our search to trials conducted in humans and published in English-language journals (“English language” and “humans”) and used the validated search term “randomized controlled trials” (pt). We also limited our search to studies published from 2005 onwards (“2005–August 2011”) because most major medical journals required trial registration by this date.

Two reviewers (VA, BK) independently screened studies for inclusion according to the following criteria: trials involving interventions in intensive care units, burn units, or pulmonary care units. Exclusion criteria included completion of enrolment before July 2005; focus on perinatal or neonatal issues, pharmacokinetics, follow-up clinics, caregiver knowledge or validation of scoring systems; secondary analyses of a prior study; retracted manuscripts; studies of pre- or intraoperative interventions; and studies in healthy volunteers or in sleep laboratories.

We searched for trials published since 1 July 2005 because the 2004 ICMJE statement stipulated that trials commencing after this date must be registered at or before the onset of enrolment. However, the ICMJE recognized that trials already commenced prior to this date might not yet have been registered, and required that such ongoing or completed trials be registered before 13 September 2005 to be considered for publication.

The registry identification number was recorded from the manuscript. For published trials that did not include their registry data, a search using the name of the first and last author (and the corresponding author, where this was neither the first nor last author) was conducted in the three most commonly used registration databases: ClinicalTrials.gov (NCT), controlled-trials.com (ISRCTN), and anzctr.org.au (ACTRN). If registration information was not identified, an email was sent to the corresponding author to enquire about registration status.

Manuscript review

Two reviewers independently abstracted information from the published manuscript of each registered trial. The enrolment start and end dates were recorded, if reported. If no enrolment date was reported, enrolment commencement and cessation dates from the registry were used, where available. The primary outcome of the study was recorded; if none was specified, the outcome variable used to determine sample size was assumed to be the primary outcome. If the reviewers were still unable to determine a primary outcome, the primary outcome was recorded as ‘unclear’. If more than one primary outcome (or separate primary efficacy and safety endpoints) was reported, all were recorded. The number of enrolled patients included in the final analysis was also recorded.

Registry review

Two reviewers abstracted the following information from the online registry: registration date, primary outcome, and anticipated sample size. The date on which the study was registered in the database was recorded, as was the proposed primary outcome(s), along with any dates on which these were entered or altered. If a primary outcome was not recorded at the time of registration but was subsequently appended, it was reported as ‘added’. The anticipated enrolment size was recorded as present or absent.

Evaluation measures

The enrolment start and end dates (from the published paper) were compared to the trial registration date (from the registry). For studies in which enrolment commenced after 1 July 2005 (the date stipulated by the ICMJE statement), the timing of registration was categorized as ‘registered before patient enrolment commenced’, ‘registered during patient enrolment’, or ‘registered after patient enrolment was completed’.
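As a minimal illustration of this three-way categorization (not the authors' actual tooling; the function and category strings are hypothetical), the comparison of registration date to enrolment window can be sketched as:

```python
from datetime import date

def classify_registration(reg: date, enrol_start: date, enrol_end: date) -> str:
    """Classify when a trial was registered relative to its enrolment window.

    Registration on the enrolment start date counts as 'before', matching
    the ICMJE requirement of registration at or before the onset of enrolment.
    """
    if reg <= enrol_start:
        return "registered before patient enrolment commenced"
    if reg <= enrol_end:
        return "registered during patient enrolment"
    return "registered after patient enrolment was completed"
```

A trial registered in, say, January 2006 with enrolment running March 2006 to March 2007 would fall into the first category.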

For trials that commenced before, but continued after 1 July 2005, registration was recorded as ‘studies registered on or before September 2005’ or ‘studies registered after September 2005’.

Primary outcomes and sample size

Changes in the primary outcome(s) between initial registration and the published manuscript were recorded; all detected changes were confirmed by three reviewers (BK, DS, CP) independently and then in conference. Consensus was used to resolve any disagreements. We a priori defined as important a difference in sample size of 10 % or more between that originally registered and that reported in the published manuscript.
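The 10 % threshold is a simple relative-difference rule; a one-line sketch (hypothetical function name, not the authors' code) makes the arithmetic explicit:

```python
def sample_size_changed(registered: int, published: int, threshold: float = 0.10) -> bool:
    """Flag a change of `threshold` (default 10 %) or more in sample size,
    measured relative to the originally registered value."""
    return abs(published - registered) / registered >= threshold
```

For example, a trial registered for 200 patients but published with 179 shows a 10.5 % reduction and would be flagged, whereas 181 patients (a 9.5 % reduction) would not.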

ICMJE status

For each journal from which a trial was included in the analysis, its listing status with the ICMJE was recorded. The source of this information was http://www.icmje.org/journals.html (journals following ICMJE recommendations).

Results

Our search identified 2,308 published studies, of which 197 trials met our eligibility criteria (Fig. 1). Overall, 133 (of 197; 68 %) trials were registered in a trials registry; 105 reported the registry number in the manuscript and 28 were found by registry searches. Most studies were registered with ClinicalTrials.gov (n = 103) or Controlled-Trials.com (n = 25), and 5 studies were registered with the Australian New Zealand Clinical Trials Registry. One study reported incorrect registration information but was nonetheless included in our analysis. For two studies in which enrolment completion was not recorded in the publication, the completion date from the registry was used. Five studies had incomplete enrolment date information in both the publication and the registry, and were excluded from further analysis. The included studies were divided into those starting enrolment after July 2005 (n = 90; Fig. 1) vs. those in which enrolment commenced before July 2005 (n = 107; Fig. 1).

Fig. 1

Study flow chart: enrolment after July 2005 = trials where the entire enrolment occurred after July 2005; enrolment before July 2005 = trials where enrolment began before July 2005 but continued after July 2005

Enrolment commenced after July 2005

Registration and timing of registration (“Appendix 2”)

Two-thirds (59/90; 66 %) of trials commencing after July 2005 were registered (Fig. 1). Of the 59 registered trials, 20 (34 %) were registered before patient enrolment began, 23 (39 %) were registered during the enrolment phase of the trial, and 16 (27 %) were registered after enrolment was completed (Fig. 1). Approximately one-third (31/90; 34 %) did not have identifiable registration information; in these cases the corresponding authors were contacted, and nine confirmed non-registration. No additional registration information was available.

Sample size (“Appendix 3”)

Of the 90 critical care trials included in the study, 5 (6 %) articles were both appropriately registered (registered prior to study enrolment) and had an unchanged sample size from registration to publication (Fig. 2). In total, 55 % (11/20) of pre-registered trials changed sample size by at least 10 %. Of trials registered during enrolment, 11/23 (48 %) changed the sample size by at least 10 %; of these, 82 % (9/11) were revised to a lower value. In a further 19 % (11/59) of registered trials the original sample size recorded in the registry was unclear.

Fig. 2

Sample size in trials where all enrolment occurred after July 2005: before trial = registration occurring before trial enrolment or “pre-registration”; during trial = registration occurring while trial enrolment was ongoing; after trial = registration occurring after trial enrolment was completed or “post-registration”; not changed = trials where sample size was clearly not altered from registration to publication; changed = trials where sample size was definitely changed from registration to publication; unclear = trials where alteration of sample size was unclear

Primary outcome (“Appendix 4”)

Eleven of 90 (12 %) trials were appropriately registered and unchanged (Fig. 3). In 25 % (15/59) of registered trials a change was made in the published primary outcome from that recorded in the registry. In 56 % (33/59) of registered trials a change in primary outcome was unclear, either because of the lack of a clearly identifiable primary outcome or because registration occurred after trial enrolment began. Changes to a primary outcome often involved the reporting of mortality (3/7 RCTs registered pre-enrolment and 3/6 registered mid-enrolment) (“Appendix 7”).

Fig. 3

Primary outcome in trials where all enrolment occurred after July 2005: before trial = registration occurring before trial enrolment or “pre-registration”; during trial = registration occurring while trial enrolment was ongoing; after trial = registration occurring after trial enrolment was completed or “post-registration”. Not changed = trials where primary outcome was clearly not altered from registration to publication; changed = trials where primary outcome was definitely changed from registration to publication; unclear = trials where alteration of primary outcome was unclear

Enrolment commenced before July 2005

Registration and timing of registration

Of the 107 studies in which enrolment commenced prior to July 2005 and continued through 13 September 2005, 74 (69 %) were registered; of these, over one-third (35 %; 26/74) were registered after enrolment was complete.

ICMJE status

Comparisons were made between papers published in journals listed as following ICMJE recommendations (‘Listed’) vs. papers published in journals not listed as following ICMJE recommendations (‘Non-listed’). Registration was confirmed in 83 % of papers published in ‘Listed’ journals vs. 57 % published in ‘Non-listed’ journals (Table 1; p = 0.019, OR 3.57, 95 % CI 1.09–12.35). Sample size was changed (by ≥10 %) in 30 % of papers published in ‘Listed’ journals vs. 64 % of papers published in ‘Non-listed’ journals (Table 2; p = 0.02, OR 0.24, 95 % CI 0.06–0.95). Finally, primary outcome was changed in 30 % of papers published in ‘Listed’ journals vs. 26 % of papers published in ‘Non-listed’ journals (Table 3; p = 0.73).
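These comparisons report odds ratios with 95 % confidence intervals. As an illustrative sketch only (the Wald method shown here is a standard approach, but the paper does not state which method was used, and the counts in the example are hypothetical rather than the table values), the estimate can be computed from a 2×2 table with the standard library:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95 % CI for a 2x2 table laid out as:

                 event+   event-
        group1     a        b
        group2     c        d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

With small cell counts (as in several comparisons here), an exact method such as Fisher's test is typically preferred for the p-value; the sketch above covers only the point estimate and interval.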

Table 1 Registration and ICMJE journal listing
Table 2 Changes in sample size and ICMJE journal listing
Table 3 Changes in primary outcome and ICMJE journal listing

Discussion

We systematically evaluated published RCTs in critical care commencing enrolment after July 2005 and were unable to find an associated registration in over one-third. Thus, as in other areas of medicine [6, 12–14], clinical trials conducted in critically ill patients frequently do not conform to the recommendations of the ICMJE statement [3, 4]. Of the trials in the critically ill that were registered (at any time), registration occurred either during or after completion of patient enrolment in two-thirds of studies. In addition, among trials with significant changes in study design (i.e. sample size or primary outcome) between initial registration and eventual publication, the majority of these changes occurred during or after patient recruitment. Finally, our results likely underestimate the extent of this problem because protocol changes in trials that were not registered until after study completion cannot be detected; in addition, while ClinicalTrials.gov allows tracking of protocol changes, other registries are not as easily tracked.

The ICMJE statement on trial registration was intended to minimize publication bias and increase transparency in the reporting of trials. Such registration helps protect against selective reporting (and duplication of results) because investigators publicly declare the methodology and study purpose [3, 4], and do not alter these except for sound reasons that should be articulated. Registration helps confirm a study’s internal validity where primary outcomes and sample size remain as determined at the design phase. In contrast, internal validity is undermined if the outcomes or target sample size is shaped by evolving (or final) results. Thus, our finding that fewer than a fifth of all registered trials were both registered before patient enrolment began and clearly unchanged in primary outcome between registration and publication suggests that published manuscripts might not accurately represent the original study intent or design.

We found that the anticipated sample size frequently differed between initial registration and manuscript publication; when such discrepancies occurred, the published sample size was usually lower than that initially recorded. Reducing sample size during the conduct of a study may render the study underpowered to detect a clinically important difference between groups [10, 15]. Conversely, studies that report significant between-group differences should be viewed with skepticism where the sample size has been markedly reduced during the course of the study [16]; in these cases, the effect size (i.e. the magnitude of the apparent treatment benefit) is frequently overestimated, as has been described in cases of premature trial termination for apparent benefit [17].

We focused on trials in the critically ill because such studies may be particularly susceptible to the problems created by inadequate trial registration. For example, important heterogeneity exists in many aspects of these patients. Illness definitions—and thus criteria for trial entry—in such patients are usually syndromes (e.g. sepsis, acute respiratory distress syndrome) rather than specific disease entities; as well, management is often multifaceted and co-morbidities are common. Thus, trials in these patients involve much ‘study noise’, making imperative the standardization and consistency of trial management. These factors may explain, in part, why the overwhelming majority (over 90 %) of multicenter RCTs with mortality as a primary endpoint in this population report that the tested interventions were not beneficial [11].

Studies testing mortality as the primary endpoint may be the most important to patients. Because critically ill patients have very high levels of mortality [18], morbidity [19–23] and economic cost [24], design changes that may contribute to incorrect or misleading reports assume a high priority. In addition to such concerns, there are ethical implications: altering the design of a study after consent has been obtained could compromise the nature of the consent [10], and may be especially important in CCM, where consent is frequently obtained through a third party [25]. When changes to primary outcome were recorded, these often involved the reporting of mortality (3/7 RCTs registered pre-enrolment with subsequent changes and 3/6 registered mid-enrolment). We believe this is a conservative number, as several studies had multiple primary endpoints, making it unclear whether ‘the’ primary endpoint was changed.

Previous reports have raised concerns about discrepancies between registered and published trial methodologies. Mathieu and colleagues, evaluating trials from three subspecialties (cardiology, rheumatology, and gastroenterology), reported that over 50 % of trials were not ‘adequately’ registered [6]; among these studies, the primary outcome was altered in one-third of trials, almost always (over 80 %) conferring a ‘statistical advantage’ towards a positive trial result. However, this may be an underestimate of the problem, as in that report [6] registration was considered ‘adequate’ provided it occurred before study completion, thereby missing—in those studies—any design changes that may have been made during patient enrolment.

Other issues can undermine trial registration. For example, a study of trial registration in Canada reported non-compliance with identification of trial leadership and contact information, two (of the 20) important items identified as necessary by the ICMJE [26].

Lack of adherence to the principles outlined in the 2005 ICMJE statement [3, 4] may occur for several reasons, including lack of understanding or acceptance by researchers, inadequate review of registration data by manuscript reviewers, and insufficient oversight from editorial boards. However, in some cases lack of adherence may reflect a desire to change sample size or primary outcome in order to enhance the likelihood of earlier publication, or publication in a higher-impact journal. Our data suggest that attention to the timing of registration, as well as changes in key elements such as sample size and primary outcome, could enhance registry benefit in studies of the critically ill.

There are important limitations to our findings. First, the evaluation focused on trials in CCM, and thus may not reflect the prevalence of inadequate registration in other disciplines. In fact, review of trial registration in other subspecialties (i.e. cardiology, rheumatology, and gastroenterology) has revealed comparable rates of registration, although sample size alterations during the study were not sought [6].

The study was limited to a relatively short time span (i.e. the 6 years since publication of the ICMJE statement) [3, 4]. This may be important because trials that commenced after 2005 might not yet be completed or published, and rates of pre-commencement trial registration might therefore now be greater than reported in this study. However, while insistence on pre-commencement registration facilitates the detection of discrepancies between registered and published trial information, such changes can still occur. Our analysis of ICMJE status is limited because some journals not listed with the ICMJE may nonetheless follow the guidelines, and conversely some listed journals may not follow all of the recommendations. We acknowledge that our search strategy may have missed a small number of relevant trials, but the intent of our analysis was not to be exhaustive; rather, it was to identify whether there were issues with registration practices in the critical care literature.

While our study describes the frequency of deviation from the initially specified sample size and primary outcome, we are unable to determine the reasons behind these changes and cannot exclude the possibility that some alterations—although not explained in the respective manuscripts—were based on sound reasoning. Furthermore, the ability to easily track changes in a trials registry is key to transparency; this ability is not available across all registries, and where it exists it is not always intuitive. An additional complication is that the terminology permitted (by registries and journals) sometimes results in a lack of clarity regarding key elements (e.g. sample size, primary outcome).

In conclusion, these data suggest that registration of clinical trials in the critically ill is frequently omitted, and among trials that are registered, the timing of registration and the presence of study alterations are usually not apparent in the published paper. There seems little justification for delaying trial registration until after patient enrolment has begun; indeed protocol changes that result in publication of potentially invalid data may be a greater problem than selective reporting, the prevention of which was the main intent of these registries. Changes in trial design occurring after a study has commenced (and certainly after it is complete) should be documented and justified for peer-reviewers and for readers.