
How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the Clinicaltrials.gov database
  1. Christopher J Gill
  1. Department of International Health, Center for Global Health and Development, Boston University School of Public Health, Boston, Massachusetts, USA
  1. Correspondence to Dr Christopher J Gill; cgill{at}bu.edu

Abstract

Context The Food and Drug Administration Modernization Act of 1997 (FDAMA) and the FDA Amendment Act of 2007 (FDAAA), respectively, established mandates for registration of interventional human research studies on the website clinicaltrials.gov (CTG) and for posting of results of completed studies.

Objective To characterise, contrast and explain rates of compliance with on-time registration of new studies and posting of results for completed studies on CTG.

Design Statistical analysis of publicly available data downloaded from the CTG website.

Participants US studies registered on CTG since 1 November 1999, the date when the CTG website became operational, through 24 June 2011, the date the data set was downloaded for analysis.

Main outcome measures On-time registration (within 21 days of study start); average delay from study start to registration; proportion of completed studies (as listed on CTG) that posted their results.

Results As of 24 June 2011, CTG contained 54 890 studies registered in the USA. Prior to 2005, an estimated 80% of US studies were not being registered. Among registered studies, 55.7% registered after the 21-day reporting window. The average delay from study start to registration was 322 days. Between 28 September 2007 and 23 June 2010, 28% of industry-funded intervention studies at Phase II or beyond posted their study results on CTG, compared with 8.4% of studies without industry funding (RR 4.2, 95% CI 3.7 to 4.8). Factors associated with posting of results included exclusively paediatric studies (adjusted OR (AOR) 2.9, 95% CI 2.1 to 4.0) and later phase clinical trials (relative to Phase II studies, AOR for Phase III was 3.4, 95% CI 2.8 to 4.1; AOR for Phase IV was 6.0, 95% CI 4.8 to 7.6).

Conclusions Non-compliance with FDAMA and FDAAA appears to be very common, although compliance is higher for studies sponsored by industry. Further oversight may be required to improve compliance.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.


Article summary

Article focus

  • Since 1997, clinical trials in the USA involving ‘serious or life-threatening conditions’ have been required to register on the website Clinicaltrials.gov within 21 days of study start.

  • Since 2007, interventional Phase II–IV studies governed under an IND application have also been required to post basic results on the website.

Key messages

  • Prior to 2005, the majority of studies were not being registered at all.

  • Overall, registration of studies occurs long after study start.

  • Approximately 75% of studies that appear to be required to post results do not do so.

Strengths and limitations of the study

  • This analysis includes over 54 000 studies registered in the USA, and thus provides considerable insight into registration practices and how these are affected by statutory actions.

  • The analysis may somewhat overestimate non-compliance rates, since not all studies that register on Clinicaltrials.gov are required to do so.

Introduction

The Food and Drug Administration Modernization Act of 1997 (FDAMA) required the National Institutes of Health to establish a clinical trials registry to track efficacy studies involving human subjects. This resulted in the Clinicaltrials.gov (CTG) website, launched in November 1999.1 FDAMA requires that all studies for ‘serious or life-threatening conditions’ register on CTG. This applies to all Federally and industry-sponsored studies conducted in the USA and to studies conducted abroad if under the jurisdiction of an Investigational New Drug (IND) application. In 2002, FDA issued a binding guidance document listing CTG's four required elements: (1) a description of the study and experimental treatment; (2) recruitment information (eligibility and study enrolment status); (3) location and contact information; and (4) administrative data (protocol ID numbers and sponsor).2 The guidance also established timelines for registration (within 21 days of study start).

Subsequently, the FDA Amendment Act of 2007 (FDAAA) established an additional mandate that certain categories of industry-sponsored studies post basic trial results on CTG within 1 year of trial completion.3 This requirement applied to studies that (1) enrolled subjects at one or more US sites; (2) were intervention studies at Phase II or beyond; (3) were governed under an IND application; and (4) were initiated or ongoing as of 27 September 2007.4 In 2008, the CTG website was updated to include a templated interface for uploading basic trial results.4,5

The impetus for the CTG national trials registry rested on two key factors. First was patient equity: patient advocacy groups had long lobbied for a registry to help patients gain access to up-to-date information about potentially life-saving therapies and have the opportunity to enrol in relevant trials. From an ethical perspective, registration of studies is consistent with the Belmont Report's three core principles: respect for persons; beneficence; and justice.6

Second, CTG was intended to combat publication bias. In the early 2000s, public trust in the medical literature was severely damaged when it emerged that data had been suppressed regarding an apparent increase in suicides among adolescents taking selective serotonin reuptake inhibitors7 and of severe cardiovascular events in patients taking the COX-2 inhibitor rofecoxib.8 A recent meta-analysis of several candidate antidepressants provided a quantified example of publication bias. This analysis assessed 74 clinical studies, all of which had been registered with the FDA under IND, and therefore comprised a complete set of studies on these drugs. Of the 38 ‘positive studies’, 37 were published. In contrast, of the 36 studies judged ‘negative’ by the FDA, only 3 were published as such, whereas 11 were published so as to imply a positive result and the remaining 22 were never published at all.9 In response to such events, in 2004 the International Committee of Medical Journal Editors (ICMJE) issued a policy declaring trial registration as a precondition for publication of efficacy studies.10

Yet 13 years after the CTG website was launched, little is known about patterns of utilisation for efficacy studies conducted in the USA, or about the degree to which study sponsors comply with the deadlines for trial registration and posting of results. To better understand these issues, I analysed the CTG database itself, which is freely available for download from the CTG website.

Methods

On 24 June 2011, I downloaded the CTG data set from http://clinicaltrials.gov/ct2/search/advanced, after limiting the search to studies for which the ‘Location’ was the USA (selecting ‘Country 1’=‘United States’). The data were exported as an XML file, which I then imported into SPSS V.19 for further analysis.
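For readers who would like to reproduce this step outside SPSS, the sketch below shows one way the exported XML might be flattened into a tabular structure using Python and pandas. This is an illustration only: the file name, the tag names (e.g. ‘clinical_study’, ‘firstreceived_date’) and the choice of fields are assumptions modelled on the CTG export format of the time and may not match the actual download exactly.

    # Minimal sketch: flatten a ClinicalTrials.gov XML export into one row per study.
    # Tag and field names are illustrative assumptions, not the verified 2011 schema.
    import xml.etree.ElementTree as ET
    import pandas as pd

    def _text(study, tag):
        node = study.find(tag)
        return node.text.strip() if node is not None and node.text else None

    def parse_ctg_export(path):
        tree = ET.parse(path)
        records = []
        for study in tree.getroot().iter("clinical_study"):
            records.append({
                "nct_id": _text(study, "id_info/nct_id"),
                "study_type": _text(study, "study_type"),
                "phase": _text(study, "phase"),
                "start_date": _text(study, "start_date"),
                "completion_date": _text(study, "completion_date"),
                "registration_date": _text(study, "firstreceived_date"),
                "lead_sponsor_class": _text(study, "sponsors/lead_sponsor/agency_class"),
            })
        return pd.DataFrame(records)

    # Hypothetical usage:
    # df = parse_ctg_export("ctg_us_studies_2011-06-24.xml")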

My analysis focused on the following three questions:

  1. ‘What was the pattern of trial registrations over time, and how did this relate to external events intended to promote compliance with the mandates?’

  2. ‘What proportion of studies were registered on time?’

  3. ‘Among relevant studies (as defined below), what proportion posted their results on CTG within 1 year of study completion?’

For the analysis of study registrations, I defined a trial as registered on time if its start date was ≤21 days before the CTG registration date (ie, within FDAMA's compliance window). For the analysis of study result postings, I defined compliance more narrowly to fit the limits set by FDAAA. Accordingly, I limited the analysis set to interventional studies at Phase II or beyond for which industry was the sole sponsor, since these should be governed under IND. I then narrowed the set to the relevant window of time. On the front end, I included studies that listed their completion date as 28 September 2007 or later, reasoning that if the completion date was on or after the 28th, then the study must have been ongoing as of the 27th. Since studies were required to post results within 1 year of the completion date, I limited the set to studies listed as complete on or before 23 June 2010, that is, 1 full year prior to the date when I downloaded the data set for analysis. In this way, only trials that had a complete year in which to post their results were included. As a spot check on the fidelity of these limits, I manually inspected the study descriptions of the first 200 studies retrieved and found that all were interventional, industry-sponsored, US-based studies at Phase II or beyond.
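As an illustration of how these two definitions could be operationalised, the sketch below applies the 21-day rule and the FDAAA analysis-set restrictions to the hypothetical table produced by the earlier parsing sketch. The column names and phase labels are assumptions, not the actual CTG field values, and the completion-date window mirrors the limits described above.

    # Sketch of the on-time definition and the FDAAA results-posting analysis set.
    # 'df' is the hypothetical table from the parsing sketch; labels are assumed.
    import pandas as pd

    for col in ("start_date", "registration_date", "completion_date"):
        df[col] = pd.to_datetime(df[col], errors="coerce")

    # On-time registration: registered no more than 21 days after study start.
    df["registration_delay_days"] = (df["registration_date"] - df["start_date"]).dt.days
    df["on_time"] = df["registration_delay_days"] <= 21

    # Results-posting set: industry lead sponsor (collaborator classes would also
    # need checking for sole industry sponsorship), interventional, Phase II or
    # beyond, completed between 28 September 2007 and 23 June 2010 inclusive.
    late_phase = df["phase"].isin(["Phase 2", "Phase 2/Phase 3", "Phase 3", "Phase 4"])
    posting_set = df[
        (df["lead_sponsor_class"] == "Industry")
        & (df["study_type"] == "Interventional")
        & late_phase
        & (df["completion_date"] >= "2007-09-28")
        & (df["completion_date"] <= "2010-06-23")
    ]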

I defined a ‘completed study’ as one for which a valid completion date had been entered on CTG, either under the field ‘Completion Date’ or, when this field was left blank, under ‘Primary endpoint completion date’. The average registration delay was calculated as the difference between the CTG registration and study start dates. The average study duration was calculated from the study start and study completion dates. Funding sources were classified as ‘Industry only’, ‘Non-industry only’ and ‘Blended funding’, the latter indicating a mixture of industry and non-industry funds. Studies were categorised by phase of development (Phases I–IV). Studies listed ambiguously as Phase I/Phase II were classified as Phase II; studies listed ambiguously as Phase II/III were classified as Phase III. Study subject composition was categorised as ‘Males’, ‘Females’ or ‘Males and Females’; ages were categorised as ‘Adults’ (including seniors), ‘Children’ or ‘All ages combined’.
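Continuing the same illustrative sketch, the derived duration, the phase roll-up and the three funding categories described above might be coded roughly as follows; the label strings are assumptions rather than the exact values stored in CTG.

    # Sketch of the derived variables described above (labels are assumptions).
    df["study_duration_days"] = (df["completion_date"] - df["start_date"]).dt.days

    # Ambiguous phases are rolled up to the later phase, as in the text.
    df["phase_recode"] = df["phase"].replace({
        "Phase 1/Phase 2": "Phase 2",
        "Phase 2/Phase 3": "Phase 3",
    })

    def funding_category(agency_classes):
        # 'Industry only' if every funder is industry, 'Non-industry only' if none
        # is, otherwise 'Blended funding' (lead sponsor plus any collaborators).
        classes = set(agency_classes)
        if classes == {"Industry"}:
            return "Industry only"
        if "Industry" not in classes:
            return "Non-industry only"
        return "Blended funding"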

In bivariate analyses, I calculated relative risks and performed one-way analysis of variance (ANOVA) where appropriate. I stratified results by source of funding and calendar year. For the analysis of results postings, I included industry- and non-industry-sponsored studies with a completion date on or after 28 September 2007. To create an indexed interpretation of posting rates, I divided the number of results posted by the number of studies completed during the same period, recognising that these would not necessarily be the same studies.
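The two simple quantities used in the bivariate analyses, the indexed posting rate and the relative risk, can be written out explicitly. A minimal sketch follows, again using assumed column names (‘results_posted’ is a hypothetical yes/no flag), with the one-way ANOVA shown as a commented usage example.

    # Sketch of the bivariate measures described above (assumed column names).
    from scipy.stats import f_oneway

    def indexed_posting_rate(completed):
        # Results posted divided by studies completed in the same window
        # (not necessarily the same studies, as noted in the text).
        return completed["results_posted"].sum() / len(completed)

    def relative_risk(events_a, n_a, events_b, n_b):
        # Point estimate only; the confidence intervals in the text were
        # produced in SPSS.
        return (events_a / n_a) / (events_b / n_b)

    # One-way ANOVA of study duration across the three funding categories:
    # groups = [g["study_duration_days"].dropna()
    #           for _, g in df.groupby("funding_category")]
    # f_stat, p_value = f_oneway(*groups)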

To identify factors related to on-time registration or posting of results, I used logistic regression models to generate adjusted ORs (AORs). The models included variables that either showed significant relationships in bivariate analyses or that I felt a priori should be included (sex, age groups). The final models included the following variables: study phase; year of registration; sexes included; age groups included; and source of funding. In the multivariate analysis for posting of results, I limited the analysis set to industry-sponsored studies at Phase II or beyond that were completed within the period 28 September 2007 through 23 June 2010.
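A rough equivalent of these regression models in Python (using statsmodels rather than SPSS) is sketched below. The variable names are assumptions carried over from the earlier sketches, and the derived columns (registration year, sex group, age group) would need to be constructed from the CTG fields first.

    # Sketch of the adjusted-OR logistic regression (assumed column names).
    import numpy as np
    import statsmodels.formula.api as smf

    df["on_time_flag"] = df["on_time"].astype(int)

    model = smf.logit(
        "on_time_flag ~ C(phase_recode) + C(registration_year) + C(sex_group)"
        " + C(age_group) + C(funding_category)",
        data=df,
    ).fit()

    # Adjusted odds ratios and 95% confidence intervals.
    aor = np.exp(model.params)
    aor_ci = np.exp(model.conf_int())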

Results

US studies registered on CTG

As of 24 June 2011, 54 890 US studies had been registered on CTG and were included in this analysis (table 1). Most studies recruited subjects of both sexes. Exclusively paediatric studies comprised 5.4% of the total. Later stage studies (Phase II and beyond) accounted for 77% of the studies that provided this information. Roughly one-third of studies were sponsored by industry. Among the 46 888 ‘completed’ studies, the average study duration was 1222 days (SD 1085 days). This varied significantly by funding source: 783 days (SD 732 days) for industry studies versus 1454 days (SD 1199 days) for non-industry studies versus 1178 days (SD 879 days) for studies with blended funding (ANOVA, 2df, p<0.001).

Table 1

Characteristics of studies registered on the Clinicaltrials.gov website

Figure 1 summarises the numbers of registrations by year across the three funding categories. The overall number of registrations in 1999 far exceeded those during each of the years 2000–2004. This is likely an artefact created by the backlog of ongoing trials that were registered en masse with the launch of the CTG website.

Figure 1

Studies registered on Clinicaltrials.gov, by year and funding source.

In 2002, the number of industry-sponsored trial registrations increased from 83 and 95 per year in 2000 and 2001, respectively, to between 373 and 466 per year in 2002–2004. This change coincides with the FDA's posting, in 2002, of its final guidance document for the use of CTG, which mandated a timeline for compliance with registration.

Between 2004 and 2005 there was a very large increase in the number of new studies being registered. Of note, 2004 was the year in which the ICMJE issued its policy that journals should not publish any study that had not been registered. This event created a natural experiment with which to estimate the extent of ‘non-registration’ in the period 2000–2004. I calculated the average number of trials registered per year during 2000–2004 and during 2005–2010 (excluding 2011, which contributed only half a year of data) and, assuming that the latter period more accurately reflected the number of new trials actually initiated each year, estimated the proportion of studies that probably went unregistered during 2000–2004. Overall, plausibly 79% of studies were not being registered prior to 2005; for industry-sponsored trials, 85.6% were not being registered; for non-industry trials, 72.7%; and for trials with blended funding, 95.7%.
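The arithmetic behind this estimate is simple: if the 2005–2010 average is taken as the true number of new studies per year, the shortfall in the 2000–2004 average gives the fraction that went unregistered. A minimal sketch, with purely illustrative numbers (the actual yearly counts are in figure 1):

    # Shortfall of the 2000-2004 annual mean relative to the 2005-2010 annual mean,
    # interpreted as the fraction of studies that went unregistered.
    def estimated_unregistered_fraction(mean_2000_2004, mean_2005_2010):
        return 1.0 - mean_2000_2004 / mean_2005_2010

    # Illustrative only: ~1000 registrations/year before 2005 against ~4800/year
    # after 2004 would imply that roughly 79% of studies went unregistered.
    # estimated_unregistered_fraction(1000, 4800)  # -> ~0.79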

A smaller increase in registrations occurred in 2008. This appears to coincide with the 2007 passage of FDAAA, which amended FDAMA to require posting of results within 1 year, and with the ICMJE's 2008 policy update, which extended the registration requirement to Phase I and pharmacokinetic studies.

Registration of studies on CTG

Even excluding the backlog of ongoing studies that came in during 1999 (which registered an average of 1204 days (SD 1386 days) after study start), late registration was the rule rather than the exception (table 2). On average, studies were registered on CTG 322 days (SD 764 days) after the study had begun, with non-industry-sponsored studies showing the greatest delay and industry studies the least: 442 days (SD 929 days) vs 216 days (SD 577 days), respectively (ANOVA, 2df, p<0.001). Viewed another way, among trials that listed a start date on CTG, 55.7% (29 035/52 622) registered after the 21-day window mandated by FDAMA. Non-industry studies were less likely than industry studies to be registered on time (AOR 0.51, 95% CI 0.49 to 0.54). Across all years, 56.0% (8426/15 048) of industry studies vs 38.5% (12 063/31 333) of non-industry studies registered on time (χ2, 2 df, p<0.001). Among industry-sponsored studies, the proportion registered on time steadily increased over the years (figure 2), with the exception of 2005, the year after the ICMJE's policy was issued. Consistent with the theory that this transient drop in compliance was driven by a backlog of ongoing studies being registered en masse, the average delay to registration for studies registered in 2005 was several hundred days longer than in either of the two adjacent years (table 2). Nonetheless, by 2011, non-industry studies and industry studies still registered an average of 256 days (SD 762 days) and 139 days (SD 557 days), respectively, after study start (ANOVA, 2df, p<0.001).

Figure 2

Proportion of studies registered on time on Clinicaltrials.gov, by year and funding source.

Table 2

Time between registration on Clinicaltrials.gov website and study start, by year and source of funding

In multivariate analyses (adjusting for sexes included, age groups included, year of registration, study phase and source of funding), on-time registration became steadily more likely with each year after 1999. Compared with Phase I studies, Phase II and III studies were 20% more likely to be registered on time, whereas Phase IV trials were 20% less likely. Neither the age composition nor the sex of study subjects was associated with the likelihood of registering on time (table 3).

Table 3

Adjusted odds for on-time registration of studies on Clinicaltrials.gov

Posting of results on CTG

Between 28 September 2007 and 23 June 2010, a total of 7442 US-based intervention studies at Phase II or beyond were entered as completed on CTG (table 4). Of these, 3368 (45.3%) were sponsored solely by industry. Indexing the number of studies that posted results on CTG against the number of studies completed in the same interval, the proportion posted/completed for industry-sponsored studies peaked at 33.7% in 2008 and averaged 28.0% across the entire period. Non-industry-sponsored studies were not required to post results to CTG, and >90% did not. Compared with non-industry studies, industry studies were significantly more likely to post results of completed studies on CTG (RR 4.3, 95% CI 3.7 to 4.9).

Table 4

Posting of study results on Clinicaltrials.gov website indexed against number of completed trials, by year and source of study funding

Among the industry-sponsored studies at Phase II or beyond, within this same time period, 13.7%, 36.9% and 49.4% of Phase II, III and IV studies, respectively, posted results to CTG. In multivariate analysis, factors related to posting vs non-posting included paediatric study populations, completion of the study in 2008 (the first year that FDAAA came into effect) and studies beyond Phase II (table 5).

Table 5

Factors related to posting of study results on Clinicaltrials.gov for US-based Phase II, III or IV industry-sponsored studies*

Discussion

This analysis indicates that compliance with registration of US-based human subject research trials on CTG remains quite poor. While compliance with registration has improved substantially since 2005, overall >50% of studies register after the 21-day limit set by FDAMA, with an average delay from study start to registration on CTG of >200 days. Since (by definition) this analysis only included studies that were registered on CTG, it is impossible to precisely define what proportion of studies were left unregistered. What the analysis could show was that the ICMJE's policy on registration was temporally associated with a significant increase in registration rates. The implication is that prior to 2005, plausibly 80% or more of studies were not being registered at all.

Interestingly, in 2006, the FDA conducted an internal audit of CTG compliance within a subset of industry-sponsored oncology trials conducted between May and July 2005. Of these, only 76% had been registered on CTG.11 This suggests that non-registration may still be very common. Similarly, even among those studies specifically targeted by FDAAA for posting of results on CTG, >75% do not currently do so. While this analysis focuses solely on metrics defined by the registration/posting process, the poor quality of the data entered on CTG is a further problem that will need to be addressed in the future.12

Regarding the pattern of registering trials on CTG, several points merit further discussion.

First, studies are frequently registered on CTG long after data collection has begun. For patients who might be interested in enrolling in ongoing studies, late registration constitutes a barrier to enrolment and therefore conflicts with the principles for the ethical conduct of clinical research. Late registration may also contribute to publication bias by allowing interim reviews of study data prior to registration. Consequences could range from sponsors amending key study endpoints post hoc to halting studies and forgoing registration entirely if results appear unfavourable.

Second, it appears that funding source is strongly associated with registration rates on CTG. This may reflect the fact that industry-sponsored trials operate within a highly scrutinised and regulated research environment, in which non-compliance could lead to financial penalties or suspension of an IND, or that industry has more resources available and is thus better able to respond to the reporting mandate. Regardless, FDAMA provides no exemption on the basis of sponsorship, and publication bias has an equally corrosive influence on the fidelity of the published literature, regardless of funding source.

What are some key messages from this analysis? First, the mere fact of a statutory requirement for the registration of trials, for registration to be on time and for results to be posted does not appear to have had much impact on the behaviours of researchers and sponsors conducting trials in the USA. In contrast, the non-statutory penalty imposed by the ICMJE's refusal to publish unregistered trials had a very potent effect on registrations.

Third, judging by how few results are posted to CTG, attempts to curb publication bias through statute do not appear to have been terribly successful. Given how influential the ICMJE was in encouraging registrations and on-time registrations, one could question whether a similarly powerful impact could be seen if sponsors were also required to post their results on CTG prior to completing the peer-review process. It should be noted that the FDAAA focused only on a particular category of research studies, namely US-based, later phase interventional studies governed under an IND application. Outside these parameters, non-publication of research remains a very serious problem. In a recent analysis of NIH-sponsored studies that were registered on CTG, only 68% of studies had been published in the literature, even after allowing for 30 months to pass from the time the study was listed as complete on CTG.13

This analysis had several limitations. First, because of the large number of studies included, it was not possible to manually inspect all studies in the set to confirm whether all would have met the requirements for registration on CTG. It is possible that some studies registered on CTG were not required to do so, which could lead to some overestimation of non-compliance. Similarly, the requirement to post results on CTG only applied to industry-sponsored, US-based intervention studies at Phase II or beyond governed under IND. I limited the analysis set to studies that met these parameters, but it is still possible that this restricted set contained some studies that fell outside the mandate of the statute. That said, of the 200 studies I manually inspected after applying these limits, all proved to be US-based, industry-sponsored intervention studies at Phase II or beyond, as expected. Therefore, the effect of any such misclassification should be minimal.

Second, the FDAMA statute's definition of covered studies, those addressing ‘serious or life-threatening conditions’, is somewhat vague. Recognising this ambiguity, the FDA issued guidance that the phrase should be interpreted broadly rather than narrowly, citing depression as an example of a condition that is serious and potentially life threatening. In this analysis, I therefore assumed that a sponsor's decision to register a study on CTG indicated that the sponsor deemed the study to have met these criteria, but it is possible that some proportion did not need to be registered.

Lastly, the number of postings reflects a theoretical maximum, since studies supporting new drugs under review for licensure are not required to post results until licensure has been granted. Unfortunately, there was no practical way to determine what proportion of studies in the data set might have qualified for this exemption. Nevertheless, the overall conclusion that posting rates are suboptimal remains intact.

The results from this analysis are in broad agreement with two other recent analyses of the CTG data set. Law et al also found that studies often registered late and rarely posted their results. One notable difference is that Law's analysis indicated a far lower rate of compliance with posting of results from industry-sponsored studies than the current analysis (9.5% vs 28.0%). One factor that might account for this difference is time, since Law's analysis included CTG data only through May 2010; many studies may have posted results after that point and been captured in the current analysis. In support of this interpretation, Law noted that a large number of studies posted their results after a delay of 300 days or more.14 Similarly, Prayle et al15 analysed CTG data through December 2009 and found that few studies posted results, with higher rates among industry-sponsored trials.

Moving forward, it is important to better understand the reasons for non-compliance with registration and reporting of results. While much of the focus regarding compliance has centred on industry-sponsored studies, academic institutions are also governed by FDAMA. At present, it is unknown whether non-compliance reflects wilful disregard of the mandates, a lack of resources with which to comply or other factors. For example, it could be instructive to better understand the experience of registering or reporting on CTG, to determine whether ‘ease of use’ is relevant to compliance and could be improved.

Acknowledgments

I wish to thank Drs Matthew Fox, Donald Thea and Jonathon Simon for their thoughtful advice during this analysis. Dr Gill declares that he has no conflicts of interest related to this analysis.

References

Footnotes

  • Contributors CJG conceived the analysis, downloaded the data set, performed the statistical analysis and solely wrote the manuscript. The data sets for this analysis can be provided on request.

  • Funding This work was not supported by any outside funding agency.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement The raw data files used for this analysis have been uploaded to DRYAD for public access: DOI 10.5061/dryad.4j190.