Abstract
Objectives Previously, we identified a 10-year cohort of protocols from applications to the Norwegian Medicines Agency 1998–2007, consisting of 196 drug trials in general practice. The aim of this study was to examine whether trial results were published and whether trial funding and conflicts of interest were reported.
Design Cohort study of trials with systematic searches for published results.
Setting Clinical drug trials in Norwegian general practice.
Methods We performed systematic literature searches of MEDLINE, Embase and CENTRAL to identify publications originating from each trial using characteristics such as test drug, comparator and patient groups as search terms. When no publication was identified, we contacted trial sponsors for information regarding trial completion and reference to any publications.
Main outcome measures We determined the frequency of publication of trial results and trial characteristics associated with publication of results.
Results Of the 196 trials, 5 were never started. Of the remaining 191 trials, 71% had results published in a journal, 11% had results publicly available elsewhere and 18% of trials had no results available. Publication was more common among trials with an active comparator drug (χ2 test, p=0.040), with a larger number of patients (total sample size≥median, p=0.010) and with a longer trial period (duration≥median, p=0.025). Trial funding was reported in 85% of publications and increased over time, as did reporting of conflicts of interest among authors. Among the 134 main journal articles from the trials, 60% presented statistically significant results for the investigational drug, and the conclusion of the article was favourable towards the test drug in 78% of papers.
Conclusions We did not identify any journal publication of results for 29% of the general practice drug trials. Trials with an active comparator, larger trials and trials of longer duration were more likely to be published.
- General practice
- Publication bias
- Drug industry
- Medical writing
- Primary care
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
A complete cohort of general practice drug trials over a 10-year period was identified from the national medicines agency archive of clinical trial applications. Most trials were multinational.
Trials that were not publicly registered were included in the cohort.
We performed extensive literature searches for publications from the trials and contacted sponsors of trials if publications were not identified.
We explored trial characteristics for association with publication, but for unpublished trials we did not have access to the direction (‘positive’ or ‘negative’) of trial results, which has previously been shown to be a strong predictor of publication.
Introduction
Conducting research on humans and exposing them to potential risk without fulfilling the obligation of making the results publicly available is ethically unacceptable and a violation of the Helsinki Declaration. Nevertheless, it is well documented that results from a significant proportion of clinical trials are never published in scientific journals.1–5 A recent systematic review of studies of non-publication of projects approved by research ethics committees or included in trial registries concluded that only 60% of randomised controlled trials (RCTs) were published as full journal articles.4 Trials with positive findings are generally published more often and more promptly than those with negative results.4 ,6–9 Data from clinical trials are synthesised in systematic reviews and meta-analyses, which form the basis for clinical guidelines. Including unpublished data in meta-analyses has been shown to change the combined effect of a drug, with the direction of change varying by drug and outcome.10 For antidepressants, the overall effect size was 32% greater in published trials than in all published and unpublished trials included in the US Food and Drug Administration (FDA) drug reviews.11 Missing trial data may therefore lead to a skewed or flawed evidence base, on which clinical decisions in single-patient consultations rest.
Since a majority of physician–patient contacts occur in primary care, and most prescription drugs are issued there, the general practice setting may be regarded as an ideal setting for testing the effectiveness of drugs most commonly used in primary care. The vast majority of drug trials in general practice are conducted by the pharmaceutical industry; however, few trials are conducted solely in general practice.12 General practitioners (GPs) invited by a pharmaceutical company to participate in a trial may sometimes find it hard to differentiate between a trial primarily designed for marketing and a sound scientific trial. It has been claimed that drug trials mainly designed for marketing, so-called ‘seeding trials’, may explain the more frequent use of expensive antihypertensive drugs in Norway compared with the UK.13 One feature of seeding trials is that they are less likely to be published.14
Although many clinical drug trials take place in general practice,12 non-publication of clinical trial results in this setting has only rarely been investigated. In an audit of general practice drug trials in the UK from 1984 to 1989, Wise and Drury found that 63% of completed trials were not published.15 Partly based on this low publication rate, they concluded that drug research in general practice did not appear to generate a high level of scientifically valid and clinically relevant findings.15 To our knowledge, no similar investigation has been undertaken since then.
We therefore aimed to investigate the reporting and publication of trial results, and to identify trial characteristics associated with publication in a complete national cohort of general practice drug trials over a decade. We also wanted to characterise the transparency of reported trial funding, authors' conflicts of interest, assistance from medical writers and to investigate the number of citations of main publications from the trials.
Methods
Cohort of trials identified from the Norwegian Medicines Agency
In Norway, all clinical pharmaceutical trials must be approved by the Norwegian Medicines Agency (NoMA), a national regulatory authority for new and established medicines. In the NoMA paper archive, we identified applications and protocols from the period 1998–2007 for trials planned to be partly or entirely conducted in general practice. General practice trials were defined as trials in which the address and/or title indicated that at least one of the Norwegian clinical investigators worked in general practice. We identified 196 trial applications, and this defined our cohort of general practice trials. Of these trials, 189 were industry initiated (ie, funded or conducted by a pharmaceutical company), 182 were multinational, and the total planned sample size (all countries) was over 330 000 patients.12 A majority (151 trials) had trial sites in both general practice and specialist care settings. According to the protocols, the trials were planned to be completed between 1998 and 2012, leaving sufficient time for publication output to accrue before our searches. The identification and selection of trial protocols have been described in more detail elsewhere.12
Search for publications of trial results
The files in the NoMA archive did not contain trial results. To identify publication output from the cohort of trials, we performed extensive literature searches, building an individual search for each trial in the three databases MEDLINE, Embase and CENTRAL (Cochrane Central Register of Controlled Trials) (see box 1). Before searching for publications, we searched for trial registration in the largest and most widely used clinical trials database (http://www.clinicaltrials.gov) to identify the unique trial registration number (NCT number) used in that database if the trial was registered there. If the trial was registered, we combined the NCT number with the trial characteristics in the publication search (characteristics OR NCT number), so that matches from either strategy were included. For trial protocols where the drug was identified only as a product code, we searched the Drug Information Portal of the US National Library of Medicine16 for generic drug names, and we included both the drug code and the generic name if identified. All searches were recorded in an electronic logbook. We performed the initial searches between January 2013 and February 2014.
Box 1 Setup of publication searches to identify articles presenting trial results
1. Generic drug name or product code of test drug.mp*
2. Trade name of test drug.mp
3. 1 OR 2
4. Generic drug name of comparator or trade name if this was used in the protocol.mp
5. 3 AND 4
6. Protocol acronym, if available
7. 5 OR 6
8. Patient group (if the description of patient group was complex, this search field was omitted)
9. 7 AND 8
10. Registration number at clinicaltrials.gov (NCT number), if identified
11. 9 OR 10
12. Limit: yr=‘Year of application at NoMA–Current’†
*.mp (multipurpose) used for searches in MEDLINE and Embase, both in the Ovid platform.
†In CENTRAL, the limit ‘trials’ was also used to exclude Cochrane reviews.
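To make the Boolean structure of box 1 concrete, the sketch below assembles an Ovid-style query from per-trial characteristics in the same order as the numbered search lines above. It is illustrative only: the function, field labels and example terms (drug names, acronym, NCT number) are hypothetical and do not come from any protocol in the cohort.

```python
def build_search(test_drug_terms, comparator_terms, acronym=None,
                 patient_group=None, nct_number=None):
    """Assemble an Ovid-style Boolean query mirroring the box 1 strategy.

    Hypothetical helper: (test drug terms) AND (comparator), OR'ed with the
    protocol acronym, AND'ed with the patient group, and finally OR'ed with
    the NCT number so that either strategy can match.
    """
    test_drug = " OR ".join(f"{t}.mp" for t in test_drug_terms)    # lines 1-3
    comparator = " OR ".join(f"{t}.mp" for t in comparator_terms)  # line 4
    query = f"(({test_drug}) AND ({comparator}))"                  # line 5
    if acronym:
        query = f"({query} OR {acronym}.mp)"                       # lines 6-7
    if patient_group:
        query = f"({query} AND {patient_group}.mp)"                # lines 8-9
    if nct_number:
        query = f"({query} OR {nct_number})"                       # lines 10-11
    return query

# Hypothetical example (not an actual trial from the cohort):
print(build_search(["drugX", "TradeNameX"], ["comparatorY"],
                   acronym="TRIAL-ACRONYM", patient_group="hypertension",
                   nct_number="NCT00000000"))
```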
Duplicates were removed in the reference manager program Endnote X7 (Thomson Reuters). A search filter included articles containing ‘random*’ to capture randomised trials, and excluded letters, editorials, reviews, guidelines and discussion papers. We screened titles and abstracts manually to decide whether the publication described a trial in the cohort by comparing with information from the trial applications: test drug including dose, comparator drug including dose, trial population and sample size, trial duration, time the trial was performed, trial location and name or acronym of trial. If we could not determine whether the title and abstract were likely to describe a particular trial in the cohort, we retrieved the full-text article. Pooled analyses were excluded unless it was explicitly stated that these analyses were planned before the trial and that the results were presented separately for each trial in an unambiguous way. We defined a trial as published if results of the primary outcome(s) were published in a peer-reviewed journal. We also recorded whether the trial was reported elsewhere in other publication types (eg, articles without results presentation, conference abstracts, clinical study reports, records in trial registries). For trials where no journal publication or only a published abstract was found, a new search was performed in February 2015 using Google Scholar, Google free-text search and the clinical trial registries of sponsors. We also checked whether results for these trials had been posted on clinicaltrials.gov or the EU Clinical Trials Register. The initial searches for publications were performed by one author (AMB). Another author (RBJ) independently repeated the searches in December 2015 for trials where the initial search did not identify a publication. We did not repeat the search when the sponsor had confirmed that the trial had not been started, had been discontinued or had not been published.
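As a rough illustration of the filtering step described above, the sketch below flags records whose title or abstract contains ‘random*’ and drops the excluded publication types. The record fields and keyword matching are assumptions for the example; in the study, deduplication was done in Endnote and screening was manual.

```python
import re

# Publication types excluded by the filter described in the text.
EXCLUDED_TYPES = {"letter", "editorial", "review", "guideline", "discussion"}

def passes_filter(record):
    """Return True if a record looks like a randomised trial report.

    `record` is assumed to be a dict with 'title', 'abstract' and
    'publication_type' keys; this mirrors the described filter only loosely.
    """
    text = f"{record.get('title', '')} {record.get('abstract', '')}".lower()
    is_randomised = re.search(r"\brandom\w*", text) is not None
    excluded = record.get("publication_type", "").lower() in EXCLUDED_TYPES
    return is_randomised and not excluded

# Example with a hypothetical record:
print(passes_filter({"title": "A randomised trial of drug X vs drug Y",
                     "abstract": "...", "publication_type": "journal article"}))
```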
Publications
We retrieved all presumed matching publications in full text. A data extraction form was developed in a web-based database with explicit instructions for coding. Data from the publications were extracted by one author (AMB) regarding whether they matched a particular trial from the cohort, publication type, author characteristics, reporting of funding and listed conflicts of interest. Any further doubt regarding whether a publication matched a trial was resolved by discussion between the authors. The extraction form was pilot tested by AMB, JS and AK. We defined the most complete publication presenting results for the primary outcome as the main journal article. For these papers, we recorded whether the results for the primary outcome were statistically significant in favour of the test drug (p<0.05 was considered statistically significant unless the study authors specified another level of significance, and for non-inferiority trials, non-inferiority was coded as ‘in favour’); not statistically significant/mixed (when one or more primary outcome was not statistically significant); statistically significant in favour of the comparator; or unknown/not relevant (eg, when no comparison or statistical test was performed). The conclusions of the articles were classified as favourable if the test drug was preferred to the comparator, neutral if the test drug and comparator were described as about equal, or not favourable if the comparator drug was preferred to the test drug. Classification was done by AMB and RBJ independently, and the inter-rater reliability was good (κ values 0.77 for classification of results and 0.70 for conclusions). Cases of disagreement were discussed, and consensus was reached in all instances.
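The inter-rater agreement reported above (κ of 0.77 for results and 0.70 for conclusions) can be computed as an unweighted Cohen's kappa, the usual choice for two raters (the article does not specify the variant). A minimal sketch of that calculation follows, using hypothetical labels rather than the actual coding data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of article conclusions by two raters:
a = ["favourable", "favourable", "neutral", "not favourable", "favourable"]
b = ["favourable", "neutral",    "neutral", "not favourable", "favourable"]
print(round(cohens_kappa(a, b), 2))
```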
Contact with sponsors for information not found elsewhere
Where no journal article was identified, we sent a letter to the trial sponsors in February 2015 asking whether the trial had been conducted, registered and published. Sponsors of trials were companies, institutions or persons responsible for conducting or financing a clinical trial. Furthermore, we inquired about reasons for not conducting or publishing a trial. We sent letters to 19 sponsors of trials (18 industry sponsors and one university) regarding a total of 63 trials. From seven industry sponsors and the one university sponsor, we received responses regarding 33 trials (52%). We did not receive information about any publications or public trial registrations that had not already been identified in the main or supplementary searches. Trials without any identified publications were classified as discontinued or not started if this was substantiated by data in the NoMA archive, in clinicaltrials.gov, or from correspondence with trial sponsors.
Bibliometric data
We extracted bibliometric data for the journals in which the trial results were published; journal impact factors were taken from the Journal Citation Reports of the ISI Web of Knowledge17 (for the year 2008 or the first subsequent available year). We extracted citation reports from Web of Science for the main publication from each trial.18
Statistical analyses
We report descriptive statistics with frequencies of characteristics recorded from the NoMA archive for trials that were published, had results reported elsewhere, or were not published. We used χ2 tests to compare publication rates between trials with different characteristics, and p values <0.05 were considered statistically significant. We calculated the κ measure of agreement between the raters for the classification of results and conclusions. Statistical analyses were performed using IBM SPSS Statistics V.22.
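For illustration, the sketch below runs the kind of 2×2 χ2 comparison described above (publication status by comparator type). The counts are placeholders only, not the actual cohort data, and the analysis in the study was done in SPSS rather than Python.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = comparator type (active / placebo or none),
# columns = journal publication (yes / no). Counts are placeholders only.
table = [[70, 20],
         [65, 36]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```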
Results
The NoMA archive information and/or contact with the sponsors indicated that of the 196 trials in the cohort, five trials had never been launched: two because of remarks or lack of approval from the regional ethics committee and/or NoMA, and three because the sponsors no longer considered the trial relevant. These five trials were excluded from further analyses of publication status.
Publication of trial results
For the remaining 191 trials, we identified at least one journal publication for 135 (71%) trials, with a total of 285 journal articles resulting from the trials (figure 1). For 22 (11%) trials, results were publicly posted elsewhere; in sponsors' trial registries, at clinicaltrials.gov, or in a conference abstract (figure 2). No trial results were found for 34 (18%) trials. The cumulative planned sample size across participating countries for these 34 trials was over 41 000 patients, constituting 12% of the total sample size of the 191 trials (table 1).
Six trials had results reported only at clinicaltrials.gov without any journal publication, and 11 unpublished trials were registered at clinicaltrials.gov with no results reported.
Ten trials were stopped prematurely. Two trials had results presented on the sponsor's website together with information about the discontinuation of the trial programme, and four trials were registered at clinicaltrials.gov without results posted (trial programme terminated, n=3; no benefit shown, n=1). For the remaining four trials, we obtained the information by contacting the sponsors. Reasons given for stopping these trials were recruitment difficulties (n=3) and withdrawal of the drug (n=1).
Predictors of publication
Publication status by trial characteristics is shown in table 1. Publication of results was more frequent among trials that used an active comparator, and for trials with durations or sample sizes above or equal to the median. Other variables not significantly associated with publication were mixed versus GP-only setting; international versus national trial; registration status at clinicaltrials.gov; time of study (before/after 2002); drug group (tested for the top five drug groups and vaccines separately); and sponsor (tested for the top five sponsors separately). Reporting of trial results (both journal publications and other reports included) was more common after 2002 than before (89% vs 76% of trials, respectively, p=0.018) (figure 3). Mean time from estimated end of study to main publication of results was 3.6 years (95% CI 3.3 to 4.0; range 0–9 years); however, the exact end date of each study was not available in our data.
Positive or negative results and conclusions of papers
Eighty-one (60%) of the 134 main journal articles presented statistically significant results in favour of the test drug for the primary outcome, while only one (0.7%) showed significant results in favour of the comparator drug. Furthermore, 34 (25%) trials had mixed or non-significant results, while the direction of results was either unclear or not relevant for the remaining 18 papers (13%), typically because no statistical comparisons were performed. The conclusions of the papers were favourable towards the test drug in 104 papers (78%), neutral in 22 (16%), not clearly stated in three (2.2%) and unfavourable to the test drug in only five papers (3.7%).
Reporting of funding and conflicts of interest
Information regarding trial funding was provided in 241 of the 285 (85%) articles from the 191 trials. In 189 of the 285 (66%) articles, one or more of the authors declared conflicts of interest. Overall, a mean of 51% of the authors per article reported conflicts of interest (95% CI 46.4 to 56.3), 30% of authors were employed by the sponsor (95% CI 26.8 to 32.9), and only 7.4% of authors explicitly declared that they had no conflicts of interest (95% CI 5.03 to 9.72). Funding information was reported in 112 (83%) of the 135 papers defined as the main journal articles from each trial, and at least one author declared conflicts of interest in 78 (58%) papers. Reporting of both funding and conflicts of interest increased over time (figure 4).
For 125 of the 285 papers (44%), we found information indicating assistance from a medical writer. In 123 of these papers, the writing assistance was declared in the acknowledgements section, while a medical writer was listed among the authors in only five papers.
Bibliometric data
The 285 papers were published in 112 different journals with a median impact factor of 4.3 (min–max: 0.7–50). Most journals were topic specific, but high-impact general journals such as the Lancet and the New England Journal of Medicine (NEJM) were among the 10 most frequently used journals. The median annual number of citations for the 125 main publications available in Web of Science was 4.4 (IQR: 1.7–10.4, min–max: 0.12–308). The median total number of citations for each main publication was 33 (IQR: 14–96, min–max: 1–2463).
Discussion
Main findings
In this 10-year cohort of drug trials including Norwegian general practice trial sites, three out of ten trials had no results published in journals in the 7–17 years since application for approval to NoMA. For 12% of trials, no trial information could be traced at all, representing missing data from potentially over 40 000 patients internationally. Publication was more common among trials that used an active comparator, larger trials and trials of longer duration.
Findings in relation to other studies
A publication rate of 71% corresponds quite well with that reported in a recent systematic review.4 However, our publication rate is much higher than the 37% Wise and Drury found when analysing drug trials in UK general practice from the 1980s.15 Since their non-publication rate of 63% was based on responses from almost all trial sponsors, their finding was unlikely to have been caused by incomplete publication searches, and the authors were therefore concerned about the type of research performed and the underlying motives for the research.15 More recent studies, although not limited to the general practice setting, have found higher proportions of published results that are more consistent with our findings. In a study of large trials registered at clinicaltrials.gov before 2009, 29% remained unpublished, and of the unpublished trials, 78% did not have results available at clinicaltrials.gov either.2 Of 940 trials of pharmacological interventions for stroke, 20% were completed, but not published.3 Selective reporting of study results has been found across different specialties, interventions and over time.8 RCTs have been found to be published more often than observational studies,4 ,15 and phase III trials more often than phase II trials.4 As drug trials are often RCTs, and there were few phase II trials in our cohort, this might partly explain why we found a relatively high proportion of publications from the trials.
In general practice, small units with relatively few eligible patients at each site make it challenging to run clinical trials. Usually, a large number of practices are needed to provide a sufficient number of patients.15 ,20 Although termination of drug development programmes was the most common reason for stopping a trial, three sponsors of uncompleted trials in the cohort reported difficulties with recruiting patients and/or GPs. This is generally the most common reason for the termination of trials.21 In a cohort of RCTs approved in Switzerland, Germany and Canada in 2000–2003, as many as 25% of trials were discontinued, most often because of poor recruitment.22 However, trial discontinuation was less likely for industry trials and trials with large sample sizes, and discontinued trials were more likely to remain unpublished.22
For over 30 years, there have been calls for trial registration and increased transparency, and during the last decade progress has accelerated. From 2005 onwards, the International Committee of Medical Journal Editors required public registration of clinical trials as a condition for considering publication, and from 2007, trial registration and reporting of results were incorporated into US legislation through the FDA Amendments Act.23 Nevertheless, we found that one-third of all trials with no publicly available results were registered at clinicaltrials.gov, but without any results posted. This is consistent with previous studies showing that only around 20% of registered trials posted results at clinicaltrials.gov within 1 year of trial completion.5 ,24 ,25 Although such reporting is mandatory, fewer than 40% had posted results after 5 years.5 The finding that reporting of trial results increased over time when we included formats other than journal publications (figure 3) is consistent with a German study showing increasing availability of trial results during the years 1989–2010 when all publicly available sources were included in a publication search.26 However, a review of methodological studies found no substantial change in non-publication over the past 30 years.8 In recent years, the AllTrials campaign has worked systematically for trial registration and reporting of results,27 and in April 2015, WHO called for public disclosure of clinical trial results, including the results of older, still unpublished trials.28 There is ongoing debate regarding how this may best be implemented, in particular for older trials.29 ,30 Alarming discrepancies between papers and results posted at clinicaltrials.gov have been disclosed,31 ,32 and adverse drug events are typically incompletely reported in journal papers.31 ,32 So far, complete clinical study reports are not commonly available,26 and the study reports we found in sponsors' trial registries were only summaries. Although journal articles remain the gold standard for reporting study results, this format is increasingly being supplemented by more comprehensive formats that make it possible for others to reanalyse data, which will benefit both science and healthcare.
About eight out of ten of the main publications identified in our study had a positive conclusion in favour of the tested drug, which probably reflects the general tendency to report positive rather than negative results.6 ,11 ,33 The low proportion of trials with negative conclusions in our study and in other studies is concerning, and might suggest publication bias, highlighting of findings other than the main outcome, or violation of the principle of equipoise. In a recently published study, the authors found that after the year 2000, significantly fewer cardiovascular trials reported positive results for the primary outcome than before.34 The authors argued that this was likely to be an effect of the required prospective trial registration.34 Since we did not have access to unpublished results, we were not able to analyse publications in relation to the direction of the study outcome. Others have found that studies from pharmaceutical companies more frequently report favourable efficacy results than non-industry trials.35 Since there were too few non-industry trials in our cohort, we could not analyse whether industry-sponsored trials more commonly reported findings in favour of their drug than independent trials. Reporting and interpretation of findings in RCTs with non-significant primary outcomes is commonly inconsistent with the results.36 This corresponds well with our finding that papers with mixed or non-significant results but a favourable conclusion typically highlighted secondary outcomes or a more favourable adverse event profile.
The increasingly common practice of reporting funding and authors' conflicts of interest is consistent with requirements introduced by medical journals in recent years. The impact of disclosing funding and conflicts of interest on physicians' interpretation of trials has been studied in two randomised trials, which reached opposite conclusions: a study of French GPs did not find any significant difference in GPs' confidence in industry-funded versus non-industry-funded RCTs,37 whereas a US study found that internists downgraded the credibility of a study if it reported industry funding.38 However, it is noteworthy that only a small fraction of the authors of the articles we analysed reported no conflicts of interest. Assistance from a medical writer was reported in almost half the publications, but fewer than 2% listed a medical writer as an author, which is consistent with analyses of diabetes trials published in 1993–2013.39 A survey of authors of articles in high-impact journals revealed that 12% of research articles met the criteria for ghost authorship (that is, individuals making substantial contributions without being listed as authors) and that 25% of research articles had an honorary (guest) author.40 Our data did not allow us to draw conclusions regarding the fulfilment of authorship criteria.
The papers from the 191 trials were generally published in high-impact to medium-impact journals, indicating that research in the general practice setting influences the general medical literature; however, most were drug trials from mixed clinical settings, with only a few solely general practice trials. The papers were quite frequently cited, with a median of 33 citations, but with a wide range—the most frequently cited paper having over 2000 citations indexed in Web of Science. Two of the top three journals were also the two most popular journals for publishing RCTs on new diabetes drugs.39 The Lancet and NEJM's position among the 10 most frequently used journals was also consistent with previous findings.41
Strengths and weaknesses of the study
The inclusion of all trials from a mandatory national medicines agency archive is a strength compared with studies that investigate publication only for trials listed in clinical trial registries. Although the cohort included only trials with Norwegian sites, most trials were multinational, which increases the generalisability of our findings to other countries. As the identification of general practice trials from the NoMA archive was performed by manual search in a paper archive, random errors may have occurred in the initial data collection from applications in the archive. The trial applications were from a 10-year period that did not extend up to the present, which may limit the transferability of our findings to current practice. However, because it generally takes several years from trial completion to publication of results, this kind of study needs to be conducted with some time lag. Another potential limitation is the failure to identify all publications from trials in the cohort. The search for publications was initially performed by one author and repeated independently by another author for trials in which no publications were originally found. Ideally, all searches and selections should have been duplicated. However, after the repeated extensive searches in several databases and additional searches in sponsors' registries, free-text internet searches and contact with sponsors, we consider it unlikely that additional searches would have substantially changed our results. The cumulative sample size of unpublished trials was based on protocol information regarding recruitment targets and must therefore be considered an estimate. We obtained information from sponsors for slightly more than half of the trials we asked about. Among the remaining trials, there might be some that were planned but not started; however, we did not identify information supporting this in the NoMA archive correspondence. For trials where we identified a publication, we did not specifically investigate whether the trial had been discontinued prematurely. The classification of the results and conclusions of the main papers according to the direction of the results is to some extent a subjective assessment; however, the classification was done independently by two raters, with good agreement between them.
Conclusions
As in similar studies from other fields of medicine, a considerable share of drug trials conducted in Norwegian general practice remained unpublished 7–17 years after application for approval. This non-publication may imply missing trial data from potentially over 40 000 patients internationally. Data from clinical trials that are not available for public appraisal should raise ethical concerns regarding both a deficient evidence base and unfulfilled obligations towards trial participants. When reviewing research output, it is important to check trial registries and sponsors' websites, as one-fifth of the trial results were found only there. The finding that 60% of papers reported favourable results for the investigational drug, while only 0.7% showed favourable results for the active comparator, is striking. It is encouraging that, over time, more trials had results reported, and that transparency in reporting of funding and conflicts of interest increased. On the other hand, very few authors declared that they had no conflicts of interest to report, which suggests continuing challenges for the credibility of drug trials, especially in general practice, where few drug trials are conducted independently of the pharmaceutical industry.
Acknowledgments
The authors thank Kaspar Buus Jensen for his participation in the initial planning of the study and the collection of data from the NoMA archive. They also thank Ingvild Aaløkken, Head of Section for preclinical assessment and clinical trials at NoMA, for support and admission to the NoMA archive during the identification of trials in the archive. Librarian Inger Marie Juul from the University of Oslo Library kindly gave advice in the development of the search strategy.
References
Footnotes
Contributors AMB, JS and AK took part in planning the study and designing the data extraction form. AMB searched for publications, registered data from the publications, performed the statistical analyses and drafted the manuscript. RBJ performed the repeated search for publications and the double coding of results and conclusions. RBJ, JS and AK participated in the analyses and interpretation of results and critically revised the manuscript. All authors read and approved the final manuscript and are accountable for all aspects of the work.
Funding This study was funded by the Norwegian Research Fund for General Practice.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.