Feasibility study to examine discrepancy rates in prespecified and reported outcomes in articles submitted to The BMJ
  1. Jennifer Weston1
  2. Kerry Dwan1
  3. Douglas Altman2
  4. Mike Clarke3
  5. Carrol Gamble1
  6. Sara Schroter4
  7. Paula Williamson1
  8. Jamie Kirkham1

  1. Department of Biostatistics, MRC North West Hub for Trials Methodology Research, University of Liverpool, Liverpool, UK
  2. Centre for Statistics in Medicine, University of Oxford, Oxford, UK
  3. All-Ireland Hub for Trials Methodology Research, Centre for Public Health, Queen's University Belfast, Belfast, UK
  4. The BMJ, BMA House, London, UK

  Correspondence to: Dr Jamie Kirkham; jjk@liv.ac.uk

Abstract

Objectives Adding, omitting or changing prespecified outcomes can result in bias because it increases the potential for unacknowledged or post hoc revisions of the planned analyses. Journals have adopted initiatives such as requiring the prospective registration of trials and the submission of study protocols to promote the transparency of reporting in clinical trials. The main objective of this feasibility study was to document the frequency and types of outcome discrepancy between prespecified outcomes in the protocol and reported outcomes in trials submitted to The BMJ.

Methods A review of all 3156 articles submitted to The BMJ between 1 September 2013 and 30 June 2014. Trial registry entries, protocols and trial reports of randomised controlled trials published by The BMJ, and of a random sample of those rejected, were reviewed. Editorial and peer reviewer comments and author responses were also examined to ascertain any reasons for discrepancies.

Results In the study period, The BMJ received 311 trial manuscripts, 21 of which were subsequently published by the journal. In trials published by The BMJ, 27% (89/333) of the prespecified outcomes in the protocol were not reported in the submitted paper and 11% (31/275) of reported outcomes were not prespecified. In the sample of 21 trials rejected by The BMJ, 19% (63/335) of prespecified outcomes went unreported and 14% (45/317) of reported outcomes were not prespecified. None of the reasons provided by authors of published trials was suggestive of outcome reporting bias, as the reasons were unrelated to the results.

Conclusions Mandating the prospective registration of a trial and requesting that a protocol be uploaded when submitting a trial article to a journal have the potential to promote transparency and safeguard the evidence base against outcome reporting bias arising from outcome discrepancies. Further guidance is needed on documenting reasons for outcome discrepancies.

  • MEDICAL EDUCATION & TRAINING
  • STATISTICS & RESEARCH METHODS
  • MEDICAL ETHICS

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • The BMJ manuscript tracking system was used to gain access to all study protocols, which are often unavailable.

  • The study assumed that unpublished protocols submitted to The BMJ were final versions.

  • The study only described reasons for outcome discrepancies that were documented by the trial authors or questioned during peer review.

  • The results were limited to trials submitted to The BMJ, which may not be representative of all trials.

Introduction

Selective outcome reporting occurs when a subset of the originally recorded outcome variables in a trial is selectively reported in a publication on the basis of the results. When outcome reporting is informed by the statistical significance and/or effect size (eg, outcomes with statistically non-significant results are not reported, or are reported only as p>0.05), we refer to this as outcome reporting bias.1 This form of bias has been identified as a threat to evidence-based healthcare because clinical trial outcomes with statistically significant results are more likely to be published.2 As a safeguard against this form of bias, current CONSORT (Consolidated Standards of Reporting Trials) guidance recommends that completely defined primary and secondary outcome measures should be prespecified and any changes to trial outcomes after the trial starts should be documented with reasons (items 6a and 6b), and that the results for each outcome should be reported for each group, together with the estimated effect size and its precision (item 17a).3 Despite this guidance, empirical research has shown that statistically significant outcomes were more likely to be fully reported (where a fully reported outcome was one with sufficient data for inclusion in a meta-analysis within a systematic review) than non-significant outcomes (range of ORs 2.2–4.7).4 When comparing trial publications with protocols, 40–62% of studies had at least one primary outcome that was changed, introduced or omitted.4 Previous qualitative research in this field has found that the prevalence of incomplete outcome reporting is high and that researchers were generally unaware of the implications of not reporting all outcomes and of protocol changes.5

Some journals have adopted policies, such as the International Committee of Medical Journal Editors (ICMJE) recommendations, to deter trial authors from selectively reporting outcomes. For example, The BMJ follows the recommendation that any clinical trial that started after 1 July 2005 will be considered for publication only if it was prospectively registered before the recruitment of any participants (http://journals.bmj.com/site/authors/editorial-policies.xhtml#clinicaltrial). The BMJ will also not consider a report of a clinical trial unless the protocol is submitted alongside it to inform the peer review process.

The aim of this feasibility study was to gain an understanding of the practicalities of comparing prespecified outcomes from the trial protocol with reported outcomes in the initial manuscript submission during the peer review process, in order to inform a larger future study. We examined a cohort of trial reports submitted to The BMJ, half of which had recently been accepted for publication by The BMJ and half rejected. The main objective of the study was to document the frequency and types of outcome discrepancy between prespecified outcomes in the protocol and reported outcomes in the submitted paper. A secondary objective was to review whether these discrepancies were discussed by authors in the trial registry or manuscript or, where the manuscripts underwent external peer review, during correspondence with editors and peer reviewers.

Methods

Identification of trials

After signing a confidentiality agreement, we searched The BMJ's manuscript tracking system to identify all reports of randomised controlled trials (RCTs) submitted to the journal between 1 September 2013 and 30 June 2014. Articles are archived in the system after 12 months, so this 10-month period was chosen to avoid the loss of any articles between the screening and assessment stages. We included only articles that assessed the efficacy or effectiveness of a medical intervention and were submitted as the primary study publication, as indicated by the trial author. If it was unclear whether the article was a primary publication, two experienced members of the study team (JK and PW) made a judgement via discussion. Secondary trial papers, including cost-effectiveness analyses, were excluded, since such articles are unlikely to report on all prespecified outcomes.

The study followed a nested case–control design. From the cohort of all RCT submissions to The BMJ meeting the inclusion criteria, cases were defined as all trials that were submitted in the chosen period and subsequently published by The BMJ. Controls consisted of a random sample (of equal size to the number that were published) of submitted and subsequently rejected articles. All rejected articles had an equal chance of selection irrespective of the stage at which they were rejected, that is, following initial screening, following external peer review or after an appeal. If a rejected article did not have a submitted protocol, the next rejected article in the random sequence was considered, until enough controls had been selected.
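To make the selection rule concrete, the following is a minimal Python sketch of the control sampling described above, assuming each submission is represented as a dictionary; the field name `protocol_submitted` and the data structure are illustrative assumptions, not part of the study's actual procedure.

```python
import random

def select_controls(rejected_articles, n_controls, seed=2014):
    """Randomly order the rejected submissions, then take the first
    n_controls that have a submitted protocol (skipping those without,
    as described in the text). Field names are illustrative."""
    rng = random.Random(seed)    # fixed seed for reproducibility
    sequence = list(rejected_articles)
    rng.shuffle(sequence)        # every rejected article has equal chance
    controls = []
    for article in sequence:
        if len(controls) == n_controls:
            break
        if article.get("protocol_submitted"):
            controls.append(article)  # otherwise move to next in sequence
    return controls

# e.g. 21 controls to match the 21 accepted trials:
# controls = select_controls(rejected, n_controls=21)
```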

Trial documentation for assessment

For each submission, we downloaded the trial registry entry and the following documents from The BMJ's manuscript tracking system: study protocol, first submission, any online supplementary material, editor and peer review comments with author responses, and final published article available on ScholarOne. Trial registry entries were identified from the ‘trial registration’ name and number reported as a required abstract item for RCT manuscripts submitted to The BMJ.

Assessment of trials and data extraction

Using the detail from the submitted articles, we recorded trial characteristics relating to the design, conduct and size of the trial. For trials rejected by The BMJ, we recorded the main reason why the article was rejected, since not all rejections are based on methodological grounds; for example, an article may be rejected because the research question lacks novelty, relevance or importance. We also recorded whether the trial was prospectively or retrospectively registered according to the ICMJE definition,6 and whether the protocol submitted to the journal was published. In addition, we recorded any outcome-related changes, with reasons, that were documented either in the history of changes and amendments section of the trial registry entry or in the trial article.

Outcome definitions used in this study

We defined outcomes of interest as domains; a specific measurement is the metric or tool used to measure the domain. These definitions are consistent with those used in other studies.7,8 Some specific measurements have multiple subscales; for example, the Short Form-36 (SF-36) for measuring general health has eight health subscales and two summary scores (the physical and mental component summary scores). We treated each subscale as a separate outcome only if it was the triallists' intention to analyse each subscale separately.
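The subscale rule can be expressed as a short sketch. The record below is a hypothetical illustration (the instrument record and the flag `analysed_separately` are our own), not a format used in the study.

```python
# Hypothetical record for an instrument with subscales (SF-36 example).
sf36 = {
    "instrument": "SF-36",
    "subscales": ["physical functioning", "role physical", "bodily pain",
                  "general health", "vitality", "social functioning",
                  "role emotional", "mental health"],
    "analysed_separately": True,  # stated in the protocol's analysis plan
}

def expand_outcomes(record):
    """Count one outcome per subscale only when the triallists intended
    to analyse each subscale separately; otherwise count the instrument
    as a single outcome."""
    if record["analysed_separately"]:
        return [f'{record["instrument"]}: {s}' for s in record["subscales"]]
    return [record["instrument"]]

# expand_outcomes(sf36) -> eight separate SF-36 outcomes
```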

Outcome comparisons and assessment

Comparison of outcomes listed in the trial registry, the trial protocol, the initial article submission and the final published article (if available) was undertaken independently by two researchers (JW and JK, or JW and KD) to identify any similarities or discrepancies in outcome specification between the source documents. For each trial, a matrix listing the outcome domains and specific measurements was constructed, showing whether each outcome was mentioned in each of the four possible source documents. If only an outcome domain was prespecified, without reference to individual subscales, this was not classed as a discrepancy if the individual outcomes were reported. For example, in rheumatology, if the disease activity index was prespecified and its individual components (tender joint count, swollen joint count, patient global health and acute phase reactant) were reported without having been prespecified, this was not classified as a discrepancy. Similarly, a discrepancy was not declared if the specific instruments used to measure outcomes were not prespecified; for example, if quality of life was listed as a prespecified outcome and the report used the EQ-5D to measure it, this was not considered a discrepancy.
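A minimal sketch of such a matrix and the two discrepancy checks applied to it is given below; the outcome labels and document keys are illustrative, and the real comparisons were made by hand rather than in code.

```python
SOURCES = ("registry", "protocol", "initial_submission", "published_article")

# One row per outcome: True where the outcome is mentioned in that document.
# The outcomes shown here are invented for illustration only.
matrix = {
    "quality of life (EQ-5D)": {"registry": True, "protocol": True,
                                "initial_submission": True,
                                "published_article": True},
    "hospital readmission":    {"registry": True, "protocol": True,
                                "initial_submission": False,
                                "published_article": False},
}

def find_discrepancies(matrix):
    """Return outcomes prespecified (protocol) but unreported, and
    outcomes reported in the submission but not prespecified."""
    unreported = [o for o, seen in matrix.items()
                  if seen["protocol"] and not seen["initial_submission"]]
    not_prespecified = [o for o, seen in matrix.items()
                        if seen["initial_submission"] and not seen["protocol"]]
    return unreported, not_prespecified

# find_discrepancies(matrix) -> (['hospital readmission'], [])
```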

Any discrepancies between the information extracted by the two reviewers were resolved through discussion. As an additional quality control check, one author (JK) checked all comparisons.

Data analysis and presentation of results

Trials were analysed and presented according to whether they were accepted or rejected for publication by The BMJ. Inconsistencies between prespecified outcomes (those listed in the protocol) and those reported in the initial trial report submission were documented. As a supplementary analysis, we also considered prespecified outcomes to be those listed in the trial registry. Specific discrepancy types comprised new outcomes introduced into the trial report that were not prespecified, and prespecified outcomes that were not mentioned in the trial report. We also noted any changes in the importance of outcomes (eg, upgrading prespecified secondary outcomes to primary outcomes, or downgrading primary outcomes to secondary outcomes) and changes in the measurement tools used.

For articles published by The BMJ, a review of the editorial and peer review comments and the authors' responses was also undertaken. We describe how often outcome discrepancies were picked up during peer review, by whom, and any reasons provided by the authors for the discrepancy. Outcomes reported in the final published manuscript were compared with those reported in the initial submission, and any changes in the discrepancy rates between the protocol and the initial submission and between the protocol and the final published article were reported. Any reduction in the discrepancy rate between the protocol and the final publication, compared with the initial manuscript submission, was taken as a crude measure of the impact of the peer review process.
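As a worked illustration of these rates, the sketch below reproduces the arithmetic with counts taken from the Results; the helper function is ours and not part of the study's analysis code.

```python
def pct(numerator, denominator):
    """Discrepancy rate as a percentage, rounded to the nearest integer."""
    return round(100 * numerator / denominator)

# Trials accepted by The BMJ: protocol vs initial submission
unreported = pct(89, 333)        # 27% of prespecified outcomes not reported
introduced = pct(31, 275)        # 11% of reported outcomes not prespecified

# Protocol vs final published article, after peer review
unreported_final = pct(71, 333)  # 21%

# Crude measure of peer review impact: a 6 percentage point reduction
impact = unreported - unreported_final
```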

Results

Between 1 September 2013 and 30 June 2014, 3156 research articles were submitted to The BMJ, but only 10% (311/3156) of these were RCT related (figure 1). Thirty-six RCTs were excluded for the reasons described in figure 1. Of the remaining 275 RCTs, 21 (8%) were accepted for publication by The BMJ and the remaining 254 were rejected. As a requirement of publication, all accepted RCTs had an available protocol within the manuscript tracking system, but authors of more than half of the rejected RCTs (55%; 139/254) did not submit a protocol with the initial submission (figure 1).

Figure 1

Flow diagram of research articles submitted to The BMJ (1 September 2013 to 30 June 2014).

Characteristics of the 21 accepted trials and the 21 trials randomly selected from the 115 rejected submissions with an available protocol are shown in table 1. No trial was exclusively funded by industry and all trials were registered. For accepted RCTs, trial registration dates ranged from 2006 to 2012 and protocol dates from 1998 to 2013; for rejected RCTs, registration dates ranged from 2005 to 2012 and protocol dates from 2004 to 2013. A higher proportion of trials rejected by The BMJ were retrospectively registered, and there was a tendency for more trials accepted by The BMJ to have a published protocol. The median number of outcomes mentioned in the source documents was 15 (range 4–42) for trials accepted by The BMJ and 19 (range 3–35) for those rejected. Sixteen of the rejected articles were rejected by the editor following initial screening, four after external peer review and one following an appeal. For 5 of the 21 rejected trials, the editor's reason for rejection was retrospective registration. Other reasons for rejection included preliminary study/low priority for the journal (6), not suitable for publication in The BMJ (5), article more suitable for a specialist journal (3), underpowered study (1) and premature termination of the trial (1). As of April 2015, 12 of the 21 articles rejected by The BMJ had been published elsewhere (table 2).

Table 1

Characteristics of randomised controlled trials included in the study

Table 2

Outcome discrepancies between prespecified (protocol) and reported outcomes (initial submission) from trials submitted to The BMJ

Outcome discrepancies—protocol to initial manuscript submission

In trials published by The BMJ, over a quarter of the prespecified outcomes in the protocol (27%; 89/333) were not reported in the initial manuscript submission, compared with 19% (63/335) in trials that were rejected (table 2). Eleven per cent (31/275) of outcomes reported in the initial submissions of trials published by The BMJ were not prespecified, compared with 14% (45/317) in trials that were rejected (table 2). There were no discrepancies between prespecified and reported outcomes in 19% (4/21) of trials accepted by The BMJ and 10% (2/21) of trials rejected by The BMJ.

We found three instances from three separate trials (all subsequently accepted by The BMJ) where a prespecified primary outcome was downgraded to a secondary outcome in the submitted report (table 2). None of the trial authors specified a reason for this change, although one author did amend the trial registry entry accordingly. Prespecified secondary outcomes were not upgraded to primary outcomes in any of the submitted trial reports.

Outcome discrepancies—trial registry to initial manuscript submission

Online supplementary table S1 provides the results where the trial registry was used to define the prespecified list of outcomes. Compared with using the protocol to define the list of prespecified outcomes, there was a marked increase in the number of reported outcomes that were not prespecified: 32% (87/275) for the trials accepted by The BMJ and 28% (90/317) for the trials rejected by The BMJ.

Reasons for discrepancies

Three rejected trial reports noted changes to outcomes in either the trial registry or protocol but did not document a reason for the change.

Editorial impact on discrepancies (trials accepted for publication by The BMJ)

In the 21 trials accepted by The BMJ, the editorial and peer review process raised outcome discrepancies in 82% (14/17) of the trials in which they occurred. Editors asked the authors of 10 trials to make sure all registered outcomes were reported in the trial manuscript and to document any reasons for changes. The percentage of prespecified outcomes that went unreported fell from 27% to 21% (71/333) when the final published article, rather than the initial manuscript submission, was used in the assessment, but the percentage of outcomes that were reported but not prespecified remained largely unchanged (11%; 32/294; see online supplementary table S2). Editors did, however, request that reported outcomes that were not prespecified be labelled as 'post hoc' (five trials). In responding to peer reviewer comments, two authors stated that qualitative and subjective outcome measures were to be reported elsewhere, and in one trial there was clear evidence that the additional data had been published elsewhere. In their communications with editors, authors also gave space limitations (despite the fact that The BMJ has no word limit for research articles), outcomes still being analysed, and errors in updating the trial registry entry as reasons for outcome discrepancies identified during the peer review process.

Discussion

Researchers have a moral responsibility to report trial findings completely and transparently.9 Mandating the prospective registration of a trial and requesting that a protocol be uploaded when submitting a trial article to a journal have the potential to promote transparency and reduce the incidence of outcome reporting bias arising from outcome discrepancies. Despite these requirements, the principal findings from this study showed that, at The BMJ where these are mandated, over a quarter of prespecified outcomes from trials published by The BMJ may still go unreported in submitted manuscripts reporting the primary trial results, while just over 10% of reported outcomes were newly introduced. For articles accepted by The BMJ, a review of editor, peer reviewer and author responses revealed that the reasons given by trial investigators for identified discrepancies were not considered to indicate bias, since they were unrelated to the results. There was evidence that The BMJ peer review process reduced the number of unreported outcomes, with fewer discrepancies found between the protocol and the published trial report than between the protocol and the initial submission. On a number of occasions, editors also requested that reported outcomes that were not prespecified be clearly labelled as post hoc outcomes.

Comparison with other studies

Two empirical studies conducted by Chan and colleagues compared the protocol and the final publication with respect to the primary outcome and concluded that 40–62% of trials had major discrepancies between the primary outcomes specified in the protocols and those defined in the published article.10,11 Four empirical studies have found that between 13% and 31% of primary outcomes specified in the protocol were omitted from the publication, and that between 10% and 18% of reports introduced an outcome in the publication that was not specified in the protocol.10–13 In one previous qualitative study, interviews were undertaken with trial investigators to discuss any discrepancies found between trial protocols and subsequent publications.5 This interview study found that in almost all trials (15/16, 94%) in which prespecified outcomes had been analysed at the time of primary publication but not reported, this under-reporting resulted in bias. In nearly a quarter (4/17, 24%) of trials in which prespecified outcomes had been collected but not analysed, the direction of the main findings influenced the investigators' decision not to analyse the remaining data collected.

Strengths and limitations

The strength of our study rests in the use of The BMJ's manuscript tracking system to gain access to the study protocols for each comparison. However, we did assume that unpublished protocols submitted to The BMJ were finalised and that no further undocumented changes were made after any outcome data were analysed. We also defined prespecified outcomes as those listed in the protocol, since many of the rejected trials were retrospectively registered, so their registered outcomes may not be truly 'prespecified', which would make any comparison between rejected and accepted trials unfair. Nevertheless, we acknowledge that at this time there are no reliable, practical processes available to identify version numbers and time stamps for unpublished protocols.

A further limitation of our work is that we could only record reasons for outcome discrepancies that were either documented by the trial author in the source documentation or questioned during peer review. Our research uncovered many other outcome discrepancies for which the reasons are unknown, and many of these could be the result of outcome reporting bias. It should also be noted that over half of the authors of rejected trials that were not considered for full assessment in this study did not upload a protocol at submission. Without a protocol, a full assessment was not possible; we have also implicitly assumed that rejected trials with an available protocol did not differ, in terms of selective reporting, from those without one. Furthermore, our results apply only to reports of trials submitted to The BMJ, which may not be representative of all trials; for example, no industry trials are included in the sample. Selective outcome reporting and discrepancies in outcome measures have previously been identified in industry-funded trials whose study protocols were obtained through litigation.14

Conclusions and policy implications

There are familiar lessons from this study for triallists, journal editors and peer reviewers. Triallists must write sufficiently detailed protocols, clearly identify any amendments and new versions, and adhere to them to minimise the scope for outcome reporting bias. The trial protocols supplied to The BMJ were highly variable in quality, clarity and depth; for this reason, The BMJ research editors always use the publicly registered information about the trial as the main source of prespecified outcomes. When reporting findings, triallists should ensure that trial registry entries correspond to the protocol. They should describe clearly in trial reports the outcomes that were measured, analysed and compared, and report and explain deviations from, or additions to, planned outcomes. We found that none of the trial authors explained outcome changes in either the trial registry entries or the submitted reports. We also found that many trial registry entries specify only outcome domains, without specifying the individual measurement tools and subscales to be used or the time points at which outcomes are to be measured. Journal editors and peer reviewers need to be able to identify all outcome discrepancies during peer review and should compare the outcomes prespecified (both primary and secondary) in the trial registry and protocol with those reported in the submitted report. Authors should be asked to comment on the reasons for all outcome discrepancies following peer review, and journal editors should recommend that triallists upload these reasons to the trial registry entry as well as include them in the trial report. Authors should also ensure that any prespecified outcomes not reported in the main manuscript are easily accessible if not published elsewhere.

In addition to journal editors, trial steering committees for individual trials can also play a role in providing trial authors with guidance on outcome specification, monitoring changes in outcomes and measures, and observing good practice in line with reporting standards. The SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guideline is a valuable resource for triallists at the development stage of a trial protocol, and we encourage the use of such guidelines to aid transparency of outcome selection and reporting.15

Future work

Understanding the reasons for outcome discrepancies is important in order to identify the mechanisms that drive outcome reporting bias and to provide guidance on how to minimise it. A larger study of outcome discrepancies involving more journals, using the methods outlined in this feasibility study, needs to be undertaken to gain insight into the effect of different journal policies. Owing to the problems with recall bias in the previous interview study,5 interviews with triallists during the peer review process need to be undertaken to identify the reasons for all outcome discrepancies.

Acknowledgments

The authors would like to thank Trish Groves, deputy editor of The BMJ, for suggesting editorial changes to the document and providing insight into current editorial policies at The BMJ.

References

Footnotes

  • Contributors PW and CG conceived the idea for the study. JK applied for funding for the study. JK, PW, CG, KD, DA and MC designed the study. JW downloaded all source documents from The BMJ manuscript tracking system. JW, JK and KD carried out a comparison of the source documents. All comparisons were checked by JK. JK, JW, KD and PW undertook the data analysis. All authors helped interpret the data. All authors had full access to all of the data in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis. JK prepared the initial manuscript. All authors commented on the final manuscript before submission. JK is the guarantor for the project.

  • Funding This work was supported by the MRC Network of Hubs for Trials Methodology Research (MR/L004933/1-R47).

  • Disclaimer The funders had no role in the design and conduct of the study; in the collection, analysis and interpretation of the data; or in the preparation, review or approval of the manuscript or the decision to submit.

  • Competing interests SS is senior researcher at The BMJ and regularly researches the peer review process, and DA and JK are statistical advisers at The BMJ.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.