Objectives As much as 50%–90% of research is estimated to be irreproducible, costing upwards of $28 billion in the USA alone. Reproducible research practices are essential to improving the reproducibility and transparency of biomedical research; such practices include preregistering studies, publishing a protocol, making research data and metadata publicly available, and publishing in open access journals. Here we report an investigation of key reproducible or transparent research practices in the published oncology literature.
Design We performed a cross-sectional analysis of a random sample of 300 oncology publications published from 2014 to 2018. Key reproducibility and transparency characteristics were extracted in duplicate by blinded investigators using a pilot-tested Google Form.
Primary outcome measures The primary outcome of this investigation was the frequency of key reproducible or transparent research practices followed in the published biomedical and clinical oncology literature.
Results Of the 300 publications randomly sampled, 296 were analysed for reproducibility characteristics. Of these 296 publications, 194 contained empirical data that could be analysed for reproducible and transparent research practices. Raw data were available for nine studies (4.6%). Five publications (2.6%) provided a protocol. Although our sample included 15 clinical trials and 7 systematic reviews/meta-analyses, only 7 publications included a preregistration statement. Less than 25% of publications (65/296, 22.0%) lacked an author conflict of interest statement.
Conclusion We found that key reproducibility and transparency characteristics were absent from a random sample of published oncology studies. We recommend required preregistration for all eligible trials and systematic reviews, published protocols for all manuscripts, and deposition of raw data and metadata in public repositories.
- cross-sectional study
- replication crisis
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
This investigation is an observational study using a cross-sectional design based on a broad sample of the oncology literature, which increases the generalisability of our findings.
We extracted eight key reproducibility and transparency characteristics, finding that 29 publications had 0 indicators, 62 publications had 1 indicator, 209 publications had 2–5 indicators and 0 publications had 6 or more.
We engaged in extensive training as a research team prior to analysis, and conducted all data extraction and data analysis in a double-blind manner to avoid bias.
Because of the breadth of this analysis, questions remain about the reproducibility and transparency in specific study designs (eg, randomised trials).
A lack of reported reproducibility or transparency characteristics may not equate to a failure to engage in reproducible and transparent research practices.
The ability to reproduce, or replicate, research results is a cornerstone of scientific advancement.1 2 Absent efforts to advance the reproducibility of scientific research, advancements in patient care and outcomes may be delayed,3 4 in part due to a failure in the translation of evidence to practice.5 Evidence may fail translation to practice owing to bias,6 7 lack of publication4 or poor reporting.8 Thus, it may not be surprising that recent estimates of irreproducible research span a range of 50%–90% of all articles, costing upwards of $28 billion in the USA alone.9 Moreover, it may not be surprising that large-scale efforts to replicate (ie, re-enact or reconduct) previously published research studies have failed,10 in part due to an inability to navigate published methods. What is lost when scientific research fails to be reproducible carries significant weight; namely, the ability of science to be self-correcting11 and produce trustworthy results.12
It is commonly accepted that certain items are essential to improving the reproducibility of biomedical research. Examples of such items include preregistering studies, publishing a protocol, making research data and metadata publicly available, and publishing in such a way as to allow free access to the final manuscript. Preregistering a study and publishing a protocol are important to prevent selective publication of studies with ‘positive’ results13 and to prevent the reordering of endpoints based on statistical significance.14 15 Providing access to one’s raw research data, metadata and analysis script allows independent researchers to computationally reproduce results, tailor results to specific patient populations and determine the rigour of statistical analysis.16 17 Publishing in open access journals or using preprint servers allows readers across economically diverse countries to access research articles that have implications for clinical practice.18 Altogether, reproducible research practices aim to increase the efficiency, usefulness and rigour of published research.5
Despite a high rate of author endorsement of reproducible practices,19 20 some evidence suggests that authors infrequently implement them.21 In the absence of such reproducible research practices, attempts to validate study findings may be thwarted. For example, Bayer and Amgen both attempted to replicate oncology research studies, with each failing to do so.22 23 Bayer’s attempt to reproduce prior research studies is especially significant because it attempted to reproduce its own internal studies. Other non-pharmaceutical entities have attempted to replicate cancer research studies with similar results.24 One may hypothesise that improved use and reporting of key reproducible or transparent research practices would improve future efforts to reproduce oncology research studies and build trust in existing evidence. Building on recent, similar analyses,25–27 here we report an investigation of key reproducible or transparent research practices in the published oncology literature as part of a larger initiative to examine reproducible and transparent research practices across medical specialties.
We performed an observational study using a cross-sectional design based on methods developed by Hardwicke et al 25 with modifications. Our study employed best-practice design in accordance with published guidance, where relevant.28 29 The study protocol, raw data and other pertinent materials are available on the Open Science Framework (https://osf.io/x24n3/). This study did not meet US regulation requirements to be classified as human research; it was therefore exempt from Institutional Review Board approval.30
We used the National Library of Medicine (NLM) catalogue to search for all oncology journals using the subject terms tag Neoplasms[ST]. This search, performed on 29 May 2019, identified 344 journals. The inclusion criteria required that journals were both in ‘English’ and ‘MEDLINE indexed’. We extracted the electronic ISSN (International Standard Serial Number), or the linking ISSN where the electronic ISSN was unavailable, for each journal to use in a PubMed search on 31 May 2019. We selected publications published between 1 January 2014 and 31 December 2018. This date range is consistent with that of Hardwicke et al (2014–2017), expanded to include the most recent complete year (2018) at the time of data extraction. Publications were evenly distributed across years. From the search returns, we selected a random sample of 300 publications using Excel’s random number function (https://osf.io/wpev7/).
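For readers who wish to re-run the sampling step, the following is a minimal Python sketch of an equivalent random draw. The input file name and fixed seed are illustrative assumptions; the original sample was drawn with Excel’s random number function, not this script.

```python
import random

# Minimal sketch of the sampling step: draw 300 publications at random,
# without replacement, from the pooled PubMed search results.
# "oncology_pubmed_ids.txt" (one PMID per line) and the seed value are
# illustrative assumptions, not the authors' actual workflow.
with open("oncology_pubmed_ids.txt") as f:
    pmids = [line.strip() for line in f if line.strip()]

random.seed(2019)  # fixed seed so this illustrative draw is repeatable
sample = random.sample(pmids, k=300)  # simple random sample, no replacement

with open("random_sample_300.txt", "w") as f:
    f.write("\n".join(sample))
```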
We used a pilot-tested Google Form based on the one provided by Hardwicke et al 25 with modifications (https://osf.io/3nfa5/). The first modification was the extraction of the 5-year impact factor and the impact factor of the most recent year, neither of which was extracted by Hardwicke et al. Second, additional study designs were added to include cohort studies, case series, secondary analyses, chart reviews and cross-sectional studies. Third, funding options were expanded to allow greater specification of university, hospital, public, private/industry or non-profit sources. When screening studies, we relied on the authors’ descriptions of their study designs.
The Google Form contained questions for investigators aimed at identifying whether a study demonstrated the information necessary to be reproducible (online supplementary table 1, table 1). Variations in study design changed the data that were extracted from each study. For example, publications with no empirical data (eg, editorials, commentaries (without reanalysis), simulations, news, reviews and poems) could not be examined for reproducibility characteristics. However, for all publications, the following data were extracted: title of publication, 5-year impact factor, impact factor of the most recent year, country of corresponding author and publishing journal, type of study participants (eg, human or animal), study design, author conflicts of interest, funding source, whether the publication claimed to be a replication study, and whether the article was open access (table 2). Publications with empirical data were examined for the following characteristics in addition to those stated above: material and data availability, analysis scripts and linkable protocol. Preregistration statements were further assessed in publications for which preregistration through trial databases, such as ClinicalTrials.gov, is the norm. Observational designs may also be registered on clinical trial registries. Systematic reviews and meta-analyses may be preregistered through PROSPERO. Preregistration for chart reviews and case studies and series is not typically performed. To our knowledge, there is not currently a registration site for preclinical studies,31 so we excluded these publications from examination of preregistration statements. Together, the eight key reproducibility and transparency indicators analysed were as follows: material availability, raw data availability, analysis scripts, linkable protocol, trial preregistration statements, author conflict of interest statement, funding source, and open access. Open access was determined using www.openaccessbutton.org, an online service that searches for open access versions of publications freely available to the public without a journal subscription. In the event a publication could not be found, investigators performed a Google search to see if the publication was freely available elsewhere. Novelty was assessed by examining whether each publication claimed to be novel, claimed to be a replication study or provided no statement related to study novelty. Web of Science was used to evaluate whether each examined publication (1) had been replicated in other works and (2) was included in future systematic reviews or meta-analyses.
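To make the extraction schema concrete, the following is a hypothetical Python sketch of one per-publication record, paraphrasing the fields listed above; the field names and types are illustrative assumptions, not the actual Google Form items.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a single extraction record. Field names are
# paraphrases of the items described in the text, not the real form fields.
@dataclass
class ExtractionRecord:
    title: str
    five_year_impact_factor: Optional[float]
    most_recent_impact_factor: Optional[float]
    corresponding_author_country: str
    journal_country: str
    participants: str          # eg, "human" or "animal"
    study_design: str
    conflict_of_interest: str  # disclosed / none declared / no statement
    funding_source: str
    claims_replication: bool
    open_access: bool
    # Assessed only for publications with empirical data:
    materials_available: Optional[bool] = None
    raw_data_available: Optional[bool] = None
    analysis_scripts_available: Optional[bool] = None
    protocol_available: Optional[bool] = None
    preregistered: Optional[bool] = None  # only where a registry exists
```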
Prior to data extraction, each investigator underwent a full day of training to increase the inter-rater reliability of the results between authors. This training consisted of an in-person session that reviewed the study design, protocol and Google Form. Investigators (CGW, NV) extracted data from three sample articles, and differences were reconciled following extraction. A recording of this training session is available online for reference (https://osf.io/tf7nw/). One investigator (CGW) extracted data from all 300 publications; ZJH extracted data for 200 publications and NV extracted data for 100 publications. CGW’s data were compared with ZJH’s and NV’s, with discrepancies resolved via group discussion. All authors were blinded to each other’s results. A final consensus meeting was held by all authors to resolve disagreements. If no agreement could be reached, final judgement was made by an additional author (DT). Our manuscript has been made available as a preprint at www.medRxiv.org (https://doi.org/10.1101/19001917).
Descriptive statistics were calculated for each category with 95% CIs using the Wilson formula for binomial proportions.32 The total count for each data point was reported in addition to its proportion of the whole sample.
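For reference, since the Wilson formula is cited but not written out: for k of n publications exhibiting a characteristic, with $\hat{p} = k/n$ and $z = 1.96$ for 95% coverage, the Wilson score interval is

$$\frac{\hat{p} + \frac{z^2}{2n}}{1 + \frac{z^2}{n}} \;\pm\; \frac{z}{1 + \frac{z^2}{n}} \sqrt{\frac{\hat{p}(1 - \hat{p})}{n} + \frac{z^2}{4n^2}}.$$

As a worked check against the Results, raw data being downloadable for 9 of 194 publications ($\hat{p} \approx 4.6\%$) gives an interval of approximately 2.5% to 8.6%.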
The NLM search identified 344 journals, but only 204 fit our inclusion criteria. Our initial search string retrieved 199 420 oncology publications, from which 300 were randomly sampled. In total, 296 publications were analysed for study reproducibility characteristics; four publications were not accessible and were thus excluded from our analysis. Of these 296 publications, 215 contained empirical data and 81 did not. Publications without empirical data could not be analysed for study reproducibility characteristics. Additionally, 21 publications with empirical data were case studies or case series, which cannot be replicated and were therefore excluded from the analysis of reproducibility characteristics. In total, we were able to extract study reproducibility characteristics for 194 oncology publications (figure 1).
In our sample of oncology publications, the publishing journals had a median 5-year impact factor of 3.445 (IQR 2.27–5.95). The majority (156/296, 52.7%) of journals were located in the USA. Over half (165/296, 55.8%) of the publications were available for free via open access networks; the remaining 131 publications (44.2%) were located behind a paywall, available only through paid reader access and thus inaccessible to the general public. A total of 109 publications (36.8%) made no mention of a funding source. Public funding (95/296, 32.1%), such as from state or government institutions, was the next most prevalent source of study funding. Authors disclosed having no conflicts of interest more frequently than they disclosed conflicts (174/296, 58.8% vs 57/296, 19.2%); however, 65 publications (22.0%) had no author conflict of interest statement. Human participants were the most common study population in the sample (154/296, 52.0%). Citation rates of these 296 publications by systematic reviews and meta-analyses can be found in table 2.
Only 21 publications (21/194, 10.8%) made their raw data available. Nine of these publications provided raw data directly downloadable by readers, while the rest made data available only on request from the corresponding author. Of these nine publications, only three provided complete raw datasets (online supplementary table 2). An expanded description of study materials required to reproduce the study (laboratory instruments, stimuli, computer software) was provided in 6/194 publications (3.1%). Of those publications with available materials, most (4/6) were only accessible to readers on request to the corresponding author, rather than being listed in a protocol or methods section. Two publications provided their materials as a supplement, but neither provided all of the materials necessary to replicate the study. None of the included publications made accessible their analysis scripts, which detail the steps the authors used to prepare the data for interpretation. Only five publications (5/194, 2.6%) provided a protocol detailing the a priori study design, methods and analysis plan. One publication (1/194, 0.5%) claimed to be a replication study; all remaining publications (193/194, 99.5%) claimed to be novel or did not provide a clear statement about being a replication study. Twenty-two publications (22/194, 11.3%) were cited within future systematic reviews/meta-analyses. Excluding preclinical publications (n=79), chart reviews (n=7), systematic reviews or meta-analyses (n=7) and publications with multiple study designs (n=13), for which preregistration with trial databases, such as ClinicalTrials.gov, would not be relevant, we found seven publications (7/88, 8.0%) with preregistration statements. Of these 88 publications, 15 were clinical trials; however, only 6 (6/88, 6.8%) were preregistered with ClinicalTrials.gov prior to commencement of the study. None of the systematic reviews and meta-analyses (n=7) were preregistered with PROSPERO. A subgroup analysis of the eight key reproducibility and transparency indicators demonstrated that 29 publications had 0 indicators, 62 publications had 1 indicator, 209 publications had 2–5 indicators and 0 publications had 6 or more.
Our cross-sectional investigation of a sample of the published oncology literature found that key reproducibility and transparency practices were lacking or entirely absent. Namely, we found that publications rarely preregistered their methods, published their full protocol, or deposited raw data and analysis scripts into a publicly accessible repository. Moreover, conflicts of interest were not discussed in approximately 20% of publications, and just under half (44.2%) of the included publications were not accessible due to journal paywalls. Given the challenges in understanding the molecular mechanisms that drive cancer, the continuum of research in the field of oncology is slow, laborious and inefficient.33 To combat these inherent obstacles, transferring outcomes and information from preclinical to clinical research demands consistency and precision across the continuum. Otherwise, publications downstream in the cancer research continuum may be based on spurious results incapable of independent confirmation due to a lack of access to study data, protocols or analysis scripts. Science advances more rapidly when people spend less time pursuing false leads;34 thus, for patients with cancer, for whom rapid scientific advancement is most significant, it is paramount that scientists, researchers and physicians advocate for an efficient research system that is transparent, reproducible and free from bias.
Preregistration of research study methods is a mechanism to improve the reproducibility of published results and prevent bias, whether from selective reporting of outcomes or selective publication of a study.35 Previously, it has been shown that the selective reporting of study endpoints affects the research portfolio of drugs or diseases.15 36 37 For example, Wayant et al found that 109 randomised controlled trials of malignant haematology interventions selectively reported their trial endpoints 118 times, with a significant portion doing so in a manner that highlighted statistically significant findings.36 Had trial registries not been available, these trials may never have been found to exhibit selective outcome reporting. Now, through trial registries, haematologists and other interested researchers are able to independently assess the robustness of not only study rationale and results, but also study rigour and reporting. The present study indicates that preregistration of study methods was rare, even among trials and systematic reviews that have available registries. The importance of preregistration across the continuum of cancer research cannot be overstated. For example, preclinical animal models serve as the foundation for clinical trials but have exhibited suboptimal methods,38 which may explain why animal study results fail to successfully translate to clinical benefit. In fact, it was recently shown that many phase 3 trials in oncology are conducted despite no significant phase 2 results.39 One possible explanation for why phase 3 trials proceed despite non-significant phase 2 results is the strong bioplausibility demonstrated in preclinical studies. If it is true that preclinical studies exhibit poor research methods, it is plausible that they are affected by selective outcome reporting bias, just like clinical research studies. Thus, to strengthen oncology research evidence, from foundational preclinical research to practice-changing trials, we recommend either the creation of relevant study registries or adherence to existing registration policies. In so doing, one key aspect of research, the accurate reporting of planned study endpoints, could be monitored, detected and mitigated.
Equally important to self-correcting, rigorous cancer research is the publication of protocols, raw data and analysis scripts. Protocols include much more information than study outcomes—they may elaborate on statistical analysis plans or decisions fundamental to the critical appraisal of study results.40 It is unlikely that anyone would be able to fully appraise a published study without access to a protocol, and far less likely that anyone would be capable of replicating the results independently. In fact, two recent efforts to reproduce preclinical studies revealed extant barriers to independent verification of published findings,20 41 including the absence of protocols, data and analysis scripts. Our present investigation found that only five (2.6%) studies published a protocol, nine (4.6%) fully published their data and none published their analysis scripts. In the context of the recent failures to reproduce cancer research publications, one may reasonably conclude that our study corroborates the belief that oncology research is not immune to the same shortcomings that contribute to an ever-expanding cohort of irreproducible research findings.42 Oncology research, like all biomedical research, is at an inflection point, wherein it may progress toward more transparent, reproducible, efficient research findings. However, in order to do so, the availability of protocols, data and analysis scripts should be considered fundamental.
In summary, we found that key reproducibility and transparency characteristics were absent from a random sample of published oncology studies. The implication of this finding is a research system that is incapable of rapid self-correction, or one that places a stronger emphasis on what is reported rather than what is correct. We recommend three key action items that we believe will benefit oncology research and all its stakeholders. First, require preregistration for eligible trials and systematic reviews, since these study designs have existing registries available, and support the development of registries for preclinical studies. Second, recognise that published reports are snapshots of a research study, and require that protocols be published. Last, encourage a scientific culture that relies on data that are true and robust, rather than on author reports of their data, by requiring the deposition of raw data, metadata and analysis scripts in public repositories.
This study has several strengths and limitations. First, for strengths, we sampled 300 published oncology articles indexed in PubMed. In doing so, we captured a diverse array of research designs in an even more diverse range of journals. As such, all oncology researchers can read our paper, glean useful information and enact changes to improve the reproducibility of new evidence. With respect to our limitations, our study is too broad to make absolute judgements about specific study designs. All signals of irreproducible research practices from our study fall in line with prior data from other areas of medicine,25–27 but they are nonetheless signals rather than answers. For example, an examination of the biomedical literature by Wallach et al found that less than 30% of publications provided study materials as a supplement; however, none of the available materials allowed for replication of the protocol or contained analysis scripts, and exactly one study (1/104) had a linkable protocol. Furthermore, about 18% provided data availability statements, yet none of these publications shared the complete raw data for the study.27 Similarly, an examination of the social sciences by Hardwicke et al found that no publications made their protocol publicly available, less than 2% provided the raw data, and exactly one publication had an accessible link to the study’s analysis scripts.25 Therefore, we suggest narrower investigations of the reproducibility of specific study designs, with trials and animal studies prioritised due to their potential influence (present or future) on patient care. Moreover, we do not suggest that irreproducible research findings are false; however, trust in the results may be diminished. Further, replicating (ie, reconducting) a study is not necessary in all cases to assess the rigour of the results. If a protocol, statistical analysis plan and raw data (including metadata) are available, one fundamental pillar of science would be reinforced: self-correction.
Contributors DT and MV developed the protocol and conceptualised the study. CWal, ZJH, CWay, NV, MW, JC, DT and MV conducted all literature searches. CWal, ZJH, CWay and NV conducted all statistical analyses. CWal and DT managed all data, including the management of the OSF repository. CWal, ZJH, CWay, NV, MW, JC, DT and MV participated in all writing. CWal, ZJH, CWay, NV, MW, JC, DT and MV are equally the guarantors of the study and the integrity of the data.
Funding This work was supported by the 2019 Presidential Research Fellowship Mentor—Mentee Program at Oklahoma State University Center for Health Sciences.
Competing interests MV is funded through the US Department of Health and Human Services Office of Research Integrity and the Oklahoma Center for the Advancement of Science and Technology.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available in a public, open access repository. All datasets, materials, and the protocol are available online at https://osf.io/usb28/