
Using the CONSORT statement to evaluate the completeness of reporting of addiction randomised trials: a cross-sectional review
  1. Matthew Vassar,
  2. Sam Jellison,
  3. Hannah Wendelbo,
  4. Cole Wayant,
  5. Harrison Gray,
  6. Michael Bibens
  1. Psychiatry and Behavioral Sciences, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
  1. Correspondence to Dr Cole Wayant; cole.wayant{at}okstate.edu

Abstract

Objectives Evaluate the completeness of reporting of addiction randomised controlled trials (RCTs) using the Consolidated Standards of Reporting Trials (CONSORT) statement.

Setting Not applicable.

Participants RCTs identified through a PubMed search of 15 addiction journals, covering a 5-year cross-section.

Outcome measures Completeness of reporting.

Results Our analysis of 394 addiction RCTs found that the mean number of CONSORT items reported was 19.2 (SD 5.2), out of a possible 31. Twelve items were reported in <50% of RCTs; similarly, 12 items were reported in >75% of RCTs. Journal endorsement of CONSORT was associated with a greater number of CONSORT items reported.

Conclusions Poor reporting quality may prevent readers from critically appraising the methodological quality of addiction trials. We recommend journal endorsement of CONSORT, since our study and previous studies have shown that CONSORT endorsement improves the quality of reporting.

  • clinical trials
  • reporting quality
  • CONSORT
  • addiction

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • Application of robust methodology, as recommended by the Cochrane Collaboration.

  • 15 addiction journals were included over a 5-year period.

  • Cross-sectional design, limited to addiction journals, reduces generalisability of our findings.

Introduction

Complete and clear reporting of research studies is essential for readers to fully appreciate and evaluate a study’s methodological rigour. Complete reporting is also necessary to determine the applicability of findings to patient care. Sims et al 1 likened poor reporting to blinding readers when important methodological details or results are omitted from published reports. Moher et al argue that “inadequate reporting borders on unethical practice when biased results receive false credibility”.2 Previous studies have found that clinical trial interventions are insufficiently reported to permit replication or to allow physicians to enact the intervention in the clinical setting.3 4 Others have found that the poor reporting of systematic reviews does not even permit the initial searches to be replicated.5 Thus, across the clinical research spectrum, reporting is variable, but often suboptimal and in need of improvement.

To address reporting deficiencies, researchers have developed reporting guidelines which provide best-practice guidance to study authors on reporting pertinent information for various study designs. The Consolidated Standards of Reporting Trials (CONSORT) statement6 is an evidence-based set of 25 items that provides specific guidance for reporting randomised trials and has an accompanying flow diagram to document the flow of participants throughout a trial. CONSORT has been widely adopted by 585 journals; over 50% of core medical journals listed in the Abridged Index Medicus on PubMed currently endorse or require CONSORT.7 We found only four addiction journals listed as endorsers on the CONSORT website.

In this study, we evaluate the completeness of reporting of addiction clinical trials, an area of study in which little is known about reporting practices. We used the CONSORT statement as the basis for this investigation, as CONSORT is widely recognised as the authoritative source for trial reporting. Results from this investigation will assist in identifying areas well reported within addiction trials and areas where improvements are needed. We also evaluate whether particular trial characteristics are associated with more complete reporting.

Methods

We conducted a cross-sectional study of published addiction clinical trials; therefore, our study was not subject to Institutional Review Board oversight as it did not meet the regulatory definition of human subjects research. For purposes of reporting, we followed the reporting guidelines for meta-epidemiological studies8 and, when relevant, the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines.9

Bibliographic database searches and journal selection

One investigator (MV) searched PubMed (which includes the MEDLINE collection) on 22 June 2018. This search was conducted to identify clinical trials published between 1 January 2013 and 31 December 2017 using PubMed’s Clinical Trial[ptyp] filter. This filter has been shown to maximise sensitivity to ensure relevant studies are not excluded.10 Journals listed in the addiction category of Google Scholar metrics were selected based on their h5-index. Beginning with the journal with the highest h5-index, we conducted PubMed searches to see whether each journal had published at least 10 clinical trials. We continued this process until 15 journals were selected.

We deployed the final search string as follows: (((((((((((((("Addiction (Abingdon, England)"[Journal]) OR ("Drug and alcohol dependence"[Journal])) OR ("Nicotine & tobacco research : official journal of the Society for Research on Nicotine and Tobacco"[Journal])) OR "Addictive behaviors"[Journal]) OR ("Alcoholism, clinical and experimental research"[Journal])) OR "Psychology of addictive behaviors : journal of the Society of Psychologists in Addictive Behaviors"[Journal]) OR "Addiction biology"[Journal]) OR ("Journal of studies on alcohol and drugs"[Journal])) OR "The International journal on drug policy"[Journal]) OR ("Drug and alcohol review"[Journal])) OR ("Alcohol and alcoholism (Oxford, Oxfordshire)"[Journal])) OR "Journal of substance abuse treatment"[Journal]) OR "Alcohol (Fayetteville, N.Y.)"[Journal]) OR "The American journal on addictions"[Journal]) OR "Substance use & misuse"[Journal] AND (Clinical Trial[ptyp] AND ("2013/01/01"[PDat] : "2017/12/31"[PDat])).
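The string above ORs together one [Journal] term per included journal and ANDs the publication-type and date filters. A minimal sketch of how such a query could be assembled programmatically is shown below; the `build_query` helper is hypothetical and the journal list is truncated for brevity, so this is not part of the original methods:

```python
# Hypothetical sketch: assembling a PubMed query like the one in the text
# from a journal list plus the Clinical Trial[ptyp] and date filters.
# The journal list is truncated for illustration.
JOURNALS = [
    "Addiction (Abingdon, England)",
    "Drug and alcohol dependence",
    "Nicotine & tobacco research : official journal of the Society for Research on Nicotine and Tobacco",
]

def build_query(journals, start="2013/01/01", end="2017/12/31"):
    """OR together [Journal] terms, then AND the publication-type and date filters."""
    journal_clause = " OR ".join(f'"{j}"[Journal]' for j in journals)
    date_clause = f'("{start}"[PDat] : "{end}"[PDat])'
    return f"(({journal_clause}) AND (Clinical Trial[ptyp] AND {date_clause}))"

query = build_query(JOURNALS)
```

Building the clause from a list avoids the quote and bracket mismatches that creep in when a long boolean string is edited by hand.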

Screening records for eligibility

To be eligible for inclusion, a study must have reported the use of a randomised clinical trial design and address one of the following related to drugs, alcohol or tobacco: (1) addiction prevention, (2) stabilisation following excessive use of a substance (drugs, alcohol or tobacco), (3) relapse prevention or (4) recovery maintenance. For purposes of this study, the National Institutes of Health definition of clinical trial was used to determine inclusion, which involves the prospective assignment of participants to an experimental condition using randomisation and the testing of an intervention’s effects.11 We eliminated other study types, including observational study designs (eg, case–control and cohort studies), systematic reviews and meta-analyses, and case reports. We also excluded letters to the editor, other editorials, commentaries and perspectives articles.

Two investigators (SJ and HW) screened all studies for eligibility in an independent, blinded fashion, consistent with our previous investigations.12–14 We used Rayyan, an online systematic review application, to screen PubMed records for eligibility with the blinding feature turned on. After the initial screening process was completed, the two investigators held a consensus meeting to review the screening decisions and resolve disagreements by discussion.

Data extraction and scoring

Two investigators (SJ and HW) performed blinded, double data extraction. As with screening, a consensus meeting was held after completion of the data extraction process to review and resolve discrepancies. The following items were extracted from each article: journal, year of publication and funding source. We next evaluated each item of the CONSORT Statement, which can be found in table 1. For each included journal, we manually reviewed the Instructions for Authors page (or equivalent) to determine if CONSORT was endorsed.
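The double-extraction workflow described above can be sketched as follows. The `flag_disagreements` helper and the item-level data are invented, illustrating only how item-by-item discrepancies might be surfaced for a consensus meeting:

```python
# Hypothetical sketch of blinded double data extraction: two extractors
# record yes/no for each CONSORT item; disagreements are flagged for the
# consensus meeting. Item keys (e.g. "7a") follow CONSORT numbering;
# the data are invented.
def flag_disagreements(extractor_a: dict, extractor_b: dict) -> list:
    """Return the CONSORT items on which the two extractors disagree."""
    return sorted(item for item in extractor_a
                  if extractor_a[item] != extractor_b.get(item))

a = {"4a": True, "7a": False, "24": False}
b = {"4a": True, "7a": True, "24": False}
to_resolve = flag_disagreements(a, b)  # items to resolve by discussion
```

Only the flagged items need review, which keeps the consensus meeting focused on genuine discrepancies rather than the full checklist.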

Table 1

Included journals (ordered by Google Scholar ranking) and mean adherence to CONSORT items (n=31)

We planned a multiple regression analysis to investigate the association between funding source, journal and journal endorsement of CONSORT on individual trial CONSORT scores. This regression analysis was not feasible owing to the large predominance of public funding and collinearity among predictors. Therefore, we conducted an independent sample t-test to compare the mean CONSORT score for trials published in CONSORT-endorsing journals and non–CONSORT-endorsing journals. We further conducted a one-way ANOVA, with Bonferroni adjustments, to compare trials related to drug, alcohol, tobacco or mixed (eg, co-occurring alcohol and tobacco) addictions. All analyses were conducted using Stata 15.1.
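As an illustration of the fallback comparison, the sketch below computes a between-group mean difference with a normal-approximation 95% CI using only the Python standard library. The scores are invented, and the normal approximation (z=1.96) is a simplification of the independent sample t-test actually run in Stata:

```python
import math
from statistics import mean, variance

def mean_diff_ci(group_a, group_b, z=1.96):
    """Mean difference (a minus b) with a normal-approximation 95% CI.

    A simplified stand-in for the independent sample t-test in the text."""
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(variance(group_a) / len(group_a)
                   + variance(group_b) / len(group_b))
    return diff, (diff - z * se, diff + z * se)

# Invented CONSORT adherence scores (out of 31) for illustration only
non_endorsing = [14, 16, 15, 18, 13, 17]
endorsing = [20, 22, 19, 23, 21, 18]
diff, (lo, hi) = mean_diff_ci(non_endorsing, endorsing)
```

A CI that excludes zero, as in the study's reported result, is what licenses the claim that the two groups differ.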

Results

Our database search returned 1546 records, of which 394 RCTs were eventually included (figure 1). A full list of included RCTs can be found online (https://osf.io/cy5j3/). The 394 RCTs were most often published in Drug and Alcohol Dependence (n=73), Addiction (n=65), and Nicotine and Tobacco Research (n=61). Included RCTs were most often funded by public sources (eg, government) (n=315).

Figure 1

Flow diagram of included and excluded studies. RCT, (drug, alcohol, and tobacco) randomised controlled trial.

CONSORT compliance

Figure 2 presents a histogram that summarises the distribution of trials obtaining particular CONSORT compliance scores. The mean number of CONSORT items reported was 19.2 (SD 5.2), out of a possible 31. The adherence to CONSORT for all included trials, stratified by journal, is shown in table 1. Twelve items were reported in <50% of RCTs (table 2), including such items as where a protocol can be accessed (item 24) and sample size estimations (item 7a). Similarly, 12 items were reported in >75% of RCTs, including important items like sources of funding (and role of funders) (item 25), eligibility criteria (item 4a), and a balanced interpretation of harms and benefits (item 22).
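The item-level tallies reported here can be illustrated with a short sketch. The `item_rates` helper and the trial records are invented; only the <50% and >75% thresholds mirror the text:

```python
# Hypothetical sketch: given per-trial records of which CONSORT items were
# reported, compute each item's reporting rate and bin it using the
# thresholds from the text (<50% vs >75%). Data are invented.
from collections import Counter

def item_rates(trials: list) -> dict:
    """Fraction of trials reporting each CONSORT item."""
    counts = Counter(item for t in trials
                     for item, reported in t.items() if reported)
    return {item: counts[item] / len(trials) for t in trials for item in t}

trials = [
    {"4a": True, "7a": False, "24": False},
    {"4a": True, "7a": True, "24": False},
    {"4a": True, "7a": False, "24": False},
    {"4a": True, "7a": False, "24": True},
]
rates = item_rates(trials)
poorly_reported = [i for i, r in rates.items() if r < 0.5]
well_reported = [i for i, r in rates.items() if r > 0.75]
```

In this toy sample, eligibility criteria (4a) would fall in the well-reported bin while sample size estimation (7a) and protocol access (24) would fall in the poorly reported bin, matching the pattern the study observed.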

Figure 2

Histogram of trial adherence to CONSORT.

Table 2

Adherence to each CONSORT item

Our pre-planned multiple regression investigating the association of journal, CONSORT endorsement and funding source with adherence to CONSORT was not feasible because of the large disparities in funding source group sizes (public: n=315/394, 79.9%) and collinearity of journal adherence to CONSORT as a predictor. Therefore, we conducted an independent sample t-test comparing the mean CONSORT adherence for trials published in CONSORT-endorsing journals and trials published in non–CONSORT-endorsing journals. The mean difference between the two groups was −4.5 (95% CI −5.49 to −3.55) items, indicating that trials published in CONSORT-endorsing journals adhere to significantly more items than other trials. On comparison of RCTs related to drug, alcohol, tobacco or mixed addictions, we found that drug dependence RCTs (n=111) had the highest mean CONSORT score (20.0, SD 4.7) and alcohol dependence RCTs (n=117) had the lowest mean CONSORT score (18.2, SD 5.6). The mean difference between these two cohorts was 1.9 CONSORT items and was statistically significant (p=0.04). No other mean differences were significant.

Discussion

In this investigation of trial reporting, 12 CONSORT items were reported less than 50% of the time in RCTs published in addiction journals. Previously, it has been shown that low-quality studies may be incorporated into meta-analyses, thus biasing downstream treatment effects.15 Further, bias associated with key trial characteristics, such as allocation concealment, has been shown to exaggerate trial summary effects.16–18 Additional forms of bias, such as selective outcome reporting bias,19–22 are prevalent across biomedicine. Consequently, poor reporting quality may render readers, who are likely aware of at least one form of bias prevalent in RCTs, incapable of critically appraising the validity of addiction RCT results. However, our study also showed that journal endorsement of CONSORT was associated with better reporting of RCT items. It is possible that this result is confounded by journal impact factor, but we are reassured of the effect of CONSORT endorsement by previous studies. A systematic review of 53 published studies found that overall, reporting quality in RCTs is suboptimal but that journal endorsement of CONSORT is an intervention that has proven benefit.23 Namely, journal endorsement of CONSORT greatly improved the reporting of allocation concealment, scientific rationale for the trial, sample size estimations and method of sequence generation.

Other than our study, evaluations of the completeness of reporting of RCTs in addiction science have been limited. Our study found that mean CONSORT adherence was approximately two-thirds of included CONSORT items and that journal endorsement of CONSORT resulted in higher mean CONSORT adherence by included trials. One previous study24 investigated the completeness of reporting of 127 alcohol treatment outcome RCTs. Trials published in Addiction, Alcohol and Alcoholism, Drug and Alcohol Dependence, and Journal of Consulting and Clinical Psychology—all CONSORT-endorsing journals—were compared with trials published in Alcoholism: Clinical and Experimental Research, Journal of Studies on Alcohol and Drugs, Journal of Substance Abuse Treatment, and Psychology of Addictive Behaviors—non-endorsing journals. The authors reported that improvements in trial reporting over time were noted in both groups; however, endorsing journals experienced improvements over time for reporting random assignment, masking, participant flow and statistics. In contrast, the trend over time for non-endorsing journals was not statistically significant for any of these item subgroups. Results from this study formed the basis for a policy change at Alcoholism: Clinical and Experimental Research,25 which began requiring clinical trialists to adhere to CONSORT for trial reporting. Two narrative reviews26 27 accompanied the editorial and discussed the importance of improved trial reporting and design for alcohol use disorders. A 2019 investigation of the Instructions for Authors sections of 88 addiction journals found that less than a quarter of the journals endorsed adherence to various reporting guidelines, with CONSORT endorsement being highest at only 14.8% of journals.28 In response to these findings, the authors stated, “there is an urgent need to improve the author instructions segment of addiction science journals so that the process of research dissemination can occur more effectively”.28

In our study, trials published in Addiction—a CONSORT-endorsing journal—received the highest composite scores on overall reporting. We speculate two possibilities here. First, Addiction provides explicit directions for research reporting in its instructions to authors. Multiple reporting guidelines are mentioned by name. The EQUATOR Network, the international initiative devoted to improving the quality of research reporting, is also referenced. Previous studies have confirmed that when journals provide detailed guidance to authors, the quality of research reporting improves.29 30 Second, Addiction encourages authors to use Penelope (www.penelope.ai), a tool created by the EQUATOR Network, to perform an automated inspection of a manuscript for compliance with various reporting guidelines. Penelope generates a report to authors that assesses structure, declarations, statistics, referencing and other common reporting errors prior to manuscript submission to the journal. While we are unaware of any published studies that evaluate Penelope, we surmise that its simplicity of use and quick feedback may prompt investigators to make alterations to their manuscripts prior to journal submission. Empirical evaluations of Penelope are recommended.

While in this discussion we focus on the issues of trial reporting at large, our results confirm that specific items are particularly problematic. The CONSORT explanation and elaboration document outlines in detail the rationale for, and importance of, each item.31 Many items relate to reporting methodological information, such as randomisation (items 8a, 8b, 9, 10) and blinding (item 11a). None of the randomisation or blinding items were reported at a high rate, with the most reported item relating to the method of randomisation (8a) and the least reported item relating to who was blinded (11a). Other items relate to the availability of published protocols (item 24) or trial registration numbers (item 23) that can be used to inspect the possibility of biases such as selective outcome reporting or questionable trial alterations. These items were also poorly reported, especially item 24 regarding protocols. Only 44.7% (176/394) of included RCTs provided a registration number, while 8.9% (35/394) directed readers to a protocol.

Our study has both strengths and limitations. Regarding strengths, we applied gold standard systematic review methodology recommended by the Cochrane Collaboration32 for study screening and data extraction—both were done in a blinded, duplicate fashion. Furthermore, we included a large number of journals relative to other investigations that restricted their samples to five or so journals. We also included a larger sample of trials than similar investigations across other clinical disciplines. Taken together, these strengths lend credibility to the validity of our data and, thus, the robustness of our conclusions. Regarding limitations, our study design is cross-sectional. Our results should be interpreted descriptively, and caution should be taken when generalising our findings outside the scope of our sample. Additionally, we only examined articles published in addiction journals, which do not completely encompass all published addiction trials. This may have led to an underestimation of CONSORT adherence, as other trials may have been published in journals with stricter reporting requirements. It is also possible that confounding factors, rather than CONSORT endorsement, influenced our results. We did not specifically account for funding source, and funders—such as the National Institutes of Health—may have their own reporting requirements outside of CONSORT that influenced results.33 Some CONSORT items are subjective and may be interpreted differently from how we interpreted them. While we applied the greatest standardisation possible, this subjectivity should be carefully considered when interpreting results from our study.

In conclusion, our study found inconsistencies in the completeness of reporting of RCTs published in addiction journals. To ensure that all trial evidence generated for the prevention, treatment or management of addiction can be critically appraised by all stakeholders, we recommend all addiction journals require trial authors to consult the CONSORT checklist prior to submission. Turner et al’s23 Cochrane review found no evidence that journal endorsement hinders the completeness of RCT reporting. Further, the authors of this review argue that journals are not sending clear messages to authors and that the fidelity of endorsement of reporting guidelines by journals has been weak. Explicit guidance and follow-up from addiction journals may, thus, lead to the publication of RCTs which are better reported, better interpreted and better implemented in the clinical setting.


Footnotes

  • Contributors MV and MB conceptualised and designed the project. SJ, HW and HG participated in data extraction and analysis. CW conducted all statistical analyses. All authors participated in writing the manuscript and give final approval.

  • Funding The research results discussed in this publication were made possible in total or in part by funding through the award for project number HR18-119, from the Oklahoma Center for the Advancement of Science and Technology.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available on reasonable request.
