Original Article
No evidence of bias in the process of publication of diagnostic accuracy studies in stroke submitted as abstracts
Introduction
Publication bias is defined as the “tendency to publish research results based on the strength and direction of a study's findings” [1]. It has been demonstrated that the publication of both observational and experimental studies is influenced by the characteristics of the study, and research findings are less likely to be published if they are shown to be negative rather than positive, or if they are based on small patient populations [2], [3], [4], [5]. The term “publication bias” is also used in the literature to refer to “other biases related to the time, type and language of publication, and multiple publications” [6]. Publication and other related biases may lead to an overestimation of the magnitude of treatment effects, and consequently, may affect decisions about patient management. They also represent a serious threat to the reliability of systematic reviews, which focus primarily on evidence from the published literature. Although there is a substantial literature on publication bias in systematic reviews of randomized controlled trials, there is little empirical evidence on the frequency and determinants of publication bias in systematic reviews of studies of diagnostic test accuracy [7]. Determinants of publication bias in studies of diagnostic test accuracy are likely to differ from those in clinical trials [8]. Because many aspects of stroke management (e.g., acute treatment, secondary prevention, investigation of complications) depend on the results of diagnostic tests, we considered it important to determine whether or not there was evidence of publication bias.
Publication bias in randomized controlled clinical trials has been evaluated by tracing cohorts of trials identified from ethics committees and investigating the determinants of publication within each cohort [1], [2], [3], [4]. Analogous cohorts of studies of diagnostic test accuracy are more difficult to identify because formal registration of diagnostic research, and consequently ethics approval, is not uniformly required. We therefore followed an alternative approach: we assembled a cohort of studies at a point further down the research process, where they were presented as conference abstracts, and investigated the determinants of subsequent full publication. Although this approach does not capture the full magnitude of publication bias, we hypothesized that the determinants of full publication at this stage would be similar.
We therefore sought: (1) to assess publication bias by determining what proportion of studies of diagnostic accuracy presented as abstracts at international stroke meetings were subsequently published in full in peer-reviewed journals; and (2) to assess which factors were predictive of time to publication.
Methods
We reviewed all proceedings of the International Stroke Conference and the European Stroke Conference between 1995 and 2004. All abstracts submitted to both conferences were peer reviewed blind to authorship before acceptance. Acceptance rates varied from year to year, with about 60% of all submitted abstracts accepted for presentation in the most recent years [9]. These proceedings were published as abstracts in special issues of Stroke and Cerebrovascular Diseases. Abstracts were selected
Statistical analyses
We performed Kaplan–Meier survival analysis to examine the relationship between abstract presentation and time to publication.
We examined potential factors predictive of time to publication, one at a time, using univariate Cox regression analyses, and expressed the results as hazard ratios. Time to publication was defined as the time from the date the study abstract was published in Stroke or Cerebrovascular Diseases to the date of the first peer-reviewed full publication identified in the
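The survival analysis described above can be illustrated with a minimal sketch of the Kaplan–Meier estimator, here tracking the "survival" of abstracts in the unpublished state; abstracts without a located full-text publication are treated as censored. The function and the data below are hypothetical illustrations, not the authors' actual analysis or cohort.

```python
# Minimal Kaplan-Meier sketch: the "event" is full-text publication, and
# abstracts never found in full text are censored observations.
def kaplan_meier(times, events):
    """Return [(time, survival_probability)] at each event time.

    times  -- months from abstract presentation to publication or censoring
    events -- 1 if a full-text publication was found, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        tied = [e for tt, e in data if tt == t]  # all observations at time t
        d = sum(tied)                            # publications at time t
        if d:
            surv *= 1 - d / n_at_risk            # product-limit step
            curve.append((t, surv))
        n_at_risk -= len(tied)                   # drop events and censorings
        idx += len(tied)
    return curve

# Hypothetical cohort of six abstracts (months to publication/censoring):
curve = kaplan_meier([6, 12, 12, 24, 36, 48], [1, 1, 0, 1, 0, 1])
```

At each step, the probability of remaining unpublished (`surv`) is multiplied by the fraction of at-risk abstracts not yet published, so `1 - surv` gives the cumulative publication probability over time.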
Results
One hundred and sixty abstracts met our inclusion criteria. Seventy-six percent (121) of all abstracts did not report on blinding, and 65% (104) did not mention study design. Approximately half of all abstracts did not provide estimates of sensitivity and specificity. Eighty-eight percent (141) reported “positive” diagnostic test results, whereas only 6% (nine) reported “non-informative” test results.
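For readers unfamiliar with the accuracy estimates that half of the abstracts failed to report, sensitivity and specificity are derived from a 2×2 table of index-test results against the reference standard. The counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative only: deriving sensitivity and specificity from a 2x2 table.
# tp/fp/fn/tn counts are hypothetical, not taken from the reviewed abstracts.
def accuracy_estimates(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # true-positive rate among diseased patients
    specificity = tn / (tn + fp)  # true-negative rate among non-diseased patients
    return sensitivity, specificity

sens, spec = accuracy_estimates(tp=90, fp=20, fn=10, tn=80)
# sens = 90/100 = 0.9, spec = 80/100 = 0.8
```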
We were able to locate a full-text publication for 117 abstracts in MEDLINE or EMBASE. Study
Discussion
In systematic reviews, it is crucial to identify all relevant published studies and to minimize possible publication and related biases. There is substantial empirical evidence of publication bias for randomized clinical trials. However, publication and related biases for studies of diagnostic accuracy have not been investigated to any great extent, including in the field of stroke, where diagnostic testing plays a major role in clinical practice. This study examined the
Acknowledgments
MB's work was supported by the Scottish Executive Health Department Chief Scientist Office. JD was funded by a Senior Scientist in Evidence Synthesis Award from the English Department of Health.
References (13)
- et al. Publication bias in clinical research. Lancet (1991)
- et al. The performance of tests of publication bias and other sample size effects in systematic reviews of diagnostic test accuracy was assessed. J Clin Epidemiol (2005)
- et al. Association between time interval to publication and statistical significance. JAMA (2002)
- The existence of publication bias and risk factors for its occurrence. JAMA (1990)
- et al. NIH clinical trials and publication bias. Online J Clin Trials (1993)
- et al. Evidence of publication bias in reporting acute stroke clinical trials. Neurology (2006)
Cited by (30)
Publication bias may exist among prognostic accuracy studies of middle cerebral artery Doppler ultrasound
2019, Journal of Clinical Epidemiology
Citation excerpt: For studies of diagnostic and prognostic tests, there is less evidence of such reporting practices, although several reviews have shown that studies with higher accuracy estimates reach full-text publication sooner than those reporting lower estimates [7,8]. Previous evaluations of conference abstracts of test accuracy studies have not identified significant associations between reported accuracy estimates and full-text publication [19,21,22]. Our study showed that systematic reviewers of prognostic accuracy studies can include a considerable amount of unpublished material as part of gray literature, if they invest in additional efforts to identify conference abstracts.
Reported estimates of diagnostic accuracy in ophthalmology conference abstracts were not associated with full-text publication
2016, Journal of Clinical Epidemiology
Citation excerpt: Among 418 diagnostic accuracy studies that were registered between 2006 and 2010 in ClinicalTrials.gov, a full-text publication could be identified for 54% [11]. In an evaluation of 160 conference abstracts describing diagnostic accuracy studies that were presented between 1995 and 2004 at two international stroke meetings, a full-text publication was found for 76%; no association was observed with reported accuracy estimates [12]. In a similar evaluation of 250 abstracts describing diagnostic accuracy studies that were presented in 2009 at three dementia conferences, a full-text publication was identified for only 39%, but potential associations with reported accuracy estimates were not assessed [13].
Thirty percent of abstracts presented at dental conferences are published in full: a systematic review
2016, Journal of Clinical Epidemiology

Literature survey of high-impact journals revealed reporting weaknesses in abstracts of diagnostic accuracy studies
2015, Journal of Clinical Epidemiology
Citation excerpt: The review team consisted of four researchers, all of them part of the STARD group (D.A.K., with 2 years of experience, J.F.C., with 4 years of experience, and L.H. and P.M.M.B., each with more than 10 years of experience in performing literature reviews of diagnostic accuracy studies). First, a longlist of 36 potentially relevant items was generated based on the STARD statement [20], the CONSORT for Abstracts checklist [4], the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for Abstracts checklist [5], QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) [21], existing guidance on the structured reporting and the assessment of the quality of journal abstracts in general [22–25], and previous studies evaluating the content of abstracts of diagnostic accuracy studies [13,14] (Appendix A at www.jclinepi.com). After this, each item on the longlist was discussed within the review team, and a subset of items deemed most relevant was selected based on general consensus.
Systematic reviews and meta-analyses of diagnostic test accuracy
2014, Clinical Microbiology and Infection

Uptake of methods to deal with publication bias in systematic reviews has increased over time, but there is still much scope for improvement
2011, Journal of Clinical Epidemiology
Citation excerpt: These differences can be attributed to the different approaches taken by authors to deal with perceived problems of publication bias in different fields. Brazzelli et al. (2009) examined the frequency and determinants of publication of studies of diagnostic accuracy submitted as abstracts at international stroke meetings and subsequently published in full in peer-reviewed journals [20]. They found that 76% of 160 abstracts were subsequently published in full, and that clinical utility of results or other study characteristics did not predict their publication.