Commentary
Beyond publication bias

https://doi.org/10.1016/j.jclinepi.2010.09.003

Abstract

In drug development, clinical medicine, or health policy making, basing one’s decisions on a selective part of the available evidence can pose a major threat to the health of patients and to society. If, for example, primarily positive research reports are taken into account, one could wrongfully conclude that a harmful drug is safe. The systematic error introduced by summarizing evidence that is not representative of the available evidence is commonly referred to as “publication bias.” Some, however, prefer other terms for the same concept. In this article, we explore the terminology and concepts relevant to this bias and propose a more systematic nomenclature than is currently in use.

Section snippets

Background

What is new?

  • Scientific dissemination of evidence involves transforming scientific data into evidence by creating scientific reports, publishing and presenting these reports, and including them in databases and empirical evidence summaries.

  • Selective processes in the reporting (of outcomes and of entire studies), publication, and inclusion (in databases and reviews) of evidence can, but do not necessarily, lead to reporting bias, publication bias, and inclusion bias, respectively.

  • The common …

Distinguishing selectivity and bias

We will start by defining “scientific evidence dissemination” as the process by which scientific data are transformed into scientific evidence for the public domain. This transformation includes creating scientific reports, publishing and presenting these reports, and integrating them into databases and empirical evidence summaries. We explicitly use the term “scientific” and thereby exclude other types of (more informal) dissemination, such as the transfer of information through …

Assessment of dissemination selectivity and bias in meta-analysis

When exploring and summarizing evidence in a meta-analysis, only the presence or absence of data trends can be assessed; one cannot reliably attribute the trends to individual selective processes. The trends are thus reflections of selectivity in general, possibly occurring by chance or unrelated to the dissemination process [19], [20]. Although the types of selectivity cannot be distinguished, it is arguably important to “diagnose” its presence in general and assess its influence on the …
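
The commentary does not prescribe a specific method for this “diagnosis,” but a widely used way to detect such data trends is to test the funnel plot of a meta-analysis for asymmetry. The sketch below is an illustration added here and is not part of the original article: it applies Egger’s regression test to hypothetical study results, and the function name and data are assumptions made purely for the example.

    import numpy as np
    from scipy import stats  # SciPy >= 1.7 is assumed, for linregress(...).intercept_stderr

    def eggers_asymmetry_test(effects, standard_errors):
        """Egger's regression test for funnel-plot asymmetry.

        Regresses each study's standardized effect (effect / SE) on its
        precision (1 / SE). An intercept far from zero signals asymmetry,
        i.e. small-study effects that may reflect selective dissemination,
        although chance and heterogeneity are alternative explanations.
        """
        effects = np.asarray(effects, dtype=float)
        se = np.asarray(standard_errors, dtype=float)
        fit = stats.linregress(1.0 / se, effects / se)
        dof = len(effects) - 2
        t_stat = fit.intercept / fit.intercept_stderr
        p_value = 2.0 * stats.t.sf(abs(t_stat), df=dof)  # two-sided test on the intercept
        return fit.intercept, p_value

    # Hypothetical log odds ratios and standard errors from ten studies
    log_or = [0.41, 0.35, 0.60, 0.12, 0.55, 0.30, 0.70, 0.25, 0.48, 0.90]
    se     = [0.10, 0.12, 0.20, 0.08, 0.25, 0.15, 0.30, 0.11, 0.22, 0.35]
    intercept, p = eggers_asymmetry_test(log_or, se)
    print(f"Egger intercept = {intercept:.2f}, two-sided p = {p:.3f}")

Consistent with the point made above, a small p-value from such a test only flags selectivity (or chance, or heterogeneity) in general; it cannot distinguish between reporting, publication, and inclusion processes.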

Conclusion

In the scientific dissemination of evidence, selective processes in the reporting (of outcomes and of entire studies), publication, and inclusion (in databases and reviews) can be distinguished. These selective processes may or may not lead to reporting bias, publication bias, and inclusion bias, respectively. We prefer not to use publication bias as a summary term to refer to these biases, as it would constitute a “pars pro toto,” a term that names a part to describe the whole (and would, in …

Acknowledgments

The authors are grateful to Arno Hoes of the Julius Center for Health Sciences and Primary Care and to Toshihiko Satoh of the Kitasato Clinical Research Center for their feedback during the drafting of the manuscript. The authors also thank Nina Tsuneda of Lingua Editing for her editorial contributions. Karel G. Moons gratefully acknowledges the support of The Netherlands Organization for Scientific Research (91208004 and 91810615).

References (31)

  • J.P.T. Higgins et al. Cochrane handbook for systematic reviews of interventions (2008)
  • K. Dwan et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One (2008)
  • N. Rifai et al. Reporting bias in diagnostic and prognostic studies: time for action. Clin Chem (2008)
  • R.A. Davidson. Source of funding and outcome of clinical trials. J Gen Intern Med (1986)
  • C. Zielinski. New equities of information in an electronic age. BMJ (1995)

Competing interests: The authors declare that they have no conflicts of interest relevant to this article.
