

Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications

BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7400.1171 (Published 29 May 2003) Cite this as: BMJ 2003;326:1171
  1. Hans Melander, senior biostatistician (hans.melander{at}mpa.se)1,
  2. Jane Ahlqvist-Rastad, senior medical officer1,
  3. Gertie Meijer, documentalist1,
  4. Björn Beermann, professor1
  1 Medical Products Agency, Box 23, S-751 03 Uppsala, Sweden
  Correspondence to: H Melander
  • Accepted 6 March 2003

Abstract

Objectives To investigate the relative impact of multiple publication, selective publication, and selective reporting on publication bias in studies sponsored by pharmaceutical companies.

Design 42 placebo controlled studies of five selective serotonin reuptake inhibitors submitted to the Swedish drug regulatory authority as a basis for marketing approval for treating major depression were compared with the studies actually published (between 1983 and 1999).

Results Multiple publication: 21 studies contributed to at least two publications each, and three studies contributed to five publications. Selective publication: studies showing significant effects of drug were published as stand alone publications more often than studies with non-significant results. Selective reporting: many publications ignored the results of intention to treat analyses and reported the more favourable per protocol analyses only.

Conclusions The degree of multiple publication, selective publication, and selective reporting differed between products. Thus, any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.

Introduction

Drug treatment should rely on solid evidence, and it is now generally recognised that the standard basis for treatment guidelines is systematic literature reviews or meta-analyses of all randomised controlled trials. However, as meta-analyses are usually limited to publicly available data, several factors can give rise to biased conclusions. These include selection of studies submitted or accepted for publication,1 2 inclusion of undetected duplicate publications,3 4 and selective reporting (such as failure to report intention to treat results). Several actors (editors, investigators, and sponsors) affect whether and how scientific results reach the public domain. In clinical trials of drugs the role of the sponsor is especially important. The sponsor usually has access to all data on a specific product and has an obvious conflict of interest.5

Several authors have provided direct evidence of publication bias by investigating the publication status of protocols submitted to ethics committees or research organisations.6-11 These investigators did not, however, examine whether there was a biased selection of results reported in the studies that were eventually published. The objective of our study was to investigate the relative impact of multiple publication, selective publication, and selective reporting on bias in studies sponsored by the pharmaceutical industry.

Material and methods

Studies submitted to drug regulatory authority

Five selective serotonin reuptake inhibitors were approved in Sweden between 1989 and 1994 for treating major depression. Forty two short term (4-8 weeks) placebo controlled clinical trials with the approved doses were submitted to the Swedish drug regulatory authority and formed the basis for the approvals. When applying for marketing authorisation, applicants are obliged to submit full reports of all studies they have performed, as well as all available information on any study performed by others. Thus, it is reasonable to assume that the submitted studies have not been subject to selection bias.

Studies published

We identified published versions of the submitted studies through a computer aided search in Medline (PubMed), Embase, and PsycINFO (Psychological Abstracts); scrutiny of reference lists with special focus on review articles and meta-analyses; and inquiries to the sponsoring companies. For each submitted study, we investigated the publication status and the degree of multiple publication. We classified a published article reporting results from a single submitted study as a stand alone publication, whereas we classified articles based on data from two or more submitted studies as pooled publications.

Comparison of studies

We chose the percentage of patients responding to treatment as the criterion for comparing results from submitted reports with those from published articles. In most studies response was defined as at least a 50% reduction from the initial score on the Hamilton depression rating scale (HDRS); in four studies response rates were based on the Montgomery Åsberg depression rating scale or the clinical global impression of change. In the pooled analyses of response rates we combined the estimates from the individual studies, weighting each by the inverse of the variance of its estimate.12
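To make the weighting explicit, the following is a minimal sketch (not the authors' code) of a fixed effect, inverse variance pooled analysis of response-rate differences; the study counts and function names are hypothetical.

```python
# Minimal sketch of the inverse-variance weighting described above.
# Each study tuple is (responders on drug, n drug, responders on placebo, n placebo);
# all numbers are hypothetical, not taken from the submitted studies.

def risk_difference(resp_drug, n_drug, resp_placebo, n_placebo):
    """Response-rate difference (drug minus placebo) and its variance."""
    p1 = resp_drug / n_drug
    p0 = resp_placebo / n_placebo
    rd = p1 - p0
    var = p1 * (1 - p1) / n_drug + p0 * (1 - p0) / n_placebo
    return rd, var

def pooled_risk_difference(studies):
    """Fixed effect pooled estimate, weighting each study by 1/variance."""
    estimates = [risk_difference(*s) for s in studies]
    weights = [1.0 / var for _, var in estimates]
    pooled = sum(w * rd for (rd, _), w in zip(estimates, weights)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

studies = [(30, 60, 20, 58), (45, 90, 32, 88), (18, 40, 15, 42)]
estimate, ci = pooled_risk_difference(studies)
print(f"pooled difference {estimate:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```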

Results

We identified 38 publications presenting data from 38 of the 42 studies submitted to the drug regulatory authority.13-50 They were published between 1983 and 1999 and included duplicate publications and pooled analyses. The sponsoring companies confirmed the completeness of our search.

Multiple publication

Figure 1 shows the degree of multiple publication: this varied from no duplicate publication (drug 3) to extensive multiple publication (drug 1), with three stand alone publications appearing twice and two subsets of studies each published as pooled publications three times.

Fig 1 Publication pattern for studies of the five selective serotonin reuptake inhibitors approved in Sweden between 1989 and 1994 for treating major depression

For drug 1, there were no cross references between the pooled analyses of the same subsets of studies. For each of the subsets, the first author was different in two of the pooled analyses, and the third publication had a single author. Many of the studies had appeared previously as stand alone publications, but reference to these in the pooled publications was given in two cases only, once for each subset. Some of the analyses were presented as a pooled analysis of stand alone centres and some as a multicentre study. For both subsets of studies, the pooled results differed slightly between the publications.

For drug 2, eight studies resulted in three pooled publications based on different combinations of studies (two, five, and all eight studies). The pooled analyses based on two and eight studies appeared simultaneously (as “a double blind comparison” and “a large multicentre study” respectively), with one author in common but without cross reference. Later, the five study analysis was presented as an intention to treat reanalysis of the per protocol analysis in the eight study publication, without revealing that three studies had been omitted. Nor was it mentioned that two of the included studies had been published earlier as stand alone publications.

The pooled publication of studies of drug 4 was denoted as a review of multicentre controlled studies without identification of the included studies. Two of the studies later appeared as stand alone publications without acknowledgement of their earlier inclusion in a pooled publication. There was no author name in common in the pooled and stand alone publications.

For drug 5, the pooled analysis was presented as a meta-analysis of the five available placebo controlled studies, clearly identified by the name of the principal investigator. Reference was given to one previous stand alone publication. The other stand alone publication appeared seven years later without reference to the pooled publication.

Selective publication

Of the 42 submitted studies, 21 found the test drug to be significantly more effective than placebo in the primary variable (fig 1). Nineteen of these studies appeared as stand alone publications. Only six of the 21 studies not showing significant results were published as stand alone publications. All four studies that never reached the public domain showed non-significant results with respect to the primary variable.

Selective reporting

All but one of the study reports submitted to the regulatory agency presented results from two or more alternative analyses (intention to treat and per protocol). Only two of the stand alone publications presented an intention to treat analysis as well as a per protocol analysis. The remaining stand alone publications presented only one analysis, which tended to be the more favourable per protocol analysis. In the 15 stand alone and five pooled publications reporting differences in percentage response, patients who withdrew or who could not be evaluated were usually ignored in the calculations of the response rates. As figure 2 shows, this could result in large overestimates compared with the intention to treat analysis based on the submitted reports, where patients who withdrew or could not be evaluated were considered to be non-responders. In one extreme case the published difference in the percentage of patients responding to treatment was 51%, whereas no difference was seen in the intention to treat analysis. In five other cases the size of the overestimation was 10-25%. The degree of overestimation tended to be higher in smaller studies.
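A small worked example (hypothetical numbers, not taken from any of the submitted studies) shows how ignoring withdrawn or unevaluable patients inflates the apparent difference relative to the intention to treat calculation.

```python
# Hypothetical illustration of why excluding withdrawals inflates the
# published response-rate difference relative to intention to treat,
# where withdrawn or unevaluable patients count as non-responders.

randomised_drug, completers_drug, responders_drug = 50, 30, 24
randomised_plac, completers_plac, responders_plac = 50, 40, 20

# Per protocol style: denominators are completers only
pp_diff = responders_drug / completers_drug - responders_plac / completers_plac

# Intention to treat: denominators are all randomised patients
itt_diff = responders_drug / randomised_drug - responders_plac / randomised_plac

print(f"per protocol difference: {pp_diff:.0%}")         # 30%
print(f"intention to treat difference: {itt_diff:.0%}")  # 8%
```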

Fig 2 Difference in estimated size of treatment effect (% response to drug minus response to placebo) from published studies and estimate from intention to treat analysis of submitted studies, plotted against total sample size

Comparison of pooled results from submitted and published studies

In 41 of the 42 submitted studies data on response rate were provided or could easily be calculated on an intention to treat basis. In total, 15 stand alone publications and five pooled publications reported response rates based on data from 32 studies. For each drug, we compared a pooled analysis of all studies submitted to the regulatory agency with a pooled analysis of a correct selection of published studies in which all duplicates were excluded. We also made a pooled analysis of published studies including those duplicates that probably could not be identified as such without access to information about all studies. In this second selection we excluded duplicates with at least one author in common and only minor differences with respect to patient numbers and efficacy results but included any duplicates unidentifiable because of lack of cross reference between pooled publications and stand alone publications.
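Schematically, this comparison amounts to running the same inverse variance pooled analysis over three selections of studies. The sketch below reuses the pooled_risk_difference function from the earlier sketch; the study lists are hypothetical and serve only to illustrate the three selections.

```python
# Schematic version of the three pooled analyses compared in fig 3,
# reusing pooled_risk_difference from the earlier sketch.
# All study tuples are hypothetical.

all_submitted = [(30, 60, 20, 58), (45, 90, 32, 88),
                 (18, 40, 15, 42), (22, 55, 21, 54)]
correct_selection = all_submitted[:3]                         # published studies, duplicates excluded
plausible_selection = correct_selection + [all_submitted[0]]  # one undetected duplicate counted twice

for label, selection in [("all submitted studies", all_submitted),
                         ("correct selection of published studies", correct_selection),
                         ("plausible selection of published studies", plausible_selection)]:
    estimate, ci = pooled_risk_difference(selection)
    print(f"{label}: {estimate:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```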

The pooled analyses of published data generally gave larger differences in response rate (drug minus placebo) than did the estimates from all submitted studies (fig 3). The comparison was most striking for two products. The estimate based on evidence from all available studies of drug 2 indicated a marginal effect, whereas the pooled analysis of published data gave an estimated effect size similar to that for most of the other drugs. Similarly, the analyses based on published data gave the impression that drug 4 was substantially more effective than the other drugs, whereas the analysis based on submitted studies did not. Since the estimates based on the published studies for these two drugs included data from all the submitted studies, the overestimations are due to selective reporting rather than selective publication. Overall, there were only minor differences in response rates between the correct and the plausible selections of published studies. Thus, in this material there is no indication of any major bias due to multiple publication.

Fig 3 Differences (95% confidence intervals) in response rate (% response to drug minus response to placebo) from pooled analyses of all submitted studies, correct selection of published studies (duplicates excluded), and plausible selection of published studies (including probably undetectable duplicates)

Discussion

In a cohort of studies submitted to the Swedish regulatory agency to secure marketing approval for five selective serotonin reuptake inhibitors for the treatment of major depression we have found evidence of duplicate publication, selective publication, and selective reporting. There was a high frequency of duplication due to the inclusion of different subsets of studies in several pooled publications. Studies showing significant differences between efficacy of drug and placebo were three times more likely to appear as stand alone publications than were studies with non-significant results. Although both intention to treat analyses and per protocol analyses were available in the submissions to the regulatory agency, only 24% of the stand alone publications reported the usually less favourable intention to treat results. In our material this selective reporting was the major cause for bias in overall estimates based on published data.

Strengths and limitations of study

To our knowledge, access to full reports and study protocols for all studies, published as well as unpublished, is unique to our investigation. This has enabled us to study the impact of different sources of publication bias. It also allowed us to elucidate the sometimes complex pattern of publications. Our investigation is restricted to one class of antidepressant drugs, but there is no reason to believe that drug manufacturers have different policies for reporting and publishing studies of different drugs. Indeed, in a review of an antiemetic drug a similar pattern of duplicate publication has been reported.4 Thus, our results are likely to be valid for other classes of drugs with a similar structure of the efficacy documentation—that is, several studies with small to medium sample size.

The percentage of submitted studies with full stand alone publications in our investigation (60%) is similar to what has been reported by others: in a review of five similar investigations the percentage of full publications ranged from 48% to 80% (median 62%).51 The ratio of stand alone publications with significant results to those with non-significant results was 3.2 (19 v 6) in our investigation, somewhat higher than the overall corresponding ratio of 2.3 reported for the above investigations.51 This difference might be explained by the difference in study materials. All the studies in our investigation were initiated by the sponsor, and the investigators were usually clinical practitioners for whom academic research was not the primary interest. Hence, the decision on how and whether a study should be published was probably left entirely to the sponsor. The studies in the other investigations were more heterogeneous with respect to funding (public funding, no external funding, etc), and the study sponsors probably took a less active part in the reporting of the studies.

What is already known on this topic

Duplicate publication, selective publication, and selective reporting are likely to introduce bias in systematic literature reviews and meta-analyses

Several reports have provided evidence of duplicate publication and selective publication as well as the tendency to publish only studies with significant findings

What this study adds

Access to full documentation of all studies (published and unpublished) made it possible to investigate the relative impact of the different sources of bias

Selective reporting (tendency to publish the more favourable per protocol results only) was a major cause for bias

A sponsor in control of all studies does not seem to improve the situation with respect to duplicate publication, selective publication, and selective reporting

Conclusions

The outcome of our investigation should not be used to dispute the value of systematic literature reviews and meta-analyses in general. However, for anyone who relies on published data alone to choose a specific drug, our results should be a cause for concern. Without access to all studies (positive as well as negative, published as well as unpublished) and without access to alternative analyses (intention to treat as well as per protocol), any attempt to recommend a specific drug is likely to be based on biased evidence. The selective serotonin reuptake inhibitor that would probably be chosen on the basis of a pooled analysis of publicly available data is not likely to be the one supported by an analysis considering the total body of evidence.

Footnotes

  • Contributors: HM, JA-R, and BB designed the study. GM performed the search for publications based on the data submitted to the Swedish drug regulatory authority. HM and JA-R reviewed the submitted reports and the publications, and extracted the data. HM performed the statistical analysis. HM, JA-R, and BB interpreted the results. All authors contributed to writing the paper. HM is guarantor for the study.

  • Funding None.

  • Competing interests None declared.

References
