Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others
A Blümle1, G Antes1, M Schumacher1, H Just2, E von Elm1,3

1 Department of Medical Biometry and Statistics, University Medical Center, Freiburg, Germany
2 Research Ethics Committee, University Medical Center, Freiburg, Germany
3 Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland

Correspondence to: Dr A Blümle, Department of Medical Biometry and Statistics, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Stefan Meier Strasse 26, 79104 Freiburg, Germany; bluemle{at}cochrane.de

Abstract

Background: Only published study results are available to the scientific community for further use, such as informing future research and synthesising the available evidence. If study results are reported selectively, reporting bias and distortion of summarised estimates of the effect or harm of treatments can occur. The publication and citation of results of clinical research conducted in Germany were studied.

Methods: The protocols of clinical research projects submitted to the research ethics committee of the University of Freiburg (Germany) in 2000 were analysed. Several databases were searched for published full articles and investigators were contacted. Data on study and publication characteristics were extracted from protocols and corresponding publications.

Results: 299 study protocols were included. The most frequent study design was randomised controlled trial (141; 47%), followed by uncontrolled studies (61; 20%), laboratory studies (30; 10%) and non-randomised studies (29; 10%). 182 (61%) were multicentre studies including 97 (53%) international collaborations. 152 of 299 (51%) had commercial (co-)funding and 46 (15%) non-commercial funding. 109 of the 225 completed protocols corresponded to at least one full publication (total 210 articles); the publication rate was 48%. 168 of 210 identified publications (80%) were cited in articles indexed in the ISI Web of Science. The median was 11 citations per publication (range 0–1151).

Conclusions: Results of clinical research projects conducted in Germany are largely underreported. Barriers to successful publication need to be identified and appropriate measures taken. Close monitoring of projects until publication and adequate support for investigators may help remedy the prevailing underreporting of research.


It has long been recognised that only a proportion of research projects ultimately reaches the stage of full publication in peer-reviewed journals. In 1979 a scenario was described in which journals are filled with spurious results that happen to reach statistical significance (p<0.05), whereas the researchers’ file drawers are filled with non-significant results.1 The selective reporting of studies with statistically significant results confirming a study hypothesis is known as “publication bias” or “positive outcome bias”. Results of studies with “positive” treatment effects are more likely to be published.2 Several other factors have been shown to be associated with the likelihood of publication of study results.3 Consequently, reviews summarising the available evidence are likely to give an over-optimistic estimate of the effect of treatment.4 This can lead to inappropriate or even detrimental treatment recommendations.5

The magnitude of the “file drawer problem” can only be investigated if retained study results are made available. The earliest stage at which a planned study is documented in detail is the study protocol submitted to a research ethics committee (REC) or a funding agency. Study protocols are increasingly recognised as a valuable source of information for methodological research into the dissemination of scientific evidence.6 Several investigations have followed defined samples of study proposals approved by REC or institutional review boards until publication.7–12 However, few studies have extended the observation beyond publication to explore the post-publication period of studies, ie, their later citation in the scientific literature.12

We set out to assemble a cohort of studies of all designs that were approved by a REC and conducted at a German medical faculty and to investigate their publication and post-publication stage. In particular, we aimed to estimate the publication rate overall and by study design, to identify factors associated with the full publication of study results and to determine the frequency of citation of corresponding articles.

METHODS

Cohort of study protocols

The REC of the Albert-Ludwigs-Universität, Freiburg, Germany granted access to the electronic files of all study protocols submitted in 2000. For the purpose of archiving, this committee scanned all paper files electronically including submitted study protocols, amendments, progress reports and related correspondence (eg, committee decisions) and saved the data on a secured server. We established a Microsoft Access database to store and manage the study data extracted from these electronic files.

We classified the design of all submitted studies according to predefined criteria (Appendix 1). Within randomised studies, factorial design was considered a variant of parallel group design; crossover trials were classified as randomised studies only if the treatment allocation had been randomised.

Identification of publications and citations

Full publications were defined as articles published in scientific journals that provide adequate information on at least the objectives of the study as well as on its methods and results. We excluded conference abstracts and review articles. To identify full publications with a potential link to the included protocols, we systematically searched Medline (platform Ovid, database Ovid Medline(R) + Daily update), the university’s publication registry (Forschungsdatenbank Freiburg), and through the Medpilot platform (http://www.medpilot.de) the databases of Current Contents Medizin (http://opac.zbmed.de/wocccmed/start.do?Login=ccmed1) and of the publishers Hogrefe, Karger, Kluwer, Springer and Thieme. For controlled clinical trials we also searched the Cochrane Central Register of Controlled Trials (CENTRAL; Clinical Trials), issues 4/2006 and 1/2007.13 14 CENTRAL contains Medline records of randomised controlled trials (RCT; quarterly updated), Embase records (annually updated) and records identified by manual searches of journals that are not indexed in electronic literature databases.15

For each protocol a new search strategy was established, including relevant keywords from the protocol such as the experimental drug, study name or acronym, studied disease or condition, or names of applicants. These searches allowed for permutations and variants of the search terms and, if appropriate, additional search terms. We retrieved the full text of potentially eligible publications and set up an electronic library of pdf documents linked to our MS Access database. If we were unable to decide on the eligibility of an article based on the database entry, we retrieved the full text article. If we came across additional eligible references through other sources (eg, searches in other databases, reference lists of identified articles), we included them. Disagreement on inclusion was resolved by consensus.
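To illustrate the kind of per-protocol search strategy described above, the sketch below assembles a simple keyword query from a protocol record. It is a hypothetical illustration only, not the authors’ actual procedure; the field names, example values and query syntax are assumptions.

```python
# A minimal, hypothetical sketch (not the authors' actual procedure): building a
# simple OR-query for one protocol from keywords of the kind described above.
# The record fields, example values and query syntax are assumptions.

def build_query(protocol: dict) -> str:
    """Collect candidate search terms from a protocol record and join them."""
    terms = []
    for key in ("drug", "acronym", "condition", "applicants"):
        value = protocol.get(key)
        if isinstance(value, list):
            terms.extend(value)
        elif value:
            terms.append(value)
    # Spelling variants or permutations of terms could be appended here as well.
    return " OR ".join(f'"{t}"' for t in terms)

# Hypothetical protocol record
example = {
    "drug": "drug X",
    "acronym": "TRIAL-2000",
    "condition": "myocardial infarction",
    "applicants": ["Mustermann A"],
}
print(build_query(example))
# "drug X" OR "TRIAL-2000" OR "myocardial infarction" OR "Mustermann A"
```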

Between January and May 2007 we contacted the applicants of all included protocols by personal letter. For each submitted protocol separately, we asked about the current project status, asked the applicants to verify the publications we had already identified, and requested references for any additional publications we may have missed. We verified the addresses of non-responders and contacted them again by letter.

We accessed the ISI Web of Science (WoS; http://apps.isiknowledge.com) for each identified publication and extracted the number of citations that a publication received between the time of publication and December 2007. We summed up citation counts if a reference was listed twice or more in the WoS database (eg, due to different spellings).
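As a minimal sketch of this aggregation step (not the authors’ code), the snippet below sums citation counts for a publication that appears under more than one WoS record; the record fields and example values are assumptions.

```python
# A minimal sketch, not the authors' code: summing citation counts when the
# same article appears under more than one Web of Science record (eg, because
# of spelling variants), as described above. Record fields are assumptions.

from collections import defaultdict

def total_citations(wos_records: list) -> dict:
    """Aggregate citation counts per publication across duplicate records."""
    totals = defaultdict(int)
    for record in wos_records:
        totals[record["publication_id"]] += record["times_cited"]
    return dict(totals)

# Hypothetical example: one publication listed twice under spelling variants.
records = [
    {"publication_id": "pub-042", "times_cited": 7},
    {"publication_id": "pub-042", "times_cited": 3},
    {"publication_id": "pub-007", "times_cited": 0},
]
print(total_citations(records))  # {'pub-042': 10, 'pub-007': 0}
```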

Data collection and definitions

A standardised data extraction form was used to collect data on the study characteristics of all eligible protocols, including study design, single-centre or multicentre status, national or international collaboration, sample size, length of enrolment, source of funding and number of prespecified primary outcomes. Data were extracted by one investigator and cross-checked by another.

Commercial funding was defined as any direct financial support or provision of material (eg, of the study drug) by a private company. For commercially funded trials, we recorded if a private company was involved in the study planning or the management or analysis of data. We assumed such involvement if the trial protocol was provided by a private company or if a co-author was affiliated with such a company. Non-commercial funding was defined analogously and included financial or other support by public funding agencies, public or private foundations (if not clearly linked to a private company) or research funds of hospital or university entities.

A study with at least one centre outside Germany participating in patient recruitment was classified as an international study. For quantitative data on planned sample size or length of enrolment, we used the smallest value if the protocol indicated a range of values. Primary outcomes had to be either explicitly identified as such in the protocol or identifiable on the basis of a sample size calculation. If none of the outcomes met this definition, we regarded the study as having no primary outcome. A study was regarded as completed if data collection was terminated or if study results were published (including preliminary results published before the end of recruitment or data collection). Delays from application to publication were calculated from the year of application (ie, 2000) and the year of publication.
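The classification rules above can be summarised in a short sketch. The snippet below is an illustration only, assuming a hypothetical protocol record structure; it is not the authors’ extraction tool.

```python
# An illustrative sketch of the classification rules described above
# (international status, smallest value of a range, primary outcomes,
# completion). The protocol record structure is an assumption.

def classify_protocol(p: dict) -> dict:
    derived = {}
    # International: at least one recruiting centre outside Germany.
    derived["international"] = any(
        country != "Germany" for country in p.get("recruiting_countries", [])
    )
    # Quantitative data given as a range: use the smallest value.
    if p.get("planned_sample_size_range"):
        derived["planned_sample_size"] = min(p["planned_sample_size_range"])
    # Primary outcomes: named as primary or backed by a sample size calculation.
    primaries = [
        o for o in p.get("outcomes", [])
        if o.get("named_primary") or o.get("sample_size_calculation")
    ]
    derived["n_primary_outcomes"] = len(primaries)
    # Completed: data collection terminated or results (incl. preliminary) published.
    derived["completed"] = (
        p.get("data_collection_terminated", False) or p.get("results_published", False)
    )
    return derived

# Hypothetical example record
print(classify_protocol({
    "recruiting_countries": ["Germany", "France"],
    "planned_sample_size_range": [100, 150],
    "outcomes": [{"name": "mortality", "named_primary": True}],
    "data_collection_terminated": True,
}))
# {'international': True, 'planned_sample_size': 100, 'n_primary_outcomes': 1, 'completed': True}
```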

Data analyses

Standard descriptive statistics were used to characterise the included protocols. The publication rate was calculated as the proportion of completed protocols with at least one related full article. For statistical analyses, we used STATA version 9.2.
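As a worked example of this definition (the authors performed their analyses in Stata 9.2; the sketch below merely illustrates the arithmetic using the numbers reported in the Results section):

```python
# A worked sketch of the publication rate as defined above: completed protocols
# with at least one related full publication, divided by all completed
# protocols. (The authors used Stata 9.2; this only illustrates the arithmetic.)

def publication_rate(n_published: int, n_completed: int) -> float:
    return n_published / n_completed

# Numbers reported in the Results section: 109 of 225 completed protocols.
print(f"{publication_rate(109, 225):.0%}")  # 48%
```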

RESULTS

Characteristics of included studies

During the year 2000, 318 study protocols were submitted to the REC (fig 1). We excluded 19 protocols because the electronic files were incomplete, the application was rejected or retracted, the study was an extension of a previous study, or the study was not started at all. We were left with 299 eligible protocols submitted by 163 different applicants, all affiliated with the university. When we contacted these applicants, we obtained responses for 260 of 299 protocols; the response rate was 87%.

Figure 1 Flowchart of included protocols.

The studies were conducted in various fields of medicine (table 1). Almost half of the included studies were RCT (141; 47%) followed by uncontrolled studies including case series (61; 20%). Thirty studies (10%) were laboratory studies, eg, those using human tissue or blood. Another 10% (29) of the studies were non-randomised intervention studies, followed by cross-sectional studies (13; 4%), cohort studies (12; 4%), diagnostic studies (11; 4%) and case–control studies (2; 1%). Most RCT were of parallel design (130 of 141, 92%); of those, 34 had three or more treatment arms and three had a factorial design. Eleven trials had a crossover design (8%).

Table 1 Clinical specialty of included research proposals

The planned sample size was given in 288 of 299 protocols (96%) and ranged from five to 8300 participants (median, 100). The planned duration of enrolment was specified in 80 (27%) protocols and ranged from 2 to 120 months (median, 12 months). Instead of the planned enrolment period several protocols indicated an estimate for the overall study duration. However, we extracted only information on clearly specified enrolment periods. In 134 (45%) protocols one primary outcome was specified, in 37 (12%) more than one, and in 128 (43%) none.

A total of 117 (39%) studies were single-centre studies and 182 (61%) multicentre studies. In 97 (53%) of the multicentre studies the investigators collaborated internationally and in 61 (34%) nationally. Collaboration was unclear in the protocols of 24 (13%) studies. Of the 97 international studies, 14 (14%) were led by researchers affiliated with the university and 82 (85%) by another study centre in Germany or abroad (the leading centre was unclear in one). Of the 61 national studies, 14 (23%) were led by researchers affiliated with the university and 47 (77%) by another national study centre. For 24 (13%) protocols it was unclear whether other centres were collaborating; in four of these studies a local researcher was the lead investigator.

Information on any funding source was available from 204 of 299 (68%) protocols or related documents. Commercial (co-)funding was evident from 152 (51%) protocols. In 21 (14%) of those, the sponsor provided study drugs or other support but was not involved in trial conduct. Non-commercial funding was stated in 46 of 299 protocols (15%). It was provided by the German Research Foundation (Deutsche Forschungsgemeinschaft) for seven (2%) protocols and by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung) for four (1%). Of note, these numbers include only studies that required approval by the REC (ie, no basic science studies). In a further 10 studies a decision on non-commercial funding was pending at the time of REC submission. Of those, nine had applied for Deutsche Forschungsgemeinschaft funding. Four (1%) studies received both commercial and non-commercial funding.

Follow-up of included studies

Seventy-four of 299 studies (25%) were not completed at the time of the survey for various reasons (fig 1). Most of these studies had been stopped prematurely because of slow accrual of participants, or because the applicant had changed workplace or retired. Ten studies were never started and six were still ongoing. Our definitive sample for further analyses comprised 225 protocols of completed studies. Our literature searches identified 138 full publications and our survey an additional 72.

A total of 210 full publications corresponded to 109 of 225 submitted and completed protocols. Consequently, the publication rate was 48%. The median number of publications per protocol was one (range 1–7). Fifty-one (22%) protocols had more than one related publication. The year of publication ranged from 2000 to 2007. The median duration from submission to the REC until first publication was 4 years (fig 2). The publications appeared in 151 different journals, of which 134 (89%) were indexed in the WoS. For RCT the publication rate was 52%. For studies of other designs it ranged from 0% to 78% (table 2). The proportion of completed studies that resulted in a full publication did not vary with regard to other study characteristics, including funding status, study size or collaboration (table 2).

Figure 2 Publications corresponding to study protocols submitted in 2000. Note: eight additional papers published in 2007 are not shown in the figure; they were identified through the survey of applicants after our literature searches had been completed.
Table 2 Characteristics of included completed studies

Citation of published studies

As of mid-December 2007, 168 of 210 identified publications (80%) had been cited in subsequent articles indexed in WoS. Of those, nine (4%) were cited once and 159 (76%) several times. Thirteen publications (6%) had received 100 or more citations, whereas 13 indexed publications were never cited. The median number of citations was 11 (range 0–1151). Twenty-nine publications (14%) were not indexed in WoS.

DISCUSSION

Summary of results

In our sample of completed studies, 109 of 225 protocols (48%) were fully published after approximately 6 years. The full publication of completed studies did not vary across study characteristics, including funding, study size, international status and multicentre status. Eighty per cent of published studies had been cited at least once.

Strengths and limitations

We used a comprehensive literature search procedure in several databases. However, we did not search in all literature databases that might include potentially relevant publications. For this reason, we deemed it even more important to carry out a survey of applicants to learn about additional publications. In our survey, we achieved a high response rate. Nevertheless, we still may have missed publications derived from the included protocols and consequently the publication rate may have been underestimated.

When extracting data, we used several arbitrary definitions that may have influenced our results. For example, we used a classification scheme for study design that proved useful in a previous study.16 Arguably, study design could be classified differently. For RCT, we did not use the phase I to III classification scheme because it was not applied consistently in the protocols included. Also, alternative definitions of other study variables would have been possible. However, as all variables were defined a priori we are confident that the choice of definitions did not lead to systematic errors in our results.

Clearly, it would have been interesting to analyse more recent study protocols, as the quality of protocols and publications and the practices of scientific reporting change over time. However, sufficient time must elapse before studies are completed and their results published. The obvious dilemma is that including more recent protocols would have left insufficient time for studies to be completed and results to be published.17 We identified only six studies that were still recruiting participants at the time of our literature searches and excluded them from our analyses.

We analysed the period from REC approval to publication because reliable data for both time points were available. An estimate of the time elapsed between completion of the study (eg, end of data collection) and publication would have been preferable. There is, however, no universally accepted definition of when a study ends, and such information is not regularly reported in study reports. Although such data would have been of interest, we therefore refrained from collecting them for pragmatic reasons.

We analysed an unbiased sample of study protocols submitted to an academic REC during one year. Because most studies were multicentre studies and a large proportion were international collaborations, the results could possibly be generalised to other settings of clinical research in Germany or abroad.

Findings in context with other studies

Underreporting of studies

We demonstrate that clinical research conducted in Germany is largely underreported. This is consistent with evidence from a similar German study12 and from other countries (table 3).7–11 Our estimate of the publication rate is lower than in a previous study conducted in a similar setting.12 Methodological differences between the two studies are a possible explanation for this. Of note, the RECs of both universities do request final reports and publications of approved projects but have not actively monitored publication output until now. Generally, many clinical research projects, including RCT, remain unpublished after the completion of data collection. Only approximately half of abstracts presented at conferences are later published in full.17 Consequently, potentially important information will go unrecognised by the scientific community. In previous studies, rates of full publication ranged from 31% to 67%, with higher proportions of published papers in the United States, England and Australia than in non-Anglophone countries. Those studies did, however, use different methodology; eg, they did not combine surveys of applicants with electronic literature searches. Furthermore, the characteristics (eg, design) of the original studies included in these follow-up studies varied, and this may have influenced the estimates of the publication rate. The limited number of protocols included in our study did not allow more advanced statistical analysis.

Table 3 Studies with follow-up of research proposals submitted to research ethics committees

Ethical implications of findings

Both authors and publishers of scientific research have ethical obligations.18 Positive as well as negative results should be published or otherwise made publicly available.19 The International Committee of Medical Journal Editors argues that “patients who volunteer to participate in clinical trials deserve to know that their contribution to improving human health will be available to inform health-care decisions”.20 To further knowledge and help other patients is one of the main motivations of participants asked about their reasons for consenting to trials.21 If researchers select study results for publication based on their statistical significance and direction, patients are possibly not treated based on an appraisal of all the available evidence. Furthermore, other researchers may build on biased evidence. Future research can be misguided and scarce resources will be wasted in the replication of unnecessary experiments. Finally, patients or volunteers will unnecessarily be subjected to the hardships and risks of clinical experiments. As early as 1990, the underreporting of study results was called “a form of scientific misconduct”.22 However, the compulsory registration of all trials in accessible databases is still being discussed and remains a goal for the future.23 Several International Committee of Medical Journal Editors member journals have published a joint statement that requires clinical trials to be registered in a public trials registry before patient enrolment as a condition for considering study results for publication once the study is completed.24 Recently, this statement has been updated to include preliminary trials.25 Whereas prospective registration may become a requirement for clinical trials in the near future, other types of clinical studies are far from being registered systematically. Research resources are wasted if results from those studies are not made available to the scientific community and the public through publication.

Our study provides valuable insights into the dissemination of scientific evidence from the planning to the publication stage and beyond. It adds to the growing body of empirical evidence on the underreporting of scientific results in many biomedical research areas, countries and languages. Although the need to implement comprehensive trials registries has been recognised internationally, the magnitude and consequences of the problem are less well known for other types of studies. We hope that research protocols submitted to REC will be recognised as a valuable source of information and be made accessible to scientists in other institutions for similar methodological research.6

Acknowledgments

The authors are grateful to the staff of the research ethics committee of the Albert-Ludwigs-Universität, Freiburg, Germany, and to the applicants who responded to the survey.

Appendices

Appendix 1 Criteria for study design classification for submitted studies. RCT, randomised controlled trial.

REFERENCES

Footnotes

  • Competing interests: None.
