Infrequent and incomplete registration of test accuracy studies: analysis of recent study reports
  1. Daniël A Korevaar1,
  2. Patrick M M Bossuyt1,
  3. Lotty Hooft2
  1. 1Department of Clinical Epidemiology, Biostatistics and Bioinformatics (KEBB), Academic Medical Centre (AMC), University of Amsterdam (UvA), Amsterdam, The Netherlands
  2. 2Netherlands Trial Register (NTR) and Dutch Cochrane Centre (DCC), Academic Medical Centre (AMC), University of Amsterdam (UvA), Amsterdam, The Netherlands
  1. Correspondence to Dr Daniël A Korevaar; d.a.korevaar@amc.uva.nl

Abstract

Objectives To identify the proportion of articles reporting on test accuracy for which the corresponding study had been registered.

Design Analysis of a consecutive sample of published study reports.

Participants PubMed was searched for publications in journals with an impact factor of 5 or higher in May and June 2012. Articles were included if they reported on original studies evaluating the accuracy of one or more diagnostic or prognostic tests or markers against a clinical reference standard in humans.

Primary and secondary outcome measures Primary outcome was registration of the reported test accuracy study. We additionally explored study characteristics associated with registration.

Results We found 1941 references; 351 study reports fulfilled the inclusion criteria, of which 52 (15%) had been registered. Of these, 27 (52%) provided a registration number in the publication, and 12 (23%) provided a reference to the publication in the registry. Registration rates were similar for studies on diagnostic versus those on prognostic tests, and for studies on imaging tests versus those on laboratory techniques. Studies reporting some form of industry involvement were more often registered (33%) than studies reporting another source of funding (11%) and studies without a (reported) source of (external) funding (9%; p<0.001). Of the registered studies, 8 (15%) had been registered after completion, 14 (27%) before initiation and 30 (58%) between initiation and completion. Only 16 (31%; 5% of the total sample) had registered the published primary outcome measures before completion.

Conclusions Few test accuracy studies published in higher impact journals are registered. Only 1 in 22 of such studies had registered their primary outcomes before study completion. Because the reasons for registering studies that investigate the cause-and-effect relationship between health-related interventions and health outcomes also apply to test accuracy studies, prospective registration of these studies should be further promoted among investigators and journal editors.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/

Strengths and limitations of this study

  • Response rates were relatively good: 58% of the corresponding authors participated in our email survey.

  • As test accuracy studies often do not report the study completion date, we may have included studies completed before 2005, that is, before the International Committee of Medical Journal Editors' (ICMJE) registration policy was launched.

  • Only papers published in journals with an impact factor of 5 or higher were included; registration rates may differ for study reports in lower impact journals.

Introduction

Since September 2005, the International Committee of Medical Journal Editors (ICMJE) has required researchers to register essential information about the design of their clinical trials in a publicly available trial registry before enrolment of the first patient.1 By facilitating transparency and completeness of reporting, this policy forms an important measure in preventing negative effects of publication bias and outcome reporting bias, defined as the non-publication and selective reporting of research findings depending on the strength and direction of outcomes.2 ,3 This requirement improves the evidence base on which clinical decisions are made. Furthermore, duplication of research efforts can be prevented, research and knowledge gaps can be identified, collaboration can be facilitated and a more efficient allocation of research funds can be promoted. Full disclosure of study material may also be an ethical obligation, especially to human study participants and future patients.

The ICMJE requires registration of “any research project that prospectively assigns human subjects to intervention and comparison groups to study the cause-and-effect relationship between a medical intervention and a health outcome”.4 The reasons for registration also apply to studies quantifying the accuracy of diagnostic and prognostic tests and markers,5 especially since failure to publish and selective reporting may also be prevalent among these studies.6 ,7 Approval and proper usage of medical tests should be based on a thorough scientific evaluation.8 Test accuracy studies form an essential part in this process. Such studies evaluate the ability of a test to correctly differentiate between patients with and without a target condition. This can be a disease (screening or diagnosis), a disease stage (staging), a condition in the near future (monitoring and surveillance), response or benefit from therapy (predictive) or an event in the future (prognosis).

At present, many clinical trial registries also include studies that do not fall under ICMJE's registration requirement. Although controversial,9–11 increasing numbers of observational studies are also being registered.12 This is illustrated by the fact that 19% of 156 143 records in ClinicalTrials.gov, one of the major trial registries, are tagged as observational (accessed 27 November 2013).

Increasing numbers of test accuracy studies seem to be registered as well. Although most test accuracy studies can be considered as interventional, since consenting participants are prospectively assigned to one or more medical tests, accuracy usually only contributes indirectly to changes in health outcomes. ICMJE's registration requirement, therefore, seems to exclude test accuracy studies. The Food and Drug Administration (FDA), however, requires registration of “controlled trials with health outcomes of devices subject to FDA regulation, other than small feasibility studies.”13 This may imply that studies that indirectly contribute to health outcomes, such as test accuracy studies, should also be registered.

The primary aim of this study was to identify the proportion of articles reporting on test accuracy studies for which the corresponding study had been registered, to evaluate whether registration had preceded study initiation, and to assess whether registration included the published primary outcome measures.

Methods

Search

A sample of test accuracy studies was identified by searching PubMed (National Library of Medicine). In May and June 2012, we searched for studies published in journals with an impact factor of 5 or higher. A previously validated search filter for test accuracy studies (‘sensitivity AND specificity.sh’ OR ‘specificit*.tw’ OR ‘false negative.tw’ OR ‘accuracy.tw’, where ‘.sh’ indicates subject heading and ‘.tw’ indicates text word)14 was combined with a list of names and corresponding international standard serial numbers (ISSN) of all the 536 journals that had been assigned an impact factor of 5 or higher in 2011. We applied this cut-off value because we expected the number of registered studies to be larger in higher impact journals. This impact factor cut-off is in line with previously published analyses of test accuracy studies.15 ,16 The final search was performed on 25 February 2013.
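To illustrate the mechanics of such a search, a minimal Python sketch follows showing how a filter of this kind could be combined with a set of ISSNs and submitted to PubMed via Biopython's Entrez utilities. The field tags, the placeholder ISSNs and the e-mail address are assumptions for illustration only; they are not the exact search strings used in this study.

```python
# Minimal sketch, assuming PubMed field tags [sh], [tw], [is] and [la];
# the ISSNs below are placeholders, not the actual list of 536 journals.
from Bio import Entrez  # Biopython: pip install biopython

Entrez.email = "your.email@example.org"  # NCBI asks for a contact address

accuracy_filter = (
    '"sensitivity and specificity"[sh] OR specificit*[tw] '
    'OR "false negative"[tw] OR accuracy[tw]'
)
issns = ["0000-0000", "1111-1111"]  # hypothetical ISSNs of high-impact journals
journal_filter = " OR ".join(f'"{issn}"[is]' for issn in issns)

query = f"({accuracy_filter}) AND ({journal_filter}) AND english[la] AND hasabstract"

# Run the search and report the number of matching records.
handle = Entrez.esearch(db="pubmed", term=query, retmax=2000)
record = Entrez.read(handle)
print(record["Count"], "records found")
```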

Articles were included if they reported on studies evaluating the accuracy of one or more tests or markers against a clinical reference standard in human subjects. Tests for screening, diagnosis, staging, monitoring, prediction or prognosis were all eligible. We limited our search to papers published in English that had an abstract. We excluded studies that did not report an accuracy measure (sensitivity, specificity, likelihood ratio, positive or negative predictive value, diagnostic odds ratio, area under the receiver operating characteristic curve or c-index), as well as commentaries, discussion articles and systematic reviews.
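For readers less familiar with these measures, the sketch below shows how most of them can be computed from a standard 2×2 table of index test results against the reference standard; the counts are purely illustrative and do not come from any included study.

```python
# Illustrative computation of common test accuracy measures from a 2x2 table.
def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)               # sensitivity
    spec = tn / (tn + fp)               # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "lr_plus": sens / (1 - spec),   # positive likelihood ratio
        "lr_minus": (1 - sens) / spec,  # negative likelihood ratio
        "dor": (tp * tn) / (fp * fn),   # diagnostic odds ratio
    }

# Hypothetical counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
print(accuracy_measures(tp=80, fp=10, fn=20, tn=90))
```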

One author (DAK) scanned the search results to identify potentially eligible articles. Studies that did not provide an accuracy measure in their abstract, but were deemed likely to publish one in their full text, were also tagged as potentially eligible. The full text was then obtained to evaluate whether the study met the inclusion criteria. Two authors (DAK, and PMMB or LH) independently evaluated the potentially eligible articles. Disagreements were resolved through discussion.

Included studies were classified as diagnostic studies, which evaluated the ability of a test to identify a current disease or (pre-)stage of disease, or prognostic studies, which used a follow-up period to evaluate the ability of a test to predict a future state or event. Based on the test under investigation, included studies were tagged as imaging studies, laboratory studies or other. Laboratory studies included all measurements on body fluids or tissues, except for histology and cytology (which were classified as ‘other’). We extracted the funding sources from the full publication. Studies that clearly described a source of support were categorised into those reporting some form of industry involvement and those reporting sources of funding not including an industrial party. Studies that did not report a source of support, or only indicated that ‘no external funding’ was obtained, were categorised as ‘no (external) funding reported’.

Identifying registration

The following steps were taken to find out whether a study had been registered. First, the full text of the included articles was checked for a trial registration number. When this number was not reported, the corresponding author was asked through email whether the study had been registered and, if so, in which registry and under which registration number. Contact attempts were limited to three emails, sent one week apart. If no answer was received, the WHO Search Portal, which searches several registries, was used. In addition, we searched ClinicalTrials.gov, the International Standard Randomised Controlled Trial Number Register and national trial registers of the country of the first author. In these registries, we searched for the names of first, last and corresponding authors, publication title, evaluated tests and target disease/outcome. We matched registered records with publications by comparing the data on study design, sample size, country, outcomes and contact information. If no registration number was found, a study was considered as not registered. When a paper included in our review was a secondary (post hoc) analysis, we also considered the study as registered if we were able to identify a registered record for the initial study, in which the data had been collected. We categorised studies according to whether or not the data collection had been registered. We further classified studies with a registered data collection as those that had registered the published primary outcomes, those that had registered the published primary aim but in a vaguer or slightly different form, and those that had not registered the primary outcomes or aims.

We checked whether the study had been registered before its initiation by comparing the registration date with the start and completion dates of participant enrolment as reported in the registry. Registration was defined as before initiation if the date of registration fell in or preceded the month of the study's start date as reported in the registry. A study was considered as registered after completion if it had been registered in the same month as, or after, the registered completion date. All other studies were considered as registered between initiation and completion.
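A minimal sketch of this month-level classification rule, assuming hypothetical dates, follows; it simply compares (year, month) pairs in line with the definitions above.

```python
# Sketch of the timing classification described above (dates are hypothetical).
from datetime import date

def month(d: date) -> tuple:
    return (d.year, d.month)

def classify_registration(registered: date, start: date, completed: date) -> str:
    if month(registered) <= month(start):
        return "before initiation"
    if month(registered) >= month(completed):
        return "after completion"
    return "between initiation and completion"

print(classify_registration(date(2010, 3, 15), date(2010, 4, 1), date(2011, 6, 30)))
# -> before initiation
```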

Statistical analysis

Data are reported as frequencies and percentages. We used χ2 tests to evaluate whether associations between study characteristics and the likelihood of being registered were statistically significant. Data were analysed using SPSS V.20.0.
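The analysis was run in SPSS; as an illustration, an equivalent χ2 test of independence between funding source and registration status could be run in Python with SciPy as sketched below. The cell counts are illustrative placeholders, not the study's actual contingency table (see table 1 for the reported figures).

```python
# Illustrative chi-squared test of funding source vs registration status;
# the counts below are placeholders, not the study data.
from scipy.stats import chi2_contingency

#        registered  not registered
table = [
    [20, 40],    # some form of industry involvement
    [20, 180],   # other source of funding
    [8, 72],     # no (external) funding reported
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```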

Results

The search identified 1941 articles of which 351 fulfilled the inclusion criteria (figure 1). Characteristics of included studies are summarised in table 1. The majority of studies (71%) evaluated the accuracy of a diagnostic test, while 29% evaluated a prognostic test. Comparable numbers of studies focused on imaging tests and tests based on a laboratory technique: 33% and 36%, respectively. The remainder focused on another type of test (24%), such as physical examination, electrocardiography (ECG) or pathology, or on (a combination of) tests that were assigned to more than one category (8%). Some form of industry involvement was reported by 19% of the included studies, while 58% reported sources of funding that did not include an industrial party. The remainder (23%) did not have or report an (external) source of funding.

Table 1

Characteristics of included studies and the distribution of registered studies among different characteristics

Figure 1

Flow chart showing how the papers entered the study.

The data collection had been registered in 52 of 351 studies (15%). Of these, 27 provided a registration number in the final publication. We contacted the authors of 324 studies without a registration number in their publication and 187 (58%) responded, providing another 14 registration numbers. Non-registration was confirmed by the authors of 173 studies. We searched the registries for the remaining 137 studies and identified another 11 registered records. Only four of the included studies had a randomised controlled design, and, of these, two (50%) had been registered.

Of the 52 registered studies, 27% had been registered before initiation (table 2). The other studies were registered somewhere between the start and completion date (58%), or after the completion date (15%). Only 23% of the registered studies provided a reference to the full publication in the registered record.

Table 2

Characteristics of registered studies

The proportion of registered studies for subgroups defined by study characteristics is shown in table 1. There was no significant difference between diagnostic and prognostic test studies, or between imaging and laboratory studies. Of the studies reporting some form of industry involvement, 33% had been registered. This was significantly more often than for studies reporting another source of funding (11%) and for studies without a (reported) source of funding (9%; p<0.001).

Only 16 (31%) registered studies had registered the published primary outcomes before the completion date. Among another 12 (23%), the published primary aim had been registered before the completion date, but it was described more vaguely or somewhat differently. Of the remaining studies, the published primary outcome or aim was not registered before study completion, or not registered at all. A majority in the latter group consisted of post hoc analyses, in which the authors had used data from a registered, previously completed study, and reports of substudies that were part of a larger registered project.

Discussion

Using a previously validated sensitive search filter, we found that the data collection of only 15% of diagnostic and prognostic test accuracy studies published in journals with an impact factor of 5 or higher in May and June 2012 had been registered. Registration rates were comparable between studies of diagnostic and those of prognostic tests, and between studies of imaging tests and those of laboratory tests. Studies reporting some industry involvement were registered more often than studies with other sources of funding and studies without reported funding sources.

Adequate assessment of selective reporting among registered test accuracy studies proved difficult: only a quarter of the registered studies—4% of all published studies—had been registered before initiation, and only one-third of the registered studies—5% of all published studies—had registered the published primary outcomes before the study completion date. About half of the registered studies reported a trial registration number in the publication, and a reference to the final publication was reported by a quarter of the registered studies.

Our study has some potential limitations. We searched only for test accuracy studies published in journals with an impact factor of 5 or higher. It is possible that studies published in these journals are more likely to be registered than those published in lower impact journals, in which case 15% is an overestimation of the proportion of all registered test accuracy studies.

We may have included studies initiated before 2005, when study registration was largely unknown among researchers. We were unable to exclude these because many test accuracy studies do not report their start and ending dates.16 ,17 Since we only included studies published in May and June 2012, 7 years after the ICMJE's registration policy was launched, we expect this number to be negligible.

Although response rates to our email survey were relatively good, 42% of the study authors did not reply. We thoroughly searched several registries to identify a corresponding registration for these studies but may have missed some, especially since searching in most registries proves to be difficult, as extended search options are lacking.

We included studies independent of their study design and type of data collection. We decided to do so because we wanted our study cohort to give a fair representation of all types of test accuracy studies, and because of the inherent difficulties in categorising test accuracy studies, due to scarce and substandard reporting.16–18 For example, many test accuracy studies do not report whether the study is prospective or retrospective.16 ,17

Why are these results disappointing and promising at the same time? The results of our study indicate that, at this point, study registration of test accuracy studies does not provide many advantages. The number of registered studies is low, published primary outcomes are often not registered or not registered in an informative way, and many registered studies are not registered before initiation. In addition, registration numbers are often not reported in the final publication, making it hard to find out whether a study has been registered. References to the published study are often not reported in the registry, which makes it difficult to find out whether a registered study has been published. We acknowledge that prospective registration of test accuracy studies is currently not officially required by the ICMJE. The fact that a considerable number of authors of these studies already seem to endorse the necessity of study registration is, however, promising.

Study registration facilitates the identification of underexplored research areas, and the prevention of unnecessary duplication of research efforts and the corresponding waste of research funds. Full disclosure of all study material, including the protocol, is widely considered an ethical obligation, especially to human study participants. Study registration also allows interested parties, such as reviewers, editors, physicians, policymakers, members of ethical committees, patients and colleagues, to identify ongoing, unpublished and selectively published studies. Non-publication and selective reporting jeopardise evidence-based medicine mainly through skewed literature syntheses. Unpublished research results are not easy to find and include in a systematic review, and this may lead to faulty conclusions based on an incomplete evidence base. Selective reporting may generate bias, offering a too optimistic presentation of test performance. Both are widely recognised problems, especially among randomised controlled trials. Evidence from cohorts of studies registered in ClinicalTrials.gov suggests that only 46% to 63% get published.19 ,20 Studies with positive or favourable results are more likely to be published than those with negative or disappointing ones.21 Although formal evidence is scarce, these phenomena are also suspected to be prevalent among test accuracy studies.5 ,6

In 2010, Lancet and BMJ announced that they would, from then on, encourage researchers to register observational studies in a manner similar to what has become a requirement for clinical trials.22 ,23 This prompted some disapproving reactions.11 ,24 Criticism especially focused on the fact that observational studies vary widely in their design, and that prospective registration is not equally useful for all types of study.25 Several of these issues also apply to test accuracy studies. Study data can be collected prospectively or retrospectively, and study aims, hypotheses and protocols can be formulated before or after the analysis of the data. Some test accuracy studies are exploratory in nature. Such studies often do not have a predefined protocol or hypothesis, and existing datasets are used to explore potentially interesting findings. The benefits of study registration are less clear for such studies. Even though non-publication and selective reporting are likely to be more prevalent among exploratory studies, it would be impossible to find out whether such a study had been registered before the post hoc hypothesis was formulated. The bureaucratic load of prospectively registering every post hoc analysis would be enormous and would probably outweigh the benefits.

More generally, all of the reasons for registering clinical trials seem to apply equally to interventional test accuracy studies, and probably also to all protocol-driven test accuracy studies with a priori defined aims, irrespective of whether data collection was prospective or retrospective. We therefore strongly recommend that authors of such studies register their protocol before initiation, and that journal editors start to consider expanding required registration to this type of research.

Acknowledgments

The authors would like to thank René Spijker, MSc (Dutch Cochrane Centre, University of Amsterdam) for assisting with the searches of this project.

References

Footnotes

  • Contributors DAK developed the study design and analysed the data in consultation with PMMB and LH. DAK, PMMB and LH performed the study selection.

  • Competing interests None.

  • Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement Full dataset and statistical code are available from the corresponding author at d.a.korevaar@amc.uva.nl.