
General Practice

Pharmaceutical trials in general practice: the first 100 protocols. An audit by the clinical research ethics committee of the Royal College of General Practitioners

BMJ 1996;313:1245 (Published 16 November 1996) doi: https://doi.org/10.1136/bmj.313.7067.1245
Peter Wise, vice chairman, and Michael Drury, chairman

Clinical Research Ethics Committee of the Royal College of General Practitioners, London SW7 1PU

Correspondence: Dr Wise.

Accepted 26 September 1996

Abstract

Objective: To assess the outcome of 100 general practice based, multicentre research projects submitted to the ethics committee of the Royal College of General Practitioners by pharmaceutical companies or their agents between 1984 and 1989.

Design: Analysis of consecutive submitted protocols for stated objectives, study design, and outcomes; detailed review of committee minutes and correspondence in relation to amendment and approval; assessment of final reports submitted at conclusion of studies.

Subjects: 82 finally approved protocols, embracing 34 523 proposed trial subjects and 1195 proposed general practice investigators.

Main outcome measures: Success at enrolling subjects and investigators; commencement and completion data; validity of final report's assessment of efficacy, safety, tolerability, and acceptability; and method of use and dissemination of findings.

Results: 18 studies were not approved and 45 had to be amended. Randomised controlled trials comprised 46 of the original submissions. Remuneration considerations, inadequate information or consent sheets, pregnancy safety, the need to discontinue existing therapy, and suboptimal scientific content were major reasons for rejecting studies or asking for amendments. Of the 82 approved studies 8 were not started. Shortfalls of 39% in investigators and 37% in trial subjects, together with an overall 23% withdrawal rate, were responsible for a significant incidence of inconclusive results. Within the six year follow up interval, only 19 of the studies had been formally published.

Conclusions: This audit identified substantial ethical concerns in the process of approving multicentre general practice pharmaceutical research.

Key messages

  • A common feature of this audit was a shortfall of investigators and trial subjects. This needs to be anticipated by those planning multicentre trials, to avoid the risk of inconclusive results. Given that the level of scientific advance resulting from such research is not high, greater regard is required for patient welfare in terms of information provision and suspension of existing therapy.

  • Rigorously prepared and observed committee guidelines are likely to improve the quality of research protocols and reduce amendment and non-approval rates.

  • Randomised controlled trials using comparator drugs should form the basis of multicentre general practice pharmaceutical research, which in turn should result in higher publication rates.

Introduction

Debate about the role, organisation, and performance of local research ethics committees in reviewing multicentre studies has grown. In April 1996 the Department of Health circulated a consultation paper proposing that a single research ethics committee for multicentre research should be established in each region. For some years a committee of the Royal College of General Practitioners has dealt with the ethics of clinical research in general practice (see box). In the light of recommendations for a critical review of ethics committee procedure1 2 and in the hope that lessons learnt by this committee might be useful to any new committees that may be established, we performed an audit of the first 100 protocols submitted by, or on behalf of, pharmaceutical companies to the college's clinical research ethics committee. Although the committee was established by the Royal College of General Practitioners, its membership and function have always been independent of the college, whose major role has been to provide logistical support. This paper therefore represents our views and those of the committee and not those of the college.

RCGP clinical research ethics committee

The clinical trials ethics committee, later renamed the clinical research ethics committee, of the Royal College of General Practitioners was established in 1984 at a time when ethical review of proposals for research in general practice was neither a requirement nor widely sought. Furthermore, there was no mechanism for obtaining independent ethical scrutiny of research supported by the college itself. After the establishment of a network of local research ethics committees it became apparent that there were many problems associated with multidistrict research (as much research in general practice is). The remit of the committee was then changed to consider multidistrict studies involving three or more districts. The committee consists of two representatives from the college, two from the Royal College of Physicians, and one from the Royal College of Nursing; an academic pharmacologist; a medical practitioner with a legal qualification; a person experienced in the ethics of medical practice; a sociologist; a health economist; and a statistician.

The committee has always scrutinised both the ethical and scientific aspects of protocols, believing unscientific research to be unethical. As there is no national database of studies, we do not know what proportion of multidistrict studies is submitted to the committee. When scrutiny by a local research ethics committee became mandatory, coordinators were also asked to submit protocols to them so that local considerations could be examined. It was hoped, and by and large achieved, that in this way the science in multidistrict studies would not be subject to multiple or contradictory requests for protocol modification.

Almost all studies reviewed in the earlier years were initiated by the pharmaceutical industry. Only more recently has the committee invited a broader range of protocols, dealing with research that is not necessarily drug related and submitted by other bodies, including academic departments and the Medical Research Council general practice research framework.

The first 100 protocols were all submitted between 1984 and 1989. The length of the follow up—at least six years—allowed us to ascertain outcomes of research studies more precisely, in the knowledge that enough time had elapsed for such studies to have been completed, evaluated, and published.

Methods

The initial protocols received by the committee for approval were systematically reviewed. Every element of the application and the deliberations of the committee, including correspondence, was subsequently logged and entered on a computer database. These data included the need to amend the protocol, the nature of the amendment, approval, and the reasons for non-approval. The purpose of the research study, the success rate in recruiting investigators and subjects, and full data about withdrawal of subjects from studies were also recorded. Final reports were invited both to obtain these data and also to identify whether the study conclusions were justified by the data provided in the report. Finally, we ascertained in what way the results of the studies had been used by the submitting body (either a pharmaceutical company or an agency acting on its behalf) and in what way the findings and conclusions had been disseminated.

Results

Of the 100 proposed studies 46 were double blind randomised controlled trials and nine were single blind trials. Twenty two were placebo controlled, 38 had an open design or were postmarketing surveillance studies, and seven did not require clinical endpoints for interpretation.

AMENDMENTS

Although 82 of the 100 studies were eventually approved, 45 protocols required amendment and resubmission, with an average of 1.5 amendment items per protocol. For 15 such resubmissions, however, chairman's approval (subject to later endorsement by the parent committee) was provided to enable the study to start at the earliest possible time. The reasons for amendment are given in table 1.

Table 1

Reasons for requesting amendment of submitted protocols: 66 amendment items in 45 protocols


Safety concerns largely centred on comparative drug doses (on which agreement could mostly be reached), surveillance frequency, and the principles of notifying the supervising practitioner (investigator) about adverse events. The amount of remuneration for participating general practitioners figured strongly in committee discussion, where the need to avoid undue encouragement to recruit or retain patients in a study was paramount. The presumed use of NHS investigation facilities was a problem in several protocols: research tests had to be clearly separated from those required for routine clinical care (and hence paid for from NHS sources). Inadequate patient information sheets were a factor in 18% of the committee's requests for amendment. Unnecessarily complex wording, entangled information and consent forms, and lack of guidance on subjects' right to withdraw and subsequent rights to treatment were prominent among the reasons for amendment. There was a lack of clarity about information on intercurrent pregnancy and also about indemnity issues. Several information sheets did not give prospective subjects enough information to make an informed decision to participate. Patronising terminology (such as “drop outs”) was also a basis for amendment. Statistical aspects figured highly in committee discussion, but only four protocols were amended because of a statistical issue.

REASONS FOR NON-APPROVAL

Four reasons accounted for the 18 cases of non-approval. Seven protocols were considered more appropriate to hospital based evaluation because of organisational or safety issues; five were rejected because the potential conclusions that could be drawn were inadequate for the committee to approve patient involvement; one was rejected because the drug's efficacy had already been proved; and five were rejected because of the need to suspend existing drug treatment. In most of the cases where scientifically valid conclusions were unlikely, as well as in the study of a drug already proved to be effective, the committee considered the studies to be more marketing and promotional exercises than genuine research. Open or postmarketing surveillance studies comprised only 28% (23) of the studies approved but 83% (15) of those not approved (P = 0.01).
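
For clarity, this comparison can be set out as a 2 × 2 table; each cell follows from the counts reported above and in the overview of study designs at the start of the results:

                    Open or postmarketing   Other designs   Total
  Approved                   23                   59          82
  Not approved               15                    3          18
  Total                      38                   62         100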

A major issue surrounded suspension of existing drug treatment. This applied particularly to antihypertensive and antianginal drugs and was a feature in 17 protocols. The principle adopted by the committee was to allow a current drug to be replaced by a study drug only when existing therapy was ineffective or had unacceptable side effects. When this principle could not be assured approval was not granted.

OUTCOMES IN APPROVED TRIALS

Commencement and completion—Only 74 (90%) of all approved studies were actually started, of which 66 (90%) were multidistrict studies. The reasons why the remaining eight studies did not start were difficult to clarify both from the final reports and from responses to chaser letters sent to investigators. In two cases, however, pharmaceutical companies indicated that they themselves had subsequently considered the studies to be impractical or no longer necessary on licensing grounds. Failure to enrol enough trial subjects was the explanation in the three studies that were started but not completed (table 2).

Table 2

Overall outcome of audited studies, rates of recruitment of investigators and patients, and patient withdrawal data


Investigator and patient numbers—Of the 74 studies that were started we had enough matching data on the initial protocol and subsequent report to enable only 20 to be assessed for success or otherwise in enrolling general practitioner-investigators (table 2). There was a 39% shortfall. The 74 studies that were started had planned to recruit 34 523 patients; in the event investigators managed to recruit 63% of this number. Furthermore, only 77% of the patients who were recruited actually completed the studies: 5069 patients withdrew (table 2). Only 36 of the 71 final reports on completed projects provided reasons for withdrawal. The 1896 subjects covered by these 36 reports represented a 37% sample of all withdrawals. Adverse events (57%), lack of efficacy (26%), and unspecified self withdrawal (17%) were the reasons provided.
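
As a rough consistency check (the total number of patients recruited is not reported directly and is derived here from the percentages given):

0.63 × 34 523 ≈ 21 750 patients recruited;
5069 / 21 750 ≈ 23% of recruited patients withdrew;
1896 / 5069 ≈ 37% of withdrawals had documented reasons.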

PURPOSE OF STUDIES

In all cases we analysed the final reports, firstly, to identify whether trial coordinators perceived success in evaluating their target outcomes and, secondly, to determine whether the raw data and statistics provided in the reports supported their claims (table 3). In most cases the data supported claims about efficacy, tolerability, and safety, but only 33% of claims of acceptability were supported by data in the final reports.

Table 3

Analysis of claims in final reports


USE AND DISSEMINATION OF STUDY FINDINGS

Companies responsible for 68 (96%) of the 71 completed projects subsequently provided information on use and dissemination of the findings: 31 studies were used in licensing or registration applications, 19 were published in books or journals, and 11 were presented at scientific meetings. The results of 21 studies were neither promulgated in any way nor used for registration or licensing purposes.

Although open and postmarketing surveillance studies comprised almost 30% of studies completed, they figured in only 15% of publications and presentations.

Discussion

STUDY AMENDMENT

The high proportion of submissions that needed to be amended (45%) was a point of serious concern. Not only did this provide the committee with additional work; it also disrupted the smooth programming of research projects by trial coordinators. After safety issues, payment of general practitioners was the commonest issue prompting a need for amendment. Payment was mostly based on the number of patients enrolled. Undue incentives were seen by the committee as a potential hazard, not only by fostering research through the wrong mechanism but, more dangerously, by encouraging enrolment of patients with borderline entry criteria. This was a particular concern, given that some proposals were seen as promotional exercises with little scientific content.

Some of the amendment requests concerned poorly structured information sheets and consent forms. Standard methods such as the Gunning-Fog index3 are available for evaluating and ensuring readability, and we have recently advocated this approach. The use of complex or patronising terminology and a lack of clarity about indemnification, pregnancy, and self withdrawal were all common items requiring modification. In later years, progressively more detailed “in house” guidelines were made available to prospective applicants, emphasising the potential educational value of ethical scrutiny.
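
For reference, the Gunning-Fog index as commonly stated estimates the years of schooling needed to understand a passage on first reading:

fog index = 0.4 × [(words / sentences) + 100 × (complex words / words)],

where complex words are those of three or more syllables; a lower score indicates more readable text.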

STUDY APPROVAL

Scientific principles need to be adhered to in the context of ethical approval.4 If the committee considered that new and statistically meaningful information was unlikely to be identified (most particularly from open or postmarketing surveillance studies), or if comparative studies were likely to be marred by not being randomised or by using a placebo rather than a comparator drug, then such a study was seen by the committee to be unethical. Even so, the fact that only 46% of initial protocol submissions (72% of those finally approved) were in fact randomised controlled trials does not reflect well on current research patterns.

The frequent need to discontinue existing treatment in order to examine the trial drug(s) was viewed with great concern. Newly diagnosed patients would be preferred for comparative studies. However, it was repeatedly pointed out by trial coordinators (and conceded by the committee) that the long period required to recruit enough newly diagnosed patients, or, alternatively, the need to increase the number of participating centres, would render trials restricted to such patients impracticable.

STUDIES THAT NEVER STARTED

An excessive expectation of the ability to enrol general practitioner-investigators appeared to be a significant component in those studies that never started, despite approval. A 39% shortfall in the number of investigators compared with what was planned or expected was potentially wasteful of the sponsoring company's and committee's time. Notwithstanding the fact that general practitioners are paid for these studies, the clinical and administrative workload in general practice almost certainly acts as a disincentive to participate in time consuming patient monitoring and data recording. Whether higher financial rewards for this type of research activity would make any difference remains conjectural, since the sums offered were already generous and sometimes exceeded those recommended by the British Medical Association.

RECRUITMENT AND WITHDRAWAL

The 37% shortfall in patient recruitment was disappointing and was undoubtedly a reason for unsatisfactory statistical outcomes. The additional 23% withdrawal rate further diminished the study populations. These shortfalls were not anticipated either in the initial submissions or by our committee. Accordingly, initial statistical calculation of the probabilities of showing differences often proved inaccurate and led to inconclusive results. Thus, in only 64 out of 71 completed studies (90%) were the data adequate for the relevant company to identify comparative efficacy, while comparative tolerability, safety, and acceptability could be evaluated by them in only 50–75% of studies in which this information was being sought. Such low information yields waste resources. They also represent an abuse of patient goodwill. To some extent our committee (and ethics committees elsewhere) share this responsibility with trial coordinators and research sponsors.
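
As a hypothetical worked example (not drawn from any individual protocol), a trial planned around 100 patients per arm, subject to the 63% recruitment and 77% completion rates observed here, would end with about

100 × 0.63 × 0.77 ≈ 49 evaluable patients per arm,

roughly half the number on which the original power calculation was based.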

The reasons for patients' withdrawal from studies were not well documented in final reports. In the 36 reports in which firm data were provided—representing 1896 of the 5069 overall withdrawals—57% of withdrawals were for real or potential adverse effects and 26% for lack of efficacy. These data imply that at least 1 in 5 trial subjects were obliged to withdraw for one or other of these two indications: if this “risk” had been provided to patients at the time of enrolment, would it have influenced their agreement to participate?
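
The “1 in 5” figure follows from the percentages above, on the assumption that the documented reasons are representative of all withdrawals:

0.23 (overall withdrawal rate) × (0.57 + 0.26) ≈ 0.19, or roughly 1 trial subject in 5.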

PUBLICATION AND DISSEMINATION

Low rates of publication of research have been repeatedly referred to in the literature. Pearn has argued that even an 85% publication rate for publicly funded research is inadequate.5 We are aware that journals are less likely to publish data which have not received ethical approval, but that reason cannot explain the low (27%) publication rate among these studies approved by our committee. Even when presentation at scientific meetings and use in licensing or registration are included as valid uses of research findings, the remaining 30% non-dissemination rate is a criticism both of the type of research being performed and the motivation which underlies it. As might be expected, the findings of randomised controlled trials were twice as likely to be accepted for publication or presentation as other research formats.

Even negative findings, difficult (and embarrassing) as they may be to publish, need to be made public. Whether by “letters to the editor,” by establishment of a national or international drug research database (using the Internet), or by more traditional means, publication should be an ethical imperative.5 Prior insistence by ethics committees on an intention to publish is an essential responsibility which can only improve the quality of research.

CONCLUSIONS

Few of the findings of this audit were expected, and they provide some cause for concern. There are four key messages relevant to all ethics committees, which might be regarded as criteria against which their work can be judged. Firstly, committees need a more critical approach towards encouraging and approving scientifically valid (randomised controlled) trials. Secondly, they need a more cautious approach to statistical approval, in so far as this depends so heavily on the number of investigators and patients finally recruited for studies. Thirdly, they should make firm requests for a comprehensive final research report. Finally, they should emphasise to trial coordinators the need to publish, or otherwise promulgate, the research findings.

Broader issues which the committee identified were the value of a temporarily co-opted specialist member (the need for whom can probably be anticipated by the chairperson on reviewing the agenda items) and of a statistician well versed in studies of this type. Our committee was fortunate in attracting such a statistician only half way through its 12 year period of activity. An overall greater respect for the participating patient probably needs emphasis, considering the comparatively low key conclusions reached by most of the studies examined in this audit. Finally, in the absence of a national register or database of trials, we have no way of knowing how representative our findings are of pharmaceutical studies in general practice as a whole.

We are deeply indebted to the hard working members of the committee. Specifically we appreciate the help and initial guidance of Professor Sir Eric Scowen, foundation chairman of the committee, and the members of the committee 1984–96: Dr Linda Beeley, Dr Ann Cartwright, Mrs Margaret Puxon QC, Professor Margot Jefferys, Professor Edith Penrose, Miss Geraldine Swain, Dr Stuart Carne, Dr Christopher Donovan, Dr Christopher Ellison, Dr Stanley Ellison, Professor Gordon Dunstan, Dr John Havard, Dr Michael Linnett, Dr Kenneth McRae, Professor Michael Orme, Professor Michael Rawlins, Dr John Tripp, and Dr John Wright. We also thank the administrative assistants of the Royal College of General Practitioners, Ms Jenny Singleton, Miss Fenny Green, and Mr Noel Bell, without whose efforts this audit would not have been possible.

Footnotes

  • Funding: Royal College of General Practitioners.

  • Conflict of interest: None.

References

  1.
  2.
  3.
  4.
  5.