Article Text

Methods to improve recruitment to randomised controlled trials: Cochrane systematic review and meta-analysis
Shaun Treweek,¹ Pauline Lockhart,¹ Marie Pitkethly,² Jonathan A Cook,³ Monica Kjeldstrøm,⁴ Marit Johansen,⁵ Taina K Taskila,⁶ Frank M Sullivan,¹ Sue Wilson,⁶ Catherine Jackson,⁷ Ritu Jones,⁸ Elizabeth D Mitchell⁹

¹Division of Population Health Sciences, University of Dundee, Dundee, UK
²Scottish School of Primary Care, University of Dundee, Dundee, UK
³Health Services Research Unit, University of Aberdeen, Aberdeen, UK
⁴Frederiksberg, Denmark
⁵Norwegian Knowledge Centre for the Health Services, Oslo, Norway
⁶Primary Care Clinical Sciences, School of Health and Population Sciences, University of Birmingham, Birmingham, UK
⁷School of Medicine, University of St Andrews, St Andrews, UK
⁸Nkhoma CCAP Hospital, Nkhoma, Malawi
⁹Social Dimensions of Health Institute, University of Dundee, Dundee, UK

Correspondence to Dr Shaun Treweek; streweek@mac.com

Abstract

This review is an abridged version of a Cochrane Review previously published in the Cochrane Database of Systematic Reviews 2010, Issue 4, Art. No.: MR000013 DOI: 10.1002/14651858.MR000013.pub5 (see www.thecochranelibrary.com for information). Cochrane Reviews are regularly updated as new evidence emerges and in response to feedback, and Cochrane Database of Systematic Reviews should be consulted for the most recent version of the review.

Objective To identify interventions designed to improve recruitment to randomised controlled trials, and to quantify their effect on trial participation.

Design Systematic review.

Data sources The Cochrane Methodology Review Group Specialised Register in the Cochrane Library, MEDLINE, EMBASE, ERIC, Science Citation Index, Social Sciences Citation Index, C2-SPECTR, the National Research Register and PubMed. Most searches were undertaken up to 2010; no language restrictions were applied.

Study selection Randomised and quasi-randomised controlled trials, including those recruiting to hypothetical studies. Studies on retention strategies, examining ways to increase questionnaire response or evaluating the use of incentives for clinicians were excluded. The study population included any potential trial participant (eg, patient, clinician and member of the public), or individual or group of individuals responsible for trial recruitment (eg, clinicians, researchers and recruitment sites). Two authors independently screened identified studies for eligibility.

Results 45 trials with over 43 000 participants were included. Some interventions were effective in increasing recruitment: telephone reminders to non-respondents (risk ratio (RR) 1.66, 95% CI 1.03 to 2.46; two studies, 1058 participants), use of opt-out rather than opt-in procedures for contacting potential participants (RR 1.39, 95% CI 1.06 to 1.84; one study, 152 participants) and open designs where participants know which treatment they are receiving in the trial (RR 1.22, 95% CI 1.09 to 1.36; two studies, 4833 participants). However, the effect of many other strategies is less clear, including the use of video to provide trial information and interventions aimed at recruiters.

Conclusions There are promising strategies for increasing recruitment to trials, but some methods, such as open-trial designs and opt-out strategies, must be considered carefully as their use may also present methodological or ethical challenges. Questions remain as to the applicability of results originating from hypothetical trials, including those relating to the use of monetary incentives, and there is a clear knowledge gap with regard to effective strategies aimed at recruiters.

  • Statistics & Research Methods
  • Medical Ethics

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/3.0/ and http://creativecommons.org/licenses/by-nc/3.0/legalcode


Article summary

Article focus

  • Despite representing the gold standard in evaluating the effectiveness and safety of healthcare interventions, many randomised controlled trials do not meet their recruitment targets.

  • Poor recruitment can lead to extended study duration, greater resource usage and findings that are not as statistically precise as intended; in the worst case, a trial may be stopped.

  • A systematic review was carried out to identify methods used to improve recruitment to randomised controlled trials, and to quantify their effects on participation.

Key messages

  • There are promising strategies for increasing recruitment to trials, most notably telephone reminders, open-trial designs, opt-out strategies and financial incentives.

  • Many trials of recruitment methods involve hypothetical trials, and the applicability of their results to the real world is still unknown.

  • There is a clear knowledge gap with regard to effective strategies aimed at those recruiting to trials.

Strengths and limitations of this study

  • This Cochrane review utilised a comprehensive search and appraisal strategy, thereby ensuring that all relevant evidence was included.

  • Many of the included studies were small, increasing the likelihood of their being underpowered, and resulting in CIs that included the possibility of substantial benefit.

  • The interventions evaluated by included studies varied greatly, making it difficult to pool data for meta-analysis.

Introduction

Randomised controlled trials represent the gold standard in evaluating the effectiveness and safety of healthcare interventions, primarily because they help guard against selection bias.1 Nonetheless, the recruitment of clinicians and patients to these studies can be extremely difficult.2 While there are several possible consequences of poor recruitment, perhaps the most crucial is the potential for a trial to be underpowered.3 In such circumstances, clinically relevant differences may be reported as statistically non-significant, increasing the chance that an effective intervention will either be abandoned before its true value is established, or at the very least, delayed as further trials or meta-analyses are conducted. Similarly, while poor recruitment can be addressed by extending the length of a trial, this too can create delay in the roll-out of a potentially effective intervention, while increasing the cost and workload of the trial itself.

Several investigations of recruitment have attempted to quantify the extent of the problem, and while estimates differ, it is clear that many trials do not meet their recruitment targets.2 ,4–6 Of those that do, many achieve them only after extending the length of the trial. A recent cohort study of 114 multicentre trials, supported by two of the UK's largest research funding bodies (the Medical Research Council and the Health Technology Assessment Programme), found that less than a third achieved their original target (n=38; 31%), and more than half had to be extended (n=65; 53%).2 In a similar study of 41 trials in the US National Institutes of Health inventory, only 14 (34%) met or exceeded their planned recruitment, while a quarter (n=10; 24%) failed to recruit more than half their target.4 In many cases, trials may have to close prematurely due to recruitment problems.6

While trialists have used many interventions to improve recruitment, it has been difficult to predict the effect of these. The purpose of this review was to quantify the effects of specific methods used to improve recruitment of participants to randomised controlled trials, and where possible, to consider the effect of study setting on recruitment. Although there have been three previous systematic reviews on strategies to enhance recruitment to research, two do not include the most recent literature,7 ,8 while the third considers the combined effects of interventions across four strategic areas rather than the individual effects of specific interventions.9 Our synthesis builds on and updates an earlier Cochrane review;8 the protocol and full review are available from the Cochrane Library.10

Methods

Criteria for inclusion

Study types and participants

We included randomised and quasi-randomised controlled trials, including those recruiting to hypothetical studies, that is, where potential participants are asked if they would take part in a trial if it was run, but where no trial exists. Studies examining ways to increase questionnaire response rates, evaluating the use of incentives or disincentives to increase clinicians’ recruitment of patients or studying strategies to improve retention were excluded as these are addressed by other Cochrane Methodology Reviews (CMR).11–13 The study population included any potential trial participant (eg, patient, clinician and member of the public), or an individual or a group of individuals responsible for recruiting trial participants (eg, clinicians, researchers and recruitment sites).

Types of intervention

A recruitment intervention was defined as any method implemented to improve the number of participants recruited to a randomised controlled trial, whether this was directed at potential participants, at those responsible for recruiting participants or at trial design or co-ordination. Interventions used in any study setting were included.

Outcome measure

The outcome of interest was the proportion of eligible individuals or centres recruited.

Identification of studies

We searched the CMR Group Specialised Register 2010, Issue 2, part of The Cochrane Library (http://www.thecochranelibrary.com), ERIC (Educational Resources Information Center), CSA (1966 to April 2010), Science Citation Index and Social Sciences Citation Index, ISI Web of Science (1975 to April 2010), National Research Register (online) (2007, Issue 3), The Campbell Collaboration Social, Psychological, Education and Criminological Trials Registry (C2-SPECTR) (up to April 2008), MEDLINE, Ovid (1950 to March week 5 2010) and EMBASE, Ovid (1980 to 2010 week 14). The UK Cochrane Centre previously ran a series of searches in MEDLINE (in 2000) and EMBASE (in 2004) to identify reports of methodological studies, with the resulting citations subsequently entered into CMR. To increase the efficiency of our searches, we therefore restricted our searches of MEDLINE and EMBASE to records entered from 2001 and 2005, respectively. We searched PubMed to retrieve ‘related articles’ for 27 studies included in the previous version of this review. No language restrictions were imposed. A sample search is given in appendix 1; the complete strategy is available online from the Cochrane Library.10

Selection of studies

Titles and abstracts of identified studies were independently screened for eligibility by two reviewers. Full text versions of papers not excluded at this stage were obtained for detailed review. Potentially relevant studies were then independently assessed by two reviewers to determine if they met the inclusion criteria. Differences of opinion were discussed until a consensus was reached; the opinion of a third reviewer was sought when necessary.

Data extraction and assessment of bias

Data extraction of included studies was carried out independently by two reviewers (ST with EM, PL or MP) using a pro-forma specifically designed for the purpose. Data were extracted on trial design, study setting, participants, inclusion and exclusion criteria, interventions and outcomes evaluated and results. In addition, data on the method of randomisation, allocation concealment (adequate, unclear or inadequate), blinding (full, partial or none), the adequacy (objective, unclear or subjective) and reporting of outcome measures, and the level of follow-up were collected to allow the risk of bias in each study to be determined.14 This was independently assessed by the same two reviewers, and summarised in line with Cochrane guidance (A, low risk; B, moderate risk; C, high risk).15 Studies at a high risk of bias were not excluded, but their results were interpreted in light of this.

Data synthesis

Data were processed in accordance with the Cochrane handbook.15 Trials were grouped according to the type of intervention evaluated (eg, monetary incentives, alternative forms of consent, etc), with intervention groupings based on similarities in form and content. Where available, binary data were combined as risk ratios (RR) and the associated 95% CIs generated. Cluster randomised controlled trials were included only where there were sufficient data to allow analyses that adjusted for clustering. In such a case, an odds ratio (OR) was used as the summary effect in the meta-analysis, with the pooled result subsequently being converted to an RR using the average comparator group risk.
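
To make the pooling arithmetic concrete, here is a minimal sketch of the standard formulas involved: a risk ratio with its 95% CI from two-arm binary counts, and the conversion of a pooled odds ratio back to a risk ratio using the average comparator-group risk. This is an illustration only, not the review's analysis code; the function names and the counts in the usage lines are invented.

```python
import math

def risk_ratio(events_int, n_int, events_ctl, n_ctl):
    """Risk ratio and 95% CI from two-arm binary counts (log-normal approximation)."""
    rr = (events_int / n_int) / (events_ctl / n_ctl)
    # Standard error of log(RR)
    se = math.sqrt(1/events_int - 1/n_int + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

def or_to_rr(odds_ratio, p_ctl):
    """Convert an odds ratio to a risk ratio given the (average) comparator-group
    risk p_ctl, via RR = OR / (1 - p_ctl + p_ctl * OR)."""
    return odds_ratio / (1 - p_ctl + p_ctl * odds_ratio)

# Invented counts: 60/200 recruited with the intervention vs 45/200 without.
print(risk_ratio(60, 200, 45, 200))   # RR ≈ 1.33 with its 95% CI
print(or_to_rr(1.95, 0.25))           # pooled OR of 1.95 at 25% comparator risk
```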

Heterogeneity was explored using the χ² test, and the degree of heterogeneity observed (ie, the percentage of variation across studies due to heterogeneity rather than to chance) was quantified with the I² statistic. Where there was substantial heterogeneity, we informally investigated possible explanations and summarised data using a random-effects analysis if appropriate. Subgroup analyses were planned to explore key factors considered to be potential causes of heterogeneity, namely (1) trial design (randomised vs quasi-randomised); (2) concealment of allocation (adequate vs inadequate or unclear); (3) study setting (primary vs secondary care; healthcare vs non-healthcare); (4) study design (open vs blinded; placebo vs none); (5) target group (clinicians, patients and researchers) and (6) recruitment to hypothetical versus real trials. However, there were too few studies evaluating the same or similar interventions to allow these analyses to be conducted. Similarly, it was not possible to explore publication bias.
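
For readers unfamiliar with these statistics, the sketch below computes Cochran's Q and I² from study-level effect estimates using inverse-variance fixed-effect weights. It follows the textbook definitions rather than anything specific to this review, and the two example estimates are invented.

```python
import math

def cochran_q_and_i2(effects, variances):
    """Cochran's Q and I^2 for study-level effects (e.g. log risk ratios)
    with their variances, using inverse-variance fixed-effect weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of variation across studies due to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Two invented log-RR estimates with their variances; I^2 near 60% would be
# read as moderate heterogeneity.
print(cochran_q_and_i2([math.log(1.4), math.log(2.6)], [0.04, 0.12]))
```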

Results

Description of studies

Search results

The search strategy identified 16 334 articles, of which 301 appeared to meet the inclusion criteria and were subject to detailed review (figure 1). We retrieved the full text of an additional 10 papers identified from the reference lists of previous reviews, and one article published outside the search period that appeared relevant, giving a total of 312 potentially eligible studies. Forty-five papers, targeting more than 43 000 individuals, were included in the final analysis. Nineteen studies evaluated recruitment to hypothetical trials (table 1).

Table 1

Summary of included studies

Figure 1

Flow of studies into the review.

Study characteristics

Almost half of the studies were carried out in North America (n=21; 47%), with the remainder located in Europe (n=18; 40%) and Australia (n=5; 11%). One study involved centres in 19 countries worldwide. Studies were comparatively small in size, involving between 6 and 2561 participants (mean 493; median 79). It was not possible to determine actual participant numbers for two studies aimed at recruiters. In a further six studies evaluating recruitment to hypothetical trials, the number willing to participate was unclear, or was reported as a mean score. In more than half of the studies, participants were recruited from secondary care (n=23), or from secondary care in combination with another setting (n=2). Trials based in the community (n=8) or in primary care (n=6) were also common (table 1).

Risk of bias within studies

All of the studies were described by their authors as either randomised (n=41) or quasi-randomised (n=4), but more than a third failed to provide details of the method used to achieve this. Similarly, while allocation concealment was adequate in half of the studies, details were poorly reported in many others. The same was true of the procedures used to blind participants, details of which were often missing or incompletely reported. All studies provided details on the outcome measures used, many of which were subjective (eg, willingness or intention to consent). When considered across the domains, 12 studies had a low risk of bias, 13 a moderate risk and 20 a high risk (table 1).

Effects of interventions on recruitment

The 45 included studies evaluated 46 interventions across six main categories: trial design, obtaining consent, approach to participants, financial incentives, training for recruiters and trial co-ordination (table 2). As might be expected, the majority of studies were aimed directly at trial participants (n=40), with few studies targeting those responsible for recruitment. Although some of the categories incorporate several studies, we considered the majority of interventions to be sufficiently different to make pooling them inappropriate. Where reported data did not allow for calculation of an estimate of effect based on our outcome measure, the results from the paper have been presented. Effects of the interventions studied are presented in table 3 and figures 2–7; only those figures relating to pooled estimates have been presented.

Table 2

Recruitment intervention and effect on participation

Table 3

Effects of interventions to improve recruitment

Figure 2

Recruitment with open and blinded trial design.

Figure 3

Recruitment with consent to experimental, standard and usual consent procedure.

Figure 4

Recruitment with audiovisual and standard trial information.

Figure 5

Recruitment with clinical trials booklet and standard trial information.

Figure 6

Recruitment with invitation including study questionnaire and standard invitation.

Figure 7

Recruitment with telephone reminder and standard follow-up.

Trial design

Six studies (5675 participants; one study also recruited 28 general practices) considered the effect of trial design changes on recruitment.

Two trials16 ,32 compared an open design (where participants know what treatment they are receiving) with a blinded, placebo-controlled design, and found that an open design improved trial recruitment (RR 1.22, 95% CI 1.09 to 1.36; figure 2). A study investigating the impact of a placebo group on women's willingness to participate in a hypothetical hormone replacement trial59 suggests that the number likely to take part may be lower when a non-active comparator is included (RR 0.76, 95% CI 0.59 to 0.99). A trial of menorrhagia management compared conventional randomisation with a patient preference design, where those with a preference for a specific treatment receive it, while the remainder are randomised.18 Although this made little or no difference to the number who agreed to be recruited to the trial, women were more likely to participate in the study overall (96% vs 70%).

In a crossover trial for palliative care, cluster randomisation was compared with consenting individuals after randomisation if they were assigned to experimental treatment (Zelen design).25 Only two sites with few participants were included (6/24 recruited in the cluster arm vs 0/29 in the Zelen arm; p=0.02). The final study involved 28 general practices in a trial of two delivery methods for insulin, and compared internet-based data capture with paper-based collection, reporting higher recruitment with the paper-based method (45/52 vs 28/28; p=0.04).42

Obtaining consent

Five studies (4468 participants) considered modifications either to the consent process (including timing) or to the format of the consent form.

Consent process

In a trial on decision aids for colorectal cancer screening,55 the use of opt-out (potential participants were contacted unless they withdrew their details) was found to improve recruitment when compared with an opt-in approach to contact (RR 1.39, 95% CI 1.06 to 1.84). Two studies recruiting to hypothetical trials (one on a new drug and one on anaesthesia) evaluated various combinations of prerandomisation and consent.48 ,50 Both evaluated consenting specifically for the experimental or the standard treatment, but there was considerable heterogeneity for the latter (I²=93%), and under a random-effects model, neither form of consent appears to lead to any clear difference in recruitment (figure 3). Three other variants of consent were also considered: (1) consent allowing those refusing participation to choose between the treatments,50 (2) consent to a 70% chance of receiving the experimental treatment because the clinician believes it is better48 and (3) consent to a participant-modified chance of receiving the experimental treatment (60%, 70% and 80%).48 All three appear to have had little effect on recruitment compared with usual consent.

Consent format

Two trials dealt with how the consent form was presented to potential participants. Researchers in a smoking cessation trial56 compared the effect of the consent form being read aloud by the researcher with it being read by participants, while a cluster trial recruiting to oncology studies evaluated an easy-to-read version of the consent form.19 Neither study found that the intervention improved recruitment.

Approach to participants

Twenty-eight studies (31 910 participants) evaluated the effect of modifying trial information or the way it was delivered.

Delivery of trial information

Nine studies considered various ways of providing potential participants with information about the trial. Studies using video or other audiovisual materials had mixed results. A study evaluating the effect of providing a 10 min video alongside written information in a trial of pregnant women with prelabour rupture of membranes60 found that this most likely improved willingness to participate compared with written information alone (RR 1.75, 95% CI 1.11 to 2.74). There were three studies presenting audiovisual overviews of clinical trials (including risks and benefits, randomisation and value to society) for a range of cancer studies (figure 4),21 ,22 ,33 one using interactive computer information in a hypothetical trial on managing complications after heart attack36 and another using video plus a pamphlet for a hypothetical HIV vaccine trial,28 but all found little or no difference in recruitment.

Interactive computer presentation compared with audio-taped presentation in a hypothetical cancer trial44 slightly improved recruitment (RR 1.48, 95% CI 1.00 to 2.18), while a multimedia presentation of key trial information, delivered with a research assistant available to answer questions, appears to have had little impact compared with the research assistant alone in a hypothetical drug trial for schizophrenia.35 Finally, a study using a brief verbal education session for Spanish-speaking women eligible for a trial on high breast cancer risk45 found slightly improved recruitment compared with print materials alone (RR 1.14, 95% CI 1.01 to 1.28).

Supplementing trial information

Five studies considered the effect of supplementing usual trial information with additional materials. Two studies evaluated the inclusion of a booklet on clinical trials, one in a hypothetical breast cancer trial,23 the other in a real trial for HIV patients,34 while two trials on physical activity31 and injury prevention37 included study-relevant questionnaires with the invitation letters to potential participants. All four interventions made little or no difference to recruitment (figures 5 and 6). In the final study, the authors investigated the effect of including a newspaper article publicising the trial.51 This led to little or no difference in recruitment, even when the article was replaced with one that was more favourable to the trial.

Framing and content of trial information

Eight studies evaluated modifications to the way study information was presented, seven of them for hypothetical trials. The only study to evaluate an intervention for a real trial compared total disclosure of information relevant to a cancer trial with a more limited individual approach, where the level of detail was at the clinician's discretion.53 This found that providing more information led to little or no difference in recruitment. Similarly, a study comparing a more detailed information leaflet with a less detailed one in a hypothetical cancer trial also found that this made little or no difference.27

A consent form describing a new medication that ‘may work twice as fast as usual treatment’ most likely increased recruitment compared with one describing it as working ‘half as fast’ (RR 1.62, 95% CI 1.10 to 2.37),52 while describing treatment as ‘new’ rather than ‘standard’ may have slightly decreased recruitment (RR 0.81, 95% CI 0.66 to 0.99).38 Similarly, emphasising the pain or risk involved in a trial most likely decreased recruitment (RR 0.55, 95% CI 0.36 to 0.85 and RR 0.41, 95% CI 0.24 to 0.68, respectively).54 Neutrally framed information about side effects and survival compared with negatively or positively framed information43 appears to have led to little or no difference in recruitment.

Two studies investigated the effects of disclosing the financial interests of those involved in the trial. In the first, a hypothetical heart disease trial, three scenarios outlining the investigators’ interests were presented.57 Willingness to participate reduced when the investigator had an investment in the drug company, compared with no disclosure (p=0.03) or per capita research payments to the investigating institution (p=0.01). In the second study, five scenarios were presented to research-interested adults with asthma or diabetes.58 Again, willingness to participate was lowest when the investigator had an investment in the drug company, and highest when the company paid the running costs (p<0.001).

Telephone contact

Three studies used telephones as a means of contacting potential participants. Two trials (on returning sick-listed people to work49 and activity in older people31) found that using telephone reminders to follow up written invitations improved recruitment (OR 1.95, 95% CI 1.04 to 3.66; figure 7), although there was moderate heterogeneity related to the magnitude of effect (I²=59%). In the third study, a series of SMS messages containing quotes from existing recruits was texted to potential participants of a smoking cessation trial.26 This improved recruitment compared with the standard written invitation (RR 35.09, 95% CI 2.12 to 581.48), although small numbers overall led to a wide CI.
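
The width of that interval follows directly from the arithmetic: the standard error of log(RR) is dominated by reciprocals of the event counts, so a single event in one arm produces an enormous CI. A minimal illustration with invented counts (not the SMS trial's data):

```python
import math

# Invented counts (NOT the SMS trial's data): 8/40 recruited with the
# intervention vs 1/40 with the standard invitation.
a, n1, c, n2 = 8, 40, 1, 40
rr = (a / n1) / (c / n2)
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of log(RR), driven by the single event
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR={rr:.1f}, 95% CI {lo:.2f} to {hi:.1f}")  # RR=8.0, roughly 1.05 to 61
```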

Eligibility screening

Four studies considered the use of different methods for screening potentially eligible participants. In a study recruiting African Americans to a cancer trial,24 conducting baseline screening and data collection at face-to-face church sessions most likely improved recruitment compared with standard procedures (RR 1.37, 95% CI 1.05 to 1.78). In two other studies evaluating willingness to take part in a hypothetical lifestyle trial, face-to-face (researcher) eligibility screening was compared with telephone screening,20 and with varied methods of participant self-completion of a screening questionnaire.29 Telephone screening may have improved willingness to participate compared with researcher administration20 (RR 1.26, 95% CI 1.06 to 1.50), but neither face-to-face administration nor electronic completion led to any difference in recruitment compared with standard self-completion on paper.29 A fourth study recruiting to chronic depression treatment trials46 incidentally reported on the influence of screening personnel, comparing senior investigators with research assistants, but this had little impact on recruitment.

Financial incentives

Three studies involving 1698 participants evaluated the effects of offering financial incentives on recruitment. In one smoking cessation trial, the inclusion of a monetary incentive (£5) with the study information and consent form was found to increase recruitment (RR 12.95, 95% CI 1.71 to 98.21).26 In two other studies, the incentive was payment for participation (in a hypothetical trial), which was varied relative to the risk involved. One study combined three levels of trial risk (high, medium and low) with three levels of payment ($1800, $800 and $350),17 while the other varied the payment levels ($2000, $1000 and $100) and the risk of adverse drug effects or of receiving placebo in a hypothetical antihypertensive drug trial.30 Both studies found that willingness to participate increased with payment (p=0.015 and p<0.001, respectively), in one case regardless of the associated risk.17

Training for recruiters

Two studies, one with 98 recruiters and the other with 126 recruiting centres, considered interventions aimed at those recruiting, both involving educational packages.39 ,40 One study evaluated training Hispanic participants in a prevention trial as lay advocates—Embajadoras—to refer other Latinas to the study.40 The data analysis did not correct for clustering and no intracluster correlation coefficient (ICC) was provided, but the authors reported that more Embajadoras recruited to the trial than either untrained Hispanic or Anglo controls (8/28 vs 0/26 and 2/42, respectively). The second study, a cluster trial involving 126 centres in a cancer and leukaemia research network, compared the standard input for recruiters with an educational package (including a symposium and monthly mailings) aimed at improving recruitment of older participants.39 Although centre-level data and the ICC were not provided, clustering was considered in the analysis, and the authors found that additional education did not significantly influence recruitment (31% vs 31%, p=0.83).
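
As an aside on why uncorrected clustering matters here, the textbook design-effect calculation below shows how an intracluster correlation inflates variance and shrinks the effective sample size; all values are invented for illustration and have no connection to these two studies.

```python
# Design effect for cluster-randomised data: with average cluster size m and
# intracluster correlation coefficient icc, variances inflate by
#   DEFF = 1 + (m - 1) * icc
# so the effective sample size shrinks to n / DEFF. Values below are invented.
m, icc, n = 30, 0.05, 900
deff = 1 + (m - 1) * icc   # 2.45
n_eff = n / deff           # ~367 'effective' participants, not 900
print(deff, round(n_eff))
```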

Trial co-ordination

Two studies involving a total of 302 trial sites looked at the effect of greater contact from the trial co-ordinators. In the first, a breast cancer trial, 68 of the 135 recruiting centres received on-site visits (including an initiation visit to review the trial protocol, etc), while the remainder received none.41 In the second, an international diabetes trial, additional communication from the co-ordinating centre (frequent emails, individually tailored feedback on recruitment, etc) was compared with usual communication.47 Neither study presented the proportion of eligible participants, but both reported finding little difference in recruitment when site visits were made (302 with visits vs 271 with no visits), or when communication was increased (median number of recruits 37.5 vs 37.0 for standard communication).

Discussion

Principal findings

In this systematic review, we assessed the evidence from 45 trials evaluating the effect of intervention strategies designed to improve recruitment to randomised controlled trials. We found that a number of interventions do appear to be effective, although the evidence base related to some is still limited. Telephone reminders to non-responders,31 ,49 opt-out procedures requiring potential participants to contact the research team if they do not want to be contacted about a trial,55 including a financial incentive with the trial invitation,26 and making the trial open rather than blinded16 ,32 all improved recruitment in high-quality studies involving real trials. The effect of other strategies to improve recruitment, however, remains less clear.

Although partial preference designs may improve participation in a study as a whole, they appear to have little impact on recruitment to randomisation,18 and with the exception of the opt-out approach already mentioned, a variety of strategies involving changes to consent procedures failed to produce any increase in recruitment. Similarly, modifications to the method or quantity of information presented to potential participants—either about trials in general or about a specific trial—did not provide clear evidence of the benefit of this approach to improving recruitment. Providing information to prospective participants in the form of quotes from existing participants via SMS shows potential, but it was evaluated in a single study,26 and requires further evaluation. Few studies looked at interventions aimed not at potential participants but at those recruiting them,39–41 ,47 and none presented clear evidence in favour of the strategies used.

While several of the interventions studied show promise, there are some caveats. The pooled analysis for telephone reminders had moderate heterogeneity (I²=59%), although it would appear that it is the magnitude of effect rather than the benefit of the intervention that is in doubt. Similarly, while the inclusion of a financial incentive as used by Free et al26 did improve recruitment, the number of participants recruited was small, leading to uncertainty about the magnitude of effect. Two additional studies involving financial incentives found that increasing payment led to increased recruitment,17 ,30 but these involved hypothetical trials as well as sums of money that might not be feasible when recruiting to real studies. In addition, ethical concerns have been raised about the use of some of these strategies. Telephone reminders and financial incentives have both been used and accepted by many as legitimate recruitment tools, but they may be considered by some to be a form of coercion. Opt-out procedures have previously been proposed as a way of improving recruitment to health research,61 but this approach remains controversial, as ethics committees generally require that research participants provide express approval for research participation, including being contacted about the study by researchers. However, it is worth noting that the trial included in this review55 studied opting out of being contacted about a trial rather than opting out of consenting to trial participation. This may be viewed as less controversial, and as such, ethics committees may be more willing to accept it as part of a recruitment strategy. Finally, while it may be easier to recruit to an open trial than to a blinded trial, there is clearly a greater risk of bias involved, and it is therefore an approach that requires careful consideration before being implemented.

Limitations of the review

Many of the studies included in this review were small, likely to be underpowered and with CIs including the possibility of substantial benefit. This is particularly true of interventions that modified the approach made to potential participants. In addition, 19 studies involved hypothetical trials, and the implications of their results for real trials are still unclear.

The interventions used by studies varied substantially, making it difficult to pool data. Even those studies adopting the same basic approach, such as altering the consent process, were generally sufficiently different to make pooling inappropriate.62 For example, while there were five studies of seven interventions looking at changes to consent procedures, only two interventions were comparable enough to be pooled. Similarly, video presentations were used in six studies but generally delivered different information, or were used in combination with other interventions that differed between studies. Consequently, only three could be combined in the same analysis. At the outset of the review, we had planned to undertake a number of subgroup analyses of the key factors considered relevant to heterogeneity, but variations in the interventions themselves would have made these comparisons meaningless. One such subgroup related to the impact of recruiting to a hypothetical trial versus a real trial. There was, however, only one comparison that included at least one trial of each type, and we were therefore unable to assess this factor. Only one of the cluster trials31 provided sufficient data to allow an appropriate analysis to be incorporated in the review. In addition, there were a number of studies in which data were potentially clustered by the study the participant was invited to join, even though participants were individually randomised. As such, estimates from these studies may be overly precise.

Potential bias was also a problem in many of the studies, often linked to hypothetical trials. Although allocation concealment was considered high quality for 22 of the 45 trials (it was unclear for 16 and poor for 7), the overall assessment of the risk of bias was considered as low for only 12 studies. Twenty trials were considered to be at a high risk of bias. It was not possible to predict the direction of effect that any bias may have had on study outcomes. In addition, we were unable to make statistical judgements about the likelihood of publication and related biases due to the relatively small number of included studies per comparison, and the wide variation in the recruitment strategies being evaluated.

However, this review provides an update to previous reviews in the field, identifying a greater number of relevant studies and presenting new evidence relating to trial design (the potentially negative impact of using a Zelen design), the approach to participants (the benefits of using SMS messages, framing of trial information, financial disclosure) and financial incentives (including a cash incentive with the trial invitation). In addition, it has generated further evidence to support the broad conclusions from earlier work, namely that opt-out procedures, open rather than blinded trials, paid participation and telephone reminders to non-responders improve recruitment, while various methods of consent and the provision of supplementary information appear to have little effect.

Implications for research

The findings from this review suggest two key areas within recruitment-related research where activity could be focused. First, despite the failure of many trials to meet their recruitment targets, and the significant implications of this both practically and in relation to the delayed application of effective interventions,2–6 few strategies designed to improve trial participation have been rigorously evaluated in the context of a real trial. Almost half of the trials in this review involved hypothetical studies, including many of those evaluating changes to the consent process and all but one of those looking at the use of financial incentives. In some of these studies, there was evidence of benefit. In others, the intervention demonstrated little impact. But what is true for all is that their effect in a real setting is unknown. Given that, we would argue that while the use of hypothetical trials to study recruitment interventions has its place, trialists should include evaluations of their recruitment strategies within their trials, and research funding bodies should support this as part of future trial methodologies. Where uncertainty exists around two or more strategies, an evaluation could help trialists to focus their efforts on the most effective strategy (or strategies) while at the same time adding to the methodological literature. If recruitment is carried out in phases, evaluation could be used in the early phases, with later phases employing the most effective strategies identified.63 Since everyone receiving a recruitment intervention ‘counts’ for the evaluation—the study is simply counting the number of yes and no responses—statistical power is generally not a problem, as the sketch below illustrates. Graffy et al64 have discussed nested trials of recruitment interventions in more detail.
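
As a rough check on that power claim, the sketch below applies the usual normal-approximation power formula to a two-arm comparison of recruitment proportions; the 20% to 30% scenario and the sample size are invented for illustration.

```python
import math
from statistics import NormalDist

def power_two_proportions(p0, p1, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided test comparing two recruitment
    proportions, using the usual normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p0 + p1) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = math.sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
    return NormalDist().cdf((abs(p1 - p0) - z * se_null) / se_alt)

# Invented scenario: detect a rise from 20% to 30% recruitment, 400 invitees per arm
print(round(power_two_proportions(0.20, 0.30, 400), 2))  # ≈ 0.91
```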

Second, previous research on potential barriers to trial participation has suggested that there are various factors that may provide the means by which recruitment can be increased, many of them related to trial recruiters. These include evaluating a clinically important question, minimising the workload of participating clinicians, removing responsibility for consent from clinicians and involving research networks.65–67 Only 4 of the 45 studies included in this review evaluated interventions specifically designed for recruiters, and of those, only one reported an improvement in recruitment (although the data analysis did not adjust for clustering).40 There is clearly a gap in knowledge with regard to effective strategies targeting this group, and additional research aimed at how to increase recruitment by individuals or sites participating in trials would be beneficial. Other authors have used multivariable regression to look for factors that influence recruitment, although there were few insights gained from this.2 ,67 However, this approach may be worth revisiting as more evaluations of recruitment interventions are published.

Evidence from this review has demonstrated that there are promising strategies for increasing recruitment to trials, including telephone reminders to non-responders and requiring potential participants to opt-out of being contacted by the trial team. Some of these strategies, such as open trial designs, need to be considered carefully as their use also has disadvantages. Many, however, require further rigorous evaluation to conclusively determine their impact.

Acknowledgments

The authors would like to thank Hatim Osman for his help with screening abstracts, Marian Pandal and Gail Morrison for their help with obtaining full-text articles, and Karen Robinson and Mary Wells for identifying two studies that were missed in the previous version of this review.

References


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.


Footnotes

  • Contributors JAC, CJ, MJ, RJ, MK, MP, FMS, ST and SW were involved in the design of the review. MJ developed and ran the electronic searches. Data abstraction tools were designed by EDM with input from ST. JAC, CJ, RJ, MK, PL, EDM, MP, FMS, TKT, ST and SW were involved in record screening and study selection. MP reviewed the reference lists of review articles identified by the search. ST, PL, EDM and MP undertook data abstraction and assessment of risk of bias. JAC and ST analysed the data. The article was drafted by EDM and ST, and all authors contributed to subsequent drafts. ST is the guarantor.

  • Funding Jonathan Cook held Medical Research Council UK Training (reference no: G0601938) and Methodology (reference no: G1002292) Fellowships, which supported his involvement in this review. The Health Services Research Unit receives core funding from the Chief Scientist Office of the Scottish Government Health Directorates. The views expressed are those of the authors and do not necessarily reflect those of the funders.

  • Competing interests None.

  • Ethics approval Not required; this was a systematic literature review.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement There are no additional unpublished data available.