A systematic review of the effect of retention methods in population-based cohort studies

Abstract

Background

Longitudinal studies are of aetiological and public health relevance but can be undermined by attrition. The aim of this paper was to identify effective retention strategies to increase participation in population-based cohort studies.

Methods

Systematic review of the literature to identify prospective population-based cohort studies with health outcomes in which retention strategies had been evaluated.

Results

Twenty-eight studies published up to January 2011 were included, eleven of which were randomized controlled trials (RCTs) of retention strategies. Fifty-seven percent of the studies were postal, 21% in-person, 14% telephone and 7% used mixed data collection methods. A total of 45 different retention strategies were used, categorised as 1) incentives, 2) reminder methods, repeat visits or repeat questionnaires and alternative modes of data collection, or 3) other methods. Incentives were associated with an increase in retention rates, which improved with greater incentive value. Whether cash was the most effective incentive was not clear from studies that compared cash and gifts of similar value. The average increase in retention rate was 12% for reminder letters, 5% for reminder calls and 12% for repeat questionnaires. Ten studies used alternative data collection methods, mainly as a last resort. All postal studies offered telephone interviews to non-responders, which increased retention rates by 3%. Studies that used face-to-face interviews increased their retention rates by 24% by offering alternative locations and modes of data collection.

Conclusions

Incentives boosted retention rates in prospective cohort studies. Other methods appeared to have a beneficial effect but there was a general lack of a systematic approach to their evaluation.

Background

Longitudinal cohort studies are important for understanding the aetiological mechanisms underlying population and individual differences in the incidence of disease and for monitoring social inequalities in health[1, 2]. Selective attrition, however, is a known problem in cohorts, as those in disadvantaged socio-economic groups, ethnic minorities, younger and older people, and those at greater risk of ill-health are more likely to drop out[3]. This may result in the generalisability of findings being limited and estimates of association being biased[4]. Direct evidence for this is limited, however: some studies[5] found evidence of biased estimates at different response rates, while others found that associations were unaffected by selective attrition[6–8].

Against a background of declining response rates in UK surveys[9] and in many cohort studies[10], more focused efforts to prevent attrition are therefore required to ensure that the public health benefits of cohort study findings are not compromised.

The barriers to recruitment and retention of participants in clinical trials are fairly well documented. General distrust of researchers and studies, concerns about research design, the consent process, discordance between lay beliefs and medical practice, patient treatment preferences, uncertainty about outcomes, and the additional demands of the trial (e.g. duration of interventions, cost of travel, etc.) are frequently cited reasons for non-participation[11–13]. Similar reasons have recently been cited for non-participation in longitudinal cohort studies[14, 15]. While a number of reviews have reported different ways of improving study participation, as well as the contextual factors that may affect these approaches[16–18], little is known about the effectiveness of specific retention strategies, which may differ by study design. Reasons for attrition may differ in randomized trials, e.g. random assignment to an unwanted treatment group; the process of randomization and the types of interventions make it difficult to extrapolate the effectiveness of certain retention methods to cohort studies. Cohort studies are expensive, with the follow-up of high-risk groups requiring the most effort and resources, so there is a critical need to identify effective retention strategies. For these reasons, this review focuses exclusively on cohort studies. Its main objective was to determine the effectiveness of retention strategies in improving retention rates in prospective population-based cohort studies.

Methods

This review focused on the evaluation of retention strategies in prospective population-based cohort studies with health as an outcome. A population-based cohort was defined as "any well-defined population defined by geographic boundaries, membership or occupation"[19]. Studies were included if there was at least one wave of follow-up data collection in which the participant, or a proxy, was personally contacted by the study, at least one retention method was described and method-specific retention rates were reported. Studies were excluded if they were clinical or non-clinical trials evaluating the effectiveness of treatment regimens or intervention/prevention programmes, non-population-based cohort studies or cohorts with record linkage as the only method of follow-up. Only English language studies were searched and selected in order to reduce potential biases from misinterpretation. Studies that focused solely on locating (i.e. tracing) respondents, although this is an important activity for cohort maintenance, were not included in this review. Similarly, the effect of initial recruitment on subsequent retention was not considered here, although there is evidence that significant effort at recruitment may reduce subsequent attrition[20].

The electronic databases Medline, PsycINFO, PsycABSTRACTS, Embase, CINAHL, ISI, AMED and the Cochrane Central Register of Controlled Trials were initially searched for studies published through to June 2007. The review was updated with additional searches of Medline, PsycINFO, ISI and the Cochrane Central Register of Controlled Trials conducted in November 2008 and in January 2011. Existing reviews were identified by searching the Cochrane Library, the Database of Abstracts of Reviews of Effects (DARE) public databases at the Centre for Reviews and Dissemination, and other relevant medical and social science databases, including those produced by the Health Development Agency. Manual searches of bibliographies were also conducted to obtain reports on primary studies that were not retrieved through the electronic search. A list of potential prospective population-based cohort studies was also developed; study investigators were contacted and study websites searched for unpublished and technical reports.

Five terms were used in the electronic search: 1) recruitment, 2) retention, 3) attrition, 4) participation, and 5) study design. In most cases a second keyword was paired with the main search term; for example, attrition was paired with any variation of the stem "minimi" (e.g. minimization, minimizing). Use of the term 'recruitment' enabled identification of cohort studies that might have been missed through use of the retention term alone. The search was restricted to publications where the two words appeared within two words of each other (e.g. minimization of attrition, attrition was minimized). Specific terms were agreed upon by the authors and adapted for each database (Additional File 1), as illustrated below. CB conducted the initial appraisal of all titles and abstracts of papers. SH conducted an independent 20% re-check to ensure that potentially relevant cohort studies or retention evaluations were not missed. Any disagreements were resolved through discussion. Data items to be extracted were agreed by all authors, and a data extraction database containing details of each study was developed by CB. MB independently reviewed all data extracted.
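As an illustration of this proximity pairing, an Ovid-style search line using an adjacency operator such as adj2 (which retrieves records where two terms appear within two words of each other, in either order) might be written as

  attrition adj2 minimi*

This is a hypothetical reconstruction for illustration only; the exact strings used for each database are given in Additional File 1. A line of this form would match phrases such as "minimization of attrition" and "attrition was minimized", consistent with the restriction described above.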

Retention strategies were categorised as 1) incentives to participate (monetary and non-monetary), 2) reminder calls or letters, repeat visits (i.e. more than one visit to schools to follow up pupils who were not present on previous days of data collection) or repeat questionnaires, and alternative modes of data collection, and 3) other methods (e.g. method of posting, length of questionnaire). Differences in retention rates across the different retention strategies were examined with the Meta-Analyst software[21]. Individual study proportions, that is, the number of participants retained by a specific retention method divided by the number of participants approached, and their 95% confidence intervals were calculated using a random-effects model weighted by sample size and variance[21]. The individual study proportions given in the tables are the additional increase in the proportion of subjects retained from the specified method. Due to the heterogeneity of the methods within and between the RCTs and the non-experimental studies, meta-analyses were not conducted.
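As a minimal sketch of this calculation (using notation introduced here; the analyses themselves were run in Meta-Analyst[21], whose internal estimators may differ), the proportion for study i and its approximate 95% confidence interval under the usual normal approximation are

$$p_i = \frac{r_i}{n_i}, \qquad \widehat{SE}(p_i) = \sqrt{\frac{p_i(1 - p_i)}{n_i}}, \qquad \text{95% CI} = p_i \pm 1.96\,\widehat{SE}(p_i),$$

where $r_i$ is the number of participants retained by the method and $n_i$ the number approached. Under a standard random-effects model, each study would then be weighted by the inverse of its total variance,

$$w_i = \frac{1}{\widehat{SE}(p_i)^2 + \hat{\tau}^2},$$

where $\hat{\tau}^2$ is the estimated between-study variance; this is consistent with the weighting by sample size and variance described above.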

Results

Literature Search

The literature, bibliography, and website searches, together with correspondence with study investigators, identified 17 210 papers. As Figure 1 shows, the vast majority of these were excluded because they were not population-based studies or because they focused on recruitment rather than retention strategies, leaving 913 potentially relevant papers. Two-thirds of these were excluded because they contained no information on retention strategies; of those remaining, 30% were excluded because they did not evaluate the retention strategies they described.

Figure 1. Flowchart of search methodology.

Twenty-eight studies, identified from thirty-two published papers, unpublished papers, technical reports, book chapters, and one personal communication, were eligible for inclusion in this review[22–53],[Dudas, e-mail, 14 December 2007]. Table 1 provides a description of each study (see Additional File 2 for a more detailed description).

Table 1 Description of Studies Included in Review

Of the 28 studies reviewed, more than half were conducted in the USA, used postal questionnaires and were conducted with adult cohorts. The majority of cohorts were less than 10 years old when the retention strategy took place and had had fewer than 10 follow-ups.

Of the 28 studies, 11 were RCTs[22–33], of which 9 focused on the effectiveness of incentives and 2 experimented with interview length and postal methods; the remaining 17 conducted other types of analyses of the effectiveness of the retention methods used[34–53],[Dudas, e-mail, 14 December 2007]. Some of the retention efforts were conducted in pilot studies[36, 40, 42] and others were trialled after the main data collection attempt had been completed[22, 24, 27–31, 33]. This use of retention methods on sub-populations may have a significant effect on the response rates reported. For example, pilots on hard-to-reach groups or reluctant participants may have low response rates; however, the addition of these participants could improve both the overall response rate and the representativeness of the study population.

Incentives

Incentives were evaluated in ten studies[22, 23, 25–28, 32, 33, 40, 45], the results of which are shown in Table 2. Incentives were associated with an increase in overall retention rates[22, 25–28, 32, 33, 45]. Five studies[23, 25, 26, 32, 40, 45] trialled incentives with all study participants, i.e. incentives were included in the first data collection attempt. The increases associated with the provision of, or an increase in the value of, incentives ranged from 2% to 13%. Two studies[25, 26, 32] randomized differing amounts of monetary incentives, and retention was higher in the groups that received higher amounts. Two studies[23, 40] examined the effects of non-monetary incentives and found no increase in retention. However, Rudy et al., in a non-experimental study, reported a 79% response rate among those receiving a $100 incentive and 66% among those receiving non-cash gifts (χ²(1,166), p < 0.05)[45]. Retention rates also increased with the value of the monetary incentive offered[22, 25–28, 32, 33].

Table 2 Increase in Study Retention Rates for Incentive and Reminder Letters by Data Collection Type

An exception to the finding of increased response with greater monetary value was the study by Doody et al., which found that a $2 bill resulted in a higher retention rate than a $5 cheque[22]. One possible explanation is that in the United States a $2 bill is rare and may have novelty value; alternatively, the higher amount was given as a cheque, and the transaction cost of cashing a low-value cheque may have reduced the effect of the incentive[22]. The National Longitudinal Surveys (NLS) found that not all respondents who received cheques or cash cards as incentives used them. Although equivalent cash values were not offered to provide a direct comparison, it is likely that, while this strategy reduced the overall cost of incentives to the study[27, 28], it also reduced the impact of the incentive on the retention rate. There is tentative evidence that providing incentives, particularly upfront and to specific groups, e.g. non-responders to previous data collections, may reduce the cost per interview, as the incentive costs less than the multiple visits or calls needed to obtain such respondents without one[27, 28].

Reminders, repeat contacts and alternative modes of data collection

The most common approach to improving retention was to write to or call respondents to remind them to complete a questionnaire or take part in an interview; to send additional questionnaires, make repeat calls or visits; or to offer alternative modes of data collection in an attempt to capture reluctant respondents. Seventeen studies included at least one of these methods[22, 24, 33–44, 46–53], most including a range of them in a hierarchical fashion, starting with the least labour-intensive (i.e. reminders) and ending with the most costly (i.e. alternative modes of data collection). With the exception of one study[42], it was possible to separate out the effect of each specific stage on retention.

Table 2 shows the additional proportion retained after posting reminder letters or postcards following a postal survey, which appeared to increase with the number of letters sent. The time between sending out the initial postal questionnaire and the reminder varied by study, but no study evaluated the optimal interval between postings.

Ten studies posted questionnaires to participants multiple times[22, 31, 33, 35–37, 40, 42, 46, 48],[Dudas, e-mail, 14 December 2007], with nine[22, 31, 33, 35–37, 40, 46, 48],[Dudas, e-mail, 14 December 2007] providing retention rates for each posting. Table 3 shows that the additional proportion retained from posting repeat questionnaires appeared to increase with the number posted. Only one study compared the effectiveness of reminder letters with that of repeat questionnaires: Hoffman et al. found that those who received a second questionnaire were much more likely to be retained than those who received only a reminder postcard[40].

Table 3 Increase in Study Retention Rates for Repeat Questionnaires and Alternative Methods of Data Collection by Data Collection Type

Ten[34, 35, 38, 41–44, 46, 47, 50–53] of the twenty-eight studies offered alternative data collection modes to participants; seven[34, 35, 38, 42, 46, 47, 50–53] of these studies had already used other retention methods. Table 3 shows that there was an increase in retention with any alternative additional data collection method. The additional retention was highest for face-to-face studies[41, 43, 44], which conducted the first interview in a central location, e.g. a clinic or school, and followed up with home interviews or postal questionnaires. In these studies, alternative modes of data collection were generally the second stage of the study, which, in addition to the convenience of home-based interviews, might help to explain the larger average increases in retention[41, 43, 44].

The increase in retention from reminder calls made for postal survey studies was also examined in four studies (data not shown), all of which had already sent reminder letters[34, 37, 46, 50–53]. Reminder calls appeared to have a greater effect on younger cohorts[34, 50–53], with an increase of between 10% and 16%, in comparison to increases of between 1% and 6% among older cohorts[37, 46, 50–53].

Two telephone surveys demonstrated the need to make multiple calls to achieve a completed interview (data not shown)[38, 47]. Garcia and colleagues found that 62% of participants required only between one and three telephone calls to complete an interview; however, ten or more calls were required to successfully interview 9% of participants[38]. The National Population Health Survey found that fewer than 15 attempts were needed to conduct 90% of the interviews, but up to 50 calls were required for the remaining 10%[47].

In a school-based study of multi-ethnic pupils in England (data not shown), Harding et al. used multiple school visits (i.e. up to 13 additional visits to schools to follow up pupils who were not present on previous days of data collection). Retention increased by 26%, 6% and 2% after the second, third and fourth visits respectively[39].

Multiple Methods

Thirteen postal survey studies[22, 24, 33–37, 40, 42, 46, 48–53] in this review used multiple retention methods, and eleven of these[22, 24, 33–37, 42, 46, 49–53] had retention rates of more than 70%. This might suggest that the more effort studies put into retaining respondents, including the use of multiple methods, the higher the retention rate will be; however, the costs will also be higher. These studies[22, 24, 33–37, 40, 42, 46, 48–53] all began with the cheapest methods, e.g. posting reminder letters, and ended with the most expensive, such as alternative modes of data collection, so that the number of respondents who needed to be captured with each additional method decreased as the costs increased.

Other methods

In addition to the broad approaches described above, a few more specific initiatives were tried by some studies. Rimm et al. found that retention was significantly higher for questionnaires sent via certified mail than for those sent by other mail types, and for envelopes that were handwritten than for those that were not[31]. However, Doody et al. did not find any difference in retention between alternative methods of posting the questionnaire[22]. Kalsbeek and Joens reported some evidence of increased retention from providing personalised information in letters; however, as this was combined with non-monetary incentives, determining the true effect of personalisation was difficult[23].

In two more detailed experiments, Hoffman et al. found a modest increase in retention when a 4-page questionnaire was used instead of a 16-page questionnaire (p = 0.145)[40], and Clarke et al. found that including income questions did not affect retention rates, but that asking proxy respondents to complete a cognitive questionnaire about the primary study participants appeared to decrease retention when the main questionnaire was used[36].

Discussion

In the studies reviewed here, incentives were associated with an increase in retention rates, which improved with higher incentive values. Whether cash was the most effective incentive was not clear from studies that compared cash and gifts of similar value. Studies of other methods (i.e. reminder letters or calls and alternative modes of data collection) also demonstrated a benefit, but it was difficult to assess their impact due to a less standardised approach. This is the first known review of the effect of retention strategies on retention rates focused specifically on population-based cohort studies. It is important to consider the effect of different retention strategies on longitudinal studies specifically, as different mechanisms may operate once a participant has been, or expects to be, in a study over a period of time.

Strengths and Limitations

A major strength of this review is its extensive systematic search of the literature. That so few studies rigorously evaluating retention methods were found despite this search suggests that such evaluations are rarely conducted. A key challenge in this review has been comparing retention rates from studies with different methods of calculating or reporting them. In general, we used the retention rate as reported by the authors, but we are aware that different methods of calculation could have been used. For example, among studies with two or more follow-up data collections, some used the number of participants eligible for a specific wave of data collection as the denominator[41, 48] while others used the baseline sample[34, 43, 44, 49]. An additional difficulty was created by the inclusion of new or additional participants between waves (e.g. "booster samples", new members in the household, previous non-responders or drop-outs, or studies adopting a policy of continuous recruitment) and whether they were included in or excluded from the denominator.
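To make the denominator issue concrete, consider a hypothetical wave t (notation ours, not drawn from any of the reviewed studies). With $n_t$ participants responding at wave t, $e_t$ participants eligible at that wave, and $n_0$ participants at baseline, the two reported quantities are

$$R_t^{\text{wave}} = \frac{n_t}{e_t} \qquad \text{versus} \qquad R_t^{\text{baseline}} = \frac{n_t}{n_0}.$$

Because $e_t \le n_0$ whenever no participants are added between waves, the wave-specific rate is at least as high as the baseline-denominator rate for the same data, so the two are not directly comparable.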

The majority of studies in this review were conducted in the United States, which may limit the generalisability of their findings to studies conducted in other countries, which may have different cultures of research participation or different ethical guidelines. We also attempted to examine whether the time between data collections, or between baseline data collection and the evaluation wave, may have influenced retention. However, due to the small number of studies involved and the heterogeneity between them, the findings were not reliable.

Compositional factors, such as the gender, age and socioeconomic status of participants, and contextual factors, such as the location of the study, the recruitment methods, the tracking methods, or indirect factors such as study loyalty or study publicity, may also have influenced the effect of retention methods. Few studies reported empirical data on these factors in a systematic way that would allow evaluation or aggregation across studies. However, there is some evidence that additional efforts are required to track and retain vulnerable[54] and/or disadvantaged groups[5]. The differential effect of retention methods across population groups therefore requires further systematic review.

The use of a narrative approach for this review rather than meta-analysis was due to the relatively small number of studies and the heterogeneity of their methodologies, which limited our ability to quantify the effects associated with specific retention methods and to carry out meta-analyses. The findings of this review demonstrate the need for studies to rigorously evaluate their retention methods as well as to examine the cost-effectiveness of those methods.

Comparisons of effects of similar methods used in non-cohort studies

In other research, incentives have been shown to have a positive effect on retention rates in postal surveys[55–57] and in other study designs, including in-person and telephone interviews and online studies[58–60], as well as on the recruitment of study members in longitudinal cohort studies, cross-sectional studies and clinical trials[61–64]. In a meta-analysis of methods used in postal studies to increase response and retention rates, the odds of response or retention increased by more than half with monetary versus non-monetary incentives[57]. Our review was inconclusive on cash versus gift incentives, but it did suggest that cash incentives may have a greater effect on retention rates than cheques. There is some support for this in other studies[65–71], although these come from health promotion and health care projects rather than epidemiological research, so their findings may not be transferable to research studies.

The effect of the timing of incentives on retention rates was unclear in our review. However, the meta-analysis by Edwards et al. showed that prepaid incentives increased response more than conditional incentives (OR = 1.61 [1.36, 1.89]) in studies that posted questionnaires[57]. Unconditional incentives were also found to lower attrition in a postal questionnaire survey[70]. In a recent review of studies that used either face-to-face or telephone interviews, conditional incentives did not increase response compared to unconditional incentives (β = 2.82, SE = 1.78, p > 0.05)[59].

Although standardised approaches to evaluating other retention methods were generally lacking in the studies reviewed here, there is support in the literature for a beneficial effect of reminder methods on response and retention rates in trials and in cross-sectional, prospective and non-population-based cohort studies[57, 72–80].

A recent systematic review of retention methods used for in-person follow-up showed that retention rates increased with the number of methods used[81], which is supported by our findings. The relative lack of evaluation of retention strategies in cohort studies is possibly linked to funding constraints as well as to the potential threat to retention of employing control arms. The use of sub-studies[22, 27–30] or pilot studies[36, 40, 42] provided useful insights into how retention can be enhanced through the evaluation of methods without compromising retention rates. Olsen argued that targeted strategies, such as incentives to non-responders from previous waves of a study, are a cost-effective approach to retaining those participants who often drop out of studies[27, 28].

Conclusions

Producing generalisable results is a key objective of cohort studies, to ensure that the benefits of research can be applied to a wider population. Researchers are encouraged to ensure that participants are given the opportunity to take part and are not excluded due to socioeconomic disadvantage. Much has been written on the ethics of incentives[82–84], but there is still a lack of consensus, for example, on whether varying incentive amounts should be offered to different sub-samples in a study. There is little ethical discussion about whether repeated attempts to obtain consent to follow-up are perceived as pressure to participate, or whether research ethics should be adapted to the cultural and socio-economic characteristics of the study population. Due to international differences in the regulation of research, approaches to these issues will invariably vary. There was little mention of them in the studies we reviewed.

The cost of evaluation, and the risk to study loyalty among participants, may explain the small number of studies that evaluated retention strategies or examined their cost-effectiveness. Raising awareness of the need for such studies among researchers and funding bodies is important to ensure the longevity and scientific value of cohort studies in the future.

References

  1. Department of Health: Independent inquiry into inequalities in health. Edited by: Acheson CSD. 1998, London: The Stationary Office

  2. Department of Health: Saving Lives: Our Healthier Nation. 1999, London: The Stationary Office

  3. Patel M, Doku V, Tennakoon L: Challenges in recruitment of research participants. Advances in Psychiatric Treatment. 2003, 9: 229-238. 10.1192/apt.9.3.229.

  4. Marcellus L: Are we missing anything? Pursuing research on attrition. Canadian Journal of Nursing Research. 2004, 36 (3): 82-98.

  5. Scott CK: A replicable model for achieving over 90% follow-up rates in longitudinal studies of substance abusers. Drug & Alcohol Dependence. 2004, 74 (1): 21-36. 10.1016/j.drugalcdep.2003.11.007.

  6. Bergman P, Ahlberg G, Forsell Y, Lundberg I: Non-participation in the second wave of the PART study on mental disorder and its effects on risk estimates. International Journal of Social Psychiatry. 2010, 56 (2): 119-132. 10.1177/0020764008098838.

  7. Schmidt CO, Raspe H, Pfingsten M, Hasenbring M, Basler HD, Eich W, Kohlmann T: Does attrition bias longitudinal population-based studies on back pain?. European Journal of Pain. 2011, 15: 84-91. 10.1016/j.ejpain.2010.05.007.

  8. Thygesen LC, Johansen C, Kelding N, Giovannucci E, Gronbaek M: Effects of sample attrition in a longitudinal study of the association between alcohol intake and all-cause mortality. Addiction. 2008, 103: 1149-1159. 10.1111/j.1360-0443.2008.02241.x.

  9. Martin J, Matheson J: Responses to decline response rates on government surveys. Survey Methodology Bulletin. 1999, 45: 33-37.

  10. Shulruf B, Morton S, Goodyear-Smith F, O'Loughlin C, Dixon R: Designing multidisciplinary longitudinal studies of human development: Analyzing past research to inform methodology. Evaluation & the Health Professions. 2007, 30 (3): 207-228. 10.1177/0163278707304030.

  11. Bower P, King M, Nazareth I, Lampe F, Sibbald B: Patient preferences in randomised controlled trials: Conceptual framework and implications for research. Social Science & Medicine. 2005, 61 (3): 685-695. 10.1016/j.socscimed.2004.12.010.

  12. Mills E, Wilson K, Rachlis B, Griffith L, Wu P, Guyatt G, Cooper C: Barriers to participation in HIV drug trials: A systematic review. Lancet Infectious Diseases. 2006, 6 (1): 32-38. 10.1016/S1473-3099(05)70324-8.

  13. Ross S, Grant A, Counsell C, Gillespie W, Russell I, Prescott R: Barriers to participation in randomised controlled trials: A systematic review. Journal of Clinical Epidemiology. 1999, 52 (12): 1143-1156. 10.1016/S0895-4356(99)00141-9.

  14. Burton J, Laurie H, Lynn P: The long-term effectiveness of refusal conversion procedures on longitudinal surveys. Journal of the Royal Statistical Society Series A - Statistics in Society. 2006, 169 (3): 459-478.

  15. Martin SA, Haren MT, Middleton SM, Wittert GA, Members of the Florey Adelaide Male Ageing Study (FAMAS): The Florey Adelaide Male Ageing Study (FAMAS): Design, procedures & participants. BMC Public Health. 2007, 7: 126-10.1186/1471-2458-7-126.

  16. Hill Z: Reducing attrition in panel studies in developing countries. International Journal of Epidemiology. 2004, 33 (3): 493-498. 10.1093/ije/dyh060.

  17. Seibold-Simpson S, Morrison-Beedy D: Avoiding early study attrition in adolescent girls: Impact of recruitment contextual factors. Western Journal of Nursing Research. 2010, 32 (6): 761-778. 10.1177/0193945909360198.

  18. Ribisl KM, Walton MA, Mowbray CT, Luke DA, Davidson WS, Bootsmiller BJ: Minimizing participant attrition in panel studies through the use of effective retention and tracking strategies: Review and recommendations. Evaluation & Program Planning. 1996, 19 (1): 1-25. 10.1016/0149-7189(95)00037-2.

  19. Szklo M: Population-based cohort studies. Epidemiologic Reviews. 1998, 20 (1): 81-90.

  20. Haring R, Alte D, Volzke H, Sauer S, Wallaschofski H, John U, Schmidt CO: Extended recruitment efforts minimize attrition but not necessarily bias. Journal of Clinical Epidemiology. 2009, 62 (3): 252-260. 10.1016/j.jclinepi.2008.06.010.

  21. Wallace BC, Schmid CH, Lau J, Trikalinos TA: Meta-analyst: Software for meta-analysis of binary, continuous and diagnositic data. BMC Medical Research Methodology. 2009, 9 (1): 80-10.1186/1471-2288-9-80.

  22. Doody MM, Sigurdson AS, Kampa D, Chimes K, Alexander BH, Ron E, Tarone RE, Linet MS: Randomized trial of financial incentives and delivery methods for improving response to a mailed questionnaire. American Journal of Epidemiology. 2003, 157 (7): 643-651. 10.1093/aje/kwg033.

  23. Kalsbeek WD, Joens SE: Cost-effectiveness and advance mailings in a telephone follow-up survey. Proceeding of the Survey Research Methods Section, ASA. 1995, 204-209.

  24. Koo MM, Rohan TE: Types of advance notification in reminder letters and response rates. Epidemiology. 1996, 7 (2): 215-216. 10.1097/00001648-199603000-00025.

  25. Laurie H, Lynn P: The use of respondent incentives on longitudinal surveys. Methodology of Longitudinal Surveys. Edited by: Lynn P. 2009, West Sussex, UK: John Wiley & Sons, Ltd, 205-233.

  26. Laurie H: The effect of increasing financial incentives in a panel survey: An experiment on the British Household Panel Survey, Wave 14. ISER Working Paper 2007-5. 2007, Colchester: University of Essex

  27. Olsen RJ: The problem of respondent attrition: Survey methodology is key. Monthly Labor Review. 2005, 128 (2): 63-70.

  28. Olsen RJ: Predicting respondent cooperation and strategies to reduce attrition. 2008, Department of Economics and Center for Human Resource Research, Ohio State University

  29. U.S. Bureau of Labor Statistics: NLS Handbook, 2005. 2005, Columbus, OH: Center for Human Resource Research

  30. NLS97 Users Guide. [http://www.nlsinfo.org/nlsy97/nlsdocs/nlsy97/97sample/rni.html]

  31. Rimm EB, Stampfer MJ, Colditz GA, Giovannucci E, Willett WC: Effectiveness of various mailing strategies among nonrespondents in a prospective cohort study. American Journal of Epidemiology. 1990, 131 (6): 1068-1071.

  32. Rodgers W: Incentive Size effects in a longitudinal study. 2006, Ann Arbor, MI: Survey Research Center

  33. White E, Carney PA, Kolar AS: Increasing response to mailed questionnaires by including a pencil/pen. American Journal of Epidemiology. 2005, 162 (3): 261-266. 10.1093/aje/kwi194.

  34. Boys A, Marsden J, Stillwell G, Hatchings K, Griffiths P, Farrell M: Minimizing respondent attrition in longitudinal research: Practical implications from a cohort study of adolescent drinking. Journal of Adolescence. 2003, 26 (3): 363-373. 10.1016/S0140-1971(03)00011-3.

  35. Calle EE, Rodriguez C, Jacobs EJ, Almon ML, Chao A, McCullough ML, Felgelson HS, Thun MJ: The American Cancer Society Cancer Prevention Study II Nutrition Cohort: Rationale, study design, and baseline characteristics. Cancer. 2002, 94: 500-511. 10.1002/cncr.10197.

  36. Clarke R, Breeze E, Sherliker P, Shipley M, Youngman L, Fletcher A, Fuhrer R, Leon D, Parish S, Collins R, et al: Design, objectives, and lessons from a pilot 25 year follow up re-survey of survivors in the Whitehall study of London Civil Servants. Journal of Epidemiology & Community Health. 1998, 52: 364-369. 10.1136/jech.52.6.364.

  37. Eagan TML, Eide GE, Gulsvik A, Bakke PS: Nonresponse in a community cohort study: Predictors and consequences for exposure-disease associations. Journal of Clinical Epidemiology. 2002, 55: 775-781. 10.1016/S0895-4356(02)00431-6.

  38. Garcia M, Fernandez E, Schiaffino A, Peris M, Borras JM, Nieto FJ, for the Cornella Health Interview Survey Follow-up (CHIS.FU) Study Group: Phone tracking in a follow-up study. Sozial- und Praventivmedizin. 2005, 50: 63-66. 10.1007/s00038-004-4052-4.

  39. Harding S, Whitrow M, Maynard MJ, Teyhan A: The DASH (Determinants of Adolescent Social well-being and Health) Study, an ethnically diverse cohort. International Journal of Epidemiology. 2007, 36: 512-517. 10.1093/ije/dym094.

  40. Hoffman SC, Burke AE, Helzlsouer KJ, Comstock GW: Controlled trial of the effect of length, incentives, and follow-up techniques on response to a mailed questionnaire. American Journal of Epidemiology. 1998, 148 (10): 1007-1011.

  41. Lissner L, Skoog I, Andersson K, Beckman N, Sundh V, Waern M, Zylberstein DE, Bengtsson C, Bjorkelund C: Participation bias in longitudinal studies: Experience from the Population Study of Women in Gothenburg, Sweden. Scandinavian Journal of Primary Health Care. 2003, 21 (4): 242-247. 10.1080/02813430310003309-1693.

  42. Michaud DS, Midthune D, Hermansen S, Leitzmann M, Harlan LC, Kipnis V, Schatzkin A: Comparison of cancer registry case ascertainment with SEER estimates and self-reporting ina subset of the NIH-AARP Diet and Health Study. Journal of Registry Management. 2005, 32 (2): 70-75.

  43. Mills CA, Pederson LL, Koval JJ, Gushue SM, Aubut JL: Longitudinal tracking and retention in a school-based study on adolescent smoking: Costs, variables, and smoking status. Journal of School Health. 2000, 70 (3): 107-112. 10.1111/j.1746-1561.2000.tb06455.x.

  44. Novo M, Hammarstrom A, Janlert U: Does low willingness to respond introduce bias? Results from a socio-epidemiological study among young men and women. International Journal of Social Welfare. 1999, 8 (2): 155-163. 10.1111/1468-2397.00076.

  45. Rudy EB, Estok PJ, Kerr ME, Menzel L: Research incentives: Money versus gifts. Nursing Research. 1994, 43 (4): 253-255. 10.1097/00006199-199407000-00012.

  46. Russell C, Palmer JR, Adams-Campbell LL, Rosenberg L: Follow-up of a large cohort of Black women. American Journal of Epidemiology. 2001, 154 (9): 845-853. 10.1093/aje/154.9.845.

  47. Tolusso S, Brisebois F: NPHS data quality: Exploring non-sampling errors. 2003, Ottawa: Household Survey Methods Division, Statistics Canada

  48. Ullman JB, Newcomb MD: Eager, reluctant, and nonresponders to a mailed longitudinal survey: Attitudinal and substance use characteristics differentiate respondents. Journal of Applied Social Psychology. 1998, 28 (4): 357-375. 10.1111/j.1559-1816.1998.tb01710.x.

  49. Walker M, Shaper AG, Lennon L, Whincup PH: Twenty year follow-up of a cohort based in general practices in 24 British towns. Journal of Public Health Medicine. 2000, 22 (4): 479-485. 10.1093/pubmed/22.4.479.

  50. Women's Health Australia Research Group: Women's Health Australia. The Australian Longitudinal Study on Women's Health: Report 10. 1998, Newcastle: The University of Newcastle

  51. Women's Health Australia Research Group: Women's Health Australia. The Australian Longitudinal Study on Women's Health: Report 13. 1999, Newcastle: The University of Newcastle

  52. Women's Health Australia Research Group: Women's Health Australia. The Australian Longitudinal Study on Women's Health: Report 15. 2000, Newcastle: The University of Newcastle

  53. Women's Health Australia Research Group: Women's Health Australia. The Australian Longitudinal Study on Women's Health: Report 17. 2001, Newcastle: The University of Newcastle

  54. Kuhns LM, Vazquez R, Ramirez-Valles J: Researching special populations: retention of Latino gay and bisexual men and transgender persons in longitudinal health research. Health Education Research. 2008, 23 (5): 814-825.

  55. Church AH: Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly. 1993, 57: 62-79. 10.1086/269355.

  56. Edwards P, Cooper R, Roberts I, Frost C: Meta-analysis of randomised trials of monetary incentives and response to mailed questionnaires. Journal of Epidemiology and Community Health. 2005, 59: 987-999. 10.1136/jech.2005.034397.

  57. Edwards PI, Roberts I, Clark MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response rates to postal and electronic questionnaires. Cochrane Database of Systematic Reviews. 2009, 3

  58. Alexander GL, Divine GW, Couper MP, McClure JB, Stopponi MA, Fortman KK, Tolsma DD, Strecher VJ, Johnson CC: Effect of incentives and mailing features on online health program enrollment. American Journal of Preventive Medicine. 2008, 34 (5): 382-388. 10.1016/j.amepre.2008.01.028.

  59. Singer E, Van Hoewyk J, Gebler N, Raghunathan T, McGonagle K: The effect of incentives on response rates in interviewer-mediated surveys. Journal of Official Statistics. 1999, 15 (2): 217-230.

  60. Henderson M, Wight D, Nixon C, Hart G: Retaining young people in a longitudinal sexual health survey: a trial of strategies to maintain participation. BMC Medical Research Methodology. 2010, 10: 9-10.1186/1471-2288-10-9.

  61. Beydoun H, Saftlas AF, Harland K, Triche E: Combining conditional and unconditional recruitment incentives could facilitate telephone tracing in surveys of postpartum women. Journal of Clinical Epidemiology. 2006, 59: 732-738. 10.1016/j.jclinepi.2005.11.011.

  62. Martinson BC, Lazovich D, Lando HA, Perry CL, McGovern PG, Boyle RG: Effectiveness of monetary incentives for recruiting adolescents to an intervention trial to reduce smoking. Preventive Medicine. 2000, 31 (6): 706-713. 10.1006/pmed.2000.0762.

  63. Steinhauser KE, Clipp EC, Hays JC, Olsen M, Arnold R, Christakis NA, Lindquist JH, Tulsky JA: Identifying, recruiting, and retaining seriously-ill patients and their caregivers in longitudinal research. Palliative Medicine. 2006, 20 (8): 745-754. 10.1177/0269216306073112.

  64. Mapstone J, Elbourne D, Roberts I: Strategies to improve recruitment to research studies. Cochrane Database of Systematic Reviews. 2007, MR000013-2

  65. James JM, Bolstein R: Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly. 1992, 56 (4): 442-453. 10.1086/269336.

  66. Malotte CK, Hollingshead JR, Rhodes F: Monetary versus nonmonetary incentives for TB skin test reading among drug users. American Journal of Preventive Medicine. 1999, 16 (3): 182-188. 10.1016/S0749-3797(98)00093-2.

  67. Croft JR, Festinger DS, Dugosh KL, Marlowe DB, Rosenwasser BJ: Does size matter? Salience of follow-up payments in drug abuse research. IRB: Ethics & Human Research. 2007, 29 (4): 15-19.

  68. Festinger DS, Marlowe DB, Dugosh KL, Croft JR, Arabia PL: Higher magnitude cash payments improve research follow-up rates without increasing drug use or perceived coercion. Drug & Alcohol Dependence. 2008, 93 (1-2): 128-135.

  69. Ingels SJ, Pratt DJ, Rogers JE, Siegel PH, Stutts ES: Education Longitudinal Study of 2002: Base-year to first follow-up data file documentation (NCES 2006-344). Edited by: Education USDo. 2005, Washingtion, DC: National Center for Education Statistics

  70. Jackle A, Lynn P: Respondent incentives in a multi-mode panel survey: Cumulative effects on non-response and bias. ISER Working Paper 2007-01. 2007, Colchester: University of Essex

  71. Ryu E, Couper MP, Marans RW: Survey incentives: Cash vs. in-kind; Face-to-face vs. mail; Response rate vs. nonresponse error. International Journal of Public Opinion Research. 2006, 18 (1): 89-106. 10.1093/ijpor/edh089.

  72. Hebert R, Bravo G, Korner-Bitensky N, Voyer L: Refusal and information bias associated with postal questionnaires and face-to-face interviews in very elderly subjects. Journal of Clinical Epidemiology. 1996, 49 (3): 373-381. 10.1016/0895-4356(95)00527-7.

  73. Bonfill Cosp X, Castillejo MM, Vila MP, Marti J, Emparanza JI: Strategies for increasing the particpation of women in community breast cancer screening (Review). Cochrane Database of Systematic Reviews. 2001, 1

  74. Hertz-Picciotto I, Trnovec T, Kočan A, Charles MJ, Čiznar P, Langer P, Sovčikova E, James R: PCBs and early childhood development in Slovakia: Study design and background. Fresenius Environmental Bulletin. 2003, 12 (2): 208-214.

  75. Haynes RB: Determinants of compliance: The disease and the mechanics of treatment. Compliance in Health Care. Edited by: Haynes RB, Taylor DW. 1979, Baltimore: Johns Hopkins University Press, 49-62.

  76. Mayer JA, Lewis EC, Slymen DJ, Dullum J, Kurata H, Holbrook A, Elder JP, Williams SJ: Patient reminder letters to promote annual mammograms: A randomized controlled trial. Preventive Medicine. 2000, 31 (4): 315-322. 10.1006/pmed.2000.0718.

  77. Nakash RA, Hutton JL, Jorstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires: A systematic review of randomised trials in health research. BMC Medical Research Methodology. 2006, 6: 5-10.1186/1471-2288-6-5.

  78. Taplin SH, Barlow WE, Ludman E, MacLehos R, Meyer DM, Seger D, Herta D, Chin C, Curry S: Testing reminder and motivational telephone calls to increase screening mammography: A randomized study. Journal of the National Cancer Institute. 2000, 92 (3): 233-242. 10.1093/jnci/92.3.233.

  79. Fuligni AJ, Tseng V, Lam M: Attitudes toward family obligations among American adolescents with Asian, Latin American, and European backgrounds. Child Development. 1999, 70 (4): 1030-1044. 10.1111/1467-8624.00075.

  80. Salim Silva M, Smith WT, Bammer G: Telephone reminders are a cost effective way to improve responses in postal health surveys. Journal of Epidemiology & Community Health. 2002, 56 (2): 115-118. 10.1136/jech.56.2.115.

  81. Robinson KA, Dennison CR, Wayman DM, Pronovost PJ, Needham DM: Systematic review identifies number of strategies important for retaining study participants. Journal of Clinical Epidemiology. 2007, 60 (8): 757-765.

  82. Erlen JA, Sauder RJ, Mellors MP: Incentives in research: Ethical issues. Orthopaedic Nursing. 1999, 18 (2): 84-91.

  83. Grant RW, Sugarman J: Ethics in human subjects research: Do incentives matter?. Journal of Medicine and Philosophy. 2004, 29 (6): 717-738. 10.1080/03605310490883046.

  84. Tishler CL, Bartholomae S: The recruitment of normal health volunteers: A review of the literature on the use of financial incentives. Journal of Clinical Pharmacology. 2002, 42: 365-375. 10.1177/0091270002424001.

Acknowledgements

The authors would like to thank Alastair Leyland, Ian White and the steering group members who provided support and guidance. We would also like to thank Mary Robins and Valerie Hamilton for their assistance with the literature search, and the reviewers for their comments on an earlier version of this paper.

Funding: This research was supported by the MRC Population Health Sciences Research Network (PHSRN) (WBS Code U.1300.33.001.00019.01).

Author information

Corresponding author

Correspondence to Cara L Booker.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

CB was involved in the development of the concept for the paper, oversaw the literature search and conducted the data extraction. She also conducted the analyses, prepared drafts and undertook edits. SH was involved in the development of the concept, direction, drafting and editing of the manuscript. She also rechecked the abstracts for study inclusion. MB was involved in the direction and concept of the paper. She independently conducted data extraction and was involved in drafting and editing the manuscript. All authors have read and approved all versions of the manuscript.

Electronic supplementary material

Additional file 1: An example of the electronic database search for retention/attrition in cohort studies. (DOC 24 KB)

Additional file 2: An extended version of Table 1 with additional information on the evaluation methods and the retention rates associated with those methods. (DOC 89 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Booker, C.L., Harding, S. & Benzeval, M. A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health 11, 249 (2011). https://doi.org/10.1186/1471-2458-11-249
