Paper
An empirical study on the preferred size of the participant information sheet in research
  1. Evangelia E Antoniou1,
  2. Heather Draper2,
  3. Keith Reed1,3,
  4. Amanda Burls4,
  5. Taunton R Southwood5,6,
  6. Maurice P Zeegers1,7

  1. Unit of Urologic and Genetic Epidemiology, Department of Public Health, Epidemiology and Biostatistics, University of Birmingham, Edgbaston, UK
  2. Centre for Biomedical Ethics, Primary Care Clinical Sciences, University of Birmingham, Edgbaston, UK
  3. The Twins and Multiple Births Association (TAMBA), Surrey, UK
  4. Department of Primary Health Care, University of Oxford, Oxford, UK
  5. Institute of Child Health, University of Birmingham, Edgbaston, UK
  6. Birmingham Children's Hospital NHS Foundation Trust, Birmingham, UK
  7. Section of Complex Genetics, Department of Genetics and Cell Biology, NUTRIM School for Nutrition, Toxicology and Metabolism, Maastricht University Medical Centre, Maastricht, The Netherlands

  Correspondence to Evangelia E Antoniou, Department of Public Health, Epidemiology and Biostatistics, University of Birmingham, Edgbaston B15 2TT, UK; exa660@bham.ac.uk

Abstract

Background Informed consent is a requirement for all research. It is not, however, clear how much information is sufficient to make an informed decision about participation in research. In this study, information about an online questionnaire on childhood development was provided through an unfolding electronic participant information sheet offering three levels of detail.

Methods Between July 2008 and November 2009, 552 participants completed the web-based survey and accessed and spent time reading the participant information sheet (PIS); their information behaviour was investigated. The first level contained less information than might be found on a standard PIS, the second level corresponded to a standard PIS, and the third contained more information than a standard PIS. The actual time spent reading the information provided at each of the three incremental levels and the participants' evaluation of that information were calculated.

Results 77% of the participants chose to access the first level of information, whereas 12% accessed the first two levels, 6% accessed all three levels of information and 23% participated without accessing any information. For the most accessed levels of information, the actual time spent was close to the estimated average reading times.

Conclusion The brief information provided in the first level was sufficient for participants to make informed decisions, while a sizeable minority of the participants chose not to access any information at all. This study adds to the debate about how much information is required to make a decision about participation in research, and the results may help inform the future development of information sheets by providing data on participants' actual needs when deciding whether to take part in questionnaire surveys.

  • Electronic participant information sheets
  • ethics committees
  • ethics committees/consultation
  • informed consent
  • research ethics
  • web-based clinical research


Gaining consent is a prerequisite for nearly all health research involving human participants. The main aim of gaining informed consent is to respect and promote participants' autonomy and to protect them from ignorance about potential harm. European directive regulations1 stipulate that participants in clinical trials must be adequately informed about the aims, the method, the expected outcomes and the potential risks associated with study participation. They do not, however, elaborate on what ‘adequately informed’ amounts to in practice. Jefford and Moore2 suggest that informed consent requires the provision of unbiased, up-to-date, relevant information on the consequences of choices, and that the potential participant can freely choose between two or more options (as a minimum, whether or not to enter the study). However, they also do not specify the level of detail a potential participant needs to make a choice between the options offered. Current National Research Ethics Service (NRES) guidance suggests that, when appropriate, the participant information sheet (PIS) should be divided into two parts. The first part should contain brief and clear information on the essential elements of the specific study, such as what the research is about and what participants will have to do. The content of this part should be enough for participants to decide whether they wish to participate in the study. A second part should contain more detailed information, such as about data confidentiality, which patients may wish to have.3

Some studies aimed at improving the readability of the information sheet have concluded that understanding might be improved if the form is easy to read,4–6 and emphasise the need for plain, accessible language,7–9 whereas others suggest that understanding and even recall of the information might be enhanced if sufficient time is allowed for reading.10 11 Short consent forms may also be useful.5 12 13

The extent to which current practice ensures that adequate information is given to potential participants is unclear. Ferguson14 found that most patients who participated in clinical trials did feel adequately informed and that they were capable of understanding most of the information provided. Similarly, Olver et al15 found that out of 100 cancer patients, 68 felt they had been given the right amount of information, 14 felt there was insufficient information, and only five felt they received too much information. We were unable to find any research that empirically and systematically determined the actual amount and type of information people wanted in order to make a decision about participation in research—existing literature only reports on whether, having been provided with a fixed and predetermined amount of information, participants felt informed.

Not everyone is the same, and some people will want more, and some less, information than others. One of the advantages of web-based information is that it can use hypertext markup to make the text interactive and thereby enable users to choose what they want to see and access different levels of information according to their interests and needs. In an online survey-based study on childhood development, we provided the PIS in a structured format so that people could get the amount of information they felt they needed. We judged that this method of presenting information was safe because the project posed little or no risk to the participants: no clinical interventions or tests were involved, and participants were asked only to complete a questionnaire, which could readily be discontinued at any point and the information already recorded discarded. Therefore, the risk that participants might be harmed by entering a study on the basis of too little information was negligible.

This study sets out to explore how people used the information provided in order to inform the future development of information sheets according to participants' actual needs.

Methods

The study reported in this paper investigates what information people sought or wanted in order to decide whether to participate in an online study. This research was embedded within a nationwide population-based study investigating the development of twins during early childhood, conducted by the University of Birmingham,16 in which a parent completed an electronic questionnaire accessed via the study's website. The Ethical Review Committee of the University of Birmingham approved this study.

The population sampling frame for the study consisted of parents with twins aged from birth to 5 years. Participants were recruited through invitation letters sent to members of the Twins and Multiple Births Association and through public advertisements in twin-specific magazines and websites. The final sample for the study (n=552) consisted of parents who completed the twin survey, and therefore had access to the PIS, between July 2008 and November 2009. We tracked what information was accessed, and for how long, using each participant's computer's internet protocol (IP) address. Because this study was embedded within a larger study, demographic information was available, enabling us to compare the information potential participants actually accessed before deciding to participate across various characteristics (see below).

Before completing the main survey participants were directed to a PIS, which offered access to six domains of information in three levels of detail. The domains provided answers to the following questions, which at the time of the study's design were recommended by the UK National Research Ethics Service:3 (1) What is our research about?; (2) Why are we doing this study?; (3) Why have you been invited to take part?; (4) What would we like you to do?; (5) Who will see the information that is collected?; (6) What will happen to the information that is collected?

To access the information the participant had to click a (+) sign next to each question. The first level of information was sufficient to give them a broad understanding of the nature of the project and what would be required of them if they chose to participate. The remaining levels were accessed only by a deliberate decision of the potential participant, who had to click a second and then a third (+) sign. The second level was longer and more detailed than the first and provided the reader with what we estimated to be the level of detail required in a standard NRES PIS. The third level was more detailed still and normally included links to academic articles or other non-lay sources directly related to the study, containing more information than a standard PIS. An example of the second domain with all three folds is presented in appendix 1.
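
For illustration only, the unfolding PIS can be thought of as a mapping from each of the six question domains to three progressively more detailed levels of text, revealed one (+) click at a time. The Python sketch below is a hypothetical representation of that structure; it is not the code that ran on the study website, and the level texts and function names are assumptions.

```python
# Hypothetical representation of the unfolding PIS: six domains, each with three
# progressively more detailed levels revealed by successive (+) clicks.
pis = {
    "What is our research about?": {
        1: "One or two brief sentences giving the gist of the study.",
        2: "Roughly the level of detail expected on a standard NRES PIS.",
        3: "Fuller detail, including links to related academic articles.",
    },
    # ...the remaining five domains follow the same three-level structure.
}


def visible_levels(domain: str, clicks: int) -> list[str]:
    """Return the text levels visible after a given number of (+) clicks (capped at 3)."""
    return [pis[domain][level] for level in range(1, min(clicks, 3) + 1)]


# Example: after two clicks, the first two levels of a domain are visible.
print(visible_levels("What is our research about?", clicks=2))
```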

The readability of the PIS at each level was calculated using the Flesch–Kincaid reading ease score and grade level. The higher the reading ease score and the lower the grade level, the easier a document is to read and understand. Whether existing readability measurements can accurately evaluate the readability of health information is debatable.17 18 Evidence suggests that the Flesch–Kincaid scale is widely used in studies of readability, has excellent repeatability and correlates highly with other established readability scales (r=0.87–0.90).19 20 In addition, Kim and colleagues21 showed that readability scores from four different measurement scales, including the Flesch–Kincaid scale, were similar when compared with a health-specific readability measure that takes into account text unit length alongside semantic and syntactic features of the text.
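
For reference, both Flesch–Kincaid measures are simple functions of word, sentence and syllable counts. The Python sketch below shows the standard published formulas; it is not the software used in the study (which is not specified beyond the scale itself), and the example counts are illustrative, not taken from the PIS text.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch reading ease score; higher values mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level; lower values mean easier text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Illustrative counts only; they are not the study's PIS word counts.
print(round(flesch_reading_ease(words=250, sentences=14, syllables=370), 1))
print(round(flesch_kincaid_grade(words=250, sentences=14, syllables=370), 1))
```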

The information provided in all levels of the PIS had a mean reading ease score of 65.4 and a mean grade level score of 8.8, which indicates that the text was expected to be understood by an average student in the 8th grade (usually around ages 13–14 years in the English educational system).22 The readability statistics are displayed in table 1. We calculated the average time needed to read each domain of the information based on an average adult reading speed of 200 words per minute.23 24

Table 1

Readability statistics for all levels of domains
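
The expected reading times against which the actual times were later compared follow directly from the 200 words per minute assumption. The short Python sketch below illustrates that calculation; the word counts shown are hypothetical, not the actual lengths of the study's levels.

```python
READING_SPEED_WPM = 200  # average adult reading speed assumed in the study


def expected_reading_time_seconds(word_count: int, wpm: int = READING_SPEED_WPM) -> float:
    """Expected time in seconds to read a block of text at the given speed."""
    return word_count / wpm * 60


# Hypothetical word counts for the three levels of a single domain.
for level, words in {"level 1": 21, "level 2": 95, "level 3": 240}.items():
    print(f"{level}: {expected_reading_time_seconds(words):.1f} s expected")
```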

At the end of the questionnaire, participants could opt to complete a further short questionnaire about the information they read in the PIS. Participants were asked to choose all options that applied to them from the following list: (1) I didn't click any of the (+) sign options; (2) I didn't find the information under (+) very useful; (3) I didn't find the information under (+) very interesting; (4) I found the information under (+) interesting but it didn't influence my decision to complete the questionnaire; (5) I would not have completed the questionnaire without being able to read the information under (+); (6) I would have liked more information about the project; (7) I would have liked more information about the questionnaire; (8) I would have liked more information about what you are going to do with the results of your study.

Statistical analysis

The baseline characteristics of the population were recorded. The number of participants who entered each level of the domains, the actual time they spent reading this information, and the number of participants who accessed the PIS were calculated. To explore whether the information accessed differed according to the participants' sex, socioeconomic status, ethnicity or age, or the age of the twins, we calculated the expected mean maximum level of information accessed and the expected mean time spent on every question for each category of these sample characteristics. All analyses were performed using the statistical software package Stata 11.
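
The analyses were performed in Stata 11. Purely as an illustration of the comparison described above, the Python/pandas sketch below computes the mean maximum level accessed and the mean time spent by category of one characteristic, using a small hypothetical data frame; the column names and values are assumptions, not the study's data.

```python
import pandas as pd

# Hypothetical access-tracking data: one row per participant.
df = pd.DataFrame({
    "sex": ["F", "F", "M", "F"],
    "education": ["university", "college", "university", "high school"],
    "max_level_q1": [1, 0, 2, 3],             # maximum level accessed for question 1
    "time_spent_q1": [7.6, 0.0, 21.0, 55.0],  # seconds spent reading question 1
})

# Mean maximum level accessed and mean time spent, by category of one characteristic.
print(df.groupby("education")[["max_level_q1", "time_spent_q1"]].mean())
```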

Results

Of those who completed the survey and spent time reading the PIS, 98% (n=540) were women and 2% (n=12) were men. With regard to educational level, 66% (n=309) of the participants had a university education, 20% (n=93) had a college/professional qualification and 14% (n=64) had a high school or lower education. Of those who participated, 98% (n=488) were white and 2% (n=10) were of other ethnic backgrounds (Asian, black or mixed ethnicity). At the time of the survey, 55% of the participants (n=270) were employed and 45% (n=222) were not employed. With regard to the age of the sample, 8% (n=39) were between the ages of 20 and 30 years, 77% (n=360) were between 31 and 40 years old and 15% (n=68) were between the ages of 41 and 50 years (table 2).

Table 2

Frequency distribution table of the main sample characteristics

Most participants (77%) chose to access the first level of information of each domain. Only 12% accessed the first and second levels and 6% accessed all three levels of the domains. More specifically, 82% of the participants accessed the first level of the question on what participants would have to do, whereas only 11% accessed the first two levels and 7% accessed all three levels of the same question. The first level of the information on what the research was about was accessed by 80% of the participants, whereas 18% accessed the first two levels and 12% accessed all three levels. The rest of the questions followed the same pattern, with the first level being the most accessed (by 70% up to 76% of the participants) and the remaining levels accessed by only a minority (from 3% up to 11% of the participants; table 3).

Table 3

Number/percentage of people who entered/clicked each level for every question

The actual time participants spent on each level of every domain is displayed in table 4, together with the estimated time needed to read the content. Generally, participants spent more time on the second and third levels of information. On average, participants spent the most time on information about why the survey was being done (25 s), on what participants were being asked to do (20 s) and on what the research was about (17 s).

Table 4

Average stay time (in seconds) for each level and estimated average reading time needed per level per question (based on an average adult reading speed of 200 words per minute)

Participants spent approximately 3 s less than the average reading time on the first level of information about ‘what would we like participants to do’, which was the most accessed information. They also spent less than the average reading time on levels 2 and 3 of this question. For the second most accessed information, relating to ‘what our research is about’, the anticipated reading time for the first level was 6.3 s, whereas the participants spent 7.6 s, that is, 1.3 s more than the average reading time. They spent more time reading levels 2 and 3, but still less time than the average person would need to read and comprehend the content. By contrast, participants spent more than the average reading time on the information provided on the first level about ‘why are we doing this research’ (a difference of 9 s).

There was no statistical difference in the pattern of accessing and the time spent on the three levels of information between white and non-white ethnic groups, or by the educational level of the parents or the age of the twins. Participants aged 41–50 years spent more time (p=0.03) reading the question on ‘what is our research about?’ than those in the other two age groups. Women were more likely than men to access at least the first level of information concerning ‘what is our research about?’ (p<0.01) and ‘why are we doing this research?’ (p=0.02). Men were more likely than women to spend more time on the information on ‘why have you been invited to take part?’ (p<0.001).

We also wanted to assess whether participants' perceptions of the quality of the information they read correlated with the actual time spent reading the information provided. The results on how participants perceived the information they read suggested that 34% (n=160) found the information interesting but that it did not influence their decision to complete the questionnaire. Twenty per cent of the participants (n=93) would have liked more information about what we are going to do with the results of the study, even though only 6% clicked through to the third level of information. Seventeen per cent (n=82) said that they would not have completed the questionnaire without being able to read the information under the frequently asked questions, and 15% (n=71) would have liked more information about the project, which again contrasts with the number who actually accessed higher levels of information (see table 3). Six per cent (n=30) would have liked more information about the questionnaire. Four per cent (n=20) said that they did not click any of the (+) sign options (which contrasts with the over 18% who we know did not click on any), whereas 3% (n=16) did not find the information interesting and 1% (n=3) did not find it very useful.

Discussion

As far as we are aware, this is the first empirical study to assess in detail the amount and type of information potential research participants use before they decide to participate in a research study. It recorded how much information was accessed and compared the actual time spent reading it with the average reading times for the same text.

Level 1 information was the most visited, while the information at levels 2 and 3 received much less attention. The few participants who accessed levels 2 and 3 spent little time looking at them, suggesting that the level of detail on a standard PIS is not required by most participants in an online survey.

In the case of the most accessed domains in the PIS (information on what the research was about, what participants would have to do and why the survey was being done), the actual time spent reading and the average reading time for the first and second levels were similar, suggesting that participants did read all the information accessed. When accessing the third level of the same domains, however, participants on average spent only approximately half (for the domains on what is our research about and why are we doing this survey) or approximately one-third (for the domain on what would we like participants to do) of the average reading time. This may have been because, having seen what was included, they found they were not interested in reading more detailed information; alternatively, the level may have been accessed out of curiosity as to what lay behind the ‘fold’.

Even though 20% of participants said that they would have liked more information about the study, only 6% accessed the third level of information and only 17% read the information that might be reproduced on a standard PIS (level 2). In short, even when there was information available it was not always utilised. Moreover, participants did not accurately report the extent to which they had actually accessed the information provided. More striking, perhaps, is the proportion of participants who were willing to take part without accessing any information in one or more domains before looking at the questionnaire. As table 3 indicates, between 28% and 30% (depending on the domain) chose not to access any information, and between 88% and 91% chose not to access information comparable to that provided in a standard PIS (level 2). When asked about the information provided, 34% stated that reading it did not influence their decision to complete the survey. We can speculate that rather than relying on the information provided, they went straight to the questionnaire and then decided on the basis of the kinds of questions being asked and the extent to which they found these intrusive, or on whether they felt that their answers would reveal anything they regarded as private or sensitive. We are unable to tell how many people chose not to participate after looking at either some of the information or the questionnaire itself as we only gathered information from those who chose to participate.

Nonetheless, the proportion of those who chose not to access information, or for whom it is reported not to have influenced decision-making, cannot be ignored, for several reasons. First, it suggests that a significant minority of people did not want or use the information provided when they were actually making a decision about participation in the parent study. This requires further investigation, for example, to determine whether, taken together with the low uptake of information beyond level 1, too much weight is being placed on detailed PISs being available for questionnaire studies more generally. Second, taken together with the results on the reported use of information and the mismatch between the information accessed and the reported need for more information, our results suggest that at least a significant minority of participants actually rely less on the PIS to make a decision than might be inferred from the detailed scrutiny that these documents receive from research ethics committees. Third, the results highlight an ethical question about the responsibilities of researchers using online surveys. Should we have programmed the online system so that potential participants were unable to sign up to participate until they had spent at least the average reading time on all domains under level 2 (that which we regarded as being the standard PIS)? Of course, this would not have guaranteed that the information had been read, but it may be regarded as a safeguard for online studies in general. This, however, may raise many questions about what constitutes autonomous decision-making. The view that individuals can autonomously choose not to receive ‘standard’ information when making decisions is not without support, even in conservative bioethics.23 Unfolding or otherwise interactive electronic information sheets undoubtedly permit potential participants to choose for themselves what information they need to make a decision. Refining them as a means of providing information will mean using them in studies in which the risks may be more significant. Taking seriously the idea that information needs vary from person to person means taking seriously the idea that some individuals may want to know less than we might ourselves, and that the duty to inform might be discharged by making a variety of information available rather than by insisting that everyone reads (or at least appears to have read) a fixed amount of information, with tailoring only coming into play for those whose informational needs exceed this prescribed minimum. The extent to which research ethics committees will be comfortable embracing this as a principle, either in research into participants' actual information needs or when applied to the more general use of tailored information (in which participants can actively choose to know less than may currently be required on a standard PIS), remains to be seen.

One challenge of this study was to determine whether our participants were actually using the information they accessed to inform their decision, given that they could click into a domain and have the browser open without actually reading the material provided. In the case of the first level of information, the comparison with average reading times strongly suggests that the materials were being read. In the case of the two further levels, things are less clear. The time spent on these domains was generally less than the average reading times suggested was required to read them properly. On the other hand, potential participants may value the opportunity to skim read the additional information, either to pick out specific sentences of interest or to satisfy themselves that there was nothing further that concerned them. Accordingly, they may both value access to the information and consider that it does not influence their decision-making. Furthermore, the participants in this study knew that they could read through the questionnaire and then decide not to continue, which is easier in the case of internet-based studies than, for instance, personally administered paper questionnaires, where it might be harder to decide not to continue when the researcher is present. Internet studies are, however, similar in this regard to postal surveys, in which, again, the paper version can be scanned before making a decision about whether or not to complete it.

There are limitations to the generalisability of the results of this study. Our participants were predominantly women (98%), white (98%), well educated (66% were university educated and 20% had college or professional qualifications) and all were under 50 years of age. The main study was an online-only study so our participants probably all had reasonably good computer skills and access to the internet. We were not able to record how many people decided not to participate nor, therefore, what information was accessed in order to make this decision.

Conclusion

Our aim was to examine the amount of information potential participants in the parent study read before they decided to participate, in order to inform discussions about how much information should be contained in a standard PIS. We were able to monitor in an innovative way what information the participants thought they would find most useful at the time a decision was required of them, and how long they spent on each information domain. The time spent was then compared with average reading times to determine the likelihood that participants had actually read all of the information in that domain, and to identify which information was most significant to them based on how long they spent reading it. Level 1, the briefest information and less than we would have anticipated being required for a standard PIS, was the most accessed. Time spent on these areas was similar to the average reading times, suggesting that the information was actually read.

Our results on the participants' pattern of accessing and reading information suggest that the majority of our potential participants sought very little information before making a decision about whether or not to participate in our low-risk, online, questionnaire-based study, and a significant minority felt they needed no information at all.

The NRES guidance for researchers and reviewers3 has raised the concern that information sheets are becoming increasingly lengthy and complex, and may be deterring participation in clinical research. There is little evidence from which to determine how much information participants actually need. A balance needs to be struck between overwhelming potential participants with too much information and giving them insufficient information to make an informed choice. Our study design offered a real possibility for personally tailored information, which may go some way towards addressing this concern and improving participant understanding. It remains to be seen whether this method of tailoring information will be regarded as acceptable in clinical research.

Acknowledgments

The authors would like to thank Ralph Ramah, Chief Executive of the Discount Web Design, and his team for the provision of technical support on the study's website design and maintenance. They would also like to recognise the contribution of Sue Wilson, Professor of Clinical Epidemiology at the University of Birmingham, for her valuable comments on the manuscript and highlight their collaboration with the International Network for Knowledge about Wellbeing (ThinkWell) represented by Dr Amanda Burls.

Appendix 1

  1. What is our research about?

  2. Why are we doing this research?

    • We would like to know whether the development of twins is influenced by the genes they inherit from their parents or by other environmental factors within the family.

    • Around two out of three sets of twins are non-identical (fraternal). In twin studies we assume that identical and non-identical twins do not differ in the way they are treated by their parents, and thus that they share a common family environment. Therefore, by doing a twin study we can work out, by examining the differences between identical and non-identical twins, how much of a difference is caused by genetic factors and how much by environmental ones. In order to work out the genetic and environmental influences on trait variations and gather information on various aspects of your twins' behaviour, we will ask you some questions on how your twins behave, how they think and understand what is happening, and how they react to their surrounding environment. We will also ask you to answer some similar questions about yourself.

    • All the questions we will ask you have to do with the general growth and development of your twins. By analysing your answers to these questions, we will be able to work out whether your twins' development is influenced by the genes they inherit from you or by other environmental factors within the family. Also, in order to estimate these environmental factors, we need background information about the parents of twins: for example, whether they smoke, how much they exercise now and how much they exercised before the twin pregnancy, and their educational background. Specifically, we are interested in finding out how these factors, which describe the family environment, may influence twins' growth (their weight and height) and how they behave and feel. Also, by asking the parents to report which hand and foot they and their twins prefer, we will determine whether the parents' preference for either the left or the right hand is associated with their twins' hand and foot preference. We need this information because we want to find out if there are any differences between children who use their left hand and children who use their right hand. We already know that twins are more likely than singletons to be born with a low birth weight. Babies who are born prematurely or with a low birth weight may experience a greater number of neonatal complications and spend more time in hospital than other babies. Premature babies are more likely to have poor motor skills and poor abilities to adapt to new demanding situations, such as when they first start school or meet new people. Several studies report that a delay in motor development is associated with low birth weight and shorter gestational age. Overall, prematurity and low birth weight are known to significantly affect the way a child develops. If we find out information about this, and information about your twins' development, we will be better able to pinpoint the roles played by genetics and the environment.

  3. Why have you been invited to take part?

  4. What would we like you to do?

  5. Who will see the information that is collected?

  6. What will happen to the information that is collected?

References

Footnotes

  • Competing interests None declared.

  • Ethics approval This study was conducted with the approval of the Ethical Review Committee of the University of Birmingham.

  • Provenance and peer review Not commissioned; externally peer reviewed.
