Original research
Readability and understandability of clinical research patient information leaflets and consent forms in Ireland and the UK: a retrospective quantitative analysis
  1. Lydia O'Sullivan1,2,
  2. Prasanth Sukumar3,
  3. Rachel Crowley3,4,
  4. Eilish McAuliffe5,
  5. Peter Doran2,3
  1. 1School of Medicine & School of Nursing, Midwifery and Health Systems, University College Dublin, Dublin, Ireland
  2. 2Health Research Board - Trials Methodology Research Network, Galway, Ireland
  3. 3School of Medicine, University College Dublin, Dublin, Ireland
  4. 4Department of Endocrinology, Saint Vincent's University Hospital, Dublin, Ireland
  5. 5Centre for Interdisciplinary Research, Education, and Innovation in Health Systems, School of Nursing, Midwifery and Health Systems, University College Dublin, Dublin, Ireland
  1. Correspondence to Lydia O'Sullivan; lydia.osullivan{at}ucd.ie

Abstract

Objectives The first aim of this study was to quantify the difficulty level of clinical research Patient Information Leaflets/Informed Consent Forms (PILs/ICFs) using validated and widely used readability criteria which provide a broad assessment of written communication. The second aim was to compare these findings with best practice guidelines.

Design Retrospective, quantitative analysis of clinical research PILs/ICFs provided by academic institutions, pharmaceutical companies and investigators.

Setting PILs/ICFs which had received Research Ethics Committee approval in the last 5 years were collected from Ireland and the UK.

Intervention Not applicable.

Main outcome measures PILs/ICFs were evaluated against seven validated readability criteria (Flesch Reading Ease, Flesch Kincaid Grade Level, Simplified Measure of Gobbledegook, Gunning Fog, Fry, Raygor and New Dale Chall). The documents were also scored according to two health literacy-based criteria: the Clear Communication Index (CCI) and the Suitability Assessment of Materials tool. Finally, the documents were assessed for compliance with six best practice metrics from literacy agencies.

Results A total of 176 PILs were collected, of which 154 were evaluable. None of the PILs/ICFs had a mean reading age of <12 years, as recommended by the American Medical Association. 7.1% of PILs/ICFs were evaluated as ‘Plain English’, 40.3% as ‘Fairly Difficult’, 51.3% as ‘Difficult’ and 1.3% as ‘Very Difficult’. No PILs/ICFs achieved a CCI >90. Only two documents complied with all six best practice literacy metrics.

Conclusions When assessed against both traditional readability criteria and health literacy-based tools, the PILs/ICFs in this study are inappropriately complex. There is also evidence of poor compliance with guidelines produced by literacy agencies. These data clearly evidence the need for improved documentation to underpin the consent process.

  • clinical trials
  • medical ethics
  • medical law
  • medical education & training
http://creativecommons.org/licenses/by-nc/4.0/

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • This study provides a broad assessment of the complexity level of clinical research Patient Information Leaflets/Informed Consent Forms (PILs/ICFs), including both traditional readability and health literacy-based measures.

  • This study also compares the compliance of academic, pharmaceutical and hospital-based clinical trial sponsors with the Plain English guidelines published by literacy agencies.

  • The purposive and convenience sampling method used to collect the documents in this study may have led to an enthusiasm bias, thus underestimating the complexity level of actual PILs/ICFs, as investigators interested in improving the readability of their trial documents were probably more likely to contribute documents.

Introduction

Best practice, regulations and international standards demand that participants in human research are fully informed when providing consent.1 This is critical to ensuring that participants’ autonomous decision making is respected. Patient Information Leaflets (PILs) and Informed Consent Forms (ICFs) are important components of the informed consent process2 as they enable patients to make informed, autonomous decisions. However, patients often have difficulty understanding the complex concepts contained in clinical research PILs and ICFs.3–5 While it is recommended that written healthcare information is prepared for an age level of 11–12 years,6 it has been shown that many patient-facing documents do not achieve this goal.7 Since the introduction of the European Union (EU) General Data Protection Regulation (GDPR) in May 2018, study sponsors are now required to provide patients with information relating to the storage and processing of personal data, which may create additional challenges when trying to ensure PILs and ICFs are understandable.8

Recognising the importance of these documents being accessible to patients, some studies have assessed the readability of clinical research PILs and ICFs using traditional readability criteria. These metrics include, among others, Flesch Reading Ease (FRE),9 Flesch Kincaid Grade Level (FKGL),10 Simplified Measure of Gobbledegook (SMOG),11 Gunning Fog (GF),12 New Dale Chall (NDC),13 Fry14 and Raygor.15 These criteria were developed for various purposes (child and adult education, healthcare, business writing) and have been extensively validated and used in the healthcare7 16 and health promotion settings17 and also for clinical trial PILs and ICFs.18–22 However, while these criteria provide an objective and reproducible method of evaluating readability, they have been criticised for focusing on syntactic and semantic complexity only.23 24 In contrast, health literacy models contend that a reader’s motivation and their ability to process information, among other factors, are also key components of a patient’s understanding.25 This is an important consideration given that low health literacy is linked to adverse health outcomes.26 27

The Clear Communication Index (CCI) was developed by the Centers for Disease Control and Prevention (CDC)28 to provide a tool, based on health literacy models, for evaluating patient-facing documents. This tool has been tested in the context of public health documents29 30 and shown to enhance understandability. Similarly, the Suitability Assessment of Materials (SAM) criteria31 were designed to take into consideration the role of a document’s layout and presentation in the communication process and have also been used to evaluate public health documents.7 Given the limitations of the traditional readability criteria, some studies have combined them with a tool based on health literacy models. However, to date none of these studies has analysed clinical research PILs/ICFs.30 32 33

There is also international consensus among literacy organisations that typographical features (eg, the use of a Sans Serif typeface, a sufficiently large font size and bullet points) are important in making documents more accessible to a variety of readers, particularly those with low literacy or dyslexia.34 35 However, to the knowledge of the authors, an assessment of the compliance of clinical research PILs/ICFs with these guidelines has not been published to date.

Given the importance of ensuring that patients understand information leaflets and the complexity of measures for their evaluation, herein the readability and understandability of actual patient-facing documents were evaluated using a range of criteria which provide a broad assessment of written content, communication and presentation.

Methods

Document collection

A combination of purposive and convenience sampling was used to obtain the PILs/ICFs. Email contact was made with investigators and clinical research facilities affiliated with hospitals and academic institutions in Ireland through Health Research Board Clinical Research Coordination Ireland, which represented a cross-section of clinical research investigators in Ireland. If no response was received, a follow-up email was sent. Requests for PILs/ICFs were also made via social media (Twitter and LinkedIn) and via the Health Research Board–Trials Methodology Research Network newsletter. In this way, a representation of clinical research activity was achieved. Requested PILs and ICFs included those prepared by academic, pharmaceutical and hospital sponsors, and encompassed both interventional and observational studies. Documents were included in the subsequent analysis if: they included both a PIL and an ICF; the intended audience was lay research participants; the PIL/ICF was written for adults or adolescents; and the PIL/ICF had received research ethics committee (REC) approval within the previous 5 years.

Overall analysis

Documents meeting these criteria were included in the subsequent analysis. To characterise the sample set, documents received were classified according to: Study Type—Investigational Medicinal Product (IMP) study versus non-IMP; Sponsor Type—academic, industry, hospital, collaborative group; Site Type—single-site versus multi-site study; Study Origin—Irish or international origin.

Study data were collected and managed using REDCap (V.7.4.10) tools hosted at University College Dublin.36 37 Statistical analysis was performed using IBM SPSS Statistics for Windows (V.24.0).

A Spearman’s correlation was calculated between the readability and health literacy-based criteria.
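
As an illustration of this step, the sketch below computes a Spearman correlation between a readability score and a health literacy-based score using scipy in Python. The study itself used SPSS, and the paired values shown here are hypothetical.

```python
# Sketch of the Spearman correlation between a readability metric and a
# health literacy-based score (hypothetical paired values, for illustration only).
from scipy.stats import spearmanr

fre_scores = [49.6, 55.2, 42.1, 60.3, 47.8]   # hypothetical FRE per document
cci_scores = [61.0, 70.5, 55.2, 74.9, 58.3]   # hypothetical CCI (%) per document

rho, p_value = spearmanr(fre_scores, cci_scores)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
```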

Readability analysis

Readability was assessed using Oleander Readability Studio software (V.2015). The following validated readability criteria were used: FKGL, FRE, GF, SMOG, NDC, Fry and Raygor Estimate. The traditional readability criteria (FKGL, FRE, GF and SMOG) focus on syntactic complexity, such as the length of sentences and the number of polysyllabic words. SMOG was developed specifically for assessing healthcare materials. The NDC, in contrast, compares the analysed text to a pre-specified list of 3000 words generally understood by 10-year-olds.13 This group of criteria was selected because they are validated, recognised methods of assessing readability. Raygor Estimate and Fry graphs were prepared, and a Pearson correlation coefficient was calculated to determine how the individual readability criteria relate to each other.
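
For reference, the sketch below reproduces the published formulae for four of these criteria (FRE, FKGL, SMOG and Gunning Fog) in Python. It is illustrative only: the study used Oleander Readability Studio, and the naive syllable counter here will not match a validated implementation exactly.

```python
# Minimal sketch of four traditional readability formulae. The Gunning Fog
# 'complex word' rule is approximated by a simple 3+ syllable count, and the
# syllable counter is deliberately crude.
import re
from math import sqrt

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a crude silent-'e' adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    w, s = len(words), max(len(sentences), 1)
    return {
        "FRE": 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w),
        "FKGL": 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59,
        "SMOG": 1.0430 * sqrt(polysyllables * (30 / s)) + 3.1291,
        "GunningFog": 0.4 * ((w / s) + 100 * (polysyllables / w)),
    }

print(readability("You are invited to take part in a research study. "
                  "Participation is entirely voluntary."))
```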

To ensure an accurate analysis, documents were formatted to remove non-narrative text (ie, headings and logos that did not comprise full sentences) prior to importing into the readability software.

A one-way analysis of variance (ANOVA) was used to determine if there were any significant differences in readability between sponsor and study types. Independent sample t-tests were used to determine if there were significant differences in readability for Clinical Trial of Investigational Medicinal Product (CTIMP) versus non-CTIMP studies, single versus multi-site studies and studies originating in Ireland versus those outside of Ireland.

PILs and ICFs were evaluated as whole documents and also by the following sections: Introduction/Background, Aim/Purpose of Study, Risks/Side Effects, Study Procedures and Regulatory (Legal/Insurance/Data Protection/Confidentiality). Dividing the documents into sections and analysing them separately indicated whether a particular section was more challenging to read than the others. The mean and median scores, range and SD for the collective group of PILs and ICFs were calculated for each criterion. A one-way ANOVA was performed to determine if the sections of the PIL/ICF differed in complexity. Comparisons were made between the recommended reading level for patient-facing documents and the readability of the assessed PILs and ICFs. A paired sample t-test was performed to determine if the Regulatory section of the PIL/ICF was more difficult to read than the PIL/ICF as a whole.
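
The section-level comparisons can be illustrated as follows. The study used SPSS and the scores below are hypothetical, but the tests correspond to the one-way ANOVA across sections and the paired t-test of the Regulatory section against the full document.

```python
# Sketch of the section-level comparisons using scipy instead of SPSS (which the
# study actually used). 'scores' maps each PIL/ICF section to hypothetical FRE values.
from scipy import stats

scores = {
    "Introduction": [52.1, 48.3, 55.0, 47.9],
    "Risks": [49.8, 45.2, 50.1, 44.7],
    "Regulatory": [38.4, 35.9, 40.2, 36.8],
}

# One-way ANOVA: do the sections differ in readability?
f_stat, p_anova = stats.f_oneway(*scores.values())

# Paired t-test: Regulatory section vs the full document, matched per PIL/ICF.
full_document = [47.5, 44.0, 49.3, 43.8]   # hypothetical whole-document FRE scores
t_stat, p_paired = stats.ttest_rel(scores["Regulatory"], full_document)

print(f"ANOVA across sections: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"Regulatory vs full PIL/ICF: t={t_stat:.2f}, p={p_paired:.3f}")
```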

CCI and SAM analysis

Each PIL and ICF was assessed as a single document against the CCI and SAM criteria by a single researcher (LOS), and an overall score was calculated. Due to the subjective nature of the assessment process for these criteria, a random 10% sample of the PILs and ICFs was selected by simple random sampling, using random numbers generated in Microsoft Excel, and independently scored by a second reviewer (PS). The inter-assessor variability was measured using an intraclass correlation coefficient.
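
A sketch of this double-scoring step is shown below. The study generated its random numbers in Microsoft Excel and performed its statistical analysis in SPSS; the pingouin call and the scores used here are illustrative assumptions, not the authors' workflow.

```python
# Sketch of the double-scoring step: draw a 10% simple random sample of documents
# for a second reviewer, then estimate inter-rater agreement with an intraclass
# correlation. All scores are fabricated for illustration.
import random
import pandas as pd
import pingouin as pg

doc_ids = [f"PIL_{i:03d}" for i in range(1, 155)]              # 154 evaluable documents
subsample = random.sample(doc_ids, k=round(len(doc_ids) * 0.10))

# Hypothetical SAM percentage scores from the two reviewers for the subsample.
los_scores = [round(random.uniform(40, 90), 1) for _ in subsample]
ps_scores = [round(s + random.uniform(-5, 5), 1) for s in los_scores]  # second rating

ratings = pd.DataFrame({
    "document": subsample * 2,
    "rater": ["LOS"] * len(subsample) + ["PS"] * len(subsample),
    "score": los_scores + ps_scores,
})

icc = pg.intraclass_corr(data=ratings, targets="document", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```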

The SAM incorporates six factors: content, literacy demand, graphics, layout and typography, learning stimulation and motivation, and cultural appropriateness. A percentage score was determined by dividing the total score assigned to the document (as described by Doak 1996) by the total possible score. As not every factor applied to every PIL and ICF, the denominator varied. The percentage scores were then divided into categories: 0%–39% (Inadequate), 40%–69% (Adequate) and 70%–100% (Superior).31

The CCI consists of four parts. Part A, the core component, focuses on the use of visuals and layout considerations, the use of the active voice and whether the main message is clearly portrayed at the beginning of the materials. Part B considers behavioural recommendations, but as this section was not applicable to clinical research documents, it was omitted. Part C assesses the use of numbers, including whether the reader can understand the way in which numbers are presented. Part D evaluates risks, including whether both risks and benefits are explained in an understandable format. A percentage score was determined by dividing the total score assigned to the document by the total possible score, as described by Alpert.29 Percentages were compared with the 90% advised by the CDC to ensure clear communication.28
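
The percentage scoring and banding for both health literacy-based tools can be summarised in a short sketch. The point totals below are hypothetical, and the item-level judgements themselves remain manual.

```python
# Sketch of the percentage scoring and banding used for the SAM and CCI, following
# the descriptions above. The item-level scoring is a manual, criteria-based
# judgement; the point totals below are hypothetical.
def percentage_score(awarded: float, possible: float) -> float:
    """Total points awarded divided by the total points applicable to the document."""
    return 100 * awarded / possible

def sam_category(pct: float) -> str:
    """SAM bands: 0-39% Inadequate, 40-69% Adequate, 70-100% Superior."""
    if pct >= 70:
        return "Superior"
    if pct >= 40:
        return "Adequate"
    return "Inadequate"

def meets_cci_threshold(pct: float) -> bool:
    """The CDC advises a CCI score of at least 90% for clear communication."""
    return pct >= 90

sam_pct = percentage_score(33, 44)   # eg, 33 points awarded of 44 applicable
cci_pct = percentage_score(14, 17)   # eg, 14 points awarded of 17 applicable
print(f"SAM {sam_pct:.0f}% ({sam_category(sam_pct)}); "
      f"CCI {cci_pct:.0f}%, meets CDC threshold: {meets_cci_threshold(cci_pct)}")
```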

Comparison of pre-GDPR and post-GDPR PILs/ICFs

To investigate the impact of additional regulations on document accessibility, all PILs/ICFs originating in Ireland were divided into pre-GDPR or post-GDPR. An independent sample t-test compared the mean readability level, CCI, SAM and mean reading age between the two categories.

Seven matched PILs/ICFs for the same trial were available in both a pre-GDPR and post-GDPR format. A paired sample t-test was used to compare the readability level, CCI, SAM and mean reading age.
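
A sketch of these two comparisons, using scipy rather than SPSS and entirely hypothetical reading-age values, is shown below.

```python
# Sketch of the pre-GDPR vs post-GDPR comparisons with scipy (the study used SPSS).
# The reading-age values below are hypothetical, for illustration only.
from scipy import stats

pre_gdpr_age = [15.4, 16.0, 15.8, 16.3, 15.1, 16.5]    # unmatched documents
post_gdpr_age = [16.2, 16.9, 17.1, 16.4, 17.3]

# Independent samples t-test for the two unmatched groups of Irish PILs/ICFs.
t_ind, p_ind = stats.ttest_ind(pre_gdpr_age, post_gdpr_age)

# Paired t-test for the seven trials with matched pre- and post-GDPR versions.
matched_pre = [15.2, 15.9, 16.1, 15.5, 16.0, 15.7, 16.4]
matched_post = [16.0, 16.6, 16.9, 16.1, 16.8, 16.3, 17.0]
t_rel, p_rel = stats.ttest_rel(matched_pre, matched_post)

print(f"Unmatched: t={t_ind:.2f}, p={p_ind:.3f}; matched pairs: t={t_rel:.2f}, p={p_rel:.3f}")
```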

Comparison with plain English UK/Irish National Adult Literacy Agency guidelines

Each PIL and ICF was assessed against the following recommendations of the Plain English UK and Irish National Adult Literacy Agency (NALA) guidelines: mean sentence length <20 words; percentage of passive verbs <10%; use of a Sans Serif font; use of a font size of at least 12 point; use of headings consisting of upper and lower case letters; and use of 1.5 line spacing. Finally, the use of justified text, where spaces are added between words so that each line of text is the same length,38 was assessed; the addition of these spaces makes distinguishing words more difficult for readers with dyslexia.

These metrics provided a quantitative assessment of whether the PILs/ICFs complied with Plain English UK and the Irish NALA guidelines.
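
The two text-based checks (sentence length and passive voice) can be automated in a rough way, as sketched below. The typographical checks were made by inspecting the documents and are represented only as boolean inputs; the passive-voice heuristic is an assumption, not the method used by the readability software.

```python
# Sketch of the Plain English compliance checks. Only mean sentence length and a
# crude passive-voice share are computed from text; font, point size, heading case,
# line spacing and justification come from manual inspection of the document layout.
import re

def mean_sentence_length(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / max(len(sentences), 1)

def rough_passive_share(text: str) -> float:
    """Very crude heuristic: a 'be'-verb followed by a word ending in -ed/-en."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    passive = sum(bool(re.search(r"\b(is|are|was|were|be|been|being)\s+\w+(ed|en)\b", s))
                  for s in sentences)
    return 100 * passive / max(len(sentences), 1)

def complies(text: str, sans_serif: bool, point_size_ge_12: bool,
             mixed_case_headings: bool, line_spacing_1_5: bool,
             not_justified: bool) -> bool:
    return (mean_sentence_length(text) < 20
            and rough_passive_share(text) < 10
            and sans_serif and point_size_ge_12
            and mixed_case_headings and line_spacing_1_5 and not_justified)
```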

All data generated from these assessments were entered onto the REDCap database for subsequent analysis.

Patient and public involvement

Patients were not directly involved in this study.

Results

Overall results

A total of 179 PILs/ICFs were received from the Republic of Ireland and the UK. Of these, 154 were deemed evaluable. Of the 25 excluded documents, 2 PILs/ICFs were for healthy volunteers; 5 were written for paediatric patients; 3 were duplicates; 5 were not written for lay persons and 10 ICFs were not available to accompany the PILs.

The study and sponsor types, as well as the nature of the sites represented by these documents, were recorded and are shown in table 1. Whether the documents were created before or after the introduction of GDPR was also recorded. The area of research to which the documents were related was also recorded, and a good distribution of disease areas was represented by the documents received.

Table 1

Breakdown of study characteristics

Readability analysis

Overall readability scores: full PIL/ICF

The readability of the total cohort of documents was first assessed using the previously discussed scales. The mean FRE was 49.6 (recommended score for Plain English is 60–70), while the mean reading age was 16.1 years (recommended age is 11–12 years). The mean grade levels (per the American grade school system, where the recommended grade level is 6th grade) were FKGL: 11.3, GF: 12.1, SMOG: 13.0, Raygor: 10.5, Fry: 12.3, NDC: 10.7. The mean, median, range and SD for each metric, and the mean reading age (as assessed by all the metrics) are listed in table 2.

Table 2

Mean, median, range and SD per readability criteria for full PIL/ICF

Flesch Reading Ease

Per the FRE criteria, 7.1% of the total cohort of PILs/ICFs were evaluated as ‘Plain English’, 40.3% were deemed ‘Fairly Difficult’, 51.3% ‘Difficult’ and 1.3% ‘Very Difficult’, as shown in figure 1.

Figure 1

Flesch Reading Ease graph illustrating the language difficulty level of the Patient Information Leaflets/Informed Consent Forms in this study.

Fry criteria

Using the Fry criteria, and as shown in figure 2, all the PILs/ICFs were above the 6th grade level (recommended by the American Medical Association and the National Institutes of Health).

Figure 2

Fry graph illustrating the grade level distribution of Patient Information Leaflets/Informed Consent Forms in this study.

Reading age

These aggregate data show that all the PILs/ICFs had a mean reading age above the 11–12 years recommended by the American Medical Association and the National Institutes of Health.

Comparison of readability per study and sponsor type

No significant difference in readability (FRE, FKGL, GF, SMOG, NDC, Fry, Raygor) or reading age was detected between sponsor types (academic, hospital-based, collaborative group or industry sponsors), single versus multi-site studies, origin of study (Ireland vs outside of Ireland) or CTIMP versus non-CTIMP. A significant difference in FRE (p=0.001), FKGL (p=0.024), SMOG (p=0.020), Raygor (p=0.001), Fry (p=0.005) and mean reading age (p=0.001) between interventional and non-interventional (observational and translational) studies was identified, showing that non-interventional studies contain more complex language compared with interventional studies.

Readability of the different sections of the PIL/ICF

As mentioned in the Methods section, the PIL/ICF was divided by section (as per table 3) to assess whether particular sections were more complex. The Regulatory section was more complex compared with the other sections as per the FRE (p<0.001), FKGL (p=0.002), SMOG (p=0.001), Raygor (p<0.001), Fry (p<0.001) and NDC (p<0.001) criteria (see table 3).

Table 3

Mean reading ease or grade level±SD

The Regulatory section was also compared with the scores for the full PIL/ICF and was significantly higher in complexity (ie, significantly lower FRE and higher grade level; all p values <0.001) (see table 4).

Table 4

Mean, median, range and SD per readability criteria for full PIL/ICF and Regulatory section

CCI and SAM analysis

In order to gain a broad assessment of the understandability of the documents, two criteria based on health-literacy models (the CCI and SAM criteria) were applied to each PIL/ICF.

Recognising the potential for observer variability, we first sought to determine concordance between two independent assessors. The inter-assessor variability, measured using an intraclass correlation coefficient, was determined to be ‘good’ for both criteria: 0.73 for the SAM and 0.75 for the CCI.

On analysis, none of the PILs/ICFs had a CCI score of >90%, as recommended by the CDC. According to the SAM criteria, 29.2% of documents were ‘Superior’, 70.8% were ‘Adequate’ and none were ‘Inadequate’. Table 5 shows the mean±SD, median and range for both criteria.

Table 5

Mean, median, range and SD per CCI and SAM criteria for full PIL/ICF

No significant difference in CCI or SAM was detected between sponsor types (p=0.261 for CCI and p=0.093 for SAM) or by origin of study (p=0.71 for SAM and p=0.15 for CCI). SAM and CCI scores were significantly higher for single-site studies compared with multi-site studies (p=0.003 for CCI and p=0.021 for SAM). CCI and SAM scores were significantly higher for non-CTIMPs compared with CTIMPs (p=0.02 for CCI and p<0.001 for SAM). No difference was found between interventional and non-interventional (observational and translational) studies for either CCI (p=0.76) or SAM (p=0.12).

The majority (91.7%) of documents did not use an illustration or graphic to explain or support the main message. Of the remainder, 1.5%, 2.3% and 4.5% of PILs/ICFs used an illustration that was considered ‘not suitable’, ‘adequate’ and ‘superior’, respectively.

Correlation with traditional readability scores

A strong positive correlation was observed between the traditional readability criteria (FRE, FKGL, SMOG, GF, Raygor, Fry and NDC) scores (coefficients ranged from 0.693 to 0.975). A moderate correlation was observed between the traditional readability criteria and the SAM (coefficients ranged from 0.433 to 0.548). A weak correlation was observed between the traditional readability criteria and the CCI (coefficients ranged from 0.251 to 0.367). A moderate correlation was observed between the CCI and the SAM (0.348).

Comparison of pre-GDPR and post-GDPR PILs/ICFs

To assess the impact of the EU GDPR, Irish PILs/ICFs were categorised as pre-GDPR (96 documents) or post-GDPR (37 documents) and their complexity levels were compared. The complexity level according to all readability criteria and the mean sentence length were higher for the post-GDPR PILs/ICFs. An independent samples t-test showed a significant difference in FKGL (p=0.034), SMOG (p=0.040) and Fry (p=0.019).

Seven matched PILs/ICFs for the same trial were available in both a pre-GDPR and post-GDPR format. The complexity level using all readability criteria, mean sentence length, percentage passive sentences and CCI was higher for the post-GDPR PIL/ICFs. A paired samples t-test showed a significant difference in FRE (p=0.021), GF (p=0.015), NDC (p=0.010), Fry (p=0.014), mean reading age (p=0.038) and CCI (p=0.016).

Plain English UK/Irish NALA guidelines

In order to determine if PILs/ICFs complied with literacy agency guidelines, the documents were assessed against six recommendations. The proportion of PILs/ICFs with a mean sentence length of more than the recommended 20 words was 35.3%. In addition, 39.8% of PILs/ICFs did not use a Sans Serif font, 43.6% of PILs/ICFs did not use a point size of 12 or more, 36.1% of PILs/ICFs used headings with all capital letters and 88% of PILs/ICFs did not use 1.5 line spacing. Just under half (47%) of PILs/ICFs used justified text—that is, the spacing between words was adjusted so that each line of text is the same length.38 Only two PILs/ICFs complied with all the Plain English guidelines.

Discussion

True informed consent depends on autonomy, capacity and disclosure of relevant information.39 40 The WHO’s guidelines for research in humans also state that this information should be understandable to participants.41 In this study we have demonstrated that the majority of patient-facing documents used to enable the consent process in clinical research are complicated, difficult to read and do not meet recommended guidelines. These findings lend weight to the concept that informed consent for clinical research is undermined by poor quality documentation. The implication is that the consent provided is not valid, and that potential participants with health literacy challenges are excluded. The finding that PILs/ICFs have become more complex following the introduction of the GDPR is ironic: a regulation intended to enhance the public’s autonomy may in fact reduce it.

Previous investigations have shown that healthcare providers cannot accurately identify individuals with low health literacy.42 43 Public health organisations, such as the Agency for Healthcare Research and Quality, therefore recommend that the universal precautions principle should be applied, that is, that all patient-facing documents are prepared and presented at an accessible level.44 The analysis of the syntactic and semantic complexity of PILs/ICFs reported herein agrees with many previous studies in different countries21 45 46: the reading age and grade level of clinical research PILs/ICFs are inappropriately high. Only a single, older study concluded that the readability level of clinical research patient-facing documents was appropriate.20

In efforts to enhance document quality, automated readability measures have the advantages of being consistently reproducible, affordable and time efficient.47 As demonstrated herein, they allow large scale interrogation of patient-facing documents and the development of approaches to improving these. These measures also allow an individual researcher to assess the suitability of their materials prior to submission for REC review. Despite this, there remains some debate as to whether simply reducing the syntactic and semantic complexity of a PIL/ICF improves patient comprehension. The START trial, a non-inferiority cluster-randomised study comparing a standard-length PIL/ICF with an abbreviated one, found no improvement in patient satisfaction or understanding.48 An older study similarly suggested that reduction in document length alone does not improve understanding.49 Furthermore, another study showed that improving the readability level, as measured by FKGL, does not enhance comprehension, although the authors still felt that the Microsoft Word version of the FKGL is a useful tool for the initial assessment of a PIL.50 In contrast, a study promoting colorectal screening, which randomised patients to receive either a standard information leaflet or a leaflet with a lower FKGL that also added comparative tables and emphasised key information, found that recognition of information improved with the adjusted text.51 Peterson improved readability according to automated readability scores but found that most assessors preferred a document which had been edited graphically only.52 These results seem to indicate that understanding of written information is multi-faceted and that typographical and layout considerations are also important. The results of our study, indicating a lack of compliance with best practice guidelines in typography and layout, suggest that investigators, sponsors and ethics committees should consider these factors when designing PIL/ICF templates.

Health literacy plays a key role in health outcomes.26 For this reason, health literacy models have been used to improve public health patient-facing documents. Public health and clinical research documents share the common goal of clear and effective communication to support informed decision making. Hochhauser contends that writers should never rely on a single readability formula to assess the suitability of a PIL/ICF,47 so given the limitations of reviewing syntactic and semantic complexity alone, the traditional readability formulae should perhaps be combined with those based on health literacy models. This approach has been shown to be feasible: Saeed successfully used a combination of syntactic and semantic measures (FRE and FKGL) and the CCI to evaluate the readability and accessibility of web-based information for individuals with meningioma.30 Similarly, Hoffman and colleagues used SMOG and SAM to assess information for patients who had a stroke.33 Wallace et al32 used a combination of the DISCERN instrument and the SAM to evaluate internet information for individuals with osteoporosis. The results of our study show deficits in clear communication, with no PILs/ICFs receiving a CCI score of >90. An illustration or graphic to explain or support the main concept of the document is recommended by both criteria; however, it is noteworthy that 91.7% of documents in this study used neither. We propose that readable and understandable documents are a vital component of the overall consent process. The results of our study show that research PILs/ICFs can be assessed not only by traditional, automated readability measures, but also by health literacy-based criteria, such as the CCI, which has been shown to improve the clarity of public health documents.28

The results described herein indicate no significant difference in readability, as measured by traditional readability criteria, between PILs/ICFs prepared by academic or hospital-based sponsors and those prepared for studies sponsored by pharmaceutical companies. However, there was a significant difference between interventional and non-interventional studies. Two studies of adult and paediatric trial PILs found that industry-sponsored PILs were more readable and more likely to contain an illustration,45 53 although they were also longer. Interestingly, a study conducted in the United Arab Emirates found that interventional studies were significantly more readable than observational studies.46 Also, industry-sponsored interventional studies were significantly more readable than non-industry-sponsored interventional studies. De la Moira-Molina et al’s study of industry-sponsored studies showed no difference between disease areas or companies.54 Mader’s study of 94 emergency medicine PILs/ICFs found that increasing risk to patients was positively correlated with the complexity level of the document.55

It is of concern that only 2 of the 133 PILs/ICFs assessed complied with all seven of the Plain English criteria examined in this study. Chubaty analysed 388 healthcare information leaflets used in routine clinical care for older persons and found that only one-third used a font size of at least 12 point and only 18.6% used at least 1.5 line spacing.56 Writing in simple language while still retaining the breadth and meaning of complex information is challenging. However, changing to a Sans Serif font, removing all-capital headings and not justifying text are achievable goals, and may improve accessibility for readers. A PIL/ICF template incorporating the above guidelines could also be used at an institutional level to ensure compliance.

Some researchers have sought to improve the readability of their documents by employing the services of professionals or members of the public. Bjørn randomised 235 individuals to receive either an original PIL/ICF prepared by a pharmaceutical company or a PIL/ICF for the same study which had been simplified by a professional linguistics service.57 Both participants’ perception of their understanding and their actual cognitive understanding improved with the amended document, but it is unlikely that individual researchers would have sufficient resources to engage a professional service for every PIL/ICF. Knapp and colleagues have similarly done a considerable amount of work on user testing of PILs/ICFs, including an analysis of the Theralizumab (TGN1412) trial.58–60 While engagement with the public is undoubtedly crucial, it is important that lay reviewers of PILs/ICFs are carefully selected so that they are representative of, or cognisant of, the target reading level. To ensure that accessibility is maximised, it may also be important not to rely solely on user testing, but to combine it with adherence to the Plain English guidelines. This study included a broad assessment of clinical research PILs/ICFs, incorporating measures of syntactic and semantic complexity, health literacy-based criteria and recommendations from national literacy agencies. This provides a framework for further improvement of clinical research PILs/ICFs.

Methodological limitations of this study

The purposive and convenience sampling method used to collect the documents in this study may have led to an enthusiasm bias. Therefore, complexity level of actual PILs/ICFs may have been underestimated, as investigators interested in improving the readability of their trial documents were probably more likely to contribute documents.

Conclusions

The clinical research PILs/ICFs in this study are not readable, whether assessed by traditional measures of syntactic and semantic complexity or by health literacy-based models. The majority of PILs/ICFs in this study also do not comply with the Plain English guidelines recommended by national literacy agencies. Preparing clinical research PILs/ICFs which meet participants’ information needs, satisfy regulatory requirements and yet are understandable to those with low health literacy is challenging. It is recommended that documents are assessed using a range of tools, and that consideration is given to health literacy-based criteria and best practice guidelines for patient-facing documents.

Acknowledgments

The research team is grateful to the Health Research Board–Trials Methodology Research Network and the UCD Clinical Research Centre for their contributions to this study. Thanks also to all of the investigators and sponsors who provided documents for review.

References

Footnotes

  • Twitter @LydiaOSullivan9

  • Contributors LOS contributed to the study design, carried out data collection and analysis, and wrote the manuscript. PS contributed to the data analysis and review of the manuscript. RC and EMcA participated in the study design and review of the manuscript. PD led the research team, developed the concept, and contributed to the design of the research and review of the manuscript.

  • Funding This work was supported by the Health Research Board Trials Methodology Research Network (HRB-TMRM) as part of the HRB-TMRN-2017–1 grant.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval An ethics exemption was obtained from the affiliated university Ethics Committee in Dublin, Ireland, as no personal data were processed or analysed as part of this study.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request.