Article Text

Original research
Qualitative study measuring the usability of the International Cardiac Rehabilitation Registry
  1. Hana J Abukhadijah1,
  2. Karam I Turk-Adawi2,
  3. Nora Dewart3,
  4. Sherry L Grace4,5
  1. 1Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
  2. 2Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
  3. 3Global Health, McMaster University, Hamilton, Ontario, Canada
  4. 4School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada
  5. 5KITE Research Institute—Toronto Rehabilitation Institute & Peter Munk Cardiac Centre, University Health Network, University of Toronto, Toronto, Ontario, Canada
  1. Correspondence to Dr Sherry L Grace; sgrace{at}yorku.ca

Abstract

Objective Cardiac rehabilitation (CR) is a comprehensive model of secondary preventive care. Its implementation characteristics vary widely around the globe, and hence quality control is paramount. Accordingly, the International Council of Cardiovascular Prevention and Rehabilitation was prompted to develop a CR registry. The purpose of this study was to test the perceived usability of the International Cardiac Rehabilitation Registry (ICRR) in order to optimise it.

Design This was a qualitative study, comprising virtual usability tests using a think-aloud method to elicit feedback on the ICRR, while end-users were entering patient data, followed by semistructured interviews.

Setting Ultimately, 12 tests were conducted before saturation was achieved, with CR staff (67% female) from a variety of disciplines working in low-resource settings in all regions of the world except Europe.

Primary outcome measure Participants completed the System Usability Scale. Interviews were transcribed verbatim, with edits only to preserve anonymity, and coded independently by two researchers using NVivo. The Unified Theory of Acceptance and Use of Technology 2 informed the analysis.

Results The ICRR was established as easy to use, relevant and efficient, with good learnability and operability, perceived usefulness, positive perceptions of output quality and high end-user satisfaction. The mean System Usability Scale score was 83.75, considered ‘excellent’ and corresponding to a grade of ‘A’. Four major themes were deduced from the interviews: (1) ease of approvals, adoption and implementation; (2) benefits for programmes; (3) variables and their definitions; and (4) patient report and follow-up assessment. Based on participant observation and utterances, suggestions for changes to the ICRR were implemented, including to the programme survey, on-boarding processes, navigational instructions, inclusion of programme logos, direction on handling unavailable data and optimising data completeness, as well as policies for authorship and programme certification.

Conclusions With usability of the ICRR optimised, pilot testing will follow.

  • Cardiology
  • Quality in health care
  • Health services administration & management


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • Usability testing comprised multiple approaches: think-aloud method, semistructured interviews and quantitative usability ratings.

  • Interview transcripts were coded independently by two raters, based on the Unified Theory of Acceptance and Use of Technology 2.

  • Representative generalisability is not established through qualitative research, so while purposive sampling was used and saturation was achieved, applicability to all low-resource settings to which the registry is targeted cannot be known.

  • While best practices were applied, social desirability bias may have impacted findings.

Introduction

Cardiovascular diseases (CVDs) continue to be the leading cause of death and a major contributor to disease burden globally,1 with rising incidence in low-income and middle-income countries (LMICs).2 According to the 2019 Global Burden of Disease Study, the prevalence of CVD nearly doubled from 271 million in 1990 to 523 million in 2019. This is mainly due to advancements in screening and associated risk factor control as well as acute treatments, such that most cardiac patients survive the initial diagnosis but then live with CVD chronically, at increased risk of mortality and further morbidity.3

Cardiac rehabilitation (CR) is a comprehensive model of secondary preventive care to mitigate this burden.4–7 Core components of risk factor assessment and control, structured exercise, patient education as well as psychosocial counselling are delivered by a multidisciplinary team. Participation in CR reduces cardiovascular mortality and hospitalisation by 20%,8 and improves quality of life.9 Benefits are also robustly established in LMICs.10 However, CR programmes in low-resource settings are less comprehensive, and more often delivered in privately funded centres, which can be of lower quality than publicly funded ones.11 There is grossly insufficient CR capacity,11 12 particularly in LMICs. Thus, groups such as the International Council of Cardiovascular Prevention and Rehabilitation (ICCPR) are leading the way in supporting development of new, high-quality programmes.

Clinical registries can support care quality.13–15 Unfortunately, however, there are few CR registries, with a recent systematic review identifying only seven worldwide,16 most of which are not applicable to LMICs. Therefore, the ICCPR recently developed one specific to low-resource settings.17 The purpose of this study was to test the perceived usability of the International Cardiac Rehabilitation Registry (ICRR) in such settings,18 to ensure applicability and optimal utility, and hence ultimate uptake to achieve its goals (https://globalcardiacrehab.com/ICRR-Governance).

Methods

Details on the development of the ICRR and the data dictionary are reported elsewhere.17 The process has been user-centred and iterative, and the current usability test is the next stage of preplanned evaluation.18

In brief, interested programmes complete a programme survey detailing structural aspects of their programme, such as the number of prescribed sessions and their duration, on which registry post-test assessment timing is based. The ICRR collects 12 program-reported and 17 patient-reported variables (or, depending on local context and ethics approvals, patients report these to CR staff, who enter them), assessed pre and post CR (including in those who do not complete the programme); some variables are assessed at both timepoints. There is also an annual assessment, which was not assessed during the usability test but was discussed during the subsequent interviews. Definitions of most variables are built into the ICRR screens with information bubbles (‘i’; figure 1) or hover features (https://globalcardiacrehab.com/ICRR-Variables-&-Data-Dictionary).

Figure 1

Screenshot of International Cardiac Rehabilitation Registry (ICRR) Patient Data Entry Interface. Note: dummy patient shown in demo registry. CR, cardiac rehabilitation.

The registry governance structure includes a user subcommittee tasked with on-boarding programmes and supporting them in entering quality data as well as in programme quality improvement activities (https://globalcardiacrehab.com/ICRR-Governance). The latter is supported through two outcome dashboards (six variables each), on which programme performance is compared over time and to other programmes. The ICRR also generates an optional lay summary that participating patients can be provided post programme to support them in optimising self-management long term (see template here: https://globalcardiacrehab.com/ICRR-for-Patients).

Design and procedure

This was a qualitative study to explore users’ experience in depth, to elicit shortcomings of the ICRR and use this to improve its content and design. The study comprised virtual usability tests using a think-aloud method19 followed by semistructured interviews (see online supplemental appendix S1) for triangulation. Interviews were performed from May to September 2021. Results are reported in accordance with relevant reporting guidelines (Consolidated criteria for Reporting Qualitative research).20

Potential participants were recruited via email and social media through ICCPR’s network (~60 council/friend member associations) and through ICCPR’s programme email distribution list. Interested parties in each WHO region and from the diverse healthcare disciplines represented in CR (eg, physician, physiotherapist) were interviewed first, and interviews continued through to theme saturation. Sampling was purposive, such that additional study advertising was undertaken to solicit participants in unrepresented regions.

Interested parties were emailed an informed consent form, which they were asked to review and sign before the interview; alternatively, they could discuss the contents at the outset of the interview and sign before it began (ie, written consent). Also before the test/interview, participants were directed to ICRR’s website to read the information about joining the ICRR (https://globalcardiacrehab.com/ICRR_sites). They were also provided with copies of relevant registry materials (eg, data dictionary, information on navigating ICRR’s ancillary features such as outcome dashboards and data export) as well as a login to the ICRR demonstration (‘demo’) site, and told to familiarise themselves with these prior to the usability test. Participants were asked to share a copy of the patient information letter and consent form, as well as the variables/survey for patient report and the lay summary, with some patients to gather input to share after the usability test. They were also asked to be ready to enter, during the test, the data of a patient who had completed their programme, ensuring they did not reveal the patient’s identity. Finally, a copy of the interview guide was provided to interested participants.

Virtual usability tests were held via Zoom and video recorded. One of the ICRR cochairs (SLG, KIT-A) led each interview, which was also attended by an ICRR trainee (HJA) who notated non-verbal communication, among other contributions; roles shifted as necessary. Because some participants knew the ICRR cochair, all participants were asked not to respond in a socially desirable manner in case this familiarity would impact their test, and were reminded that the goal was to receive as much constructive feedback as possible to ensure the utmost utility of the ultimate registry.

All participants consented to having their web cameras on, and completed the test on a computer in a private, quiet environment. Interviewers also had their web cameras on to facilitate communication. As shown in online supplemental appendix S1, participants were asked to log in to the ICRR demo site (https://demo.e-dendrite.com/icrr/; figure 1) and share their screen, and to have their data dictionary readily accessible. They were then instructed to enter preprogramme and postprogramme data on the graduated patient they had preidentified, while thinking and talking aloud (figure 1). During observation, interviewers noted the interviewee’s thoughts spoken out loud to enhance data collection. Notes included aspects of the context of the test, facial expressions and gestures that would not otherwise be recorded, and some ideas raised by participants in the process.

This was followed by a semistructured interview (see online supplemental appendix S1). The interview guide and subsequent analyses were based on concepts of the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2),21–23 which builds from the Technology Acceptance Models (1–3).24 These models seek to characterise the drivers of acceptance of new technology such as a registry, and assess the likelihood of its adoption and use in practice. UTAUT constructs are found to account for up to 70% of the variance in behavioural intentions to use, and about 50% in actual use of technologies.21 25

During the interview, the ICRR cochair shared the relevant documents or aspects of the ICRR demo on screen, and if not applicable the interview guide itself. Finally, participants were asked to watch for an email from REDCap with the 10-item System Usability Scale (SUS) to complete.26

Participants

CR staff working at a CR programme meeting the inclusion criteria for ICRR participation were sought.17 In brief, programmes had to be offering phase II CR (ie, post-acute, outpatient) in a low-income or middle-income country (based on World Bank classification),27 or be in a high-income country but be considered ‘low-resource’ in relation to CR on the basis of limited financial and/or healthcare resources, lack of patient and healthcare provider awareness, and/or patient disadvantage.28 Exclusion criteria were: (1) inability to read and communicate in English and (2) residing in a country that already has a CR registry or where one is in development.16 For instance, given plans by the European Association of Preventive Cardiology to develop a registry, no participants were sought from that region.

Previous research suggests 85% of usability problems can be discovered by four or five participants, while 100% of usability problems can be discovered by 15 users.19 29 30 Therefore, we initially aimed for 15 interviews; however, interviews continued only until saturation was reached.

Measures

The instructions for the think-aloud segment of the usability test were standardised (see online supplemental appendix S1). The subsequent interview guide was developed by the senior author, and input from the ICRR Executive was integrated (see online supplemental appendix S1). Based on previous knowledge31 32 and informed by theory,21 the interview questions aimed to invite variation on the following parameters: registry adoption (eg, effort expectancy for approvals, patient consent), perceived ease of use/operability, system characteristics such as variables and patient report, as well as perceived usefulness of ICRR output to support quality improvement and other programme needs. We strove to keep the interviews open, to encourage participants to tell us as freely as possible about the aspects of adopting and using the registry that were important from their point of view.

Participants were then emailed the SUS,26 33–37 and asked to rate the perceived usability of the ICRR. The SUS is used by the International Organization for Standardization (ISO 9241-11).26 35 36 It consists of 10 items, each rated on a 5-point Likert scale from 1 to 5 (‘strongly disagree’ to ‘strongly agree’). The scale incorporates positively and negatively framed items to account for biases that may result from respondents’ potential lack of attention while completing it.26 Total scores were calculated with Brooke’s standard scoring method,26 using the standard Excel sheet and formulas of the online SUS Score calculator.38 Scores range from 0 to 100, with scores above 68 considered acceptable.39 Scores are also converted to letter-grades (from A+ to F) based on percentile ranks.37 The SUS is a valid and reliable instrument to measure perceived usability. It has been shown to effectively distinguish between unusable and usable systems even with very small sample sizes of 8–12.40 It correlates highly with other questionnaire-based measurements of usability.34
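For readers unfamiliar with Brooke’s scoring arithmetic, a minimal sketch is shown below. It assumes the standard 10-item SUS with responses coded 1–5, odd-numbered items positively worded and even-numbered items negatively worded; the study itself used the standard Excel sheet and online calculator rather than this code, and the example responses are hypothetical.

```python
# Minimal sketch of Brooke's SUS scoring (illustrative only; the study used
# the standard Excel sheet/online SUS calculator). Assumes the standard
# 10-item form with 1-5 responses: odd items positively worded, even items
# negatively worded.

def sus_score(responses: list[int]) -> float:
    """Return a 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected ten responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0,2,... = items 1,3,... (positive)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 sum to 0-100


# Hypothetical respondent; 87.5 falls above the 68 'acceptable' threshold
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # 87.5
```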

Recordings were transcribed verbatim, cleaned and anonymised, and the trainee finalised notes for review and approval by ICRR cochair SLG. Notated behavioural observations and participant comments from the think-aloud method, including facial or verbal expressions as participants navigated ICRR screens, were considered by them and analysed using content analysis. The cochair generated a list of potential ICRR revisions (eg, to the platform itself, dashboards, supporting files or website content) at the end of each interview. These were discussed with the ICRR Executive, and those agreed were implemented as soon as possible, such that subsequent interviewees received updated materials.

Interview analysis was concurrent with data collection and undertaken using NVivo V.1.5.1.41 A deductive-thematic approach as outlined by Crabtree and Miller was used.42 43 Following training and calibration with the senior author, each interview was coded independently by two coders: the trainee who attended the interview (HJA) and another trainee from the senior author’s team with expertise in global health who was not present at the interviews (ND). To ensure reliability, coding for each interview was then reconciled between them and, to ensure validity, in a meeting with the senior author, until consensus was reached. Each theme and subtheme was supported by meaningful quotations (verbatim, except for minor edits to increase clarity where the respondent’s first language was other than English). To ensure credibility, themes with subthemes were then shared with all interviewees to inquire whether they resonated and to request any input (ie, member checking).44

Patient and public involvement

No patients or members of the public were involved in this study, but the main subjects were CR programmes. They were involved in the design and conduct of the study, as we sought to learn what CR programmes want to know and what will optimise registry usability for them, and hence on what we should focus in the usability testing and subsequent interviews. CR programmes were not involved in the write-up of results, but as outlined above results were shared with programmes to confirm them and solicit further input.

Results

Sixteen CR staff expressed interest in participating. As shown in table 1, ultimately 12 interviews were conducted before saturation was achieved, with CR staff (67% female) from a variety of disciplines in all regions of the world except Europe. The four who were not interviewed were also from Latin America, which was already well represented in the sample. Participants worked in both privately and publicly funded programmes. One was from a rural area. The tests and interviews combined averaged 1 hour in duration.

Table 1

Interviewee characteristics

Usability tests

Overall, participants were readily able to navigate the demo registry to enter the preprogramme and postprogramme data. It was evident that most of the variables were assessed in their routine practice and definitions were consistent with their practice, such as tobacco use, blood pressure, body mass index, functional capacity, quality of life, work status and education level. Participants used different functional capacity tests at their programmes, but the data dictionary (the relevant excerpt of which is available in the registry screen in an information bubble; figure 1) provides information on how to convert the various measures to metabolic equivalents of task (METs); several participants successfully converted values during the usability test (ID5, ID6).
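As an illustration of this kind of conversion (not the ICRR data dictionary’s own tables, which programmes should follow), the sketch below applies the widely published ACSM treadmill walking equation; the speed and grade values are hypothetical.

```python
# Illustrative only: converting a submaximal treadmill walking workload to METs
# using the widely published ACSM walking equation. This is one example
# conversion; the ICRR data dictionary specifies the conversions programmes
# should actually use for their functional capacity tests.

def treadmill_walking_mets(speed_m_per_min: float, grade_fraction: float) -> float:
    """Estimate METs for treadmill walking (speed in m/min, grade as a fraction)."""
    vo2 = 0.1 * speed_m_per_min + 1.8 * speed_m_per_min * grade_fraction + 3.5  # mL O2/kg/min
    return vo2 / 3.5  # 1 MET = 3.5 mL O2/kg/min


# Hypothetical example: 80.4 m/min (3.0 mph) at a 5% grade
print(round(treadmill_walking_mets(80.4, 0.05), 1))  # ~5.4 METs
```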

Based on participant observation and utterances, suggestions for changes to the ICRR were raised, some of which are shown in table 2. For instance, with the registry itself, some software glitches were identified (eg, could not edit participant email for patient-reported outcomes; ID3), the definitions of some variables required clarification (eg, years of education, ID12; referral diagnoses cardiac only and other diagnoses to be reported elsewhere, ID1), the response options or ranges on some variables required modification (eg, maximum number of sessions, ID12; entering multiple referral interventions, ID12) and the addition of an optional variable pre and post programme was suggested (eg, blood glucose, ID6). All of these suggestions were implemented.

Table 2

Main ICRR changes made, with supporting quotes

Figure 2

Themes and subthemes that emerged.

Usability themes

Four major themes emerged from the interviews, as shown in figure 2. Exemplary quotes are shared for each below, with text in square brackets added in some instances to provide the context of the interviewer’s question for clarity.

Theme 1: ease of approvals, adoption and implementation

This theme comprised five subthemes, with the first three regarding approvals, staff and time relating to theoretical constructs of facilitating conditions and effort expectancy. The subthemes of registry navigation and application to private as well as public centres relate to the theoretical constructs of perceived ease of use and operability.

ICRR on-boarding involves securing an institutional signature on a site agreement as well as research ethics approvals. Most participants perceived they could secure these approvals, but noted the time required for the latter. Participants at privately funded programmes sometimes did not have a research ethics board associated with their institution, so they stated they would need to reach out to collaborators to secure approval elsewhere, which involves a fee. Programmes also needed someone on staff with the necessary institutional appointments to be eligible to apply for ethics approval.

I have two centres here. One mine, it’s a private centre, so I have no problem to install these kinds of registries in my program. And I have another workplace that’s a public institution, it’s a hospital. And there is of course, an ethical committee … And of course, I have to propose to that committee. (ID2)

I mean if the program director doesn't have an academic appointment, he wouldn't be able to apply for ethics. So, I'd have to get our medical director to do it I think because he’s an appointed professor overseeing students. I mean you'd have to have someone with an academic affiliation. (ID1)

Participants also talked about which staff would enter data in the ICRR, and how they would carve out time to do so from their full clinical schedules. Most programmes were small, with few team members. Some of the physician participants wanted to enter the data themselves to ensure it was of the utmost completeness and quality. Ideas to ensure data entry feasibility included engaging trainees and administrative staff, as well as exploiting the patient report feature.

I must be honest here. This is probably my major problem, it’s time. (ID12)

Yes, we have a lot of work, but because it’s pretty important, all the data I collect it’s by me personally. The administrative staff will take care about all this administrative stuff. (ID3)

I was thinking about this [using a trainee to enter the data] when I read the questions and well, I think we can find a way to do that because we're really interested. I think it’s a good idea. It’s an excellent idea. We have to promote this. (ID2)

On a related note, participants discussed the need for time to adopt and make optimal use of the registry. They recognised the amount of time that would be needed for approvals as outlined above, but found the registry so easy to use that they thought the on-boarding process would not be time-prohibitive. They did raise concerns about the time to enter the data in addition to the usual data collection requirements at their institution (eg, MS Excel, electronic health records, paper charts); they commented that there would need to be a real champion dedicated to the registry to ensure the variables were entered. They did appreciate the lay discharge summary feature, which they perceived rendered the effort to enter the data worthwhile.

We should improve our health information system because we use paper. (ID9)

Yes. Here we have maybe a little problem because we have a lot of work and also, we have our own database …. And that of course is a lot of work. There’s just me and maybe one or two cardiologists that could be doing it. (ID2)

There are two cardiologists. There are two physical therapists; I have one specialist that maybe could also help me in this. (ID2)

Having something like this [lay summary]… the amount of time it would take me to input the data would save me the amount of time that it’s going to take me to write the report. (ID1)

Participants verbally reported that the ease of using the software, including logging in, navigating and exiting the patient data entry area of the registry, made it seem the ICRR would be quite seamless to adopt. No matter the region, all participants were able to access the patient lay summary, download an outcome dashboard figure, and export their entered data.

It was easy actually to enter the data and to get in, to log in. And it’s actually short, you know, the time you spend. and so, it’s okay. (ID8)

Yes, I can [export the entered data and download the patient lay summary]. (ID5)

Finally, participants, including those who worked at privately and publicly funded centres, perceived the registry would work well in both contexts, although motivations for adoption may differ. Participants from privately funded programmes were particularly interested in the programme certification option leveraging data entered into the ICRR (https://globalcardiacrehab.com/Program-Certification).

So, it will be nice to participate in this program. I think it’s very important to be a member of this project, and we should start working from now to prepare our submission to the ethical committee. (ID9)

Theme 2: benefits for programmes

This theme comprised two subthemes. The first, around research utility, relates to the theoretical construct of perceived usefulness. The second, around the utility of the many ICRR feedback mechanisms, relates to the theoretical constructs of output quality, performance expectancy, result demonstrability and job relevance.

Interviewees raised many benefits of participating in the registry, which would outweigh or at least balance the downside of time required to get approvals and enter data for each patient. Participants working at academic centres noted how readily the registry lends itself to research. They wanted to know how contributing programmes could be involved in research and how their participation would be recognised. They were pleased with the ready ability to download their own site data at any time for research or other purposes.

So, once we use this registry, this is kind of a database for research. (ID2)

I think for us low- and middle-income countries, it’s important to participate in this registry. To compare our program, and our results of this program with the other countries, and to improve our program and develop our rehabilitation. (ID9)

All the information we have already entered in the database I think will be used and analysed. It’s simple. It’s the most important things and it’s a great initiative. (ID3)

Participants noted four in-built feedback mechanisms they perceived as major benefits of the registry. First, they could see how they could use the outcome dashboards to fulfil reporting requirements in their institution. Although the variables were not exactly consistent with what was required, it was perceived they would complement them nicely. Participants did, however, express a desire for outcome comparisons other than the two available; they requested to compare only to other programmes in their region rather than all programmes in the registry, and where applicable to compare only to programmes outside of urban and/or academic centres.

… It’s useful. I mean, the amount of time that you spent entering the data that’s how much information that it gives you back. … I think you're a really nice fit. (ID8)

I wouldn't want us to be lumped in with [city], I'd want to know how we do compared to some of their community programs, and I would really be interested in knowing how rural versus urban sites are doing. (ID1)

We were wondering about region, doing it by region (ID5)

Second, they appreciated the planned quality improvement supports to be provided by ICRR’s user committee (see: https://globalcardiacrehab.com/resources/Documents/ICRR_QI%20plan_v1-2.pdf). They reported that they wanted to do more quality improvement but had limited time, and that they would appreciate the tools and resources provided. Third, they did want to take advantage of the programme certification possibility for programmes that participate in the registry. While they perceived the cost as reasonable, they suggested a sliding scale based on country income classification (which was implemented), and pondered whether ICCPR would be known to patients and their institutions.

If we're going to do quality improvement, like what are we going to do with it, and generating those tools for people that maybe don't have the same knowledge. Here a lot of people that work in cardiac rehab are clinicians, so helping us and supporting us in that way would be useful. (ID1)

I don't think $500 over the course of three years is unrealistic. (ID1)

I don't think anybody would really care or know what it means to be certified by ICCPR.(ID1)

I think this is interesting that you can help us, and then I do agree you may want to charge. I think that would really help. (ID10)

The final in-built benefit participants raised was the lay discharge summary, as it quantifies for patients how they have improved and encourages further self-management post programme. Respondents did suggest having more figures or images rather than text.

This is actually good because it shows the improvement for the patient and what s/he needs to continue or what s/he needs to improve. (ID8)

I would say yes, this would be amazing. It'd be cool to have a print-out with visuals. I'm meeting too many patients who don't really read all right, so the graphs could show them how they've done. (ID1)

Theme 3: variables and their definitions

This theme consisted of five subthemes, namely: number of variables, measurement operationalisation, difficulty securing lipid bloodwork, the patient-centred and clinically relevant nature of the outcome variables, and variables that were missing or could be added. These subthemes speak to the theoretical constructs of learnability, efficiency, system characteristics and satisfaction.

Interviewees were very pleased with the low number of program-reported variables and found the variable definitions or operationalisation to be clear and uncomplicated.

… I like the registry because it was so quick. We don't want random information getting filled up. … I like it was really short and sweet with important details, very precise. Like what are the points, and you just look for that: … necessary information and no extra details. (ID6)

They were also satisfied with the clarity regarding units of measurement, for example, for the variables around servings of fruit and vegetables as well as years of schooling. For low-density lipoprotein (LDL), the two major units used internationally were available, so it was easy for all participants to enter this data after specifying units.
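As an illustration, the sketch below converts LDL between the two units using the standard cholesterol conversion factor (1 mmol/L ≈ 38.67 mg/dL); the function names are hypothetical and not part of the registry, which simply accepts either unit once specified.

```python
# Illustrative only: converting LDL cholesterol between mmol/L and mg/dL using
# the standard factor of ~38.67 mg/dL per mmol/L. The registry itself accepts
# either unit directly once the unit is specified.

MGDL_PER_MMOLL = 38.67

def ldl_mmoll_to_mgdl(value_mmol_l: float) -> float:
    return round(value_mmol_l * MGDL_PER_MMOLL, 1)

def ldl_mgdl_to_mmoll(value_mg_dl: float) -> float:
    return round(value_mg_dl / MGDL_PER_MMOLL, 2)

print(ldl_mmoll_to_mgdl(2.6))  # 100.5 mg/dL
print(ldl_mgdl_to_mmoll(100))  # 2.59 mmol/L
```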

When discussing concordance between the variables in ICRR’s data dictionary and their practice, participants reported that the pre-programme and clinical variables were routinely collected. Many variables were operationalised in exactly the same way in their practice (eg, METs, blood pressure, body mass index, programme completion). However, programmes reported they were not assessing, or were assessing differently, some of the patient-reported variables such as socioeconomic status, medication adherence and social support.

I don't ask people directly about further medication. But, I have had people bring it up when I asked them what their concerns are. (ID1)

One variable was commonly not assessed, namely LDL (ID1, ID3, ID5, ID6, ID10).

We try to collect cholesterol, LDL, HDL [high-density lipoprotein], saliva, serum. Um, much of them we don't have because they come to us, we set up their program, and they go back to the cardiologists not us. (ID3)

Finally, we asked the interviewees about variables they thought were missing or needed modification; there were few suggestions, which included adding maintenance programme participation and blood glucose.

Theme 4: patient report and follow-up assessment

This final theme, with six subthemes, addressed the issues of securing informed consent from patients to participate in the registry, language, facilitating patients’ provision of data, and retention post programme and annually thereafter. These related to system characteristics and facilitating conditions.

Participants had, at our direction, invited some patients to review the ICRR information sheet and consent form. The only question they reported patients raised was about where the data were stored, and whether it was outside of the country. Overall, they reported the patients surveyed found the documents clear, were willing to participate, and were also willing to provide their email address for sending surveys.

I think it is very understandable for the patient. (ID2)

I gave it to three people. The only question that came up was on the second page in the last paragraph, the second last line, it says ‘data may be subject to access by third parties as a result of security legislation now in place in many countries’. So the patient was asking is this data available outside of [country]. (ID1)

… I asked at least one patient. He said he was willing to participate. In that consent form, it was said there, you have to provide your email address, and he was willing to provide it. (ID8)

There was a major issue of language, however. ICRR materials are only available in English at this time. Some sites will have to translate the consent information so patients can provide informed consent. These sites would not be able to take advantage of patient report, which significantly reduces the number of variables that programmes need to enter for each patient, unless they also translated the surveys and gave them to participants on paper. Participants identified that this raises questions about the validity of the translations, particularly as they did not have funding for professional translation, and only three of ICRR’s items are validated with available translations (eg, depressive symptoms). Participants stated they would instead interpret the items in their consultations with patients and enter the data themselves.

I think, unfortunately, English is not our first language or not our mother tongue. So, we must interview the patient, because maybe only 20% can understand the questions and answer fully. So, what we would try to do is to interview the patient, and we will do program-reported data. So, we will ask the medical officer to interview patients, and then he will enter the data into the system. It’s very hard for a patient to complete the questionnaire. (ID4)

We have more patients who are [language] speaking and others we have, [language] speaking. So, maybe like half, at least half of them only would know like perfect English. But I asked one, [language speaker] and one [language speaker] too. If they would understand, they understood, but they really would have wanted it to be in their, translated in their own language they said, especially if they didn't have a college degree or they hadn't gotten to university. (ID8)

Where they were planning to take advantage of the patient report surveys, participants reported that most patients do have personal devices to receive and respond to the surveys, but some older patients did not. Moreover, they wanted to know whether the surveys could be sent via WhatsApp, as that was the communication means most commonly used by and with their patients. However, many programmes already sent their intake packages to patients electronically, so they perceived it would be very easy to also send the registry consent form and assessments (another subtheme).

Maybe some patients because they are very aged and I think it, maybe for them, it’s going to be very difficult to get a smartphone, to introduce the information there. We are going to find it really difficult. (ID2)

We do have intake packages that we send out. We have them fill that before they bring it in. We send our intake packages out by email. If you have a template, rather than email it, I could slip it in their package. (ID1)

We have to use the smartphone to keep in contact with the patient, and we ask always if the patient prefers to use WhatsApp, email or text message. I think that most patients use WhatsApp, but always you have a patient that doesn't have it. (ID5)

It’s 50/50 to be honest. I mean we have one group that’s educated -very well educated, and then we have another group of patients who are probably not well-educated, but the caregivers are okay; They help us out with all this information with the patient. This information gets collected, sometimes through the caregiver more than the patient. (ID12)

They also raised concerns about patient retention post programme, which could lead to attrition bias. They experience quite high loss to follow-up in their low-resource settings, as patients often have to pay out-of-pocket for services. Patients drop out for various clinical and non-clinical reasons. Participants identified some factors that may support their ability to get follow-up data. They reported finding it easier to contact patients given the ubiquity of personal smartphones. In many countries, it is now possible to port phone numbers to different carriers, so they can often still contact patients a year later. Moreover, many of their institutions now have electronic health records, where patient contact information is regularly updated.

I think that now is a little easier than before. Because when you want to change your cell phone company, you keep your own number. And another way is that we always try to ask for a family member number. Or may be you would have two contact numbers. (ID5)

Most of my patients go back to their cardiologist, so I lose them. (ID3)

They don't come for a follow-up visit. (ID1)

What happens is that it’s difficult to get them back. The patients are coming from various other districts and far off places, so they prefer-- to be honest-- lesser number of sessions. (ID6)

Some sites had maintenance programmes, so perceived they could quite easily collect annual follow-up data as well. Many participants talked about how they wanted to do annual assessments to properly evaluate their services, and participating in the registry would at last support this. With the COVID-19 pandemic, programmes have already been calling patients in their hybrid models; patients are quite used to, and receptive to, such calls.

Yeah, because our program, there’s the maintenance program built right into it. (ID1)

So, we can make of course a good registry of the follow-up at a year. I think we can do that. (ID2)

That was an issue initially because we wanted to have long-term data. So, we ended up with just a one year "yes or no” whether the patient is still alive, and more or less the well-being at the end of a year. So, our registry is going to go on for at least a year (ID10)

Usability ratings and other ICRR changes considered based on findings

The mean SUS score was 83.75 (SD 19.63), demonstrating ‘excellent’ perceived usability of the ICRR, as shown in figure 3.39 This SUS score corresponds to a percentile rank of 90%–95%, and is considered a letter-grade of A.34

Figure 3

ICRR usability ratings. *Reverse-scored items. ICRR, International Cardiac Rehabilitation Registry.

With regard to non-entry usability issues, utterances also identified the need to make changes to other ICRR elements, many of which are shown in table 2. For instance, changes were made to the ICRR programme survey (eg, some programmes prescribe a variable number of sessions to each patient, ID2; clarity around delivery of alternative models, ID6). Moreover, there was a lack of clarity on patient inclusion criteria, as participants had not read the full protocol on ICRR’s website (eg, exclusion of primary prevention patients, ID1); an on-boarding meeting agenda was created in which it will be confirmed that this has been reviewed and understood. Utterances also related to navigating through registry screens, knowing which ones pertained to preprogramme and postprogramme data, as well as how to exit a patient record (ID6). This detail was added to the data dictionary, and a training manual with annotated screenshots was developed for on-boarding programmes. Participants also inquired about adding their CR programme name and/or logo to the email or texts sent to consenting patients with the survey link, as well as to the lay summary (ID8), which has been implemented by the software company Dendrite.

Participants were also unclear what to do when they did not have data for a particular variable (eg, lipids, ID6); instructions were added to the data dictionary preamble. Relatedly, two programmes assessed two variables (ie, functional capacity and body mass index) pre-programme only, not post. Nevertheless, participants spontaneously reported willingness to start collecting these variables to improve their programme.

The post-exercise peak METs, usually we do not assess it regularly. But now we are starting a post-program exercise stress test to evaluate the effectiveness of the program. (ID11)

They also raised the issue of getting postprogramme data from patients who do not return, and described how they try to get it through alternate means (eg, phone administration of the Duke Activity Status Index for functional capacity, automated blood pressure monitor from a pharmacy, ID6); some of these suggestions were added to the data dictionary for other programmes to consider, where valid data could be collected.

For many patients in our history, we can’t find LDL lipids, body mass index, blood pressure or METS in our documents, because we have paper. And often we don't have access to patients at the end of the program. They go home, and we can’t keep contact with the patient. But we hope to improve the system by the end of the year. (ID9)

There are patients who have BP [blood pressure] monitors at home, so we get this information. (ID6)

Some issues identified could not be addressed, such as potentially sending out patient surveys via WhatsApp (ID3, ID5). Participants also stated they did not measure some of the variables (eg, social support) and suggested alternate variables; these were considered but ultimately we remained true to the final variable list as established through the modified Delphi process.

Finally, as outlined above in theme 2 on benefits, regarding research opportunities for participating sites, the ICRR steering committee has approved a modification to its Data Access & Dissemination policy whereby contributing academic data stewards in good standing, in terms of data quality and having a minimum amount of data entered, will be recognised on all publications stemming from the registry as an ‘ICRR collaborator’ (https://globalcardiacrehab.com/ICRR-Governance). Moreover, a sliding scale for programme certification cost was proposed and approved by ICCPR.

Discussion

Development of the ICRR has incorporated evaluation at every stage. This latest usability test has served, following some corresponding improvements, to optimise many aspects of the registry to promote adoption going forward. In particular, effort expectancy, ease of use, operability, utility, output quality, performance expectancy, result demonstrability, job relevance, learnability, efficiency and satisfaction were all established as positive. Quantitatively, perceived ICRR usability is considered ‘excellent’. Thus, on implementation, the ICRR has the potential to establish the impact of CR in low-resource ‘real-world’ settings for the first time, as well as serve as a vehicle for quality improvement in these settings.

There are some challenges, however, that will be difficult to overcome. First, CR programmes are under-resourced, particularly in low-resource settings; therefore, carving out staff time to apply for necessary approvals, for initial training, ongoing data entry and ultimate use of the data for quality improvement activities will be problematic. Second, there are many languages spoken around the world, and it would be difficult for an international registry to have materials available in all needed languages. This limits assessment, use of the patient report surveys to reduce data entry burden by CR data stewards, as well as utility of the lay discharge summary. Finally, because patients more often pay out-of-pocket for CR in low-resource settings45 and have more barriers to participation including limited funds to pay for travel to sessions, attrition is high. Therefore, not all patients will be able to come for postprogramme assessments, resulting in retention bias.

Results of this study are concordant with the two other studies on the usability of CR registries, and many other studies on the usability of health registries or other health technologies more broadly.46–50 For instance, a qualitative study in Denmark and the UK established the perceived utility of CR registry feedback mechanisms.31 Another qualitative study in the same countries highlighted CR registry adoption and implementation issues around data entry processes, resources and management support, and quality improvement.32

The implications of this study in terms of revisions to optimise the utility of the ICRR have been outlined above. These changes have received ethics approval, and the ICRR was launched in October 2021. We are now embarking on field or pilot testing. This will also allow us to test the real-world on-boarding standard operating procedure developed, which may demonstrate that the ICRR has even greater learnability. It will also enable a real-world test of ICRR use in context, including a test of the patient consent rate to contribute data and of retention for follow-up assessments. Indeed, herein the annual follow-up assessment was only discussed in interviews, not truly tested in practice. This will also test data quality and ICRR data quality assurance processes. Finally, we will determine whether CR programmes are eager to undergo the effort to translate some of the materials, and to take part in the Certification programme.

Limitations

Caution is warranted in interpreting these results. Representative generalisability is not established through qualitative research, so while purposive sampling was used and saturation was achieved, applicability to all low-resource settings to which the registry is targeted cannot be known. In particular, results may not be relevant in CR settings where English is not used and, given the small number of countries represented, to all low-resource settings. Moreover, the use of purposive sampling may have introduced bias. Efforts were made prior to and during the tests to minimise socially desirable responding. Efforts were also made to ensure interviewer neutrality, and coding was led by non-ICRR chairs. Finally, the nature of the design precludes causal conclusions.

Conclusion

This study has, for the first time, presented a usability test of a CR registry prior to launch. Several changes were made to the registry interface as well as to supporting materials and policies to enhance usability, which was ultimately rated as excellent. The ICRR was established as easy to use, relevant and efficient, with good learnability, operability and perceived usefulness, positive perceptions of output quality and high end-user satisfaction, in low-resource settings. It is hoped that, with the launch warranted by this test and on favourable pilot testing, the ICRR can serve as a mechanism for programmes in these settings, where CR is needed most, to assess and improve the quality of their CR delivery, ultimately improving patient outcomes.

Data availability statement

Data are available upon reasonable request.

Ethics statements

Patient consent for publication

Ethics approval

This study involves human participants and was approved by York University’s Office of Research Ethics (e2020-147; Toronto, Canada). All participants gave informed consent before taking part.

Acknowledgments

We are grateful to Mohiul Chowdhury for assistance in setting up this study.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Contributors SLG conceived of the project and the main conceptual ideas, developed the methodology, managed the research activity planning and execution, conducted formal analysis and synthesis, supervised two trainees in conducting the study, validated the results of the research and the analysis output, cowrote the original draft, and is responsible for the overall content as guarantor. KIT-A acquired financial support, supervised a trainee in conducting the study, and reviewed and edited the manuscript. HJA conducted the investigation, cleaned the interview transcripts, maintained research data, conducted formal analysis and synthesis, including verifying transcripts and reconciling analysis with the second coder, cowrote the drafted manuscript and created data visualisation display items (figures and tables). ND conducted formal analysis and reviewed the manuscript.

  • Funding SLG is supported in her work by the Toronto General & Toronto Western Hospital Foundation and the Peter Munk Cardiac Centre, University Health Network. This work was supported by Qatar University International Research Collaboration Co-Fund grant number IRCC-2020-005.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.