Qualitative instruments involving clients as co-researchers to assess and improve the quality of care relationships in long-term care: an evaluation of instruments to enhance client participation in quality research
  1. Aukelien Scheffelaar1,2,
  2. Nanne Bos1,
  3. Mattanja Triemstra1,
  4. Marjan de Jong3,
  5. Katrien Luijkx4,
  6. Sandra van Dulmen1,2,5
  1. 1 Nivel (Netherlands Institute for Health Services Research), Utrecht, The Netherlands
  2. 2 Radboud university medical center, Radboud Institute for Health Sciences, Department of Primary and Community Care, Nijmegen, The Netherlands
  3. 3 Independent co-researcher, Amsterdam, Netherlands
  4. 4 Tranzo Academic Centre for Transformation in Care and Welfare, Tilburg University, Tilburg, The Netherlands
  5. 5 Faculty of Health and Social Sciences, University of South-Eastern Norway, Drammen, Norway
  1. Correspondence to Dr. Nanne Bos; N.Bos@nivel.nl

Abstract

Objectives Enhancing the active involvement of clients as co-researchers is seen as a promising innovation in quality research. The aim of this study was to assess the feasibility and usability of five qualitative instruments used by co-researchers for assessing the quality of care relationships in long-term care.

Design and setting A qualitative evaluation was performed in three care organisations each focused on one of the following three client groups: frail older adults, people with mental health problems and people with intellectual disabilities. A total of 140 respondents participated in this study. The data comprised observations by researchers and experiences from co-researchers, clients and professionals.

Results Two instruments scored best on feasibility and usability and can therefore both be used by co-researchers to monitor the quality of care relationships from the client perspective in long-term care.

Conclusions The selected instruments let co-researchers interview other clients about their experiences with care relationships. The study findings are useful for long-term care organisations and client councils who are willing to give clients an active role in quality improvement.

  • participatory research
  • long-term care
  • care relationship
  • quality in health care
  • qualitative research

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • The study resulted in two optimised instruments to collect information and feedback from clients on care relationships in long-term care with the active involvement of co-researchers.

  • Co-researchers were actively involved in the preparations, utilisation and evaluation of the qualitative instruments.

  • The perspectives of co-researchers, clients and professionals were gathered and supplemented with observations of the researchers based on a joint interpretation framework to make an accurate and comprehensive evaluation possible.

  • The instruments were conducted and evaluated in three large care organisations providing care to various client groups.

  • Nevertheless, the selection of care organisations, clients and professionals through convenience sampling limits the transferability of the findings.

Introduction

Clients in long-term care receive care, support or assistance for a long time or indeed permanently, and are typically people with an intellectual and/or physical disability, a mental illness, or physical or mental frailty due to old age. The quality of care relationships between these clients and their care professionals is important, as clients depend on professionals who provide the needed support for a substantial period. Positively experienced care relationships benefit the perceived quality of care and quality of life of clients.1 However, the quality of the care relationship is not always satisfactory for clients receiving long-term care.2–5 A care relationship is dynamic and several factors are likely to affect it, such as trust,6–8 continuity,9 10 the listening skills of the professional11 12 and equality.3 7 13 As the client’s experience of a care relationship is continuously changing, there are often opportunities for improving its quality.13 14 Clients in long-term care may therefore raise several points for improvement.2 5 11 15 16

A growing number of quality improvement initiatives focus on client or user experiences.17 Evaluating care from the client’s perspective is essential for quality improvement, as clients are in a legitimate position to decide to what extent their needs and preferences are being met.18 A promising way of including the clients’ perspective in quality improvement is through their active involvement in quality improvement processes. Participatory research methods are useful in identifying improvement areas from real-life client experiences and fostering a sense of partnership between the client interviewers and the clients as respondents.19 Client involvement is seen as essential for improving the quality of care and can change the substantive outcomes.18 20 As users of services themselves and because of their own experiential knowledge, clients may find it easier to interpret comments from other clients and extract the most relevant themes from a unique ‘insider’ perspective.21 When participating clients are positioned at the interface between clients and professionals, they may also offer a genuine client perspective for understanding the data collected.22 Client involvement in quality improvement can thus lead to useful areas to work on, empowerment of the clients involved and an open climate in care organisations in which clients can express their preferences.23 Nevertheless, the involvement of clients as co-researchers in quality improvement remains rare, due to a lack of experience with client participation.21 No qualitative instruments have yet been developed that clients can easily implement themselves. Giving clients an active role in the application of generic quality instruments will increase their influence on healthcare in general and is likely to benefit quality improvement from a client perspective.

Qualitative research provides rich and meaningful information about client experiences and enables exploring and understanding their views.24 As a specific qualitative research approach, qualitative descriptions of the experiences and perceptions of clients are an accessible and valuable source to help professionals reflect on their actions and behaviour, and often provide an immediate picture of areas for improvement.25 Qualitative research also lets professionals complement their own perceptions with those of clients26 and helps them gain a deeper understanding of clients’ perceptions and the corresponding meanings through concrete and detailed examples.19 27–29 Common qualitative approaches include narrative research focusing on life experiences of individuals, phenomenology focusing on subjective experiences and interpretations, case studies in which a particular case is studied, and participant observation.24 Care organisations that rely solely on large-scale survey data may overlook important nuances in how individual clients experience care.25 Professionals have suggested that quality improvement needs to give useful directions for reflecting on their own practice and actions.30 Qualitative research is therefore useful for professionals in helping them become more aware of the client’s perspective and attuning them to the needs of individual clients.31

Until now, qualitative instruments for improving the quality of care relationships have been developed and used solely for specific client groups and care settings. Recently, however, two studies showed that the quality of the care relationship between clients and professionals is influenced mainly by generic determinants that are broadly applicable to multiple client groups in long-term care.32 33 These recent findings suggest that generic qualitative instruments for measuring the quality of care relationships may serve the various client groups and settings in long-term care well. Using a generic instrument may potentially facilitate the exchange of quality improvement information between care settings and encourage reflection and learning among care professionals serving various client groups.

This study examines the feasibility and usability of qualitative instruments with the aim of finding the qualitative instruments that can be used by co-researchers for measuring the quality of care relationships in various long-term care settings. The instruments will be evaluated across the three main client groups in long-term care: physically or mentally frail older adults (OA), people with mental health problems (MH) and people with intellectual disabilities (ID). The term ‘co-researchers’ was chosen for the clients using the instruments, to emphasise the collaborative research process and their active role. In this study, we evaluate existing qualitative instruments that have been adjusted to measure the quality of care relationships with co-researchers among long-term care client groups. As the feasibility and usability of the qualitative instruments depend on the interplay of the several factors and actors involved, the evaluation includes the perspectives of all actors (ie, clients, co-researchers and professionals) to determine which of the qualitative instruments work best, how and when (under what conditions and for whom).34 35 The most feasible and useful instruments can be used by clients involved as co-researchers in long-term care, permitting clients themselves to take an active position in monitoring the quality of care relationships and giving professionals an overview of improvement areas as seen from the client perspective.

Methods

This study concerns a process evaluation of the feasibility and usability of five qualitative instruments that can be used by co-researchers to assess and improve the quality of care relationships in long-term care.36 A process evaluation describing the implementation process and the context is useful for indicating whether the interventions were performed as planned, by assessing the experiences of the researchers and of the co-researchers, clients and care professionals involved.37 Co-researchers implemented the instruments, independently or with the assistance of a supporting interviewer. Each of the five instruments was first tested and evaluated in the client group for which it had been selected. The instruments that scored highest were then cross-tested in a smaller sample in the other two client groups to investigate whether they could be used there as well. See the original study protocol for more information.38 This article focuses specifically on the feasibility and usability of the evaluated qualitative instruments.

Study design and setting

Data were collected between March and November 2018 in three large long-term care organisations in the Netherlands selected by a convenience sampling technique. To make sure a diverse group of clients would be included, care organisations were selected that provide care to large client populations with a diversity of recurring care needs, that deliver both inpatient and outpatient care and that comprise multiple locations.38 The participating organisation for mental healthcare (MH) treats about 30 000 clients with long-term psychological and/or addiction problems annually. This care organisation has 58 locations in the urban area around Amsterdam, including 12 clinics for both inpatient and outpatient care, and 3500 employees. The care organisation for physically and mentally frail OA provides home care and residential care at 35 locations in the province of Noord-Brabant (a rural area in the Netherlands) for about 9900 unique clients per year, of whom 40% receive inpatient care and 65% receive outpatient support (eg, cleaning, home care, day care and case management). The care organisation providing care for people with an ID assists 2375 clients a year spread over 100 locations in the south-east of Noord-Brabant. The support provided covers a wide range of care, from 24/7 intensive care to occasional support (eg, for living, working, leisure time or day care).

Patient and public involvement

The co-researchers of the research teams were actively involved in the preparations, utilisation and evaluation of the qualitative instruments. The qualitative instruments were implemented by five to six co-researchers from each client group; their experiences were an important part of the evaluation. Three research teams were formed with co-researchers and researchers for every client group. Co-researchers were current or former clients of the care organisation in which the research took place. All were adults with a fairly stable health status, able to travel short distances, able to hold a conversation, read and write at a basic level, and open to experiences different from their own. Co-researchers had different educational and socio-economic backgrounds, including practically oriented occupations such as butcher and supermarket cashier, as well as persons without a paid occupation. Co-researchers were given training in interviewing techniques that was tailored to their needs and wishes. Regarding the preparations, the research team, including co-researchers and two researchers (AS and NB or AB), discussed and adjusted the care relationship questions, which were based on an earlier study, and carried out preparatory activities such as setting up the invitations for respondents. Each co-researcher participated in a training on the interviewing techniques and structure of one type of qualitative instrument. Appointments for interviews and focus groups were made by the researcher, and the co-researchers started interviewing respondents in the predesigned way. Basic support with interviewing or reporting was arranged if necessary, particularly for the co-researchers of the ID and OA teams. These co-researchers preferred to interview clients with the help of a supporting interviewer or an experienced co-researcher from the mental health team. The research team gathered at work meetings to share initial experiences about interviewing and cooperation. In later work meetings, the interview results, the summary of findings and the final evaluation of the instruments were discussed and evaluated. For the analysis of the results, co-researchers were expected to share their experiences, give advice and participate in the discussions of the work meetings. Results were summarised by the team and communicated to the respondents to provide them with general information on the study findings.39

Qualitative instruments

Five qualitative instruments were evaluated in this study to assess whether they are useful for evaluating the quality of individual care relationships between a client and a professional in long-term care (see table 1 and online supplementary appendix 1 for a more detailed description of the instruments). The WIEK instrument was selected and evaluated for two client groups. The qualitative instruments were selected out of a total of 23 qualitative instruments inventoried by several stakeholders of the sectors using a Delphi method (see online supplementary appendix 2 for all inventoried qualitative instruments).38 40 41 Stakeholders included representatives of care providers and branch organisations, co-researchers, client or client council organisations with a nationwide scope and care organisations. These stakeholders assessed the available qualitative instruments on several criteria: corroboration, providing recommendations for improving a care relationship, clarity and structure, applicability of instruments in various client groups, validity and reliability, and the extent to which clients are or could be actively involved in implementing the instruments.


Each of the qualitative instruments has its own unique properties and its own qualitative approach. Each instrument is characterised by a specific qualitative method of data collection (ie, open or semistructured interviews or narratives), the number and type of respondents (clients, family members, care professionals) present at the interviews, a specific person in charge of data collection (co-researcher, care professional, independent interviewer) and documentation. In two qualitative instruments, data were collected through open interviews with one client at a time. One qualitative instrument concerns focus groups with clients, professionals and the manager of a ward, followed by follow-up meetings after 1 month. Two qualitative instruments include multiple methods, including individual interviews with clients and a focus group. Three instruments provide improvement information for individual care relationships, while the results of the other instruments can be used to identify improvement opportunities at a more aggregated level (ie, team, ward or organisation level). For some instruments, the co-researchers were also involved in converting the results into recommendations for quality improvement.

Table 1

Descriptive information about the qualitative instruments

The five qualitative instruments selected were already being applied in some care organisations, but they were initially aimed at measuring quality of life or quality of care more generally. The instruments needed some adaptations to correspond to the purpose of the current study, that is, to provide a picture of the quality of the care relationship as experienced from a client perspective. The questions in each instrument were narrowed down to determinants of the quality of a care relationship based on the earlier findings of a systematic review and qualitative research.32 33 Some instruments were modified in advance to allow client participation by co-researchers using the instrument. See online supplementary appendix 1 for a more detailed description of the adjustments made to each instrument.

Evaluation of the instruments

The evaluation comprised two phases, as shown in figure 1. In the first phase, the two instruments originally chosen for each client group in the Delphi study were evaluated in that group. In the second phase, the instruments that scored best were cross-tested in the other two client groups to examine whether they could be used there as well.

Figure 1

Evaluation phases of the instruments.

The qualitative instruments were evaluated on two core aspects: feasibility of the instrument and usability of the instrument outcomes.

  • Feasibility concerns whether or not those involved (co-researcher, respondent and care professional) can use the instrument appropriately. Two main topics were addressed regarding feasibility:

    • Is it possible for co-researchers, clients and professionals to perform the intended roles? Are co-researchers able to perform the described process of the instrument?

    • Does the instrument fit the specific client group (ie, physically or mentally frail OA, mental health clients or people with an ID)? Does the manner of questioning fit the respondents; that is, are respondents comfortable answering the questions asked, do they understand the questions and are they not too exhausted afterwards? Is it possible for clients to relate their experiences in the designated way?

  • Usability is defined as how well users (clients, professionals and managers) can use the instrument outcomes. Questions that were answered were as follows:

    • Does implementing the instrument result in useful information about the experienced quality of a care relationship from a client perspective?

    • Do the results of the instrument lead to concrete areas for improvement, and are these improvement areas clear so that professionals can make the changes needed?

Specific suggestions for modifying the instruments that came to the fore in the evaluation of feasibility or usability are also included in the Results section.

The evaluation is based on two primary sources: the experiences and perspectives of the stakeholders, and the observations of researchers. Experiences and perspectives of the following stakeholder groups were involved in the evaluation of the qualitative instruments: co-researchers, clients, care professionals and supporting interviewers with interview experience. Moreover, all interviews and group discussions were observed by a researcher (AS, NB or AB) using an observation list applicable to the type of instrument (individual interview or group interview).

This process evaluation was inspired by a realistic evaluation approach, based on the argument that evaluations need to indicate what works, how, and under what conditions and for whom.34 35 Both actors and programmes are rooted in a stratified social reality, resulting from an interplay between individuals and institutions with their own interests and objectives. Realistic evaluation helps find out under which specific conditions an intervention works and how. The accumulation of insights helps us to assess whether interventions that proved successful in one setting may also succeed in another setting and how. In this project, however, it was difficult to build up a theoretical basis as most of the instruments were practically oriented. Nevertheless, the assumptions about the mechanisms of each instrument were defined beforehand.34

Data analysis

The three field researchers developed a common interpretation framework by listening individually to the first two audio recordings of interviews for an instrument, filling in the observation list, and discussing similarities and differences between their interpretations. Based on the points discussed, the observation lists were adjusted to create a final version for broader use. The completed observation lists were analysed by the first author in working meetings with the research teams and in a number of reflection and discussion meetings among the three field researchers (AS, NB and AB). In all, 18 audio recordings were listened to and interpreted by a second researcher to check the written notes made by the first researcher and to see if they reached the same conclusions. Of these, 11 recordings of various qualitative instruments were listened to and interpreted individually by the three researchers and then discussed in six discussion meetings. The additions and notable differences in the observations were used as feedback for the researcher concerned and increased the inter-researcher reliability. The modified observation list is presented in online supplementary appendix 3.

The evaluation data were collected and analysed in an iterative process. Written materials, including the experiences of co-researchers, respondents and care professionals and the completed observation lists, were analysed in the qualitative data analysis software program MAXQDA into sub-themes for the core aspects of feasibility and usability. Thematic analysis was used for identifying, analysing and reporting patterns within the data. It organises and describes a dataset in detail,42 and is often used to interpret various aspects of the research topic.42 43 In contrast to many other types of qualitative methods, thematic analysis is not inherently bound to a particular theoretical framework. Instead, thematic analysis provides a flexible and useful research tool within a realist account. In our coding procedures, a stepwise analytic process was followed as suggested by Braun and Clarke, starting with familiarising coders with the data, then generating initial codes, searching for themes and reviewing themes.43

The data collected by implementing each instrument were summarised for one client group, based primarily on the experiences of stakeholders and the observations of the researchers. The findings were then discussed by the three field researchers (AS, NB and AB). The final decision on the instruments was based on the totality of the advice given by the co-researchers, the experiences of respondents and care professionals, and the participant observations made by the researchers.

Participants and recruitment

The instruments were used in three client groups in long-term care: people with mental health problems (MH), physically or mentally frail older adults (OA) and people with intellectual disabilities (ID). The aim was to recruit at least 10 respondents for each instrument in phase 1, and at least six respondents in phase 2 for the cross-evaluation in the other two client groups. For instruments including a focus group (ie, feedback consultation and participatory narrative inquiry), the total number of respondents was expected to be higher, as these instrument types needed to be evaluated taking group dynamics into account. Clients were selected as a convenience sample by their care professional on the basis of the inclusion criteria and invited by letter to take part in an interview or focus group. Nevertheless, we aimed for variation with regard to relevant client characteristics such as age, sex, ethnicity and inpatient or outpatient care. We focused on clients who had had weekly recurring contact with care professionals for at least 3 months. Clients received care in their own home (outpatient) or within the care organisation in which they reside (inpatient). Most clients received care at least once every week, but the assistance for some outpatient clients receiving long-term mental healthcare was more loosely planned. Clients were aged 18 years or older, physically and mentally able to take part, and able to communicate verbally in Dutch. The instruments focused on the professionals whom clients speak to most often for assistance and supportive or physical care (eg, care workers, personal carers and nurses). We excluded other types of professionals, such as psychiatrists, medical specialists, general practitioners, or those who provide care on a voluntary basis. Participating departments were appointed by the contact person of each care organisation, and information was provided to professionals about the research project and research aims. If required, the legal representatives of people with an ID were asked for permission first.

Respondents received information about the scope of the research project and gave verbal and written consent for their participation. Respondents and co-researchers were told they could always quit their involvement without having to state a reason.

Results

An overview of the general findings is provided in the next section. Thereafter, the results for the three instruments selected for further evaluation are reported comprehensively, starting with a short summary of the general findings for each instrument, followed by the feasibility and usability for every client group.

Overview

After the first evaluation phase, three out of five instruments were selected for further evaluation in all three client groups: ‘WIEK interview’, ‘Feedback consultation’ and ‘Participatory Narrative Inquiry’.

The two instruments that did not pass the first evaluation phase (‘Am I satisfied?’ and ‘Clients about Quality’) were excluded because of their lower performance in terms of feasibility and usability. In short, ‘Am I satisfied?’ was not selected for the second evaluation phase for three main reasons: (1) collaboration between professionals and co-researchers generally did not work out well, (2) co-researchers and professionals had difficulties performing their roles and (3) few areas for improvement emerged because respondents gave socially desirable answers.

‘Clients about Quality’ was not selected because of (1) contradictions within the content of the instrument (eg, a multiple-choice questionnaire vs an open manner of interviewing) and (2) the imbalance between the time investment required from clients (participating in the interviews and the 2-hour mirror conversation) and the lower usability of the findings (clients did not bring forward new points for improvement in the mirror conversation, possibly due to the presence of professionals). Detailed results for these two instruments are included in online supplementary appendix 4.

A summary of the evaluation findings for the five qualitative instruments is shown in table 2. Examples of the results of the instruments that were further tested in phase 2 are included in the online supplementary appendix 5.

Table 2

Summary of findings

WIEK interview

General findings in various client groups

The WIEK instrument proved to be feasible for all three client groups. In mental healthcare, the WIEK interview was carried out independently by a co-researcher. The co-researchers asked the questions and sometimes explained the questions to the client. In the ID and OA teams, an experienced co-researcher from the MH team helped the co-researcher by asking questions and probing questions, summarising the answers and drafting a report. Fixed duos should preferably be used so that the co-researcher and supporting co-researcher get used to each other’s way of working. The results of the WIEK instrument provide insights into the experiences of clients regarding the care relationship with a particular care professional, and the areas of improvement identified can be used for working on this individual care relationship. The WIEK theme cards proved to be useful for clients in choosing the topics to discuss and at the same time provided assistance for the co-researchers in asking questions about the chosen topics. The researchers and co-researchers had the impression that the topics on the theme cards were related and sometimes overlapped; the examples and stories that clients wanted to share fitted multiple theme cards. The individual approach of the WIEK instrument suited all three client groups. An improvement for the OA group would be to focus the questions on the whole care team instead of on a single care professional. General descriptive statistics are shown in table 3.

Table 3

Descriptive data for the ‘WIEK interview’ instrument

WIEK: mental healthcare

Feasibility

The WIEK interview scored well on feasibility from the perspective of co-researchers, clients and researchers. Co-researchers were satisfied with their role and were able to ask the questions, probe deeper and summarise the answers. Making a report was felt to be challenging at first, as co-researchers had to develop these skills gradually. To facilitate this learning process, the first reports were read by the observing researcher and discussed and supplemented as necessary. The majority of clients were able to answer the questions. From the observations, it appeared that clients were well able to choose two theme cards and talk about these topics, and most respondents enjoyed the opportunity to choose. One client said at the end of the interview that she did not understand two themes and therefore did not pick those cards. The length of the interviews was tailored to the concentration span of the client; if an interview did not take long and the client still looked energetic, co-researchers asked whether the client wanted to discuss an extra theme card.

Usability

Seven interviews resulted in one or more improvement areas. In six of these, the improvement areas focused on the professional; in one, solely on the role of the client. Co-researchers stated that the WIEK theme cards worked well for discussing the themes a client wanted to discuss. Professionals indicated that the results were recognisable to them. The interviews that did not reveal any improvement areas could nevertheless contain important information, for example when the results confirmed the view of a professional regarding the client’s wishes.

WIEK: Intellectual disability care

Feasibility

The WIEK interview was judged to be feasible for clients with an ID. The observations showed that co-researchers were able to perform their role with the assistance of the supporter. The co-researcher asked the questions and sometimes explained the questions to the client. Co-researchers appreciated the cooperation and role division with the supporter. Co-researchers found the theme cards with questions easy to use, and noted that the layout and icons appealed to clients. In three initial interviews, there was little coordination between the co-researcher and the assisting supporter, whereas in the following interviews the coordination between the interviewers went well. Clients liked taking part in the interviews, understood the questions and did not feel the interviews took too long. Sometimes a question was difficult for a client to answer. The professionals indicated that the use of the instrument did not take much of their time and that expectations were clear from the beginning.

Usability

In 8 of the 10 interviews, an area for improvement for the care relationship with the professional was discussed. In four interviews, the improvement concerned the professional and in two other interviews the improvement concerned the client. In two interviews, the improvement concerned the need to develop trust. Most professionals described the results as useful. Some interview reports did not yield new insights or showed areas for improvement that were already known.

WIEK: elderly care

Feasibility

Co-researchers in the OA team were able to perform their role if they received appropriate support from an experienced co-researcher. Co-researchers felt that the cards gave them guidance in the interview. Clients were positive about the length of the interview and appreciated it because they liked to share their experiences. However, one main modification was needed to let the cards fit well with the client group. The current questions on the cards were focused on one care professional, but in practice clients have contact with many care professionals. Clients found it difficult to talk about a single care professional and spoke almost automatically about the entire care team. The recommendation is that the cards should be changed from questions focusing on a single care professional to questions concerning the plural form 'care professionals', with feedback results for the entire team. Another prerequisite is the support of an experienced co-researcher who can properly ask questions, summarise and simultaneously give the co-researcher space for input and assist where necessary. According to co-researchers, poor hearing on the part of respondents made interviewing harder. Interviewing a couple was also experienced as more difficult than interviewing an individual.

Usability

Four interviews resulted in one or more areas for improvement for the care relationships. In three interviews, no improvement areas were mentioned. Two clients mentioned another point for improvement that concerned not the contact with caregivers but the quality of care in general. According to the care professionals, the presence of the co-researchers encouraged clients to say what bothered them.

Feedback consultation

General findings in various client groups

The feedback consultation helps understand the experiences of a group of clients, and initiates a group process in which clients and employees work on the two action points formulated by clients. In mental healthcare, feedback consultations were carried out independently by a co-researcher. In the ID and OA teams, an assisting team member from the mental health team helped the co-researcher keep to the structure of the meetings, ask probing questions, let all clients have a say and summarise the answers. The feedback consultation proved to be feasible for clients receiving mental healthcare, and to a moderate extent for those with a mild ID. The feedback consultation was not feasible in elderly care. The group-oriented approach did not suit this client group well, owing to reluctance to discuss areas for improvement in the presence of professionals and the manager, as well as to hearing impairments. The co-researchers could not perform the intended role: they did not have enough guidance from the open instructions and lacked the experience to lead a group discussion. Moreover, the action points were not perceived as useful by professionals in elderly care. In contrast, the action points formulated were felt to be useful by care professionals in mental healthcare and by a majority of the professionals in ID care. A recommendation for the manager’s writing task was to note (anonymised) examples under each action point to give absent professionals a clearer picture of what exactly was meant. General descriptive statistics are shown in table 4.

Table 4

Descriptive data for the ‘feedback consultation’ instrument

Feedback consultation: mental healthcare

Feasibility

The feedback consultation was assessed as being feasible in mental healthcare. Co-researchers conducted the group discussions and follow-up meetings according to the designated structure, asked questions, let all clients have a say, clarified the questions when necessary and summarised the experiences of clients. Co-researchers felt that the group discussion worked best when a maximum of eight clients participated so that attention could be paid to all clients attending. Prioritising the discussion topics was adapted to the group size. Clients were positive about their participation. The observations showed that the attention span and understanding of clients differed considerably between clients and wards. As a result, the group discussions differed in length and interaction dynamics. Professionals stated that the time investment was in proportion to the returns and they appreciated the clear structure and the inclusive manner in which all clients were involved by co-researchers.

Usability

Each feedback consultation provided, as intended, two general action points focused on the entire ward. In addition, several individual points for improvement were discussed in each group discussion. After 4 weeks, some professionals reported that they were still working on changes and needed more time, which explained why some clients noticed little change concerning the action points in the follow-up meeting. Two professionals stated that periodically recurring feedback consultations could help the continuous improvement cycle. One hindering factor was that two managers cancelled the follow-up meeting at the last minute; some clients and co-researchers perceived this absence as a lack of interest and importance. In addition, one professional had difficulty in reporting the improvement points to colleagues who had been absent during the feedback consultation.

Feedback consultation: elderly care

Feasibility

The feedback consultation was not feasible in elderly care. Although the respondents felt at ease and understood most questions, the group-oriented approach did not fit this client group. A number of clients found it hard to discuss areas for improvement because they preferred to see themselves as satisfied people and were not used to thinking critically about the care they received. Similarly, some clients had difficulties in prioritising themes, partly because the themes concerned areas for improvement. The focus on improvement areas and sharing issues in a group seemed to raise the threshold for sharing improvement suggestions and led to socially desirable answers. The hearing impairments of some clients also appeared to be a major barrier to having a smooth conversation. In both feedback consultations, the co-researcher did not perform the intended role, that is, asking questions. One co-researcher steered the conversation by talking a lot about his own experiences, whereas the others listened quietly. The co-researchers did not have enough guidance from the open instructions and did not have enough experience in leading a group discussion. The collaboration between the co-researcher and the supporting co-researcher (from the mental health team) was not entirely satisfactory in the two feedback consultations because the co-researchers were still getting to know each other.

Usability

Each feedback consultation resulted in two action points, but care staff did not consider these useful, as all four action points were already known before the feedback consultation took place. Two action points had already been passed on by clients and were being worked on by professionals. The other two concerned professionals paying more attention to clients if more money or time were available, which professionals did not think was realistic. The co-researchers noticed that many of the clients were completely satisfied with the care contact with professionals. Moreover, co-researchers noticed that professionals did not change anything in response to the points for improvement, nor were the action points made clear to the clients.

Feedback consultation: Intellectual disability care

Feasibility

For clients with an ID, the feedback consultation was assessed as being moderately feasible. In two feedback consultations, the collaboration and roles of the co-researcher and supporter worked well. In one feedback consultation, the collaboration and role division did not work well, as the supporting interviewer helped too little with interviewing and the co-researchers became stressed and therefore contributed little. Moreover, the internal communication between professionals was insufficient, as the professionals on duty did not know that the feedback consultation would take place. In general, clients liked talking about their experiences and appreciated the possibility of choosing the themes for discussion. Most clients felt positive about the duration of the feedback consultation, although two clients thought it took too long. Only a small group of clients could participate in this instrument, as it is only appropriate for people with a mild ID who have relatively high intellectual and communication skills for interacting in groups.

Usability

Each feedback consultation resulted in two action points. The professionals in two feedback consultations thought that all action points were worth working on. In the feedback consultation that went less well, two action points did not concern the care relationship or were too generally formulated (according to the professionals). The follow-up meetings showed that professionals and clients worked on the action points in very different ways: from very active (working on them weekly in the group during mealtimes) to merely discussing them in the team without making further changes. Presenting the action points in a visible place encouraged active follow-up.

Participatory narrative inquiry

General findings in various client groups

Participatory narrative inquiry proved to be feasible for all three client groups. The combination of individual interviews and a story meeting allows different client groups to participate. Co-researchers noted that the theme of the interview was left open and that clients could choose which story they wanted to share. Because of the anonymous nature of the stories collected, the results are useful for reflection and learning by a large group of employees of a care organisation. The active contributions of co-researchers increased the commitment of professionals to work on the findings. Co-researchers also noticed that the stories created awareness among employees. However, interest and time were mentioned as conditions for professionals to join the reflection meeting. Some clients had difficulties answering some of the additional closed questions intended to help interpret the shared experience. One possibility for making the interview more accessible would be to remove these questions and have the co-researchers write down the answers after the interview. General descriptive statistics are shown in table 5.

Table 5

Descriptive data for the ‘participatory narrative inquiry’ instrument

Participatory narrative inquiry: elderly care

Feasibility

Co-researchers asked most of the questions, but it was difficult for them to ask more probing questions to clarify the open story of the client. In some interviews, they used a number of directing questions, for example by making comparisons between the client’s situation and their own. The role division in the workshop and the story meeting led by a moderator was adjusted so that co-researchers performed the roles as they wished. Most co-researchers used the audio recording option and did not write down the narrative of a client themselves. Clients had difficulties in answering four of the nine additional questions. The instrument was evaluated as feasible if two modifications were taken into account. First, based on the observations and the co-researchers’ feedback, the recommendation was to change the four additional questions. Second, some co-researchers needed assistance in formulating and asking probing questions. In the interviews, the researcher performed this role several times, without this being intended in advance. It would be helpful if a supporting interviewer could help the co-researcher ask more in-depth questions and report on the answers.

Usability

After the story meeting, professionals indicated that useful points for improvement emerged from the client stories, including some that were previously unknown to them. The rich and personal description in a narrative worked well to show the clients’ perspectives on the care relationships with professionals. A total of 13 narratives showing improvement areas were selected from the 20 narratives collected. In four interviews, one or more narratives with improvement areas were collected; six narratives with areas for improvement were discussed in the group meeting. Commitment among professionals to the improvement areas as formulated in the meeting was seen as a precondition for real change.

Participatory narrative inquiry: mental healthcare

Feasibility

The interviews and storytelling were readily feasible for co-researchers in mental healthcare. They developed their skills and techniques during the interviews. Co-researchers also had an active role at the reflection meeting. Clients felt that the length of the interviews was fine, and they favoured being interviewed by a co-researcher who understood them well. Some of the additional questions were modified based on the results of the evaluation in elderly care. For two clients receiving mental healthcare, it was still difficult to give a specific example in the open part of the interview, and four of the six clients found it difficult to give a title to or add a theme to their story.

Usability

From the 17 stories collected in the interviews and story meetings, co-researchers selected nine useful stories for the reflection meeting. The other stories contained positive experiences or ambiguities. According to the professionals, the improvement themes were useful and recognisable, and showed a cross-section of the client population. They also indicated that the stories highlighted care relationships from a different angle, that is, from the perspectives of clients. The active contributions of co-researchers increased the commitment of professionals to work on the findings. Co-researchers noted that the stories created awareness among employees.

Participatory narrative inquiry: Intellectual disability care

Feasibility

The interviews and the storytelling group meeting proved feasible for co-researchers with an ID when assisted by another co-researcher. The supporting co-researcher from mental healthcare was partly responsible for asking probing questions, assisting the co-researcher when necessary, and summarising and writing down the answers. Clients were positive about the interviews and their length. Even if a story became less concrete, it sometimes contained an area for improvement for professionals. Three of the six clients found it difficult to come up with a title for their story, and some clients had difficulties with one of the questions.

Usability

A total of 11 of the 17 stories were selected by the co-researchers for the reflection meeting. Professionals found the stories recognisable and useful to reflect on, and they stated that professionals need both interest and time if they are to attend the reflection meeting in which concrete improvement actions are formulated.

Discussion

This study aimed to select qualitative instruments that can be used by clients as co-researchers to measure the quality of care relationships in long-term care. Five qualitative instruments were evaluated for three large client groups in long-term care. The findings of this study suggest that two instruments (‘WIEK interview’ and ‘Participatory Narrative Inquiry’) can be broadly implemented by co-researchers in long-term care to monitor the quality of care relationships from a client perspective. For the study purposes, existing instruments were modified to make it possible for clients as co-researchers to interview other clients about their experiences with a care relationship. The two instruments may serve the aims of care organisations that want to give clients an active role in quality improvement initiatives and in monitoring the quality of care relationships.

Both instruments have their own characteristics and aim to map out the quality of the care relationship from a client perspective. The WIEK interview is meant for evaluating or monitoring the quality of an individual care relationship, and reflecting on the individual client–professional relationship and aspects that can be improved. Participatory Narrative Inquiry, on the other hand, provides a collection of anonymous stories that show areas for improvement that are important to most clients. A group of professionals can reflect on the themes emerging from the client stories, and may formulate actions for their own team or organisation. This instrument targets the client–professional relationship at the team level. The two instruments are therefore complementary and care organisations may choose the instrument that best corresponds to their need for quality assessment and the level of results (ie, the individual or group level). The common success factors for application are the clarity and easy-to-use structure for co-researchers, the open and in-depth approach to addressing client experiences, and the small-scale, personal setting. Both instruments are comprehensively described in a toolbox to enable broader use in the future.

In this study, co-researchers with different strengths, skills and characters performed the qualitative instruments. To make their participation meaningful, the roles and task divisions in the qualitative instruments need to be adjusted to the capacities of individual co-researchers. This was achieved by giving co-researchers the option to conduct the interviews with a supporting interviewer, who helped ask probing questions, summarised the results, made notes of the experiences and wrote a short report afterwards. Consequently, a variety of co-researchers were able to perform the instruments. At the same time, not all clients will be able to perform the role of interviewer in the selected qualitative instruments. Selective recruitment of co-researchers with the necessary skills and the provision of interview training are therefore needed to safeguard clients against bad interview experiences and to yield useful outcomes. Moreover, coordination and support during interviews require a substantial amount of time.

Working with clients as co-researchers showed the following benefits. First, clients are more willing and able to express and share their experiences, wishes and needs with a co-researcher. Second, client participation in quality improvement is thereby made attainable for a relatively large and diverse group, not only the client group in which client participation is currently most advanced. Third, it permits and encourages exchange between co-researchers from different client groups, with the advantage that co-researchers can draw on each other’s experiences and even provide each other with practical support in using the instruments.

Regarding the study design, it could be questioned whether the client group setting in which an instrument was implemented during the first evaluation phase influenced the selection of instruments for the second phase. We tried to diminish this chance by not strictly retaining the criterion of selecting one instrument per client group. In the mental health setting, two instruments showed promising results; both were therefore selected for the second evaluation phase. The main reasons for not selecting the instruments ‘Am I Satisfied?’ and ‘Clients about Quality’ for the second evaluation phase concerned the design characteristics of the instruments, such as the presence of care professionals during data collection or the structured nature of the instrument, rather than client group characteristics.

An issue of concern regarding the implementation of the instruments is whether care professionals will regard the quality improvement suggestions brought forward by co-researchers as useful and supplementary to their own perspectives. Initial reactions from professionals showed that they were in favour of the active role for clients in quality improvement initiatives pursued by the instruments evaluated in this research. At the same time, a recent study focusing on client participation in inspectorate supervision in long-term elderly care homes showed that inspectors eventually ignored the information from ‘experts by experience’. The inspectors only illustrated their own report findings with notes made by the people with practical experience, but they did not use new experiential knowledge if it was not reflected in other data.44 It was hard for the inspectors to value the experiential knowledge that clients brought in as equal to their own. Whether care professionals intend to take the areas for improvement seriously and are willing to work on improving the situation in practice is an interesting topic for future research.

An attempt at quality improvement does not guarantee that the desired changes are actually achieved. Success depends not only on performing the instrument as intended but also on a number of general conditions that must be met, such as endorsement of and commitment to the instrument application and outcomes by clients, professionals and management. Moreover, the instrument is not implemented in isolation: the context reshapes and affects the instrument outcomes as well. And ‘even where an intervention itself is relatively simple, its interaction with its context may still be highly complex’.45 Whether change is achieved by the instrument or not is related to these kinds of general conditions and contextual factors. This makes it hard to measure real effects and changes in professional behaviour after the application of a qualitative instrument.

An important contextual factor that requires specific attention is the national quality frameworks for specific client groups, that is, disability care,46 home care47 and nursing home care.48 The quality frameworks underline the importance of reflection on quality improvement regarding client experiences and of the involvement of clients in quality improvement. However, quality frameworks often provide quantitative sets of criteria and place high demands on quality instruments that can hardly be met by qualitative instruments carried out by co-researchers. For example, the quality framework for ID care requires that all clients of a care organisation are questioned with the preferred instrument, and content validity and reliability requirements are operationalised quantitatively. In a Dutch essay entitled ‘About the new rules, obedience and prudence’, Baart criticises the way in which complex reality is reduced to simplified, inflexible and uncompromising protocols and quality frameworks. He argues that professionals need to make independent reflections and moral judgements if they are to be able to provide high-quality care. Professionals must be permanently assisted and helped to freely perceive, critically interpret and substantiate how care can best be provided. This could best be done on the spot and in the moment.49 The two instruments selected in this study could help professionals to reflect open-mindedly on everyday and complex realities by providing in-depth quality information from a client perspective. The extent to which care organisations will use the instrument findings largely depends on the way care organisations define and operationalise the quality frameworks and on whether the instruments correspond to a care organisation’s formulated vision.

Strengths and limitations

Some strengths and limitations can be identified regarding the study design and content. One strength is the data triangulation achieved by including multiple perspectives in the evaluation: the perspectives of co-researchers, clients and professionals were gathered and supplemented with observations by the researchers to make an accurate and comprehensive evaluation of the studied instruments possible. Another strength was the active collaboration between co-researchers and researchers in carrying out the study, which made it possible to conduct this evaluation and select the most promising instruments. Moreover, existing instruments were adapted for this study so as to build on instruments that had already been developed and used in practice. However, the quality and utility of most of these instruments had not previously been investigated or published in the academic literature.

The instruments were evaluated for clients receiving long-term care who are able to talk about their own experiences. Other clients were excluded, such as clients with severe ID or advanced dementia. In long-term care, clients’ capabilities are diverse. For the purposes of this study, existing instruments were adjusted to enable clients as co-researchers to interview other clients about their experiences with their care relationships. Although the co-researchers participating in the three teams were quite diverse, their number was limited, which might limit the generalisability of the study findings. The division of roles must always be decided together with the co-researchers involved. For purposes other than assessing care relationships, or with other actors such as care professionals carrying out the instruments, alternative instruments may be more appropriate. Furthermore, the number of improvement areas mentioned in the interviews for each instrument was used as an indicator of usability, but this needs to be interpreted with some caution, as it is related not only to the characteristics of the instrument but also to the satisfaction levels of the clients interviewed.

The instruments were conducted and evaluated in three large care organisations providing care to different client groups. A total of 140 respondents participated in this study, with a minimum of 10 respondents per instrument. Although clients in long-term care are diverse and each person and interview was unique, specific strengths and restrictions emerged regarding the feasibility and usability of the instruments evaluated, and saturation of the instrument findings was reached. However, the results of the current study are limited to the Dutch context. The quality of care relationships is reported to be suboptimal in other countries as well.2 4 5 Therefore, the qualitative instruments might also be beneficial elsewhere; the extent to which they are useful in other countries could be a topic for further research. Future research might also show how the organisational features and cultures of other care organisations influence the implementation, results and contributions of the instruments.

Conclusion

Based on this process evaluation, two of the five qualitative instruments evaluated can be performed by co-researchers to measure the quality of care relationships in long-term care: the WIEK interview and Participatory Narrative Inquiry. These two instruments scored well on both feasibility and usability. The selected instruments allow clients as co-researchers to interview other clients about their experiences with care relationships in long-term care. The study findings are useful for long-term care organisations and client councils that are willing to involve clients actively in quality improvement, thus making the client perspective visible in both the content and the process of quality improvement.

Acknowledgments

We would like to thank all the co-researchers for their enthusiastic and meaningful contributions to conducting the study and for their useful ideas. We would also like to thank André Bons for his contributions during the data collection and analysis. André Bons was involved as a field researcher in the data collection on behalf of LSR, a Dutch client council organisation with a nationwide scope. Finally, we would like to thank the care organisations and developers of the instruments for their cooperation.

References

Footnotes

  • Contributors AS, NB and MdJ were involved in the data collection, which was carried out in cooperation with co-researchers from three research teams. MdJ contributed as a co-researcher and an interview supporter. AS and NB were responsible for the data analysis; they listened to the interview recordings and participated in the discussion meetings. AS performed the data analysis in MAXQDA and was responsible for writing the manuscript. All authors (AS, NB, MdJ, MT, KL and SvD) contributed to the main lines of the analysis, read several versions of the manuscript, regularly provided feedback and suggestions, and read and approved the final manuscript.

  • Funding This work was funded by the Netherlands Organisation for Health Research and Development (ZonMw) grant number [516012506, 2016]. The funder played no role in the design of the study, collection, analysis or interpretation of data, or in writing the manuscript.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval This study was submitted to the Medical Ethics Committee (MEC) of the Radboud University Medical Centre to decide whether the study needed formal ethical approval (file number 2016-2972). In the light of the Dutch Medical Research Involving Human Subjects Act, the MEC decided that extensive formal approval was not needed for this study.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement All qualitative data and analysis material are available on reasonable request at Nivel (n.bos@nivel.nl).