Process evaluation on quality improvement interventions
  M E J L Hulscher, M G H Laurant, R P T M Grol

  Centre for Quality of Care Research (WOK), University Medical Centre Nijmegen, 6500 HB Nijmegen, The Netherlands

  Correspondence to: Dr M E J L Hulscher, Centre for Quality of Care Research (WOK), University Medical Centre Nijmegen, P O Box 9101, 6500 HB Nijmegen, The Netherlands; M.Hulscher@wok.umcn.nl

Abstract

To design potentially successful quality improvement (QI) interventions, it is crucial to make use of detailed breakdowns of the implementation processes of successful and unsuccessful interventions. Process evaluation can throw light on the mechanisms responsible for the result obtained in the intervention group. It enables researchers and implementers to (1) describe the intervention in detail, (2) check actual exposure to the intervention, and (3) describe the experience of those exposed. This paper presents a framework containing features of QI interventions that might influence success. Attention is paid to features of the target group, the implementers or change agents, the frequency of intervention activities, and features of the information imparted. The framework can be used as a starting point to address all three aspects of process evaluation mentioned above. Process evaluation can be applied to small scale improvement projects, controlled QI studies, and large scale QI programmes; in each case it plays a different role.

  • quality improvement
  • process evaluation
  • interventions


A wide variety of quality improvement (QI) interventions can be used in health care (see also the paper by Grimshaw et al later in this series). Reviews show that most interventions are effective in some settings but not in others.1–3 Studies on effective intervention programmes have shown varying and often modest improvements in healthcare performance. To understand in more detail why some interventions are successful while others fail to change practice, it is necessary to gain insight into the “black box” of QI interventions. Studying the black box of (un)successful interventions implies that we can no longer confine ourselves to describing a QI intervention in global terms—for example, merely as “feedback”, “reminders”, or “a combination of a CME seminar, free office materials and one office visit by a staff member” (box 1). The concrete activities undertaken as part of the QI intervention, the participants’ actual exposure to these activities, and their experience of these activities may all influence the final result (success or failure). Process evaluation can throw light on the mechanisms and processes responsible for the result and for the variation in results within the target group.

Box 1 Why is it important to look inside the “black box” of the intervention?

Szczepura et al5 concluded that an intervention involving feedback failed to change professional practice, whereas Nattinger et al6 showed that feedback led to significant improvements in professional care. Careful analysis of the feedback applied in these studies, however, showed that the two QI interventions differed in character.

In the study by Szczepura et al general practitioners (GPs) in the intervention group received three sets of information about the care they had provided—at the start, after 12 months, and after 24 months. This information concerned cervical cancer screening, developmental screening/immunisation in children, and the determination of risk factors in persons aged 35–64 years such as blood pressure, alcohol consumption, smoking, and body weight. GPs in the so-called “graphic” feedback group received a profile containing the following values for each feedback item: group minimum, group maximum, median, and 20th and 80th percentiles; at each practice the GP’s own scores were marked clearly. The GPs in the control group received feedback in table form—that is, an overview of their own values accompanied by the minimum and maximum scores in the total group. Neither intervention was effective. On receipt of the comparative feedback information all GPs were asked to rate the feedback in terms of its acceptability and intelligibility and whether or not regular feedback in this form was helpful to the practice. No differences regarding these items were reported. No information was provided by the authors on actual exposure of the target population to the intervention—for example, how many actually read the feedback report.

In the study by Nattinger et al performed over a period of 6 months, general internists received monthly overviews of the percentage of patients who had been treated in accordance with the mammography guideline. The first 3 months concerned only the individual management of the internist in question, whereas the second 3 months concerned individual management compared with the management of an anonymous group of colleagues presented with the aid of a histogram. This feedback proved to be effective. The authors stated that they were unable to say how many physicians had actually read their feedback. Thus, no information was provided on actual exposure or the experience of those exposed.

Process evaluation is an important tool that can meticulously describe the QI intervention itself, the actual exposure to this intervention, and the experience of those exposed (participants) (box 2). This information is not only crucial for understanding the success—or lack of success—of QI interventions, but also provides basic data for economic evaluation of quality improvement. Although economic evaluation is beyond the scope of this paper, process evaluation enables estimates to be made of the costs in terms of time and/or money (see paper by Sculpher et al later in this series).

Box 2 Process evaluation

Process evaluation can be used:

  1. to describe the QI intervention—for example:

    • What was the exact nature of the QI intervention?

    • What material investments, time investments, etc were required?

  2. to check the actual exposure to the QI intervention—for example:

    • Was the QI intervention implemented according to plan?

    • Was the target population actually exposed to the intervention as planned?

    • Does this offer an explanation for not achieving the goals?

  3. to describe the experience of those exposed to the QI intervention—for example:

    • How did the target group experience the intervention and the changes?

    • What problems arose while implementing the changes?

    • What requirements for change were experienced?

This paper explores the purpose and value of process evaluation on QI interventions and addresses the issue of what data should be collected (“what to measure”) and data collection methods (“how to measure”).

PURPOSE AND VALUE OF PROCESS EVALUATION

The results of process evaluation serve different purposes.

  • A description of the “intervention as planned” acts as a blueprint to help change agents (and researchers) to apply the intervention as intended in a uniform way within the target population.

  • Actual exposure can be established by checking whether the intervention was performed as planned. This information can be used to adapt the intervention if necessary. Researchers or evaluators can use the information later to explain success or lack of effect, particularly when they do not want to change the intervention during the course of a study. If the intervention in its ultimate form differs considerably from the original plan, then this can be put down to “implementation error”.4 Failing to detect differences between the original intervention plan and the ultimate manner of implementation is sometimes referred to as a “type III error” (by analogy with the statistical type I and type II errors).4

  • A blueprint of the “intervention as performed” is important to enable other people to replicate the intervention. In addition, a detailed description of the “intervention as performed” will facilitate future comparisons between studies and (meta) analysis of crucial features of effective interventions.

  • The main purpose of gaining detailed insight into the experience of those exposed to the intervention is to revise the QI intervention in question. This information on influencing factors as experienced by participants can be used to improve the intervention either during its application (the developmental approach) or afterwards (the experimental approach).

PROCESS EVALUATION AND QUALITY IMPROVEMENT INTERVENTIONS: EXAMPLES

Process evaluation can be applied to QI interventions at any stage of their development. In this paper we distinguish between QI interventions at three stages of development: (1) pilot studies or small scale improvement projects, (2) controlled QI studies, and (3) large scale QI programmes. Process evaluation plays a different role in each case.

Pilot studies/small scale improvement projects

Effect evaluation of a newly developed QI intervention that is being tested in a pilot study or used within a small scale improvement project yields an estimate of the potential level of change. Process evaluation can provide important answers to questions on the feasibility and applicability of introducing the intervention; such answers might prompt revision of the improvement activities in the intervention. Thus, researchers and implementers of this type of QI intervention can use process information to investigate whether they are on the right track or whether their approach needs adjustment.

Controlled QI studies

In a controlled study on the effectiveness of a QI intervention, the central issue is testing the effectiveness of the implementation method in standardised circumstances. In this case, process evaluation yields information that can help to explain heterogeneity in effects. Process evaluation is important to check whether the planned improvement activities have indeed been executed in a uniform way and whether the target population has actually been exposed to these activities as planned. Researchers and implementers of these QI interventions can use process information to detect gaps in implementation that might be responsible for failure or the disappointing outcome of an intervention. Another use for process evaluation in such studies is to determine how the participants experienced the activities: whether they encountered any bottlenecks while implementing the changes and whether they were satisfied with the intervention method. Together with data on the implementation of the QI intervention, this might explain why some participants successfully improved the quality of care while others did not, or why some participants were more successful than others (see example in box 3).

Box 3 Example 1: Improving the prevention of cardiovascular disease7

The study investigated the effectiveness of, and experience with, what was at that time an innovative method of introducing guidelines to improve the organisation of preventive activities in general practice. Over a period of 18 months, trained outreach visitors spent time solving problems in the organisation of prevention in general practice. The study showed that guidelines to organise prevention of cardiovascular disease in general practice could be introduced effectively (controlled study). To evaluate the scope and limitations of the QI intervention, process information was gathered at the end of the project from all the participants at the intervention practices (68 GPs and 83 practice assistants at 33 general practices). Information was collected on actual exposure to the QI intervention, experience with the intervention in general and with the outreach visitors, bottlenecks and advantages, results of the intervention regarding the number of newly detected patients at risk, and the influence of the intervention on the working methods of the GPs and practice assistants.

During 18 months of the intervention the practices were visited between 13 and 59 times (mean 25, SD 9). The mean duration of a visit was 73 minutes (SD 43) with a minimum of 0 minutes (delivering materials only) and a maximum of almost 5 hours. Practices spent, on average, 45% of the visit hours on training and 52% on conferring. The number of team members with whom the outreach visitors met ranged from 1 to 14. In 63% of the consultations the outreach visitor met with practice assistants only, in 7% she met with GPs only, and in 30% of the cases she met both practice assistants and GPs.

In 27 of the practices, adherence to guidelines increased for at least three guidelines, leading to a mean final adherence score of eight guidelines (minimum 7, maximum 9). These practices were visited on average 25 times for almost 31 hours. In five practices no increase or only a very small increase in adherence to guidelines was shown, leading to a mean final adherence score of four guidelines. In these practices the average number of visits (20) and the total duration of the visits (19 hours) were below the group average.

The majority of GPs and practice assistants had a positive opinion of the QI intervention. They were satisfied with the outreach visits, but the practice assistants experienced an extra workload due to the QI intervention. Practice assistants expressed more complaints about the paperwork involved than the GPs, but mentioned fewer patient barriers. Practice assistants and, to a lesser extent, GPs remarked that their participation in the intervention had improved their work methods. Relationships were found between the participants’ experience and the degree to which the practice had changed: more positive opinions of the QI intervention in general, and more newly detected patients than expected, were both related to greater change.
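The kind of comparison described in box 3 is straightforward to reproduce once exposure and outcome data are available per practice. Below is a minimal sketch in Python; the practice records and the adherence cut-off are invented for illustration and are not the study data.

```python
# Relating exposure (process data) to outcome (adherence change).
# All figures below are invented for illustration; they are not the
# data from box 3.
practices = [
    # (practice_id, number_of_visits, visit_hours, guidelines_adhered_to)
    ("A", 28, 33.0, 9),
    ("B", 24, 30.5, 8),
    ("C", 19, 18.0, 4),
    ("D", 21, 20.0, 4),
]

def mean(values):
    return sum(values) / len(values)

# Box 3 reports final adherence scores of 7-9 guidelines for the
# successful practices, hence the illustrative cut-off of 7 used here.
successful = [p for p in practices if p[3] >= 7]
unsuccessful = [p for p in practices if p[3] < 7]

for label, group in (("successful", successful), ("unsuccessful", unsuccessful)):
    print(f"{label}: mean visits {mean([p[1] for p in group]):.1f}, "
          f"mean hours {mean([p[2] for p in group]):.1f}")
```

If, as in box 3, the less successful practices also turn out to be the less exposed ones, the process data offer a plausible explanation for heterogeneity in effects.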

Large scale QI programmes

In a large scale QI programme, effectiveness analyses can show the extent to which the goals of the intervention have been achieved, whereas process evaluation provides information about the actual intervention and about exposure to and experience with the intervention. If a control group is lacking, the results of process evaluation might still yield information about the relationship between the QI intervention and the changes achieved (see example in box 4).

Box 4 Example 2: Cervical cancer screening8

In a national prevention programme GPs and practice assistants were exposed, over a period of 2.5 years, to a combined strategy to introduce guidelines for cervical cancer screening. The combination comprised formulating and distributing guidelines, supplying educational material and a software module, and providing financial support on a national level. On a regional level, agreements were made between the relevant parties (GPs, municipal health services, comprehensive cancer centres, pathology laboratories) and continuing medical education (CME) meetings were organised for GPs and practice assistants. On a local level, trained outreach visitors called at the practices. The evaluation (in a random one-in-three sample, response 62%, 988 practices) showed considerable improvements at the practices: after the intervention, adherence to nine of the 10 key indicators had improved. Information on actual exposure to programme elements was collected by postal questionnaire. Almost all practices in the study population (94%) had been informed about the national prevention programme. For practices that had had contact with an outreach visitor through a practice visit (40%), the median number of practice visits was 2 (range 1–13). The software modules were used by 474 practices (48%), either in full or in part.

Crucial elements for the successful implementation of the guidelines were:

  • making use of the software module (odds ratios (ORs) 1.85–10.2 for nine indicators);

  • having received two or more outreach visits (ORs 1.46–2.35 for six indicators); and

  • practice assistants having attended the refresher course (ORs 1.37–1.90 for four indicators).
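For readers less familiar with the statistic used here: for a binary adherence indicator, the odds ratio compares the odds of adherence among practices exposed to a programme element with the odds among practices not exposed. A standard definition (not specific to this study) is:

$$\mathrm{OR} = \frac{p_1/(1-p_1)}{p_0/(1-p_0)}$$

where p1 is the proportion of practices adhering to the indicator among those exposed (for example, users of the software module) and p0 the proportion among those not exposed; an OR above 1 therefore indicates that exposure was associated with better adherence.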

WHAT TO MEASURE?

If it is decided to perform a process evaluation, researchers and implementers of QI interventions are faced with the following questions:

  • What “key features” of the QI intervention should be included in the description (before and/or after intervention) because they might cause or influence the effect of the intervention? This is the main question for those interested in developing a blueprint of the intervention, either (a) to support uniform performance of the intervention, (b) to enable replication of the intervention, or (c) to facilitate comparisons and meta-analysis of QI interventions.

  • What features of the QI intervention are important to measure (or monitor) while checking whether the participants were exposed as planned? This is the main question for those interested in (a) adapting the QI intervention during the course of the intervention or (b) explaining success or lack of success afterwards.

  • What are “crucial success and fail factors” as experienced by those exposed that might cause or influence the effect of the QI intervention? This is the main question for people interested in revising the QI intervention in question.

To provide practical guidance to researchers and implementers of QI interventions, we present some of our work that addressed these questions in process evaluation on QI interventions. The framework shown here can be used as a starting point in answering all three types of questions.

What to measure: a framework

On the basis of several theories that underlie different approaches to changing clinical practice,9–13 we developed a framework containing features of QI interventions that might influence their success or failure. We also used the checklist developed by the Cochrane Effective Practice and Organisation of Care Review Group (EPOC)14, which guides reviewers in extracting relevant information from primary studies. In addition, we used a number of reviews on the effectiveness of various interventions2 and explored the literature on process and programme evaluation.4,15–20

The resulting framework was tested on a convenience sample of 29 published studies that had used different QI interventions.21 We approached the 26 authors of these studies (response rate 86%). Many features of the interventions were inadequately described in the publications, or not described at all, but most authors were able to provide the missing information when asked. The framework was revised based on the results of this pilot study.

In the framework (table 1) attention is paid to features of the target group, the implementers or change agents, the frequency of intervention activities, and the features of the information imparted. The left column gives a general description of the feature of an intervention that needs to be described in more detail in the right column.

Table 1 Framework for describing the key features of a QI intervention

HOW TO MEASURE?

What methods (single or combined) can be used to gather process data?

Depending on the main question being addressed by process evaluation, it is possible to take a more developmental approach (qualitative and inductive, see also the paper by Pope et al in this series22) or a more experimental approach (quantitative and deductive, see also the paper by Davies et al later in this series). Information can be gathered by on-site observation (on the spot or audio-video recording), by self-reports (interviews, questionnaires, or surveys), and from existing data sources (secondary sources). Examples of secondary sources include minutes of meetings, bills, purchase orders, invoices, end of chapter tests, certificates upon completion of activities, attendance logs, sign-in and sign-out sheets, checklists, referral letters, diaries, news releases, etc.4,15–20

When choosing a measurement method (or a series of methods), it is important to consider the existing circumstances (for example, the amount of time available for gathering and interpreting data), practical issues, the homogeneity of the data, privacy and confidentiality, and the estimated tolerance levels of the respondents who will be asked to provide data. In addition, the instruments must be simple and user friendly so that they are not burdensome for the user. On the other hand, they must be detailed enough to answer the evaluation questions and goals. When selecting the instruments, it is necessary to consider whether the method of data gathering will have an undesirable influence on the ongoing investigation or intervention implementation, depending on the study design or type of project. It must also be guaranteed that the data will be gathered in a valid and reliable manner from selected population samples. Depending on the approach taken (developmental or experimental), respondent samples can be selected to reflect the diversity within a given population (purposive sampling) or to achieve statistical representativeness. Whatever method is chosen, the persons responsible for data gathering should have received adequate training in the skills and terms associated with the use of the instruments and be able to perform quality control checks.
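As a concrete illustration of the two sampling approaches mentioned above, here is a minimal sketch in Python; the practice list and the strata used for purposive sampling are illustrative assumptions, not a recipe from this paper.

```python
import random

# A pool of (anonymised) practice identifiers; purely illustrative.
practices = [f"practice_{i:03d}" for i in range(100)]

# Experimental approach: a random sample aiming at statistical
# representativeness (cf. the random one-in-three sample in box 4).
random.seed(42)  # fixed seed so the draw is reproducible
representative_sample = random.sample(practices, k=len(practices) // 3)

# Developmental approach: purposive sampling chosen to reflect the
# diversity of the population, e.g. practice size and setting.
strata = {
    "small_urban": practices[0:25],
    "small_rural": practices[25:50],
    "large_urban": practices[50:75],
    "large_rural": practices[75:100],
}
purposive_sample = [random.choice(group) for group in strata.values()]

print(len(representative_sample), "randomly sampled practices")
print("purposive sample:", purposive_sample)
```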

DESCRIBING THE QI INTERVENTION

How do we elicit the information to describe the “intervention as planned” or the “intervention as performed”?

The implementers or researchers can be asked to fill in the framework to describe the features of the QI intervention as planned before starting the intervention. Interviews with the programme developers and associated parties can provide information. As a basis, it may be useful to fall back on existing documentation such as the study plan, the programme proposal, minutes of meetings, or existing records.

To describe the QI intervention as performed (after its implementation), use can be made of interviews with the implementers of the intervention and/or the participants, or of questionnaires and surveys. Participants can often provide useful information about the intervention as performed in terms of their personal participation during implementation. However, the reliability of retrospectively reported data decreases as the complexity and extensiveness of the intervention increase, and as the interval since participants were exposed to the intervention activities grows. Moreover, the framework involves a great many features and details that the respondents may not have been aware of during the intervention, which again makes it difficult to obtain valid data after the event. It is therefore often preferable to gather information during the process and to use these data to describe the intervention in its ultimate form (see below).

CHECKING ACTUAL EXPOSURE TO THE QI INTERVENTION

How can we measure actual exposure to the QI intervention?

The implementation of intervention activities can be studied periodically, continuously, or retrospectively by obtaining information from the respondents (implementers or participants) through observation, self-reports, and/or existing data sources. If possible, information should be gathered on all the features. However, it is no small task to verify whether the participants are performing the intended intervention activities. Because of resource constraints, for example, it may be necessary to select several central features of the intervention and pay the closest attention to them. During the implementation of QI interventions, the execution process is sometimes permitted to vary across sites or over time, and sometimes not. When checking exposure, the following rule of thumb can be used: the greater the variation allowed, the more attention must be paid to the feature concerned. The remaining features can then be measured by one simple method (see example in box 5).

Box 5 Example 3: Improving the prevention of cardiovascular disease7

The multifaceted intervention as planned consisted of four types of intervention:

  1. Providing all practice members with information about the guidelines and project. The information was provided by the outreach visitor during an introductory visit (standardised with the help of a checklist).

  2. Providing feedback on current practice. After an analysis of the practice organisation (standardised with the help of checklists), all practice members received a feedback report on current practice regarding all guidelines.

  3. Tailoring outreach visits by trained nurses. After receiving the feedback report, the practice members chose and discussed intended changes under the guidance of an outreach visitor. The outreach visitor helped the practice to implement the changes. Outreach visits were arranged according to needs and wishes.

  4. Tailoring the provision of educational materials and practical tools. Depending on their needs and wishes, practice members were provided with standardised educational materials and tools.

It was decided—mainly for practical reasons such as time and money constraints—that it would be most valuable to monitor the tailoring activities of the intervention—that is, the outreach visits and the materials received by the practice team (in which variation is allowed). In addition, the researchers repeatedly stressed the importance of using the checklists and determined the timing and content of the feedback.

To check actual exposure to the outreach visits and materials, a simple coded visit registration form was developed (and pilot tested) that had to be filled in by the outreach visitor after each visit to a practice. The following features of a visit had to be recorded:

  • the date and duration of the visit;

  • the participants in a meeting (name and function);

  • the type of activities during a meeting; and

  • the materials used or provided during a meeting.

Example 1 (box 3) describes how this information was ultimately used by the researchers to describe the intervention as performed.
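To make the registration form in box 5 concrete, here is a minimal sketch of how each completed form could be stored as a structured record; the field names and coding are illustrative assumptions, not the actual form used in the study.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class VisitRecord:
    """One completed visit registration form (all fields illustrative)."""
    practice_id: str         # anonymised practice identifier
    visit_date: date         # date of the outreach visit
    duration_minutes: int    # 0 means materials were delivered only
    participants: List[str]  # function codes, e.g. "GP", "practice assistant"
    activities: List[str]    # coded activity types, e.g. "training", "conferring"
    materials: List[str]     # materials used or provided during the visit

def exposure_summary(records: List[VisitRecord], practice_id: str) -> tuple:
    """Number of visits and total contact hours for one practice."""
    own = [r for r in records if r.practice_id == practice_id]
    return len(own), sum(r.duration_minutes for r in own) / 60
```

Aggregating such records per practice yields the kind of exposure measures summarised in box 3 (number of visits, total duration, who attended, and what was done).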

DESCRIBING THE EXPERIENCE OF THOSE EXPOSED TO THE QI INTERVENTION

How can data be gathered on the experience of persons exposed to the QI intervention?

Participants can be asked, during and/or after the intervention, to provide self-reported information on how they experienced the QI intervention, including whether they perceived factors related to success or failure. Thus, opinions can be explored on the type(s) of intervention chosen and on all features of each type of intervention. Participants can also describe the features they perceived as being most related to the outcome of the intervention (success or failure). In this way, information is obtained that is closely and directly linked to the intervention method as experienced (see example in box 6).

Box 6 Example 4: Process evaluation of a tailored multifaceted approach to changing general practice care patterns and improving preventive care23

Prevention facilitators (outreach visitors) tailored the following strategies to the needs and unique circumstances of 22 general practices (54 GPs):

  • audit and ongoing feedback;

  • consensus building;

  • opinion leaders and networking;

  • academic detailing and education materials;

  • reminder systems;

  • patient-mediated activities;

  • patient education materials.

Effect evaluation showed an absolute improvement over time of 11.5% in preventive care performance (13 preventive strategies, e.g. counselling for folic acid, advice to quit smoking, influenza vaccination, glucose testing, PSA testing).

The aim of process evaluation was to document the extent of conformity with the QI intervention during implementation and to gain insight into why the intervention successfully improved preventive care. Key measures in the evaluation process were the frequency of delivery of the intervention components (i.e. the different types of intervention), the time involved, the scope of delivery, the utility of the components, and GP satisfaction with the intervention.

Five data collection tools were used, as well as a combination of descriptive, quantitative and qualitative analyses. Triangulation was employed to investigate the quality of the implementation activities.

The facilitator documented her activities and progress on two structured forms known as the weekly activity sheet (hours spent on on-site and off-site activities) and the monthly narrative report (per practice: number of visits; activities and their outcomes; number of participants; plan for the following month). At 6 months and 17 months two GP members of the research team conducted semi-structured telephone interviews with the participating GPs to find out whether they were happy or unhappy with the interventions and to document their ideas about improvement and overall satisfaction (closed-ended questions).

Facilitators interviewed contact practitioners to obtain post-intervention feedback about their experience, and GPs were sent a postal questionnaire to report any changes that had taken place over the preceding 18 months. Facilitators generally visited the practices to deliver the audit and feedback, consensus building, and reminder system components. All the study practices received preventive performance audit and feedback, achieved consensus on a plan for improvement, and implemented a reminder system. A customised flow sheet was implemented by 90% of the practices while 10% used a computerised reminder system; 95% of the intervention practices wanted evidence for prevention, 82% participated in a workshop, and 100% received patient education material in a binder.

Content analysis of the data obtained during the GP interviews and bivariate analysis of GP self-reported changes compared with a non-intervention control group of GPs revealed that audit and feedback, consensus building, and development of reminder systems were the key intervention components.

Analysing the barriers and facilitators encountered while participating in the intervention and implementing the changes can also provide useful insights into how the QI intervention might be revised. The framework presented in table 1 does not provide help with this aspect of process evaluation. Ideally, a QI intervention that aims to change clinical practice is designed on the basis of a systematic scientific approach that (a) analyses barriers and facilitators and (b) links the intervention to these influencing factors (see also Pope et al22 and the paper by Van Bokhoven et al later in this series). A complete analysis of the experience of participants, with the aim of gaining insight into how the QI intervention might be revised, should therefore also check whether barriers and facilitators were indeed successfully handled.

Key messages

  • To understand why some QI interventions successfully bring about improvement while others fail to change practice, it is necessary to look into the “black box” of interventions and study the determinants of success or failure.

  • Process evaluation contributes significantly to the development of potentially successful QI interventions.

  • Process evaluation helps to describe the QI intervention itself, the actual exposure to the intervention, and the experience of the people exposed.

  • A framework is presented in which attention is paid to features of the target group, the implementers or change agents, the frequency of intervention activities, and the features of the information imparted. All of these features might influence the success of the QI intervention in question.

  • Process evaluation is an intensive task that requires great attention to detail.

CONCLUSIONS

Process evaluation performed in a pilot study or small scale improvement project, a controlled QI study, or a large scale QI programme can throw light on the mechanisms and processes responsible for the result in the target group. In this way, process evaluation makes a very relevant and desirable contribution to the development of potentially successful QI interventions. The framework presented here gives the key features necessary to describe a QI intervention in detail, to check whether the intervention was performed as planned, and to assess the experience of participants.

REFERENCES