
Longitudinal cohort study to determine effectiveness of a novel simulated case and feedback system to improve clinical pathway adherence in breast, lung and GI cancers
  1. Timothy Kubal1,
  2. Doug G Letson1,
  3. Alberto A Chiappori1,
  4. Gregory M Springett1,2,
  5. Riti Shimkhada3,
  6. Diana Tamondong Lachica3,
  7. John W Peabody3,4
  1. Moffitt Cancer Center, Tampa, Florida, USA
  2. University of South Florida, College of Medicine, Tampa, Florida, USA
  3. QURE Healthcare, San Francisco, California, USA
  4. University of California, San Francisco, California, USA
  Correspondence to Dr John W Peabody; jpeabody@qurehealthcare.com

Abstract

Objectives This study examined whether a measurement and feedback system led to improvements in adherence to clinical pathways.

Design The M-QURE (Moffitt—Quality, Understanding, Research and Evidence) Initiative was introduced in 2012 to improve adherence to clinical pathways at Moffitt Cancer Center (MCC) in three broad clinical areas: breast, lung and gastrointestinal (GI) cancers. M-QURE used simulated patient vignettes based on MCC's Clinical Pathways to benchmark clinician adherence and monitor change over three rounds of implementation.

Setting MCC, located in Tampa, Florida, a National Cancer Institute Comprehensive Cancer Center.

Participants Three non-overlapping cohorts at MCC (one each in breast, lung and GI) totalling 48 providers participated in this study, with each member of the multidisciplinary team (composed of medical oncologists, radiation oncologists, surgeons and advanced practice providers) invited to participate.

Interventions Each participant was asked to complete a set of simulated patient vignettes over three rounds within their own cancer specialty. Participants who did not complete all assigned vignettes in each of the three rounds were excluded from the study.

Primary outcome measure Improvement in domain-level and overall provider adherence to clinical pathways, as scored by blinded physician abstractors.

Results We found significant improvements in pathway adherence between the third and first rounds of data collection, particularly for workup and treatment of cancer cases. By clinical grouping, breast improved by 13.6% (p<0.001) and lung improved by 12.1% (p<0.001) over baseline, whereas GI showed a decrease of 1.4% (p=0.68).

Conclusions Clinical pathway adherence improved in a short timeframe for breast and lung cancers using group-level measurement and individual feedback. This suggests that a measurement and feedback programme may be a useful tool to improve clinical pathway adherence.

  • ONCOLOGY
  • MEDICAL EDUCATION & TRAINING

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • This multiservice study of three separate oncology service lines at Moffitt Cancer Center (MCC) provides insight into the adherence to clinical pathways developed within the same institution.

  • A novel method of scoring and feedback using simulated patients was used to ascertain initial adherence and measure improvement (or lack thereof) over three rounds of study.

  • Using the same simulated cases within each service line removed case-mix variability, allowing the researchers to focus on provider variability and adherence.

  • The limitations of this study include the lack of long-term follow-up to determine whether changes in adherence are maintained over longer time periods, and the possibility that the results may not extend to all practices, as MCC is a National Cancer Institute-designated Comprehensive Cancer Center.

Introduction

Variation in the care received by patients with cancer is a well-known and vexing problem in healthcare.1 Oncology organisations and health systems have responded to this unwanted variation by publishing guidelines to help practising oncologists choose diagnostic and treatment regimens that are in line with evidence-based standards of care. For example, in 2012, the American Society of Clinical Oncology published its inaugural Top Five recommendations for ‘choosing wisely’ in oncology.2 While detailed practice guidelines have been available for over a decade, such guidelines are typically not referenced, let alone applied consistently in the care delivery setting.3–6 There are a variety of barriers to the implementation and use of guidelines including the lack of (1) time to review the guidelines, (2) a system that reports adherence over time and (3) guideline-based recommendations that are sufficiently specific to guide patient care, among others.7, 8

As practice guidelines have failed to compel clinical practice change, clinical pathways are being introduced. Clinical pathways offer a more directive solution for providers. Compared with guidelines, pathways (a) specify the sequencing and timing of interventions for a particular diagnosis; (b) describe an optimal care process rather than all care processes; (c) tailor care to the practice setting, by winnowing down the possible evidence-based options to one or two preferred choices instead of listing all of the acceptable practices; and (d) are easier to follow, making them potentially usable at the point of care. Nevertheless, challenges persist in provider adherence to clinical pathways.9

Whether organisations are using guidelines or pathways, we hypothesise that the missing link is active provider participation. We propose that active participation requires: (1) relevant measurement of adherence, (2) peer benchmarking and (3) educational feedback for practitioners to better understand, apply and integrate the clinical pathways into practice. Engaging providers can also link the increased use of clinical pathways by individuals to the larger scale goals of departments and health systems, including the provision of high-quality, high-value care that is less varied and less costly.

We describe herein the Clinical Pathways Programme at Moffitt Cancer Center (MCC), which, in 2009, was one of the first cancer centres to introduce a pathway approach. MCC Clinical Pathways are a proprietary set of pathways meant to provide Moffitt physicians with decision-making tools reflective of evidence-based best practices, derived from the peer-reviewed literature, clinical guidelines (such as those of the National Comprehensive Cancer Network) and Moffitt faculty expertise. The pathways are updated several times a year, whenever important new evidence emerges. MCC introduced pathways in multiple service lines in 2010 as part of a broad-based improvement initiative throughout its cancer-only facility. Despite broad-based involvement and strong leadership commitment, pathway adherence was limited.10, 11

After three years, it was clear that the challenge was getting providers to refer to, stay familiar with and use clinical pathways. In 2012, therefore, the M-QURE (Moffitt—Quality, Understanding, Research and Evidence; for Moffitt and QURE Healthcare, LLC) initiative was introduced as a joint project between MCC and QURE to help advance the use of MCC's Clinical Pathways in breast, lung and gastrointestinal (GI) cancers. M-QURE used a provider engagement system built on the idea that (1) personalised measurement and confidential feedback of provider performance drive clinical pathway adherence and clinical practice change, and (2) giving providers the opportunity to be involved in a system that supports education, practice change, care coordination and value-driven care buttresses health system efforts to standardise practice and reach system-level goals.

The QURE system uses Clinical Performance and Value (CPV) vignettes. CPV vignettes are simulations of common, realistic patient cases. For M-QURE, the simulated cases were built around the specific MCC Clinical Pathways for common cancer cases. CPVs measure multiple domains of quality including data gathering, clinical decision-making, diagnostic accuracy and appropriate utilisation of tests and procedures. The individualised feedback, provided to each physician completing the vignettes, makes recommendations based on specific evidence-based guidelines. The serial nature of the QURE measurement system makes it possible for providers to demonstrate growing knowledge of, and adherence to, the pathways. As a tool for engagement and learning, the CPV vignettes are used on a continuous basis as a method for accountability. From other studies, we know that six rounds of CPVs produce long-term, sustainable changes in group practice. It seems that six rounds are needed to create a culture of accountability and learning, which the evidence suggests is mediated through personal transformation and greater clinical awareness.12 Previous research also shows that improvement can occur more quickly, typically after only three rounds.13

This study reports on the M-QURE experience with changing practice over three rounds. We document the extent to which active participation and feedback, using CPVs, affect adherence to clinical pathways for different types of cancer among different types of multidisciplinary clinical oncology providers (medical oncologists, radiation oncologists and advanced practice providers (APPs), such as physician assistants and nurse practitioners) within the same institution.

Materials and methods

Setting

MCC, located in Tampa, Florida, is a National Cancer Institute Comprehensive Cancer Center. MCC created the Moffitt Clinical Pathways, translating evidence-based guidelines for personalised cancer treatment into disease-specific pathways. To overcome the challenges of adherence to clinical pathways and link network providers with a common quality metric, MCC joined with QURE Healthcare, LLC, to develop and administer oncology CPV vignettes in breast, lung and GI cancers. The study in each disease area was conducted independently of the others and over a different time period: March to December 2013 for breast, September 2013 to May 2014 for lung and January to November 2014 for GI. Data collection occurred at quarterly intervals among the cohorts of participating providers.

Participants

Three non-overlapping cohorts at MCC (one each in breast, lung and GI) participated in this study. Each member of the multidisciplinary team (composed of medical oncologists, radiation oncologists, surgeons and APPs) was asked to complete a set of vignettes over three rounds within their own cancer specialty. Participants who did not complete all assigned vignettes in each of the three rounds were excluded from the study.

Ethics

The data gathered were obtained as part of standard hospital monitoring of clinical quality and safety. The data were not collected for research purposes and contained no patient information. As per the Office of Research Integrity of the US Department of Health and Human Services, under the US Code of Federal Regulations (45 CFR 46), the study is exempt from Institutional Review Board review.14

Clinical encounter

In a typical clinical encounter, there are four domains of interest from the time a patient enters the office (or hospital) to the time they leave. These domains are as follows: history (chief symptom, comorbidities, economic status, etc), physical (examination of the patient's head, chest, extremities, etc), workup (diagnostic imaging, procedures and laboratory work) and diagnosis with treatment plan (a determination of the patient's condition, its severity and the steps needed to treat it).

Measurement and feedback system

Using the MCC clinical pathways and feedback from the provider groups, 12 CPV vignettes in each of the three disease areas—36 cases in total—were written to address pathway-specified diagnostic, therapeutic and cost challenges in cancer care of a typical clinical encounter. Scores within each domain (history, physical, workup and diagnosis with treatment plan (DxTx)) and the aggregated overall score (total) range from 0% to 100%, where 100% denotes perfect adherence to the clinical pathways. Each domain has 6–18 points, depending on the type and complexity of the domain/case. Two vignettes were completed each round per provider, and rounds were completed every 4 months. Vignettes were randomly assigned at the beginning of every round, and no provider saw the same case twice. (See the online supplementary appendix for a walkthrough of a CPV vignette.)
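
To make the scoring arithmetic concrete, the sketch below computes domain and total scores from yes/no item marks. This is a minimal illustration only: the item counts, the equal weighting of items and the function names are our assumptions, not QURE's proprietary scoring code.

```python
# Minimal sketch of CPV-style domain scoring (illustrative, not QURE's code).
# Each domain holds yes/no items; a domain score is the per cent of items
# completed, and the total score aggregates every item across domains.

def score_vignette(responses):
    """responses: dict mapping domain name -> list of booleans (item done?)."""
    domain_scores = {
        domain: 100.0 * sum(items) / len(items)
        for domain, items in responses.items()
    }
    all_items = [done for items in responses.values() for done in items]
    total = 100.0 * sum(all_items) / len(all_items)
    return domain_scores, total

# Hypothetical provider responses; item counts per domain are illustrative
example = {
    "history":  [True, True, False, True, True, True],
    "physical": [True, False, True, True, True, True, True],
    "workup":   [True, True, True, False, False, True],
    "dxtx":     [True, True, True, True, False, True, True, True],
}
domain_scores, total = score_vignette(example)
print(domain_scores, round(total, 1))
```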

The CPV vignette tool has been previously validated as a measure of actual practice and a provider's ability to evaluate, diagnose and treat specific diseases and conditions.15, 16 Each vignette takes ∼20–30 min to complete and asks the provider to respond online to open-ended questions as they proceed through a patient visit. Trained physician abstractors, blinded to the vignette-taker's identity, score each vignette with special attention paid to the prevalence of on-pathway and off-pathway care and domain measures of overall care in history-taking, physical examination, laboratory and imaging studies ordered, diagnostic accuracy and treatment plan (domain score). Each item is scored yes or no, depending on whether the provider performed the necessary action. All vignettes are scored by a single abstractor, with a 10% over-read of cases performed to maintain an inter-rater reliability of >95%. After every round, each provider receives confidential electronic feedback on each vignette, which includes an overall score, domain scores and adherence to pathways, as well as recommendations for improvement and links to relevant clinical guidelines and medical literature. Owing to the anonymous data collection and confidential feedback methods, MCC did not take any remedial action for low performers, relying instead solely on the individual feedback form. In contrast, around the time of feedback, items with poor group-level performance (eg, axillary evaluations in breast cancer) were highlighted, and clinicians could raise concerns regarding particular points of the pathways, in order to clarify the evidence base and amend the pathways as needed.
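
The 10% over-read can be thought of as an agreement check between the primary abstractor and a second reader. The sketch below computes a simple per-item per cent agreement under that assumption; the study does not specify the exact reliability statistic, so this is illustrative only.

```python
# Illustrative per-item agreement check for the 10% over-read sample.
# The exact inter-rater reliability statistic used in the study may differ.

def percent_agreement(primary, overread):
    """primary, overread: parallel lists of yes/no item scores."""
    matches = sum(a == b for a, b in zip(primary, overread))
    return 100.0 * matches / len(primary)

primary  = [True, False, True, True, False, True, True, True, True, True]
overread = [True, False, True, True, False, True, True, False, True, True]
print(percent_agreement(primary, overread))  # 90.0; the study maintained >95%
```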

Objectives

We examined clinical pathway adherence using CPV scores (overall and by domain) over the three rounds of data collection in each of the three disease areas at Moffitt: breast, lung and GI cancers. We compared adherence across the three disease areas and determined how their baseline and round-to-round adherence rates differed. We also subdivided the overall population into physicians and APPs to compare clinical pathway adherence between the two subgroups.

Analysis

Comparisons between the three cohorts, and all other group and subgroup comparisons, were made using a one-way analysis of variance. Differences in CPV results between the first and third rounds were assessed with a paired-sample t-test.
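
For readers who want to reproduce the style of analysis, the following sketch shows the two reported tests in Python with SciPy. The authors performed their analyses in Stata (see below), and all numbers here are fabricated placeholders, not study data.

```python
# Sketch of the reported tests using SciPy; values are placeholders only.
import numpy as np
from scipy import stats

# Paired-sample t-test: round 3 vs round 1 overall scores within one cohort
round1 = np.array([56.1, 54.8, 58.2, 55.0, 57.3])
round3 = np.array([69.5, 68.0, 70.1, 66.4, 71.2])
t, p = stats.ttest_rel(round3, round1)

# One-way ANOVA: baseline overall scores compared across the three cohorts
breast = np.array([55.2, 57.9, 54.1, 58.8, 55.5])
lung   = np.array([51.0, 53.7, 52.2, 54.0, 52.1])
gi     = np.array([64.8, 66.0, 63.9, 65.7, 66.1])
f, p_anova = stats.f_oneway(breast, lung, gi)
print(t, p, f, p_anova)
```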

We then combined all three cohorts into a single cohort to perform subgroup analyses using linear regression models. We first compared physician and APP performance to determine whether there was a significant difference between these two groups in their adherence to pathways. In a second subanalysis, we looked for differences in pathway adherence between providers with a higher clinical workload and those with a lower one.

All analyses were performed using Stata V.13.1.
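
As a rough Python equivalent of the physician-versus-APP regression, the sketch below assumes a long-format table with one row per provider-round; the column names and values are hypothetical, and the original models were fitted in Stata.

```python
# Hypothetical layout for the subgroup regression (illustrative data only).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "overall_score":    [56, 70, 60, 72, 58, 66],
    "is_physician":     [1, 1, 0, 0, 1, 0],
    "female":           [0, 0, 1, 1, 1, 1],
    "years_experience": [14, 14, 6, 6, 10, 7],
    "round":            [1, 3, 1, 3, 1, 3],   # baseline and round 3 only
})

model = smf.ols(
    "overall_score ~ is_physician + female + years_experience + C(round)",
    data=df,
).fit()
print(model.params)  # the study found no significant physician/APP gap
```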

Results

There were three different cohorts of providers who participated in this study, one in each disease area. Originally, there were 18 breast cancer providers, 17 lung cancer providers and 27 GI cancer providers who participated in the baseline round. However, owing to changes in staffing or failure to fully complete one of the three prescribed rounds, these numbers were reduced to 14 in breast (78%), 16 in lung (94%) and 18 in GI (67%). There was no statistically significant difference in age, gender or clinician type between those who completed all three rounds and those who did not. The characteristics of the providers included in the study are listed in table 1. These providers took vignettes starting at baseline and subsequently every 4 months for a total of three rounds.

Table 1

Baseline provider characteristics

We used a one-way analysis of variance to determine whether there were any significant differences between the three cohorts. Although GI clinicians were younger on average than clinicians in breast and lung, this difference was not significant (p=0.14). Clinician mix (physician or APP) did not differ significantly between the three cohorts (p=0.43), nor did per cent of patients with cancer seen per week (p=0.53), per cent of time teaching (p=0.67) or per cent of time researching (p=0.56). However, clinicians in GI had significantly fewer years of practice experience than their counterparts in breast and lung (p=0.01).

Over the three rounds, we found differing levels of response (improvement) in overall and domain CPV scores between the three disease areas (see table 2). Significant improvements were seen in the overall CPV scores and the individual domain CPV scores from the first round to the third round of data collection for breast and lung cancers. Breast cancer scores improved 13.6% overall (p<0.001), 12.4% in history (p=0.002), 13.7% in physical (p=0.002), 16.1% in workup (p=0.004) and 15.0% in DxTx (p=0.002). Similarly, lung cancer overall scores improved 12.1% (p<0.001), while the domain scores improved 7.4% in history (p=0.02), 9.2% in physical (p=0.01), 18.9% in workup (p=0.003) and 16.0% in DxTx (p<0.001). Although increases were seen in both breast and lung, the timing of the greatest change differed between the two service areas (see figure 1). In breast cancer, most of the increase in overall score occurred between rounds 2 and 3, whereas in lung cancer, most of the increase occurred between baseline and round 2.

Table 2

Scores by round and service area

Figure 1

CPV scores by round for breast, lung and GI cancers. CPV, Clinical Performance and Value; GI, gastrointestinal.

In contrast, the GI cohort, which had higher overall scores at baseline than the other two cohorts (65.3% vs 56.3% for breast and 52.6% for lung), was not able to improve overall scores through round 3. By the third round, overall scores had decreased to 63.9%, although this difference was not significant (p=0.68). In individual domain scores, GI cancer showed increases in history (9.1%, p<0.001) and physical (7.3%, p=0.03), and decreases in workup (10.1%, p=0.02) and DxTx (5.2%, p=0.14).

Combining the three cohorts and separating them into physicians (n=33) and APPs (n=15) showed some differences in the baseline characteristics between these groups. Physicians had significantly more years of experience treating patients with cancer: an average of 13.5 years versus 6.6 years for APPs (p=0.02). APPs were 93% women, while physicians were 64% men. There was no significant difference in percentage of patients seen with the disease-specific cancer (p=0.16), although physicians tended to see a higher percentage (85.5%, SD 19.4) compared with APPs (74.9%, SD 30.6). While there was no significant difference in percentage of time spent teaching (19.4% vs 19.3%), physicians did spend more time performing research than their APP counterparts (22.3% vs 2.7%), which was significant at the p=0.001 level. Using only baseline and round 3 data, a linear regression model comparing these two groups, accounting for gender, years of experience and round, showed no significant difference between physicians and APPs in overall score improvement or change in any domain score (details not shown).

Providers who saw ≤50% of patients with cancer within their specialty did not fare significantly worse than those who saw >90%. In a linear regression model comparing the two groups and controlling for age, gender, round number and whether the provider was a physician, there was no significant difference in overall score or in any domain score. Providers who saw a lower percentage of within-specialty patients with cancer scored 2.7% lower than their high-percentage counterparts in the overall score (p=0.26), 1.8% lower in the DxTx domain (p=0.62) and 7.6% lower in the workup domain (p=0.08). The workup and DxTx domains were examined because these areas were presumed to require the most specialised knowledge. While the differences failed to reach significance, there was a consistent trend, and this subpopulation analysis may have been underpowered to detect a true difference between these groups, as the rough calculation below illustrates.
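
To illustrate the underpowering point, a quick power calculation under assumed values (a 2.7-point mean gap, a score SD of 10 and roughly 20 providers per group; none of these are reported figures) gives power well below the conventional 0.8:

```python
# Illustrative power check for the workload subanalysis; all inputs are
# assumptions, not reported study values.
from statsmodels.stats.power import TTestIndPower

effect_size = 2.7 / 10.0   # assumed mean difference / assumed score SD
power = TTestIndPower().power(effect_size=effect_size, nobs1=20,
                              ratio=1.0, alpha=0.05)
print(round(power, 2))     # well below the conventional 0.8 threshold
```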

Discussion

Using group-level measurement and individual feedback, MCC successfully improved overall adherence to clinical pathways in a short timeframe—in just 9 months and after only three rounds—for breast and lung cancers. Improvements were greatest in diagnosis and treatment, which were the skills emphasised in the MCC pathways. There were differences across the three groups, with more challenges attaining pathway adherence in GI than in breast or lung. The reasons for this are not readily apparent but may reflect a greater diversity of cancer areas (colon, pancreas, rectal, etc) or local factors that we did not explore. Regardless, the diversity of overall and domain pathway scores among all three disease areas speaks to the ability of CPVs to measure multidisciplinary team care, where there are expected differences in skills among team members.

Clinical pathways are dynamic, living documents, updated as evidence accumulates and is reviewed by experts within the field. One aspiration of pathway implementation is for there to be a basic reference standard that can be accessed (as it was in this study) and used by all providers. These results suggest that high rates of adherence to clinical pathways can be achieved using methods similar to the one described in this study.

Studies have shown that accumulation of experience alone is not enough to increase adherence to clinical pathways. A 2005 Harvard Medical School study systematically reviewed the relationship between physician experience and the quality of care provided.17 Of the 62 studies included, only 2 showed that doctors delivered better quality care as their experience grew. More than half indicated that physician performance declined over time, while the rest showed that performance remained the same. It is not simply a matter of practice making perfect, or even better. Rather, increased quality of care comes from deliberate practice, that is, consciously working on one's skills, and from training with immediate feedback, which ‘either from a mentor or a computer program—can be an incredibly powerful way to improve performance’.18 Our experience with the CPV system is consistent with this.

Several limitations of this study need to be addressed. First, although improvements were seen in two of the three service areas, follow-up was not sufficient to determine whether the changes in adherence were long term. It may be that adherence to clinical pathways degrades over time without the measurement and feedback process provided by the CPVs. Second, whether the initial adherence and improvements seen here can be extended to other facilities should be considered. MCC is a National Cancer Institute Comprehensive Cancer Center, and its clinicians may be expected to be more familiar with evidence-based guidelines, especially the pathways developed within their own hospital; as such, the results found here may not be representative of other oncology practices. Finally, because only one radiation oncologist was included in this study and other specialties had similarly low representation, generalisations regarding improvement for these specialties would be difficult to make.

An area that bears further investigation is the relative performance of APPs compared with physicians. Although there was no significant difference in adherence to clinical pathways, the reasons for this were unclear. It may be that APPs reflect the practice patterns of their group, which might explain why their adherence levels were no different from those of physicians. However, there may be other areas, such as unnecessary workup or referrals, where differences might be significant and should be investigated. Another interesting area of study would be determining why certain physicians dropped out. Although we found no significant difference between completers and non-completers, there are variables we did not track which may indicate who is more receptive to adult learning and who is less so. In particular, the GI cohort showed a high dropout rate and minimal overall improvement compared with the other cohorts, and it would be interesting to discover why these clinicians chose to drop out and whether they are also the ones who most need the training and feedback system.

Referring back to figure 1, the differential gains between the three disease areas indicate that improvement occurs along different timelines. Whatever this might reflect (a difference in clinician engagement, a difference in leadership styles or something else entirely), a poststudy meeting between the clinical areas might help elucidate cultural and practice differences, pointing to a possible way forward for greater clinical adherence in all service lines. It should be noted that although GI did not see a significant increase in CPV scores, this may reflect a slower improvement timeline than in breast or lung, one not captured within the short length of this study; completing six rounds, as has been performed elsewhere,12 might have revealed such gains.

To the best of our knowledge, there have not been other initiatives that measure, track and explicitly try to improve pathway adherence over time. We believe that the rapid improvements in pathway knowledge seen in this study were due to the active participation of the provider clinicians in the completion of the cases, the comparative benchmarking and the personal feedback. The improvements may also be due to the unique ability of simulated patients to show the multidisciplinary team that pathway adherence reflects differences in practice, not necessarily differences in patients.

A number of recent studies show the magnitude of the clinical and cost implications of better pathway adherence.19, 20 For example, Highmark Blue Cross Blue Shield outpatient costs were 35% lower for patients with non-small-cell lung cancer treated on a clinical pathway compared with those who received non-pathway treatment.21 CareFirst reduced its costs by 15% using a clinical pathways programme for breast, lung and colon cancers, due primarily to a 7% decline in emergency room visits, shorter hospital stays, increased use of generic medications and more appropriate use of chemotherapy.22 The authors contend, and we agree, that payer–physician collaboration and engagement played a significant role in this programme's success. Clinical pathways may also help lower the incidence of unnecessary comorbidities, improve the evaluation of chemotherapy symptoms and reduce avoidable downstream costs.22

While clinical pathways in oncology offer a method to reduce unnecessary and costly treatment variation, we believe that pathways’ success relies on active provider participation. Barriers to wider pathway use include perceptions that pathways create ‘cookbook-style medicine’, physician time constraints and discomfort with changing practice patterns. Without an engagement and feedback system, however, providers are not as compelled to adopt a pathway programme, suggesting a need for a collaborative effort, such as M-QURE, that engages providers, benchmarks their adherence and provides individual feedback.

References

Footnotes

  • Contributors TK contributed to the design of the study, oversaw the acquisition of data, helped draft and edit the paper and gave final approval of the version to be published. DGL contributed to the conception of the study, oversaw the acquisition of data, revised the paper and gave final approval of the version to be published. AAC contributed to the design of the study, oversaw the acquisition of data, drafted and edited the paper, and gave final approval of the current version of the paper. GMS contributed to the design of the study, oversaw the acquisition of the data, revised the paper and gave final approval of the current version of the paper. RS performed the analysis and interpretation of the data, drafted and revised the paper, and gave final approval of the version to be published. DTL provided analysis and interpretation of the data, revised the paper and gave final approval of the paper. JWP was responsible for the original conception and design of the paper, data interpretation, and drafting and revising the paper, and he gave final approval of the version that is being submitted.

  • Funding The funding for this work was provided by MCC.

  • Competing interests AAC has a consulting/advisory role with Genentech and Novartis and is on the speaker's bureau for Genentech, Novartis, Pfizer, Celgene and Boehringer Ingelheim. JWP is the owner of CPV Technologies, which owns the CPV intellectual property described in this study. For the remaining authors, none were declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.