Evaluating the effect of a national collaborative: a cautionary tale
  1. Anne Sales1,2,
  2. Sanjay Saint1,3
  1. 1Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, Michigan, USA
  2. 2Division of Nursing Business and Health Systems, School of Nursing, University of Michigan, Ann Arbor, Michigan, USA
  3. 3Division of General Internal Medicine, Department of Medicine, School of Medicine, University of Michigan, Ann Arbor, Michigan, USA
  1. Correspondence to Dr Anne Sales, Center for Clinical Management Research, VA Ann Arbor Healthcare System, 2215 Fuller Road, Ann Arbor, MI 48105, USA; salesann{at}umich.edu

“There's something happening here / What it is ain't exactly clear…”—Buffalo Springfield

Improving the efficiency and quality of care that hospitalised patients receive is clearly important. The study by Glasgow and colleagues in this issue of BMJ Quality & Safety provides interesting insights into the summative outcomes of a large, national quality collaborative focused on reducing length of stay and discharging hospitalised patients before noon. Additionally, the authors included mortality and 30-day readmissions as secondary outcomes as part of their robust evaluation of a large mandatory collaborative (termed ‘FIX’) that occurred within the 130 hospitals that are part of the Veterans Health Administration (VHA). The findings of this ambitious study extend the literature evaluating quality-improvement projects. We applaud the authors for reporting the short-term outcomes of this large-scale initiative, and for going further to assess how any gains achieved in the initiative endured. Their innovative approach to measuring sustainability is an important step forward, and will hopefully encourage others with access to large systems with longitudinal data to continue metric development.

Their key finding was that less than half the hospitals showed improvement in the primary outcomes—length of stay and discharge before noon—beyond what would have been expected from trends unrelated to the initiative. However, even among hospitals that showed initial improvement, sustainability was difficult to achieve. Specifically, of the 130 hospitals participating in FIX, only 27 had both initial and sustained improvement in length of stay, while only 19 hospitals had both initial and sustained improvement in discharges before noon. Only five hospitals were able to sustain initial improvements in both length of stay and discharges before noon, representing <4% of all participating hospitals. There was no significant change in any of the secondary outcomes evaluated. Overall, this was limited juice for what appeared to be a considerable amount of squeeze.

The paper points out some key challenges in undertaking this kind of evaluation. As the authors acknowledge, a completely retrospective design, while feasible in a system with longitudinal administrative data like the VHA, creates significant difficulties in understanding critical issues such as: (1) fidelity to the collaborative and the interventions adopted; (2) the barriers and facilitators that may have affected both short-term and long-term success in achieving goals; and (3) the impact of other simultaneous initiatives.

First, fidelity to the intervention—in this case, the methods and approaches agreed to through the collaborative process—is a critical element in understanding what was implemented and whether one can ascribe the outcomes of interest to the initiative.1,2 Without direct assessment of fidelity and implementation, attributing outcomes to the intervention rests on a very strong assumption, and one made without good reason. Implementation fails at least as often as it succeeds, so assuming successful and faithful implementation, especially across a very large and diverse group of hospitals, puts any causal inference on shaky ground.

Second, understanding barriers and facilitators, which are often the determinants of success or failure of implementation, gives us insight into what may or may not actually work. Any collaborative is a complex intervention,3 usually consisting of a range of options from which each facility or team can choose, and often using multiple modalities for the intervention. From the description provided by Glasgow et al, participating hospitals varied in the specifics of the intervention both within and across regions. Emerging reporting standards for quality-improvement projects, including large-scale projects, recommend that concurrent process evaluations accompany quality-improvement initiatives.4–6

In our own work, trying to understand why some hospitals are better than others at preventing hospital-acquired infection, process evaluation, often using mixed methods, has proven crucial in uncovering barriers—such as the key role of ‘organizational constipators’ and ‘active resisters’7—to routinely using evidence-based infection-prevention practices. Likewise, identifying the key facilitators—such as the behaviours of effective leaders and champions8,9—to routinely using evidence to prevent infection has also depended largely on process evaluation and qualitative methods.10 Ideally, future assessments of quality-improvement initiatives will include information that can only be provided through interviews, focus groups and direct observation done concurrently with the initiative.

Third, VHA has had a number of transformational initiatives over the last 15 years, many of which have been rolled out across the entire system over an extended period. Distinguishing the effect of one specific initiative is very difficult, especially without detailed description of the process, fidelity to the intervention and other competing initiatives.11–14 Again, while Glasgow and colleagues acknowledge the challenges they faced, the reality is that multiple initiatives may all have had some impact on distal outcomes like length of stay, making attribution challenging.

The findings on sustainability, which show that only a limited subset of hospitals sustained changes in length of stay, with even fewer sustaining changes in both outcomes, may have been influenced by the large number of competing initiatives in a large, transforming system like VHA. It is possible that sustainability is in part a function of the capacity of staff and leadership, both clinical and administrative, to absorb, implement and act on all the initiatives or mandates that flow from different parts of a large centralised system. Holding on to gains once achieved is not a trivial task, especially if those gains were initially difficult to achieve. Dedicated resources are usually required both to achieve change and for the change to endure. However, without a well-designed prospective process evaluation that continues to collect data and monitor outcomes over time, these are just speculations.

Glasgow and colleagues should be commended for conducting a rigorous evaluation of a complex process ‘that ain’t exactly clear’. Indeed, assessing quality-improvement initiatives—especially those that are national in scope—is not easy and often leads to controversy. For example, the Institute for Healthcare Improvement's ambitious 100 000 Lives Campaign was credited with saving 122 300 lives. However, a critical appraisal of this initiative raised several salient issues questioning the ability to attribute such impact to the campaign.15 Many of the issues raised in that appraisal are pertinent to this paper, and would likely apply to similar evaluations of most large-scale quality initiatives.

Finally, large-scale initiatives like FIX need some estimate of the costs of the initiative, and whether those costs were justified by what was achieved. While strict cost-benefit analysis requires strong causal inference, assessing costs of the multiple initiatives undertaken by large, complex healthcare systems would provide at least a rough sense of the value of these initiatives, as we see trends emerging over time in important outcomes, such as decreased length of stay, improved discharge practices and declines in unexpected mortality.

We encourage researchers and decision makers to address sustainability issues in the context of multiple, ongoing improvement initiatives, especially when these are layered on top of each other and often not fully funded through dedicated resources. Our instinct is that this kind of continuous improvement, while almost certainly necessary and important, may require high standards of evidence before triggering ‘yet another’ improvement initiative. The burden on staff and the costs to the system, which may include losing ground in areas not currently being measured, require careful scrutiny to justify launching new initiatives. Among decision makers responsible for health-system operations, there is ongoing discussion about the costs and impacts of these initiatives over time. We believe this should also be debated among policy makers, and should be an important component of research into the sustainability of innovation diffusion.

We applaud Glasgow and his colleagues for their thoughtful evaluation of the impacts and sustainability of a complex but pragmatic intervention in a large health system. We believe that there is indeed ‘something happening here’, and that over time, we will have a better understanding of ‘what it is’.

Footnotes

  • Linked article 000243.

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.
