Editor's Choice

How can we make audit sexy?

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c2324 (Published 29 April 2010) Cite this as: BMJ 2010;340:c2324
Fiona Godlee, editor, BMJ
fgodlee{at}bmj.com

    Given how important it is to be able to measure quality of care it’s surprising, to me at least, how badly we currently do it. One measure widely used in many countries, including the UK, is the hospital standardised mortality ratio (HSMR). Various methods have evolved for calculating it, each one hotly defended by its proponents. But voices in this week’s BMJ say the HSMR has had its day and should be scrapped.

    Discrepancies between how Dr Foster and the Care Quality Commission rated hospitals in Mid Staffordshire, as well as the improbably large 7% reduction in HSMR reported across the UK last year, set Nigel Hawkes on the HSMR’s trail (doi:10.1136/bmj.c2153). He found that one explanation was an increase in the number of diagnoses coded against each patient. If more co-morbidities are recorded, the hospital appears to be doing a better job of keeping more seriously ill patients alive.

    Hawkes finds no actual evidence of gaming the system, and he quotes several people who think that HSMRs are good if used in the right way. But Richard Lilford and Peter Pronovost are merciless in itemising the HSMR’s shortcomings (doi:10.1136/bmj.c2016). The problem described by Hawkes—“coding depth”—is one. Another is that quality of care accounts for only a small proportion of the large variation in HSMRs between hospitals, partly because most deaths in hospital are unavoidable.

    Could they be used like the canary in the mine—as a signal of the need to investigate? No, say Lilford and Pronovost. Investigation is itself a sanction and if initiated on the basis of unreliable measures can lead to injustice and distraction from the real problems that need addressing.

    So how should we be measuring quality of care? In an accompanying editorial, Nick Black comes down in favour of good old-fashioned audit (doi:10.1136/bmj.c2066). As chair of the UK’s national clinical audit advisory group he has the difficult job of making audit sexy. He says many good sources of data have been established for national clinical audits. These could provide meaningful comparisons for many services, such as critical care, trauma, and renal replacement therapy. Many of them include outcomes other than death.

    Lilford and Pronovost agree. One great advantage of audit is that, unlike HSMRs, it reveals where the problem might lie and suggests what action should follow. They favour a bottom-up approach. Rather than collecting large amounts of poorly calibrated information centrally, clinical teams should have the tools and capacity to monitor and respond to their own error rates.

    Audit got a bad name in the 1990s. Dull, tedious, delegated to unskilled juniors, easily shelved, and rarely acted on. It still suffers in many eyes from not being seen as proper research. It’s going to need a major rebranding exercise, as well as training, support, and outlets for dissemination, if it’s going to capture the imagination of clinicians around the world. But if we’re going to drive up quality of care, that’s what we have to do.
