
Feature Performance Data

Where are we with transparency over performance of doctors and institutions?

BMJ 2012; 345 doi: https://doi.org/10.1136/bmj.e4464 (Published 03 July 2012) Cite this as: BMJ 2012;345:e4464
Aniket Tavare, clinical fellow
BMJ, London WC1H 9JR, UK
ATavare@bmj.com

A decade after the Bristol inquiry called for the public to have more information about quality of care, data are still hard to come by, finds Aniket Tavare

The Bristol paediatric cardiac surgery scandal shook the foundations of British medicine, and the repercussions are still being felt today. In the early 1990s, the death rate for children aged under 1 year having heart surgery at the Bristol Royal Infirmary was exposed as being twice that of other centres, prompting a public inquiry. One reason why Bristol happened was, “No standards were laid down against which performance in the NHS and quality of care could be measured,”1 according to Alan Milburn, the health secretary at the time. Ian Kennedy’s damning inquiry report produced a broad set of recommendations for the NHS, including that “patients and the public must be able to obtain information as to the relative performance of the trust and the services and consultant units within the trust.”2

Fast forward a decade, and the health secretary, Andrew Lansley, stated: “[Patients] need to know who is providing quality, safe, effective, accessible services.”3 Laudable sentiments, but isn’t everyone providing good care? Unfortunately not, according to the NHS’ Atlas of Variation, which describes the substantial “unwarranted variation” in performance that still exists across England, even after patient and social factors have been controlled for.4

Although the UK has started to publish data from its public services only relatively recently, Tim Kelsey, director of transparency and open data at the Cabinet Office, says “we publish far more data than anyone else in the world in healthcare.” So how far are we towards Kennedy’s vision for a transparent health service?

After Bristol, the Society of Cardiothoracic Surgeons (SCTS) emerged as a paragon of healthcare transparency, publishing survival rates for all adult cardiac surgery since 2006. Anyone can go online5 and scrutinise the “raw” mortality data for individual surgeons, which are displayed alongside an “expected” mortality range given the patient characteristics. Since the data have been published, mortality has fallen by an impressive 50%.6

But so far other specialties have been reluctant to follow suit. Nephrology is the notable exception. It publishes outcomes including adequacy of dialysis, haemoglobin levels, and blood pressure control through the UK Renal Registry to allow “clinical staff, commissioners and patients to . . . see how their renal centre is performing [against] specified national targets” and “centres to compare themselves to others.” The data show progressive improvements.7 Damian Fogarty, the registry chairman, says there was “surprisingly little backlash” over publishing the data openly, adding that there is a “recognised competitiveness between units to improve when they know their ranking.”

Cardiac surgery—the heart of the matter

In fact, the cardiac surgeons had been collecting data at the unit level and publishing them anonymously before Bristol. However, the system had failed to identify those few surgeons whose performance was below acceptable standards. Bruce Keogh, NHS medical director, writing in 1998 as SCTS database chair, outlined three missing components: individual surgeon data, meaning “a unit’s figures [could] easily camouflage an errant performer”; risk stratification tools, so “poor individual performance could be dismissed as a case-mix problem”; and reliable data collection in all units.8

In 2001, Dr Foster, a company cofounded by Kelsey, published crudely adjusted comparative mortalities for all trusts doing coronary bypass surgery using data from the administrative database Hospital Episode Statistics (HES). SCTS began work to develop models to allow meaningful risk stratification through adjustment for case mix. The society’s hand was forced somewhat by a freedom of information request by the Guardian newspaper in 2005, asking trusts for raw death rates for each surgeon. Continued publication of these raw rates alongside sophisticated risk adjusted data then became inevitable.
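The logic of risk stratification is easy to state, even though building credible models is not: each patient gets a predicted probability of death given their case mix, and a surgeon’s observed deaths are compared with the sum of those predictions rather than with a raw average. A minimal sketch in Python (the risk figures are invented for illustration and are not the SCTS’s actual model):

    # Illustrative case-mix adjustment. A real risk model (for example a
    # logistic EuroSCORE-style model) would supply each patient's
    # predicted probability of death; the numbers below are invented.

    def expected_deaths(predicted_risks):
        # Expected deaths are the sum of per-patient predicted risks.
        return sum(predicted_risks)

    def observed_to_expected(observed, predicted_risks):
        # O/E ratio: above 1 is worse than case mix predicts, below 1 better.
        return observed / expected_deaths(predicted_risks)

    # A surgeon taking on riskier patients can have a higher raw death
    # rate yet a better risk-adjusted result than a colleague with
    # routine cases: exactly the "camouflage" problem Keogh described.
    routine_cases = [0.01] * 100   # 100 cases at roughly 1% predicted risk
    complex_cases = [0.08] * 100   # 100 cases at roughly 8% predicted risk

    print(observed_to_expected(2, routine_cases))  # 2 observed vs 1 expected: 2.0
    print(observed_to_expected(5, complex_cases))  # 5 observed vs 8 expected: 0.625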

Some surgeons were initially uncomfortable with the plans to publish their death rates, concerned that ill informed judgments would be made about their performance based on inaccurate data.9 Ben Bridgewater, current chair of the SCTS database committee, describes the move towards transparency as “a bruising process” with many “difficult conversations” within the specialty.

Nonetheless, the leadership of the specialty was resolute: Keogh’s SCTS presidential address in 2008 acknowledged that there were methodological problems but added that “the [technical] shortcomings are not important in the grand scheme of public disclosure.”10 The majority of the society’s members now support the initiative.

How are the data used? As a tool to be “protective and supportive of surgeons rather than to hang people with,” according to Bridgewater. Surgeons are flagged up if their performance is “significantly worse than average” and then the investigation process (graded according to the size of the deviation) is worked through sensitively. The SCTS recognises that higher than expected death rates may be due to “chance alone, a quirk of casemix, or . . . sub-optimum performance”11 and states that abnormal outcomes should trigger a full investigation rather than punitive action against the surgeon or unit. It remains unclear how publishing outcomes has improved survival despite increasingly risky patients being operated on: how much is due to poor performers being brought up to scratch, and how much to the rest of the pack sharpening their skills.
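The article does not spell out the statistical test behind “significantly worse than average,” but the “expected range” displays described above are consistent with a standard approach: binomial control limits around the expected rate, scaled to each surgeon’s volume of cases. A rough sketch, with the three sigma threshold as a conventional illustrative choice rather than the SCTS’s documented method:

    import math

    def control_limits(expected_rate, n_cases, sigmas=3.0):
        # With n cases at per-case risk p, the standard error of the
        # observed rate is sqrt(p * (1 - p) / n). Three-sigma limits are
        # a common flagging threshold; the SCTS's exact rule may differ.
        p = expected_rate
        se = math.sqrt(p * (1 - p) / n_cases)
        return max(0.0, p - sigmas * se), min(1.0, p + sigmas * se)

    def assess(observed_deaths, n_cases, expected_rate):
        rate = observed_deaths / n_cases
        lower, upper = control_limits(expected_rate, n_cases)
        if rate > upper:
            return "outside expected range: investigate"
        return "within expected range"

    # Against a 2% expected rate over 200 cases (about 4 expected deaths),
    # 8 deaths (4%) sits inside the upper limit of roughly 5%; 12 does not.
    print(assess(8, 200, 0.02))   # within expected range
    print(assess(12, 200, 0.02))  # outside expected range: investigate

Note how wide the limits are at low volumes: a short run of deaths in a small unit need not signal poor performance, which is one reason the investigation process is graded by the size of the deviation.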

Difficulties in measuring doctors

When he was health secretary, Milburn hailed the cardiac database as “the first step to publishing more information on individual consultant outcomes over time.”1 So why haven’t other specialties followed the cardiothoracic surgeons’ lead?

“The issue for transparency is not, do doctors support it, because I think they largely do. It’s not even that it should be in public, or that they might be shown to underperform their peers. The real question is how do we credibly and fairly produce those metrics,” says Kelsey. “We’ve started at the relatively ‘easy’ end with death, but it gets a lot more complicated when considering dermatology or dementia,” he adds. Mike Parker, a council member of the Royal College of Surgeons, says that mortality is a “blunt tool” and “meaningless” for many specialties, adding that obtaining “credible, impartial, and meaningful data is extremely complicated.” Adjusting effectively for case mix remains difficult.

Pointing the finger at an individual is also uncomfortable given the increasingly multidisciplinary team based approach to medical practice. Bridgewater counters that despite cardiac surgery being a team endeavour, individual accountability meant “surgeons were pushing for any weaknesses to be improved because their name’s above the bed.”

Many fear that publicising outcomes, particularly mortality, will lead to “cherry picking,” where surgeons decline to operate on high risk patients. Although there is anecdotal evidence of this occurring both in the UK and in New York State (which had a cardiac database before the SCTS), a large published analysis failed to demonstrate such risk aversion.11

Cost is also prohibitive, according to Parker, who adds that the Royal College of Surgeons’ desire for outcome reporting was limited by “lack of resource both in terms of funding and clinician time.” The SCTS’ initiative is centrally funded, represents less than 1% of the costs of adult cardiac surgery, and saves an estimated £5m (€6.2m; $7.8m) a year for coronary artery surgery alone through reducing length of stay.11 It has required a robust IT infrastructure at both local and national levels.

Applauding the SCTS, Donald Irvine, who was president of the General Medical Council, the UK doctors’ regulator, during the height of the Bristol scandal, describes the mindset towards transparency within the rest of the profession as “can’t do, won’t do,” and the reluctance as “lack of self confidence.” Kelsey is enthusiastic about the shift within medicine from doctors saying “we’re un-measurable” to “how are we going to measure ourselves?”

Secondary care

After Bristol and Kennedy’s report, measuring hospital performance became pressing. Dr Foster’s data grew increasingly prominent, thanks to a favourable political and media climate, eventually leading to the Department of Health acquiring a large stake in the company. Describing the motives behind Dr Foster, Roger Taylor, who founded the company with Kelsey, says: “People inside the system were aware that there were significant differences between the quality of care provided by healthcare organisations but the public were kept in blissful ignorance.”

The company’s annual Good Hospital Guide publishes a range of comparative indicators derived from Hospital Episode Statistics, often to the chagrin of the clinical community. Produced for public consumption and often blunt, the latest edition exhorts, “Do not have an abdominal aneurysm repaired in one of the 39 hospitals that perform the operation infrequently. Patients are much more likely to die.”12 Taylor describes how such information is published in academic papers but obfuscated by jargon, adding “the failure to state that clearly to patients is an extraordinary thing.”

The guide also names trusts with standardised mortality rates above those expected. Using mortality to judge hospitals remains controversial.13 Many have methodological concerns about Dr Foster’s approach, which Taylor admits are “legitimate,” though he counters that many critics “are coming from a point of view where nothing will satisfy them.”

“Those who are not improving will always be reluctant to be measured,” says Sue Slipman, chief executive of the Foundation Trust Network. Nevertheless, trusts (and regulators) do pay attention to standardised mortality rates, which have progressively fallen. Some think this is largely the result of limitations of the data and gaming by trusts13—such as increasing diagnostic coding to imply riskier patients or inflating the number coded as receiving palliative care, both of which can lower standardised mortalities.14
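The arithmetic behind the gaming concern is simple: a standardised mortality ratio divides observed deaths by the number expected from the coded case mix, so anything that inflates the “expected” figure lowers the ratio without preventing a single death. An illustrative calculation with invented numbers:

    def standardised_mortality_ratio(observed_deaths, expected_deaths):
        # Conventionally scaled so that 100 means "as expected".
        return 100 * observed_deaths / expected_deaths

    # The same 120 actual deaths; only the coding-derived denominator moves.
    print(standardised_mortality_ratio(120, 100))  # 120: worse than expected
    # After more aggressive comorbidity or palliative care coding raises
    # the expected figure:
    print(standardised_mortality_ratio(120, 130))  # ~92: apparently better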

The literature suggests improvements from public reporting largely occur through institutions acting to protect (or improve) their reputations, rather than because patient choice motivates lagging providers to shape up.15

Measuring services

Variation in patients’ comorbidities makes it difficult to compare outcomes at hospital, departmental, or clinician level. “What is needed are normative data to adequately reflect the complexity of what people are dealing with,” says Slipman. She adds that meaningful methods have yet to emerge, and a stumbling block has been the “lack of willingness of the clinical community to be benchmarked in the past.”

Undoubtedly scepticism exists over the data used. The Hospital Episode Statistics data, although ubiquitous and powerful, have well known limitations16 and often lack clinical credibility. Nonetheless, even when they outline “substantial variation,”17 the default response appears to be defensiveness18 rather than exploration of the potential reasons. The Department of Health’s Information Strategy commits to publishing all outcomes data at clinical team level from all national audits.19 Clinically derived data will hopefully assuage concerns arising from the use of administrative data and promote further introspection. What evidence based best practice looks like is becoming increasingly clear, and it will inevitably become a focus for measuring performance in specialties with complex outcomes.

NHS Southwest is leading the way, piloting the publication of its institutions’ performance against NICE quality standards for stroke and dementia and allowing comparison of the hospitals in its region. Part of the reason behind the initiative was that “people wanted to know whether their local services were up to scratch.”20

Primary care

While most transparency and measurement activities have focused on hospitals, general practices are gatekeepers to specialist services and comprise most NHS patient interactions. Practice level data are published on certain clinical indicators through the Quality and Outcomes Framework, allowing comparison of surgeries, but less than 10% of primary care activity is included.21 An extensive inquiry by the King’s Fund found that although care is “generally good,” there are “wide variations in performance and gaps in the quality of care.”21

The Department of Health publishes 280 general practice service and outcome measures to help people compare practices. The complexity of dealing with chronic conditions and undifferentiated patients means the vexing issue of what to measure remains. “Not all aspects of general practice lend themselves to quantitative assessment,” according to the King’s Fund.

The department is also going to publish scores out of 10 for patient experience, derived from its vast GP Patient Survey, ostensibly to help patients to “choose the right GP surgery for them.”22 The Royal College of General Practitioners is sceptical about the new scores, stating “a lot of what our patients tell us they value about general practice—trust, caring, kindness, and willingness to listen—is immeasurable.”

Undoubtedly true, and with relevant outcomes prone to confounding, measuring the processes of care remains vital. A damning report from the patient organisation Diabetes UK this year detailed the “significant numbers of people with diabetes who do not have access to the agreed essential standards of care.”23 It reported a more than 10-fold variation (from 6% to 69%) across the country in the number of patients receiving all nine key processes recommended by the Department of Health and NICE, such as retinal screening and blood pressure checks.23

When the citizens of Barnsley had similar information made publicly available, many thousands switched to better performing practices (box).

Barnsley

NHS Barnsley developed a scheme to highlight general practices that provide high quality services by identifying them with a green tick logo. Practices developed care pathways across 13 common areas describing best clinical practice. The practices then audited each other, setting tough, aspirational targets.

NHS Barnsley publicised the scheme and detailed what patients should expect from their practice if they had the diseases in question. Instead of highlighting the poor performers, it highlighted the good ones with the green tick. Subsequently, between 5000 and 7000 patients changed practice.

The future

Kelsey is moving to a new job as director for patients and information at the new NHS Commissioning Board, which will distribute £80bn of the NHS budget from April 2013. “Over the last decade there’s been a big shift in clinical attitudes,” says Kelsey. “Whereas before it was thought of as an act of terrorism to publish the Dr Foster data, no one would think that now.” He cautions, however, that “the ground we still have to cover is massive.”

Developing credible and fair metrics remains paramount. Although this will take time, Kelsey is confident that a “sensible comprehensive culture of transparent measurement in healthcare” will eventually emerge. Strong clinical leadership, as seen with cardiac surgery and nephrology, will be necessary. Bristol was undoubtedly a huge impetus for the cardiothoracic surgeons, but it is “unacceptable to wait for another disaster,” says Irvine.

The reformed NHS will see an increased appetite for open performance measurement. Commissioners will be judged on the outcomes achieved by the services they select. Given the parlous financial circumstances, they will need to know what “good” looks like, even in complex and opaque areas of healthcare.

Kelsey thinks that the question that remains is, “How do we do transparency?” because “it’s only recently that people have accepted we should do it at all.”

He thinks the public are key: “They recognise that variation [in performance] is a fact of life. What we need to do is be transparent about the degree of variation and collectively as a society decide what quality is based on our knowledge of the variations that exist.”

Whoever ends up using such information, and for whatever purpose, the clamour for transparency over performance is unlikely to be muted. As Keogh wrote in 2008, “The genie is now out of the bottle, there is no going back.”10

Notes


Footnotes

  • Competing interests: The author has completed the ICMJE unified disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares no support from any organisation for the submitted work and no financial relationships with any organisation that might have an interest in the submitted work in the previous three years. Alongside his role at the BMJ, AT works as a clinical fellow to Bruce Keogh and the NHS Commissioning Board, but not in a role related to performance assessment or transparency in healthcare. His salary for this fellowship is provided by NHS South Central.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References