
Education And Debate

NHS “indicators of success”: what do they tell us?

BMJ 1995; 310 doi: https://doi.org/10.1136/bmj.310.6986.1045 (Published 22 April 1995) Cite this as: BMJ 1995;310:1045
Radical Statistics Health Group, c/o London Hazards Centre, Interchange Studios, London NW5 3NQ
  • Accepted 23 March 1995

In the absence of any systematic evaluation of the changes it has made to the NHS, the government cites three “indicators of success.” These are record numbers of patients treated, shorter waiting times for hospital treatment, and more children being immunised against the main childhood diseases. Closer inspection of the statistics reveals that they do not support the conclusions inferred from them and that they are misleading measures of the impact of the changes made to the NHS.

The government has repeatedly claimed that the introduction of the internal market and the other changes to the NHS are a huge success. What evidence is there for this? There has still been no rigorous evaluation of the changes. Instead, government politicians keep quoting statistics to support their views. For example, Virginia Bottomley's speech to the 1994 Conservative Party conference (Conservative Party press release 751/94, 1994) and the 1993-4 annual report of the NHS Executive quoted the usual indicators of success. These were record numbers of “patients treated,” shorter waiting times for hospital treatment, and more children being immunised against the main childhood diseases.1 These same statistics were quoted three years ago in a document making similar claims for the success of the first six months of the internal market.2

This article takes a closer look at these statistics about the NHS in England, to see whether the claims based on them are justified, and closes by recommending ways of making NHS data more informative.

More patients treated?

“New figures from the government's statistical service show that in the past year alone, NHS hospitals treated an extra 455 000 patients. That is a 4.7% increase.”3

Ministers frequently quote figures about the activity in NHS hospitals as if they referred to the actual numbers of people treated, but the statistics cited above by Virginia Bottomley are not based on individual people. Instead, they refer to numbers of day case admissions to hospital and numbers of inpatient “finished consultant episodes.” On other occasions, attendances at outpatient and accident and emergency departments are also included.

Up to the financial year 1987-8, people were counted each time they were discharged from an inpatient stay in hospital, and the statistics were expressed in terms of “discharges and deaths.” Since the financial year 1988-9, inpatients have also been counted each time they change consultant or specialty within a hospital stay, and figures are expressed in terms of finished consultant episodes. For example, someone could be admitted to the observation ward of an accident and emergency department, be transferred to an orthopaedic department for treatment of fractures, subsequently develop cardiovascular problems and be referred to a cardiologist, and finally be transferred to a rehabilitation ward under the care of a geriatrician before being discharged. This single hospital stay would have contributed four consultant episodes to the overall total instead of being considered as one discharge or hospital spell.4 Figures about hospital inpatient stays are now expressed in terms of these finished consultant episodes.5
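The arithmetic of this change in counting can be sketched briefly. The following is an illustrative sketch only, with invented stays and specialties, showing how the same hospital activity yields a larger total when counted as finished consultant episodes rather than as discharges or hospital spells:

```python
# Illustrative sketch (invented data, not a real NHS dataset): the same
# hospital activity counted two ways. Each inner list is one hospital
# spell; each element within it is one finished consultant episode.
stays = [
    # the four-episode stay described in the text
    ["A&E observation", "orthopaedics", "cardiology", "rehabilitation"],
    ["general surgery"],
    ["general medicine", "geriatrics"],
]

# What the statistics count from 1988-9 onwards.
episodes = sum(len(stay) for stay in stays)
# What "discharges and deaths" counted up to 1987-8.
spells = len(stays)

print(episodes, spells)  # 7 episodes but only 3 hospital spells
```

Three people passing through hospital thus contribute seven units of "activity" under the newer definition, without any more patients having been treated.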

Thus the increase of 4.7% quoted by Virginia Bottomley referred to the rise by 455 000 in the numbers of day cases and inpatient finished consultant episodes between the financial years 1992-3 and 1993-4.5 Similarly, statements such as “our new health service, where 3000 extra patients are treated every day” (V Bottomley, speech to 1994 Conservative Party conference, Conservative Party press release 751/94, 1994) seem to be based on the increase in what are now called general and acute finished consultant episodes between the financial years 1990-1 and 1993-4. This category includes episodes in geriatric as well as acute specialties.

In England there are still no national data about successive hospital inpatient stays and outpatient attendances by the same person. At a local level, computer systems are capable of linking data about the same person's episodes of care and readmissions to hospital, but this is not done at a national level. Aggregated statistics submitted to the Department of Health on the KP70 return are not presented in terms of numbers of people treated, nor are there data about readmissions. The same is true of the individual records collected in the hospital episode statistics.

The numbers of admissions to hospital as inpatients and day cases and the numbers of outpatient attendances have been increasing for many years (fig 1). This would be expected from the changing age structure of the population, developments and improvements in medical and surgical techniques, and the increasing tendency to operate on older people. The number of people aged 75 or over, who make the heaviest use of the health service, is increasing. In the financial year 1992-3, 13% of inpatient and day case consultant episodes in the acute sector involved people aged 75 or over, while a further 14% involved people aged 65-74.5 Other changes, such as shorter lengths of hospital stay, may well have increased the rates of both planned and unplanned readmission.

Fig 1—NHS hospital activity in all specialties in England, 1974 to 1993-4 (data from statistical bulletins 5/85, 10/93, and 12/94 from the Department of Health and Social Security and Department of Health)

Counting consultant episodes

In the internal market, hospitals are paid by the episode rather than according to the number of people treated. It has been suggested that this may have increased the incentive to ensure that all finished consultant episodes, however short, are recorded.6 7 An article commenting on the increasing numbers of emergency admissions pointed to increases in very short lengths of stay observed in local studies.8

A similar phenomenon can be seen in national statistics. The table shows the numbers of inpatient episodes reported in the hospital episode statistics for 1990-1 and 1992-3.9 10 These show that episodes lasting up to one day accounted for virtually all the increase in numbers of episodes over the two years, at a time when the numbers of day cases were also increasing. The introduction of the internal market may not be the only factor that could have this effect; stays of up to one day had also contributed to the smaller increase in inpatient episodes between 1988-9 and 1990-1. Nevertheless, the increase between 1990-1 and 1991-2 was particularly large.

Changes in numbers of inpatient episodes in NHS hospitals in England between 1990-1 and 1992-3 (data from hospital episode statistics9 10)


When questioned by the House of Commons Health Committee, the Department of Health mentioned a comparison made in 1988-9, when the KP70 return was first introduced and both the number of consultant episodes and the number of discharges and deaths were counted. This showed that the number of finished consultant episodes was 2.0% higher than the number of discharges and deaths.11 The committee was also told about checks made later that used more detailed data from the hospital episode statistics. As these were based on individual records, it was possible to identify and count the episodes that were the last episode in an inpatient's “hospital spell.” The Department of Health reported that, at a national level, the number of finished consultant episodes was 3.5% higher than the number of hospital spells in each of the years 1990-1, 1991-2, and 1992-3.11 Thus, it seemed at the time that there had been an increase in the ratio since 1988-9 but no further rise after 1990-1.
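These reported ratios give a rough rule for deflating episode counts back towards something nearer a count of hospital spells. The sketch below uses an invented episode total purely for illustration, with the 3.5% figure reported by the Department of Health for 1990-1 to 1992-3:

```python
# Rough sketch (invented episode total): deflating reported finished
# consultant episodes to an approximate count of hospital spells, using
# the Department of Health's reported ratio of episodes to spells
# (3.5% higher in each of 1990-1, 1991-2, and 1992-3).

reported_episodes = 1_000_000  # hypothetical annual total
ratio = 1.035                  # episodes per hospital spell, 1990-1 onwards

estimated_spells = reported_episodes / ratio
print(round(estimated_spells))  # roughly 966 184 spells
```

A rise in this ratio over time, as apparently happened between 1988-9 and 1990-1, would inflate the growth in recorded activity without any corresponding growth in the number of hospital stays.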

The department is now apparently doing further work on the subject and may change its opinion as a result. In response to a parliamentary question asking it to give trends since 1989-90 in numbers of consultant episodes and hospital spells, it replied, “Previously quoted estimates of hospital spells have been superseded by emerging new information and revised figures will be available in due course.”12

The difference between consultant episodes and discharges and deaths shown in figure 1 is wider than these percentages suggest. This is because the finished consultant episodes include the new category of “well babies,” which was introduced in 1988-9. These are healthy babies born in hospital. Sometimes they are excluded from counts of inpatient stays in acute and maternity departments and sometimes they are included.

Are more day cases a sign of greater efficiency?

“Hospital and community health services again provided care more efficiently in 1993/94, for example by the greater use of day case surgery enabling more patients to be treated.”1

The move to shorter lengths of stay and to day case surgery is often cited as a sign of increased efficiency. This move is a long term trend, as figure 1 shows, and results from developments in surgery, including the use of less invasive techniques, rather than changes in government or the structure of the NHS. The increase in number of day cases may also be influenced by changes in the extent to which short lengths of stay are categorised at a local level as inpatient stays of zero length, ward attendances, outpatient attendances, or day case admissions.

There seems to be an assumption that further increases in the proportions of operations done as day case surgery are inevitably beneficial. It is impossible to judge whether this is so without information about people's preferences or the availability of care at home from relatives, friends, and community nursing services. Data about readmissions to hospital and the outcome of treatment are also needed.

Keeping ahead of the times

“Since the NHS reforms there has been a spectacular increase in the numbers of patients treated—121 now for every 100 then” (Department of Health press release 94/529, 1994).

This statement from Virginia Bottomley in November 1994 is not only unreliable statistically, as we have shown above, but also raises other questions. The figure of 121 referred to the financial year 1994-5, and the statement was made when there were no published statistics for that financial year as it had not yet finished. An inquiry to the NHS Executive revealed that the figure was based on the planned numbers of finished consultant episodes given in purchasers' plans for 1994-5 rather than actual numbers. Although the purchasers' plans can be obtained from NHS regional offices, and are thus technically in the public domain, the figures they contain are not readily accessible. Other figures quoted in ministers' press releases (NHS Management Executive press release H93/828, 1993) or documents2 have been based on data from the NHS Executive's unpublished fast track monitoring returns. The possibility of publishing these data has been discussed, but it has now been stated that they are not published because they are provisional figures and are superseded by annual publications of data from KP70 returns.13

Even if these data were readily accessible, the comparisons with other data for previous years would be open to question. If data are collected in different ways in different statistical systems comparisons between them may reflect differences between the systems rather than real differences in what the data purport to measure. As it is, there are considerable discrepancies between the aggregated data in the KP70 returns and tabulated data from the hospital episode statistics. The problem is particularly severe for maternity data, many of which are missing from the hospital episode statistics,14 but non-maternity data are far from problem free. For example, a group developing a formula for NHS resource allocation found no data for Rugby Health Authority in the hospital episode statistics for 1990-1.15

Are needs being met?

More patient episodes are being counted, but does this mean that the needs of the population are being met? As the statistics discussed above do not count people, they cannot answer this question. Some information is available from the general household survey, an annual survey of a sample of the population. In this survey the proportion of people who had been admitted to hospital overnight in the previous 12 months remained at the same level from 1982 to 1992.16 This might suggest that the NHS was just managing to maintain services, but the survey did not include a question about day case admissions before 1992. This makes it impossible to use data from this source to monitor the impact of day case surgery on the population over time. Even if person based data were available with population denominators, it would still be difficult to interpret them in the absence of data about the health care needs of the population or the effectiveness of the care given.

This raises many questions beyond those of the accuracy of the data and the appropriateness of measuring finished consultant episodes. First is the assumption that more activity means better care. This contradicts the Department of Health's stated aims on research and development, set out in its annual report and elsewhere. If the initiatives to review and disseminate the results of research lead to the desired aim of “maximising the effectiveness, efficiency and appropriateness of patient services,”1 this may mean that people have fewer episodes of effective care rather than more episodes of ineffective care. Nevertheless, given that much health care has yet to be evaluated, the impact of better dissemination of research findings may be relatively small in the short term.17 18

Waiting lists and waiting times

“There was good progress during the year on reducing waiting times for inpatient and day case treatment.”1

The numbers of people on waiting lists have been rising over the past few years (fig 2), and the government has shifted its agenda to waiting times (Conservative Party press release 751/94, 1994; Department of Health press release 94/529, 1994).1 19 Waiting times are measured from the date a clinician decides to admit a patient. Delays in making such decisions can make recorded waiting times shorter. In addition, patients on the waiting list who are offered a date but are unable to attend have their waiting times calculated from the most recent date offered. These are known as self deferred cases. The numbers of self deferred cases are no longer published in the Department of Health's six monthly statistical bulletin on waiting times. This item was removed when the format of the bulletin was simplified.19 Data requested by the House of Commons Health Committee show that self deferrals rose from 48 343 during the period March to June 1988 to 66 901 during September to December 1993.11 More recent data include a footnote explaining, “The numbers relate to the numbers who are waiting on the last day of the six-month period who have self-deferred and do not represent the total number who have self-deferred over the period.”20 Their numbers rose from 40 766 people on the inpatient waiting list and 8977 on the day case waiting list in September 1987 to 43 538 and 34 946 respectively in September 1994.

Fig 2—Numbers of people on inpatient and day surgery waiting lists in England, 1957-1994 (data from various publications of the Department of Health and Social Security and Department of Health)

Waiting lists have always contained names of people who have died or who no longer require treatment. In most places, the lists are reviewed periodically to remove such names. The numbers of people removed for reasons other than treatment used to be tabulated in the Department of Health's statistical bulletins. This information was also excluded from the department's statistical bulletins when their format was revised. The House of Commons Health Committee was told that their numbers rose from 90 931 during March to June 1988 to 219 564 during September to December 1993.11 More recently, when asked for this information in a parliamentary question in January 1995, the department replied, “Information on the number of people removed from waiting lists for reasons other than treatment is not held centrally.”20 Three weeks later the department gave “the available information” for the years 1991 to 1993 in response to a more loosely phrased question.21 The reply explained that the numbers of patients removed included those who had been admitted for treatment as emergencies for the same condition or at hospitals other than those on whose waiting lists they were originally placed. It is not possible to subdivide the totals to look at possible associations with increases in the number of non-elective admissions, about 80% of which are thought to be emergency admissions.

A graph provided to the House of Commons Health Committee showed that the median value of waiting times decreased very little between 1988 and 1993.11 This contrasts with the dramatic reduction in numbers of people waiting over 18 months shown in the graph on page 23 of the annual report of the NHS.1

The NHS Executive also monitors waiting times for inpatient treatment in its fast track monitoring returns. It publishes press releases giving provisional summary statistics based on these returns more frequently than the six monthly statistical bulletins, which give more detailed final figures. The fast track figures are usually published quarterly, although they were issued at monthly intervals for the three months immediately before the 1992 general election.

After numerous criticisms that waiting times could be shortened by delaying people's first outpatient consultations, the Department of Health started to collect statistics about how long people in England waited for their first outpatient appointment after referral by a general practitioner. The first published statistics on this were for the quarter ending in September 1994.22 It was reported that 83% of patients waited under 13 weeks and 96% under 26 weeks for their first appointment. These were compared favourably with a newly issued national standard that 90% should be seen within 13 weeks and that all should be seen by 26 weeks after referral (Department of Health press release 95/25, 1995). Even with this newer form of monitoring there is still the possibility of shortening reported waiting times for inpatient treatment by delaying putting people on the waiting list until there is a reasonable prospect that they will not have to wait too long.

Measuring the quality and outcome of hospital care

The publication of a small set of statistical data about the NHS in the form of league tables under the patient's charter23 has been cited as evidence that NHS data are becoming increasingly accessible.24 This limited set of data related largely to waiting times rather than to the quality of care for which people were waiting. It also emphasised admissions for elective surgery rather than emergency admissions or admissions for investigation or other forms of care. New items were added when Virginia Bottomley launched a second and revised patient's charter in January 1995 (Department of Health press release 95/24, 1995). Although there were a few additional items of a different sort—including a standard on hospital food, a promise that children would be admitted to children's wards, and that adults' preferences for single sex wards would be respected—most of the standards again related to waiting times.

None of the indicators relates to the outcome, effectiveness, or appropriateness of the care provided. They focus only on the time people wait to be seen, without considering the length of the eventual consultation. There is a real danger that the pressure to achieve targets linked to a relatively narrow set of indicators may distract staff's attention from other issues, notably the quality, effectiveness, and appropriateness of the clinical care provided.25 These problems may become more acute if the indicators are ranked in ascending order to form league tables. Data in the health service indicator package have been used informally for league tables for some years despite warnings that they should not be used in this way (Department of Health press release 94/335, 1994). These data are drawn from a variety of NHS information systems and are of variable quality. Furthermore, some of the indicators are based on very small numbers of events. To interpret them, it is necessary to see the numbers on which they are based. This would also allow the calculation of confidence intervals to assess whether differences are greater than would be expected by chance. Raw data for the most recent set of data were not published. This apparently contradicts initiatives to promote greater availability of NHS data.
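The point about small numbers can be made concrete. The sketch below uses invented figures for two hypothetical hospitals with the same underlying event rate: a simple normal-approximation confidence interval around a rate based on few events is far wider than one based on many, so apparent differences in a league table may be nothing more than chance.

```python
# Hedged sketch (invented figures): why raw numbers are needed to
# interpret league-table indicators. A rate estimated from few events
# carries a wide 95% confidence interval.
import math

def approx_ci(events, denominator, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = events / denominator
    se = math.sqrt(p * (1 - p) / denominator)
    return p - z * se, p + z * se

# Same underlying rate (3%) measured on different numbers of cases.
lo_small, hi_small = approx_ci(3, 100)     # 3 events in 100 cases
lo_large, hi_large = approx_ci(30, 1000)   # 30 events in 1000 cases

print(round(hi_small - lo_small, 3))  # 0.067 -- a wide interval
print(round(hi_large - lo_large, 3))  # 0.021 -- much narrower
```

Ranking hospitals by the point estimates alone, without the denominators or intervals, would treat such chance variation as real differences in performance.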

In the absence of more appropriate measures of outcome, it has been suggested that league tables of hospital death rates should be published. In fact hospital death rates have been included in the health service indicator package for some time but have rightly been ignored. The fallacies of such data have been pointed out by many statisticians, from Florence Nightingale onwards,26 and publishing them without background information is likely to cause misunderstanding. Hospitals that are centres of excellence and thus admit a higher proportion of high risk cases may well have higher death rates, as will hospitals with facilities for people with terminal illnesses. The percentage of acute inpatient episodes which ended in death decreased from 3.0% in 1988-9 to 2.7% in 1992-3.5 Further data are needed to assess whether this may reflect changes in the extent to which people with terminal illness chose to spend the last days of their lives in their own homes or in hospices and nursing homes outside the NHS.

Rather than using crude death rates or other routine indicators, it would be much more constructive to investigate deaths in hospital through the focused approach used in the confidential inquiry into perioperative deaths. Nevertheless, the vast majority of patients are discharged from hospital alive, and for them more appropriate measures of outcome are needed. As with schools and their examination results, the socioeconomic and ethnic composition of the population from which people are admitted to a hospital is likely to influence mortality and morbidity. There is therefore a need to include socioeconomic as well as ethnic data in NHS information systems.

The clinical outcome indicators published for Scotland are being misinterpreted as league tables and cited as a precedent. Although many of the criticisms also apply to them, they are at least derived from a much more sophisticated national statistical system.27 In Scotland there is extensive linkage between statistical records, and it is possible to link successive episodes of care received by the same person. Even so, none of the published indicators takes socioeconomic factors into account, though this was done in a supplementary analysis of admissions after myocardial infarction.

Primary care

“Labour said doctors would never meet targets for immunising more children. Well they have” (V Bottomley, speech to 1994 Conservative Party conference, Conservative Party press release 751/94, 1994).

Despite the increasing prominence of primary care, relatively few data are routinely collected from general practice and published on a national level. This probably reflects general practitioners' status as independent contractors. Immunisation rates are among the few items of data collected, so it is not surprising that these are often quoted by politicians. Government statements usually highlight increases in immunisation rates since the changes introduced into general practice in 1990. In fact these increases are part of a longer term trend (fig 3).28 Immunisation rates have been rising steadily since the mid-1970s, when concern about possible side effects of whooping cough vaccine led to a large fall in vaccination against whooping cough and a smaller fall in other immunisation rates. Moreover, rates of immunisation against measles, mumps, and rubella by the age of 2 years fell slightly, from 92% in 1992-3 to 91% in 1993-4. There is therefore no reason to conclude that recent increases in the uptake of immunisation are the direct consequences of any of the reorganisations that have taken place in the health service since 1979.

Fig 3—Percentage of children immunised by their second birthday in England, 1967 to 1992-3 (data from Department of Health forms SBL607 and KC51)

Growing satisfaction with the NHS?

“Increased public satisfaction with the NHS is an encouraging sign that the benefits of the government's reforms are showing through” (Department of Health press release 94/520, 1994).

In welcoming the findings of the British social attitudes survey, Virginia Bottomley commented, “The proportion of the public satisfied with the NHS has grown steadily since 1990, by seven percentage points, with a corresponding drop in public dissatisfaction” (Department of Health press release 94/520, 1994). However, she avoided mentioning the actual levels of satisfaction and dissatisfaction. In 1993, 44% of people questioned said that they were very or quite satisfied with the NHS, compared with 37% in 1990 but with 55% in 1983. Although the proportion who reported that they were very or quite dissatisfied with the NHS decreased from 47% in 1990 to 38% in 1993, this was well above the figure of 26% for 1983.29 In any case, the answers to general questions about satisfaction and dissatisfaction are of limited value compared with surveys that ask people more specific questions about their views of particular aspects of NHS care.30 31

Interpreting NHS statistics

“If purchasing is the engine for improving NHS performance, then information is the fuel which will drive that engine. Information is the common currency of managers and an essential prerequisite of all management processes.”32

This analysis has shown that the government has insufficient evidence to support its claims that the internal market has increased the efficiency of the NHS. Its choice of statistics is extremely limited, and many of the statistics presented are flawed and were not intended for the purposes for which they have been used. Because of changes in data collection, it is often difficult to monitor trends over time in a consistent way and to assess whether like is being compared with like. Even data that have been collected in a more consistent way are open to misinterpretation. In particular, the government often restricts graphs of time trends to recent years and gives itself credit for changes that are part of much longer term trends which date back well before 1979.

Recent developments, particularly the introduction of the internal market, have decreased the availability of statistical information about the NHS. In particular, financial information is much less detailed than in the past, in order to protect the commercial interests of trusts. The information released tends to be selective, favouring good news rather than a wider range of data. In addition, there is a real concern that the loss of the information gathering function of NHS regions and the staff responsible may lead to a deterioration in the quality of data. Even with the relatively limited data quoted by the government as indicators of success there is considerable scope for improvement.

Improvements needed

Firstly, there should be a move to counting people as well as activity. The finished consultant episode, which is based on activity and not on patients treated, is too limited in its scope. Person based data are essential if numbers are to be meaningful. All NHS information systems, including those at national level, should link episodes within an admission and also link admissions and readmissions for individual patients. Person based data, as used in Scotland, would overcome artefactual errors that can inflate numerators. The new NHS number for patients should be helpful in enabling this to be done in England.

Improved measures of case mix and outcome measures are required, and further research is needed here. Meanwhile, confidential inquiries linked to routine data may provide a better framework than flawed league tables. The administrative registers being developed should allow for linkage between records in different databases. The aim should be to identify further complications that might require care at a later stage, possibly by a different health professional. Admission and treatment rates, case fatality rates, and survival rates could then be calculated.

There is also a need to extend the collection of data about patients waiting for NHS treatment. The time of arrival of a referral letter to an outpatient department, the time of the initial appointment, and the time between appointment and admission should all be recorded. This would reveal any delays in the system and would monitor the number of people waiting to join a waiting list.

The lack of national data about primary care is a cause for increasing concern given the growing emphasis on the role of general practitioners as commissioners and providers of care. Better data on resource allocation to fundholders and comparative studies with non-fundholding practices are required. This will become essential as fundholding becomes more widespread. More data are required on the demographic characteristics of patients and the workload and staffing of practices so that comparisons can be made between practices and areas.

The national morbidity survey provides useful information about consultations in general practice, but it is undertaken only once every 10 years, which is inadequate given the current rapid pace of change. There is also the question of the extent to which the self selected set of volunteer practices that take part in the survey differs from a random sample. Although there are other initiatives (Department of Health press release 95/88, 1995), many problems remain to be resolved before adequate annual data about consultations in general practice and the people who make them can be obtained and published at the national level.

Even when NHS data do exist, their dissemination can be uneven. Although government statisticians now produce their own press releases, these tend to receive less press attention than ministerial press releases, which all too often use statistics in an undefined and misleading way. Statisticians' press releases ought to accompany ministerial press releases, or, if there is no separate press release, the sources and definitions of the data that form the basis of ministerial statements should be added.

Conclusions

The “indicators of success” quoted by government politicians today bear a strong similarity to those published shortly before the 1992 general election.2 They were unconvincing then,4 and time has not improved them. The statistics are far too limited to enable a proper assessment of the impact of changes in the NHS, and even the statistics provided do not support the interpretations government politicians place on them. NHS statistics, like the NHS itself, are still not safe in politicians' hands. Improvements are needed in the collection and publication of statistics.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14.
  15.
  16.
  17.
  18.
  19.
  20.
  21.
  22.
  23.
  24.
  25.
  26.
  27.
  28.
  29.
  30.
  31.
  32.