
National neonatal data to support specialist care and improve infant outcomes
  1. Andrew Spencer1,2,
  2. Neena Modi3
  1. 1NHS Information Centre, Leeds, UK
  2. 2Neonatal Unit, University Hospital North Staffordshire, University Hospital of North Staffordshire Trust, Stoke on Trent, Staffordshire, UK
  3. 3Neonatal Medicine, Imperial College London, London, UK
  1. Correspondence to Dr S A Spencer, The NHS Information Centre, 1 Trevelyan Square, Boar Lane, Leeds LS1 6AE, UK; andy.spencer{at}doctors.net.uk

Abstract

‘Liberating the NHS’ and the new Outcomes Framework make information central to the management of the UK National Health Service (NHS). The principles of patient choice and government policy on the transparency of outcomes for public services are key drivers for improving performance. Specialist neonatal care is able to respond positively to these challenges owing to the development of a well-defined dataset and comprehensive national data collection. When combined with analysis, audit and feedback at the national level, this is proving to be an effective means to harness the potential of clinical data. Other key characteristics have been an integrated approach to ensure that data are captured once and serve multiple needs, collaboration between professional organisations, parents, academic institutions, the commercial sector and NHS managers, and responsiveness to changing requirements. The authors discuss these aspects of national neonatal specialist data and point to future developments.


Introduction

In 2002, the UK Department of Health (DH) set out a visionary strategy, the National Programme for Information Technology (NPfIT).1 The original concept of a lifetime NHS electronic record for every patient had been replaced by the concept of an Integrated Care Record Service encompassing an electronic patient record, investigation results, image storage and retrieval, electronic prescribing and an appointments system. In 2005, Connecting for Health, an agency charged with delivering the programme to the NHS in England, was launched. The next 5 years were characterised by disappointment and disillusionment, with slow progress, erosion of confidence, escalating costs and critical media reports.2 In 2010, a White Paper from the new UK coalition government called for an Information Revolution.3 The ‘big idea’ was to use data to drive up standards and quality of care and to improve public accountability and patient choice, a concept that is now echoed in Cabinet Office proposals for ‘Open Data’.4 Though a laudable aim, the general perception is that this vision will take time and a lot of hard work to realise. However, in a small corner of the NHS, a quiet success story has been unfolding: the progress that has been made in data collection in neonatal specialist care and the analysis of these data to support audit, quality improvement, service delivery and research.

Neonatal Data Analysis Unit

There are 171 neonatal units in England. These provide care for the most vulnerable of patient groups, preterm and sick newborn babies, numbering about 10% of all births, or 65 000 infants annually. Following a DH report in 2002, neonatal services were reorganised into managed clinical networks, each comprising around six to eight neonatal units.5 One of the drivers for the development of newborn networks was the publication of data indicating that gestation-specific survival was better in many other European countries than in the UK.6 The intention was that each neonatal network would be largely self-sufficient in its ability to provide a complete range of intensive, high dependency and special care. The resulting need for frequent transfer of newborns between neonatal units providing different levels of care, and the concurrent requirement for immediate transfer of information between NHS Trusts, was the stimulus that led to the widespread capture of operational electronic data. Fortuitously, other developments helped drive progress. The British Association of Perinatal Medicine (BAPM) had been developing and refining a neonatal dataset7 for several years because neonatologists recognised the benefits that would derive from a uniform approach to the capture of information. Furthermore, it was acknowledged that there was a lack of national data on the outcomes of neonatal specialist care.5 Over the same period, a small commercial firm had been developing a technical platform to capture neonatal data.8

Electronic neonatal data capture was introduced in 2005 and to date has been adopted by all but a tiny number (six at the time of writing) of the 171 neonatal units in England. These electronic data are held on an NHS platform and include fixed-choice and free-text items. The NHS number is the principal identifier; static data such as birth weight and gestational age are entered once only, data are entered daily in a standard format on treatments such as oxygen, nutrition and medications, and other items such as infection are recorded on an episodic basis. Clinician-entered diagnoses are converted to International Classification of Diseases version 10 (ICD10) codes. The record also incorporates mandated items from the Neonatal Critical Care Minimum Dataset (NCCMDS)9 and information for commissioning. The electronic system was initially known by various names around the country, of which the Standardised Electronic Neonatal Database was at one time the most widely used; increasingly, this system is now known by its trade name,8 the Badger.net system. As a technical solution, it is unique worldwide in capturing operational information from a geographically defined population to support neonatal specialist care and to provide data that are used for a number of purposes. For example, networks may access the data for management, local audit and quality improvement,10 a process which can only serve to further enhance the quality of the national data.
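As a rough illustration of how such a record is structured, the sketch below separates static items entered once, daily items and episodic items, keyed by the NHS number. It is a minimal sketch only; the class and field names are our assumptions and do not reflect the actual Badger.net or National Neonatal Dataset schema.

```python
# Minimal sketch of a point-of-care neonatal record: field names are
# illustrative assumptions, not the Badger.net / National Neonatal Dataset schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class StaticData:
    """Items entered once only, with the NHS number as principal identifier."""
    nhs_number: str
    date_of_birth: date
    birth_weight_g: int
    gestational_age_weeks: int


@dataclass
class DailyEntry:
    """Items entered daily in a standard format."""
    care_date: date
    receiving_oxygen: bool
    nutrition: str                      # eg 'parenteral' or 'enteral'
    medications: List[str] = field(default_factory=list)


@dataclass
class EpisodicEntry:
    """Items entered on an episodic basis, such as an infection."""
    onset_date: date
    clinician_diagnosis: str
    icd10_code: Optional[str] = None    # assigned when the diagnosis is coded


@dataclass
class NeonatalRecord:
    static: StaticData
    daily: List[DailyEntry] = field(default_factory=list)
    episodes: List[EpisodicEntry] = field(default_factory=list)
```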

In 2006, the Royal College of Paediatrics and Child Health (RCPCH) was awarded a contract to deliver a National Neonatal Audit Programme11 (NNAP). In 2007, a group of academic investigators, a network manager and a national charity representing parents established the Neonatal Data Analysis Unit (NDAU)12 with the aim of using standardised neonatal data for health services support, evaluation, surveillance, audit and research. In 2010, National Research Ethics approval was obtained for the NDAU to create a National Neonatal Database, which would contain a National Neonatal Dataset incorporating the BAPM dataset,7 used primarily to define the daily level of care, and the NCCMDS. After merging of records, data cleaning and the appropriate regulatory approvals, the National Neonatal Database is used by the NNAP11 and to support commissioning, quality improvement, service development and research.
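Because babies are transferred between units, the database holds several episode records per baby that must be brought together before analysis. The sketch below shows one simple way of merging episodes on the NHS number; it is an illustrative assumption about this cleaning step, not the NDAU's actual pipeline.

```python
# Illustrative merge of per-unit episode records into one baby-level record,
# keyed on NHS number; not the NDAU's actual cleaning pipeline.
from collections import defaultdict


def merge_episodes(episodes):
    """episodes: iterable of dicts, each with 'nhs_number', 'admission_date'
    and optionally static items such as 'birth_weight_g'."""
    by_baby = defaultdict(list)
    for ep in episodes:
        by_baby[ep["nhs_number"]].append(ep)

    merged = {}
    for nhs_number, eps in by_baby.items():
        eps.sort(key=lambda e: e["admission_date"])   # chronological order
        merged[nhs_number] = {
            "nhs_number": nhs_number,
            "episodes": eps,
            # static items should agree across units; take the first non-missing value
            "birth_weight_g": next(
                (e["birth_weight_g"] for e in eps if e.get("birth_weight_g")), None
            ),
        }
    return merged
```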

Neonatal audit

The history of capturing high-quality data in neonatal specialist care is a long one. Thirty years ago, punch cards and needles were used to evaluate mortality and other outcomes by birth weight and gestation. With the advent of the personal computer, locally developed programs for undertaking neonatal audit were often displayed at the annual meetings of the British Paediatric Association,13 with some of the more successful programs being adopted widely. In the early days, audit consisted of producing an annual report for comparison with colleagues working in similar units. An invitation to a senior neonatologist to review the results often followed. At the time, this was largely an observational exercise, and it was not fully appreciated that benchmarking and comparison could be used as tools for quality improvement and improved outcomes. Nevertheless, the argument for consistent data collection and clear, unambiguous definitions for the variables captured was well made.13

A strong commitment by neonatologists to measuring outcomes was demonstrated by the EPIcure study14 in 1995, in which the long-term outcome of a national cohort of babies born at less than 27 weeks has been monitored over several years.15 This study was rigorous in design and demonstrated the importance of obtaining the correct denominator data. Without information on every early neonatal death in the labour room, it would have been quite possible to seriously underestimate mortality. Furthermore, it is abundantly clear from this study that national audit must include long-term outcomes. The Badger.net system includes a facility to enter data on 2-year health status in a standardised, objective format developed by the Thames Regional Perinatal Outcomes Group, and this information has been incorporated into the National Neonatal Database held by the NDAU.

Many audits have been criticised for being overly simplistic. If patients, clinicians and managers are to have confidence in the inferences from audits, methodological rigour cannot be neglected. For example, the analysis of mortality in neonatal specialist care is reported by the NDAU12 with adjustment for case mix and attribution to the network rather than to an individual hospital or trust. This is important given the substantial variation in patient characteristics, and because neonatal specialist care is organised on a network basis with babies frequently transferred between hospitals depending on their care requirements. The NDAU has shown that, after adjustment for risk factors, there is no significant variation between networks in the mortality of babies admitted for specialist neonatal care16 (figure 1). From 2011, analyses for the RCPCH NNAP11 will be based on the National Neonatal Database managed by the NDAU rather than on the initial limited dataset restricted to 32 variables. This will facilitate the development of new audits and will enable more sophisticated and meaningful analyses. Other analyses by the NDAU reveal that following the reorganisation of neonatal services into managed clinical networks, the survival of extremely preterm babies has improved and the proportion born in the most experienced centres has increased. However, there has been a rise in the proportion of acute as well as late postnatal transfers, and it is of concern that many of the former are to less experienced neonatal units.17 Additional information may be accessed through the NDAU12 and NNAP11 websites.

Figure 1

Mortality by network of booking for babies <33 weeks gestational age admitted to neonatal specialist care and discharged from the neonatal networks in England in 2009. (Data courtesy of the Neonatal Data Analysis Unit.) The unadjusted (a) and case-mix adjusted (b) standardised mortality ratios (SMR) are shown. Adjustment was made for gestational age at birth, gestation/sex-specific birth weight SD score (SDS), sex, antenatal steroid use and multiplicity of pregnancy.45 Gestation/sex-specific birth weight SDS was obtained using the UK-WHO preterm growth reference46 with LMSgrowth software.47 A value >1 indicates that the observed number of deaths is greater than expected; a value <1 indicates the converse. The mortality ratios are shown on funnel plots with the ratio plotted against the expected number of deaths. On the plots, 95% and 99.8% CI are drawn for reference. These lines fall about two and three SD, respectively, from an SMR of one and can be used to assess the statistical significance of any extreme observations. Where all neonatal units in a network have contributed data, the network is shown as a filled circle; open circles represent networks where not all neonatal units have contributed data.
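The arithmetic behind the funnel plot can be sketched as follows: the SMR is the ratio of observed to expected deaths, and the 95% and 99.8% reference lines lie roughly two and three standard deviations either side of an SMR of one. The Poisson approximation used here is our assumption; the NDAU's exact method may differ.

```python
# Sketch of the SMR and funnel-plot reference limits; the Poisson
# approximation is an assumption, not necessarily the NDAU's exact method.
import math


def smr(observed_deaths: int, expected_deaths: float) -> float:
    """Standardised mortality ratio: observed over expected deaths."""
    return observed_deaths / expected_deaths


def funnel_limits(expected_deaths: float) -> dict:
    """Approximate 95% and 99.8% limits around SMR = 1 for a given number of
    expected deaths; under a Poisson model the SD of O/E is about 1/sqrt(E)."""
    sd = 1.0 / math.sqrt(expected_deaths)
    return {
        "95%": (1 - 1.96 * sd, 1 + 1.96 * sd),      # about two SD
        "99.8%": (1 - 3.09 * sd, 1 + 3.09 * sd),    # about three SD
    }


# Example: a network with 20 expected and 27 observed deaths
print(smr(27, 20.0))        # 1.35
print(funnel_limits(20.0))  # roughly (0.56, 1.44) and (0.31, 1.69)
```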

Data for commissioning

Data for NHS commissioning are largely derived from the Secondary Uses Service (SUS), a large database run by British Telecom. All NHS Trusts are obliged to submit a minimum set of data from their Patient Administration System (PAS) into this database. Clinical coders are responsible for converting diagnostic information into ICD10 labels and codes. These, along with the procedure codes created using the Office of Population Censuses and Surveys Classification of Surgical Operations and Procedures (OPCS-4.5), are grouped into Healthcare Resource Groups for Payment by Results (PbR). Currently, neonatal services are commissioned on the basis of days of intensive, high dependency and special care. These data are submitted by NHS Trusts into SUS. In the West Midlands, the accuracy and reliability of the data submitted into Badger.net is so good that the Specialist Commissioners use these data for contracting purposes rather than those derived from SUS. Furthermore, they plan to use these data to monitor performance for Commissioning for Quality and Innovation payment framework18 payments. It is technically feasible for trusts to import the data required for SUS from Badger.net into their PASs, but to our knowledge this has rarely been achieved, and sadly most trusts undertake an element of double recording. Repetitive data capture increases the burden on hard-pressed clinical teams. Figure 2 shows an analysis of length of stay by neonatal network (England only), prepared by the NDAU using operational data.
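Because commissioning is based on days of intensive, high dependency and special care, the underlying aggregation is simply a count of daily records by care level, as in the sketch below. The level labels are illustrative rather than the NCCMDS codes themselves.

```python
# Illustrative count of commissioned care days by level; the labels are
# assumptions, not NCCMDS codes.
from collections import Counter

CARE_LEVELS = ("intensive", "high_dependency", "special")


def care_days(daily_records):
    """daily_records: iterable of dicts with one 'care_level' per calendar day."""
    counts = Counter(r["care_level"] for r in daily_records
                     if r["care_level"] in CARE_LEVELS)
    return {level: counts.get(level, 0) for level in CARE_LEVELS}


# Example: 5 intensive care days followed by 12 special care days
days = [{"care_level": "intensive"}] * 5 + [{"care_level": "special"}] * 12
print(care_days(days))  # {'intensive': 5, 'high_dependency': 0, 'special': 12}
```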

Figure 2

Median and IQR for length of stay in days, by neonatal network of first admission and gestational age category. The coloured reference lines show the medians for each gestational age category. Babies who survived to discharge and whose final discharge was in 2010 were included in the analysis. (Data courtesy of the Neonatal Data Analysis Unit.)
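The summary behind figure 2 amounts to grouping surviving babies by network of first admission and gestational age category and computing the median and IQR of length of stay. The sketch below shows this calculation under assumed column names; it is not the NDAU's analysis code.

```python
# Sketch of the length-of-stay summary: median and IQR by network and
# gestational age category. Column names are assumptions about the extract.
import statistics


def los_summary(babies):
    """babies: iterable of dicts with 'network', 'ga_category' and
    'length_of_stay_days', restricted to infants surviving to discharge."""
    groups = {}
    for b in babies:
        key = (b["network"], b["ga_category"])
        groups.setdefault(key, []).append(b["length_of_stay_days"])

    summary = {}
    for key, los in groups.items():
        q1, median, q3 = statistics.quantiles(sorted(los), n=4)  # quartiles
        summary[key] = {"median": median, "iqr": (q1, q3), "n": len(los)}
    return summary
```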

Hospital Episode Statistics

Hospital Episode Statistics (HES) are derived from SUS by the NHS Information Centre.19 These cleaned and anonymised data are used for benchmarking, quality and public accountability, including the publication of official statistics. External information providers such as Dr Foster Intelligence (DFI)20 and CHKS21 also provide analyses of HES data to trusts. Although HES is derived directly from hospitals' own data submissions for PbR, the quality and utility of these data have been the subject of reports of substantial missing data, inconsistent coding and poor data definitions.22 23 The Academy of Medical Royal Colleges has recently called for a number of improvements so that clinicians become more engaged with the collection of national data.24 In neonatal medicine, the availability of high-quality data captured at the point of care by clinical teams has led clinicians to ignore HES neonatal data. Furthermore, the lack of linkage between the two systems means that neonatal HES data remain poor. Despite these shortcomings, HES data are still reported. For example, the DFI Hospital Standardised Mortality Ratio (HSMR),25 which monitors 56 diagnostic groups accounting for around 80% of inpatient deaths, includes a group called ‘Other Perinatal Conditions’. This includes deaths in babies coded as live births, respiratory distress syndrome, short gestation, low birthweight, fetal growth retardation and other (including some poorly defined conditions). Trusts wishing to improve their HSMR (currently under review by the DH),26 which is published on NHS Choices,27 may ask the neonatal team to justify the deaths that are contributing to the overall trust picture. Consequently, it is still worth working to improve HES by linking data derived from Badger.net locally to PASs, as well as linking national HES data with the National Neonatal Database managed by the NDAU; the NDAU has received provisional approval from the National Information Governance Board to do so. The NDAU is able to provide information within 3–6 months of collection at the point of care, whereas there is typically a much longer delay in the availability of national HES data. Nonetheless, activity and ICD10-coded information can be imported into SUS. Furthermore, the linkage of the National Neonatal Database with HES offers other potential benefits, as Child Health and Maternity datasets are under development by the Information Centre. If these are also linked at the patient level, there will be substantial benefits for research and health services evaluation.
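In its simplest form, linkage of the National Neonatal Database with HES can be deterministic: match on the NHS number and use the date of birth as a consistency check, as in the sketch below. The field names are illustrative, and the approved NDAU/HES linkage will have its own specification.

```python
# Illustrative deterministic linkage of neonatal records to HES rows on the
# NHS number, with date of birth as a check; not the approved linkage method.
def link_to_hes(neonatal_rows, hes_rows):
    hes_by_nhs = {row["nhs_number"]: row for row in hes_rows}
    linked, unmatched = [], []
    for row in neonatal_rows:
        candidate = hes_by_nhs.get(row["nhs_number"])
        if candidate and candidate["date_of_birth"] == row["date_of_birth"]:
            linked.append({**candidate, **row})  # neonatal fields take precedence
        else:
            unmatched.append(row)
    return linked, unmatched
```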

Improving outcomes

Audit, benchmarking and research are all required for improved patient outcomes. There is a great deal of unexplained variation28 across the disciplines of healthcare. Understanding the extent to which variation is due to remediable factors, whether clinical, organisational or process related, is a first step towards implementing change and improving outcomes. One example where rigorous public reporting of outcomes by individual practitioners has made a difference is cardiac surgery,29 where mortality for given procedures has fallen year on year. Infant mortality is mentioned as a specific area for improvement in the NHS Outcomes Framework30 under the overarching domain of ‘Preventing people from dying prematurely’. Neonatologists should also be concerned about the domain relating to ‘Helping people to recover from episodes of illness and injury’, because good quality neonatal care is not just about the number of survivors but also about avoiding disability in neonatal graduates. The domain ‘Ensuring people have a positive experience of care’ is extremely important for parents, and parent-reported experience measures are currently being developed for inclusion in the NNAP (table 1).

Table 1

New developments using neonatal specialist care data (lead organisation in brackets)

There are difficulties associated with using outcome indicators as direct measures of quality of care. Appropriate case-mix adjustment and valid attribution are necessary, as discussed above in relation to the mortality of babies admitted to neonatal specialist care. High-quality analyses require development and testing, and a concern is the need to ensure that such work is adequately resourced.

There are two other important aspects that must be considered,31 structure and process. Structure relates to the facilities that are available to provide a quality service. The BAPM has done considerable work in defining the essential facilities, skills and staffing required for different levels of neonatal unit.32 Much of this work has been published by the DH in the Neonatal Toolkit.33 It is argued that care is unlikely to be of good quality if staffing is inadequate. Assessment of this aspect of quality is probably best done by networks using a standards assessment process,34 with the aim of supporting improvement, possibly through the rationalisation of care in a cost-contained environment. Process relates to whether appropriate evidence-based guidelines have been followed. The National Institute for Health and Clinical Excellence (NICE) has been given responsibility for producing quality standards to underpin the NHS Outcomes Framework, and specialist neonatal care was among the first disciplines to have NICE Quality Standards published.35 Adherence to a quality standard is not always easy to measure, so further work is needed to develop quality indicators and metrics arising from these standards so that national audit can make meaningful comparisons between neonatal units and networks. The development of a national Neonatal Dashboard is one example where such work is currently underway (table 1).

Such comparisons have the potential to improve outcomes in three ways: first, commissioners may withdraw funding from a service that is seriously failing; second, patients may choose to go elsewhere, though in neonatal specialist care it is usually neither possible nor appropriate for parents to exercise this choice in practice; and third, trusts may use the information to investigate and take steps to improve their own performance; as Lord Darzi36 pointed out, measurement is essential for improvement. In North America, the greatest benefit from publishing outcome data and quality indicators has come from providers using data to manage their own performance.37 The Vermont Oxford Network38 (VON) was established in 1988 in order to improve the quality of care for newborn infants through the sharing of outcome data across a large number of neonatal units. As an example, 14 centres across Ireland and Northern Ireland participated in a benchmarking exercise with VON.39 They found that the nosocomial infection rate in the all-Ireland group was about double that for VON. This is an area which had previously been shown to be amenable to quality improvement,40 41 providing a clear opportunity for improving outcomes. In the UK, the NDAU is leading a National Neonatal Collaborative with similar aims, including a current initiative to reduce catheter-associated bloodstream infection in neonatal units,42 and the NNAP is auditing variation in neonatal bloodstream infection. VON covers participating hospitals rather than a geographically defined population, so full population coverage is lacking; in addition, the major focus of VON is babies of less than 1500 g birth weight, and bespoke rather than operational clinical data are used. In contrast, the NDAU National Neonatal Collaborative achieves population coverage of all babies admitted to specialist care and uses the full breadth of operational data for diverse aims.
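The arithmetic behind such benchmarking is straightforward: an infection rate per 1000 central line (or patient) days compared against a reference rate. The figures in the example below are invented for illustration only; they are not VON or all-Ireland data.

```python
# Illustrative benchmarking of a nosocomial infection rate against a
# reference rate; the numbers are invented, not VON or all-Ireland figures.
def infections_per_1000_days(infections: int, line_days: int) -> float:
    """Infection rate expressed per 1000 central line days."""
    return 1000.0 * infections / line_days


def rate_ratio(local_rate: float, benchmark_rate: float) -> float:
    """How many times higher the local rate is than the benchmark."""
    return local_rate / benchmark_rate


local = infections_per_1000_days(infections=18, line_days=2400)      # 7.5
benchmark = infections_per_1000_days(infections=9, line_days=2400)   # 3.75
print(rate_ratio(local, benchmark))  # 2.0, ie about double the benchmark
```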

Research

Audit and service evaluation are distinct from research, and this underlies the different regulatory frameworks that apply. It is also acknowledged that there are many overlapping areas and that the NHS has a great deal to offer to benefit patients by harnessing the potential of electronic patient records for research. In 2006, the UK Clinical Research Collaboration (UKCRC) Research and Development Advisory Group made a series of six recommendations to Connecting for Health with the aim of promoting the opportunities for the UK to enhance its clinical research capability through the use of electronic patient data.43 In 2007, the Wellcome Trust, in collaboration with the UKCRC, held a ‘Frontiers Meeting on the use of electronic patient records for research and health benefit’.44 This meeting brought together senior policy-makers in government, industry and academia to discuss research that is integrated with NPfIT within the NHS. In newborn medicine, these aims are now being achieved. A number of research projects are being delivered through the NDAU, including the evaluation of preterm growth, health services research, the facilitation of randomised controlled trials, and epidemiological and case-control studies.

Conclusions

Neonatal specialist care has achieved comprehensive data capture at a relatively low cost. Data outputs are already in use and evidence of quality improvement in patient care is emerging. Success has been due to sustained professional leadership over many years, incremental achievable aims, close dialogue with industry enabling modification and steady improvement, the support of neonatal network managers and partnership with academia. Gaining agreement on data items and definitions is difficult work, and professional organisations such as the BAPM and the RCPCH are crucial to success and ongoing development. Most importantly, the emphasis has not been on an IT ‘system’, but on basing electronic capture around unambiguous data definitions and a professionally agreed standard, the National Neonatal Dataset. A flexible approach to development has also been adopted, with the inclusion of free-text fields to enable the production of discharge and other patient summaries that combine standard data items with a more fluid explanation of the child's condition. Data that are captured once are used for multiple outputs and to fulfil a range of aims. With growing appreciation of the wider utility of the data, completeness and quality will continue to improve. Challenges for the future are to develop methodologically rigorous analyses and innovative presentation that aim to identify the impact of variation in practice on outcome, to widen the uses to which the data are put, and to ensure sustainability of capture. Completeness of the data has been formally evaluated by the NDAU and is considerably higher than that of HES data (submitted). Electronic data quality is also currently being assessed against medical case notes in the East of England neonatal networks. Work to link outcomes to resources such as staffing and skill mix, and to collect the views of parents, is currently under development (table 1).

Acknowledgments

The authors thank Brian Derry, the Director of Information Services at the NHS Information Centre for Health and Social Care, for his expert knowledge and advice in drafting this article, and Eugene Statnikov (NDAU Data Analyst), Sridevi Nagarajan (NNAP Data Analyst) and Shalini Santhakumaran (NDAU Statistician) for analyses.


Footnotes

  • Competing interests Dr Spencer is the National Clinical Lead for Hospital Specialties at the NHS Information Centre. Professor Modi leads the Neonatal Data Analysis Unit, and is the Vice President (Science & Research) of the RCPCH, a member of the National Neonatal Audit Project Board and the BAPM Data Group, and Chair of the Thames Regional Outcomes Group.

  • Provenance and peer review Commissioned; externally peer reviewed.
