
Capacity of English NHS hospitals to monitor quality in infection prevention and control using a new European framework: a multilevel qualitative analysis
  1. Michiyo Iwami1,
  2. Raheelah Ahmad1,
  3. Enrique Castro-Sánchez1,
  4. Gabriel Birgand1,2,
  5. Alan P Johnson3,
  6. Alison Holmes1,4
  1. 1NIHR Health Protection Research Unit (HPRU) in Healthcare Associated Infection and Antimicrobial Resistance, Imperial College London, London, UK
  2. 2Antenne Régionale de Lutte contre les Infections Nosocomiales (ARLIN) Pays de la Loire, Nantes, France
  3. 3Public Health England, London, UK
  4. 4Imperial College Healthcare NHS Trust, London, UK
  1. Correspondence to Dr Raheelah Ahmad; raheelah.ahmad{at}imperial.ac.uk

Abstract

Objective (1) To assess the extent to which current English national regulations/policies/guidelines and local hospital practices align with indicators suggested by a European review of effective strategies for infection prevention and control (IPC); (2) to examine the capacity of local hospitals to report on the indicators and current use of data to inform IPC management and practice.

Design A national and local-level analysis of the 27 indicators was conducted. At the national level, a documentary review of regulations/policies/guidelines was conducted. At the local level, data collection comprised: (a) review of documentary sources from 14 hospitals, to determine the capacity to report performance against these indicators; (b) qualitative interviews with 3 senior managers from 5 hospitals and direct observation of hospital wards, to establish whether these indicators are used to improve IPC management and practice.

Setting 2 acute English National Health Service (NHS) trusts and 1 NHS foundation trust (14 hospitals).

Participants 3 senior managers from 5 hospitals for qualitative interviews.

Primary and secondary outcome measures As primary outcome measures, a ‘Red-Amber-Green’ (RAG) rating was developed, reflecting how well the indicators were included in national documents or their availability at the local organisational level. The current use of the indicators to inform IPC management and practice was also assessed. The main secondary outcome measure was any inconsistency between national and local RAG rating results.

Results National regulations/policies/guidelines largely cover the suggested European indicators. The ability of individual hospitals to report some of the indicators at ward level varies across staff groups, which may mask required improvements. A reactive use of staffing-related indicators was observed rather than the suggested prospective strategic approach for IPC management.

Conclusions For effective patient safety and infection prevention in English hospitals, routine and proactive approaches need to be developed. Our approach to evaluation can be extended to other country settings.

  • Infection prevention and control management
  • indicators
  • capacity

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • This is the first study to assess the indicators for successful infection prevention and control suggested by a European review in real-world hospital settings in England.

  • The novel multilevel approach to identifying gaps would be applicable to other cultural settings through adaptation of the indicators and consideration of health systems and local contexts.

  • Despite the geographical, structural and managerial variation achieved through sampling, statistical generalisability is limited.

Introduction

The burden of healthcare-associated infections (HCAIs) in European hospitals remains high. Each year, 1.9–5.2 million patients acquire at least one HCAI in European hospitals.1 In the English National Health Service (NHS), ∼244 000 patients are affected by HCAIs yearly,1 leading to increased mortality,2 additional hospital antimicrobial use3 and financial burden.4

The scope and level of implementation of national HCAI prevention programmes have varied significantly across Europe.5 ,6 In England, intensive efforts have been implemented since 1999, including regulatory, governance, hygiene and technological interventions aimed at the organisational and individual levels.5 ,7 ,8 While England has been among the first to publicly report HCAI indicators and implement mandatory surveillance,6 this performance monitoring approach was restricted to a selected set of infections.9

The significant reduction in methicillin-resistant Staphylococcus aureus (MRSA) bacteraemia and Clostridium difficile infection (CDI), both subject to mandatory surveillance,10 ,11 was most likely due to the multifaceted approaches employed.12 Progress must, however, be seen in context. Reductions in MRSA bacteraemia rates were not replicated for methicillin-susceptible S. aureus (MSSA).13 Moreover, during 2007–2011, when MRSA bacteraemia was declining, Escherichia coli bacteraemia reports increased by one-third,13 along with the emergence of strains resistant to cephalosporins due to the production of extended-spectrum β-lactamases.3

Suboptimal organisational-level and individual-level responses persist9 and mandate a broader approach to address these challenges, sustain improvements and mitigate unintended consequences of interventions.14 Governmental recommendations for a ‘board to ward’ approach (2008)15 were followed by appeals to adopt a whole-systems perspective, with explicit roles and responsibilities for national and local organisations.16 Similar attention to a broader approach to infection prevention and control (IPC) is seen in the international policy and academic discourse of this period. In England, however, vertical (ie, focused on specific pathogens17) and largely top-down (ie, national mandatory surveillance schemes) approaches to IPC have been dominant.

A recent European-led review18 provides a horizontal approach to IPC, linked to efforts to minimise the risks of a wide range of infections.17 It details key components and organisational and managerial structure, process and outcome indicators, henceforth referred to as a ‘framework’. In essence, this describes the core elements of a comprehensive IPC approach, requiring only translation and validation in local contexts.

The objectives of our study were: (1) to assess the alignment of current national mandates/recommendations and hospital practices in England with the suggested European indicators;18 and (2) to examine the capacity of local hospitals to report on the indicators and their current use of data to inform IPC management and practice.

Methods

The indicators were extracted from the information provided in the European review.18 The 27 associated indicators under the 10 key components were assessed in the context of the English NHS using quantitative and qualitative data, taking a multilevel approach:19 ,20 at the national and healthcare organisation (hospital) levels through documentary research, and at the team level by interviewing senior managers in one NHS trust (figure 1).

Figure 1

Overview of study methodology. (NHS hospitals can be administratively structured as acute trusts, including multiple hospitals. Additionally, some hospitals can obtain foundation trust status, enjoying significant managerial and financial freedom.) IPC, infection prevention and control; NHS, National Health Service; RAG, Red-Amber-Green; T, trust.

Setting and data collection

First, relevant regulations, national standards, policy, guidance and guidelines, published between January 2000 and June 2015, were identified, accessed from over 100 websites of national health authorities and regulators and assessed against the indicators. The full list of websites is included in online supplementary table. In addition, key terms from each of the 27 indicators were used to search these websites. A hand-search of reference lists from key documents was conducted to trace other relevant documents. Discussion with key informants also helped signpost to relevant sources.

Second, 14 hospitals were purposively sampled to provide variation in structure. These hospitals are organised into three administrative organisations, called ‘trusts’ (two acute NHS trusts and one NHS foundation trust): trust 1 (T1), a large teaching foundation trust (with significant managerial and financial freedom) in north England; T2, a medium-sized teaching trust in south England; and T3, a large teaching trust in London. For each hospital, publicly accessible electronic sources were reviewed against the indicators. These included trust reports, trust board meeting minutes and national databases containing trust-level (or finer) information (see online supplementary table) for the period of financial years 2011/2012–2014/2015.

Third, an assessment of the use of these indicators in current IPC management and practice was undertaken through documentary review, direct observation and interviews with three senior managers acting as key informants from T3. They were selected because they held major roles in IPC in the same trust and were best placed to validate and provide relevant information for the study. We used the 10 components18 to devise open questions for interviews and used the 27 indicators for direct questioning (asked for each indicator: Do the data exist? Where and how can these data be accessed? How are they used to inform IPC management and practice in this hospital?). Interviews were conducted at the informants' workplaces (March–June 2015), and their responses were recorded in field notes. Confidentiality and anonymity of participants and participating organisations have been maintained; written consent was obtained from the key informants. Three hospitals in the same trust were selected for observation because they offered different types of wards and variation in practice (medical, surgical and intensive care unit wards). Observation of the environment included information available on public notice boards and hand hygiene facilities on such wards in each hospital.

These approaches to data collection were used to ensure saturation in terms of key regulations/policies/guidelines and interventions (figure 1).

Data analysis and interpretation

To assess the inclusion of the indicators in any national mandates/recommendations and availability of these data at trust level, two researchers (MI and RA) independently reviewed the published sources and databases. A third reviewer (EC-S) resolved any disagreements. A ‘Red-Amber-Green’ (RAG) rating was developed: Red refers to ‘not included in national regulations/policies/guidelines, or no data available/accessible at the trust’; amber means ‘partially included in national regulations/policies/guidelines, or partial data available/accessible at the trust’; and green refers to ‘included in national regulations/policies/guidelines, or data consistently available/easily accessible at the trust’. The use of data for IPC management and practice was identified from key informants’ insights, and local practices were validated by observational data (T3). This was further corroborated by the senior author (AH) and NHS colleagues, based on their professional background, role and experience. We sought to identify areas of alignment and gaps across national, organisational and team levels, by comparing the RAG rating results. The current use of the indicators to inform IPC management and practice is also presented.
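The RAG rating is simple enough to encode directly. The following Python sketch is illustrative only, not the authors' tooling: the assessment levels, function name and example indicator data are our assumptions, chosen to show how national and local ratings can be compared to flag gaps.

```python
# Illustrative sketch of the RAG rating logic described above.
# Assessment levels and example data are hypothetical assumptions.

RAG = {"none": "Red", "partial": "Amber", "full": "Green"}

def rag_rating(level: str) -> str:
    """Map an assessed level of inclusion/availability to a RAG colour:
    'none' -> Red, 'partial' -> Amber, 'full' -> Green."""
    try:
        return RAG[level]
    except KeyError:
        raise ValueError(f"unknown assessment level: {level!r}")

# Hypothetical national vs local assessments for two indicators.
indicators = {
    "alcohol-based handrub at point of care": ("full", "partial"),
    "hand hygiene audits with feedback": ("full", "full"),
}
for name, (national, local) in indicators.items():
    nat, loc = rag_rating(national), rag_rating(local)
    status = "aligned" if nat == loc else "gap"
    print(f"{name}: national={nat}, local={loc} -> {status}")
```

Comparing the two columns in this way is what surfaces the national/local inconsistencies treated as the secondary outcome measure.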

Results

A high degree of alignment was found between the suggested indicators and national regulations/policies/guidelines in England (table 1). Specifically, 21/27 indicators (78%) were included in national regulations/policies/guidelines (green). The remaining six were partially included or inconsistently available (amber).

Table 1

Current use and measurement of the indicators at the national and local level

A similar picture emerged regarding data availability at trust level, with 22/27 indicators (81%) available and the remaining five partially or inconsistently available.

Further detail is provided for the eight indicators rated as ‘amber’ at either the national or local level, or both, to highlight existing gaps (table 1), along with current use for IPC management and practice. This is followed by two additional indicators rated as ‘green’ at both levels, but not fully exploited in IPC management and practices. An online supplementary file shows detailed data for each indicator (see online supplementary table).

Current gaps at the national level

Appropriate staffing for IPC—component 1

Staffing has been the focus of national recommendations. While in 2001 all NHS hospitals were recorded as having an IPC team (including at least 1 IPC nurse), a low ratio of whole-time equivalent (WTE) IPC nurses to total number of beds has been reported in the UK.21 There is currently no national guideline on IPC nurse and doctor ratios for trusts to follow, but desired ratios of 1 WTE IPC nurse: 250 beds and 1 WTE IPC doctor: 1000 beds have been suggested.22 National policy does recommend an ‘appropriate mix’ of staffing and the inclusion of supporting staff, including administration, information technology and laboratory.23
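The suggested ratios translate into a straightforward calculation. As a minimal illustration (the function name and rounding up to whole posts are our assumptions; the 1:250 and 1:1000 ratios are the suggested values cited above):

```python
# Illustrative calculation of the suggested (not mandated) IPC staffing
# ratios: 1 WTE IPC nurse per 250 beds, 1 WTE IPC doctor per 1000 beds.

import math

def suggested_ipc_staffing(beds: int) -> dict:
    """Return the suggested whole-time-equivalent (WTE) IPC staffing
    for a trust of a given bed count, rounded up to whole posts."""
    return {
        "wte_ipc_nurses": math.ceil(beds / 250),
        "wte_ipc_doctors": math.ceil(beds / 1000),
    }

print(suggested_ipc_staffing(1200))
# -> {'wte_ipc_nurses': 5, 'wte_ipc_doctors': 2}
```

Rounding up reflects that a fraction of a post cannot cover the corresponding beds; a trust might instead choose to report the exact ratio when benchmarking.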

Measurement of the number of audits (overall, and stratified by departments/units and topics) for specified time periods—component 6

National guidelines require that hand hygiene audits should be regularly conducted with results fed back to healthcare workers.24 The number of audits is, however, defined locally. Hand hygiene audits are part of trust audit procedures at the ward/unit level. Cleaning audits are required based on trust cleaning policies, with the frequency of audits tailored to risk levels of functional areas in accordance with national cleaning standards.25

Verification that programmes are multimodal—component 8

Verifying the extent to which programmes are multimodal is not explicitly set out at the national level. A number of national multimodal initiatives nevertheless emphasise the importance of this approach, including the ‘cleanyourhands’ campaign (2004–2010), comprising awareness-raising, education, promotion of hand hygiene at the point of care and audits,26 and High Impact Interventions (effectively care bundles with audit tools to measure compliance), recommended through the UK Department of Health (DH) multicomponent national ‘Saving Lives’ programme.27

Current gaps at the local level

Availability of alcohol-based handrub at the point of care; availability of sinks stocked with soap and single-use towels—component 3

The first indicator is highly prevalent across the national guidelines,24 ,28 and intensively promoted through national campaigns including ‘cleanyourhands’ (2004–2010).26 Likewise, the second indicator is incorporated into the national guidance,28 and in fact goes beyond mere availability, stipulating criteria for clinical wash-hand basins including: non-touch operation taps, no swan-neck, no overflow/plug, and adherence to single purpose (clinical hand washing for staff).28

Alcohol-based handrub and stocked sink availability was measured for all trusts and found to be largely met, according to 2012 on-site visit assessment (Patient Environment Action Team) results.29 This approach, however, was replaced by a new scheme, the Patient-Led Assessments of the Care Environment programme, in 2013,30 and the ‘cleanyourhands’ campaign ended in 2010, with responsibility for hand hygiene improvement and sustainability devolved to trusts. Variation in practice is therefore observed. However, owing to the historical emphasis on this indicator, it is considered in IPC management and practice.

Gaps at both national and local levels

Average number of frontline healthcare workers—component 2

Overall, the national and local focus is on nursing/midwifery/ancillary staff, but not medical staff. This indicator is not proactively used to inform IPC management and practice.

Staffing strategies, applied to nurses, midwives and care staff in England, have been devised to ensure ‘the right staff, with the right skills, in the right place’,31 and ‘at the right time’.32 However, the significant progress made31–37 excludes doctors, pharmacists and other health professionals.

Top-level data on staff numbers by professional group were available for each trust (through annual or human resources reports, or NHS workforce statistics38). Averages by workload or nursing hours per patient day37 can be calculated. A smaller unit of analysis (ie, hospital or ward) may, however, be more meaningful for IPC. All three trusts complied with national requirements to publish monthly safe staffing information (ie, ward-level actual vs planned hours of staff, by staff category and shift, together with average fill rates). Some trusts offered further insights into staffing, for example, by combining the number of beds with quality outcomes (ie, pressure ulcers, falls with harm and complaints) and staffing information.
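The monthly safe staffing publication rests on a simple fill-rate calculation: actual hours worked as a percentage of planned hours, by staff category and shift. A minimal sketch, with hypothetical ward data (the function name and all figures are our assumptions for illustration):

```python
# Illustrative fill-rate calculation behind the monthly ward-level
# safe staffing publications. All data values are hypothetical.

def fill_rate(actual_hours: float, planned_hours: float) -> float:
    """Fill rate (%) = actual hours / planned hours * 100."""
    if planned_hours <= 0:
        raise ValueError("planned hours must be positive")
    return 100 * actual_hours / planned_hours

# Hypothetical ward data: (staff category, shift) -> (actual, planned)
ward = {
    ("registered nurses", "day"): (1460.0, 1500.0),
    ("registered nurses", "night"): (980.0, 1000.0),
    ("care staff", "day"): (720.0, 800.0),
}
for (category, shift), (actual, planned) in ward.items():
    print(f"{category} ({shift}): {fill_rate(actual, planned):.1f}% fill")
```

Because the calculation requires staff to be assigned to wards, it works for nursing staff but not for doctors assigned to departments, which is the disaggregation gap discussed in this section.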

Average proportion of pool (bank)/agency professionals (nurses and doctors)—component 2

Overall, the national and local focus is on temporary nursing but not medical staff.

The national safe staffing guideline33 includes routine monitoring of ‘high levels and/or on-going reliance on temporary nursing’. Staffing capacity and capability, including usage of temporary staff (broken down by bank/agency), should be reviewed and discussed at regular (at least biannual) public board meetings in each trust.32 Locally (trust) agreed acceptable levels are recommended, and expenditure on bank and agency staff per bed is recommended as a related outcome measure.33 Emergent deficits arising from daily safe staffing reviews can be resolved by using bank/agency staff or moving staff from other clinical areas, though ideally only to fill short-term gaps.32 Data were available at the trust level (eg, usage of, and spend on, bank/agency staff). This information was, however, not shared regularly with the IPC team, and therefore did not inform management and practice consistently. Assessment of the usage of temporary staff was triggered by events such as serious incidents, and was thus considered retrospectively through, for example, postinfection reviews and root cause analysis.

Interviews with frontline staff and IPC professionals—component 9

This indicator is partially recommended in national guidance39 and by one professional body,40 but there is no stipulation that interviews with frontline staff and IPC professionals be used to identify champions or to gauge engagement with interventions. Across the trusts, methods of engaging champions varied. For example, in one trust, IPC nurses selected champions (link nurses) by assessing their IPC knowledge and ability to manage problems. The work of IPC champions was reported as based on individual rather than collective effort, and as more reactive than proactive; in one trust, outbreaks often impeded the setting up of a systematic approach. This indicator is not used routinely or proactively in IPC management and practice.

Indicators which are covered at the national level, available at the hospital level, but not fully exploited in IPC management and practices

Questionnaires about work satisfaction—component 10

National initiatives41 point to increasing evidence of a link between staff satisfaction and quality of care,42 and urge trust chief executives to support such engagement and feedback activities. Two types of survey measure staff satisfaction: the National NHS Staff Survey (Picker Institute Europe; an annual snapshot43) and a more recent local staff engagement survey (NHS Employers; surveying 25% of staff per quarter, cumulatively targeting all staff44). Overall, the local staff engagement survey has a higher and increasing response rate compared with the national survey. Increased rates and quality of responses have been attributed to the feedback of survey results and the formulation of engagement action plans around advocacy. This indicator is not proactively used to inform IPC management and practice.

Human resource assessment of healthcare workers’ turnover and absenteeism—component 10

There was a strong focus at the national and local levels on vacancy rates, as well as on turnover and absenteeism, mainly to achieve cost reductions. Data available for nursing staff through electronic rostering were of higher quality than those for medical staff. Analysis at the ward and unit level was not possible for any staff group except nursing staff (who are assigned to wards). This made it difficult to link staffing and outcomes data, limiting their use in IPC management. In addition, this indicator was used retrospectively, often triggered by the results of periodic external inspections or the need to investigate serious incidents.

Discussion

Our results show that the infrastructure in the English context is aligned with the European indicators.18 Given this level of development, the key questions are whether data are readily available and appropriately disaggregated, and how they can be fully exploited to inform IPC and patient safety. For hospitals to optimise existing but disparate information, the following areas need consideration.

Need for a renewed focus on medical staff

The gaps in medical workforce data availability compound the widely reported cultural challenges of engaging this group in horizontal IPC approaches.45 Doctors are assigned to departments, whereas nurses are assigned to wards. This results in a detachment from ward-based monitoring, audit and surveillance data, which are fed back to each ward and shared in real time. While poor compliance with aseptic non-touch technique (ANTT) among medical staff is documented via local ANTT competence assessments, internal monitoring of medical staff is less systematic than that of nursing staff. Medical staff issues tend to be flagged through periodic inspections by external regulators. Ensuring a uniform approach across all staff groups is critical,46 given the proportion of HCAIs potentially avoidable through everyday practice.12 ,47 A renewed focus on the medical workforce in terms of structure and assessment may help with the softer cultural issues of engagement and ownership.

Workforce

Recommendations for safe staffing levels must be viewed in the context of a national shortfall of registered nurses ‘willing to work in the NHS’ (ref. 48, p.31) and uncertainty regarding the evidence on optimal staffing levels. Nationally, there was an 83% increase in agency staff spending between 2011/2012 and 2014/2015.49 High use of bank and agency staff at the hospital level may in some cases be viewed positively, as adhering to safe staffing. However, the difference between bank and agency staff needs to be noted: bank staff are substantively employed by the trust, unlike agency staff. Agency staff are often new to trust policies and may be unaware of local rules and organisational culture, potentially leading to safety compromises. An example aimed at addressing these issues is NHS Professionals, a dedicated provider of trained temporary staff for the NHS who are familiar with local policies. Given this national context, the implications for effective IPC, in particular for training and handover, must be planned for.

Proactive/mindful use of the indicators

Leaders within hospitals need to be mindful of the data and information already available within their organisation. Use of workforce data, often impeded by cultural and structural silos, seems a particular gap. Reactive, rather than proactive, ‘mining’ for information was prevalent, usually triggered by adverse events. Staff satisfaction levels in particular can be valuable for gauging safety culture, as well as for influencing patient/public perceptions.50

Quality of data capture and appropriate analysis

Data reliability is affected by technical and structural factors, including variation in the interpretation of definitions, reporting conventions, workflow and patient numbers. Soft factors such as emotion, ethics, attitude, behaviour and organisational culture can also be at play. Although reporting/audit return rates have generally improved in recent years, it is questionable whether rates of compliance with process indicators (eg, hand hygiene, ‘bare below the elbows’) reflect actual practice, given the reliance on self-audits. Trusts are aware of these issues and have begun tackling them through new strategies. Methods of triangulation include ‘mystery shoppers’ and validation by peers. Managers need to be aware of the risks to staff morale and the potential negative impact on organisational culture (component 10).

Maintaining relevance

The 27 indicators are in line with the English policy trajectory set out in the introduction, particularly its recognition of the multilevel drivers required for sustained change.32 ,48 ,51 ,52 This framework provides tangible process and outcome indicators to facilitate measurement at the hospital level, against a backdrop of increasing calls for transparency and visibility of data and information.35 ,36 The Francis report52 highlights the need for increased transparency on staffing levels and vacancy rates; however, staff retention, training and development, and organisational values must also be embedded across hospitals.48 In addition, major recent failings, such as the outbreak of Pseudomonas aeruginosa in neonatal units in Belfast, have again raised the importance of basic structures and then the correct use of these facilities (processes).53 Scandals in England were something of a trigger for the development of the modern regulatory framework, but defining progress through a limited set of indicators may have worked against fostering a safety culture.9 ,54

Regular appraisals are critical to ensure that the indicators remain relevant and practicable, and macro influences must be explicitly acknowledged. Among these are evolving and emerging pathogen threats, national policy and legislative changes, enhanced national performance targets, financial constraints, technological advancements, demographic changes and increased citizen expectations.

Themes missing in the European framework

Three main themes are absent from the framework. Occupational health (eg, influenza vaccination for healthcare workers) and antimicrobial stewardship were intentionally excluded by the authors of the framework, as they are covered by other, parallel European projects.18 However, these are highly relevant to a system-wide approach to IPC, and in England hospitals are required to establish local programmes and audit.23 ,39 ,55–57 Patient and public involvement is also excluded; this absence reflects the lack of evidence for the strategies evaluated thus far.58

For local or national use, this framework appears flexible enough to allow adaptation, and a number of benefits are summarised in box 1, showing the value of our method of assessment to a range of actors and aims. In addition, an analysis of the financial48 ,59 or operational impact of local implementation of the framework is recommended. For organisational adoption of the new indicators, a preimplementation process is vital to allow ‘buy-in’ from key stakeholders.

Box 1

Potential benefits of the use of the framework

1. Understanding of own/local context can contribute to improvements in local practice.

2. Raising awareness and fostering safety discourse at organisational level/motivating staff to discuss areas for improvement identified through the application of the framework, and design action plans or change initiatives.

3. Assessing internal variability (at the division/directorate/specialty/department/ward/unit level).

4. Benchmarking to evaluate organisational progress against the framework over time (as well as with other similar health organisations).

5. Facilitating proactive IPC management fostering organisational cultural change relevant to the macroenvironment.

6. Generating local evidence to assess the existence/strength of links between the indicators to identify trigger(s)/predictor(s)/tipping points critical to own clinical settings (eg, impacts of staffing on clinical outcomes in certain areas of care).

7. Recognising the complex approaches required to prevent and control multidrug-resistant organisms, beyond nationally set targets.

8. Integrating multiple hospital data sources for tackling a broad range of HCAIs13 and aligning with wider safety initiatives at the hospital level.

HCAIs, healthcare-associated infections; IPC, infection prevention and control.

Strengths and limitations

This study represents an efficient, innovative assessment, through extensive and systematic documentary research and validation of findings via key informant interviews with senior managers. Limitations include the small number of cases. Generalisability can be enhanced by replicating the study and describing the context in detail. This paper demonstrates how IPC and hospital leaders can evaluate their own hospitals by applying this approach at the systems level, identifying organisational priorities and efficiencies. Our multilevel assessment strategy for identifying gaps would be applicable to other settings through adaptation of the indicators and consideration of local contexts and health systems.

Conclusions

This is the first study to assess the European framework in real-world hospital settings, demonstrating how to examine national drivers and structures in which organisational priorities are set. While local-level capacity exceeds national aspirations for a few indicators, for English hospitals to have the capacity to fully consider the framework, routine and proactive approaches need to be developed. Hospital managers and health professionals leading safety and IPC programmes need to ensure that data are readily available, aggregated and then fully exploited to inform local practices.

Acknowledgments

The authors are grateful to the key informants and other hospital staff for generously referring them to relevant sources. They would like to thank Esmita Charani and Professor Bryony Dean Franklin (NIHR Health Protection Research Unit in Healthcare Associated Infection and Antimicrobial Resistance, Imperial College London) for their helpful comments on their earlier draft.

Footnotes

  • Twitter Follow Enrique Castro-Sánchez @castrocloud

  • Contributors AH devised the study. MI and RA conducted the data collection, analysis, and drafted the first draft of the article. EC-S revised the first draft and contributed to the writing of the manuscript. APJ and AH provided a critical review of the analysis. GB provided further international input. All authors edited, read and approved the final manuscript.

  • Funding This research was produced by Imperial College London and commissioned by the Health Foundation, an independent charity working to continuously improve the quality of healthcare in the UK. The author(s) were partially funded by the National Institute for Health Research Health Protection Research Unit (NIHR HPRU (grant number HPRU-2012-10047)) in Healthcare Associated Infections and Antimicrobial Resistance at Imperial College London in partnership with Public Health England (PHE) and the NIHR Imperial Patient Safety Translational Research Centre. AH also acknowledges the support of the Imperial College Healthcare Trust NIHR Biomedical Research Centre (BRC).

  • Disclaimer Any conclusions, interpretations or policy options may not reflect the commissioner's views. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, the Department of Health or Public Health England.

  • Competing interests None declared.

  • Ethics approval The study was assessed as a service evaluation on 23 May 2013 (AHSC Joint Research Compliance Office, Imperial College London and Imperial College Healthcare NHS Trust).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.