Incidence and causes of critical incidents in emergency departments: a comparison and root cause analysis
M Thomas,1 K Mackway-Jones2

1 Emergency Department, Hope Hospital, Salford, UK
2 Emergency Medicine Research Group, Manchester Royal Infirmary, Manchester, UK

Correspondence to: Dr M Thomas, Emergency Department, Hope Hospital, Salford M6 8HD, UK; martin.thomas{at}srft.nhs.uk

Abstract

Objectives: To investigate the incidence of critical incidents in UK emergency departments (EDs) and to compare the root causes of such incidents between different EDs.

Methods: An observational study with semi-structured interviews and root cause analysis was conducted over a 12-month period. It was set in EDs in two teaching hospitals and two district general hospitals in the north-west of England. A single investigator identified critical incidents by a variety of means and conducted interviews with involved members of staff. The main outcome measures were rates of occurrence of critical incidents per 1000 new patients in each ED and root cause analysis of identified critical incidents according to a predetermined system.

Results: 443 critical incidents were identified. The rate of occurrence ranged from 11.1 to 15.9 per 1000 new patients. The most common root causes underlying these critical incidents related to organisational issues outside the EDs; internal management issues; human errors relating to knowledge or task verification and execution; and issues related to patient behaviours. By contrast, technical root causes occurred infrequently. Significant differences were shown between the EDs for three types of root cause relating to organisational issues outside the EDs and internal protocol and collective behaviour issues.

Conclusion: Critical incidents occur frequently in EDs. There are significant differences, as well as common themes, in the causes of these critical incidents between different EDs.


Errors and other preventable adverse events are common in medicine in general.1 2 Few studies have specifically investigated the rate of such events in emergency medicine, but indirect evidence from reports of complaints and litigation3–5 and from studies involving specific activities commonly performed in emergency departments (EDs), such as ECG and radiograph interpretation,6–10 suggests that there is no room for complacency in this field.

Medical Errors and Complications Causal Analysis (MECCA) is a risk management tool with origins in the chemical and steel industries.11 It has been specifically adapted for use in the medical domain, and has been tested successfully in small studies in emergency medicine, anaesthetics and blood transfusion.12–14 It takes a systems-based approach to critical incidents, identifying latent conditions as well as active failures.15 Near misses are studied in addition to incidents with actual harmful effects, allowing for the investigation of greater numbers of adverse events and thus enabling quantitative analysis in a manageable time period. In common with the approach now being promoted by the National Patient Safety Agency for investigating incidents in the UK,16 it uses root cause analysis to identify why adverse events occur.

No study comparing the causation of adverse events between different EDs has been reported previously. If root causes of adverse events are uniform across all EDs, then the same methods of risk reduction should be universally effective. However, if adverse events occur in different EDs for different reasons, then the approach to risk reduction must be customised.

The aims of this study were to establish the incidence of critical incidents (actual or potential adverse events) in four EDs, and to compare the root causes of such events between these EDs.

METHODS

Study design

The study was observational, with root cause analysis using the MECCA methodology.13 14

Setting

The study was conducted in four EDs in north-west England from February 1999 to January 2000. Each ED was studied for six 1-week periods spread throughout the year. Annual new patient attendance ranged from 63 000 to 68 000.

Data collection and analysis

Critical incident identification

A critical incident was defined as an event that had actual or potential harmful effects on the outcome of the management of a patient or group of patients. Critical incidents were sought in an identical way by a single investigator (a research fellow in emergency medicine) using the following methods:

  • Direct critical incident reporting by medical and nursing staff.

  • Review of the ED records of each patient attending the ED during the week of investigation.

  • Direct observation of the ED.

  • Review of radiology and laboratory reports.

Narrative description of critical incident

When a critical incident had been identified by the methods described above, a confidential semi-structured interview was conducted with the staff concerned. From the information obtained in these interviews, a narrative description of the events leading up to the critical incident was composed. If staff could not be interviewed within 3 days of a critical incident, their recall of the events leading to the incident was considered potentially unreliable, and the incident was excluded from root cause analysis.

Construction of causal trees

Causal trees for each critical incident were constructed.11 This involved starting with the end result of a critical incident and looking back to describe any factors that led to it. Each factor was then examined and broken down into subfactors, with this process continued until smaller causative factors could not be identified. The causative factors found at the end of this process are considered to be the root causes of the critical incident.
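The tree-and-leaves logic described above can be sketched as a small data structure. This is only an illustration: the incident, the factor wording and all names here are invented, not taken from the study's causal trees.

```python
from dataclasses import dataclass, field

@dataclass
class CausalNode:
    """One contributing factor in a causal tree (all wording is illustrative)."""
    description: str
    subfactors: list["CausalNode"] = field(default_factory=list)

def root_causes(node: CausalNode) -> list[str]:
    # The leaves of the tree are the root causes: factors that could not
    # be broken down into any smaller causative factors.
    if not node.subfactors:
        return [node.description]
    causes: list[str] = []
    for sub in node.subfactors:
        causes.extend(root_causes(sub))
    return causes

# Hypothetical incident: a fracture missed on a radiograph.
tree = CausalNode(
    "Fracture missed on initial radiograph review",
    [
        CausalNode("Junior doctor unfamiliar with subtle fracture patterns"),
        CausalNode(
            "No senior review of radiographs overnight",
            [CausalNode("No consultant cover rostered after midnight")],
        ),
    ],
)
print(root_causes(tree))
```

Working backwards from the end result and recursing into subfactors in this way leaves only the irreducible factors, which are then classified as described in the next section.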

Root cause classification

Each root cause in each causal tree was classified according to a predefined model. The initial classification used was the Eindhoven Classification Model (ECM): Medical Version.11 Some modifications were later made to the ECM: Medical Version and the definitions used to make them more applicable to the ED (see Appendix 1), and each root cause was subsequently re-classified. Briefly, the model classifies root causes into technical and equipment-related, organisational (or system-based), human and patient-related groups. Within the first three of these groups there are further divisions into those arising outside the ED and those that arise within the ED. Internal root causes have further specific subdivisions. Patient-related root causes are divided into those occurring as a result of actions performed by an individual patient and those arising from multiple patient actions (such as a sudden large influx into the ED).

For the purposes of assessment of the interobserver reliability of root cause derivation and classification, we selected the descriptions of 50 critical incidents at random. Root causes obtained by the main investigator for these critical incidents were compared with those obtained by an independent observer.

Outcome measures

The main outcomes were the numbers of critical incidents identified in each ED and the number of times each different type of root cause occurred.

Primary data analysis

The number of critical incidents occurring per 1000 new patients was calculated for each department. The demographic data of included and excluded critical incidents were compared using the Student t test and χ2 tests, with p values <0.05 being considered significant. The Kruskal-Wallis test was applied to each root cause in turn to assess differences in their frequency between departments. The level of significance was set at p<0.002 (Bonferroni correction for 21 different root causes17). For root causes where an overall statistically significant difference was found, Mann-Whitney tests were performed to identify between which departments the differences lay. Six separate comparisons were made between each pair of departments (p<0.008, Bonferroni correction17). For interobserver reliability testing the measure of reliability was calculated as the percentage of agreeing root causes with 95% confidence intervals (CI). A kappa test of interobserver reliability is not feasible, given the relatively large number of possible outcomes (root cause types).
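The Bonferroni-corrected thresholds quoted above follow directly from dividing the conventional significance level by the number of comparisons, as this worked check shows:

```python
alpha = 0.05

# 21 root cause types, each tested with a Kruskal-Wallis test
kw_threshold = alpha / 21   # ~0.0024, reported in the text as p < 0.002

# 6 pairwise Mann-Whitney comparisons between the four departments
mw_threshold = alpha / 6    # ~0.0083, reported in the text as p < 0.008

print(f"{kw_threshold:.4f} {mw_threshold:.4f}")
```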

SPSS 9.0 for Windows was used for all statistical calculations, except for confidence interval calculations for which Stata Statistical Software 5.0 was used.

RESULTS

Characteristics of critical incidents

A total of 443 critical incidents were identified, 94 of which were excluded because interviews with the relevant staff could not be conducted within 3 days, leaving 349 critical incidents for MECCA analysis. A total of 852 root causes were identified as having contributed to these critical incidents.

Table 1 summarises the basic data regarding included and excluded critical incidents. It can be seen that incidents not relating to a specific patient (such as significant overcrowding and delayed admissions) and incidents occurring in certain departments were more likely to provide sufficient data for analysis.

Table 1 Basic data regarding included and excluded critical incidents

Root cause analysis

Critical incidents were detected at a rate ranging from 11.1 (95% CI 8.8 to 13.8) to 15.9 (95% CI 13.5 to 18.7) per 1000 patient attendances. Table 2 shows the frequency of occurrence of each type of root cause for each ED in the 349 analysed critical incidents. The p values shown indicate the significance of any differences, as calculated by the Kruskal-Wallis test.

Table 2 Frequency of occurrence of different root causes with comparison between emergency departments

Further analysis was performed for the Organisational External, Organisational Protocols and Organisational Culture root causes.

Organisational External

Significant differences (p<0.001) were shown between departments B and C, and between departments C and D. An example of a commonly occurring Organisational External root cause is an inability to move patients out of the ED due to lack of bed space on inpatient wards.

Organisational Protocols

Significant differences were found between department C and department A (p<0.001), department D (p<0.001) and department B (p = 0.007). Organisational Protocol root causes occurred, for example, as a result of not using formal triage guidelines for patient assessment.

Organisational Culture

Statistically significant differences were shown between department C and department A (p<0.001), department D (p<0.001) and department B (p = 0.006). Organisational Culture root causes again often related to triage and the collective behaviour in triaging certain conditions at a lower level than would generally be considered appropriate.

Interobserver agreement

For the 50 critical incidents randomly selected for the test of interobserver reliability, 90 root causes had been identified and classified by the main investigator. The independent observer identified 84 root causes for these same critical incidents. There was exact agreement in 148 of these 174 identified root causes. Thus there was agreement for 85.1% of root causes (95% CI 79.8% to 90.4%). Most of the disagreements arose in relation to root causes involving Human Knowledge, Verification and Execution, usually where the same root cause was identified by both observers but given a different classification.
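The reported agreement figure can be reproduced from the counts given above. The snippet below assumes a normal-approximation (Wald) interval for a proportion; the paper does not state which interval method was used.

```python
import math

agree, total = 148, 174          # agreeing root causes / all identified root causes
p = agree / total
se = math.sqrt(p * (1 - p) / total)   # standard error of the proportion
lower, upper = p - 1.96 * se, p + 1.96 * se

print(f"{100 * p:.1f}% (95% CI {100 * lower:.1f}% to {100 * upper:.1f}%)")
# Reproduces the reported 85.1% (95% CI 79.8% to 90.4%)
```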

DISCUSSION

Critical incident rates ranged from 11.1 to 15.9 per 1000 new patients. Possible reasons for this variation include differences in patient numbers, differences in reporting rates, differences in skill mix and the presence or absence of further factors as revealed by root cause analysis.

Ninety-four critical incidents (21.2%) were excluded because relevant staff could not be interviewed within 3 days. No statistically significant difference was found when these excluded critical incidents were compared with the included critical incidents in terms of the sex and age of the patients involved and the days on which they occurred. Significantly more critical incidents were included where no specific patient was involved than in patient-specific cases (p<0.05). This is probably because, when a critical incident concerned a general situation, more members of staff were fully aware of the surrounding circumstances, making it more likely that interviews could be conducted.

Two organisational root causes—Organisational External and Organisational Management—were found to contribute most frequently to critical incidents, with these accounting for approximately 22% of the total. Human factors relating to Knowledge, Verification and Execution, as well as Patient-related factors, also occurred in a large number of critical incidents. Technical root causes occurred relatively infrequently. Approximately 5% of root causes were unclassifiable by the modified ECM: Medical Version, with no new root cause category being identified when these were re-examined.

Significant differences were shown between EDs for three Organisational root causes (External, Protocols and Culture). Critical incidents were significantly more likely to involve the Organisational External root cause (such as inability to move patients from the ED to inpatient beds) at departments B and D than at department C. For both the Organisational Protocols and the Organisational Culture root causes, the significant differences lay between department C and all other EDs. This suggests that critical incidents were significantly more likely to occur as a result of the lack of availability or quality of protocols, such as triage guidelines, and as a result of collective behaviours at department C than at any of the other EDs. It is noteworthy that all significant differences lay in the root causes broadly categorised as “Organisational” rather than in the Technical, Human, and Patient-related areas. This suggests that there may be little difference between EDs in terms of their equipment, their personnel or the patients attending them, but that differences in the organisation and management of EDs can lead to differences in their critical incident rates.

A major strength of this study was its use of a single investigator applying an identical approach to critical incident identification and analysis in each ED, allowing valid comparisons between departments. Sufficient critical incidents were identified to permit quantitative analysis of a kind not previously reported. Multiple methods of critical incident identification were employed, maximising the pick-up rate and minimising bias.

Limitations of the study include the fact that it is likely that other critical incidents occurred which were not identified, and thus the true incidence of critical incident occurrence is likely to be higher than that found. It is possible that the critical incident rate is different in other EDs in the UK and in EDs in other countries. The presence of the investigator may have influenced the performance of staff being observed (the “Hawthorne effect”). However, the same effect would have been present in each department and would thus have been controlled for during interdepartmental comparison. It is possible that cultural differences between departments led to differences in the levels of reporting of critical incidents although, as direct reporting was only one of four methods used to identify critical incidents, it is unlikely that any under-reporting would have significantly affected the results. It is possible that changes in the UK provision of emergency care, including different staffing levels and the introduction of the 4 h performance target, would mean that the results would be different if the study were to be repeated now.

Few previous studies have investigated the frequency of failures in emergency medicine. Sakr et al18 studied the care of minor injuries and found that there was an important error in at least one stage of the care process in 9.2% of the patients seen by nurse practitioners and in 10.7% of the patients seen by junior doctors. These are far higher rates than were detected in the current study, although methodological differences meant that failures in history-taking and examination were more likely to be detected than in the current study, and many were described by the authors as being clinically unimportant. Their focus on minor injuries may also partly explain the difference in failure rates detected in these studies.

Stella et al19 investigated critical incidents in an Australian ED. They studied "corrective strategies" for critical incidents by requesting suggestions on the critical incident reporting form. In contrast to the results of the current study, "improved communication" was identified as the most important corrective strategy, appearing in 22% of incident reports. "Training and education" was also considered important (20% of reports), as was "improved supervision" (16%). A requirement for additional equipment was cited surprisingly often (15% of reports), and additional experience in 12%.

Fordyce et al20 investigated errors occurring over the course of a week in an American ED by directly approaching staff members every few hours and asking them if they were aware of any errors that had occurred. They discovered that errors occurred at a rate of 18 per 100 patients. However, only 0.36 adverse events per 100 patients arose as a result of these. As they were investigating errors rather than critical incidents and used only direct reporting to investigators by staff to identify these errors, it is difficult to compare the results with those of the present study.

The results of this study have profound implications for all EDs. The null hypothesis that the same reasons lie behind critical incidents occurring in different EDs has been rejected. Hence, a uniform approach to risk reduction across all EDs is unlikely to be successful. Specifically for the departments included in the study, it has been shown that solving organisational problems outside the EDs themselves will yield greater benefit at departments B and D than elsewhere. At department C, attention focused on certain protocol and cultural issues would be of the greatest benefit. However, as well as highlighting certain differences between the EDs investigated, MECCA analysis also identified common themes. Management decisions, mainly relating to junior doctors working unsupervised overnight, were a common factor in critical incidents in all the EDs studied, and it is likely that this is also the case elsewhere.

The results also have strong implications for the training of medical and nursing staff, as root causes involving lack of knowledge occurred frequently. Specific information from the causal trees in which knowledge failures occurred could be used to focus junior doctor induction programmes and teaching. Such improvements would potentially make lack of supervision less of a problem as focused knowledge increased. It is evident that castigating ED staff will not bring about long-term solutions to the common root causes of critical incidents identified by this study. Instead, changes are needed to the organisation of the departments or of the hospital as a whole, and to the training structures of the staff working within them.

Future studies should be performed to determine whether implementation of the changes suggested by the results reported here has any appreciable effect on the incidence of critical incidents or on the nature of root causes. It is possible that implementing changes would have unpredictable detrimental effects as well as beneficial ones.

CONCLUSION

Critical incidents have been shown to occur relatively frequently in EDs. Root cause analysis using the MECCA risk management tool has been shown to be a powerful means of analysing the root causes of risk in emergency medicine. It has been demonstrated, using MECCA, that EDs differ in the causes of the risks that they run, and hence in the ways in which these risks may be controlled. MECCA could be used in other EDs to assess their own critical incident root causes and compare them with the results presented here.

Acknowledgments

Dr Brian Farragher of the University of Manchester Statistical Support Unit advised on statistics. In addition, we gratefully acknowledge the help provided by Dr N Edwards, Clinical Lecturer in Anaesthesia and Intensive Care, University of Adelaide and the late Dr R Boyd, Consultant in Emergency Medicine, Lyell McEwin Hospital, South Australia in interobserver reliability assessment.

REFERENCES

Footnotes

  • MT conducted the study and wrote the initial draft of the paper. KMJ conceived, designed and oversaw the study, contributed to the final draft of the paper and is the guarantor.

  • Funding: Funded by an NHS R&D grant.

  • Competing interests: None.