Article Text


Impact of online education on intern behaviour around joint commission national patient safety goals: a randomised trial
  1. Tim J Shaw1,
  2. Luise I Pernar2,
  3. Sarah E Peyre3,
  4. John F Helfrick4,
  5. Kaitlin R Vogelgesang4,
  6. Erin Graydon-Baker5,
  7. Yves Chretien6,
  8. Elizabeth J Brown4,
  9. James C Nicholson1,
  10. Jeremy J Heit7,
  11. John Patrick T Co8,
  12. Tejal Gandhi7
  1. Workforce Education and Development Group, University of Sydney, Sydney, Australia
  2. Department of Surgery and Center for Surgery and Public Health, Brigham and Women's Hospital, Boston, Massachusetts, USA
  3. The Center for Experiential Learning, University of Rochester Medical Center, Rochester, New York, USA
  4. Partners Healthcare International, Boston, Massachusetts, USA
  5. York Hospital, York, Maine, USA
  6. Harvard Medical School, Boston, Massachusetts, USA
  7. Partners Healthcare, Boston, Massachusetts, USA
  8. Graduate Medical Education, Partners Healthcare, Boston, Massachusetts, USA

  Correspondence to Dr Tim J Shaw, Workforce Education and Development Group, K01, The University of Sydney, NSW 2006, Australia; tim.shaw{at}sydney.edu.au

Abstract

Purpose To compare the effectiveness of two types of online learning methodologies for improving the patient-safety behaviours mandated in the Joint Commission National Patient Safety Goals (NPSG).

Methods This randomised controlled trial was conducted in 2010 at Massachusetts General Hospital and Brigham and Women's Hospital (BWH) in Boston, USA. Incoming interns were randomised to receive either an online Spaced Education (SE) programme consisting of cases and questions that reinforce over time, or a programme consisting of an online slide show followed by a quiz (SQ). The outcome measures included NPSG-knowledge improvement, NPSG-compliant behaviours in a simulation scenario, self-reported confidence in safety and quality, programme acceptability and programme relevance.

Results Both online learning programmes improved knowledge retention. On four of seven survey items measuring satisfaction and self-reported confidence, the proportion of SE interns responding positively was significantly higher (p<0.05) than that of SQ interns. SE interns demonstrated a mean of 4.79 (36.6%) NPSG-compliant behaviours (out of 13 total), while SQ interns completed a mean of 4.17 (32.0%) (p=0.09). Among those in surgical fields, SE interns demonstrated a mean of 5.67 (43.6%) NPSG-compliant behaviours, while SQ interns completed a mean of 2.33 (17.9%) (p=0.015). Focus group data indicated that interns found SE more contextually relevant and more engaging than SQ.

Conclusion While both online methodologies improved knowledge of the NPSGs, interns found SE more contextually relevant and engaging. SE had a significantly greater impact on both self-reported confidence and the behaviour of surgical interns in a simulated scenario.

  • Education
  • graduate
  • safety
  • quality of healthcare
  • graduate medical education
  • health professions education
  • patient safety
  • Information technology
  • medical education
  • continuous quality improvement


Introduction

The need to educate health professionals in patient safety is now well recognised and is starting to be addressed through a variety of mechanisms.1 These include the Joint Commission's mandate of patient-safety education for all hospital staff, and the Accreditation Council for Graduate Medical Education's requirement that residency programmes provide education for interns in safety and quality.2

Interns represent a particularly vulnerable population of health professionals due to their lack of experience and knowledge around patient safety.3 This lack of knowledge, combined with their workload and competing learning priorities, potentially places these young doctors and their patients at an increased risk of being involved in adverse events. Some studies suggest that errors and adverse events increase in certain hospital settings at the beginning of the academic year, coinciding with the influx of new junior doctors, a phenomenon known as the 'July Effect'.4–6

Online learning clearly presents new opportunities to deliver education to interns and other health professionals. However, while a growing number of programmes focusing on patient safety are being developed,7,8 there is little evidence in the literature as to which educational methodologies are effective in actually changing the behaviour of health professionals.9

Online Spaced Education (SE) is a novel, evidence-based form of online education that has been demonstrated in randomised trials to improve knowledge acquisition,10 boost retention11,12 and change behaviour.13–15 SE involves participants receiving short, case-based multiple-choice questions and feedback via e-mail in a reinforcing pattern over a number of weeks. The methodology is based on two core findings from psychological research: the spacing and testing effects. The spacing effect refers to the finding that educational encounters repeated over time increase the acquisition and retention of knowledge.16 The testing effect refers to the finding that testing does not merely measure knowledge, but alters the learning process itself to significantly improve knowledge retention.17,18

Each SE item consists of an evaluative component (a clinically relevant multiple-choice question) and an educational component (the correct answer with a detailed explanation). Participants submit an answer, receive immediate feedback and can compare their performance with that of their peers. To harness the educational benefits of the spacing effect, each SE item is then repeated at intervals ranging from 1 to 12 weeks. An adaptive algorithm tailors the length of the spacing intervals and the number of repetitions for each learner based on his or her performance: to be retired, a question must be answered correctly twice in a row. In a randomised trial, this adaptive SE algorithm increased learning efficiency by more than 35% compared with a non-SE programme with identical content.19

This study aimed to compare an SE programme developed for interns at Brigham and Women's Hospital (BWH) and Massachusetts General Hospital (MGH) with a more traditional online programme consisting of an online slide show followed by a multiple-choice quiz. Both programmes were designed to improve knowledge of, and compliance with, the National Patient Safety Goals (NPSGs). The study examined impact on behaviour, knowledge retention and user satisfaction.

Methods

Education programmes

Setting

This study was conducted in 2010 at the BWH and the MGH. Both hospitals are primary teaching hospitals of Harvard Medical School located in Boston, Massachusetts.

Intervention

The SE programme consisted of 16 case-based multiple-choice questions. The cases were developed by a multidisciplinary curriculum committee of physicians, nurses and educators in collaboration with current MGH and BWH residents in surgery and medicine. Cases were largely based on real-life scenarios previously encountered by the residents and were selected to illustrate teaching points from the NPSGs covered in the programme. After answering each multiple-choice question, interns received feedback consisting of the key take-home message and a brief description of what actually happened. The cases and associated multiple-choice questions were e-mailed to the interns as follows: every 2 days, each intern was sent an e-mail containing two cases. An intern who answered a question incorrectly was sent the same case 8 days later; an intern who answered correctly was sent the same question 15 days later. Any question answered correctly twice in a row was 'retired' for that intern, and an intern's course was complete once 80% of the questions had been retired. The e-mailing of initial questions and the repetition of questions were fully automated once an intern logged on to the SE platform.
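The paper does not describe how the SE platform was implemented; the following is a minimal sketch, in Python, of the scheduling and retirement rules as described above. All class, function and identifier names are hypothetical.

```python
from datetime import date, timedelta

# Scheduling parameters taken from the description above.
RETRY_INTERVAL = 8           # days until an incorrectly answered case repeats
REINFORCE_INTERVAL = 15      # days until a correctly answered case repeats
RETIREMENT_STREAK = 2        # retire after two consecutive correct answers
COMPLETION_THRESHOLD = 0.8   # course complete once 80% of questions retired

class SpacedQuestion:
    """Tracks one case-based question for one intern."""

    def __init__(self, question_id, first_send):
        self.question_id = question_id
        self.next_send = first_send
        self.correct_streak = 0
        self.retired = False

    def record_answer(self, answered_on, correct):
        """Update the correct-answer streak and schedule the next repetition."""
        if correct:
            self.correct_streak += 1
            if self.correct_streak >= RETIREMENT_STREAK:
                self.retired = True   # answered correctly twice in a row
                return
            self.next_send = answered_on + timedelta(days=REINFORCE_INTERVAL)
        else:
            self.correct_streak = 0
            self.next_send = answered_on + timedelta(days=RETRY_INTERVAL)

def course_complete(questions):
    """An intern's course ends once 80% of their questions are retired."""
    return sum(q.retired for q in questions) / len(questions) >= COMPLETION_THRESHOLD

# Example: a question answered incorrectly once, then correctly twice.
q = SpacedQuestion("npsg-handoff-01", first_send=date(2010, 6, 1))
q.record_answer(date(2010, 6, 3), correct=False)   # repeats 8 days later
q.record_answer(date(2010, 6, 11), correct=True)   # repeats 15 days later
q.record_answer(date(2010, 6, 26), correct=True)   # retired
```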

The slideshow-based online programme (SQ) was developed at BWH by safety and quality experts and contained 15 slides that discussed key aspects of the NPSGs. Upon reviewing the slides, interns completed a 14-question multiple-choice quiz.

The SE programme was matched to cover the same content as the SQ programme; the same nine 2009 NPSGs were covered in both.

Study design

Incoming interns at each hospital were randomised into two groups using the Research Randomizer software (http://www.randomizer.org). Subjects were drawn from surgical specialties (Surgery and OB-GYN) and medical specialties (Medicine, Anaesthesiology, Emergency Medicine and Psychiatry) at both hospitals, and participants were stratified by specialty. All incoming 2010 interns were eligible to participate. Randomisation was performed by a single research officer, and all researchers involved in the evaluation of performance were blinded to group assignment.
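The study used the Research Randomizer website for group allocation; purely as an illustration, a stratified two-arm split of this kind can be sketched in a few lines of Python (the function and the specialty labels below are hypothetical):

```python
import random

def stratified_randomise(interns_by_specialty, seed=2010):
    """Randomly split interns into two arms within each specialty stratum,
    keeping the arms balanced across specialties."""
    rng = random.Random(seed)   # seeded for reproducibility
    arms = {"arm_a": [], "arm_b": []}
    for specialty in sorted(interns_by_specialty):
        shuffled = list(interns_by_specialty[specialty])
        rng.shuffle(shuffled)
        half = len(shuffled) // 2   # an odd stratum gives arm_b one extra
        arms["arm_a"].extend(shuffled[:half])
        arms["arm_b"].extend(shuffled[half:])
    return arms

# Example: two strata of four interns each yield two balanced arms of four.
groups = stratified_randomise({
    "Surgery": ["s1", "s2", "s3", "s4"],
    "Medicine": ["m1", "m2", "m3", "m4"],
})
```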

At MGH, one group received SE and the other received no intervention. At BWH, one group received SE and the other received the SQ programme. The approaches differed because MGH was not planning to deliver online training in safety and quality to interns in the year of the study, whereas BWH planned to roll out SQ to all residents. At the conclusion of the project, MGH interns were given access to the SE programme so that no group was disadvantaged.

Outcome measures

A randomly selected sub-group of SE and SQ interns at BWH completed a central line simulation. Interns were blinded to the purpose of the simulation, and the simulation was included in an existing orientation programme for interns. Each intern participated individually in the simulation station, and interns did not observe each other's performance. The simulation sessions were timed and also digitally recorded for later review. Two reviewers used the recordings to score performance. For each intern, each examiner recorded whether the intern did or did not properly perform each of 13 key procedural tasks tested in the simulation station. Table 1 details the procedural tasks assessed and their relationship to the NPSGs.

Table 1

Matching the 2009 NPSGs to expected behaviours at the central line simulation station

Interns from all groups completed identical 15-item multiple-choice pre-intervention and post-intervention tests assessing knowledge of the NPSGs. The test was written by the project's multidisciplinary curriculum committee and piloted among a sample of BWH and MGH interns from the year preceding the study and among members of the curriculum committee not involved in its development. The pre- and post-tests were presented to participants in an online format.

Interns who received either SQ or SE intervention completed an online exit survey consisting of seven questions, which asked them to rate their confidence around the NPSGs and acceptability of the interventions on a 1–5 Likert scale ranging from ‘Strongly agree’ to ‘Strongly disagree’ (table 2).

Table 2

Survey questions

BWH surgical interns from both SQ and SE groups participated in a 45-min semi-structured focus group to provide qualitative feedback on their experience of the interventions. The group discussion was analysed by three investigators. Grounded theory was used in the analysis of the data and emergent themes were identified for consideration.

Statistical analyses

Using video replay, two independent observers recorded whether each intern did or did not perform each of the 13 mandatory tasks in the simulated environment. A log-transformed t test was used to analyse the interns' time to complete the simulation and the number of NPSG-compliant behaviours they performed.

Analyses of simulation performance by specialty did not include interns from Anaesthesiology (n=10), Emergency Medicine (n=12) or Psychiatry (n=6) due to their limited numbers. Bonferroni correction was performed to adjust for multiple comparisons.

Scores on the knowledge tests were analysed with paired t tests. Survey results were analysed using a two-sample test of proportions, comparing the fraction within each group whose score was 1 (‘Strongly agree’) or 2 (‘Agree’).
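The paper does not state which statistical software was used; the analyses described in this section map onto standard routines, sketched here in Python with SciPy and statsmodels (all variable names are hypothetical):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

def log_transformed_ttest(values_a, values_b):
    """Two-sample t test on log-transformed values (e.g. completion times),
    assuming all values are strictly positive."""
    return stats.ttest_ind(np.log(values_a), np.log(values_b))

def paired_knowledge_test(pre_scores, post_scores):
    """Paired t test comparing each intern's pre- and post-test scores."""
    return stats.ttest_rel(post_scores, pre_scores)

def agree_proportion_test(n_agree_se, n_se, n_agree_sq, n_sq):
    """Two-sample test of proportions on the fraction answering
    'Strongly agree' or 'Agree' in the SE versus SQ groups."""
    return proportions_ztest(count=[n_agree_se, n_agree_sq], nobs=[n_se, n_sq])

def bonferroni_adjust(p_values):
    """Bonferroni correction for multiple comparisons."""
    reject, p_adjusted, _, _ = multipletests(p_values, method="bonferroni")
    return reject, p_adjusted
```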

The study was approved by Partners Institutional Review Board.

Results

Three hundred and seventy-one trainees participated in the study (196 at BWH and 175 at MGH). Randomisation was effective, yielding an even distribution of trainees across interventions and between specialties.

Knowledge test and completion rates

At BWH, 88 of the 98 interns randomised to SE completed the pre-test, and 91 of the 97 interns randomised to SQ completed the pre-test. At MGH, 81 of the 85 interns randomised to SE completed the pre-test, and 79 of the 89 interns randomised to no intervention completed the pre-test. Among all 169 SE interns who completed the pre-test, 120 (71%) completed the post-test; among 91 SQ interns who completed the pre-test, 85 (93%) completed the post-test (p<0.001).

SE participants took 4–6 weeks to complete the SE course, and SQ participants completed the online quiz within this same 4–6-week period.

The post-test score was significantly higher than the pre-test score for the SQ group (average difference 1.0) and each of the two SE groups (1.2 at BWH and 1.4 at MGH, p<0.001 for all three differences; table 3). For the control group at MGH, there was no significant difference between pre-test and post-test scores. While the score increases of the pooled SE groups were greater than those of SQ interns, this difference did not reach statistical significance (p=0.18).

Table 3

Pre-/post-written knowledge test scores

Simulation

Inter-rater reliability between observers was high with a kappa coefficient of 0.9.
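The paper does not specify how the kappa coefficient was computed; assuming Cohen's kappa over the two raters' binary judgements for each (intern, task) pair, it can be obtained as follows (the data shown are illustrative, not the study's):

```python
from sklearn.metrics import cohen_kappa_score

# One entry per (intern, task) pair: 1 = task performed, 0 = not performed.
rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 1, 0, 1, 1, 1, 1, 0]

kappa = cohen_kappa_score(rater_a, rater_b)  # agreement beyond chance
print(f"kappa = {kappa:.2f}")
```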

Among the BWH interns, 53 in the SE group and 48 in the SQ group participated in the simulation. Among these randomly selected participants, SE interns demonstrated a mean of 4.79 (36.6%) NPSG-compliant behaviours (out of 13 total), while SQ interns completed a mean of 4.17 (32.0%) (p=0.09). Among those in surgical fields, SE interns demonstrated a mean of 5.67 (43.6%) NPSG-compliant behaviours, while SQ interns completed a mean of 2.33 (17.9%) (p=0.015) (table 4).

Table 4

Comparison of average simulation score within medical and surgically orientated residency programmes

Survey

On four out of the seven survey items, the fraction of SE interns responding ‘Strongly agree’ or ‘Agree’ was significantly higher (p<0.05) than the fraction of SQ interns. These corresponded to the items for ‘Improved confidence in handoff’, ‘Improved confidence re: infection control’, ‘Effective as a method to learn or reinforce key aspects of S&Q’, and ‘Engaging and enjoyable’. For the remaining three survey items, there was no statistically significant difference between the SE and SQ respondents. These results are displayed in table 2.

Focus group

Analysis of the focus group data from 30 Brigham and Women's surgical interns identified a number of themes. The interns found the SE online cases authentic and engaging: 'they made me feel interny' and '[I] was looking for a program that was geared to my anxiety not to kill anyone this year and this program met my need!' In contrast, participants did not find the SQ format as engaging or memorable: 'you just click through to the end…'

A number of interns indicated that receiving either programme just prior to commencing residency training raised the profile of patient safety: 'Patient safety must be a priority at Partners Healthcare as this was the first real interaction I had with the organisation.'

A number of interns in the SE programme found the content memorable and were able to reflect on the patients' outcomes: '…the fact that that patient died due to that adverse event really stuck in my mind.'

The repeating nature of the SE methodology was also viewed positively: 'I knew the cases were going to repeat so I made sure I concentrated on getting them right the first time.' However, not all interns felt SE's intensive nature would be suitable for the delivery of all types of education: '…sometimes I just want a resource I can go to and look up things on a particular area.'

Discussion

The results of this randomised controlled study demonstrate that both SE and the more traditional slide-based programme can be used to improve knowledge around the NPSGs. Although based on a relatively small number of participants, the simulation results showed SE to be superior to the traditional programme in changing the behaviour of interns in the surgical specialties. This finding was reinforced by the higher self-reported confidence scores in the SE group compared with the SQ group. Qualitative data from the survey and the focus group with surgical interns demonstrated that they preferred the SE delivery methodology to SQ. This study is one of a limited number to evaluate the impact of educational interventions on the behaviour of interns.7,13,20

The greater impact of SE on behaviour compared with the more traditional online programme is likely due to multiple factors. The literature indicates that continuing medical education (CME) that is case-based and interactive has a greater impact on knowledge and behaviour.21 There is also a growing body of literature indicating that SE, by repeatedly presenting content and testing knowledge, has a significant impact on knowledge retention and behaviour in medical practitioners.13–15 Findings from the focus group held with surgical interns at BWH who had participated in the study indicated that the SE content was more memorable than the traditional format. The cases, largely based on incidents that had actually happened, were felt to be realistic and directly applicable to the intern's context. There was strong consensus within the group around a statement from an intern who saw the programme as aligned to his 'anxieties' on entering practice.

The study found relatively poor performance by all interns at the central line simulation station. This suggests that there may be significant gaps in medical school education in this area, and underlines the importance of delivering orientation programmes in safety and quality to young doctors as they assume independent patient care responsibilities.

The fact that interns found the SE programme significantly more engaging than the SQ programme, and a more effective method for learning or reinforcing key aspects of safety and quality, is important: engaging young doctors in safety and quality during their early years represents a major challenge for programme directors worldwide.

The findings of this study raise a number of questions that warrant further research. First, why did the SE programme have a greater impact on interns in surgical fields (Surgery and OB-GYN) than on other interns, specifically those entering non-procedural specialties? This difference is potentially due to the way the cases were written: a number of cases emphasised peri-surgical care, and may therefore have captured the imagination of the surgical specialty interns more readily. This suggests that future SE courses should be designed with care to tailor questions to the specialties of participants, a notion supported by research indicating that CME that is contextually relevant to participants is more likely to be effective.22

Second, will the behaviour change detected at the simulation station translate to a change in practice on the ward? Presumably, if good safety and quality habits are developed, they will be practised in the clinical environment. These practices should be, and in many hospitals are, supported by standardisation of pre-procedure preparation, readily available hand sanitiser, and nursing or senior resident supervision. Additional studies, however, are necessary to determine what behaviours are actually displayed on the wards and how these affect patient safety outcomes.

More studies are also needed to answer the final question raised by this study: how long does the impact of either methodology last? Previous studies have indicated that SE increases long-term knowledge retention across a variety of clinical conditions,10,14 but it is unknown how long knowledge surrounding safety and quality will be retained.

Strengths of the study include the randomisation of participants, the large number of participants and the multiple evaluation methods used, including a measure of behaviour change in a simulated environment.

Limitations of the study include the absence of a control group that received no intervention in the simulation study. This prevents analysis of the impact of either methodology compared with no intervention, and may have prevented the study from revealing a significant impact on behaviour by the slideshow-based programme, or a larger impact by SE. This limitation was unavoidable given the requirements of intern training at the different sites, and the authors believe it is offset by the randomisation of participants. Other limitations include the small number of surgical interns, the fact that the study's power was constrained by the fixed number of incoming interns, and the availability of the simulation test only to BWH interns.

In conclusion, this study is one of a very limited number describing the impact of online learning programmes in safety and quality on the behaviour and knowledge of interns. This is important, given the substantial clinical role that interns play in health systems around the world. The study is particularly relevant as many organisations are in the process of introducing mandated online education around safety and quality, and the evidence is still not clear as to which types of online learning have the greatest impact. Results from this study indicate that SE, as a delivery methodology, has the potential for a greater impact on intern behaviour than a slideshow-based methodology. They also show that SE was more engaging for interns than the traditional format, and that both methodologies have similar resource requirements for implementation. The authors are currently conducting further studies on the impact of SE on the reporting of adverse events by junior doctors, and on the education of more senior interns and specialists.

Acknowledgments

The authors wish to thank Professor Bruce Dowton, Chief Operating Officer at Partners Harvard Medical International at the time of the study for supporting the project. The authors wish to thank the Brigham and Women's Hospital STRATUS Simulation Centre led by Chuck Posner for their generous support of the central line simulation. The authors wish to thank: Rick Van Pelt, Eddah Wakapa, Christine Dube, Lela Holden and Anthony Carpenter for assistance with writing cases; Greg Meyer, Susan Lacroix and Maria Yialamas for input into the Steering Committee, and also Megan Ryan for assistance in literature review.

Footnotes

  • All those listed as authors are qualified for authorship, and all who are qualified to be authors are listed as authors on the byline.

  • Funding This project was partially supported by a grant from Partners Healthcare. A small grant was awarded to JH as an intern to fund use of software.

  • Competing interests None.

  • Ethics approval The ethics approval was provided by Partners Institutional Review Board.

  • Provenance and peer review Not commissioned; externally peer reviewed.