Objective To systematically review the latest evidence for patient safety education for physicians in training and medical students, updating, extending and improving on a previous systematic review on this topic.
Design A systematic review.
Data sources Embase, Ovid Medline and PsycINFO databases.
Study selection Studies including an evaluation of patient safety training interventions delivered to trainees/residents and medical students published between January 2009 and May 2014.
Data extraction The review was performed using a structured data capture tool. Thematic analysis also identified factors influencing successful implementation of interventions.
Results We identified 26 studies reporting patient safety interventions: 11 involving students and 15 involving trainees/residents. Common educational content included a general overview of patient safety, root cause/systems-based analysis, communication and teamwork skills, and quality improvement principles and methodologies. The majority of courses were well received by learners, and improved patient safety knowledge, skills and attitudes. Moreover, some interventions were shown to result in positive behaviours, notably subsequent engagement in quality improvement projects. No studies demonstrated patient benefit. Availability of expert faculty, competing curricular/service demands and institutional culture were important factors affecting implementation.
Conclusions There is an increasing trend for developing educational interventions in patient safety delivered to trainees/residents and medical students. However, significant methodological shortcomings remain and additional evidence of impact on patient outcomes is needed. While there is some evidence of enhanced efforts to promote sustainability of such interventions, further work is needed to encourage their wider adoption and spread.
- Medical students
- Patient safety
- Physician trainees
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
This systematic review provides an update of the evidence on courses teaching core concepts of patient safety to medical students and trainees/residents.
The results confirm an increasing trend for developing educational interventions in patient safety delivered to trainees/residents and medical students.
However, we found that significant methodological shortcomings in studies reporting such interventions remain and additional evidence of impact on patient outcomes is needed.
While there is some evidence of enhanced efforts to promote sustainability of such interventions, further work is needed to encourage their wider adoption and spread.
The main limitations of this systematic review relate to the quality of the included studies, and to only including articles published in the English language.
Educational interventions for quality and safety improvement have garnered increasing interest over recent years. The importance of such interventions is acknowledged by the development and integration of dedicated patient safety and quality improvement curricula and frameworks into medical education at all levels. For example, the Association of American Medical Colleges (AAMC) endorses the introduction of formal quality improvement education from medical school through to postgraduate training and continuing medical education.1 ,2 The Accreditation Council for Graduate Medical Education (ACGME)3 and CanMEDS4 ,5 competency frameworks incorporate essential competencies relating to quality and safety for medical professionals. The WHO has developed a Patient Safety Curriculum Guide for Medical Schools6 and, recently, a multiprofessional edition.7 Such curricula aim to guide and support educators in developing and implementing educational programmes in patient safety.
There has been a significant increase in the number of publications relating to patient safety courses, particularly those aimed at residents. A systematic review on teaching patient safety and quality improvement to medical students and residents was published in 2010,8 identifying 41 studies published between January 2000 and January 2009, of which 27 included an evaluation of the described intervention. This review identified significant methodological limitations in most studies, including low response rates, single centre recruitment and small sample sizes (median=41 participants per study, IQR 20–106).8 Although most interventions were well received by participants, and resulted in improvements in safety and quality knowledge scores, few studies were able to demonstrate changes in learners’ behaviour or potential patient benefit.8 The reviewed articles also identified multiple barriers to sustainable integration of the courses, which spanned learner, faculty and institutional factors.8
Patient safety education is a rapidly emerging field and it is likely that, in part due to the recent development and implementation of patient safety curricula and frameworks highlighted above, an increasing number of articles have been published since this last systematic review, perhaps addressing some of the aforementioned methodological limitations of the older studies. The aim of this study was thus to perform a focused systematic review of research reporting courses that teach core concepts in patient safety, and that target medical students and junior physicians, published since 1 January 2009. We describe the educational content and teaching methods employed, evaluate the learning outcomes achieved and explore factors influencing implementation of these patient safety courses.
Data sources and search strategy
We prespecified the methods utilised in this systematic review and present them in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines.9 A literature search was performed using the electronic databases of Embase (1996 to 2014 Week 18), Ovid MEDLINE (1996 to April Week 5 2014) and PsycINFO (2002 to May Week 1 2014). Although the focus of this systematic review was to identify papers published since the last systematic review covering this topic (ie, from January 2009 onwards), we used a search strategy incorporating earlier start dates. This allowed us to evaluate the sensitivity of the search by checking that it retrieved five reference papers we had identified as highly relevant before performing the literature review.10–14 All five reference papers were retrieved, and we were therefore able to begin our search from the end of the data collection period of the previous systematic review covering this topic.8
Our search strategy (see online supplementary appendix 1) incorporated the two broad themes of ‘medical education’ and ‘patient safety’, and the content areas were combined using the Boolean operator ‘and’; a pilot search revealed that ‘medical education’ successfully encompassed both ‘education’ as the intervention and ‘medical students and/or trainees and/or residents’ as the population of interest. Search terms were generated with the assistance of key words from core reference texts15 and relevant articles,8 and a combination of MeSH terms and free text words (truncated wherever appropriate) was used to maximise the sensitivity of the search. We limited the search to human studies published in the English language, and removed duplicates. Additional articles were sought through hand searching of reference lists of included studies.
As our data comprised studies that were previously published and publicly available, this study did not require ethical approval.
We included articles that described and evaluated an educational intervention that explicitly exposed medical students and/or trainees/residents to core concepts of patient safety. Articles that included medical students and/or trainees/residents in addition to other population groups were not excluded. To be included, articles were required to contain sufficient empirical data for analysis (eg, conference proceedings were excluded), the educational intervention had to include patient safety as core content, and the study had to include an evaluation of the educational intervention. Detailed eligibility criteria can be found in online supplementary appendix 2.
Article review process
Titles of the initial 4027 articles identified by the search strategy outlined above were reviewed by an academic physician with expertise in patient safety and medical education (MA). After excluding articles with titles that were clearly irrelevant to the topic at hand, the remaining abstracts were reviewed for inclusion independently by MA and a second physician with expertise in medical education (MAK). Disagreements were resolved through consensus, involving a third reviewer with expertise in patient safety and medical education (NS), as necessary.
Data extraction and quality assessment
Consistent with Best Evidence Medical Education (BEME) recommendations,16 administrative data (including publication details and country of origin), topic-related data (including details of the educational intervention and number and type of participants) and research-related data (including methodology and results) were extracted from the studies that were identified as relevant. Factors influencing curricular implementation of the intervention were categorised under four broad headings (learner factors, faculty factors, curricular factors and learning environment factors) devised by the authors of the previous systematic review on this topic.8 Only factors that were explicitly described by the authors of the papers included in this systematic review were counted and categorised in this manner.
Assessing the quality of interventions is a well-documented challenge facing systematic reviews of educational interventions.17 The BEME review protocol recommends a system for assessing the quality of studies based on grading,16 but as no specific guidance as to how to apply these grades is provided, we assessed quality by extracting information on both stated and perceived limitations of the study as assessed by study design, sample size, completeness of data and overall coherence between study aims, methods and conclusions.
Given the anticipated heterogeneity in study designs and outcomes as per the previous systematic review on this topic,8 quantitative synthesis of the data (ie, meta-analysis) was not performed. Simple quantitative statistics were used to report on educational content, methodologies used, study populations and learning outcomes (where reported).
Studies were categorised by the learning outcomes reported by the authors, using the modified version of Kirkpatrick's levels of evaluation adopted by the BEME collaboration as a grading standard for systematic reviews.16 This assesses impact on learners’ satisfaction (level 1), changes in learners’ attitudes (level 2a), measures of learners’ knowledge and skills (level 2b), change in learners’ behaviour (level 3), changes to clinical processes/organisational practice (level 4a) and benefits to patients (level 4b). Correspondingly, the results of this systematic review are presented according to the Kirkpatrick learning outcome assessed.
The search strategy retrieved an initial yield of 4027 articles. The subsequent title screen identified 304 potentially relevant titles for the abstract review stage. Independent review of abstracts against the eligibility criteria by two reviewers (MAK, MA) followed by consensus resulted in 61 papers for review. The agreement between the reviewers was excellent (κ=0.917, 95% CI 0.871 to 0.963). Review of the full text identified 25 papers that fully met the eligibility criteria for inclusion. An additional eligible paper was identified from hand searching of relevant reference lists, resulting in 26 papers for analysis. This process is summarised in figure 1.
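The agreement statistic reported above, Cohen's κ, compares the observed proportion of agreement between two reviewers with the agreement expected by chance from each reviewer's marginal inclusion rates. As a minimal sketch of the calculation (the decision lists below are invented for illustration; they are not the review's actual screening data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal proportions
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include (1) / exclude (0) decisions for eight abstracts
reviewer_1 = [1, 1, 0, 0, 1, 0, 0, 1]
reviewer_2 = [1, 1, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(reviewer_1, reviewer_2))  # prints 0.75
```

A κ of 1 indicates perfect agreement and 0 indicates agreement no better than chance; values above 0.8, such as the 0.917 reported here, are conventionally interpreted as excellent.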
Characteristics of included studies and study settings
Table 1 summarises the main characteristics of the included studies, including study design, participant number and type, and course structure and content. The majority of the 26 studies were conducted in the USA (n=17, 65%). Of the remaining studies, five (19%) came from the UK,18–22 two (8%) from the Netherlands,23 ,24 one from China25 and one from the Republic of Korea.26 Participants comprised trainees in 15 (58%) studies (often resident or specialty trainee/registrar grade), and medical students in the remainder. No studies recruited students and trainees/residents simultaneously. Participants learned in interdisciplinary groups in six of the studies: four involved students,18 ,27–29 one included junior and senior physicians,19 and another comprised residents and faculty.30 One study involved senior physicians (attending or consultant grade level) participating as part of faculty development activities, although their learning outcomes were not directly assessed.31
Characteristics of the courses
Features of the courses including the teaching modalities employed and the core content covered are summarised in table 2. The majority of courses employed a mixture of didactic and experiential teaching methods. Small-group discussions/workshops and lectures were commonly used approaches: n=14 (54%) and n=12 (46%) of courses, respectively. Multimedia approaches including web-based content, videos and/or DVDs were also employed in 10 studies (38%), mostly as an adjunct to other approaches and less so as a central feature of the course. Case-based learning utilising real-life examples of adverse events identified by either participants themselves20 ,31–35 or presented by patients,22 was used as a core feature in seven (27%) courses. Project work (quality or safety improvement) was used in six studies (23%), and role-play and simulation were used in only four studies (15%). The latter is in contrast to studies of non-technical skills training (such as team training), which typically rely on resource-intensive simulation-based teaching modalities.36
The most common content of the courses included a general overview of patient safety (including key terminology and the emergence of patient safety) and root cause analysis and/or systems-based analysis, featured in 17 (65%) and 16 (62%) studies, respectively. Communication and teamwork skills (both core ‘non-technical skills’) education was included in 13 (50%) studies, and quality improvement principles and methodologies in 12 (46%) studies. ‘Human factors (engineering)’ and ‘systems thinking’ were also covered in some studies, although these phrases were typically ill-defined by authors. Other less frequently covered content included medication safety, error disclosure, and incident reporting methods and barriers. Only three studies (12%) explicitly based their curricular content on the WHO's Patient Safety Curriculum Guide for Medical Schools; interestingly, all three were studies conducted outside of the USA.21 ,25 ,26 Of studies conducted in the USA, nine (53%) cited regulatory standards in education as the rationale for their work. This included reference to the AAMC Medical Schools Objective Project report, which recommends that medical schools deliver patient safety education to undergraduates,1 and the ACGME,3 which lists common competencies in practice-based learning and systems-based practice.
Study design and quality assessment
The majority of studies employed a before-and-after study design (n=18, 69%); four of these included a control group: two involved a contemporaneous control,22 ,37 one a historical control,32 and one a randomised contemporaneous control group.38 Only three (12%) studies included additional long-term follow-up, at 6 weeks,22 6 months,23 or ‘between 1 and 12 months’.37 Five (19%) studies involved a postintervention evaluation only. One study was a randomised controlled trial; however, due to logistical constraints, the control group did not undergo matched assessment of behavioural outcome measures.39
The median sample size across studies was 109 participants (IQR 52–188), and one outlier study had 1169 participants;20 some studies did not clearly indicate the exact number of participants. For example, one study was described as involving ‘over 787’ participants pooled over several years.28 The majority of studies were conducted within a single institution (n=18, 69%). Other common methodological limitations included poor response rates,19 ,22 ,35 ,40 ,41 inadequate description of the course19 ,39 and/or inadequate reporting of results.22 ,28 ,29 ,33 Limitations relating to the assessment tools employed are described in the following section.
Study evaluation and main findings
Table 2 displays the levels of evaluation assessed across the studies categorised by participant type (medical student or trainee/resident). Studies involving students primarily focused on participant satisfaction, attitudes and knowledge/skill acquisition, with less emphasis on behavioural change. In contrast, nearly all (n=13 of 15, 87%) studies involving trainees/residents examined behavioural change as a learning outcome, with six (23%) studies examining organisational impact through participant engagement in quality improvement work.19 ,20 ,35 ,42–44 None of the studies explored patient benefit (level 4b) as a result of the course.
The outcome measures, main findings and level(s) of evaluation reported in each study are displayed in table 3. Assessment tools used and main findings are discussed further under the respective Kirkpatrick's level headings below.
Level 1: participation/satisfaction
This was assessed in 19 (73%) studies. Satisfaction was mostly assessed using postintervention questionnaires requiring responses on a Likert scale. Three studies supplemented satisfaction questionnaires with either focus groups18 ,39 or interviews with participants.19 Satisfaction with the courses was generally high, although response rates were poor in some studies.22 ,35 ,41 Two studies evaluating courses that included web-based content reported poor uptake34 or lower satisfaction rates19 with the web-based learning component.
Level 2a: attitudes/perceptions
Patient safety attitudes/perceptions were assessed using a variety of tools in 20 (77%) studies. Bespoke questionnaires comprising items mapped to course learning objectives were used in 11 studies.18 ,26–28 ,30 ,31 ,33 ,39 ,40 ,42 ,44 Two studies used modified versions of validated tools,21 ,34 and a further four studies used modified versions of previously published questionnaires.20 ,23 ,25 ,32 One study used the previously published ‘Attitudes to Patient Safety Questionnaire’.22 One study assessed systems-based thinking using a validated scale (‘System Thinking Scale’, STS),41 and one study assessed perceived patient safety culture using the modified ‘Hospital Survey on Patient Safety Culture’.19 Of studies evaluating patient safety attitudes preintervention and postintervention, the majority reported significant improvement in at least some domains. The study assessing systems-based thinking reported significant improvement in STS scale scores postintervention,41 while the study evaluating perceived patient safety culture reported no change postintervention.19
Level 2b: knowledge/skill acquisition
Fourteen (54%) studies evaluated knowledge acquisition using objective and/or self-report measures. Objective tests were used in 12 studies;18–22 ,31 ,37–39 ,41–43 these comprised multiple choice or true/false questions mapped to course learning objectives. One of these studies used knowledge questions from the ‘Attitudes to Patient Safety Questionnaire’.22 Most studies demonstrated significant improvements in knowledge acquisition, although in one study a poor response rate precluded statistical testing,19 in another no comparison between preintervention and postintervention scores was reported,43 and in yet another, increases in performance were observed postintervention but no statistical analyses were reported.22
Learners’ patient safety skills were assessed in seven (27%) studies,20 ,28 ,32 ,39 ,41–43 all of which employed self-reported measures. Six of these studies demonstrated significant improvement in scores for most or all items, with the remaining study not reporting a comparison between preintervention and postintervention scores.43
Level 3: behavioural change
Changes in safety-related behaviours were assessed in 16 (62%) studies, in a number of ways: behavioural intentions assessed via questionnaire;23 ,41 ,42 self-reported safety-related actions (eg, incident reporting);19 ,20 ,22–24 ,30 ,37 ,38 or safety-related actions determined objectively.20 ,21 ,33 ,35 ,39 ,40 ,44 Of these latter studies, objective assessment included qualitative assessment of patient safety observations,21 National Patient Safety Goal (NPSG)-related behaviours assessed via simulation,39 engagement in quality improvement work20 ,35 ,44 and incident reporting assessed via submissions to formal hospital reporting systems.33 ,40 All studies reported favourable changes in safety-related behaviours, with the exception of one study, which found that whereas learners’ intentions to report significantly improved postcourse, actual (self-reported) incident reporting did not increase following the course.23 Notably, all but 3 of the 16 studies that evaluated change in participant behaviour were conducted with trainees/residents rather than medical students.
Level 4a: organisational change
Six (23%) studies evaluated organisational change as an outcome measure of their course. Each of these studies involved learner engagement in quality improvement work,19 ,20 ,35 ,42–44 and all reported subsequent positive impact at organisational level, including through the initiation/continuation of quality improvement projects/roles.20 ,35 ,42–44 Three-quarters of the participants in one study indicated they had a formal or informal role in patient safety or quality improvement within their current practice environment.43 The team-based ‘Training and Action for Patient Safety’ (TAPS) programme found that 8 of the 11 interdisciplinary teams were able to demonstrate improvements in patient safety outcomes and/or practices through the use of weekly data plotted on run charts.19
Factors influencing curricular implementation
Table 4 displays the key factors influencing curricular implementation that we identified, with selected illustrative quotes and categorised under previously designed framework headings.8 In terms of learner factors, many studies identified the need to ensure personal/clinical relevance of the material to learners, with opportunities to apply the learning in order to enhance engagement.20 ,39 For studies involving physicians, competing clinical commitments were identified as barriers to engagement.21 In studies employing interprofessional modalities, improved teamwork and communication were welcome additional benefits of the course.19 However, difficulties in delivering such interprofessional learning were highlighted.28 Most studies identified the need for adequate faculty with protected time to support delivery of the course; faculty members’ own competing clinical commitments were a barrier to their engagement.20 Some commented on the newfound maturity of the faculty infrastructure,41 while others aspired to broaden their faculty infrastructure to ensure sustainability of the course.42 Faculty role-modelling and clinical credibility were noted to be important influencing factors.25
Competing curricular demands were commonly cited as barriers to sustainability of the courses, with some suggesting instituting the course as a mandatory requirement to ensure protected time for learning.21 Promoting patient safety as a science was felt to be a key factor for successful implementation by the authors of one study.41 The majority of studies appreciated the need to strike a balance between didactic and experiential teaching modalities, and the need for sufficient reinforcement while avoiding repetition and duplication of material. The authors of one study recognised that delivering a centrally administered intervention to the whole trainee population may ensure greater sustainability of the course than delivering it to a sample of the cohort.31
In terms of institutional/learning environment factors, many studies recognised institutional patient safety culture as a key determinant of successful implementation.23 Ensuring a safe learning environment to allow open discussion of sensitive material (eg, relating to adverse events) was recognised as being of particular importance when delivering education on patient safety. Forging improved links between the service provider (hospital) and the training providers was recognised as key to ensuring sustainability, particularly for courses that aimed for engagement in quality improvement work as a follow-on to the course.35
Six (23%) studies identified in this review reported data from courses that had been sustained over at least 2 years,18 ,20 ,27 ,31 ,35 ,43 two studies reported ‘booster’ courses designed to enhance/reinforce established safety educational interventions delivered earlier in the course of training,32 ,38 and one study described an educational intervention coupled with reorganisation of clinical services to facilitate quality and safety improvement efforts.44
This systematic review provides an update of the evidence on courses teaching core concepts of patient safety to medical students and trainees/residents. We identified 26 studies published between January 2009 and May 2014. This is in contrast to a previous systematic review addressing the same topic but with a wider remit and time period (January 2000 to January 2009), which identified 27 published studies incorporating an evaluation of the interventions.8 This suggests that there is increasing interest in developing, delivering and evaluating courses teaching patient safety.
In the previously published systematic review,8 the interventions were mostly well received by participants and resulted in improvements in safety and quality knowledge scores. However, few studies were able to demonstrate changes in learners’ behaviour (Kirkpatrick's level 3) or potential patient benefit (level 4b). Moreover, thematic analysis of the articles identified multiple barriers to sustainable integration of the courses, which spanned learner, faculty and institutional factors. Our systematic review has also found the included interventions to be mostly well received by participants, with improvements in safety knowledge and attitudes. Although more studies in our review than in the previous one were able to demonstrate positive changes in participant behaviour, this was mainly for interventions targeted at trainees/residents rather than medical students, and most of these data on participant behaviour were self-reported. None of our identified studies demonstrated patient benefit (level 4b) from the interventions, although measurement of changes in clinical outcomes following educational interventions is notably difficult, in part due to the complexities in establishing true cause and effect.
Assessment of organisational change (level 4a) resulting from the intervention was also infrequent in our identified studies, particularly in those involving medical students. Furthermore, in the studies we reviewed, barriers to sustainable integration of the courses, again spanning learner, faculty and institutional factors, were identified. Such factors included poor learner engagement, lack of expert faculty, competing educational priorities and an unsupportive institutional culture. There is no clear relationship between the length of the patient safety course and effect on learning outcomes, although a meaningful analysis of this is confounded by differences in course content and study design, quality and reporting.
Despite increasing evidence for the efficacy of educational interventions in patient safety, the wider implementation and adoption of successful interventions has been slow.45 ,46 As a result, recommendations to promote curricular integration of patient safety education aim to address the barriers outlined above—for example, through investing in faculty development, promoting patient safety as a science, and integrating patient safety competencies into accreditation standards and certification examinations, to ensure protected time and incentives for medical engagement.46 ,47
As in the earlier systematic review by Wong and colleagues,8 the majority of studies we identified in this systematic review were conducted in the US and preferentially targeted residents over medical students. The dominance of US studies in this systematic review may reflect the explicit integration of competencies in patient safety and quality improvement within national curricular statements and guidance.1 ,3 The majority of studies we identified in our review had small participant numbers, relied on single centre recruitment, and were designed as before-and-after studies with no control group or follow-up. Therefore, overall, the methodological quality of studies of patient safety interventions in medical students and trainees/residents has not changed significantly between this systematic review and the previously published one.8 This is despite recent years being characterised by the development of curricula and frameworks specifically targeting patient safety.1 ,2
Our systematic review does, however, provide some positive evidence of developments in the literature. Many of the studies we identified used previously published and/or validated assessment tools, demonstrating a knowledge and appreciation of the emergent evidence base in patient safety education. In line with good educational practice, the majority of studies employed experiential learning modalities (such as group discussion and project work), although one study relied solely on didactic lectures to facilitate integration into a ‘busy curriculum’.25 Interestingly, case-based learning of real-life adverse events was used in few studies, despite the recognised value of reflecting and learning from error and adverse events,48 and their popularity among trainees.49 ,50 It is particularly encouraging to note that we found an increase in studies explicitly commenting on sustainability of the described interventions, and their integration into the wider institution, in comparison to the previous systematic review.8 This may reflect a trend to more consideration of the longer term sustainability of patient safety interventions.
In the previous systematic review,8 the core content most commonly comprised root cause analysis, systems thinking, general patient safety concepts and error incident reporting (all identified in over 30% of courses). In contrast, we found content to most commonly cover root cause/systems-based analysis, general patient safety concepts, communication and teamwork, quality improvement and human factors (all identified in 30% or more of published courses). Importantly, there was a marked increase in the proportion of studies covering general patient safety concepts between the previous systematic review and this one, from 34% to 65%. Coverage of root cause/system-based analysis also increased, from 41% to 62% of studies. In addition, between the two systematic reviews there was a decrease in the number of studies covering error/incident reporting, from 32% to 12% of studies. This discrepancy between the two systematic reviews may reflect the different search strategies used. However, it may also relate to, for example, the increasing recognition of the importance of communication and teamwork in patient safety,51 and the importance of a foundation in basic patient safety knowledge and concepts. Without sufficient studies with long-term follow-up data on patient outcomes, it is difficult to ascertain the true implications of these changes in core content. This is clearly an area for future research.
The main limitations of this systematic review relate to the quality of the included studies and the narrower focus when compared with the previous systematic review. We included only manuscripts published in English and may therefore have missed some relevant studies, although no systematic review can truly claim to identify all relevant studies. There was significant heterogeneity across the studies in terms of the number and type of participants targeted, the educational content of the course, the teaching methods employed, the assessment tools used and the outcomes measured, which prevented a quantitative synthesis of the results. Moreover, the identification of factors influencing implementation of the courses was wholly dependent on the quality of reporting of such factors by the authors, many of whom did not stipulate identifying such factors as the primary aim of their study. It may be that important barriers and enablers to the sustainable integration of patient safety courses remain unreported, although it is important to note that we identified barriers and enablers similar to those identified in the previous systematic review.8 In box 1 we offer some recommendations for a minimum description of content that could be used in future studies evaluating patient safety courses. Adhering to these should improve study reporting and the comparison of the relative effectiveness of patient safety training interventions.
Recommendations for minimum content reporting in studies evaluating patient safety training interventions
Study design (eg, prospective, retrospective, before and after design, control groups)
Study setting (eg, single centre, multicentre)
Participants including inclusion and exclusion criteria
Delivery method of all aspects of the intervention (eg, online, didactic lecture, group setting)
Thorough and explicit description of course content
Description of those delivering the intervention (faculty), their training and their qualifications
Educational theory/theories underpinning the intervention
Method(s) of evaluation and detailed description of exactly when these were conducted
Specific outcomes assessed (eg, knowledge, attitudes, patient outcomes)
Length and type of follow-up
Data analysis methods
Factors influencing course implementation (barriers and enablers)
Limitations of the intervention
Areas for further work
In addition to the need for future studies to address the aforementioned limitations in the evidence base, the relationship between approaches to teaching (including underpinning educational theory) and the different types of learning outcomes should also be explored, as should the relationship between implementation approaches and their impact on the sustainability of an educational intervention. Such knowledge should optimise the quality of the evidence base and facilitate the development of robust evidence-based guidelines on factors that can improve outcomes at multiple levels following educational interventions for patient safety.
For those involved in medical education, there are recommendations aimed at addressing barriers to the implementation of patient safety courses. These can be classified into recommendations related to the learner, faculty, curriculum and learning environment. Learner-relevant recommendations include: ensure courses have personal and/or clinical relevance, and offer the opportunity to apply learning to enhance engagement; ensure freedom from competing clinical/service delivery commitments; and make learning interprofessional. Faculty recommendations include: invest in faculty development; establish role models with clinical credibility; and ensure protected faculty time to deliver the patient safety course free from other commitments. Curricular recommendations include: promote patient safety as a science; avoid competing curricular demands; ensure an adequate balance between didactic and experiential learning, and between reinforcement of learning and repetition of teaching material; and provide adequate central administrative support to ensure sustainability. Finally, recommendations for the learning environment include: recognise the institutional culture as key to implementation; ensure a safe learning environment; foster links between training programmes and hospital improvement activities; and provide adequate financial support to fund the programme.
There is an increasing trend towards the development of educational interventions in patient safety delivered to trainees/residents and medical students. The majority of such courses are well accepted by learners, and improve patient safety knowledge, skills and attitudes. Moreover, some interventions have been shown to result in positive behaviours, particularly through the subsequent engagement of trainees/residents in quality and safety improvement projects. However, no studies in the current systematic review demonstrated patient benefit. Significant methodological shortcomings in current studies exist, and additional evidence of the impact of such interventions on patient outcomes is needed. In addition, although the evidence appears to suggest some maturation in the approach and infrastructure required to support ongoing delivery, significant barriers to the implementation of patient safety education remain. Further work is needed to successfully address these challenges and promote the sustainable integration of education and training in patient safety.
CV acknowledges support from the Health Foundation.
Contributors MAK participated in the article selections, conducted the literature review, helped to draft the manuscript and made subsequent revisions. MA conceived the review, conducted the literature review, participated in the design and article selections, and helped to draft the manuscript. NS and SA contributed to the article selection process and edited the manuscript for critical content. PB and CV edited the manuscript for critical content. All authors have read and approved the final manuscript.
Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests MAK and MA are Education Associates at the UK General Medical Council. MAK is a UK National Institute for Health Research (NIHR) Academic Clinical Fellow in Neurosurgery. NS is funded by the NIHR via the ‘Collaboration for Leadership in Applied Health Research and Care, South London’ at King’s College Hospital NHS Foundation Trust, London, UK. NS also delivers patient safety and team interventions, and training, to hospitals internationally on a consultancy basis through London Safety and Training Solutions Ltd. SA is affiliated with the Imperial Patient Safety Translational Research Centre, which is funded by the NIHR. CV carries out occasional consultancy and advisory work on patient safety. MA is a NIHR Academic Clinical Fellow in Primary Care, and a Trustee of the Clinical Human Factors Group and has previously undertaken consultancy work for Medical Education England. MA also conducts occasional consultancy work involving faculty development for patient safety curricula delivery (‘train-the-trainers’ courses).
Patient consent Obtained.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.