
Does interprofessional simulation increase self-efficacy: a comparative study
Colm Watters1, Gabriel Reedy1,2, Alastair Ross1,3, Nicola J Morgan1, Rhodri Handslip1, Peter Jaye1

  1. Simulation and Interactive Learning (SaIL) Centre at St Thomas' House, King's Health Partners, London, UK
  2. King's Learning Institute, King's College London, London, UK
  3. NIHR PSSQ, King's College London, London, UK

Correspondence to Dr Colm Watters; colm.watters@doctors.org.uk

Abstract

Objectives In this work, we compared uniprofessional and interprofessional versions of a simulation education intervention to understand better whether it improves trainees’ self-efficacy.

Background Interprofessionalism has been climbing the healthcare agenda for over 50 years. Simulation education attempts to create an environment in which healthcare professionals can learn without potential safety risks to patients. Integrating simulation and interprofessional education can provide benefits to individual learners.

Setting The intervention took place in a high-fidelity simulation facility located on the campus of a large urban hospital. The centre provides educational activities for an Academic Health Sciences Centre. Approximately 2500 staff are trained at the centre each year.

Participants One hundred and fifteen nurses and midwives, along with 156 doctors, all within the early years of their postgraduate experience, participated. All were included on the basis of their ongoing postgraduate education.

Methods Each course was a one-day simulation course incorporating five clinical scenarios and one communication scenario, each followed by a facilitated debriefing. A mixed-methods approach utilised precourse and postcourse questionnaires measuring self-efficacy in managing emergency situations, communication, teamwork and leadership.

Results Thematic analysis of qualitative data showed improvements in communication/teamwork and leadership for doctors and nurses undergoing simulation training. These findings were confirmed by statistical analysis showing that confidence ratings improved in nurses and doctors overall (p<0.001). Greater improvement from baseline was observed for interprofessionally trained nurses than for uniprofessionally trained nurses (n=115; p<0.001). Postcourse ratings for doctors showed that interprofessional training was significantly associated with better final outcomes on the communication/teamwork dimension (n=156; p<0.05).

Conclusions This study provides evidence that simulation training enhances participants’ self-efficacy in clinical situations. It also leads to increases in their perceived abilities relating to communication/teamwork and leadership/management of clinical scenarios. Interprofessional training showed greater positive effects on self-efficacy for both nurses and doctors.

  • MEDICAL EDUCATION & TRAINING
  • QUALITATIVE RESEARCH
  • EDUCATION & TRAINING (see Medical Education & Training)

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • Collaborative and interprofessional practices within healthcare improve patient outcomes. Interprofessional education has been posited as a means of achieving this; however, evidence in its support remains scarce. This study contributes to interprofessional education research by showing that clinical trainees’ self-efficacy in some domains improved more after interprofessional simulation than after an equivalent uniprofessional course.

  • Outcome evaluation employs a mixed-methods approach, combining elements of the qualitative and quantitative paradigms, to investigate whether findings converge, facilitating triangulation and the production of more insightful and robust results.

  • A non-randomised, quasi-experimental design is employed, as is common in medical education research outwith the laboratory.

  • Logistical challenges in running learner groups over time in a ‘live’ educational setting meant that the analysis of doctors could not be as in-depth as that of nurses, and limited the amount of qualitative data that could be collected.

  • As no suitable validated feedback tool could be found in the literature, a novel evaluation instrument was designed by a learning scientist in conjunction with clinical support. Although this instrument has proved reliable, it is yet to be validated.

Introduction

Interprofessionalism and collaborative practices have been climbing the healthcare agenda over the past 50 years. Numerous organisations and institutions, including the WHO,1–3 Centre for Advancement of Interprofessional Education in the UK,4 General Medical Council5 and Nursing and Midwifery Council6 have argued for the benefits and the value an interprofessional (IP) and collaborative approach brings to healthcare.

Over this time the support for collaborative and IP practice has grown, and it is now recognised that collaborative practice in healthcare strengthens health systems and improves outcomes.3 ,5–9 IP education has emerged as an approach that seeks to create opportunities for healthcare professionals to learn their respective practices in an integrated way; it occurs whenever “two or more professions learn with, from and about each other to improve collaboration and the quality of care.”7 ,10 It has been argued that education is an important method of promoting interprofessionalism and collaborative practice within the current and future healthcare workforce.5 ,11–13

Research has already begun to show some positive outcomes from IP education within particular specialties and settings, among them are: improved emergency department culture and patient satisfaction;14 collaborative team behaviour and reduction of clinical error rates for emergency department teams resulting in enhanced patient safety;15 identification and care of domestic violence victims and perpetrators in a primary care setting;16 and mental health practitioner competencies related to the delivery of patient care.17 However, research evidence for IP education effectiveness remains relatively scarce, as highlighted by recent Cochrane18 and Best Evidence Medical Education12 reviews. Indeed, several recent reviews and publications have specifically called for strengthening of the research agenda for IP education.19–21

In this work, we have explored a simulation-based education intervention that is situated within the early years of doctors’ and nurses’ clinical postgraduate experience, in an attempt to understand more about how IP education might have an impact on students’ learning. We compared IP education and uniprofessional (UP) education versions of the intervention, using self-efficacy as a proxy measure of performance in practice, to look for evidence of the positive impact of IP education. Further, using limited qualitative responses from students, we sought evidence about whether there is something in the nature of the IP interaction that influences the learning for all involved.

Methodology

Setting

The intervention took place at the Simulation and Interactive Learning (SaIL) Centre at St Thomas’ House. It is a high-fidelity clinical simulation facility located on the campus of a large hospital in central London. The centre provides educational activities for King's Health Partners, an Academic Health Sciences Centre consisting of three inner-city tertiary hospitals with over 14 000 staff members, and the King's College London Health schools, the largest co-located schools in Europe. Approximately 2500 staff are trained at the centre each year.

Participants

Participants were nurses, midwives and foundation year 1 and 2 (FY1/2) doctors, all within their early years of postgraduate experience. As this innovation took place within a ‘live’ educational environment, all participants attended as part of their mandatory postgraduate professional development; attendance was required in order to satisfactorily pass the educational component of their postgraduate year.

Intervention

The intervention consisted of 21 IP courses and 53 UP courses, taught from August 2010 to May 2012. Faculty consisted of a rotating group of simulation fellows and senior clinical staff from multiple professions and disciplines, all of whom were trained to facilitate and debrief participants. All facilitators had, as a minimum, attended a dedicated two-day debriefing essentials course, which utilised the description-analysis-application approach using the ‘debrief diamond’ tool.22 In addition, all facilitators had prior debriefing experience, ranging from 4 months to 15 years.

Each course was a one-day, intermediate-fidelity simulation-based course composed of six scenarios. Learners took turns participating in five acute illness scenarios and one associated communication scenario. Each course comprised 12 participants: UP cohorts consisted of either 12 doctors or 12 nurses/midwives; IP cohorts consisted of doctors and nurses or midwives in approximately a 1:1 ratio.

Each learner participated in at least one scenario, often in pairs, with each scenario lasting approximately 15 min, while the other learners observed the activity via a live audiovisual feed. In the IP experience, participating pairs were made up of a doctor and a nurse or midwife.

All learners (participators and peer-observers) then reconvened after each scenario to participate in a facilitated debrief, focusing primarily on non-technical skills, lasting approximately 45 min. All debriefs were carried out by facilitators utilising the ‘debrief diamond’ tool.22

Study design

The design was quasi-experimental (non-randomised), with clinicians assigned to either IP or UP groups based on demand for and availability of courses. Owing to course allocation, two basic comparisons between IP and UP participation were possible for those attending: a pretest and post-test comparison for nurses and midwives, and a post-test comparison for FY1/2 doctors.

Comparison 1 (n=115 nurses and midwives)

Comparison 1 was a quasi-experimental analysis of pretraining and post-training responses for nurses and midwives trained alone (UP; n=64) and interprofessionally with FY1/2 doctors (IP; n=66).

Comparison 2 (n=156 doctors)

Comparison 2 was a cross-sectional comparison of post-training responses between FY1/2 doctors trained either alone (UP; n=94) or interprofessionally with nurses/midwives (IP; n=62).

Outcome measures

A survey of the extant literature did not identify a validated feedback tool designed to gather ratings of self-perceived clinical competency, rather than to assess candidates’ learning and/or performance. Thus a novel measurement instrument was designed by a learning scientist with considerable experience and expertise in educational research, in conjunction with input from clinical and simulation experts. The instrument has face validity and high content validity, as it was designed and reviewed by a number of simulation experts and has proven robust in use across thousands of simulation trainees. Concurrent and predictive validity of the instrument have not yet been established, largely owing to current limitations in the scope and scale of the research programme. Through the analysis of the included results, we have shown the instrument to be reliable.

Responses consisted of quantitative and qualitative data, employing fixed-response (scalar) items as well as open-ended questions exploring themes around communication and leadership. The two parts of the instrument constituted a mixed-methods approach, combining elements of the qualitative and quantitative paradigms. This sought to investigate whether findings would converge, facilitating triangulation and the production of more insightful and robust results.23 ,24

Fixed response items

The feedback form included 10 specific items covering leadership, situational management, team working and communication skills (online supplementary appendix A). Participants were asked to rate each item on a confidence scale from ‘cannot do at all’ to ‘highly certain can do’. The scale end points were designed to assess self-efficacy, a psychological construct that has roots in general motivation theory and holds that a person’s belief in their capabilities is at the centre of their ability to function under normal and also under difficult circumstances. Efficacy beliefs, Bandura25 argues, “determine the goals people set for themselves, how much effort they expend, how long they persevere in the face of difficulty, and their resilience to failures” (p.8). Bandura26 notes that self-efficacy is not a personality trait, but that it is highly situational: it differs based on the context (domain) and the behaviour that is under study.

Although the exact functioning of self-efficacy is complex and consists of multiple interlinked processes, it has been positively associated with work-related performance.27 In recent work, Artino et al28 showed that medical students’ reported self-efficacy increased over time in relation to students’ skills, experience and capabilities. Proxy measures such as self-efficacy are one way of trying to understand the potential impact of an educational intervention on later clinical practice; they are necessary because it is nearly impossible to follow clinical trainees into practice in order to observe their performance and attribute it to the intervention. It is important not to overestimate the association between reported self-efficacy and ability; nonetheless, Bandura25 argues that “under cautious self-appraisal, people rarely set aspirations beyond their immediate reach, nor mount the extra effort needed to surpass their ordinary performances” (p.12).

We argue, like Artino et al,28 that reported self-efficacy can be a useful measure in estimating learners’ abilities in a variety of clinical education situations. In this case, drawing from the concept of a relation between self-efficacy and ability, we designed a scale to measure reported confidence in approaching clinical scenarios and hypothesised that exposure to simulation training would increase self-reported efficacy in this domain.
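To make the scoring concrete, a minimal sketch follows, assuming (consistent with Bandura’s guidance and the percentage results reported below) that each of the 10 items is rated 0–100 and that a participant’s overall self-efficacy score is the mean across items; the data are illustrative only, not study data.

```python
import numpy as np

# Illustrative ratings only: one row per participant, one column per
# item, each scored 0-100 (0 = "cannot do at all",
# 100 = "highly certain can do").
pre = np.array([[60, 55, 70, 50, 65, 60, 55, 70, 60, 65],
                [40, 45, 50, 55, 60, 50, 45, 40, 55, 50]])
post = np.array([[80, 75, 85, 70, 80, 75, 70, 85, 80, 80],
                 [65, 60, 70, 75, 80, 70, 65, 60, 75, 70]])

# Overall self-efficacy per participant: mean across the 10 items,
# expressed as a percentage.
print(pre.mean(axis=1))   # [61. 49.]
print(post.mean(axis=1))  # [78. 69.]
```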

Open-ended items

Participants were also asked to provide qualitative feedback in answering questions such as “What is the one thing you are going to take away with you at the end of this course?” This question was designed to prompt a participant to reflect on their own learning in the course and to gather evidence on which elements of the course reportedly contributed most to the learning experience. In addition, this forms part of the instructional component; the question serves to help a participant cement that learning in their memory by facilitating reflection and allowing participants time to frame learning outcomes from the session.29

Data analysis

Quantitative data analysis (using IBM SPSS V.19.0) consisted of descriptive statistics, as well as tests between groups for pre–post training scores (IP vs UP nurses) and post-training scores (IP vs UP doctors).
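Since the original analysis was run in SPSS, the following is merely a sketch of equivalent tests using Python’s scipy, run on simulated data shaped like the reported summaries; all variable names and values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated percentage self-efficacy scores resembling the reported
# summaries; these are illustrative, not the study data.
pre = rng.normal(63, 14.6, size=186)
post = pre + rng.normal(14, 12, size=186)

# Matched pre/post comparison (overall training effect).
t_rel, p_rel = stats.ttest_rel(post, pre)

# Between-groups comparison of improvement scores (IP vs UP nurses).
ip_gain = rng.normal(20, 11, size=66)
up_gain = rng.normal(12, 14, size=64)
t_ind, p_ind = stats.ttest_ind(ip_gain, up_gain)

print(f"paired: t={t_rel:.1f}, p={p_rel:.2g}")
print(f"independent: t={t_ind:.1f}, p={p_ind:.2g}")
```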

Factors in the 10-item questionnaire were also explored using the principal components method via a larger group of post-training scores (n=399). The resultant factors were used for further comparisons across the IP and UP groups.
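As an illustration only, an equivalent exploratory analysis can be sketched in Python with the open-source factor_analyzer package rather than SPSS; the response matrix below is simulated and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Stand-in for the (399 x 10) matrix of post-training item scores.
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.normal(75, 12, size=(399, 10)),
                         columns=[f"q{i}" for i in range(1, 11)])

# Principal components extraction with varimax rotation, two factors,
# mirroring the analysis described in the text.
fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))
print(fa.get_factor_variance())  # variance explained per factor
```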

Qualitative data were analysed thematically based on broad categories appearing within the data. Multiple researchers participated in the analysis, in an attempt to minimise researcher bias.30 From an initial group of 11 categories, iterative revision of codes led to a final broad thematic framework under the headings of teamwork, communication and leadership.

We hypothesised that self-efficacy would increase as a result of the training overall; that is, that participants would feel more confident about their abilities in the specific task domains of the course after completing the intervention and that this would be reported in scale and open-ended items. We further hypothesised that IP courses would show increased shifts in self-efficacy and final post-training outcomes.

Results

Statistical analysis of scaled items

Overall precourse and postcourse feedback

Overall, 187 participants were measured before and after the course for evidence of improvements in self-efficacy (115 nurses/midwives (70%) and 57 FY1/FY2 doctors (30%)). Where gender was reported (n=123), this group was 81% female (nurses 94% and doctors 65%). No significant gender differences or differences between nurses and doctors were found. Matched data were analysed by paired t test, and showed a mean shift in confidence from 63% (SD 14.6) before training to 77% (SD 12.3) after training (t=15.6; n=186, p<0.001). Thus the simulation training significantly improved participant ratings of self-efficacy (see online supplementary appendix A).

IP versus UP comparison 1 (n=115 nurses and midwives)

Pretraining and post-training responses were examined for nurses and midwives trained alone (UP; n=64) and interprofessionally with FY1/2 doctors (IP; n=66). The UP group improved overall by 12% (SD 14) and the IP group by 20% (SD 11). An independent samples t test for equality of means showed that this difference was significant (t=3.4; df 128; p<0.001; 95% CI 3.22 to 11.98). Therefore, our null hypothesis that there would be no difference between IP and UP training was rejected.
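As a check, the reported statistic can be approximately recomputed from the published summary figures alone; the sketch below uses scipy’s summary-statistics t test, and the small discrepancies against the reported t=3.4 and CI reflect rounding in the published means and SDs.

```python
import numpy as np
from scipy import stats

# Reported summaries: IP gain 20% (SD 11, n=66); UP gain 12% (SD 14, n=64).
t, p = stats.ttest_ind_from_stats(mean1=20, std1=11, nobs1=66,
                                  mean2=12, std2=14, nobs2=64)
print(t, p)  # t ~ 3.6 on the rounded summaries (paper reports t=3.4)

# 95% CI for the difference in means, using the pooled standard error.
sp2 = (65 * 11**2 + 63 * 14**2) / 128      # pooled variance
se = np.sqrt(sp2 * (1 / 66 + 1 / 64))
tcrit = stats.t.ppf(0.975, df=128)
print(8 - tcrit * se, 8 + tcrit * se)      # ~ 3.6 to 12.4
```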

IP versus UP comparison 2 (n=156 doctors)

Comparison 2 was a cross-sectional comparison of post-training responses between FY1/2 doctors trained either alone (UP; n=94; 60%) or interprofessionally with nurses and midwives (IP; n=62; 40%). Doctors’ mean postcourse self-efficacy was two percentage points higher in the IP group (75% vs 73%), but not significantly so (t=1.4; df 154).

Factor analysis

During the design of the study, the items were constructed to look at the self-efficacy components of two themes: confidence in performing leadership and management skills, and confidence in performing communication and teamwork skills.

An exploratory factor analysis of postcourse scores (n=399; principal components method with varimax rotation) shows a two-factor solution that explains 74% of the variance. Questions 2, 3, 5 and 7 form a leadership/management factor and the rest a communication/teamwork factor, supporting the design along these twin themes (online supplementary appendix A).
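Given this item-to-factor assignment, composite factor scores can be formed; a minimal sketch follows, assuming composites are simple item means (the exact aggregation is not stated in the text) and using simulated data with hypothetical column names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.normal(75, 12, size=(399, 10)),
                         columns=[f"q{i}" for i in range(1, 11)])

# Items 2, 3, 5 and 7 load on leadership/management; the remaining
# six items on communication/teamwork (per the factor analysis).
leadership_items = ["q2", "q3", "q5", "q7"]
communication_items = [c for c in responses.columns
                       if c not in leadership_items]

leadership = responses[leadership_items].mean(axis=1)
communication = responses[communication_items].mean(axis=1)
print(leadership.mean(), communication.mean())
```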

Table 1 shows reliability data for these factors, with IP versus UP data for nurses/midwives (precourse and postcourse difference IP vs UP) and doctors (postcourse scores IP vs UP), together with the scores for the overall 10-item scale.

Table 1

IP and UP participant ratings on 10-item self-efficacy scale and composite communication and leadership/management scores

It can be seen from table 1 that, as expected, the significant effect of IP training for nurses overall (comparison 1) is reflected in significantly better improvement on communication (p<0.05) and leadership (p<0.001) items. Postcourse scores for doctors were higher (but not significantly so) for leadership, and significantly better for communication/teamwork in the IP group (p<0.05).
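The reliability data referred to in table 1 are of the kind typically reported as Cronbach’s α (an assumption; the statistic is not named in the text). A minimal, self-contained sketch of that computation:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Example with illustrative data: correlated items yield a high alpha.
rng = np.random.default_rng(3)
base = rng.normal(70, 10, size=(200, 1))
items = base + rng.normal(0, 5, size=(200, 10))  # 10 correlated items
print(round(cronbach_alpha(items), 2))
```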

Thematic analysis of open-ended responses

Open-ended responses provided insight into what participants found valuable in the course. The most common theme to emerge from the data was the value placed on communication. Learners reported (A) the importance of being able to practise communicating with colleagues in a ‘mock’ clinical setting and (B) an enhanced understanding of the link between communication skills and clinical outcomes. One learner noted that communication was central and that she had learned to “ask questions if [she is] not sure of what is happening” (NI147). This was particularly associated with IP courses, where there was a clear understanding of the need to “communicate thoughts out loud so other team members can help identify treatment gaps” (F2I42) when working across disciplines.

Similarly, leadership emerged as an important theme in driving good outcomes in simulated scenarios. Learners said that they had increased awareness of the need to identify who was leading clinical scenarios so that they could adjust their behaviour appropriately. This sometimes involved enabling others to lead by being responsive as a follower, or as one participant explained, learning to “[...] play an active part, decide your role and nominate a leader” (NI83).

Where leadership was required, candidates said they would now be likely to fulfil this role themselves, as one student put it, sometimes it was appropriate “to take [a] leadership role,” even “as [a] junior” clinician (FI132).

Finally, teamwork was also reported to be an important learning outcome for many participants in the course, and in IP working in particular (teamwork and communication were overlapping themes, showing a clear relationship in students’ minds between these two concepts). The data showed the relationship between the two concepts to be a complex one: sometimes communication was seen by participants as a subset of what constitutes an effective team; at other times, team working was seen as a means to achieve good communication. In the words of one participant, a central learning outcome of the course was “When it all gets hectic take a time out to recap with [the] team” (F2I151). Learners were quick to realise that by communicating with the team, the cognitive and psychological burden of the clinical emergency could be shared; or as one participant explained it, “through communication my team helped to work out [the] problems and how best to solve them” (NI114). One learner noted that by engaging all members of the team in an open and receptive manner, everyone contributed not only to the physical care of the patient but also to the decision-making process. As he described it, “helping each other complete the care tasks let us get on the same page mentally making the treatment plan obvious and decisions easier to make” (FI79).

Discussion

This comparative study examined the overall impact of the course and its relative impact in UP and IP formats (interaction with course attendees). As set out above, we hypothesised that self-efficacy would increase as a result of the training overall, that is, that participants would feel more confident about their abilities in the specific task domains of the course after completing the intervention and would report this in scale and open-ended items, and further that IP courses would show greater shifts in self-efficacy and better final post-training outcomes.

Training improved participants’ overall confidence, or more specifically their reported self-efficacy (p<0.001), which aligns with previous literature showing generally positive effects of simulated practice for nurses,31 doctors32 and IP teams.33

IP courses showed significantly greater overall improvement for nurses and midwives (p<0.001), with improved factor scores for communication/teamwork (p<0.05) and leadership/management (p<0.001). Doctors undergoing IP training had significantly higher factor scores for postcourse communication/teamwork (p<0.05), and higher but non-significant scores for leadership/management. These data provide evidence that simulation training enhances participants’ self-efficacy and that combined doctor/nurse scenarios improve learning outcomes. The WHO3 is clear that effective training in IP education can contribute to a ‘collaborative practice-ready workforce’ (p.10), and reviews of evidence show that this collaboration can improve patient care and safety. Lemieux-Charles34 outlines how collaborative education can overcome ‘professional silos’ (p.1926). This work builds on, and contributes to, these previous findings.

Qualitative responses to the question about the most important learning point of the course yielded responses aligned to three primary themes: communication, leadership and teamwork, which triangulate with the overall learning effect. This closely matches recent literature on analysis of postsimulation open-ended responses, which shows communication, leadership and teamwork as key themes, including “adaptability and requirement for flexibility in teamwork roles” and the “value of high-quality, clear communication”(ref. 35, p.205).

Limitations of the study

This study showed a consistent effect of IP training improving outcomes for doctors and nurses. However, there are some limitations. Comparison 2 for doctors is based on postcourse responses only. The effects were somewhat smaller for doctors; testing doctors both before and after the course would be necessary to establish whether there is an interaction whereby IP training benefits the nurse group more.

Studies outwith the laboratory are often quasi-experimental,36 especially in an applied social science like medical education, because of the realities of educational as well as clinical practice. What was most important in this case was to ensure that participants were able to access the simulation centre and attend what has proven to be a popular and well-regarded educational experience. In this case difficulties in comparison arose due to logistical challenges (eg, policy changes) in running multiple groups over time in a ‘live’ educational setting. Course participants were not randomised to IP or UP condition, though baseline measures showed no differences between groups. Non-randomised designs are common in simulation,37 but it is important to continue to consider which designs will best illuminate the questions we are interested in (see Cook and Campbell38 for a discussion of the relative advantages and disadvantages of quasi-experiments).

Finally, we have data showing improved outcomes for IP simulated education, but it is important to view these results in context. While for logistical reasons we were not able to have a control group (UP cohort) consisting only of nurses, we feel this does not significantly affect the results. Brannan et al39 found significantly improved post-test confidence with simulation learning as well as with classroom/lecture learning approaches. Important concerns have also been raised recently about the relationship between self-reported measures of confidence40 and clinical performance. Liaw et al41 used independent ratings of clinical performance to show that performance was independent of self-reported confidence, noting that this highlights ‘the potential danger of simulation experiences in leading toward overestimation of confidence over actual performance’ and recommending that ‘future studies should focus on the observation of clinical performance as a valid assessment strategy’ (p.e39).

Further work

Improved patient outcomes are the ultimate goal of these types of programmes, and it is important to investigate transference to practice if possible. For example, future areas to explore could include gaining consent to conduct follow-up interviews with a sample of participants to ask them to reflect back on a period or experience in the clinical environment, to investigate how the thematic improvements in communication and leadership are implemented and whether they are sustained. This presents some difficulty due to the frequent rotations of clinicians and their movement between specialties, departments and hospitals during their training. It is also difficult to isolate the effects of the IP training from confounding influences, including further training, in any interim period. Very few studies include longitudinal follow-up with participants after they have returned to practice, and there is therefore little evidence about how the skills learned in simulation are integrated into clinical practice.42 Thus questions remain about transference and sustainability of knowledge over time and this has been a relatively neglected area of simulation research.43

Conclusions

This study shows overall positive effects of IP simulation training for doctors and nurses, measured qualitatively via thematic analysis of open-ended responses and quantitatively via scale items drawing on self-efficacy in the clinical domain.

As education and training for healthcare professionals become more IP focused, learners’ underlying confidence and comfort performing in front of prospective peers and colleagues may develop. This, in turn, may produce greater improvements in IP learning groups.

The natural working environment of healthcare is IP, and thus IP education enhances the potential fidelity of simulation-based training. This is especially true of courses focused on non-technical skills such as teamwork, communication, management and leadership, which were the main themes in this case.

Finally, there are a number of questions raised by this work that should be addressed by future research. The question remains of how and why an IP learning experience differs from a UP learning experience. The medical education and simulation communities have called for work that explores the ways that learning occurs in these settings. This may well involve observational work using methodologies from anthropology and the social and educational sciences. In addition, longitudinal follow-up work with simulation candidates to see how the reported benefits of training are reflected in clinical practice and related to patient outcomes, while difficult, is a vital next step in our attempts to improve the healthcare systems we work in.

Acknowledgments

Dr Libby Thomas assisted in design of teaching materials and delivery of the programme. Rachael Bates and Maria Dibua provided administrative support and data entry for the programme. Dr Beth Thomas, Dr James Brewin and Dr Sanjeevan Aiyathurai all provided a significant teaching commitment as faculty.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Contributors CW led the research team on the project, assisting in the design and delivery of the programme; collecting, monitoring, cleaning and analysing the data; and drafting and revising the paper. He is also the guarantor. GR developed the survey instrument, analysed data, and drafted and revised the paper. NJM designed teaching materials and the delivery of the programme, and reviewed and contributed to drafts of the paper. RH assisted in data collection and data analysis, and reviewed and contributed to drafts of the paper. AR analysed data and reviewed and contributed to drafts of the paper. PJ conceptualised and designed the programme, and reviewed and contributed to drafts of the paper.

  • Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None.

  • Ethics approval This study sought ethical approval from the St Thomas Research Ethics Committee and all participants gave informed consent before taking part.

  • Provenance and peer review Not commissioned; internally peer reviewed.

  • Data sharing statement Technical appendix and statistical code and data set available from the corresponding author at colm.watters@doctors.org.uk.