Objective To investigate the effect of in situ simulation (ISS) versus off-site simulation (OSS) on knowledge, patient safety attitude, stress, motivation, perceptions of simulation, team performance and organisational impact.
Design Investigator-initiated single-centre randomised superiority educational trial.
Setting Obstetrics and anaesthesiology departments, Rigshospitalet, University of Copenhagen, Denmark.
Participants 100 participants in teams of 10, comprising midwives, specialised midwives, auxiliary nurses, nurse anaesthetists, operating theatre nurses, and consultant doctors and trainees in obstetrics and anaesthesiology.
Interventions Two multiprofessional simulations (clinical management of an emergency caesarean section and a postpartum haemorrhage scenario) were conducted in teams of 10 in the ISS versus the OSS setting.
Primary outcome Knowledge assessed by a multiple choice question test.
Exploratory outcomes Individual outcomes: scores on the Safety Attitudes Questionnaire, stress measurements (State-Trait Anxiety Inventory, cognitive appraisal and salivary cortisol), Intrinsic Motivation Inventory and perceptions of simulations. Team outcome: video assessment of team performance. Organisational impact: suggestions for organisational changes.
Results The trial was conducted from April to June 2013. No differences between the two groups were found for the multiple choice question test, patient safety attitude, stress measurements, motivation or the evaluation of the simulations. The participants in the ISS group scored the authenticity of the simulation significantly higher than did the participants in the OSS group. Expert video assessment of team performance showed no differences between the ISS versus the OSS group. The ISS group provided more ideas and suggestions for changes at the organisational level.
Conclusions In this randomised trial, no significant differences were found regarding knowledge, patient safety attitude, motivation or stress measurements when comparing ISS versus OSS. Although participant perception of the authenticity of ISS versus OSS differed significantly, there were no differences in other outcomes between the groups except that the ISS group generated more suggestions for organisational changes.
Trial registration number NCT01792674.
- MEDICAL EDUCATION & TRAINING
- patient simulation
- in situ simulation
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
To the best of our knowledge, this is the first randomised trial conducted to assess the effects of two different simulation settings, in situ simulation versus off-site simulation, on a broad variety of outcomes.
Previous non-randomised studies have recommended in situ simulation. However, in this randomised trial, no significant differences were found regarding knowledge, patient safety attitude, stress measurements, motivation or team performance when comparing in situ simulation versus off-site simulation. The participants in the in situ group scored the authenticity of the simulation significantly higher than did the participants in the off-site simulation group. However, this perception did not influence the individual and team outcomes. At the organisational level, the in situ group generated more suggestions for organisational changes.
A strength of this trial is the involvement of authentic teams that mirrored real-life teams and resembled the real clinical setting as closely as possible. This seems to be important for the so-called sociological fidelity.
A limitation of the trial is the fact that the outcomes were based only on immediate measurements of knowledge level and of team performance. Only perceptions of simulation were measured after 1 week (evaluation and motivation) and safety attitudes after 1 month. No clinical outcome was measured.
Frequently recommended as a learning modality,1–5 simulation-based medical education is described as “devices, trained persons, lifelike virtual environments and contrived social situations that mimic problems, events, or conditions that arise in professional encounters.”5 However, its key elements remain to be studied in depth in order to improve simulation-based medical education. One potential aspect that may influence the effect of this kind of education is the level of fidelity or, in more lay terms, authenticity. Fidelity is traditionally assessed on two levels: (1) engineering or physical fidelity, that is, does the simulation look realistic? (2) psychological fidelity, that is, does the simulator contain the critical elements to accurately simulate the behaviours required to complete a task?6, 7
Simulation-based medical education has traditionally been conducted as an off-site simulation (OSS), either at a simulation centre or in facilities in the hospital set up for the purpose of simulation. Recently, in situ simulation (ISS) has been introduced and described as “a team based simulation strategy that occurs on the actual patient care units involving actual healthcare team members within their own working environment.”8–12 An unanswered question is whether ISS is superior to OSS. It has been argued that ISS has more fidelity and can lead to better teaching and greater organisational impact compared with OSS.8–14
We hypothesised that the physical setting could influence fidelity, and hence ISS could be more effective for educational purposes. To the best of our knowledge, no randomised educational trials have been conducted comparing the ISS versus the OSS setting. Two articles that do use randomisation focused on frequency of training and not setting, and did not include a relevant control group.15, 16 Previous studies have been criticised for having small sample sizes, weak study designs and a lack of meaningful evaluations of the effectiveness of the programmes.8 A recent retrospective video-based study showed that the performance was similar in all the tested simulation settings, but the participants favoured ISS and the authors argued that prospective studies are needed.17
Human factors such as stress and motivation impact learning,18–26 which is why we set out to investigate how stress and motivation were affected by ISS versus OSS. We anticipated that the participants would experience ISS as more demanding and as creating higher levels of stress and motivation, which might enhance their learning. Furthermore, we hypothesised that ISS might provide investigators with more information on changes needed in the organisation to improve quality of care.
In this trial, we wanted to apply simulation-based medical education in the field of obstetrics, as delivery wards are challenging workplaces, where patient safety is high on the agenda and unexpected emergencies occur.27–34 Simulation-based medical education is thus argued to be an essential learning strategy for labour wards.4 ,35 The objective of this randomised educational trial was to investigate the effect of ISS versus OSS on knowledge, patient safety attitude, stress, motivation, perception of the simulation, team performance and organisational impact among multiprofessional obstetric anaesthesia teams.
This investigator-initiated, single-centre randomised superiority educational trial was previously described in a design article.36
Setting and participants
The setting was the Department of Obstetrics and the Department of Anaesthesiology, Juliane Marie Centre for Children, Women and Reproduction, Rigshospitalet, University of Copenhagen, which has approximately 6300 deliveries per year. Participants were healthcare professionals who worked in shifts on the labour ward: consultant and trainee doctors in obstetrics and anaesthesiology, midwives, specialised midwives, auxiliary nurses, nurse anaesthetists and operating theatre nurses. Participants gave written informed consent. Exclusion criteria were lack of informed consent, employees with managerial and staff responsibilities, staff members involved in the design of the trial and employees who did not work in shifts.36
Recruitment of participants
Eligible participants were provided with information via meetings, a website and personal letters, but additional verbal and written information could also be obtained from the principal investigator (JLS). Informed written consent was obtained if people decided to participate in the trial.36
The experimental intervention was a preannounced ISS,8, 9 that is, simulation-based medical education in the delivery room and operating theatre. The control intervention was an OSS, which took place in hospital rooms set up for the occasion but away from the patient care unit.
An appointed working committee consisting of representatives from all the healthcare professionals participating in the trial developed its aims and objectives, and they designed simulated scenarios for ISS and OSS.36 The two simulation scenarios were: (1) management of an emergency caesarean section after a cord prolapse; and (2) a postpartum haemorrhage including surgical procedures to evacuate the uterus. Focusing mainly on interprofessional skills and communication, the scenarios gave each healthcare profession a significant role to play.37
All participants recruited for a training day were told to arrive at a specific time dressed in work clothes, but had not been told what kind of simulation they were randomised to. The OSS room that was to function as the delivery room was in the doctors’ on-call room, which was small compared to the usual delivery room. A roller table prepared with the usual labour ward equipment had been placed in the room. The OSS room that was to function as the operating theatre was set up in the corner of a lecture hall. An anaesthetic trolley with the usual equipment was placed in the room and equipment for the operating theatre nurses was placed on a roller table. An introductory presentation was given to all participants on how the simulation was organised and then the participants recruited for OSS were shown the fictitious delivery room and fictitious operating theatre.
In the first part of the simulation in the delivery room, a person who had been instructed in role playing acted as the patient in both the ISS and OSS settings. In the real and the fictitious operating theatre, a full-body birthing simulator, a SimMom, was used for parts of the simulation scenario.38 Recruited from the working committee, the instructors conducting the simulations were trained in facilitating simulations and conducting debriefings. The working committee was trained in locally organised courses and attended a British national train-the-trainers course: PROMPT (PRactical Obstetric Multi-Professional Training).39 The instructors worked in groups of two comprising either a consultant obstetrician with a nurse anaesthetist or a consultant anaesthetist with a midwife. The debriefings lasted 50–60 min and comprised three phases: description, analysis and application.40 In addition to the simulation-based medical education, the training day also included video-based, case-based41 and lecture-based teaching sessions.
The primary outcome was the result of a knowledge test based on a 40-item multiple choice question (MCQ) test developed specifically for this trial.42 The choice of a knowledge test as the primary outcome was mainly pragmatic: MCQ testing is feasible for testing many participants in a relatively short time and at a low cost.43 Furthermore, previously used knowledge tests could serve as inspiration and as the basis for the sample size calculation.44, 45 The participants completed the MCQ test at the beginning and at the end of the training day. They were asked not to discuss the MCQ test with other participants or instructors during the training day.
The Safety Attitudes Questionnaire (SAQ) is validated in a Danish context.46 It included 33 items covering five dimensions: (1) teamwork climate; (2) safety climate; (3) job satisfaction; (4) stress recognition; and (5) work conditions.47, 48 The participants completed the SAQ 1 month prior to and 1 month after participating in the training day.
Stress: Salivary cortisol levels were used as an objective measure of physiological stress.36 The salivary cortisol samples were obtained as a baseline before the first and the second simulation and at three additional times after the two simulations (figure 1). The subjective stress level was measured using the State-Trait Anxiety Inventory (STAI) and cognitive appraisal (CA) (figure 1).21, 23, 49, 50
The Intrinsic Motivation Inventory (IMI) included 22 items with four dimensions: (1) interest/enjoyment; (2) perceived competence; (3) perceived choice; and (4) pressure or tension (reversed scale).51
Evaluation questionnaire: Together with the IMI, each participant received an evaluation questionnaire at the end of the training day and they were asked to return it within a week.36
Team performance was video recorded and assessed by experts using the Team Emergency Assessment Measure (TEAM).36, 52, 53 The TEAM scale was used in its original English version, supplemented with a translated Danish version. The scoring of team performance was done by two consultant anaesthetists and two consultant obstetricians from outside the trial hospital. All four video assessors jointly attended two 3-hour training sessions on video rating, but assessment of the trial videos was conducted individually. Each video assessor received an external hard disc with 20 simulated scenarios, with teams and scenarios (management of an emergency caesarean section and a postpartum haemorrhage, respectively) in random order.
Organisational outcomes were registered using: (1) two open-ended questions included in the evaluation questionnaire on suggestions for organisational changes; and (2) debriefing and evaluation at the end of the training day, where participants reported ideas for organisational changes. The principal investigator (JLS) took notes during these sessions, which were then discussed in the previously mentioned working committee, which included authors MJ and KE.
Sample size calculation
We chose data from knowledge tests from previous studies to conduct our sample size estimation.44, 45 We assumed the primary outcome (the percentage of correct MCQ answers) to be normally distributed with an SD of 24%. To detect a difference of 17% in the percentage of correct MCQ answers between the two groups (ISS and OSS) with a power of 80%, 64 participants had to be included. Since the interventions were delivered in teams (clusters), observations from the same team were likely to be correlated,54, 55 and the reduction in effective sample size depends on the intracluster correlation coefficient, which is why the crude sample size had to be multiplied by a design effect. With an intracluster correlation coefficient of 0.05, corresponding to a design effect of 1.45 for teams of 10, the minimum sample size increased to 92.8 participants.55 We therefore decided to include a total of 100 participants.
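The arithmetic of this calculation can be reproduced with a short sketch. The two-sample normal-approximation formula and the standard design-effect inflation used below are our assumptions about how the numbers were obtained; the trial's own computation may have differed in detail.

```python
from math import ceil
from statistics import NormalDist

def crude_sample_size(delta, sd, alpha=0.05, power=0.80):
    """Total sample size for comparing two means, normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for two-sided 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n_per_group = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    return 2 * ceil(n_per_group)

def design_effect(cluster_size, icc):
    """Inflation factor when the intervention is delivered in clusters."""
    return 1 + (cluster_size - 1) * icc

# Difference of 17% with SD 24%, as in the trial's assumptions.
crude = crude_sample_size(delta=0.17, sd=0.24)            # 64 participants
adjusted = crude * design_effect(cluster_size=10, icc=0.05)  # 92.8, rounded up to 100
```

This reproduces both reported figures: 64 participants before clustering and 92.8 after applying the design effect of 1.45 for teams of 10.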
Randomisation and blinding
Randomisation was performed by the Copenhagen Trial Unit using a computer-generated allocation sequence concealed to the investigators. The randomisation was conducted in two steps. First, the participants were individually randomised 1:1 to the ISS versus the OSS group. The allocation sequence consisted of nine strata, one for each healthcare professional group. Each stratum was composed of one or two permuted blocks of size 10. Second, the participants in each group were then randomised into one of five teams for the ISS and OSS settings using simple randomisation that took into account the days they were available for training.
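The first step, stratified 1:1 allocation with permuted blocks, can be sketched as follows. This is a generic illustration, not the Copenhagen Trial Unit's actual procedure, and the stratum names and sizes are hypothetical.

```python
import random

def permuted_block_allocation(n, block_size=10, seed=None):
    """Allocate n participants 1:1 to ISS vs OSS using permuted blocks.

    Each block contains equal numbers of both arms in random order, so the
    allocation stays balanced throughout recruitment within each stratum.
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        block = ["ISS"] * (block_size // 2) + ["OSS"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

# One stratum per professional group keeps the ISS/OSS split balanced
# within each profession (hypothetical stratum sizes).
strata = {"midwife": 20, "nurse anaesthetist": 10}
assignments = {group: permuted_block_allocation(size, seed=i)
               for i, (group, size) in enumerate(strata.items())}
```

Because each stratum size here is a multiple of the block size, every stratum ends up with exactly half its members in each arm.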
Questionnaire data were transferred from the paper versions and coded by independent data managers. The intervention was not blinded for the participants, instructors providing the educational intervention, the video assessors or the investigators drawing the conclusions. The data managers and statisticians were blinded to the allocated intervention groups.
Data analysis and statistical methods
Owing to the low number of missing values, no missing data techniques were applied. Single missing items in the MCQ test or more than one answer to an MCQ item were treated as incorrect answers. Single missing items in the inventories SAQ, IMI and STAI were excluded from the overall calculation of the summary scores.
Calculation of 95% CI obtained after the simulation intervention (post-MCQ, post-SAQ, stress measurements, IMI) was based on generalised estimating equations (GEE)56 since observations from individuals on the same team were potentially correlated.
The evaluation data measured on a Likert scale were analysed by comparing the location of the ordinal responses to the items in the evaluation questionnaire using the Kruskal-Wallis rank sum test, and the p values were adjusted for multiple testing using the Benjamini-Hochberg method.57
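The Benjamini-Hochberg step-up adjustment can be illustrated with a generic sketch (this is not the trial's analysis code, only the standard procedure applied to hypothetical p values):

```python
def benjamini_hochberg(p_values):
    """Return Benjamini-Hochberg adjusted p values in the input order.

    Sort p values ascending; the adjusted value at rank r is p * m / r,
    enforcing monotonicity by taking a running minimum from the top down.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p value down
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical raw p values for four evaluation items.
adjusted = benjamini_hochberg([0.01, 0.04, 0.03, 0.005])
```

Unlike Bonferroni, this controls the false discovery rate rather than the family-wise error rate, which is less conservative across the 20 evaluation items.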
The mean outcomes obtained after the simulation intervention (postmeasurements) in the two intervention groups were compared by a linear model including intervention and baseline (premeasurements) as explanatory variables (analysis of covariance (ANCOVA)), and inferences were based on GEE to account for the potential correlation within teams. To assess whether there was a difference in mean between pre and postmeasurements in each of the intervention groups, overall tests of whether the intercept equals 0 and the slope equals 1 from a linear model of the postmeasurements on the premeasurements were performed.
The team data, that is, the ratings from the four assessors, were analysed using linear mixed models to take into account the repeated measurements on the teams by the same assessors. Random effects for each team nested in the randomisation group and in each assessor were included. A model including the interaction between the randomisation group and simulation was used to estimate means, whereas an additive model was used to determine the overall difference in mean between the ISS versus the OSS intervention and the first (emergency caesarean section) and the second (postpartum haemorrhage) simulation (no interaction between randomisation and simulations was found).
Ideas for organisational changes were registered by participants and the reported suggestions were categorised as qualitative data and analysed using part of the framework from the Systems Engineering Initiative for Patient Safety model.58
SAS V.9.2, R V.3.0.2 and IBM SPSS Statistics V.20 were used for statistical analysis. Two-sided p values <0.05 were considered significant.
Recruitment, basic characteristics and follow-up of participants
Informed written consent for participation in the trial was provided by 116 healthcare professionals. The two randomised intervention groups were comparable (table 1).
The trial was conducted from April to June 2013. Out of 100 participants included, 97 participated (tables 1 and 2 and figure 2). The 10 simulations were conducted as planned, although one ISS had to be postponed for 15 min due to an ongoing, real emergency caesarean section. The mean number of minutes spent on the caesarean section simulation in ISS and OSS was 18 and 15 min, respectively (p=0.70), while the mean for the postpartum haemorrhage simulation was 26 and 24 min, respectively (p=0.40).
MCQ test: There was no difference in mean post-MCQ scores between the ISS versus the OSS group adjusted for the pre-MCQ scores (table 3). Additional analyses based on the MCQ test, including 33 or 29 of the 40 items, gave similar results (data not shown). These additional analyses were performed because validation of the MCQ test revealed that 7–11 of the 40 MCQ items were disputable.42
Post hoc analysis: The average increase in percentage of correct answers in the MCQ test following training was 13.1% (95% CI 11.0% to 15.3%) in the ISS group and 12.7% (95% CI 10.3% to 15.2%) in the OSS group (overall tests of no difference between pre and post MCQ: both p<0.0001).
SAQ: No differences were found in the ISS versus the OSS group for any of the post-SAQ dimensions (table 4).
Salivary cortisol, STAI and CA: The mean change in baseline to peak was similar for ISS versus OSS for both the first (caesarean section) and the second (postpartum haemorrhage) simulation (table 5).
Post hoc analysis: The salivary cortisol and STAI levels increased significantly from baseline to peak in the ISS and OSS groups following the first (caesarean section) and the second (postpartum haemorrhage) simulation (overall tests for no difference between pre and post: all p<0.0001). CA decreased significantly from baseline to peak in the ISS and OSS settings in both the caesarean section and in the postpartum haemorrhage simulations (p<0.0001).
IMI: No differences were found in the ISS versus the OSS group for the IMI score (table 6).
Participant evaluations and perception: For almost all 20 questions in the evaluation questionnaire, the ISS and OSS groups did not differ significantly. However, the two questions addressing the authenticity or fidelity of the simulations were scored significantly higher by the ISS participants compared with the OSS participants (table 7).
TEAM: No significant differences were found in the team scoring of performance between the ISS versus the OSS group (table 8).
TEAM post hoc analysis: A significant increase was found in the team scoring of performance from the first simulation (emergency caesarean section) to the second (postpartum haemorrhage) (table 8).
Organisational changes: A qualitative analysis showed that more ideas for organisational changes were suggested by ISS participants than OSS participants. For details, see online supplementary table S1. The quantitative analysis, however, showed that participants in the ISS and OSS groups scored equally concerning whether the simulations inspired making changes in procedures or guidelines (table 7, questions 5 and 6).
In this randomised trial, we did not find that simulation-based medical education conducted as ISS compared with OSS led to different outcomes in terms of knowledge, patient safety attitude, stress, motivation, perceptions of the simulations or team performance. Participant perception of the authenticity of the ISS and OSS differed significantly, but this had no influence on other individual or team outcomes. We observed that ISS participants provided more ideas for organisational changes than did OSS participants. This is in accordance with several non-randomised studies describing a positive impact of ISS on the organisation.8, 10, 11, 13, 59–61
In the evaluation questionnaire (table 7), participants were asked about their perceptions of the authenticity of the simulations, which can be interpreted as their perception of the simulation's fidelity. The participants scored the authenticity significantly higher in ISS compared with OSS; however, there were no differences in any of the other outcomes between the ISS and OSS groups. The results from this randomised trial are not consistent with traditional situated learning theory, which states that increased fidelity leads to improved learning.62, 63 The conclusions from this trial, however, are in alignment with more recent empirical research and discussions on fidelity and learning.6, 64–66 Our study indicates that a change in simulation fidelity, that is, a change in the setting of the simulation, does not necessarily translate into learning. Another randomised trial, which compared OSS as in-house training at the hospital in rooms specifically allocated for training with OSS in a simulation centre, also showed that the simulation setting was of minor importance and that there was no additional benefit from OSS training in a simulation centre versus in-house.44, 67
The present trial involved simulation-based training with six different healthcare professions. A relevant perspective is the discussion on expanding the traditional concept of fidelity to include the recently introduced term sociological fidelity, which encompasses the relationship between the various healthcare professionals.37 ,68 After completing the trial, we decided to explore more closely the experiences between the healthcare professionals in a qualitative study.69
Post hoc analyses showed similar educational effects in the ISS and OSS groups, with a knowledge gain of approximately 13% in both groups. It can be argued that this knowledge gain was due to the test effect.70, 71 We believe, however, that the test effect was minimised: feedback, which is viewed as crucial to learning from a test, was not given after the initial testing, and furthermore only one MCQ test was used.71
No differences were found in the mean SAQ score after simulation-based medical education in the ISS versus the OSS group. Earlier studies have described that high baseline SAQ values leave little room for the SAQ to be influenced by an intervention.72, 73 The values for the SAQ were generally high in this trial compared with various other studies from non-Scandinavian countries.72–75
There were no differences in the stress level, measured as salivary cortisol levels, STAI and CA, in the ISS versus the OSS group. The post hoc analysis showed that simulation-based medical education triggered objective stress, measured by salivary cortisol, to the same extent in the ISS and OSS groups. CA seemed to lack discriminatory effect: a decrease was observed where an increase would have been expected, and the levels of CA were low compared with other studies. Previously used among students and medical trainees,22, 76, 77 CA appeared to have less discriminatory effect in these more senior groups of healthcare professionals.
The IMI24, 51 revealed no differences between ISS versus OSS. Motivation has not previously been tested in educational simulation studies; a gap appears to exist in the simulation literature on motivational factors, and further research has been encouraged.25 Some argue that simulation in the clinical setting, as with ISS, should increase motivation,14 but this was not confirmed by the findings in this trial.
The evaluation data showed no differences between ISS and OSS. Both the ISS and OSS participants gave very high scores on the evaluation. This is in accordance with what is generally seen in interprofessional training.78
The team performance showed no differences between ISS versus OSS. The post hoc analysis showed that teams performed statistically significantly better in the second simulation compared with the first, which indicates that the simulations were effective. Validated in previous studies, the TEAM scale has been found to be reasonably intuitive to use,52, 53 which was also our impression in this study.
According to the participants’ own perceptions, they found that ISS and OSS were equally inspirational with regard to suggesting organisational changes in the delivery room and operating theatre and for clinical guidelines. The qualitative analysis, however, revealed that ISS participants provided more ideas for suggested changes, especially concerning technology and tools in the delivery ward and the operating theatre.58 Previous non-randomised studies have suggested that ISS has an impact on organisations, but this has, to the best of our knowledge, never been confirmed in a randomised trial.8, 11, 13, 17, 59
Strengths and limitations
This trial has several strengths. It had adequate generation of the allocation sequence, adequate allocation concealment and adequate reporting of all relevant outcomes; it had very few dropouts; and it was conducted on a not-for-profit basis.79–81 The trial was also blinded for data managers and statisticians. Generally, ISS programmes have been criticised for their lack of meaningful evaluations of the effectiveness of the programmes.8 A strength of this trial was its use of a broad variety of outcome measures using previously validated scales to assess the effect at the individual, team and organisational levels.
A limitation of the study is the fact that the outcome was based only on immediate measurements of knowledge level and of team performance. Only perceptions of simulation were measured after 1 week (evaluation and motivation) and safety attitudes after 1 month. No clinical outcomes or patient safety data were measured.
A strength of this trial is the involvement of authentic teams that mirrored real-life teams, which seems to be of importance for the so-called sociological fidelity.37, 68 The teams in this trial were authentic in their design and hence resembled the real clinical setting in every possible way.65, 82 These kinds of teams are called ‘ad hoc’ on-call teams and are very difficult to follow and observe in the real clinical setting; assessment of the clinical performance of ad hoc teams over a long period is almost impossible. The authentic teams may also be a limitation because two-thirds of the participants had some simulation experience. The findings in this trial therefore need to be confirmed among other kinds of healthcare professionals with less experience in simulation-based education.
Previous research on assessment suggests that knowledge-based written assessments can predict the results of performance-based tests, and hence knowledge-based assessment could be used as a proxy for performance.83–85 However, a better approach to the assessment could have been performance-based tests of clinical work, but this was considered unfeasible.
In this trial, we did not measure long-term retention. The literature on retention of skills suggests that deterioration of the non-used skills appears to occur about 3–18 months after training. More research within the field of retention and on the effect of short booster courses is necessary.45 ,86–88
There is a risk of type II error and the trial is most likely underpowered, as many randomised trials are. On the other hand, it should be discussed whether performing a larger trial to detect a statistically significant effect of ISS would be relevant or feasible, and whether such an effect would be clinically or educationally relevant.89
The improvements in knowledge and team performance may also be due to the Hawthorne effect, that is, individuals changing behaviour as a result of their awareness of being observed.90 From an educational perspective, the Hawthorne effect is a major problem when an intervention group is compared with a control group that is given no intervention.90 This issue was avoided in this trial, as exactly the same intervention was used for both groups, the only difference being the physical setting, thus most likely minimising the Hawthorne effect in our trial.90
This randomised trial compared ISS versus OSS, where OSS was provided as in-house training at the hospital in rooms specifically allocated for training. From this trial, we concluded that a change in setting from OSS to ISS does not seem to provide key elements for improving simulation-based medical education. Although participant perception of the authenticity or fidelity of ISS versus OSS differed significantly, there were no differences in knowledge, patient safety attitude, stress measurements, motivation or team performance between the groups, except that the ISS group generated more suggestions for organisational changes. This trial indicated that the physical fidelity of the setting seemed to be of less importance for learning; however, more research is necessary to better understand which aspects of simulation are most important for learning.
The authors would like to thank the doctors, midwives and nurses who took part in the working committee and in planning the intervention, especially midwife Pernille Langhoff-Roos (Department of Obstetrics, Rigshospitalet, University of Copenhagen) for her contributions to the detailed planning, recruitment of participants and completion of the simulations. They would also like to thank Jørn Wetterslev (Copenhagen Trial Unit) for advice on the potential impact of the clustering effect; Karl Bang Christensen (Section of Biostatistics, Department of Public Health, Faculty of Health and Medical Sciences, University of Copenhagen) for advice on the statistical plan; Solvejg Kristensen (Danish National Clinical Registries) for her advice and for providing a Danish edition of Safety Attitudes Questionnaire; Per Bech (Psychiatric Research Unit, Mental Health Centre North Zealand) for advice and for providing a Danish edition of the State-Trait Anxiety Inventory; Claire Welsh (Juliane Marie Centre for Children, Women and Reproduction, Rigshospitalet, University of Copenhagen) and Niels Leth Jensen (certified translator) for forwarding the back translation of Intrinsic Motivation Inventory; Jesper Sonne (Department of Clinical Pharmacology, Bispebjerg Hospital) for advice on medicine that can influence cortisol measurements; Anne Lippert and Anne-Mette Helsø (Danish Institute for Medical Simulation, Herlev Hospital, University of Copenhagen) for training the facilitators in conducting debriefing after simulation; medical student Veronica Markova for undertaking all practical issues on cortisol measurements; Bente Bennike (Laboratory of Neuropsychiatry, Rigshospitalet, University of Copenhagen) for managing the cortisol analyses; medical students Tobias Todsen and Ninna Ebdrup for video recordings of simulations; and Tobias Todsen for editing and preparing videos for assessment. 
Furthermore, the authors would like to thank Helle Thy Østergaard and Lone Fuhrmann (Department of Anaesthesiology, Herlev Hospital, University of Copenhagen), Lone Krebs (Department of Obstetrics and Gynecology, Holbæk Hospital) and Morten Beck Sørensen (Department of Obstetrics and Gynecology, Odense University Hospital) for reviewing and assessing all videos. Finally, thank you to Nancy Aaen (copy editor and translator) for revising the manuscript.
Contributors JLS conceived of the idea for this trial. BO and CVdV supervised the trial. All authors contributed to the design of the trial. JLS, assisted by BO, was responsible for acquiring funding. JLS, JL and CG contributed to the sample size estimation and to the detailed design and execution of the randomisation process. JLS, MJ, KE, DOE and VL made substantial contributions to the practical and logistical aspects of the trial, while PW contributed to the discussion and to the practical and logistical issues concerning testing salivary cortisol. Jointly with JLS, SR and LS performed the statistical analysis. JLS wrote the draft manuscript. All authors provided a critical review of this paper and approved the final manuscript.
Funding The trial was funded by non-profit funds, including the Danish Regions Development and Research Foundation, the Laerdal Foundation for Acute Medicine, and the Aase and Ejnar Danielsen Foundation. None of the foundations had a role in the design or conduct of the study.
Competing interests All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/coi_disclosure.pdf. The principal investigator and lead author (JLS) reports non-profit funding mentioned above. DØ reports board membership of Laerdal Foundation for Acute Medicine. Other authors declare no support from any organisation for the submitted work.
Ethics approval Participants were healthcare professionals, and neither patients nor patient data were used in the trial. Approvals from the Regional Ethics Committee (protocol number H-2-2012-155) and the Danish Data Protection Agency (Number 2007-58-0015) were obtained. Participants were assured that their personal data, data on questionnaires, salivary cortisol samples and video recordings would remain anonymous during analyses and reporting. The participants were asked to respect the confidentiality of their observations about their colleagues’ performance in the simulated setting.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No consent for data sharing with other parties was obtained, but the corresponding author may be contacted to forward requests for data sharing.