Supervised learning events in the Foundation Programme: a UK-wide narrative interview study
Charlotte E Rees,1 Jennifer A Cleland,2 Ashley Dennis,1 Narcie Kelly,3 Karen Mattick,3 Lynn V Monrouxe4

1Centre for Medical Education, Medical Education Institute, School of Medicine, University of Dundee, Dundee, UK
2Division of Medical and Dental Education, University of Aberdeen, Aberdeen, UK
3University of Exeter Medical School, University of Exeter, Exeter, UK
4Office of Research and Scholarship, Institute of Medical Education, Cardiff University, Cardiff, UK

Correspondence to Professor Charlotte E Rees; c.rees@dundee.ac.uk

Abstract

Objectives To explore Foundation trainees’ and trainers’ understandings and experiences of supervised learning events (SLEs), compared with workplace-based assessments (WPBAs), and their suggestions for developing SLEs.

Design A narrative interview study based on 55 individual and 19 group interviews.

Setting UK-wide study across three sites in England, Scotland and Wales.

Participants Using maximum-variation sampling, 70 Foundation trainees and 40 trainers were recruited; they shared their understandings and experiences of SLEs/WPBAs and made recommendations for future practice.

Methods Data were analysed using thematic and discourse analyses, and a narrative analysis of one exemplar personal incident narrative.

Results While participants volunteered understandings of SLEs as learning and assessment, they typically volunteered understandings of WPBAs as assessment. Trainers seemed more likely than trainees to describe SLEs as assessment and a ‘safety net’ to protect patients. We identified 333 personal incident narratives in our data (221 SLEs; 72 WPBAs). There was perceived variability in the conduct of SLEs/WPBAs in terms of their initiation, the tools used, feedback and finalisation. Numerous factors at individual, interpersonal, cultural and technological levels were thought to facilitate or hinder learning. SLE narratives were more likely to be evaluated positively than WPBA narratives, both overall and by trainees specifically. Participants made sense of their experiences, emotions, identities and relationships through their narratives. They provided numerous suggestions for improving SLEs at individual, interpersonal, cultural and technological levels.

Conclusions Our findings provide tentative support for the shift to formative learning with the introduction of SLEs, albeit raising concerns around trainees’ and trainers’ understandings about SLEs. We identify five key educational recommendations from our study. Additional research is now needed to explore further the complexities around SLEs within workplace learning.

  • EDUCATION & TRAINING (see Medical Education & Training)
  • MEDICAL EDUCATION & TRAINING
  • QUALITATIVE RESEARCH

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • This is the first study to explore Foundation Programme trainees’ and trainers’ understandings and experiences of supervised learning events (SLEs) compared with workplace-based assessments (WPBAs).

  • The large number of narratives collected across England, Scotland and Wales enhances the transferability of our findings to other UK locations.

  • We had relatively low numbers of general practitioner (GP) and nurse trainers, and relatively few trainees with GP and nurse trainer SLE/WPBA experiences, so our findings are most relevant to SLEs conducted by hospital doctors.

Introduction

If you are a clinical educator or trainee doctor in today's National Health Service (NHS) in the UK, you will inevitably have participated in a ‘supervised learning event’ (SLE).1 SLEs review the personal development of trainee doctors, with an emphasis on patient safety.1 They were introduced into the UK Foundation Programme (UKFP) in 2012. SLEs specifically address concerns raised in the Collins report2 and previously published literature about assessment within the UKFP3: that trainees and trainers perceived workplace-based assessments (WPBAs) as excessive, onerous and therefore unvalued. Drawing on the same tools utilised within WPBAs (eg, Mini Clinical Evaluation Exercise: Mini-CEX, Direct Observation of Procedural Skills: DOPS and Case-Based Discussion: CBD), SLEs are designed to: (1) highlight achievements and areas of excellence; (2) provide immediate feedback and suggest areas for further development; and (3) demonstrate engagement in the educational process (see ref. 1, pp. 57–59 for more details). Trainees are encouraged to complete a minimum number of SLEs spread evenly throughout their placements, with different trainers and covering diverse acute and long-term clinical problems.1 In this way, SLEs aim to facilitate a strong formative component of trainee doctors’ assessment.

Rather than indicating what a learner can/cannot do or knows (ie, summative assessment), formative assessments indicate the ‘gap’ between the learner's actual level of performance and the required standard, providing an indication of how performance could be improved to reach the required standard. Therefore, SLEs are designed to enable the provision of timely feedback about the effectiveness of care and the trainee's interactions with others, with a focus on the trainee's performance and development, which may identify areas of weakness requiring support and reflection. Thus, SLEs have the potential to be more meaningful for learning, motivating learners to ‘mastery goals’ such as understanding, rather than ‘performance goals’ like passing an examination.4 ,5

However, SLEs also have a summative role within the UKFP. Currently, evidence of SLEs must be included in every Foundation doctor's e-Portfolio, which in turn is a method of assessment of the Foundation doctor's success in achieving the outcomes described in the curriculum, and which educational supervisors use in the end of placement report. Thus, SLEs can be viewed broadly as information gathering activities that aim to benefit the quality of trainee learning, as well as monitoring their engagement with feedback for accountability purposes.

Effectiveness of the assessment tools

Previous research has examined the effectiveness of the assessment tools (eg, DOPS, Mini-CEX, CBD),6–8 drawing on van der Vleuten's utility equation9: educational impact × validity × reliability × cost-effectiveness × acceptability. This research has produced mixed results regarding the tools’ efficacy in terms of acceptability, reliability and validity: (1) the acceptability of WPBAs to trainees and trainers varies widely2 ,8 ,10–13; (2) the reliability of the tools is frequently suboptimal14; and (3) the Mini-CEX and the ‘clinical encounter card’ appear to have high criterion validity in terms of strong and significant correlations with other assessment instruments.15 However, the cost-effectiveness and educational impact of the tools have been largely neglected. Indeed, few published articles have explored the educational impact of WPBA tools, and there is therefore little evidence that they lead to improvements in performance.3 ,15
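Expressed as a formula (using our own shorthand symbols for the factors listed above), the multiplicative form makes the practical implication of the utility equation explicit: a near-zero value on any single criterion drives overall utility towards zero:

  U = E \times V \times R \times C \times A

where U is utility, E educational impact, V validity, R reliability, C cost-effectiveness and A acceptability. A tool that is highly reliable and valid but unacceptable to trainers, for example, yields little overall utility.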

Effectiveness of WPBAs and SLEs

Research has also examined the effectiveness of WPBAs as a whole, although such evidence is scant. What evidence there is suggests that WPBAs are relatively ineffective, which has been attributed to issues such as the suboptimal use of the tools for feedback.16 ,17 Some research suggests that the rating scales often utilised within tools such as the Mini-CEX introduce artificiality into the assessment, concluding that open-ended comments may be more valuable because assessors are able to provide feedback in more ‘authentic’ terms.18 Additionally, suboptimal learners are less likely to seek feedback.19 Outcomes such as learning, transfer of skills to new situations or improved patient care are relatively unstudied, and where they have been studied, the conclusions drawn are limited by weak study designs.

SLEs were introduced in 2012 to address these shortcomings but, so far, no evidence has been published evaluating their success in doing so. Given that SLEs comprise similar tools to those used within WPBAs, but with fewer assessments and explicit formative goals, it is important that aspects such as the acceptability and educational utility of SLEs as a form of feedback are explored as a matter of priority. Because acceptability and educational impact inter-relate with how trainees and trainers make sense of their experiences, emotions, identities and relationships, we felt it crucial to employ a narrative interview approach. We were therefore commissioned by the Academy of Medical Royal Colleges (AOMRC) to undertake an independent evaluation of the impact of the transition from WPBAs to SLEs.

Aims and research questions

This study is the first exploration of SLEs within the UKFP and aims to answer four research questions (RQs). (RQ1) What are participants’ understandings of SLEs and WPBAs and how do they differ between trainees and trainers? (RQ2) What are participants’ experiences of SLEs and WPBAs and how do they differ between trainees and trainers? (RQ3) How do participants make sense of their experiences through narrative? (RQ4) What are participants’ suggestions for how SLEs should be developed?

Methods

Study design

We conducted a qualitative study using group and individual interviews to elicit trainees’ and trainers’ understandings and personal incident narratives (PINs) of their experiences. We employed focus groups wherever possible because they can lead to richer data due to group dynamics (eg, synergism), but individual interviews were also utilised because of the difficulties in getting groups of clinicians together.20 Our study draws on a social constructionist epistemology, suggesting that there are multiple interpretations of reality and ways of knowing.21 We consider the individual and sociorelational aspects of stories of experience, including how participants make sense of their SLE/WPBA experiences through narrative and how, in sharing those stories, they construct identities and trainee–trainer relationships.22

Sampling and recruitment

Following Deanery and Medical School authorisation, ethical approval was obtained at three sites in England, Scotland and Wales. Using maximum-variation sampling to capture a wide range of understandings and experiences, we recruited Foundation doctors from years 1 and 2 of the 2-year programme (F1s and F2s) with training experiences in hospital and general practice settings. We also recruited trainers across hospital and general practice settings, including clinical and educational supervisors and members of placement supervision groups such as specialist registrars, consultants and nurses. Using advice from our clinical reference group (see Acknowledgements section), we employed multiple recruitment strategies to maximise participation: (1) email; (2) physical notice-boards; (3) leaflets in strategic places (eg, medical libraries, common rooms); (4) snowballing through participant and trainee organisations (eg, BMA junior doctor committee); (5) social networking (eg, Facebook); and (6) face-to-face contact during formal curricula. We interviewed 110 participants (34 F1s, 36 F2s and 40 trainers; see table 1 for participants’ characteristics). This overall sample and its subsamples far exceeded the minimum sample size of 30 advocated by some qualitative scholars.23 Furthermore, we considered this to be the maximum number of participants we could feasibly interview given the time and financial constraints of our grant, another pragmatic consideration discussed by qualitative researchers.23

Table 1

Participant characteristics by group

Data collection

We conducted 55 individual and 19 group interviews (34 individual and 3 group interviews with trainers; 21 individual and 16 group interviews with trainees). All focus groups bar two were homogeneous in terms of the type of study participant (ie, trainer or year-specific trainee groups). Interviews were recorded, transcribed and anonymised (mean length of focus groups 45:43 minutes:seconds (range 31:50–63:47) and of individual interviews 36:38 minutes:seconds (range 17:37–69:50); total data around 46 h). Participants completed a personal details questionnaire comprising demographic and education-related details, enabling classification of sample characteristics by group, by site and across the entire study. An interview schedule ensured consistency across multiple interviewers. Interviews began by exploring trainees’ and trainers’ understandings of SLEs and WPBAs. Using narrative interviewing, we encouraged participants to articulate their PINs of SLEs and WPBAs by asking a series of prompts around their narratives: Can you tell me about a memorable SLE/WPBA? What happened? Who was involved? Where did it happen? What did you do and why? How did you feel? What was the impact of that SLE/WPBA on trainee learning? We encouraged participants to narrate their SLE/WPBA experiences so that their views were grounded in actual lived experiences and we could understand how they made sense of those experiences, their identities and their relationships. Interviews continued until participants felt they had shared their experiences sufficiently. We then asked participants how they thought SLEs could be improved.

Data analysis

We employed multiple and complementary forms of analysis, as per previously published research24: thematic and discourse analyses, and an in-depth narrative analysis of one exemplar PIN. We began with a primary-level thematic analysis of the data called framework analysis (involving data familiarisation, thematic framework identification, indexing, charting, mapping and interpretation) to determine content-related and process-related themes (ie, what participants said and how they said it, respectively).25 The identification and coding of process-related themes was akin to discourse analysis, that is, analysis of language-in-use in social interaction.26 We employed qualitative data analysis software (Atlas-Ti, V.7.0) to facilitate multianalyst coding of the data. This allowed us to explore patterns across our data qualitatively, such as possible differences in understandings between trainees and trainers, and sometimes quantitatively, such as exploring differences in trainees’ and trainers’ SLE/WPBA experiences using descriptive (eg, frequencies and percentages) and univariate statistics (eg, χ2 tests). Finally, we present an in-depth narrative analysis of one exemplar PIN in this paper to illustrate how one trainee made sense of her workplace learning experiences, identities and relationships.27 We establish credibility in our study by describing our analytic methods, involving multiple data analysts and using illustrative quotes.28 Finally, we establish transferability through our inclusion of a large number of narratives from a diverse sample of trainees and trainers across three different UK countries.28
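To illustrate the quantification step, the following minimal sketch (Python; the coded records are hypothetical stand-ins for the Atlas-Ti coding output, not the study's data) shows how coded narratives could be tallied into the kinds of frequencies and percentages reported in the Results:

  from collections import Counter

  # Hypothetical coded narratives: (participant group, event type,
  # narrative evaluation). Illustrative stand-ins only.
  coded = [
      ("trainee", "SLE", "positive"),
      ("trainee", "WPBA", "negative"),
      ("trainer", "SLE", "positive"),
      # ... one tuple per coded narrative
  ]

  # Tally evaluations within each event type (SLE vs WPBA),
  # mirroring the descriptive step of the analysis.
  totals = Counter(event for _, event, _ in coded)
  counts = Counter((event, evaluation) for _, event, evaluation in coded)
  for (event, evaluation), n in sorted(counts.items()):
      print(f"{event} {evaluation}: {n}/{totals[event]} ({100 * n / totals[event]:.0f}%)")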

Results

Our thematic framework analysis identified seven themes in the data: one theme relating to our first research question (understandings of SLEs/WPBAs); four themes relating to our second research question (contextual codes for the PINs, processes of SLEs/WPBAs, factors facilitating learning in SLEs/WPBAs, and factors inhibiting learning); one theme relating to our third research question (how participants narrate their experiences); and one theme relating to our fourth research question (suggestions for improving SLEs).

RQ1: what are participants’ understandings of SLEs and WPBAs and how do they differ between trainees and trainers?

Many trainees and trainers admitted to not knowing what SLEs were, and this uncertainty was emphasised through hesitations (errs and ums), pauses, hedges (eg, ‘I guess’) and laughter. Some participants (eg, those new to training or new to the UK) were also unsure what WPBAs were but most seemed better able to explain WPBAs than SLEs.

Many trainers and F2s suggested that SLEs and WPBAs were conceptually and operationally the same. However, others did perceive them to be conceptually different, with SLEs having formative and WPBAs having summative aims. While participants volunteered a range of understandings for SLEs (eg, as learning, as assessment), they almost exclusively volunteered understandings of WPBAs as assessment (see table 2).

Table 2

Participants’ understandings of SLEs/WPBAs

Trainers seemed to volunteer understandings of SLEs as assessment and as a ‘safety net’ (ie, a diagnostic tool to help identify trainees who were ‘struggling’) more than trainees did. Indeed, only trainers defined WPBAs in this way. Another apparent difference we identified was the extent of emotional talk (eg, negative emotion talk) employed by trainees when attempting to define SLEs and WPBAs. Trainees sometimes felt the formative focus relieved the pressure to perform and reduced anxieties.

RQ2: what are participants’ experiences of SLEs and WPBAs and how do they differ between trainees and trainers?

We outline here the key findings associated with four of our fragmentary themes (ie, themes that cross-cut all narratives): one contextual theme (covering the timing and location of SLEs/WPBAs, the identity of the trainer, the type of tool and the participants’ evaluations, including differences between trainees’ and trainers’ evaluations) and three conceptual themes pertaining to participants’ lived experiences of SLEs/WPBAs (processes of SLEs and WPBAs, and factors facilitating and inhibiting learning within SLEs/WPBAs). It is important to indicate that narratives typically contain numerous elements, including the narrator's commentary on their experience, also known as the ‘evaluation’.29 As per the interpretive approach, the analysts coded whole narratives to these codes depending on what participants said and how they said it. For example, narratives including mostly negative emotional talk (eg, ‘it was quite alarming’) would be coded to ‘negative evaluation’ and narratives including mostly positive emotional talk (eg, ‘it's nice to have nice things said about you’) would be coded to ‘positive evaluation’.

The context of SLE and WPBA narratives

We identified 333 narratives in the data (221 SLEs, 72 WPBAs; see table 3). Most SLEs and WPBAs narrated took place in hospital settings (n=253) and involved F1 doctors (n=185). Trainers within the incidents were usually hospital-based doctors (n=262), although some non-medical specialists (eg, nurses) also acted as trainers (n=15). CBD, DOPS and Mini-CEX were the most common tools narrated (totalling n=276). Finally, SLE narratives were overall more likely to be evaluated positively by the narrators (58%) than WPBA narratives (39%), and less likely to be evaluated negatively (13%) than WPBAs (22%; χ2=5.344, df=1, p=0.021). The descriptive statistics presented in table 3 illustrate more similarities than differences between trainees and trainers. Although trainees seemed to narrate more SLE experiences with positive evaluations (62%) compared with trainers (46%; χ2=0.000, df=1, p=1.000) and more WPBAs with negative evaluations (26%) compared with trainers (18%; χ2=0.237, df=1, p=0.627), these relationships were not statistically significant. However, trainees were more likely to narrate their SLE experiences positively (62%) than their WPBAs (36%; χ2=5.148, df=1, p=0.023).
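To make the univariate comparisons concrete, here is a minimal sketch (Python with SciPy) of how a 2×2 χ2 test of narrative evaluations could be computed; the cell counts below are approximate reconstructions from the percentages above, included purely for illustration rather than being the study's raw data:

  from scipy.stats import chi2_contingency

  # Hypothetical 2x2 contingency table of narrative evaluations
  # (rows: SLE, WPBA; columns: positive, negative). Counts are
  # approximated from the reported percentages for illustration
  # only; they are not the study's raw data.
  observed = [
      [128, 29],  # SLE narratives (~58% positive, ~13% negative of 221)
      [28, 16],   # WPBA narratives (~39% positive, ~22% negative of 72)
  ]

  chi2, p, df, expected = chi2_contingency(observed)
  print(f"chi2={chi2:.3f}, df={df}, p={p:.3f}")

Note that chi2_contingency applies Yates’ continuity correction to 2×2 tables by default, so its output can differ slightly from an uncorrected χ2 test.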

Table 3

Overview of personal incident narratives of supervised learning events and workplace-based assessments by participants: Frequencies (%)

Processes of SLEs and WPBAs

SLEs and WPBAs were conducted in diverse ways, in terms of their initiation, tools employed, educational processes used and completion.

Initiating SLEs and WPBAs

WPBAs/SLEs were initiated by different parties, with different motivations and in different contexts. While SLEs should be trainee-initiated, trainers occasionally also initiated them, sometimes near the end of rotations (see table 4). Trainees and trainers described some trainees lacking the proactivity to seek opportunities for SLEs/WPBAs. When trainees did initiate them, they at times strategically chose a trainer they knew. This was sometimes done to enhance the learning experience: choosing someone they felt comfortable with, believed would engage in the process and/or thought would support them in a positive way. At other times, it was done with the intention of having a quick and easy experience in which the trainer would just ‘tick the box’. Trainees often described feeling discomfort in asking for SLE/WPBA supervision and were often grateful when trainers initiated them. Initiation also varied in terms of the level of planning and organisation. Occasionally, SLEs/WPBAs were planned ahead of time, sometimes involving an element of rehearsal (particularly for the Developing the Clinical Teacher tool: DCT). At other times, they were ad hoc, with opportunistic clinical encounters recognised as an opportunity for an SLE. Finally, they were sometimes initiated retrospectively, at times long after the event, particularly when trainees had completed insufficient tools during their placements (see table 4).

Table 4

“I'll actively hunt”

Tools used

Participants talked about the unique aspects of the tools, their preferences and the tools’ ‘workability’. However, they were sometimes unsure or mistaken about what comprised an SLE/WPBA assessment tool, or conflated tools (eg, CBD with Mini-CEX). Participants discussed the practicalities of the various tools and suggested that some were less workable in certain specialties (eg, DOPS in psychiatry). Interestingly, many participants expressed clear preferences and dislikes for certain tools. For example, some clinicians expressed a preference for the Mini-CEX over CBD: the Mini-CEX allowed them to observe ‘real’ performances of trainees and identify ‘struggling trainees’, whereas CBDs gave trainees opportunities to rehearse, thereby masking potential difficulties. Other trainees expressed a preference for CBD over DOPS: CBDs led to ‘real learning’, whereas DOPS were ‘tick-box exercises’, simply signing off procedures in which trainees were already competent.

Feedback

The educational activities highlighted included: (1) trainers’ observation of the trainee; (2) didactic teaching of knowledge/skills; (3) scaffolding trainees’ learning through strategic questioning; and (4) feedback (most commonly verbal feedback during the event and written feedback afterwards). Feedback quality was thought to vary. Positive experiences included personal, meaningful and constructive feedback for learning. Negative experiences included generalised (non-specific), inadequate, inconsistent (eg, contradictory verbal and written feedback from the same trainer), unconstructive/abusive or overly positive (and therefore educationally unhelpful) feedback. Trainees often wanted formative feedback to help improve their performance (ie, feedforward) rather than ticks (ie, feedback).

Finalising SLEs and WPBAs

Some participants described examples of trainers completing forms promptly, sometimes during the SLE/WPBA itself, with the feedback being a dialogue. However, finalising the SLE/WPBA process often involved chasing trainers to complete forms within e-Portfolios, which trainees perceived as frustrating and awkward. Trainers were also frustrated if they received the link to the form weeks after the SLE. Trainers and trainees described how written e-Portfolio feedback could be inadequate: while some trainees used trainer comments to promote reflection within their e-Portfolio, others seemed to lack the motivation to read their e-Portfolio feedback. Occasionally, trainers relied on hearsay or a general overview of a trainee rather than seeing events for themselves, signing trainees off without actually witnessing their performance, a subtheme we called ‘manipulating the system through short-cuts’ (see tables 4 and 5).

Table 5

Issues around supervised learning events/workplace-based assessments

Factors facilitating and inhibiting learning in SLE/WPBAs

Participants described many factors that facilitated and inhibited learning throughout SLEs and WPBAs at four different levels: individual (eg, characteristics of individual trainees and trainers), interpersonal (eg, trainer–trainee relationships), cultural (eg, protected time) and technological (eg, e-forms; see table 6).

Table 6

Factors facilitating/inhibiting learning through supervised learning events/workplace-based assessments

RQ3: how do participants make sense of their experiences through narrative?

Participants narrated their SLEs/WPBAs with interesting discourse features (eg, pronominal, metaphoric and emotional talk, and laughter), revealing how they constructed themselves, others and their relationships. In terms of pronouns, participants often referred to the ‘other’ as ‘them’, illustrating adversarial trainer–trainee relationships (eg, “they need a certain amount completed so particularly towards the end of placements you get a lot of reminders because you haven't done it ‘cause you haven't had time um and they're panicking ‘cause they need to get them” (Trainer, site 3)). Participants’ metaphoric talk also illustrated how they understood the trainee–trainer relationship as adversarial, for example as war (eg, “we get at least one CBD… and questions get fired back and forward” (Trainee, site 2)) and as sport (eg, “I think it was… a win-win for both of us…. they realised where they were with it, they acknowledged some of their deficiencies and I was able to form a game plan…” (Male Trainer, site 2)). Participants employed positive and negative emotional talk throughout their narratives (eg, “the supervisors don't know their trainees because of the way the rotations work, and that must be very difficult I think… yes it is very difficult” (Female Trainer, site 2)), and also laughter, in order to cope with the recounting of difficult stories (eg, “I'll talk about a good one I've had, because then we'll get on to the bad ones I've had ((laughs))” (Trainee, site 3)).

To illustrate these themes in more depth, we next present one narrative exemplar from a trainee that demonstrates the complex interplay between what participants say and how they narrate their experiences in order to make sense of those experiences, their identities and their relationships. We selected this narrative because it is fairly typical, illustrates a range of themes already discussed in this paper and includes interesting discourse elements found across our data (see Rees et al30 for a further narrative analysis). Helena (a pseudonym) is a female F2. She narrates a WPBA experience from the end of her final F1 rotation. Her experience takes place in a medical setting within the hospital and involves her clinical fellow trainer. She recounts a fairly typical experience: “hunting” for outstanding WPBAs/SLEs near the end of rotations. In the following narrative, Helena explains how her trainer offers to sign off ‘inserting a venflon’ without observing her (see table 4), clearly indicating how trainees and trainers can manipulate the system through short-cuts.

She constructs her own identity and that of her clinical teaching fellow through narrating her DOPS experience. Helena presents herself as a competent Foundation doctor by emphasising her day-to-day participation in the medical work of the hospital: taking blood and inserting venflons. She sees her competence in these procedures as without question, emphasised by her repeated comments about trainers “knowing” that she and her fellow Foundation doctors can insert venflons because they see evidence of them in patients’ arms. Helena suggests the obviousness of Foundation doctors’ competence, in that they would not be able to “survive on the wards” if they could not take blood. Helena positions her clinical fellow (and other trainers) as having insufficient time ‘to actually stand and watch’ trainees do basic procedures that they are competent in. Helena presents her trainer as knowledgeable and proactive because he checks she has completed her WPBAs for the end of her rotation. While he is partly constructed as helpful for offering to sign off a venflon insertion, he is simultaneously constructed as blasé in that her competence is ‘taken for granted’.

There are various discourse elements in Helena's narrative that are worthy of consideration, including her pronominal and metaphoric talk and her laughter, all of which shed light on how she makes sense of this DOPS experience. In terms of her pronominal talk, she repeatedly positions herself as ‘we’ throughout her narrative (meaning herself and the other Foundation doctors), and she repeatedly positions her clinical fellow as ‘they’ (meaning him and other trainers). This use of ‘we’ and ‘they’, rather than ‘me’ and ‘him’, depersonalises and simultaneously generalises her experience, implying that all Foundation doctors commonly experience this event.31 Furthermore, this ‘them and us’ language within the narrative implies an oppositional relationship between trainees and trainers.31 In terms of metaphoric talk, Helena explains that she is “hunting” for patients in order to get DOPS signed off, and she is busy “surviving” on the wards by practising procedures competently. This latter metaphoric linguistic expression, for example, implies the common conceptual metaphor of medicine as war and, similar to the pronominal talk, implies oppositional relationships between trainees and trainers.32 ,33 What is striking about these metaphoric linguistic expressions is that they are both accompanied by laughter, possibly for contextual coping (in the interactional moment of narrating the event) and non-contextual coping (due to uncomfortable feelings around the nature of what she is disclosing in her narrative).34 ,35 This laughter for coping suggests that experiences such as this (‘I don't find DOPS very useful’) can have a negative impact on trainees’ emotional learning experiences.

RQ4: what are participants’ suggestions for how SLEs should be developed?

In response to our final question (how do you think SLEs could be improved?), participants provided a range of suggestions at four different levels: individual (eg, improving trainees’ and trainers’ understanding and engagement), interpersonal (eg, improving trainer–trainee relationships), cultural (eg, shifting away from a tick-box summative culture) and technological (eg, improving e-tools; see table 7).

Table 7

Suggested improvements to the supervised learning event process

Discussion

This independent evaluation, commissioned by the AOMRC, is the first of its kind to explore Foundation trainees’ and trainers’ understandings and experiences of SLEs compared with WPBAs since the introduction of SLEs in 2012.

Confusion reigned among participants about what SLEs were and how they differed from WPBAs. While participants ultimately volunteered diverse understandings of SLEs (eg, learning and assessment), they volunteered understandings of WPBAs that were almost exclusively assessment related. Trainers seemed more likely than trainees to volunteer understandings of SLEs as assessment and a ‘safety net’ to protect patients. That many trainers continue to understand SLEs as assessment means that they may continue to treat them as such, thereby jeopardising trainee learning.

The narratives illustrated that SLEs and WPBAs were conducted in diverse ways, with issues raised about their initiation, tools used, feedback and finalisation. Enthusiastic trainers and trainees and good relationships facilitated learning within SLEs/WPBAs, whereas time pressures and e-tools posed barriers to learning. SLE narratives were more likely to be evaluated positively than WPBA narratives. Trainees narrated more SLE experiences with positive evaluations and more narratives of WPBAs with negative evaluations. Some of these findings extend the already mixed evidence for WPBA in terms of its acceptability to trainees and trainers.2 ,10 ,36 Previous research, for example, indicates that feedback within the medical workplace can be suboptimal and numerous factors can hinder workplace learning, such as a lack of protected time for the trainee–trainer relationship.16 ,20 ,37 ,38

This study provides tentative support for the summative-to-formative shift in focus from WPBAs to SLEs initiated by the AOMRC.1 Furthermore, it contributes to our understanding of the lived experiences of trainers and trainees, and provides quantitative data on differences in SLE/WPBA experiences between the two groups. That trainees were more likely to report positive evaluations of their SLE experiences (and trainers were not) suggests that trainees and trainers might want different things from SLEs/WPBAs (learning vs assessment, respectively). Further, that participants constructed their own and others’ identities, and their relationships, in numerous ways builds on other medical education research at the undergraduate level emphasising potentially conflictual relationships between trainees and trainers.31–33 ,39

Key suggestions for improving SLEs included: improving trainees’ and trainers’ understandings of SLEs; building better trainee–trainer relationships through regular meetings and closing the ‘feedback loop’; improving the culture of workplace learning by emphasising formative learning rather than summative assessment; and improving the technology around SLEs, extending previous research within medical education.16 ,20 ,37–43

Methodological strengths and challenges of study

To the best of our knowledge, this is the first study to explore Foundation trainees’ and trainers’ understandings of SLEs and WPBAs, and their lived experiences of them. The large number of narratives collected, and our consistent findings across the three geographically dispersed sites, suggest that our results are transferable to other UK locations. Although our sample of trainees and trainers was intentionally diverse, we had relatively low numbers of GP and nurse trainers in our study, and relatively few trainees with GP and nurse trainer SLE/WPBA experiences. While this reflects the reality of training programme structures and processes, we must use caution when extrapolating our findings to GP settings and to GP and nurse trainers. Having employed qualitative methods, our sample is not necessarily representative, nor does it intend to be representative, of all UK trainers and trainees.

The geographical distance between sites and the need to collect large amounts of qualitative data in a relatively short time frame (around 6 months) required multiple researchers across the three sites to undertake interviews and data analysis. Consistency was maintained across the researchers through training, the use of a discussion guide, regular meetings and the use of a comprehensive coding framework. Finally, with around 46 h of qualitative data, it was pragmatic for us to adopt different methods of data analysis to explore both the breadth and depth (and, therefore, the ‘whats’ and ‘hows’) of participants’ experiences. Because the data were voluminous, we partly quantified them to identify patterns across our narratives that would otherwise be invisible.44 ,45 Some methodological purists would find this combination of quantitative and qualitative analyses problematic because of the different epistemologies underpinning the two approaches. However, we retained a process-orientated qualitative approach to our interpretation of the numerical data.44 ,45

Implications for educational practice

Our recommendations are based on key findings from our research (what works and what does not) and comments from our clinical reference group (see Acknowledgements section). First, we need to improve trainees’ and trainers’ understandings of SLEs. Both must understand the concepts of formative and summative assessment, be able to recognise good-quality feedback, appreciate that feedback is a dialogic process, and know how to give, receive and seek feedback effectively within the workplace.46 Both also need to appreciate the diversity of processes for conducting SLEs, know the tools and how they differ, and comprehend the factors facilitating and hindering learning within SLEs.

Second, trainee–trainer relationships need to be improved. Good quality relationships, characterised by knowledge of the other person, mutual respect and trust, should be possible through prolonged engagement including multiple trainee–trainer meetings throughout rotations. We recognise that the pressures of service delivery make this recommendation challenging.

Third, the culture of workplace learning needs to be improved. The formative focus of SLEs could be emphasised further by rethinking the structures around SLEs, and particularly those structures that imply a summative focus. For example, SLEs should be undertaken at regular intervals with a cumulative formative impact over the course of a rotation, thereby allowing trainees to conduct SLEs in a meaningful way that is beneficial to their own personal and professional development, rather than encouraging a system of ‘hunting’ for SLEs at the end of a rotation to secure that ‘tick’.

Fourth, the tools employed for SLEs need to be improved to emphasise their formative focus (eg, by prioritising free-text comments) and to make them easier to finalise (eg, through applications for smartphones and tablets).5

Finally, we need to develop, assess and recognise trainers for the work they do, including providing trainee feedback to trainers to close the ‘feedback loop’,46 which could be used as part of trainers’ annual appraisals. Furthermore, this process of feedback could form the basis of a trainer recognition programme, thus valuing the important role of the educator.

Implications for further research

The introduction of any new workplace-based initiative will benefit from investigation using a range of approaches. Further interview research is required using wider sampling (eg, capturing GP experiences) to more fully elucidate the themes identified in this paper. Also, additional qualitative (eg, longitudinal audio-diary, video-reflexive ethnography) and quantitative methodologies (eg, pragmatic cluster randomised trial) would be helpful to explore SLEs further. The latter could compare various outcomes (eg, trainee and trainer satisfaction, metrics around form completion) for an intervention group of trainers and/or trainees who have received theory-based training in giving, receiving and seeking formative feedback, compared with those not receiving the educational intervention. Ultimately, without such further research, it may be impossible to fully understand the complexities surrounding SLEs within workplace learning.

Acknowledgments

The authors thank all the trainers and trainees who participated. They also thank our administrative, academic and clinical colleagues who helped us recruit participants. In particular, the authors thank members of our clinical reference group, who advised us on the recruitment of participants and gave us feedback on our interpretations of the data and our developing educational recommendations. In alphabetical order these are: Professor Stuart Carney, previously University of Exeter; Dr Ben Hannigan, Cardiff University; Professor Peter Johnston, University of Aberdeen; Professor Jean Ker, University of Dundee; Dr David Leeder, University of Exeter; Professor Graham Leese, University of Dundee; Dr Murray Lough, previously NHS Education for Scotland; Dr Alan Stone, Cardiff University; Professor Frank Sullivan, previously University of Dundee; and Professor Mike Watson, previously NHS Education for Scotland. The authors also thank Elaine Plenderleith at the Centre for Medical Education, University of Dundee, for her administrative support throughout the course of this project. Finally, the authors thank the Academy of Medical Royal Colleges for its contribution to this project. In particular, the authors thank Dr Ed Neville, Chair of the Supervised Learning Events Evaluation Working Group, and Dr David Kessel, Chair of the Academy Foundation Programme Committee, for their contribution to the design of this project, advice about recruitment of participants, thoughtful comments about the educational recommendations for the project, and feedback on the preliminary draft of our end-of-award report.

References

Footnotes

  • Contributors CER, JAC, KM and LVM designed the study and secured its funding. CER, KM and LVM were site-specific leads and oversaw the work of AD and NK. JAC and AD conducted the literature review. CER, KM, LVM, AD and NK secured ethics approval for the three sites and recruited participants. AD and NK did the bulk of the data collection (CER and LVM facilitated some interviews). All authors participated in a preliminary thematic analysis of selected transcripts. CER, LVM, AD and NK coded data using Atlas-Ti (the bulk of this was done by AD and NK). LVM and AD interrogated the coding using Atlas-Ti and CER conducted the narrative analyses. CER, JAC, KM and LVM wrote parts of this paper, and CER edited it. All authors commented on various iterations. CER, JAC and AD conducted this research on behalf of the Scottish Medical Education Research Consortium (SMERC). CER is the Principal Investigator for the project and overall guarantor.

  • Funding This work was supported by the Academy of Medical Royal Colleges. The views expressed in this paper are those of the authors and not necessarily of the funders.

  • Competing interests None.

  • Ethics approval The relevant ethics committees within each site approved this study, and additional site-specific approvals were secured where necessary.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.