

Original research
Does performance at medical school predict success at the Intercollegiate Membership of the Royal College of Surgeons (MRCS) examination? A retrospective cohort study
Ricky Ellis,1,2 Duncan S G Scrimgeour,1,3 Peter A Brennan,4 Amanda J Lee,5 Jennifer Cleland6

Author affiliations:

  1. Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, UK
  2. Department of Urology, Nottingham University Hospitals NHS Trust, Nottingham, UK
  3. Department of Colorectal Surgery, Aberdeen Royal Infirmary, Aberdeen, UK
  4. Department of Maxillo-Facial Surgery, Queen Alexandra Hospital, Portsmouth, UK
  5. Department of Medical Statistics, Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, UK
  6. Medical Education Research and Scholarship Unit (MERSU), Lee Kong Chian School of Medicine, Singapore

Correspondence to Ricky Ellis; rickyellis{at}nhs.net

Abstract

Background Identifying predictors of success in postgraduate examinations can help guide the career choices of medical students and may aid early identification of trainees requiring extra support to progress in specialty training. We assessed whether performance on the educational performance measure (EPM) and situational judgement test (SJT) used for selection into foundation training predicted success at the Membership of the Royal College of Surgeons (MRCS) examination.

Methods This was a longitudinal cohort study using data from the UK Medical Education Database (https://www.ukmed.ac.uk). UK medical graduates who had attempted Part A (n=2585) and Part B (n=755) of the MRCS between 2014 and 2017 were included. χ2 tests and independent t-tests were used to examine the relationships of medical school performance and sociodemographic factors with first-attempt success at MRCS Parts A and B. Multivariate logistic regression was employed to identify independent predictors of MRCS performance.

Results The odds of passing MRCS increased by 55% for Part A (OR 1.55 (95% CI 1.48 to 1.61)) and 23% for Part B (1.23 (1.14 to 1.32)) for every additional EPM decile point gained. For every point awarded for additional degrees in the EPM, candidates were 20% more likely to pass MRCS Part A (1.20 (1.13 to 1.29)) and 17% more likely to pass Part B (1.17 (1.04 to 1.33)). For every point awarded for publications in the EPM, candidates were 14% more likely to pass MRCS Part A (1.14 (1.01 to 1.28)). SJT score was not a statistically significant independent predictor of MRCS success.

Conclusion This study has demonstrated the EPM’s independent predictive power and found that medical school performance deciles are the most significant predictor of later success in the MRCS. These findings can be used by medical schools, training boards and workforce planners to inform evidence-based and contemporary selection and assessment strategies.

  • medical education & training
  • surgery
  • adult surgery


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • This is the first study to investigate the relationship between medical school performance and performance at a high-stakes UK postgraduate surgical examination.

  • This is a large retrospective cohort study using the UK Medical Education Database.

  • This study examines whether performance on the educational performance measure and situational judgement test used for selection into foundation training predicted success at the Membership of the Royal College of Surgeons (MRCS) examination.

  • Following previous studies, the relatively blunt measure of MRCS pass/fail results at first attempt was used as the primary outcome.

Introduction

Progression through the UK medical education and training pathway is based on performance on a series of index assessments, starting with examination performance prior to entry to medical school and (typically) ending with respective Royal College Fellowship examinations. Each assessment is designed to ensure appropriate standards for the stage of training and to ultimately safeguard patients.1 2

Performance at each stage also has implications for career progression. In the UK, doctors with higher academic scores during medical school are more likely to be offered their first choice of UK Foundation Programme (UKFP) training post on graduating.3 Those with higher academic scores during medical school are also more likely to be offered a training place in a more competitive specialty.4

Studies have already demonstrated the validity of academic performance during medical school in predicting performance during foundation training,5–7 although there is little research on the association between medical school performance and performance during specialty training in the UK. The seminal ‘Academic Backbone’ paper by McManus et al described how prior attainment is the best predictor of future performance in medical education.8 However, that study was carried out before standardised markers of medical school performance were introduced (see later) and therefore may not represent contemporary patterns of performance. This is an important deficiency in the literature, as research from other contexts indicates that examination results obtained during and shortly after medical school predict later performance in board certification examinations and the likelihood of patient complaints.9–14 Furthermore, if early assessments do not predict later performance, then their fitness for purpose as markers of performance and their use as gateways for progression in training are questionable.

At the time of this study, the UK did not have a national licensing examination for graduating doctors. Instead, performance during medical school is measured within schools by the educational performance measure, or EPM.15 The EPM is calculated out of 50 points and comprises three parts (table 1). The first component is a quantitative measure of students’ medical school performance compared with their peers (EPM decile). Points are awarded according to a student’s final performance decile, ranging from 34 points for the 10th (lowest) decile to 43 points for the 1st (highest) decile. The EPM decile is calculated from multiple assessments of a student’s knowledge and practical skills taken throughout medical school.15 The second component comprises points awarded for additional degrees (0–5 points, according to the degree grade achieved). The third component comprises points awarded for publications (maximum 2 points; 1 point per publication). The points awarded for additional degrees and publications (0–7 in total) are together described as educational achievements (EAs). The selection process for the UKFP couples the EPM with a situational judgement test, or SJT,16–21 also scored out of 50 points. The UKFP SJT could be described as a type of ‘procedural knowledge test’, assessing procedural knowledge about what to do in certain situations and how to do it.21 The procedural knowledge assessed by the UKFP SJT aligns with the behaviours and attitudes expected of doctors as described in the General Medical Council’s ‘Good Medical Practice’.2 The graduate’s combined EPM plus SJT score, out of 100, is their application score for the UKFP.22

Table 1

The components of the educational performance measure (EPM) used to quantify performance at medical school
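For illustration, the sketch below (Python; the function names and example values are ours and are not part of the UKFP documentation) shows how the three EPM components described above combine with the SJT score to give the application score out of 100.

```python
def epm_decile_points(decile: int) -> int:
    """Convert a final EPM decile rank (1 = highest, 10 = lowest) to points.

    Per the scheme described above, the 10th (lowest) decile earns 34 points
    and the 1st (highest) decile earns 43 points, in single-point steps.
    """
    if not 1 <= decile <= 10:
        raise ValueError("decile must be between 1 and 10")
    return 44 - decile  # decile 1 -> 43 points, decile 10 -> 34 points


def ukfp_application_score(decile: int, degree_points: int,
                           publication_points: int, sjt_score: float) -> float:
    """Combine the three EPM components (max 50) with the SJT score (max 50).

    degree_points: 0-5, awarded according to additional degree grade.
    publication_points: 0-2, one point per publication.
    sjt_score: situational judgement test score out of 50.
    """
    epm_total = epm_decile_points(decile) + degree_points + publication_points
    assert 34 <= epm_total <= 50  # sanity check on the EPM range
    return epm_total + sjt_score  # application score out of 100


# Hypothetical example: a 3rd-decile graduate with an additional degree worth
# 5 points, one publication and an SJT score of 40.5.
print(ukfp_application_score(decile=3, degree_points=5,
                             publication_points=1, sjt_score=40.5))  # 87.5
```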

The current study aimed to assess whether performance in medical school, as measured by EPM and SJT scores, could predict success at the Intercollegiate Membership of the Royal College of Surgeons (MRCS) examination. The MRCS examination is often taken by UK trainees during the foundation and core surgical training years and comprises two parts: Part A, a written examination with two papers, and Part B, an objective structured clinical examination (OSCE).23 24 The MRCS is a high-stakes postgraduate assessment that is used as a gateway for applications for higher surgical training and is itself a good predictor of future surgical training outcomes.25–27

Given that performance at medical school and success in postgraduate assessments are related to sociodemographic factors as well as academic ability, regression models were adjusted for sociodemographic factors known to be associated with MRCS success.3 23 25 28–30 These included gender, ethnicity and graduate status on entry to medical school. This analysis is timely given policy drivers in the UK to ensure that medical school and postgraduate assessments are fair,1 the pending introduction of a one-off high-stakes test, the Medical Licensing Assessment (MLA),31 and proposals to exclude EAs from the EPM score used in UKFP selection from 2023.32

Use of linked individual-level data from the UK Medical Education Database (UKMED: https://www.ukmed.ac.uk/) enabled a national-level analysis, drawing on data from sources including medical school assessment, FP selection and postgraduate assessment outcomes.33

Methods

A longitudinal retrospective cohort study was conducted on UK medical graduates who had attempted either the Part A (written) or the Part B (clinical) MRCS examination from April 2014 to May 2017.

The UKMED (https://www.ukmed.ac.uk/) was used to access linked data from UK medical schools and the four Royal Colleges of Surgeons in the UK and Ireland. All counts have been rounded according to Higher Education Statistics Agency data standards to ensure person-level anonymity.34

The following data were extracted: self-declared gender, self-reported ethnicity, graduate status at the time of entry to medical school, EPM decile, EPM additional degree and publication scores, SJT score, and MRCS Part A and B first-attempt results. Figure 1 shows the flow of data through the study. Candidates’ first-attempt results were used as they have been shown to be the best predictor of future performance in postgraduate examinations.35

Figure 1

Data flow through study. MRCS, Membership of the Royal College of Surgeons.

Except for SJT and EPM scores, all variables were subsequently dichotomised. Graduation status was defined as ‘yes’ if candidates had obtained a degree prior to entering medicine. Self-declared ethnicity was coded as ‘white’ or ‘non-white’ as used in similar studies to enable powered analysis of smaller cohorts.25 26 Part A and B MRCS performance was categorised as ‘pass’ or ‘fail’ at first attempt.
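A minimal sketch of this coding step follows (Python/pandas rather than the SPSS syntax actually used; the raw file and column names are hypothetical stand-ins, since the UKMED extract is not publicly available).

```python
import pandas as pd

# Hypothetical raw extract with one row per candidate; column names are
# illustrative stand-ins for the UKMED fields described above.
raw = pd.read_csv("ukmed_extract.csv")

df = pd.DataFrame({
    # 'graduate_entry' is 1 if the candidate held a degree before entering medicine.
    "graduate_entry": (raw["prior_degree"] == "yes").astype(int),
    # Self-declared ethnicity collapsed to white / non-white, as in prior studies.
    "ethnicity_white": (raw["ethnicity"] == "white").astype(int),
    # First-attempt MRCS Part A outcome coded as pass (1) / fail (0).
    "part_a_pass": (raw["part_a_first_attempt"] == "pass").astype(int),
    # SJT and EPM component scores are kept as continuous measures.
    "sjt_score": raw["sjt_score"],
    "epm_decile_points": raw["epm_decile_points"],
})
```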

Statistical analysis

All analyses were conducted using SPSS V.22.0 (IBM). A χ2 test was initially employed to determine any associations with first-attempt MRCS pass/fail outcomes. The relationships between SJT, EPM decile, EPM additional degree and EPM publication scores and MRCS Part A and Part B first-attempt success were examined using independent t-tests, as the score distributions were normal. Correlation coefficients were calculated between FP selection scores and the MRCS Part A (written) score relative to the pass mark.
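These univariate tests can be illustrated as follows (a Python sketch rather than the SPSS procedures actually used; the DataFrame and column names are hypothetical, following the coding step above).

```python
import pandas as pd
from scipy import stats

# df is a hypothetical stand-in for the analysis table (the UKMED extract is
# not publicly available), with one row per candidate and columns such as
# 'gender', 'epm_decile_points', 'sjt_score' and 'part_a_pass' (1/0).
df = pd.read_csv("mrcs_cohort.csv")

# Chi-squared test of association between a dichotomised factor and Part A outcome.
contingency = pd.crosstab(df["gender"], df["part_a_pass"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

# Independent t-test comparing SJT scores of first-attempt passers and failers
# (appropriate here because the score distributions were approximately normal).
passers = df.loc[df["part_a_pass"] == 1, "sjt_score"]
failers = df.loc[df["part_a_pass"] == 0, "sjt_score"]
t_stat, p_ttest = stats.ttest_ind(passers, failers)

# Pearson correlation between an FP selection score and the Part A score
# relative to the pass mark (hypothetical column name).
r, p_corr = stats.pearsonr(df["epm_decile_points"], df["part_a_score_vs_passmark"])

print(f"chi2 p={p_chi2:.3f}, t-test p={p_ttest:.3f}, r={r:.2f} (p={p_corr:.3f})")
```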

Logistic regression models were developed to identify predictors of success at MRCS at first attempt that were independent of the other performance measures used in UKFP selection. Further regression models were developed to identify predictors of MRCS success that were independent of both the other performance metrics and the sociodemographic factors known to be associated with MRCS performance. While doctors are not selected for the UKFP on the basis of sociodemographic factors, adjusting for these known predictors of MRCS success ensured that the regression models accounted for these potential confounders and are therefore more applicable to real-life selection. Potential interactions between significant predictors were also examined.
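A sketch of such an adjusted model is shown below (Python/statsmodels rather than SPSS; column names are again hypothetical). It illustrates how ORs and 95% CIs of the kind reported in the Results are obtained by exponentiating the fitted log-odds coefficients and their confidence limits.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is the same hypothetical cohort table as above; the categorical covariates
# are the dichotomised sociodemographic factors and the continuous predictors
# are the UKFP selection metrics.
df = pd.read_csv("mrcs_cohort.csv")

model = smf.logit(
    "part_a_pass ~ epm_decile_points + epm_degree_points + epm_publication_points"
    " + sjt_score + C(gender) + C(ethnicity_white) + C(graduate_entry)",
    data=df,
).fit()

# Exponentiating the log-odds coefficients gives odds ratios; the same transform
# applied to the confidence limits gives 95% CIs.
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI lower": np.exp(model.conf_int()[0]),
    "CI upper": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(odds_ratios.round(2))
```

Exponentiation is what turns the additive logistic coefficients into the ‘odds increase by X% per point’ statements used in the Results.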

The highest standards of security, governance and confidentiality were ensured when storing, handling and analysing identifiable data.

Patient and public involvement

No patients or public were involved in this study.

Results

Part A MRCS

A total of 3000 UK medical graduates attempted Part A MRCS between April 2014 and May 2017. Of these, 2585 had matched EPM and SJT data. Fifty per cent (n=1280) passed Part A MRCS at their first attempt. Sixty-three per cent of candidates (n=1635) were men, 56% were white (n=1435) and 81.5% had not undertaken a prior degree before entering medicine (n=2105). Mean (SD) total EPM and SJT scores for candidates who had attempted Part A MRCS were 41.6 (3.86) and 39.4 (3.54), respectively.

Pass rates for Part A MRCS by gender, ethnicity and graduate on entry to medicine status are shown in table 2. Differences in pass rates were statistically significant for: gender (54.9% men vs 40.5% women, p<0.001), ethnicity (54.2% white vs 44.1% non-white, p<0.001) and graduate status (50.7% no prior degree vs 44.7% prior degree, p=0.017).

Table 2

Univariate analysis of Membership of the Royal College of Surgeons first attempt pass rates by gender, ethnicity and graduation status for UK medical graduates

Univariate analyses of EPM and SJT scores are shown in table 3. Candidates who passed Part A MRCS at first attempt had performed better in their SJT (mean 40.0 (SD 3.3) vs 38.9 (3.7), p<0.001) and had scored higher on their total EPM (43.6 (3.3) vs 39.8 (3.4), p<0.001) compared with those who failed at first attempt. Figure 2 shows the relative increase in mean MRCS Part A pass rates at first attempt according to candidate EPM decile.

Table 3

Univariate analysis of EPM scores, SJT scores and MRCS Part A and Part B first attempt success

Figure 2

Relative increase in mean Membership of the Royal College of Surgeons (MRCS) pass rates at first attempt according to candidate educational performance measure (EPM) decile (1st EPM decile indicates the highest achieving candidates and 10th decile, the lowest achieving candidates).

Table 4 shows correlation coefficients between each FP selection score and the MRCS Part A score. According to Cohen’s guidelines,36 EPM degree score, EPM publication score and the SJT show statistically significant but weak positive correlations with Part A scores. Total EPM and EPM decile show statistically significant strong correlations with MRCS Part A scores.

Table 4

Correlation coefficients between Foundation Programme selection scores and Membership of the Royal College of Surgeons Part A scores (n=2585)

Table 5 shows the ORs and 95% CIs for independent predictors of passing Part A MRCS at first attempt. ORs were similar for UKFP selection metrics when multivariate analysis included sociodemographic predictors of MRCS success. EPM decile, EPM degree and EPM publication scores were predictors of MRCS success independent of other selection metrics and sociodemographic factors. Specifically, the odds of passing Part A MRCS at first attempt increased by 55% for every additional EPM decile (OR 1.55, 95% CI 1.48 to 1.61). The odds of passing Part A on first attempt increased by 20% for every additional EPM degree point (OR 1.20, 95% CI 1.13 to 1.29). Finally, the odds of passing Part A on first attempt increased by 14% for every additional EPM publication point awarded (OR 1.14, 95% CI 1.01 to 1.28). SJT score was not found to independently predict Part A first-attempt success (p=0.177). There was a statistically significant interaction between ethnicity and gender in the final Part A MRCS regression model, with white men more likely to pass (p=0.002). MRCS candidates who entered medical school without a prior degree were more than twice as likely to pass Part A compared with those who entered medical school as graduates (OR 2.23, 95% CI 1.73 to 2.87).

Table 5

Predictors of pass at first attempt at Part A and Part B MRCS for UK medical graduates on multivariate analysis
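To make the multiplicative interpretation of these ORs concrete, the short sketch below uses the reported Part A OR of 1.55 per EPM decile point. The baseline odds of 1.0 (a 50% pass probability, close to the observed overall Part A first-attempt pass rate) is illustrative only, since each candidate’s baseline depends on the other covariates in the model.

```python
OR_PER_DECILE = 1.55  # reported odds ratio per additional EPM decile point (Part A)

def odds_multiplier(decile_points_gained: int,
                    or_per_point: float = OR_PER_DECILE) -> float:
    """Cumulative change in odds: logistic models are multiplicative on the odds scale."""
    return or_per_point ** decile_points_gained

# Illustrative baseline odds of 1.0 (50% pass probability); the per-candidate
# baseline would depend on all covariates, so these figures are indicative only.
for gained in (1, 3, 5):
    odds = 1.0 * odds_multiplier(gained)
    prob = odds / (1 + odds)
    print(f"+{gained} decile points: odds x{odds:.2f}, implied pass probability {prob:.0%}")
```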

Part B MRCS

In total, 755 of the Part A study cohort (n=2585) subsequently attempted MRCS Part B, and 76.3% (n=575) of these candidates passed at first attempt. Unsurprisingly, the demographics of Part B MRCS candidates were similar to those observed for Part A: 67% of candidates were men (n=500), 57% were white (n=430) and 84% had not undertaken a previous degree (n=635). The mean (SD) total EPM and SJT scores for UK graduates who had attempted Part B MRCS were 42.8 (3.67) and 40.0 (3.47), respectively.

Pass rates for Part B MRCS by gender, ethnicity and graduate on entry to medicine status are shown in table 2. There was no significant difference in Part B MRCS first attempt pass rates between men and women (75.9% vs 77.1%, respectively, p=0.719). Differences in pass rates were statistically significant for ethnicity (82.0% white vs 68.8% non-white, p<0.001) and graduate status (77.8% no prior degree vs 68.3% prior degree, p=0.025).

Univariate analyses of EPM and SJT scores are shown in table 3. Those who passed Part B MRCS at first attempt had performed better in their SJT compared with those who failed at first attempt (40.2 (3.2) vs 39.4 (4.2), p<0.010). Similarly, candidates who passed Part B at first attempt had scored higher on their total EPM (43.3 (3.6) vs 41.1 (3.4), p<0.001). Figure 2 shows MRCS Part B performance according to EPM decile score; the overall trend shows a relative increase in mean MRCS Part B first-attempt pass rates with higher candidate EPM decile.

Table 5 shows the logistic regression models for independent predictors of Part B MRCS first-attempt success. EPM decile and EPM degree scores were statistically significant predictors of MRCS success independent of other selection metrics and sociodemographic factors. The odds of passing MRCS Part B at first attempt increased by 23% (OR 1.23, 95% CI 1.14 to 1.32) for every additional EPM decile. The odds of passing MRCS Part B at first attempt increased by 17% (OR 1.17, 95% CI 1.04 to 1.33) for every additional EPM degree point awarded. Neither SJT score nor EPM publication score was found to be an independent predictor of Part B success at first attempt after adjusting for UKFP selection metrics and sociodemographic factors (p=0.429 and p=0.849, respectively).

White UK medical graduates were nearly two times as likely to pass Part B MRCS at first attempt compared with non-white candidates (OR 1.86, 95% CI 1.29 to 2.69). Candidates who had not undertaken a previous degree before entering medicine were more than two times as likely to pass Part B MRCS compared with those who had undertaken a prior degree (OR 2.54, 95% CI 1.57 to 4.13). There were no statistically significant interactions between any of the Part B MRCS variables in the adjusted regression model.

Discussion

EPM–deciles

We assessed the predictive validity of the UKFP selection measures, the SJT and EPM, against the MRCS examination, which is known to be a good predictor of future surgical training outcomes.25–27 We found that EPM deciles predicted success at both Part A (written) and Part B (OSCE) of the MRCS independent of other UKFP selection scores and sociodemographic factors. For every incremental EPM decile, candidates were significantly more likely to pass both MRCS Part A and Part B. Reassuringly, the predictive value of EPM deciles was not significantly altered when adjusting for gender, ethnicity and graduate status, indicating that very little of the association between FP selection scores and MRCS performance is explained by these sociodemographic factors. Our results add to previous studies which found that the EPM predicts performance during foundation training,6 7 and provide assurance that UK medical school assessments appropriately gauge student competence and readiness for practice.

A key limitation of the EPM decile score as a selection tool is its ranking of medical school graduates at a local rather than a national level. Each medical school ranks its cohort of graduates internally into 10 equal groups (deciles) based on performance in a number of assessments taken over the duration of the medical course. Students are therefore ranked against their peers within each medical school, potentially penalising high-achieving individuals who study at more competitive schools, who may receive a lower decile score than if they had studied alongside a less competitive cohort. Given that assessment also varies significantly in ‘volume, type and intensity’ between medical schools, concerns have been raised that students of equal proficiency may fall into different EPM deciles across schools because of differences in assessment rather than ability.37 Furthermore, the number of assessments used and the scoring of each assessment vary considerably between schools, which can limit the range of scores used for decile ranking, reducing the spread of candidates.38

Concerns regarding the impact of variation in local assessment and ranking have resulted in demand for the MLA in the UK. A national MLA is argued to provide a potentially more robust method of ranking medical graduates nationally and may also contribute to standard setting for education across medical schools. The MLA’s impending introduction has been met with a mixed response, with some arguing that a single high-stakes exit examination is not as valuable as multiple local assessments over a number of years and may also result in schools teaching to pass rather than teaching to practise medicine.37 39 However, a one-off high-stakes examination on completion of medical school reflects the use of assessments throughout postgraduate medical training. The predictive and incremental validity of the new MLA must be scrutinised to justify its financial cost and its burden on both students and the medical education system.

Despite the limitations and potential shortcomings of the EPM decile scoring system currently in use, it appears to achieve its intended function for UKFP selection: it differentiates candidates by ability and predicts postgraduate performance.6 40 These data support its predictive validity and ongoing use as a UKFP selection tool. How the EPM, a local-level assessment, will sit alongside the proposed MLA remains to be seen.

Our findings also align with the ‘academic backbone’ concept proposed by McManus et al, the idea that, in medical education, current learning and achievement depend on attainment at earlier stages.8 This can be summarised simply: medical students who are high achievers remain high achievers. Candidates ranked in the top deciles perform better at MRCS. Those who perform best in MRCS are more likely to achieve a specialty training (ST3) post at national selection, more likely to progress through training with satisfactory Annual Review of Competence Progression outcomes and more likely to succeed at the Fellowship of the Royal College of Surgeons examination at first attempt.25–27 The nature of the outcome measures has changed since the seminal study of McManus et al,8 but the principle has not: the road to success for those who wish to pursue a career in surgery begins early.

Educational performance measure–educational achievements

Points awarded in the EPM for additional degrees predicted success at MRCS independent of other UKFP selection measures and sociodemographic factors. Points awarded for publications independently predicted success in MRCS Part A but were not an independent predictor of success in MRCS Part B. Correlations between MRCS Part A scores and EA points were considerably weaker than the correlation with EPM decile scores, and EPM decile scores appear largely responsible for the strength of the correlation seen between total EPM and MRCS Part A scores. These results are timely and relevant given the recent announcement that points awarded for EAs will be excluded from the EPM scoring system for UKFP selection from 2023.32

Points awarded for EAs in the EPM undoubtedly play a role in increasing the spread of applicant scores when combined with EPM decile and SJT points for UKFP selection.40 However, there is evidence of increasing EA point inflation in recent years, with the proportion of applicants earning EA points rising from 30% to 70%.41 Given this inflation, it is possible that the correlations between EA scores and MRCS performance found in our data are higher than those that would be seen in cohorts graduating from medical school more recently. It is clear that over time the ability of EA points to differentiate candidates will diminish, but the financial barriers to success in medicine that they may create would persist, with students from more affluent backgrounds being in a position to ‘pay for points’ by studying an intercalated degree. Indeed, given the recent drive to widen access to medicine, it would appear contradictory for selection tools to encourage students to take on the significant financial burden of an intercalated degree that is not necessary for the practice of medicine and does not necessarily improve patient care. Studying an intercalated degree undoubtedly has many advantages that ‘enrich the student experience’,42 but students should not be penalised in their national ranking if they are uninterested or unable to afford to do so.

Overall, it could be argued that the limited predictive value of EA points found in this study and others6 does not outweigh their potential to limit the scores, and subsequent ranking, of applicants who are unable to afford to undertake an intercalated degree.

Foundation programme situational judgement test

Candidates who passed both parts A and B of the MRCS at first attempt scored higher in their SJT than candidates who failed, and there was a statistically significant positive correlation with Part A scores. However, the SJT did not independently predict MRCS success after adjusting for EPM scores and sociodemographic factors, displaying no significant incremental value over and above the predictive value of EPM decile scores. It is important to consider this finding in relation to the premise behind SJTs.

The FP SJT is based on a job analysis of being a foundation doctor.7 Significant correlations between SJT and EPM scores have been identified across schools.6 37 43 Additionally, both SJT and EPM scores are independently associated with the odds of successful completion of the FP, and the SJT score offers a degree of incremental predictive validity over that provided by EPM deciles, suggesting that it captures additional, relevant information on applicants, as intended.6 40 Research suggests that the FP SJT does what it was designed to do, as well as succeeding in increasing the spread of candidates being ranked for foundation training posts (the arguments as to how it should be weighted in the FP selection process are outside the scope of this paper, but we direct interested readers to other studies).6 44 45 It was not designed to select for specialty training: where specialty training programmes use SJTs for selection, these have been designed specifically against the role of a trainee/resident in that specialty.46–48 Given this, in retrospect it is unsurprising that the FP SJT does not independently predict performance on a postgraduate examination that tests the clinical knowledge, skills and professional attitudes expected of surgical trainees.

Strengths and weaknesses

The current study is one of the first to use the UKMED to examine the associations of medical school performance and FP SJT scores with success at a high-stakes postgraduate surgical examination. The UKMED enabled a nationwide, multi-cohort analysis, and our breakdown of the FP selection process into EPM and SJT scores allowed us to look separately at academic attainment and other factors.

There are some limitations to the study. First, although candidates can take Parts A and B of the MRCS on multiple occasions, we used candidates’ first-attempt results as the best predictor of future performance.35 We used the relatively blunt outcome measure of pass/fail as this is what is meaningful to those sitting the MRCS and has been used in previous studies looking at factors that predict performance in the MRCS.25 Self-declared ethnicity data were combined into two discrete categories to maximise power when analysing smaller cohorts, rather than this being an ethical or social decision. Regression analyses were adjusted for known sociodemographic predictors of MRCS success, but these were not the main focus of the current paper; we are currently undertaking further analyses to characterise the group-level attainment differences that have been identified.49 Finally, the current analysis was based on retrospective quantitative data. A prospective study would have allowed us to examine more variables related to progression and attainment in surgical training. For example, being good at passing examinations is linked to academic ability, but the wider education literature makes clear that non-cognitive factors such as motivation, time management and resilience are also relevant to performance.50 51 If appropriate measures could be identified,52 it would be interesting to compare graduates and MRCS candidates on these factors.

Conclusion

Success at first attempt at MRCS Parts A and B can be predicted from medical school performance (EPM decile score) but not from the FP SJT score. Put simply, medical students who perform well in medical school examinations remain strong performers later in their careers. These results may help to guide career choices for students and can be used by training institutions to inform evidence-based and contemporary selection and assessment strategies.

Data availability statement

Data may be obtained from a third party and are not publicly available. The dataset used in this study was acquired from the UK Medical Education Database and is held in Safe Haven. Data access requests must be made to UKMED. Full information for applications can be found at https://www.ukmed.ac.uk.

Ethics statements

Ethics approval

No formal ethical approval was required for this study of existing UKMED data. UKMED has received ethics exemption for projects using exclusively UKMED data from the Queen Mary University of London Ethics of Research Committee on behalf of all UK medical schools (https://www.ukmed.ac.uk/documents/UKMED_research_projects_ethics_exemption.pdf). The Intercollegiate Committee for Basic Surgical Examinations (ICBSE) and its Internal Quality Assurance Subcommittee, which monitors MRCS standards, research and quality, approved this study.

Acknowledgments

The authors would like to acknowledge Iain Targett at the Royal College of Surgeons of England for his help with data collection and Gregory Ayre from the Intercollegiate Committee for Basic Surgical Examinations for their support during this project. Our thanks to members of the UKMED Research Group who provided useful feedback on an earlier version of this manuscript and whose comments helped refine the paper. The authors would also like to acknowledge Daniel Smith for his help with the UKMED database. Data source: UK Medical Education Database (‘UKMED’), UKMEDP043 extract generated on 25 July 2018. We are grateful to UKMED for the use of these data; however, UKMED bears no responsibility for their analysis or interpretation. The data include information derived from that collected by the Higher Education Statistics Agency Limited (‘HESA’) and provided to the GMC (‘HESA Data’). Source: HESA Student Records 2007/2008 to 2015/2016. Copyright Higher Education Statistics Agency. The Higher Education Statistics Agency makes no warranty as to the accuracy of the HESA Data and cannot accept responsibility for any inferences or conclusions derived by third parties from data or other information supplied by it.

References

Footnotes

  • Twitter @RickJEllis1, @dsgscrimgeour

  • Contributors RE and DSGS wrote the first draft of the manuscript. RE and DSGS performed the statistical analyses under AJL’s supervision. RE, DSGS, PAB, AJL and JC reviewed and edited the manuscript. JC led the study proposal for access to UKMED data. All authors approved the final draft of the manuscript.

  • Funding Royal College of Surgeons of England, Royal College of Surgeons of Edinburgh, Royal College of Surgeons of Ireland and Royal College of Physicians and Surgeons of Glasgow (award/grant number is not applicable).

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.