Objectives The knowledge, skills and behaviours required of new UK medical graduates are the same, but how these are achieved differs, given that medical schools vary in their mission, curricula and pedagogy. Medical school differences seem to influence performance on postgraduate assessments. To date, the relationship between medical schools, course types and performance at the Membership of the Royal Colleges of Surgeons examination (MRCS) has not been investigated. Understanding this relationship is vital to achieving alignment across undergraduate and postgraduate training, learning and assessment values.
Design and participants A retrospective longitudinal cohort study of UK medical graduates who attempted MRCS Part A (n=9730) and MRCS Part B (n=4645) between 2007 and 2017, using individual-level linked sociodemographic and prior academic attainment data from the UK Medical Education Database.
Methods We studied MRCS performance across all UK medical schools and examined relationships between potential predictors and MRCS performance using χ2 analysis. Multivariate logistic regression models identified independent predictors of MRCS success at first attempt.
Results MRCS pass rates differed significantly between individual medical schools (p<0.001) but not after adjusting for prior A-Level performance. Candidates from courses other than those described as problem-based learning (PBL) were 53% more likely to pass MRCS Part A (OR 1.53, 95% CI 1.25 to 1.87) and 54% more likely to pass Part B (OR 1.54, 95% CI 1.05 to 2.25) at first attempt after adjusting for prior academic performance. Attending a Standard-Entry 5-year medicine programme, having no prior degree and attending a Russell Group university were independent predictors of MRCS success in regression models (p<0.05).
Conclusions There are significant differences in MRCS performance between medical schools. However, this variation is largely due to individual factors such as academic ability, rather than medical school factors. This study also highlights group level attainment differences that warrant further investigation to ensure equity within medical training.
- medical education & training
- adult surgery
Data availability statement
Data may be obtained from a third party and are not publicly available. The dataset used in this study was acquired from the UK Medical Education Database and is held in Safe Haven. Data access requests must be made to UKMED. Full information for applications can be found at https://www.ukmed.ac.uk.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
This is the first study to explore differences in Membership of the Royal Colleges of Surgeons (MRCS) performance between medical school course types, pedagogy and indicators of institutional esteem.
It is a large-scale longitudinal cohort study using the UK Medical Education Database.
The outcome measure of pass/fail at the MRCS examination may hide institutional differences in performance at the question level.
A-Levels were used as a marker of prior academic attainment in this study, which does not represent the full range of school-leaving examinations used across the UK.
A larger sample would enable a more granular look at group-level differential attainment.
Medical schools vary significantly in their teaching methodology, curriculum, course structure, assessment methods and standards.1–4 In the UK, the General Medical Council (GMC) acknowledged that these differences between medical schools exist and that it is ‘inevitable’ that this variation can influence a graduate’s ‘interests, abilities and career progression’ but that it is not a ‘cause for concern’,5 presumably because all new medical graduates must meet the same GMC standards. This view can be debated given that medical school seems to influence career progression, direction and success. For example, the number of graduates choosing each specialty differs significantly across medical schools.6–8 There is significant variation in preparedness for practice, progression through the Annual Review of Competence Progression in UK training programmes and fitness to practise sanctions according to the medical school of primary qualification.5 9 There are also significant differences in the performance of graduates from different medical schools on high-stakes postgraduate examinations such as the Fellowship of the Royal College of Anaesthetists (FRCA),10 Membership of the Royal College of Obstetricians and Gynaecologists (MRCOG),11 Membership of the Royal College of Paediatrics and Child Health,12 Membership of the Royal College of General Practitioners (MRCGP)13 14 and Membership of the Royal College of Physicians (MRCP).14–16 This variation in performance is far from unique to the UK, with significant differences in performance according to medical school also found in postgraduate assessments in other countries such as the USA.17 18 However, to our knowledge, no studies have yet demonstrated whether success at postgraduate surgical examinations differs according to medical school, course type or medical school indicators of esteem (eg, institutional ranking) in the UK.
Understanding the relationship between medical school, course type and pedagogy with markers of postgraduate success is vital for the optimisation of undergraduate teaching by enabling the alignment of undergraduate and postgraduate curricula and assessment values. This alignment ensures best educational practices and the optimisation of training to produce safe and prepared doctors.
The Intercollegiate Membership of the Royal Colleges of Surgeons examination (MRCS) is a high-stakes postgraduate examination, highly valued in the UK as a gatekeeper to the surgical profession.19 Success at MRCS is associated with success in surgical training, national selection for higher specialty training and first attempt success in the Fellowship of the Royal College of Surgeons examinations (FRCS) and can therefore be used as an indicative marker of future outcomes in a surgical career.20–22 Success in this examination can be used by medical schools in the alignment of training and assessment values, and students who wish to pursue surgery as a specialty may want to know which medical school will ‘best’ prepare them for a surgical career.23
In this study, we aimed to evaluate whether medical school of primary qualification or medical course type influence MRCS success. We aimed to establish this by the comparison of first attempt pass rates for MRCS across all UK medical schools and understanding the likelihood of passing MRCS based on university, course type and course pedagogy. Additionally, we aimed to investigate whether indicators of esteem such as Russell Group membership and institutional national ranking predict MRCS success.
Moreover, in order to understand the true impact of medical school differences on MRCS performance, we adjusted analyses for prior academic attainment and sociodemographic factors that are known to predict MRCS success.24 25 Previous studies have found that after adjusting for these demographic factors (gender, maturity and ethnicity), variation in early surgical training experiences in the UK (Foundation and Core Surgical Training) has little impact on MRCS success.26 27 Prior academic attainment is known to be the strongest predictor of later success in medical education,20 28 29 and at MRCS.24 25 30 Given that some universities are more competitive at entry than others,30 31 it is likely that some medical schools recruit the highest performing candidates. As such, both factors were adjusted for in our analyses.
This was a longitudinal retrospective cohort study. Individual-level linked data was obtained from the UK Medical Education Database (UKMED)32 and the four Royal Colleges of Surgeons of the UK and Ireland (Edinburgh, Glasgow, England and Ireland). The UKMED database contains background sociodemographic details and assessment results from school to postgraduate examinations and career progression data from combined sources linked at an individual level for all UK medical students and doctors in training.32 This novel database enables powerful multicentre longitudinal cohort studies by including large study populations with minimal missing data. Anonymised data were extracted from UKMED for all UK medical graduates who had attempted either the Part A or the Part B MRCS examination between 2007 and 2017.
The following data were extracted: place of primary medical qualification, course pedagogy and type, MRCS Part A and B first attempt result, gender, self-declared ethnicity and graduation status at the time of entry to medical school. Gender, ethnicity and graduate status were extracted as these are known predictors of MRCS success.24 25 Candidate first attempt results were used as they have been shown to be the best predictor of future performance in postgraduate examinations.24 33 These variables are described in more detail below.
Except for place of primary qualification, all variables were dichotomised. Part A and B MRCS performance was categorised as ‘pass’ or ‘fail’ at first attempt. Graduation status was defined as ‘no’ if candidates had not obtained a degree prior to entering medicine and ‘yes’ if they entered as a graduate. Self-declared ethnicity was coded as ‘white’ or ‘non-white’ as per similar studies to enable powered analysis of smaller cohorts, rather than this being an ethical or social decision.20 21 34 Course pedagogy was classified as ‘problem-based learning’ or ‘not problem-based learning’. Course type was classified as ‘Graduate-Entry’ (GEM: 4-year accelerated Graduate-Entry Medicine programmes) or ‘Undergraduate’ which was later further classified into ‘Standard-Entry’ programme (SEM) or ‘Medicine with a Gateway Year’ (5 years plus one preparatory year). Note that foundation year students were combined with gateway students for this last category, as both approaches have the aim of widening access to medicine; that is, providing alternative ways into medicine for those who do not meet the academic criteria for SEM courses because of socioeconomic or personal disadvantage.35
Finally, there are a significant number of graduates who choose to do an SEM programme,36 so candidates who undertook SEM courses were further defined as ‘Graduate on entry’ or ‘Not graduate on entry’.
At the time of this study, there were 35 medical schools in the UK recognised by the GMC, including a combined University of London awarding body. Most offer undergraduate 5-year programmes; in addition, there are 16 accelerated graduate-entry programmes, and eleven medical schools offer gateway/foundation courses. The study-specific dataset included values for 31 medical schools: newer medical schools (eg, Lancaster, Anglia Ruskin and The University of Buckingham) were not represented in the dataset as very few, if any, of their graduates had attempted MRCS within the study period. Several GEM courses included in the analysis have since ceased to exist (such as Leicester and Bristol); additionally, new GEM and Gateway courses were not included if graduates of these courses had not attempted the MRCS within the study period.
Within the UK, a number of universities combine to create linked medical schools such as Leicester-Warwick Medical School (a combination of the Universities of Leicester and Warwick) and Peninsula Medical School (a combination of Plymouth and Exeter Universities). Some of these partnerships have since dissolved, creating two independent medical schools. To represent this in the data analysis, candidates who studied at either Leicester-Warwick or Peninsula Medical Schools were categorised according to the university from which they graduated (ie, Leicester, Warwick, Plymouth or Exeter). Graduates of Hull-York Medical School and Brighton and Sussex Medical School remain under the combined title as they were still combined institutions at the time of data analysis. Within the study period certain medical schools were also linked (eg, Keele students were awarded degrees by the University of Manchester until 2012). To acknowledge this, students were categorised by the place of graduation for their primary medical qualification, including London graduates.
Indicators of esteem: rankings
In this study, universities were ordered according to their ranking by ‘The Complete University Guide’ as of August 2020. ‘The Complete University Guide’ is the most well recognised independent university ranking system in the UK and uses the following data annually to create an overall score (100 points being the most a university can be awarded): entry standards, student satisfaction, research quality and intensity, graduate prospects, student to staff ratio, spending, honours and degree completion. More information on how the ranking system is calculated is available on the complete university guide website.31 This ranking system provides a quantitative comparator between universities for this study and its use does not suggest that its value is greater than that of any other ranking systems that exist which are calculated using similar data. Note that Lancaster University (ranked 16th) was excluded having only opened in 2006 and having insufficient outcome data. St Andrews Medical School (ranked 25th) was also excluded as it offers only preclinical education: those who commenced their studies at St Andrews were therefore categorised by their place of graduation (eg, Manchester University, The University of Dundee, etc). The ranking table was adjusted accordingly, to create an ordinal variable.
Indicators of esteem: Russell Group
Russell Group Institutions are a collection of self-selected research-driven universities that have developed a reputation of excellence.37 Most older medical schools are associated with the Russell Group. Whether these universities are truly the elite institutions within the UK is a highly debated topic38–40 but they do graduate the majority (80%) of UK medical students.
Despite well-established definitions of what comprises PBL, it can be challenging to identify which medical schools run PBL courses.41 42 We have aligned our definition with that of the British Medical Association, as well as that used in recent studies, to ensure consistency within the literature and enable comparisons to be drawn between the results of these studies.1 15 43 PBL schools are: Liverpool, Manchester, Glasgow, Queen Mary, Cardiff, Plymouth, Exeter, Sheffield, Keele, Hull-York and East Anglia.
Markers of prior academic attainment
Individual-level linked performance data were extracted for A-Levels as a marker of prior academic attainment. A-Levels are taken as school exit examinations in the majority of schools in England and in some schools elsewhere in the UK. A-Level results are routinely used as a medical school selection metric.30 Total A-Level scores used in data analyses are the sum of all A-Level scores achieved, that is, A=10 (the highest score achievable for each A-Level), B=8, C=6, D=4, E=2 and U=0 (the lowest). A small minority of candidates in the dataset (n=30) undertook A-Levels after A* grades were implemented in 2010; these were excluded for cohort homogeneity.
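The tariff above can be sketched as a simple lookup (a minimal illustration; the function name is ours, not from the study):

```python
# Grade tariff as described in the text: A=10 down to U=0, in steps of 2.
GRADE_SCORES = {"A": 10, "B": 8, "C": 6, "D": 4, "E": 2, "U": 0}

def total_a_level_score(grades):
    """Sum the tariff scores across a candidate's A-Level grades."""
    return sum(GRADE_SCORES[g] for g in grades)

# A candidate with grades AAB would score 10 + 10 + 8 = 28.
print(total_a_level_score(["A", "A", "B"]))  # 28
```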
MRCS examination background
The examination comprises two parts: Part A, a written component made up of two multiple-choice question papers, and Part B, a clinical examination that includes 18 Objective Structured Clinical Examination stations.44 Taken during Foundation and Core Surgical Training, both MRCS Part A and Part B must now be passed for trainees to progress into higher surgical specialty (residency) training.45
All analyses were conducted using SPSS V.22.0 (IBM). χ2 tests were used to assess the relationship between two categorical factors such as medical school and first attempt MRCS pass/fail outcomes.
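The χ2 test of independence on a contingency table can be illustrated as follows (invented counts for a 2×2 pass/fail table; the study's real tables and SPSS output differ):

```python
# Minimal sketch of the Pearson chi-squared statistic for a contingency
# table, as used to compare first attempt pass/fail rates between groups.

def chi_squared_statistic(table):
    """Pearson chi-squared statistic for a 2D contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Example: 30/100 passes in one group vs 50/100 in another.
table = [[30, 70], [50, 50]]
print(round(chi_squared_statistic(table), 2))  # 8.33
```

The statistic would then be compared against the χ2 distribution with (rows−1)×(columns−1) degrees of freedom to obtain a p value.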
All counts have been rounded to the nearest five for illustration according to Higher Education Statistics Agency data standards.46 Regression models were used to calculate the ORs and 95% CI for passing MRCS Parts A and B at first attempt according to place of primary medical qualification. The University of Keele was declared the reference category for construction of the logistic regression model for MRCS Part A, as the pass rate at this university (58.6%) most closely resembled the pass rate of the entire cohort of Part A candidates from all universities. The University of Birmingham was declared the reference category for Part B in the logistic regression model, as the pass rate at this university (71.1%) most closely resembled the pass rate of the entire cohort of Part B candidates from all universities.
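The rounding of counts can be sketched as below (one common convention for rounding to the nearest multiple of five; HESA's full disclosure-control rules have additional details not reproduced here):

```python
# Round a count to the nearest multiple of five, as applied to all
# reported counts in this study for disclosure control.

def round_to_nearest_five(count):
    return 5 * round(count / 5)

# Invented raw counts for illustration:
print([round_to_nearest_five(n) for n in [731, 188, 4647]])  # [730, 190, 4645]
```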
Potential independent predictors of first attempt success at Part A and B MRCS were identified using multivariate logistic regression models. Regression models were constructed using backward stepwise regression with and without adjustment for prior academic attainment (A-Level performance) for direct comparison.47 Any variable (sociodemographic factor, course type, teaching methodology or marker of institutional esteem) with an association with the outcome at a conservative p<0.10 on univariate analysis was entered into the logistic regression model. All potential predictors with p>0.05 in the full model were subsequently removed until only statistically significant predictors remained in the final model. Potential interactions between the remaining significant predictors were also examined.
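For a single binary predictor, the OR and Wald 95% CI reported throughout can be derived from a 2×2 table as below. This is a simplified univariate sketch with invented counts; the study's multivariate models adjust for several covariates simultaneously, so their ORs are not computed this way directly.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed pass, b = exposed fail, c = unexposed pass, d = unexposed fail."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 60/100 passes in one group vs 50/100 in the other.
or_, lo, hi = odds_ratio_with_ci(60, 40, 50, 50)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.5 0.86 2.63
```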
The highest standards of security, governance and confidentiality were ensured when storing, handling and analysing data. See later for details of ethics approval.
Patient and public involvement
No patients or members of the public were involved in this study.
Medical school differences
Between 2007 and 2017 a total of 9730 UK medical graduates from 31 medical schools attempted the MRCS Part A, with 59% (SD 49) passing on the first attempt. A total of 4645 candidates attempted MRCS Part B and 71% (SD 45) passed at their first attempt. Of all Part A examination candidates 64% were male, 59% were white and 86% had no degree-level qualification prior to studying medicine. Similar demographics were seen in Part B applicants with 65% male candidates, 61% white candidates and 86% of candidates having no prior degree. χ2 analysis revealed a significant difference in MRCS pass rates between medical schools for Part A (p<0.001) and Part B (p<0.001). Figure 1 shows MRCS Part A first attempt pass rates by medical school and figure 2 shows MRCS Part B first attempt pass rates by medical school. Raw data are presented in online supplemental appendix 1.
Medical school ranking and position of esteem
ORs for passing MRCS Part A and B at the first attempt for each medical school can be found in table 1. Oxford and Cambridge University graduates (ranked first and second, respectively) performed significantly better in MRCS Part A than the mean with resulting OR of 9.11 (95% CI 4.77 to 17.39) and 5.82 (3.42 to 9.90), respectively. After adjusting for prior academic attainment, Oxford University graduates were still found to be more than three times more likely to pass MRCS Part A at first attempt (OR 3.18 (95% CI 1.15 to 8.81)) and Cambridge graduates were more than two times as likely to pass (OR 2.64 (95% CI 1.03 to 6.78)). After adjusting for prior academic attainment, no medical schools were found to be statistically significant predictors of MRCS Part B first-attempt success and there was no statistically significant difference in MRCS performance between most medical schools.
There was a significant difference in MRCS Part A pass rates between candidates from Russell Group universities (60.7% (4970/8185)) and non-Russell Group universities (49.9% (770/1540)) p<0.001 (table 2). Similarly, a significant difference was seen in Part B of the examination with a pass rate of 71.4% (2790/3910) for Russell Group universities and 67.5% (495/735) for non-Russell Group universities p=0.038.
Univariate analysis of pass rates by course type is displayed in table 2. The majority of all MRCS Part A candidates had studied a Standard-Entry Medicine (SEM) course (8950/9730): only 745 candidates had graduated from a GEM course. There was a significant difference between Part A pass rates of SEM (59.3%) and GEM graduates (54.6%) p=0.012. Of the 335 graduates who attempted Part B, 69.3% passed first time, and there was no statistically significant difference in MRCS Part B pass rates between SEM and GEM candidates (p=0.533).
A small proportion of the trainees attempting MRCS Part A who had studied an SEM course (n=8950) entered medicine as graduates (n=730). There was a significant difference in MRCS Part A success between those entering without a prior degree 60.2% (4945/8220) and graduates 49.5% (360/730) from SEM courses, p<0.001. Similar results were found for MRCS Part B (71.5% (2830/3960) vs 65.0% (220/335), respectively p<0.001).
Table 2 shows that of all candidates who attended an SEM, 190 entered their course via a ‘Gateway Year’. A statistically significant difference was seen in MRCS Part A pass rates between students who undertook a Gateway Year (28.1%) and those who entered directly into a Standard-Entry course (60.0%) p<0.001. There was a difference in MRCS Part B pass rates between Gateway students (60.9% (40/70)) and direct-entry students (71.1% (3010/4230)) but this was not statistically significant (p=0.081).
Of all graduates from SEM courses, 49.5% passed Part A first time compared with 54.6% of graduates from GEM courses (p=0.054). Similarly, 65% of SEM graduates passed Part B first time compared with 69.3% of GEM graduates (p=0.251).
A significant difference was observed in MRCS Part A first attempt pass rates between candidates who studied on a course described as PBL and those who studied at medical schools with other core pedagogies (47.0% (1175/2505) vs 63.1% (4560/7225) p<0.001 (table 2)). A similar difference was observed in Part B of the MRCS (PBL: 66.6% (785/1180) and non-PBL: 72.2% (2505/3465) p<0.001).
A comparison of MRCS pass rates between GEM courses can also be found in table 3. There was a significant difference in pass rates between GEM schools for MRCS Part A (p=0.028) but not for MRCS Part B (p=0.072). Drilling down further highlights that the aggregate data disguise variation. For example, graduates of the King’s College London GEM programme performed above average (eg, 76.7% Part A and 81.0% Part B pass rates; table 3) but the MRCS performance of candidates from their undergraduate programme was lower than average (57% Part A and 70.5% Part B, figure 1).
Pass rates for MRCS Parts A and B by graduate on entry to medicine status, gender and ethnicity are shown in table 4. Non-graduates, males and individuals of white ethnicity had significantly higher pass rates for MRCS Parts A and B compared with their graduate, female and non-white ethnicity counterparts.
The multivariate logistic regression models showing independent predictors of success at MRCS Part A and MRCS Part B can be found in table 5. After adjusting for prior academic attainment, white candidates, males and those who studied medicine without a prior degree-level qualification were all significantly more likely to pass MRCS Part A at the first attempt (p<0.05). After adjusting for prior attainment, white ethnicity remained a statistically significant predictor of Part B success (p<0.05), although gender and graduate status were not independent predictors of Part B success.
Candidates who attended a non-PBL medical school were found to be 53% (OR 1.53 (95% CI 1.25 to 1.87)) more likely to pass Part A and 54% (OR 1.54 (95% CI 1.05 to 2.25)) more likely to pass Part B at the first attempt after adjusting for prior academic performance, compared with those who attended a PBL school. Candidates attending an SEM course were nearly four times more likely to pass Part A at first attempt (OR 3.72 (95% CI 2.69 to 5.15)) and 67% more likely to pass Part B (OR 1.67 (95% CI 1.02 to 2.76)) when compared with those entering SEM via a Gateway Year. After adjusting for prior attainment, SEM candidates were more than twice as likely to pass Part A (OR 2.34 (95% CI 1.21 to 4.52)) but attending an SEM course was not found to be a statistically significant predictor of Part B success.
Candidates who attended a Russell Group university were 79% more likely to pass Part A (OR 1.79 (95% CI 1.56 to 2.05)) and 24% more likely to pass Part B (OR 1.24 (95% CI 1.03 to 1.49)). However, after adjusting for prior academic attainment, attending a Russell Group university was found to predict success at MRCS Part B (OR 1.81 (95% CI 1.17 to 2.80)) but not Part A.
This study, the first to examine the variation in pass rates for the MRCS examination across UK medical schools, identified significant differences in pass rates for both MRCS Part A and Part B across schools, course type and pedagogy.
Our most important finding is the lack of statistically significant difference in MRCS success between medical schools after adjusting for A-Levels as a measure of prior academic attainment. This indicates that prior attainment accounts for much of the variation in postgraduate performance between schools. In other words, differences in postgraduate examination performance are more closely related to individual factors than to medical school differences. This reflects patterns seen in other medical assessments.11 14 17 20 21 28 48–51
Institutional esteem is a known pull factor for medical school applicants.52–54
We found that, even after adjusting for prior academic attainment and, by extension, the selection of the highest achieving applicants (see later), both Oxford and Cambridge universities performed significantly better than other academic institutions. These results suggest that the training and education offered by these schools add value to the likelihood of their students' later success, over and above the individual's academic ability.
However, with the exceptions of Oxford and Cambridge, we found little association between MRCS pass rates and medical school rankings. This is perhaps unsurprising given that rankings are based on amalgamated scores,31 several of which are not relevant to vocational medical degrees with their high retention and employability rates. Additionally, earlier studies indicated that staff to student ratio and student feedback, two seemingly relevant measures used in university rankings, seem to have no effect on performance in medical graduates.15 16 In contrast, Russell Group (research-intensive/focused universities) medical graduates were far more likely to pass MRCS at the first attempt. The relationship between research intensity/focus and MRCS outcomes is unclear. However, it may be that higher entry requirements for Russell Group universities55 56 play a role given the strong message from our findings and those of the wider literature that prior academic performance is the strongest predictor of future success.14 17 20 21 25 28–30 48–51 Indeed, we would suggest that educational institutions that are self-selecting as an elite group have a self-interest in selecting the very best applicants who will continue to perform at a high level after graduating in order to perpetuate their status as the leading schools.
As per McManus et al's MedDifs paper (2020),15 we found that pedagogic differences (PBL vs non-PBL) are related to variation in outcome measures on postgraduate examinations. Graduates from PBL courses perform less well on MRCS Parts A and B. Other literature hints at possible reasons for this. PBL graduates have strengths compared with those from non-PBL courses in some areas,57 58 but PBL graduates have reported less surgical teaching than is offered at other medical schools15 and differences in the time dedicated to undergraduate surgical training in UK medical schools have been found to correlate with preparedness for clinical practice in surgery.23 PBL courses have also been criticised for neglecting basic science content,59 60 and this may be a contributing factor in the performance of PBL students at Part A of the MRCS, given that paper 1 (of 2) is dedicated to applied basic sciences.
Gateway courses provide a pathway to medicine for students from more diverse sociodemographic and academic backgrounds.61 62 Students from Gateway courses perform less well on assessments during medical school,61 63 at Foundation Programme Selection64 and, as found in this study, at the MRCS. However, there are two points to note. While increasing the diversity of the medical workforce is high on the workforce planning agenda,65 the actual number of Gateway programme graduates in our analysis was very small (n=190). This suggests that surgery is not a common career pathway for these students. Why this is the case is unknown but it may be related to myriad factors including high competition for surgical training posts,66 a lack of perceived ‘fit’ with surgery, few visible role models from similar backgrounds in senior surgical roles, and/or a greater preference to choose a medical career which enables them to give back to under-served communities.67 68 Future research is required to examine this further.
Despite the performance of those who entered medical school as graduates being comparable to those who entered as undergraduates throughout medical school69 70 and on graduation,63 there remains a significant attainment difference between these groups on postgraduate specialty examinations.20 71 72 Our analysis suggests that this is not due to course type (GEM or SEM). Further work is required to ascertain whether graduates are disadvantaged in postgraduate training due to other factors, such as increased commitments on their time (eg, family, dependants and financial obligations)72 or whether this is a reflection of lower prior academic achievement.56 73
Implications for research, policy and practice
Much literature indicates that medical school influences the progression, direction and performance of their graduates.5–7 9–13 15 16 74 However, it is reassuring to find that the majority of this variation in performance between schools on the MRCS at least can be accounted for by individual factors, namely prior academic attainment. There were, however, clear differences in performance by course pedagogy and markers of institutional esteem which can be used by medical schools to optimise the alignment between undergraduate and postgraduate teaching, learning and assessment values in surgery, and by individuals when considering where to apply to study medicine.
These findings are relevant to medical school selection. In the UK, the first and major hurdle to entry into medicine is achieving high grades on school exit examinations (such as A-Levels). This is usually coupled with an aptitude test and, if an applicant reaches the required standard on these measures, an interview to assess non-cognitive (personal) qualities.75 There has been much debate in the selection literature as to the weight which should be placed on each of these selection components.76 Our data suggest that if a medical school wants to graduate doctors who are good at passing postgraduate exams, then prior academic attainment should be heavily weighted at the point of selection.
However, if the mission of medical schools is to graduate doctors who will, for example, meet social accountability mandates, then more holistic selection criteria are required.77 Moreover, there are other factors potentially influencing postgraduate success which we could not take into account: group factors (eg, factors related to the demographics of the student group)78; individual career preferences16 and prior schooling79; mentorship and research opportunities80; and a student’s overall experience of a specialty.74 We are unlikely to ever characterise all variables that contribute to postgraduate examination success, but this study goes some way to identifying key patterns.
In addition to variation in MRCS pass rates, there is also significant variation in the number of graduates from each medical school entering careers in surgery.6 52 Students who wish to pursue surgery as a specialty may want to know which medical school will ‘best’ prepare them for a surgical career.23 Many students enter medicine with clear views as to which specialty they wish to pursue.52 81 82 Perceptions of how well an individual will be placed for a surgical career on graduation may be one factor that is taken into account at the time of application to medical school.83 However, it will not be the only factor. Studies indicate that numerous factors are ‘traded-off’ when considering training location and these trade-offs differ for different groups (eg, on the basis of gender or socioeconomic background).84 85 Similarly, applicants may consider factors such as pedagogic approach (eg, PBL vs a lecture-based course)86–88; course length if a graduate (graduates can choose between a traditional 5-year programme and an accelerated GEM course89); and/or the reputation and national ranking of a medical school when considering where to apply.52–54 90 In short, choosing which medical school to attend is a major decision and factors other than career preference may be important in this process.
Group differences in performance by gender, maturity and ethnicity reflect those seen in previous studies.20 24 These attainment differences have also been identified in other high-stakes medical examinations, including FRCS, MRCP, MRCGP, MRCPsych and the USMLE.20 34 48 91–93 Research that aims to investigate this differential attainment at MRCS is currently ongoing. Bias and discrimination at the question level must be ruled out using techniques such as differential item functioning analysis,94 as should the possibility of examiner bias.95 96 The wider literature also suggests the need to examine systemic inequities in the workplace learning environment.97
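The differential item functioning analysis mentioned above is commonly implemented as a logistic regression of item correctness on a matching variable (eg, total test score) plus group membership, where a non-zero group coefficient flags potential uniform DIF. The sketch below is purely illustrative: the data are simulated, the variable names are our own assumptions, and the hand-rolled Newton-Raphson fit stands in for a proper psychometric package; none of the effect sizes come from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: 2000 candidates, one exam item.
n = 2000
group = rng.integers(0, 2, n)     # 0 = reference group, 1 = focal group (assumed labels)
ability = rng.normal(0, 1, n)     # proxy for total test score

# Simulate item responses with NO true group effect (ie, zero uniform DIF).
p_correct = 1 / (1 + np.exp(-(0.5 + 1.2 * ability)))
correct = (rng.random(n) < p_correct).astype(float)

def fit_logistic(X, y, iters=50):
    """Plain Newton-Raphson maximum-likelihood fit of a logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                     # score vector
        W = p * (1 - p)
        hess = (X * W[:, None]).T @ X            # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# DIF model: item correctness ~ intercept + ability + group membership.
X = np.column_stack([np.ones(n), ability, group])
beta = fit_logistic(X, correct)
print("group coefficient (log-odds):", round(beta[2], 3))
# A group coefficient near zero suggests no uniform DIF for this item.
```

In practice, each item would be screened this way (with a significance test or effect-size threshold on the group coefficient) before concluding that attainment gaps reflect candidate-level factors rather than biased questions.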
Strengths and limitations
To our knowledge, this large cohort study is the first to assess the relationship between MRCS success and medical school choice, type and ranking after adjusting for measures of prior academic attainment. The UKMED dataset enabled a large-scale, multi-cohort analysis of medical school differences on MRCS first attempt outcomes. The dataset had very little missing data, enabling detailed and accurate analyses and demonstrating the utility of national medical education databases. We used candidate first attempt scores despite candidates being able to take multiple attempts at both parts of the MRCS, as first attempt performance in postgraduate examinations has been shown to be the best predictor of future performance33 and this outcome has been used in previous studies looking at factors which predict performance in the MRCS.20 24 The outcome measure of pass/fail was used as in previous studies since this is what is meaningful to those sitting MRCS.24 25 98 Data were not available for individual MRCS questions and stations, potentially hiding institutional differences in performance.
A-Levels were used as a marker of prior academic attainment in this study. This does not represent the full range of school-leaving examinations used by all UK schools (others include Irish and Scottish Highers and the International Baccalaureate). However, A-Levels have been used previously as markers of prior academic attainment in seminal medical education papers and we have no reason to believe that other school-leaving examinations would show different results.28 29 The strengths and limitations of using markers of prior academic attainment such as A-Levels in high achieving cohorts such as doctors are discussed in these papers and in our previous work.30
Despite a long study period and a large study population, stratification of the analysis by medical school results in smaller cohort numbers (and therefore reduced statistical power) for comparison. Smaller cohort numbers and lower numbers of actual observations in some subanalyses may result in overfitting, affecting the predictive ability of regression models. Larger cohort sizes would have enabled a more detailed analysis of group differences such as self-declared ethnicity data, avoiding the need for the binary categorisation used here, which ensured maximum statistical power.97 99
Stage of training is known to have an impact on MRCS performance, with those who attempt the examination earlier in their training generally performing better than their peers.24 Without access to stage of training data for the first attempt at MRCS, we were unable to adjust for this variable in the analyses. Stage of training could be extrapolated from the date of graduation; however, given that over half of UK doctors take at least 1 year out of training after the Foundation programme, this would introduce a significant degree of inaccuracy into the analyses. Similarly, we were unable to adjust for degree intercalation. Those who undertake an intercalated degree are known to perform better in later medical school examinations, which is to be expected, given that entry to intercalation programmes is competitive.100 It is therefore likely that this group will continue to be top performers in postgraduate assessments, given prior academic attainment is the best predictor of later success.28 Additionally, very few intercalating students will be graduates on entry to medicine and are therefore unlikely to experience the same burden of time, financial and caring commitments as graduates. The impact of intercalating on markers of postgraduate performance across all specialties would be best assessed in a separate study. This would be particularly relevant given the recent removal of points scored for undergraduate degrees in UK Foundation Programme selection measures, which has started a debate regarding the future merit of intercalating.
Analysis that includes multiple sociodemographic and course factors inevitably involves a degree of multicollinearity, although every effort was made to minimise this. Interaction terms were explored and statistically significant interactions are listed in the footnote of table 5. These highlight differences in cohort sociodemographics between each teaching methodology and course type. Further exploration of these differences may be of interest to those in charge of selection and recruitment for medical school. Courses change over time and as such results and attainment differences may also have changed throughout the study period: future studies may wish to use a time-series analysis to examine this.76
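One common way to quantify the multicollinearity discussed above is the variance inflation factor (VIF), which can be read off the diagonal of the inverse of the predictor correlation matrix. The sketch below uses simulated data loosely modelled on the kinds of predictors in this study; the variable names, distributions and correlation structure are our own assumptions, not UKMED values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical predictors mimicking correlated attainment/course factors.
a_level = rng.normal(0, 1, n)                   # standardised prior attainment
russell = (a_level + rng.normal(0, 1, n)) > 0   # deliberately correlated with attainment
grad_entry = rng.random(n) < 0.25               # roughly independent binary factor

X = np.column_stack([a_level, russell, grad_entry]).astype(float)

# Variance inflation factors: VIF_i = [R^-1]_ii,
# where R is the correlation matrix of the predictors.
R = np.corrcoef(X, rowvar=False)
vif = np.diag(np.linalg.inv(R))
for name, v in zip(["a_level", "russell", "grad_entry"], vif):
    print(f"{name}: VIF = {v:.2f}")  # values near 1 indicate little collinearity
```

A conventional rule of thumb treats VIFs above roughly 5–10 as a sign that a predictor's coefficient estimate is unstable, which is one way the "every effort to minimise" multicollinearity described above can be made concrete.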
Conclusion
There are significant differences in MRCS performance between UK medical school course types and pedagogy. However, variation in MRCS pass rates between medical schools is largely due to individual factors, such as the academic ability of individuals, rather than medical school factors. These data have implications for those in charge of selection policy and curricula delivery. This study also highlights group level attainment differences that transcend training location and stage, warranting further investigation to ensure equity within medical training.
Data availability statement
Data may be obtained from a third party and are not publicly available. The dataset used in this study was acquired from the UK Medical Education Database and is held in Safe Haven. Data access requests must be made to UKMED. Full information for applications can be found at https://www.ukmed.ac.uk.
Patient consent for publication
UKMED has received ethics exemption for projects using exclusively UKMED data from Queen Mary University of London Ethics of Research Committee on behalf of all UK medical schools (https://www.ukmed.ac.uk/documents/UKMED_research_projects_ethics_exemption.pdf). The Intercollegiate Committee for Basic Surgical Examinations (ICBSE) and its Internal Quality Assurance Subcommittee, which monitors MRCS standards, research and quality, approved this study.
The authors would like to acknowledge Iain Targett at the Royal College of Surgeons of England, for his help with data collection and John Hines and Gregory Ayre from the Intercollegiate Committee for Basic Surgical Examinations for their support during this project. Our thanks to members of the UKMED Research Group who provided useful feedback on an earlier version of this manuscript, and whose comments helped refine the paper. The authors would also like to acknowledge Daniel Smith for his help with the UKMED database. Data source: UK Medical Education Database ('UKMED'). UKMEDP043 extract generated on 25/07/2018. We are grateful to UKMED for the use of these data. However, UKMED bears no responsibility for their analysis or interpretation. The data include information derived from that collected by the Higher Education Statistics Agency Limited ('HESA') and provided to the GMC ('HESA Data'). Source: HESA Student Records 2002/2003 to 2015/2016. Copyright Higher Education Statistics Agency Limited. The Higher Education Statistics Agency Limited makes no warranty as to the accuracy of the HESA Data and cannot accept responsibility for any inferences or conclusions derived by third parties from data or other information supplied by it.
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Twitter @RickJEllis1, @dsgscrimgeour
Contributors RE wrote the first draft of the manuscript. RE performed statistical analyses with AL’s supervision. RE, PB, DSGS, AJL and JC all reviewed and edited the manuscript. JC led the study proposal for access to UKMED data. All authors approved the final draft of the manuscript. RE is study guarantor.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.