
Quality of randomised controlled trials in medical education reported between 2012 and 2013: a systematic review protocol
Martin G Tolsgaard,1 Cheryl Ku,2 Nicole N Woods,3 Kulamakan Mahan Kulasegaram,4 Ryan Brydges,5 Charlotte Ringsted6

1 Centre for Clinical Education and the Juliane Marie Centre, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
2 The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
3 Department of Surgery, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
4 Department of Family and Community Medicine, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
5 Department of Medicine, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
6 Department of Anesthesia, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada

Correspondence to Dr Martin G Tolsgaard; martintolsgaard@gmail.com

Abstract

Introduction Research in medical education has increased in volume over the past decades, but concerns have been raised regarding the quality of trials conducted within this field. Randomised controlled trials (RCTs) involving educational interventions that are reported in biomedical journals have been criticised for insufficient conceptual and theoretical frameworks. RCTs published in journals dedicated to medical education, on the other hand, have been questioned regarding their methodological rigour. The aim of this study is therefore to assess the quality of RCTs of educational interventions reported in 2012 and 2013 in journals dedicated to medical education compared with biomedical journals, with respect to objective quality criteria.

Methods and analysis RCTs published in English between 1 January 2012 and 31 December 2013 are included. The search strategy is developed with the help of experienced librarians to search online databases for key terms. All identified RCTs are screened on title and abstract individually by the authors, and screening decisions are then compared in pairs to assess agreement. Data are extracted from the included RCTs by independently scoring each RCT using a data collection form. The data collection form consists of five steps. Step 1 includes confirmation of RCT eligibility; step 2 consists of the CONSORT checklist; step 3 consists of the Jadad scale; step 4 consists of the Medical Education Research Study Quality Instrument (MERSQI) framework; and step 5 consists of a Medical Education Extension (MEdEx) to the CONSORT checklist. The MEdEx includes the following elements: description of the scientific background, explanation of rationale, quality of research questions and hypotheses, clarity in the description of the intervention and control conditions, and interpretation of results.

Ethics and dissemination This review is the first to systematically examine the quality of RCTs conducted in medical education. We plan to disseminate the results through publications and presentations at relevant conferences. Ethical approval is not sought for this review.

  • EDUCATION & TRAINING (see Medical Education & Training)
  • EPIDEMIOLOGY



Strengths and limitations of this study

  • The first systematic review of the quality of randomised controlled trials in medical education.

  • The use of duplicate, independent and reproducible data coding of quality measures pertaining to research methodology and reporting.

  • To reflect the current state of evidence on trial quality, only studies reported from 2012 to 2013 are included in this review.

  • Only articles in English are included in this systematic review.

Introduction

Medical education as a field has grown during the past 20 years. It has become a major industry, accounting for about US$100 billion per year worldwide,1 and increasing awareness of the link between education and patient outcomes has brought evidence-based medical education into focus.2 The growing interest is reflected in the rise in the number of publications within this area over the past several decades.3 This growth is not unproblematic, however, as several scholars have warned that medical education research lacks methodological rigour.3 In a study of randomised controlled trials (RCTs) published between 2000 and 2003, a large proportion fell short of the criteria developed by the International Committee of Medical Journal Editors for reporting RCTs.4 Meanwhile, some argue that judging the quality of research performed in medical education by any ‘objective’ checklist is insufficient.5 Instead, the quality of medical education research should be based on the advancement of our theoretical understanding, rather than on how well a particular research methodology has been adopted.5 Other viewpoints state that whatever method is used should comply with the highest standards of practice for that design.6 Thus, two discourses of evaluating quality have been promoted: one assesses quality against ‘gold standards’ such as checklists and guidelines, and the other judges the advancement of theory.

In clinical epidemiological research, RCTs take on a central role in evaluating healthcare interventions. Since 2000, the CONSORT group has provided guidelines to improve the transparency and rigour of the reporting of randomised trials within biomedicine.7 Although the CONSORT statement does not include recommendations for designing, conducting and analysing trials, it indirectly affects design and conduct, as transparent reporting may expose deficiencies in research if they exist.7 Furthermore, CONSORT is informed by methodological theorists and practitioners in clinical epidemiology as well as biostatistics. Assessing the quality of RCTs in medical education using the CONSORT statement may, however, not capture the advancement of theory. Insufficient use of a conceptual, theoretical framework may lead to failure to identify the active components of training interventions. Furthermore, a poor description of the context of the study, as well as of trainee characteristics, limits external validity in terms of generalisability to other settings and populations. Reporting should therefore also relate the study to a relevant theoretical context to justify how it uses and advances existing theory,5 including thorough descriptions of context, educational intervention and control circumstances, and trainee characteristics.8 However, these aspects are not assessed using the CONSORT statement, and other measures to evaluate study quality within medical education research may be warranted. To further our understanding of the quality of RCTs conducted in medical education, we aim to explore adherence to standardised quality criteria as well as the use of theory in the recent literature. The research question of this review is: in randomised controlled trials in medical education reported between 2012 and 2013, what characterises the quality of papers published in journals dedicated to medical education compared with papers published in biomedical journals, with respect to objective quality criteria?

Methods

This systematic review is designed according to the seven-step approach recommended for conducting systematic reviews in medical education2 and is reported according to the PRISMA statement.9

Study eligibility

Broad inclusion criteria are used to capture a wide range of randomised trials in medical education. Studies published in English between 1 January 2012 and 31 December 2013 are included. This period is chosen because new guidelines for reporting randomised trials were published in June 2010, and previous studies have argued that reporting guidelines should first be evaluated 18–24 months after publication.10,11 All research papers in medical education using randomised designs are included. Medical education research is defined as ‘any original research study pertaining to medical students, residents, fellows, faculty development or continuing medical education for physicians.’12 Using this definition, studies on veterinary, nursing, pharmacist, physiotherapist and dentistry education research are not eligible. Parallel group studies, crossover studies, and non-inferiority and equivalence studies are all included, whereas pseudorandomised studies are not.
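
As an illustration only, these criteria can be expressed as a simple screening predicate; the field names and category labels below are illustrative shorthand, not the review's actual screening tool.

```python
# A minimal sketch of the eligibility check; labels are illustrative.
from datetime import date

ELIGIBLE_LEARNERS = {
    "medical student", "resident", "fellow",
    "faculty development", "continuing medical education",
}
ELIGIBLE_DESIGNS = {"parallel group", "crossover", "non-inferiority", "equivalence"}

def is_eligible(pub_date: date, language: str, learners: str, design: str) -> bool:
    """Return True if a study meets the review's inclusion criteria."""
    return (
        date(2012, 1, 1) <= pub_date <= date(2013, 12, 31)
        and language.lower() == "english"
        and learners in ELIGIBLE_LEARNERS
        and design in ELIGIBLE_DESIGNS  # pseudorandomised designs are excluded
    )
```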

Search

The search strategy is developed with the help of experienced librarians to search MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science and Scopus for key terms. These terms include a truncated search on random* and MeSH terms relating to medical education (eg, Education, Professional). Related domains are also included in the search to account for research not categorised under medical education (eg, health professions education, simulation, undergraduate medical education, technology-enhanced education, clinical reasoning, skills assessment, education professional, student health occupation, internship and residency, curriculum planning, instructional method, self-directed learning, etc). The search is supplemented with the reference lists of recent reviews in simulation-based medical education and with the authors’ records of studies published in the period of interest. The authors’ records are used to refine the search strategy iteratively so that as many relevant randomised studies as possible are captured by the online search.
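
To illustrate the general shape of such a query, the sketch below runs a simplified version against a single database (PubMed) via the NCBI E-utilities, assuming the Biopython package is available; the search term shown is illustrative and far narrower than the full multi-database strategy.

```python
# A minimal sketch of one database query; not the authors' full search string.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

# Illustrative term combining a truncated randomisation term with
# education-related MeSH headings.
term = (
    '(random*[Title/Abstract] OR "Randomized Controlled Trial"[Publication Type]) '
    'AND ("Education, Medical"[MeSH Terms] OR "Internship and Residency"[MeSH Terms] '
    'OR "Education, Professional"[MeSH Terms])'
)

handle = Entrez.esearch(
    db="pubmed",
    term=term,
    datetype="pdat",        # filter on publication date
    mindate="2012/01/01",
    maxdate="2013/12/31",
    retmax=10000,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} candidate records")
pmids = record["IdList"]  # identifiers to screen for eligibility
```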

Study selection

All identified studies are screened individually on title and abstract, and screening decisions are compared in pairs to assess agreement, so that every study is screened by two authors. Disagreements are resolved by discussion until consensus is reached. If the title or abstract is insufficient for determining eligibility, the full text is reviewed. If consensus cannot be reached by two of the coauthors, the whole author team will decide whether to include the paper. The agreement between raters is determined using intraclass correlation coefficients.
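
A minimal sketch of the agreement calculation, assuming paired include/exclude decisions are stored in long format and using the pingouin package's ICC routine; the decisions shown are hypothetical.

```python
# Inter-rater agreement on screening decisions (1 = include, 0 = exclude).
import pandas as pd
import pingouin as pg

screening = pd.DataFrame({
    "study": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "rater": ["A", "B", "A", "B", "A", "B"],
    "include": [1, 1, 0, 1, 0, 0],  # hypothetical paired decisions
})

# Intraclass correlation coefficients across the paired raters
icc = pg.intraclass_corr(
    data=screening, targets="study", raters="rater", ratings="include"
)
print(icc[["Type", "ICC", "CI95%"]])
```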

Data collection process

Data are extracted from included studies by duplicate and independent scoring of each study using a data collection form. The data collection form consists of five steps: step 1 includes confirmation of study eligibility; step 2 consists of the CONSORT checklist; step 3 consists of the Jadad scale; step 4 consists of the Medical Education Research Study Quality Instrument (MERSQI) framework; and step 5 consists of a Medical Education Extension (MEdEx) to the CONSORT statement developed by the review group. A sketch of the form as a data structure follows the list below.

  • Step 1: The first step includes confirmation of study eligibility, extraction of study ID (created by the review author) as well as the name and focus of the journal (medical education/biomedical).

  • Step 2: The CONSORT checklist is completed by scoring each item as present (=1), absent (=0) or not applicable (NA). The CONSORT statement recommends that researchers provide a scientific background for the study, present specific objectives and hypotheses, and thoroughly describe the intervention and control conditions, randomisation procedure, data analysis and interpretation of results.

  • Step 3: The Jadad scale for reports of randomised trials13 is used to assess the methodological rigour of the included studies. It consists of three items pertaining to the randomisation procedure, blinding, and participant withdrawals or dropouts.

  • Step 4: The MERSQI12 is used to provide an established measure of study quality; scores are compared in pairs and discussed until consensus is reached. Evidence of the validity of the MERSQI framework has been established in a previous study.12 The MERSQI framework provides a measure of trial size (single or multiple institutions), the validity of the assessment instruments used, and the Kirkpatrick level of the outcome measures used (a taxonomy for classifying training outcomes). Hence, multi-institution studies that focus on patient outcomes would receive higher scores than single-institution studies that assess the impact of interventions on healthcare professionals’ knowledge or behaviour in a simulated setting.

  • Step 5: The Medical Education Extension (MEdEx) is developed by the study group through a literature review of relevant research on quality in medical education. To further advance our understanding of the use of theory in the scientific background of the RCTs, the reporting of specific hypotheses, the clarity of the description of interventions14 and controls, and the use of theory in the interpretation of the observed results, we chose to include these factors in a MEdEx to the CONSORT checklist. In step 5, the following items are therefore included: (1) scientific background,5 (2) explanation of rationale,5 (3) objectives or research question,4,6 (4) hypotheses,4,6 (5) description of the intervention and control circumstances,6,8,14 and (6) interpretation of results4,5,12 (see online supplementary appendix).
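
As referenced above, the following is a minimal sketch of the five-step form as a record structure; the field names and score ranges are illustrative, not the authors' actual coding sheet.

```python
# Hypothetical representation of one trial's entry in the data collection form.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrialRecord:
    # Step 1: eligibility and identification
    study_id: str
    journal: str
    journal_focus: str  # "medical education" or "biomedical"
    eligible: bool
    # Step 2: CONSORT checklist, one entry per item:
    # 1 = present, 0 = absent, None = not applicable
    consort: dict[str, Optional[int]] = field(default_factory=dict)
    # Step 3: Jadad scale (randomisation 0-2, blinding 0-2, withdrawals 0-1)
    jadad: dict[str, int] = field(default_factory=dict)
    # Step 4: MERSQI domain scores
    mersqi: dict[str, float] = field(default_factory=dict)
    # Step 5: MEdEx items (scientific background, rationale, objectives,
    # hypotheses, intervention/control description, interpretation)
    medex: dict[str, int] = field(default_factory=dict)

    def consort_score(self) -> float:
        """Proportion of applicable CONSORT items that are reported."""
        applicable = [v for v in self.consort.values() if v is not None]
        return sum(applicable) / len(applicable) if applicable else 0.0
```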

Statistical analysis

Inter-rater reliability is calculated using intraclass correlation coefficients. In the event of disagreement, assessments will be resolved by consensus. Descriptive statistics will be calculated for each of the quality measures. Logistic regression will be performed using journal type as the dependent variable and CONSORT scores, MERSQI scores and MEdEx scores as predictor variables. Multiple regression using the same predictor variables, with journal impact factor as the dependent variable, will also be performed to assess the relation between the quality measures and journal impact factor.
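
A minimal sketch of the planned regressions, assuming the extracted scores sit in a pandas DataFrame with one row per trial and using statsmodels; all column and file names are illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("extracted_scores.csv")  # hypothetical extraction output
# Expected columns: med_ed_journal (1 = medical education, 0 = biomedical),
# consort, mersqi, medex, impact_factor

# Logistic regression: journal type as outcome, quality scores as predictors
logit = smf.logit("med_ed_journal ~ consort + mersqi + medex", data=df).fit()
print(logit.summary())

# Multiple regression: relation between quality measures and impact factor
ols = smf.ols("impact_factor ~ consort + mersqi + medex", data=df).fit()
print(ols.summary())
```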

Discussion and dissemination

In parallel with the rise in publications in medical education over the past decades, increasing attention is being paid to systematically evaluating the quality of research conducted within this field. We chose to include several quality measures in the data collection form for this systematic review. To evaluate the quality of reporting, the CONSORT checklist is included as a measure of the degree to which current medical education research adheres to the guidelines endorsed by the World Association of Medical Editors, the International Committee of Medical Journal Editors (ICMJE) and the Council of Science Editors.15 The Jadad scale is included as a brief, established measure of methodological rigour. A further quality measure is the MERSQI framework, which has been used extensively in several recent reviews.16–18 Although MERSQI scores have been shown to correlate with journal impact factor,12 the instrument provides limited information on the use of theory or the clarity of the description of interventions. Hence, to account for the use of conceptual, theoretical frameworks in medical education RCTs, we plan to include a final quality measure in terms of the MEdEx framework.

We hypothesise that this review may demonstrate differences in quality measures between RCTs reported in biomedical journals and those published in journals dedicated to medical education. We expect that RCTs reported in biomedical journals adhere more strictly to the CONSORT statement and use outcome measures that relate to the upper Kirkpatrick levels compared with RCTs reported in medical education journals. Finally, we hypothesise that RCTs published in medical education journals use theory in the rationale for their research questions and methods and in their interpretation of results, whereas this may be missing in research published in biomedical or clinical journals.

The review results will be submitted for publication in a peer-reviewed general medical journal and disseminated through relevant international conferences.

The results of this review will help clarify the state of quality of medical education research against common quality standards. The comparative analysis with clinical epidemiology will provide feedback for medical education researchers and contribute to raising the quality of research and improving the reporting of studies within this field.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Contributors All authors contributed to the design of the review and approved the final manuscript. MGT and CR were responsible for conception of the review. CK, KMK, RB, NNW, MGT and CR were responsible for designing the review and the MEdEx. MGT drafted the first version of the protocol and CK, RB, NNW, KMK and CR critically revised the paper.

  • Funding This review has been supported by the Laerdal Foundation.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.