Original Article
The Oxford Implementation Index: a new tool for incorporating implementation data into systematic reviews and meta-analyses

https://doi.org/10.1016/j.jclinepi.2013.03.006

Abstract

Objectives

This article presents a new tool that helps systematic reviewers to extract and compare implementation data across primary trials. Current systematic review guidance does not address the identification and extraction of data related to the implementation of the underlying interventions.

Study Design and Setting

A team of systematic reviewers used a multistaged consensus development approach to develop this tool. First, a systematic literature search on the implementation and synthesis of clinical trial evidence was performed. The team then met in a series of subcommittees to develop an initial draft index. Drafts were presented at several research conferences and circulated to methodological experts in various health-related disciplines for feedback. The team systematically recorded, discussed, and incorporated all feedback into further revisions. A penultimate draft was discussed at the 2010 Cochrane–Campbell Collaboration Colloquium to finalize its content.

Results

The Oxford Implementation Index provides a checklist of implementation data to extract from primary trials. Checklist items are organized into four domains: intervention design, actual delivery by trial practitioners, uptake of the intervention by participants, and contextual factors. Systematic reviewers piloting the index at the Cochrane–Campbell Colloquium reported that it helped them identify implementation data.

Conclusion

The Oxford Implementation Index provides a framework to help reviewers assess implementation data across trials. Reviewers can use this tool to identify implementation data, extract relevant information, and compare features of implementation across primary trials in a systematic review. The index is a work in progress, and future efforts will focus on refining it, improving usability, and integrating it with other guidance on systematic reviewing.

Introduction

Evidence-based practice encourages clinicians to look to systematic reviews of specific interventions, such as those produced and maintained by the Cochrane Collaboration, as the “gold standard” for measuring effects. Rigorous systematic reviews and meta-analyses are designed to minimize bias, efficiently distil large amounts of information, and provide information of value to clinicians [1]. However, systematic reviews and meta-analyses of medical or psychosocial interventions present methodological challenges [2]. Reviewers must exercise subjective judgment when deciding whether a statistical meta-analysis will be a reliable summary and when describing how results may be applied to clinical practice. Reviewers are trained to appraise many sources of variation across trials, particularly characteristics related to the methodological quality of included trials. Systematic review guidance, such as the Cochrane Handbook and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement, provides thorough instructions for the identification, extraction, and appraisal of information on methodological quality. These guidance documents, however, place less emphasis on intervention implementation, which can be an important source of variation across primary trials. Implementation encompasses information related to intervention design, delivery, and uptake in primary trials. To date, systematic review guidance has not provided clear direction for reviewers seeking to compare implementation across trials.

Distinguishing between fidelity and implementation

Individual trials use various terms for implementation fidelity, defined as the degree to which interventions are implemented as designed. These terms include adherence, treatment fidelity, treatment integrity, program integrity, and implementation quality. As used in primary trials, these definitions of fidelity are intended to capture practitioners' and participants' compliance with intervention protocols. The rigorous assessment of implementation fidelity can provide many benefits.

Methods

The Oxford Implementation Index was developed by a team of systematic reviewers at the Centre for Evidence-Based Intervention at Oxford University. Team members are affiliated with the Cochrane and Campbell Collaborations and independently conduct randomized controlled trials in areas of public health, psychiatry, and social welfare. A steering committee was established, and development of the index began in 2005.

Domains of the Oxford Implementation Index

Each implementation domain encompasses a number of implementation characteristics that may vary across trials assessing similar interventions. The index highlights many of these characteristics, but reviewers should decide which aspects are most relevant to their topic areas. Broadly, the final domains are intervention design, actual delivery by trial practitioners, uptake of the intervention by participants, and contextual factors.
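
For reviewers who keep their extraction sheets electronically, the four domains map naturally onto a structured record. The sketch below is a minimal illustration of that idea in Python; the field names are hypothetical examples for a single trial and are not the actual checklist items of the index.

```python
# Illustrative only: a hypothetical extraction record organized by the
# index's four domains. Field names are invented examples, not the
# actual items of the Oxford Implementation Index checklist.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrialImplementation:
    trial_id: str
    # Domain 1: intervention design (as specified in the trial protocol)
    design_manualized: Optional[bool] = None          # manual/protocol used?
    design_planned_sessions: Optional[int] = None     # intended dose
    # Domain 2: actual delivery by trial practitioners
    delivery_fidelity_monitored: Optional[bool] = None
    delivery_mean_sessions: Optional[float] = None    # sessions actually delivered
    # Domain 3: uptake of the intervention by participants
    uptake_attendance_rate: Optional[float] = None    # proportion of sessions attended
    # Domain 4: contextual factors
    context_setting: Optional[str] = None             # e.g., clinic, school, community
    notes: dict = field(default_factory=dict)         # free-text observations per domain

# Example: recording implementation data while reading one primary trial
trial = TrialImplementation(
    trial_id="Smith2008",
    design_manualized=True,
    design_planned_sessions=12,
    delivery_mean_sessions=9.4,
    uptake_attendance_rate=0.78,
    context_setting="outpatient clinic",
)
```

Keeping one such record per trial makes it straightforward to tabulate and compare implementation features side by side across the trials in a review.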

Discussion

The Oxford Implementation Index fills an important gap in current guidelines for conducting systematic reviews and meta-analyses. It provides an explicit and systematic method of assessing implementation data across trials, which can help reviewers to combine trials appropriately, explain heterogeneity within reviews, and critically appraise the generalizability of reviews to clinical practice. By encouraging reviewers to focus more explicitly and carefully on implementation data, we hope that the index will improve both the synthesis of trial evidence and its application in clinical practice.
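
To illustrate one way in which extracted implementation data might feed into analysis, the sketch below pools trials within implementation-fidelity subgroups using a standard DerSimonian–Laird random-effects model and reports I² as a heterogeneity measure. This is a hedged example of a common technique, not a method prescribed by the index, and all trial data are invented.

```python
# Hedged sketch: exploring heterogeneity by pooling trials within
# implementation-fidelity subgroups (DerSimonian-Laird random effects).
# All trial data below are invented for illustration.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool effect sizes (e.g., log odds ratios) given their variances."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-trial variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical trials: (log odds ratio, variance, fidelity rating from extraction)
trials = [(-0.45, 0.04, "high"), (-0.50, 0.06, "high"),
          (-0.10, 0.05, "low"),  (-0.05, 0.07, "low")]

for level in ("high", "low"):
    ys, vs = zip(*[(y, v) for y, v, f in trials if f == level])
    pooled, se, i2 = dersimonian_laird(ys, vs)
    print(f"{level}-fidelity trials: pooled log OR = {pooled:.2f} "
          f"(SE {se:.2f}), I^2 = {i2:.0f}%")
```

If the high-fidelity subgroup shows a clearly larger pooled effect than the low-fidelity subgroup, implementation differences are a plausible source of the between-trial heterogeneity.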

Acknowledgments

The authors thank the late Leonard Gibbs, Janet Harris, and graduate students in the Oxford Evidence-Based Intervention course for helping to refine the index in its early stages. They are grateful to Isabelle Boutron, Jane Dennis, Janet Harris, Alan Kazdin, Steve Milan, Francheska Perepletchikova, Prathap Tharyan, and Denise Thomson for their comments on this article. They also would particularly like to thank Sean Grant for his help in the latter stages of preparing this article.

References (92)

  • B.O. Ladd et al. Improving the quality of reporting alcohol outcome studies: effects of the CONSORT statement. Addict Behav (2010)
  • I. Chalmers. Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date, replicable evaluations. Ann Am Acad Polit Soc Sci (2003)
  • N. Jackson et al. The challenges of systematically reviewing public health interventions. J Public Health (2004)
  • K.S. Dobson et al. Definitional and practical issues in the assessment of treatment integrity. Clin Psychol Sci Pract (2005)
  • L. Dusenbury et al. A review of research on fidelity of implementation: implications for drug abuse in school settings. Health Educ Res (2003)
  • E. Flannery-Schroeder. Treatment integrity: implications for training. Clin Psychol Sci Pract (2005)
  • M. Forgatch et al. Evaluating fidelity: predictive validity for a measure of competent adherence to the Oregon model of parent management training. Behav Ther (2004)
  • J. Huppert et al. Therapists, therapist variables, and cognitive-behavioral therapy outcome in a multicenter trial for panic disorder. J Consult Clin Psychol (2001)
  • J. Hutchings et al. Making evidence based interventions work.
  • Mihalic S. The importance of implementation fidelity. Working paper by the Centre for the Study and Prevention of...
  • Mihalic S. Successful program implementation: lessons from blueprints. Report by the Office of Juvenile Justice and...
  • F. Perepletchikova et al. Treatment integrity and therapeutic change: issues and research recommendations. Clin Psychol Sci Pract (2005)
  • S. Schoenwald et al. Multisystemic therapy: monitoring treatment fidelity. Fam Process (2000)
  • J. Waltz et al. Testing the integrity of a psychotherapy protocol: assessment of adherence and competence. J Consult Clin Psychol (1993)
  • C. Carroll et al. A conceptual framework for implementation fidelity. Implement Sci (2007)
  • D.L. Fixsen et al. Implementation research: a synthesis of the literature (2005)
  • A.J. Bellg et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol (2004)
  • B. Borrelli et al. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. J Consult Clin Psychol (2005)
  • B. Resnick et al. Examples of implementation and evaluation of treatment fidelity in the BCC studies: where we are and where we need to go. Ann Behav Med (2005)
  • F.M. Gresham et al. Treatment integrity in applied behavior analysis with children. J Appl Behav Anal (1993)
  • F. Gresham et al. Treatment integrity of school-based behavioral intervention studies: 1980-1990. Sch Psychol Rev (1993)
  • L. Peterson et al. The integrity of independent variables in behavior analysis. J Appl Behav Anal (1982)
  • Tamayo S. Orange data: implications of treatment fidelity for systematic reviewing [M.Sc. Thesis in Evidence-Based...
  • D. Moher et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol (2010)
  • G. Piaggio et al. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA (2006)
  • M.K. Campbell et al. CONSORT statement: extension to cluster randomised trials. BMJ (2004)
  • J.P.A. Ioannidis et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med (2004)
  • K.W. Davidson et al. Evidence-based behavioral medicine: what is it and how do we achieve it? Ann Behav Med (2003)
  • I. Boutron et al. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med (2008)
  • M.K. Campbell et al. CONSORT 2010 statement: extension to cluster randomised trials. BMJ (2012)
  • M. Zwarenstein et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ (2008)
  • J.J. Gagnier et al. Reporting randomized, controlled trials of herbal interventions: an elaborated CONSORT statement. Ann Intern Med (2006)
  • S. Hopewell et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med (2008)
  • A. Liberati et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol (2009)

    Conflict of interest: The Oxford Implementation Index was developed by a team of systematic reviewers at the Centre for Evidence-Based Intervention at Oxford University. The team members are affiliated with the Cochrane and Campbell Collaborations and independently conduct randomized controlled trials in areas of public health, psychiatry, and social welfare. P.M. is the author of over a dozen Cochrane Reviews and was, until recently, cochair of the Campbell Collaboration Social Welfare Group. He led this team and managed the project. P.M., E.M.-W., D.O., F.G., and K.U. contributed to the conception of the index and initial project planning. A systematic literature search was developed and conducted by K.U. and P.M. The index was developed by all the authors, led by P.M. and K.U. Piloting was led by P.M. and facilitated by F.G., E.M.-W., D.O., and K.U. K.U., E.M.-W., and P.M. initially evaluated the index using their own reviews. The Delphi panel was run by P.M. and K.U.; all the authors discussed feedback and, led by P.M., refined the index. All the authors contributed to the writing and revision of the article, led by K.U., E.M.-W., and P.M. The index was the focus of a doctoral dissertation by K.U.

    Funding: We acknowledge the support of the Oxford Department of Social Policy and Intervention and the Centre for Evidence-Based Intervention. This research was supported in part by infrastructure and resources provided by the Brown University Alcohol Research Center on HIV/AIDS (NIH/NIAAA P01AA019072) and the Lifespan/Tufts/Brown Center for AIDS Research (NIH/NIAID P30AI042853).

    Ethical approval: This article was based entirely on secondary research. Ethical approval was not required.

    Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf and declared no support from any organization for the submitted work; no financial relationships with any organizations that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work.
