Protocol
Ten years of implementation outcome research: a scoping review protocol
  1. Rebecca Lengnick-Hall1,
  2. Enola K Proctor1,
  3. Alicia C Bunger2,
  4. Donald R Gerke3

Affiliations:

  1. The Brown School, Washington University in St Louis, St Louis, Missouri, USA
  2. College of Social Work, The Ohio State University, Columbus, Ohio, USA
  3. Graduate School of Social Work, University of Denver, Denver, Colorado, USA

Correspondence to Dr Rebecca Lengnick-Hall; rlengnick-hall@wustl.edu

Abstract

Introduction A 2011 paper proposed a working taxonomy of implementation outcomes, their conceptual distinctions and a two-pronged research agenda on their role in implementation success. Since then, over 1100 papers citing that paper have been published. Our goal is to compare the field’s progress to the originally proposed research agenda and outline recommendations for the next 10 years. To accomplish this, we are conducting this scoping review.

Methods and analysis Our approach is informed by Arksey and O’Malley’s methodological framework for conducting scoping reviews. We will adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews. We first aim to assess the degree to which each implementation outcome has been investigated in the literature, including healthcare settings, clinical populations and innovations represented. We next aim to describe the relationship between implementation strategies and outcomes. Our last aim is to identify studies that empirically assess relationships among implementation and/or service and client outcomes. We will use a forward citation tracing approach to identify all literature that cited the 2011 paper in the Web of Science (WOS) and will supplement this with citation alerts sent to the second author for a 6-month period coinciding with the WOS citation search. Our review will focus on empirical studies that are designed to assess at least one of the identified implementation outcomes in the 2011 taxonomy and are published in peer-reviewed journals. We will generate descriptive statistics from extracted data and organise results by these research aims.

Ethics and dissemination No human research participants will be involved in this review. We plan to share findings through a variety of means including peer-reviewed journal publications, national conference presentations, invited workshops and webinars, email listservs affiliated with our institutions and professional associations, and academic social media.

  • health services administration & management
  • quality in health care
  • organisation of health services

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • Following a strong scoping review process and adhering to established reporting guidelines will generate reliable and transparent findings about the state of knowledge on implementation outcomes.

  • This review will consider articles from a broad range of settings, interventions and study designs that will lead to insights for a wide array of healthcare research audiences.

  • This review will not report on effectiveness or the methodological quality of the included studies.

  • Conceptual ambiguity in implementation outcome terminology may lead to the exclusion of studies that do not use the 2011 taxonomy.

Introduction

Seventeen years is the frequently cited—and still alarming—amount of time that it can take for research evidence to reach healthcare clinicians and clinical care.1 2 To help address this lag, implementation science links what is discovered in highly controlled research environments to what actually happens in real practice settings. Implementation researchers seek to determine whether a treatment was not successful, or whether it simply did not have a chance to be successful because its implementation failed.3 This science requires direct measurement of implementation success, which is distinct from the effectiveness of the intervention being implemented.3 As in most evolving fields, implementation science suffered early on from a lack of clear conceptualisation and operationalisation of outcomes for evaluating implementation success.4

To advance the precision and rigour of implementation science, a 2011 paper proposed a working taxonomy of eight distinct implementation outcomes, conceptual definitions and a research agenda focused on implementation processes.5 The outcomes that comprise the taxonomy are:5

  • Acceptability: the perception among implementation stakeholders that a given treatment, service, practice or innovation is agreeable, palatable or satisfactory.
  • Adoption: the intent, initial decision, or action to try or employ an innovation or evidence-based practice; also referred to as uptake.
  • Appropriateness: the perceived fit, relevance or compatibility of an innovation or evidence-based practice for a given practice setting, provider or consumer, and/or the perceived fit of the innovation to address a particular issue or problem.
  • Feasibility: the extent to which a new treatment or an innovation can be successfully used or carried out within a given setting.
  • Fidelity: the degree to which an intervention was implemented as prescribed in the original protocol or as intended by programme developers.
  • Implementation cost: the cost impact of an implementation effort.
  • Penetration: the integration or saturation of an intervention within a service setting and its subsystem, calculated as the number of recipients to whom the intervention is delivered divided by the number of eligible or potential recipients.
  • Sustainability: the extent to which a newly implemented treatment is maintained or institutionalised within a service setting’s ongoing, stable operations.

The 2011 paper cautioned that these eight outcomes were ‘only the more obvious’ ones and projected that other concepts might emerge in response to the research agenda that this original paper proposed.5
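As a small illustration, the penetration ratio defined in the taxonomy is a direct calculation. This is a minimal sketch; the function name and the counts are hypothetical, not drawn from the 2011 paper:

```python
def penetration(delivered_count, eligible_count):
    """Penetration per the 2011 taxonomy: recipients who received the
    intervention divided by eligible or potential recipients."""
    if eligible_count <= 0:
        raise ValueError("eligible_count must be positive")
    return delivered_count / eligible_count

# e.g. 45 clients received the new intervention out of 180 eligible clients
print(penetration(45, 180))  # 0.25
```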

The original research agenda called for work on two fronts. First, Proctor and colleagues challenged the field to advance the conceptualisation and measurement of implementation outcomes by employing consistent terminology when describing implementation outcomes, by reporting the referent for all implementation outcomes measured and by specifying level and methods of measurement.5 Second, the team called for theory building research employing implementation outcomes as key constructs. Specifically, researchers were challenged to explore the salience of implementation outcomes to different stakeholders and to investigate the importance of various implementation outcomes by phase in implementation processes, thereby identifying indicators of successful implementation.5 Proctor and colleagues called for research to test and model various roles of implementation outcomes, including understanding how different implementation outcomes are associated with one another as dependent variables in relation to implementation strategies and as independent variables in relation to clinical and service system outcomes.5

The 2011 paper spurred several significant developments in implementation science. Soon after publication, the outcome taxonomy was reflected in research funding announcements, which shaped implementation study conceptualisation and design. For example, the US National Institutes of Health’s PAR-19-277 for Dissemination and Implementation Science in Health identifies these implementation outcomes as important for inclusion in investigator-initiated research applications.6 Eighteen distinct institutes and centres signed on to this cross-cutting programme announcement, and these outcomes have since been applied in a diversity of settings and fields.

The 2011 outcome taxonomy also sparked advances in measurement development and instrumentation, including a repository of quantitative instruments of implementation outcomes relevant to mental or behavioural health settings.7 8 These advances allowed implementation researchers to progress from asking descriptive questions to causal ones.9 Researchers are now systematically testing the effectiveness of implementation strategies and the mechanisms that explain how these strategies influence implementation outcomes.10–12 Taken together, we expect current implementation outcome research to reflect wide expertise, broad theoretical lenses and examination in varied settings.

Ten years since publication of the 2011 paper, our goal is to assess the field’s progress in response to the originally proposed research agenda and to outline recommendations for the next 10 years. To accomplish this, we first need to take stock of existing implementation outcome research through this proposed scoping review, which will address three aims. Our first aim concerns coverage: we will assess the extent to which each implementation outcome has been examined in the literature, including the settings, clinical populations and innovations represented. We expect to see a range of medical specialties, behavioural health and social service settings. Addressing this aim will help us identify gaps in the existing literature.

We will next focus on the relationship between implementation strategies and outcomes, including the degree to which implementation outcomes and strategies have been concurrently studied. As such, our second aim is to describe if and how implementation strategies have been examined for their effectiveness in attaining implementation outcomes. As we review articles, we will note the salience and malleability of outcomes in response to implementation strategies in different contexts. Addressing this aim will help us advance theory and the conceptualisation of implementation strategies and their impact, including the identification of relevant mechanisms.

Finally, we will turn our attention to the role that implementation outcomes may play in predicting the cascade of service delivery and client outcomes. Our third aim is to identify studies that empirically examine relationships among implementation and/or service and client outcomes and to document what those relationships are. Addressing this aim is integral to articulating and demonstrating the tangible public health impact of successful implementation.

Methods and analysis

Our approach is informed by the first five steps of Arksey and O’Malley’s methodological framework for conducting scoping reviews.13 We will adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).14 Additionally, we will mirror the iterative and reflexive approach modelled by Marchand et al15 and Kim et al16 during each step of the review process. Our protocol is registered through the Open Science Framework (https://osf.io/rmq7x/?view_only=2e9ea65c209844a589966f2be051a2b2).

Stage 1: defining the research question

The research agenda presented in the 2011 paper is the basis for developing our research questions. The original research agenda consisted of two broad categories.5 One category was the conceptualisation and measurement of implementation outcomes.5 The other category was theory building around the implementation process.5 To assess the degree to which the field has responded to this agenda over the last 10 years, our scoping review will address the following research questions:

  1. To what extent has each of the implementation outcomes been researched, and with what degree of rigour have they been investigated? We are interested in describing the diversity, range and frequency of contexts (settings, populations and innovations), research designs and methods used to study each outcome.

  2. How have implementation strategies been examined for their effectiveness in attaining implementation outcomes?

  3. What are the empirical relationships between implementation, service and client outcomes?

Answering the first research question will help us assess how the field has advanced in terms of research on the conceptualisation and measurement of implementation outcomes. Answering the second and third research questions will help us assess and describe the field’s progress around implementation outcome theory building, including modelling attainment of and inter-relationships among implementation outcomes.

Our research questions also touch on the three functions of implementation outcomes outlined in the 2011 paper: they serve as indicators of implementation success; they are proximal indicators of implementation processes; and they are key intermediate outcomes. The 2011 paper used ‘implementation success’ as a term that reflects attainment of any or all of the implementation outcomes studied. Our first research question will enable us to capture the various ways implementation success has been operationalised and measured across a diverse range of studies, while the second research question will allow us to explore the idea of implementation success in the context of strategy research. Implementation outcomes are all proximal, in that they are intermediate outcomes relative to clinical outcomes or service system outcomes. The third research question will allow us to examine the extent to which this function of implementation outcomes has been used in the existing literature. Finally, although the 2011 paper did not identify any particular implementation outcomes as ‘key’, the importance of an implementation outcome could be empirically explored by testing its relationship to clinical or service system outcomes. In answering the third research question, we will be able to identify which implementation outcomes have been studied as independent variables in relation to attainment of service system or clinical outcomes and document whether the relationship between an implementation outcome and a service system or clinical outcome is statistically significant.

Stage 2: identifying relevant literature

We will use a forward citation tracing approach to identify all literature that cited the 2011 paper. We will conduct our search in the Web of Science (WOS) database, which was developed for citation analysis17 and indexes journals broadly across the health and social science disciplines that publish implementation research. Because there could be delays in archiving more recent works in WOS, we will also draw on citation alerts sent to the second author (EKP) from the publisher for a 6-month period coinciding with the WOS citation search. Citations will be managed using Mendeley and exported to Covidence, a web-based program designed to manage references for systematic and scoping reviews, for deduplication.18
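In the actual protocol, Covidence performs the deduplication step. As a conceptual sketch, duplicate citation records exported from multiple sources can be collapsed by a normalised key; the record fields, DOIs and titles below are hypothetical:

```python
def dedupe(records):
    """Drop duplicate citation records, keying on DOI when present
    and falling back to a normalised title."""
    seen = set()
    unique = []
    for rec in records:
        key = (rec.get("doi") or rec.get("title", "")).strip().lower()
        if key and key in seen:
            continue  # already captured via WOS or a publisher alert
        seen.add(key)
        unique.append(rec)
    return unique

records = [
    {"doi": "10.0000/example.1", "title": "Outcomes study"},
    {"doi": "10.0000/EXAMPLE.1", "title": "Outcomes Study (duplicate export)"},
    {"title": "A different article"},
]
print(len(dedupe(records)))  # 2
```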

Stage 3: article selection

Articles will be screened for inclusion in a two-phase process. First, two independent screeners will review each article title and abstract and apply inclusion and exclusion criteria. Articles will be included if they (A) report results of an empirical study, (B) are published in a peer-reviewed journal, and (C) are designed to assess at least one of the identified implementation outcomes (or their synonyms) as specified in the original implementation outcome taxonomy. These inclusion criteria are intended to include articles reporting on instrument or measurement development studies, and a diverse range of methodologies (eg, quantitative, qualitative or mixed).

Articles will be excluded if they (A) do not report on results of an empirical study (eg, editorials, commentaries, study protocols, summaries, narrative reviews, ‘lessons learned’), (B) are not published in a peer-reviewed journal (eg, books, book chapters, reports, monographs, magazines, websites/blogs, newsletters), (C) were not designed to assess an implementation outcome directly (eg, discuss the relevance of findings to implementation outcomes, or note the importance of assessing implementation outcomes in future studies without measuring the outcome), or (D) report on the results of a systematic review. However, if we locate a systematic review focused on measurement or evidence of implementation outcomes, we will retrieve and consider the studies included in that review. Discrepancies in screening decisions will be reviewed by two team members who will reach consensus on a decision.

Next, team members will independently review the full text of all articles included during the title and abstract screening step to further verify that they meet inclusion criteria. Articles included after full-text screening will be tracked using Covidence and exported to an Excel spreadsheet. The screening team will include trained implementation scientists (independent investigators in the field) and graduate trainees who have completed implementation coursework and/or work on implementation research studies. To ensure consistency across reviewers, all screening team members will review the original implementation outcome taxonomy, scoping review objectives and inclusion/exclusion criteria prior to screening. Team members will also practise applying the inclusion/exclusion criteria to a subset of articles before engaging in the final screening process.

Stage 4: data charting

Data will be charted using a customised Google Form. Completed articles will be assigned to and tracked by team members in an Excel sheet. The initial data charting form was developed and refined by the protocol authors. The proposed variables and definitions are described in table 1.

Table 1

Variables and definitions for data charting

We will then pilot test the data charting form with the remaining members of the charting team and make refinements to the Google Form accordingly. The data charting team will include many of the same expert members of the screening team, all of whom have prior training and experience in implementation research. We have put several steps in place to ensure rigour and consistency across the data charting team members:

  1. Each team member will be trained. Training involves an introduction to the data charting form, including variables and definitions (and how they connect to the review objectives); procedures for accessing, reading and charting data for each full-text article; and practice application of the data charting form on three articles.

  2. Team members will be able to request consultation from the protocol authors for any extraction decisions that require a second opinion, and this option is directly built into the Google Form. Consultation takes place in a one-on-one or small group format over video chat or email with at least one protocol author; if necessary, an additional protocol author will weigh in on any areas of lingering ambiguity or confusion.

  3. Each consultation decision will be documented and saved in a shared folder to foster consistency and transparency in how data charting concerns are resolved.

  4. The protocol authors will meet weekly to discuss new questions, debrief about consultation issues and make decisions about potential refinements to the Google Form.

  5. Periodic emails will be sent to the entire team to communicate consultation issues that are generalisable to the group and alert everyone to new updates to the Google Form.

Stage 5: collating, summarising and reporting the results

We will generate descriptive statistics (eg, frequencies, cross-tabs, averages) from extracted data and organise results by the research questions outlined in the first stage. To achieve the overarching goal of this review, we will use the descriptive data to describe the field’s progress as it relates to specific aspects of the 2011 research agenda. We will also use these data to contextualise and inform recommendations for the next 10 years of implementation outcome research. We will follow the PRISMA-ScR14 guidelines when reporting our findings. Our anticipated 12-month timeline for completing this scoping review is presented in table 2.

Table 2

Anticipated timeline
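The Stage 5 tabulation (frequencies and cross-tabs over charted data) can be sketched with the standard library alone. The charted field names and values below are hypothetical placeholders, not actual review data:

```python
from collections import Counter

# Hypothetical charted records, one per included article.
charted = [
    {"outcome": "acceptability", "setting": "primary care"},
    {"outcome": "fidelity", "setting": "primary care"},
    {"outcome": "acceptability", "setting": "behavioural health"},
]

# Frequencies: how often each implementation outcome was studied.
freq = Counter(row["outcome"] for row in charted)
print(freq["acceptability"])  # 2

# Cross-tab: implementation outcome by setting.
crosstab = Counter((row["outcome"], row["setting"]) for row in charted)
print(crosstab[("acceptability", "primary care")])  # 1
```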

Patient and public involvement

Since the state of implementation outcome research is of direct relevance to implementation scientists, patient and public involvement was not necessary for the design of our scoping review.

Probable limitations and strengths of review findings

We will follow the established, multistage process for conducting a scoping review outlined by Arksey and O’Malley13 and report our results consistent with the PRISMA-ScR checklist; this enhances the rigour and transparency of our review design and the trustworthiness of our future results. We also anticipate that our work will lend continuity and coherence to a global research agenda focused on implementation outcomes by responding directly to the research agenda proposed 10 years ago, when the initial taxonomy of implementation outcomes was articulated. Moreover, the review will consider a broad range of healthcare settings, interventions and study designs.

In addition to these strengths, probable limitations must also be considered. Consistent with the limits of a scoping review, we will not synthesise the effectiveness of implementation strategies described in the studies we review, although this is a potential avenue for future systematic reviews and meta-analyses. Nor will this review report on the methodological quality of the included studies. Additionally, given the conceptual ambiguity surrounding implementation outcome terminology (eg, the multiple ways in which scholars define and discuss ‘acceptability’), some studies that measure implementation outcomes but do not use the 2011 taxonomy may be excluded.

Ethics and dissemination

No human research participants will be involved in this review. Therefore, obtaining informed consent, protecting anonymity and receiving institutional review board approval are not relevant. We plan to share findings through a variety of means including peer-reviewed journal publications, national conference presentations, invited workshops and webinars, email listservs affiliated with our institutions and professional associations, and academic social media.

Ethics statements

References

Footnotes

  • Contributors All authors conceived the study protocol steps. RL-H developed the structure of the manuscript and led manuscript development. EKP drafted the rationale for examining progress on implementation outcome research. All authors (RL-H, EKP, ACB, DRG) reviewed several iterations of the manuscript and approved the final version.

  • Funding This work was supported by grants from the National Institute of Mental Health (T32MH019960; R25 MH080916–08; P50MH113660), the National Cancer Institute (P50CA19006), the National Center for Advancing Translational Sciences of the National Institutes of Health (UL1TR002345), the National Institute on Drug Abuse (R34DA046913) and the Robert Wood Johnson Foundation’s Systems for Action (ID 76434).

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.