The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review
Janet Yamada1, Allyson Shorkey1, Melanie Barwick2, Kimberley Widger2, Bonnie J Stevens2

1The Hospital for Sick Children, Toronto, Ontario, Canada
2The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr Janet Yamada; janet.yamada{at}sickkids.ca

Abstract

Objectives The aim of this systematic review was to evaluate the effectiveness of toolkits as a knowledge translation (KT) strategy for facilitating the implementation of evidence into clinical care. Toolkits include multiple resources for educating and/or facilitating behaviour change.

Design Systematic review of the literature on toolkits.

Methods A search was conducted on MEDLINE, EMBASE, PsycINFO and CINAHL. Studies were included if they evaluated the effectiveness of a toolkit to support the integration of evidence into clinical care, and if the KT goal(s) of the study were to inform, share knowledge, build awareness, change practice, change behaviour and/or improve clinical outcomes in healthcare settings, inform policy, or to commercialise an innovation. Screening of studies, assessment of methodological quality and data extraction for the included studies were conducted by at least two reviewers.

Results 39 relevant studies were included for full review; 8 were rated as methodologically moderate to strong, with clinical outcomes that could be at least partially attributed to the toolkit. Three of the eight studies evaluated the toolkit as a single KT intervention, while five embedded the toolkit into a multistrategy intervention. Six of the eight toolkits were partially or mostly effective in changing clinical outcomes, and six studies reported on implementation outcomes. The types of resources embedded within toolkits varied, but predominantly included educational materials.

Conclusions Future toolkits should be informed by high-quality evidence and theory, and should be evaluated using rigorous study designs to explain the factors underlying their effectiveness and successful implementation.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • This systematic review on toolkits critically appraises research on strategies to facilitate practice change among health professionals.

  • Results highlight the importance of evaluating implementation outcomes in addition to behavioural and clinical outcomes.

  • This review was limited by a lack of an accepted definition for the term toolkit.

Introduction

Knowledge translation (KT) is a complex process occurring between researchers and knowledge users that includes the “synthesis, dissemination, exchange and ethically sound application of knowledge to improve health…provide more effective health services and products, and strengthen the health care system.”1 The degree of engagement in the KT process may be influenced by factors such as the research results and needs of the knowledge user.1 Clinical practice audits have demonstrated that health professionals do not consistently or effectively use current and high-quality research evidence as a basis for clinical care.2 Despite strategies to facilitate the process of implementing research into practice, such as the development and evaluation of clinical practice guidelines, a major disconnect remains between evidence-based practice and actual clinical practice.3

Evidence-based KT strategies for linking research evidence and clinical practice include, but are not limited to, printed educational materials, educational meetings, educational outreach, the use of local opinion leaders, audit and feedback, and reminders.3 These strategies have been used alone as single KT interventions or as multifaceted KT interventions, which consist of two or more strategies, or variations of the same strategy (eg, educational materials), delivered in combination to change practice.4–6 The benefit of multifaceted versus single KT interventions for changing clinical and practice outcomes remains unclear, with some investigators reporting that they are no more effective.4, 7, 8

A variation on multifaceted KT interventions is the toolkit. Toolkits offer greater flexibility of use and, for the purposes of this review, are defined as a packaged grouping of multiple KT tools and strategies that codify explicit knowledge (eg, templates, pocket card guidelines, algorithms), and are used to educate and/or facilitate behaviour change.9 The KT strategies housed within a toolkit are not necessarily prescribed in any combination or temporal order (eg, Strategy A and/or Strategy B and/or Strategy C). The goal is for users to select KT strategies in the toolkit that are supported by evidence of effectiveness, and to use them at their own discretion, according to their aims, resources and context. Toolkits differ from multifaceted interventions, in which more than one KT strategy must be implemented together to comprise the 'KT intervention' (eg, Strategy A+Strategy B=multifaceted KT strategy).

Evidence-based toolkits can be used to facilitate practice change, and can include strategies for guideline implementation, informing policy and practitioner training, as well as quality audit materials.10, 11 Currently, a wide range of toolkits address various clinical disease entities, such as diabetes and cancer care. For instance, the Registered Nurses Association of Ontario offers a toolkit on Best Practice Guidelines for patient care.12 Despite the uncertainty surrounding the effectiveness of multifaceted KT interventions, organisations are investing resources in the development of KT toolkits because they provide a simple, more flexible and expedient method for promoting and utilising best healthcare practices. Whether these toolkits or their components are effectively implemented and positively associated with clinical outcomes remains unknown.

Toolkits comprise KT strategies that can be effective in supporting a range of KT aims if they are based on a clear rationale and quality evidence of effectiveness, supported by a conceptual framework, and built on a careful assessment of contextual barriers.3 To be effective, toolkits should also provide high-quality evidence to guide their use or implementation. Currently, little is known about the effectiveness, feasibility and acceptability of toolkits. The aim of this systematic review was to identify and evaluate the effectiveness of toolkits for facilitating the implementation of evidence into clinical care, and to inform the future development, implementation and evaluation of toolkits.

Methods

The methods for this review were based on the PRISMA checklist (http://www.prisma-statement.org/2.1.2%20-%20PRISMA%202009%20Checklist.pdf).

Search strategy

A systematic literature search of four electronic databases, MEDLINE (1946–November 2013), EMBASE (1947–November 2013), PsycINFO (1806–November 2013) and CINAHL (1981–November 2013), was conducted by a library information specialist. Search terms included database subject headings and text words for the following concepts: toolkits or toolboxes; evaluation, adherence or outcome assessment; and hospitals and hospitalised patients. The evaluation search terms used in MEDLINE, EMBASE and PsycINFO were based on published optimised search strategies.13–15 CINAHL evaluation terms were based on the optimised MEDLINE strategy. No date, age or language limits were applied (see online supplementary appendix).

Study selection

Study selection was conducted in two stages. First, all titles and abstracts were screened independently by two reviewers (Winnie Lam and Tissari Hewaranasinghage). To establish inter-rater reliability of study selection, each reviewer pilot tested 10 studies using the inclusion criteria. There was 95% agreement on the selected review articles. If necessary, a third reviewer (AS) who was not involved in the selection process resolved any disagreements. In the second stage, the full texts of all selected studies were screened to assess study eligibility and determine the final list of included studies.

Studies were included if: (1) they evaluated the effectiveness of a toolkit to support the integration of evidence into clinical care, either alone or embedded within a larger multistrategy intervention (toolkit+); (2) the KT goal(s) were to inform, share knowledge, build awareness, change practice, change behaviour (in the public) and/or improve clinical outcomes in healthcare settings, inform policy, or to commercialise an innovation; and (3) they included a comparison group. Studies published in languages other than English, thesis dissertations, and studies published in non-peer-reviewed journals or in abstract form only were excluded. All study designs were included. Reference lists from included papers were screened for additional studies.

Methodological quality ratings

The methodological quality of included studies was assessed using the Effective Public Health Practice Project's (EPHPP) Quality Assessment Tool for Quantitative Studies.16 The EPHPP assesses methodological quality in systematic reviews of effectiveness.17 Reliability and content and construct validity of the tool have been established.18

The EPHPP tool can be used to evaluate multiple study designs that include comparison groups. Six categories, each consisting of a series of questions, are used to rate each study: (1) selection bias (two questions); (2) study design (four questions); (3) confounders (two questions); (4) blinding (two questions); (5) data collection methods (two questions); and (6) withdrawals and drop-outs (two questions). Each category is assigned a rating (strong, moderate or weak) and, based on these individual category ratings, a global rating (strong, moderate or weak) is assigned to the study. The integrity of the study intervention and the analyses are also examined; however, these do not contribute to the overall global rating.16
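To make the rating logic concrete, the sketch below encodes the global-rating rule published in the EPHPP tool's dictionary (strong when no component category is rated weak, moderate when exactly one is, weak when two or more are); the function and field names are illustrative, not part of the tool itself.

```python
# Minimal sketch of the EPHPP global-rating rule (illustrative names; the
# rule - strong: no weak components, moderate: one, weak: two or more -
# follows the tool's dictionary).
EPHPP_CATEGORIES = [
    "selection_bias", "study_design", "confounders",
    "blinding", "data_collection", "withdrawals_dropouts",
]

def global_rating(component_ratings: dict) -> str:
    """Derive a study's global EPHPP rating from its six component ratings."""
    weak_count = sum(
        1 for c in EPHPP_CATEGORIES if component_ratings[c] == "weak"
    )
    if weak_count == 0:
        return "strong"
    if weak_count == 1:
        return "moderate"
    return "weak"

# Example: a single weak component rating yields a moderate global rating.
print(global_rating({
    "selection_bias": "moderate", "study_design": "strong",
    "confounders": "strong", "blinding": "weak",
    "data_collection": "strong", "withdrawals_dropouts": "moderate",
}))  # -> moderate
```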

All studies were rated independently by two reviewers (AS and JY) using the EPHPP tool. Prior to rating the studies, the tool was pilot tested on 10 studies. Overall per cent agreement was 88.5% (κ=0.84, 95% CI 0.72 to 0.96). When necessary, consensus meetings were held between reviewers to compare results and reach agreement on all studies. A third reviewer (KW) who was not involved in the quality assessment process resolved any disagreements.
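As a worked illustration of the agreement statistics reported above, the following minimal sketch computes overall per cent agreement and Cohen's κ for two raters; the ratings are invented for illustration and are not the review's actual data.

```python
from collections import Counter

def percent_agreement(r1: list, r2: list) -> float:
    """Proportion of items on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1: list, r2: list) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: summed products of each rater's marginal proportions.
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Invented ratings from two raters over ten pilot studies.
rater1 = ["strong", "weak", "moderate", "weak", "strong",
          "moderate", "weak", "strong", "moderate", "weak"]
rater2 = ["strong", "weak", "moderate", "weak", "strong",
          "moderate", "weak", "moderate", "moderate", "weak"]
print(percent_agreement(rater1, rater2))            # 0.9
print(round(cohens_kappa(rater1, rater2), 2))       # 0.85
```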

Data extraction and analysis

Utilising a standardised data extraction chart, three reviewers (AS, KW and JY) independently extracted the following data from the studies that received a strong or moderate methodological global rating: study type, type of study participants, toolkit content, KT strategy and clinical outcome measures, including implementation outcomes as defined by Proctor et al19 (ie, acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration and sustainability) and study results. Because many studies embedded the toolkit into a multistrategy intervention (ie, toolkit plus an additional KT strategy(ies)) and did not evaluate the toolkit alone, information regarding all of the components of the KT intervention was extracted. As well, the type of evidence, if any, underpinning the toolkits’ contents (KT strategies, tools) was extracted.

To determine toolkit effectiveness, Lugtenberg et al's20 method was adopted to assign the outcomes of each toolkit to one of three categories: (1) not effective (no significant effects were demonstrated); (2) partially effective (half or fewer of the outcome measures showed significant effects); or (3) mostly effective (more than half of the outcome measures showed significant effects). When study outcomes could not be at least partially attributed to the toolkit (eg, the toolkit was used in both the multistrategy intervention and the control group), the study was excluded from detailed reporting.
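A minimal sketch of this categorisation rule, assuming each study is reduced to a list of significant/non-significant flags over its outcome measures (names are illustrative):

```python
def effectiveness_category(outcomes_significant: list) -> str:
    """Classify a toolkit by the share of outcome measures showing
    significant effects, following Lugtenberg et al's three categories."""
    n_sig = sum(outcomes_significant)
    if n_sig == 0:
        return "not effective"
    if n_sig <= len(outcomes_significant) / 2:
        return "partially effective"
    return "mostly effective"

print(effectiveness_category([False, False, False]))        # not effective
print(effectiveness_category([True, False, False, False]))  # partially effective
print(effectiveness_category([True, True, False]))          # mostly effective
```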

If similar data were available across studies (eg, means, SDs, proportions), meta-analyses would be conducted. Weighted mean differences or standardised mean differences, relative risks and risk differences, all with 95% CIs, would be calculated using a fixed-effects model. If pooling of results was not possible, a narrative descriptive review of study results would be presented.
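For context, the standard fixed-effects pooled estimate is an inverse-variance weighted mean of the study effects; the sketch below shows this calculation, with a 95% CI, for a generic effect measure (eg, a mean difference), using invented numbers.

```python
import math

def fixed_effect_pool(effects: list, ses: list) -> tuple:
    """Inverse-variance fixed-effects pooling of study effect estimates.

    Returns the pooled effect and its 95% CI bounds."""
    weights = [1 / se**2 for se in ses]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Invented mean differences and standard errors from three hypothetical studies.
print(fixed_effect_pool([0.40, 0.25, 0.55], [0.10, 0.15, 0.20]))
```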

Results

The search strategy yielded 39 unique studies for inclusion in this review11, 21–58 (figure 1). Given the diversity of studies in terms of participants and outcomes, a meta-analysis was not possible; therefore, we chose to report on all studies with a strong or moderate global rating rather than focusing only on randomised controlled trials (RCTs) of potentially weak quality.

Figure 1

Study selection flow chart.

The majority of the studies were RCTs (n=11)26, 30, 33, 36, 44, 45, 49, 51, 53, 57 or one-group cohort studies (n=13).21, 24, 25, 27, 32, 34, 37, 39, 46, 50, 52, 55, 58 Eighteen of the included studies had toolkits embedded within a larger multistrategy KT intervention,22, 23, 26–28, 31, 32, 38, 42–45, 49, 53–55, 57, 58 and 21 studies evaluated toolkits as standalone KT interventions.11, 21, 24, 25, 29, 30, 33–37, 39–41, 46–48, 50–52, 56

Among all of the toolkits, 20 were developed for a specific disease context,21, 22, 26–28, 31–34, 40–45, 47, 51, 54, 55, 57 most commonly for cancer (n=8)27, 28, 31, 40, 41, 43, 54, 57 and diabetes (n=3).22, 26, 51 The remaining toolkits were developed for disease prevention (n=5),23, 38, 46, 52, 58 infection prevention (n=2),11, 53 postoperative pain (n=1),48 smoking cessation (n=1),49 care in the geriatric population (n=8),24, 25, 29, 30, 35, 36, 39, 56 patient safety (n=1)50 and general hospital quality improvement (n=1).37

Toolkits were targeted to health professionals (n=29),11, 21, 23–27, 29–32, 35, 37–39, 42, 44–47, 49–53, 55–58 patients (n=10)21, 22, 28, 33, 34, 40, 41, 43, 48, 54 and caregivers (n=1).36 In one study, the intervention included separate toolkits for primary care physicians and patients.18

Only 26 of the included studies11, 21, 24–32, 34, 35, 37, 39–41, 43, 44, 46–51, 54 specifically indicated the clinical evidence, rationale or theoretical basis underlying the toolkit strategies.

Methodological quality of the studies

The majority of studies (n=26)21–25, 27–29, 31, 32, 34, 35, 37–41, 45–48, 50, 52, 55, 56, 58 were rated as methodologically weak on the EPHPP tool (ie, in terms of study design, selection bias, confounders, blinding, data collection methods, and withdrawals and drop-outs), with 8 studies11, 26, 30, 42, 44, 49, 51, 53 rated as moderate and 5 studies33, 36, 43, 54, 57 as strong. The 13 moderately and strongly rated studies still had some general weaknesses. In 7 of the 13 studies,11, 26, 30, 33, 42, 44, 57 blinding of outcome assessors and/or blinding of study participants to the research question was not explicitly stated. In the selection bias category, only 4 of the 13 studies26, 36, 44, 57 reported the proportion of eligible participants who agreed to participate in the study. As well, in 6 of the 13 studies,11, 26, 30, 36, 42, 44 raters agreed that the study participants were only somewhat likely, as opposed to very likely, to represent the study population, introducing the potential for selection bias.

Evaluation of the effectiveness of the toolkits

In 5 of the 13 moderately to strongly rated studies,11, 43, 49, 53, 54 it was not possible to determine whether clinical outcomes were attributable to the toolkit because all study participants received the toolkit in some variation. These five studies explored the effectiveness of the toolkit either alone or paired with minimal additional interventions (multistrategy). A summary of the remaining eight studies is provided in table 1.26, 30, 33, 36, 42, 44, 51, 57

Table 1

Effectiveness of toolkits in strongly and moderately rated studies (N=8)

Among the remaining eight studies, three33, 36, 51 evaluated the toolkit as a single KT intervention against a no-KT-intervention group, while five26, 30, 42, 44, 57 evaluated a multistrategy KT intervention against a no-KT-intervention group. Only four of the five multistrategy intervention studies26, 30, 42, 44 demonstrated partially to mostly effective results. Of the three single KT intervention studies, two33, 36 were mostly effective at changing clinical outcomes. Additionally, no studies evaluated the relative effectiveness of each KT strategy (eg, use of audit and feedback); therefore, it was not possible to determine which components contributed to the change in outcomes.

The majority of the studies26, 30, 33, 36, 44, 51 aimed to evaluate the toolkit's effectiveness for a variety of KT goals. One study focused on changing patient clinical outcomes (eg, myocardial infarction, number of falls); two studies also evaluated change in patient behaviour;26, 33 and one evaluated behavioural change in family caregivers.36 Two studies44, 51 focused on toolkit effectiveness for changing clinician behaviour in addition to improving patient clinical outcomes, and two studies42, 57 focused solely on improving clinician behaviour.

Implementation outcomes were mentioned in six studies.26, 30, 33, 36, 42, 44 Dykes et al30 included a process for assessing fidelity of the KT intervention; Goeppinger et al33 examined the adoption, appropriateness and sustainability of the toolkit; Horvath et al36 provided information about the fidelity of the KT intervention and the cost of the toolkit, but did not conduct a cost-benefit analysis; and Cavanaugh et al,26 Majumdar et al42 and Menchetti et al44 examined the sustainability of improved clinical outcomes over time.

Toolkit content varied across studies. Two studies included self-management toolkits for patients and caregivers, with a focus on arthritis33 and Alzheimer's disease.36 Six studies evaluated toolkits for health professionals on fall prevention,30 gastro-oesophageal reflux,42 depression,44 diabetes26, 51 and cancer.57 Toolkit resources included information/handout sheets, posters, pocket guides and educational modules. Wright et al57 included reminder packages for participants, comprising a cover letter from an expert opinion leader, a peer-reviewed article and additional reminder pocket cards. In five studies,26, 30, 36, 44, 51 the authors reported that they relied on clinical experts, reviews of the literature or clinical practice guidelines to inform the toolkit components. Dykes et al30 also incorporated an assessment of the barriers and facilitators to optimal practice in falls prevention and designed the toolkit to address the identified barriers.

Discussion

Toolkits, either alone or as part of a multistrategy intervention, hold promise as an effective approach for facilitating evidence use in practice and improving outcomes across a variety of disease states and healthcare settings. There was significant variation in the combination and type of KT strategies contained within the toolkits, a range of diseases for which they were developed, and a variety of intended knowledge users (eg, health professionals or patients/caregivers), all of which contributed to key knowledge gaps.

Most toolkits contained printed educational materials, such as information sheets or guideline summaries, which were intended to fill knowledge gaps. Although such materials are feasible and relatively inexpensive, Giguère et al59 reported that printed educational materials tend to have little to no influence on health professional behaviour, and uncertain effects on patient behaviour. Additional efforts are required to ensure that knowledge users actively engage with toolkit materials, moving away from passive diffusion. Wright et al57 utilised reminders within the toolkit. Computer reminders have demonstrated small to moderate benefits; however, further research is needed on other types of reminders, perhaps utilising social media strategies.60 There is currently no definitive evidence for the ideal combination or number of KT strategies and tools that should be used in toolkits. A planful approach (ie, one that identifies the KT goal being addressed by each toolkit strategy), including evidence-based KT strategies, tailored implementation support, active engagement, and evaluation of KT impacts that includes implementation outcomes, should be considered for achieving intended KT goals with the targeted audience.

Better understanding of toolkit effectiveness requires more thorough descriptions of the embedded KT strategies/components and of how each individual component contributes to study outcomes. Descriptions of the toolkits and their contents in the eight moderately and strongly rated studies were brief. Dobbins et al61 suggested that the use of multiple KT interventions may weaken the key message of the clinical content when compared with single KT intervention strategies, and the same may be true for toolkits. To minimise this potential weakness, each component within the toolkit should have a purpose and rationale3 that is clearly described for toolkit users.

Toolkit components should be based on high-quality evidence, particularly when the goal is to change practice;3 should have a clear rationale for their inclusion, given the toolkit's aims; and should come with guidance on the implementation process,62 that is, how they are to be used. Although the eight studies in this review mentioned some form of evidence underlying each component, descriptions were vague, and few mentioned high-quality evidence, such as systematic reviews. Often, evidence was provided for only one component of the toolkit. Cavanaugh et al26 used communication theory to design the 'Diabetes Literacy and Numeracy Education Toolkit', but did not specify any underlying evidence for its content. Shah et al,51 however, provided evidence for using educational materials as a resource within their educational toolkit, which focused on cardiovascular disease screening and risk reduction in patients with diabetes. Nevertheless, their content was not based on a barriers assessment, quality improvement or educational theory.51

Multiple barriers have been identified that account for the knowledge-to-practice gap, and many are intrinsic to health professionals and their practice environment or context. For example, organisational constraints, such as lack of time or an inability to access resources, are common barriers to KT.2 LaRocca et al63 suggested that the more successful KT intervention strategies were those that were accessible and could be tailored to the needs and preferences of the users. Components of the fall prevention toolkit by Dykes et al30 included patient/family education handouts that were tailored by the nurse based on knowledge of the patient, thereby capitalising on high tension for change; the adaptability, strength and quality of the intervention; and low complexity.64 The effects of tailoring strategies to address identified barriers to change require more clarity, but may improve care and patient outcomes,65 particularly when KT approaches can capitalise on what we know works in implementation.64 Only one of the eight reviewed studies30 assessed barriers and facilitators to inform the toolkit's components.66 Furthermore, determining the influence of modifiable components of context (eg, leadership support, culture, evaluation) would allow for further customisation of KT strategies to facilitate practice change and clinical outcomes.67 Further research is needed on how toolkits are developed and on the influence of the practice context, as these factors may influence study outcomes.

Consideration should also be given to factors implicated in successful implementation.64 Proctor's taxonomy for implementation outcomes19 was extracted from studies where possible, as these outcomes could be used to indicate successful implementation of the toolkit within the healthcare system. Developing toolkits supported by implementation guidance would go a long way in demonstrating how toolkits contribute to good clinical and implementation outcomes. Descriptions of most toolkits lacked details about the implementation process and outcomes. Evidence Based Practice for Improving Quality (EPIQ)68 is an example of a KT intervention that combines evidence, continuous quality improvement, an implementation process and assessment of implementation outcomes. In phase 1 (Preparation), the hospital unit identifies an implementation team, who are trained to review existing unit pain practices, guidelines and research evidence to inform targeted practice changes. In phase 2 (Implementation), the team identifies specific pain practice aims and KT strategies (eg, educational outreach, reminders and audit and feedback) to implement using quality improvement cycles. EPIQ was effective in improving pain process outcomes (ie, pain assessment and management) and reducing the odds of having severe pain by 51%.5

Only two studies reported on fidelity of toolkit implementation. To be clinically effective, healthcare interventions need to be effectively implemented. Yet implementation outcomes are often overlooked in research and KT practice, creating high potential for type III errors: a lack of clarity about whether the intervention or its implementation has been unsuccessful. This type of error can reduce the power to detect significant effects of an intervention.69 Assessing the fidelity of implementing complex interventions addresses type III error and provides evidence of variability in the implementation of interventions, which could also contribute to limited effectiveness.70

All eight studies in this review used RCT designs to evaluate toolkit effectiveness. RCT studies of KT effectiveness share a common methodological challenge: the design can obscure important contextual factors for which there is burgeoning evidence of importance in successful implementation.64 Caution is required in interpreting which KT strategies are evidence-based, and new studies need to utilise more appropriate mixed methodologies or other types of randomised designs, such as waitlist or stepped-wedge designs, to address what works in the implementation of practice changes.71

Several limitations to this systematic review warrant discussion. The term 'toolkit' was used in the studies included in this systematic review; however, there is currently no accepted definition for toolkits in existing taxonomies related to quality improvement and behavioural change strategies (eg, the Cochrane Effective Practice and Organisation of Care Group). Although we chose a term that had some consistency in the literature, the evidence reported in this review shows no consensus on the key content, implementation strategies to promote behavioural change, or theoretical approaches that should be included in implementation toolkits. These findings could explain the heterogeneity of the toolkits included in this review. Capturing all relevant literature was therefore challenging because of the lack of standard terminology for toolkits, and relevant studies might have been missed by the search. The majority of studies had significant methodological shortcomings and were rated as weak, mostly owing to their study designs. Finally, we focused on studies that evaluated the effectiveness of a toolkit to support the facilitation of evidence into clinical care; therefore, the studies included in this review reported quantitative results.

The literature search was limited to toolkits used in hospital and other clinical settings. Broadening the search to community or public health settings may have yielded additional studies for the review.9

In summary, toolkits have potential as a promising KT strategy for facilitating practice change in healthcare. To fully understand their effectiveness, a systematic approach to planning and reporting their development, the evidence underlying each component, and any direction regarding appropriate implementation is required. Toolkits should have: (1) a clearly described purpose and rationale for each component; (2) components that are rigorously developed and informed by high-quality evidence, such as systematic reviews; (3) delivery methods that are guided by a comprehensive implementation process (eg, self-directed, facilitation, reminders), with consideration for fidelity of implementation where appropriate; and (4) a rigorous evaluation plan and study design that can help explain the factors underlying their effectiveness and successful implementation (ie, combining outcome and process measures, including context).9

Only a few of the toolkits in this review met all of these criteria.33, 51 Ideally, future studies of toolkit effectiveness should also be informed by a theoretical approach. In conclusion, this study provides some evidence for the utility of toolkits.

Acknowledgments

The authors would like to thank Thomasin Adams-Webber, librarian, for her assistance with the systematic literature search. The authors would also like to thank Ms Kamila Rentel for participating in an early review of the articles, and Ms Winnie Lam and Ms Tissari Hewaranasinghage for their assistance with screening articles for relevance.


Footnotes

  • Contributors JY led the writing of the manuscript, organised all aspects of the systematic review, participated in the screening of abstracts, rating of methodological quality, data extraction and analysis. She also drafted the initial manuscript, made revisions and approved the final manuscript as submitted. AS participated in the rating of methodological quality, data extraction, and analysis of articles included in the review. She also participated in drafting the initial manuscript and revisions. MB provided guidance and expertise in the overall conceptualisation of the review, revised and critically reviewed the manuscript, and approved the final manuscript as submitted. KW participated in reviewing the methodological quality, data extraction and analysis of all articles included in the report. She also participated in drafting the initial manuscript and revisions. BS provided guidance and expertise in the overall conceptualisation of the review, critically reviewed the manuscript and approved the final manuscript as submitted.

  • Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.