Original research
Measuring research capacity development in healthcare workers: a systematic review
Davide Bilardi1,2, Elizabeth Rapa3, Sarah Bernays4,5, Trudie Lang1

  1. Nuffield Department of Medicine, University of Oxford Centre for Tropical Medicine and Global Health, Oxford, UK
  2. Fondazione Penta Onlus, Padova, Italy
  3. Department of Psychiatry, University of Oxford, Oxford, UK
  4. School of Public Health, University of Sydney–Sydney Medical School Nepean, Sydney, New South Wales, Australia
  5. Public Health and Policy, London School of Hygiene & Tropical Medicine, London, UK

Correspondence to Dr Davide Bilardi; davide.bilardi@gtc.ox.ac.uk

Abstract

Objectives A key barrier in supporting health research capacity development (HRCD) is the lack of empirical measurement of competencies to assess skills and identify gaps in research activities. An effective tool to measure HRCD in healthcare workers would help inform teams to undertake more locally led research. The objective of this systematic review is to identify tools measuring healthcare workers’ individual capacities to conduct research.

Design Systematic review and narrative synthesis using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist for reporting systematic reviews and narrative syntheses, and the Critical Appraisal Skills Programme (CASP) checklist for qualitative studies.

Data sources 11 databases were searched from inception to 16 January 2020. The first 10 pages of Google Scholar results were also screened.

Eligibility criteria We included papers describing the use of tools to measure or assess HRCD at an individual level among healthcare workers involved in research. Qualitative, mixed and quantitative methods studies were all eligible. The search was limited to English language publications.

Data extraction and synthesis Two authors independently screened and reviewed studies using Covidence software, and performed quality assessments using the extraction log validated against the CASP qualitative checklist. Content analysis was used to develop a narrative synthesis.

Results The titles and abstracts of 7474 unique records were screened and the full texts of 178 references were reviewed. 16 papers were selected: 7 quantitative studies; 1 qualitative study; 5 mixed methods studies; and 3 studies describing the creation of a tool. The tools described varied in how accurately they measured HRCD in healthcare workers at the individual level. The Research Capacity and Culture tool and the ‘Research Spider’ tool were the most frequently reported. Other tools designed for ad hoc interventions with good generalisability potential were identified. Three papers described health research core competency frameworks. All tools measured HRCD in healthcare workers at an individual level, with the majority adding a measurement at the team/organisational level or data on perceived barriers and motivators for conducting health research.

Conclusions Capacity building is commonly identified with pre/postintervention evaluations without using a specific tool. This shows the need for a clear distinction between measuring the outcomes of training activities in a team/organisation, and effective actions promoting HRCD. This review highlights the lack of globally applicable comprehensive tools to provide comparable, standardised and consistent measurements of research competencies.

PROSPERO registration number CRD42019122310.

  • organisational development
  • organisation of health services
  • medical education & training
  • public health


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • Thoroughly conducted systematic review collecting data from all major existing databases and grey literature.

  • The topic has not been addressed by previous reviews searching for tools to measure health research capacity building at the individual level.

  • The review provides a brief overview of the identified tools to measure health research capacity building at the individual level, highlighting their strengths and weaknesses.

  • Identification of relevant studies was complicated by the lack of a common definition and terminology for health research capacity development.

  • None of the included studies used standard reporting procedures for qualitative or quantitative research.

Introduction

In 2004, the Global Forum for Health Research highlighted the challenge for low and middle-income countries of building the capacity to perform effective and locally led health research that addresses the major health problems affecting their own populations.1–3 Twenty years later, low and middle-income countries still carry 90% of the global disease burden, but only 10% of global funding for health research is devoted to addressing these persistent health challenges.4 Health research capacity development (HRCD) for healthcare workers has been recognised as a critical element in overcoming global health challenges, especially in low and middle-income countries.5 For too long, HRCD in low and middle-income countries has been documented through training programmes which enable local teams to participate in externally sponsored trials, creating a false appearance of growth and generating dependence on foreign support.6 7

The process of progressive empowerment is usually referred to as capacity development.8 This term has been used in multiple areas and applied in different sectors to develop new or existing competencies, skills and strategies at a macro or individual level.9 In the field of health, research capacity development should support healthcare workers in generating local evidence-based results to inform policy and improve population health. The three health-related Millennium Development Goals, and more recently the targets ‘B’ and ‘C’ of the Sustainable Development Goals, all support the adoption of new strategies to strengthen the capacity of healthcare workers in all countries in performing their jobs and engaging in research.10–12 One of the critical barriers in supporting HRCD is the lack of empirical measurement of competencies in relation to the performance of research activities. Existing frameworks and tools have been developed for a particular purpose in a particular context.13 14 Others have identified barriers that healthcare workers encounter in engaging in research or have monitored and evaluated targeted training activities.15 This systematic review aims to identify tools to measure individual healthcare workers’ capacities to conduct research.

Methods

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist16 for reporting systematic reviews and narrative syntheses, and the Critical Appraisal Skills Programme (CASP) checklist17 for the critical appraisal of qualitative studies, were used to design this systematic review and to refine the extraction log according to recognised guidelines.

Inclusion and exclusion criteria

The aim of the systematic review was to identify existing tools which measure healthcare workers’ individual capacities in conducting research. The inclusion and exclusion criteria were defined in advance and documented using an adapted version of a SPIDER table (table 1). The primary population of interest was all health-related professionals or healthcare workers involved in research activities. Healthcare workers delivering health services where research was not the focus of the study were excluded, as was occupational health research. Studies about volunteers, defined as people offering their services to support health activities without specific training as health professionals, were also excluded. Initially, only healthcare workers working in low and middle-income countries were included, but this limitation was removed in order to identify any tool measuring HRCD in any setting. The Phenomenon of Interest was defined as assessing HRCD, or identifying tools, frameworks and templates designed to assess HRCD. A comprehensive range of terms, including synonyms for ‘assess’, ‘tool’ and ‘development’, was used. Studies which mentioned components that could be considered to assess, measure and ‘give evidence to’ research capacity development, but which were not presented in any capacity development context, were excluded. In addition, since the concept of capacity development is widely applied in different settings, studies on areas unrelated to health, such as ‘air pollution’, ‘financial capacity’ or ‘tobacco’, were also excluded. The study design criteria were broad, including qualitative, quantitative and mixed methods papers. Further eligibility criteria included in the SPIDER table refer to the quality of the study (Evaluation) and the Research type.

Table 1

SPIDER diagram—inclusion and exclusion criteria

Information sources and search strategy

Eleven databases were searched from inception to 16 January 2020: Ovid MEDLINE; Ovid Embase; Ovid PsycINFO; Ovid Global Health; EBSCO CINAHL; ProQuest Applied Social Sciences Index & Abstracts (ASSIA); ProQuest Sociological Abstracts; ProQuest Dissertations & Theses Global; Scopus; Web of Science Core Collection; and the WHO Global Index Medicus Regional Libraries. The first 10 pages of results from Google Scholar were also screened. The search strategies used free text terms and combinations of the relevant thesaurus terms, limited to English language publications only, to combine terms for capacity building, measuring and health research. The ‘NOT’ command was used to exclude papers about students, postgraduate students, tobacco, air pollution and a variety of other concepts to minimise the number of irrelevant results (see box 1 for a full set of search strategies).

Box 1

Search strategy

Database: MEDLINE (Ovid MEDLINE Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Ovid MEDLINE Daily and Ovid MEDLINE) 1946 to present

  1. Capacity Building/ (1965)

  2. (capacit* adj2 build*).ti,ab. (5789)

  3. (capacit* adj2 develop*).ti,ab. (3591)

  4. (capacit* adj2 strengthen*).ti,ab. (924)

  5. (competenc* adj2 improv*).ti,ab. (1460)

  6. ((professional* adj2 develop*) and (competenc* or capacit*)).ti,ab. (1747)

  7. 1 or 2 or 3 or 4 or 5 or 6 (13649)

  8. Mentoring/ (820)

  9. mentor*.ti,ab. (13369)

  10. (assess* or measur* or evaluat* or analys* or tool* or equip*).ti,ab. (9653076)

  11. “giv* evidence”.ti,ab. (3814)

  12. framework*.ti,ab. (231138)

  13. 8 or 9 or 10 or 11 or 12 (9763562)

  14. Research/ (196782)

  15. clinical.ti,ab. (3158817)

  16. (health* and research*).ti,ab. (337604)

  17. 14 or 15 or 16 (3588891)

  18. 7 and 13 and 17 (3433)

  19. 18 (3433)

  20. limit 19 to English language (3346)

  21. (student* or graduate or graduates or postgraduate* or “post graduate*” or volunteer* or communit* or tobacco or “climate change” or “air pollution” or occupational or “financial capacity” or informatics or “IT system” or “information system” or transport or “cultural competenc*” or disabili* or trauma).ti,ab. (1828113)

  22. 20 not 21 (1673)

Google Scholar—screen the first 10 pages of results

Sorted by relevance:

(“capacit* build*”|“build* capacit*”|“capacit* develop*”|“develop* capacit*”|“capacit* strengthen*”|“strengthen* capacit*”|“professional* develop*”|“completenc* improv*”|“improv* competenc*”)(“health* research*”|clinical) https://scholar.google.co.uk/scholar?q=(%22capacit*+build*%22%7C%22build*+capacit*%22%7C%22capacit*+develop*%22%7C%22develop*+capacit*%22%7C%22capacit*+strengthen*%22%7C%22strengthen*+capacit*%22%7C%22professional*+develop*%22%7C%22completenc*+improv*%22%7C%22improv*+competenc*%22)(%22health*+research*%22%7Cclinical)&hl=en&as_sdt=0,5

Study selection

Two researchers, DB and ER, independently screened and reviewed studies using the Covidence systematic review software.18 In case of disagreement, DB and ER discussed the abstracts in question. After consensus on inclusion was reached, the full texts of all included studies were rechecked for inclusion by DB and confirmed by ER.

Study analysis procedure

Data from the selected papers were extracted, and quality assessments performed, using an extraction log created and validated against the CASP checklist17 for the critical appraisal of qualitative studies. Macro areas of interest in the log were general information on the paper, such as author and title, main focus and study design. The source of funding, conflicts of interest and ethics approval were also recorded. A separate section of the extraction log recorded the characteristics of the tool used or described in each selected paper (figure 1). The extraction log also included specific sections on the study design, the methodology and the main findings of each paper. Furthermore, a dedicated section of the log collected data on the quality of each study, analysing selection biases and providing a critical appraisal derived from the CASP checklist. If a definition of capacity development was given, the definition was collected. Some of these sections of the extraction log are not shown in figure 1, since the figure focuses on the description of the identified tool. Content analysis was used to develop the narrative synthesis presented in the Discussion section.

Patient and public involvement

Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.

Results

Database search and results screening

In December 2018, the first round of the search was performed in 11 different databases and in Google Scholar using the search strategy described in box 1. A total of 13 264 suitable records were found; 6905 duplicates were removed, resulting in 6359 unique records for title and abstract screening (table 2), which was performed throughout 2019. In January 2020, an additional search for papers published or indexed in 2019 was performed using the same search strategy; this returned 15 775 records, of which 1118 unique papers remained after removal of duplicates. These papers were added to the 6359 papers identified in the first search, giving a total of 7474 unique papers for title and abstract screening (three further duplicate records were removed in the Covidence software).

Table 2

Search results

The 7474 unique records were uploaded to the Covidence systematic review software. Two researchers, DB and ER, independently screened the studies, including or excluding them according to the criteria in the SPIDER table (table 1). A total of 7280 studies were considered irrelevant, and the full texts of the remaining 178 references were reviewed. Reasons for exclusion were identified by streamlining the SPIDER table criteria into three main criteria: wrong setting, irrelevant study design and wrong focus of the study; a reason was assigned to each excluded paper. All 178 studies described some form of activity to measure competencies related to performing health research. Thirty were excluded because they were literature reviews on a different aspect of health research, or because they described a general perspective on health capacity development without offering any specific measurement or reference to research. A further 42 studies were excluded for wrong setting, since competencies were measured at the level of research institutions or within a specific network. An additional 90 studies were excluded because the study design did not match the inclusion criteria: 38 described the use of a measurement tool tailored to the context (eg, a specific profession, intervention or setting) and not applied at the individual level; 34 made no mention of a specific tool to measure HRCD; and the final 18 reported the use of an evaluation tool that was an ad hoc pre/postintervention questionnaire with limited applicability beyond the context described in the paper. A total of 162 studies were therefore excluded, leaving 16 studies for this review (figure 2). These counts are tallied in the short sketch below.
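As a bookkeeping check, the exclusion counts reported above sum as follows (a minimal Python sketch; all numbers are taken directly from the text):

  # Tally of the full-text exclusions reported above
  wrong_focus   = 30            # reviews/general perspectives with no measurement
  wrong_setting = 42            # competencies measured at institution/network level
  wrong_design  = 38 + 34 + 18  # context-tailored tools + no specific tool
                                # + ad hoc pre/postintervention questionnaires = 90
  excluded = wrong_focus + wrong_setting + wrong_design  # 30 + 42 + 90 = 162
  included = 178 - excluded                              # 16 studies retained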

Figure 2

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) screening diagram.

Analysis of the findings across the selected papers

A total of 16 studies met the inclusion criteria set for this systematic review.19–34 The 16 articles were analysed using the extraction log created and validated against the CASP qualitative checklist.

The results are summarised in table 2. None of the papers were published before 2006, and nine were published after 2014.20 21 23–26 31 33 34 The majority (n=13) applied a tool in high-income settings.19 20 22–24 26–32 34 Seven papers described the use of tools in Australia,20 22 24 26 28 29 34 three in low and middle-income countries (one in Ghana, Kenya, Malawi and Sri Lanka,25 one in the Pacific Islands21 and one in the Philippines33), one in Europe (Norway),19 one in the USA,32 and one measured HRCD in a group linked to a specific intervention spanning multiple areas of the world. Three papers described the creation of a tool without applying it to any specific context,23 30 31 although all three were designed by research groups in high-income countries (one in the USA and two in the UK).

The selected studies applied quantitative, qualitative or mixed methods analyses. The preferred approach (n=7) was to generate quantitative data using an HRCD tool.20 24 26–28 33 34 One-third of the studies (n=5) used a mixed methods approach,19 21 22 25 29 in which quantitative tools were combined with semistructured interviews or, in some cases, qualitative questions were added to the questionnaire. The three studies describing the creation of a tool were not analysed under this methodological category.

Of the 16 selected studies, three used the term ‘capacity development’,23 25 30 and two included a definition of the concept.25 30 Seven papers used ‘capacity building’,20–22 24 26 28 31 of which four also included a definition.20 22 24 28 In two papers, the capacity building definition was associated with the definition of ‘research culture’.20 22 Two additional papers used alternative generic terms like ‘research capacity’33 or ‘research self-efficacy’.32 Four papers did not refer to any specific term and therefore no definition was given.19 27 29 34

Five of the 16 selected papers openly declared no conflicts of interest.22–24 28 31 Eight stated the source of funding used to carry out the activities described.19 21 23 27–31 The number of participants in the studies varied from 28 enrolled participants in a qualitative study21 to 3500 users of an online measurement tool.27

Analysis of the tools from the selected papers

The tools described or used in the 16 selected papers varied in nature, length and applicability. In general, even when there were similarities, each paper described a different perspective on the use of a tool. Four papers applied a questionnaire-type tool to assess research competencies and skills.19 21 25 33 The length of these questionnaires varied from 19 questions21 to 59 questions19 on health research capacity, with the addition of open-ended qualitative questions in two studies,19 21 and a structured interview in another study.25

Three studies22 24 34 used, with a range of adaptations, the Research Capacity and Culture tool, and one study20 revised this tool into a Research Capacity and Context tool, referencing the Research Capacity and Culture tool as its primary source. Another recurrent tool was the ‘Research Spider’.28 29 Again, the original tool had been adapted to the context, and in one case29 it was used as a base for qualitative research on HRCD. Two additional papers described tools designed ad hoc to measure the impact of an intervention (CareerTrac27 and Cross-Sectional Electronic Survey26). These two papers were not excluded under the pre/postintervention criterion because the action was wider, at a programme level, and the tool used to measure HRCD was the main focus of each paper. A further paper described a tool for a specific category of healthcare workers (Nursing Research Self-Efficacy Scale—NURSES).32 Three papers23 30 31 focused on the creation of a new tool and described the process of identifying the set of competencies required to run health research. The outcome of two of these was defined as a ‘core competency framework’,23 31 while the third defined the outcome of the analysis as a ‘set of indicators’.30

In terms of the target population, the identified tools aimed to measure HRCD in a range of healthcare worker professions. Five papers focused on measuring HRCD in allied health professionals (AHPs).20 22 24 26 34 Nurses were the main focus of two other studies,19 32 and four studies applied a tool to a range of health professions (from laboratory scientists to data managers).21 25 28 29 Two other papers focused on groups linked to a specific intervention.27 33 All 16 papers included, alongside healthcare workers, representatives of technical professions in health such as managers, directors, faculty members and consumer organisation representatives. The three papers describing the creation of a new tool suggest that their tools would be applicable to all research roles.23 30 31

As per the inclusion criteria, the main level of measurement of the tools was the individual level. Seven papers measured HRCD only at the individual level.19 23 28 29 31–33 Three papers added to the individual level of measurement by including information on the perceived barriers to performing health research21 26 29; of these three, two also focused on understanding what motivates healthcare workers to become involved in health research.26 29 The five studies which used the Research Capacity and Culture tool and its variants measured HRCD at the individual level as well as at the team and organisational level.20 22 24 25 34 One paper described the creation of a tool designed to be used at the organisational level, but embedded a measurement of HRCD at the individual level as well.30

The most common way a selected tool was validated was by referencing the main paper that described the tool and its validation process (n=6).23 28 29 32–34 This was the case for some of the ad hoc questionnaires,23 33 34 for the ‘Research Spider’ tool28 29 and for the NURSES tool.32 Papers which described an original process, or used modified versions of an original tool, validated the tool through a contextual validation process described in the paper.21 22 24 25 31 These validation processes included consultation of a panel of experts22 24 31 or an iterative validation process.21 25 One paper stated that the tool used was validated without referencing the process or the original tool.20

Overall, only two papers23 31 focused specifically on tools to measure HRCD at a wider level, without linking the measurement to a specific group or geographical area, as was done in the majority of papers.19 24 25 28 29 33 In four cases, the tools described were adapted to identify determinants of, or barriers to, HRCD in a defined setting20 30 34 or to promote HRCD in relation to a specific disease or research topic.21 In other cases, the papers focused on a tool aiming to assess the impact of specific interventions or programmes on HRCD.26 27

Discussion

Summary of evidence

This systematic review aimed to identify tools which measure healthcare workers’ individual capacities to conduct research; the 16 included articles19–34 demonstrated that tools to measure HRCD in healthcare workers are available, even if they are limited in number. In most cases, the identified tools do not originate from the need to measure and foster HRCD as a necessary strategy to promote research capacity. There is, therefore, a need to design more comprehensive tools which are globally applicable and able to provide comparable, standardised and consistent measurements of research competencies.

The importance of measuring HRCD has only been recognised recently.15 As the publication dates of the identified papers show, appreciation of the contribution that health research can offer to capacity development at a personal level only began in the first decade of this millennium. Almost half of the selected papers (n=7) refer to studies whose data were collected after 2014.20 21 24 26 31 33 34 Of note is the high number of new publications retrieved from the academic databases (1118 papers) when the search strategy was rerun in 2020.

Questionnaires were the most commonly used method for assessing research skills and competencies. Almost two-thirds of the papers (n=10)19 20 22 24 26 28 29 32–34 measured different research skills at a personal level using a 5-point Likert scale (n=6)19 26 28 29 32 33 or a 10-point scale (n=4).20 22 24 34 This choice highlights the need for a validated quantitative tool based on a set of competency-related questions that can bring standardisation, comparability and consistency across different roles and contexts. However, the extensive use of mixed methods, combining quantitative questionnaires with other qualitative instruments, reflects the fact that HRCD depends on a complex series of components that need to be identified both qualitatively and quantitatively.

Because the selection of articles for this review was not limited to tools used in low and middle-income countries, it has revealed that most of the identified tools were used in high-income settings. It is important to note that excluding pre/postintervention assessments significantly reduced the inclusion of studies performed in low and middle-income countries. This finding highlights that although health systems in low and middle-income countries may benefit from providing evidence for HRCD,5 they are rarely the focus of the HRCD literature. Most measurements of HRCD in lower income settings appear, in fact, to be narrowly linked to measuring the effectiveness of training offered for a specific study, or limited to a particular disease. Even when the perspective is broader than a particular study, it is mostly limited to the evaluation and sustainability of training programmes, and not linked to a plan of career progression and research competency acquisition. More attention should therefore be given to creating tools able to measure, support and promote long-lasting research capabilities as part of healthcare workers’ professional growth.

Three essential findings of this systematic review support a change in the perception of HRCD and the tools needed to measure it. First, many of the excluded papers (42 out of 162 excluded papers from the last round of analysis) focused exclusively on the institutional level of measuring research capacity. This is mostly because training interventions are designed to prepare a team to run a study and rarely to promote individual HRCD.1 35 36 In some cases, the measurement via a tool is also an exercise to demonstrate the investment in training activities for reporting purposes.37 38 It is therefore important to start promoting a more effective research culture which is independent of specific diseases or roles. This progression could be achieved by championing systems which measure the changes in research capacities at a team and personal level using a globally applicable tool. Most of the tools excluded were evaluation tools designed for, or used in, a specific setting and thus not suitable for a comparable, standardised and consistent analysis of long-term research competency acquisition strategies.

Second, papers that focused on measuring HRCD at the individual level confirmed that research is seen as an opportunity to learn the cross-cutting skills needed in healthcare. A defined set of standardised competencies required to conduct research could be used to measure an individual’s, team’s and organisation’s abilities. This was the focus of two papers23 31 which identified a framework of core competencies. Most of the tools (n=7) were designed to be applied to a wide variety of health professions.21 23 25 28–31 HRCD can be accessed at different entry points depending on the specific job title, but the set of skills acquired is common and shared among the research team.1 The approach to assessing these inter-related competencies should therefore be global and not role or disease based.39 Measurement at an individual level is essential to promote a consistent and coherent career progression for each person and role.40 However, the overall capability in running research programmes should be measured at a team level, where all roles and competencies complement each other and skills are made visible and measurable as a whole against an overall competency framework. The individual and institutional/team levels are therefore two aspects of HRCD that grow together, supported by a common comparable, standardised and consistent tool.

Third, the lack of a standard definition for HRCD can lead to post-training evaluations being categorised as HRCD activities. Although pre/post-training evaluations are important, it would be helpful to define what constitutes a ‘structured action’ to promote HRCD. As previously mentioned, the term ‘capacity development’ is not universally used, and its many synonyms, such as ‘research capacity’ or ‘capacity strengthening’, create the possibility of different interpretations. Furthermore, inconsistent terminology was found in describing activities in support of HRCD that were in reality very similar (eg, workshop, training, course). Steinert et al41 suggest that there should be a standard definition in the context of educational capacity development. Such a definition, alongside a common taxonomy to describe health professions, would support the identification of HRCD as a defined process with specific characteristics rather than a general effort at research training.

The most common tool identified in this review was the Research Capacity and Culture tool.20 22 24 34 The Research Capacity and Culture tool consists of 52 questions that examine participants’ self-reported success or skill in a range of areas related to research capacity or culture across three domains: the organisation (18 questions), the team (19 questions) and the individual (15 questions). It also includes questions on perceived barriers and motivators for undertaking research. Respondents are asked to rate a series of statements relevant to these three domains on a scale of 1–10, with 1 being the lowest and 10 the highest possible skill or success level. It represents a good example of a comprehensive tool. As confirmed by the review findings, a potential limitation is its application mainly in an Australian context and almost exclusively to measure HRCD in AHPs,22 24 34 so the generalisability of the tool remains to be confirmed. Nevertheless, the Research Capacity and Culture tool is a strong example of how a tool refined around a context and a specific health profession can be an incentive to measure HRCD.
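For illustration, the sketch below shows how 1–10 self-ratings from a tool structured like the Research Capacity and Culture tool could be aggregated into per-domain scores. The domain sizes (18/19/15 questions) and the 1–10 scale follow the description above; the variable names and the simple mean aggregation are our assumptions, not part of the published tool.

  # Minimal sketch: aggregating 1-10 self-ratings into per-domain scores for a
  # tool structured like the Research Capacity and Culture tool. The domain
  # sizes follow the published structure; the mean aggregation is an assumption.
  from statistics import mean

  DOMAINS = {"organisation": 18, "team": 19, "individual": 15}

  def domain_scores(responses):
      """Return the mean 1-10 rating per domain, checking completeness."""
      scores = {}
      for domain, n_items in DOMAINS.items():
          ratings = responses[domain]
          if len(ratings) != n_items or not all(1 <= r <= 10 for r in ratings):
              raise ValueError(f"{domain}: expected {n_items} ratings between 1 and 10")
          scores[domain] = mean(ratings)
      return scores

  # Hypothetical respondent with uniform ratings, for illustration only
  respondent = {"organisation": [6] * 18, "team": [7] * 19, "individual": [4] * 15}
  print(domain_scores(respondent))  # mean 1-10 rating for each domain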

Another tool highlighted by this review was the ‘Research Spider’ tool.28 29 42 This tool collects information on individual research experience and interest in research skill development in 10 core areas, including ‘writing a research protocol’, ‘using quantitative research methods’, ‘publishing research’, ‘finding relevant literature’ and ‘applying for research funding’. In each area, the level of experience is measured on a 5-point Likert scale, from 1 (no experience) to 5 (high experience). The primary aim of the ‘Research Spider’ is to be a flexible tool. This flexibility is confirmed by the two studies28 29 which used it: one28 used it as the main measurement, and the other29 as a quantitative base for qualitative semistructured interviews. The advantage of this tool is that it provides a visual overview of personal research competencies. However, although the limited number of measurement areas (n=10) makes the tool a good initial evaluation instrument, it does not specify the subskills within each area.
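As a sketch of the kind of visual overview the ‘Research Spider’ provides, the following snippet plots experience ratings as a radar chart. Only the five of the ten core areas named above are included, and the example ratings and the plotting choices are ours.

  # Sketch: radar ('spider') plot of 1-5 experience ratings, in the style of
  # the 'Research Spider'. Only the five of its ten core areas named in the
  # text are shown; the ratings are hypothetical.
  import math
  import matplotlib.pyplot as plt

  areas = [
      "Writing a research protocol",
      "Using quantitative research methods",
      "Publishing research",
      "Finding relevant literature",
      "Applying for research funding",
  ]
  ratings = [3, 4, 2, 5, 1]  # 1 = no experience ... 5 = high experience

  # Repeat the first point so the polygon closes
  angles = [2 * math.pi * i / len(areas) for i in range(len(areas))] + [0.0]
  values = ratings + ratings[:1]

  ax = plt.subplot(projection="polar")
  ax.plot(angles, values)
  ax.fill(angles, values, alpha=0.25)
  ax.set_xticks(angles[:-1])
  ax.set_xticklabels(areas, fontsize=8)
  ax.set_ylim(0, 5)
  plt.show()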

A critical mention should be reserved for the two papers which described the creation of a comprehensive research core competency framework.23 31 Although no specific tool is described and the competency scores are visualised using a spider diagram, these studies present the most accurate overview of the skills required to run health-related research programmes. As mentioned before, a tool which applies a scoring system to the list of competencies identified by these frameworks has the potential to be widely applicable and reliable. This wide applicability, and the absence of explicit biases in measuring research skills improvement, can foster a more robust approach to research in health. Measuring HRCD independently of specific interventions would maximise the benefit of research at every level. At a personal level, it would clarify a potential career progression path, highlighting possible gaps; at the team level, it would support a multidisciplinary approach to health challenges; and at an institutional level, it would make the know-how generated by the international scientific community accessible to a broader group of local health workers. Overall, health practice at a global scale would benefit from the incentive to become involved in research that derives from measuring its impact on improving competencies. Positive outcomes of measuring HRCD could thus place the universal transferability and applicability of research methodology and results at a higher level of priority in the design of health research projects.

Limitations of the systematic review

Methodological limitations are recognised for this systematic review. First, the lack of clarity on a common definition and terminology for HRCD complicated the search strategy. A lengthy iterative process was necessary when developing the search strategies for the databases to include all the possible variants used to define ‘tool’, ‘capacities’ and ‘development’. Despite this effort, some studies may have been missed. Second, none of the included studies referenced a standard reporting procedure, despite the availability of standards for reporting qualitative and quantitative research43–45 as well as mixed methods research.46 Other limitations typical of reviews may also apply. Third, while this review attempted to be as comprehensive as possible, some sources might not have been detected, owing to the challenge of finding all the relevant grey literature and the restriction to English language sources. Finally, it was not possible to analyse the psychometric properties of each identified tool because of inconsistent reporting.

Conclusions

Sixteen studies using or describing tools to measure HRCD were identified and analysed in this systematic review.19–34 Equating capacity development with pre/postintervention evaluations, or evaluating capacity development generically without using a specific tool, was common. There is a need for a clear distinction between simply measuring the outcomes of training activities in healthcare workers and effective action promoting HRCD for healthcare workers.

The most recurrent tools described were the Research Capacity and Culture tool20 22 24 34 and the ‘Research Spider’ tool.28 29 A variety of other tools, mostly questionnaire based, were identified, and in most cases broader applicability than the specific context described in the paper may be possible. Two frameworks systematising research core competencies were identified,23 31 and the potential of tools derived from these frameworks could be significant. The applicability of each tool depends on the context and on the level of accuracy needed. Such tools could be routinely incorporated into standard personal development reviews in order to consistently support capacity development in research studies and organisations.

Future directions for HRCD include the design of a standardised, comparable and consistent tool to measure individual HRCD that is not tied to training evaluation but supports a long-term research competency acquisition strategy. In addition, harmonising the definitions and terminology used to identify HRCD actions and processes could facilitate the standardisation and comparability of HRCD strategies.

Data availability statement

Data are available upon reasonable request. All data relevant to the study are included in the article. The complete data set generated by the systematic review and included in the extraction log is available upon request.

Ethics statements

Acknowledgments

The authors are sincerely thankful for the immense and competent support of Elinor Harriss, Librarian of the Bodleian Health Care Libraries; Rebekah Burrow, who advised on the different steps and tools needed to perform the present systematic review; and Filippo Bianchi, first DPhil colleague, who provided the basic knowledge on systematic reviews.

References

Footnotes

  • Contributors DB and ER designed and conducted the systematic review. DB wrote the draft of the systematic review and revised it according to the commentaries of ER, SB and TL. DB provided the final version of the manuscript. ER critically reviewed the manuscript and substantially contributed to the final version of the manuscript. SB critically reviewed both the design of the systematic review and the manuscript, and was involved in the development of meaningful inclusion criteria. TL critically reviewed the design of the study, made important suggestions for improvement, critically reviewed the manuscript and substantially contributed to the final version of the manuscript. All authors approved the final version of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.