Article Text

Original research
Design and validity of an instrument to assess healthcare professionals’ perceptions, behaviour, self-efficacy and attitudes towards evidence-based health practice: I-SABE
  1. Arielly Souza Mariano Ruano1,
  2. Fabiane Raquel Motter2,
  3. Luciane Cruz Lopes2
  1. 1School of Pharmaceutical Sciences, São Paulo State University, Araraquara, São Paulo, Brazil
  2. 2Graduate Course in Pharmaceutical Sciences, University of Sorocaba (Uniso), Sorocaba, São Paulo, Brazil
  1. Correspondence to Prof Luciane Cruz Lopes; luslopesbr{at}gmail.com

Abstract

Objectives To develop and validate an instrument to measure Brazilian healthcare professionals’ perceptions, behaviour, self-efficacy and attitudes towards evidence-based health practice.

Design Validation of an instrument using the Delphi method to ensure content validity and data from a cross-sectional survey to evaluate psychometric characteristics (psychometric sensitivity, factorial validity and reliability).

Setting National Register of Health Establishments database.

Participants We included clinical health professionals who were working in the Brazilian public health system.

Results The Instrument to assess Evidence-Based Health (I-SABE) was constructed with five domains: self-efficacy; behaviour; attitude; results/benefits and knowledge/skills. Content validity was assessed by 10–12 experts over three rounds. We applied the I-SABE to 217 health professionals. Bartlett’s sphericity test and the Kaiser-Meyer-Olkin (KMO) index were adequate (χ2=1455.810, p<0.001; KMO=0.847). Considering the factorial loads of the items and the convergence between the scree plot and the Kaiser criterion, the four domains tested in this analysis explained 59.2% of the total variance. The internal consistency varied between the domains: self-efficacy (α=0.76), behaviour (α=0.30), attitudes (α=0.644) and results/benefits to the patient (α=0.835).

Conclusions The results of the psychometric analysis of the I-SABE confirm the good quality of this tool. The I-SABE can be used both in educational activities and as an assessment tool among healthcare professionals in Brazilian public health settings.

  • Education & training (see medical education & training)
  • Health services administration & management
  • Primary care
  • Public health


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • The Instrument to assess Evidence-Based Health was developed through a rigorous process, which involved the integration of evidence from the literature using a theoretical framework, a Delphi survey for the validity of the content, and psychometric assessments.

  • Although the response rate was 15%, this survey presented a good number of respondents from different types of healthcare professionals coming from diverse practice settings with different levels of experience, thus providing a good assessment of the overall knowledge and use of evidence-based health practice in public health settings.

  • Composite reliability was not assessed in this study; therefore, future studies in a larger sample of health professionals are needed to assess reliability with greater robustness, as well as to perform confirmatory factor analysis.

Introduction

Evidence-based health practice (EBP) is identified as one of the most important factors for improving the results and sustainability of health systems, and it has become an important competency for health professionals involved in patient care.1 EBP is defined as the integration of the best research evidence with clinical expertise and patient values.2 Several studies have reported improved patient outcomes following implementation of EBP, such as reductions in length of hospital stay and costs, increased patient satisfaction, and the elimination of unnecessary or ineffective practices.2

Although the incorporation of scientific evidence as a basis for health decision making is considered a critical factor for improving quality of care, the application of EBP remains a major challenge.3–5 Studies have shown competency gaps and low implementation rates among healthcare professionals across diverse practices and settings. Understanding the knowledge, skills, attitudes and barriers related to EBP among healthcare professionals can help in elaborating effective and systematic strategies for integrating EBP into healthcare services.5

Despite the availability of tools to assess EBP implementation among healthcare professionals, most have been developed to assess knowledge and skills, and none covers all domains established by the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework.6–8 According to a recent systematic review including 12 validated tools, few demonstrated multiple (≥3) types of established evidence on the reliability and validity of the instrument, and none addressed domains such as self-efficacy, behaviours or patient benefit.9

These limitations might compromise the ability to evaluate the impact of EBP implementation on health outcomes. The development of a validated instrument is important to determine gaps, to design interventions needed for integrating this competency in healthcare organisations, and to assess the effectiveness of future interventions in different contexts (eg, hospitals, primary care services).5

In Latin America, despite increased efforts to disseminate and apply EBP concepts, the application of EBP among healthcare professionals is still limited.10 11 Research that supports the development of interventions to promote EBP implementation in the clinical routine is lacking.10 11 In addition, no study has developed a valid and reliable instrument to assess gaps in EBP implementation among healthcare professionals in the Brazilian context. Thus, this study aims to develop and validate an instrument for determining healthcare professionals’ perceptions, behaviour, self-efficacy and attitudes related to EBP in Brazil.

Methods

Identifiable information, such as names, phone numbers and addresses, was not collected from participants in order to fully protect their privacy.

The development was conducted in a systematic manner, using an accepted measure development methodology, which included development of items, content validity, pilot study and evaluation of psychometric characteristics. The flow of instrument development is shown in figure 1.

Development and validation of the instrument

Development of items

We drew on the EBP conceptual framework proposed by the CREATE to guide the item development process.12–14 This framework is a common taxonomy for new and existing tools and it is designed to help EBP educators/researchers identify the best assessment tool available and provide guidance for developers of new EBP assessment tools. Using this framework, the nature of an assessment can be characterised with regard to the five-step EBP model (Ask, Search, Appraise, Integrate, Evaluate), type(s) and level of educational assessment specific to EBP, audience characteristics, and learning and assessment aims.12–14

A scoping review was used to systematically select and summarise existing tools with established evidence on reliability and validity.14–21 We used the CREATE framework to guide the data extraction of potential domains. Items were pooled by two researchers into the five domains established by the CREATE framework: (1) attitudes, (2) self-efficacy, (3) knowledge/skills, (4) behaviours and (5) results/benefits for patients. A Microsoft Excel spreadsheet was used to extract and analyse the items. Disagreements about the items included in each domain were resolved by consensus-based discussion.

Considering that we used the CREATE framework, the method used to identify the items was modified framework synthesis.22 This method is an excellent tool for supporting qualitative analysis because it provides a systematic model for managing and mapping the data.22 The definitions of the domains derived from this framework are presented in online supplemental appendix 1. We used these definitions as a guide for the development of new items when no existing instrument covered a domain.

After translation, technical revision and semantic evaluation by the research group, the initial item pool was discussed and critically assessed, and appropriate changes to the translation were made to ensure consistency. After this stage, we used a consensus approach to ensure the content validity of the instrument, which is described in the later section entitled ‘Content validity’.

Content validity

Content validity refers to the degree to which elements of the instrument are relevant to and representative of the targeted construct for a particular assessment purpose.23 It can be assessed using the analyses of several examiners (a panel of experts) who verify how well the items represent the content areas and how relevant they are to the objectives to be measured. We used a panel of experts through a consensus technique, according to a simplified Delphi method.24

The Delphi method is a structured process that distributes successive rounds of a questionnaire to gather information and to set priorities or gain consensus on a specific issue. This method is characterised by anonymity, iteration, controlled feedback and stability in responses among those with expertise on a specific issue.25 26 The Delphi technique was conducted through online web surveys in which the panel of experts completed the form, giving their responses directly and blinded to the other panellists.26

Selection and recruitment of experts

The panellists were identified through an advanced search system of the Lattes platform on the National Council for Scientific and Technological Development website (www.cnpq.br/lattes), using the following keywords: evidence-based health, EBP, evidence-based medicine, questionnaire, measurement instruments, questionnaire validation and psychometric analysis. The Lattes Platform is a publicly available information system about individual researchers working in Brazil maintained by the Brazilian Federal Government.

As this project aims to create an instrument to assess knowledge, skills and attitudes, we understood that the panel of experts should be composed of researchers working with EBP and healthcare professionals who use EBP in their practice. Considering these aspects, the following criteria were used for selecting the panel of experts: publication of at least three peer-reviewed, indexed journal articles on EBP, or projects/articles involving validation of questionnaires in the health area published in the last 4 years, or healthcare professionals with at least 5 years of experience in EBP. We identified 25 potential participants, who were then invited by email. Each potential panellist was informed about the voluntary nature of the study and was provided with full study information, outlining the aim of the study and the extent and timing of their expected involvement.

Rounds

We planned at least three rounds. During the rounds, the panel members were invited to comment on grammar and phrasing to improve uniform interpretation of items and prevent socially desirable responses. The content assessment considered Theoretical Dimension, Theoretical Relevance, Clarity and Relevance or Representativeness, as explained in our protocol.27 For each item in the questionnaire, we used the traditional 4-point Likert scale, which has no neutral option (1=completely disagree; 2=disagree; 3=agree and 4=completely agree). A neutral option is unhelpful when researchers wish to extract a specific opinion from respondents on the clarity and relevance or representativeness of each item in the instrument.28 Additionally, following each item, a space was included for panellists to write suggestions for improving the item or to make comments. If an expert marked ‘completely disagree’ or ‘disagree’, they were asked to justify their answer. The experts were also offered the opportunity to add items. If they suggested additional items or dimensions, these were submitted for assessment in the next round. To avoid imposing our views on participants, the researchers contacted panellists by telephone or email only when there was some doubt about their comments or suggestions, in order to prevent mistakes in the elaboration of items. After each round, the results and comments were analysed and summarised by the research team to guide the instrument revision. The modified instrument was then sent to the panellist group for the next round of analysis. Each round lasted 30 days: 15 days for the panellists’ answers and another 15 days for the researchers’ analysis.

Descriptive analyses

After each round, data generated from completing the online questionnaire were extracted to Microsoft Excel for descriptive analysis (frequencies and percentages) to determine the percentage rating of agreement or disagreement among experts.

Determining consensus

We used the traditional 9-point scale (1=extremely irrelevant to 9=extremely relevant) to assess each item. The participants’ responses were categorised as irrelevant (1–3), equivocal (4–6) or relevant (7–9). For each item, consensus was reached if at least 80% of the participants’ votes belonged to the same category (1–3, 4–6 or 7–9).29 30 Items that did not reach consensus were reviewed and resubmitted in the next round. During the Delphi process, only one panellist suggested significant changes to the instrument; the items concerned were revised and returned for voting in the next round.
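For illustration, the consensus rule described above can be expressed in a few lines of code. The Python sketch below (hypothetical helper names; not part of the study’s analysis pipeline) bins 9-point ratings into the three categories and checks whether at least 80% of votes fall in the same band.

```python
from collections import Counter

def categorise(score: int) -> str:
    """Map a 1-9 relevance rating to the category used in the Delphi rounds."""
    if score <= 3:
        return "irrelevant"
    if score <= 6:
        return "equivocal"
    return "relevant"

def has_consensus(votes: list[int], threshold: float = 0.80) -> tuple[bool, str]:
    """Return whether >=80% of votes fall in the same category, and which one."""
    counts = Counter(categorise(v) for v in votes)
    category, n = counts.most_common(1)[0]
    return n / len(votes) >= threshold, category

# Hypothetical example: 10 panellists rating one item
votes = [8, 9, 7, 7, 8, 9, 7, 8, 6, 9]
reached, category = has_consensus(votes)
print(reached, category)  # True 'relevant' (9/10 = 90% in the 7-9 band)
```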

Criteria for dropping items at each round

If 80% or more of the participants voted ‘completely disagree’ or ‘disagree’, the item was excluded from the instrument. After content validation ended, this stage was complemented with exploratory factor analysis, which is described in the later section entitled ‘Factorial validity’.

Feedback

Quantitative (percentage rating) and qualitative feedback from each round of the Delphi process were incorporated into the survey for the next round. The expert panel was instructed to consider the feedback.

Anonymity

Anonymity among panellists was ensured during the Delphi process, as the entire process was handled via remote participation coordinated by the researcher(s).31 Responses and feedback from panellists remained anonymous to everyone except the researcher(s). Therefore, panellists did not know each other’s identities or see each other’s comments/suggestions.

Pilot study

In order to identify possible doubts regarding the understanding of the items, panellists were asked to indicate health professionals to answer the instrument. Each panellist appointed three health professionals, totalling 36 potential participants. Of these, 28 agreed to participate in the research. If any of the nominated professionals had been a panellist during the content validation, this professional was not included in the pilot study and the researchers asked the panellist to appoint another possible participant.

Health professionals who agreed to participate in the pilot study had to answer the following three questions about the instrument in order to identify difficulties in the use of the Instrument to assess Evidence-Based Health (I-SABE): (1) How long did it take you to answer the instrument?; (2) Was there any difficulty in understanding any question? If YES, please describe it below. (3) Did you have difficulty with the topic?

If one or more items of the instrument were misunderstood by more than 20% of the assessed sample, those items were reviewed by the expert panel.

Evaluation of psychometric characteristics

Study design: this step was a cross-sectional study.

Setting

We gathered the survey participants from the National Register of Health Establishments database (CNES), which provides free access to data from all public health institutions of Brazil. Queries on CNES can be performed at http://cnes.datasus.gov.br/, filtering by geographical location (ie, state and municipality) and type of establishment. It also provides the name, role, workload and employment contract of each healthcare professional. We selected only physicians, nurses, dentists and pharmacists who were working in Brazil’s public health sector (Unified Health System).

Participants

We included clinical health professionals who were working in the public health system and excluded professionals on leave from work for a limited or unlimited time during the period of application of the questionnaire, as well as retired professionals.

Study size

The estimated minimum sample size was based on the requirement of 5–10 subjects per model parameter.32 In 2016, the government database registered 240 750 physicians, 182 861 nurses, 58 421 dentists and 20 593 pharmacists. Thus, we chose to work with a representative sample larger than that recommended for the statistical analysis. Considering a 30% response rate, we estimated that a sample size of 1270 respondents was needed to answer one of our questions (percentage of prior contact with, and familiarity with, EBP) with 5% precision. To obtain this precision, we dichotomised the first item of the survey (being favourable or not to EBP), assuming maximum variability (50% of responses favourable to EBP). A 95% CI was applied to the percentage of favourable responses.
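As an illustration of this calculation, the sketch below applies the standard precision-based formula for a proportion under the stated assumptions (p=0.5, 5% precision, 95% CI) and inflates the result for a 30% response rate. It is not the authors’ calculation; differences from the 1270 reported are attributable to rounding choices and the z-value used.

```python
import math

def sample_size_proportion(p: float = 0.5, precision: float = 0.05,
                           z: float = 1.96) -> int:
    """Minimum respondents for estimating a proportion p within +/- precision (95% CI)."""
    return math.ceil(z**2 * p * (1 - p) / precision**2)

n_respondents = sample_size_proportion()        # 385 completed surveys
n_invited = math.ceil(n_respondents / 0.30)     # inflate for an expected 30% response rate
print(n_respondents, n_invited)                 # 385, 1284 (in the vicinity of the 1270 reported)
```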

Random sampling

Random sampling was performed with Microsoft Excel on a central computer, considering some stratifications (eg, type of professional, geography, setting). We recruited potential participants through email with an invitation letter containing a link to the web survey. Professionals without email addresses available in CNES were contacted by phone or fax at their workplace and were sent a physical survey by postal mail to their work addresses.
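A stratified random draw of this kind, performed in Excel by the authors, could equally be scripted. The pandas sketch below is purely illustrative: the column names and stratum labels are assumptions, not the actual CNES schema, and the sampling fraction is chosen only to approximate the 1380 invitations described later.

```python
import pandas as pd

# Hypothetical CNES extract; column names and values are illustrative only.
roster = pd.DataFrame({
    "professional_id": range(1, 2551),
    "profession": ["physician", "nurse", "dentist", "pharmacist"] * 637 + ["physician", "nurse"],
    "region": ["Southeast", "Northeast", "South", "North", "Midwest"] * 510,
})

# Draw a proportional random sample within each stratum (profession x region).
sampled = (
    roster.groupby(["profession", "region"], group_keys=False)
          .apply(lambda g: g.sample(frac=0.54, random_state=42))  # ~1380 of 2550
)
print(len(sampled))
```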

Data collection

After health professionals agreed to participate in the study, the I-SABE instrument was sent online through the SurveyMonkey platform (https://pt.surveymonkey.com/).

Data analysis

Data analyses were performed using SPSS (V.20.0) and Stata (V.12.0).

Psychometric sensitivity

The summary and shape measures of the distribution of the questionnaire items were used to estimate their psychometric sensitivity. Items with a skewness (Sk) greater than 3 or a kurtosis (Ku) greater than 7 in absolute value were considered to have psychometric sensitivity issues.30 Multivariate outliers were diagnosed by computing the Mahalanobis distance.30
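The sketch below illustrates these checks under the stated cut-offs; it assumes item responses held in a pandas DataFrame and is not the authors’ SPSS/Stata code. Note that scipy reports excess kurtosis by default, which is assumed to match the convention used for the cut-off.

```python
import numpy as np
import pandas as pd
from scipy import stats

def sensitivity_check(items: pd.DataFrame, sk_max: float = 3.0, ku_max: float = 7.0) -> pd.DataFrame:
    """Flag items whose |skewness| or |kurtosis| exceed the cut-offs used in the study."""
    summary = pd.DataFrame({
        "skewness": items.apply(stats.skew),
        "kurtosis": items.apply(stats.kurtosis),   # excess kurtosis (Fisher definition)
    })
    summary["flagged"] = (summary["skewness"].abs() > sk_max) | (summary["kurtosis"].abs() > ku_max)
    return summary

def mahalanobis_distances(items: pd.DataFrame) -> np.ndarray:
    """Squared Mahalanobis distance of each respondent from the item-mean vector."""
    x = items.to_numpy(dtype=float)
    diff = x - x.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(x, rowvar=False))
    return np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
```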

Factorial validity

The exploratory factor analyses (EFAs) covered the following domains: self-efficacy, behaviour, attitudes and results/benefits. Therefore, only 20 items were included in this analysis. All items from the knowledge/skills domain and item 21 from the attitude domain were excluded, since they do not measure latent variables.

EFAs were conducted using principal axis factoring in order to partition systematic and error variance in the solution.33 34 Promax oblique rotation was used, allowing for factor intercorrelations. To promote simple structure, items were retained on a factor if they loaded at least 0.30 on the primary factor and less than 0.30 on all other factors.33
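A comparable analysis can be sketched with the third-party factor_analyzer package for Python (an assumption on our part; the authors used SPSS and Stata). Its ‘principal’ extraction is a principal-factor method that approximates the principal axis factoring described, and the retention screen mirrors the 0.30 loading rule.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(items: pd.DataFrame, n_factors: int = 4) -> pd.DataFrame:
    """EFA with principal-factor extraction and promax rotation; returns item loadings."""
    chi2, p = calculate_bartlett_sphericity(items)       # sampling adequacy checks
    kmo_per_item, kmo_total = calculate_kmo(items)
    print(f"Bartlett chi2={chi2:.1f}, p={p:.4f}; KMO={kmo_total:.3f}")

    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=[f"F{i+1}" for i in range(n_factors)])

    # Simple-structure screen: keep items loading >=0.30 on exactly one factor.
    primary = loadings.abs().max(axis=1)
    cross = (loadings.abs() >= 0.30).sum(axis=1)
    loadings["retain"] = (primary >= 0.30) & (cross == 1)
    return loadings
```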

Reliability

The reliability of an instrument used for data collection is its coherence, determined by the constancy of the results.35 A reliable (stable) measure is consistent and precise because it provides a constant measurement of the variable.35 To estimate the reliability, both the internal consistency and stability were evaluated.

We estimated internal consistency using the standardised Cronbach’s alpha coefficient (α), where a Cronbach’s α of 0.7–0.8 is considered satisfactory, 0.8–0.9 good and above 0.9 excellent.36
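For reference, Cronbach’s alpha can be computed directly from its definition. The sketch below uses the raw-score form (the standardised variant is based on the average inter-item correlation instead) together with simulated data; the domain size and response scale are illustrative assumptions only.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: alpha for a 6-item domain scored 1-5 by 217 respondents.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(217, 1))
domain = pd.DataFrame(np.clip(base + rng.integers(-1, 2, size=(217, 6)), 1, 5))
print(round(cronbach_alpha(domain), 3))
```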

Patient and public involvement

No patient was involved.

Results

Development and validation of the instrument

The results of the development and validation of the instrument are described in figure 2.

Figure 2

Results of development and validation of the instrument.

Development of items

We developed a preliminary instrument containing 31 items across five domains: self-efficacy, behaviour, attitudes, results/benefits to the patient, and knowledge/skills (online supplemental appendix 1). The instrument was named I-SABE.

Content validity

Three rounds of expert panels were carried out to assess the preliminary instrument. Of the 15 potential experts selected, 12 (80%) agreed to participate in the study. The second and third rounds of instrument evaluation had the participation of 10 (66.7%) experts. Most respondents completed the questionnaire in 15–20 min.

In the first round, the experts identified items that were not clear. This process resulted in the exclusion and convergence of items according to the consensus adopted. Thus, 4 of the 31 instrument items were removed, resulting in 27 remaining items (item 6 was incorporated into item 2; items 7, 13 and 14 were excluded).

Some experts highlighted the need to include new items; for example, in the ‘Attitude’ domain, the following items were included: ‘The practice of EBP increases the satisfaction of the person in my care’ and ‘The practice of EBP provides decision making shared with the person in my care’ (items 32 and 33 were added).

In the second round, a consensus was reached for 100% of the domains selected. However, experts emphasised the importance of characterising the health professional’s practice, suggesting the inclusion of items that reflect clinical practice. Thus, after the second round, four items were added, resulting in a total of 31 items. These items, together with item 21 from the Attitude domain and all items from the Knowledge/Skills domain, were not included in the psychometric analysis stage, as these questions do not measure latent variables.

In the third round, experts reached a consensus on the four items suggested in the previous round, and these were included in the instrument. At the end of the content validation, the I-SABE was finalised with 31 items across five domains. All changes, inclusions and exclusions of items are described in online supplemental appendix 2.

Pilot study

After determining the content validity, the instrument was applied to a sample of 28 health professionals, which included physicians, nurses and pharmacists. Based on their responses, we modified item 19 (‘Time is a factor that favours my use of EBP’), which was considered incomprehensible. The item was re-evaluated with members of the expert committee and changed to ‘I don’t use EBP because I don’t have time’. At the end of this stage, 77.7% of the participants reported no difficulty in filling out the I-SABE, and the average completion time was 12 min.

These modifications were included in the new version of the I-SABE, which was submitted to the assessment of validity and reliability. The time each participant took to complete the questionnaire varied between 24 and 66 min, and the mean completion time was 12 min. The perceived length was deemed appropriate by most participants (88%). The mean perceived difficulty of the questionnaire was 2 (0=very easy; 10=very difficult).

Evaluation of psychometric characteristics

Participants

Of the 2550 health professionals listed, 1380 subjects were recruited through random sampling. At the end of this stage, the response rate was 15% (figure 3).

Figure 3

Flow chart of sample composition.

The demographic and academic characteristics of the 217 Brazilian health professionals who participated in the study are summarised in table 1. The majority of the sample were women (n=148; 68.2%), were pharmacists (n=84; 38.7%), had a specialisation degree (n=90; 41.5%) and worked in primary care (n=70; 32.2%). Detailed characteristics of survey respondents are presented in table 1.

Table 1

Demographic, academic and setting of work characteristics of participants

Psychometric sensitivity

Sk and Ku were within the commonly agreed-upon thresholds of less than 1 for Sk and less than 3 for Ku, indicating a normal distribution of the I-SABE items and, therefore, adequate psychometric sensitivity (table 2).

Table 2

Summary and shape measures of I-SABE

Factorial validity

The sample suitability indices indicated good conditions for the factorial analysis: Kaiser-Meyer-Olkin of 0.847 and Bartlett’s sphericity test with p<0.001 (table 3). Visual inspection of the scree plot (figure 4) revealed that the point of inflexion occurred at the fifth factor, indicating that four factors should be retained.

Table 3

Value of Kaiser-Meyer-Olkin and Bartlett’s tests

Varimax orthogonal rotation allowed a more precise classification of each of the factors (domains) (table 4).

Table 4

Factor structure matrix with orthogonal varimax rotation of I-SABE

The analysis revealed four factors with eigenvalues >1, accounting for 52.6% of the total variance in the measure. After the completion of this step, item 12 was removed because it loaded on a confounding factor and had a factor loading below 0.4. The final instrument is described in online supplemental appendix 3 (Portuguese version) and online supplemental appendix 4 (English version).

Reliability

The reliability of the I-SABE instrument was assessed by Cronbach’s alpha; values were calculated for each factor, as described in table 5.

Table 5

Cronbach’s alpha values for each factor (domain)

Discussion

The robustness of the results of a study depends on the quality and validity of the instrument used. This study presented the development and the initial validation process of an instrument (I-SABE) to verify different aspects of EBP, using a rigorous methodology. Our findings demonstrated that the I-SABE has an overall good level of psychometric properties, measured as content and factorial validity and internal consistency reliability, for measuring four domains of EBP among different types of health professionals (mainly pharmacists, physicians and nurses), indicating that this instrument is efficient and effective for use in research and public health settings.

Although several tools combine more than one domain of EBP assessment in a single instrument, these predominantly focus on certain domains (ie, knowledge and skills) and EBP steps (ie, appraise).6 9 37–39 To our knowledge, I-SABE is the first tool that has addressed the following five domains in a single instrument: (1) self-efficacy; (2) behaviour; (3) attitude; (4) results/benefits and (5) knowledge/skills.6 9

The I-SABE was designed to evaluate EBP implementation among healthcare professionals with different levels of experience in Brazilian public health. Two instruments that assess EBP competencies have been culturally adapted and validated in Brazil.40 41 However, these instruments were developed to assess EBP in specific populations such as medical students and nurses. Furthermore, in the literature, few validation studies were developed with a multidisciplinary sample.42 However, for EBP to be fully implemented, it is essential to clarify possible differences among healthcare professionals, since EBP is a shared competency.

Regarding the five domains evaluated, the ‘self-efficacy’ domain had high factor loadings and good inter-item correlations, suggesting an adequate construction that allows measuring health professionals’ self-efficacy in the use of EBP. The ‘results/benefits for the patient’ domain also accurately reflects the content of the items and the direction of the I-SABE. This domain is considered an important aspect of EBP since it focuses on the impact of EBP on practice and outcomes.13

The internal consistency of I-SABE was assessed by Cronbach’s alpha. Some authors recommend that Cronbach’s alpha value must be at least between 0.60 and 0.70 to have a reliable instrument.43 44 Based on this evidence, it can be observed that self-efficacy, results/benefits to the patient and attitude domains show adequate internal consistency.

On the other hand, we observed lower internal consistency for the ‘behaviour’ domain. Low internal consistency suggests that the items within the ‘behaviour’ construct were weakly correlated. A possible explanation might be the low number of items (n=3) in this domain. Cronbach’s alpha values are quite sensitive to the number of items in a scale, and with short scales (<10 items) it is common to find quite low Cronbach’s alpha values.

This limitation is in agreement with findings reported in other studies. For instance, in the validation study of the ACE scale (assessing medical trainees’ competency in evidence-based medicine), the authors identified low internal consistency for questions about critical appraisal, with specific reference to selection and performance bias.45 Findings from the Evidence-Based Practice Knowledge, Attitude and Behaviour Questionnaire (EBP-KABQ) also showed lower internal consistency for the ‘knowledge’ domain compared with the other domains, suggesting that the six items within this construct were not adequately correlated.46

Finally, although the ‘knowledge and skills’ domain was not included in the psychometric analysis stage, since these questions do not measure latent variables, the I-SABE considered the requirements of the CREATE framework, examining user knowledge and skills across steps 1–4 of the EBP process.13

Strengths and limitations

This study was developed through a rigorous process, which involved the integration of evidence from the literature using a theoretical framework, a Delphi survey for content validity and psychometric assessments. First, we used the CREATE taxonomy as a framework to elaborate the instrument.13 This framework was developed by a specialist group and describes seven areas of evaluation of EBP educational interventions, of which five were used as the framework for the I-SABE. Second, the content of the instrument was based on a literature review, was validated by a panel of experts and was pretested, which strengthened its validity. Third, we performed simple random sampling of Brazilian healthcare professionals to select the participants of the study. Although the sample was relatively small compared with the total number of professionals previously selected, 217 healthcare professionals were sufficient to perform factor analysis, since the sample size calculation was based on a participant-to-item ratio of 5:1.32

However, there are some limitations to be considered. Web surveys are known to produce lower response rates than other data collection modalities.47 Although the response rate was 15%, this survey included a good number of respondents from different types of healthcare professionals (physicians, nurses and pharmacists), coming from diverse practice settings and with different levels of experience, thus providing a better picture of the overall knowledge and use of EBP in public health settings than many previous studies, which frequently focused on a specific profession and a particular setting. Additionally, we had a higher proportion of pharmacists (38.7%) than of other healthcare professionals (30.8% physicians, 17.1% nurses and 13.4% other healthcare professionals). It is important to note that we only included clinical pharmacists who work with healthcare teams in patient care and who were involved in the selection of interventions or medications for patients. Pharmacists have a crucial role in the health system in maintaining the rational use of medicines and providing pharmaceutical care to patients.48 EBP is an essential approach to promote the rational use of medications, ensuring that patients receive the right medicine in the right dose for the right diagnosis at the right time at the lowest possible cost suitable to their requirements.48 Finally, composite reliability was not assessed in this research. Future studies with a larger sample of health professionals are needed to assess reliability with greater robustness, as well as to perform confirmatory factor analysis.

Implications for clinical practice and future research

The I-SABE was found to be a valid and reliable instrument to assess self-efficacy, behaviour, attitude and results/benefits towards EBP in Brazil. This tool can be used to measure the EBP competencies of healthcare professionals in Brazil and to identify barriers to and facilitators of EBP in clinical practice in order to improve the implementation of this practice. In addition, the instrument can be used in educational activities, as well as an assessment tool among healthcare professionals in different public healthcare settings.

Conclusion

The I-SABE is a valid and reliable instrument to assess EBP among healthcare professionals. The instrument is simple and quick to apply and provides a reliable assessment of the main stages of EBP execution, favouring its implementation. Future research is required to further examine other psychometric properties of the I-SABE and its utility in patient care.

Data availability statement

All data relevant to the study are included in the article or uploaded as online supplemental information.

Ethics statements

Patient consent for publication

Ethics approval

The study was approved by the Ethics Committee for Research at the University of Sorocaba (number 1.425.808), and all participants gave written, informed consent before interviews or survey participation.

Acknowledgments

The authors would like to thank all experts—Scientific Committee and health care professionals who participated in the study.


Footnotes

  • Twitter @Fabiane Mooter, @lulopesbr

  • Collaborators The authors would like to thank all experts—Scientific Committee and health care professionals who participated in the study.

  • Contributors LCL developed the study concept and design. ASMR conducted measurements. FRM did the statistical analyses. LCL, ASMR and FRM analysed the results and wrote the initial draft of the manuscript. All authors read and approved the final manuscript. Guarantor: LCL.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.