Article Text

Original research
WHO standards-based tools to measure service providers’ and service users’ views on the quality of hospital child care: development and validation in Italy
  1. Marzia Lazzerini1,
  2. Ilaria Mariani1,
  3. Tereza Rebecca de Melo e Lima2,
  4. Enrico Felici3,
  5. Stefano Martelossi4,
  6. Riccardo Lubrano5,
  7. Annunziata Lucarelli6,
  8. Gian Luca Trobia7,
  9. Paola Cogo8,
  10. Francesca Peri9,
  11. Daniela Nisticò9,
  12. Wilson Milton Were10,
  13. Valentina Baltag10,
  14. Moise Muzigaba10,
  15. Egidio Barbi9,11
  16. on behalf of the CHOICE Study Group
    1. 1 WHO Collaborating Centre for Maternal and Child Health, Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy
    2. 2 Instituto de Medicina Integral Professor Fernando Figueira/IMIP, Recife, Brazil
    3. 3 Pediatric and Pediatric Emergency Unit, The Children Hospital, AO SS Antonio e Biagio e Cesare Arrigo, Alessandria, Italy
    4. 4 Pediatric Unit, Ospedale Santa Maria di Ca Foncello, Treviso, Italy
    5. 5 Department of Pediatrics, "La Sapienza" University -Hospital “Santa Maria Goretti” of Latina, Roma, Italy
    6. 6 Pediatric Emergency Department, Giovanni XXIII Pediatric Hospital, University of Bari, Bari, Italy
    7. 7 Pediatric and Pediatric Emergency Room Unit, “Cannizzaro” Emergency Hospital, Catania, Italy
    8. 8 Division of Paediatrics, Department of Medicine DAME, Academic Hospital Santa Maria della Misericordia, University of Udine, Udine, Italy
    9. 9 University of Trieste, Trieste, Italy
    10. 10 Department of Maternal, Newborn, Child and Adolescent Health and Ageing, World Health Organization, Geneva, Switzerland
    11. 11 Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy
    1. Correspondence to Dr Marzia Lazzerini; marzia.lazzerini@burlo.trieste.it

    Abstract

    Objectives Evidence shows that, even in high-income countries, children and adolescents may not receive high quality of care (QOC). We describe the development and initial validation, in Italy, of two WHO standards-based questionnaires to assess QOC for children and young adolescents at inpatient level, from the provider and user perspectives.

    Design Multiphase, mixed-methods study.

    Setting, participants and methods The two questionnaires were developed in four phases, conducted identically for each tool. Phase 1 included the prioritisation of the WHO Quality Measures according to predefined criteria and the development of the draft questionnaires. In phase 2, content and face validity of the draft questionnaires were assessed among both experts and end-users. In phase 3, the optimised questionnaires were field tested to assess acceptability, perceived utility and comprehensiveness (N=163 end-users). In phase 4, intrarater reliability and internal consistency were evaluated (N=170 and N=301 end-users, respectively).

    Results The final questionnaires included 150 WHO Quality Measures. Observed face validity was excellent (kappa value of 1). The field test resulted in response rates of 98% and 76% for service users and health providers, respectively. Among respondents, 96.9% of service users and 90.4% of providers rated the questionnaires as useful, and 86.9% and 93.9%, respectively, rated them as comprehensive. Intrarater reliability was good, with Cohen’s kappa values exceeding 0.70. Cronbach’s alpha values ranged from 0.83 to 0.95, indicating excellent internal consistency.

    Conclusions Study findings suggest that the tools developed have good content and face validity, high acceptability and perceived utility, and good intrarater reliability and internal consistency, and could therefore be used in health facilities in Italy and similar contexts. Priority areas for future research include how tools measuring paediatric QOC can be more effectively used to help health professionals provide the best possible care.

    • quality in health care
    • paediatrics
    • epidemiology



    This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


    Strengths and limitations of this study

    • This study describes the development and validation of tools to assess perceived quality of care from the perspective of service providers and users, based on the ‘WHO Standards to Improve the Quality of Care for Children and Young Adolescents at Facility Level’.

    • The major strength of the tools is the multiphase approach used for their development, which assessed different properties of the questionnaires, including content validity (evaluated with the contribution of both experts and end-users), face validity, acceptability, perceived utility and comprehensiveness, and reliability and internal consistency (evaluated in volunteers).

    • The tools still need to be validated for use in countries other than Italy.

    Background

    Despite reductions in child and adolescent mortality over the last 30 years, the global burden of disease remains immense. In 2019 alone, 7.4 (95% CI 7.2 to 7.9) million children and adolescents died, mostly from preventable or treatable causes.1 Europe, Northern America and Australia are the regions with the lowest child mortality.1 Notwithstanding low child mortality, even in high-income countries, quality of care (QOC) for children and adolescents remains a challenge in many settings.2–11

    Evidence suggests that key gaps in the quality of inpatient child healthcare in high-income and upper middle-income countries include inappropriate hospitalisations, medical errors, drug overuse, inadequate pain management and unsatisfactory patient experience of care.2–10 For example, a recent WHO report highlights extreme variation in paediatric hospitalisation rates across Europe, ranging from 150 to 550 per 1000 population, suggesting inequity in healthcare.2 Multicountry surveys and systematic reviews4 5 report antibiotic prescription rates of up to 60%–75% for common paediatric conditions such as fever, upper respiratory tract infections and diarrhoea, driving high healthcare costs and increasing the risk of antibiotic resistance.6 7 Pain prevention and treatment for children also remain suboptimal, with a need for wider implementation of both pharmacological and non-pharmacological interventions.8 9 Finally, patient experience of care has been reported as unsatisfactory in several high-income countries.10

    For adolescents, evidence from high-income, middle-income and low-income countries shows that they experience many barriers to receiving quality healthcare, including factors such as low agency, restrictive laws and policies regarding informed consent, judgemental attitudes of healthcare providers, unequal access to resources, health service fragmentation and poor coordination.11

    Poor QOC impacts individual health outcomes and increases risks and costs for the entire community. The WHO Global strategy for women’s, children’s and adolescents’ health (2016–2030) recognises QOC as a priority for improving the health of children.12 To operationalise this vision, a framework for paediatric QOC and standards of care were developed between 2015 and 2018.13 The WHO Framework13 identified eight domains of QOC grouped under three key dimensions: (1) provision of care; (2) experience of care; and (3) availability of resources (figure 1). In 2018 WHO defined, through an extensive consultation, eight standards for improving the quality of paediatric and young adolescent (0–15 years) care, articulated in 40 WHO Quality Statements and 520 WHO Quality Measures.13

    Figure 1

    The WHO Framework for improving the quality of paediatric and young adolescent care.13

    These WHO Standards and Measures have been developed in the best interests of children and young adolescents, to ensure that their particular needs and rights (eg, to family-friendly health facilities and services; child-specific and young adolescent-specific appropriate equipment, etc) are met and their risks for harm are minimised during health service delivery.13 The WHO Standards should be implemented in healthcare facilities following the ‘Plan Do Study Act’ cycle, which implies, as a first step, a baseline assessment.13

    Nevertheless, there is a lack of documented experience on how best to collect data on the Quality Measures defined by WHO,13 especially in high-income countries. While tools have been developed to collect data on WHO Quality Measures related to maternal and newborn QOC,14 15 and outpatient and primary care for adolescents,16 no tool yet exists related to the WHO inpatient paediatric standards. In 2019, drawing on previous research conducted on the WHO Standards,17–21 we started a multicentre project called CHOICE (Child HOspItal CarE) aiming at implementing the WHO Standards13 to improve QOC for children and young adolescents in health facilities in Italy and Brazil. This paper describes the development of two WHO-Standards-based questionnaires13 and their validation in Italy, which were the initial steps of the CHOICE project. These two questionnaires aim at collecting data on priority WHO paediatric Quality Measures from service users and service providers. The process of validation in other countries, as well as the development of a third tool aiming at collecting key measures on ‘provision of care’ from official hospital records, will be reported separately.

    Methods

    The development of the two CHOICE questionnaires comprised four sequential phases, as shown in figure 2, which were applied to both questionnaires equally.

    Figure 2

    Key phases in questionnaire development.

    The methodology used for the development and validation of the tools was based on existing guidelines,21–27 examples of questionnaire development reported in the literature14 24 28 and the authors’ experience in developing similar tools.16–20

    Table 1 summarises the properties of the questionnaires which were evaluated through the whole process, and the methods used.

    Table 1

    Questionnaire property evaluation24 25

    Phase 1: development of the draft questionnaire

    As a preliminary step, we conducted a literature review to assess whether any similar tool already existed, and relevant experts in the field were consulted. A broad search strategy (online supplemental table 1) was applied to PubMed, with no language restrictions. A snowballing process was used to identify additional relevant articles, using the reference lists of the primary articles. No comparable tool was identified; the process therefore proceeded to defining the questionnaires’ scope and desired characteristics (table 2).


    Table 2

    Expected use and desired characteristics of the CHOICE questionnaires

    The expected use (table 2) of the two questionnaires was to collect priority indicators useful for improving paediatric QOC, as defined by the WHO Quality Measures,13 at facility level in high-income and upper middle-income countries. The focus on this specific setting, as well as the identified data sources (ie, service providers or service users), was considered critical for prioritising the WHO Quality Measures. The two tools were conceived as complementary to a third tool aiming at collecting key measures of provision of care from hospital records. Based on previous experience,16–20 we felt it was important to include in the CHOICE questionnaires several open questions, allowing for the collection of any additional comments on QOC, and questions on responders’ recommendations on how to improve care in their own setting. Criteria for questionnaire structure and wording were based on existing guidance on how to develop a questionnaire.22–24

    After these preliminary steps, the steps that led to the draft questionnaires were: the categorisation of the WHO Quality Measures, their prioritisation, and their translation into questions in the two draft questionnaires. Specifically, the WHO Quality Measures for paediatric QOC were first categorised based on: (1) the domain of the WHO Framework13 they pertained to; and (2) the most appropriate source of information (ie, health providers, health service users or both).

    Second, the WHO Quality Measures for paediatric QOC were prioritised by a team of experts, including paediatricians, adolescent health specialists and researchers involved in developing the WHO paediatric QOC framework. Predefined criteria and a scoring system (each criterion scored from a minimum of 0 to a maximum of 5) were used to prioritise the Quality Measures: (1) relevance to QOC in the context of high-income to upper middle-income countries in the WHO European Region; (2) feasibility of data collection and expected data reliability; and (3) potential utility of the information for use in a quality improvement process. All Quality Measures with a total score of at least 10 points were selected for the first draft of the questionnaires. Sociodemographic items (age, sex, type of disease for children, type of health professional, etc) were chosen and designed according to the literature and previous experience.15 23 Indicators relevant to COVID-19 were extracted from existing WHO guidance and the relevant literature available.29 30
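
    For illustration, the selection rule can be sketched as follows (R code with hypothetical Quality Measure identifiers and scores, not the study's actual ratings): each of the three criteria is scored from 0 to 5, and measures totalling at least 10 points are retained.

    ```r
    # Minimal sketch of the prioritisation rule; measure identifiers and scores
    # are hypothetical. Each criterion is scored 0-5; measures with a total of
    # at least 10 points were retained for the draft questionnaires.
    measures <- data.frame(
      measure     = c("QM A", "QM B", "QM C"),
      relevance   = c(5, 2, 4),  # relevance in high/upper middle-income settings
      feasibility = c(4, 2, 5),  # feasibility and expected data reliability
      utility     = c(4, 3, 2)   # utility for quality improvement
    )
    measures$total <- measures$relevance + measures$feasibility + measures$utility
    measures[measures$total >= 10, ]  # selected for the draft questionnaires
    ```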

    Finally, the prioritised Quality Measures were translated into questions in the two draft questionnaires, following existing guidance (clear, specific, direct, concise questions) and previous experience22–24 (online supplemental table 2).

    Phase 2: assessment of content and face validity

    The two draft questionnaires were submitted to both volunteer experts and end-users to assess content validity (table 1). The opinion of end-users, and not only of experts, was considered particularly important because the two questionnaires aimed at collecting information on their perceived QOC. Two rounds of revision, including both experts and end-users, were conducted.

    The team of experts who reviewed both questionnaires included 49 paediatricians involved in the CHOICE project, with experience in both tertiary and secondary care, and senior experts from different settings (Italy and Brazil) with long-term experience in developing and/or using WHO indicators and standards.13 15 17–21 31 32 The WHO Standards13 were made available to the experts. The questionnaires were circulated in Italian, as the Brazilian expert was fluent in this language.

    End-users included health professionals and parents of hospitalised children. Volunteers were selected based on the responder characteristics defined in table 2. In each of the two rounds of revision, 30 health professionals with different backgrounds (senior paediatricians, junior paediatricians, residents in paediatrics, nurses, chief nurses), from different countries (Italy and Brazil) and settings (hospitals of different levels), reviewed the questionnaire for service providers. Similarly, 30 parents of children hospitalised with different conditions, and with different characteristics (ie, age, education, parity, nationality), including a subsample of immigrants living in Italy, reviewed the questionnaire for service users.

    General Delphi process rules were followed33: experts and end-users reviewed the questionnaires and provided written feedback through the two rounds. In each round, specific feedback on the following topics was requested: (1) formulation and wording of questions (ie, whether each question was clear, specific to a single measure and sufficiently concise); (2) importance and relevance of every question, including whether any item should be added or dropped; (3) organisation of domains (ie, division of items into different sections); and (4) overall content and length of the questionnaires. Recommendations for improvement were discussed within the team of experts until consensus on a final version was reached. The resulting revised version was then assessed for face validity.

    Face validity was assessed by asking end-users (ie, parents of hospitalised children and health workers) to evaluate each question in written form, using a dichotomous scale (yes/no), in terms of: (1) ‘relevance’ (defined as the ability of a question to address the extent to which findings, if accurate, apply to the setting of interest) and (2) ‘appropriateness’ (defined as the ability of an item’s content to describe the intended characteristic of a construct). Face validity was expressed as absolute frequencies, per cent observed agreement and Cohen’s kappa (K) statistics. The minimum predefined acceptable value of K, based on existing literature,34–41 was 0.70. Responders were selected at random among the population of health professionals and parents of hospitalised children at the Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, a large referral maternal and child hospital caring for paediatric cases from the whole national territory. The sample selection aimed at including responders with different backgrounds (ie, for service providers: senior and junior paediatricians, residents in paediatrics, senior and junior nurses; for service users: parents of different ages, sexes, education levels and nationalities, whose children were hospitalised for different conditions).
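
    As an illustration only, the following sketch (R code with hypothetical yes/no ratings, not the study data) shows how per cent observed agreement and Cohen’s kappa can be computed for dichotomous judgements of this kind.

    ```r
    # Illustrative computation (hypothetical ratings) of per cent observed
    # agreement and Cohen's kappa for dichotomous yes/no judgements.
    cohen_kappa <- function(r1, r2) {
      tab <- table(factor(r1, levels = c("yes", "no")),
                   factor(r2, levels = c("yes", "no")))
      n  <- sum(tab)
      po <- sum(diag(tab)) / n                      # observed agreement
      pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
      c(observed_agreement = po, kappa = (po - pe) / (1 - pe))
    }

    rating_1 <- c("yes", "yes", "no", "yes", "yes", "no", "yes", "yes")
    rating_2 <- c("yes", "yes", "no", "yes", "no",  "no", "yes", "yes")
    cohen_kappa(rating_1, rating_2)  # agreement 0.875, kappa ~0.71
    ```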

    The questionnaires were optimised based on the steps above.

    Phase 3: field testing

    The revised questionnaires were field tested among 163 volunteers (130 parents and 33 health workers) to assess: (1) response rate (calculated as the number of respondents out of those asked to complete the questionnaire); (2) perceived utility (yes/no); (3) comprehensiveness (yes/no); (4) length appropriateness (right length/too short/too long); (5) sections perceived as more important (all/A/B/only specific items in each section); and (6) any further recommendations for improving the tool (eg, adding or deleting questions, rephrasing, etc).

    Phase 4: assessing reliability and internal consistency

    The final questionnaires, optimised based on the steps above (online supplemental annexes 1 and 2, and table 3), were assessed for intrarater reliability over time by administering the questionnaire twice (test–retest). Reliability was evaluated on multiple-choice questions—excluding sociodemographic items—using the Cohen’s kappa (K) statistic and other indexes of agreement (ie, Gwet’s AC1 and the Bennett and Brennan-Prediger coefficients of agreement).38–41
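
    For readers unfamiliar with these additional indexes, the sketch below (R code with hypothetical test–retest counts for one dichotomous item) shows how Gwet’s AC1 and the Brennan-Prediger coefficient can be computed; the estimation routines actually used in the study may differ.

    ```r
    # Sketch (hypothetical test-retest counts for one dichotomous item) of two
    # chance-corrected agreement indexes reported alongside Cohen's kappa.
    tab <- matrix(c(60, 4, 3, 8), nrow = 2,
                  dimnames = list(test = c("yes", "no"), retest = c("yes", "no")))
    n   <- sum(tab)
    po  <- sum(diag(tab)) / n                          # observed agreement
    pi1 <- (sum(tab[1, ]) + sum(tab[, 1])) / (2 * n)   # overall prevalence of "yes"
    pe_ac1 <- 2 * pi1 * (1 - pi1)                      # chance term, Gwet's AC1
    pe_bp  <- 1 / nrow(tab)                            # chance term, Brennan-Prediger
    c(gwet_AC1         = (po - pe_ac1) / (1 - pe_ac1),
      brennan_prediger = (po - pe_bp)  / (1 - pe_bp))
    ```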

    Internal consistency was assessed using Cronbach’s alpha for sections A and B of each questionnaire (online supplemental table 3), where items were meant to be interrelated. Values of Cronbach’s alpha greater than or equal to 0.70 were considered to indicate good internal consistency.22 34 Both reliability and internal consistency were assessed in a sample of volunteers from different regions across Italy.
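
    A minimal sketch of this check follows (R code on a simulated item-response matrix; the data and item count are hypothetical): Cronbach’s alpha is computed from the item variances and the variance of the section total score.

    ```r
    # Minimal sketch of the internal consistency check: Cronbach's alpha for one
    # questionnaire section, computed on a hypothetical item-response matrix
    # (rows = respondents, columns = interrelated items); >= 0.70 taken as good.
    cronbach_alpha <- function(items) {
      items <- as.matrix(items)
      k <- ncol(items)
      (k / (k - 1)) * (1 - sum(apply(items, 2, var)) / var(rowSums(items)))
    }

    set.seed(1)
    latent <- rnorm(50)                                          # simulated common construct
    items  <- sapply(1:10, function(i) latent + rnorm(50, sd = 0.8))
    cronbach_alpha(items)
    ```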

    A simple scoring system (online supplemental table 4) was developed based on examples in the literature and the authors’ previous experience.15 28 In developing the scoring system, the following key considerations were made. First, it was acknowledged that other recent scoring systems developed to describe QOC28 41 42 did not attribute different weights to different Quality Measures; in fact, it is difficult, if not impossible, to quantify the relative importance of different aspects of care (eg, antibiotic prescriptions vs respectful care), as all of these aspects are equally linked to human rights.13 Second, the literature suggests that a scoring system with values ranging from 0 to 100 is easier to understand than other ranges.28 Consequently, in the CHOICE scoring system each WHO Quality Measure was given the same weight, with a total score in each domain ranging from 0 to 100, thus allowing easy comparison across domains (eg, resources, experience, etc). This study did not aim at further testing the scoring system.
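
    The scoring logic can be sketched as follows (R code; the item coding and domain item counts are hypothetical, and the actual CHOICE scoring system is detailed in online supplemental table 4): every Quality Measure in a domain carries the same weight, and the domain score is rescaled to a 0–100 range.

    ```r
    # Sketch of the equal-weight, 0-100 domain scoring logic. The 0/1/2 item
    # coding and the example answers are hypothetical.
    domain_score <- function(item_scores, max_per_item = 2) {
      100 * sum(item_scores, na.rm = TRUE) /
        (sum(!is.na(item_scores)) * max_per_item)
    }

    resources  <- c(2, 2, 1, 0, 2, NA, 1)    # hypothetical answers, "resources" domain
    experience <- c(2, 1, 2, 2, 1, 1, 2, 0)  # hypothetical answers, "experience" domain
    c(resources = domain_score(resources), experience = domain_score(experience))
    ```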

    Data analysis

    For face validity, the required minimum sample size, calculated based on existing guidance,21 22 34 was 20 service users and 20 healthcare providers, assuming under the null hypothesis a K value of 0.3 and under the alternative hypothesis a K value of at least 0.7, with 80% power and a significance level of 2.5% with a one-tailed test. For reliability, assuming under the null hypothesis a K value of 0.45 and under the alternative hypothesis a K value of at least 0.65 (with proportions of 0.20, 0.30 and 0.50 in the three categories of the item), 80% power and a significance level of 2.5% with a one-tailed test, the required sample was 74 cases for each questionnaire. For internal consistency, assuming under the null hypothesis an alpha of 0.55 and under the alternative hypothesis an alpha of at least 0.70, 80% power, a number of items equal to 10 (to be conservative) and a significance level of 2.5% with a one-tailed test, a sample of 108 service users and 108 health professionals was required.
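
    As a hedged illustration of the internal-consistency calculation, one commonly used approximation for testing Cronbach’s alpha (Bonett, 2002) is sketched below in R; the exact formula used in the study is an assumption on our part, but with the stated inputs it gives a figure consistent with the 108 participants per group reported above.

    ```r
    # Approximate sample size for testing Cronbach's alpha (Bonett 2002-type
    # approximation). Inputs follow the stated assumptions: alpha0 = 0.55,
    # alpha1 = 0.70, k = 10 items, 80% power, one-tailed 2.5% significance.
    n_for_alpha <- function(alpha0, alpha1, k, power = 0.80, sig = 0.025) {
      z <- qnorm(1 - sig) + qnorm(power)
      (2 * k / (k - 1)) * z^2 / log((1 - alpha0) / (1 - alpha1))^2 + 2
    }
    n_for_alpha(0.55, 0.70, k = 10)  # approximately 108 per group
    ```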

    Summary statistics were presented as absolute frequencies and percentages, as the K statistic and other indexes of agreement (ie, Gwet’s AC1 and the Bennett and Brennan-Prediger coefficients of agreement), and as Cronbach’s alpha, as appropriate. All tests performed were two-tailed and a p value of <0.05 was considered statistically significant. Statistical analyses were performed using Stata V.14 and R V.3.6.1.

    Patient and public involvement statement

    Service users, selected on a voluntary basis, were involved in the development and validation of the CHOICE questionnaires. They had the opportunity to provide feedback on the health service user questionnaire and to express their preferences freely. Input received from patients was used to revise the content of the questionnaire, including reducing its length, and to improve acceptability.

    Results

    Phase 1: questionnaire draft development

    The process of prioritisation of the WHO Quality Measures resulted in the inclusion of 85 Quality Measures in the service user questionnaire and 80 Quality Measures in the service provider questionnaire. Considering additional items (ie, questions to assess sociodemographic characteristics of responders, and open questions), these first versions included 100 and 95 questions in total, respectively. The draft questionnaires were further assessed and optimised in the subsequent phases described below.

    Phase 2: content and face validity

    The Delphi process among experts optimised several questions, including questions on the management of diarrhoea, respiratory infections, fever and pain, and on the organisation of care. A few items were dropped and substituted with other WHO Quality Measures deemed more specific, relevant to the context of high-income and middle-income countries, and potentially actionable (eg, availability of clear criteria for hospitalisation for diarrhoea, constant availability of a minimum set of drugs to treat pain in children, non-pharmacological pain prevention). Specific questions required rewording after feedback from end-users.

    Since responders recommended reducing the length of the questionnaires, the total number of included WHO Quality Measures was slightly decreased. Specifically, 10 measures that were repeated in both questionnaires were dropped from the service user questionnaire, and 5 measures deemed less relevant by end-users and experts were dropped from the service provider questionnaire. The revised tools included 75 Quality Measures each, for a total of 150 WHO Quality Measures across the two instruments (online supplemental table 3).

    Results of the subsequent face validity test are reported in online supplemental table 5. More responders than expected based on the initial sample size calculation contributed to face validity, resulting in a final sample of 30 parents and 20 health providers. For most questions Cohen’s kappa could not be estimated, because no responder rated any question as not relevant or not appropriate; for the single question in each questionnaire for which it could be calculated, the kappa value was 1, indicating excellent face validity. There was therefore no need to further modify the questionnaires.

    The final version of the two questionnaires is reported in online supplemental annexes 1 and 2. The questionnaire for health workers included the following six sections: (A) physical resources for health workers (40 items); (B) organisation of work (25 items); (C) management of the COVID-19 pandemic (12 items); (D) overall satisfaction (two questions); a section collecting the sociodemographic characteristics of health workers; and a final section collecting feedback on the perceived utility and acceptability of the questionnaire. Similarly, the questionnaire for health service users included the following six sections: (A) physical resources for children and their caregivers (25 items); (B) experience of care (40 items); (C) management of the COVID-19 pandemic (10 items); (D) overall satisfaction (two questions); a section collecting the sociodemographic characteristics of responders; and a final section collecting feedback on the perceived utility and acceptability of the questionnaire. In each of the two questionnaires, sections A, B, C and D each included a final open question to collect responders’ suggestions on how to improve QOC (online supplemental table 3).

    Phase 3: field testing

    The field testing of the final version of the two questionnaires with 163 volunteers resulted in high response rates (98% for service users, 76% for service providers). Among respondents, 96.9% and 90.4%, respectively, rated the questionnaires as useful (online supplemental table 6). Overall, 86.9% and 93.9%, respectively, rated the questionnaire as comprehensive, with most responders considering all sections of the questionnaire important (83.1% and 75.8%, respectively).

    In the open field for recommendations for improvement we received several messages of appreciation and only one minor suggestion for revision. No other changes were therefore needed after field testing.

    Phase 4: reliability and internal consistency

    Findings on intrarater agreement are reported in online supplemental table 7. We received more answers than expected, resulting in a final sample of 95 parents and 75 service providers and a power of 89% and 88%, respectively. The value of Cohen’s kappa was at least 0.70 for all questions, with the exception of selected cases where the paradox of Cohen’s kappa (ie, low kappa values in the presence of a high degree of agreement) was observed, due to substantial imbalance in the table’s marginal totals.37 39 All additional indexes of agreement—Gwet’s AC1, the Bennett index and the Brennan-Prediger coefficient—indicated at least good agreement for all items (Gwet’s AC1 >0.60),40 with the exception of two questions with Gwet’s AC1 values of 0.55 and 0.60, respectively, which were rephrased by the team of experts to improve clarity.
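
    The kappa paradox mentioned above can be illustrated numerically with a hypothetical, strongly imbalanced test–retest table (R sketch, not study data): observed agreement is high while Cohen’s kappa stays low, whereas Gwet’s AC1 remains high.

    ```r
    # Hypothetical illustration of the kappa paradox: 91% observed agreement,
    # yet a low kappa because the marginal totals are heavily imbalanced.
    tab <- matrix(c(90, 4, 5, 1), nrow = 2,
                  dimnames = list(test = c("yes", "no"), retest = c("yes", "no")))
    n   <- sum(tab)
    po  <- sum(diag(tab)) / n                          # 0.91 observed agreement
    pe_kappa <- sum(rowSums(tab) * colSums(tab)) / n^2
    pi1      <- (sum(tab[1, ]) + sum(tab[, 1])) / (2 * n)
    pe_ac1   <- 2 * pi1 * (1 - pi1)
    c(kappa    = (po - pe_kappa) / (1 - pe_kappa),     # ~0.13
      gwet_AC1 = (po - pe_ac1)   / (1 - pe_ac1))       # ~0.90
    ```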

    Internal consistency findings are reported in online supplemental table 8. We received more answers than expected, resulting in a final sample of 193 parents and providing a power of 96.4%. The Cronbach’s alpha values were 0.84 and 0.83 for sections A and B of the service user questionnaire, respectively, while for the service provider questionnaire the values were 0.95 and 0.85, indicating very high internal consistency for both questionnaires.21–23

    Discussion

    Collecting service users’ and service providers’ views on paediatric QOC is critical for improving it. This paper presents the first results of the long process of designing, developing and validating two questionnaires that comprehensively collect data on 150 WHO Quality Measures13 for measuring QOC for children and young adolescents at hospital level. The ultimate objective of these tools is to help department directors and other policy makers understand what works well and what needs to be improved in facilities where children and adolescents receive healthcare. The availability of a unified, comprehensive approach to measuring QOC for children at facility level, as defined by the WHO Standards13 and using validated tools, will allow comparison of data across settings and over time and will enhance efforts to improve paediatric QOC.

    We believe that the process we adopted had several strengths. It included multiple phases, based on existing recommendations on health questionnaire development22–26 and on guidance for evaluating patients’ experience of care.27 28 The initial questionnaires were optimised through a sequence of logical steps, which included several rounds of revision after feedback from international experts and end-users, field testing, and formal statistical assessment of the relevant psychometric properties of the tools. Other questionnaires previously developed and used in recent large surveys did not go through all these steps.43 Interestingly, a recent systematic review emphasised the lack of clear, scientifically sound recommendations on methods to validate patient-reported outcome measures.44

    As a limitation of this study, we acknowledge that the sample used for validation only included professionals and parents from Italy. The questionnaires and the scoring system are now undergoing additional validation and field testing in Brazil and in other countries; results of these ongoing efforts will be reported separately. Another priority area for future research is documenting how these tools can best be used to drive a quality improvement process. In the future, the questionnaires may also be further adapted for use in large ‘quick’ online surveys.

    The two questionnaires intentionally aimed at assessing perceived inpatient QOC for children and young adolescents from the service user and service provider perspectives. As such, they may have the limitations of excluding older adolescents, excluding outpatient and low-income settings, and capturing only perceptions of QOC. We believe that no single tool can fit all purposes while retaining acceptability. Most tools to measure QOC use surveys of service users, since this is an important perspective.14 15 18 28 42 43 Further research is needed to develop tools that cover the populations and settings excluded by the two questionnaires described in this study.

    As anticipated in the introduction, to allow triangulation of data from different sources, we developed a third, complementary tool aiming at collecting key WHO Quality Measures on the provision of paediatric care from official hospital patient records. The three tools were conceived and developed in parallel and, when used together, aim at collecting 170 WHO Quality Measures on paediatric QOC.13 Findings of the development and validation of this third tool will be reported separately.

    The scoring system should be regarded only as a complementary (not a substitute) way to quantitatively summarise paediatric QOC, and its results should always be interpreted alongside the detailed results for the whole list of indicators collected. The properties of the scoring system will be evaluated in future studies.

    Conclusions

    This study suggests that the two WHO standards-based tools developed have good content and face validity, high acceptability and perceived utility, and good intrarater reliability and internal consistency, and could therefore be used in health facilities in Italy and similar contexts. Priority areas for future research include how tools measuring paediatric QOC can be more effectively used to help health professionals provide the best possible care.

    Data availability statement

    All data relevant to the study are included in the article or uploaded as supplementary information. The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials.

    Ethics statements

    Patient consent for publication

    Ethics approval

    This study involved human participants. The CHOICE study was approved by the Ethical Committee of the Friuli Venezia Giulia Region (protocol number 0035348) and by the ethics committees of all 12 participating centres in Italy and Brazil. Participants in the validation and field testing of the CHOICE questionnaires were informed about the objectives and methods of the study, including their right to decline participation, and signed an informed consent form before responding to the questionnaires. Anonymity in data collection was ensured by not collecting any information that could disclose participants’ identities.

    Acknowledgments

    We would like to thank all project partners and volunteers who helped in the development of the tool.

    References

    Supplementary materials

    • Supplementary Data

      This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

    Footnotes

    • Collaborators CHOICE Study Group: Claudio Germani MD (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Angelika Velkoski MD (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Elia Balestra MD (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', University of Trieste, Trieste, Italy), Benmario Castaldo MD (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', University of Trieste, Trieste, Italy), Alice Del Colle (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', University of Trieste, Trieste, Italy), Emanuelle Pessa Valente PhD (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Giorgio Cozzi MD (Emergency Department, Institute for Maternal and Child Health IRCCS 'Burlo Garofolo', Trieste, Italy), Alessandro Amaddeo MD (Emergency Department, Institute for Maternal and Child Health IRCCS 'Burlo Garofolo', Trieste, Italy), De Monte Roberta Coordinatrice (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Tamara Strajn Coordinatrice (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Livia Bicego MD (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Andrea Cassone PO (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Silvana Schreiber PO (Institute for Maternal and Child Health - IRCCS 'Burlo Garofolo', Trieste, Italy), Ilaria Liguoro MD (Department of Medicine DAME-Division of Pediatrics, University of Udine, P.zzale S Maria della Misericordia, 15, 33100, Udine, Italy), Chiara Pilotto MD (Department of Medicine DAME-Division of Pediatrics, University of Udine, P.zzale S. Maria della Misericordia, 15, 33100, Udine, Italy), Lisa Stavro MD (Department of Medicine DAME-Division of Pediatrics, University of Udine, P.zzale S. 
Maria della Misericordia, 15, 33100, Udine, Italy), Chiara Stefani MD (Pediatric Unit, Ca' Foncello's Hospital, 31100 Treviso, Italy), Paola Moras MD (Pediatric Unit, Ca' Foncello's Hospital, 31100 Treviso, Italy), Marcella Massarotto (Pediatric Unit, Ca' Foncello's Hospital, 31100 Treviso, Italy), Paola Crotti (Pediatric Unit, Ca' Foncello's Hospital, 31100 Treviso, Italy), Benedetta Ferro (Pediatric Unit, Ca' Foncello's Hospital, 31100 Treviso, Italy), Riccardo Pavanello (Pediatric Unit, Ca' Foncello's Hospital, 31100 Treviso, Italy), Silvia Bressan MD (Pediatric Emergency Unit - Department of Woman's and Child Health, University of Padova, Italy), Marta Arpone PhD (Diagnosis and Development, Murdoch Children's Research Institute, Royal Children's Hospital, Melbourne, VIC, Australia), Silvia Fasoli MD (Paediatric Unit, Carlo Poma Hospital, Mantua, Italy), Pelazza Carolina, MSc (Infrastruttura Ricerca Formazione Innovazione, AO SS Antonio e Biagio e Cesare Arrigo, Alessandria, Italy), Francesco Tagliaferri MD (Division of Pediatrics, Department of Health Sciences, University of Piemonte Orientale, Novara, Italy), Marta Coppola MD (Division of Pediatrics, Department of Health Sciences, University of Piemonte Orientale, Novara, Italy), Chiara Grisaffi MD (Division of Pediatrics, Department of Health Sciences, University of Piemonte Orientale, Novara, Italy), Elisabetta Mingoia MD (Division of Pediatrics, Department of Health Sciences, University of Piemonte Orientale, Novara, Italy), Idanna Sforzi MD (Emergency Department and Trauma Center, Meyer Children’s Hospital, Viale Pieraccini 24, 50139, Florence, Italy), Rosa Santangelo Inf (Emergency Department, Meyer Children’s Hospital, Viale Pieraccini 24, 50139, Florence, Italy), Andrea Iuorio Inf (Emergency Department, Meyer Children’s Hospital, Viale Pieraccini 24, 50139, Florence, Italy), Sara Dal Bo MD (Department of Pediatrics, 'S. Maria delle Croci” Hospital, AUSL della Romagna, Ravenna, Italy), Federico Marchetti MD (Department of Pediatrics, “S. Maria delle Croci' Hospital, AUSL della Romagna, Ravenna, Italy), Vanessa Martucci MD (Pediatric and Neonatology Unit, Maternal and Child Health Department, 'La Sapienza' University of Roma – Hospital 'Santa Maria Goretti' of Latina, Roma, Italy), Mariateresa Sanseviero MD (Pediatric and Neonatology Unit, Maternal and Child Health Department, 'La Sapienza' University of Roma – Hospital 'Santa Maria Goretti' of Latina, Roma, Italy), Bloise Silvia MD (Pediatric and Neonatology Unit, Maternal and Child Health Department, 'La Sapienza' University of Roma – Hospital 'Santa Maria Goretti' of Latina, Roma, Italy), Alessia Marcellino MD (Pediatric and Neonatology Unit, Maternal and Child Health Department, 'La Sapienza' University of Roma – Hospital 'Santa Maria Goretti' of Latina, Roma, Italy), Annunziata Lucarelli MD (Giovanni XXIII Pediatric Hospital, Pediatric Emergency Department, University of Bari, Bari, Italy), Eleonora Canzio MD (Giovanni XXIII Pediatric Hospital, Department of Pediatrics, University of Bari, Bari, Italy), Roberta Parrino MD (Pediatric Emergency Unit, Maternal and Child Department, Arnas Civico, Palermo, Italy), Salvatore Gambino (Pediatric Maternal and Child Department, Arnas Civico, Palermo, Italy), Melania Guardino MD (Department of Neonatology and NICU, University Hospital Policlinico P. Giaccone, Palermo, Italy), Luca Lagalla MD (Department of Sciences for Health Promotion and Mother and Child Care 'G. D'Alessandro', University of Palermo, Via A. 
Giordano 3, 90127, Palermo, Italy), Beatrice Vaccaro (Pediatric Maternal and Child Department, Arnas Civico, Palermo, Italy), Giuseppina de Rosa (Pediatric Maternal and Child Department, Arnas Civico, Palermo, Italy), Vita Antonella Di Stefano MD (Pediatric and Pediatric Emergency Room Unit, 'Cannizzaro' Emergency Hospital – Catania, Italy), Francesca Patané MD (Pediatric Postgraduate Residency Programme, Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy).

    • Contributors ML is the guarantor and conceived the study, in dialogue with EB, MM and WMW. IM analysed the data, with input from the other authors. ML, IM, TRdMeL, EF, SM, RL, AL, GLT, PC, FP, DN, WMW, VB, MM and EB participated in the questionnaires’ development and/or in other steps of the tools’ validation. ML wrote the first draft. All authors revised the paper until its final version.

    • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

    • Disclaimer The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated.

    • Competing interests None declared.

    • Provenance and peer review Not commissioned; externally peer reviewed.

    • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.