Abstract
Objectives To evaluate the feasibility of a quality improvement programme aimed at enhancing the client-centeredness, effectiveness and transparency of physiotherapy services by addressing three feasibility domains: (1) acceptability of the programme design, (2) appropriateness of the implementation strategy and (3) impact on quality improvement.
Design Mixed methods study.
Participants and setting 64 physiotherapists working in primary care, organised in a network of communities of practice in the Netherlands.
Methods The programme contained: (1) two cycles of online self-assessment and peer assessment (PA) of clinical performance using client records and video-recordings of client communication followed by face-to-face group discussions, and (2) clinical audit assessing organisational performance. Assessment was based on predefined performance indicators which could be scored on a 5-point Likert scale. Discussions addressed performance standards and scoring differences. All feasibility domains were evaluated qualitatively with two focus groups and 10 in-depth interviews. In addition, we evaluated the impact on quality improvement quantitatively by comparing self-assessment and PA scores in cycles 1 and 2.
Results We identified critical success features relevant to programme development and implementation, such as clarifying expectations at baseline, training in PA skills, prolonged engagement with video-assessment and competent group coaches. Self-reported impact on quality improvement included awareness of clinical and organisational performance, improved evidence-based practice and client-centeredness, and increased motivation to self-direct quality improvement. Differences between self-scores and peer scores on performance indicators were not significant. Between cycles 1 and 2, scores for record keeping improved significantly, but scores for client communication did not.
Conclusions This study demonstrated that bottom-up initiatives to improve healthcare quality can be effective. The results justify ongoing evaluation to inform nationwide implementation, provided the critical success features are addressed. Further research is necessary to explore the sustainability of the results and the impact on client outcomes in a full-scale study.
- Peer assessment
- Self-assessment
- Record keeping
- Communication
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
This study evaluated the feasibility of a quality improvement programme based on what physiotherapists do in their day-to-day practice by assessing client records, client communication and management information.
The quality improvement programme and its implementation were theory-based and evidence-based; stakeholders and end-users were actively involved.
The results provided meaningful information on the critical success features of the programme, relevant to more rigorous evaluation and nationwide implementation.
In the beginning, some participants were unwilling or unable to expose their professional behaviours for online assessment; therefore, data on the improvements made are limited.
Introduction
Healthcare professionals and provider organisations have an ethical and professional obligation to strive for continuous quality improvement of services. When healthcare professionals are able to self-regulate and account for the quality of their services, they perceive control over the quality improvement strategies and the outcome measures used, in contrast to external regulation. Professionals often resist external audits; they fear a deterioration of their professional identity and an increase in administrative burden. Moreover, external regulations can be effective, but the evidence on the sustainability of the results is not convincing, and the strategy might induce unwanted consequences such as undertreatment of clients with multimorbidity and disparities in healthcare delivery.1–3
Research has shown that bottom-up quality improvement initiatives, such as communities of practice and professional networks focusing on collaborative learning, might yield better and more sustainable results than external, top-down regulations4–6 because shared social and professional norms are important predictors of behaviour change.7,8 A condition for successful self-regulation is that professionals share the quality standards of their services and demonstrate the willingness and ability to critically appraise their own and their colleagues' performance.4,6,9 The literature shows that quality improvement programmes targeting self-regulation should not be limited to individual healthcare professionals, but should also involve teams and provider organisations to align the desired processes and outcomes.10 Clinical governance has been introduced as a multilevel approach to quality improvement, bridging the gap between managerial and clinical approaches. The approach allows for early spotting of poorly performing clinicians, teams and organisations to support self-regulated quality improvement.4,11,12 Following this approach, we developed and tested the feasibility of a programme combining self-assessment, peer assessment (PA) and clinical audit as a strategy to improve the quality of physiotherapy services.
Self-assessment is the process whereby professionals reflect on their clinical and organisational performance according to quality indicators.13 In PA—also known as peer review—professionals evaluate, or are evaluated by, their peers and provide each other with performance feedback.14 The aim of PA is to guide the self-directed quality improvement process towards mutually accepted performance standards.15 A cornerstone of the PA strategy is to raise awareness of clinical performance to inform self-assessment16 and to develop a critical attitude towards the process and outcomes of healthcare by introducing professionals to an ‘assessor’ or ‘auditor’ perspective.17 A systematic review by Fox et al18 on PA practices in healthcare demonstrated that peer review was associated with measurable performance improvement of healthcare professionals on several outcome levels and in a variety of competency domains. Clinical audit is a common strategy for quality improvement at the level of provider organisations. Aspects of the structure, processes and outcomes of care are selected and systematically evaluated against explicit criteria by trained colleagues. The method has proven effective in primary care.15,19 Although PA and clinical audit are well-studied strategies,20–22 our programme design is innovative because it focuses on assessment of authentic clinical behaviours and integrates clinical performance and organisational performance assessment.
We used the framework of the Medical Research Council 2015 to develop, test and implement the programme.23 The framework recommends a feasibility and piloting phase to allow for optimising the programme design and implementation prior to evaluating effectiveness in a larger study. Online supplementary appendix 1 shows the details of the design process, including the development of performance indicators. This study addresses the evaluation of three feasibility domains: (1) acceptability of the programme for quality improvement purposes, including strengths and weaknesses, (2) appropriateness of the implementation strategy to execute the programme as intended, including barriers and facilitators, and (3) impact of the programme on quality improvement and professional behaviour change.23,24
Aim
The aim of this study was to evaluate the feasibility of a quality improvement programme aimed at enhancing the effectiveness, client-centeredness and transparency of physiotherapy services, to allow for optimising the programme design and implementation prior to more rigorous evaluation and nationwide implementation.
Methods
Design
Programme feasibility was evaluated with mixed methods using qualitative and quantitative data.24
Subjects and setting
We tested our programme with physiotherapists working in primary care clinics, organised in a regional network of communities of practice in the Netherlands. In communities of practice, professionals share the same interests, setting or specialisation. Formally registered physiotherapist networks were invited by the Royal Dutch Society for Physical Therapy (KNGF) via a digital newsletter. Participation was voluntary and was rewarded with 30 accreditation points for the quality register. To facilitate the implementation process, we used two knowledge brokers (FT and HK) as the linking pin between researchers and participants,25–27 trained PA coaches (n=5) to support the PA process and trained auditors (n=3). The knowledge brokers were leaders of the professional network and were part of the stakeholder group. Coaches and auditors were members of the professional network, recruited by the knowledge brokers.
Programme content
Assessment addressed three performance domains: (1) record keeping, (2) client communication and (3) organisation and management. These domains related to the three quality domains of client-centeredness, effectiveness (including evidence-based practice) and transparency of physiotherapy services. The programme contained two cycles of PA with an interval of 4–6 weeks, followed by one cycle of clinical audit. Assessment was based on predefined performance indicators which could be scored on a 5-point Likert scale. Online supplementary appendix 1a–c presents the performance indicators and their relationships with the three quality domains. Online supplementary appendix 2 shows how all programme activities were scheduled.
Assessment of clinical performance included online self-assessment and PA of (1) client records and (2) video-recordings, followed by (3) face-to-face discussion of the results, supported by a trained group coach. Participants were assigned to upload one electronic client record and one video-recording of client communication—limited to the discussion of the diagnosis and treatment plan—in cycles 1 and 2. Before assessing their peers, the participants self-assessed their performance using the same indicators. Peers provided online scores and, where relevant, written improvement feedback. Discrepancies in scores were used as input for the subsequent discussions. After the first PA session, participants designed an online personal improvement plan; during the second session they reflected on the improvements made. Participants who objected to uploading videotapes were allowed to opt for role-playing instead; they simulated the client conversation and their performance was assessed on the spot.
PA coaches received a programme guide and two additional training sessions conducted by professional trainers (MJMM and PJvdW) using samples of client records, video-recordings and role-play to train the process of providing, receiving and using feedback.
Clinical audit was scheduled after all PA activities were completed. A convenience sample of four private clinics was invited to participate. These clinics provided online management information according to the programme guidelines and self-assessed their organisational performance using an online scoring sheet (see online supplementary appendix 1c). Clinical audit included inspection of the premises, assessment of two randomly selected client records and discussion of self-assessment scores for management and organisation. Afterwards, a report was written according to a structured reporting format. Participants were invited to comment on the clinical audit report before it was finalised. Auditors received a programme guide and training conducted by professional auditors (MO and MB) using worked samples and role-play to train the auditing process.
Website
We developed a web-based assessment system that allowed for (1) downloading programme guides and instruction manuals, (2) uploading assessment materials such as client records, video-recordings, management information and improvement plans, (3) online scoring, (4) downloading assessment results and (5) storing and exporting qualitative and quantitative data. See online supplementary appendix 1 for additional information.
Programme delivery
A research team member (FD) was the programme manager. She provided participants with a programme guide including a manual for uploading and downloading assessment material and guidelines for providing and using quality improvement feedback.16,22,28–31
Ethical issues
All participants gave their online informed consent. Clients providing video-recordings and client records gave their written informed consent. This study was approved by the Medical Ethical Committee Arnhem Nijmegen (CMO): 2015–1797.
Evaluation of programme feasibility
All three feasibility domains were explored with focus groups, in-depth individual interviews and written coach reports aiming for saturated information from multiple perspectives to optimise the credibility and transferability of the results.32 In addition, we evaluated the impact on quality improvement quantitatively by comparing self-assessment and PA scores, and cycle 1 and cycle 2 scores.
Data sampling and analyses of interviews
We aimed to bring together coaches, clinic visitors and knowledge brokers in separate focus groups to explore their experiences in performing the same role. Individual participants were purposively sampled for in-depth interviews, including all participants in the clinical audits. The website provided us with data to identify and select minimally, moderately and highly active PA participants. They were approached by email. An interview guide was designed by the research team (MJMM, FD, PJvdW, MWGNvdS) addressing the three feasibility domains, tailored to the participant role. Focus groups lasted 90–100 min and were conducted face-to-face by MJMM (MSc, educational scientist, performance assessment research) and PJvdW (PhD, movement scientist, evidence-based practice research) using open-ended questions allowing for group discussion and knowledge construction. In-depth interviews permitted us to explore thoughts and feelings that might not easily be shared with colleagues but are relevant to understanding participants' behaviour; they were conducted by MJMM or FD (MSc, health scientist, quality of healthcare research) using teleconferencing technology and lasted 50–60 min. Interviews of all participants, including verbal consent, were audiotaped and transcribed verbatim afterwards. The analytic process was guided by ‘template analysis’, which combines a priori codes informed by the research questions with codes emerging from the interview data.33 PJvdW and MJMM independently studied and coded five transcripts. Differences in coding were discussed, and a code book was created based on consensus. Subsequently, all transcripts were analysed line-by-line using ATLAS.ti v.7 software. Codes were compared and some codes were merged into higher order codes. Emerging themes were identified by constant comparison of codes and higher order codes. Finally, we summarised the results relevant to ongoing programme development and implementation.34 To increase the credibility of the results, peer debriefing and member checking procedures were conducted with research group members FD and MWGNvdS (PhD, allied healthcare scientist, healthcare quality research) and with knowledge brokers and stakeholders.
Data sampling and analyses of scores
Online scores for record keeping and client communication in the first and second assessment cycles were imported into IBM SPSS Statistics 22. Indicators scored as ‘not relevant’ or ‘not applicable’ were treated as missing values.
The mean and median scores for each performance indicator and for each performance domain were calculated for self-scores and peer scores, as well as the percentages of missing values. We used the Wilcoxon signed-rank test to calculate differences between self-assessment and PA scores, and between cycle 1 and cycle 2 scores, including p values for statistical significance.
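For readers who want to reproduce this type of analysis outside SPSS, the sketch below illustrates the same steps in Python with pandas and SciPy: recoding ‘not relevant’/‘not applicable’ indicator scores as missing, summarising scores per performance domain and comparing paired scores with the Wilcoxon signed-rank test. The file name, column names and missing-value codes are illustrative assumptions, not the study's actual data structure or analysis syntax.

```python
# Minimal sketch (assumption: not the authors' actual SPSS workflow or data layout).
# Mirrors the analysis described above: indicator scores on a 1-5 Likert scale,
# 'not relevant'/'not applicable' recoded as missing, descriptives per performance
# domain, and a paired Wilcoxon signed-rank comparison of self- versus peer scores.

import pandas as pd
from scipy.stats import wilcoxon

# Hypothetical long-format export: one row per participant x indicator x cycle,
# with columns: participant, domain, indicator, cycle, self, peer.
scores = pd.read_csv("assessment_scores.csv")  # assumed file name

# Treat 'not relevant' / 'not applicable' codes as missing values.
scores[["self", "peer"]] = scores[["self", "peer"]].replace({"NR": None, "NA": None})
scores["self"] = pd.to_numeric(scores["self"], errors="coerce")
scores["peer"] = pd.to_numeric(scores["peer"], errors="coerce")

# Mean and median per performance domain and cycle, plus percentage missing.
descriptives = scores.groupby(["domain", "cycle"])[["self", "peer"]].agg(["mean", "median"])
pct_missing = scores.groupby(["domain", "cycle"])[["self", "peer"]].apply(
    lambda d: d.isna().mean() * 100
)
print(descriptives)
print(pct_missing)

# Wilcoxon signed-rank test on complete self/peer pairs within cycle 1.
cycle1 = scores[scores["cycle"] == 1].dropna(subset=["self", "peer"])
stat, p = wilcoxon(cycle1["self"], cycle1["peer"])
print(f"Self vs peer (cycle 1): W = {stat:.1f}, p = {p:.3f}")
```

The cycle 1 versus cycle 2 comparison described above would follow the same pattern, pairing each participant's scores across the two cycles before applying the test.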
Results
In total 64 physiotherapists took part in the programme. Twelve peer groups were formed based on specialisation, each containing four to six participants. Eleven peer groups participated in online PA; one group used printed scoring sheets. Three group clinics and one solo clinic participated in clinical audits. Table 1 shows an overview of participants' characteristics.
Participant demographics and characteristics
Qualitative results
We conducted two focus groups and 10 in-depth interviews, reaching data saturation. Results are discussed using predefined categories, with references to quotes labelled by number and participant role (KB=knowledge broker, C=coach, V=visitor, P=participant). Quotes are presented in table 2. We identified strengths and weaknesses of the programme design, implementation barriers and facilitators, the impact on quality improvement and critical success features relevant to programme development and implementation. The results are summarised in table 3.
Quotes of participants
Summary of findings
Acceptability of the programme design
General perceptions
At the beginning, participants were sceptical regarding the feasibility of the programme aims and procedures. Frustrated by the quality demands of health insurers, they were not seeking an extra administrative burden. However, their views changed over the course of the programme (Q1-P7). Looking back, participants were positive about the programme, because it focused on their core business and uncovered ‘what happens behind closed doors’ (Q2-P4).
Despite the guidelines for constructive feedback, providing and receiving feedback were not self-evident. Participants struggled with critically appraising their peers. Being insecure about their own performance, they were cautious in providing critical feedback, resulting in ‘halo marking’ (scores that were too high), as supported by the quantitative data (Q3-P6). A safe setting in which mistakes were allowed was perceived as a precondition for critical peer appraisal. Regarding receiving feedback, some participants faced difficulties with responding to it adequately (Q4-P9). Participants were unanimous in their view that feedback should be critical to enable meaningful improvement. Compliance with programme guidelines and shared responsibility for group learning were perceived as critical to programme efficacy (Q5-KB). Although participants generally accepted the programme for quality improvement purposes, some of them reported that client records and videotapes might present an overly optimistic picture of clinical practice because they were self-selected (Q6-P8).
Assessment of client communication
Some participants objected to making video-recordings, unwilling to put an unnecessary load on their clients, worrying about client privacy and assuming that clients would not consent. Although clients rarely objected—in contrast to participants' presumptions—online assessment activities in cycle 1 were limited (Q7-C). Initial reluctance disappeared when participants personally experienced the added value of video-assessment by simply ‘doing it’, or by watching others ‘doing it’; worked samples enhanced the acceptability. It became clear that peer groups needed time and deliberate practice to get used to video-assessment and to feel safe enough to expose their clinical performance. Participants who preferred video-assessment to client records argued that this instrument makes it possible to observe what physiotherapists ‘do’ instead of what they ‘say they do’. They agreed that ‘taking a look inside’ provided valuable information, for example by making attitudes observable (Q8-P6). On the one hand, video-recordings allowed for modelling the professional behaviours of skilful colleagues; on the other hand, unwanted behaviour became transparent, triggering suggestions for alternative behaviours, especially regarding the efficiency of chronic disease management (Q9-P9).
Although online peer scores did not always reflect the ability or willingness of participants to critically appraise their peers, feedback quality increased during the sessions through comparison of self-perceptions with peer perceptions and discussion of quality standards of performance. Participants who consciously selected their best videotape could be confronted with different views on quality indicators (Q10-P1).
Participants who preferred assessment of client records to video-recordings argued that they felt uncomfortable with the knowledge that their conversation was recorded (‘audience effect’) or that a ‘snapshot’ poorly represents the process of patient management (Q11-P2).
PA of client records
Assessment of record keeping was valued because, unlike videotapes, client records present the process of patient management, allowing assessment of clinical reasoning and decision-making, such as the application of clinical practice guidelines and the use of client-reported outcome measures and performance outcome measures. Here again, face-to-face discussions were critical to an in-depth understanding of quality indicators, such as those for evidence-based practice. One of the knowledge brokers noticed, for example, that ‘clicking on the guideline button’ in the electronic record system, indicating the use of a particular guideline, was no guarantee of adequately ‘applying’ the guideline in the specific context of the patient problem (Q12-KB).
In contrast to feedback provided by professional auditors, peer feedback was perceived as a good vehicle to self-direct improvement (Q13-P3).
Clinical audit
Participating private clinics all appreciated the clinical audits. They reported that they were ‘pretty nervous’ in advance, but valued the safe setting, which allowed for discussion of strengths and weaknesses, provided them with feedback to guide improvement of management and organisation towards the quality standards, and gave them back responsibility and ownership (Q14-P4; Q15-P5). They all agreed with their audit reports, providing only minor comments, although the reports were perceived as more formal than the audits themselves.
Appropriateness of the implementation strategy
Motivational issues
At baseline, participants were poorly informed about the programme aims, intended outcomes and consequences. Frustrated by the dominant role of insurers in quality control, participants were suspicious about whose interests were being served by their extra efforts, which affected their motivation to participate. Alignment of expectations might have prevented misconceptions and enhanced motivation to invest time and effort (Q16-P7).
Communication technology support
Although the website was improved continuously throughout the programme, it was not perceived as user-friendly, causing feelings of frustration (Q17-P10). Despite the supply of a user manual, peer support and learning by doing turned out to be more effective.
The role of knowledge brokers, group coaches and auditors
Although the knowledge brokers were involved in writing the programme guide, they did not succeed in adequately informing the participants. Their role as linking pin between researchers and clinicians required advanced communication and leadership skills (Q18-KB).
The role of the coach was perceived as crucial in facilitating critical reflection and an in-depth understanding of quality standards. However, some group coaches had to deal with the ‘wait and see’ attitude of some participants who did not provide online materials in time. These coaches lacked the coaching skills to support active participation and shared responsibility for group learning (Q19-C).
Some clinic auditors struggled with their role identity. They were trained to communicate what they observed regarding the quality indicators; as such, they felt competent to provide information on ‘what’ could be improved (feedback), but not on ‘how’ (feed-forward), which appealed to a counsellor role rather than an auditor role (Q20-V).
Impact on quality improvement and professional behaviour change
The programme had an impact on different levels of professional practice, providing feedback to individuals, peer groups and clinics. Regarding professional development, positive feedback enhanced self-efficacy beliefs and motivation to participate in continuing PA activities. Intentions to change behaviour focused on guideline adherence, performance measurement and client-reported outcome measurement (Q21-P2; Q22-P9; Q23-P1). At the level of organisation and management, participants reported improved awareness of strengths and weaknesses and increased belief in the change capacity of the programme (Q14-P4; Q15-P5).
The collaboration between the research team and the network of participating physiotherapists resulted in context-specific knowledge, relevant to ongoing quality improvement activities. The network committed to continue with PA and clinical audits, intending to address its critical success features (Q24-KB).
Quantitative results
Table 4 presents the online data uploaded to the website, showing that online activities varied widely and that participation in cycle 1 was substantially lower than in cycle 2. Perceived barriers to online activities are reported in the qualitative results section. Except for record keeping in cycle 1, peer scores were higher than self-scores, but the differences were not significant. Since participants' online activities were low in cycle 1, data on the improvements made are limited. As shown in the shaded area of table 4, differences between cycles 1 and 2 were not significant for client communication, but were significant for record keeping, especially for the lower performers at baseline. Note that these differences relate only to the limited number of participants who were active in both cycles.
Differences between self-assessment and peer assessment scores and between cycle 1 and cycle 2 scores, tested with the non-parametric Wilcoxon signed-rank test (Likert scale 1–5)
Discussion
This study focused on the feasibility of a quality improvement programme aiming to enhance the client-centeredness, effectiveness and transparency of physiotherapy services. The qualitative results showed that participants viewed the programme as an acceptable intervention for quality improvement purposes, allowing for stepwise, self-directed quality improvement unlike the one-shot assessments of external auditors. We identified its critical success features, such as training in performance appraisal and time to build a safe setting. Regarding the appropriateness of the implementation strategy to execute the programme as intended, participants reported several facilitators and barriers, allowing us to identify critical success features for broader implementation, such as adequate communication of programme aims and intended outcomes at baseline, user-friendliness of the website design and competent group coaches. The weaknesses of the programme design and the barriers to programme implementation affected the impact on quality improvement and behaviour change. However, we identified meaningful self-reported results including awareness of clinical and organisational performance, improved evidence-based practice and client-centeredness and increased motivation to self-direct quality improvement. The quantitative results showed that online activities were low in cycle 1, providing limited data on the improvements made in cycle 2. Despite the limited data, we observed significant improvement of self-scores and peer scores for record keeping.
When we look at programme acceptability, participants' views on the validity and the learning value of video-recordings and client records differed. We suggest that the acceptability of videos could be improved. Instead of using two single video clips, perceived as ‘snapshots’, several video-recordings would provide more valid information, as shown by a study by Ram et al.35 However, that involves additional time and costs and might threaten long-term feasibility. Assuming that each instrument to assess professional performance (standardised clients, direct observation, multisource feedback) has its advantages and disadvantages, and that there is no single best measure, as shown by a systematic review by Overheem et al,36 the use of multiple measures is justifiable and even desirable for the purpose of gathering valid and reliable information on clinical competence.37
Regarding programme implementation, we assume that the sociopolitical context—the dominant role of health insurers in quality assurance—impacted heavily on commitments to change and outcome expectancies.38 Although PA aimed to provide formative feedback, emphasising learning and improvement, it was viewed as summative assessment because physiotherapists questioned whose interests their efforts served. Improved communication at baseline might have enhanced participants' motivation and adherence to programme guidelines. Moreover, external, top-down empowerment, a trade-off between trust and control, might be critical to successful outcomes in the long term, as recognition of professional accomplishment and innovation is a strong motivator of improvement.1
Looking at the impact on quality improvement, we observed that peer scores for client communication were consistently higher than self-scores, demonstrating that participants either underestimated their own performance or overestimated that of their peers. In contrast to the literature on self-assessment, which shows that physicians generally overestimate themselves,39 we assume that feelings of insecurity underlie both the overestimation and the underestimation in this case, as supported by the qualitative data. Extended exposure to critical appraisal and reinforcement of constructive feedback practices could strengthen self-efficacy beliefs, in line with Bandura's cognitive learning theory.40 The results also showed that the programme was more effective in enhancing record keeping skills than communication skills. Apparently, communication skills take more time or effort to develop than the time span of the programme allowed. This assumption is supported by feedback intervention theory,31,41 which explains that the effectiveness of performance feedback is lower when ‘task novelty’ and ‘task complexity’ are higher. Trained by audits of health insurers, participants were more familiar with assessment of record keeping. Moreover, the literature shows that clinical competency is content and context specific, meaning that competent (complex) behaviour in one case (cycle 1) is a poor predictor for another case (cycle 2),42,43 and this also applies to communication skills.44 Although this programme was not intended to produce generalisable scores, we suggest that prolonged engagement with video-assessment would yield better outcomes.
Strengths and limitations
This study evaluated what physiotherapists do in their day-to-day practice by assessing client records, video-recordings and management information. The quality improvement programme was systematically developed, and theory-based and evidence-based. Stakeholders and end-users were actively involved in programme development and implementation, and their experiences provided meaningful information on its critical success features. Participants did not fully adhere to the programme guidelines, resulting in limited sample sizes and threatening the internal and external validity of the quantitative results. It should also be noted that we could not distinguish between ‘missing’ and ‘not relevant or not applicable’ indicator scores of active PA participants, which might have biased the results. Although the generalisability of the quantitative results is limited to the specific population of Dutch physiotherapists in primary care, we think that the qualitative results related to the acceptability and the implementation of the quality improvement programme offer learning points for a broader group of healthcare professionals.
Conclusions
This study demonstrated that bottom-up quality improvement initiatives can be effective in improving healthcare quality. The results justify more rigorous evaluation to inform nationwide implementation, provided its critical success features are addressed. The willingness of professionals and organisations to provide access to the confidential areas of their clinical practice is crucial. However, this information is vulnerable to summative judgement and should be protected by all stakeholders in healthcare quality. Further research is necessary to explore the sustainability of the results and the impact on client outcomes in a full-scale study.
Acknowledgments
The authors thank Marielle Ouwens (MO) and Menno Bouman (MB) for their input as professional auditors, the stakeholder group for their input in programme development, Frits van Trigt (FT) and Han Kingma (HK) for their efforts as knowledge brokers, all coaches and physiotherapists participating in the Network FysioGroep Haaglanden, and Carol and David DeFields for proofreading the manuscript.
References
Footnotes
Collaborators Maria W G Nijhuis-van der Sanden, Femke Driehuis, Yvonne F Heerkens, Cees P M van der Vleuten, and Philip J van der Wees.
Contributors All authors read the final manuscript, gave their approval for publication and agreed to be accountable for all aspects of the work. MJMM, PJvdW and MWGNvdS contributed to study conception, design, sampling, analysis, interpretation of data, drafting and revising the manuscript. FD contributed to the conduct of the scoping review and the study. CPMvdV and YFH contributed to interpretation of the data and revision of the manuscript.
Funding This study was funded by the KNGF. The KNGF had no role in the conduct of this study, analysis or interpretation of data.
Competing interests None declared.
Ethics approval This study was approved by the Medical Ethical Committee Arnhem Nijmegen (CMO): 2015-1797.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.