Abstract
Objectives The aim of our study was to determine and enhance physicians’ acceptance, performance expectancy and perceived credibility of health apps for patients with chronic pain. We further investigated predictors of acceptance.
Design Randomised experimental trial with a parallel-group repeated measures design.
Setting and participants 248 physicians working in various, mainly outpatient settings in Germany.
Intervention and outcome Physicians were randomly assigned to either an experimental group (short video about health apps) or a control group (short video about chronic pain). The primary outcome measure was acceptance; performance expectancy and credibility of health apps were secondary outcomes. In addition, we assessed 101 medical students to evaluate the effectiveness of the video intervention in young professionals.
Results In general, physicians’ acceptance of health apps for chronic pain patients was moderate (M=9.51, SD=3.53; scale range 3–15). All primary and secondary outcomes were enhanced by the video intervention: a repeated-measures analysis of variance yielded a significant interaction effect for acceptance (F(1, 246)=15.28, p=0.01), performance expectancy (F(1, 246)=6.10, p=0.01) and credibility (F(1, 246)=25.61, p<0.001). The same pattern of results was evident among medical students. Linear regression analysis revealed credibility (β=0.34, p<0.001) and performance expectancy (β=0.30, p<0.001) as the two strongest factors influencing acceptance, followed by scepticism (β=−0.18, p<0.001) and intuitive appeal (β=0.11, p=0.03).
Conclusions and recommendations Physicians’ acceptance of health apps was moderate and was strengthened by a 3 min video. Besides performance expectancy, credibility appears to be a promising factor associated with acceptance. Future research should focus on ways to implement acceptance-increasing interventions in routine care.
- pain management
- medical education & training
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
This is the first study to examine physicians’ acceptance and expectations about health apps for chronic pain.
A strength of the study is the investigation of both practitioners and medical students as future physicians.
The study has a strong active control group.
A limitation is the online-only data collection, due to which a selection bias may have occurred.
Introduction
Since the Global Burden of Disease Study was first conducted in the 1990s, chronic pain has been identified as the leading cause of years lived with disability.1 Chronic pain has various negative health consequences and adverse impacts on quality of life.2–4 Although there are effective treatments for chronic pain,5 6 effect sizes tend to be small,7 and the sustained efficacy of treatments is uncertain.8 This is problematic, because chronic pain raises costs dramatically for healthcare systems9 10 and is a significant contributor to work disability.11 The likelihood of returning to work correlates with the duration of pain: the longer patients are out of work, the less likely they are to return to full-time employment.12 13 A guiding principle of pain treatment is therefore that it should begin as early as possible. However, many people, especially in rural areas, have no access to adequate pain treatment,14 15 even though such access is considered a human right.16
eHealth offerings can help to alleviate these problems and provide patients with evidence-based interventions.17 Smartphone apps, which fall under the mHealth category, have particularly great potential for both practitioners and patients.18 First, because of the widespread use of smartphones, they can reach patients with chronic pain at a low threshold.19 Second, they can help patients better manage their pain, for example as a treatment adjunct or in the absence of a pain expert.20–22 Pain apps offer a wide range of application possibilities, from diary functions for monitoring pain to specific interventions. Two recent meta-analyses concluded that pain apps can reduce patients’ pain with a small effect size23 and have a small positive effect on depression and short-term pain catastrophising.24 However, despite this positive potential, most pain apps have not yet been scientifically evaluated, and privacy protection is often not sufficiently guaranteed.25 Beyond these problems, there are various other barriers to the implementation of health apps in clinical practice.
One barrier on the practitioners’ side is the gatekeeping role they play with regard to electronic forms of treatment.26 Even if physicians consider health apps to be helpful,27 they are slow to integrate them into their daily work.28 Although many patients are eager to try health apps,29 health professionals seldom recommend them.30 31 One potential reason for this is their moderate acceptance of eHealth.32 There is ample evidence that acceptance is an important prerequisite for implementing new technologies into practice.33 34 Across studies, an important factor influencing acceptance (or the intention to use a new technology) is performance expectancy.32 35–38
To increase acceptance, acceptance-enhancing video interventions have proven to be effective in patients and health practitioners.33 39 40 However, not all studies were able to increase practitioners’ acceptance,41 42 suggesting that the presentation and content of educational videos are relevant.33
Since previous research has mainly investigated eHealth in general, with a focus on internet interventions, little is known about the acceptance of mobile health apps. The main aims of this study were to assess physicians’ acceptance of health apps and to increase their acceptance, performance expectancy and credibility via a short video intervention. A further aim was to identify variables that influence physicians’ acceptance of health apps for chronic pain. To the best of our knowledge, this is the first experimental study assessing and modifying physicians’ acceptance of health apps in the context of chronic pain.
Methods
Study design
This study is a web-based experimental trial with a parallel-group design using a simple randomisation procedure (1:1 allocation ratio). Self-rating questionnaires were used to assess preintervention and postintervention outcomes.
Completing the survey took an average of 14 min. Measurements were collected online via the software platform Unipark (Enterprise Feedback Suite survey, version Fall 2020, Questback). Randomisation was performed within the Unipark software. All procedures complied with the German Psychological Society’s ethical guidelines.
Participants
Data collection was performed between December 2020 and April 2021. The sample size was determined using an a priori power analysis with G*Power V.3.1.9.3.43 Following a similar preceding study,33 we based our calculations on a small effect between groups (expected f=0.16; power=0.8; alpha error probability of 0.05), resulting in a necessary sample size of 230. Because we assumed a 10% drop-out rate, we planned to survey 253 subjects. We recruited physicians online via email distribution lists, physician networks and emails to practices, hospitals and medical communities. Due to the different recruitment methods, we can only estimate the number of physicians contacted. We assume that we reached approximately 10 000 physicians, of whom 354 started the survey; this response rate is comparable to that of a similar study.33 A total of 257 participants completed the questionnaires at postintervention, yielding a completer rate of 73% (figure 1). Inclusion criteria were being employed as a physician and sufficient knowledge of the German language. In addition, we recruited a sample of 101 medical students via Facebook groups for medical students as well as email distribution lists of medical schools.
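For readers without access to G*Power, the required sample size can be approximated analytically. The following is a minimal Python sketch of the repeated-measures within-between-interaction power calculation; it assumes G*Power's default correlation of 0.5 among repeated measures (a parameter the text does not report), so exact agreement with the reported n of 230 depends on the G*Power options used.

```python
from scipy.stats import f, ncf

def power_rm_interaction(n_total, f_eff=0.16, k_groups=2, m_meas=2,
                         rho=0.5, alpha=0.05):
    # Noncentrality parameter as parametrised by G*Power for the
    # within-between interaction of a repeated-measures ANOVA.
    lam = f_eff ** 2 * n_total * m_meas / (1 + (m_meas - 1) * rho)
    df1 = (k_groups - 1) * (m_meas - 1)
    df2 = (n_total - k_groups) * (m_meas - 1)
    f_crit = f.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

n = 10
while power_rm_interaction(n) < 0.80:
    n += 1
# n lands close to the 230 reported above; small deviations trace back
# to G*Power's correlation and effect-size specification options.
print(n, round(power_rm_interaction(n), 3))
```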
Flow chart.
Measures
Primary outcome
Acceptance, as defined in the Unified Theory of Acceptance and Use of Technology (UTAUT) model,34 was our primary outcome. According to the UTAUT model, acceptance is conceived as the intention to use (new) technologies. The three acceptance items (table 1) were summed into a cumulative score with a range of 3–15. To make our data easier to interpret, we classified values as low (3–6), moderate (7–11) or high (12–15). This classification is similar to that of other studies.32 33 Cronbach’s alpha was 0.93.
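As an illustration of this scoring rule, the snippet below sums three item columns and applies the banding described above; the column names and values are invented for the example, not the study's actual variable names.

```python
import pandas as pd

# Three 5-point UTAUT acceptance items; names and values are illustrative.
df = pd.DataFrame({"acc1": [5, 2, 4], "acc2": [4, 2, 3], "acc3": [5, 3, 4]})

df["acceptance"] = df[["acc1", "acc2", "acc3"]].sum(axis=1)  # range 3-15

# Banding used in the paper: low (3-6), moderate (7-11), high (12-15)
df["band"] = pd.cut(df["acceptance"], bins=[2, 6, 11, 15],
                    labels=["low", "moderate", "high"])
print(df)
```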
UTAUT items
Secondary outcomes
Performance expectancy of the UTAUT model was our secondary outcome. Performance expectancy is conceptualised as the expectation that an intervention will be beneficial; it was surveyed by means of three items (table 1).
An additional secondary outcome was the credibility of health apps, which we assessed via the Credibility/Expectancy Questionnaire (CEQ).44 The credibility scale (eg, ‘How logical does the medical use of health apps for chronic pain seem to you?’) includes three items and asks about treatment credibility on a 9-point response scale (ranging from 1=not at all useful to 9=very useful). Cronbach’s alpha for the credibility scale was 0.91.
Primary and secondary outcomes were measured both before and after the intervention. In the medical student cohort, we assessed only the primary and secondary outcomes, not the predictors of acceptance.
Predictors of acceptance
Predictors of acceptance were examined using baseline acceptance as the dependent variable and multiple predictors as independent variables (see the Statistical analysis section).
Sociodemographic variables included age, gender, daily smartphone time and smartphone use in a professional context. All of the following items had to be slightly adapted for the purpose of this study.
We assessed the four main constructs of the UTAUT model.34 The UTAUT is an established model stating that four constructs affect the acceptance of, and intention to use, (new) technologies: performance expectancy (Cronbach’s alpha of 0.94), effort expectancy (Cronbach’s alpha of 0.84), facilitating conditions (Spearman’s correlation of 0.17) and social influence (Spearman’s correlation of 0.79). The scales consist of statements (table 1) that can be agreed to on a 5-point response scale (answers ranging from 1=totally disagree to 5=totally agree). Higher values indicate a higher level of the construct. Items were adapted from different studies.39 45 46
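Reliability coefficients of this kind can be reproduced in a few lines of Python. This is a generic sketch with invented item columns: Cronbach's alpha for the multi-item scales and Spearman's correlation for the two-item scales (facilitating conditions, social influence).

```python
import pandas as pd
from scipy.stats import spearmanr

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative data: three performance expectancy items, two social influence items
pe = pd.DataFrame({"pe1": [5, 4, 2, 3], "pe2": [5, 4, 1, 3], "pe3": [4, 4, 2, 2]})
si = pd.DataFrame({"si1": [3, 4, 2, 5], "si2": [3, 5, 1, 4]})

print(cronbach_alpha(pe))              # multi-item scales
rho, p = spearmanr(si["si1"], si["si2"])
print(rho)                             # two-item scales
```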
From the Attitudes towards Psychological Online Interventions questionnaire (APOI),47 we used the scepticism and perception of risks scale, which contains four statements (eg, ‘It is difficult for patients to effectively integrate health apps into their daily lives.’) that can be rated on a 5-point scale (ranging from 1=totally agree to 5=totally disagree). We excluded one item because its content did not fit the survey (‘By using a POI (Psychological Online Intervention), I do not receive professional support.’). Cronbach’s alpha for this scale was 0.57.
Openness (eg, ‘I would use new treatments to help my patients.’) and intuitive appeal (eg, ‘If you learned about a new health app, how likely would you be to use it if it appealed to you intuitively?’) were assessed with the Evidence-based Practice Attitude Scale-36 (EBPAS).48 The EBPAS measures difficulties and supportive factors in implementing evidence-based treatment approaches. Both scales consist of four statements or questions that can be agreed to on a 5-point response scale (ranging from 0=not at all to 4=to a very great extent). Cronbach’s alpha was 0.84 for openness and 0.87 for intuitive appeal.
Before starting the survey, we gave participants a brief definition of health apps and instructed them that all questions related to health apps for patients with chronic pain.
Intervention
The control group (CG) watched a video (3:10 min) providing general information about chronic pain (eg, prevalence, costs for the healthcare system and psychosocial consequences for people suffering from chronic pain). The experimental group (EG) watched a video (3:23 min) that discussed the content of health apps (eg, how they can be used and the results of recent studies). Both videos used simple language and gave only a general overview of the topic without going into too much detail. Both videos were matched in terms of visuals (figure 2). Skipping the video was not possible due to the survey software. We produced the videos with the commercial software Powtoon (2012–2021 Powtoon), and a professional narrator recorded the audio tracks. An English translation of the spoken text is provided in the online supplemental material.
Screenshots of the video interventions. Left: Video of the EG describing possible applications of pain apps; Right: Video of the CG describing psychosocial consequences of chronic pain. CG, control group; EG, experimental group.
Statistical analysis
We used IBM SPSS Statistics V.26 for statistical analyses. There were no missing data due to the survey software (participants had to answer all questions to get to the next page). For all analyses, we used a type I error level of 5%.
Both Mahalanobis distance and Cook’s distance were used to detect multivariate outliers.49 Following the suggestion of Pituch and Stevens, univariate outliers were identified using standardised values.49 We checked data for plausibility before exclusion. In addition, we checked subjects' comments at the end of the survey for possible bias.
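An outlier screening along these lines could look roughly like the sketch below. The chi-square cut-off for Mahalanobis distance, the Cook's distance rule of thumb and the |z| > 3 criterion are common conventions that the text does not spell out, so treat them, and the synthetic data, as assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(100, 3)), columns=["x1", "x2", "x3"])
y = 0.5 * X["x1"] + rng.normal(size=100)

# Multivariate outliers: squared Mahalanobis distance vs chi-square (p < .001 convention)
diff = X - X.mean()
inv_cov = np.linalg.inv(np.cov(X.values.T, ddof=1))
d2 = np.einsum("ij,jk,ik->i", diff.values, inv_cov, diff.values)
multi_out = d2 > chi2.ppf(0.999, df=X.shape[1])

# Cook's distance from an OLS fit (a cut-off of 1 is one common rule of thumb)
fit = sm.OLS(y, sm.add_constant(X)).fit()
cooks_out = fit.get_influence().cooks_distance[0] > 1.0

# Univariate outliers via standardised values (|z| > 3 is a common convention)
uni_out = (np.abs((X - X.mean()) / X.std(ddof=1)) > 3).any(axis=1)
print(multi_out.sum(), cooks_out.sum(), uni_out.sum())
```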
To detect any differences between baseline values, we conducted a multivariate analysis of variance for age and the APOI, EBPAS, CEQ and UTAUT variables. We assessed gender differences using a χ2 test.
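A baseline-equivalence check of this kind can be sketched in Python with statsmodels' MANOVA and a chi-squared test; the data and column names below are synthetic stand-ins, not the study's variables.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
base = pd.DataFrame({
    "condition": np.repeat(["EG", "CG"], 60),
    "gender": rng.choice(["female", "male"], 120),
    "age": rng.normal(50, 11, 120),
    "credibility": rng.normal(5, 2, 120),
    "performance_exp": rng.normal(9, 3, 120),
})

# Multivariate test of baseline differences between conditions
mv = MANOVA.from_formula("age + credibility + performance_exp ~ condition", data=base)
print(mv.mv_test())

# Gender distribution across conditions
chi2_stat, p, dof, _ = chi2_contingency(pd.crosstab(base["gender"], base["condition"]))
print(p)
```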
The video’s influence on our primary and secondary outcomes was assessed via a 2 (condition) × 2 (time) repeated-measures analysis of variance. Partial eta squared was used as the effect size measure; effect sizes were classified according to Richardson50 based on Cohen.51 To reduce inflation of the alpha error, we applied Bonferroni correction to the secondary outcomes.52
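In open-source terms, the same 2 × 2 mixed-design ANOVA can be run with the pingouin package; the following is a sketch on a toy long-format dataset, not our SPSS analysis.

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: one row per participant per time point
long_df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "condition":   ["EG"] * 6 + ["CG"] * 6,
    "time":        ["pre", "post"] * 6,
    "acceptance":  [9, 12, 8, 11, 10, 13, 9, 9, 10, 11, 8, 8],
})

aov = pg.mixed_anova(data=long_df, dv="acceptance", within="time",
                     subject="participant", between="condition")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared

# Bonferroni correction for the two secondary outcomes:
# compare each p-unc against an adjusted alpha of .05 / 2.
```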
The variables influencing health apps’ acceptance were examined using linear regression, in which we added predictor groups blockwise: first, the demographic variables (age, gender, daily smartphone time and smartphone use in a working context); then the APOI, EBPAS and CEQ scales; and last, the four UTAUT predictors. Acceptance from the premeasurement was the dependent variable.32 Because the large number of predictors leads to an overestimation of R2, we report the adjusted R2.53
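The blockwise procedure can be sketched with statsmodels as follows. Variables are z-standardised so that the coefficients read as the βs reported in the Results; all column names and the simulated data are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
cols = ["age", "gender", "phone_time", "phone_work",          # block 1
        "scepticism", "openness", "appeal", "credibility",    # block 2
        "perf_exp", "effort_exp", "social_infl", "facilitating"]  # block 3
df = pd.DataFrame(rng.normal(size=(248, len(cols))), columns=cols)
df["acceptance"] = 0.5 * df["credibility"] + rng.normal(size=248)

z = (df - df.mean()) / df.std(ddof=1)  # standardise -> coefficients are betas

blocks = [cols[:4], cols[4:8], cols[8:]]
included = []
for i, block in enumerate(blocks, start=1):
    included += block
    fit = sm.OLS(z["acceptance"], sm.add_constant(z[included])).fit()
    print(f"block {i}: adj. R2 = {fit.rsquared_adj:.2f}")
```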
Patient and public involvement
No patients were involved.
Results
Sample characteristics
After inspecting the data, we excluded one subject who stated that he had filled in the questionnaires arbitrarily. Eight further subjects were excluded because they had stated ‘psychological psychotherapist’ as their specialty, which in Germany indicates that they were not physicians but psychologists. This reduced our sample to 248 (38.71% female; nEG=124, nCG=124). The average age was 49.56 years (SD=11.51). There were no baseline differences between conditions. The most common specialties were general practice (89), surgery (39), anaesthesiology (29) and neurology/psychiatry (23). Acceptance levels at baseline across both conditions were moderate (M=9.51, SD=3.53), with 21.4% in the low range, 47.1% in the moderate range and 31.5% in the high range. See table 2 for a complete list of specialties, additional demographic variables and baseline values.
Demographic characteristics
Primary outcome
Our subjects’ acceptance was increased by means of the video (significant main effect of time: F(1, 246)=15.28, p<0.001, η2p=0.06). Furthermore, subjects in the EG showed a greater increase than those in the CG (significant time × condition interaction: F(1, 246)=15.28, p=0.01, η2p=0.02). After the intervention, the EG (M=10.51, SD=3.28) had higher postacceptance scores than the CG (M=9.48, SD=3.57) (t(246)=−2.37, p=0.01). Group comparison of postassessment data revealed a small effect (Cohen’s d=0.30). Figure 3 shows a comparison between the medical student sample and the physicians.
Change in acceptance. Error bars indicate SEs. *P<0.05; **P<0.005. CG, control group; EG, experimental group; pre, measurement before the video; post, measurement after the video.
Secondary outcomes
Performance expectancy was also increased by the video (main effect of time: F(1, 246)=66.85, p<0.001, η2p=0.21). Again, the increase was higher in the EG than in the CG (significant time × condition interaction: F(1, 246)=6.10, p=0.01, η2p=0.02). The EG (M=9.94, SD=3.16) had higher post-performance expectancy scores than the CG (M=9.02, SD=3.34) (t(246)=−2.23, p=0.01). Again, group comparison of the postassessment data revealed a small effect (Cohen’s d=0.28).
We found the same pattern of results for credibility. It was increased by the video (significant main effect of time: F(1, 246)=64.47, p<0.001, η2p=0.21), with a higher increase in the EG (significant time × condition interaction: F(1, 246)=25.61, p<0.001, η2p=0.09). Postvalues of the EG (M=6.07, SD=1.87) were higher than those of the CG (M=5.31, SD=2.14) (t(246)=−2.95, p=0.002). Postassessment group comparison revealed a small to moderate effect for credibility (Cohen’s d=0.38). Figure 4 shows a comparison between the medical student sample and the physicians in terms of credibility.
Change in credibility. Error bars indicate SEs. **P<0.01. CG, control group; EG, experimental group; pre, measurement before the video; post, measurement after the video.
The medical students’ pattern of results was identical to that illustrated above (see online supplemental material for a detailed presentation of results and demographic variables). The time × condition interaction effect had an effect size of η2p=0.13 for acceptance (figure 3), η2p=0.09 for performance expectancy and η2p=0.21 for credibility (figure 4).
Predictors of acceptance
Linear regression with the predictors from the first block was significant (R2adj=0.14, F(4, 242)=11.01, p<0.001). Age (β=−0.23, p=0.001) and smartphone use in a professional context (β=0.20, p=0.002) were related to acceptance.
The model improved when we added the second block with the APOI, EBPAS and CEQ scales (R2adj=0.70, F(8, 238)=72.35, p<0.001). Credibility (β=0.51, p<0.001) was the strongest predictor, followed by scepticism (β=−0.24, p<0.001) and intuitive appeal (β=0.13, p=0.01). None of the predictors from the first block remained significant.
The model improved marginally after adding the UTAUT variables (R2adj=0.73, F(12, 234)=56.24, p<0.001). Again, credibility was the best predictor (β=0.34, p<0.001), followed by performance expectancy (β=0.30, p<0.001), scepticism (β=−0.18, p<0.001) and intuitive appeal (β=0.11, p=0.03). None of the other predictors were significant. A table with all predictors is provided in the online supplemental material.
Discussion
This study is the first to explicitly investigate physicians’ acceptance of health apps focusing on chronic pain. Our results complement preceding studies by adding the physicians’ perspective within an outpatient setting. The main aims of this study were to survey physicians' current acceptance of health apps for patients with chronic pain and to increase that acceptance. In general, physicians’ and medical students’ acceptance of health apps was moderate, which indicates a greater openness than in previous studies.32 The experimental intervention successfully increased the acceptance, performance expectancy and credibility of health apps among physicians and medical students. Our additional study aim was to identify variables that influence acceptance. Credibility and performance expectancy were the strongest predictors of acceptance, followed by scepticism and intuitive appeal.
We found that our physicians’ moderate acceptance of health apps was higher than that reported in previous studies: a survey conducted between 2015 and 2016 among various healthcare professionals observed rather low acceptance rates for electronic health interventions.32 According to a recent study, psychotherapists exhibited mixed acceptance of blended care (a combination of internet- and mobile-based interventions and face-to-face therapy).33 However, the aforementioned study was conducted several years ago, and perceptions of eHealth may have changed in the meantime. In particular, the COVID-19 pandemic may have influenced opinions about electronic health interventions.54 Also, unlike the studies mentioned above, we specifically asked about health apps in our survey.
Our results indicate that brief, visually appealing educational videos may be an effective acceptance-facilitating intervention for physicians. Results from acceptance-enhancing interventions in other studies were inconclusive: some researchers demonstrated positive effects,33 while others identified no effects.41 42 Most researchers employed video interventions to increase acceptance of eHealth interventions in general (eg, online interventions) rather than focusing on apps in particular. Another potential explanation of our positive findings is the specific focus on chronic pain, as the perceived usefulness of eHealth and mHealth interventions could be disorder-specific.
However, the higher effect sizes in the student sample lead us to cautiously conclude that the intervention may be more effective with students. Although young age does not automatically entail higher digital health competencies,55 young professionals appear to be more receptive to interventions that promote the acceptance of health apps. This could be due to younger people’s generally higher familiarity with using smartphones and their preference for this medium when obtaining health information.56 Since high acceptance does not automatically lead to action,57 long-term studies examining the actual use of health apps among (prospective) physicians would be worthwhile.
The strong association we detected between performance expectancy and acceptance is in line with other research findings. Across studies, performance expectancy has consistently been shown to be one of the most important predictors of acceptance of new technologies in the healthcare sector.32 37 This strong association suggests that physicians’ acceptance can be increased by highlighting the benefits of health apps for their patients and themselves. This is also supported by a study which found that physicians are more likely to use mobile devices with drug reference software if they believe it will help their patients.58 In contrast to Hennemann et al,32 we found no impact of social influence on acceptance, nor did we find any influence of facilitating conditions, as Liu et al did.37 Note that the subjects in those two studies were surveyed in inpatient settings, whereas we mainly surveyed physicians in an outpatient setting. Accordingly, our physicians were probably relying less on their employer’s facilitation because they are often self-employed. The same might apply to social support: medical practices employ far fewer staff than hospitals, a fact that may have contributed to this construct being less significant in this survey. Additionally, it is worth mentioning that the two studies above did not specifically survey acceptance of health apps and that they were conducted a few years ago. The relevance of certain constructs, such as facilitating conditions, may have lessened since then.
The association we found between credibility and acceptance also concurs with previous research findings. A study with college students concluded that credibility positively influences perceptions of health apps.59 The credibility of new technologies in the healthcare field is important60 as it increases the likelihood that the technology will be used in both the short term and the long term.61 62 Accordingly, the low prescription rates (or the paucity of recommendations) of health apps by physicians could be partly attributable to their lack of credibility. One potential reason for this is the low quality of many health apps on the market.63 The source of information is important to the credibility of information about new electronic health measures: websites controlled by editors are perceived to be more credible, as is information from independent medical experts.64 Because the source of the material appears to be more important than its design,65 independent research institutes can play an important role in disseminating evidence-based information about electronic healthcare interventions. By including highly visible videos on their websites, they could increase both the acceptance and awareness of health apps. Our results indicate that such an approach holds particular promise for medical students, underscoring the call to establish eHealth curricula in medical education.60 66
Technology will continue to make strong inroads into medicine,67 which requires that healthcare professionals be able to adapt to new technologies flexibly. Especially considering the rapid technological progress in this area, the evidence from earlier studies and from ours provides valuable information about the importance of communicating with physicians, psychotherapists and other professional groups in the healthcare sector about eHealth in general and health apps in particular. Video interventions can be an effective and cost-saving method of communicating the potential, opportunities and limitations of these new technologies. They reach the target group at a low threshold, for example, by being included on informational websites, in newsletters or at training courses. Such informational material should emphasise both the performance expectancy and the credibility of the intervention being addressed.
In addition to increasing acceptance of health apps, it is also important to provide physicians with specific recommendations on which apps are best to use for which patients. Given the volume of the still-growing market, it is hardly possible for individuals to gain a comprehensive overview of the range of health apps available. It therefore seems sensible to establish guidelines for physicians on which apps can be helpful for which problems, just as there are guidelines on medications for diseases. To achieve this, a recent study suggests specific recommendations from medical associations or scientific societies, as well as special training in this area.68 This could help physicians integrate health apps into their workflows.69
Limitations
Our study has some limitations. First, due to our broad definition of pain apps, participants may have assumed different usage scenarios for health apps, which could have influenced their acceptance. Accordingly, future studies could investigate attitudes toward specific apps, for example, psychological intervention apps. Second, there may have been a selection bias due to the data collection method: physicians who were already open to and interested in mHealth may have been more likely to participate, which would restrict the generalisability of our results. Furthermore, our results relied solely on self-report. Most of our items were adaptations of already tested items or scales from existing questionnaires. This approach was necessary due to the lack of appropriate health app-specific questionnaires, but it remains a limitation. In addition, the facilitating conditions scale had a low inter-item correlation; accordingly, results for this scale should be interpreted with caution. Because of the survey’s brevity, we could not assess many other potentially relevant constructs, such as technologisation threat47 or previous experience with health apps. As high acceptance does not guarantee that intention becomes action,57 for example, because of self-regulatory deficits,70 longitudinal surveys examining whether video interventions increase the actual recommendation or prescription of the respective technologies should be one of the next steps in research.
Strengths
To our knowledge, this is the first study to investigate and increase physicians’ acceptance of health apps for managing chronic pain. This professional group is of particular interest due to the gatekeeper role they play in the healthcare system. Furthermore, we based the UTAUT questionnaires on predecessor studies to increase comparability. In addition, we employed a strong CG whose video was matched to the intervention video in length, visuals and audio. Despite the brevity of the survey and our strong CG, we identified a superior effect of the intervention video. The video intervention was very short and can be integrated at a low threshold within different platforms.
Conclusion
Our results show that physicians are open to using health apps for patients with chronic pain, as they demonstrated moderate to high acceptance rates. Our study also shows that performance expectancy and credibility had the strongest influence on acceptance. As low-threshold tools, brief video interventions can strengthen these constructs and reach a high number of health professionals. They can thus help overcome certain barriers to implementing mobile health interventions in clinical practice. Future studies should examine the long-term effect of acceptance-facilitating interventions and their impact on behavioural measures.
Data availability statement
Data are available on reasonable request. A request can be made to the corresponding author.
Ethics statements
Patient consent for publication
Ethics approval
This study involves human participants and was approved by the Ethics Committee of the Philipps University of Marburg (reference number: 2020-72k-2).
Acknowledgments
We would like to thank Nora Jander for her excellent voice-over on the video and Benno Glöckler and Kari Fuhrmann for their support in recruiting the physicians.
References
Footnotes
Contributors HJH, JAG, WR and JR: Conception and design of the study; HJH: data collection, analysis and interpretation, manuscript preparation; JAG, WR and JR: supervision, manuscript editing and reviewing; JR: project administration and guarantor. All authors approved the final manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.