

mHealth in psychiatry: time for methodological change
Jennifer Nicholas1,2, Katherine Boydell1, Helen Christensen1

  1. Black Dog Institute, University of New South Wales, Sydney, New South Wales, Australia
  2. Faculty of Medicine, School of Psychiatry, University of New South Wales, Sydney, Australia

Correspondence to Jennifer Nicholas, Black Dog Institute, University of New South Wales, Hospital Road, Prince of Wales Hospital, Randwick, Sydney, NSW 2031, Australia; j.nicholas@blackdog.org.au

Abstract

A multitude of mental health apps are available to consumers through the Apple and Google app stores. However, evidence supporting the effectiveness of mHealth is scant. We argue this gap between app availability and research evidence is primarily due to unsuitable knowledge translation practices, and therefore suggest abandoning the randomised controlled trial as the primary app evaluation paradigm. Alternative evaluation methodologies, such as iterative participatory research and single case designs, are better aligned with mHealth translational needs. A further challenge to the use of mobile technology in mental health is the dissemination of information about app quality to consumers. Strategies to facilitate successful dissemination of quality resources must consider several factors, such as target audience and context. In practice, structured solutions to inform consumers of evidence-informed apps could range from consumer-facing assessment tools to app accreditation portals. Consumer enthusiasm for apps represents an opportunity to increase access and support for psychiatric populations. However, adoption of alternative research methodologies and development of dissemination strategies are vital before this opportunity can be fully seized.

  • MENTAL HEALTH
  • STATISTICS & RESEARCH METHODS
  • PSYCHIATRY


A multitude of mental health apps are now available through the Apple and Google app stores. The need for, and potential utility of, quality smartphone apps for mental healthcare were highlighted in Leigh and Flatt's1 recent perspective. However, the advantages of this technology will not be realised until we change the way we undertake evaluative research and develop quality indicators of apps for dissemination.

In contrast to the rich evidence supporting the effectiveness of web interventions in mental health, research on mHealth efficacy is scant. A 2013 review of the mHealth literature identified a mere eight papers investigating the efficacy of mental health apps, despite the approximately 3000 available to download.2

This evidence gap between literature and marketplace is further exemplified by systematic reviews that evaluate mental health app content against clinical guidelines or evidence-based practice.3,4 Results indicate app content is rarely concordant with best practice; the average number of guidelines covered by apps was 4 of 11 for bipolar disorder3 and 7.8 of 60 for smoking cessation.4 We argue that this gap between app availability and research evidence is primarily due to unsuitable knowledge translation practices.

While empirical evaluation is critical, the time currently required to evaluate apps developed for psychiatric disorders is crippling in the marketplace environment. For good reason, randomised controlled trials (RCTs) have traditionally been required to establish intervention efficacy. However, they are unsuited to mHealth, which resides within the fast-paced, dynamic app marketplace. Within this space, technological developments can render an intervention obsolete, and consumer expectations can shift, before a lengthy evaluative process concludes.5 Of foremost concern, however, is that the largely unregulated app marketplace produces a multitude of apps directed at health system gaps or driven by user need. Unlike medical devices, for which demonstration of effectiveness is required, commercial incentives to develop apps with evidence-based content are lacking. We therefore believe the expeditious development and dissemination of such high-quality resources must be a primary focus.

Given this, we suggest abandoning the RCT as the primary app evaluation paradigm in favour of alternatives better aligned with mHealth translational needs. Participatory research frameworks are already used in mHealth to ensure acceptability and uptake of apps in the target population. Because intended users are involved throughout the design, development and deployment processes, small feasibility and pilot studies integrated throughout allow rapid preliminary appraisal. Such iterative design and evaluation models could make research apps more competitive in the marketplace, while also ensuring the app is acceptable to target populations. Single case designs may also serve as a novel first step in research evaluation. In fact, technological interventions lend themselves uniquely to this design, as they permit the continuous assessment required during both baseline and intervention observation.6 The continuous production of user data by mHealth apps may also allow observational studies to evaluate an app's effect on health outcomes. To hasten availability, open pragmatic trials that test app effectiveness with the intended user population would allow assessment and evaluation to proceed in parallel. As an extension of this, Mohr et al5 proposed the continuous evaluation of evolving behavioural intervention technologies (CEEBIT) framework, which would perform constant app evaluation and monitoring within app stores, continuously comparing apps' clinical outcome and usage data.5 Apps found comparatively inferior would be removed from the app marketplace, thereby ensuring consumers could only access apps with demonstrated effectiveness.5 A simple sketch of this idea follows below. Regardless of methodology, the mHealth field needs to adopt alternative strategies of rigorous evaluation.
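To make the flavour of such continuous evaluation concrete, the sketch below shows one simplistic way that apps reporting a common clinical outcome might be compared as user data accumulate, with credibly inferior apps flagged for removal. This is a minimal illustration only, not the CEEBIT algorithm as specified by Mohr et al5: the app names, outcome data and the confidence-interval decision rule are all our own assumptions.

```python
import statistics
from math import sqrt

# Hypothetical per-app records of a clinical outcome reported by users
# (e.g. symptom improvement scores); higher is better.
outcomes = {
    "app_a": [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.6, 4.2],
    "app_b": [2.1, 1.8, 2.6, 2.0, 2.4, 1.7, 2.2, 2.5],
    "app_c": [3.9, 4.3, 4.0, 4.5, 3.7, 4.1, 4.4, 4.0],
}

def ci95(scores):
    """Approximate 95% confidence interval for the mean outcome."""
    mean = statistics.mean(scores)
    half_width = 1.96 * statistics.stdev(scores) / sqrt(len(scores))
    return mean - half_width, mean + half_width

def flag_inferior(outcomes):
    """Flag apps whose interval sits wholly below the best lower bound."""
    intervals = {app: ci95(scores) for app, scores in outcomes.items()}
    best_lower_bound = max(low for low, high in intervals.values())
    return [app for app, (low, high) in intervals.items()
            if high < best_lower_bound]

# Re-run whenever new outcome data arrive; flagged apps become
# candidates for removal from the marketplace.
print(flag_inferior(outcomes))  # -> ['app_b']
```

In practice, any such decision rule would need to control for repeated testing and selection effects as data accrue, which is precisely the kind of methodological work the framework calls for.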

While the motivation for implementing alternative evaluation strategies is to reduce the gap between app availability and research evidence, thereby ensuring quality resources, app quality itself remains undefined within the field. Certainly, mHealth diverges from the broader app marketplace, which measures quality by commercial success, such as number of downloads, exposure and app ratings. However, within mHealth, a range of dimensions contributing to app quality have been discussed, including data security, usability, accessibility (including cost and platform), technical quality, user risk and safety, and clinical outcomes.1,7,8 Further, investigations of what consumers themselves regard as indicators of app quality are strikingly absent.
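As a way of making these dimensions concrete, the sketch below encodes them as an explicit checklist producing a simple quality summary. The dimension names follow those discussed above, but the class name, the binary pass/fail encoding and the summary format are our own illustrative assumptions, not an established rubric.

```python
from dataclasses import dataclass, fields

@dataclass
class AppQuality:
    """Illustrative quality dimensions drawn from the mHealth literature.

    Each field records whether an app meets a minimum standard on that
    dimension; the binary encoding is an assumption for illustration.
    """
    data_security: bool      # encrypts and safeguards user data
    usability: bool          # acceptable to users in testing
    accessibility: bool      # cost and platform availability
    technical_quality: bool  # stable and free of major defects
    user_safety: bool        # manages risk (e.g. crisis information)
    clinical_outcomes: bool  # evidence-informed content or outcomes

    def summary(self) -> str:
        met = [f.name for f in fields(self) if getattr(self, f.name)]
        return f"{len(met)}/{len(fields(self))} dimensions met: {', '.join(met)}"

# Example: an app with sound security and content but no usability testing.
app = AppQuality(True, False, True, True, True, True)
print(app.summary())
```

Even so crude a structure makes explicit which dimensions an evaluation has and has not addressed, a first step towards the consensus the field currently lacks.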

The protracted evaluative process may contribute to this lack of consensus within the field, as traditional quality indicators of mental health resources, such as demonstrated clinical efficacy, are perhaps unrealistic given the current evaluative lag. This has prompted the current discussion within mHealth regarding quality and its multiple proposed forms.1,3,4,7,8 In our subsequent discussion, we consider credible, secure, evidence-informed apps to be high quality, as compared with the plethora of unevaluated, non-evidence-informed alternatives.1,3,4

Differentiation of evidence-informed apps within the app marketplace, and dissemination of information about app quality, represent a further knowledge translation challenge. Consideration of how evidence-informed apps are presented to consumers is vital in ensuring their uptake and usage over unevaluated alternatives. Here we can turn to the best evidence from current knowledge translation and exchange practice.9

Limited research in this area highlights the complexities and intricacies of knowledge dissemination strategies. As such, an understanding of the factors that influence success, including target audience, communication channels, context and type of information disseminated, is vital.9 Knowledge about app quality therefore likely requires a variety of dissemination strategies. Careful consideration of these factors during the development of evaluation frameworks or the implementation of marketplace regulations is critical to ensuring consumers can identify high-quality mental health apps.

Structured solutions to inform consumers of evidence-informed apps could take varied forms. App evaluation could be performed by quality portals or accreditation sites, an approach already available for online mental health resources.10 App quality portals attempt to improve uptake by assuring clinical quality and security, as privacy and data security are major concerns of consumers.11 In practice, the sheer number of health apps has forced such portals to balance thoroughness of evaluation with expediency. However, certification processes that fail to undertake extensive evaluation carry dangers. For example, Happtique, a certification programme for health apps, was forced to close its app registry after independent evaluation of several certified apps exposed security flaws.8 Likewise, the National Health Service (NHS) Health Apps Library accreditation process included data security assurances, yet a recent evaluation of NHS-accredited apps identified many that did not comply with legislated privacy and security standards.12 Furthermore, Leigh and Flatt1 reported that the majority also could not substantiate claims of effectiveness. Principally, these failures result in consumers using insecure or ineffective apps; however, incorrect accreditation may also decrease consumer confidence in mHealth. To be truly successful, the app accreditation approach requires commitment of substantial resources and consumer awareness of its existence.

Alternatively, tools could be developed to assist consumers with app quality assessment. Ongoing collaborative work between the Mindtech Healthcare Technology Co-operative, Leeds and York NHS Foundation Trust, and the University of Leeds has resulted in the development of such a toolkit for consumers and clinicians.7 While this approach informs and empowers consumers, inherent difficulties arise when relying on consumers to assess app quality, foremost among them non-compliance. The Mindtech co-operative was also involved in the creation of health app development guidelines to further ensure the quality of mental health apps.13 While such guidelines are an important step, similar difficulties exist: without enforcement or regulation, there is no incentive for developers to adhere to the published code of practice.

Another option is a technological solution that automates quality accreditation. Similar to a proposal for website assessment,14 in which a specially developed algorithm ranked depression websites according to the quality of their information, automatic evaluation of app content could occur during app upload, with results displayed within the app's store description. Displaying app quality results within app stores, the point of contact for consumers, represents a substantial advantage for the dissemination of quality resources. Further, automated evaluation could allow app store search results to be ranked according to quality. Google recently announced the ability to search app content, making this option viable.15 However, a technological solution would require both Google and Apple to commit to health app quality assurance.
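A minimal sketch of what such upload-time content screening might look like is given below. The criteria, the keyword-matching approach and the score scale are all our own simplifying assumptions for illustration; a production system would require clinically validated criteria and far more robust content analysis than keyword matching.

```python
# Hypothetical evidence-based criteria for a depression-management app,
# each screened by simple keyword matching against submitted app content.
# Real criteria would be derived from clinical guidelines, not keywords.
CRITERIA = {
    "mood_monitoring": ["mood diary", "mood tracking", "daily rating"],
    "behavioural_activation": ["activity scheduling", "behavioural activation"],
    "cognitive_strategies": ["thought record", "cognitive restructuring"],
    "crisis_support": ["helpline", "crisis plan", "emergency contact"],
}

def quality_score(app_text: str) -> dict:
    """Score submitted app content against the criteria at upload time."""
    text = app_text.lower()
    met = {name: any(kw in text for kw in kws) for name, kws in CRITERIA.items()}
    return {"criteria_met": met, "score": sum(met.values()) / len(met)}

# The resulting score could be displayed in the app's store description
# or used to rank search results by quality, as discussed above.
submission = "Track your mood with a daily rating and build a crisis plan."
print(quality_score(submission))  # score: 0.5 (2 of 4 criteria met)
```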

Overall, implementation research is required to assess the most successful strategy of enabling easy identification of high-quality resources.16 Implementation research refers to the scientific study of methods to support the systematic uptake of evidence-based clinical practices or treatments into routine practice, thus improving health.16 It includes the study of influences on consumer, healthcare professional and organisational behaviour in either healthcare or population settings, as well as understanding the barriers and facilitators that influence successful implementation of effective interventions. Effective implementation practices are essential to any effort in using the products of science—such as evidence-based apps—to improve the lives of individuals.17 Therefore, measuring the success of app quality indicator implementation is vital, and contributes to informing the allocation of future resources.

The popularity of healthcare apps among consumers represents a great opportunity for technology to increase access and support for psychiatric populations. However, adoption of alternative research methodologies and implementation of app evaluation strategies are imperative for mHealth to substantially impact health outcomes.


Footnotes

  • Twitter Follow Helen Christensen at @HM_Christensen, Jennifer Nicholas at @JMNBDI and Katherine Boydell at @KBoydell

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.