Original research
Evaluating evaluation frameworks: a scoping review of frameworks for assessing health apps
Sarah Lagan, Lev Sandler, John Torous

Division of Digital Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA

Correspondence to Dr John Torous; jtorous@bidmc.harvard.edu

Abstract

Objectives Despite an estimated 300 000 mobile health apps on the market, there remains no consensus on how to help patients and clinicians select safe and effective apps. In 2018, our team drew on existing evaluation frameworks to identify salient categories and create a new framework endorsed by the American Psychiatric Association (APA). We have since created an expanded, operational framework, the M-Health Index and Navigation Database (MIND), which aligns with the APA categories but comprises 105 objective and auditable questions. We sought to survey the existing space by reviewing all mobile health app evaluation frameworks published since 2018 and to demonstrate the comprehensiveness of this new model by comparing it with existing and emerging frameworks.

Design We conducted a scoping review of mobile health app evaluation frameworks.

Data sources References were identified through searches of PubMed, EMBASE and PsycINFO with publication dates between January 2018 and October 2020.

Eligibility criteria Papers were selected for inclusion if they met the predetermined eligibility criteria: presenting an evaluation framework for mobile health apps with patient-, clinician- or other end user-facing questions.

Data extraction and synthesis Two reviewers screened the literature separately and applied the inclusion criteria. Data extracted from the papers included: authors and dates of publication, source affiliation, country of origin, name of framework, study design, description of framework, intended audience/user and framework scoring system. We then compiled a collection of 1701 questions across 79 frameworks. We compared and grouped these questions using the MIND framework as a reference, seeking to identify the most common domains of evaluation while assessing the comprehensiveness and flexibility of MIND, as well as any potential gaps.

Results New app evaluation frameworks continue to emerge and expand. Since our 2019 review of the app evaluation framework space, more frameworks include questions around privacy (43 frameworks) and clinical foundation (57 frameworks), reflecting an increased focus on app security and evidence base. The majority of mapped frameworks overlapped with at least half of the MIND categories. The results of this search have informed a database (apps.digitalpsych.org) that users can access today.

Conclusion As the number of app evaluation frameworks continues to rise, it is becoming difficult for users both to select an appropriate evaluation tool and to find an appropriate health app. This review provides a comparison of what different app evaluation frameworks offer, identifies where the field is converging and highlights new priorities for improving clinical guidance.

  • psychiatry
  • telemedicine
  • information management

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Contributors SL and JT designed the procedure. SL and JT screened articles for eligibility. SL and LS compiled and mapped questions from frameworks. SL and JT composed the manuscript.

  • Funding This work was supported by a gift from the Argosy Foundation.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Additional data are presented in Appendices A and B.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.