Article Text
Abstract
Introduction Reviews of commercial and publicly available smartphone (mobile) health applications (mHealth app reviews) are being undertaken and published. However, there is variation in the conduct and reporting of mHealth app reviews, with no existing reporting guidelines. Building on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we aim to develop the Consensus for APP Review Reporting Items (CAPPRRI) guidance to support the conduct and reporting of mHealth app reviews. This scoping review of published mHealth app reviews will explore how they align with, deviate from and modify the PRISMA 2020 items for systematic reviews, and identify a list of possible items to include in CAPPRRI.
Method and analysis We are following the Joanna Briggs Institute approach and Arksey and O’Malley’s five-step process. Patient and public contributors, mHealth app review, digital health research and evidence synthesis experts, healthcare professionals and a specialist librarian gave feedback on the methods. We will search SCOPUS, CINAHL Plus, AMED, EMBASE, Medline, APA PsycINFO and the ACM Digital Library for articles reporting mHealth app reviews and use a two-step screening process to identify eligible articles. Information on whether the authors have reported, or how they have modified the PRISMA 2020 items in their reporting, will be extracted. Data extraction will also include the article characteristics, protocol and registration information, review question frameworks used, information about the search and screening process, how apps have been evaluated and evidence of stakeholder engagement. This will be analysed using a content synthesis approach and presented using descriptive statistics and summaries. This protocol is registered on OSF (https://osf.io/5ahjx).
Ethics and dissemination Ethical approval is not required. The findings will be disseminated through peer-reviewed journal publications (shared on our project website and on the EQUATOR Network website where the CAPPRRI guidance has been registered as under development), conference presentations and blog and social media posts in lay language.
- Information technology
- Telemedicine
- eHealth
- STATISTICS & RESEARCH METHODS
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
STRENGTHS AND LIMITATIONS OF THIS STUDY
- This review will be conducted systematically, with data extraction informed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 reporting items and a previous scoping review.
- The protocol has had input from a multidisciplinary team of mHealth app review, digital health and evidence synthesis experts, healthcare professionals, a librarian, and patient and public contributors.
- The broad scope of health topics to be included in the review increases the generalisability of the findings.
- In line with scoping review guidance, a quality appraisal of the included studies will not be conducted.
- Only mHealth app reviews reported in English will be included, meaning some relevant reviews published in other languages may be missed.
Introduction
In 2021, it was estimated that there were more than 350 000 health applications (apps).1 These applications are increasingly being integrated into healthcare, supporting professionals in their clinical practice2 and empowering patients to manage and monitor their health conditions.3 4 However, the quality and reliability of mobile health (mHealth) apps vary significantly,5 as developers can release smartphone health apps without any evaluation. This makes it challenging for health professionals and users without expertise in health research and digital technology to identify mHealth apps and evaluate their suitability for use.
This has led to the emergence of a new method: systematic reviews of commercial and publicly available mHealth apps (hereafter called mHealth app reviews). These provide a standard approach to identifying mHealth apps relevant to a particular use case and assessing aspects such as quality and functionality. Many mHealth app reviews have now been published, on varied topics including genetics,6 7 patient-reported outcomes in oncology,8 mental health,9 10 rheumatoid arthritis,11 strength training,12 menopause,13 exercise,14 15 hand hygiene,16 atrial fibrillation,17 pain18–20 and smoking cessation.21 22 These reviews can serve as a valuable resource for healthcare decision-makers, practitioners, patients and the general public seeking high-quality mHealth apps; they can also identify gaps in the field and may guide researchers and industry in developing new products.
While mHealth app reviews share features with traditional systematic literature reviews, they differ substantially in their methods and reporting23 because they examine commercial and publicly available products on app stores rather than published literature. Examples of traditional systematic reviews of literature describing apps include those on monitoring and managing mental health symptoms24 and self-managing pregnancy.25 While literature reviews can tell us about the effectiveness of apps which have been evaluated and the results published, they do not provide a comprehensive overview of all apps that are commercially or publicly available for use by patients, healthcare professionals and the public. mHealth app reviews also differ in that there are no formal requirements for the protocol to be registered, searches take place on app libraries, screening often takes place in Excel (rather than in purpose-built tools such as Rayyan26 or Covidence27) and the reviews are more challenging to replicate, as apps may emerge, disappear or be updated between searches.23
The EQUATOR Network28 provides an array of guidelines for reporting evaluations of digital technologies, such as the CONSORT-EHEALTH checklist (an extension of the CONSORT checklist tailored for reporting randomised controlled trials of web-based and mobile health interventions29), and guidance on reporting evaluations of specific technologies, such as sensors,30 mHealth interventions,31 telehealth in clinical trials32 and smartphone-delivered ecological momentary assessments.33 There are also several extensions of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines34 available for different types of literature reviews.
In contrast, no reporting guidelines exist for mHealth app reviews, and we are not aware of any currently in development. The need for a reporting standard for health app-focused reviews was emphasised in a scoping review published in 2020.35 The authors reviewed 26 mHealth app reviews published between 2012 and 2018 and found issues in their reporting. For example, the date of the searches was sometimes unclear (38%, 10/26) or absent (15%, 4/26). The number of reviewers involved was also poorly reported in 58% of reviews, and in 83%, it was unclear whether screening was performed independently. Another important finding was the absence of clinical recommendations or reporting on clinical efficacy, found in 77% of the included reviews. Adhering to reporting guidelines may help to minimise the presence of these inconsistencies in reporting and ensure that standardised information is presented.
If a key purpose of an mHealth app review is to enable people to identify the best product for a particular purpose, it is important to further explore how app review authors have evaluated the apps and reported on their outcomes. For example, some previous app reviews have considered the accessibility of the apps by generating readability metrics on the written content.6 8 13 This is especially important for patient- or public-facing apps. There are also various approaches to reporting on efficacy (ie, whether the app achieves its intended outcomes). This may include searching for evidence of a previous evaluation within the app itself, on developers’ websites, or for published literature or conference abstracts in academic databases. For example, the Mobile App Rating Scale (MARS)36 is commonly used in mHealth app reviews and has an item which addresses whether an app has been trialled or tested. Previous app reviews have approached this item by excluding it entirely11 or by searching Google Scholar.6 8 13 16 18 It is unclear how other authors have evaluated the apps and how this has informed their recommendations of apps as being high quality. Understanding the nature of the evaluations (especially efficacy evidence) seems essential in mHealth app reviews, as readers, including health and care workers, patients and the public, and healthcare decision-makers, may use these to choose which apps to use.
We have previously discussed the methodological considerations for conducting systematic mHealth app reviews; introducing a seven-step method and the TECH framework (Target user, Evaluation focus, Connectedness and the Health domain) for developing research (review) questions and determining app eligibility criteria.23 This is the first stage of a broader project that aims to systematise the process of conducting and reporting mHealth app reviews.
The next step is to develop reporting guidelines to support authors of mHealth app reviews in transparently presenting their methods and findings. The field of app reviews is rapidly developing and expanding, so a new scoping review is required to update the one previously reported, which had a search date of 2018.35 A preliminary search of the Cochrane Database of Systematic Reviews, Google Scholar and Joanna Briggs Institute (JBI) Evidence Synthesis was conducted, and no current or ongoing systematic reviews or scoping reviews on the topic were identified. Therefore, we are undertaking a scoping review to build on and update the scoping review by Grainger et al.35
Objectives
In line with guidance for developing reporting guidelines,37 the next step is to search for relevant evidence on the quality of the reporting of published mHealth app reviews. The aim of this work is, therefore, to conduct a scoping review of published mHealth app reviews to explore how they align with, deviate from and modify the PRISMA 2020 items and to identify a list of possible items to include in the new Consensus for APP Review Reporting Items (CAPPRRI) guideline.
Methods
Scoping review
The methods for this scoping review were developed in alignment with the JBI approach for scoping reviews38 and reviewed by a group of patient and public contributors and an advisory group consisting of mHealth app review, digital health research and evidence synthesis experts and National Health Service (NHS) healthcare professionals interested in app reviews. The review will be carried out using the five-step process for conducting scoping reviews, originally outlined by Arksey and O’Malley.39 This protocol has already been registered and made publicly available on OSF (https://osf.io/5ahjx). The final review will be reported using the PRISMA extension guidelines for Scoping Reviews.40 This protocol has been reported using the PRISMA-P extension41 (see online supplemental appendix 1). We will start the review on 2 January 2024 and complete it by 2 September 2024.
Procedures
Identifying the initial research question
We used the Study, Data, Methods and Outcomes (SDMO) acronym to inform our research questions and eligibility criteria, which has been recommended when conducting reviews on methodology or theory.42 Additionally, as suggested by Levac et al,43 we considered the purpose and expected outputs of the review to assist in writing the research questions. The purpose and expected outputs are primarily a list of potential reporting items used to inform the future CAPPRRI guideline. Building on the PRISMA 202034 items is appropriate as many app review authors already informally use the PRISMA items to report their work or have attempted to amend the PRISMA flow chart when reporting their app search and screening process.6 8 11–13 16–18 21
The second question seeks to understand what outcomes were evaluated in mHealth app reviews (eg, usability, functionality, privacy, accessibility and efficacy). This builds on the previous review,35 which found that most of the app reviews did not make clinical recommendations or report clinical efficacy (ie, whether the app could meet desired outcomes in a clinical context). We are, therefore, interested in understanding what the outcomes were in general, and whether any of the app reviews reported on efficacy in any other sense such as satisfaction, increased knowledge or perceived support.
The two key questions are as follows:
In published reviews of commercial and publicly available mHealth apps, how does reporting align with or deviate from the PRISMA 2020 items? Have authors used items that directly align with the PRISMA 2020 items, or have these been modified?
What outcomes did the mHealth app reviews evaluate and report on?
Identifying relevant studies
We will search the SCOPUS, CINAHL Plus (via EBSCO), AMED (via Ovid), EMBASE (via Ovid), Medline (via Ovid), APA PsycINFO and ACM Digital Library databases for published mHealth app reviews, under the guidance of a teaching and learning librarian who has given input on the search strategy. Reference lists of eligible articles will also be handsearched for additional sources (snowballing) while a forward citation approach will be used to identify app reviews that have cited earlier published work.
The key terms used to build the search strategy are shown in table 1, with the full search strategy presented in online supplemental appendix 2. Where appropriate, subject headings will be applied in each database. Keywords within each group will be combined with the ‘OR’ Boolean operator, and the technology and review-type keyword groups will be combined with the ‘AND’ Boolean operator. A proximity operator of five words will be applied to the review-type keywords to capture different variations of app review, such as review of apps, review of smartphone apps or review of patient-facing mobile health applications.
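The Boolean logic described above can be sketched in a short Python snippet. The keywords, the pairings and the `N5` proximity notation below are illustrative placeholders only, not the protocol's actual search strategy (which appears in online supplemental appendix 2), and real proximity syntax varies by database.

```python
# Illustrative sketch only: hypothetical keywords showing how keyword groups
# are combined with OR within each group and AND between groups. 'N5' is a
# generic placeholder for a within-five-words proximity operator.

def build_query(technology_terms, review_pairs, proximity=5):
    """Join each keyword group with OR, then combine the groups with AND."""
    tech = " OR ".join(f'"{t}"' for t in technology_terms)
    review = " OR ".join(
        f'"{a}" N{proximity} "{b}"' for a, b in review_pairs
    )
    return f"({tech}) AND ({review})"

query = build_query(
    ["mobile application", "smartphone app", "mHealth"],  # hypothetical technology keywords
    [("review", "apps"), ("review", "applications")],     # hypothetical review-type pairs
)
print(query)
```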
Publication dates will be limited to 1 January 2007 onwards, as the first iPhone was released on 29 June 2007, meaning there were no apps available to review before 2007.
Study selection
Table 2 presents the eligibility criteria for the literature, using the SDMO acronym.42 Broadly, our inclusion criteria are reviews:
- of commercial and publicly available mobile (smartphone) apps, published in English on or after 1 January 2007 (type of study);
- that have a health focus (type of data);
- that use any method and measure of evaluating apps (eg, MARS or user ratings); and
- that report any outcome or focus, including the availability of apps or their evaluation (eg, quality, functionality, privacy and security or adherence to clinical guidelines).
We will only include mHealth app reviews published in English. However, we will separately list articles that were excluded due to language, which can enable others to easily identify these papers for subsequent reviews.
The final search results will be imported into Rayyan26 for deduplication and screening. As suggested by Levac et al,43 the screening process will be iterative and use a team-based approach. An initial meeting will be held with all researchers involved in the screening process to discuss interpretation of the eligibility criteria and reach a shared understanding after an initial set of records have been pilot screened. A two-step process will then be followed. First, two researchers will independently review each abstract/title against the eligibility criteria. Second, the full text of records potentially eligible based on abstract/title screening will be reviewed by two researchers independently. A meeting will be held after the second stage to discuss and reach consensus where there is disagreement. A third reviewer will make the final decision if consensus cannot be reached. Depending on the number of articles, two teams of two researchers may perform the screening process, with a fifth researcher available to resolve disagreements. The search and screening process will be presented as a flow chart.
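As a minimal sketch (not the authors' software), the consensus rule in the screening process above can be expressed as a small function; the decision labels are hypothetical.

```python
# Sketch of the two-reviewer screening rule: agreement stands, disagreements
# are flagged for a consensus meeting, and a third reviewer makes the final
# call if consensus cannot be reached.

def screening_decision(reviewer_a, reviewer_b, third_reviewer=None):
    """Return the outcome for one record ('include', 'exclude' or 'discuss')."""
    if reviewer_a == reviewer_b:
        return reviewer_a            # independent reviewers agree
    if third_reviewer is None:
        return "discuss"             # flag for the consensus meeting
    return third_reviewer            # third reviewer's final decision
```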
Charting the data
A data extraction sheet will be created in Excel using headers related to our seven-step method and TECH framework,23 the PRISMA 2020 items,34 whether and how authors modified them, additional information reported, and the methods used to appraise the apps’ quality, functionality or efficacy. Some items have also been taken from the review conducted by Grainger et al,35 as these capture details unique to app reviews (eg, how app stores were searched). Table 3 presents the data extraction items.
Consistent with recommendations by Levac et al,43 we will take an iterative approach to charting by continually updating the data-charting form as needed. The research team will first pilot the data extraction sheet, by extracting the data from one app review, with a discussion afterwards to ensure consistency in interpretation of the items. Data will then be extracted from the other articles, with one author extracting the information and another checking this. Depending on the final number of included articles, this will be split between the researchers.
It has been suggested that some scoping reviews should also include quality assessments of the methodology used in the articles.43 However, as this is not the focus of our review and as no specific quality assessment tool currently exists for mHealth app reviews, the quality of the included studies will not be assessed.
Collating, summarising and reporting the results
Similar to the previous review35 and as recommended by Arksey and O’Malley,39 we will report data as frequencies (where possible) to determine which items were reported as is, or whether they were modified.
Information that cannot be reported as frequencies, such as how the PRISMA 2020 items were modified and how other relevant information was reported, will be summarised using a content synthesis approach to help identify new items for the CAPPRRI guideline.
Overall, the results will be reported using descriptions and examples, while some of the numerical results will also be presented in tables and figures.
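The frequency reporting described above can be sketched with Python's standard library; the status labels and counts below are hypothetical illustrations, not findings.

```python
from collections import Counter

# Hypothetical extraction results for one PRISMA 2020 item across five
# included reviews (labels and data are illustrative only).
item_status = ["reported", "modified", "not reported", "reported", "modified"]

counts = Counter(item_status)
total = len(item_status)

# Summarise each status as a frequency and percentage, most common first.
for status, n in counts.most_common():
    print(f"{status}: {n}/{total} ({100 * n / total:.0f}%)")
```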
Strengths and limitations
This scoping review will be conducted in a systematic and rigorous manner, with data extraction informed by the existing PRISMA 202034 reporting items and a previous scoping review.35 It also adheres to existing guidance on conducting scoping reviews, including that from the JBI38 and Arksey and O’Malley,39 and has had input from a multidisciplinary team of mHealth app review, digital health and evidence synthesis experts, NHS healthcare professionals, a librarian, and patient and public contributors. The broad scope of health topics included (together with the methodological focus) also means that the findings will be widely generalisable.
A fundamental limitation is the inability to assess the quality of the included reviews, due to an absence of quality appraisal tools for reviews of commercial and publicly available mHealth apps. This limitation means that low-quality app reviews may also contribute to the development of the future CAPPRRI guideline. However, we will mitigate this in the next phase of the project, in which a Delphi study with experts will help to prioritise the items. Another limitation concerns restricting the included mHealth app reviews to those reported in English which may lead to other relevant reviews being excluded.
Patient and public involvement
We have established a Patient and Public Involvement and Engagement (PPIE) group to give feedback on our project. The group has been consulted to provide input on the protocol and suggested additional items that should be extracted (ie, whether the accessibility of the apps was evaluated and whether a lay summary was provided). They also gave ideas for how the findings should be disseminated. We will continue to consult with them throughout the review; this will inform the iterative aspects of the scoping review process and will help to guide the findings and their dissemination.
Ethics and dissemination
Ethical approval is not required to conduct this scoping review, which will use only previously published data.
The findings of this scoping review will be disseminated through peer-reviewed journal publications, which will be shared on our project website and on the EQUATOR Network website, where the CAPPRRI guideline has been registered as under development, as well as through conference presentations, blog posts and short lay-language summaries on professional social media.
Conclusion
This protocol describes how we will conduct a scoping review of published mHealth app reviews to explore how they align with, deviate from and modify the PRISMA 2020 items and to identify a list of possible items to include in the new CAPPRRI reporting guideline. The results will inform the next phase in developing the CAPPRRI guideline: a Delphi study to reach a consensus on which items are most relevant and important to include.
Ethics statements
Patient consent for publication
Acknowledgments
We would like to acknowledge Mr Michael Stevenson for assisting with the search strategy. We would also like to thank our patient and public contributors for their input—thank you Amber McAvoy, Eric Lowndes, Ashgan Mahyoub and Beatrice Namu. We acknowledge Mrs Amy Vercell and Dr Lisa McGarrigle who are part of our team.
Footnotes
X @drbeckyg, @fhscChester
Contributors NG, GN and DD designed the protocol. NG wrote the first draft of the manuscript, with support from GN. NG, GN, RG, CE-T, DJ, SMA, SNvdV, CRF, AH, KL, MB, AD, DP, CS and DD revised the protocol critically and approved the final manuscript.
Funding This work is funded by the National Institute for Health and Care Research Applied Research Collaboration Greater Manchester (grant award: NIHR200174).
Disclaimer The views expressed in this publication are those of the authors and not necessarily those of the National Institute for Health and Care Research or the Department of Health and Social Care.
Competing interests None declared.
Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.