
Original research
Descriptive study of the challenges when implementing an app for patients with neovascular age-related macular degeneration to monitor their vision at home
  1. Barnaby C Reeves1,
  2. Robin Wickens1,
  3. Sean R O’Connor2,
  4. Eleanor Alma Gidman1,
  5. E Ward1,
  6. Charlene Treanor2,
  7. Tunde Peto3,
  8. Ben J L Burton4,
  9. Paul C Knox5,
  10. Andrew Lotery6,
  11. Sobha Sivaprasad7,
  12. Michael Donnelly2,
  13. Chris A Rogers1,
  14. Ruth E Hogg2
  1. 1Bristol Trials Centre, Bristol Medical School, University of Bristol, Bristol, UK
  2. 2Centre for Public Health, Queen's University Belfast, Belfast, UK
  3. 3Queen's University Belfast Faculty of Medicine Health and Life Sciences, Belfast, UK
  4. 4James Paget University Hospitals NHS Trust, Great Yarmouth, UK
  5. 5University of Liverpool, Liverpool, UK
  6. 6University of Southampton, Southampton, UK
  7. 7Moorfields Eye Hospital, London, UK
  1. Correspondence to Barnaby C Reeves; barney.reeves@bristol.ac.uk

Abstract

Objectives Remote monitoring of health has the potential to reduce the burden to patients of face-to-face appointments and make healthcare more efficient. Apps are available for patients to self-monitor vision at home, for example, to detect reactivation of age-related macular degeneration (AMD). Describing the challenges of implementing apps for self-monitoring of vision at home was an objective of the MONARCH study, which evaluated two vision-monitoring apps on an iPod Touch (Multibit and MyVisionTrack).

Design Diagnostic Test Accuracy study.

Setting Six UK hospitals.

Methods The study provides an example of the real-world implementation of such apps across health sectors in an older population. The challenges described include: (1) the frequency of, and reasons for, incoming calls made to a helpline and outgoing calls made to participants; (2) the frequency and duration of events responsible for the tests being unavailable; and (3) other technical and logistical challenges.

Results Patients (n=297) in the study were familiar with technology; 252/296 (85%) had internet at home and 197/296 (67%) had used a smartphone. Nevertheless, 141 (46%) called the study helpline, more often than anticipated. Of 435 reasons for calling, all but 42 (10%) related to testing with the apps or hardware, which contributed to reduced adherence. The team made at least one call to 133 patients (44%) to investigate why data had not been transmitted. The Multibit and MyVisionTrack apps were unavailable for 15 and 30 of 1318 testing days, respectively, for reasons that were the responsibility of the app providers. Researchers also experienced technical challenges with a multiple device management system. Logistical challenges included regulations for transporting lithium-ion batteries and malfunctioning chargers.

Conclusions Implementation of similar technologies should incorporate a well-resourced helpline and build in additional training time for participants and troubleshooting time for staff. There should also be robust evidence that chosen technologies are fit for the intended purpose.

Trial registration number ISRCTN79058224.

  • Telemedicine
  • Medical retina
  • Aging

Data availability statement

Data are available upon reasonable request. Individual participant data (IPD) sharing plan. Data will not be made available for sharing until after publication of the main results of the study. Thereafter, anonymised individual patient data will be made available for secondary research, conditional on assurance from the secondary researcher that the proposed use of the data is compliant with the MRC Policy on Data Preservation and Sharing regarding scientific quality, ethical requirements, and value for money. A minimum requirement with respect to scientific quality will be a publicly available prespecified protocol describing the purpose, methods, and analysis of the secondary research, for example, a protocol for a Cochrane systematic review. The second file containing patient identifiers would be made available for record linkage or a similar purpose, subject to confirmation that the secondary research protocol has been approved by a UK REC or other similar, approved ethics review body.

http://creativecommons.org/licenses/by-nc/4.0/

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


STRENGTHS AND LIMITATIONS OF THIS STUDY

  • Occurred within the context of a well-designed Diagnostic Test Accuracy study that adhered to Standards for Reporting Diagnostic Accuracy (STARD) reporting guidelines.

  • Systematically recorded observations throughout the study duration.

  • Pragmatic study: challenges were observed among participants recruited from usual-care eye clinics, so they are representative of the kinds of patients for whom the apps were intended.

  • Some of the challenges were unanticipated, so additional documentation and recording procedures had to be initiated throughout the study.

  • The training and retraining sessions were not operationalised using a standard operating procedure due to the considerable experience of the study staff.

Introduction

The development and implementation of self-monitoring technology for chronic conditions has the potential to ease the burden on both patients and hospital services.1

Applications (apps) that communicate information from a remote setting, typically the home, to a care provider are burgeoning. The feasibility of implementing these technologies is improving as digital literacy increases in older age groups. Physiological information can often be stored and transmitted passively, but apps that require the active engagement of a patient are more difficult to implement. Home-monitoring of a sensory impairment is one such situation.

Neovascular age-related macular degeneration (nAMD) is a common cause of visual impairment worldwide.2 Treatment involves a series of intraocular injections, then regular hospital follow-up, usually every 1–3 months, during which more treatment may be required if the neovascular activity is deemed to have reactivated. This pattern of regular follow-up during periods of remission, interspersed with additional treatment, may continue for years and places a significant burden on health services.3 If self-monitoring tests could accurately detect the need for retreatment, regular check-ups during periods of non-treatment would not be needed and hospital clinic appointments could be reserved for patients most likely to require treatment, as indicated by the home-monitoring tests.4 While it may be anticipated that older people will find it challenging to use the electronic devices, such as smartphones and tablets, that are the platform for some home-monitoring tests, studies in age-related macular degeneration (AMD)5 6 and other conditions such as glaucoma7 8 have shown that this can be achieved.

The MONARCH (Monitoring for neovascular Age-related macular degeneration (AMD) Reactivation at Home) study was commissioned by the National Institute for Health and Care Research (NIHR) Health Technology Assessment (HTA) Programme to investigate ‘Are newer tests and devices more accurate and acceptable than the Amsler grid for self-monitoring age-related macular degeneration between routine clinic visits?’ This report addresses a secondary objective of the study, namely to describe the challenges arising during set-up, recruitment, and follow-up when implementing two electronic tests, provided as ‘apps’.9 The primary objective of the study was to determine the diagnostic accuracy of the apps to detect reactivation of nAMD, reported elsewhere.10 11

Methods

The MONARCH study recruited patients at six UK hospitals. Ethical approval to carry out the study at all hospitals was given by Northern Ireland Health and Social Care Research Ethics Committee A (reference number: 17/NI/0235) on 29 January 2018.

Participants

Participants had to have at least one study eye (diagnosed with nAMD and first treated between 6 and 42 months prior to informed consent). Eyes that had had surgery in the preceding 6 months, had any other vision-limiting eye conditions, or had vision that was too poor to be able to do the tests (visual acuity Snellen 6/60, logMAR 1.04 or 33 letters) were excluded. Patients were approached to join the study if they were able to operate the home-testing equipment independently (confirmed during a training session) and if home circumstances permitted (satisfactory mobile phone reception or other way to connect to the internet).

Participants were provided with an Apple iPod with two preloaded apps, Multibit (MTB; a near acuity threshold test of neuroretinal damage) and MyVisionTrack (mVT; a shape discrimination test which measures hyperacuity), a personal Wifi (MiFi) device for remote data transmission across the mobile phone network and relevant accessories, for example, device mains chargers.10 11 Participants recruited early in the study self-tested for up to 2 years and the minimum period of self-testing for the last participants recruited was 6 months (unless a participant or his/her managing ophthalmologist asked to withdraw sooner). The research team did not withdraw a patient, for example, for non-adherence to weekly testing.

Training

An in-person study overview and training session was provided to all local study teams by the project manager and Chief Investigator. Local study teams were part of the National Institute for Health and Care Research (NIHR) Clinical Research Network (CRN) and so were highly experienced in research delivery. An information and training session was led by a member of the local research team with experience of working with patients at a hospital visit (such as a research nurse or optometrist). At the information and training session, the potential participant was shown the equipment and how it should be used for the study. This was not operationalised using a standard operating procedure. The trainer supervised self-testing during the training (ability to do the tests at training was an eligibility criterion) and answered any further questions. If required, participants could request retraining (or the local team could offer it) using the same method, that is, in person alongside a scheduled hospital outpatient appointment.

Outcome measures for describing challenges

Processes were in place from the beginning of recruitment to capture information regarding several challenges that were identified during the design of the study. We set up a telephone helpline at the start of the study (a mobile phone, staffed by a researcher during office hours), anticipating that some participants would need support. When required, participants were offered retraining. In addition, throughout recruitment and follow-up, challenges arose that involved the apps, the devices and patient testing; challenges were anticipated unless stated otherwise.

These challenges and the actions taken to address them were documented and categorised:

Participants’ lack of familiarity with the technology

At baseline, participants were asked about their past/current use and experience of technology. The duration and frequency of retraining sessions were recorded. All incoming calls to a dedicated telephone helpline for participants were logged, categorised by reason for the call, and the duration recorded. The findings of qualitative interviews with participants, carers, and healthcare professionals about their experiences when using the apps are published elsewhere.12

Reasons for expected data not being transmitted

We did not anticipate the need to make outgoing telephone calls to participants but, early in the study, decided to do so when expected data were not transmitted. Telephone calls were made to patients if app data had not been received within 2 weeks of consent or if no new test data had been received by 3 weeks after the previous test. (Participants were sometimes not called again immediately when they triggered the latter criterion if they had been called recently.) All calls were logged, documenting one or more reasons for the absence of test data; the information collected enabled the study personnel to distinguish between participants not testing and test data not being transmitted. An automated SMS notification system was implemented partway through the study to prompt participants to test or seek assistance. All SMS notifications were documented.
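
For illustration only, the trigger rules above can be expressed as a short decision function. The Python sketch below is not the study's actual system: the 2-week and 3-week thresholds come from the text, but the function name, data structure and the 'called recently' window are assumptions.

```python
from datetime import date, timedelta

# Illustrative sketch of the outgoing-call trigger rules described above.
# The 2-week and 3-week thresholds come from the text; the names and the
# 'called recently' window are assumptions for illustration.

CONSENT_GRACE = timedelta(weeks=2)       # no data within 2 weeks of consent
TEST_GAP = timedelta(weeks=3)            # no new data 3 weeks after the previous test
RECENT_CALL_WINDOW = timedelta(weeks=1)  # assumed definition of 'called recently'


def needs_outgoing_call(consent_date, last_test_date, last_call_date, today=None):
    """Return True if a participant should be telephoned about missing data."""
    today = today or date.today()
    if last_test_date is None:
        # No test data received at all since consent.
        return today - consent_date > CONSENT_GRACE
    overdue = today - last_test_date > TEST_GAP
    if overdue and last_call_date is not None and today - last_call_date < RECENT_CALL_WINDOW:
        # Participants called recently were not necessarily called again immediately.
        return False
    return overdue


# Example: consented 1 March 2019, no data received by 20 March 2019 -> call.
print(needs_outgoing_call(date(2019, 3, 1), None, None, today=date(2019, 3, 20)))  # True
```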

Issues with devices

All issues with the delivery, set-up, and operation of the devices were documented. None of these was anticipated.

Issues with the apps

There were several unanticipated technological issues with the apps and the automatic transmission of data to the study management centre through online data portals. These issues, and incoming calls to the participant helpline related to them, were documented as they arose.

Analysis

The outcomes were summarised descriptively where possible, for example, counts and percentages. No statistical tests were carried out. Some challenges were not quantitative and are described narratively.

Patient and public involvement

A patient and public involvement group comprised patients with nAMD who were not taking part in the study. Group members advised on patient-facing documents used in the study but were unable to provide useful feedback about the design and operation of the apps: they had little or no experience of similar software and seemed unwilling to criticise it.

Results

Participants

In total, 297 patients (mean age 74.9 years) took part. Median testing frequency was four times per month (IQR 1–4) for both apps. About 60% of participants were continuing to test their study eyes when follow-up stopped, 6–24 months after they started to self-test. Time to stopping testing is shown in figure 1; this was very similar for each of the apps, with about 38% having stopped testing by 12 months after starting. Most participants who stopped testing did so with both apps. Over the course of the study, 32% of participants withdrew by 12 months, mostly due to decisions by the participants themselves (88/94, 94%), most often for ‘personal reasons’ (50/88), followed by testing being too time-consuming (25 participants).

Figure 1

Time to stopping testing with mVT (top panel) and MTB apps (bottom panel). MTB, multibit; mVT, MyVisionTrack.

Participants’ lack of familiarity with the technology

Participants recruited to the study made extensive use of widely available technology: at least weekly, 67% used a smartphone, 85% used the internet at home, and 72% used email (see figure 2).

Figure 2

Technology used at least weekly by participants before starting to self-test in the study (n=297).

Despite this apparent familiarity with smart devices, a total of 353 incoming calls requesting support with testing were received over about 350 person-years of testing in the study. These calls were made by 141 identified participants (47.5% of all participants; this percentage varied between 27.5% and 71.4% across sites) and 30 callers for whom no study ID was available. Calls were distributed over the course of the study, not just when participants started to test. A total of 33 hours was spent answering the calls. The median number of calls per identified participant was three (range 1–11) and the median call duration was 5 min (range 2–7 min).

The reasons for incoming calls to the study helpline are detailed in figure 3. For each call to the helpline, a maximum of two reasons were recorded. All but 42 (10%) of 435 reasons related to testing with the apps, although for a small proportion (3.7%) callers were only seeking reassurance that data had been transmitted successfully. One or both electronic apps accounted directly for 47% (205) of reasons. Devices issued to participants (iPod Touch, MiFi device, chargers, or connectivity) accounted for a further 27% (116). Responding to an SMS text message prompt to test was the most frequent other reason (13%; 56) for calling.

Figure 3

Reasons for incoming calls to the helpline. MiFi, personal Wifi device; mVT, MyVisionTrack.

Eighteen of the 297 (6%) participants needed retraining (11 at just one of the six participating sites).

Reasons were logged in relation to 353 calls; up to two reasons could be logged for one call. Eight calls had no recorded reason.

Reasons for expected data not being transmitted

For each telephone call made to a patient to find out why new test data had not been transmitted, a maximum of two reasons for calling the participant were recorded. Outgoing calls were suspended between March and June 2020 during the COVID-19 pandemic.

A total of 272 calls were made to 133 participants (44.1%); the median number of calls made to a participant was three (range 1–7); participants who had longer total follow-up time in the study required on average more outgoing calls to be made. Of the 272 calls, 180 (66%) were answered but the participant being called was unavailable for 13 (5%; eg, call answered by a partner when the participant was absent from the home). A total of 218 reasons for data not being transmitted were logged for the remaining 167 calls; frequent reasons for data not having been received were to do with the apps (23%), devices issued to participants (37%) or reasons not to do with the tests (40%; see figure 4).

Figure 4

Reasons reported by participants for their data not being received during outgoing calls. MiFi, personal Wifi device; mVT, MyVisionTrack.

A total of 218 reasons were logged in relation to 180 of 272 calls; up to two reasons could be logged for one call. Ninety-two calls had no recorded reason.

Issues with devices

The devices we issued to participants caused many incoming and outgoing calls as described above. Here, we give more details about the challenges with devices.

As we supplied the MiFi device with only a limited data volume contract, we were keen to reduce the opportunity for using the iPod for other things. Therefore, we decided to use multiple device management software (MDMS) to retain control of all devices during the study. We chose to use the free Apple product because commercial custom-designed versions appeared expensive in comparison. During the study, we were unable to stop Apple updates being ‘pushed’ to devices, causing confusion for some participants and resulting in calls to the helpline. The MDMS also meant that all the iPods had to be loaded with the apps in Belfast and then transported to sites; this was problematic due to strict regulations governing the transport of lithium-ion batteries by air and ferry (only specific carriers were able to accept them and only two batteries could be included in each package, significantly increasing the costs and time required to administer the process).

Mobile phone connectivity using the MiFi device was required for app data to be transmitted. Although mobile phone coverage was checked at the outset, connectivity was still a problem for some participants some of the time, either due to variability in the strength of mobile phone coverage or difficulty in using the MiFi device. We also had considerable problems at one of our sites, which had no network coverage within the hospital, preventing device set-up (including app activation) from being carried out at the training visit. This challenge required us to negotiate with the hospital IT department for special permission to access a sufficiently secure Wifi network.

During the study, a participant reported that the two-port USB charger issued with the hardware had exploded in a power socket while charging the devices. The charger casing had split into two pieces and the electricity supply to the participant’s house was tripped. The participant was unharmed and there was no damage to the house or the iPod and MiFi device. Subsequently, a letter was posted to all participants providing revised instructions on charging the study devices.

Issues with apps

The apps we were evaluating also caused many incoming and outgoing calls as described above. Here, we give more details about the challenges with the apps.

A variety of unanticipated issues arose with the apps during the study causing one or other app to be unavailable for varying amounts of time, which affected both participant recruitment and self-testing by participants already in the study. These occasions accrued to 15 days of testing for the MTB test and 30 days for the mVT test. To put these numbers in context, total testing time in the study spanned 1318 days, although the number of participants testing with each app varied over the course of the study.
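
As a simple worked calculation from the figures above, the outages amounted to a small fraction of the total testing period:

$$\frac{15}{1318}\approx 1.1\%\ \text{(MTB)} \qquad \text{and} \qquad \frac{30}{1318}\approx 2.3\%\ \text{(mVT)}.$$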

Early in the study, the company which created the mVT app was acquired by Roche Holding AG (Basel, Switzerland). On acquisition, the server supporting the mVT app was switched off and the app was unavailable for 22 days in total as a result. The mVT app was also incompatible with v12.0 of the iPod operating system, preventing it from working on any iPod that had been updated to v12.0 at that time. When the mVT app was unavailable, prospective participants attending an information and training session could not have it activated on their iPods. The MDMS was set to install future updates automatically and the equipment instructions were modified to tell patients what to do if their iPod needed to install an update.

Around this time, sites and participants reported a variety of technical issues, such as error messages, inability of the iPod to connect to the internet, the MTB app failing to work on two iPods following an update, iPods of a certain batch being unable to update to v12.0 of the operating system, and three iPods with an mVT app error preventing activation of the app. The team’s ability to guide a participant through problem-solving remotely over the telephone varied substantially by participant.

For both apps, a security certificate was required for the servers that hosted the online portals. The certificates expired unexpectedly, causing the test portals to become temporarily unavailable. Renewal of the certificates was the responsibility of the app providers. For the MTB app, the portal was down for less than 1 day and the issue was resolved by 3 pm. The helpline received three phone calls about this issue. For the mVT app, the portal was down for a week. During this time, participants could test as normal, but new devices could not be activated. This issue affected four participant training sessions: sessions had to be rearranged and participants were sent home without equipment or without the mVT app activated.

Test data for the MTB app could not be downloaded for 11 days because the portal server had reached full capacity. The server capacity was increased, with a concomitant increase in cost. On a separate occasion, participants could not test using the MTB app and the online portal could not be accessed for 14 days. The issue was caused by the expiry of a domain certificate required for the host database, leading to over 50 participants contacting the study team.
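
Outages of this kind, caused by expiring security or domain certificates, can be caught early by routinely checking certificate expiry dates. The sketch below, using only Python's standard library, shows one way such a check could be run against a portal server; the hostname is a placeholder, not one of the study's portals.

```python
import socket
import ssl
import time

# Minimal sketch of a TLS certificate-expiry check of the kind that could have
# flagged the portal outages described above before they happened.
# The hostname passed in below is a placeholder, not a study portal address.


def days_until_cert_expiry(hostname: str, port: int = 443) -> float:
    """Return the number of days until the server's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expiry_epoch = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expiry_epoch - time.time()) / 86400  # 86400 seconds per day


if __name__ == "__main__":
    remaining = days_until_cert_expiry("example.org")  # placeholder hostname
    if remaining < 30:
        print(f"Warning: certificate expires in {remaining:.0f} days")
    else:
        print(f"Certificate valid for another {remaining:.0f} days")
```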

Discussion

We believe that the MONARCH study provided a unique opportunity to document the challenges of implementing home-monitoring. Our experiences may help those who try to implement similar apps in the future, whether to monitor vision or other health functions by patients testing their performance. All index tests had encouraging peer-reviewed evidence in similar populations when the study was conceived. Both app-based tests had formal spin-out companies and, at the time of trial initiation, both had been acquired by pharmaceutical companies for further application and development. Hence, we believed that they were relatively well developed.

We have described a wide range of practical and logistical challenges that we experienced during the study. These challenges arose in the context of pragmatic use by a large number of patients over an extended period of time;13 50%–60% of patients who volunteered to take part in the study were still testing with the apps after 1 year, despite the challenges. Our findings are important because most of the challenges we experienced would affect future implementations of similar apps, whether in the context of another pragmatic diagnostic accuracy study or usual care. Given the costs and effort involved in conducting a study or changing a clinical pathway, future implementations of home vision testing need to address these challenges in advance.

Provision of hardware

We chose to provide all the equipment required for participants to test at home, partly to ensure a uniform platform for the apps, minimising the extent to which this could be criticised as an explanation for poor performance, and partly to minimise inequality. The disadvantage was that participants had to acquaint themselves with two new pieces of equipment. Some challenges might have been addressed by requiring users to implement the apps on their own equipment. However, this would have limited the number of potentially eligible patients and likely have increased inequality in access.14 Software developers would also have had to adapt the apps to the range of platforms used by patients. We would also have had to assume that app performance was invariant across the platforms on which the app was implemented.

In the context of a health service implementing home-monitoring, the same choice between issuing hardware and asking patients to use their own would exist. Noting the challenge we experienced with defective USB chargers, a health service choosing to issue hardware would probably need to check the safety of any equipment provided; such a check could not guarantee the safety of a device after it had been provided.

Need for study helpline

The study highlighted the importance of providing a well-resourced study helpline. We foresaw the need to provide this facility to resolve issues with apps or equipment but we did not anticipate the extent of the demand. Extrapolating this to a widespread deployment would require significant investment in a call centre or chatbot service, the resources for which would need to be set against the efficiency achieved by reducing the number of hospital appointments. We did not anticipate the additional need to call study participants to ‘chase’ for data or to troubleshoot when no data had been received. Solutions could potentially be automated but, in a real-world clinical scenario, careful protocols would nevertheless be required for reminders and to prompt decisions about clinical follow-up to ensure that patients who stop testing are not lost to follow-up.

Challenges with the technology

The study evaluated the most developed technologies we identified at the time of designing the study but, with hindsight, we realise the apps were at too early a stage of development for a pragmatic evaluation. The developers did not seem to have appreciated the need for reliable provision (a working app) in the context of wide-scale deployment (and evaluation) in a health service. The pilot data had generally been gathered in single-centre studies, usually under the supervision of the researchers responsible for app development. More recently, guidelines15 16 about best practice for app development have been published and future studies should only assess apps that adhere to such guidelines.

Findings in the context of existing literature

Since our study began, several other studies have evaluated the utility of tablet-based or phone-based tests for patients to monitor macular disease themselves. Reports from such studies have mainly tended to emphasise the success of implementing tests, often in small (<100) and selected populations, for example, describing the average number of test occasions per participant, rather than the challenges and barriers. It is clear that many patients are able to use a variety of apps on tablet-type devices and test regularly, as we also found.5 6 17–20 It is less clear why some patients may have struggled to test or tested less frequently, and our findings provide insights about possible reasons.

In this context, two aspects worth comparing across studies are the proportions of people offered home-monitoring who take up the offer, and the proportion of these who test regularly. In the MONARCH study, these percentages were 32% and 88%.13

The Alleye app (Oculocare, Zurich, Switzerland) has been evaluated in a series of pragmatic studies. Islam et al20 described identifying 605 patients receiving antiangiogenic treatment for nAMD or diabetic macular oedema (DMO) as potentially suitable to use the Alleye app, but 63% were either unwilling or did not meet the study inclusion criterion of visual acuity of 6/24 or better. The proportion remaining is similar to MONARCH, that is, 37%. The numbers excluded for each of these two reasons were not reported, nor was the frequency with which the participants were asked to test. The authors did not report a distinction between occasional or one-off testing and regular testing. Participants tested on average 46.9 times, but neither the total time over which they were testing nor the variability across patients was reported. In another study,16 56 patients with nAMD or DMO generated 2258 tests in 222 intervals between clinical reviews. It is unclear what proportion of all patients invited to participate this number represents. Participants on average had 2.5 intervals (SD 1.4) and 6.4 tests per interval (but the variability of the latter across patients was not reported).

Guigou et al invited 60 patients with a variety of oedematous maculopathies to use another app, OdySight (Tilak, France).17 Thirty-seven (62%) created an account and performed at least one test. Using the app, 22 patients (60%, potentially ‘regular’ testers) generated 483 visual acuity tests over up to 9 months, during which time they had 77 consultations, an average of 6.3 tests per interval between consultations.

One of the apps that we evaluated, mVT, was studied within a service quality improvement initiative,21 focusing on ‘active’ use of the app (installed the app and used it at least once) and ‘compliant’ use (performed the test at least twice weekly during at least 4 weeks). Of 417 eligible patients, 258 (62%) tested at least once and, of these, 166 (64%) tested regularly. Participants tested on average 1.83 times per week (SD 2.46).

Conclusion

Healthcare services are rightly cautious about implementing digital technologies. Evaluations need to focus on mature technologies and established third-party providers to minimise the challenges we have experienced. Some challenges could be minimised by requiring users to implement apps on their own equipment, but this would likely increase inequality.


Ethics statements

Patient consent for publication

Ethics approval

This study involves human participants and was approved by Main REC: Northern Ireland Health and Social Care Research Ethics Committee A Reference number: 17/NI/0235. Date of favourable ethical approval: 29 January 2018. Participants gave informed consent to participate in the study before taking part.

Acknowledgments

This study was designed and is being delivered in collaboration with the Clinical Trials and Evaluation Unit (CTEU), a UKCRC registered clinical trials unit which, as part of the Bristol Trials Centre, is in receipt of National Institute for Health Research CTU support funding. We thank the independent members of the MONARCH steering committee, including our public and patient representatives, for their valued contribution and oversight of the study and for their attendance at the steering committee meetings both in person and virtually. We extend our thanks to all the participants who took part in the study and without whom the study would not have been possible. We are grateful to all the staff at the clinical sites who facilitated recruitment, training, and data collection and contributed to the regular study management meetings. We thank the companies and organisations who provided access to their tests for evaluation in this context. Thanks to Mark Roser and Patricia Beaton from the International Macular and Retinal Foundation for help and support with the KeepSight journal. Thanks to Mike Bartlett and Yi-Zhong Wang from Vital Art and Science LLC for access and support with the MyVisionTrack device. Thanks to Lars Frisen and Bo Frisen from Visumetrics for access and support with the Multibit device. Thanks to Novartis and Roche for access to the apps for the duration of the study.

References

Footnotes

  • Contributors REH: conceptualisation, funding acquisition, methodology, supervision, writing the original draft, writing-review, and editing. SS: methodology, writing the original draft, writing-review, and editing. RW: data curation, project administration, writing-review, and editing. SRO’C: data curation, writing-review, and editing. EAG: data curation, formal analysis, validation, visualisation, writing-review, and editing. EW: data curation, project administration, writing-review, and editing. CT: data curation, writing-review, and editing. TP, PCK, AL, MD, and BJLB: methodology, writing-review, and editing. CAR: formal analysis, methodology, validation, visualisation, writing-review, and editing. BCR: conceptualisation, funding acquisition, methodology, supervision, validation, visualisation, writing the original draft, reviewing and editing, and guarantor for the overall content.

  • Funding NIHR Health Technology Assessment Programme (no. 15/97/02). The funder had no role in the write-up or conduct of the study.

  • Competing interests REH reports attendance at a Roche Digital Health Advisory Meeting in July 2019. She also received partial PhD studentship funding from Okko Health in 2021 for home monitoring of diabetic retinopathy. SS reports grants from Boehringer Ingelheim and receiving consulting fees from Boehringer Ingelheim, Novartis, Apellis, Bayer, Oculis, Oxurion, Roche, and Biogen. She also received payment or honoraria from Boehringer Ingelheim and Bayer, support for attending meetings from Bayer, and participation in an advisory board with Bayer. She is also a Macular Society Trustee (unpaid). TP reports grants from Boehringer Ingelheim and Novartis, and receiving consulting fees from Boehringer Ingelheim, Novartis, Apellis, Bayer, Oxurion, Roche, and Sandoz. She also received payment or honoraria (speaker fees and/or advisory boards) from Boehringer Ingelheim, Bayer, Roche, Apellis, Sandoz, Heidelberg, Zeiss, and Optos. PCK reports software support from Vital Art and Science, who produced the My Vision Track app. AL reports receiving consulting fees from, and owning stock or stock options in, Gyroscope Therapeutics. RW, SRO'C, EAG, EW, CT, BJLB, CAR and BCR have no competing interests.

  • Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.

  • Provenance and peer review Not commissioned; externally peer reviewed.