

Protocol
SocialBit: protocol for a prospective observational study to validate a wearable social sensor for stroke survivors with diverse neurological abilities
  1. Kelly White1,2,
  2. Samuel Tate3,
  3. Ross Zafonte4,
  4. Shrikanth Narayanan5,
  5. Matthias R Mehl6,
  6. Min Shin3,
  7. Amar Dhand1,2
  1. Department of Neurology, Brigham and Women's Hospital, Boston, Massachusetts, USA
  2. Harvard Medical School, Boston, Massachusetts, USA
  3. Department of Computer Science, The University of North Carolina at Charlotte, Charlotte, North Carolina, USA
  4. Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital Boston, Boston, Massachusetts, USA
  5. Department of Engineering, University of Southern California, Los Angeles, California, USA
  6. Department of Psychology, University of Arizona, Tucson, Arizona, USA

Correspondence to Dr Amar Dhand; adhand@bwh.harvard.edu

Abstract

Introduction Social isolation has been found to be a significant risk factor for health outcomes, on par with traditional risk factors. This isolation is characterised by reduced social interactions, which can be detected acoustically. To perform this detection, we created a machine learning algorithm called SocialBit. SocialBit runs on a smartwatch and detects minutes of social interaction based on vocal features from ambient audio samples without natural language processing.

Methods and analysis In this study, we aim to validate the accuracy of SocialBit in stroke survivors with varying speech, cognitive and physical deficits. Training and testing on persons with diverse neurological abilities allows SocialBit to be a universally accessible social sensor. We are recruiting 200 patients and following them for up to 8 days during hospitalisation and rehabilitation, while they wear a SocialBit-equipped smartwatch and engage in naturalistic daily interactions. Human observers tally the interactions via a video livestream to establish the ground truth against which SocialBit’s performance is analysed. We also examine the association of social interaction time with stroke characteristics and outcomes. If successful, SocialBit would be the first social sensor available on commercial devices for persons with diverse abilities.

Ethics and dissemination This study has received ethical approval from the Institutional Review Board of Mass General Brigham (Protocol #2020P003739). The results of this study will be published in a peer-reviewed journal.

  • stroke
  • social interaction
  • quality of life

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


STRENGTHS AND LIMITATIONS OF THIS STUDY

  • This study introduces a novel machine learning algorithm to detect and quantify social isolation.

  • The SocialBit wearable sensor accommodates individuals with diverse neurological abilities, promoting inclusivity in social monitoring technologies.

  • By detecting isolation in real time, this study allows for future interventions to minimise the negative effects of isolation on health outcomes.

  • Although this study is conducted in a hospital environment, future studies could validate the algorithm in a more naturalistic home setting.

  • To maintain patient privacy, the SocialBit algorithm does not record raw audio, which poses challenges to the machine learning process.

Introduction

Social connection has a large role in health outcomes.1 This effect is independent of socioeconomic status, smoking, alcohol consumption, obesity, physical activity and utilisation of preventive health services.2 Three decades ago, a meta-analysis concluded that social isolation was a major risk factor for health, and rivalled the effect of cigarette smoking, blood pressure, blood lipids, obesity and physical activity.3 More recently, a meta-analysis of 148 studies showed that persons with stronger social relationships had a 50% increased likelihood of survival, an OR higher than the effect of smoking, alcohol consumption and body mass index on health.4

This study is developing and validating a novel social interaction detection framework and an algorithmic implementation, SocialBit. SocialBit is a smartwatch-based mobile sensing application intended to passively and automatically track the amount of daily interactions of the primary person wearing the device. The application tracks social interactions by sampling ambient audio. Importantly, the application never stores the raw audio but rather stores a series of audio features to serve as input for classification by the SocialBit algorithm. Study investigators collect data from inpatient stroke survivors at Brigham and Women’s Hospital and Spaulding Rehabilitation Hospital in Boston, Massachusetts, capitalising on the ability to monitor patients’ social interactions in real time for multiple days.

SocialBit is the first wearable social interaction detection sensor customised for, and specifically validated with, stroke survivors. After stroke, patients are vulnerable to reduced social interactions and social isolation, which may have negative implications on their physical recovery.5–7 This is due to multiple factors, including changing social desires, language dysfunction, loss of shared activities, reduced energy levels, physical disability, depression, anxiety, motor impairment, environmental barriers, embarrassment and social stigma.8 9 The period immediately after stroke may be considered a particularly vulnerable time for social isolation because of the inability of family or friends to travel to the hospital, a belief that the patient needs to ‘heal’, and the initial severity of deficits that limit time away from home.10

It is specifically challenging to detect social interactions in patients with stroke who have cognitive or language deficits (aphasia). Usual methods of social isolation or social network characterisation rely on retrospective surveys or momentary self-report questionnaires.11 Patients with cognitive or language deficits cannot complete these instruments and are typically left out of such studies. Over 35% of patients with stroke have language deficits immediately after stroke,12 and over 50% of patients have cognitive impairment at 6 months after stroke.13 Such deficits make individuals incapable of providing a valid self-report, a limitation to our prior work on social network characterisation.14 The lack of data on patients with aphasia is especially concerning because of this population’s increased likelihood of social isolation after stroke due to multiple challenges with social integration.6 Our goal is to overcome this selection bias by developing a framework to detect interactions regardless of whether the patient is contributing distinguishable words to the conversation, thereby rendering it a viable interaction tracking solution that is universally accessible and inclusive for people with disabilities.

Previous attempts at measuring social interactions have shown promising results under particular assumptions.15–17 These assumptions, unfortunately, limit the scope of application. For example, one algorithm required mobile apps with access to Global Positioning System (GPS) information as well as the individuals' calendar events to analyse social interactions.16 Another application assumed that each individual wears a sensing device in a bounded environment (e.g., a workplace).15 Another team used cameras to sense interactions, increasing privacy concerns.17 Another study used a smartphone-based conversation classifier, which assumes close proximity of the smartphone to the user.18 Here, we seek a technically parsimonious, privacy-protective and broadly applicable solution to automated social interaction tracking that does not require access to private information (i.e., it does not store raw ambient audio) and only requires the individual in question to be wearing a commercial smartwatch.

The goal of this study is to establish the validity and utility of this tool in detecting social isolation in stroke survivors in the period after stroke when patients may be observed in the hospital. This allows human observation of social interactions via video-streaming for ground truth determination. There are many implications. Unlike previously tested sensors,19 20 SocialBit requires only the patient to wear the device and not conversation partners. The algorithm maintains privacy of conversation content. SocialBit could run on widely accessible commercial devices. The application can be downloaded onto other WearOS devices, and in the future, it may be possible to download onto other platforms such as the Apple Watch, aligning with the vision of such wearables to become health monitoring tools. Lastly, SocialBit can lead to interventions to improve stroke recovery. With accurate social interaction data, SocialBit can provide real-time feedback and coaching to patients, family and clinicians.

Methods

SocialBit algorithm

The SocialBit algorithm detects social interactions through classifying acoustic features. Specifically, it analyses the temporal change of vocal acoustic behaviour, such as pitch and intensity. It promotes privacy because it does not rely on lexical information or natural language processing (i.e., what words were spoken). Rather, it uses a ‘sound signature’ machine learning method in which features of sound are extracted, analysed and classified as social interaction or not social interaction. The result is quantification of the number of social interaction minutes per day.

In figure 1, we depict the SocialBit algorithm machine learning approach. First, the algorithm converts the audio data into log mel spectrogram representations and passes them through a pretrained neural network called YAMNet,21 which is capable of classifying over 500 audio events from the public AudioSet data set.22 YAMNet produces feature vectors for each 0.96 s segment of audio, which are then combined and fed through a transformer network23 to determine whether the sequence contains a social interaction. Additionally, SocialBit collects voiceprint features unique to each patient and passes them to ECAPA,24 a neural network that helps identify when the primary device wearer is speaking. The speech sample used consists of five short sentences from the National Institutes of Health Stroke Scale (NIHSS).25 In summary, YAMNet discriminates general acoustic features of human voices versus other sounds, and ECAPA identifies when the primary device wearer is talking.
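
To make this pipeline concrete, the following is a minimal sketch in Python using the publicly available YAMNet model on TensorFlow Hub and a toy transformer-encoder head in Keras. The head's architecture and hyperparameters, the helper names and the assumption of 16 kHz mono audio are illustrative only; they are not the study's actual implementation.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained YAMNet: called on a 16 kHz mono float32 waveform, it returns
# (class scores, 1024-d embeddings, log mel spectrogram), with one embedding
# row per ~0.96 s audio frame.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")


def embed_interval(waveform_16k: tf.Tensor) -> tf.Tensor:
    """Convert one audio interval into a sequence of YAMNet embeddings."""
    _scores, embeddings, _log_mel = yamnet(waveform_16k)
    return embeddings  # shape: [num_frames, 1024]


def build_interaction_classifier(d_model: int = 1024) -> tf.keras.Model:
    """Toy transformer-encoder head mapping an embedding sequence to
    P(social interaction) for the interval; illustrative only."""
    inputs = tf.keras.Input(shape=(None, d_model))
    attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=64)(inputs, inputs)
    x = tf.keras.layers.LayerNormalization()(inputs + attn)
    ff = tf.keras.layers.Dense(256, activation="relu")(x)
    x = tf.keras.layers.LayerNormalization()(x + tf.keras.layers.Dense(d_model)(ff))
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```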

Figure 1

Machine learning algorithm that detects the probability of a social interaction.

Study design

This is a prospective, observational study of 200 patients with stroke. The data collection includes up to eight inpatient observation days per participant, and a 3-month follow-up assessment for study completion. We observe participants at Brigham and Women’s Hospital until they are discharged or for a maximum of 5 days. We collect up to 3 additional days of data if a patient transitions to Spaulding Rehabilitation Hospital Boston. At Brigham and Women’s Hospital, participants usually stay in shared rooms with one roommate. The focus is on acute care with modest inpatient rehabilitation services. At Spaulding Rehabilitation Hospital Boston, participants stay in private rooms and receive at least 3 hours of intensive therapy (physical therapy, occupational therapy and speech therapy). We enrol caregivers for patients who cannot engage in surveys themselves, or when caregivers are otherwise available to provide auxiliary data. We collect data at two time points: (1) In hospital, when research staff collect survey data and ground truth data, and (2) At the 3-month follow-up clinic appointment, when research staff collect additional survey data. Figure 2 shows the timeline of the study. This study was approved by the Institutional Review Board of Mass General Brigham (Protocol #2020P003739).

Figure 2

Timeline for the SocialBit study spanning 3 months.

This project has three primary aims: (1) Establish the accuracy of the SocialBit application for use in research with stroke survivors. (2) Determine the association between social interaction times and social isolation and stroke outcomes at 3 months (e.g., physical function, mood, disability). (3) Examine the influence of medical factors (e.g., depression, delirium and stroke severity) on social interaction time. The central hypothesis is that the SocialBit algorithm accurately detects social interactions of stroke survivors in hospital.

Recruitment

One goal of this project is to create social sensing technology universally accessible and inclusive for people with disabilities. Therefore, we focus on people with a range of neurological deficits after stroke as an integral part of the development process. Beginning on 15 June 2021, and continuing to early 2025, we are recruiting patients with acute ischaemic stroke at Brigham and Women’s Hospital in Boston, Massachusetts, USA. This urban setting provides a racially and economically diverse sample population for the study. We aim to recruit patients with a variety of neurological deficits including aphasia, dysarthria, cognitive changes and paralysis.

The following inclusion criteria apply: (1) Diagnosed with an acute ischaemic stroke defined clinically with support from imaging and (2) 18 years old or older. Exclusion criteria include the following: (1) On Comfort Measures Only (a patient end-of-life care plan that focuses on pain relief and quality of life), (2) Diagnosed with dementia prior to stroke in the medical record, (3) Unable to obtain informed consent from the patient or patient decision maker, and (4) Patient or patient decision maker is unable to understand or speak English well enough to complete surveys.

We screen patients for eligibility daily via the Brigham and Women’s Hospital inpatient neurology lists on Epic Systems. The research staff then request permission from the nursing staff to approach patients who meet formal inclusion and exclusion criteria for the study. The nurse presents the study to the patient and asks if the patient is willing to discuss the study further with the research team. After acquiring appropriate permission through the nursing staff, research staff approach qualifying patients and/or their family members to further explain the study.

We obtain signed informed consent from all patients who are consentable, meaning they do not have aphasia or confusion. For the patients who do have aphasia, we collect signed informed consent from caregivers who are willing to participate and answer survey questions on behalf of the patient. Patients who agree to participate complete about 1 hour of surveys with the research staff, including an NIHSS, a Montreal Cognitive Assessment (MoCA), and additional surveys about their social network, their perceived loneliness and depression, their subjective physical function, their life satisfaction, and their personality. We ask patients these same survey questions when they return for their follow-up appointment with the study’s principal investigator, AD, at ~90 days. At this visit, the patient completes his/her participation in the study and is compensated for his/her time.

After the patient completes the initial surveys, we ask the patient to wear a SocialBit-equipped Samsung Galaxy Watch5 Pro from 9:00 to 17:00 until he/she is discharged or for a maximum of 5 days during their inpatient stay at Brigham and Women’s Hospital. Additionally, if the patient is discharged to Spaulding Rehabilitation Hospital in Boston, we collect data in that setting for up to 3 days. On average, we aim to collect 2–3 days (16–24 hours) of data per patient (i.e., around 8 hours of expected monitoring per day, a constraint imposed by both the ground-truth data collection and watch battery considerations).

To identify the primary speaker, the patient creates a voice profile on the SocialBit application by reading five short sentences from the NIHSS. We ask the patient to read the five sentences aloud for 30 s. For patients who cannot read or speak, no voice profile is created. Following voice profile creation, SocialBit runs in the background, passively detecting interactions without any user interface. If the patient takes a shower, leaves for imaging or performs another task that may interfere with the functioning of the smartwatch, we instruct the patient and/or nursing staff to take the watch off beforehand and put it back on upon return.
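
As an illustration of the voice-profile step, the sketch below enrols a speaker embedding from the reading sample and scores later ambient segments against it. It assumes the publicly available SpeechBrain ECAPA model (speechbrain/spkrec-ecapa-voxceleb) as a stand-in for the ECAPA network cited above; the file names are hypothetical, and clipping the cosine similarity to 0–1 is an illustrative choice, not the study's specification.

```python
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Pretrained ECAPA speaker-embedding model (public SpeechBrain checkpoint).
ecapa = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")


def embed(wav_path: str) -> torch.Tensor:
    """Return an L2-normalised speaker embedding for one 16 kHz mono file."""
    signal, _sr = torchaudio.load(wav_path)
    emb = ecapa.encode_batch(signal).squeeze()
    return torch.nn.functional.normalize(emb, dim=0)


# Enrolment: ~30 s of the wearer reading the five NIHSS sentences
# (hypothetical file name).
profile = embed("enrolment_nihss_sentences.wav")


def wearer_similarity(segment_path: str) -> float:
    """Cosine similarity between the wearer's voice profile and an ambient
    segment, clipped to 0-1; only this scalar would be retained."""
    segment = embed(segment_path)
    return float(torch.clamp(torch.dot(profile, segment), 0.0, 1.0))
```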

Study staff set up an iPad with a Zoom livestream in the patient’s room and put signage in the room to alert other people of the livestream video and provide people with a contact number in case they have any questions or concerns. A Health Insurance Portability and Accountability Act (HIPAA) certified research staff member is on the other side of the livestream manually coding, with a temporal resolution of 1 min, all social interactions that the patient has throughout the day. If there are any moments throughout the day that the patient does not want their conversation to be overheard, we encourage the patient and/or nursing staff to cover up the iPad, signalling to the research team to pause data collection and stop listening.

Exposure and outcome measurements

To address the three aims of this study, we collect a variety of measures about patients’ social interactions, social connectedness, stroke severity, physical function and cognitive function. This section outlines all the measures we collect with patients and their caregivers during both the hospitalisation period and the 3-month follow-up appointment.

Ground truth coding system

To accomplish the primary aim of this study, establishing the accuracy of the SocialBit algorithm in detecting social interaction, the SocialBit algorithm and trained research staff independently collect social interaction data. Table 1 shows the ground truth data coded by research staff, including the questions answered each minute. Table 2 lists the guidelines for making coding judgements for each question.

Table 1

Ground truth data collection recorded every minute from 9:00 to 17:00

Table 2

Guidelines for making coding judgements for each question of the ground truth table

To ensure study staff are taking a standardised approach to coding the ground truth table, we provide training to all new study staff. First, current staff teach new study staff how to code the ground truth table and new staff observe the coding process for at least two full days. Then, new study staff watch two 15-minute-long sample videos and complete sample ground truth tables for each scenario. The principal investigator and team created these two sample videos along with a standardised answer key. The new study staff’s ground truth coding should match the key, except for coding ±1 for tone and depth due to the subjectivity of these variables. If new study staff members do not accurately fill out the practice ground truth tables, they must complete further practice and training before coding for an actual patient. Study staff also work together on an ongoing basis to ensure that they answer the ground truth questions consistently. The study lead coordinator re-trains auxiliary study staff every 6 months.
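
As a sketch of the trainee-versus-key comparison, the function below checks one minute of practice coding against the answer key, allowing the ±1 tolerance on tone and depth described above. The field names are placeholders, since tables 1 and 2 define the actual coding scheme.

```python
# Ordinal fields where a +/-1 difference from the key is acceptable.
TOLERANT_FIELDS = {"tone", "depth"}


def minute_matches_key(trainee_row: dict, key_row: dict) -> bool:
    """Return True if one minute of trainee coding agrees with the answer key,
    allowing +/-1 only on the subjective ordinal fields."""
    for field, key_value in key_row.items():
        value = trainee_row[field]
        if field in TOLERANT_FIELDS:
            if abs(value - key_value) > 1:
                return False
        elif value != key_value:
            return False
    return True
```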

Subjective social connectedness

To address the second aim of determining the association between social isolation and stroke outcomes, we perform a series of self-report surveys (table 3) with the participants (or caregivers) to get their subjective interpretation of their social networks and social connectedness. We use the Personal Network Survey for Clinical Research, developed by Dhand et al,26 to assess the social connectedness of patients with stroke. We also administer the 20-item UCLA Loneliness Scale27 and 20-item Center for Epidemiological Studies Depression Scale28 to assess patients’ subjective feelings of loneliness and depression. These surveys are conducted at patient enrolment as well as at their 3-month follow-up visit, to evaluate trends in stroke survivors’ social connectedness during recovery.

Table 3

List of all measures collected in the SocialBit Study, including measures for exposures, outcomes and covariates

Subjective physical function

To understand patients’ stroke outcomes, we use the simplified Modified Rankin Scale29 and National Institutes of Health Patient-Reported Outcome Measurement Information System (PROMIS) Physical Function Surveys through self-report with patients and/or their caregivers (table 3). These measures are widely used and consensually accepted measures of functional ability in patients with stroke.30–32 The Modified Rankin Scale is a 6-point disability scale using dichotomous questions29 and the PROMIS Physical Function is a computer adaptive test that selects items from a large PROMIS bank.33 We conduct these surveys at patient enrolment as well as at their 3-month follow-up visit. Data from enrolment is compared with data at the 3-month follow-up to assess the patient’s physical improvements over the 3-month period.

Stroke severity and cognitive function

To address the third aim of determining the impact of medical factors on social interaction time, we assess stroke severity through the broadly used NIHSS34 and cognitive abilities through the reliable and validated MoCA (table 3).35 36 The NIHSS is a 15-item scale used to assess the physical and cognitive effects of an acute stroke.25 The MoCA is a short screening tool for cognitive impairment.35 We compare NIHSS and MoCA scores at enrolment and at the 3-month follow-up to determine a patient’s physical and cognitive improvement over the first 3 months of recovery. We also record patients’ delirium status using the Confusion Assessment Method,37 an instrument used in clinical settings to detect delirium, as documented in patients’ charts by nursing staff during the inpatient stay.

Caregiver burden

At the 3-month follow-up, we ask caregivers who agree to participate in the study a series of questions regarding the burden they have felt since taking care of their loved one after their stroke. The Caregiver Burden Scale (table 3) used in this study is a short-form 6-item scale adapted from the Zarit Burden Interview. Previous studies have validated the reliability and usefulness of the shortened scale in detecting caregiver burden.38 39 These data help put into perspective how illness and disease affect not only the patient but also their social network.

Covariates

We also collect data on explanatory variables that may impact health outcomes (table 3). In this category, we collect sociodemographic information, including age, sex, ethnicity, race, education level, household income, employment status and marital status. We also collect a patient’s personality index by using the validated 15-item Big Five Inventory-2 Extra-Short (BFI-2-XS) Survey.40 Caregivers who agree to participate answer this personality survey twice, once regarding the patient’s personality and once regarding their own personality. This provides the research team with a better understanding of the personality traits of both the patient and their caregiver. We collect information on comorbidities and stroke characteristics, which can impact stroke severity and recovery time, from the patient’s chart.

Analysis plan

The analysis focuses first on determining the accuracy of the SocialBit algorithm in detecting social interactions between the primary device wearer and others. Second, we determine the correlation between the amount of social interaction and stroke outcomes, as well as the association of medical factors with the amount of social interaction.

Quantitative data analysis

The overall analysis plan is to evaluate accuracy (% of correct classification), specificity, sensitivity, positive predictive value (PPV) and negative predictive value (NPV) of the social interaction times measured by the SocialBit algorithm versus the ground truth. For the machine learning, fivefold cross validation is used with 40 patients per fold. The unit of analysis is social interaction or no social interaction in every 1 min interval for 8 hours a day. If social interaction occurs partially within an interval, then the entire 1 min interval is marked as social interaction. To be consistent with the ground truth, the automated algorithm also processes data at the 1 min interval. However, the algorithm analyses only 1 min out of every 5 or 6 min, depending on battery capacity. We minimise overfitting to the training data by optimising hyperparameters (e.g., regularisation) and by stopping training when performance on the validation set peaks. To increase the diversity of the training corpus, we add audio extracts of voices with different pitch, tone and volume from public data sets. We also train using examples of TV shows and ambient healthcare setting noise from recordings in empty hospital rooms.
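
The sketch below illustrates two details of this plan: the rule that any interaction within a minute marks the whole minute, and a patient-grouped fivefold split so that no patient contributes to both training and test folds. The variable names are placeholders, and the fit_and_score callback stands in for model training and evaluation.

```python
import numpy as np
from sklearn.model_selection import GroupKFold


def label_minutes(interaction_seconds: np.ndarray) -> np.ndarray:
    """Collapse per-second interaction flags (0/1) into per-minute labels:
    a minute counts as social interaction if any of its 60 s does."""
    n_minutes = len(interaction_seconds) // 60
    return interaction_seconds[: n_minutes * 60].reshape(n_minutes, 60).max(axis=1)


def crossvalidate(X, y, patient_ids, fit_and_score):
    """Fivefold cross-validation grouped by patient (~40 patients per fold)."""
    gkf = GroupKFold(n_splits=5)
    fold_scores = []
    for train_idx, test_idx in gkf.split(X, y, groups=patient_ids):
        fold_scores.append(fit_and_score(X[train_idx], y[train_idx],
                                         X[test_idx], y[test_idx]))
    return float(np.mean(fold_scores)), fold_scores
```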

We are conducting all regression and longitudinal statistical analyses in R with biostatistical consultation at Harvard Medical School. For the primary outcome, we are assessing baseline characteristics and any differences between patients. We are also performing checks for outlying values. For diagnostic accuracy determination, we are constructing a 2-by-2 table, and then determining overall accuracy, sensitivity, specificity, PPV, NPV and diagnostic OR. For the secondary outcomes, we are using multivariate linear regression to determine the association between social interactions and stroke outcomes and the influence of medical factors on social interaction times. All analyses are accounting for gender and socioeconomic status variables at baseline including education, income and occupation. We aim to include a diverse set of participants including both men and women and a mix of race and ethnicity. We are completing stratified and interaction analyses by these variables to assess whether patterns are seen within and across these categories.
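
The statistical analyses themselves are run in R; purely to make the diagnostic quantities explicit, the Python sketch below derives them from per-minute 0/1 labels (ground truth versus algorithm) and assumes all four cells of the 2-by-2 table are non-zero.

```python
import numpy as np


def diagnostic_summary(truth: np.ndarray, pred: np.ndarray) -> dict:
    """2-by-2 table summary from per-minute labels (1 = interaction)."""
    tp = int(np.sum((pred == 1) & (truth == 1)))
    fp = int(np.sum((pred == 1) & (truth == 0)))
    fn = int(np.sum((pred == 0) & (truth == 1)))
    tn = int(np.sum((pred == 0) & (truth == 0)))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "diagnostic_or": (tp * tn) / (fp * fn),
    }
```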

Power analysis

In machine learning projects, high-quality and diverse data with positive and negative examples are needed for successful model building. Therefore, contrary to traditional power analysis, sample size determination is based on iterative performance metrics during training. In our project, we plan for 200 patients × 24 hours × 60 min/hour divided by 6 (to conserve battery life), which equals 48 000 samples. The ratio of social interaction to non-social interaction samples is unknown, but even at extreme levels (9 to 1), this results in 4800 samples in the minority class. Based on the literature, this sample size is comparable to gold standard studies such as AudioSet (most classes contain fewer than 10 000 samples)22 and ImageNet (~1000 average samples per class, ~3000 for the mode).41 Furthermore, deep learning algorithms have achieved good accuracy for each of these data sets. For example, a deep learning algorithm like ours achieved an Area Under the Curve (AUC) level of 0.96 for AudioSet.42 Therefore, our strong preliminary data and literature-based estimates justify our sample size determination. For future use of these data, we also plan to analyse the data set size needed to reach reliable classification. We will compute a learning curve that measures the trade-off between the size of the training set and the classification accuracy.43 This would allow evaluation of the feasibility and scalability of the algorithm.
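
The sample-size arithmetic above can be reproduced directly as a check; the figures come from this paragraph (200 patients, 24 monitored hours each, 1 min analysed out of every 6, and a worst-case 9 to 1 class imbalance).

```python
patients, hours, minutes_per_hour, duty_cycle = 200, 24, 60, 6

total_samples = patients * hours * minutes_per_hour // duty_cycle  # 48 000 one-minute samples
minority_class_at_9_to_1 = total_samples // 10                     # 4 800 samples

print(total_samples, minority_class_at_9_to_1)
```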

Data management

One of the audio features collected through the SocialBit application is the patient reading a passage. This is considered a voice print, or a visual record of speech, which is Protected Health Information (PHI) as defined by HIPAA.44 We treat such data in the same way as other PHI in terms of privacy and security during collection and transfer. In general, all audio features are encrypted and only temporarily stored on the watch. These encrypted audio features are uploaded to Amazon Web Services (AWS), which is a HIPAA-compliant platform. To respect two-party consent laws and to avoid storing PHI from anyone other than the patient, we store only the similarity score (0–1) between the primary device wearer’s voice print and subsequent audio recordings. Only authorised study staff and our collaborators at the University of North Carolina at Charlotte have access to the data on AWS.

We collect and store ground truth data and survey data in a secure online database, Research Electronic Data Capture (REDCap).45 46 REDCap is a commonly used platform in clinical research with HIPAA-compliant data security. Only authorised study staff access the REDCap database.

Patient and public involvement

Patients and the public were not involved in the design of this study. Study participants can request information on the results on study completion.

Discussion and expected impact

SocialBit is a machine learning algorithmic framework designed to detect and quantify social interactions, and, as an inverse measure, social isolation. The goal of SocialBit is to be an objective, valid and easy-to-interpret metric of social interactions for individuals with diverse abilities. The current study aims to validate the accuracy of SocialBit in detecting social interactions in a sample of stroke survivors with a variety of speech, cognitive and physical impairments. This study takes advantage of the unique opportunity to directly observe patients for multiple days in a hospital setting.

The implications of this technology are manifold. SocialBit could be a useful tool for social sensing in individuals with diverse abilities, as well as for detecting social isolation in vulnerable individuals with high accuracy. The algorithm could be used as a basis for social therapeutics in which social isolation is detected and acted on quickly, leading to improved health outcomes. These ideas connect with trends in the social sensing literature including the importance of social sensing for disease surveillance, health behaviour monitoring and intervention design.47 These measurement possibilities also answer the call for greater understanding of the effects of social isolation and loneliness on public health.48

There are some limitations to the current study. First, the hospital environment in which the study is conducted is not entirely natural, and interaction frequency, types of persons and types of interactions may differ from what an individual experiences in their day-to-day life. Additionally, the hospital environment is often noisy with monitors and televisions, which may interfere with the algorithm’s ability to detect social interactions. Lastly, due to HIPAA concerns, the algorithm does not store raw audio, which limits the ability to review misclassified cases in detail.

In conclusion, the validation of SocialBit in stroke survivors represents an important step forward in the development of an objective, valid and easy-to-interpret metric of social interactions. The ability to detect social isolation with high accuracy and sensitivity could lead to improved health outcomes for vulnerable individuals. Further studies are needed to determine the utility of SocialBit in other populations and settings, and to determine how best to use SocialBit in social therapeutics to improve health outcomes.

Ethics and dissemination

We received ethical approval from the Institutional Review Board at Mass General Brigham (Protocol #2020P003739). We obtain written informed consent from patients and their caregivers, and we reimburse patients who complete the study with a $100 cheque for their time. We offer a $50 cheque to patients who do not complete the entire study.

Once this study is complete, results will be submitted for publication in a peer-reviewed journal and data will be available from the corresponding author on reasonable request.

Ethics statements

Patient consent for publication

Acknowledgments

The authors thank all past, current and future participants in the SocialBit Study for their contributions towards creating a metric of social interactions to help detect social isolation.

References


Footnotes

  • Contributors AD, MRM and MS are responsible for the conceptualisation and design of the study. KW wrote the manuscript. AD, ST, RZ, SN, MRM and MS made substantial contributions to the manuscript. KW, ST, RZ, SN, MRM, MS and AD reviewed and edited the manuscript.

  • Funding This work is supported by the National Institutes of Health (grant number R01HD099176).

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.