
Derivation and validation of a chief complaint shortlist for unscheduled acute and emergency care in Uganda
  1. Brian Travis Rice1,2,
  2. Mark Bisanzo3,
  3. Samuel Maling4,
  4. Ryan Joseph5,
  5. Hani Mowafi6
  6. Global Emergency Care Investigators Group (Study Group)
    1. Emergency Medicine, New York University Langone Medical Center, New York City, New York, USA
    2. Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
    3. Division of Emergency Medicine, Department of Surgery, University of Vermont, Burlington, Vermont, USA
    4. Psychiatry, Mbarara University of Science and Technology, Mbarara, Uganda
    5. Emergency Medicine, Texas A&M, Corpus Christi, Texas, USA
    6. Emergency Medicine, Yale University, New Haven, Connecticut, USA

    Correspondence to Dr Brian Travis Rice; brice{at}stanford.edu

    Abstract

    Objectives Derive and validate a shortlist of chief complaints to describe unscheduled acute and emergency care in Uganda.

    Setting A single, private, not-for-profit hospital in rural, southwestern Uganda.

    Participants From 2009 to 2015, 26 996 patient visits produced 42 566 total chief complaints for the derivation dataset, and from 2015 to 2017, 10 068 visits produced 20 165 total chief complaints for the validation dataset.

    Methods A retrospective review of an emergency centre quality assurance database was performed. Data were abstracted, cleaned and refined using language processing in Stata to produce a longlist of chief complaints, which was collapsed via a consensus process to produce a shortlist and turned into a web-based tool. This tool was used by two local Ugandan emergency care practitioners to categorise complaints from a second longlist produced from a separate validation dataset from the same study site. Their agreement on grouping was analysed using Cohen’s kappa to determine inter-rater reliability. The chief complaints describing 80% of patient visits from automated and consensus shortlists were combined to form a candidate chief complaint shortlist.

    Results Automated data cleaning and refining recognised 95.8% of all complaints and produced a longlist of 555 chief complaints. The consensus process yielded a shortlist of 83 grouped chief complaints. The second validation dataset was reduced in Stata to a longlist of 451 complaints. Using the shortlist tool to categorise complaints produced 71.5% agreement, yielding a kappa of 0.70 showing substantial inter-rater reliability. Only one complaint did not fit into the shortlist and required a free-text amendment. The two shortlists were identical for the most common 14 complaints and combined to form a candidate list of 24 complaints that could characterise over 80% of all emergency centre chief complaints.

    Conclusions Shortlists of chief complaints can be generated to improve standardisation of data entry, facilitate research efforts and be employed for paper chart usage.

    • epidemiology
    • public health

    This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


    Strengths and limitations of this study

    • Largest dataset of emergency care chief complaints in a low-income or middle-income country in the literature to date.

    • First attempt to produce an emergency care chief complaint list with a data-driven approach.

    • A candidate chief complaint list appropriate for retrospective analysis and one for prospective paper chart-based data entry are both generated.

    • Data were derived and validated in a district-level hospital in Uganda; further validation is needed for broader application.

    • Provides a crucial step forward in standardising data collection and research capacity for emergency care in low-resource settings.

    Introduction 

    Emergency care provided in low-income and middle-income countries (LMICs) remains poorly characterised for multiple reasons. Research to better understand emergency care needs in these settings is complicated by a frequent lack of discrete, standardised emergency care departments within health centres from which to collect such data; the conflation of emergency encounter data with either outpatient or inpatient data; and varying levels of training for those entering data.1 When captured, data on emergency care encounters are typically entered in free text without using a standard lexicon onto paper charts, making it difficult to abstract data for research, quality assurance or development efforts.

    Despite the difficulty in characterising emergency care, such care is nonetheless provided daily to millions of patients in LMICs at various points of entry to the health system that we will term ‘emergency units’. Accordingly, there is an imperative to identify methods that capture emergency care data in a standard format that is useful to clinicians, researchers and policymakers seeking to improve emergency care in LMICs.

    One proposed method to organise LMIC emergency care data is to develop a standardised list of chief complaints. The chief complaint serves as the entry point into diagnostic and therapeutic evaluation and is a critical step to perform a range of tasks from triage to developing differential diagnoses. Moreover, chief complaints can be captured at the moment of a patient’s presentation and provide a fundamentally different source of information about the patient (initial status, subjective experience and undifferentiated severity of illness) than do final diagnoses that may not be available at the time of the emergency care encounter.2

    Regardless of how such a system is ultimately implemented, consensus has developed that establishing a list of chief complaints to adequately and accurately characterise a large percentage of emergency encounters is a needed next step in developing emergency care and research in LMICs.3 Chief complaints have been standardised in high-income countries through the use of encoding algorithms and medical ontologies such as the Health Level 7,4 Systematized Nomenclature of Medicine5 and Unified Medical Language System initiatives,6 but their applicability is unclear as they have been generated from ‘top-down’ expert consensus processes that may not be relevant to LMIC settings.

    To date, no comprehensive effort has been published that describes such a list of chief complaints native to an LMIC. In this study, emergency chief complaints from a low-resource setting in rural Uganda were analysed, using a data-driven approach, to generate candidate chief complaint shortlists tailored for use in retrospective data analysis and for prospective entry of emergency care data in the paper charts typically used throughout Uganda and LMICs.

    Methods

    The chief complaint data were collected from a quality assurance database established by Global Emergency Care (GEC), a US and Uganda-based not-for-profit organisation providing emergency care training in Uganda. The emergency unit at Karoli Lwanga ‘Nyakibale’ Hospital is set in Uganda’s rural Rukungiri district. This emergency unit sees medical and surgical emergencies—with maternal emergencies typically being triaged to a separate labour and delivery ward—and is staffed by non-physician clinicians (NPCs) trained in emergency care by GEC via a 2-year curriculum. The six-bed emergency unit sees approximately 500 patients per month with an admission rate of slightly over 60% and a 3-day mortality rate of almost 4% for admitted patients. The setting, resource availability and outcomes of this programme are described in depth in previous publications.7–9

    A robust database was designed to capture demographics, details of emergency visits, disposition and 3-day outcomes for discharged and admitted patients. The derivation dataset was collected from November 2009 through the end of February 2015. The validation dataset was collected from March 2015 through February 2017. Chief complaints were written in English into a paper chart by nursing students who spoke both Ugandan English and Runyankole, the local dialect. Trained Ugandan research assistants working in the emergency unit entered this chart as free-text data into an electronic database. From 2009 to 2012, data were entered into Microsoft Excel, and from 2012 to 2017, they were entered into Microsoft Access. No limit was placed on the number or length of chief complaints.

    Derivation

    Data cleaning

    Cleaning and analysis of raw free text was done with Stata Statistical Software V.13 by a single unblinded researcher. Initial data cleaning was done with handwritten natural language processing rules in Stata.10 All free-text data had capitalisation and blank spaces removed to generate the initial subset. Emergency unit protocols previously encouraged research assistants to enter multiple complaints as separate entries, but many free-text entries contained compound chief complaints connected by alphanumeric character(s). Therefore, each entry was scanned for those characters and compound entries containing multiple complaints were split into distinct complaints in Stata. Usage of American and British English was standardised.
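The handwritten Stata rules themselves are not reproduced in this manuscript. As a rough illustration only, the cleaning steps (case normalisation, splitting compound entries into distinct complaints, standardising American and British spellings) might be sketched in Python as follows; the separator pattern and the spelling map are assumptions, not the study's actual rules:

```python
import re

# Illustrative spelling map (assumed); the study's full rule set was larger.
US_TO_UK = {"diarrhea": "diarrhoea", "anemia": "anaemia", "edema": "oedema"}

def clean_entry(raw: str) -> list[str]:
    """Split one free-text field into cleaned, standardised complaints."""
    text = raw.lower().strip()
    # Split compound entries on common connectors (assumed separators).
    parts = re.split(r"\s*(?:\band\b|[,;/+&])\s*", text)
    cleaned = []
    for part in parts:
        part = re.sub(r"\s+", " ", part).strip()  # normalise blank spaces
        if not part:
            continue
        for us, uk in US_TO_UK.items():  # standardise to British spelling
            part = part.replace(us, uk)
        cleaned.append(part)
    return cleaned

print(clean_entry("Fever and diarrhea, vomiting"))  # → ['fever', 'diarrhoea', 'vomiting']
```

A sketch like this illustrates why one raw entry can resolve to several discrete complaints, as reflected in the visit-to-complaint counts reported below.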

    Data refining

    Once these data were cleaned, they were further refined through a series of steps. First, all mention of duration was removed (stage 1). Second, description of body parts and body locations were standardised (stage 2). Third, spelling errors were corrected, and abbreviations were standardised with handwritten natural language processing rules in Stata (stage 3). All questions about abbreviations and local idioms were discussed with providers who had been working in the emergency unit at the study site for more than 5 years. Fourth, all statements of left-sidedness or right-sidedness were removed (stage 4) to produce the chief complaint ‘longlist’.
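The four refining stages can likewise be illustrated with a minimal Python sketch; every mapping and pattern below is an assumed stand-in for the study's handwritten Stata rules:

```python
import re

# Illustrative mappings (assumptions, not the study's actual rule set).
BODY_PARTS = {"tummy": "abdomen", "belly": "abdomen"}      # stage 2
ABBREVIATIONS = {"sob": "shortness of breath", "ha": "headache"}  # stage 3

def refine(complaint: str) -> str:
    c = complaint.lower()
    # Stage 1: remove mention of duration (eg 'x 3 days', 'for 2 weeks').
    c = re.sub(r"\b(?:x|for)\s*\d+\s*(?:hours?|days?|weeks?|months?)\b", "", c)
    # Stage 2: standardise body-part descriptions.
    for raw, std in BODY_PARTS.items():
        c = re.sub(rf"\b{raw}\b", std, c)
    # Stage 3: expand abbreviations / correct spelling.
    for abbr, full in ABBREVIATIONS.items():
        c = re.sub(rf"\b{abbr}\b", full, c)
    # Stage 4: drop statements of left- or right-sidedness.
    c = re.sub(r"\b(?:left|right)(?:-sided)?\b", "", c)
    return re.sub(r"\s+", " ", c).strip()

print(refine("Left belly pain x 3 days"))  # → 'abdomen pain'
```

Each stage collapses variants onto a common form, which is what shrinks the unique-complaint counts reported in the Results.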

    Consensus process for data grouping

    Once the derivation set longlist was produced in Stata, the next steps were to group these chief complaints to produce a ‘shortlist’. This was done via a consensus process that involved two independent, unblinded, US-based, board-certified emergency medicine physician reviewers with substantial clinical experience in LMICs generally and Uganda specifically (BTR: LMIC since 2007 and Uganda since 2012; HM: LMIC since 2006 and Uganda since 2014). Each reviewer individually reviewed the longlist and either kept each complaint or grouped it to a broader category to produce two candidate shortlists that were then compared. When both reviewers agreed, the grouped complaint was added to the final derivation shortlist. In all cases of disagreement, the reviewers were able to reach consensus by discussion. A third reviewer was available in cases of intractable disagreement.

    The discussions in this consensus process initially focused on grouping complaints that differed only in subtle anatomic descriptions (eg, ‘HEADACHE - FRONTAL’ and ‘HEADACHE – OCCIPITAL’ were grouped as ‘HEADACHE’ and ‘PAIN – EPIGASTRIC’ and ‘LOWER ABDOMINAL PAIN’ were grouped as ‘ABDOMINAL PAIN’). Emphasis was placed on making groups that considered the resources required for diagnosis and treatment (eg, ‘LACERATION – LEG’ and ‘LACERATION – ARM’ were grouped into ‘LACERATION’, but ‘LACERATION – SCALP’ was grouped with ‘HEAD INJURY’ because of the substantial differences in injury severity, evaluation and treatment between these complaints).

    Care was taken to include only complaints in the shortlist; thus, diagnoses entered as complaints, such as ‘asthma’, were reclassified as ‘shortness of breath’. Mechanisms of injury, however, were deliberately kept as they reflected the context and, often, the severity of illness of patients presenting for care and reflected what clinicians felt was most germane to patient care.

    Disagreement about how to describe body locations was resolved by using body regions instead of very specific or very general locations (eg, ‘PAIN AND/OR SWELLING – HAND’ and ‘PAIN AND/OR SWELLING – ARM’ became ‘PAIN AND/OR SWELLING – UPPER EXTREMITY’ and ‘PAIN AND/OR SWELLING – FOOT’ and ‘PAIN AND/OR SWELLING – LEG’ became ‘PAIN AND/OR SWELLING – LOWER EXTREMITY’).

    The final focus for discussion centred on the relative benefits of keeping a longer list of complaints to produce greater data resolution (eg, ‘ULCER – ORAL’ and ‘TONGUE MASS’ and ‘PAIN – TOOTH’) versus the benefits of having a more concise list (eg, ‘DENTAL/ORAL PROBLEM’). In most cases of disagreement, the authors deferred to a more concise list.

    Once the final derivation shortlist was produced, the authors entered it into the electronic survey tool Qualtrics (Qualtrics, Provo, Utah, USA) to facilitate prospective use by clinicians. The shortlist was split into traumatic and medical complaints to remain consistent with the structure found in the Kampala Trauma Form (already in use at the hospital for trauma presentations) to produce a final derivation ‘shortlist’ of chief complaints.11 12

    Automation of consensus process

    The logic used in the consensus process described above was reproduced post hoc in Stata via additional language processing. This additional processing was applied to the derivation set longlist to produce an alternative ‘automated shortlist’.

    Validation

    A second set of patient complaints from the same study site was analysed using the Stata programme described above to produce a second longlist of cleaned and refined data. The performance of this cleaning and refining programme was analysed using a χ² test to see whether there was a significant difference in performance between the derivation and validation datasets. The threshold for significance was set at p<0.05.
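Using the recognition counts reported in the Results (40 772 of 42 566 derivation complaints and 19 138 of 20 165 validation complaints), the χ² comparison amounts to a test on a 2×2 table of recognised versus unrecognised complaints. The sketch below is an illustrative Python equivalent (the study used Stata), computing the Pearson statistic directly against the df=1 critical value:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Recognised vs unrecognised complaints in each dataset, from the Results.
stat = chi2_2x2(40772, 42566 - 40772,   # derivation: recognised, unrecognised
                19138, 20165 - 19138)   # validation: recognised, unrecognised
# df = 1; the critical value at p = 0.05 is 3.841
print(f"chi2 = {stat:.1f}, significant: {stat > 3.841}")
```

With cohorts this large, even the sub-1% difference in recognition rates clears the significance threshold, which is the pattern reported in table 4.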

    The refined validation set longlist of chief complaint data was given to two Ugandan NPCs for sorting with the Qualtrics tool. These NPCs had been using the free-text chief complaints for clinical care for several years and were fluent in English and Runyankole. Using the Qualtrics version of the derivation shortlist, the NPCs were asked to categorise every longlist complaint to a corresponding derivation set shortlist complaint. If an appropriate entry could not be found, they were instructed to select ‘OTHER’ and enter a free-text complaint. The results of their categorisation were then compared using Cohen’s kappa to determine inter-rater reliability for the shortlist.13 14 The thresholds for reliability were defined as: 0.01–0.20 as none to slight agreement, 0.21–0.40 as fair agreement, 0.41–0.60 as moderate agreement, 0.61–0.80 as substantial agreement and 0.81–1.00 as almost perfect agreement.14
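Cohen's kappa is straightforward to compute from the two raters' paired category choices: observed agreement corrected for the agreement expected by chance from each rater's marginal frequencies. The following is an illustrative Python implementation with toy data, not the study's code:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' paired category assignments."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed proportion of agreement.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal category frequencies.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_expected = sum(counts1[cat] * counts2[cat] for cat in counts1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example with hypothetical shortlist categories.
r1 = ["FEVER", "HEADACHE", "FEVER", "COUGH", "FEVER", "HEADACHE"]
r2 = ["FEVER", "HEADACHE", "COUGH", "COUGH", "FEVER", "FEVER"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # → kappa = 0.48
```

On the reliability scale used here, a value of 0.48 would count as moderate agreement, whereas the study's 0.70 falls in the substantial band.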

    Patient and public involvement

    The NPC training programme was originally developed in response to several years of clinical emergency medicine experience in Uganda. The positive response of patients, staff and administrators at the pilot site in Nyakibale led to the expansion of the project to Masaka. Patients and the public were not involved in the design of the study though outcome measures are explicitly patient oriented. Results will be disseminated through open access publication.

    Results

    Derivation

    The derivation dataset included 26 996 unique emergency visits with 32 272 free-text chief complaints resolving to 42 566 discrete chief complaints (average: 1.58 complaints per visit). The demographics for these patient visits are listed in table 1 below.

    Table 1

    Demographics

    When the raw data were processed with the Stata cleaning algorithm, 40 772 complaints (95.8% of all complaints) were recognised, yielding 10 110 unique cleaned and capitalised chief complaints. The four stages of data refining described above then reduced the total number of unique chief complaints to 9061 (stage 1), then to 8801 (stage 2), then to 838 (stage 3) and finally to 555 (stage 4) (see figure 1).

    Figure 1

    Data flow for chief complaint analysis.

    Those 555 refined complaints (listed as online supplementary appendix 1) were then grouped via the consensus process to produce a final derivation shortlist of 83 chief complaints (stage 5) detailed below. The candidate derivation shortlist for one reviewer (BTR) contained 104 complaints, and for the other reviewer (HM) it contained 75 complaints. Agreement in the consensus process is shown in table 2 below.

    Supplemental material

    Table 2

    Agreement in consensus process

    Agreement was defined in three cases: both reviewers used exactly the same words; both reviewers used synonyms (‘LACERATION’ vs ‘CUT OR LACERATION’); both reviewers agreed that anatomic descriptors should be omitted for specific complaints. Disagreement was also defined in multiple ways: disagreement about how broad to make complaints (‘SWELLING – LIPS/FACE’ and ‘SWELLING – LOWER EXTREMITY’ vs ‘SWELLING – LOCALIZED’), disagreement about how to describe location for traumatic complaints (‘INJURY – PELVIC’ and ‘INJURY – LOWER EXTREMITY’ vs ‘TRAUMA/INJURY’), disagreement about how to describe location for non-traumatic complaints (‘PAIN – UPPER EXTREMITY’ and ‘PAIN – LOWER EXTREMITY’ vs ‘PAIN – MSK’) and disagreements requiring prolonged discussion (eg, should ‘UNABLE TO TALK’ fall under the category of ‘MOTOR DEFICIT’ or stand alone as ‘APHASIA’; should ‘INTOXICATION – ALCOHOL’ fall under ‘ALTERED MENTAL STATUS’ or stand alone as ‘INTOXICATION WITH ALCOHOL OR DRUG’). In all cases of disagreement, consensus was reached through discussion, and the third reviewer (MB) was never required to break a deadlock.

    This consensus process produced a shortlist of 83 complaints that encompass all 555 cleaned complaints and are compiled in table 3.

    Table 3

    Consensus derivation shortlist of chief complaints

    The processing of the longlist in Stata using the logic from the consensus process yielded an ‘automated shortlist’ of 186 entries, which is reproduced in online supplementary appendix 2.

    Supplemental material

    Validation

    The validation dataset included 10 068 patient visits and 19 531 recorded complaints. Expanding fields that contained multiple complaints yielded 20 165 discrete complaints. The Stata cleaning algorithm recognised 94.9% of the complaints to produce 19 138 cleaned complaints. This level of recognition was very similar between the derivation dataset (95.8% complaint recognition) and the validation dataset (94.9%), but because of the very large cohorts used, this difference of less than 1% reached statistical significance (p<0.001) (table 4).

    Table 4

    Comparison of automated cleaning performance for derivation and validation datasets

    Stata refinement of the 19 138 complaints produced a longlist of 451 complaints (online supplementary appendix 3). The two NPCs grouped this longlist using the Qualtrics tool that replicated the consensus shortlist, and their choices were compared to assess inter-rater reliability. Agreement between the two NPCs was highest for the most common chief complaints. The top 10 most frequent complaints had 90.0% agreement; the top 20 had 75.0% agreement; the top 50 had 64.0% agreement; and the top 100 had 60.0% agreement.

    Supplemental material

    Overall, there was 71.5% agreement for the 19 138 complaints, yielding a kappa of 0.70 (95% CI 0.70 to 0.73), suggesting substantial inter-rater reliability of the shortlist. In only one case out of 451 longlist entries did an NPC feel the need to select ‘OTHER’ and enter free text to describe a chief complaint (‘DROWNING’). Several entries were placed on the shortlist via the author consensus process but were never selected by the NPCs. Neither NPC used ‘SYNCOPE’ or ‘MEDICATION REFILL’. One NPC never selected ‘BLOODY STOOL’, ‘FOREIGN BODY – INGESTED’, ‘VAGINAL BLEEDING’, ‘CAST CHANGE/PROBLEM’ or ‘ELECTRICAL/LIGHTNING INJURY’. The other NPC only failed to select ‘CHANGE IN SKIN COLOR’ in addition to the two shared omissions described above.

    Final candidate shortlist

    The chief complaints required to account for 80% of complaints overall in both the consensus and the automated shortlists were compared side by side (see table 5). The 14 most frequent complaints were identical in both lists, and the remainder were merged to form a final candidate shortlist of 25 chief complaints (the 24 listed plus a free-text ‘OTHER’ field).

    Table 5

    Comparison of 80% inclusive shortlists (in order of decreasing frequency)
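The 80% cut-off amounts to taking complaints in descending order of frequency until their cumulative share of all complaints reaches the target. A minimal Python sketch, with hypothetical frequencies for illustration only:

```python
def shortlist_for_coverage(freqs, coverage=0.80):
    """Smallest frequency-ordered set of complaints covering the target share."""
    total = sum(freqs.values())
    running, chosen = 0, []
    for complaint, count in sorted(freqs.items(), key=lambda kv: -kv[1]):
        chosen.append(complaint)
        running += count
        if running / total >= coverage:
            break
    return chosen

# Hypothetical complaint frequencies (not the study's data).
freqs = {"FEVER": 50, "ABDOMINAL PAIN": 25, "COUGH": 15, "RASH": 6, "OTHER": 4}
print(shortlist_for_coverage(freqs))  # → ['FEVER', 'ABDOMINAL PAIN', 'COUGH']
```

Applied to each of the two shortlists and merged, this kind of cut-off yields the compact candidate list of 24 named complaints described above.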

    As some hospital systems (including this study site) have separate trauma forms, the authors felt that there is value in a prospective shortlist with a specific area for trauma-related injuries (derived from the Ugandan trauma form currently in use at the study site), and this list is provided below as figure 2.

    Figure 2

    Candidate chief complaint shortlist for prospective use.

    Discussion

    Emergency care providers in low-resource settings are caught in a vicious cycle. They frequently work in under-resourced emergency units that are stressed past capacity in terms of acuity and clinical volume. However, little is known about what conditions present for emergency treatment or what occurs in the emergency encounter because few health systems in LMICs systematically capture data from emergency care units. It becomes difficult to argue for additional resources to improve emergency care without data, and in turn, it is difficult to capture data without resources and some system in place to systematically collect and analyse that data. Experts have called for research on emergency chief complaints as a critical step in emergency care development in LMICs.1

    Overall, the goal of this project was to take a pragmatic approach that could result in a solution that:

    • Is easily understood by any emergency care provider (physicians, NPCs, nurses and clinical officers).

    • Maximises speed and minimises error.

    • Does not rely on digital records that are uncommon in these settings.

    • Uses little space on standard clinical documentation forms.

    • Is independent of final diagnoses that are frequently unavailable at the time of the emergency clinical encounter.

    • Can be immediately implemented in most LMIC emergency units no matter their level of resource.

    • Allows for comparisons of emergency care data across facilities.

    This manuscript represents the largest, most comprehensive analysis of chief complaints to be produced to date from an LMIC emergency unit. A search in PubMed yielded minimal research that dealt with emergency chief complaints within LMICs, and none involved rigorous methods or large, representative patient populations. The closest published research was a description of a year of Kenyan emergency visits, which used International Classification of Diseases, 10th Revision codes for entering presenting complaints instead of an LMIC-specific lexicon.15 A Cambodian-based study described emergency presentations in an LMIC but used a pre-existing set of chief complaints.16 The other published manuscripts from sub-Saharan Africa to date that deal even tangentially with chief complaints focus on non-emergency cases,17 small numbers of patients,18 small surveys19 20 or subsets of trauma patients.21 22

    The chief complaint processing and analysis presented in this manuscript is a data-driven process that can likely be applied to data from other LMIC emergency units and can be an appropriate tool for retrospective analysis. Additionally, this analysis informed the creation of a candidate chief complaint shortlist that can realistically further prospective data collection in low-resource settings.

    The Stata algorithm performed well in separating, cleaning and refining the free-text chief complaints, with nearly 95% of free-text strings from both the derivation (95.8%) and the validation (94.9%) datasets being accurately identified and converted to a longlist complaint. The consensus process was able to reduce an unwieldy 555 complaints to 83. The reliability of this grouping schema was supported by the NPCs who used the shortlist tool generating ‘substantial’ inter-rater reliability (kappa=0.70). For both the derivation and the validation processes, the majority of the disagreement occurred with the least common complaints.

    The inter-rater reliability suggests it is a reasonable grouping system according to both international researchers and local providers. This systematic grouping will form the basis for analysis to assess the epidemiology of emergency encounters, to determine what defines a high-risk chief complaint in this setting and to assist with the rational development of emergency care in Uganda. The alternative automated shortlist in online supplementary appendix 2 was included to provide a list that more closely adheres to the language used by patients but which is therefore less compact. This preservation of language may provide researchers with additional information for future investigations.

    While arguments can be made about the methods chosen to encode and validate the chief complaints and produce the candidate shortlist in figure 2, no alternative standard system exists for chief complaint data in LMICs. The struggle to balance data resolution (splitting) with providing complaints that group together similar patients (lumping) is not limited to LMIC settings and has been well described in a high-income setting.23 No scientifically derived number exists to define adequate coverage for a chief complaint list, but a recent consensus process suggested a list would need to describe at least 80% of emergency patient presentations.1 3

    While using a list of 83 (or 186) chief complaints may be useful for electronic retrospective data analysis, the vast majority of emergency care is delivered in settings reliant on patient charts. A pragmatic approach demands a compromise between data resolution and the limitations of a paper chart. Some advocates suggest that mobile technology will enable systems to ‘leapfrog’ forward to a digital collection of all emergency health data. While this may be the future, most emergency units in LMICs do not have that option at this point in time, and patients continue to arrive to these units daily. The final shortlist presented in this manuscript provides a tool for immediate implementation in the existing systems.

    To the authors’ knowledge, the chief complaint lists generated in this manuscript represent the largest, most rigorous and most comprehensive dataset of emergency chief complaints yet published from an LMIC. Next steps for research should focus on external validation both within Uganda and in other LMICs, on comparing the list with those employed in high-income countries and on linking complaint data with patient outcomes to establish high-risk complaints in Uganda. As efforts continue to standardise emergency care data collection in LMICs, improving the quality of chief complaint data can be an important step in improving the quality of emergency care and research in low-resource settings worldwide.

    This emergency chief complaint shortlist—derived in a typical district hospital setting in an LMIC—provides a tool to catalogue, characterise and analyse emergency care in such settings that adequately characterises 80%–90% of encounters. Implementation of such a tool will help plan for training and resource allocation to assess changes in epidemiology of emergency encounters and to provide a normalised basis on which emergency care centres may be compared. This represents an important first step in breaking the cycle of data poverty and beginning a new virtuous cycle where improved understanding of the emergency encounter can generate clinical improvements, new lines of inquiry and further elaboration of pragmatic data collection systems that realistically can be immediately implemented by users in low-resource settings.

    Limitations

    The study database was produced from patient visits at a single site. Recorded complaints from this region were necessarily influenced by the local dialect spoken and are highly culturally and linguistically specific. Reported complaints were also affected by the rural setting and the presence of other healthcare services (eg, more agricultural injuries and poisonings and fewer maternal complaints than may be seen elsewhere). The automated cleaning process was imperfect: 4.2% of complaints were either unintelligible or failed to be recognised by the string filters. This small amount of data is not represented in the analysis.

    The consensus process used was intentionally designed to minimise the influence of existing high-income complaint systems. However, the physicians involved were American Board of Emergency Medicine certified, and their training likely somewhat biased their cognitive schema towards their current practices.

    The validation process used NPCs who both use the handwritten charts and train the nursing students to fill them out. The authors discussed using the nursing students to validate the shortlist in addition to the NPCs, but their level of computer literacy was not adequate for them to meaningfully use the Qualtrics tool.

    This derived list of standardised chief complaints includes a combination of signs, symptoms, events and mechanisms. This is in distinction to more sophisticated systems of classification that clearly delineate these as separate categories of information. When these data were presented at a WHO expert meeting in South Africa in April 2016, members from high-income countries raised this as an objection. However, there was consensus among the attendees from LMICs that such a list is what most accurately reflects the real-world experience of delivering emergency care in their countries, where non-clinicians often perform triage. Moreover, it was noted that in high-income countries, what clinicians encounter as a ‘chief complaint’ is often a ‘triage impression’ that reflects the complaint of the patient after cognitive filtering by a clinician with more training than that of the provider or clerk recording these data in low-resource settings.

    Conclusions

    Emergency care in LMICs remains poorly characterised. Chief complaint data present one target of opportunity for standardising collection of emergency care data to improve quality of and research in emergency care in LMICs. This study presents the largest published analysis of chief complaints from any LMIC and outlines a validated consensus shortlist of chief complaints to retrospectively categorise visits and a simplified shortlist that can be immediately used in low-resource settings. Further work is needed to prospectively validate this list in other environments and to compare it with other locally derived sets of chief complaints to create a final candidate list that is robust across emergency centre types, different languages and cultures.

    Acknowledgments

    The authors wish to thank three emergency care providers—Hilary Kizza, Benifer Niwagaba and Deus Twinomugisha—for participating in the validation process. The authors also wish to thank Caleb Dresser, MD, for his assistance with data cleaning. The authors wish to acknowledge all of the emergency care providers who provided the essential care described above in addition to the programme directors and research assistants who made the data collection possible.

    References

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.
    11.
    12.
    13.
    14.
    15.
    16.
    17.
    18.
    19.
    20.
    21.
    22.
    23.

    Footnotes

    • Contributors BTR developed and designed the study concept, cleaned and interpreted the data, designed and performed the programming and statistical analysis, participated in the consensus process and drafted and revised the manuscript. MB assisted with study design, participated in consensus process and drafted and revised the manuscript. RJ assisted with the acquisition of the data, supervised and administered the validation tool and revised the manuscript. SM provided administrative support, local expertise and revised the draft manuscript. HM developed and designed the study concept, analysed and interpreted the data, participated in the consensus process and drafted and revised the manuscript. All members of Global Emergency Care Investigator Group designed and implemented the training program being studied, assisted with the development of the database, assisted with data collection, assisted with administrative issues related to research and development and revised the manuscript.

    • Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

    • Competing interests None declared.

    • Patient consent Not required.

    • Ethics approval The development and implementation of this database received institutional review board approval from the University of Massachusetts, and local approval was sought and received by hospital administration in conjunction with GEC from Mbarara University of Science and Technology and the Ugandan Council of Science and Technology.

    • Provenance and peer review Not commissioned; externally peer reviewed.

    • Data sharing statement Individual participant data including data dictionaries that underlie the results reported in this article will be made available after deidentification. Statistical analysis plan plus analytic code will also be made available beginning 3 months and ending 5 years after article publication to researchers who provide a methodologically sound proposal to achieve aims approved by the GEC Executive Committee and our local Ugandan partners. Proposals should be directed to brian@globalemergencycare.org and to gain access, data requestors will need to sign a data access agreement.

    • Collaborators Global Emergency Care Investigator Group Members: Stacey Chamberlain; Bradley Dreifuss; Heather Hammerstedt; Mélissa Langevin; Sara Nelson; Usha Periyanayagam.