Abstract
Objectives To develop a consensus statement to provide advice on designing, implementing and evaluating crowdsourcing challenge contests in public health and medical contexts.
Design Modified Delphi using three rounds of survey questionnaires and one consensus workshop.
Setting Uganda for face-to-face consensus activities, global for online survey questionnaires.
Participants A multidisciplinary expert panel was convened at a consensus-development conference in Uganda and included 21 researchers with experience leading challenge contests, five public health sector workers, and nine Ugandan end users. An online survey was sent to 140 corresponding authors of previously published articles that had used crowdsourcing methods.
Results A subgroup of expert panel members developed the initial statement and survey. We received responses from 120 (85.7%) survey participants, which were presented at an in-person workshop attended by all 21 panel members. Panellists discussed each of the sections, revised the statement and participated in a second round of the survey questionnaire. Based on this second survey round, we held detailed discussions of each subsection with workshop participants and further revised the consensus statement. We then conducted the third round of the questionnaire among the 21 expert panellists and used the results to finalise the statement. This iterative process resulted in 23 final statement items, all with greater than 80% consensus. Statement items are organised by the seven stages of a challenge contest: considering the appropriateness, organising a community steering committee, promoting the contest, assessing contributions, recognising contributors, sharing ideas and evaluating the contest (COPARSE).
Conclusions There is high agreement among crowdsourcing experts and stakeholders on the design and implementation of crowdsourcing challenge contests. The COPARSE consensus statement can be used to organise crowdsourcing challenge contests, improve the rigour and reproducibility of crowdsourcing research and enable large-scale collaboration.
- public health
- statistics & research methods
- social medicine
- qualitative research
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
- Recruitment for the first round of survey questionnaires included 140 lead authors of crowdsourcing manuscripts.
- Two additional rounds of surveys were completed by a multidisciplinary expert panel to obtain input from a broad range of stakeholders with expertise in leading crowdsourcing challenge contests.
- A combination of in-person and digital methods was used for discussion and voting on consensus items.
- The study is limited by the absence of in-depth interviews to inform the initial statement items and by potential recency bias.
Introduction
COVID-19 continues to test governments and healthcare providers around the world. In response, crowdsourced projects have created new forms of personal protective equipment,1 developed participatory citizen-science apps for contact tracing,2 and organised mutual aid efforts.3–5 Crowdsourcing is the process of having a group, including experts and non-experts, solve a problem and then share solutions with the public.6 Crowdsourcing has been used to inform WHO policy,7 develop machine learning algorithms,8 and identify innovative health services.9
Crowdsourcing is increasingly being used to find innovative, stakeholder-engaged solutions to challenging medical and public health problems.10–12 The effectiveness of crowdsourced health solutions has been demonstrated in randomised clinical trials,10 and social science research has also demonstrated the power of crowdsourcing to increase the engagement of stakeholders in health problem-solving, resulting in solutions that are more effective at addressing local contexts and community concerns.13 14 For example, a crowdsourcing approach has previously been used to engage communities disproportionately impacted by the HIV epidemic to help develop creative, culturally appropriate messaging on HIV cure research.15 Additionally, crowdsourcing approaches have been found to be highly effective for engaging marginalised lay populations in HIV cure research,16 and for engaging youth in developing effective youth-friendly HIV self-testing promotion strategies.17
An array of crowdsourcing approaches has been used by health and scientific research organisations, including the National Academies of Sciences, Engineering and Medicine,18 the National Institutes of Health Office of Behavioral and Social Sciences Research,19 and The Lancet Healthy Cities Commission.20 While there are many ways to implement crowdsourcing approaches, one common approach is the challenge contest (also called an open call, innovation challenge or prize inducement contest).6 Challenge contests typically involve a call for community solutions in response to a specified problem; contributions are evaluated to identify exceptional ideas, with finalists being awarded prizes and their ideas disseminated for implementation.12 The goals of challenge contests can vary; for instance, a systematic review of challenge contests found that process-oriented contests focused on mass engagement, while outcome-oriented contests focused on producing high-quality outputs.21
Despite the growing interest in and value of crowdsourcing challenge contests, there are few resources available to inform the design, implementation and evaluation of crowdsourcing contests related to health and medicine. The Special Programme for Research and Training in Tropical Diseases (TDR) ‘Practical Guide on Crowdsourcing in Health and Health Research’12 is the single most comprehensive guidance. However, this guide did not include a systematic review of the evidence in making its recommendations, nor did it include a consensus statement. Robust consensus guidance may help to mitigate potential risks of inconsistent application of crowdsourcing methodology, as well as help to establish greater trust in the use of challenge contests as an innovative approach to health research among both health researchers and public stakeholders.22 Consensus guidance can also help to enhance the rigour and reproducibility of crowdsourcing challenge contests. Given that the heterogeneous nature of crowdsourcing stifles comparisons,6 consensus guidance may be important for encouraging health researchers to implement this approach across a wider array of challenging problems in need of stakeholder-driven solutions. To provide rigorous guidance to assist healthcare professionals, policymakers and citizen scientists in applying a crowdsourcing approach, a multidisciplinary group of international experts reviewed evidence on crowdsourcing in health and medicine to develop a consensus statement. The goal of this paper is to describe our consensus development process and present the final statement as a tool for informing crowdsourcing approaches to health and medical research.
Methods
Study design
We followed recommendations on guideline development from the Guidelines International Network.23 Development of the final consensus statement proceeded through a modified Delphi process,24 which comprised four stages: (1) convening of an expert panel to initiate the consensus development process; (2) initial statement development and the first round of survey questionnaires; (3) an in-person workshop with expert panel members and a second round of survey questionnaires; and (4) a third round of survey questionnaires and finalisation of the consensus statement items. These stages are described in detail below.
Patient involvement
Patients were not involved in the design, conduct, reporting or dissemination plans of our research.
Expert panel recruitment
To initiate the consensus development process, we convened a panel of experts at a consensus development conference in Uganda. The conference was convened by PA (conference director), jointly with the following partner organisations: TDR Social Innovation in Health Initiative (SIHI), Makerere University, University of Malawi, University of the Philippines Manila, Universidad Icesi, Centro Internacional de Entrenamiento e Investigaciones Medicas, Pan American Health Organization, Social Entrepreneurship to Spur Health and the Bertha Center for Social Innovation and Entrepreneurship at the University of Cape Town. The meeting received organisational support from Fondation Merieux. The conference director and cochair (JT) and partner organisations appointed a multidisciplinary expert panel of internationally recognised academics representing several scientific disciplines, including internal medicine, public health, health policy, social innovation, social entrepreneurship, psychology and primary care. The panel included individuals from each of the five TDR SIHI hubs. These individuals all have experience leading health-related challenge contests in local, regional and global settings. In addition, we invited five individuals from the Ugandan public sector (innovations, consumer organisation, clinical health services, education, science and technology, public health) and nine Ugandan potential users.
The members of the expert panel had voting rights for the entirety of the consensus development process. They were selected on the basis of their expertise in crowdsourcing challenge contests and relevant publications. Each of the SIHI hubs used its own criteria to decide who should join the expert panel. Criteria included leadership in organising challenge contests on health, participation in TDR SIHI activities related to open calls and open innovation, and participation in other committees related to crowdsourcing. One independent, non-voting member (EK) with previous experience in Delphi methodology administered questionnaires for the modified Delphi process.
Modified Delphi consensus development
First, building on the evidence from our group’s previous systematic review of crowdsourcing in health and medicine,10 existing reviews of crowdsourcing in health,21 25–31 and previous guidelines,12 a subgroup of expert panel members developed an initial statement and set of recommendations. An online survey questionnaire was then developed using the initial statement, following our review of existing guidelines (online supplemental material 1). Second, we sent a link to the survey (hosted on Sojump, an online survey platform) to 140 corresponding authors of previously published articles that had used crowdsourcing methods. The survey included sections on considering, organising, promoting, assessing, recognising, sharing and evaluating crowdsourcing challenge contests. Throughout the survey, participants were presented with a series of statements pertaining to specific elements in the ideal design and implementation of each stage of a crowdsourcing challenge contest, including what processes, goals and considerations should be part of a crowdsourcing activity. Participants were asked for their level of agreement with each statement (strongly agree, agree, neutral, disagree and strongly disagree) and given the option to suggest amendments or make comments. Note that at this stage of consensus development, the term ‘crowdsourcing’, rather than the more specific format of crowdsourcing challenge contests, was used throughout the questionnaire in order to capture participants’ views on crowdsourcing as a method more broadly (thus potentially encompassing other forms of crowdsourcing, such as hackathons). Participants were assured that their responses were confidential and would be used only for the purposes of consensus statement development. Electronic informed consent in lieu of written consent was obtained from all online survey participants. Third, we presented the survey results to the expert panel at a workshop. This in-person workshop, sponsored by TDR SIHI, was held in Uganda from 8 to 11 October 2019.32 Participants of the workshop included all 21 members of the expert panel. As proceedings of the workshop are a matter of public record, the names and affiliations of panel members are available from the workshop report,32 and are summarised in table 1. Participants discussed each of the sections, revised the statements, and completed the second round of the survey questionnaire. At this stage of consensus development, survey participants were asked to consider their responses in relation to the specific crowdsourcing approach of challenge contests. Written informed consent for survey participation was obtained from all 21 workshop participants. Fourth, we held detailed discussions of each subsection with workshop participants (online supplemental material 2) and revised the consensus statement accordingly. We then conducted the third round of the questionnaire among the 21 expert panellists. Finally, we summarised the results of the final questionnaire.
Consensus statement definitions
A supermajority consensus rule was pre-specified. Specifically, all statements that had agreement rates of 80% or higher were included in the final consensus statement. Individual statement items were iteratively revised to maximise agreement across the three rounds of questionnaires. The degree of consensus for each statement was graded as follows: grade U was classified as unanimous (100%) agreement; grade A was 90%–99% agreement; and grade B was 80%–89% agreement.
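For illustration, the following minimal sketch shows how the prespecified supermajority rule and grading scheme described above could be applied to tallied questionnaire responses. This is not analysis code used in the study; the item labels, response counts and helper function are hypothetical.

```python
# Hypothetical tally of agreement for two statement items;
# 'agree' combines 'strongly agree' and 'agree' responses.
responses = {
    "item_1a": {"agree": 21, "total": 21},
    "item_6a": {"agree": 17, "total": 21},
}

def grade_item(agree: int, total: int) -> str:
    """Apply the 80% supermajority rule and the U/A/B grading scheme."""
    rate = agree / total
    if rate < 0.80:
        return "excluded"              # below the supermajority threshold
    if rate == 1.0:
        return "U"                     # unanimous (100%) agreement
    return "A" if rate >= 0.90 else "B"  # 90-99% -> A, 80-89% -> B

for item, counts in responses.items():
    pct = 100 * counts["agree"] / counts["total"]
    print(f"{item}: {pct:.0f}% agreement, grade {grade_item(**counts)}")
```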
Results
The first round of survey questionnaires (online survey) received 120 responses (response rate 85.7%). The first questionnaire included 35 items, including four items on sociodemographic characteristics. All 21 members of the expert panel participated in the second round of the survey questionnaire at the in-person workshop. After the workshop, the third round of the survey questionnaire was completed by 19 of the 21 expert panel members (response rate 90%). In total, over the three rounds of questionnaires and the in-person workshop, first round survey participants and the expert panel eliminated 12 items that were redundant or unnecessary. The iterative process resulted in 23 final consensus statement items, all with greater than 80% consensus. This final consensus statement on crowdsourcing challenge contests is presented in table 2. Note that for the sake of brevity, the general term ‘crowdsourcing’ is used throughout the consensus statement rather than the more specific phrase ‘crowdsourcing challenge contests’. Table 2 also indicates the grade received for each statement item. Seven of the final 23 items achieved unanimous agreement; 15 achieved grade A agreement and one (item 6a) achieved grade B agreement.
The 23 final items are organised by the seven stages of implementing a crowdsourcing challenge contest, which are summarised in figure 1.12 Here, we briefly describe the seven stages. Detailed descriptions are available (online supplemental material 3).
Consensus statement
Considering the appropriateness of challenge contests
Considering whether a challenge contest is an appropriate method for solving a problem is an important first step. Challenge contests are prize-based challenges in which a call is issued by contest organisers and solutions and ideas are then solicited from the public. Other forms of crowdsourcing include hackathons and online collaboration systems. Challenge contests, hackathons, and online collaboration systems differ from one another in terms of the amount of time, resources, and processes required to implement each. Researchers considering whether to use crowdsourcing should determine which method would be the most effective and feasible based on the local setting. Table 3 shows unique aspects of each of these methods.
Organising a community steering committee
If a challenge contest is deemed to be the most suitable crowdsourcing method for the local context, a steering committee should be organised. The steering committee often includes local community members, health professionals, community-based organisation leaders and private sector leaders. Importantly, efforts should be made to recruit committee members from diverse fields to provide an array of different perspectives. Including individuals with direct, personal experience with the problem, such as patients or at-risk groups, on the steering committee is essential. The steering committee plays an important leadership role throughout the contest, including deciding the structure and purpose of the contest, outlining the rules and requirements for entries, developing a call for contributions, and establishing the prize structure.
Promoting the challenge contest
Many people are unfamiliar with challenge contests and will need a clear description of the purpose, expectations and rules. Although many private companies offer to organise challenge contests, a simple website may be sufficient to communicate with potential challenge contest participants. The website should contain all information related to the challenge contest, including an overview of objectives, timeline, guidelines for contributions, criteria for judging, prizes and frequently asked questions. Infographics and short videos can help make the challenge contest more accessible to a general audience, allowing participation to move beyond expert audiences to engage the public. Challenge contests should include in-person events when possible. One study found that participants were twice as likely to learn about challenge contests through in-person events as through social media.33
Assessing contributions
The steering committee will need to consider how contributions will be received and judged. For some topics, receiving contributions exclusively online may be the best approach. For example, an online receiving platform would be the most efficient choice for a challenge contest seeking video contributions, as videos can be readily uploaded and submitted from any location with internet access. Text-based contributions can also be easily collected using an online submission form. However, it is important to note that limiting the receiving platform to online contributions may exclude some participants. In-person events in partnership with local organisations can provide alternative ways to receive offline contributions. Once all contributions are collected, a panel of judges will evaluate them to determine finalists. The judging panel often consists of a mix of experts, laypersons and members of the contest organising committee. Judges who have a potential conflict of interest should recuse themselves from reviewing contributions.
The quality of crowdsourcing contributions can vary widely, ranging from low-quality to high-quality entries.34 It is thus recommended to conduct an initial eligibility screening of all contributions in order to remove invalid, incomplete or duplicate entries before forwarding them to the judges. When there is a small number of contributions, panel judging can occur directly after this initial screening. If a contest receives a large number of contributions (eg, more than 10 contributions per judge), the judging process can be conducted in three phases: eligibility screening, crowd judging and panel judging. In phase one, two independent judges examine contributions based on prespecified criteria. Invalid, incomplete or duplicate contributions are deleted and do not advance to the next judging round. In phase two (crowd judging), a group of laypersons evaluates the eligible contributions using an evaluation rubric. Each contribution is reviewed by three independent judges. Only those contributions that are deemed exceptional (eg, mean score of 7/10 or greater) then proceed to the final round of panel judging. In phase three (panel judging), a panel of experts and non-experts individually evaluates each contribution forwarded from the crowd judging round. Once the evaluations have been received, the steering committee reviews all evaluations to rank order the scores and identify the finalists. Judges should be thanked for their assistance and notified when an announcement of the finalists will be made.
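As a rough sketch of the three-phase judging workflow described above (eligibility screening, crowd judging, panel judging), the snippet below filters ineligible entries and advances contributions whose mean crowd score meets a threshold. The data structures, field names and the 7/10 cut-off (taken from the illustrative example in the text) are assumptions for demonstration, not a prescribed implementation.

```python
from statistics import mean

# Hypothetical contributions: 'eligible' reflects the decisions of two
# independent screeners; 'crowd_scores' are ratings out of 10 from three
# lay judges.
contributions = [
    {"id": "c1", "eligible": True,  "crowd_scores": [8, 7, 9]},
    {"id": "c2", "eligible": True,  "crowd_scores": [5, 6, 4]},
    {"id": "c3", "eligible": False, "crowd_scores": []},  # duplicate entry
]

ADVANCE_THRESHOLD = 7.0  # illustrative 'exceptional' cut-off (mean of 7/10 or greater)

# Phase 1: eligibility screening removes invalid, incomplete or duplicate entries.
screened = [c for c in contributions if c["eligible"]]

# Phase 2: crowd judging; only contributions at or above the threshold proceed.
panel_pool = [c for c in screened if mean(c["crowd_scores"]) >= ADVANCE_THRESHOLD]

# Phase 3: a mixed expert/non-expert panel would then score this pool, and the
# steering committee would rank the results to identify finalists.
print([c["id"] for c in panel_pool])  # -> ['c1']
```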
Recognising contributors
Recognition in the context of a challenge contest can be difficult given that organisers bring together experts, non-experts and many other individuals who have different training and expectations. However, recognition is an important component of sustaining challenge contests. The first step in recognising contributors is to establish clear expectations for all those who contribute. For judges and steering committee members, this typically involves making clear at the time of invitation that their contributions are voluntary. The amount of time required of judges and steering committee members varies but is usually less than 4 hours in total. Among contributors, the amount of time required to create a submission should be commensurate with the prize structure. Where there is uncertainty, asking individuals who submit entries to estimate the total time spent creating the submission may be helpful. Finalists may be recognised through public announcements using various platforms, including organisational websites, social media platforms, online public fora and in-person events. While more attention is given to finalists, it is important to acknowledge the efforts of all people who submitted entries. There are many ways to do this, including sending emails notifying contributors of the outcome or thanking them for their efforts on an open platform. Written feedback on contributions may be shared with selected contributors. Public announcements should be timely, occurring shortly after the conclusion of a contest. Terms such as ‘winner’ and ‘loser’ are often avoided to acknowledge the hard work of all contest contributors and encourage future participation.
Sharing and implementing ideas
One of the most important stages of community-engaged research projects is the process of sharing results35 beyond scientific audiences.36 The main aim of the dissemination stage is to share ideas generated through the contest and to implement selected ideas where appropriate. Challenge contest dissemination depends on the context and the audience. There are several reasons to widely share selected contest contributions. First, since crowdsourcing involves soliciting outputs from a group, sharing allows organisers to give back to the group that made the project possible. Second, crowdsourcing projects are often supported by public funds, enrol local participants and are sanctioned by local public authorities, which creates an obligation to return results to the communities involved. Despite the strong rationale for sharing, there are also many factors that may limit wide sharing. Contest participants may be appropriately concerned that sharing their contribution could pose risks, such as inadvertent disclosure of private information (eg, sexual orientation). In research settings, scientists may be concerned about disseminating materials that could interfere with blinding in randomised controlled trials (RCTs). The risks of sharing contributions need to be carefully considered and addressed during contest planning.
Evaluating challenge contests
Crowdsourcing challenge contests can be evaluated in many ways, including quantitative and qualitative approaches. Quantitative studies can examine the crowdsourcing activity itself and may include the number and quality of contributions, the number of website views and related social media metrics.37–39 Such evaluations can be used to determine the overall reach and level of participation in a challenge contest, which can further indicate the interest of stakeholders in addressing the health problem targeted by the contest. Observational studies can provide useful information about challenge contests, such as the acceptability of the challenge contest to relevant stakeholders, the impact on related participant behaviours and motivations for participation. This can provide insights into the extent to which a crowdsourcing activity was successful at engaging a diverse array of stakeholders in ways that are meaningful to them. This may be particularly important for studies seeking to engage populations whose perspectives are often excluded from the production of health research knowledge—for example, individuals with low levels of education14 and youth.17 RCTs are used to assess the effectiveness of crowdsourcing interventions compared with interventions developed using other methods. For example, one study evaluated an online peer support intervention and found that crowdsourced social interactions enhanced user engagement and decreased rates of depression compared with online expressive writing.40 While considered the gold standard evaluation method, RCTs are often time- and resource-intensive. Qualitative evaluation can synthesise themes identified in the text of contributions or provide more context on implementation.
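The following is a minimal sketch of the kind of quantitative process metrics mentioned above (numbers of contributions, website views and social media reach). All figures and field names are invented for illustration and do not come from any actual contest evaluation.

```python
# Hypothetical process metrics for a single contest cycle.
contest_metrics = {
    "contributions_received": 180,
    "contributions_eligible": 150,
    "website_views": 12000,
    "social_media_impressions": 45000,
}

# Simple derived indicators that an evaluation might report.
eligibility_rate = (
    contest_metrics["contributions_eligible"] / contest_metrics["contributions_received"]
)
conversion_rate = (
    contest_metrics["contributions_received"] / contest_metrics["website_views"]
)

print(f"Eligible contributions: {eligibility_rate:.0%}")        # share of entries passing screening
print(f"View-to-submission conversion: {conversion_rate:.2%}")  # crude indicator of reach vs participation
```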
Discussion
We developed COPARSE (considering the appropriateness, organising a community steering committee, promoting the contest, assessing contributions, recognising contributors, sharing ideas and evaluating the contest), a consensus statement on crowdsourcing challenge contests in health and health research. COPARSE brought together data from a systematic review and meta-analysis, existing global guidelines on crowdsourcing, and structured feedback from experts and end users using a modified Delphi methodology. The consensus statement provides a harmonised framework to enhance crowdsourcing challenge contest policy, implementation and research. First, the statement provides a structure for policymakers to organise open calls for suggestions about health policy. Second, both the shorter consensus statement and the longer implementation considerations provide a range of practical suggestions for implementers using challenge contests in the field. Finally, researchers may find COPARSE useful in developing studies to evaluate the process or outcomes of crowdsourcing studies.
COPARSE represents a novel approach to consolidating expert advice on the design, implementation and evaluation of challenge contests. Our literature review found that published literature on crowdsourcing is limited to methodological descriptions. Although there have been published reviews on crowdsourcing,10 11 41 the heterogeneity of recommendations has precluded consensus development. COPARSE extends the scope of TDR’s Practical Guide on Crowdsourcing in Health and Health Research12 by developing a consensus statement with extensive feedback from clinical and public health experts, as well as implementers with experience using crowdsourcing. Our consensus statement helps to refine the practice of crowdsourcing in health and medical research based on a convergence of expert and non-expert review. Establishing consensus on crowdsourcing challenge contest procedures through the input of experts with direct crowdsourcing experience may help to further legitimise and encourage the use of this approach in solving complex health and medical problems.
COPARSE expands the literature by including diverse voices from around the world, using a modified Delphi method, and focusing on challenge contests. However, there are some methodological limitations to this study that should be considered. First, we did not use in-depth interviews at the start of the process to inform survey development. However, the survey was informed by a thorough review of the existing literature. Second, the in-person workshop was conducted over 2 days, which may have introduced recency bias among workshop participants whose responses were used to develop COPARSE. Third, while the workshop was attended by a diverse range of participants with highly relevant expertise, participation was ultimately limited to 21 individuals. An iterative procedure over a longer time horizon, and with a larger group of participants, may allow for a greater diversity of opinions and recommendations for inclusion in future iterations of COPARSE, which we envision as a statement that can be revisited, updated and further refined over time and in response to innovations in the growing field of crowdsourcing for health research.
Conclusion
Challenge contests are simple, inclusive, and inexpensive ways to solicit community feedback on health and medical problems. COPARSE should not be used as a rigid guidebook, but rather as a set of core principles to inspire further challenge contests. Only through iterative implementation will the science and practice of crowdsourcing for health and medicine improve.
Data availability statement
Data are available upon reasonable request. The datasets generated and/or analysed during the study are not publicly available due to participants not having consented to public availability, but are available from the corresponding author on reasonable request: jdtucker@med.unc.edu.
Ethics statements
Patient consent for publication
Ethics approval
This project was approved by the IRB of Southern Medical University.
Acknowledgments
We thank Social Entrepreneurship to Spur Health and TDR Social Innovation in Health Initiative for operational support. The SIHI network is supported by TDR, the Special Programme for Research and Training in Tropical Diseases, co-sponsored by UNDP, UNICEF, the World Bank and WHO. TDR is able to conduct its work thanks to the commitment and support from a variety of funders. For the full list of TDR donors, please see: https://www.who.int/tdr/about/funding/en/. TDR receives additional funding from Sida, the Swedish International Development Cooperation Agency, to support SIHI. We would like to thank the Merieux Foundation for support of the SIHI workshop where the stages of crowdsourcing were discussed.
References
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
LH, WT and TR contributed equally.
Contributors LH, WT and JT designed and carried out this study. PA convened the consensus-development conference, which was chaired by JT and attended by members of the expert panel, including WT, TR, SW, HB, EK, DM, PA, NJ, DA, YX, EO, RJ and VA, who contributed to data analysis and writing of the longer supplemental manuscript. VA developed the website associated with the paper. The paper was drafted by LH, with feedback from TR, SD and JT. All coauthors reviewed the paper. JT acts as the guarantor.
Funding This work was supported by TDR, the National Institutes of Health (NICHD UG3HD096929, NIAID K24AI143471), the UNC Center for AIDS Research (NIAID 5P30AI050410), and the National Key Research and Development Program (2017YFE0103800).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.