Understanding current practice, identifying barriers and exploring priorities for adverse event analysis in randomised controlled trials: an online, cross-sectional survey of statisticians from academia and industry


Objectives
To gain a better understanding of current adverse event (AE) analysis practices and the reasons for the lack of use of sophisticated statistical methods for AE data analysis in randomised controlled trials (RCTs), with the aim of identifying priorities and solutions to improve practice.

Design
A cross-sectional, online survey of statisticians working in clinical trials, followed up with a workshop of senior statisticians working across the United Kingdom.

Participants
We aimed to recruit into the survey a minimum of one statistician from each of the 51 UK Clinical Research Collaboration (CRC) registered clinical trial units (CTUs) and industry statisticians from both pharmaceuticals and clinical research organisations (CROs).

Outcomes
To gain a better understanding of current AE analysis practices, measure awareness of specialist methods for AE analysis and explore priorities, concerns and barriers when analysing AEs.

Results
Thirty-eight (38/51; 75%) CTUs, five (5/7; 71%) industry organisations and twenty-one attendees at the 2019 PSI conference consented to participate and proceeded into the survey. Of the 64 participants, forty-six were classified as public sector participants and eighteen as industry participants. Participants indicated that they predominantly (80%) rely on subjective comparisons when comparing AEs between treatment groups. Forty per cent were aware of specialist methods for AE analysis, but only 13% had undertaken such analyses. All participants believed that guidance on appropriate AE analysis is needed and 97% thought that training specifically for AE analysis is needed; both were endorsed as solutions by workshop participants.

Conclusions
This research supports our earlier work that identified suboptimal AE analysis practices in RCTs and confirms the underuse of more sophisticated AE analysis approaches. Improvements are needed, and this research provides a unanimous call for the development of guidance, as well as training on appropriate methods for AE analysis, to support change. Further research is needed to identify the most appropriate statistical methods for AE data analysis.

Keywords
Randomised controlled trials; adverse events; harms; adverse drug reactions; survey; statisticians; clinical trials units; industry; analysis.

Strengths and limitations of this study
 There was some level of self-selection to participation and, as such, there is a possibility that participants had an increased interest in adverse event (AE) analysis and are not fully representative of the clinical trial community.
 The survey was followed up with a workshop of senior statisticians from across the United Kingdom, which represents more of a general interest group.
 The survey provides insight and essential starting points to identify areas of focus to help support a change to improve AE analysis practices.

INTRODUCTION
… allowing causality to be evaluated and potential detection of adverse drug reactions (ADRs). Reviews of journal article reports of RCTs have demonstrated that harms data are not being fully utilised, with frequent inappropriate and insufficient analyses. 1-4 In addition, inconsistent information is reported, preventing a complete summary of the harm profile from being established. 5-11 † Building on previous work, a comprehensive methods review undertaken by the authors revealed a broad range of published statistical methods proposed specifically to analyse AE data at both the interim and final analysis. 12,13 Many of the proposed methods could be adopted into current practice with relative ease, and Chuang-Stein and Xia have proposed examples of industry strategies adopting such methods. 14 Previous research has demonstrated that these methods are not used for the analysis presented in the primary results publication, and there are minimal citations of these published methods in the RCT setting, which further suggests uptake of these methods is low. 1,12,13 Understanding the reasons for this low uptake will help identify solutions to improve the analysis of AEs in RCTs. We undertook a survey of UK statisticians working in clinical trials to investigate their current practice when analysing AEs, to measure their awareness of available methods for AE analysis, and to explore their priorities and concerns and identify any perceived barriers when analysing AEs.
† An adverse event is defined as 'any untoward medical occurrence that may present during treatment with a pharmaceutical product but which does not necessarily have a causal relationship with this treatment'. An adverse drug reaction (ADR) is defined as 'a response to a drug which is noxious and unintended …' where a causal relationship is 'at least a reasonable possibility'.

Survey development
The survey was developed using information from current guidance and previous research that examined barriers to the uptake of new methodology. 15-18 Topics covered included: current practice and factors influencing the AE analysis performed; barriers encountered when analysing AEs; concerns regarding AE analysis; awareness and opinions of specialist methods for AE analysis; concerns about and barriers to implementing specialist methods; and opinions on potential solutions to support a change in AE analysis practice.
Questions were predominantly closed form, but where appropriate open-ended questions were included to allow for detailed responses and comments. Responses were measured using Likert scales.

Sampling and Recruitment
We targeted a population that we knew to be predominantly involved in






DISCUSSION
Many methods have been specifically proposed for AE analysis in RCTs and there was a moderate level of awareness of these methods (40%) but in line with our review of journal articles we found uptake to be minimal (13%). 12 Trial design and the nature of AE outcomes can also hinder the analyses performed. Unlike efficacy outcomes, which are well defined and limited in number from the outset, harm outcomes are numerous, undefined and contain additional information on severity, timing and duration, and number of occurrences, which all need to be considered. More careful consideration of harm outcomes when designing, analysing and reporting trials will help produce a more balanced view of benefits and risks.
Improved analysis could be achieved through the adoption of existing methods or the development of more appropriate methods for AE data, ultimately helping to identify signals for ADRs and enabling a clearer harm profile to be presented. Several participants mentioned AE analysis approaches we believe warrant exploring, including time-to-event analyses, data visualisations and Bayesian methods. This is supported by feedback obtained at the workshop.

Strengths and limitations
Through the support of the UKCRC CTU network and the use of personal contacts, we were able to achieve a high response rate for the survey. There was some level of self-selection for those recruited via the open platform and, as such, there is a possibility that these participants had an increased interest in AE analysis and are not fully representative of the clinical trial community. We also did not have any information on non-responders and so cannot characterise any potentially relevant differences that could affect the generalisability of our results. This survey provides insight and essential starting points to identify areas of focus to help support a change to improve AE analysis practice. Many of the opinions raised in the survey were echoed by the workshop attendees, who represented more of a general interest group.

COMPETING INTERESTS
None declared.

ETHICS
This study was granted ethical approval by the Imperial College Joint Research Compliance Office (ICREC reference: 19IC5067).

FUNDING
This research was supported by the NIHR grant number DRF-2017-10-131.

DATA SHARING STATEMENT
Survey data are available from the Zenodo data repository.

AUTHOR CONTRIBUTIONS
RP and VC conceived the idea, designed and ran the survey. RP performed the data analysis, interpreted the results and wrote the manuscript. VC interpreted the results, provided critical revision of the manuscript and supervised the project.

Survey excerpt
"Before you proceed we thought it would be helpful for you to know about our recent findings. We undertook a systematic review of RCT journal reports and found that trials typically report AE data using frequencies (94%) and percentages (87%). They often ignore repeated events (84%), and 47% undertake hypothesis tests despite a lack of power. There is also a common practice of categorising continuous clinical and laboratory outcomes and presenting them as frequencies and percentages (59%). A small proportion (12%) incorporated graphics into the AE analysis."

Thinking about analysis methods for AEs:
6. How often would you say the following influences the analysis performed?
   i. Statistician prefers simple approaches, e.g. tables of frequencies and percentages
      [Always / Often / Not very often / Never / Don't know]

Are there any reasons other than those mentioned above why those methods are not being more widely used? [Yes / No]
If yes, please specify.

Have you undertaken any specialist AE analysis not mentioned in your previous responses? [Yes / No]
Please explain your answer. If 'yes', please include details of the method(s) used for the analysis performed.


The statistical methods proposed for adverse event analysis identified in the methodology review had minimal citations, which further suggests uptake of these methods is low. 1,6,7 In addition, there is a problem with the reporting of adverse events and the selection of events to include in journal articles. Many reviews have established poor-quality reporting of adverse event data from RCTs in journal articles. 9-15 It is also often not possible to include all adverse events in the primary RCT publication, and authors need to select events for a pertinent summary. To achieve this, there is a prevalent practice of relying on arbitrary rules to select events to report, which can introduce reporting biases, leaving out important adverse events. This also creates a barrier to establishing an accurate harm profile. 3,16

Study design
A cross-sectional, online survey of UK Clinical Research Collaboration (UKCRC) clinical trial unit (CTU) and industry statisticians from both pharmaceutical companies and clinical research organisations (CROs) was conducted. We aimed to recruit a minimum of one statistician from each of the 51 UKCRC registered CTUs and from a sample of pharmaceutical companies and CROs in the UK to gain an industry perspective. The survey was followed up with a workshop at the UKCRC biannual statisticians' operations group meeting, where survey results were presented and areas for improvement and priorities were discussed.

Sampling and Recruitment
The invitation to participate in the study included the participant information sheet (appendix item 2), which was also included at the beginning of the survey before participants formally entered. Participants were encouraged to read the information sheet and to discuss the study with others, or to contact the research team if they wished. If invitees were happy to enter the study, their consent was taken as implied upon submission of the completed survey.

Participants
Statisticians with experience of planning and preparing the final analysis reports for pharmacological RCTs were invited to participate.

Analysis
Descriptive analysis was undertaken, primarily frequencies and proportions for each questionnaire item, accompanied where appropriate by visual summaries. 22 The frequency and proportion of participants who showed support for an item were calculated by combining the 'always' and 'often' or the 'strongly agree' and 'agree' categories. Participants were classified according to affiliation into either the CTU/public sector or the industry sector, and analysis was stratified by sector.
Response rates were calculated for groups of participants where known.
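The dichotomisation rule described above (combining the top two Likert categories into "support", then stratifying by sector) can be sketched as follows. This is only an illustration of the computation under invented data; the column names and values are hypothetical, not the authors' actual code or dataset.

```python
import pandas as pd

# Hypothetical Likert responses; names and values are invented for illustration.
responses = pd.DataFrame({
    "sector": ["CTU/public", "CTU/public", "CTU/public", "Industry", "Industry"],
    "q_simple_approaches": ["Always", "Often", "Never", "Often", "Not very often"],
})

# "Support" combines the top two categories ('always'/'often',
# or 'strongly agree'/'agree' for agreement items).
support = responses["q_simple_approaches"].isin(["Always", "Often"])

# Frequency and proportion of support, overall and stratified by sector.
overall = {"n": int(support.sum()), "prop": float(support.mean())}
by_sector = support.groupby(responses["sector"]).agg(n="sum", prop="mean")
```

The same pattern extends to each questionnaire item by repeating the `isin` step per column.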

Patient and public involvement
This survey forms part of a wider research project that was developed with input from a range of patient representatives. There were no patients directly involved in this survey but the original proposal and patient and public involvement (PPI) strategy were reviewed by service user representatives (with experience as clinical trial participants and PPI advisors) who provided advice specifically with regard to communication and dissemination to patient and public groups.

Participant flow
Invitations were sent to fifty-one CTU/public sector and seven industry contacts (appendix figure A1).

Participant characteristics
Overall, more than 80% of responders worked on studies of more than 100 participants, and 80% worked on phase II/III trials. A greater proportion of industry participants were working on phase I/dose-finding trials compared with CTU/public sector participants (22% vs 2%) (figure 1). The mean number of years of experience was 12.8 (SD 8.3; median 11.5, range 1-35) (table 1).

Table footnotes:
1. Other ways of presenting AE information included information on: the overall number of events (n=2); the number of patients experiencing 0, 1, 2, etc. events and the number of AEs per patient (n=2); duration (n=1); relatedness (n=1); and severity (n=7) (full free-text comments in appendix table A1).
2. Incorporates free-text comments that described summaries synonymous with incidence rate ratios.
3. Includes a comment that a participant presents the "median number (IQR)" of events.
4. Other comments related to the calculation of confidence intervals for precision (n=2); one indicated use of a graphical summary; and four cautioned against the use of testing.
Just under 40% stated that they were aware of appropriate methods published specifically for adverse event analysis in RCTs (table 2). Five broad groups of methods were mentioned, including Bayesian methods to analyse low frequencies (n=1) and standard regression modelling. Of the participants who reported that they were aware of specialist adverse event analysis methods, we asked their opinions on why such methods were not more widely used. Just over a quarter thought limited use was due to technical complexity (27%); over a third thought it could be due to trial characteristics such as unsuitable sample sizes (36%) and the number of different adverse events experienced in trials (36%); and 46% thought methods were too resource intensive and not suitable for typical adverse event rates observed (appendix table A4).
Over three-quarters (77%) of participants provided further reasons for lack of use of specialist methods. Reasons were characterised into comments relating to: concerns with the suitability of methods in relation to trial characteristics and the nature of adverse event data (n=7); opposition and a lack of understanding from clinicians (n=5); a lack of need for such methods (n=3); a desire to keep analysis consistent with historical analysis (n=3); and training and resources (n=1) (appendix table A5).

Influences, barriers and concerns
The most common influences on the adverse event analysis performed were cited as the chief investigator's preference for simple approaches (78%), the observed adverse event rates (76%) and the size of the trial (73%). The statistician's preference for simple approaches (68%) and the number of different adverse events experienced in a trial (65%) were also felt to be influential by over 60% of participants.

Concerns and solutions
When participants were asked to think about available methods for adverse event analysis, the most common concern, held by 38% of participants, was the acceptability of methods to regulators. This differed substantially by sector, with only 23% of CTU/public sector participants holding this belief compared with 77% of industry participants. Twenty per cent of participants were concerned about the acceptability of methods to the chief investigator and to journals, and 32% were concerned about the robustness of methods (figure 2 and appendix table A9).
All participants believed that guidance on appropriate adverse event analysis is needed, 97% thought training specifically for adverse event analysis is needed, and 63% thought new software or code is needed (figure 2 and appendix table A10). Just under a third (32%) of participants offered solutions to support change in adverse event analysis practices. These included suggestions regarding improved standards or calls for changes from journals, registries and regulators (n=8); development of guidance, education and engaging with the medical community (n=9); and analysis (n=3) (appendix table A11).
Thirty per cent of participants raised other items not listed in the survey regarding current adverse event analysis practices. These covered the following themes: minimum summary information that participants would expect to be reported for adverse event data, such as "numbers and percentages" (n=2); changes to analysis practice that could be, or have been, made, such as "use of graphical methods" (n=8); concerns about the quality and collection of adverse event data (n=3); and general comments and criticisms about current adverse event analysis and reporting practices (n=4) (appendix table A12).
In the follow-up workshop of senior statisticians (n=52 from 43 UKCRC registered CTUs) attending the UKCRC biannual statisticians' operations meeting in November 2019, participants were asked to rate the need to improve analysis practices for adverse event data on a scale of 0-100 (indicating low to high priority). The mean score was 66 (SD 16.2) (median 71 (range 9, 88)) (n=44). In discussions, the following themes were highlighted as priorities to take forward: development of guidelines; identification of appropriate analysis methods; exploring integration of qualitative information; and ensuring consistency of information reported including development of core harm outcomes by drug class.

DISCUSSION
Despite RCTs being a valuable source of data to compare rates of adverse events between treatment groups and an opportunity to assess causality, analysis and reporting practices are often inadequate. 1-4,9-15 This survey of statisticians from the UK public and private sectors has established a more detailed picture of clinical trial statisticians' adverse event analysis practices and builds on our previous research, which evaluated adverse event analysis practices reported in journal articles. 1 It has identified priorities and concerns, including influences, barriers and opinions, to be addressed in future work to improve adverse event analysis. Results were broadly similar across the public and industry sectors. The only notable differences were the greater use of hypothesis testing and 95% confidence intervals by CTU participants as a means to compare adverse event rates between treatment groups, a more predominant belief among industry participants that regulators preferred simple approaches to adverse event analysis, and a greater concern among industry participants about the acceptability of methods to regulators. Across sectors, there was unanimous support that guidance and training on appropriate adverse event analysis are needed.
Survey responses indicated that 75% of statisticians produce tables with both the number of participants with at least one event and the total number of events. This is substantially higher than reported in reviews of published articles, which found that between 1% and 9% reported both. 1-3 The total number of events experienced can give a better summary of the impact on patients' quality of life, but it seems this is often omitted from journal articles, with reviews identifying only 6% to 7% of published articles reporting this information. 1,4 Reported use of 95% confidence intervals was similar to that reported in journal articles (22% compared with 20%), but reported use of hypothesis testing was lower than that found in journal articles (32% compared with a range of 38% to 47%). [1][2][3] Reasons for these disparities are not known but could include journal editors requesting that such analyses be undertaken to compare groups, or requests from the chief investigator; this is supported by survey responses indicating a preference for simple approaches from both groups. It could also be that the survey participants, being restricted to those working in CTUs and industry, are not fully representative of those undertaking and reporting clinical trial results.
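To make the simple comparative summaries discussed above concrete, the sketch below computes an absolute risk difference in the proportion of participants with at least one adverse event, with a Wald 95% confidence interval. The counts, the function name and the worked numbers are hypothetical illustrations (not data from this survey), and the normal approximation is only reasonable when event counts are not very small:

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Absolute risk difference between two arms with a Wald 95% CI.

    A minimal sketch using the normal (Wald) approximation; exact or
    score intervals are preferable when adverse events are rare.
    """
    p_a = events_a / n_a                  # proportion with >=1 AE, arm A
    p_b = events_b / n_b                  # proportion with >=1 AE, arm B
    rd = p_a - p_b                        # absolute risk difference
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# Hypothetical counts: 12/100 participants with >=1 AE vs 5/100
rd, lo, hi = risk_difference_ci(12, 100, 5, 100)
print(f"RD = {rd:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Presenting the interval rather than a p-value keeps the focus on the plausible size of the between-arm difference, which is the quantity the reporting guidelines cited below recommend.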
Many methods have been specifically proposed for adverse event analysis in RCTs and there was a moderate level of awareness of these methods (40%), but, in line with our review of journal articles, we found uptake to be minimal (13%). 6,7 Whilst not directly comparable, our results are also closely aligned with those of a survey of industry statisticians. 28 This survey did not specifically ask participants about their use of graphics to display adverse event data, but a similar proportion of participants indicated use of such summaries in free-text comments as was identified in our review of journal articles (9% vs 12%). 1 However, both of these figures were substantially lower than the 37% that indicated use of static visual displays for study-level adverse event analysis in the survey of industry statisticians. 28 This could reflect the use of more advanced graphical approaches for internal reports. Guidelines for harm reporting exist, but reviews have found uptake of recommendations within these guidelines, such as "reporting CIs around absolute risk differences" and to "include both the number of events (per person time) and the number of patients experiencing the event", to be minimal. 1,2,4,14,15 It has also been argued that such guidelines do not go far enough and fail to account for the complex nature of harm outcomes data. 5 Trial design and the nature of adverse event outcomes can also hinder the analyses performed.
Unlike efficacy outcomes, which are well defined and limited in number from the outset, harm outcomes are numerous, often undefined, and contain additional information on severity, timing and duration, and number of occurrences, all of which need to be considered. More careful consideration of harm outcomes when designing, analysing and reporting trials will help produce a more balanced view of benefits and risks. 20 Improved analysis could be achieved through adoption of existing methods or development of more appropriate methods for adverse event data. Several participants mentioned adverse event analysis approaches we believe warrant exploring, including time-to-event analyses, data visualisations and Bayesian methods, ultimately with the aim of helping to identify signals for adverse drug reactions and so enabling a clearer harm profile to be presented. This is supported by feedback obtained at the workshop and the earlier findings of Colopy et al., who concluded that statisticians should help "minimize the submission of uninformative and uninterpretable reports" and thus present more informative information regarding likely drug-event relationships. 28 Participants of both the survey and workshop raised concerns about the quality and reporting of adverse event data from RCTs. We agree that if adverse event data are not robust the analysis approach is redundant, as the results will not be accurate. Therefore, procedures should be put in place at the trial design stage to mitigate problems with adverse event data collection, including, for example, development of validated methods for data collection and clear, standardised instructions for those involved in detection and collection. 3,31
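One of the approaches mentioned by participants, accounting for repeated events through person-time at risk, can be sketched as a Poisson-based incidence rate ratio. The event counts, person-years and function name below are hypothetical illustrations, and the log-normal interval is a standard textbook approximation rather than a method endorsed by the survey:

```python
import math

def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Incidence rate ratio from total event counts and person-time.

    A minimal sketch assuming Poisson event counts; the CI uses the
    usual log-normal approximation with SE = sqrt(1/e_a + 1/e_b).
    Counts total events, so repeated events per participant are kept.
    """
    rate_a = events_a / py_a              # events per person-year, arm A
    rate_b = events_b / py_b              # events per person-year, arm B
    rr = rate_a / rate_b                  # incidence rate ratio
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical totals: 30 AEs over 120 person-years vs 18 over 115
rr, lo, hi = rate_ratio_ci(30, 120, 18, 115)
print(f"IRR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Unlike a simple comparison of the proportion of participants with at least one event, this summary uses both the number of events per person-time and differential follow-up, which is the kind of information the reporting recommendations above ask trialists to retain.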

Strengths and limitations
Through support of the UKCRC CTU network and utilisation of personal contacts, we were able to achieve a high response rate for the survey. After invitations were sent there was no way to ensure that responses were restricted to one per unit or organisation. However, dissemination via the UKCRC to senior statisticians within units, and via personal, senior contacts within industry, would have ensured some quality control. There was some level of self-selection for those recruited via the open platform and, as such, there is a possibility that these participants had an increased interest in adverse event analysis and are not fully representative of the clinical trial community. We also did not have any information on non-responders and as such cannot characterise any potentially relevant differences that could affect the generalisability of our results. This survey provides insight and essential starting points to identify areas of focus to help support a change to improve adverse event analysis practice. Many of the opinions raised in the survey were echoed by the workshop attendees, who represented more of a general interest group.

Conclusions
This research demonstrates that there is a moderate level of awareness of appropriate statistical methods for adverse event analysis but that these methods are not being used by statisticians. It supports our earlier work identifying adverse event analysis practices in RCTs as suboptimal.

Acknowledgments
The authors would like to thank: Dr Odile Sauzet for discussing the survey design in the initial stages of development.
Louise Williams (UKCRC) for her support, in particular circulating the survey to UKCRC registered CTUs and helping us achieve such a high response rate, and Professor Carrol Gamble and Professor Catherine Hewitt (UKCRC statistical operational group chairs) for supporting this project.
Alexander Schacht for inviting us on to his podcast to promote the survey and circulating to his contacts through LinkedIn.
Dr Suzie Cro, Nicholas Johnson, Anca Chis Ster, Emily Day, Fiona Reid and Professor Carrol Gamble for providing feedback on survey content and platform, and Dr Suzie Cro for her help in facilitating the workshop at the UKCRC biannual statisticians' operations group meeting.

COMPETING INTERESTS
None declared.

ETHICS
This study was granted ethical approval by the Imperial College Joint Research Compliance Office (ICREC reference: 19IC5067).

DATA SHARING STATEMENT
Survey data are available from the Zenodo data repository.

Figure legends
What is the typical phase of the trials you work on?
Phase I/Dose-finding
Phase II/III
Phase IV
Before you proceed, we thought it would be helpful for you to know about our recent findings.
We undertook a systematic review of RCT journal reports and found that trials typically report AE data using frequencies (94%) and percentages (87%).

They often ignore repeated events (84%) and 47% undertake hypothesis tests despite a lack of power. There is also a common practice of categorising continuous clinical and laboratory outcomes and presenting them as frequencies and percentages (59%). A small proportion (12%) incorporated graphics into the AE analysis.
Thinking about analysis methods for AEs:
6 How often would you say the following influences the analysis performed?
i Statistician prefers simple approaches, e.g. tables of frequencies and percentages
Always / Often / Not very often / Never / Don't know
Are there any reasons other than those mentioned above why those methods are not being more widely used?

Yes No
If yes, please specify.
Please explain your answer.
If 'yes', please include details of the method(s) used for the analysis performed.
This survey will allow an exploration of awareness of statistical methods available to flag AEs as potential adverse drug reactions (ADRs) and identify any potential barriers to their use, as well as gain feedback on ideas for new statistical methods.

Why have I been chosen?
You are eligible to participate in the survey if you satisfy the following inclusion criteria: i) Your current role is as a senior statistician or equivalent at a UKCRC CTU; ii) You have experience of planning and preparing final analysis reports for pharmacological RCTs.
We ask you to provide your personal views.

Do I have to take part?
Participation in the study is voluntary. It is up to you to decide whether to take part. If you decide to take part, you are still free to withdraw at any time without having to give a reason.
However, retraction or removal of your survey answers is not possible once the 'Submit' button has been selected.

What are the possible disadvantages and risks of taking part?
There are no disadvantages that we are aware of from taking part in this study.

What if something goes wrong?
We are not aware of any risks involved in taking part in this study.

Will my taking part in this study be kept confidential?
All personal records relating to this study will be kept confidential. We will use SurveyMonkey to capture your responses. No personal data will be collected in the survey and, as such, your responses to this survey will be anonymous. Responses will be kept in a secure, password-protected and encrypted file and stored on the Box cloud content management platform. Data in Box are stored securely and automatically backed up. The Box platform is fully General Data Protection Regulation (GDPR) compliant. Upon completion of the study the research data will be uploaded to an approved data-sharing repository. This will be maintained for at least ten years from the time the research study is complete.

What will happen to the results of the research study?
The results of this study will be analysed and published in an open access peer reviewed scientific journal. The work will also be submitted for oral presentation at a range of academic conferences targeting statisticians and the wider clinical trial community. If you would like help in locating and viewing the published results please contact us using the details below.
Study data will be stored for ten years post end of study in keeping with Imperial College London research policy.
No identifying data will be published.

Will I receive payment for participating in the study?
You will not be paid for taking part in this study but upon successful completion of the survey, you will be entered into a prize draw for a chance to win £50 worth of Amazon vouchers.

Who is organising and funding the research?
This study is being organised and sponsored by Imperial College London. This study is funded by the National Institute for Health Research (NIHR) (grant reference number DRF-2017-10-131). Please note that the views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.

Who has reviewed this study?
This study has been reviewed by the Head of Imperial Clinical Trials Unit and granted ethical approval by the Imperial College Joint Research Compliance Office (JRCO). Please follow the link in the invitation email to access the survey. We estimate that the survey will take no longer than 15 minutes to complete. You will have an eight-week window to complete the survey. Reminder emails will be sent at week 4 and week 6.

What action is required?
Please note that completing the survey and clicking 'Submit' automatically implies your consent to participate. Participation is voluntary and you are free to withdraw at any point whilst completing the survey. However please note retraction or removal of individual survey answers is not possible once the 'Submit' button has been selected.

Contact information:
Should you have any questions concerning this study, please contact the research team using the details provided below:

Ethics. Page 22
Informed consent: Describe the informed consent process. Where were the participants told the length of time of the survey, which data were stored and where and for how long, who the investigator was, and the purpose of the study?
Methods - sampling and recruitment. Page 8
Data protection: If any personal information was collected or stored, describe what mechanisms were used to protect unauthorized access.
Methods - sampling and recruitment. Page 7/8

Randomization of items or questionnaires
To prevent biases items can be randomized or alternated.

Not applicable
Adaptive questioning: Use adaptive questioning (certain items, or only conditionally displayed based on responses to other items) to reduce the number and complexity of the questions.

Number of Items
What was the number of questionnaire items per page? The number of items is an important factor for the completion rate.

Appendix item 1 -survey questions
Number of screens (pages): Over how many pages was the questionnaire distributed? The number of items is an important factor for the completion rate.
Completeness check: Was a check for completeness done before the questionnaire was submitted, and if "yes", how (usually JavaScript)? An alternative is to check for completeness after the questionnaire has been submitted (and highlight mandatory items). If this has been done, it should be reported. All items should provide a non-response option such as "not applicable" or "rather not say", and selection of one response option should be enforced.

Not applicable
Participation rate (ratio of unique visitors who agreed to participate / unique first survey page visitors): Count the unique number of people who filled in the first survey page (or agreed to participate, for example by checking a checkbox), divided by visitors who visited the first page of the survey (or the informed consent page, if present). This can also be called the "recruitment" rate.
Completion rate (ratio of users who finished the survey / users who agreed to participate): The number of people submitting the last questionnaire page, divided by the number of people who agreed to participate (or submitted the first survey page). This is only relevant if there is a separate "informed consent" page or if the survey goes over several pages. This is a measure of attrition. Note that "completion" can involve leaving questionnaire items blank. This is not a measure of how completely questionnaires were filled in. (If you need a measure for this, use the word "completeness rate".) If you provide view rates or participation rates, you need to define how you determined a unique visitor. There are different techniques available, based on IP addresses or cookies or both.

Not applicable
Results - participant flow and Appendix Figure A1
Cookies used: Indicate whether cookies were used to assign a unique user identifier to each client computer. If so, mention the page on which the cookie was set and read, and how long the cookie was valid. Were duplicate entries avoided by preventing users access to the survey twice, or were duplicate database entries having the same user ID eliminated before analysis? In the latter case, which entries were kept for analysis (eg, the first entry or the most recent)?

Not applicable
Log file analysis Indicate whether other techniques to analyze the log file for identification of multiple entries were used. If so, please describe.

Registration
In "closed" (non-open) surveys, users need to login first and it is easier to prevent duplicate entries from the same user. Describe how this was done. For example, was the survey never displayed a second time once the user had filled it in, or was the username stored together with the survey results and later eliminated? If the latter, which entries were kept for analysis (eg, the first entry or the most recent)?

Discussion -Strengths and limitations. Page 20/21
Handling of incomplete questionnaires Were only completed questionnaires analyzed? Were questionnaires which terminated early (where, for example, users did not go through all questionnaire pages) also analyzed? No. Results section reflects this.
Questionnaires submitted with an atypical timestamp Some investigators may measure the time people needed to fill in a questionnaire and exclude questionnaires that were submitted too soon. Specify the timeframe that was used as a cut-off point, and describe how this point was determined.

Not applicable
Statistical correction Indicate whether any methods such as weighting of items or propensity scores have been used to adjust for the non-representative sample; if so, please describe the methods.