
Evaluating follow-up and complexity in cancer clinical trials (EFACCT): an eDelphi study of research professionals’ perspectives
  1. Helene Markham Jones1,2,
  2. Ffion Curtis1,
  3. Graham Law3,
  4. Christopher Bridle4,
  5. Dorothy Boyle5,
  6. Tanweer Ahmed2
  1. 1 Lincoln Institute for Health, University of Lincoln, Lincoln, UK
  2. 2 Lincolnshire Clinical Research Facility, United Lincolnshire Hospitals NHS Trust, Lincoln, UK
  3. 3 Community Health Research Unit, University of Lincoln, Lincoln, UK
  4. 4 School of Psychology, University of Bedfordshire, Luton, UK
  5. 5 South East Scottish Cancer Research Network (SESCRN), NHS Lothian, Edinburgh, UK
  1. Correspondence to Ms Helene Markham Jones; hMarkhamJones@lincoln.ac.uk

Abstract

Objectives To evaluate patient follow-up and complexity in cancer clinical trial delivery, using consensus methods to: (1) identify research professionals’ priorities, (2) understand localised challenges, (3) define study complexity and workloads supporting the development of a trial rating and complexity assessment tool (TRACAT).

Design A classic eDelphi completed in three rounds, conducted as the launch study to a multiphase national project (evaluating follow-up and complexity in cancer clinical trials).

Setting Multicentre online survey involving professionals at National Health Service secondary care hospital sites in Scotland and England varied in scale, geographical location and patient populations.

Participants Principal investigators at 13 hospitals across nine clinical research networks recruited 33 participants using pre-defined eligibility criteria to form a multidisciplinary panel.

Main outcome measures Statements achieving a consensus level of 70% on a 7-point Likert-type scale and ranked trial rating indicators (TRIs) developed by research professionals.

Results The panel developed 75 consensus statements illustrating factors contributing to complexity, follow-up intensity and operational performance in trial delivery, and specified 14 ranked TRIs. Seven open questions in the first qualitative round generated 531 individual statements. Iterative survey rounds achieved return rates of 82%, 82% and 93%.

Conclusions Clinical trials operate within a dynamic, complex healthcare and innovation system where rapid scientific advances present opportunities and challenges for delivery organisations and professionals. Panellists highlighted cultural and organisational factors limiting the profession’s potential to support growing trial complexity and patient follow-up. Enhanced communication, interoperability, funding and capacity have emerged as key priorities. Future operational models should test dialectic Singerian-based approaches respecting open dialogue and shared values. Research capacity building should prioritise innovative, collaborative approaches embedding validated review and evaluation models to understand changing operational needs and challenges. TRACAT provides a mechanism for continual knowledge assimilation to improve decision-making.

  • cancer research
  • follow-up
  • Delphi methods
  • protocol complexity
  • workforce planning
  • Singerian Inquiry

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • The multimodal study design developed consensus-defined trial rating and complexity indicators to support objective analysis of cancer research delivery adaptable to operational evaluation in other therapeutic areas and global settings.

  • Qualitative aspects provide in-depth contextual evidence through the ‘voices’ of patient-facing professionals, articulating human and social aspects of research.

  • This study is the first, to our knowledge, to present a Delphi methodology adopting a Singerian approach involving research professionals, in a consensus process which is holistic and dialectical.

  • The study involved key stakeholders from a wide geographic base reflecting a heterogeneous sample of clinical trial professionals.

  • Participants were limited to research professionals delivering studies at National Health Service sites in Scotland and England. Future research is planned involving a wider demographic to include sponsors, funders, networks and policymakers.

Introduction

Clinical trial delivery in hospital settings is crucial in advancing cancer care and treatment options with evidence indicating sustained commitment to research enhances performance and patient outcomes.1 Cancer research has evolved rapidly in recent years, with innovations in immunotherapy and precision medicine increasingly prioritised in healthcare policy. The National Health Service (NHS) has published ambitions to accelerate innovation, outlining a framework for rapid adoption of next generation treatments offering personalised, stratified care and follow-up models.2 3

The ability to translate scientific, laboratory advances in cancer research into clinical and patient benefit through clinical trials is a critical requirement for healthcare providers, as cancer incidence and patient populations continue to grow.4

Realising these translational benefits is increasingly challenging for sites as cancer clinical trial complexity grows,5 with niche designs and stratified treatments affecting research delivery costs and resources. Cancer research is an interdisciplinary enterprise advancing patient care and therapeutic benefits through a collaborative research pathway involving scientific, translational and clinical research trials. As trials evolve to study rare diseases, wide-ranging cancers and molecular sub-types, delivery complexity and workloads grow in tandem. Intricate protocols, narrow selection criteria, high data demands and extended safety, efficacy and outcome monitoring6 7 are stretching staff and site capabilities.

A predicted 70% increase in cancer incidence8 within 20 years combined with improving survival rates, follow-up demands and funding pressures necessitates operational review of trial designs and implementation frameworks to articulate impacts on sites, patients and professionals. Systematic, structured evaluation of research delivery in secondary care (hospital) settings is limited with minimal, current empirical study of trial complexities and follow-up impacts, workloads, institutional dynamics or operational processes across complex healthcare institutions such as the NHS. In-depth review is a paramount priority for the healthcare industry to comprehend variables contributing to service pressures, identify changing stakeholder needs and facilitate evidence-based commissioning of services through appropriately aligned funding and support models.

Delivering research in the era of precision medicine is intense and complex, a clinical reality strongly evidenced in international literature.9 Analysis of operational delivery involving key delivery stakeholders has predominantly operated at regional levels, limiting global relevance and has not yet led to transformative models.10 Lyddiard et al 11 undertook a UK collaborative study to develop a workload measurement tool but excluded investigator and pharmacist roles, anticipating challenges in collating accurate workload data. Further research recommended qualitative evaluation of workload and complexity alongside development of trial rating models using experts whose advice is ‘fundamental to the weighting and scoring’.12 However, within healthcare applications and systems development there is a persistent lack of dialogue with ‘users and implementers of technology for data capture’.13 Operational evaluation including assessment of technologies, training solutions, capacity planning and research delivery models should involve subject-matter experts capable of providing grounded knowledge and insight. The significant complexity gap and incremental patient follow-up activity requires external recognition. Currently there is no national analysis of follow-up or protocol complexity workloads to understand fluctuating operational and resource demands at local, regional and national levels. Systematic rating of trial attributes in real time and over study lifetimes will create longitudinal data sets enabling evidence-based cost attribution and funding decisions to enhance research capacity and productivity. The extant literature underlines a need for broad, cyclical and continual analysis of research advancements and disease burdens to anticipate future demands for resources, as well as facilitating sustainable growth, productivity and improvements in patient care.

Enabling research growth necessitates structured workforce planning; yet there is poor application of this crucial management function across the NHS.14 To build capacity, manage increasingly complex trials and support patient-centred care, research organisations, funders and policymakers need to evaluate current delivery and performance management models, seek interdisciplinary stakeholder feedback and consider adopting creative, design-thinking approaches with reflective and critical capabilities.15 Research into Singerian organisational models has shown that holistic and dialectic approaches to understanding context-related challenges support process improvement and knowledge generation. Organisations cultivating positive communication with well-integrated systems are associated with improved performance and healthcare outcomes.16 Holistic, collaborative team environments promote valued attributes of respect, creativity and knowledge sharing.17

Aims

Cancer research forms part of a complex collaboration between scientists, clinical research professionals and patients. Evaluating patient follow-up in cancer clinical trials, and the nature of complexity in its many forms, requires an understanding of the experiences and challenges of research professionals implementing and delivering cancer clinical trials in hospital settings. In this study we aimed to contribute to existing knowledge of translational cancer research, to support acceleration of laboratory advances for patient benefit, by engaging research professionals in a democratic, systemic evaluation of cancer clinical trial research delivery. We sought multidisciplinary perspectives to: (1) identify research professionals’ priorities, (2) understand localised challenges, (3) define study complexities and workloads supporting the development of a trial rating and complexity assessment tool (TRACAT). This study adopted a holistic, consensus-based design engaging patient-facing clinical trial professionals in developing grounded, contextual knowledge of trial implementation and providing end-user input into the development of TRACAT, which will function as an operational decision-support tool, as well as highlighting views, perceptions and priorities for their professional field.

Methods

Study design and approach

To facilitate a detailed systems evaluation sensitive to the multi-faceted nature of cancer research delivery a multimodal study was developed. The design reflects the Singerian-Churchmanian model of inquiring systems (SCIS) valuing ethics and community knowledge in complexity evaluation and decision-making.18 The adopted design combining the Delphi technique with a Singerian approach followed an initial scoping review covering subject, policy and methodological literature. The review identified key challenges for the profession directing the overall research and initial survey design. A democratic approach was needed recognising multiple perspectives combined with individual knowledge and experience, to form a comprehensive understanding of the complexities of the systems and networks in which they operate through a dialectical group consensus process, a Singerian Delphi. SCIS provide a framework and meta-method approach to generating actionable knowledge, capable of addressing wicked, complex problems and ‘sensemaking in complex, multifaceted, subjective’19 contexts.

Delphi technique

The Delphi technique is widely used in healthcare to gain insight from front-line experts knowledgeable within specific fields.20 It provides practical applications in consensus development, prioritisation, forecasting, policy development and investigation of multi-faceted issues.20–22 We adopted the method to elicit expert opinion in developing a comprehensive rubric of research delivery variables and in the analysis of complex problems within a group.23 Healthcare and research delivery operate within complex adaptive systems with diverse and multifarious units, processes and interactions. Analysis of complexity concepts provides an explanatory, sensemaking device to interpret ‘phenomena in diverse applications’24 which are dynamic, emergent and entwined. The professionals recruited to the panel performed an ethical role, as their observations and engagement in identifying trial-rating attributes contribute to designing an evaluation tool for operational decision-making and strategic planning. The design of technical applications or models for strategic evaluation or decision support and inclusion criteria for measurement or quantitative judgements should be based on input from ‘experts’ in the field (patients and professionals), the users and benefactors of ‘human-centred automation’.13 17 23 For this reason, the research commences with a Delphi designed from a Singerian inquiring system perspective, drawing ethics and heuristics into the development of an information system and model.25 This Singerian-oriented Delphi aimed to incorporate diverse knowledge, experience and ideologies of multiple stakeholders, disciplines and personality types26 to form a prismatic view of cancer research delivery sensitive to its evolving, multi-faceted and complex nature.27

Sampling procedure

A purposive selection process recruited NHS secondary care (hospital) sites from a wide geographic base in the UK. This supported formation of an ‘expert’ panel of professionals, knowledgeable in delivering research at teaching, acute or district general hospitals providing services to rural and metropolitan patient populations. Site characteristic diversity, based on scale and nature of operations and patient populations, aimed for a heterogeneous sample minimising bias and facilitating expression of a range of perspectives. To achieve a target sample (n=20) researchers planned to recruit between 22 and 30 participants. While this is a relatively small sample size, the critical factor in selecting a Delphi sample is the knowledge and expertise of participants in relation to the research. The interdisciplinary nature of research and delivery roles required a range of professionals to form an expert panel. A smaller sample size is effective when panellists are similarly knowledgeable and expert in the field of study.28

Recruitment procedure

Principal investigators at sites approached potential participants based on their knowledge and experience within cancer research delivery. Pre-defined eligibility criteria stipulated that professionals should have 18 months’ experience in a secondary care setting within a research delivery or support role, held currently or within the past 18 months.

Materials and survey design

The three-round eDelphi took place online between January and August 2018 using Qualtrics software. Participant information sheets described the iterative process, commencing with open questions in round 1 and moving to structured questions in subsequent rounds. The anonymised design meant participants’ identities were unknown to other panellists, a key benefit of the technique.29 Anonymity facilitates free and open expression of individuals, removing the potential for domination by senior or influential colleagues, which may lead to bias as participants submit to peer pressure within an open group.30 References to roles within individual textual responses were removed, protecting participants’ anonymity and preventing role seniority from influencing consensus development. Consenting participants received an invitation and link to the online questionnaire. Detailed instructions guided panellists throughout, with individual feedback provided between rounds. Experts were encouraged to complete surveys as fully as possible to facilitate comprehension of perspectives, priorities and levels of consensus and support reliability of results. Optional free-text comments at the end of each question section and survey encouraged dialogue, reflection and refinement of observations. The roles of participants and their ethical contribution were detailed in the study information sheets and documents provided to participants who consented to join the ‘expert panel’.

First round survey

Panellists provided their definitions, perceptions and suggestions to seven open questions shown in table 1. The broad nature of questions aimed to generate rich responses iteratively testing inter-connection of phenomena between categories. Individual responses were analysed in NVivo with responses coded thematically. Similar themes were condensed into the initial 201 group statements with care taken to retain as much of participants’ intended meaning as possible. Participants were advised that themes suggested by the panel would be developed as trial rating indicators (TRIs) as part of the TRACAT tool to support workforce and capacity planning.

Table 1

First round open questions

Second round survey

Panel-developed statements were circulated alongside a 7-point Likert-type scale ranging from strongly disagree (1) to strongly agree (7), for participants to confirm their level of agreement to question category statements from 1 to 7. A new survey section (question 8) asked panellists to rank TRACAT categories from lowest priority (1) to highest priority (7) as factors to include as TRIs and complexity indicators. To form the initial TRI categories first round responses were coded in NVivo and ranked by frequency of themes.
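
For illustration only, the brief sketch below (Python; the study itself used NVivo) shows how coded first-round themes could be ranked by frequency to seed the initial TRI categories in question 8. The theme labels and counts are hypothetical.

    # Illustrative sketch only: ranking first-round theme codes by frequency.
    # Theme labels and counts are hypothetical; the study derived these in NVivo.
    from collections import Counter

    # One entry per coded participant statement (hypothetical export of round 1 codes)
    coded_statements = (
        ["protocol procedures"] * 42
        + ["patient follow-up burden"] * 37
        + ["data and query workload"] * 29
        + ["pharmacy and laboratory demands"] * 21
        + ["staffing and training"] * 18
        + ["sponsor communication"] * 12
    )

    # Quantify themes by frequency and list them from most to least common
    for rank, (theme, freq) in enumerate(Counter(coded_statements).most_common(), start=1):
        print(f"{rank}. {theme} ({freq} coded statements)")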

Third round survey

Panellists received the previous round’s results showing the percentage level of agreement and median response to each statement alongside their own selection. Panellists were asked to review initial responses in light of levels of agreement and either revise or leave their original selection unchanged, following reflection on wider perspectives. Participants were encouraged to comment on reasoning for changing responses by more than two scale points away from consensus, or their original selection. Final round panellists received a summary report of consensus statements and ranked TRACAT categories.

Data analysis

The qualitative data from the open round were content analysed and coded thematically in NVivo using a framework approach to create the initial complexity categories in question 8. The statements relative to each individual question category are shown in table 1. A second stage of hand coding to validate the initial analysis was performed. Quantitative analysis of the second and third round Likert-type scale responses was performed using SPSS V.22.0. Summary statistics reported to panellists described frequency of responses to statements (percentage level) and the median (measure of central tendency). In addition the IQR was used as a measure of dispersion in analysing stability of responses and move towards consensus in order to decide on the final survey iteration.
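
The quantitative analysis itself was performed in SPSS V.22.0; purely as an illustration of the summary statistics reported back to panellists, the sketch below (Python, with hypothetical ratings) derives the percentage of panellists at each scale point, the median and the IQR for a single statement.

    # Illustrative only (the study used SPSS V.22.0): summary statistics reported
    # to panellists for one statement's 7-point Likert-type responses.
    from collections import Counter
    from statistics import median, quantiles

    def statement_summary(responses):
        """Percentage of panellists per rating, plus median and IQR."""
        n = len(responses)
        percent_by_rating = {r: round(100 * c / n, 1) for r, c in sorted(Counter(responses).items())}
        q1, _, q3 = quantiles(responses, n=4)  # quartiles for the IQR dispersion measure
        return {"percent_by_rating": percent_by_rating,
                "median": median(responses),
                "iqr": q3 - q1}

    # Hypothetical ratings from a 27-member panel for a single statement
    example = [7, 7, 6, 7, 5, 7, 7, 6, 7, 7, 4, 6, 7, 7, 7, 6, 7, 7, 5, 7, 7, 6, 7, 7, 7, 6, 7]
    print(statement_summary(example))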

Consensus level and validity

Consensus was defined as 70% of panellists rating a statement the same on the 7-point Likert-type scale, a recognised level of agreement.31 Instructions advised participants that a convergence of opinion and the agreed consensus measure would determine the stopping point for the study. Items achieving frequency consensus and median strength of agreement contribute to future questionnaire and interview designs.
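
As a minimal sketch of the stated rule (assuming ‘rating a statement the same’ refers to the modal rating; the cross-round IQR comparison is an illustrative stability check, not a published threshold), the following shows how the 70% criterion and movement towards consensus could be assessed for one statement.

    # Illustrative only: checking the stated 70% consensus rule and, as an assumed
    # stability check, whether dispersion (IQR) narrows between rounds.
    from collections import Counter
    from statistics import quantiles

    def reaches_consensus(responses, threshold=0.70):
        """True if at least `threshold` of panellists gave the same rating."""
        modal_count = Counter(responses).most_common(1)[0][1]
        return modal_count / len(responses) >= threshold

    def iqr(responses):
        q1, _, q3 = quantiles(responses, n=4)
        return q3 - q1

    # Hypothetical round 2 and round 3 ratings for the same statement
    round2 = [7, 7, 6, 7, 7, 5, 7, 6, 7, 7, 4, 7, 6, 7, 7, 6, 7, 5, 7, 7, 6, 7, 7, 7, 6, 7, 7]
    round3 = [7, 7, 7, 7, 7, 6, 7, 7, 7, 7, 7, 7, 6, 7, 7, 7, 7, 7, 7, 7, 7, 6, 7, 7, 7]
    print(reaches_consensus(round3), iqr(round2) - iqr(round3))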

Patient and public involvement

A patient advisory group reviewed the study design prior to submission to the HRA (Health Research Authority) and ethics committees, with revisions made following their recommendations. Panellists received a final consensus report and other stakeholders had the option to receive results by a preferred method of print, email, Qualtrics or the evaluating follow-up and complexity in cancer clinical trials (EFACCT) website: https://efacct.com/.

Results

The target sample (n=20) was exceeded, with 33 professionals from 13 hospitals and nine local research networks consenting to join the expert multidisciplinary panel. Forty-four potential participants were approached, with 11 professionals declining due to limited capacity or availability to complete the surveys. The summary demographics and return rates are shown in table 2. Twenty-five research professionals completed the three-round process, an increase of 25% on the initial planned sample, compensating for a 24% participant dropout rate. Regular communication with panel members encouraged retention, but the robust return rates and continued commitment may also reflect the study’s importance in providing a platform to elucidate role-specific experiences and challenges. The number of panel statements generated in the opening round within each question category is detailed in table 3 alongside the percentage of statements achieving consensus by each category and round.

Table 2

Participant demographics and response rates by round

Table 3

Consensus statements by question category and round

Round 1 survey results

Round 1 achieved a return rate of 81.82% with 27 participants completing the initial qualitative survey and demographic information. Open question responses were comprehensive leading to the generation of 531 individual statements, analysed and condensed into 201 group statements.

Round 2 survey results

Round 2 achieved the same return rate (81.82%), with 15 statements reaching consensus (7.46% of total statements). One participant joined the panel for the quantitative survey rounds; this participant had the option to provide individual feedback through free-text comments, in line with all other panellists.

Round 3 survey results

Twenty-five panellists returned the final survey, a return rate of 92.59%. This round included 13 additional statements generated from free-text responses. Table 3 details the 75 statements reaching consensus. In addition, 14 TRIs were identified, with four achieving a median rating of 7 (highest priority) and the remaining items rated as 6 or 6.5. Non-responders to round 2 were not included in the third circulation. Based on the group’s move towards consensus, the third survey formed the final round.

Summary of panel responses and discourse

The results provide detailed insights into factors contributing to complexity, follow-up intensity and resource impacts for sites. The researchers chose to retain the broad nature of participant statements following data collection in the initial qualitative open round. As a criterion of the Singerian Delphi, professional panellists needed to witness the diversity, depth and richness of colleague responses, and the complexity of problems in social settings. Retaining detailed statements preserved the full nature of participants’ sentiments, allowing the Delphi panel the opportunity to reflect on broader perspectives, concepts and nuances of meaning. Characterising a Singerian inquiring approach, the Delphi study served as a process for adding to ‘substantive knowledge’ and ‘participants’ knowledge of themselves’ through a group reflective process.23 Participant feedback was encouraged throughout, supporting the concept of the Delphi as a self-reflective and collective decision-making process, whereby there is a move towards consensus, or a participant’s conscious informed choice to revise their opinion or personal philosophy based on wider perspectives of peer group experiences. Panellists described changes in their perspectives stemming from a new understanding of ‘how things may be’ in different contexts or ‘in light of more recent experiences and discussion’. Other feedback illustrated the effect of changing circumstances and experiences on perceptions and sensitivities during the course of the study, leading to reflection and adjustment of initial views and recognition of the subjective nature of issues. Statements achieving the highest levels of agreement are detailed under each question category. Online supplement 1 presents the full list of panel consensus statements.

Supplemental material

Follow-up definition

Participants provided personal definitions of ‘follow-up’ in relation to cancer clinical trial delivery. Responses highlighted diverse interpretations with 56% of panellists defining follow-up as activities relating to any or multiple protocol stages (including active and post-treatment phases) while 44% identified follow-up as occurring solely post-active treatment.

Panellists confirmed their level of agreement to summarised definitions of follow-up created from individual interpretations to form three core categories: (1) any trial stage, (2) multiple stages, (3) post-active treatment. An additional question in round 2 asked panellists to consider the need for a nationally agreed definition supporting research delivery. Panel-developed definitions did not reach consensus but 92% of professionals strongly agreed on a need for a nationally agreed definition of the term and its sub-types (table 4).

Table 4

Q1 Follow-up definition consensus statement

Barriers and burdens

In round 1 the panel described phenomena encountered in their roles within research and elements perceived as barriers or burdens to effective practice. This category reached high levels of agreement with 21 statements achieving consensus, the highest of which called for an ‘effective and consistently validated funding and support model’, recognising increased levels of complexity within cancer clinical trials and associated workloads. Panellists agreed strongly (92% consensus) that the funding of research delivery does not ‘accurately reflect the requirements, time and effort of sites’ representing a risk for NHS organisations in delivering effective research with inadequate resources and staffing levels (table 5).

Table 5

Q2 Barriers and burdens—top consensus statements

Analysis of complexity

The highest level of consensus within the study was reached in this category, with 96% of professionals strongly agreeing that growing protocol burden adds to operational complexity (table 6). Ten statements in this domain reached consensus, 60% of which had a consensus level of over 80%. A further 11 statements in this group fell within 10% of the consensus threshold, with agreement levels above 60% between panellists.

Table 6

Q3 Analysis of complexity—top consensus statements

Factors affecting capacity

In round 1 the panel described factors affecting their capacity to support and deliver cancer trials. Nine statements reached consensus with the highest item level of agreement (88%) alluding to organisational inadequacies in communication, collaboration and integration across services, impeding the effectiveness of trial delivery (table 7).

Table 7

Q4 Factors affecting capacity—top consensus statements

Strategic priorities

The largest number of consensus statements by category related to strategic priorities with 23 items reaching an agreement level of 76% or higher. Five statements shared panel consensus of 88% in terms of their priority for research delivery, four of which related to social aspects of operations: cognition, collaboration and communication (table 8).

Table 8

Q5 Top strategic priorities—top consensus statements

Effective research practice

Panellists provided views on existing elements of cancer clinical research practice in the NHS they felt contributed to or demonstrated efficient trial delivery and practice. Statements achieving consensus and a median response of strongly agree in this category related to human-centred elements of research delivery with seven statements reaching 80% agreement levels or above (table 9).

Table 9

Q6 Effective research practice—top consensus statements

Additional Delphi considerations

A final broad category provided participants the opportunity to suggest additional items for panel consideration. Statements related to existing categories were incorporated there, while themes that were new, unique or spanned multiple areas were presented in section 7. Free-text responses provided by panellists generated 23 statements, with one achieving consensus (table 10).

Table 10

Q7 Additional Delphi considerations—consensus statements

TRACAT—trial rating and complexity assessment tool

First round statements were coded thematically within NVivo, creating a matrix of codes which were quantified by frequency of themes to form the initial trial complexity analytical categories of question 8. The 14 TRIs (complexity scoring statements) were prioritised by panellists from lowest priority (1) to highest priority (7). The panel ranking of TRIs, which will be used to develop the TRACAT tool, is detailed in table 11.

Table 11

Trial Rating Indicators (TRIs) priority rankings

Discussion

Overview of main findings

The Delphi’s primary aim was to evaluate cancer clinical research delivery with a focus on patient follow-up and complexity from a multidisciplinary perspective. The study provides in-depth insights of professionals working at the forefront of cancer clinical trial delivery, identifying priorities, concerns and indicators of research complexities. Consensus and priority factors developed by expert panellists illustrate tensions and pressures within the profession. The main findings are discussed in relation to the key objectives across the eight inter-related survey categories with cross-over themes.

Evaluating follow-up and complexity

Follow-up definition: Patient follow-up in cancer clinical trials is a key factor affecting capacity to deliver research, requiring an ostensive definition to ensure support models for its effective management develop from a clarified and equitable stance. The meaning participants attached to follow-up varied significantly which has implications for operational review. Implementation of a funding model acknowledging resource implications in patient follow-up management reached consensus as a strategic priority. Panellists strongly agreed that managing follow-up was a key factor affecting capacity, calling for recognition of the challenges faced and intimating the National Institute for Health Research (NIHR) recruitment focused delivery model does not support follow-up. The group expressed a view that follow-up data are essential to successful trial outcomes but felt under pressure to open new studies to gain accruals, with a detrimental effect on their ability to support existing patients.

Barriers and burdens: A common thread running through statements on barriers and burdens within research was an expression of sites being under pressure, with perceptions of high expectations and demands placed on staff while faced with reduced resources. Communication issues, both internally and externally, were a common theme and perceived as a barrier to effective research. Concerns also related to sponsor documentation and inadequacy of information to accurately assess capacity and capability, or determine the full impact of delivering a study, in terms of its associated workloads and administrative burden. High levels of agreement between panellists indicated a sense of feeling unsupported, with principal investigator oversight and involvement perceived as lacking at times; panellists recommended a clear understanding of roles and responsibilities and accurate assessment of workloads.

Analysis of complexity: In addition to incremental interventions, tests and procedures within evolving study designs, the panel highlighted factors relating to the nature of cancer as a complex disease. Wide-ranging sub-types and niche patient populations combined with variations in health status and support needs of patients add to research complexity. While trial phase is a recognised contributor to complexity, participants frequently cited short timelines and visit windows for protocol procedures as being problematic, particularly in terms of aligning sponsor requirements to site capacity, treatment pathways and the coordination of procedures, multidisciplinary teams and support services.

Factors affecting capacity: Strong consensus existed between research professionals with regard to capacity factors. Inadequacies in staffing levels, funding, resources and facilities featured alongside constraints relating to overly complicated protocols designed without due consideration for practicalities of research delivery. Frequent amendments to trials also affected ongoing capacity reflecting uncertainty within research delivery which cannot always be predicted at site feasibility.

Strategic priorities: Participants strongly agreed on strategic priorities relating to culture, education and collaborative relationships, all social aspects of research delivery. A patient-focussed priority reached an 88% consensus on the requirement to develop biomarkers for prediction of suitability and response to treatment and early diagnosis. The panel came to the same level of consensus in respect of national and organisational recognition of the challenges faced by professionals and sites. A group perspective illustrated the need for local and national leaders to develop greater understanding of the ‘constraints, resource and capacity issues and the priorities for research delivery and funding in the NHS’. The high levels of consensus relating to environment, culture, education, resources and investment delineate the needs of a profession within an evolving healthcare system, providing a strong focus for the NIHR and policymakers and impetus for further dialogue and review.

Effective research practice: Themes of open communication, staff commitment and dedication, well-trained and informed staff and strong collaborative teamwork all achieved high levels of consensus between the Delphi panellists. These skill sets within the profession allow sites and research staff to share best practices, retain staff and contribute to efficient trial delivery despite current challenges and resource limitations.

Additional Delphi considerations: The one statement achieving consensus in this category called for appropriate follow-up funding to support the primary endpoints of clinical trials.

TRACAT: A key outcome of the study is the ranking of TRIs to develop TRACAT, a system-based tool facilitating the accurate mapping and monitoring of factors determining study intensity, workload and resource impact on trial centres. The trial complexity rating will be applied to studies to support sites in feasibility assessment and map any changes to workloads or complexity during study life-cycles. Key stakeholder knowledge is vital in developing operational evaluation models and panellists had an important study role in prioritising and ranking TRIs and recommending additional factors for consideration. Through the assignment of a trial rating and complexity score linked to monitoring of interventions, visits, follow-up and patient volumes TRACAT provides workload and capacity analysis at individual, site, regional and national levels. The aim is to create an objective trial rating and portfolio management tool capable of integrating with existing data systems, to monitor real-time activity linked to complexity, increasing the value and structure of data for strategic and operational decision-making. Enhanced knowledge of trial complexity and acuity will support forecasting and capacity planning to optimise resource allocation in line with research objectives and patient needs.
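
The scoring mechanics of TRACAT are not specified here; purely as a hypothetical illustration of how panel-ranked TRIs could weight a trial-level complexity score at feasibility and over a study life-cycle, a short sketch follows. All indicator names, weights and scores below are invented, not taken from the published tool.

    # Hypothetical illustration only: a TRACAT-style complexity score in which the
    # panel's priority rankings act as weights for per-indicator assessments.
    # Indicator names, weights and the example assessment are all invented.
    TRI_WEIGHTS = {                              # priority ranking used as weight (1-7)
        "protocol procedures and interventions": 7,
        "follow-up duration and intensity": 7,
        "data collection and query burden": 6.5,
        "pharmacy and laboratory demands": 6,
    }

    def complexity_score(assessment, weights=TRI_WEIGHTS):
        """Weighted sum of indicator scores (each scored 1-7 at feasibility),
        normalised to 0-100 so trials can be compared and re-scored over time."""
        raw = sum(weights[i] * score for i, score in assessment.items())
        return round(100 * raw / (7 * sum(weights.values())), 1)

    # Example feasibility assessment for one hypothetical trial
    trial = {
        "protocol procedures and interventions": 6,
        "follow-up duration and intensity": 7,
        "data collection and query burden": 4,
        "pharmacy and laboratory demands": 3,
    }
    print(complexity_score(trial))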

Strategic opportunities for clinical research delivery: The study identified shortfalls at local and national levels, relating to effective communication and shared comprehension of needs and priorities for research, which provide an immediate opportunity for service improvements through better engagement across networks, organisations and disciplines. Strategic opportunities exist for trusts, local research networks, the NIHR and NHS to work collaboratively to develop specialist services and support models, built on shared understanding and structured operational evaluation, to increase patient ‘accessibility, choice and participation in clinical trials’. To improve research quality and safety it is essential healthcare providers promote open and honest cultures focusing on improvement.32 Professionals and organisations alike need to embrace dialectic approaches where mutual respect, innovation and communication can thrive. Iterative dialogue with research professionals to understand critical values and perceptions, relevant to local contexts, is vital in identifying effective strategic models and measures to improve operational delivery.33 There is no national workforce planning for research delivery and NHS global activities for workforce modelling are fragmented.34 As research advances and organisations grow, they face increasing challenges and complexities. Dynamic, fluctuating and evolving environments call for greater understanding of context-specific challenges. This study highlights the current realities of research delivery, emphasising the importance of dialogue and shared decision-making in developing effective strategies and common goals, respecting mutual understanding.

Evaluating research delivery and performance: Analysing and measuring performance and quality in evolving professions and organisations is challenging. Richardson et al 17 argue that an organisation’s measurement of information decreases in value as it grows and faces greater complexity. Evaluation of operational performance and monitoring of success need not only to take into account objective measures but also to understand and value qualitative evidence indicating progress or success, especially where complexity of operational elements is a dominant characteristic. Regular evaluative research of the state and nature of the clinical research delivery industry in the UK should be an ethical requirement of the NHS, NIHR and their partners. There is a moral obligation for researchers to ensure that the work they undertake and the resource allocated to perform these activities provide value, efficiency in service and participant benefit.

Singerian inquiry in operational review: An effective evaluation of trial delivery requires a systems approach engaging multidisciplinary professionals from a wide range of geographical locations, networks and trusts in a collective critique covering multiple realms. Collaborative research cultures supporting enhanced data structuring and synthesis can ‘significantly shorten the time gap between clinical research results to better clinical care decisions’.35 The nuances and complexities of cancer research delivery necessitated a study design involving a critical analysis of strategies, processes and technologies through a collation and synthesis of prismatic perspectives and experiential data. This study supports a systems-based approach to developing effective research capacity planning and performs an ethical role in the review of current NHS research delivery with the intent of improving performance and patient experience. An adaptive NHS research delivery framework capable of analysing and monitoring research capacity and operational models in real-time and over time would enhance knowledge and support strategic planning. This study contributes in-depth qualitative review into operational aspects of clinical trials by engaging key stakeholders in defining variables relating to service pressures as well as highlighting best practices.

Relation to existing research: Our findings support the existing body of research documenting increasing pressures on sites linked to protocol complexity. Growing patient populations, bespoke therapies and extended follow-up pose challenges for existing NHS strategies, with resources and research professionals under increasing pressure. The ability to grow research capacity is limited in systems where performance measures do not adequately assess complexity and context or support ‘tailored research capacity-building interventions’.33 Clinical research operational delivery exists within a complex adaptive system faced with growing challenges, one that Britnall argues ‘requires us to think, work and collaborate in different ways’.34 Outdated, hierarchical management styles36 and cognitive dissonance are fuelling a healthcare staffing crisis and stifling innovation through their alienation of experienced, knowledgeable and creative professionals. Britnall discusses the following four key domains where improvement and investment enhance productivity: workforce health and well-being, skills development, technological efficiencies and effective innovation.34 Findings of our study reinforce the need for strategic focus in these domains.

Strengths and limitations

A strength of the study is the holistic, dialectical, consensus-based design which is, as far as we are aware, the first use of a Singerian Delphi in cancer research evaluation. Qualitative aspects of the design provided in-depth grounded knowledge through the ‘voices’ of clinical trial professionals, articulating human and social aspects of research delivery. The study also developed consensus-defined TRIs and complexity indicators to support objective analysis of cancer research delivery, adaptable to other therapeutic areas and global settings.

Given the exploratory nature of the study in developing a Singerian-focused qualitative Delphi, the resulting data sets were lengthy and expressive. The causal relationships within the data sets were not fully analysed during the implementation of the Delphi study. The EFACCT Delphi findings contribute to the development of grounded theory as part of a wider national project being conducted by the research team. This democratic study developed new knowledge in defining areas of importance to research delivery stakeholders and forms part of an iterative research programme to evaluate and support operational delivery, focusing on follow-up and complexity.

Participants were limited to patient-facing professionals delivering studies at NHS sites in Scotland and England and did not include representatives from the Clinical Research Network. The results reflect the perspectives of professionals conducting the delivery elements of cancer research at trial sites. This does provide a strong understanding of the priorities in a clinical setting but enhanced knowledge covering the full gamut of roles within the industry is required. This Delphi forms part of a programme of study with future research planned involving a wider demographic to include sponsors, funders, networks and policymakers.

Implications for practice

The results point to operational fragmentation and organisational disconnect with conflicting priorities limiting the ability of the profession to manage growing complexities and pressures. The evidence suggests that the current operating model is not sustainable for NHS sites. Statements achieving the highest level of consensus between Delphi panellists outlined growing protocol and procedural burden, calling on the NIHR to acknowledge increased complexities in cancer clinical trials and associated pressures for sites. High levels of consensus relating to operational challenges in research are relevant to wider global settings and the concepts should be tested in other therapeutic areas. Additional recommendations included the requirement for a nationally agreed definition of follow-up and an effective, consistently validated funding and support model.

The research design considered the suitability of the Singerian approach within the Delphi method in relation to answering the main research question. A Singerian Delphi can serve multiple purposes and answer complex and broad questions in a single study. Our approach demonstrates a pragmatic application of the Singerian Delphi through an engagement with multiple perspectives to develop collaborative knowledge37 and a recognition of diversity and complexity in understanding separate realities. Retrospectively, based on the resultant data and reflection, the Singerian approach has emerged as a potential theoretical lens to apply in future research investigating operational management within healthcare organisations.

Conclusions

Cancer clinical research delivery forms part of a complex system which is in perpetual flux and ill-suited to linear, determinate operational models and processes. Disease, humans and operational networks, all complex in their own respect, continually transpose, synthesise and evolve, requiring a prismatic perspective and adaptive, systems-thinking approach to comprehend and to design effective, sustainable, human-centred research delivery solutions.

In summary, our findings indicate that in order to support patient access to clinical trials, meet national research ambitions and keep pace with scientific advances in cancer research, a delivery model cognisant of complex and diverse contextual challenges is required. To deliver quality research the holistic needs of patients and professionals alike need supporting. Further research into operational efficacy should consider the testing of dialectic models based on the Singerian approach. While the study applied the Singerian approach as a Delphi methodology, it has emerged as a highly appropriate approach to understand and manage the dynamic and evolving field of cancer clinical research as a whole.

Acknowledgments

The authors wish to acknowledge the invaluable contribution of principal investigators and staff at United Lincolnshire Hospitals Trust, Edinburgh Cancer Centre, Dumfries & Galloway Royal Infirmary, Clatterbridge Cancer Centre, University College London Hospitals NHS Foundation Trust, Royal Devon & Exeter NHS Foundation Trust, Harrogate & District NHS Foundation Trust, Derby Teaching Hospitals NHS Foundation Trust, University Hospitals Coventry & Warwickshire, Aintree University Hospitals NHS Foundation Trust, Lancashire Teaching Hospital NHS Foundation Trust, North Bristol NHS Trust, and Poole Hospital NHS Foundation Trust.

References

Footnotes

  • Contributors HMJ was responsible for the study design, data acquisition and analysis and led on manuscript preparation. FC contributed to manuscript preparation, review and revision. GL provided statistical review. FC, GL, CB and TA were responsible for academic and intellectual review of the study design, protocol and manuscript. DB has provided clinical oversight. All authors have read and reviewed the final manuscript.

  • Funding United Lincolnshire Hospitals NHS Trust (through cancer charitable funds) and the University of Lincoln funded a PhD studentship leading to the study. University of Lincoln as sponsor provided academic and research governance direction.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval The study was approved by the East Midlands—Derby Research Ethics Committee (reference: 17/EM/0292) and the University of Lincoln School of Health and Social Care Ethics Committee. All participants taking part in the Delphi study provided informed consent.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request. Anonymised data will be available on request from the corresponding author.