Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed
  1. Kelsey Margaret Flott,
  2. Chris Graham,
  3. Ara Darzi,
  4. Erik Mayer
  1. Centre for Health Policy, Institute of Global Health Innovation, Imperial College London, London, UK
  1. Correspondence to Kelsey Margaret Flott, Centre for Health Policy, Institute of Global Health Innovation, Imperial College London, St Mary's Campus, Praed Street, London W2 1NY, UK; k.flott14@imperial.ac.uk


Introduction

The resolve to put patients at the heart of the National Health Service (NHS) has been ubiquitous in the aftermath of the Francis Report, and the policy agenda is beginning to reflect attempts to deliver that promise. The introduction of new care models at NHS ‘vanguard’ sites, the 3-year target to give all patients access to their electronic care records, and the expansion of integrated care services all exemplify the salience of patient-centricity at the national level.1 This pattern has been witnessed across many developed health systems.2

The ideals of this paradigm have also captured the attention of local commissioners and providers, offering an evolved concept of service design that resonates with patients' needs. As a result, providers are increasingly turning to patient-reported feedback to drive local improvement. This trend is indicative of progress in the field of patient experience: policy discourse has advanced from being curious about patients' feedback, to actually collecting it, to valuing it as a lever for quality improvement. Furthermore, this trend is not specific to the UK; a large body of work, including cross-national feedback collections, has been conducted across and between health systems around the world.3 However, even with this momentum, much patient-reported feedback remains dormant and underutilised, calling into question its ability to drive change.4,5 Consequently, improvements in patients' experiences over the last decade have largely been limited to transactional aspects of care and driven by top-down national targets; there has been little change in measures that reflect a person-centred approach.6,7

In principle, patients are unique experts in their lived experience of care, and respecting their insights has extraordinary potential to enhance quality. In practice, however, more must be done to ensure that the collection of patient experience data can be translated locally into service improvements. This requires more than refining the survey process; it means building organisations capable of using feedback in a meaningful way and fostering cultures that are receptive to the feedback patients deliver. The patient-reported feedback pathway, from collection to use, would benefit from enhancements to make patients' data more effective for improvement.

A web of challenges

In the NHS, in US hospitals and in many European health systems, patient experience data are plentiful and publicly available.8,9 For example, since 2002 NHS organisations have carried out patient surveys in a consistent way, making it possible to regulate and compare trust-level performance. Since 2013, the Friends and Family Test (FFT), a single question asking how likely patients would be to recommend a service to their friends and family, has required all NHS providers to collect near-real-time patient-reported feedback. Furthermore, a host of organisational, ward-specific and clinician-specific surveys have emerged locally, as well as a surge of online outlets.10 These programmes have been successful in obtaining patient feedback and raising the profile of patient experience. However, a web of challenges still prevents the data from being harnessed for widespread use. While the NHS provides useful examples to illustrate these problems, they appear to exist across many countries where widespread patient-reported feedback is available.

Untangling this web requires evaluation of the research structure behind the collection and use of patient-reported feedback. The theoretical case for using patient-reported feedback is sound. However, on closer examination of the survey methods used, the data generated and the analyses required, many problems emerge that contribute to a disconnect between data collection and use.

Key challenges

▸ Scepticism: Staff may be sceptical of the data provided by feedback collections, especially when they are not trained in the methods used to collect them.

▸ Training: Analysing survey data depends on a skilled understanding of social research methods, which is not always included in training for clinical and managerial staff.

▸ Statistical complexity: Even with training, understanding larger surveys is complex and demands strict, time-consuming attention to detail.

▸ Aggregated data: Patient feedback from many national collections, like the National Patient Survey Programme (NPSP) in England, is presented in provider-level aggregates, which does little to inspire local ownership to drive improvements.

▸ Isolated data: Feedback from national collections, as well as smaller ones, is not often linked to other relevant data sources, so there is no way of understanding the relationship between effectiveness, safety and experience at a patient level.

▸ Contradictory results: The mode in which feedback is collected matters; it changes results drastically such that the same organisation has multiple experience scores and little guidance on how to understand the reasons for differences or which scores to trust.

▸ Technical guidance: The guidance documents that accompany the larger surveys are highly technical and require substantial methodological expertise to be understood as intended.

▸ Disparities in external support: There is no standardised way in which providers receive support to analyse patient-reported feedback and drive improvement from it.

Understanding survey methods

It is not necessarily lack of interest that prevents patient views from influencing frontline care. In fact, evidence from countries as diverse as Denmark, Israel, the UK and the USA suggests that clinicians are generally enthusiastic to learn from patients' experiences but feel ill-equipped to use patient-reported feedback.11 There is also still a tendency among some staff members to be sceptical of survey data, especially when they are not trained in the feedback process, because of perceived faults in the methods used to collect it.12–14 Furthermore, evidence demonstrates that for clinicians and managers to instigate change, they require training in quality improvement methodology, in this case survey methodology.4

Mediating this scepticism and understanding survey results for meaningful use requires substantial methodological expertise. In some cases, where methods are suitably robust, methodological expertise is simply needed to debunk myths. More often, when feedback collections are driven by ad hoc online sources or developed without comprehensive guidance, methodological expertise is needed to enhance processes. Collecting feedback from patients is often imagined to be unchallenging, but methodological considerations—particularly in the design of quantitative surveys—are far more complex than is often appreciated:

  • The collection method should account for sample bias and mode effects.

  • Sample sizes should, where appropriate, be informed by power calculations that take account of likely response rates and ensure results are suitable for generalisation (a minimal worked sketch follows this list).

  • Question topics should be derived from evidence of their importance to patients to ensure external validity.

  • Question text should be tested with the target population to ensure internal validity (often this takes at least three rounds of cognitive testing with around 10 patients each to arrive at a uniformly interpreted question).
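
To make the sample-size consideration above concrete, the sketch below shows one simple way a survey team might work backwards from a desired level of precision and an anticipated response rate to a mail-out size. It uses the standard formula for estimating a single proportion; the function name and the default margin of error, expected proportion and response rate are illustrative assumptions rather than NPSP parameters.

```python
import math

def required_sample(margin_of_error=0.05, expected_proportion=0.5,
                    confidence_z=1.96, expected_response_rate=0.4):
    """Illustrative sketch: how many questionnaires to send so that the
    achieved sample estimates a proportion within the stated margin of
    error. Defaults are assumptions, not NPSP parameters."""
    p = expected_proportion
    # Respondents needed for the desired precision on a single proportion
    respondents = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Inflate by the anticipated response rate to get the mail-out size
    mail_out = respondents / expected_response_rate
    return math.ceil(respondents), math.ceil(mail_out)

needed, to_send = required_sample()
print(f"respondents needed: {needed}, questionnaires to send: {to_send}")
# With these assumed defaults: 385 respondents, so 961 questionnaires sent
```

A real survey would also need to account for design effects, subgroup reporting and non-response bias, which this sketch ignores.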

Improving data utility also requires that research acumen be balanced with practical mindfulness of what types of data are likely to resonate with frontline staff. Feedback collections must balance robust methods against a strategy that will produce timely and easily digestible results. This is an intricate balancing act, especially when conducted under organisational pressures.

Furthermore, it is important to scrutinise the survey processes and ensure that they resonate with patients' preferences for giving feedback. Currently, the NPSP in England, which began in 2002 and includes annual surveys of recent hospital inpatients and a wide range of other types of patients, uses a postal methodology with sample sizes of between 850 and 1250 patients per organisation. In the case of the inpatient survey, this equates to a total national sample size of well over 180 000. The size of the surveys and the use of standardised methods and instruments allow reliable organisation-level results.15 There are also international surveys that have spanned Australia, Canada, Germany, New Zealand, the Netherlands, the UK and the USA, surveying 11 000 patients about their experiences of care via postal questionnaire.9 Such postal questionnaires have been shown to achieve a high response rate, but it might be prudent to consider other delivery modes in light of the growing number of patients engaging with online feedback forums.10 Different types of feedback serve different purposes, but unless feedback is collected in a way that works for patients, the scope of feedback being collected will remain limited.

Data quality and interpretation

Healthcare staff in health systems worldwide have cited a lack of data timeliness and specificity as a barrier to using survey data for service improvement.12,16 The results produced by surveys like those in the NPSP and many other collections can make it difficult to ascertain the most person-centred improvements; reviewed in isolation, they have a limited ability to put patients' needs at the heart of quality improvement.

In terms of results from the NPSP and surveys like it, the primary reason for this is that data are presented in aggregate, provider-level reports that are designed to investigate commonly important issues for patients and providers nationwide, rather than being specifically tailored to the needs of individuals and organisations. This is not necessarily a flaw; rather, it is a feature of large anonymous surveys controlled by strict regulatory and ethical standards. While capable of identifying major trends, this high-level reporting can be alienating to providers who are trying to engineer improvements relevant to their service, particularly where there is limited local expertise in analysing and interpreting quantitative evidence. Additionally, confidentiality requirements set by regulators mean that NPSP results are not easily linked to other quality indicators like patient safety and clinical effectiveness. This hinders data triangulation and leaves patient-reported feedback isolated from more holistic quality improvement plans. Aggregate presentation of results and limitations around data linkage confound attempts to use data for secondary research and exploratory analysis, restricting their potential to inform an understanding of how patients' experiences are linked to clinical outcomes and service administration. Examples of linkage between patient experience survey results and clinical data sources have yielded promising results for service improvement.17 For example, the Consumer Assessment of Healthcare Providers and Systems survey in the USA has been successfully adapted to include links to other metrics and specific questions intended to facilitate quality improvement across clinical effectiveness as well as experience.18

Furthermore, results from different feedback collections often contradict each other. Staff find different improvement priorities and levels of urgency depending on which data source they read, and attempts to compare ‘benchmarked’ or relatively judged measures with those that provide only absolute ratings can be confusing. Using the UK as an example, at Imperial College Healthcare Trust, NPSP data for acute care indicate below-average scores for overall view of care and services; local surveys suggest a much more positive view, and FFT data say that 96% of people would recommend that same service.19,20 Mode and timing effects exert significant influences on responses: NPSP surveys are conducted by post months after care, the FFT is collected in services using everything from iPad surveys to token bowls, and local surveys are conducted at the discretion of trusts. Data collected via different approaches may differ very substantially, and certain methods have a normative effect, encouraging more positive results than others. As a result, inter-organisational comparisons are generally only possible in the case of standardised NPSP collections.21

Staff juggle differing advice on how to interpret each set of results. For instance, NPSP results are supplied through rigorous survey methods but are not reported in a timely way; FFT results present a wealth of information quickly but do not provide actionable detail about where to make improvements; and local surveys are meaningful to the service but are often conducted in-house and risk yielding artificially elevated results, as patients are reluctant to be negative while in the care of the organisation. Although a variety of data sources can be advantageous, given the lack of resources devoted to working with the results, these inconsistencies obscure where improvement is needed most.

Given the multitude of datasets, interpreting data and deriving meaning from them requires a multidisciplinary approach with accessible guidance. Evidence from across the USA also suggests that uncertainty over how to interpret or act on patient-reported feedback means that using it necessitates a more concerted effort than is required for other types of quality improvement data.14

Analytical complexities

Finally, analysing patient-reported feedback and deriving meaningful improvements is a complicated task, subject to many misinterpretations. The NPSP guidance for analysing data provides an interesting example:

“The [colour] categories are based on an analysis technique called the ‘expected range’ which determines the range within which the trust's score could fall without differing significantly from the average, taking into account the number of respondents for each trust and the scores for all other trusts. If the trust's performance is outside of this range, it means that it performs significantly above/below what would be expected. If it is within this range, we say that its performance is ‘about the same’.”22

On first read it is not clear where to focus improvement. Should services prioritise scores that are at the low end of the expected range, but still very high, or scores that are low, but nearing the high end of the expected range? Are these questions equally important to improving quality? In terms of the NPSP, extensive guidance exists for staff members interested in interpreting the data correctly, and most providers use external survey companies to run surveys and lead data interpretation workshops. However, while guidance and workshops are available, the guidance documents still require a level of familiarity with survey methodology to comprehend as intended. Furthermore, data interpretation workshops are neither mandatory nor consistent across companies, and individual providers pay for them at their own discretion.
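
To illustrate why this is easily misread, the sketch below approximates the idea described in the guidance: whether a trust is flagged as above or below expectation depends not only on its score but also on its number of respondents. It is a deliberately simplified Python illustration, not the NPSP's published algorithm, and all scores are hypothetical.

```python
import statistics

def classify_trust(trust_positive_rate, trust_n, all_trust_rates, z=1.96):
    """Simplified illustration of an 'expected range' classification.
    Not the actual NPSP algorithm: the range is centred on the average of
    all trusts' rates and widened according to the trust's respondent count."""
    national_rate = statistics.mean(all_trust_rates)
    # Sampling variability for a proportion estimated from trust_n respondents
    standard_error = (national_rate * (1 - national_rate) / trust_n) ** 0.5
    lower = national_rate - z * standard_error
    upper = national_rate + z * standard_error
    if trust_positive_rate < lower:
        return "below what would be expected"
    if trust_positive_rate > upper:
        return "above what would be expected"
    return "about the same"

# Hypothetical proportions of patients responding positively at other trusts
other_trusts = [0.79, 0.81, 0.84, 0.80, 0.82, 0.78, 0.83, 0.85, 0.80, 0.81]
# The same score is 'about the same' with few respondents ...
print(classify_trust(0.86, trust_n=120, all_trust_rates=other_trusts))
# ... but 'above what would be expected' with many respondents
print(classify_trust(0.86, trust_n=1200, all_trust_rates=other_trusts))
```

The ambiguity is visible even in this toy version: a trust can score higher than most of its peers and still be reported as ‘about the same’, purely because of its respondent numbers.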

Addressing these questions is complicated, and the above excerpt is one section of three-and-a-half pages of text explaining how to interpret data. It is easy to understand why some people analysing their reported results might be tempted to think their scores indicate performance against other trusts, or improvement over time. The guidance does warn of these misconceptions, and the level of detail provided in the guidance is certainly necessary to derive strictly accurate meaning from findings. However, it is not particularly accessible to staff responsible for analysis. There are no explicit instructions to help patient experience leads prioritise the results and translate them to the frontline staff.

Results from patient-reported feedback are often misapplied and analysed in the context of point scoring rather than improvement. Although the ‘expected range’ approach applies robust statistical techniques for institutional comparisons in a bid to be both authoritative and accessible, it is often misunderstood.23 In the case of the FFT, analysis is provided in relation to completion rates, which spurs a race towards higher response rates rather than improvement. While the competition might inspire change, national targets relating to feedback collections often undermine the patient-centred objectives they were designed to promote.4,24

Enhancing use of patient-reported feedback

Highlighting these problems might feel risky, as though it could detract from the positivity surrounding patient-centricity. However, by not addressing these issues we endanger the long-term benefits and sustainability of the paradigm. Addressing them requires an injection of new thought.

Improved data

While the first necessity revolves around methodological expertise and training, there are also more dynamic collection methods to be trialled. This does not necessarily mean new vehicles for collection; rather, enhancements could be made to existing collections to enable them to yield new types of data. For instance, collecting patient-reported feedback so that it can be linked to effectiveness and safety information at the case level could enable more effective integration of quality reports, and break down the silos that currently encase the three domains. Although regulation prohibits this at the moment, evidence suggests that patients are largely willing to participate in this sort of data linkage and that it holds promise for more targeted service delivery.25 Studies from diverse care settings demonstrate how linkage between experience data and hospital records can produce a picture of individuals' care quality that is better suited to driving improvement.26
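
As a minimal sketch of what case-level linkage might look like, assuming pseudonymised extracts that share an episode identifier, the example below joins hypothetical experience, effectiveness and safety fields. The field names and values are invented for illustration; any real linkage of this kind would be subject to information governance approval and patient consent.

```python
import pandas as pd

# Hypothetical, pseudonymised extracts with invented field names
feedback = pd.DataFrame({
    "episode_id": ["E001", "E002", "E003", "E004"],
    "overall_experience": [9, 4, 8, 6],               # 0-10 patient rating
    "felt_involved_in_decisions": [True, False, True, False],
})
outcomes = pd.DataFrame({
    "episode_id": ["E001", "E002", "E003", "E004"],
    "readmitted_within_30_days": [False, True, False, True],
    "safety_incident_recorded": [False, True, False, False],
})

# Case-level linkage: join experience, effectiveness and safety per episode
linked = feedback.merge(outcomes, on="episode_id", how="inner")

# A first triangulation: compare experience scores by readmission status
print(linked.groupby("readmitted_within_30_days")["overall_experience"].mean())
```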

Innovations can also be applied to sampling. The most important priorities for quality improvement vary based on individual health and medical needs.27,28 However, current results from patient-reported feedback fall short of indicating what different patient groups need and value. One solution being explored by the NPSP is mandating larger sample sizes across all organisations to allow results to be disaggregated and reported at the site level without breaching confidentiality or threatening the reliability of survey estimates.

Beyond that, other industries have been very successful in understanding where to make improvements for consumers through data segmentation techniques that identify specific groups within the population.29 Segmenting the patient population by their medical and socio-demographic needs can account for important differences and enable patients' feedback to indicate where quality improvements are required for specific groups. While this is possible with data from the NPSP, it again requires resources, in terms of both statistical software and staff skills. Furthermore, if segmentation is conducted outside of provider organisations, critical demographic data are redacted, which limits its utility. Segmentation could instigate a more patient-centred hierarchy of improvement priorities and help to identify quality improvements for the most marginalised patient groups, whose voices can be diluted in current results. It can also be used to better understand the provider landscape, as identifying appropriate improvements would be more straightforward if benchmarking better accounted for heterogeneity across sites and organisations.
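
As a sketch of what such segmentation might look like at the respondent level, the example below groups hypothetical responses by age band and the presence of a long-term condition and compares experience scores across segments. The fields and values are invented for illustration; richer approaches, such as cluster analysis, would be needed in practice.

```python
import pandas as pd

# Hypothetical respondent-level extract; in national reporting these
# demographic fields are often redacted, which limits segmentation
responses = pd.DataFrame({
    "age_band": ["16-35", "16-35", "36-65", "36-65", "66+", "66+", "66+"],
    "long_term_condition": [False, True, False, True, False, True, True],
    "overall_experience": [8, 5, 9, 6, 9, 7, 6],      # 0-10 patient rating
})

# Segment by socio-demographic and medical characteristics, then compare
segments = (
    responses
    .groupby(["age_band", "long_term_condition"])["overall_experience"]
    .agg(["mean", "count"])
    .sort_values("mean")
)
print(segments)  # lowest-scoring segments suggest where to target improvement
```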

Driving change

Patient experience should not simply be a vehicle to capture ‘comfort’ measures. The feedback process should invite patients to fully integrate their views into their care pathways and should capture data on the aspects of care that critically influence outcomes and adherence to pathways. In this light, patient experience pertains more to things like information, communication and involvement: the ingredients that support better uptake of and adherence to care. Arguably this is increasingly important as the health service takes more responsibility for conditions where outcomes are not measured by cure rates, but determined by adherence to long-term treatment regimes and by how well that treatment supports people to live the lives that they want to lead. The underuse of patient-reported feedback is well documented. However, the challenges to using it have not previously been effectively compiled to demonstrate the complexity behind this underuse. The articulation of possible techniques to overcome these challenges in this paper is not wholly novel, but it demonstrates the availability of practical solutions to make feedback more useful. This moves away from simply acknowledging the underuse of data and towards actually rectifying the problem. Further work is needed to understand how patients can facilitate the uptake of these methods and inform other, more innovative solutions to using their feedback.

The sector-wide interest in patient-reported feedback reflects an appreciation for patient experience, and in some cases, has provoked pioneering approaches to quality improvements. However, improving the use of patient-reported feedback requires consideration and resolve from regulators and providers to optimise the structure of the feedback process. The issues are by no means insurmountable, but failing to confront them could curtail the development of a more modern, sustainable and patient-centric health system.


Footnotes

  • Twitter Follow Kelsey Flott at @kelseyflott and Chris Graham at @ChrisGrahamUK

  • Contributors KMF conducted the research that went into this paper and was the primary author. CG was also responsible for parts of the research and managing the writing. AD provided input into the design of the final drafts of the work. EM provided senior oversight of the whole paper and was also involved in the drafting and editing of the manuscript.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.