
Achieving progress through clinical governance? A national study of health care managers’ perceptions in the NHS in England
T Freeman1, K Walshe2

1Health Services Management Centre, University of Birmingham, Birmingham, UK
2Manchester Centre for Healthcare Management, University of Manchester, Manchester, UK

Correspondence to: T Freeman, Health Services Management Centre, University of Birmingham, Birmingham B15 2RT, UK; t.freeman@bham.ac.uk

Abstract

Background: A national cross sectional study was undertaken to explore the perceptions concerning the importance of, and progress in, aspects of clinical governance among board level and directorate managers in English acute, ambulance, and mental health/learning disabilities (MH/LD) trusts.

Participants: A stratified sample of acute, ambulance, and mental health/learning disabilities trusts in England (n = 100), from each of which up to 10 board level and 10 directorate level managers were randomly sampled.

Methods: Fieldwork was undertaken between April and July 2002 using the Organisational Progress in Clinical Governance (OPCG) schedule to explore managers’ perceptions of the importance of, and organisational achievement in, 54 clinical governance competency items in five aggregated domains: improving quality; managing risks; improving staff performance; corporate accountability; and leadership and collaboration. The difference between ratings of importance and achievement was termed a shortfall.

Results: Of 1916 individuals surveyed, 1177 (61.4%) responded. The competency items considered most important and recording highest perceived achievement related to corporate accountability structures and clinical risks. The highest shortfalls between perceived importance and perceived achievement were reported in joint working across local health communities, feedback of performance data, and user involvement. When aggregated into domains, greatest achievement was perceived in the assurance related areas of corporate accountability and risk management, with considerably less perceived achievement and consequently higher shortfalls in quality improvement and leadership and collaboration. Directorate level managers’ perceptions of achievement were found to be significantly lower than those of their board level colleagues on all domains other than improving performance. No differences were found in perceptions of achievement between different types of trusts, or between trusts at different stages in the Commission for Health Improvement (CHI) review cycle.

Conclusions: While structures and systems for clinical governance seem well established, there is more perceived progress in areas concerned with quality assurance than quality improvement. This study raises some uncomfortable questions about the impact of CHI review visits.

  • clinical governance
  • NHS trusts
  • quality improvement


Across the developed and developing world, health systems and governments are engaged in developing new institutions, mechanisms, and processes intended to assure and improve the quality of health care.1–4 Yet there is often a gap between the rhetoric of policy and the reality of organisational practice. Designing a national strategy for healthcare quality improvement seems not too difficult; after all, the UK has had several in the last decade. Implementing that strategy and making it “work”, however, is immensely challenging.

In the UK, central government has become increasingly prescriptive about the direction and form of healthcare quality improvement. In 1997 the government outlined its programme of NHS quality reforms (summarised in box 1),5 an ambitious agenda reflecting broader governance trends towards the performance management of devolved organisations against national standards.6–10 The new NHS systems included a National Institute for Clinical Excellence (NICE) charged with standard setting; an expanded Performance Assessment Framework; visits, inspections, and latterly performance ratings (“stars”) from a Commission for Health Improvement (CHI, later the Healthcare Commission); and a Modernisation Agency to support organisational process redesign.

Box 1 Summary of UK NHS quality reforms

In 1997/98 the then newly elected Labour government launched a comprehensive set of healthcare quality reforms aimed at putting “quality at the heart of the NHS”. It included measures to:

  • set standards of care—through a new National Institute for Clinical Excellence (NICE) to appraise new healthcare interventions and produce clinical guidelines and other advice on best practice, and National Service Frameworks defining templates of care for key service areas;

  • deliver standards of care—through new systems of clinical governance in all NHS organisations, alongside arrangements for promoting lifelong learning; and

  • monitor delivery—through a new Commission for Health Improvement (CHI) set up to review clinical governance progress in NHS organisations and to investigate serious problems or performance failures, a new set of NHS performance ratings, and a new national survey of patients’ and users’ experiences of the NHS.

At that time, policy documents suggested that clinical governance should consist of:

  • clear lines of responsibility and accountability for the overall quality of care, led by the chief executive and board;

  • a comprehensive programme of quality improvement activities, including clinical audit and professional development;

  • clear policies and processes for managing risk; and

  • procedures for identifying and remedying poor performance.

Since those reforms were put in train, the government has also reformed professional self-regulation, created new structures for dealing with poor clinical performance and adverse incidents, introduced public performance ratings for NHS organisations, further reorganised healthcare regulation and created a new Healthcare Commission in place of the CHI, and begun to reform the clinical negligence litigation process.

Clinical governance, underpinned by a new statutory duty of quality on healthcare providers, has been seen as the lynchpin of these reforms. It is defined as: “… a framework through which NHS organisations are accountable for continuously improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish”.11

Given the combination of external quality assurance and internal continuous quality improvement at its heart, clinical governance has been strongly contested in the literature,12 characterised both as a controls assurance framework13,14 and as a whole systems approach to continuous quality improvement.15,16 While often described as a bridge between managerial and clinical approaches to quality,17 these tensions are embedded in the policy and its implementation, and the language of continuous quality improvement contrasts sharply with the assurance focused style of performance management exhibited by the Department of Health and its agencies.12

One distinctive characteristic of the strategy has been the attention paid to monitoring and implementation.18 In the past, policymakers rarely made more than token attempts to follow up quality reforms and test the robustness of implementation at a local level. This time the CHI, recently reconfigured as the Healthcare Commission, has been tasked with reviewing all NHS trusts and publishing an explicit and detailed report on their progress in clinical governance. Those assessments have been incorporated into performance ratings, have influenced decisions about resource allocation, and have led to direct intervention and top management replacement in some cases.19 However, the wider literature suggests that the effectiveness of such external review processes in bringing about sustained improvement is open to question, and that they bring costs and adverse consequences as well as benefits.20

Previous empirical studies of the development of clinical governance have relied heavily on organisations self-reporting on their own achievements and have offered a rather mixed picture of progress. For example, evaluations of early progress in clinical governance in primary,21–23 secondary, and tertiary24,25 trusts report significant progress in establishing clinical governance systems and processes, including increased attention to clinical quality at board level. However, they found rather less evidence of coherent planning for quality improvement or of impact at the clinical front line,26 and raised growing concerns that the long term quality improvement agenda is becoming lost under pressure to meet the requirements of assurance.27–29

Funded by and undertaken in conjunction with the National Audit Office (NAO) between April and July 2002, this cross sectional study provides a national overview of perceptions of the importance of, and progress in, aspects of clinical governance among board level and directorate managers in English acute, ambulance, and mental health/learning disabilities (MH/LD) trusts. Detailed study objectives are set out in box 2. Board and directorate managers were included in order to assess the extent of divergence in perceptions between these two organisational tiers within organisations; and acute, ambulance and MH/LD trusts were included to assess the extent of divergence in perceptions between different types of healthcare organisation. Primary care organisations were explicitly excluded given the organisational turbulence occasioned by the shift from Primary Care Groups (PCGs) to Primary Care Trusts (PCTs) as the organisational form for primary care delivery in the UK NHS at the time of the study.

Box 2 Study objectives

  • Which individual competency items are perceived as most important?

  • Which individual competency items show most perceived progress and where are the shortfalls (in which ratings for importance exceed the ratings for achievement)?

  • Which aggregated domains are perceived as most important?

  • Which aggregated domains show most perceived progress and where are the shortfalls?

  • What perceived progress is there in clinical governance structures, processes and outcomes?

  • To what extent does perceived achievement vary between trusts?

  • What impacts do trust type, CHI visits, and respondents’ status have on perceptions of achievement in aggregated domains?

The study also assesses the impact of CHI clinical governance reviews on perceptions of achievement. While the review methodology evolved over time, at the time of the study judgements were based on a peer review of organisational arrangements for clinical governance combining routine data with further information gathered during a site visit. The resulting reports were used as the basis for an improvement plan, performance managed by the relevant Strategic Health Authority. The study assesses the impact of the visit/reporting/planning cycle on perceptions of achievement by comparing cross sectional trust level data on the basis of the trust’s position in the review cycle: no visit undertaken or planned; undergoing the visit process; and in receipt of a published review report for 3 months or more.

METHODS

Participants

The NAO established a sample frame of all eligible English acute, ambulance, and mental health/learning disabilities (MH/LD) trusts, from which a stratified random sample of 100 trusts was drawn to reflect the proportion of each type of organisation. This resulted in 68 acute, 11 ambulance, and 21 MH/LD trusts being selected for inclusion. To ensure representation of perceptions from both senior managers at board level and middle managers more directly involved in operational issues, up to 10 board members (both executive and non-executive directors) and 10 directorate level managers at each trust were randomly sampled from Binley’s database of NHS management.30 By seeking the views of these managers as individuals (rather than seeking an organisational response) in confidential circumstances, it was anticipated that respondents would be more likely to be candid in their assessments. Where fewer than 10 such staff members were listed, all eligible staff were included. In total, 1916 participants across 100 trusts were selected. Participants received a six-page questionnaire, which was re-sent to non-respondents at 4 and 8 weeks.
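To make the sampling design concrete, the sketch below illustrates proportional stratified sampling of trusts followed by sampling of up to 10 managers per tier. It is a minimal illustration in Python/pandas, not the NAO’s actual procedure; the column names (trust_type, trust_id, tier) are hypothetical.

```python
# Minimal sketch of the sampling design described above, NOT the NAO's
# actual procedure; column names ("trust_type", "trust_id", "tier") are
# hypothetical.
import pandas as pd

def stratified_trust_sample(frame: pd.DataFrame, n: int = 100,
                            seed: int = 1) -> pd.DataFrame:
    """Draw n trusts with strata proportional to each trust type's share
    of the sample frame (here: 68 acute, 11 ambulance, 21 MH/LD)."""
    shares = frame["trust_type"].value_counts(normalize=True)
    sizes = (shares * n).round().astype(int)  # rounding may need a small adjustment to sum to n
    parts = [frame[frame["trust_type"] == t].sample(k, random_state=seed)
             for t, k in sizes.items()]
    return pd.concat(parts)

def sample_managers(staff: pd.DataFrame, cap: int = 10,
                    seed: int = 1) -> pd.DataFrame:
    """Within each trust and management tier (board/directorate), sample up
    to `cap` managers; where fewer are listed, all eligible staff are kept."""
    return (staff.groupby(["trust_id", "tier"], group_keys=False)
                 .apply(lambda g: g.sample(min(cap, len(g)), random_state=seed)))
```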

Study measures

Previously developed at the University of Birmingham Health Services Management Centre, the Organisational Progress in Clinical Governance (OPCG) schedule assesses respondents’ perceptions of achievement on a series of organisational competencies related to clinical governance (Appendix 1). Briefly, the OPCG schedule was developed through a combination of literature reviews and qualitative research with expert groups: the former to identify tasks defining “good practice” in clinical governance, and the latter to operationalise statements reflecting competence within the identified tasks and to validate their comparative level of difficulty. This produced a set of 54 statements relating to organisational competencies in clinical governance. Respondents scored the importance of, and their organisation’s achievement against, these statements on a six-point Likert scale where 1 = low and 6 = high. A forced five-factor Principal Factor Analysis (PFA) solution confirmed the internal consistency of statements under five domains (improving quality; managing risk; improving staff performance; corporate accountability; and leadership and collaboration), allowing competency item scores to be aggregated into domain scores. As these domains contain different numbers of items, reported domain scores are standardised between 0 and 10 to aid comparison.
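The exact rescaling onto the 0–10 range is not given in the paper; the sketch below assumes a simple linear map of the 1–6 item scale, which preserves the comparisons the standardisation is intended to support.

```python
import numpy as np

def domain_score(item_ratings) -> float:
    """Aggregate a respondent's 1-6 Likert ratings for one domain into a
    0-10 score. The linear rescaling below is our assumption; the paper
    states only that domain scores are standardised between 0 and 10."""
    mean = float(np.mean(item_ratings))   # mean item rating, in [1, 6]
    return (mean - 1.0) / 5.0 * 10.0      # map [1, 6] onto [0, 10]

# A respondent rating three items at 5, 6 and 4 averages 5.0,
# giving a standardised domain score of 8.0.
assert domain_score([5, 6, 4]) == 8.0
```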

The extent to which scale items are a well balanced and comprehensive sample of the domain in question was assessed empirically using a dual strategy of content validation.31 This involved a review of government guidance to identify tasks and the empirical development of statements of achievement against those tasks using Q-sort method,32 thereby developing the task areas identified in the literature in the light of expert opinion. Full details of the development and validation of the OPCG schedule are outside the scope of this paper but are available elsewhere.33

Statistical methods

Data were analysed using SPSS-PC for Windows software (SPSS Software Inc, release 11.0). As the study reports multiple respondents from each of 100 individual trusts, there is a possibility that “clusters” of respondents might lead to an underestimation of standard errors and thus artificially narrow confidence intervals around estimates. To avoid this problem the standard technique of aggregation was used, in which individual respondent scores were aggregated into trust means. These aggregates were then used as raw data to calculate the mean, SD, and confidence intervals for each of the item and domain scores. For the analysis of the effect of respondent status (board member or directorate manager) on domain scores, the scores of board and directorate managers within each trust were aggregated separately so that there was a summary data point for each of the two groups from each of the 100 trusts.
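In code, the aggregation step reduces to group means; the pandas sketch below uses toy data with illustrative column names, not the study data.

```python
import pandas as pd

# Toy respondent-level data (illustrative names and values): one row per
# respondent, with a trust identifier, a management tier, and a 0-10 score
# per domain (two domains shown for brevity).
df = pd.DataFrame({
    "trust_id": [1, 1, 1, 2, 2, 2],
    "tier": ["board", "board", "directorate"] * 2,
    "quality": [6.0, 5.0, 4.0, 7.0, 6.0, 5.0],
    "risks":   [7.0, 7.0, 6.0, 8.0, 7.0, 6.0],
})

# One summary data point per trust: the unit of analysis used throughout.
trust_means = df.groupby("trust_id")[["quality", "risks"]].mean()

# For the respondent-status analysis, board and directorate scores are
# aggregated separately, yielding one paired observation per trust.
tier_means = (df.groupby(["trust_id", "tier"])[["quality", "risks"]]
                .mean().unstack("tier"))
```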

Kolmogorov-Smirnov and Levene’s tests were used to assess the assumptions of normal distribution and homogeneity of variance for each of the OPCG schedule domain scores used as dependent variables. The effect of CHI visit status (three groups) was explored using analysis of variance (ANOVA). Given a violation of the assumption of homogeneity of variance, the effect of trust type (three groups) was explored using a non-parametric equivalent of analysis of variance (Kruskal-Wallis). The effect of respondent status (two groups) was explored by paired t tests, given the natural pairing of board and directorate managers within each trust.
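The test-selection logic can be summarised in a few lines of scipy; this is a sketch of the decision rule described above, not the authors’ analysis code.

```python
from scipy import stats

def choose_group_comparison(*groups):
    """Decision rule sketched from the text: use one-way ANOVA where
    Levene's test finds homogeneous variances, Kruskal-Wallis otherwise.
    Each argument is an array of trust-level scores for one subgroup."""
    _, levene_p = stats.levene(*groups)
    if levene_p < 0.05:                      # heterogeneous variances
        return "Kruskal-Wallis", stats.kruskal(*groups)
    return "ANOVA", stats.f_oneway(*groups)

# The paired board vs directorate comparison within trusts would use
#   stats.ttest_rel(board_means, directorate_means)
# and normality would be assessed, as in the paper, with a
# Kolmogorov-Smirnov test, e.g.
#   stats.kstest(scores, "norm", args=(scores.mean(), scores.std()))
```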

RESULTS

Of 1916 eligible study participants, 1177 (61.4%) returned completed questionnaires. Data were checked for out-of-range values, and a close examination of 60 (5%) entered cases against the original paper questionnaires found no discrepancies.

Statistical considerations

The move from individual to organisational level analysis is only valid if there is more variation in the scores of individuals between different trusts than within trusts, so a simple one way (non-repeated measures) analysis of variance tested this assumption (table 1). The condition was satisfied for all domains: variation between respondents from the same trust was less than variation between respondents from different trusts for each of the five OPCG schedule domain aggregates (improving quality; managing risks; improving performance; corporate accountability; and leadership and collaboration) and for the additional aggregates of structure, process, and outcome. The data could therefore be legitimately aggregated to organisational level, avoiding the clustering problem caused by multiple respondents from each of the 100 trusts in the sample.
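Expressed as code, this check is a one-way ANOVA with trust as the grouping factor; a significant F statistic indicates greater between-trust than within-trust variation. A minimal scipy sketch, with an illustrative column name:

```python
from scipy import stats

def aggregation_check(df, domain: str):
    """One way (non-repeated measures) ANOVA with trust as the factor, as
    in table 1: a significant F indicates more variation between trusts
    than within them, licensing aggregation to trust means."""
    groups = [g[domain].to_numpy() for _, g in df.groupby("trust_id")]
    return stats.f_oneway(*groups)

# e.g. f_stat, p = aggregation_check(df, "quality"); aggregate if p < 0.05
```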

Use of ANOVA and t tests to explore the effect of CHI visit status, trust type, and respondent status on OPCG schedule domain scores requires that the sampling distributions of the means of the dependent variables are normally distributed; ANOVA additionally requires homogeneity of variance between subgroups. While a Kolmogorov-Smirnov test confirmed that the assumption of univariate normality was met, statistically significant results for Levene’s test between subgroups of trust types on the improving quality (3.149; p = 0.047), improving performance (8.324; p < 0.001), and process (3.132; p = 0.048) aggregates indicated violation of the assumption of homogeneity of variance. As the number of subjects in each subgroup also differed, a non-parametric equivalent of analysis of variance (Kruskal-Wallis) was substituted for comparisons between subgroups of trusts.34

Which individual competency items are perceived as most important?

Overall, respondents perceived most items as highly important: on a 6 point scale, the mean item rating was 5.1 and no single item was rated lower than 4.5. Items regarded as most important included the formal committee structure (items 35 and 32), a “no blame” culture (item 22), and raising concerns, action planning, and improving quality around risks and adverse events (items 13, 14, 16, 17, 18 and 23). Items perceived as least important included benchmarking (item 2), use of research and clinical indicators (items 4, 5, 6 and 25), and joint working with partner agencies and shared protocols (items 51, 52 and 41).

Which individual competency items show most perceived achievement and where are the shortfalls?

Respondents perceived higher achievement against items concerning structural change for corporate accountability than against those concerned with quality improvement or collaborative outcomes. Competency items scored highly included committee structures (items 35 and 32), collation of complaints/information (items 20 and 36), raising clinical issues (item 7), discussing risk and adverse event data (item 16), and priority planning (items 39 and 40). Conversely, items scored low included joint work across local health communities (items 51 and 52), benchmarking for quality improvement (item 2), use of research evidence (items 6 and 4), using clinical indicators (items 24 and 25), user involvement (item 49), and promoting clinical teams (items 45 and 46). Perceived shortfalls were highest in joint work across local health communities (items 51 and 52), clarity of service development criteria (item 38), performance feedback and clear objectives (items 45 and 47), working across boundaries and reorganising work processes (items 1 and 46), and user involvement (items 48–50).

Which aggregated domains are perceived as most important?

Respondents’ scores for items were aggregated into five domains, scored 0–10 to aid comparison, and then aggregated by trust in order to account for the clustering effect of multiple respondents. At the domain level, corporate accountability was scored highest (mean 8.8, 95% CI 8.7 to 8.9), followed by managing risks (mean 8.7, 95% CI 8.6 to 8.8), performance improvement (mean 8.1, 95% CI 8.0 to 8.2), leadership and collaboration (mean 8.0, 95% CI 7.9 to 8.1), and finally improving quality (mean 7.7, 95% CI 7.6 to 7.7).

Which aggregated domains show most perceived achievement and where are the shortfalls?

Respondents’ scores for items were aggregated into five domains, scored 0–10 to aid comparison, and then aggregated by trust in order to account for the clustering effect of multiple respondents (fig 1). The domains showing highest perceived achievement across the sample were corporate accountability and risk management which scored 8.1 (95% CI 7.9 to 8.3) and 6.8 (95% CI 6.7 to 7.0), respectively. There was evidence of less perceived achievement in quality improvement and leadership and collaboration, with mean scores of 5.4 (95% CI 5.3 to 5.5) and 5.6 (95% CI 5.5 to 5.8), respectively. The highest shortfalls between perceived importance and achievement were for leadership and collaboration (mean 2.4, 95% CI 2.3 to 2.5) and improving quality (mean 2.3, 95% CI 2.1 to 2.4), followed by performance improvement (mean 1.9, 95% CI 1.7 to 2.1), managing risks (mean 1.8, 95% CI 1.7 to 2.0), and corporate accountability (mean 0.8, 95% CI 0.6 to 0.9).
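Arithmetically, each shortfall is simply the importance aggregate minus the achievement aggregate for the same domain. The sketch below reproduces the computation from the published means; rounding of the means before subtraction accounts for the small discrepancies noted in the comments.

```python
import pandas as pd

# Published 0-10 domain means from the text above; performance improvement
# is omitted because its achievement mean is not reported here.
importance = pd.Series({"corporate accountability": 8.8,
                        "managing risks": 8.7,
                        "improving quality": 7.7,
                        "leadership and collaboration": 8.0})
achievement = pd.Series({"corporate accountability": 8.1,
                         "managing risks": 6.8,
                         "improving quality": 5.4,
                         "leadership and collaboration": 5.6})

# Shortfall: perceived importance minus perceived achievement per domain.
print(importance - achievement)
# leadership (2.4) and quality (2.3) match the reported shortfalls exactly;
# accountability (0.7 vs 0.8) and risks (1.9 vs 1.8) differ only because
# the published means were rounded before this subtraction.
```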

What perceived progress is being made against structures, processes and outcomes?

The item scores were further aggregated under the headings “structure”, “process” or “outcome” (items in each domain detailed in Appendix 1) and standardised between 0 and 10 to aid comparison. These domain scores were then aggregated by trust in order to account for the clustering effect of multiple respondents from each trust (fig 2). Respondents perceived most achievement against items relating to structural change (mean 7.2, 95% CI 7.1 to 7.3), and rather less against process (mean 6.0, 95% CI 5.9 to 6.1) and outcome (mean 5.7, 95% CI 5.5 to 5.8). Importantly, process (mean 8.2, 95% CI 8.1 to 8.3) and outcome (mean 8.1, 95% CI 8.0 to 8.2) were perceived to be slightly more important than structure (mean 7.7, 95% CI 7.6 to 7.8), so the shortfalls between perceived importance and achievement are considerably greater for outcome (mean 2.4, 95% CI 2.4 to 2.6) and process (mean 2.2, 95% CI 2.1 to 2.3) than for structure (mean 0.5, 95% CI 0.5 to 0.6).

To what extent do perceived achievements vary between trusts?

To facilitate comparisons between trusts, perceived achievement scores were aggregated to produce trust means, summed to produce a single summary score (0–50), and ranked from lowest to highest (fig 3).
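As a sketch, with illustrative names and values, the summary computation sums each trust’s five 0–10 domain means into a 0–50 score and sorts:

```python
import pandas as pd

# One row per trust, one 0-10 achievement mean per domain (values invented
# for illustration, loosely echoing the sample-wide means reported above).
trust_means = pd.DataFrame(
    {"quality": [5.4, 6.0], "risks": [6.8, 7.1], "performance": [6.1, 6.5],
     "accountability": [8.1, 8.3], "leadership": [5.6, 6.0]},
    index=["trust A", "trust B"])

# Sum the five domain means into a single 0-50 summary score per trust and
# rank from lowest to highest, as plotted in fig 3.
summary_score = trust_means.sum(axis=1).sort_values()
```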

What impacts do trust type, CHI visits, and respondents’ status have on perceptions of achievement in OPCG schedule domains?

Sample respondents differed in the type of trust in which they worked (acute, ambulance or MH/LD), the CHI visit status of the trust in which they worked (not yet given a date for first visit, undergoing the process, having received a report 3+ months ago), and the respondents’ status in the organisation (board member or directorate manager). The effect of these independent variables on perceptions of achievement in OPCG schedule domains was assessed using Kruskal-Wallis, ANOVA and paired t test analysis, respectively (table 2).

Trust type

Kruskal-Wallis analysis (table 2) identified a significant effect of trust type on perceptions of achievement in improving performance (χ2 = 6.035, d.f. = 2, p = 0.049). Further post hoc analysis (table 3) revealed that the overall effect on this domain score was due to significantly lower perceptions of achievement in ambulance trusts than in acute trusts (mean difference −1.1, 95% CI −1.7 to −0.5).

CHI visit status

ANOVA identified no significant effect of CHI visit status on perceptions of achievement (table 4), suggesting that the visiting process had no discernible effect on perceptions of achievement in any of the identified domains.

Respondents’ status

Board level respondents perceived higher achievement than directorate level managers in corporate accountability (mean difference 0.4, 95% CI 0.3 to 0.6), leadership and collaboration (mean difference 0.3, 95% CI 0.2 to 0.5), managing risks (mean difference 0.3, 95% CI 0.1 to 0.5), and improving quality (mean difference 0.3, 95% CI 0.1 to 0.5). Mean scores are shown in table 5. On all domains other than improving performance, board members perceived higher achievement than their directorate level counterparts.

DISCUSSION

This national cross sectional study reveals important differences in the perceived importance of, and achievement in, aspects of clinical governance across the sample, as well as between corporate and directorate management tiers within organisations. Aspects relating to the corporate accountability agenda were perceived as more important and more achieved than those relating to aspects such as interorganisational collaboration or quality improvement. Board level managers within organisations consistently rated achievement higher than their directorate based colleagues in corporate accountability, improving quality, managing risks, and leadership and collaboration aspects. Importantly, no significant differences were found in trust level aggregate data between trusts at different positions in the CHI review cycle. The implications of these results are discussed below.

Primacy of the assurance agenda

Respondents prioritised the importance of the assurance agenda above others at both competency item and aggregate domain level. The primacy of the assurance agenda is understandable in the context of the strong performance management culture in the NHS. However, the results could also be interpreted as raising concerns about the ability of NHS organisations to tackle the important long term quality improvement agenda of clinical governance,27–29 especially when account is taken of the shortfalls between perceived importance and achievement. It is here that support and development efforts need to be concentrated.

Structural primacy

Consistent with earlier evaluations,23,24 results suggest perceptions of good progress against the structural agenda but rather less on process and outcome dimensions. NHS trusts appear to have concentrated effort on the structural mechanisms for clinical governance—committees, policies and resources—rather than the substance of the reforms and their intended outcomes in terms of the way that clinical teams work together to improve service provision. It may be suggested that this is simply a natural lag and that one might expect the more difficult quality improvement agenda to be addressed in due course. The danger is that attention is focused on the systems themselves, rather than the effects that the systems are designed to achieve.

Ritualistic exchanges

Analysis suggested that perceptions of progress were higher among board level managers than directorate level managers across multiple domains. This is consistent with broader literature showing variations in perceptions of organisational achievement across management tiers within organisations.35,36 The reliance on self-reports by chief executives and NHS boards for estimates of progress in previous studies of clinical governance is thus likely to have produced overoptimistic assessments. These results may also suggest that clinical governance committees provide a “theatrical” function, reassuring the board that all is well while allowing “business as usual” at lower levels within the organisation.37

Impact of CHI visits on perceptions of progress

While there were no statistically significant differences in managers’ perceptions of progress in clinical governance between trusts that had been, were being, or had yet to be visited by CHI, a number of caveats need to be borne in mind. Firstly, the study was not primarily designed to explore this question: the primary questions related to managers’ perceptions of progress at a single point in time (cross sectional design). Furthermore, the relationship between the OPCG schedule and the CHI review itself is problematic, since a review is likely to affect managers’ perceptions of clinical governance as well as its underlying realities, and the two are very hard to disentangle.

Key messages

  • A survey of 1916 managers across 100 English NHS trusts identified more perceived achievement in quality assurance and structural aspects of clinical governance than in areas such as quality improvement or leadership and collaboration.

  • Respondents at board level (executive and non-executive directors) tended to have a more positive view of achievement in clinical governance than their directorate level colleagues.

  • There were no significant differences in perceived achievements between trusts at different stages in the CHI review cycle.

  • There is often a substantial difference between the rhetoric of national policy initiatives and strategies and the realities of their implementation.

Caveats notwithstanding, these results seem to show little effect of CHI visits on the clinical governance competency items. The CHI has been subject to some evaluation38,39 which suggests that its impact has been substantial but highly variable, and further process evaluations40,41 may be needed to understand better the conditions or circumstances in which CHI reviews lead to sustained quality improvement.

This study has a number of methodological limitations. Firstly, it is limited to the perceptions of managers at board and directorate level within organisations, omitting the views of most clinicians. Evidence from an earlier study suggests that managers’ perceptions of progress may be more optimistic than those of clinical staff.42 In addition, the adequacy of our response rate of 61% is hard to assess because we have a limited ability to compare the characteristics of responders and non-responders.

CONCLUSIONS

This study suggests that structures and systems for clinical governance are established in the NHS in England, but there seems to be more progress in those areas concerned with quality assurance than quality improvement. The implementation of clinical governance has been shaped by an assurance focused performance management culture in the NHS in England that may not promote quality improvement, and can be argued to be antithetical to it. For other countries and healthcare systems, the British experience shows that a determined government can drive the development of quality improvement systems in healthcare organisations if it develops and applies a consistent, resolute, and coherent policy. However, it also illustrates that the external mandating of what is at heart an internal process of improvement is problematic, and that the risks of symbolic institutional compliance and distortion of policy goals are considerable.

APPENDIX 1: ORGANISATIONAL PROGRESS IN CLINICAL GOVERNANCE (OPCG) SCHEDULE OF COMPETENCY ITEMS

Table 1

 Mean differences in domain scores within and between trusts

Table 2

 Effect of trust type, CHI visit, and board membership on OPCG schedule domain aggregates

Table 3

 Perceived achievement by trust type

Table 4

 Perceived achievement by CHI visit status

Table 5

 Perceived achievement by respondent status

Figure 1

 Mean scores for perceived achievement, importance, and shortfall across OPCG domains (out of 10) where high scores indicate high achievement and importance.

Figure 2

 Mean scores for perceived achievement, importance, and shortfalls in structures, processes and outcomes of clinical governance (out of 10) where high scores indicate high achievement and importance.

Figure 3

 Ranked aggregated mean OPCG domain scores across all 100 trusts.

Acknowledgments

The authors thank John Step and Guy Munro from the National Audit Office and Professor Peter Spurgeon from HSMC at the University of Birmingham for their help and support throughout the project. They also thank all those who gave their time to take part in the study, and are grateful for the reviewers’ comments on an earlier draft of this paper.

REFERENCES

Footnotes

  • The National Audit Office funded this research project as part of a wider examination of the progress of clinical governance in the NHS.

  • See editorial commentary, p 328
