Objectives To explore general practitioner (GP) team perceptions and experiences of participating in a large-scale safety and improvement pilot programme to develop and test a range of interventions that were largely new to this setting.
Design Qualitative study using semistructured interviews. Data were analysed thematically.
Subjects and setting Purposive sample of multiprofessional study participants from 11 GP teams based in 3 Scottish National Health Service (NHS) Boards.
Results 27 participants were interviewed. 3 themes were generated: (1) programme experiences and benefits, for example, a majority of participants referred to gaining new theoretical and experiential safety knowledge (such as how unreliable evidence-based care can be) and skills (such as how to search electronic records for undetected risks) related to the programme interventions; (2) improvements to patient care systems, for example, improvements in care systems reliability using care bundles were reported by many, but this was an evolving process strongly dependent on closer working arrangements between clinical and administrative staff; (3) the utility of the programme improvement interventions, for example, mixed views and experiences of participating in the safety climate survey and meeting to reflect on the feedback report provided were apparent. Initial theories on the utilisation and potential impact of some interventions were refined based on evidence.
Conclusions The pilot was positively received with many practices reporting improvements in safety systems, team working and communications with colleagues and patients. Barriers and facilitators were identified related to how interventions were used as the programme evolved, while other challenges around spreading implementation beyond this pilot were highlighted.
- patient safety
- quality improvement
- general practice
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
This study used qualitative methods to uncover the social and technical issues of relevance in the testing of multiple and novel safety improvement interventions by general practice teams as part of a large-scale pilot collaborative programme. This approach provided some evidence of the potential transferability and utility of most interventions after adaptation to this setting (eg, safety climate assessment and clinical care bundles), although engagement with Plan-Do-Study-Act change cycles was problematic.
With hindsight, some programme aims, data collection and improvement measures were arguably overambitious and unrealistic in the short timeframe available, while related learning and improvement was self-reported. The study was likely biased by involvement of volunteer ‘early adopters’ who over-represented the general practice training environment.
Although many findings are promising, further testing with larger groups of representative GP teams is necessary to more fully inform the ambitions of this type of programme, the utility of related interventions, and their impacts on professional and organisational learning, and making care systems safer for patients.
A review of evidence in 2011 estimated that approximately 1–2% of consultations in primary care may involve an ‘error’ which could lead to potential or actual physical or psychological harm to patients.1 In the UK, for example, around one million patients consult primary care services on a daily basis,1 which gives an indication of the possible scale of patient safety incidents—although many have minor to moderate impacts on health and well-being, or are mitigated before harm actually occurs.2–5
Evidence around the types and sources of avoidable harms in primary care is largely focused on clinical diagnoses, medicines management, and wider systems issues such as test results handling and communications at care interfaces.1,5–10 For example, prescribed medicines have inherent risks associated with unwanted side effects, inappropriate or incorrect usage, and unsafe systems of monitoring.11 Medicine-related adverse events are reported to cause between 5% and 17% of hospital admissions, most of which relate to prescribing, monitoring and adherence problems, with many considered preventable.12 In general practice, prescribing or monitoring errors and harms are often associated with high-risk drugs that require careful monitoring, such as warfarin and methotrexate.13
Efforts at the multiorganisational level to improve the safety of patient care are more advanced in acute hospital settings (where, historically, most of the related policy focus and resource has been concentrated) than in primary care.14 This imbalance may be partly explained by the prevailing international policy view that primary care is a comparatively low-technology environment where patient safety is not perceived as a major issue.15 However, recent commitments by, for example, the Scottish Government,16 European Union17 and WHO18 demonstrate a shift in prioritisation and a formal recognition that patient safety is a problem in primary care requiring action.
Evidence of large-scale collaborative initiatives to improve patient safety in primary care settings is limited.14,19 However, central to these efforts is the need to agree a shared strategic vision of the safety issues to be prioritised, develop the necessary expert leadership support, and invest in infrastructure that can provide valid, timely data to measure and monitor care improvement at the local, organisational and national levels.20,21 Building workforce capacity and capability through delivery of training in quality improvement concepts, skills and methods, and through acquiring knowledge of theory-based change models, is recognised as another vital element for success.21,22
This study reports the findings of the qualitative evaluation of the pilot Safety and Improvement in Primary Care (SIPC) collaborative programme. The SIPC pilot programme aimed to apply a collaborative learning method to improve the safety of care for patients with heart failure or taking high-risk medications such as methotrexate or warfarin (areas where high levels of avoidable morbidity and harm are well established and known to occur). This was to be achieved by building quality improvement knowledge, skills and behaviours in participating general practitioner (GP) teams during protected learning time. Participants then applied this learning during ‘action periods’ to enhance the practice safety culture by prioritising the identification and measurement of risks and safety incidents, and redesigning systems and processes to reduce avoidable harm. Given that many of the safety improvement concepts and methods were new to the great majority of participants, the pilot study offered an ideal opportunity to develop, contextualise and test the usefulness of the programme interventions in this care setting. The background context underpinning how the SIPC programme was delivered is described briefly in box 1.
A brief summary of how the Safety and Improvement in Primary Care (SIPC) pilot programme was delivered and guided by the Institute for Healthcare Improvement (IHI) Breakthrough Series Collaborative Method
The programme was implemented over a 24-month period through adherence to the IHI's Breakthrough Series Collaborative Method, that is, a combination of learning events for multidisciplinary practice teams followed by action periods to measure and improve the safety and reliability of care
Participants attended three 1-day learning events interspersed with 3-month action periods in frontline clinical practice to deliver on programme aims (with local leadership, advice and support provided in each National Health Service (NHS) Board area, and the programme being managed centrally by a core leadership and advisory team)
The SIPC project steering team delivered the learning sessions, at which primary care teams (general practitioners (GPs), nurses, pharmacists, practice managers, administrators, etc) were taught to proactively identify areas where harm is occurring within their practice, to identify how to make changes, to measure improvement, and to ensure safe and reliable care
The Model for Improvement (incorporating Plan-Do-Study-Act (PDSA) cycles) was taught to participants as a method for them to test its usefulness in facilitating rapid improvements in care processes and systems
The Trigger Review Method for primary care was taught to participants (in group sessions and face to face) to test its usefulness in serially measuring undetected harm events in electronic patient records and identifying areas for improvement
The principles of Clinical Care Bundles were taught to participants who developed and tested these locally to assess their potential for improving the reliability of patient care delivery in selected clinical areas
A web-based online questionnaire survey was developed to assess perceptions of safety climate in participating practices and determine its usefulness for team-based reflection and acting on the quantitative feedback reports provided as a way to enhance the prevailing safety culture
The focus of this study is the first wave of the SIPC programme which was initiated in August 2012.
This involved three clinical and management representatives from 22 GP teams based in three regional NHS Board areas in Scotland. Participating NHS Boards and general practices were recruited on a voluntary basis
Financial support for backfill costs was provided to enable core GP team representatives to attend learning sets, and have some protected time for improvement activities
Participants were supported by a local NHS Board level team consisting of a public partner; GP clinical lead; manager; and quality improvement facilitator
The expectation was that NHS Boards would also develop their expertise in supporting practices in improving their care through collaborative working, and in coordinating system-wide approaches to complex patient care
The overall pilot programme was managed and coordinated by a core team from Healthcare Improvement Scotland (HIS)—the national organisation responsible for supporting improvement in the quality of healthcare in Scotland—which consisted of a GP clinical lead, programme manager, two project officers and two project administrators. The expectation was that this team would gain insights into how patient safety improvement might be further developed in primary care
Against this background, the main evaluation aim was to explore the perceptions and experiences of those participating in the pilot programme and identify the facilitators and barriers associated with the range of novel improvement concepts and methods being applied to this setting, mostly for the first time. In this way, evidence of their overall utility could inform decision-making to further refine and spread the implementation of the programme at scale on a national basis.
A qualitative study was undertaken using open-ended semistructured interviews23 with key programme participants: GPs (family doctors), practice nurses and practice managers.
The programme aims were broadly informed by a theory-driven approach24,25 to assist programme leaders to gather evidence related to predicted theories of change and inform future planning. In the early stages of the evaluation, the programme plans were reviewed and key elements of the theories inherent in these were identified. The theories were further refined with input from the programme leadership (NH and JG), and subsequently illustrated in a basic Logic Model to describe how the interventions were initially understood and what results they were expected to achieve (see online supplementary appendix 1).
Setting and participants
The SIPC pilot programme was undertaken in two phases over a 24-month period from March 2012 in 45 general practices (initially 22 in wave 1) across six National Health Service (NHS) Board regional areas in Scotland (initially three in wave 1). The practices varied in size, location and socioeconomic status, with some providing care to small rural community populations of around 1100 and others being large urban practices with over 14 000 patients. Study participants were members of the core GP teams (GPs, practice nurses and practice managers) in each of the three wave 1 participating NHS Boards. Wave 1 participants were selected for interview on the pragmatic basis that they potentially had the greatest programme experience and insights, and for reasons of evaluation resource availability. Purposive sampling was employed to represent a wide range of views and reflect the fundamental characteristics of interest to the evaluation, such as NHS Board setting, professional grouping and programme withdrawal.
A multi-intervention strategy was employed by the programme steering group based on related evidence of driving learning and improvement using similar methods in secondary care settings,22 and informed by professional consensus and experiences in frontline practice. The main interventions comprised: delivery of a quality improvement collaborative based on the Institute for Healthcare Improvement Breakthrough Series,22 application of the ‘Model for Improvement’ (MFI),26 the Trigger Review Method (TRM),3 clinical care bundles,27 a safety climate assessment survey,28 infrastructure/advisory support from local NHS Boards, and formation of a multiprofessional programme steering group to coordinate activities (table 1).
Semistructured interviews were conducted face-to-face in a location convenient to study participants, and lasted between 50 and 85 min. They were undertaken by an experienced qualitative researcher and health psychologist (LH) over the final 9-month period of the SIPC programme during 2012/2013, and were informed by a brief topic guide (box 2) designed to explore participant perceptions and experiences, and reported barriers and facilitators, related to the programme interventions. Interviews were audio-recorded with participants’ consent and then transcribed.
Brief interview topic guide
Programme goals, information and improvement support
What did you understand about the programme goals?
What was your experience of the learning sets? How did you and the team benefit?
Explore sharing and spread of programme concept and practices with the wider practice team
Explore barriers and facilitators with each intervention
What is realistic and feasible, and why? If not, why not?
Explore programme impact at different levels:
Personal and team learning
Practice safety system improvements
Direct patient care improvements
Consequences, good and not so good, for everyday practice
Overall experience of Safety and Improvement in Primary Care programme
What do you find to be effective about the programme? Why was that?
What do you think did not go well about the programme? Why was that?
What programme aspects have the practice embraced? Why? How will you continue with these?
Any concerns about this type of programme approach and why?
Data analysis and interpretation
Data were coded and categorised on an iterative basis by LH immediately postinterview to inform further interviews, and then subjected to a simple thematic analysis29 by LH and PB independently. Both researchers met regularly to compare analyses and further co-develop and refine data categories to generate themes, with any disagreements being resolved by consensus. From the outset, the stated evaluation aim explicitly shaped how data were analysed and provided a basic framework to present the evolving themes that were generated. The findings were shared iteratively with the programme steering group leading to mid-programme activity corrections and refinement of related theories, and as a means of providing supporting evidence for learning outcomes and future implementation efforts at scale.
A total of 27 participants from 11 general practice teams took part in open-ended semistructured, face-to-face interviews (table 2). Three main themes were generated:
Programme perceptions, experiences and benefits;
Improvements to patient care systems;
Utility of programme interventions.
Programme perceptions, experiences and benefits
Most participants believed that the programme had benefited their organisations, patient care and professional work performance. The programme was reported as well organised and as providing an explicit focus for examining patient safety issues while also encouraging broader team working. The learning-oriented sessions were generally well received and valued because they provided opportunities for participants to reflect on current practices, network with peer practices, discuss concerns, feed back on their progress, keep staff focused on the programme goals, and share learning and improvement successes across practice teams as these evolved during the programme.
It was a good opportunity to systematically review how we do look after these patients and that was all very positive. [Practice Nurse 4]
They [learning sets] were thought provoking and change stimulating as well as informative and hard hitting. [GP2]
I think it has been very positive, it has been a good way for me to work with other people, we have all kind of come together. [GP1]
Listening to other practices doing other things has also been a benefit…it was good to meet with the other practices, good to share…you really do learn from others. [GP3]
A majority of participants referred to gaining new theoretical and experiential safety knowledge (eg, how unreliable evidence-based care can be) and skills (eg, how to search electronic records for undetected clinical risks) related to the programme interventions. Some also reported improvements in aspects of their clinical knowledge with regard to specific drugs and their interactions, and the importance of educating patients taking high-risk medications. A much greater awareness of improvement concepts, systems thinking, the potential for avoidable harm, the importance of safety culture and the need to proactively manage risk was reported by most participants as positive programme outcomes.
In terms of the project globally I think it has been very well organised with structured learning days and the support we have had from designated people in the practice. [GP3]
I welcome the concept of identifying potential harm and preventing it rather than waiting until it occurs. [GP7]
It has certainly increased my knowledge so hopefully we may have an increased knowledge I am delivering better patient care…[and]…made the doctors think a bit more on how they see their patients, how they read their patient's records and what action they take. [Practice Nurse 2]
Multiple competing workload priorities, time demands, difficulties in communicating with and engaging colleagues, and managing the necessary change processes were highlighted as major challenges. Participants described problems in physically getting team members together in a meeting room to feed back and reflect on programme learning from the learning sets and to agree improvement steps. Some practice managers and nurses believed they could have offered much greater support to the programme, but felt largely excluded because their delegated roles were very limited or even diminished by the decision-making of medical hierarchies. Others reported a lack of medical involvement and support, and the shifting of much of the programme workload and responsibility onto practice nurses and managers.
Many participants reported a significant mismatch between the comparatively low level of backfill funding received and the time and resources actually committed to meeting the workload demands of the programme. For some, these were the key factors informing their decisions to withdraw from the programme, while others seriously questioned future participation because of similar financial concerns. Three practices disengaged from the programme, citing lack of time-out for staff, staff stress due to workloads, and time with patients potentially being compromised.
The work has basically fallen to myself and the practice nurse, the doctors haven't really engaged with it…the first [GP partner] who came with me she was very cynical and very critical and I found that challenging because I wasn't want to carry the sole responsibility, that was a struggle to the practice to begin with to have, we brought the wrong one [GP partner] along. [Practice Manager 3]
The main challenge was keeping the rest of the team inspired, pulling the team on board was difficult…if it is going to fail it's going to fail because we just can't all get together, that is just not achievable. [Practice Manager 2]
Feeding back things from learning events to the rest of the team, feeding that back to the wider practice group…coming back to a busy practice back into all the time constraints and all the demands on your time to then try and pass on that energy is extremely difficult, that's where a lot of it falters, it is actually very, very difficult to pass that onto the wider group. [GP 4]
The amount of money given to us to pay for back fill didn't pay for a quarter of the back fill, and it costs money to take people out to have meetings, it would have been easier to release time if I had more money to put in locum provision. [Practice Manager 4]
Improvements to patient care systems
Improvements in care bundle data collection methods and the reliability of related systems were reported by most practices over the course of the programme (see online supplementary appendix 2 for examples), but this was an evolving process that was strongly dependent on closer working arrangements between clinical and administrative staff. These changes reportedly led to a number of improvements, including: enhanced systematic monitoring of patients (eg, blood tests and side effects) and documentation; greater personal vigilance when, for example, handling repeat medication prescriptions and the prescribing of antibiotics; developing more robust systems for managing laboratory test results for patients; and more proactive patient contact, education and involvement in their care, including checking understanding of medication regimes and how to seek further support.
A majority of practices reported that they were now gradually providing safer, more reliable care for patients with heart failure or left ventricular systolic dysfunction (LVSD). They indicated that programme participation had enabled them to identify suboptimal care in these areas and take action such as: cleaning up related patient registers and improving identification of patients with LVSD; optimising heart failure management through specialist clinics, leading to reported improvements in, for example, New York Heart Association (NYHA) recording and increased pneumococcal vaccinations; and implementing more robust monitoring of medications and improved patient contacts and care education.
A small minority of participants highlighted the perceived positive impact that heart failure clinics and education had, for example, on patients’ awareness, knowledge and self-management of their conditions. This had reportedly led to some patients feeling more in control of self-management, and having a greater understanding of their illness and of their high-risk medications.
[Patients are now] weighing themselves every day…and they have got it written down, they have never done that before…they are also now fully aware of what side effects to look out for from the outset. [Practice Nurse 4]
We have drafted information cards for the patients on methotrexate and isothyoprine, we have had positive feedback, they have actually participated and helped us revise the cards. [Practice Nurse 4]
We have tidied up our DMARD programme, we have got the new guidelines, our safety has improved with regards to the DMARDS…and we have put together a pre-initiation check list which is working really, really well, everyone in the practice is aware of that… [GP 1]
Utility of programme interventions
Clinical care bundles
Most participants favoured the care bundle intervention as having potentially the greatest positive impact in improving patient care safety and reliability. They considered the visual nature of the ‘run charts’ related to care bundle data measurement as important for encouraging and motivating staff and driving improvement. The care bundle approach was reported to be effective in highlighting unreliable practice, and participants believed that this led to improved care systems, and enhanced patient education and involvement in self-management of illness.
However, adapting, redesigning and gaining consensus on the content of the care bundles was perceived as problematic by many of those involved in the process. For some, the care bundles that were locally developed (eg, related to heart failure care) were viewed as having a limited evidence base and relevance, and were difficult to interpret and challenging in terms of achieving high reliability in the areas of patient care delivery being targeted. Practice teams also reported further struggles with patient non-compliance with some aspects of recommended care related to the bundles which ‘skewed’ their data. They also reported issues with interpreting statistical relevance related to care bundle compliance data. Additional technical problems with information technology for related data collection, storage and access were felt by some to often hinder effective implementation of the care bundle approach.
You can see week by week, month by month, whether or not you are showing any improvement, we seem to be improving and that's good because we are able to see our graphs and what not and how we were doing with that. [GP3]
The [Care] Bundle is the thing that forces you to make changes everything else is driven by that…they are straight forward, it is not too complicated. [GP1]
Model for improvement/Plan-Do-Study-Act change cycles
Although there were some reported successes, multiple barriers were apparent for many participants related to the everyday implementation of Model for Improvement (MFI)/Plan-Do-Study-Act (PDSA) cycles as a method to facilitate small tests of change and drive rapid care improvements. Participants indicated that time constraints and competing work priorities meant that they quickly lost momentum and motivation after initially implementing the PDSA cycle process to test changes in care practices. They also reported some confusion in fully grasping the concept and its relevance to the general practice setting, and how the PDSA cycle process actually aligned with ‘data measurement’ and ‘improvement’. Others found the method a little ‘contrived’ and ‘unnecessary’ for everyday work. Many participants felt it unnecessary to always formally document PDSA cycles undertaken, viewing them as an ‘instinctive’ or ‘mental’ thought process routinely done ‘automatically’ when making small-scale adjustments to ways of working.
In some ways it feels almost it's quite contrived what you are doing with it because you have got to do each individual step rather than just say this is how I think we should deal with it…you just tend to make changes and just roll with it, maybe why it is a bit more difficult for us to try and sort of implement it in the way we work. [GP 7]
I would say they are a bit of a pain…probably about 50% of the [improvement] work that we have done has not been recorded by a PDSA…just breaking it down and recording it is time consuming and a bit of a faff…too many paper exercises for us as practitioners. [Practice Nurse 3]
It's a good way to implement change, how to make your systems better…you can make changes quickly you know it doesn't need to be as cumbersome as, you know, audit… [GP 2]
Safety climate assessment survey
Participants reported mixed views and experiences of participating in the safety climate survey intervention and holding related team-based meetings to discuss and reflect upon the feedback report of survey findings that was provided as part of the programme. In the early stage of the programme, many participants reported multiple issues related to survey participation mainly to do with the online technology used but also in interpreting the relevance of their survey findings, particularly in comparison with other GP teams. This caused confusion, raised concerns around statistical meaningfulness, and fomented negativity about this activity for some participants who reported very limited learning and improvement from survey participation. This prompted a review of how the climate survey was designed and delivered by the programme leadership, resulting in a number of technical and educational support refinements during the programme, with later programme participants reporting increasingly positive feedback on the usefulness and impact of the climate survey.
For those participants who reportedly engaged well with the climate survey at the outset, this activity was perceived to lead to valuable, if occasionally difficult, team discussions on patient safety systems, internal and external communication problems, and practice leadership issues. Participants also indicated that survey participation provided welcome reassurance on safety performance; highlighted misplaced perceptions of how safe the practice was thought to be; teased out why there were marked differences in clinical and non-clinical responses to the survey; and identified areas for improvement within practices.
The whole area round the climate survey was disappointing, I would say that was the failure point of the programme for us…the way it is done just now just hasn't worked in this practice. [Practice Nurse 1]
I felt that we didn't get enough input prior to completing it, it was just e-mailed, asked to complete it but we didn't know what it was about…we felt we didn't have enough explanation. [Practice Manager 3]
Many of us in the practice doctors and staff hadn't really made the link that us failing to communicate in some other ways was a threat to patient safety so we opened that up for discussion, we had a lot of really good stuff came out of it, a lot of very open discussion. [GP7]
Trigger review method
Mixed views were reported on the perceived purpose and usefulness of the TRM, with some participants finding this intervention daunting and threatening. It was clear from early evaluation feedback that ‘reliable measurement’ of harm rates in the electronic records of specific patient groups was not being attempted by most participants, who reported problems with how they perceived and interpreted the ‘harm measurement’ element of the TRM. Many indicated that they struggled to identify enough harm cases to calculate a ‘reliable measure of harm’ for the specific patient group being reviewed. Instead, those who engaged well with the tool reported an alternative application: identifying previously unknown incidences of patient harm in electronic records related to, for example, the altering of medications, inadequate recording of adverse drug reactions and drug allergies, and lack of clinical follow-up of patients. This was reported to have prompted greater scrutiny of medication prescribing and monitoring systems, as well as improved coding of adverse events, as a means to manage clinical risks and potentially reduce avoidable harm to patients in future.
On the basis of these early programme experiences, the purpose and application of the TRM was adapted to a method of ‘flagging up’ previously undetected patient safety incidents (eg, ‘latent risks’, ‘near miss events’ and ‘adverse events’) in specific high-risk patient groups (eg, those taking warfarin). In this regard, the TRM purpose was altered by these participants to a method for identifying patient safety-related learning needs through highlighting suboptimal processes and general quality of care issues. A clear facilitating factor was the provision of one-to-one training by a medical doctor experienced in the method, which was associated with its perceived successful application by participants.
[we] discovered quite a few people whose haemoglobin was quite low for no obvious reason, we've now built in a regular haemoglobin review into patients on Warfarin therapy. [GP2]
Occasionally trends in blood counts rather than absolute values had been missed, we have definitely now got procedures in place that pick those up. [GP4]
[The TRM] identified near misses that would never have otherwise been unveiled to anybody ever but had very significant learning. [GP1]
The pilot collaborative programme largely achieved its objective of capturing key perceptions and experiences of participants, and identifying the facilitators and barriers associated with the range of novel improvement interventions being tested. Encouragingly, most participants valued the educational benefits of being involved, such as learning about safety and improvement theory and methods, having protected time for team-based reflection, and participating in peer-to-peer learning. The findings confirmed and refuted some aspects of the initial programme theories, which enabled us to refine initial assumptions about how some of the interventions were working (or otherwise) and why, and to make mid-programme or end-of-programme corrections to their purpose and delivery. This evidence has since informed decision-making to further refine and spread the implementation of the programme at scale on a national basis.
While the Breakthrough Series collaborative concept was well received, similar to previous research,19 the overall approach also provided some evidence of the potential transferability and utility of most interventions after adaptation to the Scottish primary care context (particularly clinical care bundles, but also safety climate assessment and TRM). However, engagement with MFI/PDSA change cycles was more problematic, and is perhaps worthy of future research exploration, particularly given the findings of a recent systematic review30 which also found issues with the understanding, application and reporting of this method.
Additionally, clear challenges were identified around protected time to participate, competing workload priorities and wider engagement of GP teams beyond core programme participants. Further work is also necessary to ensure that data collection and monitoring systems are improved, that there is greater realism around what can be achieved, and that the unintended consequences of participation in such programmes are considered—all are well-established improvement challenges.31–33 The main theory-related learning suggests the following refinements:
- High system reliability appears difficult to achieve with care bundles in some circumstances, as success can depend strongly on patient compliance.
- Care bundle content should be based on strong evidence to be professionally acceptable, while larger patient sample sizes are needed to demonstrate system improvements and clinical impacts.
- The TRM does not appear feasible as a tool for measuring harm, and was reportedly more useful for identifying learning and improvement opportunities related to (previously undetected) patient safety incidents.
- TRM training was associated with more successful implementation, while the method appeared to have greater utility when used with specific ‘high-risk’ patient populations rather than random samples of the GP patient population.
- Feedback on safety climate measures and performance needs to be simplified and illustrated with graphs, with minimal use of even basic statistical concepts, as these may not be well understood and can cause confusion. Careful consideration should be given to introducing the concept, explaining its purpose, formatting the feedback reports and providing basic guidance for participants.
- PDSA cycles were not generally well used: they were frequently viewed as unnecessary for rapid improvements, and the related documentation processes were thought cumbersome.
Caution must be exercised when interpreting the findings, largely due to potential bias because of the comparatively small number of mainly enthusiastic volunteer participants (early adopters), who over-represented general practices in the specialty training community. The programme was primarily a feasibility pilot and relatively short term, which meant that objective, longer term outcome measures were not collated at the national level on whether harm was actually reduced or care process reliability increased—participants reported examples from their locally held data to evaluators, but these could not be verified, so overall programme success was difficult to gauge. With hindsight, had participants had greater knowledge and experience of the interventions being tested, and had the sociocultural and technical issues uncovered during the pilot already been largely resolved, some of the patient-safety-related programme aims might have been more achievable (and measurable). The evaluation process was largely descriptive and would have benefited from a greater analytical focus, which is perhaps a reflection of the overambitious programme goals and the broad evaluation approach employed for this type of feasibility pilot. Future study of these specific improvement interventions (and related evaluation) will require much greater clarity about the social, technical and behavioural processes that need to be measured and altered to achieve the desired impacts on frontline practice.33
While improvement collaboratives, including the Breakthrough Series approach,22 are well established in many acute hospital settings, there is limited evidence of their implementation in primary care. Several studies have, however, been undertaken, and reported mixed findings in improving chronic obstructive pulmonary disease,34 diabetes care,35–38 advanced patient access,39 ,40 pressure ulcer care in nursing homes,41 chronic heart failure,42 and prehospital care for acute myocardial infarction and stroke.43 The common thread among these studies is that they focused improvement efforts entirely on a specific, well-defined disease or patient group using a standard collaborative approach. By contrast, the approach adopted in this pilot study was arguably unorthodox and, with hindsight, overambitious. Efforts were focused on multiple interventions and patient safety issues simultaneously. This included testing the feasibility of the Breakthrough Series collaborative approach at scale, piloting nascent improvement methods that had largely not been applied previously in this setting, either in Scotland or internationally, and challenging participants to reduce harm incidents or increase care delivery reliability.
The study findings are informing the (re)design and delivery of a planned future safety and improvement programme to be implemented nationally in Scottish general practices, before spreading to other primary care professional groups. Further testing and refinement of the programme interventions are ongoing with more representative groups of GP teams, with some having demonstrated promise in their reported potential to improve the reliability of clinical safety systems44 and identify previously undetected patient safety concerns.45 The potential for some of the tools to support QI evidence requirements for GP specialty training and medical appraisal and revalidation is also apparent.44–47
There is growing evidence of the impact of interventions to improve the safety and reliability of specific aspects of specialist hospital care—such as through the successful implementation of surgical checklists48 ,49 and clinical care bundles50 to reduce avoidable harms. However, there are still question marks over the effectiveness of such initiatives—and also large-scale collaborative programmes—in achieving and sustaining the desired improvements in the quality and safety of patient care.51–53 Overall, the evidence of what interventions work to enhance safety in primary care is less well developed,15 with related evaluations being predominantly observational, conducted in single sites and of variable quality.25 ,32 ,33 This evaluation provides some evidence of the transferability and utility of specific safety improvement methods in primary care, and sheds some light on related implementation issues that can arise.
The delivery of the SIPC pilot programme was positively received by the great majority of participants, with many reporting improvements in practice safety systems, team working and communication with colleagues and patients. However, some practices struggled with understanding the concepts, relevance and application of many of the improvement interventions tested. The evaluation provided valuable insights into how the interventions were used, adapted and contextualised as the programme evolved, which has already led to further refinements and improvements in application. A number of social and technical implementation challenges (eg, appropriate use of financial incentives, information technology support and availability of data, and workload demands) were also identified that need to be taken into consideration when spreading this approach at larger scale. To achieve this, policy and organisational levers will be necessary to implement the programme interventions on a formal basis at the national level in general practice. However, this will require significant resources to support the design of infrastructure to enable the routine collection of data for improvement (both narrative and numerical), and to systematically build capacity and capability in improvement skills among the primary care workforce.
The authors wish to offer sincere thanks to all GP teams, NHS Board teams, Healthcare Improvement Scotland and NES colleagues, and public-patient partners who participated and/or supported the pilot programme. They also wish to acknowledge and thank the contribution of The Health Foundation for their continued advice and support with this improvement work.
Contributors PB co-designed the study, assisted with data analysis and interpretation and cowrote the initial manuscript. LH co-designed the study, conducted interviews and analysed and interpreted the data, and cowrote the paper. AB provided advice on study design and theory, data analysis and interpretation and contributed to the writing and critical appraisal of the manuscript. NH and JG designed and led the programme intervention, contributed to evaluation data interpretation and to the critical evaluation and writing of the manuscript. All authors approved the final version of the manuscript.
Funding The funding for this study was provided by the UK Health Foundation as part of their ‘Closing the Gap’ programme initiative.
Competing interests None declared.
Ethics approval The study and evaluation protocols were prescreened by the West of Scotland Research Ethics Committee, but the work was judged to be a service development that did not require ethical approval.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.