Article Text


Original research
Responses of physicians to an objective safety and quality knowledge test: a cross-sectional study
Harry B Burke1, Heidi B King1,2
  1. Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
  2. Patient Safety Program, Defense Health Agency, Bethesda, Maryland, USA
Correspondence to Dr Harry B Burke; harry.burke{at}usuhs.edu

Abstract

Objective For physicians to practice safe, high-quality medicine, they must have sufficient safety and quality knowledge. Although a great deal is known about the safety and quality perceptions, attitudes and beliefs of physicians, little is known about their safety and quality knowledge. This study tested the objective safety and quality knowledge of practicing US primary care physicians.

Design Cross-sectional objective test of safety and quality knowledge.

Setting Primary care physicians practicing in the USA.

Participants The study consisted of 518 practicing US primary care physicians who responded to an email invitation; 54% were family medicine and 46% were internal medicine physicians. The response rate was 66%.

Intervention The physicians took a 24-question multiple-choice test over the internet.

Outcome The outcome was the percent correct.

Results The average number of correct answers was 11.4 (SD, 2.69), 48% correct. The three common clinical vignette questions were answered correctly by 45% of the physicians. The five common radiation exposure questions were answered correctly by 40% of the physicians. The seven common healthcare quality and safety questions were answered correctly by 43% of the physicians. The seven questions on Donabedian’s model of structure, process and outcome measures were answered correctly by 67% of the physicians. The two questions on the Institute of Medicine’s definitions of quality and safety were answered correctly by 19.5% of the physicians.

Conclusion Forty-eight per cent of the physicians’ answers to the objective safety and quality questions were correct. To our knowledge, this is the first assessment of the objective safety and quality knowledge of practicing US primary care physicians.

  • quality in health care
  • health & safety
  • medical education & training



This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • A strength is that it consisted of practicing primary care physicians.

  • A strength is that it is representative of US physicians.

  • A strength is its large sample size.

  • A limitation is that there is no canonical safety and quality corpus.

Introduction

The landmark Institute of Medicine (IOM) report, To Err Is Human: Building a Safer Health System,1 described a medical system that had become a clinical colossus, but its safety and quality had not kept pace with its size and complexity. It presented a system that was committing more errors yet detecting and correcting only a small fraction of them. It described a system with significant safety and quality deficits, some of which resulted in patient injury and death, and it recommended sweeping healthcare reforms.

Since To Err Is Human was published more than 20 years ago, a great deal of work has been done on improving safety and quality,2 yet a recent IOM report, Best Care at Lower Cost: The Path to Continuously Learning Health Care in America,3 and a recent study4 suggest that many of the errors reported in To Err Is Human are continuing. The persistence and frequency of errors, and our reduced tolerance for errors, have heightened the importance of medical safety and quality.3

Although a great deal is known about the safety and quality perceptions, attitudes, opinions and beliefs of physicians,5–9 little is known about their safety and quality knowledge. We designed a cross-sectional objective test of the safety and quality knowledge of practicing physicians. We believe this to be the first test of the safety and quality knowledge of practicing US primary care physicians.

Methods

This is a cross-sectional, one-time objective test of the safety and quality knowledge of practicing US General Internal Medicine and Family Medicine physicians. Its participants were drawn from a national panel of physicians registered in Medscape. Physicians who completed the test received a US$30 Amazon gift card. The study was budgeted for 518 completed tests. Seven hundred and eighty-eight practicing primary care physicians were randomly selected and solicited via email, which resulted in a 66% response rate. The test instrument was web-based and consisted of 24 multiple-choice questions. The objective questions were taken from widely available safety and quality textbooks and the clinical literature. They were designed to reflect the practical safety and quality knowledge of practicing physicians. There were five areas of questions: patient management; radiation risk; general safety and quality; structure, process and outcome; and quality and safety definitions.

Three common patient management vignettes addressed the physicians’ clinical quality knowledge. For the breast cancer vignette, there were five possible answers.10 For the renal mass vignette, the American College of Radiology (ACR) Appropriateness Criteria11 gave CT of the abdomen without and with intravenous contrast the highest appropriateness rating of 9, but this modality also had the highest radiation level. Ultrasound of the kidney and retroperitoneum with duplex Doppler had the next highest rating of 8. The ACR states that appropriateness ratings of 9, 8 and 7 are ‘Usually appropriate’. There were eight possible answers to the renal mass question.12 The lung cancer screening vignette had five possible answers.13

Five questions addressed physicians’ knowledge of common radiation risks.14–16 There were four choices per question, and adjacent choices differed by one base-10 log; in other words, the four possible answers to a question spanned a four-log range. Seven questions addressed common healthcare system safety and quality issues.17–20 There were five choices per question. Seven questions addressed Donabedian’s21 model for assessing safety and quality in terms of structure, process and outcomes. There were three choices per question.
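As a hypothetical illustration of this log spacing (our own example, not one of the actual test items), a dose question might offer the choices 0.1, 1, 10 and 100 mSv, that is, 10^-1, 10^0, 10^1 and 10^2 mSv, so that each choice differs from its neighbours by a factor of 10 and the set spans four orders of magnitude.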

Two questions asked physicians to identify common quality22 and safety definitions from the IOM. There were five choices per question. The most difficult of the 24 questions was the IOM’s definition of safety. Limiting the definition to ‘freedom from accidental injury’ would not have distinguished it from other safety definitions. Therefore, the correct answer included the rest of the IOM definition, ‘where accidental injury can be due to error, as either the failure of a planned action to be completed as intended or the use of the wrong plan to achieve an aim’.

The questions and answers are shown in table 1. The questions were presented in a random order and no changes were made to the questions during testing. The only instruction the physicians received was that they had to answer all the questions. The de-identified results were sent to the investigators by Medscape. The questions were not weighted. For each question, the per cent correct was calculated and, for each topic, the average per cent correct was calculated. The χ2 test was used to assess demographic differences and whether the categorical answer frequencies differed from chance, and Student’s t-test was used to compare continuous variables. The analyses were performed using R (www.R-project.org) and significance was set at a probability of less than 0.05.
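The following is a minimal sketch, in R, of the kind of analysis described above. The counts and scores are made-up illustrative values, not the study data, and the variable names are our own.

# Goodness-of-fit chi-squared test: do the answer frequencies for one
# hypothetical five-choice question differ from the uniform (chance) distribution?
observed <- c(120, 95, 160, 83, 60)                   # hypothetical counts, n = 518
chisq.test(observed, p = rep(1/5, 5))

# Student's t-test comparing proportion-correct scores of two age groups
# (var.equal = TRUE requests the classical Student's t-test in R).
set.seed(1)
score_under60 <- rnorm(400, mean = 0.48, sd = 0.11)   # hypothetical scores
score_over60  <- rnorm(118, mean = 0.45, sd = 0.12)
t.test(score_under60, score_over60, var.equal = TRUE)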

Table 1

Test questions and per cent correct. The asterisk denotes the correct answer.

Patient and public involvement

There was no patient and public involvement.

Results

The study demographics of the 518 physicians are shown in table 2. The medical specialty of the participants was 46% general internal medicine and 54% family medicine. The gender of the participants was 64% men and 35% women. There were no significant differences between the participants and US practicing physicians in terms of specialty, gender and age.23 24 There were no significant differences in the test scores by specialty, gender and age, except for slightly lower scores for physicians over 60 years of age compared with those under 60 years of age, 0.45 (SD, 0.12) and 0.48 (SD, 0.11), respectively, p=0.003. The median time to take the test was 10.1 minutes.

Table 2

Physician characteristics*

The results are shown in table 1. The average number of correct answers was 11.4 (SD, 2.69), 48% correct. Every physician answered at least four questions correctly and no physician answered more than 20 questions correctly (figure 1). For each question, the distribution of answers was significantly different from that expected by chance (p<0.01). The mean per cent correct for each of the five topics is shown in figure 2.

Figure 1

Percentage of subjects answering the questions correctly.

Figure 2

An integrated view of the mean per cent correct for each of the five topic domains. Manage, patient management; rad risk, radiation risk; safety, general safety and quality; SPO, structure, process and outcome; Q&S def, quality and safety definitions.

In terms of the three common management vignettes, the average number of correct answers was 1.3 (SD, 0.90), 45% correct. For the breast cancer vignette, 55% of the physicians knew how to manage a woman with breast cancer who tested positive for a deleterious BRCA mutation. For the renal mass vignette, 46% knew the work-up for an indeterminate renal mass. For the lung cancer screening vignette, 33% knew the current approach to screening for lung cancer. Forty-six per cent of the physicians correctly balanced the radiation risk against the marginal additional benefit of CT and chose the ultrasound test. These results are consistent with a recent study that found that physicians rarely have accurate expectations of the harms and benefits of clinical interventions, which the investigators attributed to a lack of knowledge.25

In terms of common radiation risks, the average number of correct answers was 2 (SD, 1.14), 40% correct. Sixty-one per cent of the physicians correctly identified the radiation exposure delivered by a chest X-ray and 60% correctly identified the radiation exposure delivered by a mammogram, but only 45% could correctly identify the radiation exposure delivered by a CT scan of the abdomen and pelvis. Furthermore, in terms of population risk, only 25% of the physicians correctly chose the annual natural radiation exposure of an individual and only 11% knew the degree to which a 20 mSv (millisievert) radiation exposure increases the population risk of a fatal cancer. These results are consistent with a systematic review of CT and other radiographical procedures that found a similarly low level of radiation exposure knowledge among physicians.26
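As a rough worked example of the magnitude involved (using the widely cited ICRP nominal fatal-cancer risk coefficient of roughly 5% per sievert, a reference value we assume here rather than the article’s answer key): 20 mSv = 0.02 Sv, and 0.02 Sv x 0.05 per Sv = 0.001, that is, on the order of one additional fatal cancer per 1000 people exposed.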

In terms of common healthcare system safety and quality issues, the average number of correct answers was 3 (SD, 1.27), 43% correct. Eighty-eight per cent of the physicians knew the main hospital accrediting body, 74% knew the definition of beneficence and 53% knew the Swiss Cheese model of accidents. But their accuracy was lower for questions regarding quality improvement tools, medication errors, 6-sigma and harm detection, which were answered correctly by 34%, 19%, 19% and 14% of the physicians, respectively. These results are consistent with a recent study of generalist and subspecialist internal medicine physicians, which found that they correctly answered 43% of the questions regarding the US Food and Drug Administration (FDA) approval process.27 They are also consistent with a study of physician knowledge of central line-associated bloodstream infection quality metrics, which found that they answered 61% of the questions correctly.28

In terms of Donabedian’s model, the average number of correct answers was 4.7 (SD, 1.50), 67% correct. This set of questions contained the easiest question, namely, whether ‘The percentage of patients who are satisfied with their care’ was a structure, process or outcome measure. Ninety-six per cent of the physicians correctly answered that it was an outcome. The physicians were very accurate in classifying nosocomial infections (89%) and staffing (84%), but they were only 53% correct in classifying beta-blockers, 53% correct in classifying credentials, 50% correct in classifying the diabetic foot exam and 45% correct in classifying discharge instructions.

In terms of common safety and quality definitions, the average number of correct answers was 0.39 (SD, 0.54), 20% correct. The definitions were published 19 years ago in To Err Is Human. Despite the high visibility of To Err Is Human, only 33% of the physicians correctly identified the IOM definition of quality and only 6% of physicians knew the correct definition of safety.

Discussion

US physicians answered 48% of the safety and quality questions correctly. They performed best on questions that required little safety and quality knowledge and worst on questions that required basic safety and quality knowledge. Our population was similar to the US physician population in terms of specialty, gender and age. There were no significant differences in scores by specialty, gender or age, although the scores of physicians over 60 years of age were slightly lower. These results are consistent with studies of physician knowledge of clinical harms and benefits,25 radiology knowledge,26 knowledge of the FDA approval process27 and of quality metrics.28 29

Physicians want to practice safe, high quality medicine,30 but they may not be aware of how much they need to know about safety and quality. Furthermore, physicians need time to learn about safety and quality, and they need the time and expertise required to use the information in their electronic health records to monitor the safety and quality of their practice. Although many healthcare systems consider themselves to be healthcare learning systems,29 that belief does not always translate into their assisting frontline clinicians in improving their safety and quality knowledge.30 31

The main limitation of this study is that there is no canonical safety and quality corpus. Another limitation is that we may have overestimated physician knowledge because the test used multiple-choice questions, which probe recognition. Physician scores might have been substantially lower had they been asked to recall the correct answer to each question.

Conclusions

Only 48% of the physicians’ answers to the safety and quality questions were correct. A national system has been put in place at the resident level to improve physician safety and quality knowledge. Since knowledge is a prerequisite for performance, we expect that future physicians’ increased knowledge will result in less patient harm and improved clinical outcomes. Future studies should measure and track changes in physicians’ objective knowledge of safety and quality. We believe this to be the first test of the objective safety and quality knowledge of practicing US primary care physicians.

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information. The frequency counts for each question are the data and they are provided in Table 1.

Ethics statements

Patient consent for publication

Ethics approval

This study was approved by the Uniformed Services University of the Health Sciences Institutional Review Board.

References

Footnotes

  • Contributors HBB: originated the study idea and designed the research project, analysed the study data and drafted the manuscript. HK: made important contributions to designing the research project and analysing the study data, and made significant contributions to the writing of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Disclaimer The findings and conclusions of this paper are those of the authors and do not necessarily represent the positions or views of the US Department of Defense, the Military Health System, the Defense Health Agency or the Uniformed Services University of the Health Sciences.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.