
Original research
Quality of informed consent documents among US hospitals: a cross-sectional study
  1. Erica S Spatz1,2,
  2. Haikun Bao2,
  3. Jeph Herrin1,
  4. Vrunda Desai3,
  5. Sriram Ramanan2,
  6. Lynette Lines2,
  7. Rebecca Dendy2,
  8. Susannah M Bernheim2,4,
  9. Harlan M Krumholz1,2,
  10. Zhenqiu Lin2,
  11. Lisa G Suter2,5
  1. Section of Cardiovascular Medicine, Yale University, New Haven, Connecticut, USA
  2. Center for Outcomes Research and Evaluation, Yale New Haven Health System, New Haven, Connecticut, USA
  3. Obstetrics and Gynecology, Yale School of Medicine, New Haven, Connecticut, USA
  4. Medicine, Yale School of Medicine, New Haven, Connecticut, USA
  5. Section of Rheumatology, Yale School of Medicine, New Haven, Connecticut, USA

  Correspondence to Dr Erica S Spatz; erica.spatz@yale.edu

Abstract

Objective To determine whether informed consent documents for surgical procedures performed in US hospitals meet a minimum standard of quality, we developed and tested a quality measure of informed consent documents.

Design Retrospective observational study of informed consent documents.

Setting 25 US hospitals, diverse in size and geographical region.

Cohort Among Medicare fee-for-service patients undergoing elective procedures in participating hospitals, we assessed the informed consent documents associated with these procedures. We aimed to review 100 qualifying procedures per hospital; the selected sample was representative of the procedure types performed at each hospital.

Primary outcome The outcome was hospital quality of informed consent documents, assessed by two independent raters using an eight-item instrument previously developed for this measure and scored on a scale of 0–20, with 20 representing the highest quality. The outcome was reported as the mean hospital document score and the proportion of documents meeting a quality threshold of 10. Reliability of the hospital score was determined based on subsets of randomly selected documents; face validity was assessed using stakeholder feedback.

Results Among 2480 informed consent documents from 25 hospitals, mean hospital scores ranged from 0.6 (95% CI 0.3 to 0.9) to 10.8 (95% CI 10.0 to 11.6). Most hospitals had at least one document that scored at least 10 out of 20 points, but only two hospitals had >50% of their documents score above the 10-point threshold. The Spearman correlation of the measure score was 0.92. Stakeholders reported that the measure was important, though some felt it did not go far enough to assess informed consent quality.

Conclusion All hospitals performed poorly on a measure of informed consent document quality, though there was some variation across hospitals. Measuring the quality of hospitals' informed consent documents can serve as a first step in driving attention to gaps in quality.

  • informed consent
  • elective procedures
  • patient autonomy
  • quality measurement

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • Informed consent documents for a broad range of elective procedures were evaluated using a tool that was rigorously developed with patient input to determine whether a minimum standard of quality is being met.

  • Hospital performance on the measure of informed consent quality was assessed using the mean document quality score and the proportion of documents meeting a minimum score of 10 (out of 20).

  • Informed consent documents are necessary though not sufficient to assess the total quality of the informed consent process.

Introduction

In the USA, hospitals are responsible for their informed consent (IC) processes and forms, which results in wide variation in IC practices. Standards for obtaining IC for clinical procedures do exist, but they do not specify the content, presentation or timing of written information shared with the patient. For example, the Centers for Medicare and Medicaid Services (CMS) Condition of Participation for Hospitals states that IC documents should include the name of the hospital, procedure and practitioner performing the procedure, along with a statement certifying that the procedure, anticipated benefits, material risks and alternative treatment options were explained to the patient or the patient's legal representative.1 The Joint Commission mandates that hospitals develop IC processes and forms that reference that a discussion took place between the clinician and the patient about the risks, benefits and alternatives to the proposed procedure, including the option to elect to receive no treatment.2 3 A few states mandate that certain procedure-specific risks be described in writing, but there is no comprehensive guidance for IC documents.4 5 Unfortunately, even if it is assumed that most IC documents comply with state laws, CMS and Joint Commission standards, prior studies demonstrate that IC documents frequently lack critical information to support patient-centred, informed decision making.6–8

Instead, IC documents for elective procedures are typically generic, containing legally approved language that complies with state laws and hospital policies, combined with blank space for clinicians to input the details of the procedure and its specific risks, benefits and alternatives.9–11 An unfortunate consequence of this approach is that the most important information about the procedure is often missing, illegible or incomprehensible due to the use of acronyms and medical jargon.8 12 13 Moreover, the documents are often shared minutes before the start of a procedure, a time when patients are vulnerable and least likely to ask questions, leaving practically no room for informed decision making.8 As such, several stakeholders, including patients and patient advocacy groups, have called for more patient-centred IC processes, both in clinical practice and in research.14–19

Measures that evaluate the quality of IC documents can identify gaps and lead to meaningful improvements in this critical component of the IC process. Accordingly, in 2013, under contract with CMS, we first developed criteria for a tool to assess the quality of IC documents (described in a companion manuscript) and tested this tool to establish reliability and face validity. Using this tool, we developed a hospital measure of the quality of IC documents for elective procedures and tested its performance among 25 volunteer hospitals. We hypothesised that this measure could feasibly distinguish IC document quality across hospitals. This manuscript describes the development and testing of the measure, which is intended to identify gaps in quality and motivate hospitals to improve the IC process.

Methods

Overview

We developed a measure to assess the quality of IC documents associated with elective procedures performed as part of routine clinical care among hospitalised Medicare fee-for-service (FFS) patients. We focused on IC documents for elective procedures for several reasons. We expect IC to be standard practice for these procedures. More importantly, patients undergoing elective procedures would greatly benefit from a measure aimed at optimising communication about the risks, benefits and purpose of the procedure because elective procedures are generally considered ‘preference sensitive’ (meaning there are reasonable alternatives to the procedure) and different patients may choose different options depending on their preferences, values and goals.

The measure was developed in accordance with accepted standards established by the National Quality Forum.20 First, we developed an instrument to support the IC quality measure. Next, we piloted the measure in 25 hospitals, recruited through two partnering organisations, the Hospital Services Advisory Group (HSAG) and Premier, and conducted a cross-sectional study to assess hospital performance on the measure, along with measure reliability and validity. This work was supported by a contract with CMS.

Patient and public involvement

This measure was developed with input from a technical expert panel (TEP). This panel was composed of clinicians, patient advocates, hospital administrators, attorneys and experts in bioethics. Collectively, the TEP members brought expertise and perspectives in: IC and ethical decision making; patient care, engagement and communication; hospital administration and risk management; psychometric tool development; and performance measurement and quality improvement.

In addition, we collaborated with a working group of patients and patient advocates to develop the instrument (Abstraction Tool) used to evaluate hospital IC documents, and to represent the patient perspective throughout measure development. The patients and patient advocates came from diverse backgrounds and had prior knowledge of or experiences with IC, either as patients, caregivers, advocates for vulnerable populations, legal representatives or patient safety experts. In addition to providing critical input regarding the items to be included in the Abstraction Tool, the working group and TEP helped to determine how the measure result would be calculated and reported. The final measure specifications were made available to TEP and working group members, and to the broader public through CMS.

Measure cohort

The measure cohort was defined as the IC documents associated with a subset of elective, hospital-based inpatient procedures performed in Medicare FFS beneficiaries, aged 18 years and over, for which IC is considered standard practice.

To identify the cohort of electively performed procedures (and their associated IC documents), we used Medicare Part A data from 1 January 2013 to 31 December 2015. Elective medical procedures and surgeries (herein referred to as procedures) were selected from 10 distinct specialties (neurosurgery; ophthalmology; otolaryngology; cardiothoracic; vascular; general; urology; obstetrics and gynaecology; orthopaedics; and plastic surgery). To determine whether the procedure was performed on an elective basis, we applied the Planned Readmission Algorithm, developed and validated for use in CMS's unplanned readmission measures.21 The algorithm identifies procedures that are: (1) 'always' or 'potentially' planned procedures and (2) not associated with an acute medical discharge diagnosis code. In addition, the elective status of the procedure was further assessed during the IC quality review, in which abstractors were asked to flag if the procedure was urgently or emergently performed, based on information provided in the IC document and operative report. We excluded organ transplant procedures, since these are commonly performed on an emergent basis and typically have unique IC processes; non-invasive radiographic diagnostic tests (eg, CT scan with contrast), since IC standards may differ from those for invasive procedures and surgeries; procedures that are conducted over several encounters (eg, dialysis, chemotherapy and radiation therapy), since IC is likely only conducted prior to the first encounter; and procedures performed during the same encounter as another already selected procedure, since procedures performed after the initial procedure but in the same encounter are less likely to be elective. For a full list of procedures deemed eligible for cohort selection, see the publicly available methodology report (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html).
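
For illustration only, the eligibility logic described above can be sketched as a simple filter over procedure claims. The field names, category labels and exclusion sets in this sketch are hypothetical placeholders; the actual specification is the CMS Planned Readmission Algorithm and the publicly available methodology report.

```python
# Minimal sketch of the cohort eligibility filter described above. Field
# names (eg, claim["planned_status"]) and the exclusion categories are
# hypothetical placeholders, not the real algorithm specification.

EXCLUDED_CATEGORIES = {
    "organ_transplant",                # unique IC processes, often emergent
    "noninvasive_radiographic_test",   # eg, CT scan with contrast
    "multi_encounter_therapy",         # eg, dialysis, chemotherapy, radiation
}

def is_eligible(claim, already_selected_encounters):
    """Return True if a procedure claim qualifies for the measure cohort."""
    # (1) 'always' or 'potentially' planned per the Planned Readmission Algorithm
    if claim["planned_status"] not in ("always_planned", "potentially_planned"):
        return False
    # (2) not associated with an acute medical discharge diagnosis
    if claim["acute_discharge_diagnosis"]:
        return False
    # Exclusions listed in the text
    if claim["procedure_category"] in EXCLUDED_CATEGORIES:
        return False
    # Skip additional procedures in an encounter that already contributed a case
    if claim["encounter_id"] in already_selected_encounters:
        return False
    return True
```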

Since the types and volume of elective procedures performed vary within each hospital, we selected procedures that were representative of the procedure mix at that hospital during the measurement period. Specifically, and to reduce bias, we randomly sampled cases within the 10 specialties, with the number of cases sampled from each specialty being proportional to the total number of cases performed at the hospital. We selected up to 150 cases per hospital, with the goal of reviewing the first 100 consent documents to calculate a hospital quality score. We included the 50 surplus cases to account for the possibility that a hospital might be unable to locate the medical record or that the identified consent document was in a language other than English, which was not feasible for us to review. This sample size was based on our prior review, which found heterogeneity in the quality of IC documents within and across procedure types,8 and on the expectation that 100 documents per hospital would allow us to detect this heterogeneity and reduce the risk of selecting IC documents that were not representative of overall hospital quality.
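
The proportional sampling step can be sketched as follows, assuming each hospital's eligible cases have been grouped by specialty. The data structure and the fixed random seed are illustrative; only the overall target of 150 cases per hospital comes from the sampling approach described above.

```python
import random

def sample_cases(cases_by_specialty, total_target=150, seed=0):
    """Proportionally sample one hospital's cases across specialties.

    cases_by_specialty: dict mapping specialty name -> list of case IDs
    (a hypothetical structure); returns up to total_target sampled IDs.
    """
    rng = random.Random(seed)
    n_total = sum(len(cases) for cases in cases_by_specialty.values())
    target = min(total_target, n_total)
    sampled = []
    for cases in cases_by_specialty.values():
        # Draw from each specialty in proportion to its share of total volume
        n_draw = min(len(cases), round(target * len(cases) / n_total))
        sampled.extend(rng.sample(cases, n_draw))
    rng.shuffle(sampled)
    return sampled[:target]
```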

Measure outcome

The outcome is the quality of IC documents for hospital-performed elective procedures. Specifically, IC documents were reviewed using an eight-item instrument previously developed for the purposes of the measure (table 1).

Table 1

Abstraction tool item scoring

The instrument assesses three aspects of IC document quality: content, presentation and timing. It was developed in collaboration with patients and other stakeholders and is intended to represent a minimum set of standards for IC documents, deemed to be meaningful to patients, feasible to evaluate and consistent with the recommendations set forth by state laws, government agencies and professional societies. The instrument assesses whether the following information is conveyed in the document: a description of the procedure itself; how the procedure will be performed; the rationale for why the procedure will be performed; and the risks, benefits and alternatives to the procedure. Readability assessment using standardised tools was considered though not implemented, owing to feasibility concerns and stakeholder input. In addition, there is an item to assess the timing of when the patient received the consent document in relation to the procedure date (usually designated by the date that the document was signed, if not otherwise documented). Each item had previously been iteratively tested for validity and reliability in a development sample of IC documents from eight distinct hospitals and found to meet the criterion of >80% agreement. Each item was given a weighted score, and the weights summed to a total potential score of 20, with a score of 0 representing the lowest quality IC document and 20 representing the highest quality document. The timing item was given the most weight (5 points) based on stakeholder feedback that moving the IC discussion to before the day of the procedure might be the most important standard for ensuring that patients have ample time to consider the risks and benefits of the procedure, along with alternatives. Abstractors received 1 hour of training on how to rate each item, which involved reviewing an Abstraction Tool Manual that included information about the intent of each item being evaluated, what qualifies, what does not qualify and examples of qualifying and non-qualifying text. These examples came from IC documents related to different types of procedures and surgeries. Abstractors were guided to evaluate IC documents through the patient's eyes (see online supplementary file).
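
To make the scoring arithmetic concrete, the sketch below sums weighted item scores into the 0–20 document score. Only the 5-point weight for the timing item is stated above; the remaining item names and weights are hypothetical placeholders chosen so that the weights sum to 20, with the true values given in table 1 and the Abstraction Tool Manual.

```python
# Hypothetical item names and weights: only the 5-point timing weight is
# stated in the text; the other values are placeholders chosen to sum to 20.
ITEM_WEIGHTS = {
    "procedure_described_in_lay_language": 3,
    "how_procedure_performed": 2,
    "rationale_for_procedure": 2,
    "procedure_specific_risks": 3,
    "patient_oriented_benefits": 2,
    "specific_alternatives": 2,
    "content_typed_not_handwritten": 1,
    "signed_at_least_one_day_before_procedure": 5,  # weight given in the text
}

def score_document(item_results):
    """Score one IC document on the 0-20 scale.

    item_results: dict mapping item name -> bool (criterion met or not).
    """
    return sum(weight for item, weight in ITEM_WEIGHTS.items()
               if item_results.get(item, False))
```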

The final hospital-level measure outcome is calculated by aggregating the scores of all sampled IC documents, as assessed using the instrument, for each hospital. We did not risk adjust the outcome, since demographic and clinical factors should not affect IC document quality. In accordance with stakeholder feedback, and consistent with other measures of patient-centred practices, hospital-level performance is also reported as the percentage of a hospital's documents that met or exceeded a quality score of 10 (out of 20). The threshold of 10 was set by the patient working group as a starting point; the expectation, however, was that over time hospitals could work towards 20 out of 20, since hospitals ultimately control the quality of the IC document and process.
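
The hospital-level aggregation can be illustrated with a minimal sketch that computes the mean document score and the percentage of documents at or above the 10-point threshold; the inclusive comparison is an assumption for illustration, and the exact threshold rule follows the measure specification.

```python
def hospital_results(document_scores, threshold=10):
    """Aggregate per-document scores (0-20) into hospital-level results."""
    n = len(document_scores)
    return {
        "mean_score": sum(document_scores) / n,
        "pct_meeting_threshold": 100 * sum(s >= threshold for s in document_scores) / n,
    }

# Example usage with the ~100 sampled documents for one hospital:
# hospital_results([score_document(r) for r in abstracted_item_results])
```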

Assessment of hospital performance on the measure

We piloted the IC measure in a cohort of 25 volunteer hospitals from 11 states. The hospitals ranged in size, teaching status and rural/urban location. Participating hospitals agreed to provide the IC documents and operative reports of up to 100 cases that met cohort criteria. We assessed the following outcomes at both the individual and hospital level: item performance (proportion of documents meeting each item in the instrument), mean document score and the percentage of documents meeting potential quality thresholds of 5, 10 and 15 points.

Measure reliability and validity

To assess reliability of the outcome, we tested the inter-rater reliability of IC document scores. Specifically, two experienced abstractors reabstracted a subset of previously abstracted documents: 10 IC documents were randomly selected from each hospital for review (250 documents in total). Inter-rater reliability was assessed using the Spearman correlation and the intraclass correlation coefficient (ICC).
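
The two reliability statistics can be computed as sketched below, using scipy for the Spearman correlation and the Shrout and Fleiss formula for ICC(2,1). This is an illustration of the statistics named above, not the authors' analysis code, and the paired scores shown are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_documents, 2) holding the score given to each
    document by the original abstraction and the re-abstraction.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-document means
    col_means = x.mean(axis=0)   # per-rater means
    # Two-way ANOVA mean squares (Shrout & Fleiss 1979)
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    ss_err = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical paired document scores from the two abstractors
rater1 = [0, 5, 7, 10, 12, 3]
rater2 = [0, 5, 8, 10, 11, 3]
rho, _ = spearmanr(rater1, rater2)
icc = icc_2_1(np.column_stack([rater1, rater2]))
```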

To evaluate the final face validity of the measure, we surveyed the TEP. We asked each member to rate two statements using a six-point scale (1=strongly agree, 2=moderately agree, 3=somewhat agree, 4=somewhat disagree, 5=moderately disagree and 6=strongly disagree): (1) the Abstraction Tool, as currently specified, provides a valid assessment of the basic elements of IC documents and (2) the measure, as currently specified, provides a valid assessment of the quality of hospitals' IC documents. In addition, we asked the TEP to rate the following statements about the larger importance of measuring IC quality: (1) measuring the quality of IC is important; (2) the quality of IC documents is an important component of the IC process; (3) measuring the quality of the IC document is a valid approach for assessing an aspect of IC quality and (4) improving the quality of IC documents could meaningfully improve one aspect of the IC process for patients. These data were collected under a data use agreement with CMS and with the partnering hospitals and, as such, cannot be shared.

Results

Hospital performance on the measure

The final measurement sample comprised 2480 IC documents from the 25 participating hospitals; the median number of documents assessed per hospital was 100 (range 50 to 150). Some hospitals (n=8) had fewer qualifying cases than the requested 100 IC documents. Among the 2480 documents received from the 25 hospitals, there were substantial deficiencies on most items in the Abstraction Tool (table 2).

Table 2

Overall item-level performance across hospitals participating in the measure testing

Only 30% of documents contained language describing the procedure and only 11% reported any information about how the procedure would be performed. Just 2% of documents reported any quantitative risks, such as the percentage of patients who develop an infection during the postoperative period. Additionally, few documents (5%) included statements of any patient-oriented benefits, such as pain relief or prolonged survival, or of specific alternative options (17%), such as medication therapy or active surveillance. Over 80% of documents had the procedure name written by hand, and when information was provided about how the procedure was performed, only 11% were typed. Regarding timing, 46% of documents were shared with patients at least 1 day in advance of the procedure, as indicated by the signature date. No documents provided information about whether the IC document was shared prior to the date of signature. Additionally, although IC documents provided space for the time at which the document was reviewed, the time was frequently missing, limiting our ability to assess exactly when the IC document was signed in relation to the procedure.

At the hospital level, we observed variation in the proportion of documents meeting each item in the Abstraction Tool. For example, at one hospital, 94% of documents contained language describing the procedure, whereas at other hospitals 0% of documents contained such a description. In 9 of the 25 hospitals, the content items were always handwritten. In one hospital, procedure-specific risks were always described and were accompanied by qualitative language about the probability of their occurrence. In 13 of the 25 hospitals, fewer than half of the documents were signed by patients more than one calendar day prior to the procedure date.

Documents in the overall sample received scores ranging from 0 to 20, with a mean of 4.5 (SD 4.3) out of 20 possible points and a median of 5 (IQR: 0–7) (table 3).

Table 3

Hospital-level mean document score results

Hospital mean performance scores ranged from 0.6 (95% CI 0.3 to 0.9) to 10.8 (95% CI 10.0 to 11.6). The median hospital IC scores ranged from 0 (IQR: 0–0) to 12 (IQR: 10–12). The proportions of documents meeting or surpassing quality thresholds of 5, 10 and 15 points (out of 20 possible points) are presented in table 4.

Table 4

Hospital-level results using three possible quality threshold values

Reliability and validity

The reliability of the overall document score was 0.92 when measured using the Spearman correlation and 0.92 when measured using the ICC(2,1); these values are conventionally considered 'very strong' correlations. Seven of 13 TEP members responded to the survey on face validity. Six of the seven TEP members agreed that the Abstraction Tool provides a valid assessment of the basic elements of IC documents and five agreed that the measure, as currently specified, provides a valid assessment of the quality of hospitals' IC documents. Additionally, all seven responding TEP members supported the validity of the measure concept, as indicated by a response of moderately agree or strongly agree to the importance of measuring IC, the importance of IC documents as a component of the IC process and the validity of measuring the quality of IC documents as a way of assessing an aspect of IC quality. The seven responding TEP members also moderately or strongly agreed that improving the quality of IC documents could meaningfully improve one aspect of the IC process for patients.

While we did not specifically survey hospitals in the testing sample, our partners who recruited hospitals for participation noted that the hospitals were enthusiastic about the project and felt that they learnt a lot about their IC process through use of the measure.

Discussion

We implemented a measure of IC document quality, developed in collaboration with patients and other stakeholders and tested in a sample of 25 diverse and geographically dispersed hospitals, to identify gaps and variation in this important component of IC. Among nearly 2500 IC documents, representing a range of hospital-performed procedures and surgeries, we found that most IC documents did not meet minimal standards of quality. We did, however, observe substantial interhospital variation in performance, demonstrating the potential for improvement. On a scale of 0–20, with 20 representing high-quality IC documents, the hospital mean document score ranged from 0.6 to 10.8. Most hospitals had at least some of their documents score more than 10, a quality threshold set by stakeholders as being meaningful to report, but only two hospitals had more than 50% of their documents score above the 10-point threshold. Some documents received a score of 0 yet still met existing standards for IC documents. For example, CMS only requires the name of the hospital, procedure and practitioner performing the procedure, along with a statement certifying that the procedure, anticipated benefits, material risks and alternative treatment options were explained to the patient or the patient's legal representative; yet, to meet our criterion for whether the procedure was named, we required that the name of the procedure be restated in language readily understandable to a patient. This was based on stakeholder feedback that the name of the procedure (eg, coronary artery bypass graft surgery, CABG) was frequently unrecognisable to patients and that lay language was needed (eg, CABG: surgery that uses a healthy artery to bypass or go around a diseased artery). These data tell an important story: few hospitals offer patients a minimum standard of written information about their elective procedure, which may have important implications for safety and truly informed decision making.

Considerable research and many health system efforts have focused on identifying deficiencies in IC and improving IC processes. Other investigators have developed tools to evaluate and improve the IC process.12 22 In a study of over 500 hospitals' IC documents, most forms did not meet acceptable standards.22 Over half of the documents made no mention of serious or common risks or general benefits, and even fewer mentioned benefits or specific alternatives. These authors put forth an alternative form or 'worksheet', though to our knowledge it has not been widely implemented. Additionally, hospitals and health systems have made efforts to support more ethical, patient-centred IC processes.23 For example, Temple Health has a website with best practices, along with training kits for healthcare professionals and patients, to support a more patient-centred IC process.24 Unfortunately, while much attention has been given to identifying deficiencies and improving the IC document and process, little has changed.16 25–27

This measure builds on prior work to improve IC processes by establishing a national minimal standard, informed by patients, that goes beyond the guidance of regulatory agencies and establishes a method for evaluating hospital quality on a spectrum. The quality items assessed in this measure are consistent with guidance from the American College of Surgeons28 and other professional societies.29 They recommend that IC include a written description of the basic procedures involved in the operation, when the patient can expect to resume normal activities, and how the operation is expected to improve the patient's health or quality of life. Additionally, the Institute of Medicine suggests that IC materials be written to support health literacy, including presenting content in various modalities, setting a maximum reading level for the material, minimising language barriers, focusing on patient-desired outcomes and beginning the IC process in advance of the procedure to allow patients to better prepare, ask questions and deliberate on the decision. This measure assessed a minimum set of these quality items; ideally, however, hospitals would use such a measure as an opportunity to develop consent documents that more fully meet the quality attributes outlined by patients and professional societies as important for patient decision making.

Our study identifies several specific gaps that hospitals could address to achieve high-quality IC documents. Less than one-third of documents provided information about the procedure and only 10% described how the procedure would be performed. Moreover, in most hospitals, the content items relating to the name of the procedure and how it is performed were never typed. Additionally, there was little information provided about procedure-specific risks, benefits or alternatives. Fewer than 5% of documents used a quantitative probability and only 25% used a qualitative probability of risks, both of which were asserted to be important by patients and other stakeholders. The exception was a hospital located in Louisiana, a state which has more extensive requirements for disclosing procedure-specific risks in the IC document. Still, state requirements did not extend to describing procedure-specific benefits, which were under-reported in all hospitals, including this one. Only a few IC documents reported any patient-oriented benefits or procedure-specific alternative treatment options. The timing item was met by the majority of documents in most hospitals, though the results show opportunity for improvement. While some patients may opt not to read any IC document, the timing item gives those who do want to read the document the time and space to do so. Taken together, while we did not observe other trends in quality related to hospital type, this study was not designed to assess predictors of quality, and with a limited number of hospitals, more robust comparisons were not possible.

Several concurrent efforts support the value of improving IC processes. In Washington state, new laws encourage providers to use patient decision aids in lieu of IC documents for elective procedures; providers who do so are afforded increased protection against litigation.30 The American College of Surgeons is championing the use of evidence-based calculators to estimate personalised risk as part of the IC process.31–33 Additionally, CMS's payment models for CT screening to detect lung cancer, left atrial appendage repair and implantable cardioverter defibrillators tie reimbursement to the documentation of a discussion about risks, benefits and alternative treatment options.34–36 Finally, the National Quality Forum is leading efforts to support the measurement of patient-reported outcomes and the use of patient decision aids.37 We anticipate that the tool developed for this measure could be used to evaluate IC processes associated with both inpatient and outpatient electively performed procedures; additionally, results could be reported by provider or by health system, especially as the measure does not require risk adjustment.

This study has several limitations. The hospitals recruited for this study were not randomly selected, and willingness to participate may have been driven by performance on either end of the quality spectrum. However, hospitals were aware that the data would remain anonymous. Additionally, we provided hospitals with randomly selected cases to avoid document selection bias. Another limitation is that, although the process for selecting items was iterative and informed by regular stakeholder input, including patient input, this does not preclude the omission of items that may have improved the reliability or validity of the measure. For example, we did not use a standardised tool to assess readability; however, these tools have several limitations. Instead, with stakeholder feedback, we provided detailed guidance to abstractors that the use of lay language was necessary to meet criteria for some of the items. Also, we did not assess the accuracy of the information provided in the IC documents. However, we did assess the inter-rater reliability of IC scores abstracted by the two raters and found 'very strong' correlations. Finally, the weighting of the items, while thoroughly vetted with stakeholders, is ultimately subjective, and other weighting schemes may be equally or more useful, valid and reliable. Still, this tool represents a significant advance in the assessment of the quality of IC.

Conclusion

A measure of the quality of IC identified important gaps in the quality of IC documents for elective procedures; additionally, we found substantial heterogeneity in the quality of IC documents across hospitals, suggesting that hospitals can improve on this necessary, though not sufficient, component of quality. The ultimate goal is that hospitals, health systems and providers will use this measure as an opportunity to identify gaps in their own IC documents and work to transform the IC document from a transactional form used to obtain a patient's signature into a meaningful resource that supports patients in the decision-making process.

Acknowledgments

At the time of this study, Spatz was supported by grant K12HS023000 from the Agency for Healthcare Research and Quality Patient-Centered Outcomes Research Institute (PCORI) Mentored Career Development Program. The authors thank the working group for their time and commitment to this effort. They include: Ellen Andrews, Irwin Birnbaum, Larry Bocchiere III, Jonathan Delman, Gaye Hyre, Marilyn Mann, Chris Norton, Patricia Skolnik and Amos Smith. The authors thank members of the technical expert panel. They would also like to thank Steven DeMaio for assistance with an early draft of this manuscript.

References

Footnotes

  • Twitter @SpatzErica, @hmkyale

  • Contributors ESS: drafting of manuscript; interpretation of data collection. HB: analysis of data; critical revisions to manuscript. JH: design of measure specifications; critical revisions to manuscript. VD: design of measure specifications; critical revisions to manuscript. SR: critical revisions to manuscript. LL: critical revisions to manuscript. RD: collection of data; critical revisions to manuscript. SMB: critical revisions to manuscript. HMK: conceptual design; design of measure specifications; critical revisions to manuscript. ZL: conceptual design; design of measure specifications; critical revisions to manuscript. LGS: conceptual design; design of measure specifications; critical revisions to manuscript.

  • Funding The analyses on which this publication is based were performed under the Measure and Instrument Development and Support (MIDS) contract HHSM-500-2013-13018I, Task Order HHSM-500-T0001 (Development, Re-evaluation and Implementation of Outcome/Efficiency Measures for Hospital and Eligible Clinicians, Option Year 3), funded by the Centers for Medicare and Medicaid Services, an agency of the US Department of Health and Human Services.

  • Disclaimer The authors report receiving support from the Centers for Medicare and Medicaid Services to develop and maintain performance measures used in public reporting programs, including a measure of informed consent document quality. The measure is not currently part of any quality reporting programs, though the Centers for Medicare and Medicaid Services has made it publicly available for use by hospitals to support quality improvement efforts. HMK was a recipient of a research grant, through Yale, from Medtronic and the US Food and Drug Administration to develop methods for postmarket surveillance of medical devices; was a recipient of a research grant with Medtronic and Johnson & Johnson, through Yale, to develop methods of clinical trial data sharing; was a recipient of a research agreement, through Yale, from the Shenzhen Center for Health Information for work to advance intelligent disease prevention and health promotion; collaborates with the National Center for Cardiovascular Diseases in Beijing; received payment from the Arnold & Porter Law Firm for work related to the Sanofi clopidogrel litigation and from the Ben C. Martin Law Firm for work related to the Cook IVC filter litigation; chairs a Cardiac Scientific Advisory Board for UnitedHealth; is a participant/participant representative of the IBM Watson Health Life Sciences Board; is a member of the Advisory Board for Element Science, the Advisory Board for Facebook, and the Physician Advisory Board for Aetna and is the founder of HugoHealth, a personal health information platform. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does the mention of trade names, commercial products or organisations imply endorsement by the US government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

  • Competing interests The authors of this manuscript receive/received support to develop quality measures for the Centers for Medicare and Medicaid Services for public reporting. The informed consent measure is not currently implemented but was made publicly available so that hospitals could use the measure as a self-evaluation tool.

  • Patient consent for publication Not required.

  • Ethics approval The Institutional Review Boards of Yale University and of HSAG and Premier approved the study.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available on reasonable request. These data were collected under a data use agreement with CMS and with the partnering hospitals, and as such, cannot be shared.
