Article Text

CareTrack Australia: assessing the appropriateness of adult healthcare: protocol for a retrospective medical record review
  1. Tamara D Hunt1,
  2. Shanthi A Ramanathan2,
  3. Natalie A Hannaford1,3,
  4. Peter D Hibbert4,
  5. Jeffrey Braithwaite4,
  6. Enrico Coiera4,
  7. Richard O Day4,5,
  8. Johanna I Westbrook4,
  9. William B Runciman1,3,4
  1. 1School of Psychology, Social Work and Social Policy, Division of Education, Arts and Social Sciences, University of South Australia, Adelaide, South Australia, Australia
  2. 2Hunter Valley Research Foundation, Maryville, New South Wales, Australia
  3. 3Australian Patient Safety Foundation, Adelaide, South Australia, Australia
  4. 4Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
  5. 5Clinical Pharmacology and Toxicology, St Vincent's Hospital, Sydney, New South Wales, Australia
  1. Correspondence to Professor William B Runciman; william.runciman{at}unisa.edu.au

Abstract

Introduction In recent years, in keeping with international best practice, clinical guidelines for common conditions have been developed, endorsed and disseminated by peak national and professional bodies. Yet evidence suggests that considerable gaps remain between the care regarded as appropriate by such guidelines and the care received by patients. With an ageing population and increasing treatment options and expectations, healthcare is likely to become unaffordable unless more appropriate care is provided. This paper describes a study protocol that seeks to determine the percentage of healthcare encounters at which patients receive appropriate care for 22 common clinical conditions, and the reasons why variations in care exist, from the perspectives of both patients and providers.

Methods/design A random stratified sample of at least 1000 eligible participants will be recruited from a representative cross section of the adult Australian population. Participants' medical records from the years 2009 and 2010 will be audited to assess the appropriateness of the care received for 22 common clinical conditions by determining the percentage of healthcare encounters at which the care provided was concordant with a set of 522 indicators of care, developed for these conditions by a panel of 43 disease experts. The knowledge, attitudes and beliefs of participants and healthcare providers will be examined through interviews and questionnaires to understand the factors influencing variations in care.

Ethics and dissemination Primary ethics approval was sought and obtained from the Hunter New England Local Health Network. The authors will submit the results of the study to a relevant journal and will present the findings orally to researchers, clinicians and policymakers.



Article summary

Article focus

  • What is the percentage of healthcare encounters at which Australians receive appropriate care?

  • What influences variations in care from the perspectives of patients and healthcare providers?

Key messages

  • A protocol for a population-based study of appropriate care of 1000 patients using medical record review.

Strengths and limitations of this study

  • Obtaining a snapshot of care using a consistent method for 522 indicators across 22 common conditions is a strength; a limitation is low statistical power for diagnostic indicators, because they present only once for each patient.

  • The potential attrition rate of healthcare providers and telephone recruitment of participants may introduce selection biases.

Introduction

Australia's expenditure on healthcare now exceeds $110 billion each year, on a par with most developed nations at over 9% of gross domestic product.1 Chronic conditions account for a very large proportion of the most common and costly diseases.2 Accordingly, effective prevention and management of chronic disease is a key policy initiative for all modern health services.

In theory, evidence-based clinical practice guidelines allow professionals to integrate the best available evidence with their clinical expertise to make informed decisions regarding individual patient care.3–8 However, there is mounting evidence of considerable gaps and variations between the care that is regarded as appropriate (in line with evidence-based or at least consensus-based guidelines) and the care that is received9–16 (see box 1). The RAND study in the USA showed that, on average, American adults received 55% of recommended care at the turn of the century (range 11%–79% for particular conditions).13 Since then, progress has been slow for most conditions, although there have been notable improvements for certain care indicators; for example, far more patients are now discharged on β blockers after myocardial infarction than previously.18 In Australia, studies focusing on individual conditions have shown similar patterns of non-compliance with indicators. One such study found that patients with hypertension reached target blood pressures just under 60% of the time, and that just over 70% of patients eligible for hyperlipidaemia screening were either not screened (51%), screened and found to be hyperlipidaemic but not treated (12%), or treated without reaching target levels (7%).10

Box 1

Definitions used

  • Condition means acute (eg, myocardial infarction) and chronic (eg, diabetes) conditions and clinical circumstances (eg, surgical site infection) or being eligible for screening or preventive care (eg, mammography).

  • Evidence-based care (EBC) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of EBC means integrating individual clinical expertise with the best available external clinical evidence from systematic research.17

  • Appropriate care for this study is clinical care for a condition considered to be evidence based or consensus based by a panel of clinical experts in Australia in the context in which it was delivered in the years 2009 and 2010.

  • Indicator is a condition-specific process measurement of healthcare management, appropriate for Australian practice in 2009–2010. Each indicator is scored as to whether eligible processes for prevention (eg, mammogram), monitoring (eg, blood pressure, lipids) or treatment (eg, aspirin, statins) have been carried out by answering ‘yes’ or ‘no’.

  • Healthcare provider refers to doctors, nurses, medical specialists and allied health professionals such as physiotherapists, occupational therapists and chiropractors.

  • Healthcare encounter means any consultation with a healthcare provider or attendance at a facility or hospital for an activity relevant to one of the selected conditions for which there is an indicator.

  • Compliance with indicators is expressed as the percentage of eligible healthcare encounters at which appropriate care was received. Eligibility or scoring will be determined by the three criteria listed under Component 2 of the Methods section.

  • Participants are patients, clients, consumers or citizens enrolled in the study who have completed a relevant interview.

  • Surveyor is a person with appropriate clinical and audit experience who has been trained and accredited for the study to review medical records in relation to the care indicators.

To meet the needs of an ageing population and increasing treatment possibilities and expectations,19 funding will, on financial grounds alone, need to be diverted from ineffective and non-cost-effective interventions to more rational, appropriate care.20 However, to do this, we need to understand who is getting what care from whom, and why, and to establish sustainable methods for the ongoing surveillance of the appropriateness of care received by patients.

This paper describes the study protocol to undertake the CareTrack Australia study, one component of a National Health and Medical Research Council (NHMRC) program grant 56861221 on patient safety. CareTrack Australia has four main aims:

  1. to determine the percentage of healthcare encounters at which Australians receive appropriate care;

  2. to determine the percentage of Australians who receive appropriate care;

  3. to identify factors influencing decisions to depart from appropriate care, from the perspectives of both participants and healthcare providers;

  4. to make recommendations on what would be necessary to set up sustainable systems for the surveillance of the appropriateness of healthcare in Australia.

Methods/design

The protocol is based on the RAND methodology of McGlynn et al.13 We have developed an updated set of indicators for a subset of important conditions, and will collect information onsite from healthcare providers and seek the views of patients and healthcare providers on why gaps in appropriate care exist. Our study will involve a retrospective review of the medical records of over 1000 participants over a 2-year period (2009–2010) to measure compliance with indicators for 22 common conditions.

There are 13 components to the study protocol of CareTrack Australia (figure 1).

Figure 1

The components and aims of CareTrack Australia.

Given the scale and complexity of the full study, a small pilot study was undertaken to determine the types of problems that might be encountered and to inform the final selection of conditions, their indicators and the logistical and practical aspects of recruiting participants and healthcare providers, accessing records and extracting, recording, storing and analysing the data.

Components 1 and 2: selecting conditions and developing indicators

Fifty-two candidate conditions were identified from published research and disease burden or quality of care priority lists of seven organisations.13 22–27 These conditions were then assessed against the following criteria:

  • the availability of clinical process indicators that were feasible to collect and had high content and face validity;

  • mainly affecting adults, and with a sufficiently high prevalence to be studied using our methodology28–30;

  • identified as already being researched at a population level in Australia.

A final set of 22 conditions met these criteria: alcohol dependence, antibiotic use, asthma, atrial fibrillation, cerebrovascular accident, chronic heart failure, chronic obstructive pulmonary disease, community acquired pneumonia, coronary artery disease, depression, diabetes, dyspepsia, hyperlipidaemia, hypertension, low back pain, obesity, osteoarthritis, osteoporosis, panic disorder, preventive care, surgical site infection and venous thromboembolism.

Candidate indicators and guidelines were sourced by (1) targeting internet sites with existing clinical guidelines13 31–65 and (2) adapting indicators used in the RAND study.13 Indicators for each condition were then collated, grouped into categories (eg, cardiology, respiratory medicine) and forwarded to clinical experts for review. Experts were identified as clinical leaders in their field and typically were employed as the head or director of a department in a large hospital and/or held an adjunct academic appointment. They were invited to score the indicators on a scale of 1–9 for their appropriateness (1: not appropriate; 9: very appropriate), in the context of the care that would be expected to have been delivered in Australia from 2009 to 2010. A formal process was employed for managing discrepancies based on the following criteria: indicators that scored between 7 and 9 by all experts were automatically included; indicators with scores between 1 and 3 from all experts were automatically excluded and indicators that scored between 4 and 6 or that received scores from each of the three ranges were subjected to further review, with further clarification being sought where required. A final list of 522 indicators was selected by 43 experts to represent appropriate care for the selected conditions in the years 2009 and 2010.
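The inclusion rules above amount to a simple triage over each indicator's expert ratings. The sketch below illustrates that logic only; the function name and data layout are ours, not part of the study tooling.

```python
def triage_indicator(ratings):
    """Classify one indicator from its expert ratings on the 1-9 scale.

    'include' if every expert scored it 7-9, 'exclude' if every expert
    scored it 1-3, and 'review' otherwise (mid-range scores, or scores
    spread across the three ranges).
    """
    if all(7 <= r <= 9 for r in ratings):
        return "include"
    if all(1 <= r <= 3 for r in ratings):
        return "exclude"
    return "review"  # returned to the expert panel for further clarification

# Example: one expert rates the indicator 5, so it goes back for review.
print(triage_indicator([8, 9, 5]))  # -> 'review'
```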

To facilitate analysis, indicators were classified into three categories:

  1. Indicators eligible for scoring at each healthcare encounter (eg, an exacerbation of asthma) by any provider (denominator is all eligible encounters).

  2. Indicators eligible for scoring at identified time intervals by any provider (eg, blood pressure measurements every 6 months) (denominator is a product of the number of applicable time periods within the 2-year period of the study and the number of eligible healthcare providers seen within each time period).

  3. Indicators eligible for scoring once for each participant (eg, indicators to deal with a new diagnosis) (denominator is 1).
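As an illustration of how these categories translate into denominators when compliance is tallied (the helper function and its arguments below are hypothetical, not part of the CareTrack data tool):

```python
def indicator_denominator(category, eligible_encounters=0,
                          time_periods=0, providers_per_period=0):
    """Denominator implied by an indicator's category, as defined above."""
    if category == 1:      # scored at every eligible encounter
        return eligible_encounters
    if category == 2:      # periodic measures, e.g. 6-monthly blood pressure checks
        return time_periods * providers_per_period
    if category == 3:      # scored once per participant, e.g. a new diagnosis
        return 1
    raise ValueError("unknown indicator category")

# Example: blood pressure every 6 months over the 2-year study (4 periods),
# with 2 eligible providers seen in each period -> denominator of 8.
print(indicator_denominator(2, time_periods=4, providers_per_period=2))
```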

Component 3: securing ethics approvals

Relevant Human Research Ethics Committee (HREC) approvals were sought and received prior to participant recruitment and medical record reviews in all jurisdictions, authorities, health services and private hospitals included in the study.

Component 4: obtaining statutory immunity

Statutory immunity protects from disclosure any identifying information obtained through an approved quality assurance activity.66 CareTrack Australia applied to the Federal (Commonwealth) Minister for Health for statutory immunity under Part VC of the Commonwealth Health Insurance Act 1973. This was granted on 17 September 2010.

Component 5: determining the sampling strategy

The study aims to access the medical records of 1000 eligible adult participants across South Australia and New South Wales (as was done in the Quality in Australian Health Care Study).67 These two states were chosen because their populations are representative across urban, regional and remote regions68 and offer a suitable range of demographic characteristics (table 1). Based on a pilot study of 100 participants, we estimated that 7600 people would need to be contacted to meet this target. Half of the participants will be recruited from each state, and proportional representation from each of the metropolitan, regional and remote regions will be targeted, as illustrated in table 1.
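To put the contact target in perspective (this is an inference from the figures above, not an additional assumption of the protocol), the estimate implies an expected recruitment yield of roughly

$$\frac{1000\ \text{participants}}{7600\ \text{contacts}} \approx 13\%,$$

that is, about one in every eight households contacted is expected to yield an enrolled participant whose records are reviewed.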

Table 1

Percentage of people living in urban, regional and remote areas of NSW, SA and Australia68 69

The sample will be stratified by region to obtain a representative cross section of participants by demographics and geographic location. One of the four Socio-Economic Indexes for Areas, the Index of Relative Socioeconomic Disadvantage (IRSD), will be used to facilitate comparison of social and economic status between geographic regions.70 The IRSD is derived from multiple weighted variables, such as low income, high unemployment and low levels of education, which are all markers of relative socioeconomic disadvantage. Various combinations of local government areas will be examined so that a sample that is representative with respect to the IRSD can be obtained.
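A minimal sketch of proportional allocation across strata follows; the stratum shares used here are placeholders (the study's actual targets come from table 1 and the IRSD profiles of candidate local government areas), so the code shows the approach rather than the protocol's exact numbers.

```python
# Hypothetical stratum proportions for one state; real targets come from table 1.
strata = {"metropolitan": 0.70, "regional": 0.25, "remote": 0.05}
target_per_state = 500  # half of the 1000 participants are drawn from each state

# Proportional allocation: number of participants to recruit per stratum.
allocation = {name: round(target_per_state * share) for name, share in strata.items()}
print(allocation)  # {'metropolitan': 350, 'regional': 125, 'remote': 25}

# Within each stratum, households are then selected at random from the
# White Pages listings for the chosen local government areas.
```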

Component 6: resolving data management requirements

A web-based tool will be developed to capture data during medical record review and to support subsequent data analysis. The tool will support secure data access, data encryption, off-line data collection and subsequent database synchronisation (to mitigate the problems of firewalls and poor internet connectivity in various healthcare settings).

Given the complexity of the indicator set, the tool will also generate a set of indicators relevant to a particular condition, based on participant-specific information. Indicator algorithms will take into account the type of healthcare facility or provider, and the participant's conditions and gender. For example, the database will automatically filter out the indicator related to Pap smears for all male participants.
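As a rough sketch of this filtering logic (the field names and the two example rules are illustrative only; the actual tool encodes many more rules):

```python
def applicable_indicators(indicators, participant):
    """Return only the indicators that can be scored for this participant."""
    selected = []
    for ind in indicators:
        if ind["condition"] not in participant["conditions"]:
            continue                      # participant does not have this condition
        if ind.get("sex") and ind["sex"] != participant["sex"]:
            continue                      # e.g. Pap smear indicators are dropped for men
        selected.append(ind)
    return selected

# Example: a male participant with hypertension never sees the Pap smear indicator.
indicators = [
    {"id": "HT-03", "condition": "hypertension", "sex": None},
    {"id": "PC-11", "condition": "preventive care", "sex": "female"},
]
participant = {"sex": "male", "conditions": {"hypertension", "preventive care"}}
print([i["id"] for i in applicable_indicators(indicators, participant)])  # ['HT-03']
```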

Component 7: recruiting participants

Participants will be recruited using a two-stage Computer-Assisted Telephone Interview (CATI) process (see figure 2). Interviewers will undergo a training program prior to recruitment. The first stage, CATI 1, will involve telephoning randomly selected households from the Telstra White Pages71 within the selected subarea and randomly selecting one householder. Once selected, the householder will be informed of the study and asked whether they would like to receive further information. At this time, their demographic details will also be collected. The CATI 1 interview script is provided in appendix 1. People who agree will be sent an information pack that contains a covering letter, an information sheet and a consent form that allows CareTrack researchers to access their medical records (appendices 2–4). Receipt of a participant's consent marks the start of the second stage of the recruitment process, CATI 2. Participants will be re-contacted by telephone to collect details of the medical conditions that pertain to the study, and the names and addresses of the healthcare providers who managed these conditions in 2009 and 2010. The script for the CATI 2 interview is provided in appendix 5. Participants without any of the 22 study conditions, without a healthcare encounter from 2009 to 2010, or whose only encounter was day surgery (excluding persons with dyspepsia who had an endoscopy) will be excluded from further participation at this stage.
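The exclusion rules applied at the end of CATI 2 reduce to a small decision, sketched below under assumed field names (hypothetical; the wording actually put to participants is in the CATI scripts in appendices 1 and 5).

```python
# A handful of the 22 study conditions, for illustration only.
STUDY_CONDITIONS = {"asthma", "diabetes", "hypertension", "dyspepsia"}

def eligible_after_cati2(conditions, had_encounter_2009_2010,
                         only_day_surgery, dyspepsia_with_endoscopy):
    """Apply the CATI 2 exclusion rules described above."""
    if not set(conditions) & STUDY_CONDITIONS:
        return False          # none of the 22 study conditions
    if not had_encounter_2009_2010:
        return False          # no relevant encounter during the study window
    if only_day_surgery and not dyspepsia_with_endoscopy:
        return False          # day surgery only, unless an endoscopy for dyspepsia
    return True

# Example: a participant whose only 2009-2010 encounter was day surgery is excluded.
print(eligible_after_cati2({"asthma"}, True, only_day_surgery=True,
                           dyspepsia_with_endoscopy=False))  # -> False
```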

Figure 2

The process to recruit participants and undertake medical record reviews. CATI, Computer-Assisted Telephone Interview; HCP, healthcare provider.

Component 8: recruiting healthcare providers

Healthcare providers and/or facilities identified by participants will be sent a covering letter, an information sheet and two consent forms (one for medical record review and one for an interview) to be completed prior to a CareTrack surveyor accessing the medical records (appendices 6–9). Healthcare providers and/or facilities that provide consent will be contacted by CareTrack surveyors to arrange a suitable time and place to review the medical records.

Component 9: recruiting and training surveyors

Suitably experienced nurses will be employed to act as surveyors for the CareTrack study. A key selection criterion will be experience in clinical audit and medical record review. Six full-time equivalent staff will be required. The selection process will involve an aptitude test using an artificially constructed medical record, with a requirement to code indicators for certain conditions under time constraints. A detailed surveyor manual will outline the conditions, indicators, definitions, abbreviations and processes for arranging and conducting medical record reviews.

Inter-rater reliability will be examined by two methods. First, all surveyors will code indicators from an artificial medical record, which will include all indicators, and second, dual review of a sample of participants' records will be undertaken. For both methods, κ scores will be calculated to test the level of agreement between each surveyor and one of the researchers (NAH). Based on the results of the artificial test, the number of participants' records to be dual reviewed will be determined at a confidence level of 95%, with a power of 80%. The CareTrack Australia researchers will provide constant feedback to surveyors to ensure that they consistently interpret the medical records according to the CareTrack Australia definitions and indicator inclusion and exclusion criteria.
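For the κ calculation itself, a minimal sketch of Cohen's kappa between one surveyor and the reference reviewer is given below; the protocol does not specify the software to be used, so this is illustrative only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same list of indicators."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal distribution of answers.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Example: surveyor versus reference answers ('Y', 'N', 'NA') on ten indicators.
surveyor  = ["Y", "Y", "N", "NA", "Y", "N", "Y", "Y", "NA", "N"]
reference = ["Y", "Y", "N", "NA", "Y", "Y", "Y", "N", "NA", "N"]
print(round(cohens_kappa(surveyor, reference), 2))  # -> 0.68
```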

Component 10: reviewing medical records

Surveyors will undertake an explicit criterion-based medical record review using the data tool (see Component 6). Medical record reviews will be conducted for each participant–healthcare practitioner encounter (therefore more than one medical record review may be undertaken for a participant). Surveyors will assess the medical record for evidence that the participant was being treated for the condition that they nominated and for any other of the 22 conditions. The surveyor will answer each indicator question as ‘Yes’ (care provided during the encounter was consistent with the indicator), ‘No’ or ‘Not Applicable’ (N/A) (the indicator was not relevant to the encounter). For example, an answer of N/A will be assigned to those indicators that relate to a new diagnosis if the participant already had that condition. For indicators that are answered N/A or no, a text field will be available for surveyors to explain the reason for their answer.

Component 11: analysing indicator data

Data storage will be structured to allow identification of indicator categories (see Component 2) and calculation of compliance with appropriate care by healthcare encounter (CareTrack aim 1) and by participant (aim 2). Per cent compliance and CIs will be calculated for each indicator and then aggregated and reported at the level of condition. Stratification will be undertaken by healthcare provider type.
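The protocol does not name the interval method; as one hedged illustration, per cent compliance with a 95% Wilson score interval could be computed as follows.

```python
from math import sqrt

def compliance_with_ci(yes, eligible, z=1.96):
    """Per cent compliance with a 95% Wilson score interval.

    `yes` is the number of eligible encounters scored 'Yes' for an indicator;
    `eligible` is the denominator implied by the indicator's category.
    """
    p = yes / eligible
    centre = (p + z * z / (2 * eligible)) / (1 + z * z / eligible)
    margin = (z * sqrt(p * (1 - p) / eligible + z * z / (4 * eligible ** 2))
              / (1 + z * z / eligible))
    return 100 * p, 100 * (centre - margin), 100 * (centre + margin)

# Example: 57 compliant encounters out of 100 eligible encounters.
print(tuple(round(x, 1) for x in compliance_with_ci(57, 100)))  # (57.0, 47.2, 66.3)
```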

Component 12: interviewing and surveying participants

This component of the research will identify the main drivers of participants' healthcare decision making and the barriers to receiving appropriate care, and will aim to identify whether, and how, common ground may be found between patients and providers in providing appropriate care. Semistructured interviews and self-administered questionnaires of participants will be used. For selected common conditions (depression, diabetes, hyperlipidaemia, hypertension, low back pain and osteoarthritis), participant characteristics (age, sex, occupation and work history, duration of disease, level of disability and health literacy) and patient knowledge, attitudes and beliefs regarding their condition(s) will be examined. Where possible, validated survey tools will be used for each condition. A mixed-methods approach will be taken, including quantitative analysis of questionnaires and qualitative analysis of free-text answers in questionnaires and of interview transcripts.

Component 13: interviewing healthcare providers

The knowledge, attitudes and beliefs of healthcare providers with respect to the treatment and management of a single condition, osteoarthritis, will be examined. Osteoarthritis has been chosen because of its high prevalence and because of anticipated interactions between participants and mainstream as well as complementary and alternative medicine practitioners.72 Semistructured interviews will be conducted at places and times convenient to healthcare providers. Factors pertaining to the healthcare providers that will be explored include socio-demographic characteristics of the provider and the practice setting, knowledge of clinical indicators for osteoarthritis, attitudes to guidelines in general and to those specifically concerned with osteoarthritis, and perceived barriers to guideline implementation.

CareTrack aim 4: developing recommendations for what would be needed to set up a sustainable system for surveillance of the appropriateness of care in Australia

A daily lessons log will be kept for the duration of the study, recording the barriers encountered in each component of the study. Strategies actually used, and potential strategies for the future, will be identified, and a series of recommendations made on how to establish and maintain a sustainable surveillance system for the appropriateness of care. Details of the time taken by researchers and surveyors will be logged so that the various components of the study can be costed, priorities set and attention directed to the clinical areas that are most problematic.

Ethics and dissemination

Ethics approvals were sought and obtained in the first instance from the following key organisations: the Hunter New England Local Health Network (HNE HREC Reference no: 09/12/16/5.09), the University of New South Wales and the South Australian Department of Health; and subsequently from relevant HRECs across the country: ACT Health, the Southern Adelaide Flinders Clinical HREC, The Queen Elizabeth Hospital, TAS Health, the Royal Australian College of General Practitioners and the Royal Adelaide Hospital HREC.

We will submit the results of the study to relevant journals and will give national and international oral presentations to researchers, clinicians and policymakers.

Index of appendices

  1. Computer-Assisted Telephone Interview 1—Recruitment

  2. Covering letter to participants

  3. CareTrack information sheet for participants

  4. Consent for medical record access by participants

  5. Computer-Assisted Telephone Interview 2—Interview healthcare

  6. Covering letter to provider

  7. Information sheet to provider

  8. Consent for access to records from provider

  9. Consent to be interviewed by provider

Footnotes

  • To cite: Hunt TD, Ramanathan SA, Hannaford NA, et al. CareTrack Australia: assessing the appropriateness of adult healthcare: protocol for a retrospective medical record review. BMJ Open 2012;2:e000665. doi:10.1136/bmjopen-2011-000665

  • Funding The study was supported by the Australian National Health and Medical Research Council.

  • Competing interests None.

  • Patient consent A patient consent form was developed specifically for the study and is shown in appendix 4.

  • Ethics approval The study was approved by Hunter New England Local Health Network (HNE HREC Reference no: 09/12/16/5.09).

  • Contributors TDH, CareTrack project manager, was responsible for coordinating the project, developing and getting ethics approvals, liaising with the Hunter Valley Research Foundation (HVRF), preparing information packages and consent forms for participants and providers, managing the pilot study, indicator development, liaising with the database developers, training and accrediting surveyors, implementation of the marketing strategy and management of the budget. SAR was responsible for developing the scripts for the computer-assisted telephone interviews, for training and managing the interviewers, planning and managing the participant recruitment process, developing the sampling strategy and managing the HVRF components of the pilot study. NAH worked closely with TDH and SAR on all aspects of the project but was particularly involved in preparing information sheets, development of the surveyor training manual and indicator development and review. PDH was involved in coordinating interactions between CareTrack Australia and the other three NHMRC program grant studies, including budgeting. He also played a major role in the development and execution of the marketing strategy, developing methods for data acquisition, storage and analysis, and reviewing the sampling strategy. JB is the lead chief investigator of the overall program grant; he chairs program grant meetings and works across CareTrack and other studies in the research program. As a chief investigator of the program grant he was involved at a conceptual stage and then at monthly intervals in providing oversight and advice. He helped draft and edit the current manuscript. EC, ROD and JIW as chief investigators of the program grant were involved at a conceptual stage and then at monthly intervals in providing oversight and advice on aspects of the project as it evolved. ROD as a practicing clinician was involved in strategies for reviewer selection and indicator development as well as classification and structure of the indicators. JIW provided expertise and advice on methodology, particularly in terms of sampling and inter-rater reliability. WBR conceived of the project initially and wrote the relevant components of the research grant application. He worked closely with the other authors generating the information sheets and ethics applications and was involved in developing and executing strategies for making and monitoring progress in areas such as condition selection, indicator development and wording, CareTrack marketing, indicator classification and review and writing the manuscripts.

  • Provenance and peer review Not commissioned; internally peer reviewed.