Article Text

Development and validation of Australian aphasia rehabilitation best practice statements using the RAND/UCLA appropriateness method
  1. Emma Power1,2,
  2. Emma Thomas2,3,
  3. Linda Worrall2,3,
  4. Miranda Rose2,4,
  5. Leanne Togher1,2,
  6. Lyndsey Nickels2,5,
  7. Deborah Hersh2,6,
  8. Erin Godecke2,6,
  9. Robyn O'Halloran2,4,
  10. Sue Lamont7,
  11. Claire O'Connor8,
  12. Kim Clarke9
  1. 1Speech Pathology, Faculty of Health Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
  2. 2Centre for Clinical Research Excellence in Aphasia Rehabilitation
  3. 3School of Health and Rehabilitation Sciences, the University of Queensland, St Lucia, Queensland, Australia
  4. 4Department of Human Communication Sciences, School of Allied Health, La Trobe University, Bundoora, Victoria, Australia
  5. 5Department of Cognitive Science, ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, New South Wales, Australia
  6. 6Speech Pathology, School of Psychology and Social Science, Edith Cowan University, Perth, Western Australia, Australia
  7. 7Department of Speech Pathology, Monash Health, Melbourne, Victoria, Australia
  8. 8NSW Agency for Clinical Innovation, Chatswood, New South Wales, Australia
  9. 9Speech Pathology, Country Health SA Local Health Network, Strathalbyn, South Australia, Australia
  1. Correspondence to Dr Emma Power; emma.power@sydney.edu.au

Abstract

Objectives To develop and validate a national set of best practice statements for use in post-stroke aphasia rehabilitation.

Design Literature review and statement validation using the RAND/UCLA Appropriateness Method (RAM).

Participants A national Community of Practice of over 250 speech pathologists, researchers, consumers and policymakers developed a framework consisting of eight areas of care in aphasia rehabilitation. This framework provided the structure for the development of a care pathway containing aphasia rehabilitation best practice statements. Nine speech pathologists with expertise in aphasia rehabilitation participated in two rounds of RAND/UCLA appropriateness ratings of the statements. Panellists consisted of researchers, service managers, clinicians and policymakers.

Main outcome measures Statements that achieved a high level of agreement and an overall median score of 7–9 on a nine-point scale were rated as ‘appropriate’.

Results 74 best practice statements were extracted from the literature and rated across eight areas of care (eg, receiving the right referrals, providing intervention). At the end of Round 1, 71 of the 74 statements were rated as appropriate, no statements were rated as inappropriate, and three statements were rated as uncertain. All 74 statements were then rated again in the face-to-face second round. 16 statements were added through splitting existing items or adding new statements. Seven statements were deleted leaving 83 statements. Agreement was reached for 82 of the final 83 statements.

Conclusions This national set of 82 best practice statements across eight care areas for the rehabilitation of people with aphasia is the first to be validated by an expert panel. These statements form a crucial component of the Australian Aphasia Rehabilitation Pathway (AARP) (http://www.aphasiapathway.com.au) and provide the basis for more consistent implementation of evidence-based practice in stroke rehabilitation.

  • aphasia
  • knowledge translation
  • rehabilitation
  • quality

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

Strengths and limitations of this study

  • There is limited guidance for health professionals working with the complex condition of aphasia to implement best practice across the continuum of care.

  • The aim of this study was to develop and validate a set of best practice statements for use in post-stroke aphasia rehabilitation.

  • An aphasia rehabilitation Community of Practice developed eight areas of care that provided the framework for the development of an initial 74 best practice statements. These statements were then validated using the RAND/UCLA Appropriateness Method in two rounds of ratings with nine panel members.

  • During the rating, items were added, divided or deleted so that there was panel agreement on a final set of 82 best practice statements over eight domains of care. These evidence-based and expert endorsed care standards form part of the Australian Aphasia Rehabilitation Pathway (AARP), and have been formulated into a dynamic web-based implementation tool with increasing attention to care standards for culturally and linguistically diverse, and indigenous populations.

  • The literature reviews may not be exhaustive given the continuing development of the published evidence. The best practice statements may require some adaptation for other regions; however, the majority of the data should be common and internationally applicable. This process sets a benchmark for the development and dissemination of best practice post-stroke aphasia rehabilitation statements to other areas of practice where research evidence is in its foundational stage.

Introduction

Aphasia is an acquired neurological disorder of language processing that affects speaking, listening, reading, writing and gesture.1 Approximately 30–35% of stroke survivors have aphasia on discharge from hospital following stroke,2,3 with the prevalence of speech (dysarthria) and language (aphasia) disability 6 months after stroke reported as 30–50/100 000.4 People with aphasia have higher healthcare costs (8.5%, or $1700 attributable cost) and longer lengths of stay in hospital (6.5%) compared with stroke survivors without aphasia.5 People with stroke-related aphasia may require additional services to address their communication disability in hospital and during community life, and such services might reduce their length of stay or the incidence of adverse events.6,7 However, the management of swallowing disorders (dysphagia) may be prioritised over aphasia services in acute hospital settings due to inadequate staffing ratios and a lack of appropriate therapy space and resources.8,9 Additionally, people with aphasia have poor long-term outcomes after stroke, including consequences such as social isolation, depression and poor quality of life for themselves and their family members.4,10–13 As a chronic disability, aphasia generates a number of long-term service needs, including therapy to enable functional and socially relevant communication.14 Provision of quality, efficient, evidence-based care is critical for people with aphasia, their families and healthcare systems.

Stroke clinicians and teams need to make daily decisions about the management of people with aphasia from the acute phase through to community-based care. However, a variety of challenges exist to the implementation of best practice in aphasia rehabilitation. Using the AGREE II and ADAPTE guideline appraisal tools, Rohde et al15 documented a significant lack of high-quality, comprehensive guidance for clinicians working with people with aphasia. Most recommendations identified were included within broader stroke guidelines and frequently lacked detail on the management of aphasia. The Australian National Stroke Foundation Clinical Guidelines for Stroke Management16 were identified as high-quality guidelines. These guidelines contain 11 items focused specifically on aphasia, ranging from screening, goal setting, provision of health information, therapy and counselling to communication partner training for family/carers. Despite the availability of these guidelines, and the fact that their use has been shown to result in improved patient outcomes,17 documented evidence to practice gaps continue to exist. For example, in the provision of health information, only 56% of a sample of 170 Australian hospitals provided at least ‘some’ information in tailored, ‘aphasia friendly’ formats to people with aphasia, as the guidelines recommend.18 Additionally, some people with aphasia are not receiving treatment,19 despite a recent Cochrane review finding that aphasia therapy is generally effective.20 While the above two examples are centred on Australian stroke rehabilitation practice, evidence to practice gaps in stroke rehabilitation have also been documented internationally.21

Clinicians report that implementation of best practice is challenging because recommendations are often too broad and the evidence base is limited in some areas.9,22 For example, while the Cochrane aphasia rehabilitation review20 concluded that therapy was generally effective, an additional difficulty for clinicians is that there is still insufficient evidence to indicate the best approach to providing aphasia therapy for specific individuals with aphasia. Clinicians report that they require more in-depth information and research, and accompanying resources (eg, standardised clinical resources such as aphasia friendly information handouts), to bridge the evidence to practice gap.9,22 Evidence to practice gaps are of concern as consumers report a lack of consistency in the provision of aphasia care and difficulty accessing communication therapy services, especially in the chronic phase.14,23 Consumers also emphasise the importance of the rehabilitation journey and the need for a comprehensive road map to understand what to expect at different phases of their recovery.14,23

The Centre for Clinical Research Excellence (CCRE) in Aphasia Rehabilitation is an Australian research centre that was funded for 5 years by the Australian National Health and Medical Research Council (NHMRC). The CCRE in Aphasia Rehabilitation drove a national collaborative effort to enhance the quality and consistency of rehabilitation care provided to people with aphasia. To address the above issues of a lack of a detailed road map, comprehensive recommendations and accessible implementation resources, the CCRE in Aphasia Rehabilitation developed the Australian Aphasia Rehabilitation Pathway (AARP).24 This care pathway was developed to provide the basis for a road map containing important domains of care25,26 and was populated with detailed best practice statements (BPS) for each domain of the pathway. The AARP is delivered as a dynamic web-based tool that contains resources to assist with implementing each of the BPS recommendations.24 BPS are a recent development intended to guide practice and promote a consistent, cohesive and achievable approach to care.27 They address areas of care where there is variation in practice due to limited robust evidence, and attempt to incorporate professional consensus in the absence of a rigorous evidence base.27

One method of developing BPS is the RAND/UCLA Appropriateness Method (RAM).28 The RAM consists of the development and validation of quality indicators through a literature review and a two-round modified-Delphi method with a panel of experts.28 The RAM has been used to develop validated, expert endorsed indicators in a wide range of fields, including paediatric traumatic brain injury rehabilitation,29 osteoarthritis30 and pharmacy.31 The process has been particularly recommended for areas of practice with an emerging evidence base because it seeks to combine both research (literature review) and clinical evidence (expert panel opinion).28 The aim of this study was to develop a comprehensive set of evidence-based, expert endorsed BPS for post-stroke aphasia rehabilitation utilising the RAM.

Method

Design

We utilised the RAM22 to develop and validate the Australian Aphasia Rehabilitation Best Practice Statements. The RAM consists of a literature review and the development of a list of ‘indications’, which we termed best practice statements. These statements were then rated for their degree of appropriateness by an expert panel in two rounds using a modified Delphi technique. The RAM approach has produced results that are valid32 and reliable.28,33 An overview of the process involved in developing and validating the RAM statements is shown in figure 1, while a detailed timeline of processes and events can be found on the AARP website (http://www.aphasiapathway.com.au/flux-content/aarp/pdf/Australian-Aphasia-Rehabilitation-Pathway-RAM-Timeline.pdf). The first round of ratings was conducted via email, while the second round was conducted face-to-face. The inclusion of a face-to-face round has been described as advantageous compared with traditional Delphi methods because it allows greater opportunity for discussion and clarification of statement wording and evidence.31

Figure 1

Overview of RAND/UCLA process as applied to the development of the Australian Aphasia Rehabilitation Best Practice Statements (adapted from Fitch et al28 and NHS Quality Improvement Scotland).27 NHMRC, National Health and Medical Research Council; RAM, RAND/UCLA Appropriateness Method.

Development of best practice statements

In developing and organising the statements, we were guided by eight overarching areas of care developed through a national consultative programme with the CCRE in Aphasia Rehabilitation Community of Practice (CoP).34 The CoP consisted of over 250 aphasia clinicians and managers, researchers, people with aphasia (Australian Aphasia Association) and policymakers (National Stroke Foundation) who were collectively interested in aphasia care, policy and practice. The eight areas of care were developed through an iterative process of face-to-face workshops, teleconferences, written feedback using Google Docs (a web-based word processing programme) and review of the research literature on consumer (patient and family) experiences with their care and goal setting.14,23,24 The agreed areas of care were: (1) receiving the right referrals, (2) optimising initial contact, (3) setting goals and measuring outcomes, (4) assessing, (5) providing intervention, (6) enhancing the communicative environment, (7) enhancing personal factors, and (8) planning for transitions and discharges.

From January to August 2013, a core team of researchers from the CCRE in Aphasia Rehabilitation (see table 1) conducted multiple literature reviews to provide a synthesis of the evidence base for each area of care identified (see figure 1). The synthesis, construction and refinement of the BPS was an iterative, cyclical process. Within each content area, research questions were created by the project manager in conjunction with the core literature review group. Initially, the literature was searched for secondary-level evidence. We accessed available guidelines identified by Rohde et al15 as being of high quality, including the Australian Clinical Guidelines for Stroke Management.16 Other major sources of secondary evidence were: the Trip Database (http://www.tripdatabase.com); the Evidence-Based Review of Stroke Rehabilitation (http://www.ebrsr.com); the Cochrane Library; and the American Speech-Language-Hearing Association's Evidence Maps: Aphasia (http://ncepmaps.org/aphasia/tx/). We also accessed systematic reviews and then conducted a manual search of their bibliographies. If no secondary evidence was available, search terms were developed for each area of care and applied to the following databases: the Cochrane Library (2005–2013), CINAHL (1981–2013), Medline (1946–2013), PubMed (1948–2013), speechBITE (1956–2013) and Google Scholar (2009–2013). All quantitative research designs were included (eg, systematic reviews, randomised controlled trials, cohort studies, single-case experimental designs), as well as qualitative research studies. Additional experts in each care area (see table 1) were contacted and asked to provide any applicable literature (both published and grey literature). Evidence was synthesised by the project manager and sent back to key experts and the CCRE executive team (core group) via email. Limited published literature was identified for Section 6 (Enhancing the Communicative Environment) and Section 7 (Enhancing Personal Factors); therefore, greater input from experts was required. One CCRE researcher with specialist expertise in working with Aboriginal and Torres Strait Islander people led a team to prepare the literature review and develop the BPS in this area. Likewise, a CCRE postdoctoral researcher led the development of the BPS relating to Culturally and Linguistically Diverse (CALD) populations.

Table 1

Best Practice Statement contributions matrix

The quantitative literature was graded according to the NHMRC Levels of Evidence and Grades of Recommendation.35 This grading system was chosen because it aligns with the Australian Clinical Guidelines for Stroke Management10 and is the system endorsed by the Australian NHMRC peak body. As the NHMRC levels of evidence do not include a level for every type of study design, single-case experimental design studies were assigned a grading of IV, and qualitative literature, if used to support a best practice statement, was listed as ‘Qual’. It must be emphasised that this system only allows the level of study design to be assessed, which is different to the grade of evidence. The grade of evidence (eg, A, B, C, D) takes into account the level of evidence along with evidence quantity, quality, consistency, clinical impact, generalisability and applicability. This additional step was not feasible at the time of the development of the BPS. Qualitative studies were not rated due to the lack of current consensus methods for grading qualitative studies; however, only studies that were judged to be rigorous were included. Where expert opinion was utilised as the evidence for a statement, we followed the procedures in the Australian Clinical Guidelines for Stroke Management16 and labelled the level of evidence as a ‘Good Practice Point’ (GPP).
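
As an illustration of the labelling convention described above, the mapping from study design to the label recorded alongside each statement could be encoded as a simple lookup. This is a hedged sketch only: the design categories and level assignments shown are assumptions for the example and do not reproduce the full NHMRC hierarchy.

```python
# Illustrative sketch only: the design categories and I-IV assignments below are
# assumptions for this example, not a reproduction of the full NHMRC hierarchy.
# The 'Qual' and 'GPP' labels follow the convention described in the text.
EVIDENCE_LABELS = {
    "systematic review": "I",
    "randomised controlled trial": "II",
    "cohort study": "III",
    "case series": "IV",
    "single-case experimental design": "IV",  # assigned level IV, as described above
    "qualitative study": "Qual",              # not graded; listed as 'Qual'
    "expert opinion": "GPP",                  # labelled as a Good Practice Point
}

def evidence_label(study_design: str) -> str:
    """Return the level-of-evidence label recorded alongside a best practice statement."""
    return EVIDENCE_LABELS.get(study_design.strip().lower(), "unclassified")

print(evidence_label("Single-case experimental design"))  # -> IV
print(evidence_label("Expert opinion"))                   # -> GPP
```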

All 250 members of the CoP were invited to provide feedback on the synthesis of each area. This process has been previously described24 (http://www.aphasiapathway.com.au/flux-content/aarp/Thomas-KTEaphasia-pathway-JCPSLP-2014.pdf). Briefly, a group of CCRE researchers (n=25) and clinical affiliates (n=45) from the CoP expressed interest in providing regular feedback and formed a feedback mail group. The core CCRE Aphasia Rehabilitation working group then translated the evidence into a list of BPS, and additional input was sought from researchers with specific expertise in each of the areas of care (see additional experts listed in table 1). Further feedback on the initial draft of statements was then obtained from the feedback mail group using Google Docs. Feedback from the CoP on the refinement of the BPS was mostly provided by speech pathologists. Two representatives of the Australian Aphasia Association provided consumer feedback throughout the process; however, they had most input into the workshops that established the areas of care rather than the detailed refinement of the BPS. Development of the specific sections on personal factors associated with Aboriginal and Torres Strait Islander populations did include input from Aboriginal and Torres Strait Islander people. The final document contained 74 BPS across the eight areas of care. Each final statement was accompanied by its corresponding level of evidence, source study reference and a rationale (see table 2 for an example).

Table 2

Example of the format of the statements for use in the validation procedure

Validation of statements

Our procedure followed the RAM guidelines28 consisting of a two-round modified Delphi method with an added face-to-face component that allows members to discuss their judgements between rating rounds.28 Basger et al31 highlighted the importance of the face-to-face component in allowing discussion to resolve misinterpretation, introduce new evidence and improve clarity. Experiences with the RAM and other group processes indicate that the potential for bias in the face-to-face group can be largely controlled by effective group leadership.28 Therefore, the panel facilitator was experienced in moderating group discussions and an experienced RAM facilitator provided additional training and advice to the Aphasia RAM facilitator.

Participants—national expert panel

The panel comprised nine qualified Australian speech pathologistsi with significant expertise in post-stroke aphasia rehabilitation (see table 1). We used the standard sample size recommended in the RAM manual (n=9). This number of panellists was considered sufficient to permit a diverse sample while also providing an opportunity for panel members to be involved in the group discussion in Round 2.28 We used purposive sampling across a range of factors (see table 1) to maximise the opportunity for a diverse range of perspectives and expertise. Factors included varied professional roles and skill sets (ie, research, clinical, managerial and policy-based), geographical region (ie, different states/cities as well as rural and metropolitan areas), expertise across a range of rehabilitation settings (ie, acute, inpatient and community) and expertise across the components of the International Classification of Functioning, Disability and Health (ICF)37 (ie, Impairment, Activity/Participation, Environmental Factors and Personal Factors). The initial nine members who were invited to take part in the RAM process all agreed and participated in both rating rounds.

RAM rating round 1

The first round involved individual ratings of statements distributed to panel members by email. The facilitator (EP) contacted each panellist to explain the RAM procedure and clarify any questions. The panellists were then emailed: a copy of the BPS complete with a summary of the evidence and NHMRC level of evidence;35 instructions on how to perform the ratings according to the definition of appropriateness provided in the RAM manual;28 a score sheet; a list of abbreviations and definitions; and an EndNote library complete with full texts of every reference. The facilitator's contact details were provided so that any queries about how to perform the ratings could be answered. The panellists rated the ‘appropriateness’ of each statement on a scale of 1 to 9, with 9 being the most ‘appropriate’.28 They were also able to record comments to explain their scores. Panel members retained copies of their comments to aid discussion in the second round of rating. Completed score sheets with any comments were then returned by email to the project manager. The scores and comments were recorded in a central database file. The panel facilitator checked the accuracy of the transfer of all entries and comments.

Analysis

Following the RAM guidelines,28 median scores were calculated for each statement and the number of panellists rating outside the median tertile was recorded. Statements were classified as valid based on the median rating of appropriateness and the degree of panel agreement (dispersion). Statements with a median panel score in the top tertile (7–9) without disagreement were classified as ‘appropriate’, statements with median ratings in the bottom tertile (1–3) without disagreement were classified as ‘inappropriate’, and statements with median scores between 4 and 6, or any median with disagreement, were classified as neither appropriate nor inappropriate but as ‘uncertain’. Using the guidelines for a nine-member panel,28 agreement was indicated when no more than two panellists rated the statement outside the three-point region (1–3; 4–6; 7–9) containing the median score. Disagreement was indicated when at least three panellists rated the statement in the lower third region (1–3) and at least three panellists rated it in the top third region (7–9). A simple content analysis of the comments was performed by the facilitator and project manager (EP, ET) to provide a preliminary understanding of the nature of any issues panel members had with the statements. There was no further analysis of the comments, as their primary purpose was to aid discussion of items in the face-to-face round (see below).
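
To make these decision rules concrete, the following is a minimal illustrative sketch in Python. It is not the authors' analysis code; the function names and example ratings are hypothetical. It classifies one statement from a nine-member panel's 1–9 ratings and reports whether the agreement criterion was met, using the definitions above.

```python
# Minimal illustrative sketch (not the authors' analysis code) of the RAM
# classification rules described above, for a nine-member panel.
from statistics import median

def tertile(score):
    """Return the three-point region (1-3, 4-6 or 7-9) containing a rating."""
    return "1-3" if score <= 3 else "4-6" if score <= 6 else "7-9"

def classify_statement(ratings):
    """Classify one statement from nine panellists' 1-9 appropriateness ratings.

    Returns (classification, agreement):
    - disagreement: at least three ratings in 1-3 AND at least three in 7-9
    - agreement: no more than two ratings outside the tertile containing the median
    - 'appropriate': median 7-9 without disagreement
    - 'inappropriate': median 1-3 without disagreement
    - 'uncertain': median 4-6, or any median with disagreement
    """
    assert len(ratings) == 9 and all(1 <= r <= 9 for r in ratings)
    med = median(ratings)  # middle value of the nine ratings

    disagreement = (sum(r <= 3 for r in ratings) >= 3
                    and sum(r >= 7 for r in ratings) >= 3)
    agreement = sum(tertile(r) != tertile(med) for r in ratings) <= 2

    if med >= 7 and not disagreement:
        return "appropriate", agreement
    if med <= 3 and not disagreement:
        return "inappropriate", agreement
    return "uncertain", agreement

# Hypothetical example: one dissenting rating still yields 'appropriate' with agreement.
print(classify_statement([9, 9, 8, 8, 8, 7, 7, 7, 5]))  # -> ('appropriate', True)
```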

RAM rating round 2

All nine panel members attended the second round face-to-face meeting, which occurred in Sydney in November 2013, 2 weeks after all panel members had completed their first round ratings. All discussions were audiotaped with the consent of the panellists. Two members were unable to attend for the entire day and were provided with an audio recording of the sections they missed. During this round, each panel member was provided with a score sheet containing both their original rating for each item and the panel's median score for each statement. Panellists discussed the wording of the BPS and any other issues associated with each BPS, such as the nature of the evidence. Panellists did not explicitly discuss their scores with each other, and once the discussion was completed, they re-rated each statement anonymously without further discussion. The same analysis procedure as before was applied to the second round ratings. To be classified as ‘appropriate’ and retained in the final best practice statement document, a statement needed to achieve a median rating between 7 and 9 and have no more than two panellists rate it below 7.
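
For comparison with the Round 1 rules sketched earlier, the stricter Round 2 retention check can be expressed in the same illustrative style (again a sketch with hypothetical ratings, not the authors' code):

```python
from statistics import median

def retained_in_round2(ratings):
    """Sketch of the Round 2 retention rule described above: keep a statement if the
    panel median is 7-9 and no more than two of the nine panellists rated it below 7."""
    return median(ratings) >= 7 and sum(r < 7 for r in ratings) <= 2

# Hypothetical example: a median of 8 is not enough if three panellists rate below 7.
print(retained_in_round2([9, 9, 9, 9, 8, 8, 6, 6, 5]))  # -> False
```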

Results

Round 1

Seventy-four BPS were rated across the eight areas of care developed by the CoP (see table 3). At the end of Round 1, 71 of the 74 statements were rated as appropriate (median rating between 7 and 9 with no disagreement) and no statements were rated as inappropriate. Three statements were rated as uncertain: one had a median rating of 6 with disagreement (table 3: BPS 5.4), and the other two had median ratings in the appropriate range (7–9) but more than two panellists scored them below 7 (table 3: BPS 5.6, 7.7). Of the 71 ‘appropriate’ statements, 34 (48%) had one or two ratings below 7, indicating that some minor discrepancies between panellists existed. Nearly all raters provided comments, which centred mainly on: (1) the degree to which the statement was consistent with speech pathologists’ scope of practice, (2) statement wording, and (3) the source evidence.

Round 2

As no statements were rated as inappropriate in Round 1, all 74 statements were retained and rated again in the second round face-to-face meeting. Owing to time constraints, the ratings for two of the eight sections had to be completed in a teleconference after the face-to-face meeting. During the meeting, 16 statements were added by consensus through splitting existing items or adding new statements. This splitting and addition occurred mostly in the Intervention section (Section 5), where the panel expressed concerns about the format of the section and identified key missing statements. Their feedback was documented; however, due to time constraints, rating of this section was postponed to the teleconference. Between Round 2 and the teleconference, the section was reformatted and new additions were made based on panel feedback, with updated references where required. Seven statements were deleted, leaving 83 statements. Statements were deleted because they were considered too broad in nature (eg, table 3: BPS 2.4), because they were replaced by other statements, or because there were questions about whether the statement adequately reflected the role and scope of the speech pathologist (eg, table 3: BPS 8.9). Most of the 83 statements were edited during the face-to-face meeting to ensure consistency of wording and terminology. All statements in Round 2 scored a median appropriateness score of 9, with the exception of two statements that scored 8. Agreement was reached for 82 of the final 83 statements; a statement on outcome measures (table 3: BPS 3.6) was excluded from the final version as three panel members scored it outside the median range (7–9). The final 82 statements are presented in the online supplementary materials. The comprehensive version of the statements, including detailed rationales, references and levels of evidence, can be found in the online supplementary materials or at http://www.aphasiapathway.com.au.

Table 3

Best practice statements (BPS) for each area of care (n=74) presented with median panel score and number of panellists that scored outside the median tertile

The final statements were based on a combination of evidence and expert opinion. In summary, 35 (42%) of the 82 statements were supported by quantitative evidence, and 23 of those 35 were supported by the highest level of evidence (level I35). Sixteen (20%) of the statements were supported by qualitative evidence and 31 (38%) by expert opinion. Eleven of these 31 were in the section on Enhancing Personal Factors, which included working with people from culturally and linguistically diverse and Aboriginal and Torres Strait Islander backgrounds.

Discussion

Consensus has been reached on 82 Aphasia Rehabilitation Best Practice Statements across eight domains of care using the RAM quality indicator development and validation process. As the dissemination of guidelines alone does not necessarily result in implementation,39 these statements have been integrated into a dynamic and accessible online implementation resource, the Australian Aphasia Rehabilitation Pathway (AARP: http://www.aphasiapathway.com.au). Each statement is accompanied by clinical resources to assist implementation efforts. In producing the BPS and AARP, we have responded to evidence from clinicians9,22 and consumers14 that there is a need for more detailed recommendations and a clearer pathway of care for aphasia.

The 82 statements over eight domains of care represent a considerable expansion of the number of aphasia-related statements contained in current stroke guidelines. While there were additions in all domains, one of the principal areas of increase was the Personal Factors domain, in particular for culturally and linguistically diverse populations (see online supplementary materials; 11 statements, BPS 7.3–7.13) and Aboriginal and Torres Strait Islander populations (9 statements, BPS 7.14–7.22). These populations pose particular challenges for speech pathologists when providing appropriate care to people with aphasia. The Australian stroke guidelines refer very briefly to consideration of cultural and linguistic diversity in the assessment of people with aphasia, but further guidance in this area, for Indigenous peoples in particular, was lacking. An additional expert panel was engaged to provide more specific information for these statements. Additionally, we referenced the New Zealand stroke guidelines,40 which include specific statements tailored to the Maori population. Inclusion of these statements represents an important step towards understanding, respecting and representing Indigenous world-views, encouraging culturally appropriate working practices and valuing cultural diversity.

Another feature of the statements is that while 28% of the statements were rated as Level I evidence,35 a large proportion (58%) were supported by either qualitative evidence (20%) or expert opinion (38%), particularly in the Personal Factors section. This finding may be expected for a developing area of research.41 For example, until recently very few studies had been published on aphasia and Aboriginal and Torres Strait Islander populations and the evidence in the BPS represents the early phases of research in this area.42–44 Additionally, one advantage of the process of guideline development is that it can highlight the evidence gaps more clearly and focus researchers on areas of practice where more high quality evidence is required. Through the development of the BPS, the CCRE in Aphasia Rehabilitation has been able to identify where the gaps in evidence lay, and has focused research efforts on these gaps in combination with priority areas identified by clinicians,9 consumers14 and international clinical and research organisations.45 Therefore, in addition to the creation and validation of best practice statements, the RAM process has focused research agendas on areas of need.39

Owing to the above factors associated with the quality of the current aphasia rehabilitation evidence, the RAM was ideally suited to the validation of BPS through a combination of research evidence and expert opinion.28 One of the advantages of the RAM process is that it contains a face-to-face discussion round.28,31 This round provided the panel with the opportunity to discuss the opinions and assumptions underlying their ratings and the source evidence, in addition to modifying the wording of statements. There were very high levels of agreement in both Round 1 (see above) and Round 2 (82 of 83 statements agreed as appropriate, with a median rating of 9 for 80 of the 82 final statements).

Strengths and limitations

We have utilised a method of validation recommended for areas of practice where there is a lack of high-quality evidence across domains of care.22 Owing to the vulnerability of people with aphasia to poor long-term psychosocial outcomes,10–13 it is critical that clinicians have access to validated BPS that incorporate research evidence, and expert opinion where research is lacking, in order to provide a foundation for quality and consistent care provision. In the development process, we engaged a range of stakeholders through our CoP to develop the AARP domains, and a diverse panel with expertise in research, clinical and managerial practice as well as policy to validate the BPS. However, it is possible that our review was not exhaustive and did not identify all relevant articles, given the continuing development of the published evidence. Despite this, we have created a strong foundation for the continued revision and updating of the BPS in the future. Additionally, the judgements made by a single panel of speech pathologists may not be representative of all clinicians, researchers and policymakers. Two panel members were unable to attend the whole face-to-face meeting and provided their ratings after listening to the recorded discussion. While their input was considered separately, their absence for those sections may have affected the nature of the discussion. We also did not return the final, validated BPS to the broader CoP to gain wider national consensus.

The majority of the BPS should be internationally applicable. While there are some promising new guidelines available internationally, such as the Canadian Stroke Best Practice Recommendations, which include nine recommendations specific to aphasia management, there remains a paucity of rigorously reported BPS guidelines for aphasia.15 Prior to publication of the BPS, the most robust clinical guidelines addressing stroke management included the Australian Clinical Guidelines for Stroke Management16 and the New Zealand Clinical Guidelines for Stroke Management;40 however, these were not developed using the ICF framework, nor do they focus on aphasia management across the continuum of care.15 One strength of the BPS is the inclusion of a comprehensive section on Personal Factors relating to culturally and linguistically diverse and Indigenous populations. While this inclusion might encourage the international community to address such Personal Factors, the BPS may require adaptation for other regions and nations, especially those sections that have been heavily contextualised for Australian practice and society.

Future directions

The BPS are suitable for use as an audit tool in clinical settings. This process will identify particular statements where there is either consistency or variation in practice, and those that have variation can be targeted for either broader translation initiatives involving evidence-based implementation strategies46 or local quality improvement projects.47 The future production of a consumer friendly version that incorporates ‘aphasia friendly’ formatting and language48 will potentially enhance the active participation of people with aphasia and their families in the rehabilitation process, and provide them with information about their care at each phase of their journey.46 Further consultation with other health professionals involved in the care of people with aphasia is also warranted. As new evidence emerges, the BPS will be updated to reflect the current state of the knowledge in this field. Additionally, we will continue to work with our collaborators to ensure the BPS influence future iterations of other stroke and aphasia guidelines, nationally and internationally.

Conclusion

We have developed evidence-based and expert-endorsed BPS for aphasia rehabilitation. These statements form a crucial part of the Australian Aphasia Rehabilitation Pathway (http://www.aphasiapathway.com.au). The aphasia BPS represent a critical foundation step for a national implementation effort in stroke care/rehabilitation.

Acknowledgments

For their contributions to the development of the best practice statements, the team would like to acknowledge: Dr Zaneta Mok, Dr Anne J Hill, Ms Alexia Rohde, Ms Sarah Wallace, Dr Karen Brewer, Dr Denise O'Connor and the Missing Voices project team (Professor Beth Armstrong, Associate Professor DH, Associate Professor Judith Katzenellenbogen, Associate Professor Julianne Coffin, Professor Sandra Thompson, Dr Natalie Ciccone, Professor Colleen Hayward, Deborah Woods, Professor Leon Flicker and Ms Meaghan McAllister, Edith Cowan University); the Aphasia Rehabilitation Community of Practice, including the Australian Aphasia Association, National Stroke Foundation, and the clinicians who attended the CoP meetings; the CCRE in Aphasia Rehabilitation Chief and Associate investigators, postdoctoral research fellows, higher degree and honours research students. The authors would also like to acknowledge Dr Benjamin Basger for his generous consultation on the RAND/UCLA procedures.

Supplementary materials

  • Supplementary Data

Footnotes

  • Twitter Follow Emma Power at @dr_epower

  • Contributors EP, ET, LW, MR, and LT contributed to the project conceptualisation, design, data collection, analysis and editing of the final manuscript. EP and ET drafted the manuscript. EP facilitated the RAM process. The remaining authors made substantial contributions to data collection, interpretation and analysis through the development of the best practice statements and/or contributions to the RAM panel. All authors have contributed to the critical editing of the paper. All authors take responsibility for the accuracy and integrity of the data, and have given approval for the final version to be published.

  • Funding This study was funded by an Australian National Health and Medical Research Council Centre for Clinical Research Excellence grant (569935).

  • Competing interests All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/coi_disclosure.pdf and declare: EP, ET, LW, MR, LT, LN, RO, and EG had financial support from the Australian National Health and Medical Research Council Centre for Clinical Research Excellence grant (569935). Additionally, financial support was provided to MR and LN (Australian Research Council Future Fellowships), LT (Australian National Health and Medical Research Council Senior Research Fellowship). DH, SL, CO and KC had no financial support for this work; none of the authors had financial relationships with any organisations that might have an interest in the submitted work in the previous 3 years; none of the authors had other relationships or activities that could appear to have influenced the submitted work.

  • Ethics approval University of Queensland Human Ethics Committee (approval number 2009001850).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.

  • i Due to the Australian context of the study, we use the term ‘speech pathologist’, which is synonymous with the term ‘speech and language therapist’ adopted in the United Kingdom.