Supporting Policy In health with Research: an Intervention Trial (SPIRIT)—protocol for a stepped wedge trial
The CIPHER Investigators
The Sax Institute, Haymarket, Australia
Correspondence to Dr Anna Williamson; Anna.williamson@saxinstitute.org.au

Abstract

Introduction Governments in different countries have committed to better use of evidence from research in policy. Although many programmes are directed at assisting agencies to better use research, there have been few tests of the effectiveness of such programmes. This paper describes the protocol for SPIRIT (Supporting Policy In health with Research: an Intervention Trial), a trial designed to test the effectiveness of a multifaceted programme to build organisational capacity for the use of research evidence in policy and programme development. The primary aim is to determine whether SPIRIT results in an increase in the extent to which research and research expertise are sought, appraised, generated and used in the development of specific policy products produced by health policy agencies.

Methods and analysis A stepped wedge cluster randomised trial involving six health policy agencies located in Sydney, Australia. Policy agencies are the unit of randomisation and intervention. Agencies were randomly allocated to one of three start dates (steps) to receive the 1-year intervention programme, underpinned by an action framework. The SPIRIT intervention is tailored to suit the interests and needs of each agency and includes audit, feedback and goal setting; a leadership programme; staff training; the opportunity to test systems to assist in the use of research in policies; and exchange with researchers. Outcome measures will be collected at each agency every 6 months for 30 months (starting at the beginning of step 1).

Ethics and dissemination Ethics approval was granted by the University of Western Sydney Human Research and Ethics Committee HREC Approval H8855. The findings of this study will be disseminated broadly through peer-reviewed publications and presentations at conferences and used to inform future strategies.

  • PUBLIC HEALTH
  • STATISTICS & RESEARCH METHODS

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Introduction

Internationally, there is a growing commitment to better use of evidence from research in policy, with a view to improving outcomes and optimising resource allocation.1–4 As a result, considerable attention has recently been given to assisting government agencies to better use evidence from research in policy and programme design.

Health policies and programmes have been a particular focus, and a number of approaches to building skills and organisational capacity to use evidence from research have been developed.5,6 These programmes target government agencies that develop national or regional policy and programmes, rather than organisations that provide health services and clinical care. Despite the proliferation of such programmes and their cost to government, there have been few examinations of their impact on the use of research. A recent comprehensive review7 found only five studies examining the impact of strategies to increase the use of evidence in policy; since that time, to the best of our knowledge, only one other relevant study protocol has been published.8

The paucity of trials may be partly a result of the considerable challenges in evaluating the impact of strategies designed to increase the use of research in policy and programme design. Agencies involved in policy and programme development, even within health, are heterogeneous: they differ in remit, size, proximity to central government, and in their need for and current capacity to use evidence from research. These agencies are also often complex organisational systems and may respond very differently to the same strategy programme. Effective strategies will therefore most likely be multicomponent and highly tailored to the needs of different agencies. Study designs need to be capable of assessing complex interventions and of carefully measuring the context in which the programme is delivered. Because the programme will most likely need to be delivered at the level of the health policy agency, a cluster design is required, and it may not be feasible to recruit and intervene with enough agencies to power a conventional randomised controlled trial. A further challenge lies in the lack of established validated measures of the use of research in policy and programme development; in a trial context, an objective measure of the impact of the programme is required.

This paper describes SPIRIT (Supporting Policy In health with Research: an Intervention Trial), an evaluation of the impact of a multifaceted programme designed to build the capacity of health policy agencies (defined in footnote i) to use research in policy and programme development. The trial is based on the SPIRIT Action Framework, described in detail elsewhere, which hypothesises that the use of research in policy is mediated by the capacity of the organisation to use research (including the extent to which the agency values research, has systems in place to use research and the skills of its staff). The Framework predicts that greater capacity will lead to more research engagement actions (demonstrated by accessing and appraising existing research, generating new research and interacting with researchers) and that, in turn, more engagement actions will result in greater use of research evidence.

SPIRIT is a stepped wedge trial evaluating a complex intervention and using an objective measure for its primary outcome. It extends previous studies, which often evaluated only one strategy,10–12 did not use objective outcome measures10,11,13 or were unable to include a detailed process evaluation.10–14 In addition, SPIRIT is the only trial designed to bring about organisational change in policy and programme agencies that is able to assess the long-term impact of the programme (up to 18 months after its completion).

Aims

Objective and aims

The objective of the trial is to examine what impact the SPIRIT intervention has and how it works in the field. Our primary outcome measure is an objective assessment made by an expert panel; our secondary measures use self-report and key informant reports; and our process evaluation includes coded and descriptive accounts of intervention delivery and participation, drawn from observations of sessions, semistructured interviews and written participant feedback. We will examine these data across the components of the SPIRIT Action Framework (capacity, research engagement actions and use of research evidence); this will enable us to examine the impact of the SPIRIT intervention programme and to test the causal relationships hypothesised by the model. By capacity, we mean the extent to which the agency values research, has systems in place to use research and the skills of its staff. By research engagement actions, we mean the extent to which research is accessed, appraised and generated, and the extent of interaction with researchers, in relation to the development of a specific policy or programme document. By research use, we mean the extent to which research is sought and used in developing the policy or programme document, taking into account barriers and facilitators.

The specific aims are:

Primary outcome

  1. To determine, using an objective measure, whether the SPIRIT intervention results in an increase in the extent to which research engagement actions are undertaken, and research is used in the development of policy and programme documents.

Secondary outcome

  1. To determine, using the self-report of staff members, whether the SPIRIT intervention results in an increase in:

    • The capacity of health policy agencies to use research in terms of: (A) the value placed on the use of research evidence by individual policymakers and by the organisation; (B) the confidence of policymakers in undertaking research actions and using research; and (C) the systems and tools the organisation has in place to support research use;

    • Research engagement actions;

    • The use of research.

Process evaluation: The aim of the process evaluation is to complement the outcome measures by exploring how and why the intervention worked or did not work in different contexts. This includes descriptions of:

  1. The implementation of the intervention programme, including the delivery of the essential elements (fidelity);

  2. Participation in and responses to the intervention (eg, how people interacted with the programme, how they evaluated different aspects of it and what sort of change they experienced, if any);

  3. Contextual factors that might have affected the programme and responses to it (eg, agency priorities, practice norms, other training or organisational change initiatives, relationships with external bodies, legislative reform).

Cost analysis

  1. To document the cost of delivering the intervention.

Methods

SPIRIT started in October 2012 and, at the time of writing, approximately half of the 30-month study period has been completed.

Trial design

SPIRIT is a stepped wedge cluster intervention trial (figure 2) involving six agencies, with two agencies randomly assigned to each step. The design is a variation on a cross-over design in which each unit is measured in both the control and intervention phases, except that the stepped wedge involves a cross-over in one direction only. Its key feature is that the intervention is rolled out sequentially over a number of time periods, so that all trial units have received the intervention by the end of the trial, although the order in which they receive it is determined at random.15,16 Outcome measures are obtained at the same time in all sites, at baseline and after implementation in each site. This design was selected in accordance with UK Medical Research Council recommendations17 because the intervention is delivered at the level of the health policy agency (cluster), and recruiting and intervening with enough policy agencies to power a traditional randomised controlled trial was infeasible given the resource-intensive nature of SPIRIT and its status as a new, untested programme.
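
For concreteness, the exposure schedule implied by this design can be sketched programmatically. This is a sketch only: it assumes that the six measurement periods are numbered 1–6 (baseline plus every 6 months for 30 months) and that each randomised pair is measured under the intervention from the period after its start date; the authoritative correspondence between periods and intervention status is figure 2.

```python
# Sketch of the SPIRIT stepped wedge exposure schedule (cf. figure 2).
# Assumption (not taken verbatim from the protocol): each randomised
# pair of agencies counts as "exposed" from the measurement period
# after its start date onwards.

N_PERIODS = 6                    # baseline + every 6 months for 30 months
FIRST_EXPOSED_PERIOD = {         # assumed mapping of steps to periods
    "step 1 (Oct 2012)": 2,
    "step 2 (Apr 2013)": 3,
    "step 3 (Oct 2013)": 4,
}

def exposure_row(first_exposed, n_periods=N_PERIODS):
    """1 = measured under the intervention, 0 = control (pre-intervention)."""
    return [int(period >= first_exposed) for period in range(1, n_periods + 1)]

for step, first in FIRST_EXPOSED_PERIOD.items():
    print(f"{step}, two agencies:", exposure_row(first))
```

Printed row by row, the schedule forms the characteristic one-directional "wedge": all agencies start in the control condition at period 1, and all are exposed by period 4.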

Participants

Recruitment of health policy agencies

The inclusion and exclusion criteria for policy agencies were established prior to recruiting trial participants. An agency was eligible to participate in SPIRIT if: (A) a significant proportion of its work was in health policy or programme development; (B) at least 20 staff members were involved in policy or programme design, development or evaluation; and (C) it was located in Sydney (for ease of provision of the programme).

Government websites listing all New South Wales and Australian government health policy and programme agencies were used to identify potentially eligible agencies located in Sydney, New South Wales (where most of the major health policy agencies in the state are located). Information from each agency's website was reviewed by members of the investigator team with policy experience to exclude agencies without a significant focus on health and on policy or programme design, development or evaluation. Seventy-five agencies were classified as potentially eligible, and email and/or phone contact was made with each to determine the number of relevant staff members. Following this step, the 16 agencies still regarded as potentially eligible were ranked to identify those with the greatest specific focus on health and the largest numbers of relevant staff. Two of the authors (SR and NL) visited the top six ranked agencies, all of which agreed to take part. The participating agencies comprise a major centre within the New South Wales Ministry of Health and five agencies (four state and one national) that develop policy and programmes about specific aspects of health.

Intervention

Development of the intervention programme

The intervention programme is underpinned by the programme logic shown in figure 1.

Figure 1

The programme logic for Supporting Policy In health with Research: an Intervention Trial (SPIRIT).

The intervention was designed to be appropriate for policy agencies, taking into account that they are sophisticated, complex organisations with skilled staff and diverse policy priorities. The design of the programme was strongly influenced by cognitive behavioural theory,18,19 systems science,20,21 organisational change theory22–25 and adult learning theories.26,27 We used these approaches, together with information from interviews with policymakers and experience from existing programmes in Australia28–30 and internationally,5,6,10,14 to identify factors associated with significant individual or organisational change. Telephone interviews were then conducted with nine senior Australian health policymakers to canvass their views on the kinds of strategies being considered for inclusion in the intervention. They endorsed our approach, indicating that the proposed change strategies were acceptable and would most likely improve the use of research evidence in policymaking.

Components of the intervention and all three measures were piloted in a policy agency that was not part of SPIRIT, prior to the start of the study. This pilot broadly confirmed that the proposed methods were acceptable and appropriate, and allowed fine-tuning of some aspects of the intervention.

SPIRIT: the programme

The SPIRIT intervention is a six-component programme (table 1):

  1. Audit, feedback and goal setting: SPIRIT begins with a facilitated discussion of the findings from the agency's preintervention measures (see below) with the individuals whom each agency nominates to its ‘leaders’ group. One of the authors (SR) facilitates these sessions and encourages the leaders to reflect on and discuss the agency's current strengths in using evidence from research, and opportunities for improvement based on the preintervention data. The discussion is used to (A) agree on priorities for change; (B) select the elective elements of the SPIRIT programme (see table 1) and (C) identify other actions, outside the scope of the SPIRIT programme, that the agency might wish to undertake to increase its use of evidence in policy.

  2. Leadership programme: This component provides an opportunity for agency-nominated leaders to consider the value of research in their work, current approaches to knowledge exchange and how to bring about greater use of evidence from research in their agency. It draws on the CFHI EXTRA programme5 and consists of two interactive workshops led by experts in knowledge exchange and policy development.

  3. Agency support for research: This component is designed to strengthen the perception among staff that their organisation values the use of evidence, and to provide tools for structural change. It includes (A) a quarterly email sent to all staff in each agency by the chief executive (CE); the email aims to demonstrate CE support for the use of evidence in the agency's work and is drafted by the research team and contextualised by the CE; (B) access to Web CIPHER (Centre for Informing Policy in Health with Evidence from Research), an online resource for policymakers where users can access research, hear from opinion leaders, share knowledge and debate ideas about the use of research in policymaking and programme development; and (C) provision of relevant fact sheets and key publications.

  4. Systems for accessing research, commissioning a review and analysing relevant data: This component gives agencies an opportunity to trial existing systems for helping agencies to use research evidence. Each agency elects to trial one of the following: a brokered rapid review of research evidence on a topic of interest; a brokered analysis of local data; or the development of an evaluation framework related to an aspect of the agency's work. A tailored research product is produced for each agency as a result of this process.

  5. Research access: In this component, agencies choose three occasions of facilitated access to research or researchers. This might be (A) a tailored interactive forum that brings together researchers and policymakers around a topic specified by the agency; or (B) receipt of a summary of recently published systematic reviews relevant to the agency's work.

  6. Staff training and skill development: This component is designed to increase the value placed on the use of research in policy (symposium 1, received by all agencies) and to increase skills in areas selected by the agency (symposia 2–3, topics selected by the agency). The workshops are provided by individuals with expertise appropriate to the topic, include tailored case studies and draw on adult learning principles. Symposium 1 includes feedback on the current use of evidence from research by the agency and symposia 2 and 3 might address any of the following depending on the interests of the agency: accessing and applying systematic reviews; skills for appraising research; policy and programme evaluation (introduction to evaluation or evaluation in practice); working with researchers or commissioning research for policy and programme development.

Table 1

Overview of the intervention components and subcomponents, their delivery mode and goals

The active phase of the intervention runs for 12 months, with around one activity in each agency per month. There are no costs to agencies for any component, other than the opportunity cost of the time their staff take to participate. The CE of each participating agency was asked to nominate a senior member of staff to act as the liaison person (LP) and primary point of contact for the SPIRIT team. An individual with extensive knowledge-brokering experience was appointed as the SPIRIT officer for each agency and worked with the LP to ensure that SPIRIT was provided effectively at that agency.

Timing of recruitment, intervention delivery and follow-up

As shown in figure 2, the SPIRIT programme runs for 12 months in each health policy agency. The total measurement period is 30 months with measures being collected at the beginning of the study period, and then every 6 months thereafter in each agency. There are thus six measurement points in total. The first measurement point is a preintervention measure for all agencies and the final measurement point is 6 months after the completion of the SPIRIT programme in the last randomised pair of agencies.

Figure 2

SPIRIT design. LP, liaison person; SPIRIT, Supporting Policy In health with Research: an Intervention Trial.

Outcomes

Primary outcome

  1. The primary outcome is an objective measure (SAGE, described below) of whether the SPIRIT intervention results in an increase in the extent to which research engagement actions are undertaken and research is used in the development of policy and programme documents.

Secondary outcomes

  1. To determine, using the self-report of staff members (SEER and ORACLe, described below), whether the SPIRIT intervention results in an increase in:

    • The capacity of health policy agencies to use research in terms of: (A) the value placed on the use of research evidence by individual policymakers and by the organisation; (B) the confidence of policymakers in undertaking research actions and using research; and (C) the systems and tools the organisation has in place to support research use;

    • Research engagement actions;

    • The use of research.

Process evaluation: The third set of outcomes is collected as part of a process evaluation and describes the implementation of SPIRIT in health policy agencies, including data on:

  1. SPIRIT components selected by each agency;

  2. Participants’ engagement;

  3. The fidelity of delivery;

  4. The cost of delivery.

Outcome measurement

All outcome measures used in SPIRIT have been developed specifically for the trial as we were unable to identify any suitable measures in the literature. All measures were extensively piloted prior to the start of the trial and papers outlining their development and validation will be published in peer-reviewed journals. Table 2 provides a summary of the primary and secondary study outcomes and the accompanying outcome measurement tools.

  1. Staff Assessment of enGagement with Evidence (SAGE) is an objective measure of the primary outcome: the extent to which agencies apply research findings in the development of the policy and programme documents they produce. An expert panel will be assembled to assign SAGE scores on the basis of interview data and document review. At each of the six data points, agencies are asked to provide the four policy documents finalised during the past 6 months that best represent the use of research in their policy and programme development work; the level of measurement is the policy document. Information about each document is collected via a structured qualitative interview, lasting approximately 1 hour, with an individual nominated by the LP who was heavily involved in the development of the document. A separate interview is conducted for each nominated document. SAGE assesses two domains: (A) research engagement actions (accessing research, appraising research for quality and for relevance, generating new research or analysis, and interacting with researchers), in which each of the six dimensions receives a score from 0 to 9; and (B) research use (four types are considered: instrumental, tactical, conceptual and imposed), in which each of the four dimensions receives a score from 0 to 9. In both domains, scores can be summed or reported separately.

  2. Seeking, Engaging with and Evaluating Research (SEER) measures individual policymakers’ capacities, research engagement actions and research use. SEER is administered via an online self-report survey to eligible policymakers identified by the LP. At each of the six measurement points, the LP from each agency provides a list of eligible policymakers to the research team.

Table 2

SPIRIT outcome measures

Health policy or programme staff are regarded as eligible to complete SPIRIT measures if:

  1. They write health policy documents or develop health programmes, or make or contribute significantly to policy decisions about health services, programmes or resourcing;

  2. They are employed at a mid-level or higher in their agency;

  3. They are over 18 years of age and consent to participate in the study.

Individual agency staff members are excluded if they are contractors and/or work across several of the participating agencies. All nominated policymakers are emailed an invitation to participate in the relevant SPIRIT trial outcome measure, along with standard participant information and consent forms. SEER measures (A) capacity (four subscales: predisposing factors related to individual values, perceptions of organisational values, perceptions of organisational systems and individual knowledge); (B) research engagement actions (accessing research, appraising research, generating new research or analyses and interacting with researchers); and (C) research use (conceptual, instrumental, tactical and imposed research use) and the extent of research use.

  3. Organisational Research Access, Culture and Leadership (ORACLe) is used to assess each agency's capacity to use research findings. ORACLe data are collected via a structured qualitative interview with a senior staff member from each agency, nominated by the agency LP. The qualitative data are transcribed and used to assign scores on seven dimensions related to an organisation's capacity to use research. Two researchers separately score each participant's responses (0 = no; 1 = yes, some or to a limited extent; 2 = yes, very much so) to the 21 questions that make up the ORACLe scale. Inter-rater reliability has thus far been high (95%). Scores for each domain, and a total score, are then generated using an algorithm developed via a discrete choice experiment that harnessed the views of leaders in policy and knowledge exchange on the most important agency-level factors influencing the use of evidence. A sketch of this scoring logic is given after the list of domains below.

ORACLe assesses agencies across the following seven domains: (A) processes that encourage or require the examination of research in policy and programme development; (B) tools and programmes to assist leaders of the organisation to actively support the use of research in policy and programme development; (C) strategies to provide staff with training in using evidence from research in policy and in maintaining these skills; (D) organisational strategies to help staff to access existing research findings; (E) methods to generate new research evidence to inform the organisation's work; (F) methods to ensure adequate evaluations of the organisation's policies and programmes; and (G) strategies to strengthen research relationships.
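
To make the scoring logic concrete, the sketch below illustrates how ORACLe-style domain and total scores might be computed. The grouping of the 21 questions into domains (A)–(G) and the domain weights are not reported in this protocol, so the three-questions-per-domain grouping and the equal weights are illustrative placeholders for the algorithm derived from the discrete choice experiment; the agreement function illustrates the kind of inter-rater statistic reported as 95% above.

```python
# Sketch of ORACLe-style scoring under stated assumptions (question
# grouping and weights below are hypothetical placeholders).
from statistics import mean

VALID = {0, 1, 2}  # 0 = no; 1 = yes, to a limited extent; 2 = yes, very much so

# Hypothetical grouping: three questions per domain, 7 domains = 21 questions.
DOMAINS = {d: list(range(i * 3, i * 3 + 3)) for i, d in enumerate("ABCDEFG")}
WEIGHTS = {d: 1 / 7 for d in DOMAINS}  # placeholder for the DCE-derived weights

def domain_scores(answers):
    """Mean question score (0-2) per domain for one agency interview."""
    assert len(answers) == 21 and set(answers) <= VALID
    return {d: mean(answers[q] for q in qs) for d, qs in DOMAINS.items()}

def total_score(answers):
    """Weighted combination of the seven domain scores."""
    return sum(WEIGHTS[d] * s for d, s in domain_scores(answers).items())

def percent_agreement(rater_a, rater_b):
    """Simple percentage agreement between the two raters' 21 scores."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)
```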

Process evaluation

A mixed methods process evaluation is conducted in parallel with the intervention in each of the six agencies. It collects data about, and analyses the interactions between, three domains:

  1. The delivery of the intervention, including implementation of essential elements (fidelity). All intervention sessions are observed and audio recorded by process evaluation staff. Initial essential elements (ie, aspects of each intervention component that are considered critical to its success) are identified, and codes developed to monitor their delivery descriptively and, where possible, quantitatively. Essential elements and codes are tested in the early stages of the programme delivery, refined and further tested throughout the trial. Revised elements are applied retrospectively to earlier sessions to enable comparison across intervention sites.

  2. Participation in and responses to the intervention: Participation attributes are recorded during session observations, and self-report evaluation forms are used to collect feedback on the relevance, applicability and value of each intervention session from the participants’ point of view, including whether they are likely to use the knowledge and skills developed in the session. Following the intervention, five to seven semistructured interviews are conducted in each site, exploring participants’ responses to the programme, including changes in individual and organisational capacity to use research, research engagement actions and actual research use.

  3. Contextual factors that might have affected the intervention and responses to it: Session observations are used to collect data on participants’ views and experiences of research use in their work. Preintervention interviews with participants explore workplace priorities and culture, how they perceive their organisations’ support for research use, the role of research within the mix of other information and any other contextual factors that affect their use of research or may affect how they will engage with and respond to the intervention, including exposure to other research use drivers.

The process evaluation will complement the outcome measures by describing how and why the intervention worked or did not work, including why the same intervention strategies may have had different effects in different contexts.31–33 In addition, it will aid future programme improvement by identifying beneficial design and implementation strategies and suggesting contextually responsive programme adaptation for different recipients and settings. The full protocol for the SPIRIT process evaluation is described in detail elsewhere.

Cost analysis

A cost analysis is being conducted as part of SPIRIT because the intervention includes non-market goods for which standardised cost estimates are unlikely to exist. The analysis is conducted from the perspective of the intervention provider. Detailed data on the resources used to deliver the intervention, and their values, are collected via interviews with central intervention staff; logging of the costs involved in hiring session facilitators; interviews with agency liaison people regarding the time taken to complete their SPIRIT tasks; recording the length of interviews; monitoring the time taken to complete the SEER online surveys; and careful record keeping regarding the duration of sessions. Microcosting (a bottom-up costing methodology) is being used.34

Randomisation

The health policy agency is the unit of randomisation and intervention in the SPIRIT trial, with two sites randomised to start the intervention at each ‘step’. The six participating agencies were each assigned a code. The list of agency codes was sent to a biostatistician who was not involved in any other aspect of the trial, who assigned each agency code a computer-generated random number and sorted the list by these numbers. The first two agencies in the sorted list were allocated the October 2012 start date (step 1), the next two the April 2013 start date (step 2) and the final two the October 2013 start date (step 3).
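
A minimal sketch of this allocation procedure follows; the agency codes and the fixed seed are illustrative and stand in for the biostatistician's computer-generated random numbers.

```python
# Sketch of the allocation procedure: assign each agency code a random
# number, sort by it, and read off the three start dates in pairs.
import random

agency_codes = ["A1", "A2", "A3", "A4", "A5", "A6"]   # illustrative codes
start_dates = ["Oct 2012 (step 1)", "Apr 2013 (step 2)", "Oct 2013 (step 3)"]

rng = random.Random(42)          # seeded only so the sketch is reproducible
random_numbers = {code: rng.random() for code in agency_codes}
sorted_codes = sorted(agency_codes, key=random_numbers.get)

for i, code in enumerate(sorted_codes):
    print(code, "->", start_dates[i // 2])   # pairs of agencies per step
```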

Blinding

The SPIRIT staff and those investigators involved in delivering aspects of the intervention are, by necessity, not blind to allocation. All other investigators are blind to allocation, although those conducting the analysis will need to know when each site started the intervention. They will not be informed which agencies are in the active intervention phase, and all questionnaire responses and interview records they receive will be de-identified for individual participants. Agency allocation is concealed from those scoring SAGE and ORACLe to limit bias; this is particularly important because the intervention team cannot be blinded to the active intervention agencies.

Statistical methods

Analysis will be undertaken using generalised linear models with a link function and error distribution appropriate to the outcome measure. In each model, the intervention effect will be estimated as the difference between the postintervention and preintervention levels of the outcome after adjusting for time. The unit of analysis will be the agency for the SAGE and ORACLe outcome measures and the individual for the SEER outcome measure. Since there are too few clusters to employ either a linear mixed model or a generalised estimating equation (GEE) approach, all models will adjust for the clustering by using agency as a fixed effect.
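
As an illustration, a model of this form might be specified as follows. The column names, the input file and the Gaussian family are assumptions made for the sketch; in practice, the link function and error distribution would be chosen to suit each outcome measure.

```python
# Sketch of the analysis model for an agency-level outcome such as the
# SAGE total score. Column names (sage_total, intervention, period,
# agency) and "sage_scores.csv" are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("sage_scores.csv")   # long format: one row per document

model = smf.glm(
    "sage_total ~ intervention + C(period) + C(agency)",  # agency as a fixed effect
    data=df,
    family=sm.families.Gaussian(),    # assumed; chosen per outcome in practice
)
print(model.fit().summary())
```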

For the study design outlined in figure 2 and with the assumption of an intraclass correlation coefficient of 0.01, a total sample size of 144 documents, 4 per site per time period, will give 89% power to detect a 1 SD difference in the mean SAGE total score at the 5% significance level.16
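
This figure can be reproduced, approximately, using the well-known stepped wedge variance formula of Hussey and Hughes, assuming the exposure schedule sketched in the Trial design section and outcomes standardised to unit total variance; the schedule and standardisation are assumptions made for the sketch rather than details stated in the protocol.

```python
# Approximate reproduction of the stated power calculation using the
# Hussey & Hughes variance formula for stepped wedge designs. The
# exposure schedule (pairs crossing over at periods 2, 3 and 4 of 6)
# is assumed from figure 2; total outcome variance is set to 1 so the
# detectable difference is 1 SD.
from math import sqrt
from statistics import NormalDist

I, T, n = 6, 6, 4                 # clusters, periods, documents per cluster-period
icc, effect, alpha = 0.01, 1.0, 0.05

tau2 = icc                        # between-cluster variance component
sigma2 = (1 - icc) / n            # variance of a cluster-period mean

first_exposed = [2, 2, 3, 3, 4, 4]
X = [[int(j >= f) for j in range(1, T + 1)] for f in first_exposed]

U = sum(map(sum, X))                                      # total exposed cells
W = sum(sum(X[i][j] for i in range(I)) ** 2 for j in range(T))
V = sum(sum(row) ** 2 for row in X)

var = (I * sigma2 * (sigma2 + T * tau2)) / (
    (I * U - W) * sigma2 + (U ** 2 + I * T * U - T * W - I * V) * tau2
)

z = NormalDist().inv_cdf(1 - alpha / 2)
print(f"power = {NormalDist().cdf(effect / sqrt(var) - z):.2f}")  # ~0.89
```

Under these assumptions the calculation yields a power of approximately 0.89, consistent with the 89% reported above.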

Data management and monitoring

Data for the SEER measure are entered online by participants. SEER is primarily multiple choice. To enhance data quality, the online survey has built-in skip functions, and participants are unable to progress from one page to the next until all questions are answered. Data from SAGE and ORACLe are transcribed (the scoring and checking procedures are described in the Outcome measurement section).

All data and documents are stored on password protected computers and on a network server accessible only to the researchers. Any paper documents are stored in a securely locked filing cabinet located at the Sax Institute, which has day-to-day management of CIPHER. The Sax Institute is a secure facility requiring a unique staff access code to enter; visitors are escorted on-site and are required to sign in.

A data monitoring committee was not required for this trial as the risk of harm to participants is negligible.

Ethics and dissemination

All participants provide electronic consent to participate in outcome measures via return email, and signed consent for process evaluation data collection. All participating agencies and individuals are free to decline to participate in any and all aspects of SPIRIT at any time with no explanation required (example participant information and consent forms in online supplementary appendix 1).

Participating agencies receive feedback on their results at regular intervals during the active phase of the intervention and again at the conclusion of the trial. The findings of this study will be disseminated more broadly through peer-reviewed publications (authorship determined by BMJ guidelines) and presentations at conferences and used to inform future strategies.

Acknowledgments

The authors wish to thank the people and organisations participating in the CIPHER project. CIPHER is a joint project of the Sax Institute; the Australasian Cochrane Centre, Monash University; the University of Newcastle; the University of New South Wales; the University of Technology, Sydney; the Research Unit for Research Utilisation, University of St Andrews and University of Edinburgh; and the University of Western Sydney.

Supplementary materials

  • Supplementary Data


Footnotes

  • Collaborators Writing group: Anna Williamson, Sally Redman, Abby Haynes, Daniel Barker, Louisa Jorm, Sally Green, Fiona Blyth, Nicola Lewis, Anthony Shakeshaft, Catherine D'Este, The CIPHER Investigators. CIPHER team: Chief investigators: Sally Redman, Louisa Jorm, Sally Green, Catherine D'Este, Anthony Shakeshaft, Huw Davies, Jordan Louviere. Associate investigators: Terry Flynn, Mary Haines, Andrew Milat, Denise O'Connor, Sarah Thackway, Fiona Blyth, Stacy Carter. Study team: Anna Williamson, Abby Haynes, Emma Darsana, Catherine McGrath, Steve Makkar, Tari Turner, Nicola Lewis, Danielle Campbell.

  • Contributors Anna Williamson contributed to the design of the intervention, oversaw its implementation and drafted the manuscript. Sally Redman conceived the study, designed the intervention and drafted the manuscript. Abby Haynes designed the process evaluation and drafted the manuscript. Daniel Barker designed the statistical analysis plan and drafted the manuscript. Louisa Jorm conceived the study and contributed to the study design and drafting of the manuscript. Sally Green conceived the study and contributed to the study design. Fiona Blyth contributed to the design of the intervention and drafting of the manuscript. Nicola Lewis and Anthony Shakeshaft contributed to the design of the intervention. Catherine D'Este contributed to the design of the study and its statistical analysis plan. All authors were involved in revising the manuscript critically for important intellectual content and have given final approval to the version to be published.

  • Funding SPIRIT is funded as part of the Centre for Informing Policy in Health with Evidence from Research (CIPHER), an Australian National Health and Medical Research Council Centre for Research Excellence (APP1001436) and administered by the University of Western Sydney. The Sax Institute receives a grant from the NSW Ministry of Health.

  • Competing interests Sally Green is the co-director of the Australasian Cochrane Centre. Anna Williamson holds an NHMRC Public Health Training Fellowship (510 391).

  • Ethics approval The University of Western Sydney Human Research and Ethics Committee (HREC Approval H9870).

  • Provenance and peer review Not commissioned; peer reviewed for ethical and funding approval prior to submission.

  • i For the purposes of SPIRIT, a health policy agency is defined as: A body within a state or federal government department, or a statutory authority, whose focus is to develop policy which has an impact on state-wide or national services and programmes intended to improve individual, family or community health.9