Overdetection in breast cancer screening: development and preliminary evaluation of a decision aid
  1. Jolyn Hersch1,
  2. Jesse Jansen1,
  3. Alexandra Barratt2,
  4. Les Irwig3,
  5. Nehmat Houssami3,
  6. Gemma Jacklyn4,
  7. Hazel Thornton5,
  8. Haryana Dhillon6,
  9. Kirsten McCaffery1
  1. 1Screening & Test Evaluation Program (STEP) and Centre for Medical Psychology & Evidence-based Decision-making (CeMPED), School of Public Health, University of Sydney, Sydney, New South Wales, Australia
  2. 2Centre for Medical Psychology & Evidence-based Decision-making (CeMPED), School of Public Health, University of Sydney, Sydney, New South Wales, Australia
  3. 3Screening & Test Evaluation Program (STEP), School of Public Health, University of Sydney, Sydney, New South Wales, Australia
  4. 4School of Public Health, University of Sydney, Sydney, New South Wales, Australia
  5. 5Department of Health Sciences, University of Leicester, Leicester, UK
  6. 6Centre for Medical Psychology & Evidence-based Decision-making (CeMPED), Central Clinical School, University of Sydney, Sydney, New South Wales, Australia
  1. Correspondence to Dr Kirsten McCaffery; kirsten.mccaffery{at}sydney.edu.au

Abstract

Objective To develop, pilot and refine a decision aid (ahead of a randomised trial evaluation) for women around age 50 facing their initial decision about whether to undergo mammography screening.

Design Two-stage mixed-method pilot study including qualitative interviews (n=15) and a randomised comparison using a quantitative survey (n=34).

Setting New South Wales, Australia.

Participants Women aged 43–59 years with no personal history of breast cancer.

Interventions The decision aid provides evidence-based information about important outcomes of mammography screening over 20 years (breast cancer mortality reduction, overdetection and false positives) compared with no screening. The information is presented in a short booklet for women, combining text and visual formats. A control version produced for the purposes of comparison omits the overdetection-related content.

Outcomes Comprehension of key decision aid content and acceptability of the materials.

Results Most women considered the decision aid clear and helpful and would recommend it to others. Nonetheless, the piloting process raised important issues that we tried to address in iterative revisions. Some participants found it hard to understand overdetection and why it is of concern, while there was often confusion about the distinction between overdetection and false positives. In a screening context, encountering balanced information rather than persuasion appears to be contrary to people's expectations, but women appreciated the opportunity to become better informed.

Conclusions The concept of overdetection is complex and new to the public. This study highlights some key challenges for communicating about this issue. It is important to clarify that overdetection differs from false positives in terms of its more serious consequences (overtreatment and associated harms). Screening decision aids also must clearly explain their purpose of facilitating informed choice. A staged approach to development and piloting of decision aids is recommended to further improve understanding of overdetection and support informed decision-making about screening.

  • PUBLIC HEALTH
  • QUALITATIVE RESEARCH
  • PREVENTIVE MEDICINE

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • The strengths of this project include the staged, mixed-methods approach to developing and evaluating the decision aid, combining both qualitative and quantitative data.

  • The iterative pilot-testing process enabled us to explore women's responses to successive drafts, identify problematic aspects, and revise the materials to clarify misconceptions.

  • Decisions about initial design and subsequent modifications were undertaken by an experienced multidisciplinary team with input from laypersons and independent experts.

  • Some participants in stage 1 pilot interviews already had breast screening experience, thus differing from our ultimate intended audience, and this may have affected their responses. Stage 2 participants were members of the target population facing real-life decisions.

Introduction

Recent changes to international policy and practice have sought to promote greater involvement of patients and citizens in healthcare decision-making.1–3 It is argued that, just as patients may choose between treatment options, people offered medical screening should have the opportunity to make informed decisions about whether to participate.4 ,5 Supporting informed choice about screening requires clear, balanced information on benefits and harms,6 ,7 as reflected in new approaches to screening information provision.8 One way to facilitate informed decision-making is through the use of decision aids—resources designed for patients or citizens facing specific decisions about treatment or screening. Decision aids provide evidence-based information about the benefits and harms of healthcare options, and their capacity to improve users’ knowledge about the options has been demonstrated via randomised trials in a variety of healthcare settings.9

One of the main harms of mammography screening is overdetection (or overdiagnosis) leading to treatment of breast cancers that would not otherwise present clinically or cause problems in a woman's life. Overdetection results in harm to emotional and physical health, in both the short and long term.10 ,11 However, information about overdetection has been lacking in materials distributed by breast screening programmes worldwide.12–14 Furthermore, there is little evidence regarding how best to convey this novel information to the public.

In a qualitative study,15 we examined how women aged 40–79 responded to information about overdetection, exploring its potential influence on decision-making about breast cancer screening and treatment. The study also highlighted challenges in explaining this new and counterintuitive concept, and confirmed that women were participating in screening (or not) without knowing about the risk of overdetection. After our face-to-face explanation, focus group discussions and clarification of queries, most participants demonstrated a reasonable understanding of the issue. Although surprised, women valued the information and felt that it ought to be provided when screening is offered15—findings echoed in a similar UK study.16 This suggests that informed decision-making should be possible for potential screening participants, when they are provided with good information. The challenge remaining was to convert a meaningful explanation of overdetection into a written format and test whether it could convey the information successfully in a real-life decision-making setting. This is particularly important because in Australia, among other countries, women interact directly with a screening service, often bypassing any discussion with a healthcare provider.

In the present study, we developed a decision aid for women facing their initial decision about participation in mammography screening. The information presented includes the main benefit and harms of screening (breast cancer mortality reduction, false positives and overdetection). The goal was to produce materials that we could then use in a randomised trial to assess whether information on overdetection makes a difference to women's views and decisions about screening,17 with the potential for future adaptation into a resource suitable for distribution within organised screening programmes. This paper describes the development and preliminary evaluation of the decision aid.

Methods

Overview of decision aid development and evaluation

Figure 1 depicts the stages of this project. Stage 1 included the design of a decision aid informed by our focus group study,15 previous decision aid work18 ,19 and other relevant literature, followed by an iterative piloting and revision process involving user testing and expert feedback. Then we created a control decision aid omitting the overdetection content. In stage 2, the materials underwent preliminary evaluation using a telephone questionnaire and were subsequently revised to produce final versions. Stage 3 is a randomised trial comparing the two decision aids.17 This paper reports stages 1 and 2.

Figure 1

Flow chart of decision aid development and evaluation process.

Project team

Decision aid design and revisions involved a multidisciplinary team with expertise in the clinical, psychosocial and epidemiological aspects of breast screening and experience in developing tools to support health decision-making. The team incorporates lay perspectives from a health consumer organisation representative (similar to our target audience in age and gender) and an experienced independent citizen advocate. We worked with a graphic designer to produce the booklets.

Evidence base for quantitative outcome information

The evidence underpinning the decision aid content comes from an updated version (manuscript in preparation) of a published model of breast screening outcomes for women in Australia.20 The model incorporates estimates of the breast cancer mortality reduction from screening and of overdetection. Estimates were derived from a meta-analysis of effects found in randomised trials,6 adjusted to reflect the impact of attending screening regularly (not just being invited).21 These were applied to current Australian incidence and mortality data to quantify cumulative outcomes of biennial screening from age 50 to 69 versus no screening over this period. The 20-year cumulative likelihood of a false positive result was modelled from current Australian breast screening data.
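
As a purely illustrative sketch (not the published model), the code below shows one way cumulative outcomes per 1000 women over a 20-year period might be tallied from a set of input rates. Every rate and proportion used here is a hypothetical placeholder, not an estimate from the model described above.

# Illustrative sketch only: every value below is a hypothetical placeholder,
# not an estimate from the Australian screening outcomes model.

HYPOTHETICAL_ANNUAL_BC_DEATHS_PER_1000 = 0.25   # assumed breast cancer deaths per 1000 women per year
HYPOTHETICAL_ANNUAL_DIAGNOSES_PER_1000 = 2.5    # assumed breast cancer diagnoses per 1000 screened women per year
HYPOTHETICAL_MORTALITY_REDUCTION = 0.20         # assumed relative mortality reduction with regular screening
HYPOTHETICAL_OVERDETECTION_FRACTION = 0.15      # assumed share of diagnoses in screened women that are overdetected
YEARS = 20

def cumulative_outcomes_per_1000():
    """Tally illustrative cumulative outcomes of screening vs no screening over 20 years."""
    deaths_without_screening = HYPOTHETICAL_ANNUAL_BC_DEATHS_PER_1000 * YEARS
    deaths_with_screening = deaths_without_screening * (1 - HYPOTHETICAL_MORTALITY_REDUCTION)
    diagnoses_with_screening = HYPOTHETICAL_ANNUAL_DIAGNOSES_PER_1000 * YEARS
    overdetected_cancers = diagnoses_with_screening * HYPOTHETICAL_OVERDETECTION_FRACTION
    return {
        "breast cancer deaths, no screening": round(deaths_without_screening, 1),
        "breast cancer deaths, screening": round(deaths_with_screening, 1),
        "deaths avoided by screening": round(deaths_without_screening - deaths_with_screening, 1),
        "overdetected cancers, screening": round(overdetected_cancers, 1),
    }

print(cumulative_outcomes_per_1000())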

Key design features

Offering choice

Unlike conventional screening materials encouraging uptake,5 ,12 the decision aid is framed as a resource providing information to support women in choosing whether to have screening or not.

Communicating outcome probabilities using visual formats

Quantitative screening outcome information is stated transparently using absolute frequencies with a clearly specified reference class.22 The expected frequency of each outcome is illustrated by an icon array—a visual graphic display representing the numerator and denominator together via differently coloured filled circles arranged in a matrix. As recommended by the International Patient Decision Aids Standards,23 icon arrays are formatted consistently and share a common reference class: 1000 women screened for 20 years. A summary table concludes the decision aid, bringing together key information already presented to facilitate comparison between the options (screening vs not screening) in terms of the numbers of women dying from breast cancer and experiencing screening harms. Such summaries are generally well used and well liked features of decision tools.24
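
To make the icon array format concrete, the short sketch below (our illustration, not part of the decision aid itself) renders a text-based array over the 1000-woman reference class; the example count of 6 affected women is an arbitrary placeholder.

# Sketch of a text-based icon array over a reference class of 1000 women.
# Filled circles represent women experiencing the outcome; open circles the rest.

def icon_array(affected, total=1000, per_row=50, affected_icon="●", other_icon="○"):
    """Return rows of icons with `affected` filled markers out of `total`."""
    icons = [affected_icon] * affected + [other_icon] * (total - affected)
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, total, per_row)]
    return "\n".join(rows)

# Example: 6 women per 1000 experiencing a given outcome (placeholder number).
print(icon_array(affected=6))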

Plain language

We followed suggestions for making information easy to understand across literacy levels.25 A Flesch-Kincaid grade level of 7.85 indicates that the booklet is suitable for readers at a seventh to eighth grade reading level. A glossary defines medical terms, and earlier findings guided word choice—for example, we use the term overdetection because focus groups showed that overdiagnosis may be confused with misdiagnosis.15
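
For reference, the Flesch-Kincaid grade level is a standard readability formula based on average sentence length and average syllables per word:

\text{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59

A value of 7.85 therefore corresponds to text readable by an average student in the seventh to eighth grade.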

Communicating the novel concept of overdetection

As the concept of overdetection is expected to be new to most readers, we created a simple conceptual illustration based on a slide that helped our focus group participants.15 It depicts two alternative scenarios that could happen to a hypothetical woman with asymptomatic breast cancer: one with screening (and consequent cancer diagnosis and treatment); and one without screening. In both scenarios, the woman lives to age 85 and dies of heart disease. This is intended to help readers understand how screening can lead to overdetection of cancers that would never cause harm.

A question-and-answer section describes evidence for overdetection and how and why breast cancer is treated, and addresses potential misunderstandings that the novel information could raise.15

Stage 1 interviews

Participants

Stage 1 involved 15 participants. Six women were recruited by convenience sampling among our contacts; they were not familiar with the study but were friends, relatives or partners of the project team or of colleagues. Another nine women were from a database of potential research participants originally identified through random sampling of Sydney telephone numbers as part of the recruitment for our previous study.15 These women had expressed interest in participating in our research should a suitable opportunity arise, but were unable to join the scheduled focus group sessions. We obtained ethics approval to recontact them and invite them to take part in decision aid piloting.

Table 1 shows the stage 1 sample characteristics. All women spoke English at home, none had a personal history of breast cancer, and about half had been screened.

Table 1

Stage 1 participant characteristics (n=15)

Procedure

JH conducted audiorecorded interviews (35–50 min) between February and October 2013. Participants were sent the draft decision aid to read beforehand. Interviews were conducted face to face (n=13) at women's homes or at the university, or by telephone (n=2). The semistructured interviews focused on a set of purpose-designed questions to assess comprehension of key content and preferences regarding presentation. We incorporated a standard teach-back technique, asking women to describe in their own words what selected parts of the booklet were trying to communicate. The interviewer noted women's responses and raised problematic aspects in team discussions where successive modifications were considered.

Expert review

We sought feedback on the draft decision aid from two independent experts not involved in the project: a communication expert (researcher and journalist) and a clinical expert (oncologist and clinical epidemiologist). The first review emphasised the importance of being ‘upfront’ about uncertainty in the quantitative information. We had included this acknowledgement at the end of the booklet but subsequently moved it to the introduction: ‘The numbers presented are the best available estimates based on the latest research. They may need to be reviewed in the future when new information becomes available.’ The second review highlighted that some icon arrays presented outcome categories that were subsets of others (eg, false positives leading to biopsies vs all false positives) and this was not always clear. We revised the diagrams to improve clarity and balance. For example, where we had already presented (i) the total number of breast cancers diagnosed and (ii) the number within that total which represented overdetection, we then added to the text (iii) the complementary number of cases that were not instances of overdetection.

Intervention and control versions of decision aid

Table 2 shows the content of the final decision aids (at the end of stage 2). The control version was created at the end of stage 1 by deleting all overdetection-related material (two pages) from the intervention decision aid. The sections on benefit and false positives remained identical across versions in content and format. The booklets were printed in B5 size (176×250 mm).

Table 2

Content of final decision aids, with italics for items found only in intervention (Int.)

Stage 2 interviews

In stage 2, 34 additional women were interviewed about the revised decision aids. This took place within a pilot study conducted between October and December 2013 to test the feasibility of recruitment, randomisation and data collection procedures ahead of the randomised trial (stage 3). Procedures are described in detail elsewhere17 and outlined briefly below.

Participants

We recruited a community sample of women facing real decisions. A random selection of women aged 48–49 (ie, approaching age 50, when Australian women are routinely invited to breast screening) was extracted from the New South Wales state electoral register. We sent a database of names and telephone numbers to the Hunter Valley Research Foundation (HVRF), an independent non-profit organisation. HVRF interviewers telephoned women, invited those eligible to participate, and obtained oral consent. The interviewers were not aware of the randomisation sequence. Exclusion criteria were: personal history of breast cancer; increased risk of breast cancer; any mammogram in the past 2 years; or insufficient fluency in English.

Table 3 shows stage 2 sample characteristics. Although 36 women were randomised, 2 (6%) were lost to follow-up—one in each arm.

Table 3

Stage 2 participant characteristics (n=34)

Procedure

Using a computer random number generator, participants were randomised to be sent either the intervention or control decision aid by post. Participants had been told that they would receive one of two versions of the booklet, but they were not aware of how the versions differed or which was the intervention arm. Around 3 weeks later, a trained HVRF interviewer conducted a structured telephone interview (15–20 min) measuring decision aid acceptability using rating scales,19 ,26 knowledge using items adapted from previous work,15 ,26 and other trial outcomes17 that are beyond the scope of this paper.
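
As a minimal sketch only (the trial's actual allocation procedure may have used blocking or other restrictions not described here), simple computer-generated randomisation of this kind can be expressed as:

# Minimal sketch of simple (unrestricted) randomisation with a computer random
# number generator; the trial's actual allocation procedure may have differed.
import random

def allocate(participant_ids, seed=None):
    """Randomly assign each participant to the intervention or control booklet."""
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

# Example: 36 participants, as randomised in stage 2 (seed chosen arbitrarily).
print(allocate(range(1, 37), seed=2013))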

Results

Communication issues and corresponding revisions

The stage 1 (qualitative) and 2 (quantitative) interviews together highlighted several important challenges in the communication of information about unfamiliar aspects of screening—specifically, the risk of overdetection and, more broadly, the possibility of harm and relevance of an informed choice approach. We modified the decision aid drafts to address these issues, as outlined in table 4 and detailed below.

Table 4

Key issues identified during the piloting process, with corresponding revisions made

Lack of familiarity with screening information framed around choice

The stage 1 interviews showed women were unfamiliar with screening participation being explicitly framed as an option they could choose to either take up or not—as one woman noted: “People aren't used to being given information to make a balanced choice, you know. We are used to being given: go and do this, it's good for you… But to be treated like someone with a free will and being given the possession of the facts, we don't get treated like that very often in public life. So… we're being treated like grownups for once.”

Therefore, the underlying purpose—to encourage a personal decision—needed to be made explicit. As this issue had arisen with previous screening decision aids,19 ,27 our initial draft tried to address it by posing a question in the title (‘Should I…?’) and stating in the opening paragraph that the ‘booklet is designed to help you make an informed choice about whether you would prefer to have screening or not.’ However, findings from the first few interviews led us to strengthen this by adding ‘there is no right or wrong answer about whether to have breast screening. It is a matter of what you believe is the right choice for you.’ We also renamed the booklet ‘Breast cancer screening: It's your choice’ and modified the subtitle to include ‘information to help women make a decision’ rather than ‘information to consider’. These aspects were identical for the control version.

Overdetection not understood as a harm

An obvious factor contributing to women's confusion about why the booklet presents screening as an intervention with pros and cons was their lack of prior awareness of overdetection. Encountering what one woman described as “quite a complicated idea” for the first time, some women appeared to find it difficult to grasp whether and why overdetection may be considered a negative outcome. A stage 1 interviewee said, “I just don't know if overdetection is seen as being a problem” and “I don't understand the anxiety about overdetection, and why it's being flagged as an issue”. Another stated that “overdetection is not necessarily a harm or a bad thing”, while a third said, “the overdetection part didn't really make me feel that uncomfortable because I'm the sort of person, I think, that would rather know and have it treated”. Although the initial draft had mentioned various breast cancer treatment modalities and acknowledged that these involve side effects, we expanded this into a new section providing a short description of the mechanism of each treatment and several common side effects. This aimed to help women better understand the implications of being overdiagnosed and the likely course of treatment that may follow a diagnosis.

Confusion about distinction between overdetection and false positive screening results

In the stages 1 and 2 interviews, some women showed confusion regarding the concepts of overdetection and false positives—for example, “What's the difference between—so, the overdetection is the false positive? Is it the same thing? …That's the confusion that I've had… I didn't quite understand that from reading that… I think I was just assuming it was all the same thing.” Although the initial draft already had these two outcomes presented under separate headings and listed on separate lines in the summary table, we revised the decision aid within stage 1 to try and clarify this point by explicitly numbering the outcomes in the section headings and summary table. We also added a statement to the introduction—“There are 3 important things to know: …”—briefly listing as 1, 2 and 3 the outcomes to be covered in the booklet (ie, breast cancer mortality benefit, false positives and overdetection).

Despite these efforts, the stage 2 interviews demonstrated the persistence of some confusion between the concepts, leading us to take several additional steps to further clarify our presentation of this information. First, to draw attention to the aforementioned ‘3 important things’ statement, we put a box around it. Second, we tried to encourage attention to the overdetection content by flagging it as ‘new’ information. Third, we moved this section to an earlier position in the booklet, ahead of the false positives section. Fourth, we made minor modifications to the text explaining the two concepts, including slight wording changes and use of bold font to emphasise key phrases (eg, in false positives ‘there is no cancer’). Finally, we added a new item to the question and answer section to explicitly address this point: ‘How is overdetection different from false positives?’

Relationship between risk of overdetection and chance of benefit not well understood

The stage 2 interviews included questions to assess whether women had understood the key facts. One such point related to which outcome would affect more women—overdetection or avoidance of a breast cancer death. This was asked in a ‘true or false’ format, which a majority of respondents answered incorrectly. In the light of this, we added a new box following a presentation of the benefit and overdetection information, entitled ‘Putting it together’. Here we restated for both outcomes the absolute numbers per 1000 women screened over 20 years, noting explicitly ‘that means more women experience overdetection than avoid dying from breast cancer.’

Communication of new and complex material

The piloting process highlighted the challenge readers faced to absorb what one woman called “quite complicated information”. Another remarked, “I had to read it a few times… It was quite clear when I went back to it. But initially it was quite overwhelming.” To improve overall ease of reading, in the final revision we increased the font size and spaced out the intervention content over 11 pages rather than the original 8. We modified the control version accordingly (changing from 6 pages to 8).

Booklet acceptability

Stage 1 (qualitative)

Overall, stage 1 participants reacted positively towards the decision aid. Although some aspects were evidently challenging to understand (see previous section), all 15 women said they found most things or everything clear. The graphical presentation of quantitative information was generally liked—for example, “It was really clear, it really explained it well. I'm a visual person. I mean, if there's figures I tend to go blank. But when you actually see that represented by dots it's very easy to understand.” Every participant regarded the decision aid as at least a little helpful, with about half saying it was very helpful. Reading about the downsides of screening—“being told the other half of the picture”—was thought-provoking. As one woman said, “It certainly did make me think… it made me reflect… really this is calling for a decision one way or another, and what will I do when I'm 50”. Nonetheless, many of the women expressed appreciation for the opportunity to become better informed, and all but one would recommend the resource to others facing the decision—for example, “because it's got information that people need to know… I wish that I'd had that information when I was turning 50… it would have been good to know that, at the start. To be prepared and to have that understanding, so I think it's really good that this is out there.”

Stage 2 (quantitative)

Table 5 presents quantitative acceptability data from stage 2 interviews on the intervention and control decision aids. The majority of participants (76%) considered their booklet just about right in length, with the remainder tending to say it was a little too long, and most (76%) would recommend the decision aid to others. Women found both booklets clear and easy to understand (intervention 81%, control 94%) and helpful in making their screening decision (75% and 61%, respectively). Responses to the question about how much of the information was new to the reader largely fell into the categories ‘some’ and ‘most’, with group differences apparently reflecting the additional new content (ie, overdetection) in the intervention booklet. A question about whether the booklet was balanced or slanted towards or away from screening elicited a range of responses in both groups (see table 5).

Table 5

Acceptability of intervention and control decision aids (stage 2)

Discussion

This paper describes the development and preliminary evaluation of a decision aid designed to support women to make informed decisions about breast screening. The piloting process described here enabled us to explore responses to successive drafts of the materials among our intended audience. User testing with women approaching the age of invitation to screening showed that the decision aid could be read in a reasonable time (10–15 min) and was generally received positively. Most of the qualitatively interviewed women liked the graphical presentation style used for the numerical information, and considered it helpful to be able to “visually scan the information”. As overdetection was the most novel element of the content, our particular interest in this study was examining whether women could comprehend this information and exploring how they reacted to the presentation of screening downsides. The purpose of undertaking the two stages of piloting prior to more formal evaluation17 was to identify where there was room for improvement and to modify our materials accordingly.

We found that the main conceptual point of confusion around overdetection related to understanding how it is distinct from false positive screening results. While both outcomes represent harms of screening, overdetection has more serious implications for those affected. By adding an item to the question and answer section (‘How is overdetection different from false positives?’), we have acknowledged that there is potential for confusion and provided a concise statement underscoring where the contrast lies. This leads to a question about how breast cancer is treated (also an addition after initial piloting), aiming to draw the reader's attention to the consequences of overdetection by highlighting some of the common side effects of the main treatment modalities. As in our focus group study,15 for some qualitative interview participants it was not clear why overdetection would be considered a negative outcome of screening, whereas others who had more experience with cancer treatment (albeit indirectly) grasped this more readily. This reinforces the importance of decision aids including some description of what it may be like to experience the consequences of choosing particular options, which may help a reader clarify her values.28

In terms of the magnitude of overdetection, we consider it important for readers to understand the ‘bottom line’ that overdetection occurs more frequently than prevention of death from breast cancer. However, a ‘true or false’ knowledge item about this was answered poorly in stage 2. As the different outcomes were shown on separate icon arrays, it may have been difficult for readers to connect and compare the benefit and overdetection figures. In the revised intervention, we tried to make this relationship clearer by reinforcing the visual depictions with an added short text box, thus giving the reader key information in two complementary ways. The final version also has the icon arrays for benefit and overdetection on facing pages, which may make the comparison more salient.

It was also evident from the qualitative interviews that the decision aids—with their neutral presentation of benefit and harm information and framing of a choice between screening and not screening—did not match readers’ expectations for screening messages, which are typically persuasive in tone and intent. Similar issues have been reported in previous research on informed choice in screening.16 ,27 ,29 ,30 This underscores the need for screening decision aids to start by clearly explaining their purpose and why there is a decision to make, as ours did.

The strengths of this project include the rigorous staged approach to developing and evaluating the intervention. Our initial design built on a comprehensive qualitative study that explored responses to overdetection in 50 women15 together with our previous experience in producing and trialling cancer screening decision aids.18 ,26 ,31 We used an iterative process of pilot-testing, combining both qualitative and quantitative data, and revised our materials successively according to the findings. Such an approach is recommended19 ,25 ,32 as it facilitates a thorough exploration of problematic aspects and careful testing of potential solutions. Decisions about initial design and subsequent revisions were undertaken in consultation with an experienced multidisciplinary research team, incorporating input from laypersons as well as independent experts. Possible limitations are the inclusion of some women recruited via convenience sampling (n=6) and the fact that stage 1 participants were somewhat varied in age and screening history compared with the specific target population for our decision aids. However, for the further evaluation in stage 2 (n=34), we recruited directly from our target population, and these women read our booklets within a real-life decision-making setting.

We have produced these decision aids for the purposes of a population-based randomised controlled trial (stage 3) examining how information about overdetection affects women's decision-making about breast screening.17 Trial participants will receive one of our decision aids in addition to other information materials in current use locally.33 As such, we have not included practical information such as the procedural aspects of having a mammogram, which would need to be added in order to produce a stand-alone resource. Although our current focus is on introducing to women the novel concept of overdetection and overtreatment, as public understanding increases over time, future decision aid developers might consider also trying to address the difficult issues of how screening may affect the extent of treatment women receive and the risk of dying from all causes. Our decision aids have been designed to be accessible to people with an average level of reading ability, and further work would be required to adapt the materials to ensure that they are suitable for lower-literacy groups and culturally diverse populations. Ultimately, this work will help address the increasingly recognised responsibility for cancer screening services to provide evidence-based benefit and harm information to people in a clear, transparent way.5–7 ,34

Implications and conclusions

The concept of overdetection is complex and new to the public, and people may find the issue hard to understand. In our efforts to communicate with women about overdetection in breast screening, we have found it important to make clear why overdetection may be considered a concern by explaining the associated consequences in terms of unnecessary treatments that can cause harm. Related to this is the need to differentiate very clearly between overdetection and false positives, which we have identified as a common source of confusion. Encountering balanced information about screening rather than a persuasive message is contrary to people's expectations. Results of the decision aid trial that is currently underway17 will indicate whether we have succeeded in overcoming these challenges and communicating effectively about overdetection.

Acknowledgments

The authors thank Kirsten Howard for her work on the modelling of screening outcomes, Kevin McGeechan and Jenn Kidd for their important contributions to the decision aid piloting and revision process, Ray Moynihan and Martin Stockler for helpful comments on the draft decision aid, Katharine Morgan for graphic design services, and Hunter Valley Research Foundation for recruitment and interviewing services. We are very grateful to all study participants for their time and invaluable feedback.

References

Footnotes

  • Contributors KM, JH, JJ, AB and LI developed the original concept of this study. JH drafted the decision aid prototype with KM and JJ. GJ updated the screening outcomes model with AB, JH and LI. All authors contributed to discussions about the decision aid design and iterative revisions. JH coordinated the piloting and revision process and conducted the stage 1 interviews. KM, AB, JJ, NH and HD obtained funding. JH drafted the manuscript; all other authors were involved in the editing of the manuscript.

  • Funding This work was supported by the National Health and Medical Research Council of Australia in the form of a project grant (no. 1062389), a program grant to the Screening and Test Evaluation Program (no. 633003), a Career Development Fellowship awarded to Kirsten McCaffery (no. 1029241), and an Early Career Fellowship awarded to Jesse Jansen (no. 1037028).

  • Competing interests None.

  • Ethics approval The University of Sydney Human Research Ethics Committee approved the study (project no. 2012/1429).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.