Objectives To make informed decisions about healthcare, patients and the public, health professionals and policymakers need information about the effects of interventions. People need information that is based on the best available evidence; that is presented in a complete and unbiased way; and that is relevant, trustworthy and easy to use and to understand. The aim of this paper is to provide guidance and a checklist to those producing and communicating evidence-based information about the effects of interventions intended to inform decisions about healthcare.
Design To inform the development of this checklist, we identified research relevant to communicating evidence-based information about the effects of interventions. We used an iterative, informal consensus process to synthesise our recommendations. We began by discussing and agreeing on some initial recommendations, based on our own experience and research over the past 20–30 years. Subsequent revisions were informed by the literature we examined and feedback. We also compared our recommendations to those made by others. We sought structured feedback from people with relevant expertise, including people who prepare and use information about the effects of interventions for the public, health professionals or policymakers.
Results We produced a checklist with 10 recommendations. Three recommendations focus on making it easy to quickly determine the relevance of the information and find the key messages. Five recommendations are about helping the reader understand the size of effects and how sure we are about those estimates. Two recommendations are about helping the reader put information about intervention effects in context and understand if and why the information is trustworthy.
Conclusions These 10 recommendations summarise lessons we have learnt developing and evaluating ways of helping people to make well-informed decisions by making research evidence more understandable and useful for them. We welcome feedback on how to improve our advice.
- health policy
- quality in health care
- medical journalism
- public health
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
Our approach to preparing this checklist has been pragmatic in terms of the methods we have used.
We have provided explanations of the basis for each recommendation and references to supporting research.
We did not conduct a systematic review to inform our guidance.
We did not review non-English language literature.
We did not systematically grade the certainty of the evidence or strength of our recommendations.
Access to healthcare information is necessary if people are to be involved in decisions regarding their own health.1 Recognising this, governments in several countries have included the right to healthcare information in patients’ charters. These charters commonly establish people’s right to access information about treatments,2 including the benefits and harms of these treatments.3 Patients’ charters also underline the need to provide this information in a way that people can understand and that is adapted to each individual’s needs.2 4
Having the right to information does not necessarily mean that this information is available, and many patients and members of the public struggle to find information that is relevant to their circumstances. At the same time, most people are bombarded with claims in the media and other aspects of day-to-day life about what they should and should not do to maintain or improve their health.
Many health claims are unreliable and conflicting.5–14 When unreliable claims are purported to be based on research, this may also contribute to a lack of trust in research itself. For example, surveys in the UK have shown that only about one-third of the public trust evidence from medical research, while about two-thirds trust the experiences of friends and family.15
It cannot therefore be assumed that people will trust advice simply because it is based on research evidence and given by authorities. Nor should they, as the opinions of experts or authorities do not alone provide a reliable basis for judging the benefits and harms of interventions.16 17 Doctors, researchers and public health authorities—like anyone else—often disagree about the effects of interventions. This may be because their opinions are not always based on systematic reviews of fair comparisons of interventions.18 Government authorities and professional organisations host many websites that provide health advice to the public. However, these websites often provide information that is unclear, incomplete and misleading.11 We were able to find only three websites that provide information about the effects of healthcare interventions that were explicitly based on systematic reviews.19 Even where information is based on systematic reviews, it may still be unclear, incomplete and misleading.
People who summarise lengthy research reports to make them more accessible are faced with many choices. This includes decisions about which evidence to present, how this evidence should be interpreted, and the format in which it should be presented. Our own experiences creating summaries based on Cochrane reviews have shown us that there are many pitfalls.20–25 A fundamental challenge is to find an appropriate balance between accuracy and simplicity. On the one hand, summaries should give a reasonably complete, nuanced and unbiased representation of the evidence. On the other hand, they should be succinct and understandable to people without research expertise.
Another challenge to making research evidence easier to use is that people with expertise in a field have been found to attend to, read and interpret information differently from people without expertise.26 A common publishing strategy is to accommodate these differences by creating different versions of information for experts and non-experts; for example, for health professionals and for patients. However, both health professionals and patients frequently lack research expertise.22 26–29 In terms of understanding evidence-based information about the effects of treatments, ‘experts’ are the people who have acquired the skills needed to understand and interpret results from quantitative studies and systematic reviews. Everybody else could be considered ‘non-experts’ in this area.
This does not mean that this large group of non-experts are universally similar regarding their information needs. They may have different levels of language literacy, health literacy and numeracy, or they may need to use evidence for different kinds of decision-making tasks. However, when it comes to the specific task of understanding research evidence and using this information to weigh the trade-offs between possible benefits and harms, most users are non-experts. Consequently, most people would benefit from information about the effects of interventions that is presented in a way that recognises the needs of non-experts. This includes patients, health professionals and policymakers.
In summary, to make informed choices or decisions, people need information that is accessible, easy to find, relevant, based on the best available evidence, accurate, complete, not misleading, nuanced, unbiased, easy to understand and trustworthy.
The aim of this paper is to provide guidance and a checklist to anyone who is preparing and communicating evidence-based information on the effects of interventions (ie, information based on systematic reviews of fair comparisons) that is intended to inform decisions by patients and the public, health professionals or policymakers.
Development of this checklist was guided by ethical considerations underlying informed consent and patients’ rights. Informed consent in medical research has received a huge amount of attention.30 Informed consent in clinical and public health practice has received far less attention,31 and a double standard has existed for at least 50 years.32 Consent in clinical and public health practice is reviewed, if at all, only in retrospect. Health professionals are exhorted to obtain informed consent, but in daily practice, as opposed to in clinical trials, they often minimise uncertainties about interventions and they may feel duty bound to provide unequivocal recommendations.32
Our starting point in preparing this checklist was the belief that patients and the public have the right to be informed when making health choices—such as a personal choice about whether to adhere to advice, a decision about whether to participate in research, or taking a position on a health policy. Specifically, they should have access to the best available research evidence, including information about uncertainty, summarised in plain language. We do not assume that everyone wants this information.
Many people are not interested or prefer for someone else to make healthcare decisions on their behalf. For example, a systematic review of patient preferences for decision roles found that a substantial portion of patients prefer to delegate decision-making to their physician, although in most studies most patients reported a preference for shared decision-making.33 Some patients’ rights charters take this into account—for instance, the right to waive one’s ‘right to be informed’ is specifically mentioned in the Norwegian Patient Rights legislation.4 We would argue that under most circumstances it is good clinical practice to respect patient preferences.31 Those people who do not want information on the effects of treatments do not need to read or listen to it, but it should be there for those who want it.
To inform the development of this checklist, we compiled research evidence that is relevant to giving guidance on how to communicate evidence-based information about the effects of interventions. We started with our own research and then identified related research through snowballing and citation searching. We supplemented this with broad searches for evidence on communicating research evidence and intervention effects and specific searches for each item in the checklist. We did not conduct a systematic review. We have, however, referenced systematic reviews to support each item in the checklist when one was available. When we were not able to find a relevant systematic review, we have referenced the best available evidence that we have found. In addition, we have reviewed relevant guidance and reference lists. This included guidance for plain language summaries of research evidence,34 for reporting and using systematic reviews,35 36 for making judgements about the certainty of evidence and for going from evidence to recommendations37–39 and for risk communication.40
We used an iterative, informal consensus process to synthesise our recommendations. This was informed by our own experience and research spanning over three decades, our review of the literature, comparison of our recommendations with other relevant guidance, and feedback from colleagues. We met initially to discuss our recommendations, divided up tasks, prepared drafts and then discussed these until we reached agreement on a final set of recommendations. In addition to the checklist summarising our main recommendations, we prepared a flowchart providing guidance for implementing our recommendations. After agreeing on a set of recommendations, we compared these to recommendations made by others, sent a draft report to 40 people requesting structured feedback (online supplementary additional file 1), and received feedback from 30 (see Acknowledgements section).
Patient and public involvement
We did not directly involve patients in planning or executing this study.
Our recommendations are summarised in a checklist with 10 items (box 1). The basis for each recommendation is provided in online supplementary additional file 2 and explanations for each of the recommendations are provided in online supplementary additional file 3. All of our recommendations could be considered ‘good practice statements’. Good practice statements are recommendations that do not warrant formal ratings of the certainty of the evidence.41 One way of recognising such recommendations is to ask whether the unstated alternative is absurd.41 Arguably, that is the case for all the recommendations in box 1.
Checklist for communicating effects
Make it easy for your target audience to quickly determine the relevance of the information, and to find the key messages.
Clearly state the problem and the options (interventions) that you address, using language that is familiar to your target audience—so that people can determine whether the information is relevant to them.
Present key messages up front, using language that is appropriate for your audience, and make it easy for those who are interested to dig deeper and find information that is more detailed.
Report the most important benefits and harms, including outcomes for which no evidence was found—so that there is no ambiguity about what was found for each outcome that was considered.
For each outcome, help your target audience to understand the size of the effect and how sure we can be about that; and avoid presentations that are misleading.
Explicitly assess and report the certainty of the evidence.
Use language and numerical formats that are consistent and easy to understand.
Present both numbers and words, and consider using tables to summarise benefits and harms, for instance, using Grading of Recommendations Assessment, Development and Evaluation (GRADE) summary of findings tables or similar tables.
Report absolute effects.
Avoid misleading presentations and interpretations of effects.
Help your audience to avoid misinterpreting continuous outcome measures.
Explicitly assess and report the credibility of subgroup effects.
Avoid confusing ‘statistically significant’ with ‘important’ or a ‘lack of evidence’ with a ‘lack of effect’.
Help your target audience to put information about the effects of interventions in context and to understand why the information is trustworthy.
Provide relevant background information, help people weigh the advantages against the disadvantages of interventions and provide a sufficient description of the interventions.
Tell your audience how the information was prepared, what it is based on, the last search date, who prepared it and whether the people who prepared the information had conflicts of interest.
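The recommendations to report absolute effects and to avoid misleading presentations can be illustrated with a small worked example. The event rates below are hypothetical, chosen only to show the arithmetic; they are not drawn from the paper or any cited review.

```python
# Hypothetical illustration: why absolute effects matter alongside relative effects.
# Assumed (made-up) data: 20 of 1000 people in the control group experience the
# outcome, versus 15 of 1000 in the intervention group.

control_risk = 20 / 1000       # 2.0% baseline risk
intervention_risk = 15 / 1000  # 1.5% risk with the intervention

# Relative risk reduction: the proportional decrease from the baseline risk.
relative_risk_reduction = (control_risk - intervention_risk) / control_risk

# Absolute risk reduction: the actual difference in risk between the groups.
absolute_risk_reduction = control_risk - intervention_risk

# A "25% relative reduction" sounds large, but the absolute effect is only
# 5 fewer events per 1000 people - readers judge these framings very differently.
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction * 1000:.0f} per 1000 people")
```

Reporting only the relative figure is one of the misleading presentations the checklist warns against; pairing the absolute number with plain words (for example, "5 fewer per 1000 people") addresses several of the items above at once.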
The flowchart (figure 1) outlines a process for producing evidence-based information about the effects of interventions. It provides examples that illustrate each step of the process.42–46 The process begins with making sure that you know your target audience. It is important to consider how members of your target audience will be involved in the process. The next steps are designing and user testing a template for the information that you will prepare, organising an editorial process and training, and considering ways of making it easy for your target audience to find your information. Although the flowchart suggests a linear process, development should be approached as an iterative, cyclical process. The last step in figure 1 is to collect feedback on each individual piece of information from people in your target audience; to make changes if needed (to your template as well as to individual pieces of information); and to evaluate again, if needed. It also includes establishing routines for updating the information that you prepare, if this is planned.
How our checklist compares with related checklists and guidance
Although our guidance overlaps with other guidance,38 47–54 for the most part other guidance does not specifically address preparation of evidence-based information for decision-makers about the effects of interventions. The one exception of which we are aware is the ‘guideline for evidence-based health information’ prepared by the German Network for Evidence-based Medicine (DNEbM),55 which is only partially translated into English as of April 2020. The DNEbM recommendations are consistent with our recommendations to present both numbers and words and to report absolute effects. They do not explicitly address our other recommendations. Comparison of our guidance with other guidance is summarised in table 1.
The Ensuring Quality Information for Patients (EQIP) tool49 and the International Patient Decision Aid Standards (IPDAS) checklist51 52 include specific recommendations related to using plain language (short sentences and a reading level not exceeding a reading age of 12). We have included key principles for plain language in our detailed guidance (online supplementary additional file 3).
The EQIP tool,49 the IPDAS checklist51 52 as well as a systematic review on evidence-based risk communication by Zipkin and colleagues50 recommend using visual aids. The last two recommend using graphs to show probabilities. We agree that information for people making decisions about interventions should be visually appealing and that well-designed visualisations can help some people to understand information about the effects of interventions. The DNEbM guidelines55 recommend that ‘graphics may be used to supplement numerical presentations in texts or tables’ based on ‘low-quality’ evidence. They also recommend that ‘if graphics are used as a supplement, then either pictograms or bar charts should be used’ based on ‘moderate-quality’ evidence. Spiegelhalter53 recommends visualisations in communication about risk and uncertainty, which seems sensible. However, we do not think there currently is enough evidence to support recommendations about when to use visualisations or what type of visualisation to use.50 53 56 57
The systematic review on evidence-based risk communication50 suggests being aware that positive framing (stating benefits rather than harms) increases acceptance of therapies. The IPDAS checklist52 53 recommends presenting probabilities using both positive and negative frames (eg, showing both survival and death rates). We do not think there currently is enough evidence for either of these recommendations.58
Zipkin and colleagues50 suggest placing a patient’s risk in context by using comparative risks of other events. We do not think there is currently enough evidence to support this recommendation and question its relevance for many decisions about interventions.
The DNEbM guidelines55 suggest ‘interactive elements may be used in health information’ based on ‘moderate-quality’ evidence. Similarly, the IPDAS checklist51 52 recommends allowing patients to select a way of viewing the probabilities (eg, words, numbers, diagrams). We agree this is sensible and, in previous work, we have designed an interactive summary of findings with this in mind.45 However, there is limited evidence to support this recommendation. We attempted to test this hypothesis in a randomised trial.59 Because of technical problems (the interactive summary of findings and data collection did not work for some participants), we were not able to complete the trial. The qualitative data that we collected suggested that participants (people in Scotland with an interest in participating in randomised trials of interventions60) had mixed views about their preferences for an interactive versus a static presentation. They also had mixed views regarding which initial presentation they preferred in the interactive presentation.
Finally, the DNEbM guidelines conclude that ‘narratives cannot be recommended’ based on ‘low-quality’ evidence. In contrast, the IPDAS checklist51 52 recommends including stories of other patients’ experiences and using audio and video to help users understand the information. We agree that this may be helpful. However, it is also possible that stories that specifically describe patients’ experiences of treatment effects and side effects can have unintended consequences. For example, people’s perceptions of their own risks of experiencing a benefit or harm could be influenced by whether they identify with the person telling the story or not. We are not aware of evidence from randomised trials comparing information with and without patients’ experiences, audio or video; or comparing different types of presentations. A recent systematic review on the use of narratives to impact health policymaking did not find any trials.61
Strengths and weaknesses of our checklist
We did not conduct a systematic review to inform our guidance, review non-English language literature, assess the certainty of the evidence supporting each recommendation, grade the strength of our recommendations or use a formal consensus process. However, we have provided explanations of the basis for each recommendation and references to supporting research. Our approach to preparing this checklist has been pragmatic in terms of the methods we have used. We hope that others will find the checklist practical and helpful. To facilitate use of the checklist, we have prepared a flowchart with examples (figure 1).
Implementation of the guidance can be facilitated by developing a template, specific guidance for those charged with using the template to prepare the information and training for those people. Links to examples of these are found in the flowchart. User testing can help to ensure that people in your target audience experience the information positively and as intended. We have provided links to examples of user tests of information about the effects of interventions and to resources for user testing in the flowchart.
Implications for research
There remain many important uncertainties about how best to present evidence-based information about the effects of interventions to people making decisions about those interventions. There is a need for more primary research and more systematic reviews in this field. We have summarised key uncertainties that we identified while preparing this checklist in table 2. In addition, there is a need for a methodological review and a consensus on appropriate outcomes for studies evaluating different ways of communicating evidence-based information about the effects of interventions.62
The checklist that we have developed, which includes 10 items, is the top layer of our recommendations for how to prepare evidence-based information on the effects of interventions that is intended to inform decisions by patients and the public, health professionals or policymakers. These 10 recommendations summarise the lessons that we have learnt from our review of relevant research. The recommendations draw on our own experience over the past 20–30 years in developing and evaluating ways of helping people to make well-informed health choices by making research evidence more understandable and useful to them. We welcome feedback and suggestions for how to improve our advice.
The following people provided feedback on an earlier version of our checklist: Angela Coulter, Anne Hilde Røsvik, Baruch Fischhoff, Christina Rolfheim-Bye, Daniella Zipkin, David Spiegelhalter, Donna Ciliska, Elie Akl, Frode Forland, Glyn Elwyn, Gord Guyatt, Hanne Hånes, Holger Schünemann, Iain Chalmers, Jessica Ancker, John Ioannidis, Knut Forr Børtnes, Magne Nylenna, Marita Sporstøl Fønhus, Mike Clarke, Mirjam Lauritzen, Nancy Santesso, Nandi Siegfried, Pablo Alonso Coello, Paul Glasziou, Per Kristian Svendsen, Ray Moynihan, Rebecca Bruu Carver, Richard Smith and Tove Skjelbakken.
Contributors ADO, CG, SF, SL and AF are health service researchers. SR is a designer and researcher. The authors have worked together for over two decades studying ways to help health professionals, policymakers, patients and the public make well-informed healthcare decisions. All the authors participated in discussions about the recommendations and this report, helped review the literature and respond to external feedback on a draft report, and provided feedback on each draft of the report. ADO is the guarantor of the article.
Funding All the authors are employed by the Norwegian Institute of Public Health and work in the Centre for Informed Health Choices.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data sharing not applicable as no datasets generated and/or analysed for this study.