Research Methods & Reporting

Converting an odds ratio to a range of plausible relative risks for better communication of research findings

BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.f7450 (Published 24 January 2014) Cite this as: BMJ 2014;348:f7450

Robert L Grant, senior lecturer in health and social care statistics
St George’s, University of London and Kingston University, London SW17 0RE, UK
Correspondence to: R L Grant robert.grant{at}sgul.kingston.ac.uk
Accepted 1 November 2013

Odds ratios are a necessary evil in medical research; although used as a measure of effect size from logistic regressions and case-control studies, they are poorly understood. This paper provides practical advice for authors and readers on converting odds ratios to relative risks

The odds ratio is a common measure in medical research of the effect size comparing two groups (treatments or risk factors) in terms of an outcome that is either present or absent. However, the odds ratio is poorly understood.1 2 3 The relative risk (also called the risk ratio) is more intuitive, but cannot be obtained from case-control studies or (except in rare instances) logistic regressions. Because the misunderstanding arises from the odds itself, simply describing it as a proportional change (for example, explaining an odds ratio of 0.8 as “treatment X was associated with a 20% reduction in the odds of the outcome”) is not helpful for most people. This is a problem when communicating results to healthcare professionals and policy makers, discussing treatment options with patients, or seeking to conduct a meta-analysis of studies reporting effect sizes in a mixture of odds ratios and relative risks.

Unfortunately, confusion about odds is not the only problem; there is also a danger of inaccuracy when communicating odds ratios. When the outcome is rare, the odds ratio and relative risk are about the same, and medical papers sometimes rely too heavily on this approximation, discussing odds ratios as if they were risks. Not only is the odds ratio a poor approximation for outcomes that are not rare in the study, but a statistical analysis with a single odds ratio, common to all participants, also does not imply a single common relative risk. In fact, the relative risk depends also on the risk of the outcome in the baseline or control group; for brevity, I will refer to this as the baseline risk. Because of this, the same odds ratio could imply a very different relative risk for subgroups of the population with different baseline risks.

The formula for converting an odds ratio to a relative risk is straightforward4 5 (fig, table 1):

Relative risk=odds ratio/(1−p0+(p0×odds ratio))

(Where p0 is the baseline risk.)
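This conversion is easy to script. The short Python sketch below implements the formula; the function name and the example values (the odds ratio of 0.8 used in the illustration above, with an assumed baseline risk of 10%) are mine, for illustration only.

```python
def or_to_rr(odds_ratio, p0):
    """Convert an odds ratio to a relative risk, given the baseline risk p0."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# Illustration: an odds ratio of 0.8 with an assumed baseline risk of 10%
print(round(or_to_rr(0.8, 0.10), 2))  # 0.82, that is, an 18% reduction in risk
```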

Figure 1

Relation between the odds ratio, relative risk, and baseline risk (p0)

Table 1

Relative risks according to varying values of the odds ratio and baseline risk

The odds ratio is always further from 1 than the relative risk, but the two are more similar when the baseline risk is small, as shown by the near diagonal line in the figure for a baseline risk of 0.1. Although the basic conversion is well understood, more complex analyses may not have a single “baseline risk.” The box describes the difference between odds and risks using the example of a clinical trial of a smoking cessation strategy.6 Odds ratios are used to summarise the effect of the strategy on various dichotomous outcomes, and these are also adjusted for ward discharge rates. This means that the baseline risk (in the usual care group) will differ between wards, because discharge rate acts as a confounding factor.7 Therefore, a single odds ratio can imply a range of relative risks, depending on the ward’s discharge rate.

Box: Definitions of odds and risks

Definitions
  • Outcome present for treatment: a

  • Outcome present for control: b

  • Outcome absent for treatment: c

  • Outcome absent for control: d

  • Total for treatment: a+c

  • Total for control: b+d

Therefore:

  • Odds of the outcome in the treatment group: a/c

  • Odds of the outcome in the control group: b/d

  • Odds can range from 0 to infinity but are always positive

  • Odds ratio for the outcome comparing treatment with control: (a/c)/(b/d)=(a×d)/(b×c)

  • In case-control studies, the odds ratio for having received the treatment, comparing outcome present to outcome absent, is also (a×d)/(b×c), hence the use of odds ratios in this study design

  • Risk of the outcome in the treatment group: a/(a+c)

  • Risk of the outcome in the control group: b/(b+d)

  • Risks can range from 0 to 1

  • Relative risk comparing treatment with control: (a×(b+d))/(b×(a+c))

  • For both the odds ratio and relative risk, 1 represents no difference between the groups

  • The risk (and the odds) does not have to refer to an undesirable outcome
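To make these definitions concrete, here is a minimal Python sketch of the crude calculations using the box’s notation; the function name crude_measures is my own.

```python
def crude_measures(a, b, c, d):
    """Crude odds ratio, relative risk, and baseline risk from a 2x2 table.

    a = outcome present, treatment   b = outcome present, control
    c = outcome absent, treatment    d = outcome absent, control
    """
    odds_ratio = (a * d) / (b * c)      # (a/c) / (b/d)
    risk_treatment = a / (a + c)
    baseline_risk = b / (b + d)         # risk in the control group, p0
    relative_risk = risk_treatment / baseline_risk
    return odds_ratio, relative_risk, baseline_risk
```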

An applied example

A cluster randomised controlled trial by Murray and colleagues is used here to show these calculations.6 This study compared systematic identification and support to quit smoking with discretionary support (usual care) in a hospital setting. Applying the above definitions:

  • Number of participants who quit at four weeks after the hospital based intervention (“a”): 98

  • Number of participants who did not quit at four weeks after the hospital based intervention (“c”): 162

  • Number of participants who quit at four weeks after usual care (“b”): 37

  • Number of participants who did not quit at four weeks after usual care (“d”): 187

  • Total for the hospital based intervention (“a+c”): 260 participants

  • Total for usual care (“b+d”): 224 participants

In this example, the odds ratio was 3.06 and the relative risk was 2.28, which means that patients’ “risk” of successfully quitting smoking after four weeks in the intervention group was 2.28 times higher than in the usual care group. The baseline risk was 37/224=17%.
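As a quick check, these figures can be reproduced by plugging the counts above into the box’s formulas (a short Python sketch; the variable names simply follow the box’s labels):

```python
a, b, c, d = 98, 37, 162, 187                    # counts from the trial, labelled as in the box
odds_ratio = (a * d) / (b * c)                   # 3.06
relative_risk = (a / (a + c)) / (b / (b + d))    # 2.28
baseline_risk = b / (b + d)                      # 0.17 (37/224)
print(round(odds_ratio, 2), round(relative_risk, 2), round(baseline_risk, 2))
```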

These figures are the crude odds ratio and relative risk, which is the difference observed between groups, ignoring any other confounding variables that might differ between groups. In order to obtain a better estimate of how the treatment or risk factor actually causes a change in the odds or risk of the outcome, analyses are adjusted for the confounding variables.

Logistic regression is a statistical procedure to estimate the odds of the outcome occurring given one or more predictor variables. This is a convenient and flexible way to obtain the adjusted odds ratio, but for computational reasons it usually cannot estimate the adjusted relative risk.

When the regression allows for the effect of the treatment or risk factor of interest to differ for various values of a confounding variable, this is called interaction, or effect modification.

Advice for authors of medical research

Authors of medical research should consider converting odds ratios to relative risks in this way, and should provide the observed risks if possible, because best practice in communicating risks requires absolute as well as relative measures.8 Unfortunately, the risks cannot be obtained at all for case-control studies. In this case, if a range of plausible baseline risks can be agreed among authors, perhaps based on published evidence, this can be used to create a range of plausible relative risks.

When an odds ratio is adjusted for covariates (typically by logistic regression), there is no longer a single shared baseline risk. Instead, different values of the covariates in different participants mean that their baseline risk differs, and therefore the relative risk differs too. In such a situation, authors can use an average baseline risk, or present a range of relative risks corresponding to an observed or otherwise plausible range of baseline risks.

A single average risk can be calculated in the context of logistic regression by finding the mean observed value of each covariate and entering these values, along with the baseline or control group indicator, into the regression equation. This will yield a baseline odds that can then be converted to the baseline risk by the following equation:

Risk=odds/(1+odds)

Therefore, the odds ratio is converted to an “average relative risk.”
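A minimal sketch of this calculation is given below. The coefficient values are entirely illustrative (they do not come from any real analysis); in practice they would be taken from the fitted logistic regression, with the treatment indicator set to zero to represent the baseline or control group.

```python
import numpy as np

# Illustrative fitted model: log-odds = intercept + b_treatment*treatment + b1*x1 + b2*x2
intercept = -1.8
log_odds_ratio_treatment = 1.1              # adjusted log odds ratio for the intervention
covariate_coefs = np.array([0.4, -0.2])
covariate_means = np.array([0.6, 2.3])      # mean observed value of each covariate

# Baseline (control group) log-odds at the mean covariate values
baseline_log_odds = intercept + covariate_coefs @ covariate_means
baseline_odds = np.exp(baseline_log_odds)
baseline_risk = baseline_odds / (1 + baseline_odds)   # risk = odds / (1 + odds)

adjusted_or = np.exp(log_odds_ratio_treatment)
average_rr = adjusted_or / (1 - baseline_risk + baseline_risk * adjusted_or)
print(round(baseline_risk, 3), round(average_rr, 2))
```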

It may be preferable to provide a range of relative risks rather than a single average, because the baseline risk is a measure of risk in the population and could still differ markedly for individual people. A relatively simple approach to this is to use the regression model to calculate the baseline odds for every study participant, rank these odds, and calculate deciles (the values dividing the study participants into ten equally sized subgroups in order of their baseline odds). The deciles for baseline odds are then converted to baseline risks, and hence the odds ratio is converted to deciles of adjusted relative risks.
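Continuing the sketch above, the decile approach needs only each participant’s baseline log-odds from the fitted model; the simulated values below simply stand in for those.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for the per-participant baseline (control condition) log-odds from the fitted model
baseline_log_odds = rng.normal(loc=-1.5, scale=0.8, size=500)

baseline_odds = np.exp(baseline_log_odds)
decile_odds = np.quantile(baseline_odds, np.arange(0.1, 1.0, 0.1))  # ranks and splits into tenths
decile_risks = decile_odds / (1 + decile_odds)                      # risk = odds / (1 + odds)

adjusted_or = 2.0                                                   # illustrative adjusted odds ratio
decile_rrs = adjusted_or / (1 - decile_risks + decile_risks * adjusted_or)
print(np.round(decile_rrs, 2))
```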

However, in more complex models, such as where the odds ratio for the intervention or risk factor of interest is altered by one or more covariates (an “interaction” or “effect modification”), this simple averaging may fail to capture all the interdependencies in the data. Modern statistical software—for example, Stata9 and the R package “effects”10—allows for the calculation of “marginal effects,” which incorporate these complexities to estimate individuals’ risks, and authors should be prepared to seek advice on the best way to obtain and communicate these results.

Advice for clinical readers of medical research

Most published research providing an odds ratio as a measure of effect size should also provide sufficient information for the baseline risk, and hence the relative risk, to be calculated. If numbers in each group are given, the crude relative risk can be calculated directly (box). Even if not all this information is present, readers can still make some useful attempts at conversion, which is particularly important for those conducting meta-analyses and systematic reviews. Readers will need to decide on a plausible range of baseline risks and thus derive a plausible range of relative risks. In the case of odds ratios adjusted for covariates—which is common in contemporary medical research—readers will not be able to carry out the approach suggested above for authors (unless they can access the original data). Therefore, a simple sensitivity analysis is likely to be helpful, using an upper and lower plausible limit. This plausible range could be derived from similar, previously published research, or could use expert opinions.

Example: cluster randomised controlled trial of hospital based smoking cessation

An illustrative analysis was conducted on Murray and colleagues’ cluster randomised controlled trial,6 comparing systematic identification and support to quit smoking with discretionary support (usual care) in a hospital setting. This study reported a primary outcome and six secondary outcomes, with baseline risk (proportions of participants with the outcome in the usual care arm) ranging from 6% to 29% for different outcomes (table 2). Here, all the outcomes are positive but it still makes sense to describe them as risks. The crude odds ratios and relative risks could be calculated from the reported data, but estimating adjusted relative risks requires a decision about a plausible range of baseline risks at different levels of the confounding factor (in this case, ward discharge rates). Because this covariate varies between care settings, the Cochrane review cited by Murray and colleagues was taken as an approximate guide for the primary outcome, quitting at four weeks. It included 23 studies with 100 or more participants in the control arm.11 The baseline risks ranged from 6% to 53% (interquartile range 9-35), which is about half to double the baseline risk in Murray and colleagues’ study.6 Follow-up times varied, but this range appeared plausible; therefore, 9% and 35% were used for sensitivity analysis.

Table 2

Results from cluster randomised controlled trial by Murray and colleagues6

The plausible crude relative risks ranged from 1.78 to 2.58, which clearly contained the true known value of 2.28. The plausible adjusted relative risks ranged from 1.52 to 1.91, which cannot be verified but still represented a large increase in smokers’ chances of still having quit at four weeks’ follow-up.
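The crude figures can be verified by applying the conversion formula to the reported crude odds ratio of 3.06 at the two plausible baseline risks (a short check in Python, repeating the conversion function from earlier):

```python
def or_to_rr(odds_ratio, p0):
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

print(round(or_to_rr(3.06, 0.09), 2), round(or_to_rr(3.06, 0.35), 2))  # 2.58 1.78
```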

A doctor can clearly advise their patient that “research suggests this intervention will improve your chances of quitting by somewhere between 52% and 91%, depending on how soon you leave hospital.” This advice is more likely to be understood than the alternative (“will more than double your odds”), or the incorrect assumption that the odds ratio can be described as an approximate relative risk and any differences ignored (“will more than double your chances”). Considering the secondary outcomes from Murray and colleagues makes this clear: those with large baseline risks, or large odds ratios, have very large differences between odds ratios and relative risks.

Conclusion

When communicating risks to patients, it is important to be able to frame the statistics in a meaningful and easily understood metric.8 Misinterpretation of the odds ratio can lead to serious overestimation of the benefits or risks in medical decision making. It is relatively simple to convert odds to relative risks given the baseline risk or baseline odds of the outcome. However, any statistical model which contains a single odds ratio that is constant for all covariate values does not imply a constant relative risk, so to communicate relative risks effectively it is essential to provide a range of relative risks for different covariate values. These calculations can be conducted with simple approximations or with modern statistical software, and authors of medical research should make use of these to improve communication of their findings. For readers of published research, there will remain situations where sufficient information for such a conversion is not given, and here they should interpret odds ratios with caution; sensitivity analysis over plausible relative risks is advisable. Conversion is also potentially helpful in meta-analyses, where different metrics can prevent studies from being combined.

Summary points

  • Odds ratios are not well understood as a measure of effect size, and conversion to relative risks by a simple calculation would improve understanding of findings

  • A statistical model with a single fixed odds ratio does not imply a fixed relative risk; in fact, the relative risk differs depending on the baseline risk of the outcome

  • When baseline risks have not been published, or the effect is adjusted for other confounding factors, a range of plausible relative risks is an effective way of representing uncertainty about the effect size in terms of an individual’s risk

  • In the case of more complex statistical models, modern software can provide estimates of individuals’ risks

Notes

Cite this as: BMJ 2014;348:f7450

Footnotes

  • I am grateful to Fiona Reid and Caroline Rogers for their advice on drafts of this paper.

  • Contributor: RLG conceived of the need for practical advice on this topic, and the general approach set out above. RLG carried out the example analysis and wrote the paper. RLG is a medical statistician with an interest in effective communication of research findings, and he acts as guarantor for this paper.

  • Funding: None received.

  • Competing interests: I have read and understood the BMJ Group policy on declaration of interests, and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Ethical approval: None required; this entirely methodological paper involved no contact with or data from humans.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References