Original Article
Directed acyclic graphs can help understand bias in indirect and mixed treatment comparisons

https://doi.org/10.1016/j.jclinepi.2012.01.002

Abstract

Objective

To introduce and advocate directed acyclic graphs (DAGs) as a useful tool to understand when indirect and mixed treatment comparisons are invalid and to guide strategies that limit bias.

Study Design and Setting

Using DAGs, we explain heuristically when indirect and mixed treatment comparisons are biased, and whether statistical adjustment for imbalances in study and patient characteristics across the different comparisons in a network of randomized controlled trials (RCTs) is appropriate.

Results

A major threat to the validity of indirect and mixed treatment comparisons is a difference in modifiers of the relative treatment effect across comparisons, and statistically adjusting for these differences can improve comparability and remove bias. However, adjustment for differences in covariates across comparisons that are not effect modifiers is not necessary and can even introduce bias. As a special case, we outline that adjustment for the baseline risk might be useful to improve similarity and consistency, but may also bias findings.

Conclusion

DAGs are useful to evaluate conceptually the assumptions underlying indirect and mixed treatment comparison, to identify sources of bias and guide the implementation of analytical methods used for network meta-analysis of RCTs.

Introduction

What is new?

  • It is known that the assumptions of similarity and consistency underlie indirect and mixed treatment comparisons. These assumptions are violated if effect modifiers of the relative treatment effect differ across comparisons.

  • This article uses directed acyclic graphs (DAGs) to conceptually evaluate the assumptions of similarity and consistency, and to explain heuristically when analyses of indirect and mixed treatment comparisons of randomized controlled trials are biased.

  • Although statistically adjusting for differences in effect modifiers across comparisons can improve comparability, DAGs demonstrate that adjusting for differences in covariates that are not effect modifiers is unnecessary and can even introduce bias.

  • Furthermore, it is shown that adjustment for the baseline risk to explain heterogeneity can introduce bias in indirect and mixed treatment comparisons as well.

  • This article suggests the use of DAGs to identify sources of bias in indirect and mixed treatment comparisons and guide the implementation of analytical methods used for network meta-analysis of RCTs.

In the absence of a randomized controlled trial (RCT) comparing all interventions of interest, an indirect treatment comparison of different RCTs can provide useful evidence to inform healthcare decision making [1], [2], [3], [4], [5], [6], [7], [8]. Even when the results of the direct comparisons are conclusive, combining them with indirect estimates in a mixed treatment comparison may yield more refined estimates [1], [2], [3], [4]. If the available evidence base consists of a network of RCTs involving treatments compared directly or indirectly or both, it can be synthesized by means of a network meta-analysis [9].

In indirect and mixed treatment comparisons, randomization holds within trials but not across them. Accordingly, covariates that affect treatment effects may be imbalanced across comparisons, resulting in violations of the similarity assumption [6]. When the network of RCTs consists of both direct and indirect evidence for some comparisons, the imbalance in these treatment-by-covariate interactions results in consistency violations. Regression-based techniques have been used to account for such differences across comparisons [6], [10], [11], [12], [13], [14].

The objective of this article was to advocate directed acyclic graphs (DAGs) as a useful tool to understand bias in indirect and mixed treatment comparisons and guide the implementation of analytical methods.

Section snippets

DAGs

A DAG is a graphical structure consisting of a set of relevant nodes, each associated with a random variable, and arrows connecting the nodes that represent dependence [15], [16], [17], [18], [19]. In Fig. 1, three DAGs include the nodes treatment T and outcome O, along with the covariates severity of disease C1, adverse events C2, and biomarker status C3. A node pointing to another is called a parent; a node pointed to is a child (e.g., C1 is a parent node of T, Fig. 1B). A path between
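To make the graph terminology concrete, the following is a minimal sketch (not from the article) of how a DAG such as Fig. 1B could be encoded with the Python networkx package. The node names T, O, and C1 follow the article; the exact edge set is assumed for illustration only.

```python
# Minimal sketch (not from the article): encoding a small DAG with networkx.
# Node names follow the article; the edge set is assumed for illustration.
import networkx as nx

dag = nx.DiGraph()
# Assume C1 (severity of disease) affects both treatment T and outcome O,
# and T affects O directly.
dag.add_edges_from([("C1", "T"), ("C1", "O"), ("T", "O")])

assert nx.is_directed_acyclic_graph(dag)        # acyclic by construction

# Parent/child terminology: C1 is a parent of T, and T is a child of C1.
print(list(dag.predecessors("T")))              # parents of T  -> ['C1']
print(list(dag.successors("C1")))               # children of C1 -> ['T', 'O']

# Any route between T and O other than the direct arrow T -> O
# (here T <- C1 -> O) is a backdoor path that can transmit dependence.
undirected = dag.to_undirected()
print(list(nx.all_simple_paths(undirected, "T", "O")))
```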

Meta-analysis of RCTs

For an RCT (Fig. 1A), nodes T and O and the direct path between them can be collapsed into a single node representing the relative effect of the treatment on the outcome. Fig. 2 illustrates the DAG for a meta-analysis synthesizing evidence from several RCTs that compare treatment B with treatment A. Node D reflects the distribution of relative treatment effects of B vs. A across studies. In a meta-analysis, randomization to treatment only holds within studies and not across studies.
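The distribution of relative treatment effects across studies (node D) is what a standard random-effects meta-analysis estimates. Below is a minimal DerSimonian-Laird sketch in Python, not taken from the article; the study estimates and standard errors are invented for illustration.

```python
# Minimal sketch (not from the article): random-effects pooling of
# log odds ratios from AB trials, i.e. an estimate of node D.
# Study data are invented for illustration.
import numpy as np

log_or = np.array([-0.35, -0.10, -0.50, -0.25])   # B vs. A, per study
se     = np.array([ 0.20,  0.15,  0.30,  0.25])

w_fixed = 1.0 / se**2
pooled_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = np.sum(w_fixed * (log_or - pooled_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(log_or) - 1)) / c)

w_random = 1.0 / (se**2 + tau2)
pooled_random = np.sum(w_random * log_or) / np.sum(w_random)
pooled_se = np.sqrt(1.0 / np.sum(w_random))

print(f"pooled log OR (B vs. A): {pooled_random:.3f} "
      f"(SE {pooled_se:.3f}), tau^2 = {tau2:.3f}")
```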

Indirect treatment comparisons of RCTs

Assume that RCTs are available for the AB and AC treatment comparisons, and that we are interested in the relative treatment effect of intervention C vs. B. Because no BC studies are available, we must rely on an indirect comparison of the relative effects from the AB and AC studies to estimate the relative effect of C vs. B. Fig. 3 presents a DAG for such an indirect comparison. In contrast to a pair-wise meta-analysis of only AB studies, an indirect comparison, or network meta-analysis, is
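One standard way to carry out such a comparison, preserving within-trial randomization by contrasting pooled relative effects rather than single arms, is the adjusted indirect comparison (Bucher method). The sketch below is not the article's analysis; the pooled estimates are invented for illustration.

```python
# Minimal sketch (not from the article): Bucher-style adjusted indirect
# comparison of C vs. B from pooled AB and AC estimates (invented numbers).
import numpy as np

d_AB, se_AB = -0.30, 0.12     # pooled log OR, B vs. A
d_AC, se_AC = -0.55, 0.15     # pooled log OR, C vs. A

# Indirect estimate of C vs. B and its standard error
d_BC_indirect = d_AC - d_AB
se_BC_indirect = np.sqrt(se_AB**2 + se_AC**2)

ci = (d_BC_indirect - 1.96 * se_BC_indirect,
      d_BC_indirect + 1.96 * se_BC_indirect)
print(f"indirect log OR (C vs. B): {d_BC_indirect:.2f}, "
      f"95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```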

Mixed treatment comparisons of RCTs

When, in addition to AB and AC studies, the evidence base also includes BC studies, there is both direct and indirect evidence for each of the three pair-wise comparisons. Such a closed-loop network of studies is called a mixed treatment comparison. The DAG of Fig. 4 not only applies to an indirect treatment comparison, but also to a mixed treatment comparison. Now, node TC consists of three levels: AB, AC, and BC comparisons. The arrow pointing from node TC to node D reflects the relative
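When BC trials are available, the direct BC estimate can be combined with the indirect one under the consistency assumption d_BC = d_AC - d_AB. The inverse-variance sketch below is a simplification, not the article's model; full network meta-analyses typically fit all comparisons jointly, for example in a Bayesian hierarchical model, and all numbers are invented.

```python
# Minimal sketch (not from the article): pooling direct and indirect
# evidence for C vs. B under the consistency assumption (invented numbers).
import numpy as np

d_direct,   se_direct   = -0.20, 0.18   # pooled log OR from BC trials
d_indirect, se_indirect = -0.25, 0.19   # from a Bucher-style calculation

w_dir, w_ind = 1.0 / se_direct**2, 1.0 / se_indirect**2
d_mixed  = (w_dir * d_direct + w_ind * d_indirect) / (w_dir + w_ind)
se_mixed = np.sqrt(1.0 / (w_dir + w_ind))

# A large gap between direct and indirect estimates signals inconsistency.
z_inconsistency = (d_direct - d_indirect) / np.sqrt(se_direct**2 + se_indirect**2)

print(f"mixed log OR (C vs. B): {d_mixed:.2f} (SE {se_mixed:.2f}), "
      f"inconsistency z = {z_inconsistency:.2f}")
```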

Useful adjustment for covariates in an indirect and mixed treatment comparison

As illustrated, indirect and mixed treatment comparisons result in biased estimates if there is an open path between TC and D in addition to the direct target path. To remove all bias, we have to condition on covariates in such a way that all open backdoor paths will be closed without closing the target path or opening new biasing paths [16], [17], [23].

Let us return to the numerical example introduced before, in which C1 is the source of bias because of an imbalance across the AB and AC comparisons.
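The article's numerical example is not reproduced in this snippet. The sketch below only illustrates, with invented study-level data, how a meta-regression can adjust the comparison for a study-level effect modifier such as C1 (here, mean disease severity per trial); weighted least squares stands in for the Bayesian or frequentist hierarchical models usually used in practice.

```python
# Minimal sketch (not from the article): study-level meta-regression that
# adjusts the AC-vs-AB contrast for an effect modifier C1 (invented data).
import numpy as np
import statsmodels.api as sm

# One row per trial: comparison (0 = AB, 1 = AC), mean severity C1,
# observed log OR of the active arm vs. A, and its standard error.
comparison = np.array([0, 0, 0, 1, 1, 1])
c1         = np.array([2.0, 2.5, 3.0, 4.0, 4.5, 5.0])
log_or     = np.array([-0.30, -0.25, -0.20, -0.70, -0.65, -0.60])
se         = np.array([0.15, 0.15, 0.15, 0.15, 0.15, 0.15])

# Model: log OR = b0 + b1*comparison + b2*C1; b1 contrasts AC vs. AB
# relative effects at a common value of the effect modifier.
X = sm.add_constant(np.column_stack([comparison, c1]))
fit = sm.WLS(log_or, X, weights=1.0 / se**2).fit()

b0, b1, b2 = fit.params
print(f"adjusted AC-vs-AB contrast (log OR scale): {b1:.2f}")
print(f"effect of C1 on the relative effect:       {b2:.2f}")
```

Note that with so few trials the comparison indicator and C1 are nearly collinear, which is exactly the limited-power problem discussed in the next section.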

Unnecessary and harmful adjustment for covariates in an indirect and mixed treatment comparison

One can argue that a priori causal knowledge regarding alleged bias, as reflected in the DAGs, is not needed to guide the analysis of indirect and mixed treatment comparisons: simply incorporate all covariates that differ across comparisons in a meta-regression model. Unfortunately, because network meta-analyses are often based on a limited number of studies, they may not have the power to estimate a regression model that captures all imbalances in covariates

Adjustment for baseline risk across trials in an indirect and mixed treatment comparison

In meta-analysis, the relative treatment effect may vary according to the underlying or baseline risk of the patients in the different studies [29]. Although it may also be of interest in indirect and mixed treatment comparisons to estimate relative treatment effects by baseline risk (which is not straightforward when not all studies include a common control group), it is arguably more important to evaluate whether adjustment for baseline risk can improve similarity and consistency.
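A common way to operationalize such an adjustment is to regress the study-specific relative effect on the (centered) log odds of the event in the control arm. The sketch below uses invented data and is not the article's analysis; it also treats the observed baseline risk as fixed, whereas proper formulations (e.g., Bayesian models) account for its sampling error.

```python
# Minimal sketch (not from the article): meta-regression of the relative
# treatment effect on baseline risk, using the observed control-arm log odds.
# Data are invented; the observed baseline risk is treated as error-free here.
import numpy as np
import statsmodels.api as sm

control_risk = np.array([0.10, 0.15, 0.20, 0.30, 0.40])       # event risk in arm A
log_or       = np.array([-0.20, -0.28, -0.35, -0.50, -0.62])  # B vs. A per trial
se           = np.array([0.18, 0.16, 0.15, 0.14, 0.13])

baseline = np.log(control_risk / (1 - control_risk))   # control-arm log odds
baseline -= baseline.mean()                             # center for interpretability

X = sm.add_constant(baseline)
fit = sm.WLS(log_or, X, weights=1.0 / se**2).fit()

intercept, slope = fit.params
print(f"log OR at the average baseline risk: {intercept:.2f}")
print(f"change in log OR per unit log odds of baseline risk: {slope:.2f}")
```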

In the DAG

Conclusion

DAGs illustrate that indirect and mixed treatment comparisons of RCTs can bias estimates of treatment effects when covariates that act as relative treatment effect modifiers vary across comparisons and are not adjusted for in the analysis. However, adjustment for differences in covariates across comparisons that are not effect modifiers is unnecessary and might even introduce bias. Adjustment for the baseline risk may be useful in indirect and mixed treatment comparisons, but may also bias findings.

Acknowledgments

Chris Schmid received funding from the Agency for Healthcare Research and Quality (grant no. R01HS018574). Georgia Salanti received funding from the European Research Council (IMMA 260559 project).

References (29)

  • G. Lu et al. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med (2004)

  • G. Salanti et al. Evaluation of networks of randomized trials. Stat Methods Med Res (2008)

  • T. Lumley. Network meta-analysis for indirect treatment comparisons. Stat Med (2002)

  • N.J. Cooper et al. Addressing between-study heterogeneity and inconsistency in mixed treatment comparisons: application to stroke prevention treatments in individuals with non-rheumatic atrial fibrillation. Stat Med (2009)