Original Article
A meta-regression analysis shows no impact of design characteristics on outcome in trials on tension-type headaches

https://doi.org/10.1016/j.jclinepi.2007.10.006

Abstract

Objectives

In the conduct of a systematic review or meta-analysis, many possible sources of bias exist, such as bias caused by design characteristics. We studied the influence of methodological study characteristics of randomized clinical trials (RCTs) on the outcome in a systematic review of conservative treatments in patients with tension-type headache (TTH).

Study Design and Setting

Included were RCTs from a systematic review on TTH that had a control group receiving placebo or no treatment and that presented data on recovery or on headache severity, intensity, or frequency. Design characteristics were assessed using the Delphi list. Regression analyses were performed relating each design characteristic separately to the size of the treatment effect.

Results

Out of the original data set of 146 trials, 61 trials fulfilled our selection criteria. The number of trials presenting only dichotomous data was larger than the number presenting only continuous data. All study characteristics showed a nonsignificant relation with the effect estimate. Whether the outcome was presented as dichotomous or continuous data, however, appeared to have a significant impact on treatment effect estimates.

Conclusion

In this study, the design characteristics examined did not show an impact on treatment effect estimates, but the way the treatment effect was measured had a significant impact.

Section snippets

Objective

Systematic reviews and meta-analyses are designed to help clinicians base their clinical decisions on the best available evidence. In the conduct of a systematic review or meta-analysis, many possible sources of bias exist, such as publication bias [1], [2], language bias [3], [4], and bias caused by design characteristics [5], as well as clinical heterogeneity. All of these sources of variation may influence the reported outcome.

One can assess design components such as concealed …

Study selection

For this study, we selected from a large systematic review the randomized clinical trials that fulfilled the following criteria: (1) including a control group with placebo treatment, no treatment, or waiting list controls; (2) presenting sufficient data (means and measures of variability, or the number of patients with a successful treatment outcome). For the crossover trials, we summed the outcomes over all active treatment periods and over all control periods, as none of these trials provided separate …
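
Because the included trials report either dichotomous data (numbers of patients recovered) or continuous data (means and measures of variability), a per-trial effect estimate has to be derived from whichever form is available. The article's exact formulas are not shown in these snippets; the following is a minimal sketch of standard inverse-variance building blocks, a log odds ratio for dichotomous outcomes and a standardized mean difference for continuous outcomes. The function names, the 0.5 continuity correction, and the example counts are illustrative assumptions, not the authors' code.

    import numpy as np

    def log_odds_ratio(events_t, n_t, events_c, n_c):
        # Log odds ratio and its variance for a dichotomous outcome
        # (adds a 0.5 continuity correction if any cell is zero).
        a, b = events_t, n_t - events_t
        c, d = events_c, n_c - events_c
        if 0 in (a, b, c, d):
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        return np.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

    def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        # Cohen's d and its approximate variance for a continuous outcome.
        sd_pooled = np.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                            / (n_t + n_c - 2))
        d = (mean_t - mean_c) / sd_pooled
        return d, (n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c))

    # Example with hypothetical counts:
    # effect, var = log_odds_ratio(18, 30, 9, 31)

Per-trial estimates and variances derived this way can then enter a weighted regression of the kind sketched after the Discussion below.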

Study selection

We divided the original data set of 146 trials into four main categories according to the intervention: acute pain medication (n = 41), preventive medication (n = 36), physiotherapy interventions (n = 8), and behavioral interventions (n = 43). An additional category of trials (n = 11) concerned children with TTH. Five trials were included in two categories (preventive medication and either physiotherapy or behavioral interventions), and in total seven trials did not fit into any of the categories and were excluded, …

Discussion

Meta-regression evaluates whether certain factors, in this case design characteristics, explain heterogeneity of treatment effect between trials. We did not find any significant association between design characteristics as measured with the Delphi list and the effect estimate.
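
The article's model specification is not reproduced in these snippets. Purely as an illustrative sketch (not the authors' analysis), a weighted meta-regression of per-trial effect estimates on a single design characteristic could look as follows; the data values, the chosen Delphi item, and the fixed-effect inverse-variance weighting are all assumptions of this example, and a random-effects meta-regression would additionally estimate the between-trial variance.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical per-trial data: effect estimates (e.g., log odds ratios),
    # their within-trial variances, and one Delphi item scored 1 = yes / 0 = no
    # (e.g., "was the treatment allocation concealed?").
    effect = np.array([-0.8, -0.3, -0.5, 0.1, -0.6, -0.2])
    variance = np.array([0.10, 0.25, 0.15, 0.30, 0.12, 0.20])
    delphi_item = np.array([1, 0, 1, 0, 1, 0])

    # Inverse-variance weighted (fixed-effect) meta-regression: the coefficient
    # on the Delphi item estimates how much the treatment effect differs between
    # trials with and without that design characteristic.
    X = sm.add_constant(delphi_item)
    fit = sm.WLS(effect, X, weights=1.0 / variance).fit()
    print(fit.params, fit.pvalues)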

Conclusion

In this study, the design characteristics examined did not show an impact on treatment effect estimates, but the way the treatment effect was measured had a significant impact. Whether assessment of design characteristics should or should not be part of the design of systematic reviews and meta-analyses remains undecided. Bias caused by design characteristics cannot be ruled out, but its direction is unclear.

Acknowledgment

Two authors (A.P.V., T.S.) had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

References (27)

  • A.P. Verhagen et al. The influence of methodological quality on the conclusion of a landmark meta-analysis on thrombolytic therapy. Int J Technol Assess Health Care (2002)
  • K.F. Schulz et al. Blinding and exclusions after allocation in randomised controlled trials: survey of published parallel group trials in obstetrics and gynaecology. Br Med J (1996)
  • R. Kunz et al. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. Br Med J (1998)