Stakeholders faced with the choice of adding a new vaccine product into a national immunization programme are increasingly requesting information on the relative value (i.e. cost effectiveness) of the vaccine compared with alternative uses of resources.[1,2] To assess the cost effectiveness of a vaccine programme, and to formally take into account the relevant epidemiologic, socioeconomic and population-level data, some kind of model is generally necessary.

This review examines the various modelling approaches taken to evaluate the cost effectiveness of vaccination programmes. First we provide an overview of key attributes of general models used to conduct cost-effectiveness analyses (CEAs). We then describe the basic characteristics of the CEAs of currently available vaccines that we identified from a systematic search of the literature. We focus on the models used and their attributes. Based on our findings, we highlight the relevant implications for models used to evaluate the cost effectiveness of vaccines.

1. Background

1.1 Cost-Effectiveness Analysis (CEA) in Health

Economic evaluations differ in the manner in which health outcomes are valued and expressed.[3] CEA is one particular type of economic evaluation in which “costs and effects of a program and at least one alternative are calculated and presented in a ratio of incremental cost to incremental effect.”[4] The underlying principle guiding the valuation of resources in CEAs is opportunity cost, which reflects competing societal demands for limited resources. The implication is that resources should be used as efficiently as possible in order to maximize the health benefits to the population.[3,5]
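A minimal Python sketch can ground the definition above; the costs and effects used here are purely hypothetical and chosen only to illustrate the ratio of incremental cost to incremental effect:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of health effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical figures: a vaccination programme costing $500,000 that saves
# 120 life-years, compared with an alternative costing $200,000 that saves 20.
print(icer(500_000, 120, 200_000, 20))  # → 3000.0 dollars per life-year saved
```

In practice both terms of the ratio are themselves model outputs, which is why the modelling choices discussed in this review matter for the final estimate.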

CEA is distinguished from cost-benefit analysis (CBA) in that CEA measures a programme’s effects in health outcomes (e.g. deaths averted, life-years saved) while CBA measures the effects in monetary units.[3,4] Depending on whether the health outcomes are weighted by utilities (e.g. QALYs or disability-adjusted life-years [DALYs]), analyses may be further distinguished as cost-utility analyses (CUAs) or CEAs.[3,4] Here, we use the term CEA in a broad sense to be inclusive of both. The popularity of CEAs over CBAs has grown, partly because the valuation of health in monetary terms is difficult and has not been universally accepted in the public health community.[4,6] For example, according to our search results using the UK NHS Economic Evaluation Database (NHS EED), among 659 English-language full economic evaluations that were published in 2005 and abstracted in the database, there were 651 CEAs (including CUAs), six CBAs, and two studies that performed both CEA and CBA.

The majority of CEAs conducted for general health interventions have concentrated on questions of ‘technical efficiency’, in that they ask, “given several competing options for reducing morbidity and mortality due to a specific disease, which are the most cost effective to achieve the given objective.”[3] When CEA is used to inform ‘allocative efficiency’, the intention is to provide insight into an optimal set of interventions across multiple disease areas (i.e. within the healthcare sector); this type of analysis needs health outcomes to be expressed in a common unit to facilitate comparison (e.g. life-years saved, QALYs or DALYs). Considerations of allocative efficiency directly allow for broader questions of resource allocation, such as whether a Ministry of Health should invest in AIDS prevention as opposed to childhood vaccination.[3]

Our main focus in this paper is on modelling methods used for CEA of vaccines and, therefore, a comprehensive review of economic evaluation, and in particular CEA, is beyond the scope of this paper. However, we refer the reader to several excellent references, many of which were motivated by the need to improve the quality and comparability of CEAs for priority setting in health.[3,4,6–14] Although there is consensus on the need for standardized methods and assumptions for CEA, there are different areas of emphasis among sources focusing on developed countries,[3,4,10,11] developing countries[12–15] and context-specific decision making within a single country.[16–19] In addition to these general references, numerous sources provide information on specific methodological challenges in CEA, such as the measurement, valuation and transferability of costs.[12,15,20–34] Others have focused on tracking the quality of CEAs[35–38] and cataloguing published CEAs.[38–41]

1.2 Assessing the Cost Effectiveness of Vaccination

In general, vaccines are considered one of the most cost-effective health interventions in both developing and developed countries.[42,43] With new vaccines becoming available (e.g. rotavirus, pneumococcal, meningococcal and human papillomavirus [HPV]), the number of CEAs focused on vaccination has been growing rapidly.[44] This may reflect, at least to some extent, the increasing demand for information on cost effectiveness by those making immunization policy recommendations (e.g. the WHO and country policy makers), financing coordination mechanisms (e.g. the GAVI Alliance [formerly the Global Alliance for Vaccines and Immunisation]) and donors.[1,2] As the number of new vaccines (many of which are quite costly) grows, they will inevitably compete for a limited budget, particularly in developing countries.[45]

Although the controversial methodological issues surrounding all comparative economic evaluations also apply to CEAs of vaccination programmes, there are modelling challenges that are of particular relevance to vaccination programmes.[46,47] The representative example most often described relates to how vaccination programmes can lead to complex epidemiological consequences at the population level as the result of potential herd immunity effects.[48] An additional example relates to challenges associated with modelling certain vaccine-preventable diseases that involve a long lag time between the intervention and the ultimate health benefits.[33,47,49–51] For example, vaccines for hepatitis B virus (HBV) and HPV prevent cancers in adulthood several decades after vaccination.[52] Finally, in select cases, vaccines may prevent only a subset of serotypes (e.g. pneumococcal conjugate vaccine) or viral types (e.g. HPV vaccine), introducing additional challenges related to the modelling of potential type replacement (i.e. effects on the proportion of disease caused by serotypes not targeted by the vaccine).[49,50,53,54]

1.3 Modelling in CEA

Like most public health decisions, decisions related to vaccination policy have to be made in the context of incomplete evidence, imperfect data and uncertainty. Evaluating cost effectiveness, taking into account the relevant biological, clinical, epidemiological and economic factors of a vaccination programme, generally requires use of a model.[55,56]

‘Model’ is an umbrella term that denotes any theoretical construct describing behaviours of a system, and the term is widely used across different disciplines.[8,57] Models can be mathematical or non-mathematical, depending on whether mathematical languages are used in describing the system.[56] Broadly speaking, models in the healthcare context can also be categorized as either ‘empirical’ (in which all model inputs are obtained from a clinical trial or an observational study providing patient-level data) or ‘theoretical’ (in which model parameters are synthesized using mathematical formulae, statistical techniques or simulations).[56] Even when model inputs are directly obtained from a single trial or observational study, some use of mathematical language is required in CEAs to present the final model outcomes in the form of incremental cost-effectiveness ratios (ICERs).[8] Accordingly, both theoretical and empirical approaches to performing CEAs can be described as mathematical models. Because it is difficult to obtain all necessary model inputs from a single trial or observational study, purely empirical models are rare.[58]

Accordingly, in our review, we informally refer to ‘empirical’ CEA models as approaches in which the main inputs for estimating effectiveness are directly obtained from clinical trials or observational studies, and ‘theoretical’ models of CEAs as those in which the main inputs are synthesized to estimate effectiveness and cost using a mathematical formula or simulation technique. In practice, theoretical models may be further categorized into relatively complex ‘simulation models’ (in which simulation techniques of various degrees of complexity — such as a Markov model — are performed to estimate final outcomes) and simpler ‘non-simulation models’ (in which final outcomes are obtained using a simple calculation, mathematical function or statistical model). Figure 1 summarizes this set of concepts for classifying model types in a broad context.

Fig. 1 Categories that may broadly classify general models used in health economic evaluations.

Based on this set of classifications, models used in health economic evaluations could be described generally as falling into the category of theoretical models that use mathematical languages. However, models used in health economic evaluations (typically referred to as decision analytic models) often involve complex computer-based simulation modelling.[59,60] Accordingly, although we refer generally to mathematical theoretical models, in our review we focus on the subset of those models that involve more complex simulation techniques.

The mathematical models most commonly used for health economic evaluations have been described through the lens of health-decision science, epidemiology and, more recently, operations research and industrial engineering. In addition to general sources that describe models commonly used in decision analysis,[56,59–61] there is a vast literature on epidemiological models of infectious diseases,[62–67] and a growing body of literature on modelling techniques used more commonly in the field of operations research and industrial engineering, such as discrete-event simulation (DES).[68–70] Publications from multiple disciplines have described characteristics of specific types of models,[61,62,65,70–73] and have suggested general guidelines for developing and evaluating models.[37,44,74–76]

While a number of studies have discussed specific methodological issues surrounding modelling of vaccination programmes, such as the importance of including herd immunity,[44,65–67] and others have suggested guidelines for evaluating the cost effectiveness of vaccines,[77–79] most of these have focused on a review of a specific vaccine, although the issues raised generally have broad application. The Viral Hepatitis B Prevention Board (Antwerp, Belgium) facilitated a discussion on methodological aspects of economic evaluations as applied to vaccines, and from this recommendations were made for future economic evaluations of vaccination programmes.[77] The resulting statement[77] is the only formal statement specifically focused on vaccine modelling issues underwritten by a diverse, international group of academics active in the field. Beutels et al.[44] presented a general but comprehensive review on the economic evaluation of vaccines. This paper provided a brief overview of the published economic analyses of vaccines, showing the trend in the number of studies on vaccines and examining the distribution of studies across settings, and discussed a broad range of methodological issues, including the choice of model, outcome valuation and costing. Comprehensive reviews are available for several particular vaccines, including vaccines against rubella,[80] hepatitis A virus (HAV),[81,82] HBV,[83] meningococcal meningitis,[84,85] pneumococcal pneumonia,[86–88] Haemophilus influenzae type b (Hib),[89] influenza,[90] pertussis,[91] HPV,[92,93] varicella[94] and rotavirus.[95] Among these, articles by Beutels[83] and Thiry et al.[94] provide specific discussion about alternative types of models. In addition to two reviews of previously published models of HPV,[92,93] Goldie et al.[49] and Garnett et al.[50] provide general discussion of newer modelling approaches required for HPV vaccination. Finally, Brisson and Edmunds[47] have recently described the impact of different sources of uncertainty (i.e. model, methodological and parameter uncertainty) in models of vaccines.

While the use of computer-based mathematical models to synthesize evidence as part of decision modelling for health and economic evaluation has grown dramatically, there has been a relative lack of transparency about analysts’ rationale for model choice. In part, communication about alternative modelling techniques has been hampered by the lack of a universal language describing different kinds of models and their features. The technical expertise for different types of models also often resides within different disciplines (e.g. decision analysis, operations research, statistics and epidemiology), and each discipline has used modelling to address different kinds of questions and has therefore emphasized different model attributes.[59,72] Presumably, though, the choice of model type is a function of several considerations for analysts of any disciplinary background. Factors likely to drive most model type and structure decisions include the question at hand and the nature of the decision problem; the natural history and features of the particular disease of interest; data available to parameterize and/or calibrate the model; familiarity of the analyst with different modelling techniques; time requirements for model development; and ease and speed of simulation.[70,96]

2. A Framework Based on Model Attributes

Unfortunately, the taxonomy used to indicate model types often differs from study to study. There have been suggested taxonomies for models,[66,97,98] although none focused on vaccine-preventable diseases. In this section, we review key dimensions (or attributes) of modelling approaches, emphasizing those of particular significance to the modelling of vaccine-preventable infectious diseases. We then describe some of the main advantages and disadvantages of models that possess the different attributes described below. Finally, building upon the categorization of models suggested by others,[66,97] we suggest an informal framework that we have found helpful, and use this in our literature review of vaccine models for CEA.

Models can be categorized depending on whether they possess the following attributes:

1. the main features of the model change over time (dynamic) or not (static);
2. any changes in the model occur randomly (stochastic or probabilistic) or the rules of change are pre-specified (deterministic);
3. the population’s behaviour in the model is simulated using aggregate variables whose values are population averages (aggregate) or the behaviours of individuals in the population are tracked (individual based);
4. events are assumed to occur at discrete time intervals (discrete) or at a point on a continuum (continuous);
5. the model allows individuals to enter the model (open) or not (closed); and
6. the model is expressed in equations that are functions of linearly linked parameters (linear) or not (non-linear).

2.1 Static versus Dynamic

In modelling an infectious disease, a key question is how to define the force of infection (i.e. the rate at which susceptible individuals become infected) in order to estimate incidence, which is calculated as the product of the force of infection and the proportion of uninfected (susceptible) individuals in a population.[62] In a static model, the force of infection is constant over time (or changes only as a function of age or other individual-based factors). In a dynamic model, the probability of an individual acquiring an infection depends on the contact patterns of that individual (i.e. interaction between individuals), the transmissibility (e.g. infectiousness) of the infection and the distribution of the infection within the population over time. Models that reflect transmission dynamics are the appropriate choice when consideration of herd immunity effects is important.[48,63,65,67] Dynamic models are usually expressed as a set of differential equations.[62,65]

When the parameter values depend on a time variable only (as opposed to other parameters defined within the model), ordinary differential equations are used. When the parameter values depend on another important variable (e.g. age) in addition to the time variable, partial differential equations are used.[63,67,73]
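To make the static/dynamic distinction concrete, the following sketch simulates a simple SIR (susceptible-infectious-recovered) transmission model; the parameter values are hypothetical and, for simplicity, the differential equations are approximated with fixed Euler time steps rather than a formal ODE solver. The force of infection is recomputed from current prevalence at each step, which is precisely what a static model with a constant force of infection omits:

```python
def simulate_sir(beta=0.3, gamma=0.1, n=10_000, i0=10, days=365, dt=1.0):
    """Deterministic, aggregate, dynamic SIR model (illustrative parameters)."""
    s, i, r = float(n - i0), float(i0), 0.0
    history = []
    for _ in range(int(days / dt)):
        foi = beta * i / n          # dynamic force of infection, depends on prevalence
        new_inf = foi * s * dt      # S -> I transitions this step
        new_rec = gamma * i * dt    # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate_sir()
print(hist[-1])  # final (S, I, R) compartment sizes
```

With these illustrative values the basic reproduction number is beta/gamma = 3, so an epidemic occurs and the recovered compartment grows large; the three compartments always sum to the (closed) population size.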

Of note, the term ‘dynamic’ is often used to refer to a model that is ‘open’ (see section 2.5). While dynamic transmission models are often open, these models may also be closed, assuming an epidemic in a closed population.[57,99] In addition, other kinds of models that do not reflect transmission dynamics can also be open. Therefore, we distinguish open from closed models as a separate model attribute.

2.2 Deterministic versus Stochastic

In designing a model for vaccination programmes (or health interventions in general), an important analytic decision is whether to incorporate the concept of chance into the model.[70] That is, to simulate the world realistically events can be set to occur by chance (randomly); this type of model is considered stochastic (also known as probabilistic).[60] In contrast, in a deterministic model, all events occur in a pre-specified way depending on the parameter values and initial conditions of the model. Both dynamic transmission and static models can be deterministic or stochastic.[97] A stochastic dynamic transmission model might be an appropriate choice when modelling an infectious disease epidemic in a small population; such epidemics can be highly dependent on chance, since infectious agents are transmitted with different transmission probabilities. A stochastic model, used within either a static or dynamic transmission model framework, adds complexity, but can allow for a more comprehensive evaluation of the impact of variability and uncertainty.[49,50,60,100,101]

Confusion between the terms ‘stochastic’ and ‘variable’ is often observed, as well as with the closely related terms ‘uncertainty’ and ‘variability’.[60] ‘Stochastic’ is usually combined with the term ‘uncertainty’ to indicate the uncertainty associated with chance outcomes (also known as first-order uncertainty).[60] When parameter values are changed stochastically (or probabilistically) in an aggregate or compartmental model (see section 2.3), the model is said to be simulating parameter uncertainty (also known as second-order uncertainty), which reflects measurement error of the parameter from various sources. In contrast, ‘variability’ refers to the often ‘known’ heterogeneity across subgroups or in a population (e.g. by age or gender).[60] For example, variability across subgroups can be analysed using deterministic sensitivity analysis; variability in a population can be simulated using a Markov cohort model with varying initial states or microsimulation techniques (first-order Monte Carlo or DES).[60]
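The first-order/second-order distinction can be illustrated with a small sketch (all values hypothetical): first-order uncertainty is the chance variation in individual outcomes even when a parameter is known exactly, whereas second-order uncertainty is represented by sampling the parameter itself from a distribution:

```python
import random

random.seed(0)

# First-order (stochastic) uncertainty: individual outcomes vary by chance
# even though the response probability p is known exactly.
p = 0.7
individual_outcomes = [random.random() < p for _ in range(10_000)]

# Second-order (parameter) uncertainty: p itself is uncertain, represented
# here by drawing it from a beta distribution centred near 0.7.
p_draws = [random.betavariate(70, 30) for _ in range(10_000)]

print(sum(individual_outcomes) / 10_000)  # close to 0.7; spread is chance alone
print(sum(p_draws) / 10_000)              # close to 0.7; spread is parameter uncertainty
```

In a full analysis the two are often nested: each second-order draw of p would seed its own first-order simulation of individual outcomes.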

2.3 Aggregate (or Population Average) versus Individual Based

In an aggregate model, individuals are assigned to compartments (sometimes referred to as health states) depending on their health status or other relevant variables. Individuals in each compartment move according to parameter values at the aggregate level (i.e. averages of the individuals belonging to a compartment or the population as a whole), and the model records the number of individuals in each compartment over time. One limitation of this approach is that the number of compartments quickly increases with complex diseases, as more variables are used to stratify the population in the model. For example, if prognosis, utility, costs and outcomes differ based on individual-based factors, and/or if memory of an individual’s history is an important determinant of future probabilities, then the number of compartments rapidly increases. If the model of interest is a simple deterministic static model, the computational burden may remain quite tolerable even with a large number of compartments or states. If the model is dynamic, as the number of compartments increases, there is a substantial computational burden in capturing the interaction between individuals belonging to different compartments.

An individual-based model (or microsimulation model) can overcome this limitation by keeping track of each individual’s behaviour. Like Weinstein,[101] we use the term microsimulation to describe an analysis “in which individual instantiations of a system — such as a patient’s lifetime or the course of an epidemic — are generated using a random process to ‘draw’ from probability distributions a large number of times, in order to examine the central tendency and possibly the distribution of outcomes.” Since microsimulation randomly samples individuals with their own sets of assigned attributes, microsimulation models are naturally stochastic. As previously mentioned in section 2.2, the variance associated with the individual sampling in microsimulation (first-order uncertainty) is different from the uncertainty related to the parameter values.[60,99] Microsimulation models may or may not allow for interactions among individuals. If individuals are simulated one at a time without interaction, the model is classified as a static microsimulation model. If a model tracks individuals, allowing for interactions with other individuals or the environment, the model is categorized as a dynamic microsimulation model.
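As a sketch of a static microsimulation, the following walks individuals one at a time through a hypothetical three-state Markov model with purely illustrative transition probabilities; because individuals do not interact, this corresponds to the static microsimulation category described above:

```python
import random

random.seed(1)

# Hypothetical annual transition probabilities (illustrative values only).
P = {
    "well": {"well": 0.90, "sick": 0.08, "dead": 0.02},
    "sick": {"well": 0.10, "sick": 0.70, "dead": 0.20},
    "dead": {"dead": 1.0},
}

def simulate_individual(years=50):
    """First-order Monte Carlo walk of one individual; returns life-years lived."""
    state, life_years = "well", 0
    for _ in range(years):
        if state == "dead":
            break
        life_years += 1
        u, cum = random.random(), 0.0
        for next_state, prob in P[state].items():
            cum += prob
            if u < cum:
                state = next_state
                break
    return life_years

mean_ly = sum(simulate_individual() for _ in range(2_000)) / 2_000
print(mean_ly)  # average life-years across the simulated cohort
```

The spread of individual results around this mean is first-order uncertainty; the transition probabilities themselves would be varied in a separate, second-order analysis.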

Dynamic microsimulation models can be further divided into three subtypes: (i) individual-level Markov models allowing for interaction; (ii) DES models; and (iii) agent-based models. Individual-level Markov models extend the notion of the static microsimulation model based on a Markov structure, allowing for interactions among individuals.[97] DES is a modelling technique that originated from the field of industrial engineering.[72] It can simulate a complex system involving interactions among individuals or between individuals and the environment (e.g. healthcare delivery facilities). The DES model samples the time to the next event and describes the life history of individuals progressing with various events over time. The DES model is known to be particularly useful in simulating a complex system involving queues and delays mainly due to resource constraints.[60,72] The agent-based model is considered one of the most flexible modelling techniques, and differs from other dynamic microsimulation models in that it allows agents (e.g. individuals) to act autonomously with their own behavioural rules.[102,103] The popularity of this technique has been increasing in a wide range of fields including ecology, engineering and management sciences. It has yet to reach its potential in the field of economic evaluation of vaccines or other health interventions.[104]

2.4 Discrete versus Continuous

The main difference between ‘discrete’ and ‘continuous’ models is the measurement of the timing of events. In a discrete model, an event is assumed to occur at one of the discrete time steps, while in a continuous model, an event can occur at any point in time on a continuum. An example of a continuous model is a dynamic model expressed as a set of differential equations. It is known that the time step used in a discrete-time model can affect the model’s results, partly because such models allow for only one event at each time step.[105] Accordingly, the choice of timeframe can be an important consideration in choosing a model type for a health intervention, particularly when there is a need to model multiple simultaneous events in one time step (e.g. presence of co-morbidities).[97] In this case, a discrete-time model with a small fixed interval, or a continuous-time model is recommended.[97] However, there is a trade-off in choosing between the two timeframes: while a continuous model can provide more accurate results, at least in theory, it could involve a higher computational burden. In addition, since closed-form solutions of continuous models are often difficult to obtain, for practical reasons, continuous differential equations are often approximated by discrete difference equations.

2.5 Open versus Closed

An open model allows individuals to enter and exit the model over time, while a closed model does not allow for new entrances over time. An example of a closed model is a single birth cohort simulation using a Markov model. One example of an open model is a dynamic transmission model that allows for the entry of ‘susceptible’ (or uninfected) individuals into the model (e.g. via birth over time) replenishing the susceptible compartment.[52] While dynamic transmission models are often open, they can be closed. Similarly, individual-based microsimulation models may or may not be open.
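The practical difference can be sketched with hypothetical rates: a closed model follows a single cohort as it depletes, while an open model adds new entrants (e.g. births) each year, replenishing the modelled population:

```python
def population_over_time(years, start_pop, death_rate, births_per_year=0):
    """Illustrative cohort bookkeeping; births_per_year = 0 gives a closed model."""
    pops, pop = [], float(start_pop)
    for _ in range(years):
        pop = pop * (1 - death_rate) + births_per_year
        pops.append(pop)
    return pops

closed = population_over_time(50, 10_000, 0.02)                      # cohort shrinks
open_ = population_over_time(50, 10_000, 0.02, births_per_year=200)  # replenished
print(closed[-1], open_[-1])
```

With these illustrative numbers the open model happens to start at its steady state (200 births per year against a 2% death rate), so the population holds constant while the closed cohort declines.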

Individual-based microsimulation models that are open but static are often referred to as ‘population-based models’, and have recently been used in CEAs of cancer prevention programmes.[106–109] An open-model population-based approach can be very useful in assessing past trends and forecasting future trends in chronic non-communicable diseases.[110] Open models can be more data intensive (if one wants to account for parameters that change over time, such as secular trends) and can be more computationally intensive, depending on the complexity of the underlying structure.

Since outputs of an open model generally vary by both calendar year and age, this is the appropriate modelling approach if one is trying to directly compare the results of a dynamic transmission model and a static microsimulation model, or if one is using output (e.g. incidence over time) from a dynamic transmission model to parameterize a static microsimulation model.

2.6 Linear versus Non-Linear

Although ‘linear’ is often used to describe something with a straight-line feature, it is not straightforward to distinguish between a linear model and a non-linear one. In general, a model is defined as linear if all of its functions are linear (that is, functions are represented by linear equations) and is defined as non-linear otherwise.[111] In mathematics, a ‘linear equation’ refers to an equation whose terms are a constant or the product of a constant and a variable, and whose highest-degree term is of the first degree (e.g. y = a × x + b, where a and b are constants and y and x are variables). This kind of equation is represented by a straight line. Meanwhile, in statistics and econometrics, a linear function is expressed as Y = Xβ + u, where Y is the n × 1 vector of an observable dependent variable, X is the n × k matrix of observed variables, β is the k × 1 vector of parameters and u is an n × 1 vector of unobserved errors.[112] That is, even when the Y and X variables are obtained using a non-linear relationship among data, if the function is linear in the parameters to be estimated, the function is linear (e.g. although the equation Y = β₀ + β₁X + β₂X² is a polynomial function and does not describe a straight line, this function is considered linear).[112] Mathematical models used for CEA rarely describe considerations of linearity.
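The statistical sense of ‘linear in the parameters’ can be demonstrated in a few lines (noise-free, made-up data): the quadratic below is a curve in X, yet ordinary least squares recovers its coefficients exactly because the model is linear in (β₀, β₁, β₂):

```python
import numpy as np

# Noise-free data generated from known parameters (1, 2, 3).
x = np.linspace(-2, 2, 21)
y = 1.0 + 2.0 * x + 3.0 * x**2

# Design matrix for Y = b0 + b1*X + b2*X^2: each column multiplies one parameter,
# so the model is linear in the parameters even though it is quadratic in X.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # recovers approximately [1, 2, 3]
```

A model such as y = exp(β₀ + β₁x), by contrast, cannot be written this way without transforming it, which is what makes it non-linear in the parameters.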

2.7 Classification of Model Types

It should be noted that, of the six dimensions addressed in this section, the first three (static/dynamic, deterministic/stochastic and aggregate/individual based) are considered particularly relevant in that they are more fundamental in characterizing model structures and they often appear in descriptions of CEA models in the literature. Meanwhile, the other three dimensions are more related to the question of how analysts measure or compute model outputs given a model structure, and their applications to CEA models are not often described explicitly. Accordingly, we combine elements of each of the first three dimensions, yielding a total of eight (= 2 × 2 × 2) possible model types. However, although not impossible, the concepts of ‘individual based’ and ‘deterministic’ are often incompatible in the modelling context. Therefore, we considered the two resulting categories (individual-based deterministic static and dynamic models) to be unrealistic, and excluded them. We thus present a framework that classifies model types into six categories (see figure 2). Certain of these categories can then be further stratified based on consideration of two additional dimensions: open versus closed and discrete versus continuous.

Fig. 2 Classification of (mathematical) model types used in health economic evaluation. Using classification along several dimensions, this figure presents general types of models that can be used in projecting the health and economic consequences of vaccination programmes. a Most individual-based models are stochastic. ODE = ordinary differential equations; PDE = partial differential equations.

Also of note, others have proposed alternative options for categorizing model types, many of which are more comprehensive in scope.[97,98] For example, Brennan et al.[97] propose 14 types of model structures in their taxonomy of model structures for health technology, by applying the dimensions of time and the Markovian property in addition to the three dimensions of our choice. Likewise, applying further dimensions to the three we chose yields more finely divided model types. For example, overlaying the dimension of openness on the three key dimensions would dichotomize each of the current six categories into open and closed subtypes, yielding a total of 12 (= 2 × 6) model types. Similarly, by superimposing the dimension of time, we can have two subtypes of model (e.g. discrete-time DES and continuous-time DES) for most model categories.[97] Analogously, overlaying both the openness and time dimensions on the existing categories at the same time would yield still more finely divided categories, except where the dimensions are incompatible. Furthermore, it is also possible to have a model that is a hybrid between two categories with compatible and supplementary features. For example, Coudeville et al.[113] combined the notions of a transmission dynamics model (type 2) and decision tree (type 1) to evaluate the cost effectiveness of multiple strategies of varicella vaccination in Italy. However, in this paper, we purposefully elected to keep our categories as simple as possible, partly for ease of communication with non-modellers and between modellers of different backgrounds.

For each type of model, examples of applications and its advantages and disadvantages are presented in table I.

Table I Examples of applications, advantages and disadvantages of each type of model

3. Considerations of Uncertainty

3.1 General Sources of Uncertainty

A comprehensive economic evaluation should consider many different kinds of uncertainty, and most importantly, should clearly relay to the target audience the impact of those uncertainties on the policy conclusions. In addition to uncertainty related to model parameters, there are analytic choices of model type and model structure, and a number of possible perspectives from which to conduct CEA.[4,47] Although important, it is beyond the scope of this paper to comprehensively address uncertainty in CEA modelling results related specifically to cost-effectiveness methodology (e.g. choice of perspective, analytic and time horizon, costing methods, choice of baseline comparator, discount rate); these topics are discussed elsewhere, both generally[3,4,10–15,35–38,114] and in the context of specific CEAs for vaccine programmes.[49,77,80,81,83,84,86]

3.2 Parameter Uncertainty

Manning et al.[115] define parameter uncertainty as “uncertainty about the true numerical values of the parameters used as inputs,” and argue that such uncertainty can be introduced in the following cases: the values of key inputs are unknown or unobservable (e.g. the price of future vaccines); there is no consensus about the appropriate level of the parameter value (e.g. discount rate); uncertainty remains about the key components of the process (e.g. disease epidemiology); there is sampling variability of parameters (e.g. individual differences in response rates to treatment); and it is unclear whether the estimates can be interpolated or extrapolated. Until the last several years, the overriding focus was on using sensitivity analysis to assess how changes in model parameters, when varied over a plausible range, lead to changes in the ICER.[115–118]

Sensitivity analyses range from simple deterministic univariate analyses to probabilistic multivariate analyses.[60,100,119] In a probabilistic sensitivity analysis, key parameters are assigned distributional forms and, at each run, samples are randomly drawn from the distributions to generate a distribution of cost-effectiveness ratios.[117] Probabilistic sensitivity analysis can be performed on a Markov model analysed either as a cohort simulation or as a first-order Monte Carlo simulation.[60] Much has been written in the last few years about both conducting probabilistic sensitivity analysis and summarizing the results in the form of cost-effectiveness acceptability curves.[120–125] The acceptability curve was developed to overcome a limitation of the ICER as a summary statistic for presenting probabilistic sensitivity analysis results: the confidence interval of the ICERs can have negative bounds, which are not straightforward to interpret.[121,122] Cost-effectiveness acceptability curves show the probability that an intervention will be cost effective as the threshold cost-effectiveness ratio is varied.[121,122]
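To make the mechanics concrete, the following minimal Python sketch illustrates a probabilistic sensitivity analysis and the construction of a cost-effectiveness acceptability curve for a hypothetical vaccine. Every distribution and parameter value here is invented for illustration and corresponds to no actual study; the decision rule used is the standard net-monetary-benefit criterion.

```python
import random

random.seed(1)

def simulate_psa(n_draws=5000):
    """One hypothetical PSA: draw uncertain inputs from assumed distributions
    and return (incremental cost, incremental QALYs) pairs per draw."""
    draws = []
    for _ in range(n_draws):
        # Illustrative distributional assumptions -- not from any actual vaccine study.
        efficacy = min(max(random.gauss(0.85, 0.05), 0.0), 1.0)   # vaccine efficacy
        cost_per_dose = random.gauss(50.0, 10.0)                   # vaccine cost ($)
        risk = random.betavariate(2, 98)                           # lifetime disease risk
        cost_per_case = random.gauss(2000.0, 400.0)                # treatment cost ($)
        qaly_loss_per_case = max(random.gauss(0.5, 0.1), 0.0)      # QALYs lost per case
        cases_averted = risk * efficacy
        delta_cost = cost_per_dose - cases_averted * cost_per_case
        delta_qaly = cases_averted * qaly_loss_per_case
        draws.append((delta_cost, delta_qaly))
    return draws

def acceptability_curve(draws, thresholds):
    """P(intervention is cost effective) at each willingness-to-pay threshold,
    judged by positive incremental net monetary benefit: NMB = wtp*dE - dC."""
    curve = []
    for wtp in thresholds:
        n_ce = sum(1 for dc, de in draws if wtp * de - dc > 0)
        curve.append(n_ce / len(draws))
    return curve

draws = simulate_psa()
thresholds = [0, 10000, 20000, 50000, 100000]
curve = acceptability_curve(draws, thresholds)
for wtp, p in zip(thresholds, curve):
    print(f"WTP ${wtp:>6}: P(cost effective) = {p:.2f}")
```

Plotting `curve` against `thresholds` yields the acceptability curve described above; because the incremental effect is non-negative in this toy example, the curve is non-decreasing in the threshold.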

An important area that is beyond the scope of this review is the growing field of calibration methods and how these relate to parameter uncertainty. Model calibration is increasingly being used, in part, because as models become more complex the number of unobserved parameters quickly increases, and methods to estimate these model inputs are needed. These methods are discussed elsewhere.[47,49,50,101,126–128]

3.3 Model Uncertainty

Historically, less explicit attention has been paid to how the choice of model structure and type affects the approach to addressing parameter uncertainty. For example, different approaches would be appropriate for a simple decision tree assessing basic questions about rotavirus vaccination versus a stochastic microsimulation model empirically calibrated to data using likelihood-based methods. Conducting probabilistic sensitivity analysis is somewhat more complicated and labour intensive when using an individual-based stochastic simulation model (e.g. a first-order Monte Carlo model) than a static deterministic cohort model.[100] In general, the more complex the analysis, the greater the challenge to achieve transparency and relay information about the impact of uncertainty succinctly to decision makers.[125,129]

Manning et al.[115] subdivide modelling uncertainty into ‘model-structure uncertainty’ and ‘modelling-process uncertainty’. Model-structure uncertainty is the uncertainty related to the mathematical forms by which parameters are combined to estimate costs, effects and/or combinations of costs and effects. Modelling-process uncertainty is the uncertainty arising from the combination of decisions made through the entire process of the analysis. Suggestions have been made to vary structure to assess the stability of results, or examine analyses by multiple analysts, although very little formal guidance exists on how best to do this.

4. Model Evaluation

Criteria for assessing the quality of models have been suggested,[44,47,56,130] and many commentators emphasize the importance of assessing model structure, the choice of model type and model validation.[56,70,75,76,101] Weinstein et al.[56] have suggested that, in addition to building a model that is transparent, with clearly stated assumptions and data sources, analysts should work towards demonstrating a model’s internal and external consistency, projective validity and convergent validity. Internal consistency is often ascertained by simple exercises ensuring that model output is consistent with the assumptions and data used to parameterize the model. External consistency, or verification of a model, involves demonstrating that the output of the model is consistent with known facts about the disease and with epidemiological data. Assessment of projective validity entails demonstrating that the output of a model is consistent with observations that are independent of the data used as model inputs.[56] Convergent validity, or model corroboration, involves comparing model results with those from different models that address the same problem based on similar assumptions. Ultimately, models should be compared with empirical data that become available after the model is built.

5. Literature Search

To collect data on the number and types of models used in the CEAs of vaccines, we performed a systematic search using three databases: MEDLINE, EMBASE and NHS EED. For a comprehensive search (e.g. for MEDLINE), we developed a general search strategy for vaccines, combining a text-string search in titles using [‘vaccine name’[TI] AND (‘vaccin*’[TI] OR ‘immuniz*’[TI] OR ‘immunis*’[TI]) AND (‘cost*’[TI] OR ‘economic*’[TI] OR ‘evaluat*’[TI])] and a Medical Subject Headings (MeSH) term search using [(‘a MeSH term corresponding to the disease caused by an infectious pathogen’ OR ‘a MeSH term indicating the generic vaccine name’) AND (‘cost-benefit analysis’[MeSH] OR ‘decision support techniques’[MeSH]) AND (‘immunization’[MeSH] OR ‘immunization programs’[MeSH])]. We then applied this strategy to each of the currently available vaccines listed in table II. We restricted our search to English-language articles published between 1976 and May 2007. We further restricted our search by including a MeSH term for human subjects and specifying the publication types to be included (see the appendix for an example using varicella vaccines). We applied similar search strategies to the other two databases, EMBASE and NHS EED. We also examined the reference lists of several systematic reviews of vaccine CEAs.

Table II
figure Tab2

Published cost-effectiveness analyses (CEAs) of currently available vaccines (1976–May 2007)

The initial searches retrieved >3000 articles. We finally identified 276 CEAs of vaccines (see the supplementary material [‘ArticlePlus’] at http://pharmacoeconomics.adisonline.com for a complete reference list) by screening the abstracts (and the main texts if the abstracts did not explicitly reveal the form of economic evaluation) of the initially retrieved articles and applying the following inclusion/exclusion criteria: (i) we included studies on currently available monovalent or combined vaccines only, excluding any hypothetical vaccines under development (e.g. vaccines for malaria, AIDS and dengue fever); (ii) we included only CEAs (including CUAs), excluding CBAs and partial economic evaluations such as cost analyses or outcome description studies, which estimate either effects or costs only, although we did include studies in which both CEA and CBA were performed, reviewing the CEA aspects of such studies; and (iii) we included original research only, excluding reviews, commentaries and editorials.

In order to provide a general overview of the CEAs of vaccines, we reviewed the final set of 276 articles, extracting data on the following elements for each vaccine: (i) the type of vaccine; (ii) study setting (i.e. country or region of study); and (iii) the year of publication. Next, in order to examine the trend and evolution of model types used in evaluating the cost effectiveness of vaccines, we specifically focused on the following elements: (i) model types; (ii) types of health outcome measures; and (iii) approaches for uncertainty analysis.

6. Findings of the Systematic Review

6.1 General Overview of CEAs of Currently Available Vaccines

6.1.1 Distribution of the Number of Published CEAs by Vaccine

Table II presents a list of the currently available monovalent and combined vaccines and the number of published CEAs for each vaccine by study setting. The distribution of the number of published CEAs is extremely disproportionate; for example, of the 276 studies identified, 157 CEAs (~57%) were performed for five vaccines (varicella, influenza, HAV, HBV and pneumococcal), while few or no CEAs have been conducted for monovalent mumps, tick-borne encephalitis and monovalent diphtheria vaccines, or for many of the combined vaccines. Pneumococcal vaccines had the highest number of CEAs (41).

6.1.2 Distribution of the Number of Published Vaccine CEAs by Study Setting

Table II also shows that the distribution of study settings is disproportionate. Of the 276 studies in total, only 43 (16%, adjusted for studies that evaluated multiple vaccines) were conducted in low- to middle-income country settings, despite the fact that populations in these settings may benefit the most from the use of vaccines.

6.1.3 Trend in the Growth of Published Vaccine CEAs

Figure 3 shows that the number of published CEAs of vaccines has rapidly increased since the mid 1990s. Given the highly disproportionate distribution of the numbers of published CEAs among vaccines, this trend may reflect that most of the high-CEA-volume vaccines have been approved only since the mid 1990s. However, as stated previously, the trend may also reflect that the demand for cost-effectiveness data for recently approved vaccines has increased, affected by the general trend that more policy makers are accepting CEA as an aid to their decision making.

Fig. 3
figure 3

Growth in the number of cost-effectiveness analyses (CEAs) of vaccination programmes published by year from 1976 through May 2007. A total of 276 articles were found. The number for 2007 is the count between January and May 2007.

6.2 Model Types and Related Features

6.2.1 Distribution of Model Types

The model type used for the CEAs of vaccines varied from study to study. Figure 4a presents the overall distribution of model categories. Of the 276 studies, 18 CEAs (∼7%) were performed based on clinical trials or observational studies (‘empirical’ category); 14 studies (∼5%) fell into an ‘other’ category, estimating health outcomes using mathematical functions, statistical models (linear or logistic regressions) or simple calculations; and 36 studies (13%) did not report model types, or described their models vaguely. Of the remaining 208 CEAs, most (∼89%) used static models. Twenty-three studies developed dynamic transmission models for eight vaccines:¹ one model for measles,A18 two for polio,A245,A246 seven for varicella,A14,A24,A25,A42,A43,A131,A135 three for HBV,A67,A71,A270 two for HAV,A57,A139 two for HPV,A63,A243 four for pertussisA61,A103,A104,A240 and two for meningitis.A51,A250

Fig. 4
figure 4

Selected characteristics of 276 cost-effectiveness analyses of currently available vaccines published between 1976 and May 2007. (a) Distribution of model types; (b) distribution of types of major health outcome measures reported in each study. DALY = disability-adjusted life-year; LYG = life-year gained.

When classified according to our suggested categories of model types, most studies fall into type 1 (184 of 208 studies; 88.5%). The numbers of studies belonging to types 2 and 3 were 23 (11.0%) and one (0.4%), respectively. No studies included models that fell into types 4, 5 or 6. Thus, all of the CEAs used aggregate-level models; no individual-level microsimulation models were found. Of the 185 CEAs judged to be static models (summing types 1 and 3), in 154 studies (83%) the model type was explicitly stated or was evident from the written description, i.e. the model structures were decision trees, Markov models or hybrid types (decision trees including Markov models). The remaining studies described their model types rather vaguely, for example as a ‘cohort model’, ‘cost-effectiveness model’, ‘Excel model’, ‘simulation model’, ‘epidemiological model’ or ‘economic model’, but, based on the details provided (e.g. parameter requirements, actual model inputs, types of model output and format of results), were consistent with static models. Others appeared to estimate effectiveness using models that implicitly followed the structure of a state-transition model.

Our findings demonstrated the evolution of model types over time. For the period 1976–90, all vaccine CEA models fell into the categories of ‘type 1’ or ‘other’. The majority of relatively sophisticated models that belong to types 2 or 3 were presented in CEAs published during the past 5 years.

6.2.2 Distribution of Health Outcome Measures

The types of health outcomes measured with the models were more varied than the model types. Health outcome measures included QALYs gained, life-years gained, DALYs averted, deaths averted (or lives saved), cases averted, illness days averted and hospitalizations averted, among others. Figure 4b presents the distribution of main health outcome measures (note that some studies included more than one health outcome measure; when multiple measures were reported, we categorized as the main measure the one closest to a fatality outcome or adjusted for quality of life). Overall, ‘cases prevented’ (29%) was the most frequently used health outcome measure, followed by QALYs gained (26%).
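For readers less familiar with the utility-weighted measures, the two most common forms can be written generically; these are the standard textbook definitions, not formulas drawn from the reviewed studies:

```latex
\text{QALYs} \;=\; \sum_{t} u_t \,\Delta t , \qquad 0 \le u_t \le 1
\quad \text{(health-state utility weights applied to time lived)}

\text{DALYs} \;=\; \text{YLL} + \text{YLD}
\quad \text{(years of life lost plus years lived with disability)}
```

QALYs gained and DALYs averted thus both weight survival time by health-related quality, but from opposite directions: utilities accrued versus losses incurred.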

Our findings showed a time trend in the distribution of main effectiveness measures. For example, for the period 1976–90, 69% of CEAs measured effectiveness in terms of the number of ‘cases prevented’, while only 14% of studies used ‘cases prevented’ as the main health outcome measure over the past 5 years. Over the past 5 years, the most commonly used health outcome measure was DALYs averted in low- to middle-income country settings (11 of 25 CEAs) and QALYs gained in high-income country settings (46 of 110 CEAs); regardless of the study setting, the most commonly used measure in this period was QALYs gained.

6.2.3 Uncertainty Analysis

A majority of the studies (~81%) either explored parameter uncertainty by conducting deterministic univariate (or, in a small number of studies, multivariate) sensitivity analysis alone or did not report any sensitivity analysis results. The number of studies that performed probabilistic sensitivity analysis was relatively small, although such analyses have become more frequent in recent years. For example, of the 53 studies reporting probabilistic sensitivity analysis results, approximately 85% were published over the past 5 years. Fewer than 5% of studies discussed model uncertainty, and no study formally and explicitly examined model uncertainty using structured methods (e.g. computing final outcome measures such as cost-effectiveness ratios and exploring the magnitude of differences qualitatively or quantitatively under alternative model structures).[115]

7. Discussion

While there are a growing number of model-based analyses that assess the cost effectiveness of vaccination programmes, the distribution of studies is highly disproportionate, particularly in terms of study setting and vaccine of interest. Nearly 83% of studies have focused on developed countries, and over half of the analyses have addressed one of the five most studied vaccines, namely the varicella, influenza, HAV, HBV and pneumococcal vaccines. In addition, apparent time trends are observed in the distribution of the number of CEAs published, the main health outcome measures and the model types. The majority of the relatively sophisticated models that belong to types 2 or 3 were published over the past 5 years. However, it should be noted that these findings are subject to various types of bias, such as publication, English-language and local literature bias, as commonly discussed in the systematic review literature.[131,132]

Modelling vaccination programmes might be expected to involve more complexity than modelling health interventions for chronic diseases, mainly because of the complex transmission dynamics involving interactions between individuals observed in many infectious diseases.[47,52] Accordingly, in theory, the model types for projecting the cost effectiveness of vaccination programmes might be expected to be more diverse than those for interventions targeting non-communicable diseases. Our findings show that this has not been the case thus far. For example, Cooper et al.[133] reviewed 61 modelling studies of coronary heart disease treatment (not restricted to CEAs) and found that only three subtypes of model structure were used: state-transition models including Markov models (32 studies), decision trees (24 studies) and DES models (five studies). According to our suggested model type classification system, these models correspond to types 1 and 6. However, while the decision-analytic models used in CEAs of vaccines fall into three categories (types 1, 2 and 3), approximately 89% of vaccine models would be classified as type 1.

In nearly 40% of studies, it was not straightforward for a reader to quickly ascertain the model type and techniques used. For example, vague model descriptions often made it difficult to further divide the type 1 models into their three subtypes (decision tree, state-transition model including a Markov model, and hybrid). Furthermore, even when the model types were explicitly stated or the model features were described in detail, the terminology used by analysts differed widely between studies. At a minimum, all analyses should be accompanied by a clear description of the model type and a justification for its choice. While our suggested framework is only one possibility, we urge modellers to at least provide an explicit statement about each of the following four major model attributes and, if relevant, about the two additional attributes of time and linearity.

Is the model:

  • a dynamic transmission model or a static model, i.e. does the model consider interaction between groups in the population, and can it reflect herd immunity effects?

  • individual based or aggregate (population average), i.e. are individuals assigned to compartments and moved between them based on averages, or does the model track each individual’s behaviour and attributes?

  • deterministic or stochastic, i.e. do events in the model occur in a pre-specified way (deterministic) or by chance (stochastic)?

  • open or closed, i.e. does the model follow a single group of people over time with no new individuals entering, or does it allow new individuals to enter and exit over time?
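The first attribute can be summarized by how the force of infection λ (the per-susceptible rate of infection) is treated. The following is a generic sketch using standard compartmental-model notation (β = transmission coefficient, I(t) = number infectious, N = population size), not a formula from any specific reviewed study:

```latex
\underbrace{\lambda(t) \;=\; \lambda_0}_{\text{static model: fixed input}}
\qquad \text{versus} \qquad
\underbrace{\lambda(t) \;=\; \beta\,\frac{I(t)}{N}}_{\text{dynamic model: depends on prevalence}}
```

In a dynamic model, vaccination lowers I(t), which in turn lowers λ(t) for the unvaccinated; this feedback is the herd immunity effect that static models cannot represent.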

The findings of our review imply that, at least for certain types of vaccines, the choice of model structure in evaluating the cost effectiveness of vaccination programmes might be improved. For example, although it is well known that vaccination programmes can, in general, have both direct and indirect effects, and that only dynamic models can capture the indirect effects, only 23 CEAs (8%) included in our review developed dynamic models. Although herd immunity may be negligible for some vaccines (e.g. rotavirus), given the disproportionately small number of dynamic models, a number of studies may be underestimating vaccine benefits. Extending this idea, in some cases factors beyond herd immunity may strongly influence the cost-effectiveness results, and these are also not being given adequate attention. While the lack of individual-based stochastic models is likely attributable, in part, to the time investment required to develop such a model, the increased complexity of evaluating model performance and the associated computational burden, this type of model may be an appropriate choice for certain CEAs of vaccines.

For example, Goldie et al.[126] recently developed an individual-based stochastic model to assess the impact and cost effectiveness of HPV 16,18 vaccination in countries with ongoing screening programmes. In part, this choice was made to permit the tracking of individual behaviour and to allow individual attributes (e.g. race, age, vaccination status) and individual history (e.g. prior clinical events) to affect future screening behaviour, prognosis and outcomes.[49,50] Furthermore, this model reflects multiple HPV types, both vaccine-targeted and non-targeted, allowing for exploration of the potential increase in the proportion of disease expected from HPV types not targeted by the vaccine.[92,126,128,134–136] Kim et al.[134] then developed a companion dynamic transmission model of HPV that can appropriately represent herd immunity effects, but this model reflects only vaccine-targeted HPV types 16 and 18, and cannot be used to project the range of disease outcomes attributable to all HPV types under different vaccination strategies.[33,92,128]

Adopting a hybrid approach, estimates of HPV 16, 18 incidence can be generated for different vaccination scenarios over time with the dynamic transmission model, and used as an input to the individual-based stochastic model, which may be run as an open or closed model. This ‘linkage’ allows one to capture herd immunity effects, explore potential uncertainties such as type replacement, include outcomes for all HPV type-attributable disease, and assess type-specific vaccination in the context of complex screening strategies. The dynamic transmission model can also be used as an independent model to project outcomes of different vaccination strategies in boys and girls, assess the cost effectiveness of vaccinating boys, and compare different age-based catch-up programmes. Other advantages of a hybrid approach are that features that add complexity can be more easily activated or de-activated, and projected results can be compared using two independently structured models, which can greatly enhance evaluation of the impact of model structure on cost-effectiveness results. Although, theoretically, an ideal solution might be to develop an individual-based stochastic model with interaction, considerations such as development time, available data (e.g. transmission parameters for other HPV types), desire to apply the model to many countries with different amounts of data, computational intensity and need for timely results all made a hybrid model approach a reasonable choice in this instance.
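As a purely illustrative sketch of this ‘linkage’ idea (not the actual Goldie/Kim HPV models, whose structure and parameters are far richer), the following toy Python code runs a deterministic dynamic transmission model under a vaccination scenario and feeds its projected year-specific infection risks into a simple stochastic individual-based simulation. Every parameter value is invented.

```python
import random

random.seed(7)

def dynamic_incidence(coverage, years=20):
    """Toy deterministic SIR-type model. A campaign vaccinates a fraction
    `coverage` of the population at t=0 and of each birth cohort thereafter.
    Returns the projected per-person annual infection risk for each year.
    All parameter values are illustrative only."""
    beta, recovery, births = 0.4, 0.2, 0.02
    s = 0.6 * (1.0 - coverage)   # susceptible fraction after the campaign
    i = 0.01                     # infectious fraction
    risks = []
    for _ in range(years):
        new_inf = min(beta * s * i, s)           # new infections this year
        s += births * (1.0 - coverage) - new_inf
        i += new_inf - recovery * i
        risks.append(beta * i)                   # force of infection ~ prevalence
    return risks

def microsimulation(risks, n_people=2000):
    """Toy stochastic individual-based model driven by the dynamic model's
    output: each simulated person faces the year-specific risk until first
    infection. (Individual attributes, screening histories and costs, which
    motivate this model type in practice, are omitted for brevity.)"""
    cases = 0
    for _ in range(n_people):
        for annual_risk in risks:
            if random.random() < annual_risk:
                cases += 1
                break
    return cases

cases_no_vaccination = microsimulation(dynamic_incidence(coverage=0.0))
cases_with_vaccination = microsimulation(dynamic_incidence(coverage=0.9))
print(cases_no_vaccination, cases_with_vaccination)
```

The key design point is the one-way interface: the dynamic model captures the prevalence feedback (herd immunity) and exports a risk trajectory, while the individual-based model consumes that trajectory and could, in a full analysis, layer on individual-level heterogeneity, screening strategies and costs.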

In the present review, we also found that assessment of model uncertainty was generally not addressed, or at least not explicitly. Relatively few studies justified the choice of model structure, and few assessed model uncertainty formally by conducting sensitivity analyses that vary model structure (i.e. comparing outputs from different model structures based on each model’s specific sets of assumptions and input parameter values). Some studies compared their results with those of other studies using a different model structure, but assumptions and input data often differed, making it difficult to determine whether inconsistencies in results were due to the model types. Furthermore, only a few studies attempted to evaluate their models formally, with explicit mention of model verification, model corroboration or assessment of projective validity.

Weinstein[101] emphasizes that “model assumptions regarding causal structure and parameter estimates should be continually assessed against data, and models should be revised accordingly.” It is the responsibility of the analyst to conduct the assessment of model performance more comprehensively, to be more transparent about the conditional relationship between inputs and outputs of models, to illustrate the rationale behind model structure choice, and to present a reasonable representation of how results would differ using an alternative structure. For example, Hammerschmidt et al.[137] present their efforts to validate a model for evaluating varicella vaccination strategies in Germany, based on multiple criteria.

Our review has several limitations. First, we restricted our review to CEAs. CEA is only one type of economic evaluation, although it is increasingly the predominant choice, accounting for more than half of published economic evaluation studies. When extended to all economic evaluations of vaccines, the results may show different distributions of model types and other related characteristics. Second, our search strategy did not include all possible databases; thus, we might have missed some CEAs of vaccines that conform to our inclusion criteria but are indexed in other databases.[132] Third, we did not include any future vaccines under development, such as HIV/AIDS, tuberculosis and malaria vaccines. Fourth, we did not include the grey literature, which is defined as “a range of published and unpublished material which is not normally identifiable through conventional methods of bibliographic control.”[27] According to Batt et al.,[27] the grey literature on immunization covers more recent interventions and better represents low- to middle-income countries in West Africa and the Middle East. Fifth, we did not attempt to formally assess the performance and validity of the models in each study; our review was based solely on the information we were able to ascertain from the manuscripts. Finally, we did not address the important issue of communicating cost-effectiveness results, and more specifically the modelling approach and techniques, to decision makers. We believe the first step is for analysts to be more transparent with each other, beginning with more complete descriptions of the choices around modelling approaches as well as enhanced methods of assessing model performance and validity. This will greatly facilitate the development of effective communication tools to describe these issues to different stakeholders.

Economic studies of vaccines, including vaccine development and delivery issues, are conducted with the general intention of informing policy recommendations and programmatic decisions at an international, national and sub-national level.[138] Although the process by which results of CEAs play a role in shaping vaccination policy is not well understood,[139] stakeholders faced with decisions about whether to introduce a new and costly vaccine are requesting information on its comparative benefits, costs and cost effectiveness compared with alternative uses of resources.[1,2] For decision makers responsible for priority setting, this information must be based on analyses that are comparable, in that they use standard methodology, are of equivalent rigour and are transparent and clear in communication of methods and results.[85]

8. Conclusions

Our results from this focused review emphasize the need to improve modelling methods for CEAs for vaccination programmes, specifically model choice, construction, assessment and validation. Many of the CEAs identified did not adequately describe the modelling approach or explicitly provide the rationale for model choice. The model choice was often not optimal given the issues surrounding vaccination, such as the potential for effects arising from herd immunity. Inadequate attention was given to assessment of uncertainty, particularly model uncertainty.

We propose a framework in which models are classified according to three main attributes: static/dynamic, stochastic/deterministic and aggregate/individual based. Models can then be further classified within these categories. Beyond this, models should be identified as open or closed and, where relevant, the treatment of time (discrete/continuous) and any (non)linearity should be specified.

As interest in conducting economic analyses of vaccines is shared by analysts with different disciplinary backgrounds, a common framework would be helpful to engage in dialogue and debate about optimal modelling approaches, inevitable tradeoffs between complexity and simplicity, and the need for innovative approaches to represent complex systems. While a prescriptive approach is unlikely to be successful given the nature of the many factors that should be considered in model choice, adhering to guidelines for good practice will enhance valid comparability of results. Increased transparency regarding analytic choices and improved assessment of model performance and validation will go far in moving the field forward.