Assessing the cost-effectiveness of a new health intervention often requires modelling to estimate the impact of the intervention on costs, survival and quality of life over the lifetime of a cohort of patients. Markov modelling is commonly employed to estimate these long-term costs and benefits. As commonly used, these models assume that patients continue to receive their initially assigned treatments regardless of changes in health state. In this paper, we describe an extension to the Markov modelling approach, called Markov decision modelling. Such a model starts with a set of health states and treatments and optimally assigns treatments to each of the health states. A Markov decision model can therefore be used to identify the optimal treatment strategy not just for the initial disease state, but also as the disease state changes over time. We present a dynamic programming approach to identifying the optimal assignment of treatments, and illustrate this methodology with an example. The Markov decision modelling approach provides an efficient way of identifying an optimal assignment of treatments to health states but, like the standard Markov model, may be of limited use when probabilities of future events depend on past history in a complex fashion. Even with these limitations, Markov decision models offer health economists an opportunity to inform healthcare decision-makers on how to modify current treatment pathways to incorporate new treatments as they become available.
Optimal assignment of treatments to health states using a Markov decision model: An introduction to basic concepts
Bala, M., & Mauskopf, J. (2006). Optimal assignment of treatments to health states using a Markov decision model: An introduction to basic concepts. PharmacoEconomics, 24(4), 345-354.
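The dynamic programming approach described in the abstract can be sketched as a small value-iteration example. The health states, treatments, transition probabilities, costs, utilities and willingness-to-pay threshold below are all hypothetical illustrations, not figures from the paper; the idea is only to show how a policy (a treatment for each health state) falls out of the optimisation.

```python
# Toy Markov decision model: three health states (Well, Sick, Dead) and two
# treatments (A, B). Value iteration finds, for each state, the treatment that
# maximises expected discounted net monetary benefit (WTP * QALYs - cost).
# All numbers are invented for illustration.

STATES = ["Well", "Sick", "Dead"]
TREATMENTS = ["A", "B"]

# P[t][s] = per-cycle transition probabilities from state s under treatment t,
# ordered as STATES. "Dead" is absorbing regardless of treatment.
P = {
    "A": {"Well": [0.85, 0.10, 0.05], "Sick": [0.10, 0.70, 0.20], "Dead": [0.0, 0.0, 1.0]},
    "B": {"Well": [0.90, 0.07, 0.03], "Sick": [0.20, 0.65, 0.15], "Dead": [0.0, 0.0, 1.0]},
}
COST = {"A": 1000.0, "B": 4000.0}            # per-cycle treatment cost
QALY = {"Well": 0.95, "Sick": 0.60, "Dead": 0.0}  # per-cycle utility weight
WTP = 50_000.0                                # willingness to pay per QALY
DISCOUNT = 0.97                               # per-cycle discount factor


def value_iteration(tol=1e-6):
    """Return (state values, optimal treatment per state)."""
    V = {s: 0.0 for s in STATES}
    while True:
        new_V, policy = {}, {}
        for s in STATES:
            if s == "Dead":                   # absorbing: no treatment, no value
                new_V[s], policy[s] = 0.0, None
                continue
            best_val, best_t = None, None
            for t in TREATMENTS:
                reward = WTP * QALY[s] - COST[t]   # net monetary benefit this cycle
                future = sum(p * V[s2] for p, s2 in zip(P[t][s], STATES))
                val = reward + DISCOUNT * future
                if best_val is None or val > best_val:
                    best_val, best_t = val, t
            new_V[s], policy[s] = best_val, best_t
        if max(abs(new_V[s] - V[s]) for s in STATES) < tol:
            return new_V, policy
        V = new_V


if __name__ == "__main__":
    values, policy = value_iteration()
    for s in STATES:
        print(f"{s}: treat with {policy[s]}, value {values[s]:,.0f}")
```

Unlike a standard Markov cohort model, which fixes one treatment for the whole cohort up front, the returned policy can assign different treatments to "Well" and "Sick", mirroring the paper's point that the optimal strategy may change as the disease state changes. The same machinery also illustrates the stated limitation: because the value function depends only on the current state, any dependence on past history must be folded into extra states.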