Using entropy balancing to strengthen an observational cohort study design
Lessons learned from an evaluation of a complex multi-state federal demonstration

Parish, W. J., Keyes, V. S., Beadles, C., & Kandilov, A. M. G. (2018). Using entropy balancing to strengthen an observational cohort study design: Lessons learned from an evaluation of a complex multi-state federal demonstration. Health Services and Outcomes Research Methodology, 18(1), 17–46. https://doi.org/10.1007/s10742-017-0174-z

Abstract

We conducted an evaluation of a patient-centered medical home demonstration sponsored by the Centers for Medicare & Medicaid Services, using a quasi-experimental pre-post design with a comparison group. Traditional propensity score weighting failed to achieve balance (exchangeability) between the two groups on several critical characteristics. In response, we incorporated a relatively new alternative known as entropy balancing. Our objective is to share lessons learned from using entropy balancing in a quasi-experimental study design. We document the advantages and challenges of using entropy balancing, describe a set of best practices, and present a series of illustrative analyses that empirically demonstrate the performance of entropy balancing relative to traditional propensity score weighting. We compare the alternative approaches on: (i) covariate balance (e.g., standardized differences); (ii) overlap in conditional treatment probabilities; and (iii) the distribution of weights. Our comparison of overlap is based on a novel approach we developed that uses the entropy balancing weights to calculate a pseudo-propensity score. In many situations, entropy balancing provides substantially better covariate balance than traditional propensity score weighting methods. Entropy balancing is also preferred because it does not require extensive iterative manual searching for an optimal propensity score specification. However, we demonstrate that there are situations where entropy balancing "fails": in some instances it achieves adequate covariate balance only by using a distribution of weights that dramatically up-weights a small set of observations, giving them a disproportionately large and undesirable influence.