EconPapers
Learning Hidden Markov Models with Structured Transition Dynamics

Simin Ma, Amin Dehghanian, Gian-Gabriel Garcia and Nicoleta Serban

Additional contact information
All authors: H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332

INFORMS Journal on Computing, 2025, vol. 37, issue 3, 531-556

Abstract: The hidden Markov model (HMM) provides a natural framework for modeling the dynamic evolution of latent diseases. The unknown probability matrices of HMMs can be learned through the well-known Baum–Welch algorithm, a special case of the expectation-maximization algorithm. In many disease models, the probability matrices possess nontrivial properties that may be represented through a set of linear constraints. In these cases, the traditional Baum–Welch algorithm is no longer applicable because the maximization step cannot be solved by an explicit formula. In this paper, we propose a novel approach to efficiently solve the maximization step under linear constraints by providing a Lagrangian dual reformulation that we solve with an accelerated gradient method. The performance of this approach critically depends on devising a fast method to compute the gradient in each iteration. For this purpose, we employ dual decomposition and derive Karush–Kuhn–Tucker conditions to reduce our problem to a set of single-variable equations, which we solve using a simple bisection method. We apply this method to a case study on sports-related concussion and provide an extensive numerical study using simulation. We show that our approach is orders of magnitude faster and more accurate than alternative approaches. Moreover, compared with other methods, our approach is far less sensitive to increases in problem size. Overall, our contribution lies in accurately and efficiently handling HMM parameter estimation under linear constraints, which encompasses a wide range of applications in disease modeling and beyond.
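The core idea in the abstract (KKT conditions that reduce the constrained M-step to a single-variable equation solved by bisection) can be illustrated on a simplified, hypothetical instance: maximizing a weighted log-likelihood over one transition-matrix row subject to the simplex constraint and upper bounds. This is a minimal sketch, not the paper's formulation; the function name `mstep_row`, the upper bounds `u`, and the expected-count weights `c` are all illustrative assumptions.

```python
import numpy as np

def mstep_row(c, u, iters=100):
    """Sketch of a constrained M-step for one transition-matrix row.

    Maximize sum_j c_j * log(p_j) subject to sum_j p_j = 1 and 0 <= p_j <= u_j,
    where c_j > 0 are expected transition counts from the E-step (illustrative).

    Stationarity of the Lagrangian gives p_j(lam) = min(c_j / lam, u_j) for the
    multiplier lam of the simplex constraint; the sum of p_j(lam) is decreasing
    in lam, so the feasibility condition sum_j p_j(lam) = 1 is a single-variable
    equation solved here by bisection.
    """
    c = np.asarray(c, dtype=float)
    u = np.asarray(u, dtype=float)
    assert u.sum() >= 1.0, "upper bounds must admit a probability distribution"

    def p(lam):
        return np.minimum(c / lam, u)

    # Bracket the root: p(lo).sum() >= 1 (all bounds active) and p(hi).sum() <= 1.
    lo, hi = 1e-12, c.sum()
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if p(lam).sum() > 1.0:
            lo = lam
        else:
            hi = lam
    return p(0.5 * (lo + hi))
```

For example, with counts `c = [5, 3, 2]` and bound `u = [0.4, 1, 1]`, the unconstrained solution `[0.5, 0.3, 0.2]` violates the first bound, so that coordinate is clipped to 0.4 and the remaining mass is split proportionally, giving `[0.4, 0.36, 0.24]`. The paper's general linear constraints require the dual decomposition described in the abstract rather than this one-multiplier special case.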

Keywords: expectation-maximization algorithm; hidden Markov model; convex optimization; accelerated gradient method; statistical learning
Date: 2025

Downloads: (external link)
http://dx.doi.org/10.1287/ijoc.2022.0342 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:37:y:2025:i:3:p:531-556


More articles in INFORMS Journal on Computing from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

 
Page updated 2025-06-11
Handle: RePEc:inm:orijoc:v:37:y:2025:i:3:p:531-556