Strong Optimal Classification Trees

Sina Aghaei, Andrés Gómez and Phebe Vayanos
Additional contact information
Sina Aghaei: Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, California 90089
Andrés Gómez: Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, California 90089
Phebe Vayanos: Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, California 90089

Operations Research, 2025, vol. 73, issue 4, 2223-2241

Abstract: Decision trees are among the most popular machine learning models and are used routinely in applications ranging from revenue management and medicine to bioinformatics. In this paper, we consider the problem of learning optimal binary classification trees with univariate splits. Literature on the topic has burgeoned in recent years, motivated both by the empirical suboptimality of heuristic approaches and the tremendous improvements in mixed-integer optimization (MIO) technology. Yet, existing MIO-based approaches from the literature do not leverage the power of MIO to its full extent: they rely on weak formulations, resulting in slow convergence and large optimality gaps. To fill this gap in the literature, we propose an intuitive flow-based MIO formulation for learning optimal binary classification trees. Our formulation can accommodate side constraints to enable the design of interpretable and fair decision trees. Moreover, we show that our formulation has a stronger linear optimization relaxation than existing methods in the case of binary data. We exploit the decomposable structure of our formulation and max-flow/min-cut duality to derive a Benders’ decomposition method to speed up computation. We propose a tailored procedure for solving each decomposed subproblem that provably generates facets of the feasible set of the MIO as constraints to add to the main problem. We conduct extensive computational experiments on standard benchmark data sets on which we show that our proposed approaches are 29 times faster than state-of-the-art MIO-based techniques and improve out-of-sample performance by up to 8%.
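
Illustrative example: the sketch below (not the authors' code) shows the flow idea in its simplest form, a depth-1 tree on binary data, written in Python with PuLP and its bundled CBC solver. The toy dataset, variable names, and solver choice are assumptions made for illustration only. Each datapoint is a unit of flow routed from a source, through the single branching node, to a sink, and the flow reaches the sink only if the leaf it lands on predicts the point's true label; maximizing total flow therefore maximizes training accuracy.

import pulp

# Toy binary dataset: rows are datapoints, columns are binary features (illustrative).
X = [[0, 1], [1, 0], [1, 1], [0, 0]]
y = [0, 1, 1, 0]
n, p = len(X), len(X[0])
classes = sorted(set(y))
leaves = ["left", "right"]

m = pulp.LpProblem("depth1_flow_tree", pulp.LpMaximize)

# b[f] = 1 iff the root branches on feature f (x[f] = 0 goes left, x[f] = 1 goes right).
b = pulp.LpVariable.dicts("branch", range(p), cat="Binary")
# w[l][k] = 1 iff leaf l predicts class k.
w = pulp.LpVariable.dicts("predict", (leaves, classes), cat="Binary")
# Flow of datapoint i on the arcs root -> leaf and leaf -> sink (relaxed to [0, 1]).
z_rl = pulp.LpVariable.dicts("flow_root_leaf", (range(n), leaves), lowBound=0, upBound=1)
z_ls = pulp.LpVariable.dicts("flow_leaf_sink", (range(n), leaves), lowBound=0, upBound=1)

# Exactly one branching feature at the root and one predicted class per leaf.
m += pulp.lpSum(b[f] for f in range(p)) == 1
for l in leaves:
    m += pulp.lpSum(w[l][k] for k in classes) == 1

for i in range(n):
    # Arc capacities: point i may flow left only if the chosen feature equals 0
    # for i, and right only if it equals 1 (this couples branching decisions to data).
    m += z_rl[i]["left"] <= pulp.lpSum(b[f] for f in range(p) if X[i][f] == 0)
    m += z_rl[i]["right"] <= pulp.lpSum(b[f] for f in range(p) if X[i][f] == 1)
    for l in leaves:
        # Flow conservation at each leaf; the leaf -> sink arc is open only if
        # the leaf predicts i's true label.
        m += z_ls[i][l] == z_rl[i][l]
        m += z_ls[i][l] <= w[l][y[i]]
    # At most one unit of flow leaves the source for each datapoint.
    m += z_rl[i]["left"] + z_rl[i]["right"] <= 1

# Maximize the number of correctly classified (successfully routed) datapoints.
m += pulp.lpSum(z_ls[i][l] for i in range(n) for l in leaves)

m.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [f for f in range(p) if b[f].value() > 0.5][0]
print("branch on feature", chosen, "training accuracy", pulp.value(m.objective) / n)

The formulation proposed in the paper generalizes this picture to trees of arbitrary depth with side constraints (e.g., interpretability and fairness requirements) and, rather than solving the monolithic model directly, decomposes it by datapoint and applies Benders' decomposition, where each subproblem is a max-flow/min-cut computation whose cuts are facets of the feasible set.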

Keywords: Optimization; machine learning; optimal classification trees; mixed-integer optimization; Benders’ decomposition
Date: 2025
Downloads: http://dx.doi.org/10.1287/opre.2021.0034 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:73:y:2025:i:4:p:2223-2241

Handle: RePEc:inm:oropre:v:73:y:2025:i:4:p:2223-2241