Slowly Varying Regression Under Sparsity
Dimitris Bertsimas,
Vassilis Digalakis,
Michael Lingzhi Li and
Omar Skali Lami
Additional contact information
Dimitris Bertsimas: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02142
Vassilis Digalakis: Department of Information Systems and Operations Management, HEC Paris, 78350 Jouy-en-Josas, France
Michael Lingzhi Li: Technology and Operations Management Unit, Harvard Business School, Boston, Massachusetts 02163
Omar Skali Lami: McKinsey & Company, Boston, Massachusetts 02210
Operations Research, 2025, vol. 73, issue 3, 1581-1597
Abstract:
We introduce the framework of slowly varying regression under sparsity, which allows sparse regression models to vary slowly and sparsely. We formulate the problem of parameter estimation as a mixed-integer optimization problem and demonstrate that it can be reformulated exactly as a binary convex optimization problem through a novel relaxation. The relaxation utilizes a new equality on Moore-Penrose inverses that convexifies the nonconvex objective function while coinciding with the original objective on all feasible binary points. This allows us to solve the problem significantly more efficiently and to provable optimality using a cutting plane–type algorithm. We develop a highly optimized implementation of such an algorithm, which substantially improves upon the asymptotic computational complexity of a straightforward implementation. We further develop a fast heuristic method that is guaranteed to produce a feasible solution and, as we empirically illustrate, generates high-quality warm-start solutions for the binary optimization problem. To tune the framework’s hyperparameters, we propose a practical procedure relying on binary search that, under certain assumptions, is guaranteed to recover the true model parameters. We show, on both synthetic and real-world data sets, that the resulting algorithm outperforms competing formulations in comparable times across a variety of metrics, including estimation accuracy, predictive power, and computational time, and is highly scalable, enabling us to train models with tens of thousands of parameters.
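To make the setup concrete, below is a minimal sketch of one way such an estimation problem can be posed, consistent with the abstract but not necessarily the authors' exact formulation: the notation (T periods, responses y_t, design matrices X_t, sparsity budget s, support-change budget d, coefficient-change bound δ, and ridge weight λ) is an illustrative assumption.

\begin{aligned}
\min_{\beta_t \in \mathbb{R}^p,\; z_t \in \{0,1\}^p} \quad & \sum_{t=1}^{T} \|y_t - X_t \beta_t\|_2^2 \;+\; \lambda \sum_{t=1}^{T} \|\beta_t\|_2^2 \\
\text{s.t.} \quad & \beta_{tj} = 0 \ \text{whenever}\ z_{tj} = 0, && \text{(support encoding)} \\
& \textstyle\sum_{j=1}^{p} z_{tj} \le s, \quad t = 1,\dots,T, && \text{(sparsity)} \\
& \textstyle\sum_{t=2}^{T} \sum_{j=1}^{p} |z_{tj} - z_{t-1,j}| \le d, && \text{(sparsely varying support)} \\
& \|\beta_t - \beta_{t-1}\|_2 \le \delta, \quad t = 2,\dots,T. && \text{(slowly varying coefficients)}
\end{aligned}

In a sketch of this form, once the binary support variables z are fixed, the inner problem in β is a convex ridge regression; optimizing the resulting value function over z is the kind of structure that admits the binary convex reformulation and cutting plane–type algorithm described in the abstract.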
Keywords: Optimization; slowly varying regression; sparse regression; mixed integer optimization; binary convex relaxation
Date: 2025
Downloads: http://dx.doi.org/10.1287/opre.2022.0330 (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:73:y:2025:i:3:p:1581-1597
More articles in Operations Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.