L0-Regularized Learning for High-Dimensional Additive Hazards Regression
Zemin Zheng, Jie Zhang and Yang Li
Additional contact information
Zemin Zheng, Jie Zhang and Yang Li: International Institute of Finance, The School of Management, University of Science and Technology of China, Hefei, Anhui 230026, P. R. China
INFORMS Journal on Computing, 2022, vol. 34, issue 5, 2762-2775
Abstract:
Sparse learning in high-dimensional survival analysis is of great practical importance, as exemplified by modern applications in credit risk analysis and high-throughput genomic data analysis. In this article, we consider L0-regularized learning for simultaneous variable selection and estimation under the framework of additive hazards models and utilize the idea of primal-dual active sets to develop an algorithm for solving this traditionally NP-hard optimization problem. Under interpretable conditions, comprehensive statistical properties, including model selection consistency, oracle inequalities under various estimation losses, and the oracle property, are established for the global optimizer of the proposed approach. Moreover, our theoretical analysis of the algorithmic solution reveals that the proposed L0-regularized learning can be more efficient than other regularization methods in that it requires a smaller sample size as well as a lower minimum signal strength to identify the significant features. The effectiveness of the proposed method is evidenced by simulation studies and real-data analysis. Summary of Contribution: Feature selection is a fundamental statistical learning technique in high dimensions and is routinely encountered in various areas, including operations research and computing. This paper focuses on L0-regularized learning for feature selection in high-dimensional additive hazards regression. The matching algorithm for solving the nonconvex L0-constrained problem is scalable and enjoys comprehensive theoretical properties.
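The abstract's primal-dual active set idea can be illustrated outside the additive hazards setting. Below is a minimal sketch of the generic primal-dual active set (PDAS) iteration for L0-constrained least squares, not the paper's method: the support size `k`, the function name `pdas_l0`, and the least-squares loss are all simplifying assumptions for illustration. The iteration alternates between refitting on the current active set (primal step) and ranking coordinates by the magnitude of primal plus dual variables (support update).

```python
import numpy as np

def pdas_l0(X, y, k, max_iter=50):
    """Illustrative primal-dual active set iteration for L0-constrained
    least squares (a stand-in for the survival-model loss in the paper).
    Keeps the k coordinates with the largest |beta_j + d_j|, where d is
    the scaled gradient (dual variable), then refits on that set."""
    n, p = X.shape
    beta = np.zeros(p)
    # Initial dual variable: scaled gradient of the least-squares loss.
    d = X.T @ (y - X @ beta) / n
    active = set(np.argsort(-np.abs(beta + d))[:k])
    for _ in range(max_iter):
        A = sorted(active)
        # Primal step: least-squares refit restricted to the active set.
        beta = np.zeros(p)
        beta[A] = np.linalg.lstsq(X[:, A], y, rcond=None)[0]
        # Dual step: gradient on the inactive coordinates only.
        d = X.T @ (y - X @ beta) / n
        d[A] = 0.0
        # Support update: rank coordinates by |beta + d|.
        new_active = set(np.argsort(-np.abs(beta + d))[:k])
        if new_active == active:
            break  # active set is stable; converged
        active = new_active
    return beta
```

The appeal of this scheme, which the paper exploits in the additive hazards setting, is that each iteration solves only a small `k`-dimensional least-squares problem rather than a `p`-dimensional penalized one, so the cost per sweep stays low even when `p` is large.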
Keywords: survival data analysis; high-dimensional features; L0-regularized learning; primal-dual active sets; global and local optimizers; model selection consistency
Date: 2022
Downloads: http://dx.doi.org/10.1287/ijoc.2022.1208 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:34:y:2022:i:5:p:2762-2775
More articles in INFORMS Journal on Computing from INFORMS.
Bibliographic data for series maintained by Chris Asher.