Feature Selection and Grouping Effect Analysis for Credit Evaluation via Regularized Diagonal Distance Metric Learning
Tie Li,
Gang Kou,
Yi Peng and
Philip S. Yu
Additional contact information
Tie Li: School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 611731, People’s Republic of China
Gang Kou: Xiangjiang Laboratory, Changsha 410205, People’s Republic of China; and School of Business Administration, Southwestern University of Finance and Economics, Chengdu 610074, People’s Republic of China; and Big Data Laboratory on Financial Security and Behavior, Southwestern University of Finance and Economics, Chengdu 610074, People’s Republic of China
Yi Peng: School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 611731, People’s Republic of China
Philip S. Yu: Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois 60607
INFORMS Journal on Computing, 2025, vol. 37, issue 5, 1391-1412
Abstract:
In credit evaluation, feature selection and grouping effect analysis are used to identify the most relevant credit risk features. Both are typically implemented by regularizing linear models. Nevertheless, substantial evidence shows that credit data are linearly inseparable because of heterogeneous credit customers and diverse risk sources. Although many nonlinear models have been proposed in the last two decades, most of them require recombining the original features, which makes their results difficult to interpret. To cope with this dilemma, we propose a diagonal distance metric learning model that improves distance metrics by rescaling the features, and we realize feature selection and grouping effect analysis by adding regularization terms to the model. The main merit of the proposed model is that it avoids the limitation of linear models by not pursuing linear separability while still guaranteeing interpretability. We also prove and explain why feature selection and the grouping effect can be achieved, and we decompose the optimization problem into parallel linear programming problems plus a small quadratic consensus-reaching problem so that it can be solved efficiently. Experiments on a real credit data set of 96,000 instances show that the proposed model improves the area under the receiver operating characteristic curve (AUC) of the distance-based classifier k-nearest neighbors by 14% in two-class credit evaluation and surpasses linear models in accuracy, true positive rate, and AUC. The proposed regularized diagonal distance metric learning approach also has the potential to be applied to other fields where data are linearly inseparable.
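Illustrative note: the core idea in the abstract, learning a nonnegative weight for each feature (the diagonal of the distance metric) under an ElasticNet-style penalty so that irrelevant features are zeroed out (feature selection) and correlated features receive similar weights (grouping effect), can be sketched in a few lines. The Python code below is a toy illustration only, not the paper's formulation or solver (the paper decomposes its problem into parallel linear programs plus a small quadratic consensus-reaching step); the pairwise margin, penalty strengths, and step size are assumptions made for demonstration.

import numpy as np

def learn_diagonal_weights(X, y, lam1=0.05, lam2=0.05, lr=0.01,
                           epochs=5000, margin=1.0, seed=0):
    """Toy stochastic subgradient routine (not the paper's solver): learn
    nonnegative per-feature weights w so the weighted squared distance
    sum_j w_j * (x_ij - x_kj)^2 falls below `margin` for same-class pairs
    and above it for different-class pairs, under an ElasticNet penalty
    lam1 * ||w||_1 + lam2 * ||w||_2^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.ones(d)
    for _ in range(epochs):
        i, k = rng.integers(0, n, size=2)       # sample a random pair
        diff2 = (X[i] - X[k]) ** 2              # per-feature squared gaps
        dist = w @ diff2                        # weighted squared distance
        if y[i] == y[k]:                        # same class: pull together
            grad = diff2 if dist > margin else np.zeros(d)
        else:                                   # different class: push apart
            grad = -diff2 if dist < margin else np.zeros(d)
        grad = grad + lam1 * np.sign(w) + 2.0 * lam2 * w   # ElasticNet subgradient
        w = np.maximum(w - lr * grad, 0.0)      # projected step keeps w >= 0
    return w

# Toy usage: only feature 0 separates the two classes, so its weight should
# stay large while the penalties drive the other weights toward zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.repeat([0, 1], 100)
X[:, 0] += 2.0 * y                              # inject class signal into feature 0
w = learn_diagonal_weights(X, y)
X_scaled = X * np.sqrt(w)                       # feed into any distance-based
                                                # classifier, e.g. k-nearest neighbors

Because the penalty acts on the diagonal weights rather than on coefficients of a linear classifier, the rescaled features keep their original meaning, which is the interpretability argument the abstract makes.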
Keywords: diagonal distance metric learning; ElasticNet; feature selection; feature group; credit evaluation
Date: 2025
Downloads: http://dx.doi.org/10.1287/ijoc.2023.0322 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:37:y:2025:i:5:p:1391-1412
More articles in INFORMS Journal on Computing from INFORMS.
Bibliographic data for series maintained by Chris Asher.