Wasserstein Robust Classification with Fairness Constraints
Yijie Wang,
Viet Anh Nguyen, and
Grani A. Hanasusanto
Additional contact information
Yijie Wang: School of Economics and Management, Tongji University, Shanghai 200092, China
Viet Anh Nguyen: Department of Systems Engineering and Engineering Management, Chinese University of Hong Kong, Hong Kong
Grani A. Hanasusanto: Department of Industrial and Enterprise Systems Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois 61801
Manufacturing & Service Operations Management, 2024, vol. 26, issue 4, 1567-1585
Abstract:
Problem definition: Data analytics models and machine learning algorithms are increasingly deployed to support consequential decision-making processes, from deciding which applicants receive job offers and loans to university enrollments and medical interventions. However, recent studies show that these models may unintentionally amplify human bias and yield significantly unfavorable decisions for specific groups. Methodology/results: We propose a distributionally robust classification model with a fairness constraint that encourages the classifier to be fair under the equality of opportunity criterion. We use a type-∞ Wasserstein ambiguity set centered at the empirical distribution to represent distributional uncertainty and derive a conservative reformulation of the worst-case equal opportunity unfairness measure. We show that the model is equivalent to a mixed binary conic optimization problem, which standard off-the-shelf solvers can handle. To improve scalability on large problem instances, we propose a convex, hinge-loss-based model whose reformulation involves no binary variables. Moreover, we consider the distributionally robust learning problem with a generic ground transportation cost to hedge against uncertainty in the labels and sensitive attributes. We numerically examine the performance of our proposed models on five real-world data sets related to individual-level analysis. Compared with state-of-the-art methods, our proposed approaches significantly improve fairness with a negligible loss of predictive accuracy on the testing data set. Managerial implications: Our paper raises awareness that bias may arise when predictive models are used in service and operations. Such bias generally stems from human bias, for example, imbalanced data collection or small sample sizes, and is further amplified by algorithms. Incorporating fairness constraints and the distributionally robust optimization (DRO) scheme is a powerful way to alleviate algorithmic biases.
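To make the setup concrete, a schematic version of the model described in the abstract can be written as follows. This is an illustrative sketch in generic notation, based on the standard definitions of the type-∞ Wasserstein distance and the equality of opportunity criterion, not the paper's exact reformulation; the symbols ε (ambiguity radius), η (unfairness tolerance), ℓ (classification loss), and 𝒰 (unfairness measure) are chosen here for exposition.

\min_{\theta} \ \sup_{\mathbb{Q} \in \mathbb{B}_{\varepsilon}(\hat{\mathbb{P}}_n)} \mathbb{E}_{\mathbb{Q}}\big[\ell(\theta; x, y)\big]
\quad \text{subject to} \quad
\sup_{\mathbb{Q} \in \mathbb{B}_{\varepsilon}(\hat{\mathbb{P}}_n)} \mathcal{U}(\theta; \mathbb{Q}) \le \eta,

where \mathbb{B}_{\varepsilon}(\hat{\mathbb{P}}_n) = \{ \mathbb{Q} : W_{\infty}(\mathbb{Q}, \hat{\mathbb{P}}_n) \le \varepsilon \} is the type-∞ Wasserstein ball of radius ε centered at the empirical distribution \hat{\mathbb{P}}_n of the training samples (x_i, a_i, y_i), and \mathcal{U} is an equal opportunity unfairness measure, for example, the gap in true-positive rates between the two groups defined by the sensitive attribute a.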
Keywords: math programming; stochastic methods
Date: 2024
Downloads: http://dx.doi.org/10.1287/msom.2022.0230 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormsom:v:26:y:2024:i:4:p:1567-1585
More articles in Manufacturing & Service Operations Management from INFORMS.
Bibliographic data for series maintained by Chris Asher.