EconPapers    

Mitigating Algorithmic Bias Through Probability Calibration: A Case Study on Lead Generation Data

Miroslav Nikolić, Danilo Nikolić, Miroslav Stefanović, Sara Koprivica and Darko Stefanović
Additional contact information
Miroslav Nikolić: Open Institute of Technology, University of Malta, XBX 1425 Ta’ Xbiex, Malta
Danilo Nikolić: Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
Miroslav Stefanović: Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
Sara Koprivica: Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
Darko Stefanović: Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia

Mathematics, 2025, vol. 13, issue 13, 1-23

Abstract: Probability calibration is commonly used to enhance the reliability and interpretability of probabilistic classifiers, yet its potential for reducing algorithmic bias remains under-explored. This study investigates the role of probability calibration techniques in mitigating bias associated with a sensitive attribute, specifically country of origin, in binary classification models. Using a real-world lead-generation dataset (a 2853 × 8 matrix) characterized by substantial class imbalance, with the positive class representing 1.4% of observations, several binary classification models were evaluated and the best-performing model was selected as the baseline for further analysis. The evaluated models included Binary Logistic Regression with polynomial degrees of 1, 2, 3, and 4, Random Forest, and XGBoost. Three widely used calibration methods, Platt scaling, isotonic regression, and temperature scaling, were then applied to assess their impact on both the probabilistic accuracy and the fairness metrics of the best-performing model. The findings suggest that post hoc calibration can effectively reduce the influence of sensitive features on predictions, improving fairness without compromising overall classification performance. This study demonstrates the practical value of incorporating calibration as a straightforward and effective fairness intervention in machine learning workflows.
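The workflow the abstract describes, train a classifier on an imbalanced binary dataset, then apply a post hoc calibrator and measure expected calibration error, can be sketched as follows. This is an illustrative sketch only, not the authors' code: the synthetic data merely stand in for the (non-public) 2853 × 8 lead-generation matrix with its ~1.4% positive class, and the ECE helper is one common binned formulation of the metric.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 2853 x 8 matrix with ~1.4% positives.
X, y = make_classification(n_samples=2853, n_features=8,
                           weights=[0.986], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

base = RandomForestClassifier(n_estimators=200, random_state=0)

# Post hoc calibration: method="sigmoid" gives Platt scaling,
# method="isotonic" gives isotonic regression (two of the three
# calibrators compared in the study; temperature scaling is not
# built into scikit-learn).
calibrated = CalibratedClassifierCV(base, method="isotonic", cv=5)
calibrated.fit(X_train, y_train)
proba = calibrated.predict_proba(X_test)[:, 1]

def expected_calibration_error(y_true, p, n_bins=10):
    """Binned ECE: bin-size-weighted mean |observed rate - mean confidence|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (p >= lo) & (p < hi)
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - p[mask].mean())
    return ece

print("AUC:", roc_auc_score(y_test, proba))
print("ECE:", expected_calibration_error(y_test, proba))
```

A fairness analysis along the lines of the paper would additionally compare calibration and error rates per group of the sensitive attribute; that step is omitted here because the synthetic data carry no such attribute.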

Keywords: probability calibration; algorithmic fairness; isotonic regression; expected calibration error; machine learning fairness; binary classification
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/13/2183/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/13/2183/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:13:p:2183-:d:1694444


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:13:y:2025:i:13:p:2183-:d:1694444