Ethical Considerations in AI-Enabled Services
Atul Gupta,
Dinesh Verma and
Utpal Mangla
Additional contact information
Atul Gupta: Government of Canada
Dinesh Verma: IBM
Utpal Mangla: IBM
A chapter in Smart Services Summit, 2025, pp 3-20 from Springer
Abstract:
The five key categories of ethical considerations in AI systems are fairness and bias, trust and transparency, privacy and security, accountability, and social benefit. This study proposes a framework for resolving accuracy-fairness trade-offs in AI use cases, leveraging Multi-Criteria Decision Making (MCDM) techniques. The Decision Making Trial and Evaluation Laboratory (DEMATEL) method and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) are used to understand the complex relationships among ethical considerations and to identify ideal solutions. AI models can improve societal equality, but they also carry the risk of bias leading to inequality. The study therefore proposes a framework for analyzing the trade-off between the cost of creating an unbiased AI model and the delay in societal benefits. This involves modeling societal benefits as an exponentially decaying function and modeling inequality with a value distribution model. Several inequality measures are defined, including the Gini Index, the 20:20 Index, and the Palma Ratio. A method is proposed to model social value using parameters that define the value models and the opportunity cost of obtaining a fair model, and to analyze indices of unfairness to determine the combinations that yield a gain or loss in the unfairness index. The study further proposes a temporal model for societal benefits, in which the total value delivered by a technology at time "t" is represented as an exponentially decaying function. Three types of value are defined: value generated without AI, value generated with a biased AI model, and value generated with a fair AI model. The opportunity cost of developing a fair model is the integral of the value generated with the biased AI model from 0 to the time taken to develop the fair model.
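The temporal model and the opportunity-cost integral described above can be sketched as follows. Since the abstract does not give the exact functional forms, the decay shape v0·e^(−kt) for the biased model's value and the parameter values v0 and k are illustrative assumptions; the integral then has the closed form v0·(1 − e^(−k·T))/k.

```python
import math

def biased_value(t, v0=1.0, k=0.1):
    # Value delivered at time t by the biased AI model, modeled here
    # (as an assumption) as an exponentially decaying function: v0 * e^(-k t).
    return v0 * math.exp(-k * t)

def opportunity_cost(t_fair, v0=1.0, k=0.1):
    # Opportunity cost of waiting for the fair model: the integral of the
    # biased model's value from 0 to t_fair, evaluated in closed form.
    return v0 * (1.0 - math.exp(-k * t_fair)) / k
```

The cost grows with the time taken to build the fair model but saturates at v0/k, which is one way the trade-off between waiting for fairness and deploying a biased model early can be made explicit.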
For modeling inequality, a value distribution model is used, in which the cumulative distribution function (CDF) of value distributed across society is defined by $$f(x) = x^{g}$$, where x is the fraction of society and g is a parameter that shapes the CDF. The study determines the conditions under which society might be better off using a biased AI model and those under which society might be better off waiting for an unbiased AI model.
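A minimal sketch of the value distribution model, under the assumption that f(x) = x^g is read as a Lorenz-style curve (the cumulative share of value held by the bottom fraction x of society): the Gini Index then reduces to (g − 1)/(g + 1), and the 20:20 Index and Palma Ratio follow directly from the curve. The closed forms below are derived from that assumption, not taken from the chapter.

```python
def gini(g):
    # Gini = 1 - 2 * integral_0^1 x**g dx = (g - 1) / (g + 1).
    # g = 1 gives perfect equality (Gini = 0); larger g means more inequality.
    return (g - 1.0) / (g + 1.0)

def ratio_2020(g):
    # 20:20 Index: share of value held by the top 20% over the bottom 20%.
    return (1.0 - 0.8 ** g) / (0.2 ** g)

def palma(g):
    # Palma Ratio: share of value held by the top 10% over the bottom 40%.
    return (1.0 - 0.9 ** g) / (0.4 ** g)
```

With a single shape parameter g, all three unfairness indices move together, which makes it straightforward to tabulate the gain or loss in each index as g varies between a biased and a fair model.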
Keywords: Ethical considerations in AI; Accuracy-fairness trade-offs; Multi-Criteria Decision Making (MCDM) with DEMATEL and TOPSIS; Inequality measures (Gini Index; 20:20 Index; Palma Ratio); Temporal model for societal benefits; Value distribution model
Date: 2025
Persistent link: https://EconPapers.repec.org/RePEc:spr:prochp:978-3-031-86958-7_1
Ordering information: This item can be ordered from
http://www.springer.com/9783031869587
DOI: 10.1007/978-3-031-86958-7_1
More chapters in Progress in IS from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.