Awareness of Unethical Artificial Intelligence and its Mitigation Measures
Reinhard Bernsteiner,
Christian Ploder,
Teresa Spiess,
Thomas Dilger and
Sonja Höller
European Journal of Interdisciplinary Studies, 2023, issue 02
Abstract:
The infrastructure of the Internet is based on algorithms that enable the use of search engines, social networks, and much more. Algorithms themselves may vary in functionality, but many of them have the potential to reinforce, accentuate, and systematize age-old prejudices, biases, and implicit assumptions of society. Awareness of algorithms thus becomes an issue of agency, public life, and democracy. Nonetheless, as research has shown, people lack algorithm awareness. This paper therefore investigates the extent to which people are aware of unethical artificial intelligence and what actions they can take against it (mitigation measures). A survey addressing these factors yielded 291 valid responses. To examine the data and the relationships between the constructs in the model, partial least squares structural equation modeling (PLS-SEM) was applied using the SmartPLS 3 tool. The empirical results demonstrate that awareness of mitigation measures is influenced by the user's self-efficacy, whereas trust in the algorithmic platform has no significant influence. In addition, the explainability of an algorithmic platform has a significant influence on the user's self-efficacy and should therefore be considered when setting up the platform. The mitigation measures most frequently mentioned by survey participants are laws and regulations, various types of algorithm audits, and education and training.
This work thus provides new empirical insights for researchers and practitioners in the field of ethical artificial intelligence.
Keywords: artificial intelligence; biased artificial intelligence; algorithmic fairness; IT-audit; ethical AI (search for similar items in EconPapers)
JEL-codes: C30 D83 M00 (search for similar items in EconPapers)
Date: 2023
Downloads: (external link)
https://ejist.ro/files/pdf/526.pdf (application/pdf)
https://ejist.ro/abstract/526/Awareness-of-Unethic ... gation-Measures.html (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:jis:ejistu:y:2023:i:02:id:526
More articles in European Journal of Interdisciplinary Studies from Bucharest Economic Academy Contact information at EDIRC.
Bibliographic data for series maintained by Alina Popescu.