The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence

Lauren Rhue
Additional contact information
Lauren Rhue: Robert H. Smith School of Business, Decision, Operations, and Information Technologies, University of Maryland at College Park, College Park, Maryland 20740

Information Systems Research, 2024, vol. 35, issue 3, 1479-1496

Abstract: Emotion artificial intelligence (AI), or emotion recognition AI, may systematically vary in its recognition of facial expressions and emotions across demographic groups, creating inconsistencies and disparities in its scoring. This paper explores the extent to which individuals can compensate for these disparities and inconsistencies in emotion AI, considering two opposing factors: although humans evolved to recognize emotions, particularly happiness, they are also subject to cognitive biases such as the anchoring effect. To help understand these dynamics, this study tasks three commercially available emotion AIs and a group of human labelers with identifying emotions from faces in two image data sets. The scores generated by the emotion AIs and the human labelers are examined for inference inconsistencies (i.e., misalignment between facial expression and emotion label). The human labelers are also provided with the emotion AI’s scores and with measures of its scoring fairness (or lack thereof). We observe that even when human labelers operate in this context of information transparency, they may still rely on the emotion AI’s scores, perpetuating its inconsistencies. Several findings emerge from this study. First, the anchoring effect appears to be moderated by the type of inference inconsistency and is weaker for easier emotion recognition tasks. Second, when human labelers are provided with information transparency regarding the emotion AI’s fairness, the effect is not uniform across emotions. Third, there is no evidence that information transparency leads to the selective anchoring necessary to offset emotion AI disparities; in fact, some evidence suggests that information transparency increases human inference inconsistencies. Lastly, the different models of emotion AI are highly inconsistent in their scores, raising doubts about emotion AI more generally. Collectively, these findings provide evidence of the potential limitations of addressing algorithmic bias through individual decisions, even when those individuals are supported with information transparency.

Keywords: algorithmic bias; artificial intelligence; emotion recognition; fairness; affective AI; emotion AI; the anchoring effect; information transparency; algorithmic fairness
Date: 2024

Downloads: (external link)
http://dx.doi.org/10.1287/isre.2019.0493 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:35:y:2024:i:3:p:1479-1496


More articles in Information Systems Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.

 
Handle: RePEc:inm:orisre:v:35:y:2024:i:3:p:1479-1496