
Machines that feel: behavioral determinants of attitude towards affect recognition technology—upgrading technology acceptance theory with the mindsponge model

Peter Mantello, Manh-Tung Ho, Minh-Hoang Nguyen and Quan-Hoang Vuong
Additional contact information
Peter Mantello: Ritsumeikan Asia Pacific University
Manh-Tung Ho: Phenikaa University
Minh-Hoang Nguyen: Phenikaa University

Palgrave Communications, 2023, vol. 10, issue 1, 1-16

Abstract: The rise of emotional AI signals a new era in human-machine relations, one in which intelligent machines not only feel but also feed on human emotions as statistical fodder, with the goal of reshaping our behavior. Unlike many smart technologies, emotion-recognition systems sense, monitor, harvest, and analyze data extracted from a person's non-conscious or psycho-physical state, often without their knowledge or consent. As a far more invasive form of surveillance capitalism, the adoption of emotional AI is problematized by a myriad of legal, ethical, cultural, and scientific issues. To better understand the behavioral factors determining an individual's attitude towards this emerging technology, we first identify five major tensions that may impinge on adoption. Second, we extend the Technology Acceptance Model (TAM) (Davis, 1989) with insights from the mindsponge model of information filtering (Vuong and Napier, 2015), along with the quantitative affordances offered by the Bayesian computational approach. Our analysis draws on a multi-national dataset surveying the perceptions of 1015 young adults (aged 18–27) regarding emotional AI applications, together with socio-cultural characteristics such as income, region, religiosity, and home-country politics. These characteristics enter our Bayesian multi-level models as varying intercepts, so that we can systematically measure and compare the effects of various behavioral determinants on respondents' attitudes towards non-conscious data harvesting by government and private-sector actors. Critically, this study finds that respondents who feel more familiar with AI technologies, perceive more utility in them, and rate themselves as more restrained in heated arguments on social media feel less threatened by the practice of non-conscious data harvesting by both government and private-sector actors.
Our findings offer a fertile platform for further exploration of the intersection between psychology, culture, and emotion-recognition technologies, as well as important insights for policymakers wishing to ensure that the design and regulation of the technology serve the best interests of society.
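The "varying intercepts" device mentioned in the abstract can be sketched in miniature. This is not the authors' code or data: the groups, sample sizes, and variance values below are hypothetical, and full Bayesian estimation is replaced by a simple partial-pooling (shrinkage) formula, which captures the core idea that each group's baseline attitude is pulled toward the grand mean, more strongly when the group has fewer respondents.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical socio-cultural groups standing in for, e.g., region.
groups = ["East Asia", "Southeast Asia", "Europe"]
true_intercepts = {"East Asia": 0.5, "Southeast Asia": -0.2, "Europe": 0.1}
n_per_group = {"East Asia": 400, "Southeast Asia": 500, "Europe": 115}

# Simulate attitude scores: group-specific intercept + individual noise.
data = {g: true_intercepts[g] + rng.normal(0.0, 1.0, n_per_group[g])
        for g in groups}

grand_mean = np.mean(np.concatenate(list(data.values())))
sigma2_within = 1.0   # assumed within-group variance
tau2_between = 0.25   # assumed between-group variance of intercepts

# Partial pooling: each group's intercept estimate is a precision-weighted
# compromise between its raw mean and the grand mean.
pooled = {}
for g, y in data.items():
    n = len(y)
    weight = (n / sigma2_within) / (n / sigma2_within + 1.0 / tau2_between)
    pooled[g] = weight * y.mean() + (1.0 - weight) * grand_mean

for g in groups:
    print(f"{g}: raw={data[g].mean():+.3f}  pooled={pooled[g]:+.3f}")
```

Because the shrinkage weight lies strictly between 0 and 1, each pooled estimate always falls between the group's raw mean and the grand mean; a full Bayesian multilevel model estimates the variance components from the data instead of fixing them, but the qualitative behavior is the same.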

Date: 2023
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (2)

Downloads: (external link)
http://link.springer.com/10.1057/s41599-023-01837-1 Abstract (text/html)
Access to full text is restricted to subscribers.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:pal:palcom:v:10:y:2023:i:1:d:10.1057_s41599-023-01837-1

Ordering information: This journal article can be ordered from
https://www.nature.com/palcomms/about

DOI: 10.1057/s41599-023-01837-1

Access Statistics for this article

More articles in Palgrave Communications from Palgrave Macmillan
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-04-07
Handle: RePEc:pal:palcom:v:10:y:2023:i:1:d:10.1057_s41599-023-01837-1