AI-determined similarity increases likability and trustworthiness of human voices

Oliver Jaggy, Stephan Schwan and Hauke S Meyerhoff

PLOS ONE, 2025, vol. 20, issue 3, 1-27

Abstract: Modern artificial intelligence (AI) technology is capable of generating human-sounding voices that could be used to deceive recipients in various contexts (e.g., deep fakes). Given the increasing accessibility of this technology and its potential societal implications, the present study conducted online experiments using original data to investigate the validity of AI-based voice similarity measures and their impact on trustworthiness and likability. Correlation analyses revealed that voiceprints – numerical representations of voices derived from a speaker verification system – can be used to approximate human (dis)similarity ratings. With regard to cognitive evaluations, we observed that voices similar to one's own voice increased trustworthiness and likability, whereas average voices did not elicit such effects. These findings suggest a preference for self-similar voices and underscore the risks associated with the misuse of AI in generating persuasive artificial voices from brief voice samples.
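The core idea behind voiceprint comparison can be illustrated with a minimal sketch. The abstract does not specify the similarity metric; cosine similarity between embedding vectors is a common choice in speaker verification systems, so the vectors and function below are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors.

    Returns a value in [-1, 1]; higher means the two voiceprints
    point in more similar directions in embedding space.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "voiceprints" (real speaker-verification
# embeddings typically have hundreds of dimensions).
own_voice  = [0.9, 0.1, 0.4, 0.2]
similar    = [0.8, 0.2, 0.5, 0.1]
dissimilar = [0.1, 0.9, 0.1, 0.8]

# A voice close to one's own yields a higher similarity score.
print(cosine_similarity(own_voice, similar) >
      cosine_similarity(own_voice, dissimilar))
```

Under this sketch, pairwise scores of this kind could then be correlated with human (dis)similarity ratings, which is the validity check the abstract describes.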

Date: 2025

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0318890 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 18890&type=printable (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0318890

DOI: 10.1371/journal.pone.0318890


More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone.

 
Page updated 2025-05-05
Handle: RePEc:plo:pone00:0318890