EconPapers    
When AI gets it wrong: False inference and political harm

Slobodan Tomić

No w6az2_v1, SocArXiv from Center for Open Science

Abstract: AI systems are increasingly active agents in political discourse, shaping reputations, narratives, and public perceptions. This commentary examines three real-world cases from Serbia in which AI chatbots—Grok and ChatGPT—asserted false claims about political collectives or regime-critical individuals, spreading damaging narratives. These incidents illustrate how, under the guise of technical neutrality, AI can reinforce dominant narratives, amplify disinformation, and undermine dissent. Drawing on a recently proposed framework for AI regulation (Tomić & Štimac, 2025), we show how failures across three dimensions—decision models, data sourcing, and interface semantics—create pathways for political manipulation and reputational harm. We conclude by reflecting on the implications for political deliberation and calling for targeted regulatory and empirical responses.

Date: 2025-10-04
New Economics Papers: this item is included in nep-pol
Downloads: https://osf.io/download/68e0b7141dda455aaede49a2/

Persistent link: https://EconPapers.repec.org/RePEc:osf:socarx:w6az2_v1

DOI: 10.31219/osf.io/w6az2_v1

Page updated 2025-11-04
Handle: RePEc:osf:socarx:w6az2_v1