Power, Hate Speech, Machine Learning, and Intersectional Approach

Jae Yeon Kim
Additional contact information
Jae Yeon Kim: University of California, Berkeley

No chvgp, SocArXiv from Center for Open Science

Abstract: The advent of social media has increased digital content and, with it, hate speech. Advances in machine learning algorithms help detect online hate speech at scale; nevertheless, these systems are far from perfect. The human-annotated hate speech data used to train automated detection systems are susceptible to racial/ethnic, gender, and other biases. To address societal and historical biases in automated hate speech detection, scholars and practitioners need to focus on the power dynamics: who decides what comprises hate speech. Examining inter- and intra-group dynamics can facilitate understanding of this causal mechanism. This intersectional approach deepens knowledge of the limitations of automated hate speech detection systems and bridges the social science and machine learning literatures on bias and fairness.

Date: 2021-04-10
New Economics Papers: this item is included in nep-big and nep-pay

Downloads: https://osf.io/download/6070fcf951f7ae03dbf578ff/


Persistent link: https://EconPapers.repec.org/RePEc:osf:socarx:chvgp

DOI: 10.31219/osf.io/chvgp


Handle: RePEc:osf:socarx:chvgp