Computer Vision: Anthropology of Algorithmic Bias in Facial Analysis Tool
Mayane Batista Lima
A chapter in Numerical Simulation - Advanced Techniques for Science and Engineering from IntechOpen
Abstract:
The use of Computer Vision (CV) has led to debates about bias within the technology. Although machines are often labeled as autonomous, human bias is embedded in the data labeling that effective machine learning depends on. Proper training of neural networks requires massive amounts of "relevant data"; however, not all data is collected, which contributes to a one-sided view and reinforces a "standard" shaped by the data that is not collected. The machine develops algorithmic decision-making based on the data it is presented, which can produce machinic biases along lines of gender, race/ethnicity, and class. This raises questions about which bodies are recognized by machines and whether they can be taught to "see" beyond the binary "male or female" limitation. The study aims to understand how Amazon's Rekognition, a facial recognition and analysis tool, analyzes and classifies people of dissident genders who do not conform to "conventional" gender norms. Understanding the mechanisms behind the technology's decision-making processes can lead to more equitable and inclusive outcomes.
Keywords: artificial intelligence; computer vision; algorithmic bias; misgendering; Amazon Rekognition
JEL-codes: C60
Downloads: https://www.intechopen.com/chapters/87206 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:ito:pchaps:291278
DOI: 10.5772/intechopen.110330
Bibliographic data for this series is maintained by Slobodan Momcilovic.