
Artificial intelligence and algorithmic bias? Field tests on social network with teens

Grazia Cecere (), Clara Jean (), Fabrice Le Guel () and Matthieu Manant ()
Additional contact information
Grazia Cecere: IMT-BS - DEFI - Département Droit, Économie et Finances - TEM - Télécom Ecole de Management - IMT - Institut Mines-Télécom [Paris] - IMT-BS - Institut Mines-Télécom Business School - IMT - Institut Mines-Télécom [Paris], LITEM - Laboratoire en Innovation, Technologies, Economie et Management (EA 7363) - UEVE - Université d'Évry-Val-d'Essonne - Université Paris-Saclay - IMT-BS - Institut Mines-Télécom Business School - IMT - Institut Mines-Télécom [Paris]
Clara Jean: EESC-GEM Grenoble Ecole de Management
Fabrice Le Guel: RITM - Réseaux Innovation Territoires et Mondialisation - Université Paris-Saclay
Matthieu Manant: RITM - Réseaux Innovation Territoires et Mondialisation - Université Paris-Saclay

Post-Print from HAL

Abstract: Artificial intelligence (AI) is a general purpose technology that is used in many sectors. However, automated decision-making powered by AI algorithms can lead to unintended outcomes, especially in the context of online platforms. The lack of transparency related to AI algorithms and their categorization methods makes practical insights into effective management of the risks associated with their use crucially important. We address these issues through two field tests aimed at mitigating biases in online science, technology, engineering, and mathematics (STEM) education-related ads targeting teenagers. We conducted online ad campaigns involving gender-unspecific, women-specific, and gender-neutral ads targeted at young social network users. Our findings show that including a gender-oriented message in the ad tends to alleviate algorithmic gender bias but also reduces overall ad visibility. Our research also shows that text length has a significant impact on ad visibility, and that gender-oriented messages influence the display of the ad based on gender.

Keywords: Artificial intelligence; Algorithmic bias; Field tests; Audit; Online social network; Sustainable Development Goals
Date: 2024-04

Published in Technological Forecasting and Social Change, 2024, 201, pp.123204. ⟨10.1016/j.techfore.2023.123204⟩

There are no downloads for this item, see the EconPapers FAQ for hints about obtaining it.



Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04464964

DOI: 10.1016/j.techfore.2023.123204


More papers in Post-Print from HAL
Bibliographic data for series maintained by CCSD ().

 
Page updated 2025-03-19
Handle: RePEc:hal:journl:hal-04464964