Apparent algorithmic discrimination and real-time algorithmic learning in digital search advertising
Anja Lambrecht and
Catherine Tucker
Additional contact information
Anja Lambrecht: London Business School
Catherine Tucker: MIT Sloan School of Management
Quantitative Marketing and Economics (QME), 2024, vol. 22, issue 4, No 1, 357-387
Abstract: Digital algorithms try to display content that engages consumers. To do this, algorithms need to overcome a ‘cold-start problem’ by swiftly learning whether content engages users, which requires feedback from users. The algorithm targets segments of users. However, if a targeted segment contains fewer individuals, simply because that group is rarer in the population, this can lead to uneven outcomes for minority relative to majority groups: individuals in a minority segment are proportionately more likely to be test subjects for experimental content that may ultimately be rejected by the platform. We explore whether this is indeed the case in the context of ads displayed following searches on Google. Previous research has documented that search-engine queries for names associated in a US context with Black people were more likely to return ads highlighting the need for a criminal background check than were searches for names associated with white people. We implement search advertising campaigns that target ads to searches for Black and white names. Our ads are indeed more likely to be displayed following a search for a Black name, even though the likelihood of clicking is similar. Because Black names are less common, the algorithm learns about the quality of the underlying ad more slowly. As a result, an ad is more likely to persist next to searches for Black names than next to searches for white names. Proportionally more Black-name searches are therefore likely to have a low-quality ad shown next to them, even though the ad will eventually be rejected. A second study, in which ads are placed following searches for terms related to religious discrimination, confirms this empirical pattern. Our results suggest that, as a practical matter, real-time algorithmic learning can lead minority segments to be more likely to see content that will ultimately be rejected by the algorithm.
Keywords: Algorithmic fairness; Algorithmic discrimination; Advertising
JEL-codes: M2; M3
Date: 2024
Downloads: http://link.springer.com/10.1007/s11129-024-09286-z (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:kap:qmktec:v:22:y:2024:i:4:d:10.1007_s11129-024-09286-z
DOI: 10.1007/s11129-024-09286-z
Quantitative Marketing and Economics (QME) is currently edited by Pradeep Chintagunta