Trade-offs in automating platform regulation by algorithm: evidence from a health emergency

Grazia Cecere, Vincent Lefrere, Clara Jean and Catherine Tucker
Additional contact information
Grazia Cecere: DEFI - Département Data analytics, Économie et Finances, IMT-BS - Institut Mines-Télécom Business School, IMT - Institut Mines-Télécom [Paris]; LITEM - Laboratoire en Innovation, Technologies, Economie et Management (EA 7363), UEVE - Université d'Évry-Val-d'Essonne, Université Paris-Saclay
Vincent Lefrere: DEFI - Département Data analytics, Économie et Finances, IMT-BS - Institut Mines-Télécom Business School, IMT - Institut Mines-Télécom [Paris]; LITEM - Laboratoire en Innovation, Technologies, Economie et Management (EA 7363), UEVE - Université d'Évry-Val-d'Essonne, Université Paris-Saclay
Clara Jean: EESC-GEM Grenoble Ecole de Management
Catherine Tucker: MIT - Massachusetts Institute of Technology

Post-Print from HAL

Abstract: Digital platforms have experienced pressure to restrict and regulate sensitive ad content. In a static environment, algorithms can help platforms achieve regulatory compliance more quickly and easily. However, in dynamic contexts, the performance of algorithmic decision-making for regulatory compliance is less well understood. We aim to fill this gap by exploring how algorithmic rules governing digital platforms respond to rapid environmental changes, specifically in the context of a global health crisis. We study the effect of algorithmic regulation of ad content in times of rapid change, when digital ad venues need to identify sensitive ads that should be subject to more restrictive policies and practices. Our results show that ads run by governmental organizations designed to inform the public about COVID-19 are more likely to be banned by Meta's algorithm than similar ads run by non-governmental organizations. Using a difference-in-differences (DiD) approach that exploits an algorithmic incident on Meta in March 2020, we provide evidence of platform-level mechanisms at play. After the incident, we find that the proportion of disqualified ads decreased significantly. Further analysis reveals that (mis)classification of ads is responsible for this high proportion of disqualified ads, ruling out advertiser effects and pointing to algorithmic (mis)classification. Using human-based classification, we show that the algorithm likely misclassified 12% of ads related to issues of national significance. This finding challenges the notion that algorithmic decision-making is always efficient or unbiased, especially in dynamic circumstances. Overall, our study contributes to the broader conversation about algorithmic decision-making in management. We suggest that algorithmic inflexibility in categorization during periods of unpredictable shifts worsens the problems of trying to achieve regulatory compliance using algorithms.
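
The paper's estimating equation is not reproduced in this record. As a minimal sketch only, assuming a standard two-group, two-period setup around the March 2020 incident (the variable names below are illustrative assumptions, not the authors'), a DiD specification could be written as

    Banned_{it} = \alpha + \beta_1 \, Gov_i + \beta_2 \, Post_t + \beta_3 \, (Gov_i \times Post_t) + X_{it}' \gamma + \varepsilon_{it}

where Banned_{it} indicates whether ad i is disqualified at time t, Gov_i flags ads run by governmental organizations, Post_t marks the period after the incident, X_{it} collects ad-level controls, and \beta_3 is the DiD coefficient of interest.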

Keywords: Algorithmic Decision-Making; Algorithmic Incident; Digital Advertising; Platform Regulation
Date: 2025-07-25

Published in AOM 2025 : 85th Annual Meeting of the Academy of Management, Jul 2025, Copenhagen, Denmark

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-05164920



Handle: RePEc:hal:journl:hal-05164920