Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries
Robert Gorwa and Michael Veale
Additional contact information: Michael Veale, University College London
No 6dfk3, SocArXiv from Center for Open Science
Abstract:
The AI development community is increasingly making use of hosting intermediaries such as Hugging Face, which provide easy access to user-uploaded models and training data. These model marketplaces lower technical deployment barriers for hundreds of thousands of users, yet can be used in numerous potentially harmful and illegal ways. In this article, we explain how AI systems, which can both `contain' content and be open-ended tools, present one of the trickiest platform governance challenges seen to date. We provide case studies of several incidents across three illustrative platforms --- Hugging Face, GitHub and Civitai --- to examine how model marketplaces moderate models. Building on this analysis, we outline important (yet nevertheless limited) practices that industry has been developing to respond to moderation demands: licensing, access and use restrictions, automated content moderation, and open policy development. While the policy challenge at hand is a considerable one, we conclude with some ideas as to how platforms could better mobilize resources to act as a careful, fair, and proportionate regulatory access point.
Date: 2023-11-17
New Economics Papers: this item is included in nep-ain and nep-pay
Downloads:
https://osf.io/download/6557966d062a3e1d96edff07/
Persistent link: https://EconPapers.repec.org/RePEc:osf:socarx:6dfk3
DOI: 10.31219/osf.io/6dfk3