Can ex ante conformity assessment regulations contribute to trustworthy foundation models? An evolutionary game analysis from an innovation ecosystem perspective
Xiaoxu Zhang,
Wenyong Zhou,
Wen Hu,
Shenghan Zhou,
Xiaoqian Hu and
Linchao Yang
Technology in Society, 2025, vol. 82, issue C
Abstract:
Untrustworthy artificial intelligence (AI) systems, especially foundation models, may cause significant economic and social harm, a prospect that has aroused widespread concern. However, there is no mature, future-proof regulatory approach for governing foundation models, nor any consensus on how to regulate them, owing to their rapid development and our limited understanding of them. The potential of alternative regulatory methods therefore deserves full discussion. The ex ante conformity assessment in the EU AI Act, the world's first comprehensive AI law, is applied to regulate high-risk AI systems and could serve as an alternative approach for governing foundation models in the future. This raises the question of whether ex ante conformity assessment can contribute to achieving trustworthy foundation models. Hence, we adopted an innovation ecosystem perspective and employed an evolutionary game approach, constructing two hypothetical scenarios for ex ante conformity assessment: self-assessment and independent assessment. Findings show that market forces and ecosystem impacts play a crucial role in shaping trustworthiness and that ex ante conformity assessment alone, whether through self-assessment or independent assessment, may be insufficient to ensure trustworthy outcomes. We therefore argue that market-driven incentives and ecosystem thinking among industry practitioners are pivotal for advancing trustworthy foundation models, while cautioning against the limitations of market mechanisms. A hybrid regulatory framework that combines legal mandates with market-based incentives and ecosystem influences thus warrants further exploration. Furthermore, independent evaluators can serve as important facilitators, supporting providers through trustworthy audits. This study contributes to ongoing policy discussions on trustworthy AI regulation and offers references for future policy design and implementation.
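The evolutionary game approach the abstract refers to is typically formalized with replicator dynamics: the share of providers adopting a strategy grows when that strategy's payoff exceeds the alternative's. The following is a minimal sketch, not the paper's actual model; the payoff functions, parameter values, and function names are illustrative assumptions chosen only to show how adoption-dependent (ecosystem) rewards can produce two stable outcomes.

```python
# Minimal two-strategy replicator-dynamics sketch (hypothetical payoffs,
# NOT the model from the paper). A population of foundation-model providers
# chooses "trustworthy" (T) or "untrustworthy" (U); x is the fraction
# playing T, and its evolution follows the standard replicator equation
#     dx/dt = x * (1 - x) * (f_T(x) - f_U(x)).

def replicator_trajectory(x0, payoff_T, payoff_U, dt=0.01, steps=5000):
    """Euler-integrate the two-strategy replicator equation from x0."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1 - x) * (payoff_T(x) - payoff_U(x))
        x = min(max(x, 0.0), 1.0)  # keep the population share in [0, 1]
    return x

# Illustrative payoffs: the reward for trustworthiness grows with how much
# of the ecosystem already adopts it, while cutting corners yields a flat
# short-term gain. Both functions are assumptions for demonstration.
payoff_T = lambda x: 2.0 * x - 0.5   # ecosystem reward rises with adoption
payoff_U = lambda x: 0.2             # flat payoff from untrustworthy play

low_start = replicator_trajectory(0.2, payoff_T, payoff_U)   # converges near 0
high_start = replicator_trajectory(0.5, payoff_T, payoff_U)  # converges near 1
```

With these assumed payoffs the dynamics are bistable (the interior equilibrium sits at x = 0.35): a population that starts below it slides toward universal untrustworthiness, while one above it locks in trustworthy behavior. This is one way to make concrete the abstract's claim that market forces alone may be insufficient to guarantee trustworthy outcomes.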
Keywords: Foundation models; Trustworthiness; Ex ante conformity assessment; Innovation ecosystems
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S0160791X25000909 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:teinso:v:82:y:2025:i:c:s0160791x25000909
DOI: 10.1016/j.techsoc.2025.102900
Technology in Society is currently edited by Charla Griffy-Brown
Bibliographic data for series maintained by Catherine Liu.