Trustworthy AI Charter: For the Deployment of a Digitized, Resilient and Ethical Production Industry in Nouvelle-Aquitaine
Ikram Chraibi Kaadoud,
Frédéric Alexandre and
Michele Barbier
Additional contact information
Ikram Chraibi Kaadoud: STIP - Service transfert pour l'innovation et partenariats - Centre Inria de l'Université de Lille - Inria - Institut National de Recherche en Informatique et en Automatique, Dihnamic
Frédéric Alexandre: Mnemosyne - Mnemonic Synergy - LaBRI - Laboratoire Bordelais de Recherche en Informatique - UB - Université de Bordeaux - École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB) - CNRS - Centre National de la Recherche Scientifique - Centre Inria de l'Université de Bordeaux - Inria - Institut National de Recherche en Informatique et en Automatique - IMN - Institut des Maladies Neurodégénératives [Bordeaux] - UB - Université de Bordeaux - CNRS - Centre National de la Recherche Scientifique
Michele Barbier: Inria Siège - Inria - Institut National de Recherche en Informatique et en Automatique
Working Papers from HAL
Abstract:
The adoption of the AI Act in 2024 represents a major regulatory milestone in the development of AI in Europe. This framework aims to establish conditions of trust around the use of AI systems (AIS), in a context where AI is both a lever for economic transformation and a factor in organizational disruption. While debates have long focused on technological aspects (traceability, data governance, algorithmic bias, transparency, cybersecurity), the actual implementation of AI in organizations reveals equally critical human and managerial issues. AI cannot be reduced to a mere technical tool: it involves dynamics of acculturation, training, revision of processes, and management of perceptions and internal resistance. It therefore requires a fully fledged change management strategy. How can organizations tackle both technical and managerial issues while maintaining a clear line of conduct aligned with European regulations? This is the question that Dihnamic has set out to answer by proposing trustworthy AI guidelines addressed to companies and public authorities, to support their AI innovation from the first steps of ideation to prototype development. Based on field feedback, needs expressed by companies, qualitative analyses, and contributions from a hub of experts and open-source communities, Dihnamic has drawn up guidelines with eight recommendations that align regulatory requirements with operational constraints, while structuring an approach to awareness-raising and responsible innovation. These eight recommendations, detailed in this document, highlight the essential axes of a trustworthy AI-company collaboration that respects the rights and well-being of employees in both the public and private sectors.
Keywords: transformation; innovation; robustness; governance; transparency; responsibility; trustworthy AI
Date: 2025-04
Note: View the original document on HAL open archive server: https://inria.hal.science/hal-05346167v1
Published in Inria Centre at the University of Bordeaux. 2025, 11 pp.
Downloads: https://inria.hal.science/hal-05346167v1/document (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:hal:wpaper:hal-05346167
Bibliographic data for series maintained by CCSD.