Am I Secure by Design ?” Evaluating the Security and Transparency of GenAI: An End User-Centric Approach
Christian Chung and
François Acquatella
Additional contact information
François Acquatella: DRM - Dauphine Recherches en Management - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique
Post-Print from HAL
Abstract:
Generative AI models, such as ChatGPT and Deepseek, are increasingly integrated into daily life. However, concerns about reliability, cybersecurity, transparency, and data privacy are growing through their use. While Secure by Design (SbD) and Explainable AI (XAI) offer theoretical guidelines, their practical combined application to AI-generated content remains unclear. This study empirically evaluates AI security, cybersecurity and transparency using a structured interrogation method directly addressed to AI models. We assessed multiple text-based open-source and proprietary AI systems on cybersecurity claims, update transparency, and privacy compliance. Preliminary results reveal some discrepancies between AI declarations and actual adherence to SbD principles. While most models incorporate ethical safeguards, they lack clarity on security updates and data management, particularly regarding training data. We propose a user-centered audit framework to test transparency and AI security commitments. The findings emphasize the need to adapt current Secure by Design standards to AI ecosystems while ensuring verifiable transparency.
Date: 2025-08
References: Add references at CitEc
Citations:
Published in Annual Americas Conference on Information Systems (AMCIS) 2025, Aug 2025, Montréal, Canada
There are no downloads for this item, see the EconPapers FAQ for hints about obtaining it.
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Export reference: BibTeX
RIS (EndNote, ProCite, RefMan)
HTML/Text
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-05468287
Access Statistics for this paper
More papers in Post-Print from HAL
Bibliographic data for series maintained by CCSD ().