GDPR and Large Language Models: Technical and Legal Obstacles
Georgios Feretzakis,
Evangelia Vagena,
Konstantinos Kalodanis,
Paraskevi Peristera,
Dimitris Kalles and
Athanasios Anastasiou
Additional contact information
Georgios Feretzakis: School of Science and Technology, Hellenic Open University, 26335 Patras, Greece
Evangelia Vagena: Athens University of Economics and Business, 10434 Athens, Greece
Konstantinos Kalodanis: Department of Informatics and Telematics, Harokopio University of Athens, 17676 Kallithea, Greece
Paraskevi Peristera: Division of Psychobiology and Epidemiology, Department of Psychology, Stockholm University, 10691 Stockholm, Sweden
Dimitris Kalles: School of Science and Technology, Hellenic Open University, 26335 Patras, Greece
Athanasios Anastasiou: Biomedical Engineering Laboratory, National Technical University of Athens, 15780 Athens, Greece
Future Internet, 2025, vol. 17, issue 4, 1-26
Abstract:
Large Language Models (LLMs) have revolutionized natural language processing but present significant technical and legal challenges when confronted with the General Data Protection Regulation (GDPR). This paper examines the complexities involved in reconciling the design and operation of LLMs with GDPR requirements. In particular, we analyze how key GDPR provisions—including the Right to Erasure, Right of Access, Right to Rectification, and restrictions on Automated Decision-Making—are challenged by the opaque and distributed nature of LLMs. We discuss issues such as the transformation of personal data into non-interpretable model parameters, difficulties in ensuring transparency and accountability, and the risks of bias and data over-collection. Moreover, the paper explores potential technical solutions such as machine unlearning, explainable AI (XAI), differential privacy, and federated learning, alongside strategies for embedding privacy-by-design principles and automated compliance tools into LLM development. The analysis is further enriched by considering the implications of emerging regulations like the EU’s Artificial Intelligence Act. In addition, we propose a four-layer governance framework that addresses data governance, technical privacy enhancements, continuous compliance monitoring, and explainability and oversight, thereby offering a practical roadmap for GDPR alignment in LLM systems. Through this comprehensive examination, we aim to bridge the gap between the technical capabilities of LLMs and the stringent data protection standards mandated by GDPR, ultimately contributing to more responsible and ethical AI practices.
Keywords: GDPR; artificial intelligence; large language models; AI Act; LLM; LLMs; data privacy; AI; legal obstacles
JEL-codes: O3
Date: 2025
Downloads: (external link)
https://www.mdpi.com/1999-5903/17/4/151/pdf (application/pdf)
https://www.mdpi.com/1999-5903/17/4/151/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:17:y:2025:i:4:p:151-:d:1623026
Future Internet is currently edited by Ms. Grace You
Bibliographic data for series maintained by MDPI Indexing Manager.