EconPapers    

Machine unlearning for generative AI

Yashaswini Viswanath, Sudha Jamthe, Suresh Lokiah and Emanuele Bianchini
Additional contact information
Yashaswini Viswanath: Resident Researcher, Business School of AI, USA
Sudha Jamthe: Technology Futurist, Business School of AI, Global South in AI, Stanford Continuing Studies, Barcelona Technology School, Elisava School of Engineering and Design, USA
Suresh Lokiah: Senior Engineering Manager, Zebra Technologies, USA
Emanuele Bianchini: Senior Director, Technology & Innovation, Consumer Technology Group, Flex, USA

Journal of AI, Robotics & Workplace Automation, 2023, vol. 3, issue 1, 37-46

Abstract: This paper introduces a new field of AI research called machine unlearning and examines the challenges and approaches involved in extending machine unlearning to generative AI (GenAI). Machine unlearning is a model-driven approach to making an existing artificial intelligence (AI) model unlearn a set of data from its training. Machine unlearning is becoming important for businesses: it helps them comply with privacy laws such as the General Data Protection Regulation (GDPR) and its customers' right to be forgotten, manage security, and remove bias that AI models learn from their training data, since it is expensive to retrain and redeploy models without the biased, security-compromising or privacy-compromising data. This paper presents the state of the art in machine unlearning approaches, such as exact unlearning, approximate unlearning, zero-shot learning (ZSL), and fast and efficient unlearning. The paper highlights the challenges of applying machine unlearning to GenAI, which is built on a transformer neural network architecture and adds further opaqueness to how large language models (LLMs) learn during pre-training, fine-tuning, transfer learning to more languages, and inference. The paper elaborates on how models retain learning in a neural network, in order to guide the various machine unlearning approaches for GenAI that the authors hope can be built upon their work. The paper suggests possible future directions of research to create transparency in LLMs, and particularly examines hallucinations in LLMs when they are extended via ZSL to perform machine translation for new languages beyond their training, to shed light on how a model stores its learning of newer languages in its memory and how it draws upon that learning during inference in GenAI applications. Finally, the paper calls for collaboration on future research in machine unlearning for GenAI, particularly LLMs, to add transparency and inclusivity to language AI.
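To make the notion of exact unlearning in the abstract concrete, here is a minimal illustrative sketch (not taken from the paper): for a ridge-regression model with closed-form training, a data point can be removed by downdating the model's sufficient statistics, and the result provably equals retraining from scratch on the retained data. The function names (`fit_ridge`, `unlearn_point`) and the synthetic data are assumptions for illustration only; real GenAI models lack such closed forms, which is why approximate unlearning methods exist.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)  # regularized Gram matrix
    b = X.T @ y
    return np.linalg.solve(A, b), A, b

def unlearn_point(A, b, x_i, y_i):
    """Exactly remove one (x_i, y_i) pair by downdating the statistics."""
    A_new = A - np.outer(x_i, x_i)  # subtract the point's Gram contribution
    b_new = b - y_i * x_i           # subtract its contribution to X^T y
    return np.linalg.solve(A_new, b_new), A_new, b_new

# Synthetic data (illustrative assumption, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

w_full, A, b = fit_ridge(X, y)
w_unlearned, _, _ = unlearn_point(A, b, X[0], y[0])   # forget point 0
w_retrained, _, _ = fit_ridge(X[1:], y[1:])           # retrain without it

# Exact unlearning: the downdated model matches full retraining.
print(np.allclose(w_unlearned, w_retrained))  # True
```

The guarantee here is structural: the downdated statistics are identical to those computed on the retained data, so the two models coincide. Transformer-based LLMs offer no analogous sufficient statistics, motivating the approximate and ZSL-based approaches the paper surveys.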

Keywords: machine unlearning; privacy; right to be forgotten; generative AI; fine-tuning; large language models; LLM; zero-shot learning; explainability
JEL-codes: G2 M15
Date: 2023

Downloads: (external link)
https://hstalks.com/article/8325/download/ (application/pdf)
https://hstalks.com/article/8325/ (text/html)
Requires a paid subscription for full access.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:aza:airwa0:y:2023:v:3:i:1:p:37-46


More articles in Journal of AI, Robotics & Workplace Automation from Henry Stewart Publications
Bibliographic data for series maintained by Henry Stewart Talks.

 
Page updated 2025-03-19
Handle: RePEc:aza:airwa0:y:2023:v:3:i:1:p:37-46