Deepfake-Style AI Tutors in Higher Education: A Mixed-Methods Review and Governance Framework for Sustainable Digital Education

Hanan Sharif, Amara Atif and Arfan Ali Nagra
Additional contact information
Hanan Sharif: Faculty of Computer Science, Lahore Garrison University, Lahore 5400, Pakistan
Amara Atif: School of Computer Science, University of Technology Sydney, Sydney 2007, Australia
Arfan Ali Nagra: Faculty of Computer Science, Lahore Garrison University, Lahore 5400, Pakistan

Sustainability, 2025, vol. 17, issue 21, 1-27

Abstract: Deepfake-style AI tutors are emerging in online education, offering personalized and multilingual instruction while introducing risks to integrity, privacy, and trust. This study aims to understand their pedagogical potential and governance needs for responsible integration. A PRISMA-guided systematic review of 42 peer-reviewed studies (2015–early 2025) was conducted from 362 screened records, complemented by semi-structured questionnaires with 12 assistant professors (mean experience = 7 years). Thematic analysis using deductive codes achieved strong inter-coder reliability (κ = 0.81). Four major themes were identified: personalization and engagement, detection challenges and integrity risks, governance and policy gaps, and ethical and societal implications. The results indicate that while deepfake AI tutors enhance engagement, adaptability, and scalability, they also pose risks of impersonation, assessment fraud, and algorithmic bias. Current detection approaches based on pixel-level artifacts, frequency features, and physiological signals remain imperfect. To mitigate these challenges, a four-pillar governance framework is proposed, encompassing Transparency and Disclosure, Data Governance and Privacy, Integrity and Detection, and Ethical Oversight and Accountability, supported by a policy checklist, responsibility matrix, and risk-tier model. Deepfake AI tutors hold promise for expanding access to education, but fairness-aware detection, robust safeguards, and AI literacy initiatives are essential to sustain trust and ensure equitable adoption. These findings not only strengthen the ethical and governance foundations for generative AI in higher education but also contribute to the broader agenda of sustainable digital education. By promoting transparency, fairness, and equitable access, the proposed framework advances the long-term sustainability of learning ecosystems and aligns with United Nations Sustainable Development Goal 4 (Quality Education) through responsible innovation and institutional resilience.
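
Note: the inter-coder reliability statistic reported above is Cohen's kappa, which corrects raw coder agreement for agreement expected by chance. The underlying coding tables are not given in this record, so the formula below is only the standard definition, not the authors' computation:

κ = (p_o − p_e) / (1 − p_e)

where p_o is the observed proportion of coding decisions on which the coders agree and p_e is the proportion of agreement expected by chance; a value of κ = 0.81 is conventionally read as strong agreement beyond chance.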

Keywords: deepfake AI tutors; synthetic media in education; online education governance; academic integrity; AI ethics in education; detection of deepfakes; privacy and fairness in AI; AI literacy; sustainable education; digital sustainability; SDG 4 quality education
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2071-1050/17/21/9793/pdf (application/pdf)
https://www.mdpi.com/2071-1050/17/21/9793/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:17:y:2025:i:21:p:9793-:d:1786651

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jsusta:v:17:y:2025:i:21:p:9793-:d:1786651