EconPapers
Economics at your fingertips

Responsible and Ethical Use of AI in Education: Are We Forcing a Square Peg into a Round Hole?

Alexander Amigud and David J. Pell
Additional contact information
Alexander Amigud: Faculty of Management, International Business University, Toronto, ON M5S 2V1, Canada
David J. Pell: Faculty of Arts and Social Sciences (FASS), The Open University, Milton Keynes MK7 6AA, UK

World, 2025, vol. 6, issue 2, 1-18

Abstract: The emergence of generative AI has created a major dilemma for higher education institutions: as they prepare students for the workforce, developing digital skills must become a normative aim, while academic integrity and credibility must simultaneously be preserved. The challenge they face is not simply a matter of using AI responsibly but of reconciling two opposing duties: (A) preparing students for the future of work, and (B) maintaining the traditional role of developing personal academic skills, such as critical thinking, the ability to acquire knowledge, and the capacity to produce original work. Institutions must balance these objectives while addressing financial considerations, creating value for students and employers, and meeting accreditation requirements. Against this backdrop, this multiple-case study of fifty universities across eight countries examined institutional responses to generative AI. The content analysis revealed apparent confusion and a lack of established best practices, as proposed actions varied widely, from complete bans on generated content to the development of custom AI assistants for students and faculty. Often, the onus fell on individual faculty members to exercise discretion in the use of AI, suggesting inconsistent application of academic policy. We conclude that time and innovation will be required to resolve the apparent confusion of higher education institutions in responding to this challenge, and we suggest some possible approaches. Our results, however, indicate that institutions' top concern at present is the potential for irresponsible use of AI by students to cheat on assessments. We therefore recommend that, in the short term, and likely in the long term, the credibility of awards be urgently safeguarded, and argue that this could be achieved by integrating at least some human-proctored assessments into courses, e.g., in the form of real-location examinations and viva voces.

Keywords: generative AI; academic integrity; plagiarism; academic quality; ChatGPT; student assessment; learning technology (search for similar items in EconPapers)
JEL-codes: G15 G17 G18 L21 L22 L25 L26 Q42 Q43 Q47 Q48 R51 R52 R58 (search for similar items in EconPapers)
Date: 2025
References: Add references at CitEc
Citations:

Downloads: (external link)
https://www.mdpi.com/2673-4060/6/2/81/pdf (application/pdf)
https://www.mdpi.com/2673-4060/6/2/81/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jworld:v:6:y:2025:i:2:p:81-:d:1671331

Access Statistics for this article

World is currently edited by Ms. Cassie Hu

More articles in World from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Page updated 2025-06-04
Handle: RePEc:gam:jworld:v:6:y:2025:i:2:p:81-:d:1671331