Colleges and universities are important stakeholders for regulating large language models and other emerging AI
Veljko Dubljević
Technology in Society, 2024, vol. 76, issue C
Abstract:
AI technology has already gone through one “winter,” and alarmist thinking may cause yet another one. Calls for a moratorium on AI research increase the salience of the public request for comment on “AI accountability.” Prohibitive approaches are an overreaction, especially when leveled at virtual (non-embodied) AI agents. While there are legitimate concerns regarding the expansion of AI models like ChatGPT in society, a better approach would be to forge a partnership between academia and industry and to utilize the infrastructure of campuses to authenticate users and oversee new AI research. The public could also be involved, with public libraries authenticating users. This staged approach to embedding AI in society would facilitate addressing ethical concerns and implementing virtual AI agents in a responsible and safe manner.
Keywords: Artificial intelligence (AI); Ethics; Public policy; Legitimacy; Oversight
Date: 2024
Citations: 1 (as tracked in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/S0160791X24000289 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:teinso:v:76:y:2024:i:c:s0160791x24000289
DOI: 10.1016/j.techsoc.2024.102480
Technology in Society is currently edited by Charla Griffy-Brown