Generative AI Models and Fake Bibliographic Information in Scholarly Publications: Causes, Typology, Consequences, and Implications for Managerial Decision-Making
S. A. Morozova
Administrative Consulting, 2026, issue 2
Abstract:
In the context of the digital transformation of science and education, the widespread adoption of generative artificial intelligence models functions both as a useful software solution that optimizes routine processes and large-scale data processing and as a source of new risks to the quality of scholarly communication, requiring managerial reflection. The article examines the phenomenon of fake bibliographic information arising from the use of such models in scholarly publishing practices. The study analyzes the terminological diversity in the field, substantiates the choice of the key concept of “confabulation”, and reviews Russian and international research.
Objective: to analyze the causes, types, and consequences of generating unreliable bibliographic references and to determine the significance of the identified risks for managerial decision-making at both the federal and institutional levels.
Methodology and Methods: the author’s approach to selecting and analyzing published Russian-language scholarly works, followed by verification of their bibliographic lists; a typology of the identified confabulations is proposed and its application justified.
Results: the study demonstrates increasing use of generated unreliable references across publications of various subject areas and types, including peer-reviewed journals. Key causes of confabulation are identified, related both to how generative models function and to authors’ practices in using them.
It is also shown that confabulated bibliographies can serve as an indicator of generated fragments within scholarly texts, which has direct implications for managing publication quality.
Conclusions: the findings confirm the need to move from declarative regulation toward comprehensive managerial solutions, including institutional policies for the use of generative technologies, revised quality-control procedures for scholarly work, access to up-to-date tools, and targeted development of competencies for the responsible use of artificial intelligence among authors, editors, and academic managers.
Discussion: the article highlights the risk that unreliable references will be reproduced in subsequent publications, forming “chains of dissemination” of false scholarly information. It emphasizes the need to consolidate research on detecting generated texts, with particular attention to bibliographic data, proposes priority directions for administrative decisions, and notes the absence of an organizational framework governing users’ application of generative models.
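The verification step the abstract describes, checking each entry in a bibliography against an external source, can be sketched in a few lines. The following is a minimal illustration only, not the author's method: it assumes the public Crossref REST API (`api.crossref.org/works`) as the lookup source and an arbitrary string-similarity threshold, and a failed match is merely a signal that a reference may be confabulated, since no single database indexes every venue.

```python
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher


def title_similarity(claimed: str, found: str) -> float:
    """Normalized similarity (0..1) between a cited title and a candidate title."""
    return SequenceMatcher(None, claimed.lower().strip(), found.lower().strip()).ratio()


def verify_reference(title: str, threshold: float = 0.9) -> bool:
    """Return True if Crossref lists a work whose title closely matches `title`.

    NOTE: a False result only flags the reference for manual review; Crossref
    coverage is incomplete, so a miss does not prove the citation is fake.
    """
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": "3"})
    with urllib.request.urlopen(f"https://api.crossref.org/works?{query}") as resp:
        items = json.load(resp)["message"]["items"]
    return any(
        title_similarity(title, candidate) >= threshold
        for item in items
        for candidate in item.get("title", [])
    )


if __name__ == "__main__":
    # Network call: queries Crossref for a claimed citation title.
    print(verify_reference("Attention Is All You Need"))
```

In practice such a check would be one stage of a pipeline, combined with DOI resolution and author/year comparison, with every flagged entry passed to a human reviewer rather than rejected automatically.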
Date: 2026
Downloads: https://www.acjournal.ru/jour/article/viewFile/2962/2134 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:acf:journl:y:2026:id:2962
More articles in Administrative Consulting from Russian Presidential Academy of National Economy and Public Administration. North-West Institute of Management.