MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation
Pengfei Yue,
Hailiang Tang,
Wanyu Li,
Wenxiao Zhang and
Bingjie Yan
Additional contact information
Pengfei Yue: School of Information Science and Engineering, Qilu Normal University, Jinan 250200, China
Hailiang Tang: School of Information Science and Engineering, Qilu Normal University, Jinan 250200, China
Wanyu Li: School of Humanities, Arts, and Social Sciences, Kunsan National University, Gunsan 54150, Republic of Korea
Wenxiao Zhang: School of Computer Science and Engineering, Kunsan National University, Gunsan 54150, Republic of Korea
Bingjie Yan: School of Mathematics, High School Attached to Shandong Normal University, Jinan 250200, China
Mathematics, 2025, vol. 13, issue 9, 1-13
Abstract:
Knowledge graph completion (KGC) is a critical task for addressing the incompleteness of knowledge graphs and supporting downstream applications. However, it faces significant challenges, including insufficient structural information and uneven entity distribution. Although existing methods alleviate these issues to some extent, they often rely heavily on extensive training and fine-tuning, which limits efficiency. To tackle these challenges, we introduce MLKGC, a novel framework that combines large language models (LLMs) with multi-modal modules (MMs). The LLMs leverage their advanced language understanding and reasoning abilities to enrich contextual information for KGC, while the MMs integrate multi-modal data, such as audio and images, to bridge knowledge gaps. This integration strengthens the model's handling of long-tail entities, enhances its reasoning, and enables more robust information integration from diverse inputs. By harnessing the synergy between LLMs and MMs, our approach reduces dependence on traditional text-based training and fine-tuning, providing a more efficient and accurate solution for KGC, and it offers greater flexibility in modeling complex relationships and diverse entities. Extensive experiments on multiple benchmark KGC datasets demonstrate that MLKGC effectively leverages the strengths of both LLMs and multi-modal data, achieving superior performance on link-prediction tasks.
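The abstract describes fusing LLM-derived textual context with multimodal (e.g., image) signals to rank candidate entities for link prediction. The toy sketch below illustrates the general idea only, under assumptions not taken from the paper: it fuses two modality embedding vectors by a weighted average and ranks candidate tail entities by cosine similarity. All names (`fuse`, `score_candidates`, the `alpha` weight) are hypothetical and do not reproduce MLKGC's actual architecture.

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fuse(text_emb, image_emb, alpha=0.7):
    # hypothetical fusion rule: convex combination of modality embeddings
    return [alpha * t + (1 - alpha) * i for t, i in zip(text_emb, image_emb)]

def score_candidates(text_emb, image_emb, candidates, alpha=0.7):
    # rank candidate tail entities by similarity to the fused query embedding
    query = fuse(text_emb, image_emb, alpha)
    return sorted(candidates.items(),
                  key=lambda kv: cosine(query, kv[1]),
                  reverse=True)

# toy 2-d embeddings for a (head, relation) query and two candidate tails
ranked = score_candidates(
    text_emb=[0.9, 0.1],
    image_emb=[0.8, 0.2],
    candidates={"Paris": [1.0, 0.0], "Tokyo": [0.0, 1.0]},
)
print(ranked[0][0])  # the candidate closest to the fused query
```

In a real system the text embedding would come from an LLM encoder and the image embedding from a vision model; the point of the sketch is only that a simple fusion step lets evidence from one modality compensate for gaps in the other, which is how the abstract motivates handling long-tail entities.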
Keywords: large language models; multi-modal module; knowledge graph completion
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/9/1463/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/9/1463/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:9:p:1463-:d:1645806
Mathematics is currently edited by Ms. Emma He
Bibliographic data for series maintained by MDPI Indexing Manager.