Large Language Models for Knowledge Graph Embedding: A Survey
Bingchen Liu,
Yuanyuan Fang,
Naixing Xu,
Shihao Hou,
Xin Li and
Qian Li
Additional contact information
Bingchen Liu: School of Software, Shandong University, Jinan 250101, China
Yuanyuan Fang: Department of Computer Science, Metropolitan College, Boston University, Boston, MA 02215, USA
Naixing Xu: School of Software, Shandong University, Jinan 250101, China
Shihao Hou: Fujian Key Laboratory of Sensing and Computing for Smart City, School of Informatics, Xiamen University, Xiamen 361005, China
Xin Li: School of Software, Shandong University, Jinan 250101, China
Qian Li: School of Software, Shandong University, Jinan 250101, China
Mathematics, 2025, vol. 13, issue 14, 1-28
Abstract:
Large language models (LLMs), trained on massive text corpora with hundreds of millions of parameters or more to understand and generate natural language, have attracted considerable attention across many fields for their superior performance. As that performance has become apparent, LLMs are increasingly applied to knowledge graph embedding (KGE)-related tasks to improve processing results. Traditional KGE representation learning methods map entities and relations into a low-dimensional vector space so that the triples of a knowledge graph satisfy a specific scoring function in that space. Building on the powerful language understanding and semantic modeling capabilities of LLMs, which have recently been invoked to varying degrees in different KGE scenarios, such as multi-modal KGE and open KGE, according to their task characteristics, researchers are increasingly exploring how to integrate LLMs to enhance knowledge representation, improve generalization to unseen entities or relations, and support reasoning beyond static graph structures. In this paper, we survey a wide range of approaches for performing LLM-related tasks across different types of KGE scenarios. To better compare these approaches, we organize each KGE scenario into a classification. We also discuss the main applications of these methods and suggest several forward-looking directions for this new research area.
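As an illustration of the scoring-function idea mentioned in the abstract (not part of the original record), the sketch below implements the classical TransE score, one common instance of such a function; the function name transe_score and the toy random embeddings are hypothetical choices made for this example.

```python
# Minimal illustrative sketch of one classical KGE scoring function (TransE).
# The survey covers many scoring functions; this is only a toy example.
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Score a triple (h, r, t); higher (less negative) means more plausible.

    TransE treats a relation as a translation in the embedding space,
    so a true triple should roughly satisfy h + r = t.
    """
    return -float(np.linalg.norm(h + r - t, ord=1))

# Toy usage: random low-dimensional embeddings for a single triple.
rng = np.random.default_rng(0)
dim = 50
h, r, t = (rng.normal(size=dim) for _ in range(3))
print(transe_score(h, r, t))
```

Under such a score, training pushes embeddings of observed triples toward high scores and corrupted (negative) triples toward low scores.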
Keywords: knowledge graph; large language model; knowledge graph embedding
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/14/2244/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/14/2244/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:14:p:2244-:d:1699262