CIST: Differentiating Concepts and Instances Based on Spatial Transformation for Knowledge Graph Embedding
Pengfei Zhang,
Dong Chen,
Yang Fang,
Xiang Zhao and
Weidong Xiao
Additional contact information
Pengfei Zhang: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Dong Chen: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Yang Fang: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Xiang Zhao: Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Weidong Xiao: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Mathematics, 2022, vol. 10, issue 17, 1-16
Abstract:
Knowledge representation learning embeds the entities and relations of a knowledge graph as dense, low-dimensional vectors in a continuous space, capturing the features and properties of the graph. This technique facilitates computation and reasoning over knowledge graphs, which benefits many downstream tasks. To alleviate the insufficient entity representation learning caused by sparse knowledge graphs, some researchers have proposed knowledge graph embedding models based on instances and concepts, which exploit the latent semantic connections between the concepts and instances contained in a knowledge graph to enhance the embedding. However, these models either embed instances and concepts in the same space or ignore the transitivity of isA relations, leading to inaccurate embeddings of concepts and instances. To address these shortcomings, we propose CIST, a knowledge graph embedding model that differentiates concepts and instances based on spatial transformation. The model alleviates the crowding of similar instances or concepts in the semantic space by modeling them in separate embedding spaces, and it adds a learnable parameter that adjusts the neighboring range of each concept embedding, distinguishing the hierarchical information of different concepts and thereby modeling the transitivity of isA relations. These features of instances and concepts serve as auxiliary information, so modeling them thoroughly alleviates the insufficient entity representation learning issue. For the experiments, we chose two tasks, link prediction and triple classification, on two real-life datasets, YAGO26K-906 and DB111K-174. Compared with the state of the art, CIST achieves the best performance in most cases. Specifically, CIST outperforms the SOTA model JOIE by 51.1% on Hits@1 in link prediction and by 15.2% on F1 score in triple classification.
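The abstract does not give CIST's scoring functions, but the mechanism it describes — instances and concepts in separate spaces linked by a spatial transformation, plus a learnable parameter controlling each concept's neighboring range — can be illustrated with ball-style concept embeddings. The sketch below is an assumption-laden toy, not the authors' formulation: the linear map `M`, the `softplus` radius parameterization, and both score functions are illustrative choices.

```python
import numpy as np

def softplus(x: float) -> float:
    # map a free learnable parameter to a positive radius ("neighboring range")
    return float(np.log1p(np.exp(x)))

def instance_of_score(inst, M, center, rho):
    # Project the instance into the concept space with a (hypothetical)
    # linear spatial transformation M, then measure how far it falls
    # outside the concept's ball of learnable radius softplus(rho).
    # A score of 0 means the instance lies inside the concept's range.
    return max(0.0, float(np.linalg.norm(M @ inst - center)) - softplus(rho))

def subclass_of_score(c_sub, rho_sub, c_sup, rho_sup):
    # isA between concepts as ball containment: the sub-concept's ball
    # should sit entirely inside the super-concept's ball. Containment
    # composes, which is one way to model the transitivity of isA.
    return max(0.0, float(np.linalg.norm(c_sub - c_sup))
               + softplus(rho_sub) - softplus(rho_sup))

# toy 3-d instance space and 2-d concept space
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])    # assumed spatial transformation
animal = np.zeros(2)               # broad concept: large radius parameter
dog = np.array([0.5, 0.0])         # narrow concept: small radius parameter
rex = np.array([0.4, 0.0, 0.0])    # an instance in the instance space

print(subclass_of_score(dog, 0.0, animal, 2.0))  # dog isA animal holds -> 0.0
print(instance_of_score(rex, M, dog, 0.0))       # rex instanceOf dog -> 0.0
```

Because the sub-ball sits inside the super-ball, any instance inside `dog`'s range is automatically inside `animal`'s, which is the geometric reading of isA transitivity that a shared-space point embedding cannot express.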
Keywords: knowledge graph; knowledge graph embedding; concepts and instances
JEL-codes: C
Date: 2022
Citations: 2 (tracked in EconPapers)
Downloads:
https://www.mdpi.com/2227-7390/10/17/3161/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/17/3161/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:17:p:3161-:d:905245