Continual Pre-Training of Language Models for Concept Prerequisite Learning with Graph Neural Networks
Xin Tang,
Kunjia Liu,
Hao Xu,
Weidong Xiao and
Zhen Tan
Additional contact information
Xin Tang: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Kunjia Liu: Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Hao Xu: Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Weidong Xiao: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Zhen Tan: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Mathematics, 2023, vol. 11, issue 12, 1-16
Abstract:
Prerequisite chains are crucial to acquiring new knowledge efficiently. Many studies have been devoted to automatically identifying prerequisite relationships between concepts from educational data. Though effective to some extent, these methods have neglected two key factors: most works fail to utilize domain-related knowledge to enhance pre-trained language models, making the textual representation of concepts less effective, and they ignore the fusion of semantic information with the structural information formed by existing prerequisites. We propose a two-stage concept prerequisite learning model (TCPL) that integrates both factors. In the first stage, we designed two continual pre-training tasks for domain-adaptive and task-specific enhancement, to obtain better textual representations. In the second stage, to leverage the complementary effects of the semantic and structural information, we optimized the encoder of the resource–concept graph and the pre-trained language model simultaneously, with hinge loss as an auxiliary training objective. Extensive experiments conducted on three public datasets demonstrated the effectiveness of the proposed approach: compared to the state-of-the-art methods, our model improved by 7.9%, 6.7%, 5.6%, and 8.4% on average in ACC, F1, AP, and AUC, respectively.
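The abstract describes a two-stage architecture: continual pre-training of the language model, followed by joint optimization of a resource–concept graph encoder and the pre-trained language model, with hinge loss as an auxiliary objective. The sketch below illustrates only the second stage under stated assumptions; it is not the authors' code. The backbone name ("bert-base-uncased"), the single simplified relational-GCN layer, the dimensions, the pair classifier, and the way the hinge term is combined with a cross-entropy term are all illustrative choices, and the first-stage pre-training tasks are omitted because the abstract does not specify them.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class SimpleRGCNLayer(nn.Module):
    """One relational-GCN layer: a per-relation linear transform plus a self-loop."""

    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations))
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_per_relation):
        # adj_per_relation: one dense, row-normalized (N, N) adjacency per relation type
        out = self.self_loop(x)
        for adj, lin in zip(adj_per_relation, self.rel_weights):
            out = out + adj @ lin(x)
        return F.relu(out)


class PrerequisiteScorer(nn.Module):
    """Scores whether concept A is a prerequisite of concept B by fusing the
    PLM text embeddings of both concepts with their graph embeddings."""

    def __init__(self, plm_name="bert-base-uncased", graph_dim=128, num_relations=2):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name)
        hid = self.plm.config.hidden_size
        self.rgcn = SimpleRGCNLayer(graph_dim, graph_dim, num_relations)
        self.classifier = nn.Linear(2 * (hid + graph_dim), 1)

    def encode_text(self, enc):
        return self.plm(**enc).last_hidden_state[:, 0]           # [CLS] embedding

    def forward(self, enc_a, enc_b, node_feats, adjs, idx_a, idx_b):
        g = self.rgcn(node_feats, adjs)                           # graph embeddings
        pair = torch.cat([self.encode_text(enc_a), g[idx_a],
                          self.encode_text(enc_b), g[idx_b]], dim=-1)
        return self.classifier(pair).squeeze(-1)                  # prerequisite logit


def joint_loss(pos_logits, neg_logits, margin=1.0):
    """Cross-entropy on labeled pairs plus a hinge-style auxiliary term that pushes
    positive-pair scores above negative-pair scores by at least the margin."""
    bce = (F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))
           + F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits)))
    hinge = F.relu(margin - pos_logits + neg_logits).mean()
    return bce + hinge


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = PrerequisiteScorer()
    # Two toy concepts on a 3-node graph with 2 relation types (all values illustrative).
    enc_a = tok(["binary search tree"], return_tensors="pt")
    enc_b = tok(["AVL tree"], return_tensors="pt")
    node_feats = torch.randn(3, 128)
    adjs = [torch.eye(3), torch.eye(3)]                           # stand-in adjacencies
    logit = model(enc_a, enc_b, node_feats, adjs,
                  idx_a=torch.tensor([0]), idx_b=torch.tensor([1]))
    print(logit.shape)                                            # torch.Size([1])

The toy usage at the bottom only checks shapes; in practice the adjacency matrices, node features, and concept texts would come from the resource–concept graph and the concept corpus described in the paper.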
Keywords: concept prerequisite relationships; pre-trained language model; relational graph convolutional networks; contrastive learning
JEL-codes: C
Date: 2023
Downloads:
https://www.mdpi.com/2227-7390/11/12/2780/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/12/2780/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:12:p:2780-:d:1175190