MAO Dian-hui, LI Rui-xuan*, WANG Ke-hao, et al. Knowledge Representation Learning Combining Hierarchical Relational Attention and Joint Embedding [J]. Computer Technology and Development, 2025, (06): 108-115. [doi:10.20165/j.cnki.ISSN1673-629X.2025.0031]

Knowledge Representation Learning Combining Hierarchical Relational Attention and Joint Embedding

Computer Technology and Development (《计算机技术与发展》) [ISSN:1006-6977/CN:61-1281/TN]

Volume:
Issue:
2025, No. 06
Pages:
108-115
Section:
Artificial Intelligence
Publication Date:
2025-06-10

Article Info

Title:
Knowledge Representation Learning Combining Hierarchical Relational Attention and Joint Embedding
Article Number:
1673-629X(2025)06-0108-08
Authors:
MAO Dian-hui1, LI Rui-xuan1*, WANG Ke-hao1, ZHAO Zhi-hua2
1. School of Computing and Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China;
2. School of Law, China University of Political Science and Law, Beijing 102249, China
Keywords:
knowledge representation learning; hierarchical relational attention; joint embedding; graph convolutional networks; link prediction
CLC Number:
TP391
DOI:
10.20165/j.cnki.ISSN1673-629X.2025.0031
Abstract:
Knowledge representation learning maps entities and relations in knowledge graphs to low-dimensional continuous vector spaces for computational processing of structured knowledge. Current models based on graph convolutional networks enhance the feature representation of central entities by weighted aggregation of adjacent node features. However, these models often apply a fixed-weight aggregation strategy, failing to fully differentiate the influence of adjacent nodes on central entities across relation types. To address this, a knowledge representation learning model combining hierarchical relational attention and joint embedding, named HRE-JEM, is proposed. The model uses a self-attention mechanism to dynamically update the vector representations of central entities, capturing the heterogeneous influence of different relation types. In the encoder, entities and relations are jointly embedded, while ConvE serves as the decoder to analyze the spatial structure of triples. Comparative and ablation experiments on the WN18RR and FB15k-237 datasets show that the proposed model outperforms the baseline models on multiple metrics, demonstrating its effectiveness and practicality in knowledge representation learning. The impact of varying the number of attention heads on model performance is also discussed.
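As a rough illustration of the relation-aware aggregation the abstract describes, the sketch below concatenates each neighbor's entity and relation embeddings before computing multi-head self-attention weights, so that the center entity's update depends on relation type rather than on a fixed weight. The dimensions and projection names (W_q, W_k, W_v) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relational_attention(center, neighbors, relations, W_q, W_k, W_v, num_heads=2):
    # Each neighbor "message" concatenates the neighbor entity embedding
    # with its relation embedding, so the attention score depends on the
    # relation type instead of using a fixed aggregation weight.
    msgs = np.concatenate([neighbors, relations], axis=-1)  # (n, 2*d_in)
    q = center @ W_q        # (d_out,)
    k = msgs @ W_k          # (n, d_out)
    v = msgs @ W_v          # (n, d_out)
    h = q.shape[0] // num_heads
    heads = []
    for i in range(num_heads):
        sl = slice(i * h, (i + 1) * h)
        scores = k[:, sl] @ q[sl] / np.sqrt(h)   # (n,) per-neighbor scores
        alpha = softmax(scores)                  # relation-aware weights
        heads.append(alpha @ v[:, sl])           # (h,) weighted aggregate
    return np.concatenate(heads)                 # (d_out,) updated center

# toy example with random embeddings
rng = np.random.default_rng(0)
d_in, d_out, n = 8, 8, 5
center = rng.normal(size=d_in)
neighbors = rng.normal(size=(n, d_in))
relations = rng.normal(size=(n, d_in))
W_q = rng.normal(size=(d_in, d_out))
W_k = rng.normal(size=(2 * d_in, d_out))
W_v = rng.normal(size=(2 * d_in, d_out))
updated = relational_attention(center, neighbors, relations, W_q, W_k, W_v)
print(updated.shape)
```

Changing a neighbor's relation embedding changes its attention weight, which is the behavior the fixed-weight aggregation in prior GCN-based models lacks.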
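The link-prediction comparisons on WN18RR and FB15k-237 are conventionally evaluated by ranking the gold entity among all candidates under the filtered protocol and reporting MRR and Hits@k. A minimal sketch of that protocol, using stand-in scores rather than model outputs:

```python
def rank_of_target(scores, target, known):
    # Filtered setting: other known-true candidates are excluded
    # before ranking the target (rank 1 is best).
    t = scores[target]
    better = sum(1 for i, s in enumerate(scores)
                 if s > t and i != target and i not in known)
    return better + 1

def mrr_and_hits(ranks, k=10):
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(1 for r in ranks if r <= k) / len(ranks)
    return mrr, hits

# toy example: three queries with candidate scores and gold targets
ranks = [
    rank_of_target([0.9, 0.2, 0.5], target=0, known=set()),  # rank 1
    rank_of_target([0.1, 0.8, 0.3], target=2, known={1}),    # rank 1 after filtering
    rank_of_target([0.7, 0.6, 0.4], target=2, known=set()),  # rank 3
]
mrr, hits1 = mrr_and_hits(ranks, k=1)
print(ranks, mrr, hits1)
```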
Last Update: 2025-06-10