WANG Mei, YU Yuan-ze, YIN Chuan-long. Deep Multi-view Clustering Method Based on Multi-level Feature Fusion and Contrastive Learning [J]. Computer Technology and Development, 2025, (04): 86-92. [doi:10.20165/j.cnki.ISSN1673-629X.2024.0360]

Deep Multi-view Clustering Method Based on Multi-level Feature Fusion and Contrastive Learning

Computer Technology and Development (《计算机技术与发展》) [ISSN: 1673-629X]

Issue: 2025, No. 04
Pages: 86-92
Section: Artificial Intelligence
Publication Date: 2025-04-10

Article Info

Title:
Deep Multi-view Clustering Method Based on Multi-level Feature Fusion and Contrastive Learning
Article ID:
1673-629X(2025)04-0086-07
Author(s):
WANG Mei (王梅), YU Yuan-ze (于源泽), YIN Chuan-long (尹传龙)
School of Computer and Information Technology, Northeast Petroleum University, Daqing 163318, Heilongjiang, China
Keywords:
deep multi-view clustering; multi-level feature fusion; contrastive learning; semantic consistency learning; robustness
CLC Number:
TP301
DOI:
10.20165/j.cnki.ISSN1673-629X.2024.0360
Abstract:
Multi-view clustering is an unsupervised multi-view learning method that requires no labels for multi-view data and mines common semantics from multiple views through clustering alone. To address the limitations of traditional multi-view clustering methods in feature fusion and in handling cross-view consistency, this paper proposes a deep multi-view contrastive-learning clustering method based on multi-level feature fusion, which optimizes clustering performance and strengthens the model's ability both to capture the common semantics of multi-view data and to keep them discriminative in the feature space. The method extracts features from each view independently with primary and secondary encoders, and uses a multi-level feature fusion module based on a gating mechanism to dynamically adjust the fusion weights of the features. A contrastive learning mechanism is also introduced: a feature contrastive loss, a semantic-label contrastive loss, and a weighted mutual-information loss are designed to further balance cross-view consistency against the feature-reconstruction objective. Experiments on public multi-view datasets validate the effectiveness of the proposed method, which significantly improves clustering accuracy and robustness over the compared methods.
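The gating-based fusion described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's actual implementation: the gate parameters `W` and `b`, and the choice of feeding the concatenated view features into the gate, are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(view_feats, W, b):
    """One possible gating-based multi-view fusion (sketch).

    view_feats: list of V arrays, each (n, d) -- features of one view
    W: (V*d, V) gate weights, b: (V,) gate bias -- hypothetical parameters
    Returns fused features of shape (n, d).
    """
    concat = np.concatenate(view_feats, axis=1)        # (n, V*d)
    gates = sigmoid(concat @ W + b)                    # (n, V): one gate per view, per sample
    gates = gates / gates.sum(axis=1, keepdims=True)   # normalize so the view weights sum to 1
    # weighted sum of the per-view features, weights chosen per sample
    return sum(g[:, None] * z for g, z in zip(gates.T, view_feats))
```

Because the normalized gates form a convex combination, each fused sample lies between the corresponding per-view features, while the weights themselves vary per sample.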
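The feature contrastive loss can likewise be illustrated with a standard InfoNCE-style formulation, in which the same sample seen from two views forms the positive pair and all other samples act as negatives. The cosine similarity and the temperature `tau` are common defaults assumed here, not details taken from the paper.

```python
import numpy as np

def info_nce(za, zb, tau=0.5):
    """Cross-view InfoNCE-style feature contrastive loss (sketch).

    za, zb: (n, d) features of the same n samples from two views.
    Row i of za and row i of zb are the positive pair; the other
    rows of zb serve as negatives.
    """
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau                        # (n, n) scaled cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))              # pull positive pairs together
```

Minimizing this loss pushes each sample's two view representations toward each other while pushing apart representations of different samples, which is the cross-view consistency the abstract refers to.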
Last Update: 2025-04-10