[1] 倪团雄, 洪智勇, 余文华, 等. 基于卷积注意力和对比学习的多视图聚类[J]. 计算机技术与发展, 2023, 33(08): 59-65. [doi:10.3969/j.issn.1673-629X.2023.08.009]
 NI Tuan-xiong, HONG Zhi-yong, YU Wen-hua, et al. Multi-view Clustering Based on Convolution Attention and Contrast Learning[J]. Computer Technology and Development, 2023, 33(08): 59-65. [doi:10.3969/j.issn.1673-629X.2023.08.009]

基于卷积注意力和对比学习的多视图聚类

《计算机技术与发展》[ISSN:1006-6977/CN:61-1281/TN]

卷/Volume: 33
期数/Issue: 2023年08期
页码/Pages: 59-65
栏目/Column: 媒体计算 (Media Computing)
出版日期/Publication Date: 2023-08-10

文章信息/Info

Title:
Multi-view Clustering Based on Convolution Attention and Contrast Learning
文章编号:
1673-629X(2023)08-0059-07
作者:
倪团雄 1,2, 洪智勇 1,2, 余文华 1,2, 张昕 1
1. 五邑大学 智能制造学部,广东 江门 529020;
2. 粤港澳工业大数据协同创新中心,广东 江门 529020
Author(s):
NI Tuan-xiong 1,2, HONG Zhi-yong 1,2, YU Wen-hua 1,2, ZHANG Xin 1
1. Department of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China;
2. Guangdong-Hong Kong-Macao Industrial Big Data Collaborative Innovation Center, Jiangmen 529020, China
关键词:
编码器; 多视图聚类; 卷积注意力; 对比学习; 深度学习
Keywords:
encoder; multi-view clustering; convolutional attention; contrastive learning; deep learning
分类号:
TP301.6
DOI:
10.3969/j.issn.1673-629X.2023.08.009
摘要:
多视图聚类能够综合不同视图的互补信息,往往能获得比单一视图更好的效果。然而,传统多视图聚类方法受限于线性和浅层的学习函数,难以表征数据的深层信息;现有的深度学习方法在表征多视图数据时,对多维度的细节特征关注度有所不足。针对这些问题,提出一种基于卷积注意力机制的编码器模型(AEMC)。该模型根据不同视图的特定表征,在编码器中融入卷积注意力模块,自适应学习各个视图的关键特征;此外,为了优化模型,根据编码器表征,通过对比学习策略构造正负样本,使正样本间的相似度增加、负样本间的相似度减少,引导聚类过程,从而使其更具鲁棒性。实证结果表明,该模型优于当前大多数主流方法:在 E-MNIST、E-FMNIST、VOC 和 RGB-D 数据集上,聚类精度比基准模型分别提高了 10.2%、8.1%、7.4% 和 4.9%;在手写数据集 E-MNIST 和 E-FMNIST 上,聚类准确率分别高于目前最优的对比聚类方法(CoMVC)0.7% 和 1.3%;在 VOC、RGB-D 数据集上略低于 CoMVC。
Abstract:
Multi-view clustering can synthesize complementary information from different views and often achieves better results than a single view. However, traditional multi-view clustering methods are limited by linear and shallow learning functions, which makes it difficult to characterize the deep information of the data, and existing deep learning methods pay insufficient attention to multi-dimensional detailed features when characterizing multi-view data. To solve these problems, an encoder model based on a convolutional attention mechanism (AEMC) is proposed, which integrates a convolutional attention module into the encoder according to the specific representation of each view, in order to adaptively learn that view's key features. In addition, to optimize the model, positive and negative samples are constructed from the encoder representations through a contrastive learning strategy, so that the similarity between positive samples increases and the similarity between negative samples decreases, guiding the clustering process and making it more robust. The empirical results show that the model outperforms most current mainstream methods: its clustering accuracy on the E-MNIST, E-FMNIST, VOC and RGB-D datasets is improved by 10.2%, 8.1%, 7.4% and 4.9% over the benchmark model, respectively, and its clustering accuracy on the handwriting datasets E-MNIST and E-FMNIST is higher than that of the current best contrastive clustering method (CoMVC) by 0.7% and 1.3%, respectively, while being slightly lower than CoMVC on the VOC and RGB-D datasets.
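The contrastive objective the abstract describes — increasing the similarity of positive pairs built from different views of the same sample, while decreasing the similarity of cross-sample negatives — can be sketched as a minimal InfoNCE-style loss. This is an illustrative simplification, not the paper's exact formulation: the function names, the cosine similarity choice, and the `temperature` parameter are assumptions for the sketch.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(view1, view2, temperature=0.5):
    """Simplified InfoNCE-style loss over paired embeddings from two views.

    view1[i] and view2[i] encode the same sample (a positive pair);
    all cross-sample pairs (i != j) act as negatives. Minimizing the
    loss pulls positive pairs together and pushes negatives apart.
    """
    n = len(view1)
    loss = 0.0
    for i in range(n):
        pos = math.exp(cosine_sim(view1[i], view2[i]) / temperature)
        denom = sum(
            math.exp(cosine_sim(view1[i], view2[j]) / temperature)
            for j in range(n)
        )
        loss += -math.log(pos / denom)
    return loss / n
```

With aligned view embeddings the loss is lower than with mismatched ones, which is the signal that guides clustering in the contrastive setting; in the full model this loss would be computed on the outputs of the attention-equipped encoders and backpropagated through them.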


更新日期/Last Update: 2023-08-10