[1] QIN Yu, LIU Fang, HUO Hong-wen. Image Segmentation Algorithm of Gastrointestinal Polyps Based on Self-attention Mechanism [J]. Computer Technology and Development, 2025, (01): 67-72. [doi:10.20165/j.cnki.ISSN1673-629X.2024.0306]

Image Segmentation Algorithm of Gastrointestinal Polyps Based on Self-attention Mechanism

Computer Technology and Development [ISSN:1006-6977/CN:61-1281/TN]

Volume:
Issue:
2025, No. 01
Pages:
67-72
Column:
Media Computing
Publication Date:
2025-01-10

Article Info

Title:
Image Segmentation Algorithm of Gastrointestinal Polyps Based on Self-attention Mechanism
Article Number:
1673-629X(2025)01-0067-06
Author(s):
QIN Yu, LIU Fang, HUO Hong-wen
School of Mathematics and Statistics, Changchun University of Technology, Changchun 130012, China
Keywords:
gastrointestinal polyp; self-attention mechanism; medical image segmentation; GF-Net network; loss function
CLC Number:
TP183
DOI:
10.20165/j.cnki.ISSN1673-629X.2024.0306
Abstract:
Automatic segmentation of gastrointestinal polyps from intestinal endoscopy images can provide an important basis for the early detection and prevention of precancerous lesions. To address the high variability of lesion-area features in gastrointestinal polyps, the low contrast between lesions and normal tissue, and unclear edge-texture segmentation, improvements were made to the GF-Net segmentation network so that the improved edge guidance module pays more attention to edge information. Specifically, a self-attention mechanism was introduced layer by layer into the edge guidance module, allowing the model to fully learn the global features of the image, better understand its context, and apply this rich semantic information to the precise segmentation of gastrointestinal polyps. At the same time, a segmentation loss function and an edge loss function were combined: the segmentation loss focuses on overall segmentation accuracy, while the edge loss focuses on preserving the clarity and continuity of edge details. The improved model was evaluated experimentally on the Kvasir-sessile dataset. The effectiveness and superiority of the proposed method were verified by computing the Dice coefficient, sensitivity, specificity, and other evaluation metrics, and by visualizing the lesion regions. Compared with other deep learning network models, the improved GF-Net model shows higher accuracy and robustness in gastrointestinal polyp segmentation tasks.
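The two ingredients the abstract names, scaled dot-product self-attention (so every spatial position attends to every other, capturing global context) and a combined segmentation-plus-edge loss, can be sketched as follows. This is a minimal, framework-free illustration of the general mechanisms, not the authors' GF-Net implementation; the projection matrices, the crude neighbour-difference boundary map, and the weighting factor `lam` are all hypothetical choices made for the sketch.

```python
import math

def matmul(a, b):
    """Multiply an (n x k) matrix by a (k x m) matrix (lists of lists)."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def softmax(row):
    """Numerically stable softmax over one row of attention scores."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x, wq, wk, wv):
    """x: (n_tokens x d) features; wq/wk/wv: (d x d) projection weights.
    Every token attends to every other token, so the output at each
    position mixes in global context -- the property the abstract relies on."""
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    d = len(wq[0])
    kt = [list(col) for col in zip(*k)]          # K transposed
    scores = [[s / math.sqrt(d) for s in row]    # Q K^T / sqrt(d)
              for row in matmul(q, kt)]
    weights = [softmax(row) for row in scores]   # row-wise attention weights
    return matmul(weights, v)

def dice_loss(pred, target, eps=1.0):
    """Segmentation term: 1 - Dice overlap between flattened binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def edge_map(mask, width):
    """Crude boundary map for a flattened binary mask: 1 where a pixel
    differs from its right or lower neighbour (illustrative only)."""
    h = len(mask) // width
    out = [0.0] * len(mask)
    for i in range(h):
        for j in range(width):
            p = mask[i * width + j]
            right = mask[i * width + j + 1] if j + 1 < width else p
            down = mask[(i + 1) * width + j] if i + 1 < h else p
            if p != right or p != down:
                out[i * width + j] = 1.0
    return out

def combined_loss(pred, target, width, lam=0.5):
    """Total loss = segmentation term + lam * edge term, mirroring the
    abstract: overall accuracy plus clarity/continuity of boundaries."""
    seg = dice_loss(pred, target)
    edge = dice_loss(edge_map(pred, width), edge_map(target, width))
    return seg + lam * edge
```

In the paper's setting the attention input would be flattened CNN feature maps inside the edge guidance module, and the edge term would be computed against ground-truth boundary maps rather than this neighbour-difference heuristic.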


Last Update: 2025-01-10