[1] CHEN Zong-nan, JIN Jia-rui, PAN Jia-hui*. Swin Transformer-based 4-D EEG Emotion Recognition [J]. Computer Technology and Development, 2023, 33(12): 178-184. [doi:10.3969/j.issn.1673-629X.2023.12.025]

Swin Transformer-based 4-D EEG Emotion Recognition

Computer Technology and Development (《计算机技术与发展》) [ISSN: 1006-6977 / CN: 61-1281/TN]

Volume: 33
Issue: No. 12, 2023
Pages: 178-184
Section: Artificial Intelligence
Publication Date: 2023-12-10

Article Info

Title:
Swin Transformer-based 4-D EEG Emotion Recognition
Article ID:
1673-629X(2023)12-0178-07
Author(s):
CHEN Zong-nan (陈宗楠), JIN Jia-rui (金家瑞), PAN Jia-hui (潘家辉)*
Affiliation:
School of Software, South China Normal University, Foshan 528225, Guangdong, China
Keywords:
deep learning; emotion recognition; electroencephalogram (EEG); feature fusion; Swin Transformer
CLC Number:
TP183
DOI:
10.3969/j.issn.1673-629X.2023.12.025
Abstract:
In recent years, electroencephalogram (EEG)-based emotion recognition research has mainly used convolutional neural network, recurrent neural network, and deep belief network models. These methods can exploit global differences to distinguish between emotional states, but they ignore the effect of local EEG changes on emotional state. To address this problem, we use a 4-D EEG emotion recognition model based on the Swin Transformer, which can better capture both small local spatial features and complex time-series features. Compared with other emotion recognition methods, the proposed model improves feature connectivity between different blocks through a self-attention mechanism based on shifted windows, which strengthens its modeling capacity while also reducing computational complexity. In addition, we use the public emotion EEG dataset SEED to evaluate the feasibility and effectiveness of the model: it achieves an accuracy of 94.73% ± 1.72% for single-subject three-class emotion classification and 89.63% ± 3.42% for cross-subject three-class emotion classification, and its testing speed reaches the level of real-time processing. The experimental results show that, by learning local features, the Swin Transformer-based 4-D EEG emotion recognition model can achieve high emotion classification accuracy and fast testing speed even when trained on small samples.
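The page gives only the abstract, not the authors' code. As a rough illustration of the shifted-window self-attention the abstract credits with connecting features across blocks at reduced cost, the following minimal PyTorch sketch restricts multi-head attention to local windows over a 2-D electrode grid. The grid size (8x8), channel width (96), window size (4), and head count are invented for the example, and the boundary attention mask and relative position bias of the full Swin Transformer are omitted for brevity.

# A minimal, illustrative sketch of (shifted) window self-attention over a
# 2-D electrode grid, in the spirit of the Swin Transformer block described
# in the abstract. All shapes and names here are assumptions, not the
# paper's code: the real model adds boundary masks and position biases.
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention restricted to non-overlapping windows.

    Restricting attention to local s x s windows is what lets the model
    focus on small local spatial patterns while keeping the cost linear
    in the number of grid positions instead of quadratic.
    """
    def __init__(self, dim: int, window_size: int, num_heads: int):
        super().__init__()
        self.window_size = window_size
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, shift: bool = False) -> torch.Tensor:
        # x: (B, H, W, C) feature map over the electrode grid
        B, H, W, C = x.shape
        s = self.window_size
        if shift:  # shifted windows connect features across window borders
            x = torch.roll(x, shifts=(-s // 2, -s // 2), dims=(1, 2))
        # partition into (s x s) windows -> (num_windows * B, s*s, C)
        x = x.view(B, H // s, s, W // s, s, C)
        windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, s * s, C)
        # standard scaled dot-product attention inside each window
        qkv = self.qkv(windows).reshape(-1, s * s, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(-1, s * s, C)
        out = self.proj(out)
        # reverse the window partition back to (B, H, W, C)
        out = out.view(B, H // s, W // s, s, s, C).permute(0, 1, 3, 2, 4, 5)
        out = out.reshape(B, H, W, C)
        if shift:  # undo the cyclic shift
            out = torch.roll(out, shifts=(s // 2, s // 2), dims=(1, 2))
        return out

# Hypothetical 4-D EEG input: a batch of 16 samples on an 8x8 spatial grid,
# with 96 channels per site (e.g., embedded band/time features).
x = torch.randn(16, 8, 8, 96)
block = WindowAttention(dim=96, window_size=4, num_heads=4)
y = block(x, shift=True)   # shifted-window pass
print(y.shape)             # torch.Size([16, 8, 8, 96])

Because attention is computed only inside each s x s window, the cost grows linearly with the number of grid positions rather than quadratically, which is the complexity reduction the abstract refers to; alternating plain and shifted passes re-links neighbouring windows so information still propagates across the whole grid.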


Last Update: 2023-12-10