[1]曾援,李剑,马明星,等.基于改进Transformer模型的多声源分离方法[J].计算机技术与发展,2024,34(05):60-65.[doi:10.20165/j.cnki.ISSN1673-629X.2024.0041]
 ZENG Yuan, LI Jian, MA Ming-xing, et al. Multi-source Separation Method Based on Improved Transformer Model[J]. Computer Technology and Development, 2024, 34(05): 60-65. [doi:10.20165/j.cnki.ISSN1673-629X.2024.0041]

基于改进Transformer模型的多声源分离方法

《计算机技术与发展》(Computer Technology and Development) [ISSN:1006-6977/CN:61-1281/TN]

Volume:
34
Issue:
2024, No. 05
Pages:
60-65
Section:
Media Computing
Publication Date:
2024-05-10

文章信息 / Article Info

Title:
Multi-source Separation Method Based on Improved Transformer Model
文章编号 (Article No.):
1673-629X(2024)05-0060-06
作者:
曾援 1,2, 李剑 1,2, 马明星 1,2, 庞润嘉 1,2, 贺斌 1,2
1.中北大学 信息与通信工程学院,山西 太原 030051;2.中北大学 省部共建动态测试技术国家重点实验室,山西 太原 030051
Author(s):
ZENG Yuan 1,2, LI Jian 1,2, MA Ming-xing 1,2, PANG Run-jia 1,2, HE Bin 1,2
1.School of Information and Communication Engineering,North University of China,Taiyuan 030051,China;2.State Key Laboratory of Dynamic Testing Technology,North University of China,Taiyuan 030051,China
关键词:
上下采样层; Transformer; 特征编码; 滑动窗口注意力机制; 深度学习
Keywords:
up-/down-sampling layers; Transformer; feature coding; sliding window attention mechanism; deep learning
分类号 (CLC Number):
TP391
DOI:
10.20165/j.cnki.ISSN1673-629X.2024.0041
摘要:
目前主流的语音分离算法模型都是基于复杂的递归网络或Transformer网络,Transformer网络复杂度高导致训练难度大以及音频的高采样率导致在样本级别上使用超长输入从而获取不完全特征,不能直接对长语音特征序列进行直接建模出现特征丢失问题。对此,该文提出了一种基于Transformer的改进网络模型。首先,在原有Transformer网络模型编码器里新添加下采样块,计算不同时间尺度上的高级特征同时降低特征空间复杂度;其次,在Transformer网络模型的解码器里添加上采样层与编码器下采样层特征融合保证特征不丢失,提高模型分离能力;最后,在模型分离层里引入一种改进的滑动窗口注意力机制,滑动窗口使用循环移位技术,新的特征窗口中包含老的特征窗口特征同时融合特征边缘信息完成了特征窗口之间的信息交互,获得特征编码以及特征位置编码同时提高特征信息之间的相关系数。实验表明,使用SI-SNR评价标准达到13.5dB,使用SDR评价指标达到14.1dB,分离效果优于之前的方法。
Abstract:
The current mainstream speech separation models are based on complex recurrent networks or Transformer networks. The high complexity of the Transformer makes training difficult, and the high sampling rate of audio forces extremely long sample-level inputs, yielding incomplete features; feature loss occurs because long speech feature sequences cannot be modeled directly. To address this, we propose an improved Transformer-based network model. Firstly, a downsampling block is added to the encoder of the original Transformer model to compute high-level features on different time scales while reducing the complexity of the feature space. Secondly, an upsampling layer is added to the decoder and its features are fused with those of the encoder's downsampling layers, ensuring that no features are lost and improving the separation capability of the model. Finally, an improved sliding window attention mechanism is introduced in the separation layer. The sliding window uses a cyclic shift technique: each new feature window contains part of the old window's features and fuses window-edge information, completing the information interaction between feature windows, obtaining feature encodings and feature position encodings, and increasing the correlation between features. Experiments show that the separation effect is better than that of previous methods, reaching 13.5 dB on the SI-SNR metric and 14.1 dB on the SDR metric.
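The cyclic-shift sliding-window attention described in the abstract can be illustrated with a minimal sketch. The following is not the authors' implementation: it assumes scalar per-frame features, a window size that divides the sequence length, and hypothetical function names; it only shows how cyclically shifting the sequence by half a window lets a second attention pass mix features across the boundaries of the first pass's windows.

```python
import math

def cyclic_shift(seq, shift):
    """Rotate the sequence left by `shift` positions (wrap-around)."""
    shift %= len(seq)
    return seq[shift:] + seq[:shift]

def partition_windows(seq, win):
    """Split a sequence (length divisible by `win`) into non-overlapping windows."""
    return [seq[i:i + win] for i in range(0, len(seq), win)]

def window_attention(window):
    """Toy scaled dot-product self-attention within one window of scalar features."""
    d = 1  # feature dimension is 1 in this sketch
    out = []
    for q in window:
        scores = [q * k / math.sqrt(d) for k in window]
        m = max(scores)                         # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        out.append(sum((e / z) * v for e, v in zip(exps, window)))
    return out

def shifted_window_attention(seq, win):
    # Pass 1: attention within regular, non-overlapping windows.
    attended = [x for w in partition_windows(seq, win) for x in window_attention(w)]
    # Pass 2: cyclically shift by half a window so each new window spans the
    # edge between two old windows, enabling cross-window information exchange.
    shifted = cyclic_shift(attended, win // 2)
    attended = [x for w in partition_windows(shifted, win) for x in window_attention(w)]
    # Undo the shift to restore the original frame positions.
    return cyclic_shift(attended, -(win // 2))
```

The shift-then-restore step mirrors the idea in the abstract that a new feature window contains part of the old window plus its edge information, so information propagates between windows without computing global attention.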


更新日期/Last Update: 2024-05-10