[1] JIN Zhuang-zhuang, CAO Jiang-tao, JI Xiao-fei. Research on Human Interaction Recognition Algorithm Based on Multi-source Information Fusion[J]. Computer Technology and Development, 2018, 28(10): 32-36. [doi:10.3969/j.issn.1673-629X.2018.10.007]

Research on Human Interaction Recognition Algorithm Based on Multi-source Information Fusion

《计算机技术与发展》 (Computer Technology and Development) [ISSN:1006-6977/CN:61-1281/TN]

Volume: 28
Issue: No. 10, 2018
Pages: 32-36
Section: Intelligence, Algorithms and Systems Engineering
Publication date: 2018-10-10

Article Info

Title: Research on Human Interaction Recognition Algorithm Based on Multi-source Information Fusion
Article number: 1673-629X(2018)10-0032-05
Author(s): JIN Zhuang-zhuang (金壮壮) 1, CAO Jiang-tao (曹江涛) 1, JI Xiao-fei (姬晓飞) 2
1. School of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China; 2. School of Automation, Shenyang Aerospace University, Shenyang 110136, China
Keywords: action recognition; spatio-temporal interest points; histogram of oriented gradients (HOG); weighted fusion; human interaction; bag of words (BOW)
CLC number: TP301.6
DOI: 10.3969/j.issn.1673-629X.2018.10.007
Document code: A
Abstract:
The human body and its motion are inherently three-dimensional, yet traditional feature descriptions of two-person interaction based on RGB video lack depth information and therefore offer limited discriminative power. Exploiting the respective advantages and the complementary nature of RGB and depth video, a multi-source information fusion algorithm for human interaction recognition is proposed. First, the RGB video is represented by combining spatio-temporal interest points with a bag-of-words model. Then, each depth video frame is described with a histogram of oriented gradients, and key-frame statistics are introduced to build a histogram representation of the depth video. Finally, a nearest neighbor classifier is applied to each of the two feature streams separately, and the interaction is recognized by weighted fusion of the recognition probabilities from the two video modalities. Experimental results show that introducing depth information substantially improves the accuracy of two-person interaction recognition.
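The fusion step described in the abstract can be illustrated with a short sketch. The following Python snippet is a minimal illustration only, not the authors' implementation: it assumes each stream's nearest-neighbor classifier has already produced a per-class probability vector, and fuses the RGB (STIP + BOW) and depth (HOG) scores by a weighted sum before taking the arg-max. The function name, example class labels, and weight value are hypothetical.

```python
import numpy as np

def weighted_fusion_predict(p_rgb, p_depth, w_rgb=0.5):
    """Fuse per-class recognition probabilities from the RGB and depth
    streams by a convex weighted sum and return the predicted class.

    p_rgb, p_depth : 1-D arrays of per-class probabilities (same length,
                     same class order) from the two nearest-neighbor
                     classifiers.
    w_rgb          : weight given to the RGB stream; the depth stream
                     receives (1 - w_rgb). The value used here is an
                     assumption, not the weight reported in the paper.
    """
    p_rgb = np.asarray(p_rgb, dtype=float)
    p_depth = np.asarray(p_depth, dtype=float)
    fused = w_rgb * p_rgb + (1.0 - w_rgb) * p_depth
    return int(np.argmax(fused)), fused

# Example with three hypothetical interaction classes (e.g. handshake / hug / push)
p_rgb = [0.2, 0.5, 0.3]    # probabilities from the RGB (STIP + BOW) stream
p_depth = [0.1, 0.3, 0.6]  # probabilities from the depth (HOG) stream
label, fused = weighted_fusion_predict(p_rgb, p_depth, w_rgb=0.4)
print(label, fused)        # fused class decision and fused probability vector
```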


Last update: 2018-10-10