CN108446583A - Human behavior recognition method based on pose estimation - Google Patents
Human behavior recognition method based on pose estimation
- Publication number
- CN108446583A CN108446583A CN201810079476.7A CN201810079476A CN108446583A CN 108446583 A CN108446583 A CN 108446583A CN 201810079476 A CN201810079476 A CN 201810079476A CN 108446583 A CN108446583 A CN 108446583A
- Authority
- CN
- China
- Prior art keywords
- video
- artis
- matrix
- frame
- position coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Abstract
The invention discloses a human behavior recognition method based on pose estimation, which mainly addresses the slow processing speed of prior-art methods for recognizing human behavior in video. The implementation steps are: 1. Use the Open-pose method to estimate the pose of the human body in the video and extract the position coordinates of the human joint points in each frame; 2. From the per-frame joint coordinates, compute the matrix of joint-distance changes between adjacent frames; 3. Segment the video and use each segment's distance-change matrix to generate video features; 4. Divide the videos in the data set into a training set and a test set, train a classifier on the training-set features, and use the trained classifier to classify the test-set videos. The invention increases the speed of human behavior recognition in video and can be used for intelligent video surveillance, human-computer interaction, and video retrieval.
Description
Technical Field
The invention belongs to the field of image processing, and in particular relates to a video-based human behavior recognition method that can be used for intelligent video surveillance, human-computer interaction, and video retrieval.
Background
With the development and application of computer science and artificial intelligence, video analysis technology has risen rapidly and attracted wide attention. A core task of video analysis is human behavior recognition; the accuracy and speed of behavior recognition directly affect the results of the subsequent stages of a video analysis system. How to improve the accuracy and speed of human behavior recognition in video has therefore become a key problem in video analysis research.
At present, typical video-based human behavior recognition methods mainly include spatio-temporal interest points and dense trajectories:
Spatio-temporal interest points recognize human behavior by detecting corner points in the video and extracting their features. However, some corner points are produced by background noise, which not only degrades the final result but also slows down recognition.
Dense trajectories first densely sample each video frame at multiple scales, track the sampled points to obtain trajectories, and then extract trajectory features for behavior recognition. This method has high computational complexity and produces high-dimensional features that consume a large amount of memory, making real-time recognition difficult.
Summary of the Invention
The object of the present invention is to address the poor real-time performance of the above prior art by proposing a human behavior recognition method based on pose estimation, so as to increase the speed of human behavior recognition.
The technical idea of the invention is to estimate the pose of the human body in the video to obtain the position of the human joint points in each frame, and to analyze the body's motion from the changes in joint positions, so that human behavior can be recognized quickly.
According to this idea, the implementation scheme of the invention comprises the following steps:
(1) Extract the position coordinates of the human joint points in each frame of the video:
(1a) Use the Open-pose method to estimate the pose of the human body in each frame, obtaining the position coordinates of 15 joint points: neck, chest, head, right shoulder, left shoulder, right hip, left hip, right elbow, left elbow, right knee, left knee, right wrist, left wrist, right ankle, and left ankle, where the coordinates of the k-th joint point are denoted L_k = (x_k, y_k), k = 1, ..., 15;
(1b) Normalize the position coordinates of each joint point;
(1c) Form the coordinate matrix P from the 15 normalized joint coordinates: P = [(x_1, y_1), (x_2, y_2), ..., (x_k, y_k), ..., (x_15, y_15)], where (x_k, y_k) are the normalized coordinates of the k-th joint point;
(2) Compute the matrix of joint-distance changes between adjacent frames:
(2a) From the coordinate matrices P_n and P_{n-1} of two adjacent frames, compute the joint-position change matrix ΔP;
(2b) From ΔP, compute the joint-distance change matrix D;
(3) Generate video features:
(3a) Divide the video evenly into 4 segments by duration, and sum the distance-change matrices D of all adjacent frame pairs within each segment to obtain the cumulative distance-change matrix D_i of each segment, i = 1, ..., 4;
(3b) Apply L2 normalization to D_i to obtain the normalized D_i';
(3c) Concatenate the cumulative distance-change matrices D_i' as the feature of the whole video: F = [D_1', D_2', D_3', D_4'];
(4) Train a classifier to classify the videos:
(4a) Divide the videos of the sub-JHMDB data set into a training set and a test set, and feed the features of the training-set videos into a support vector machine for training, obtaining a trained support vector machine;
(4b) Feed the features of the test-set videos into the trained support vector machine to obtain the classification results.
The present invention has the following advantages:
Because the invention uses the Open-pose method to estimate human pose, the joint-point coordinates of each frame of the video can be obtained quickly; and because the video is processed in segments, the joint-position changes of the body over different time periods of the video can be captured and used to classify the human behavior in the video.
Brief Description of the Drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 is a schematic diagram of the human joint positions estimated with Open-pose.
Detailed Description
The technical scheme and effects of the present invention are further described below with reference to the accompanying drawings.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1. Extract the joint-point positions of the human body in each frame of the video.
1.1) Use the Open-pose method to estimate the pose of the human body in each frame, obtaining the position coordinates of 15 joint points: neck, chest, head, right shoulder, left shoulder, right hip, left hip, right elbow, left elbow, right knee, left knee, right wrist, left wrist, right ankle, and left ankle, where the coordinates of the k-th joint point are denoted L_k = (x_k, y_k), k = 1, ..., 15, as shown in Fig. 2;
1.2) Normalize the position coordinates of each joint point:

x' = x / W,  y' = y / H,  <1>

where x, y are the coordinates before normalization, x', y' are the coordinates after normalization, W is the width of each video frame, and H is its height;
1.3) Form the coordinate matrix P from the 15 normalized joint coordinates: P = [(x_1, y_1), (x_2, y_2), ..., (x_k, y_k), ..., (x_15, y_15)], where (x_k, y_k) are the normalized coordinates of the k-th joint point.
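Step 1.2)–1.3) can be sketched in Python with NumPy (illustrative only — the patent's experiments use MATLAB, and the sample pixel coordinates below are hypothetical):

```python
import numpy as np

def normalize_joints(joints, width, height):
    """Scale joint coordinates (x_k, y_k) into [0, 1] by the frame
    width W and height H, i.e. x' = x / W, y' = y / H."""
    joints = np.asarray(joints, dtype=float)   # shape (15, 2)
    return joints / np.array([width, height], dtype=float)

# Hypothetical single-frame pose output (pixel coordinates) for a 320x240 frame.
frame_joints = np.array([[160, 60], [160, 100]] + [[100, 200]] * 13)
P = normalize_joints(frame_joints, width=320, height=240)  # coordinate matrix P
print(P[0])  # first joint scaled to [0.5, 0.25]
```

Each row of `P` is one normalized joint, matching the coordinate matrix P of step 1.3).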
Step 2. Compute the matrix of joint-distance changes between adjacent frames.
2.1) From the coordinate matrices P_n and P_{n-1} of two adjacent frames, compute the joint-position change matrix:

ΔP = P_n - P_{n-1} = [(dx_1, dy_1), (dx_2, dy_2), ..., (dx_15, dy_15)],  <2>

where P_{n-1} and P_n are the joint coordinate matrices of the previous and current frames respectively, and dx, dy are the coordinate changes of the same joint point between the two frames;
2.2) From the position change matrix ΔP, compute the joint-distance change matrix D:

D = [d_1, d_2, ..., d_15],  d_k = sqrt(dx_k^2 + dy_k^2),  <3>

where dx_k and dy_k are the k-th elements of ΔP.
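Equations <2> and <3> above can be sketched as follows (an illustrative NumPy version, not the patent's implementation):

```python
import numpy as np

def distance_change(P_prev, P_curr):
    """Per-joint displacement magnitude between two adjacent frames:
    dP = P_n - P_{n-1}, then d_k = sqrt(dx_k^2 + dy_k^2)."""
    dP = np.asarray(P_curr, dtype=float) - np.asarray(P_prev, dtype=float)  # shape (15, 2)
    return np.sqrt((dP ** 2).sum(axis=1))                                   # shape (15,)
```

Applied to every adjacent frame pair of a video, this yields one 15-element row of distance changes per pair.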
Step 3. Generate video features.
3.1) Divide the video evenly into 4 segments by duration, and sum the distance-change matrices D of all adjacent frame pairs within each segment to obtain the cumulative distance-change matrix D_i of each segment, i = 1, ..., 4.
3.2) Apply L2 normalization to D_i to obtain the normalized D_i':

D_i' = D_i / ||D_i||_2,  ||D_i||_2 = sqrt(d_1^2 + d_2^2 + ... + d_15^2),  <4>

where D_i = [d_1, d_2, ..., d_k, ..., d_15] is the cumulative distance-change matrix of the i-th segment, d_k is the k-th element of D_i, and ||D_i||_2 is the L2 norm of D_i;
3.3) Concatenate the cumulative distance-change matrices D_i' as the feature of the whole video:

F = [D_1', D_2', D_3', D_4']  <5>
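Step 3 can be sketched as follows (illustrative NumPy, assuming the per-frame-pair distance rows from step 2 have been stacked into one array):

```python
import numpy as np

def video_feature(D_frames, n_segments=4):
    """Split the per-frame-pair distance-change rows into 4 temporal
    segments, sum each segment into D_i, L2-normalize to D_i', and
    concatenate into F = [D_1', D_2', D_3', D_4'] (equations <4>, <5>)."""
    D_frames = np.asarray(D_frames, dtype=float)     # shape (n_pairs, 15)
    segments = np.array_split(D_frames, n_segments)  # 4 roughly equal temporal chunks
    feats = []
    for seg in segments:
        Di = seg.sum(axis=0)                         # cumulative distance change D_i
        norm = np.linalg.norm(Di)                    # ||D_i||_2
        feats.append(Di / norm if norm > 0 else Di)  # D_i'
    return np.concatenate(feats)                     # F, length 4 * 15 = 60
```

The resulting 60-dimensional vector is the fixed-length feature of one video, regardless of its frame count.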
Step 4. Train a classifier to classify the videos.
4.1) Divide the videos of the sub-JHMDB data set into a training set and a test set, and feed the features of the training-set videos into a support vector machine for training, obtaining a trained support vector machine;
4.2) Feed the features of the test-set videos into the trained support vector machine to obtain the classification results.
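A sketch of step 4 using scikit-learn's SVC (the patent reports a linear kernel with c = 8; the arrays below are random stand-ins for the real 60-dimensional sub-JHMDB features, not actual data):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-ins: rows of X are 60-dimensional video features F,
# y holds the corresponding action-class labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((40, 60)), rng.integers(0, 2, 40)
X_test = rng.random((10, 60))

clf = SVC(kernel="linear", C=8)   # linear kernel, parameter c = 8 as in the experiments
clf.fit(X_train, y_train)         # step 4.1): train on training-set features
pred = clf.predict(X_test)        # step 4.2): classify test-set videos
```

With real features, the fraction of correctly predicted test labels gives the classification accuracy reported in the experiments.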
The effect of the present invention can be further illustrated by the following experiments:
1. Experimental conditions.
Experimental environment: a computer with an Intel(R) Core(TM) i7-7700 CPU @ 3.8 GHz, 16 GB of memory, and a GTX 1080 GPU; the software platform is MATLAB 2014b.
Experimental parameters: the support vector machine uses a linear kernel with parameter c = 8.
2. Experimental content and results.
The experiment is carried out on the sub-JHMDB data set, which contains 12 classes of human actions in a total of 316 video clips, each clip containing one human behavior. Following the split predefined by the sub-JHMDB providers, the videos are divided into a training set and a test set. The videos are processed with the method of the invention to obtain video features; the training-set features are used to train the classifier, the trained classifier is then used to classify the test-set videos, and the proportion of correctly classified test videos is taken as the final classification result.
The classification accuracy on the sub-JHMDB data set reaches 43.9%, and the average processing speed is 10 fps.
In summary, the present invention achieves fast recognition of human behavior in video.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810079476.7A CN108446583A (en) | 2018-01-26 | 2018-01-26 | Human behavior recognition method based on pose estimation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810079476.7A CN108446583A (en) | 2018-01-26 | 2018-01-26 | Human behavior recognition method based on pose estimation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108446583A true CN108446583A (en) | 2018-08-24 |
Family
ID=63191076
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810079476.7A Pending CN108446583A (en) | Human behavior recognition method based on pose estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446583A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344790A (en) * | 2018-10-16 | 2019-02-15 | 浩云科技股份有限公司 | A kind of human body behavior analysis method and system based on posture analysis |
CN109815921A (en) * | 2019-01-29 | 2019-05-28 | 北京融链科技有限公司 | Method and device for predicting activity category in hydrogen refueling station |
CN109871750A (en) * | 2019-01-02 | 2019-06-11 | 东南大学 | A gait recognition method based on abnormal joint repair of skeleton map sequence |
CN110147723A (en) * | 2019-04-11 | 2019-08-20 | 苏宁云计算有限公司 | The processing method and system of customer's abnormal behaviour in a kind of unmanned shop |
CN110503077A (en) * | 2019-08-29 | 2019-11-26 | 郑州大学 | A vision-based real-time human motion analysis method |
CN110956139A (en) * | 2019-12-02 | 2020-04-03 | 郑州大学 | Human motion action analysis method based on time series regression prediction |
CN111626137A (en) * | 2020-04-29 | 2020-09-04 | 平安国际智慧城市科技股份有限公司 | Video-based motion evaluation method and device, computer equipment and storage medium |
CN112182282A (en) * | 2020-09-01 | 2021-01-05 | 浙江大华技术股份有限公司 | Music recommendation method and device, computer equipment and readable storage medium |
CN112417927A (en) * | 2019-08-22 | 2021-02-26 | 北京奇虎科技有限公司 | Method for establishing human body gesture recognition model, human body gesture recognition method and device |
CN112702570A (en) * | 2020-12-18 | 2021-04-23 | 中国南方电网有限责任公司超高压输电公司柳州局 | Security protection management system based on multi-dimensional behavior recognition |
CN113392758A (en) * | 2021-06-11 | 2021-09-14 | 北京科技大学 | Rescue training-oriented behavior detection and effect evaluation method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104573665A (en) * | 2015-01-23 | 2015-04-29 | 北京理工大学 | Continuous motion recognition method based on improved viterbi algorithm |
US20150186713A1 (en) * | 2013-12-31 | 2015-07-02 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for emotion and behavior recognition |
CN104866860A (en) * | 2015-03-20 | 2015-08-26 | 武汉工程大学 | Indoor human body behavior recognition method |
CN105138995A (en) * | 2015-09-01 | 2015-12-09 | 重庆理工大学 | Time-invariant and view-invariant human action identification method based on skeleton information |
CN105518744A (en) * | 2015-06-29 | 2016-04-20 | 北京旷视科技有限公司 | Pedestrian re-identification method and equipment |
CN106066996A (en) * | 2016-05-27 | 2016-11-02 | 上海理工大学 | The local feature method for expressing of human action and in the application of Activity recognition |
-
2018
- 2018-01-26 CN CN201810079476.7A patent/CN108446583A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150186713A1 (en) * | 2013-12-31 | 2015-07-02 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for emotion and behavior recognition |
CN104573665A (en) * | 2015-01-23 | 2015-04-29 | 北京理工大学 | Continuous motion recognition method based on improved viterbi algorithm |
CN104866860A (en) * | 2015-03-20 | 2015-08-26 | 武汉工程大学 | Indoor human body behavior recognition method |
CN105518744A (en) * | 2015-06-29 | 2016-04-20 | 北京旷视科技有限公司 | Pedestrian re-identification method and equipment |
CN105138995A (en) * | 2015-09-01 | 2015-12-09 | 重庆理工大学 | Time-invariant and view-invariant human action identification method based on skeleton information |
CN106066996A (en) * | 2016-05-27 | 2016-11-02 | 上海理工大学 | The local feature method for expressing of human action and in the application of Activity recognition |
Non-Patent Citations (2)
Title |
---|
DING WENWEN et al.: "Skeleton-Based Human Action Recognition via", Chinese Journal of Electronics *
ZHE CAO et al.: "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", 2017 IEEE Conference on Computer Vision and Pattern Recognition *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344790A (en) * | 2018-10-16 | 2019-02-15 | 浩云科技股份有限公司 | A kind of human body behavior analysis method and system based on posture analysis |
CN109871750A (en) * | 2019-01-02 | 2019-06-11 | 东南大学 | A gait recognition method based on abnormal joint repair of skeleton map sequence |
CN109871750B (en) * | 2019-01-02 | 2023-08-18 | 东南大学 | Gait recognition method based on skeleton diagram sequence abnormal joint repair |
CN109815921A (en) * | 2019-01-29 | 2019-05-28 | 北京融链科技有限公司 | Method and device for predicting activity category in hydrogen refueling station |
CN110147723A (en) * | 2019-04-11 | 2019-08-20 | 苏宁云计算有限公司 | The processing method and system of customer's abnormal behaviour in a kind of unmanned shop |
CN110147723B (en) * | 2019-04-11 | 2022-08-19 | 苏宁云计算有限公司 | Method and system for processing abnormal behaviors of customers in unmanned store |
CN112417927A (en) * | 2019-08-22 | 2021-02-26 | 北京奇虎科技有限公司 | Method for establishing human body gesture recognition model, human body gesture recognition method and device |
CN110503077B (en) * | 2019-08-29 | 2022-03-11 | 郑州大学 | A vision-based real-time human motion analysis method |
CN110503077A (en) * | 2019-08-29 | 2019-11-26 | 郑州大学 | A vision-based real-time human motion analysis method |
CN110956139A (en) * | 2019-12-02 | 2020-04-03 | 郑州大学 | Human motion action analysis method based on time series regression prediction |
CN110956139B (en) * | 2019-12-02 | 2023-04-28 | 河南财政金融学院 | Human motion analysis method based on time sequence regression prediction |
CN111626137A (en) * | 2020-04-29 | 2020-09-04 | 平安国际智慧城市科技股份有限公司 | Video-based motion evaluation method and device, computer equipment and storage medium |
CN112182282A (en) * | 2020-09-01 | 2021-01-05 | 浙江大华技术股份有限公司 | Music recommendation method and device, computer equipment and readable storage medium |
CN112702570A (en) * | 2020-12-18 | 2021-04-23 | 中国南方电网有限责任公司超高压输电公司柳州局 | Security protection management system based on multi-dimensional behavior recognition |
CN113392758A (en) * | 2021-06-11 | 2021-09-14 | 北京科技大学 | Rescue training-oriented behavior detection and effect evaluation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446583A (en) | Human behavior recognition method based on pose estimation | |
CN105550678B (en) | Human action feature extracting method based on global prominent edge region | |
CN105069434B (en) | A kind of human action Activity recognition method in video | |
CN101558996B (en) | Gait recognition method based on orthogonal projection three-dimensional reconstruction of human motion structure | |
CN102074034B (en) | Multi-model human motion tracking method | |
CN109308459B (en) | Gesture Estimation Method Based on Finger Attention Model and Keypoint Topology Model | |
CN110457999B (en) | A method for animal pose behavior estimation and mood recognition based on deep learning and SVM | |
CN104573665B (en) | A kind of continuous action recognition methods based on improvement viterbi algorithm | |
Zhou et al. | Learning to estimate 3d human pose from point cloud | |
CN108171133B (en) | Dynamic gesture recognition method based on characteristic covariance matrix | |
CN109472198A (en) | A Pose Robust Approach for Video Smiley Face Recognition | |
CN108416266A (en) | A kind of video behavior method for quickly identifying extracting moving target using light stream | |
CN112200074B (en) | Gesture comparison method and terminal | |
CN104700412B (en) | A computational method of visual saliency map | |
CN106570480A (en) | Posture-recognition-based method for human movement classification | |
CN107067410B (en) | Manifold regularization related filtering target tracking method based on augmented samples | |
CN107301376B (en) | A Pedestrian Detection Method Based on Deep Learning Multi-layer Stimulation | |
CN107123130B (en) | A Kernel Correlation Filtering Target Tracking Method Based on Superpixel and Hybrid Hash | |
CN103020614B (en) | Based on the human motion identification method that space-time interest points detects | |
CN104821010A (en) | Binocular-vision-based real-time extraction method and system for three-dimensional hand information | |
CN108898623A (en) | Method for tracking target and equipment | |
CN103871081A (en) | Method for tracking self-adaptive robust on-line target | |
CN103745228A (en) | Dynamic gesture identification method on basis of Frechet distance | |
CN105261038B (en) | Finger tip tracking based on two-way light stream and perception Hash | |
CN110046558A (en) | A kind of gesture identification method for robot control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20180824 |
|
WD01 | Invention patent application deemed withdrawn after publication |