CN108446583A - Human behavior recognition method based on pose estimation - Google Patents
Human behavior recognition method based on pose estimation
- Publication number
- CN108446583A
- Authority
- CN
- China
- Prior art keywords
- video
- joint point
- matrix
- human
- position coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a human behavior recognition method based on pose estimation, which mainly solves the problem that the prior art is too slow when processing human behavior in video. The implementation steps are: 1. perform pose estimation on the human body in the video using the OpenPose method and extract the position coordinates of the human joint points in each frame; 2. from the per-frame joint coordinates, calculate the joint distance-change matrix between each pair of adjacent frames; 3. segment the video and generate the video feature from the distance-change matrices of each segment; 4. divide the videos in the data set into a training set and a test set, train a classifier with the features of the training set, and classify the videos in the test set with the trained classifier. The invention improves the speed of human behavior recognition in video and can be used for intelligent video surveillance, human-computer interaction and video retrieval.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a video human behavior recognition method that can be used for intelligent video surveillance, human-computer interaction and video retrieval.
Background art
With the development and application of computer science and artificial intelligence, video analysis technology has grown rapidly and attracted wide attention. A core task in video analysis is human behavior recognition, and the accuracy and speed of behavior recognition directly affect the results of the subsequent stages of a video analysis system. How to improve the accuracy and speed of human behavior recognition in video has therefore become an important problem in video analysis research.
At present, typical video human behavior recognition methods mainly include spatio-temporal interest points and dense trajectories. Among them:
Spatio-temporal interest point methods detect corner points in the video and extract their features for behavior recognition. However, some of the corner points are produced by background noise, which not only degrades the final result but also reduces the speed of recognition.
Dense trajectory methods first sample each frame of the video densely at multiple scales, track the sampled points to obtain trajectories, and then extract trajectory features for behavior recognition. However, the computational complexity of this method is high, the generated features have a high dimensionality and occupy a large amount of memory, and real-time recognition is difficult to achieve.
Summary of the invention
The object of the present invention is to address the poor real-time performance of the above prior art by proposing a human behavior recognition method based on pose estimation, so as to increase the speed of human behavior recognition.
The technical idea of the present invention is: estimate the pose of the human body in the video to obtain the position of the human joint points in each frame, and analyse the human action from the change of these joint positions, so that human behavior recognition can be carried out rapidly.
According to this idea, the implementation of the invention includes the following steps:
(1) Extract the position coordinates of the human joint points in each frame of the video:
(1a) Perform pose estimation on the human body in each frame of the video with the OpenPose method to obtain the position coordinates of 15 joint points: neck, chest, head, right shoulder, left shoulder, right hip, left hip, right elbow, left elbow, right knee, left knee, right wrist, left wrist, right ankle and left ankle, where the coordinates of the k-th joint point are denoted L_k = (x_k, y_k), k from 1 to 15;
(1b) Normalize the position coordinates of each joint point;
(1c) Form the coordinate matrix P from the 15 normalized joint coordinates, P = [(x_1, y_1), (x_2, y_2), ..., (x_k, y_k), ..., (x_15, y_15)], where (x_k, y_k) denotes the normalized coordinates of the k-th joint point;
(2) Calculate the joint distance-change matrix between adjacent frames:
(2a) From the coordinate matrices P_n and P_{n-1} of two adjacent frames, calculate the joint position-change matrix ΔP;
(2b) From the joint position-change matrix ΔP, calculate the joint distance-change matrix D;
(3) Generate the video feature:
(3a) Divide the video into 4 segments according to its time length, and add up the distance-change matrices D produced by adjacent frames within each segment to obtain the accumulated distance-change matrix D_i of each segment, i from 1 to 4;
(3b) Apply L2 normalization to D_i to obtain the normalized D_i';
(3c) Concatenate the accumulated distance-change matrices D_i' as the feature of the whole video: F = [D_1', D_2', D_3', D_4'];
(4) Train a classifier and classify the videos:
(4a) Divide the videos of the sub-JHMDB data set into a training set and a test set, input the features of the training videos into a support vector machine for training, and obtain the trained support vector machine;
(4b) Input the features of the test videos into the trained support vector machine to obtain the classification results.
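Since the formula images of the original publication are not reproduced in this text, the feature construction of steps (1b)-(3c) can be summarized as follows; the exact expressions, in particular the Euclidean form of the per-joint distance, are reconstructions inferred from the symbol definitions above rather than the patent's own equations.

```latex
% Assumed reconstruction of steps (1b)-(3c); the original formula images are not reproduced here.
\begin{align*}
(x'_k, y'_k) &= \left(\tfrac{x_k}{W}, \tfrac{y_k}{H}\right) && \text{step (1b): normalization by frame width and height}\\
\Delta P_n &= P_n - P_{n-1} = [(dx_1, dy_1), \dots, (dx_{15}, dy_{15})] && \text{step (2a): joint position-change matrix}\\
D_n &= \left[\sqrt{dx_1^2 + dy_1^2}, \; \dots, \; \sqrt{dx_{15}^2 + dy_{15}^2}\right] && \text{step (2b): joint distance-change matrix}\\
D_i &= \sum_{n \,\in\, \text{segment } i} D_n, \qquad D_i' = \frac{D_i}{\lVert D_i \rVert_2} && \text{steps (3a), (3b)}\\
F &= [D_1', D_2', D_3', D_4'] && \text{step (3c): video feature}
\end{align*}
```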
The present invention has the following advantages:
Because the present invention uses the OpenPose method to perform pose estimation on the human body in the video, the joint position coordinates of each frame can be obtained quickly; and because the video is processed in segments, the joint position changes of the human body in different time periods can be obtained, so that the human behavior in the video can be classified from these position changes.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 is a schematic diagram of the human joint positions estimated with OpenPose.
Detailed description of the embodiments
With reference to the drawings, the technical solution and effects of the present invention are further described.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1. Extract the position information of the human joint points in each frame of the video.
1.1) Perform pose estimation on the human body in each frame of the video with the OpenPose method to obtain the position coordinates of 15 joint points: neck, chest, head, right shoulder, left shoulder, right hip, left hip, right elbow, left elbow, right knee, left knee, right wrist, left wrist, right ankle and left ankle, where the coordinates of the k-th joint point are denoted L_k = (x_k, y_k), k from 1 to 15, as shown in Fig. 2;
1.2) Normalize the position coordinates of each joint point, where x, y denote the coordinates before normalization, x', y' denote the coordinates after normalization, W denotes the width of each video frame, and H denotes the height of each video frame;
1.3) Form the coordinate matrix P from the 15 normalized joint coordinates, P = [(x_1, y_1), (x_2, y_2), ..., (x_k, y_k), ..., (x_15, y_15)], where (x_k, y_k) denotes the normalized coordinates of the k-th joint point.
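A minimal sketch of step 1 in Python is given below. It assumes the video frames are NumPy image arrays and that a `pose_estimator` callable wraps OpenPose and returns the 15 joint positions in pixel coordinates; the normalization x' = x / W, y' = y / H is likewise an assumption consistent with the definitions of W and H above, since the original formula is not reproduced in this text.

```python
import numpy as np

NUM_JOINTS = 15  # neck, chest, head, shoulders, hips, elbows, knees, wrists, ankles

def extract_normalized_joints(frames, pose_estimator):
    """Step 1: build the per-frame 15 x 2 coordinate matrix P of normalized joint positions.

    `frames` is assumed to be a list of H x W x 3 image arrays, and
    `pose_estimator(frame)` an OpenPose wrapper that returns the 15 joint
    positions in pixel coordinates as an array of shape (15, 2).
    """
    H, W = frames[0].shape[:2]                  # frame height and width
    P = np.empty((len(frames), NUM_JOINTS, 2))
    for n, frame in enumerate(frames):
        joints = np.asarray(pose_estimator(frame), dtype=float)  # (15, 2) pixel coordinates (x_k, y_k)
        P[n, :, 0] = joints[:, 0] / W           # assumed normalization x' = x / W
        P[n, :, 1] = joints[:, 1] / H           # assumed normalization y' = y / H
    return P
```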
Step 2. Calculate the joint distance-change matrix between adjacent frames.
2.1) From the coordinate matrices P_n and P_{n-1} of two adjacent frames, calculate the joint position-change matrix ΔP, where P_n and P_{n-1} denote the joint position coordinate matrices of the two adjacent frames, and dx and dy denote the change of the coordinates of the same joint point between the two frames;
2.2) From the joint position-change matrix ΔP, calculate the joint distance-change matrix D, where dx_k and dy_k denote the k-th elements of ΔP.
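A minimal sketch of step 2 follows. The element-wise difference ΔP = P_n − P_{n−1} and the Euclidean magnitude d_k = sqrt(dx_k² + dy_k²) are assumptions consistent with the symbol definitions above, since the original formulas are not reproduced in this text.

```python
import numpy as np

def distance_change_matrices(P):
    """Step 2: per-joint distance change between every pair of adjacent frames.

    P has shape (num_frames, 15, 2). The displacement is assumed to be the
    element-wise difference of adjacent coordinate matrices, and each distance
    the Euclidean magnitude of (dx_k, dy_k). Returns D of shape (num_frames - 1, 15).
    """
    delta = P[1:] - P[:-1]                     # ΔP for every adjacent pair: (dx_k, dy_k) per joint
    return np.sqrt((delta ** 2).sum(axis=-1))  # d_k = sqrt(dx_k^2 + dy_k^2)
```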
Step 3. Generate the video feature.
3.1) Divide the video into 4 segments according to its time length, and add up the distance-change matrices D produced by adjacent frames within each segment to obtain the accumulated distance-change matrix D_i of each segment, i from 1 to 4.
3.2) Apply L2 normalization to D_i to obtain the normalized D_i', where D_i = [d_1, d_2, ..., d_k, ..., d_15] is the accumulated distance-change matrix of the i-th video segment, d_k denotes the k-th element of D_i, ||D_i||_2 is the L2 norm of D_i, and d_k^2 denotes the square of the k-th element of D_i;
3.3) Concatenate the accumulated distance-change matrices D_i' as the feature of the whole video:
F = [D_1', D_2', D_3', D_4']
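A minimal sketch of step 3 follows, assuming the L2 normalization takes the form D_i' = D_i / ||D_i||_2; the small epsilon guarding against an all-zero segment is an implementation detail, not part of the method description.

```python
import numpy as np

def segment_feature(D, num_segments=4, eps=1e-12):
    """Step 3: split the per-frame distance rows into segments, accumulate and L2-normalize.

    D has shape (num_frames - 1, 15). Returns the video feature
    F = [D1', D2', D3', D4'], a vector of length 15 * num_segments.
    """
    parts = []
    for seg in np.array_split(D, num_segments, axis=0):  # 4 segments along the time axis
        Di = seg.sum(axis=0)                             # accumulated distance-change matrix D_i (15,)
        parts.append(Di / (np.linalg.norm(Di) + eps))    # assumed L2 normalization D_i' = D_i / ||D_i||_2
    return np.concatenate(parts)                         # F = [D1', D2', D3', D4']
```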
Step 4. Train the classifier and classify the videos.
4.1) Divide the videos of the sub-JHMDB data set into a training set and a test set, input the features of the training videos into a support vector machine for training, and obtain the trained support vector machine;
4.2) Input the features of the test videos into the trained support vector machine to obtain the classification results.
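A minimal sketch of step 4 follows, using scikit-learn's SVC as a stand-in for the support vector machine; the linear kernel and C = 8 match the experimental parameters given below, and the feature matrices are assumed to have been built with the step 1-3 sketches (one feature vector F per video).

```python
from sklearn.svm import SVC

def classify_videos(X_train, y_train, X_test):
    """Step 4: train a linear support vector machine and classify the test videos.

    X_train and X_test hold one feature vector F per video (rows of length 60),
    y_train holds the action labels of the training videos.
    """
    clf = SVC(kernel="linear", C=8)   # linear kernel with C = 8, as in the experimental parameters below
    clf.fit(X_train, y_train)
    return clf.predict(X_test)
```

The reported classification result is then the fraction of test videos whose predicted label matches the ground-truth label.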
The effect of the present invention can be further illustrated by the following experiment:
1. Experimental conditions.
Experimental environment: the computer uses an Intel(R) Core(TM) i7-7700 CPU @ 3.8 GHz, 16 GB of memory and a GTX1080 GPU; the software platform is MATLAB 2014b.
Experimental parameters: the support vector machine uses a linear kernel with parameter C = 8.
2. Experimental content and results.
The experiment is carried out on the sub-JHMDB data set, which contains 12 classes of human actions and 316 video clips in total; each video clip contains one type of human behavior. According to the split predefined by the providers of the sub-JHMDB data set, the videos in the data set are divided into a training set and a test set. The videos in sub-JHMDB are processed with the method of the present invention to obtain video features; the features of the training videos are used to train the classifier, and the trained classifier is then used to classify the test videos. The proportion of correctly classified test videos is taken as the final classification result.
The classification accuracy on the sub-JHMDB data set reaches 43.9%, and the average processing speed is 10 fps.
In summary, the present invention achieves fast recognition of human behavior in video.
Claims (5)
1. A human behavior recognition method based on pose estimation, comprising:
(1) extracting the position coordinates of the human joint points in each frame of the video:
(1a) performing pose estimation on the human body in each frame of the video with the OpenPose method to obtain the position coordinates of 15 joint points: neck, chest, head, right shoulder, left shoulder, right hip, left hip, right elbow, left elbow, right knee, left knee, right wrist, left wrist, right ankle and left ankle, where the coordinates of the k-th joint point are denoted L_k = (x_k, y_k), k from 1 to 15;
(1b) normalizing the position coordinates of each joint point;
(1c) forming the coordinate matrix P from the 15 normalized joint coordinates, P = [(x_1, y_1), (x_2, y_2), ..., (x_k, y_k), ..., (x_15, y_15)], where (x_k, y_k) denotes the normalized coordinates of the k-th joint point;
(2) calculating the joint distance-change matrix between adjacent frames:
(2a) from the coordinate matrices P_n and P_{n-1} of two adjacent frames, calculating the joint position-change matrix ΔP;
(2b) from the joint position-change matrix ΔP, calculating the joint distance-change matrix D;
(3) generating the video feature:
(3a) dividing the video into 4 segments according to its time length, adding up the distance-change matrices D produced by adjacent frames within each segment, and obtaining the accumulated distance-change matrix D_i of each segment, i from 1 to 4;
(3b) applying L2 normalization to D_i to obtain the normalized D_i';
(3c) concatenating the accumulated distance-change matrices D_i' as the feature of the whole video:
F = [D_1', D_2', D_3', D_4'];
(4) training a classifier and classifying the videos:
(4a) dividing the videos of the sub-JHMDB data set into a training set and a test set, inputting the features of the training videos into a support vector machine for training, and obtaining the trained support vector machine;
(4b) inputting the features of the test videos into the trained support vector machine to obtain the classification results.
2. The method according to claim 1, wherein the position coordinates of each joint point are normalized in step (1b) using the width W and the height H of each video frame, where x, y denote the coordinates before normalization and x', y' denote the coordinates after normalization.
3. The method according to claim 1, wherein the joint position-change matrix ΔP is calculated in step (2a) from the joint position coordinate matrices P_n and P_{n-1} of the two adjacent frames, where dx_k and dy_k denote the change of the coordinates of the k-th joint point between the two frames.
4. The method according to claim 1, wherein the joint distance-change matrix D is calculated in step (2b) from dx_k and dy_k, the k-th elements of ΔP.
5. The method according to claim 1, wherein D_i is L2-normalized in step (3b) to obtain D_i', where D_i = [d_1, d_2, ..., d_k, ..., d_15] is the accumulated distance-change matrix of the i-th video segment, d_k denotes the k-th element of D_i, ||D_i||_2 is the L2 norm of D_i, and d_k^2 denotes the square of the k-th element of D_i.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810079476.7A CN108446583A (en) | 2018-01-26 | 2018-01-26 | Human behavior recognition method based on pose estimation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810079476.7A CN108446583A (en) | 2018-01-26 | 2018-01-26 | Human behavior recognition method based on pose estimation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108446583A (en) | 2018-08-24 |
Family
ID=63191076
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810079476.7A Pending CN108446583A (en) | 2018-01-26 | 2018-01-26 | Human bodys' response method based on Attitude estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446583A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344790A (en) * | 2018-10-16 | 2019-02-15 | 浩云科技股份有限公司 | Human behavior analysis method and system based on posture analysis |
CN109815921A (en) * | 2019-01-29 | 2019-05-28 | 北京融链科技有限公司 | Method and device for predicting activity classes in hydrogen refueling stations |
CN109871750A (en) * | 2019-01-02 | 2019-06-11 | 东南大学 | Gait recognition method based on skeleton diagram sequence abnormal joint repair |
CN110147723A (en) * | 2019-04-11 | 2019-08-20 | 苏宁云计算有限公司 | Method and system for processing abnormal customer behavior in an unmanned store |
CN110503077A (en) * | 2019-08-29 | 2019-11-26 | 郑州大学 | Vision-based real-time human action analysis method |
CN110956139A (en) * | 2019-12-02 | 2020-04-03 | 郑州大学 | Human motion analysis method based on time-series regression prediction |
CN111626137A (en) * | 2020-04-29 | 2020-09-04 | 平安国际智慧城市科技股份有限公司 | Video-based motion evaluation method and device, computer equipment and storage medium |
CN112182282A (en) * | 2020-09-01 | 2021-01-05 | 浙江大华技术股份有限公司 | Music recommendation method and device, computer equipment and readable storage medium |
CN112417927A (en) * | 2019-08-22 | 2021-02-26 | 北京奇虎科技有限公司 | Method for establishing human body posture recognition model, human body posture recognition method and device |
CN112702570A (en) * | 2020-12-18 | 2021-04-23 | 中国南方电网有限责任公司超高压输电公司柳州局 | Security management system based on multi-dimensional behavior recognition |
CN113392758A (en) * | 2021-06-11 | 2021-09-14 | 北京科技大学 | Rescue training-oriented behavior detection and effect evaluation method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104573665A (en) * | 2015-01-23 | 2015-04-29 | 北京理工大学 | Continuous motion recognition method based on improved viterbi algorithm |
US20150186713A1 (en) * | 2013-12-31 | 2015-07-02 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for emotion and behavior recognition |
CN104866860A (en) * | 2015-03-20 | 2015-08-26 | 武汉工程大学 | Indoor human body behavior recognition method |
CN105138995A (en) * | 2015-09-01 | 2015-12-09 | 重庆理工大学 | Time-invariant and view-invariant human action identification method based on skeleton information |
CN105518744A (en) * | 2015-06-29 | 2016-04-20 | 北京旷视科技有限公司 | Pedestrian re-identification method and equipment |
CN106066996A (en) * | 2016-05-27 | 2016-11-02 | 上海理工大学 | Local feature representation of human actions and its application to behavior recognition |
-
2018
- 2018-01-26 CN CN201810079476.7A patent/CN108446583A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150186713A1 (en) * | 2013-12-31 | 2015-07-02 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for emotion and behavior recognition |
CN104573665A (en) * | 2015-01-23 | 2015-04-29 | 北京理工大学 | Continuous motion recognition method based on improved viterbi algorithm |
CN104866860A (en) * | 2015-03-20 | 2015-08-26 | 武汉工程大学 | Indoor human body behavior recognition method |
CN105518744A (en) * | 2015-06-29 | 2016-04-20 | 北京旷视科技有限公司 | Pedestrian re-identification method and equipment |
CN105138995A (en) * | 2015-09-01 | 2015-12-09 | 重庆理工大学 | Time-invariant and view-invariant human action identification method based on skeleton information |
CN106066996A (en) * | 2016-05-27 | 2016-11-02 | 上海理工大学 | Local feature representation of human actions and its application to behavior recognition |
Non-Patent Citations (2)
Title |
---|
DING WENWEN et al.: "Skeleton-Based Human Action Recognition via", Chinese Journal of Electronics *
ZHE CAO et al.: "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", 2017 IEEE Conference on Computer Vision and Pattern Recognition *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344790A (en) * | 2018-10-16 | 2019-02-15 | 浩云科技股份有限公司 | Human behavior analysis method and system based on posture analysis |
CN109871750A (en) * | 2019-01-02 | 2019-06-11 | 东南大学 | Gait recognition method based on skeleton diagram sequence abnormal joint repair |
CN109871750B (en) * | 2019-01-02 | 2023-08-18 | 东南大学 | Gait recognition method based on skeleton diagram sequence abnormal joint repair |
CN109815921A (en) * | 2019-01-29 | 2019-05-28 | 北京融链科技有限公司 | Method and device for predicting activity classes in hydrogen refueling stations |
CN110147723A (en) * | 2019-04-11 | 2019-08-20 | 苏宁云计算有限公司 | Method and system for processing abnormal customer behavior in an unmanned store |
CN110147723B (en) * | 2019-04-11 | 2022-08-19 | 苏宁云计算有限公司 | Method and system for processing abnormal behaviors of customers in unmanned store |
CN112417927A (en) * | 2019-08-22 | 2021-02-26 | 北京奇虎科技有限公司 | Method for establishing human body posture recognition model, human body posture recognition method and device |
CN110503077B (en) * | 2019-08-29 | 2022-03-11 | 郑州大学 | Real-time human body action analysis method based on vision |
CN110503077A (en) * | 2019-08-29 | 2019-11-26 | 郑州大学 | Vision-based real-time human action analysis method |
CN110956139A (en) * | 2019-12-02 | 2020-04-03 | 郑州大学 | Human motion analysis method based on time-series regression prediction |
CN110956139B (en) * | 2019-12-02 | 2023-04-28 | 河南财政金融学院 | Human motion analysis method based on time sequence regression prediction |
CN111626137A (en) * | 2020-04-29 | 2020-09-04 | 平安国际智慧城市科技股份有限公司 | Video-based motion evaluation method and device, computer equipment and storage medium |
CN112182282A (en) * | 2020-09-01 | 2021-01-05 | 浙江大华技术股份有限公司 | Music recommendation method and device, computer equipment and readable storage medium |
CN112702570A (en) * | 2020-12-18 | 2021-04-23 | 中国南方电网有限责任公司超高压输电公司柳州局 | Security management system based on multi-dimensional behavior recognition |
CN113392758A (en) * | 2021-06-11 | 2021-09-14 | 北京科技大学 | Rescue training-oriented behavior detection and effect evaluation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446583A (en) | Human behavior recognition method based on pose estimation | |
WO2021047232A1 (en) | Interaction behavior recognition method, apparatus, computer device, and storage medium | |
CN104038738B (en) | Intelligent monitoring system and intelligent monitoring method for extracting coordinates of human body joint | |
Abdat et al. | Human-computer interaction using emotion recognition from facial expression | |
CN110457999B (en) | Animal posture behavior estimation and mood recognition method based on deep learning and SVM | |
CN111160269A (en) | Face key point detection method and device | |
CN103310194B (en) | Pedestrian head-shoulder detection method based on crown pixel gradient direction in video | |
CN108171133B (en) | Dynamic gesture recognition method based on feature covariance matrix | |
CN106682641A (en) | Pedestrian identification method based on image with FHOG- LBPH feature | |
CN103617413B (en) | Method for identifying object in image | |
CN103020614B (en) | Human motion recognition method based on spatio-temporal interest point detection | |
CN109620244A (en) | Infant abnormal behavior detection method based on conditional generative adversarial network and SVM | |
CN112541870A (en) | Video processing method and device, readable storage medium and electronic equipment | |
Kusakunniran et al. | Automatic gait recognition using weighted binary pattern on video | |
CN112200074A (en) | Attitude comparison method and terminal | |
CN106529441B (en) | Depth motion map human behavior recognition method based on fuzzy boundary segmentation | |
CN109993116B (en) | Pedestrian re-identification method based on mutual learning of human skeletons | |
CN117423134A (en) | Human body target detection and analysis multitasking cooperative network and training method thereof | |
CN106570479B (en) | Pet motion recognition method for embedded platforms | |
Sun et al. | Human action recognition using a convolutional neural network based on skeleton heatmaps from two-stage pose estimation | |
CN114299279A (en) | Unmarked group rhesus monkey motion amount estimation method based on face detection and recognition | |
CN113470073A (en) | Animal center tracking method based on deep learning | |
Hachaj et al. | Human actions recognition on multimedia hardware using angle-based and coordinate-based features and multivariate continuous hidden Markov model classifier | |
Foytik et al. | Tracking and recognizing multiple faces using Kalman filter and ModularPCA | |
CN111951298A (en) | Target tracking method fusing time series information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20180824 |