CN108520205A - Human motion recognition method based on Citation-KNN


Info

Publication number
CN108520205A
CN108520205A
Authority
CN
China
Prior art keywords
frame
test sample
action
sample
citation
Prior art date
Legal status
Granted
Application number
CN201810234650.0A
Other languages
Chinese (zh)
Other versions
CN108520205B (en)
Inventor
郭星
李黄划
张以文
李炜
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University
Priority to CN201810234650.0A
Publication of CN108520205A
Application granted
Publication of CN108520205B
Active legal status (current)
Anticipated expiration legal status


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses a human motion recognition method based on Citation-KNN, comprising: obtaining a test sample; extracting the key frames in the test sample's action sequence by a frame difference method; optimizing the Hausdorff distance, and computing, according to the optimized Hausdorff distance, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted; constructing a score function based on the neighbor samples and citer samples, computing the score of each action category of the test sample according to the score function, obtaining the action category with the highest score as the target detection action category of the test sample, and outputting the target detection action category.

Description

Human motion recognition method based on Citation-KNN
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a human motion recognition method based on Citation-KNN.
Background art
At present, human action recognition in computer vision attracts more and more attention, and the technology already has wide applications, for example in animated films, high-end computer games, biomechanics, robotics, gait analysis and rehabilitation, and machine sign language translation. With the appearance of economical and efficient depth sensors and the development of fast human pose estimation algorithms, skeleton-based human motion recognition has become an important approach within human action recognition.
Existing action recognition methods include the following. Hidden Markov models can be used for action recognition, but the amount of training data for this method is limited and overfitting occurs easily. The Fourier temporal pyramid mitigates the influence of noisy data and temporal misalignment, but it requires a feature selection process on the skeletal descriptors and a multiple kernel learning process on the action sequence descriptions to obtain state-of-the-art results. Clustering techniques can quantize basic descriptors into the words of a codebook and then express the action information as a histogram of quantized descriptors; however, this approach discards global time, because the codebook only considers the presence of words without considering their temporal locations. Recurrent neural networks (RNNs) have been applied to human action recognition; they are very flexible time-series classifiers that can model contextual temporal information, but they need careful parameter tuning to avoid overfitting and the vanishing gradient problem.
Summary of the invention
In view of the technical problems in the background art, the present invention proposes a human motion recognition method based on Citation-KNN.
The human motion recognition method based on Citation-KNN proposed by the present invention comprises:
S1, obtaining a test sample;
S2, extracting the key frames in the test sample's action sequence by a frame difference method;
S3, optimizing the Hausdorff distance, and computing, according to the optimized Hausdorff distance, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted;
S4, constructing a score function based on the neighbor samples and citer samples, computing the score of each action category of the test sample according to the score function, obtaining the action category with the highest score as the target detection action category of the test sample, and outputting the target detection action category.
Preferably, step S2 specifically comprises:
S21, defining a key frame set KF and a corresponding key frame weight set KFW;
S22, computing the frame difference distance of each frame in the test sample's action sequence;
S23, computing the average of the frame difference distances over the test sample's action sequence;
S24, comparing the frame difference distance of each frame with the average; when the frame difference distance of a frame exceeds the average, inserting that frame into KF and its frame difference distance into KFW;
S25, normalizing each element $KFW_p$ in KFW to complete the key frame extraction for the test sample's action sequence, where the normalization formula is:
$$KFW_p \leftarrow \frac{KFW_p}{SUM(KFW)}$$
where SUM(KFW) denotes the sum of the elements in KFW.
Preferably, step S3 specifically comprises:
S31, adding the weights of the human joints and the temporal information of the frames' joints into the Euclidean distance of the Citation-KNN algorithm, thereby optimizing the Hausdorff distance;
S32, computing, according to the Citation-KNN algorithm, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted.
Preferably, step S31 specifically comprises:
Construct action samples $A=\{a_{f_1},a_{f_2},\ldots,a_{f_{num_A}}\}$ and $B=\{b_{f_1},b_{f_2},\ldots,b_{f_{num_B}}\}$; then the distance between frame $f_{k_1}$ in action sample A and frame $f_{k_2}$ in action sample B, incorporating the temporal information of the frames' joints, is defined as follows:
$$d\bigl(a_{f_{k_1}},b_{f_{k_2}}\bigr)=\sum_{i=1}^{M} w_i\left\|p_i^{A,f_{k_1}}-p_i^{B,f_{k_2}}\right\|_2+\alpha\left|\frac{f_{k_1}}{num_A}-\frac{f_{k_2}}{num_B}\right|$$
where $w_i$ denotes the weight of the i-th human joint, $num_A$ and $num_B$ denote the total numbers of frames in A and B, and α is the frame weight.
Preferably, the weight $w_i$ of the i-th human joint is computed as follows:
$$v_i^{f_k}=\frac{\left\|p_i^{f_{k+1}}-p_i^{f_k}\right\|_2}{t}$$
where $v_i^{f_k}$ denotes the speed of the i-th joint at frame $f_k$ and t is the time between two frames;
If the average speed of the i-th joint over an action sample is $\bar v_i=\frac{1}{num}\sum_{k=1}^{num}v_i^{f_k}$, then the global average speed $\bar v$ of the joints and the joint weight $w_i$ are defined as follows:
$$\bar v=\frac{1}{M}\sum_{i=1}^{M}\bar v_i,\qquad w_i=\frac{\bar v_i}{\sum_{j=1}^{M}\bar v_j}$$
Preferably, step S4 specifically comprises:
Let L denote the set of possible action categories. For a test sample X, let $l^{*}$ be the predicted action category of X, let the $n_r$ nearest neighbors of X be $\{R_1,\ldots,R_{n_r}\}$ with corresponding categories $\{c(R_1),\ldots,c(R_{n_r})\}$, and let the $n_c$ citer bags of X be $\{C_1,\ldots,C_{n_c}\}$ with corresponding categories $\{c(C_1),\ldots,c(C_{n_c})\}$; then $l^{*}$ is defined as follows:
$$l^{*}=\arg\max_{l\in L}\left[\sum_{i=1}^{n_r}\delta\bigl(l,c(R_i)\bigr)+\sum_{j=1}^{n_c}\delta\bigl(l,c(C_j)\bigr)\right]$$
where δ(a, b) = 1 when a = b, and δ(a, b) = 0 otherwise.
The present invention obtains a test sample, extracts the key frames in the test sample's action sequence by a frame difference method, optimizes the Hausdorff distance, computes according to the optimized Hausdorff distance the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted, constructs a score function based on the neighbor samples and citer samples, computes the score of each action category of the test sample according to the score function, obtains the action category with the highest score as the target detection action category of the test sample, and outputs the target detection action category. In this way, the present invention is especially effective against variability within the same action and similarity between different actions: as long as a few representative descriptors appear at the correct times in the action sequence, the action can be classified correctly. Adding temporal information and joint importance into the Hausdorff distance gives the method more practical value; the influence of noise and temporal misalignment of actions can be overcome by very simple parameter tuning, and a high action recognition rate is achieved on the UTD-MHAD dataset.
Description of the drawings
Fig. 1 is a schematic flowchart of a human motion recognition method based on Citation-KNN proposed by the present invention;
Fig. 2 is the confusion matrix of the present invention on the UTD-MHAD dataset.
Detailed description of the embodiments
Referring to Fig. 1, a human motion recognition method based on Citation-KNN proposed by the present invention comprises:
Step S1, obtaining a test sample.
Step S2, extracting the key frames in the test sample's action sequence by a frame difference method, which specifically comprises:
S21, defining a key frame set KF and a corresponding key frame weight set KFW;
S22, computing the frame difference distance of each frame in the test sample's action sequence;
S23, computing the average of the frame difference distances over the test sample's action sequence;
S24, comparing the frame difference distance of each frame with the average; when the frame difference distance of a frame exceeds the average, inserting that frame into KF and its frame difference distance into KFW;
S25, normalizing each element $KFW_p$ in KFW to complete the key frame extraction for the test sample's action sequence, where the normalization formula is:
$$KFW_p \leftarrow \frac{KFW_p}{SUM(KFW)}$$
where SUM(KFW) denotes the sum of the elements in KFW.
Step S3, optimizing the Hausdorff distance, and computing, according to the optimized Hausdorff distance, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted, which specifically comprises:
S31, adding the weights of the human joints and the temporal information of the frames' joints into the Euclidean distance of the Citation-KNN algorithm, thereby optimizing the Hausdorff distance, which specifically comprises:
Construct action samples $A=\{a_{f_1},a_{f_2},\ldots,a_{f_{num_A}}\}$ and $B=\{b_{f_1},b_{f_2},\ldots,b_{f_{num_B}}\}$; then the distance between frame $f_{k_1}$ in action sample A and frame $f_{k_2}$ in action sample B, incorporating the temporal information of the frames' joints, is defined as follows:
$$d\bigl(a_{f_{k_1}},b_{f_{k_2}}\bigr)=\sum_{i=1}^{M} w_i\left\|p_i^{A,f_{k_1}}-p_i^{B,f_{k_2}}\right\|_2+\alpha\left|\frac{f_{k_1}}{num_A}-\frac{f_{k_2}}{num_B}\right|$$
where $w_i$ denotes the weight of the i-th human joint, $num_A$ and $num_B$ denote the total numbers of frames in A and B, and α is the frame weight, which expresses the importance of the temporal information of the frames within the action sequence. The weight $w_i$ of the i-th human joint is computed as follows:
$$v_i^{f_k}=\frac{\left\|p_i^{f_{k+1}}-p_i^{f_k}\right\|_2}{t}$$
where $v_i^{f_k}$ denotes the speed of the i-th joint at frame $f_k$ and t is the time between two frames;
If the average speed of the i-th joint over an action sample is $\bar v_i=\frac{1}{num}\sum_{k=1}^{num}v_i^{f_k}$, then the global average speed $\bar v$ of the joints and the joint weight $w_i$ are defined as follows:
$$\bar v=\frac{1}{M}\sum_{i=1}^{M}\bar v_i,\qquad w_i=\frac{\bar v_i}{\sum_{j=1}^{M}\bar v_j}$$
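For illustration, the following Python sketch implements the formulas as reconstructed above; the (num_frames, M, 3) array layout, the normalization of the joint weights, and the default alpha are assumptions of this sketch, not the patent's reference implementation:

```python
import numpy as np

def joint_weights(sample, t=1.0):
    """Joint weights w_i from average joint speeds (the normalization by
    the sum of average speeds is an assumption of this sketch).

    sample: array of shape (num_frames, M, 3) with 3D joint coordinates.
    """
    # v_i^{f_k}: displacement of joint i between consecutive frames over t
    speeds = np.linalg.norm(sample[1:] - sample[:-1], axis=2) / t
    mean_speed = speeds.mean(axis=0)      # average speed of each joint
    return mean_speed / mean_speed.sum()  # weights w_i summing to 1

def frame_distance(frame_a, frame_b, k1, k2, num_a, num_b, w, alpha=0.5):
    """Optimized distance between frame k1 of sample A (frame_a, shape
    (M, 3)) and frame k2 of sample B: joint-weighted Euclidean term plus
    the alpha-weighted temporal-position term."""
    joint_term = float((w * np.linalg.norm(frame_a - frame_b, axis=1)).sum())
    time_term = alpha * abs(k1 / num_a - k2 / num_b)
    return joint_term + time_term
```

The α parameter trades off spatial joint similarity against temporal position; the patent notes that such parameters can be tuned very simply to counter noise and temporal misalignment.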
S32, computing, according to the Citation-KNN algorithm, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted.
In a concrete scheme, suppose there is a test sample $X=\{x_{f_1},x_{f_2},\ldots,x_{f_{num_X}}\}$, where $x_{f_k}$ contains the three-dimensional coordinate information of all joints of the human body in frame $f_k$, and M and N respectively denote the total number of joints in the human body and the total number of samples in the training set. First, the frame difference distance $Fdiff(f_k)$ of frame $f_k$ is given, defined as follows:
$$Fdiff(f_k)=\sum_{i=1}^{M}\left\|p_i^{f_{k+1}}-p_i^{f_k}\right\|_2$$
where $p_i^{f_k}$ denotes the three-dimensional coordinate of the i-th joint at frame $f_k$, and $\left\|p_i^{f_{k+1}}-p_i^{f_k}\right\|_2$ denotes the Euclidean distance of the i-th joint between two consecutive frames. Extracting the key frames from the test sample's action sequence in step S2 greatly reduces the amount of computation for the Hausdorff distances in the next step.
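For illustration only (this sketch is not part of the patent text; the (num_frames, M, 3) array layout, the function name, and the eps guard are assumptions), the frame-difference key frame extraction of step S2 could be written in Python as:

```python
import numpy as np

def extract_key_frames(sample, eps=1e-8):
    """Frame-difference key frame extraction (sketch of step S2).

    sample: array of shape (num_frames, M, 3) with the 3D coordinates of
    the M human joints in every frame of one action sequence.
    Returns (KF, KFW): key frame indices and their normalized weights.
    """
    # Fdiff(f_k): per-joint Euclidean distances between consecutive
    # frames, summed over all M joints
    diffs = np.linalg.norm(sample[1:] - sample[:-1], axis=2).sum(axis=1)

    # A frame is a key frame when its frame-difference distance exceeds
    # the average frame-difference distance of the whole sequence
    kf = np.where(diffs > diffs.mean())[0]

    # Normalize the retained distances so the key frame weights sum to 1
    kfw = diffs[kf] / (diffs[kf].sum() + eps)
    return kf, kfw
```

On a typical sequence this keeps the frames around motion peaks and discards near-static frames, which is what shrinks the later Hausdorff computation.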
The optimized Hausdorff distance $H_{SL}(A,B)$ is computed as follows:
$$H_{SL}(A,B)=\max\bigl(h_S(A,B),\,h_L(B,A)\bigr)$$
where |A| and |B| are the numbers of frames in action samples A and B, and $h_F(A,B)$ is a directed distance: the minimum distances from all frames in action sample A to action sample B are sorted in ascending order, and the value ranked F-th is $h_F(A,B)$:
$$h_F(A,B)=\mathop{F^{\mathrm{th}}}_{a\in A}\ \min_{b\in B} d(a,b)$$
According to the above Hausdorff distance, the distance between the test sample and every sample in the training set is computed; the c nearest-neighbor samples of the test sample in the training set are found, and the r citer samples (the training samples that rank the test sample among their own nearest neighbors) are determined.
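A minimal sketch of the ranked Hausdorff distance and of the retrieval of references and citers, assuming the standard Citation-KNN construction in which a training sample is a citer if the test sample falls among its own r nearest neighbors; the brute-force search and function names are illustrative:

```python
import numpy as np

def directed_rank_distance(A, B, dist, rank):
    """h_F(A, B): sort the minimum distances from every frame of A to
    action sample B in ascending order and take the value at `rank`."""
    mins = sorted(min(dist(a, b) for b in B) for a in A)
    return mins[min(rank, len(mins) - 1)]

def hausdorff_sl(A, B, dist, s, l):
    """H_SL(A, B) = max(h_S(A, B), h_L(B, A)); s and l are the chosen
    ranks (e.g., fractions of |A| and |B|)."""
    return max(directed_rank_distance(A, B, dist, s),
               directed_rank_distance(B, A, dist, l))

def references_and_citers(test, train, sample_dist, c, r):
    """References: the c training samples nearest to the test sample.
    Citers: training samples that would place the test sample among
    their own r nearest neighbors (brute force, O(n^2) distances)."""
    d_test = np.array([sample_dist(test, x) for x in train])
    refs = list(np.argsort(d_test)[:c])
    citers = []
    for j, x in enumerate(train):
        d_xt = sample_dist(x, test)
        d_others = [sample_dist(x, y) for k, y in enumerate(train) if k != j]
        if sum(d < d_xt for d in d_others) < r:
            citers.append(j)
    return refs, citers

# Hypothetical wiring with the earlier sketches:
# sample_dist = lambda P, Q: hausdorff_sl(P, Q, dist=some_frame_dist, s=2, l=2)
```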
Step S4, constructing a score function based on the neighbor samples and citer samples, computing the score of each action category of the test sample according to the score function, obtaining the action category with the highest score as the target detection action category of the test sample, and outputting the target detection action category, which specifically comprises:
Let L denote the set of possible action categories. For a test sample X, let $l^{*}$ be the predicted action category of X, let the $n_r$ nearest neighbors of X be $\{R_1,\ldots,R_{n_r}\}$ with corresponding categories $\{c(R_1),\ldots,c(R_{n_r})\}$, and let the $n_c$ citer bags of X be $\{C_1,\ldots,C_{n_c}\}$ with corresponding categories $\{c(C_1),\ldots,c(C_{n_c})\}$; then $l^{*}$ is defined as follows:
$$l^{*}=\arg\max_{l\in L}\left[\sum_{i=1}^{n_r}\delta\bigl(l,c(R_i)\bigr)+\sum_{j=1}^{n_c}\delta\bigl(l,c(C_j)\bigr)\right]$$
where δ(a, b) = 1 when a = b, and δ(a, b) = 0 otherwise.
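The score function then amounts to counting delta-votes; a short sketch, assuming the categories of the references and citers have already been collected:

```python
from collections import Counter

def predict_category(ref_labels, citer_labels):
    """Citation-KNN score: every reference and every citer casts one
    delta-vote for its own category; l* is the top-voted category."""
    votes = Counter(ref_labels) + Counter(citer_labels)
    return votes.most_common(1)[0][0]

# Example: three references vote wave/wave/punch, two citers vote
# wave/catch -> the predicted category l* is "wave".
print(predict_category(["wave", "wave", "punch"], ["wave", "catch"]))
```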
Referring to Fig. 2, which shows the confusion matrix obtained by the present invention on the UTD-MHAD dataset: it can be seen from the confusion matrix that the actions whose recognition rate reaches 100% are the right-hand high wave and the right-hand forward punch, the actions with the lowest recognition rate are the right-hand catch and the right-hand high throw, and the right-hand high wave is the action into which other actions are most often misclassified. The confusion matrix shows that the method herein achieves an average recognition rate of 93% on the UTD-MHAD dataset; the method therefore realizes a high action recognition rate on the UTD-MHAD dataset.
This embodiment obtains a test sample, extracts the key frames in the test sample's action sequence by a frame difference method, optimizes the Hausdorff distance, computes according to the optimized Hausdorff distance the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted, constructs a score function based on the neighbor samples and citer samples, computes the score of each action category of the test sample according to the score function, obtains the action category with the highest score as the target detection action category of the test sample, and outputs the target detection action category. In this way, the present invention is especially effective against variability within the same action and similarity between different actions: as long as a few representative descriptors appear at the correct times in the action sequence, the action can be classified correctly. Adding temporal information and joint importance into the Hausdorff distance gives the method more practical value; the influence of noise and temporal misalignment of actions can be overcome by very simple parameter tuning, and a high action recognition rate is achieved on the UTD-MHAD dataset.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art who, within the technical scope disclosed by the present invention, makes equivalent substitutions or changes according to the technical solution of the present invention and its inventive concept shall be covered by the protection scope of the present invention.

Claims (6)

1. A human motion recognition method based on Citation-KNN, characterized by comprising:
S1, obtaining a test sample;
S2, extracting the key frames in the test sample's action sequence by a frame difference method;
S3, optimizing the Hausdorff distance, and computing, according to the optimized Hausdorff distance, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted;
S4, constructing a score function based on the neighbor samples and citer samples, computing the score of each action category of the test sample according to the score function, obtaining the action category with the highest score as the target detection action category of the test sample, and outputting the target detection action category.
2. The human motion recognition method based on Citation-KNN according to claim 1, characterized in that step S2 specifically comprises:
S21, defining a key frame set KF and a corresponding key frame weight set KFW;
S22, computing the frame difference distance of each frame in the test sample's action sequence;
S23, computing the average of the frame difference distances over the test sample's action sequence;
S24, comparing the frame difference distance of each frame with the average; when the frame difference distance of a frame exceeds the average, inserting that frame into KF and its frame difference distance into KFW;
S25, normalizing each element $KFW_p$ in KFW to complete the key frame extraction for the test sample's action sequence, where the normalization formula is:
$$KFW_p \leftarrow \frac{KFW_p}{SUM(KFW)}$$
where SUM(KFW) denotes the sum of the elements in KFW.
3. The human motion recognition method based on Citation-KNN according to claim 1, characterized in that step S3 specifically comprises:
S31, adding the weights of the human joints and the temporal information of the frames' joints into the Euclidean distance of the Citation-KNN algorithm, thereby optimizing the Hausdorff distance;
S32, computing, according to the Citation-KNN algorithm, the c nearest-neighbor samples and r citer samples in the training set of the test sample whose key frames have been extracted.
4. The human motion recognition method based on Citation-KNN according to claim 3, characterized in that step S31 specifically comprises:
constructing action samples $A=\{a_{f_1},a_{f_2},\ldots,a_{f_{num_A}}\}$ and $B=\{b_{f_1},b_{f_2},\ldots,b_{f_{num_B}}\}$; the distance between frame $f_{k_1}$ in action sample A and frame $f_{k_2}$ in action sample B, incorporating the temporal information of the frames' joints, is then defined as follows:
$$d\bigl(a_{f_{k_1}},b_{f_{k_2}}\bigr)=\sum_{i=1}^{M} w_i\left\|p_i^{A,f_{k_1}}-p_i^{B,f_{k_2}}\right\|_2+\alpha\left|\frac{f_{k_1}}{num_A}-\frac{f_{k_2}}{num_B}\right|$$
where $w_i$ denotes the weight of the i-th human joint, $num_A$ and $num_B$ denote the total numbers of frames in A and B, and α is the frame weight.
5. The human motion recognition method based on Citation-KNN according to claim 4, characterized in that the weight $w_i$ of the i-th human joint is computed as follows:
$$v_i^{f_k}=\frac{\left\|p_i^{f_{k+1}}-p_i^{f_k}\right\|_2}{t}$$
where $v_i^{f_k}$ denotes the speed of the i-th joint at frame $f_k$ and t is the time between two frames;
if the average speed of the i-th joint over an action sample is $\bar v_i=\frac{1}{num}\sum_{k=1}^{num}v_i^{f_k}$, the global average speed $\bar v$ of the joints and the joint weight $w_i$ are defined as follows:
$$\bar v=\frac{1}{M}\sum_{i=1}^{M}\bar v_i,\qquad w_i=\frac{\bar v_i}{\sum_{j=1}^{M}\bar v_j}$$
6. The human motion recognition method based on Citation-KNN according to claim 4, characterized in that step S4 specifically comprises:
letting L denote the set of possible action categories; for a test sample X, $l^{*}$ is the predicted action category of X, the $n_r$ nearest neighbors of X are $\{R_1,\ldots,R_{n_r}\}$ with corresponding categories $\{c(R_1),\ldots,c(R_{n_r})\}$, and the $n_c$ citer bags of X are $\{C_1,\ldots,C_{n_c}\}$ with corresponding categories $\{c(C_1),\ldots,c(C_{n_c})\}$; $l^{*}$ is then defined as follows:
$$l^{*}=\arg\max_{l\in L}\left[\sum_{i=1}^{n_r}\delta\bigl(l,c(R_i)\bigr)+\sum_{j=1}^{n_c}\delta\bigl(l,c(C_j)\bigr)\right]$$
where δ(a, b) = 1 when a = b, and δ(a, b) = 0 otherwise.
CN201810234650.0A 2018-03-21 2018-03-21 Citation-KNN-based human body motion recognition method Active CN108520205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810234650.0A CN108520205B (en) 2018-03-21 2018-03-21 Citation-KNN-based human body motion recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810234650.0A CN108520205B (en) 2018-03-21 2018-03-21 Citation-KNN-based human body motion recognition method

Publications (2)

Publication Number Publication Date
CN108520205A true CN108520205A (en) 2018-09-11
CN108520205B CN108520205B (en) 2022-04-12

Family

ID=63433861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810234650.0A Active CN108520205B (en) 2018-03-21 2018-03-21 Citation-KNN-based human body motion recognition method

Country Status (1)

Country Link
CN (1) CN108520205B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087583A1 (en) * 2010-10-06 2012-04-12 Futurewei Technologies, Inc. Video Signature Based on Image Hashing and Shot Detection
CN102122350A (en) * 2011-02-24 2011-07-13 浙江工业大学 Skeletonization and template matching-based traffic police gesture identification method
CN102663437A (en) * 2012-05-03 2012-09-12 中国西安卫星测控中心 Spacecraft classifying and identifying method based on generalized Hough transformation
US20140294360A1 (en) * 2013-03-26 2014-10-02 Disney Enterprises, Inc. Methods and systems for action recognition using poselet keyframes
CN103400154A (en) * 2013-08-09 2013-11-20 电子科技大学 Human body movement recognition method based on surveillance isometric mapping
CN103455794A (en) * 2013-08-23 2013-12-18 济南大学 Dynamic gesture recognition method based on frame fusion technology
CN106886751A (en) * 2017-01-09 2017-06-23 深圳数字电视国家工程实验室股份有限公司 A kind of gesture identification method and system
CN107229921A (en) * 2017-06-09 2017-10-03 济南大学 Dynamic gesture identification method based on Hausdorff distances

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190578A (en) * 2018-09-13 2019-01-11 合肥工业大学 The sign language video interpretation method merged based on convolution network with Recognition with Recurrent Neural Network
CN110524559A (en) * 2019-08-30 2019-12-03 成都未至科技有限公司 Intelligent human-machine interaction system and method based on human behavior data
CN110524559B (en) * 2019-08-30 2022-06-10 成都未至科技有限公司 Intelligent man-machine interaction system and method based on personnel behavior data
CN113314209A (en) * 2021-06-11 2021-08-27 吉林大学 Human body intention identification method based on weighted KNN
CN113314209B (en) * 2021-06-11 2023-04-18 吉林大学 Human body intention identification method based on weighted KNN

Also Published As

Publication number Publication date
CN108520205B (en) 2022-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant