CN104299004A - Hand gesture recognition method based on multi-feature fusion and fingertip detecting - Google Patents

Hand gesture recognition method based on multi-feature fusion and fingertip detecting

Info

Publication number
CN104299004A
CN104299004A (application CN201410568977.3A)
Authority
CN
China
Prior art keywords
gesture
defect
point
finger tip
boundary rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410568977.3A
Other languages
Chinese (zh)
Other versions
CN104299004B (en)
Inventor
于慧敏
盛亚婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201410568977.3A priority Critical patent/CN104299004B/en
Publication of CN104299004A publication Critical patent/CN104299004A/en
Application granted granted Critical
Publication of CN104299004B publication Critical patent/CN104299004B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a hand gesture recognition method based on multi-feature fusion and fingertip detection. The method comprises a training process and a recognition process. In the training process, reasonable gesture features are selected for complex hand gestures, a multi-feature-fusion feature extraction algorithm is applied, and the gestures are used to train a support vector machine, forming a training model. In the recognition process, gesture detection is first performed on the input video image sequence, multiple features are then extracted and fused, and the fused features are input into the support vector machine to obtain a recognition result. At the same time, defect-based fingertip detection is performed on the gesture: a defect filter locates the positions of the fingertips, and the SVM recognition result and the fingertip detection result are combined to produce the final gesture recognition result. The method effectively addresses the low gesture recognition rate in complex scenes, meets real-time requirements, and is well suited to human-machine interaction.

Description

Hand gesture recognition method based on multi-feature fusion and fingertip detection
Technical field
The present invention relates to gesture recognition methods, and in particular to a hand gesture recognition method based on multi-feature fusion and fingertip detection.
Background technology
With the rapid development and ever wider application of computers in modern society, the demand for human-computer interaction technology keeps growing. Among interaction techniques, gestures are a natural mode of interaction that matches human behavioral habits; their intuitive, convenient and natural character has attracted wide attention, making gestures an ideal choice for novel human-machine interaction. Gesture recognition is one of the most critical steps in an interactive system, and its recognition performance directly affects the communication capability between humans and computers.
Analysis of existing research and practical applications shows that the main technical difficulty in the gesture recognition field is the conflict between system real-time performance and gesture recognition rate. To obtain a higher recognition rate, researchers usually retain as many features as possible to characterize the gesture and recognize it with complex algorithms; this inevitably reduces recognition speed, so the real-time requirement is not met. Conversely, to suit real-time systems, the computational load can only be reduced by lowering the feature dimension, which amplifies the influence of noise, places high demands on the preceding hand gesture segmentation, lowers the recognition rate, and restricts the recognizable gestures to simple kinds, limiting practical applicability.
Summary of the invention
To solve the above problems in the prior art, the invention discloses a hand gesture recognition method based on multi-feature fusion and fingertip detection. The method selects reasonable gesture features, namely Hu moment features, defect features and ratio features, and uses a multi-feature-fusion feature extraction algorithm, so that the computational cost of the features is small and their effectiveness is high. Combined with a defect-based fingertip detection method, the recognition accuracy is further improved. In this way the method effectively addresses the low gesture recognition rate in complex scenes while meeting the real-time requirement.
The present invention adopts the following technical scheme: a hand gesture recognition method based on multi-feature fusion and fingertip detection, comprising the following steps:
Step 1): training process: select reasonable gesture features for complex gestures, apply the multi-feature-fusion feature extraction algorithm, and train a support vector machine on the gestures to form a training model;
Step 2): recognition process: perform gesture detection on the input image sequence, extract and fuse multiple features of the detected gesture, and input them into the support vector machine to obtain the SVM recognition result; at the same time, perform defect-based fingertip detection on the gesture, and combine the two results to output the final recognition result.
Further, the detailed procedure of the training process described in step 1), in which reasonable gesture features are selected for complex gestures, the multi-feature-fusion feature extraction algorithm is applied, and a support vector machine is trained on the gestures to form a training model, is as follows:
Step 1.1): Hu Moment Feature Extraction:
Compute the Hu moment features of each gesture image and normalize them. The Hu moment feature can be expressed as:
Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7)
where φ1 to φ7 are the seven components of the Hu moment feature.
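The seven components above follow the standard Hu construction from scale-normalized central moments. As an illustrative sketch only (the patent does not specify an implementation; the function name and the list-of-rows image format are assumptions), they can be computed from a binary gesture image as follows:

```python
def hu_moments(img):
    """Seven Hu moment invariants of a binary image (list of rows of 0/1)."""
    pix = [(x, y, v) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    m00 = sum(v for _, _, v in pix)
    xc = sum(x * v for x, _, v in pix) / m00   # centroid x
    yc = sum(y * v for _, y, v in pix) / m00   # centroid y

    def eta(p, q):
        # scale-normalized central moment eta_pq = mu_pq / m00^(1+(p+q)/2)
        mu = sum((x - xc) ** p * (y - yc) ** q * v for x, y, v in pix)
        return mu / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi5 = ((n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    phi6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03))
    phi7 = ((3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return (phi1, phi2, phi3, phi4, phi5, phi6, phi7)
```

For a fully symmetric shape such as a filled square, all components other than φ1 vanish, which is a quick sanity check on an implementation.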
Step 1.2): defect characteristic extracts:
The defect region of a gesture is the part obtained by subtracting the gesture contour from the gesture convex hull. The number of defects is computed as follows:
Obtain the gesture contour by an eight-neighborhood search procedure, and test whether the polygon enclosing the contour is convex; if it is convex, this convex polygon is the gesture convex hull. Then smooth the convex hull by polygonal approximation, and compute the number of gesture defects from the gesture contour and the gesture convex hull. Each defect contains three points, the start point, the end point and the depth point, together with the distance between the depth point and the convex hull; these are denoted ptStart, ptEnd, ptFar and Depth respectively.
Step 1.3): ratio characteristic extracts:
Contour perimeter to area ratio:
Let the gesture contour perimeter be ConLenght and the gesture contour area be ConArea. The contour perimeter to area ratio feature of the gesture is defined as:
ConLA = ConLenght / ConArea
Contour perimeter to bounding-rectangle perimeter ratio:
Let the bounding-rectangle perimeter of the gesture be RectLenght. The contour perimeter to bounding-rectangle perimeter ratio feature is defined as:
LenCR = ConLenght / RectLenght
Contour area to bounding-rectangle area ratio:
Let the bounding-rectangle area of the gesture be RectArea. The contour area to bounding-rectangle area ratio feature is defined as:
AreaCR = ConArea / RectArea
Bounding-rectangle aspect ratio:
Let the bounding-rectangle width be W and its height be H. The bounding-rectangle aspect ratio is:
α = W / H
Centroid to upper/lower bounding-rectangle boundary distance ratio:
Let O denote the centroid position of the gesture, and let H1 and H2 denote the distances from the gesture centroid to the upper and lower boundaries of the bounding rectangle, respectively. The centroid to upper/lower boundary distance ratio is defined as:
β = H1 / H2
Centroid to left/right bounding-rectangle boundary distance ratio:
Let W1 and W2 denote the distances from the gesture centroid to the left and right boundaries of the bounding rectangle, respectively. The centroid to left/right boundary distance ratio is defined as:
η = W1 / W2
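The six ratios above are plain arithmetic once the contour, centroid and bounding rectangle are known. A minimal sketch (the function name and argument layout are hypothetical; the centroid and rectangle are assumed to be given):

```python
def ratio_features(con_len, con_area, rect_x, rect_y, rect_w, rect_h, cg_x, cg_y):
    """Six ratio features; (rect_x, rect_y) is the bounding rectangle's
    top-left corner and (cg_x, cg_y) the gesture centroid O."""
    rect_len = 2 * (rect_w + rect_h)      # bounding-rectangle perimeter RectLenght
    rect_area = rect_w * rect_h           # bounding-rectangle area RectArea
    con_la = con_len / con_area           # ConLA: contour perimeter / area
    len_cr = con_len / rect_len           # LenCR: contour / rectangle perimeter
    area_cr = con_area / rect_area        # AreaCR: contour / rectangle area
    alpha = rect_w / rect_h               # aspect ratio W / H
    h1, h2 = cg_y - rect_y, rect_y + rect_h - cg_y   # centroid to top / bottom
    w1, w2 = cg_x - rect_x, rect_x + rect_w - cg_x   # centroid to left / right
    beta, eta = h1 / h2, w1 / w2
    return con_la, len_cr, area_cr, alpha, beta, eta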
Step 1.4): multiple features fusion:
Fuse the Hu moment feature, the defect feature and the six ratio features extracted in steps 1.1) to 1.3) into a single feature vector, denoted feature, which characterizes the gesture image:
feature = {Hu, numDefects, ConLA, LenCR, AreaCR, α, β, η}
where numDefects is the number of defects and Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7).
Step 1.5): support vector machine is trained:
Input the feature vectors of the above gesture sample images, together with their class labels, into the support vector machine for training, where each class label marks the gesture kind.
Further, the detailed procedure of the recognition process described in step 2), in which gesture detection is performed on the input image sequence, multiple features are extracted from the detected gesture and fused, the fused features are input into the support vector machine to obtain the SVM recognition result, and, at the same time, defect-based fingertip detection is performed on the gesture and the two results are combined to output the final recognition result, is as follows:
Step 2.1): gestures detection:
Apply the improved mixture-of-Gaussians background modeling based on spatio-temporal information to the input video image sequence, combine it with the skin color detection result based on multiple color spaces, and obtain the binarized hand gesture segmentation image after filtering and morphological operations on the combined result.
Step 2.2): feature extraction:
Compute the feature vector of the gesture image from its binary map according to steps 1.1) to 1.4);
Step 2.3): SVM identifies:
Input the feature vector of the gesture image into the support vector machine and output the SVM recognition result.
Step 2.4): finger tip detects:
The number of fingertips equals the number of extended fingers; fingertip detection covers both the fingertip count and the fingertip positions. According to step 1.2), each gesture defect contains the three defect points ptStart, ptEnd and ptFar, and the fingertip points are among these defect points. Therefore an effective defect point filter is established, and the effective defect points it selects are the fingertip points. The filter rules are as follows:
i. The distance between the start point and the depth point of a defect is greater than a certain proportion of the gesture bounding-rectangle height H:
Lenght(ptStart, ptFar) > αH, where α is a scale factor.
ii. The distance between the depth point and the end point of a defect is greater than the same proportion of the gesture bounding-rectangle height H:
Lenght(ptEnd, ptFar) > αH
iii. The angle formed by the start point, depth point and end point of a defect is smaller than a threshold T_angle:
angle(ptStart, ptFar, ptEnd) < T_angle
iv. The start point, depth point and end point of a defect lie within a certain range of the gesture bounding rectangle:
y_bounding < y_ptStart < y_bounding + βH
y_bounding < y_ptEnd < y_bounding + βH
y_bounding < y_ptFar < y_bounding + βH
where β is a scale factor and H is the gesture bounding-rectangle height.
v. When the distance between two defect points is smaller than T_dis, the two points approximately coincide and are judged to be the same defect point:
Lenght(pt_i, pt_j) < T_dis
When a defect point satisfies all the above conditions simultaneously, it passes the filter and is judged to be an effective defect point, i.e. a fingertip point; the number and positions of the effective defect points are recorded.
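Rules i to v can be sketched as a small filter in Python. The default threshold values below (α, β, T_angle, T_dis) are hypothetical; the patent leaves them as tunable scale factors and thresholds:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_deg(a, b, c):
    # angle at vertex b formed by points a and c, in degrees
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cosv = (v1[0] * v2[0] + v1[1] * v2[1]) / (dist(a, b) * dist(c, b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosv))))

def screen_defects(defects, rect_y, rect_h,
                   alpha=0.2, beta=0.8, t_angle=90.0, t_dis=10.0):
    """defects: list of (ptStart, ptEnd, ptFar) tuples.
    Returns the effective defect points (fingertip candidates)."""
    valid = []
    for s, e, f in defects:
        if dist(s, f) <= alpha * rect_h:        # rule i: deep enough (start side)
            continue
        if dist(e, f) <= alpha * rect_h:        # rule ii: deep enough (end side)
            continue
        if angle_deg(s, f, e) >= t_angle:       # rule iii: narrow valley angle
            continue
        if not all(rect_y < y < rect_y + beta * rect_h
                   for y in (s[1], e[1], f[1])):  # rule iv: inside the band
            continue
        # rule v: a point nearly coinciding with one already kept is the same defect
        if any(dist(s, v[0]) < t_dis for v in valid):
            continue
        valid.append((s, e, f))
    return valid
```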
Step 2.5): comprehensive recognition result:
Compare the result output by the support vector machine with the fingertip count output by fingertip detection; when the two are consistent, output the recognition result.
The present invention adopts above technical scheme compared with prior art, has following technique effect:
1) The multi-feature-fusion feature extraction method has a small computational cost, and each feature describes the gesture from a different perspective, which effectively corrects errors caused by false detections of any single feature; a higher recognition rate is achieved with as little computation as possible.
2) The defect-based fingertip detection method is intuitive and easy to understand, matches prior knowledge of gestures and human habits of interpretation, and requires little computation. Compared with template matching methods that depend on templates, contour analysis methods that require extensive curvature and distance computations, and intuitive heuristics that involve complicated thinning procedures, defect-based fingertip detection locates the fingertip part of the gesture quickly and simply, further improving the real-time performance of the system while guaranteeing accuracy.
3) The recognition result of the multi-feature-fusion support vector machine is combined with fingertip detection, and the two jointly determine the final output, which further improves the gesture recognition rate and reduces the false detection risk of any single recognition method.
Accompanying drawing explanation
Fig. 1 is the flowchart of the gesture recognition of the present invention;
Fig. 2 is the flowchart of the training process;
Fig. 3 is the flowchart of fingertip detection.
Embodiment
The technical scheme of the present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
The following embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and concrete operating procedure, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment
The present embodiment processes a video sequence (640×480 pixels, 30 fps) captured by a Logitech C710 webcam. The video was shot casually in an indoor scene with a complex background, skin-colored background objects, and illumination changes; the gesture set contains the seven gestures 0, 1, 2, 3, 4, 5 and 8. The present embodiment comprises the following steps:
Step 1): training process: input all gesture sample images one by one into the training database, select the Hu moment features, defect features and ratio features, apply the multi-feature-fusion feature extraction algorithm, and train a support vector machine on the gestures to form a training model;
In the present embodiment, the detailed procedure of the training process described in step 1) is as follows (Fig. 2 is the flowchart of the training process):
Step 1.1): training sample prepares:
Input the gesture sample images of the seven kinds, together with their class labels, one by one into the training database. The database contains 1369 gesture sample images, and the seven gesture kinds are labeled 0, 1, 2, 3, 4, 5 and 8 respectively.
Step 1.2): Hu Moment Feature Extraction:
Compute the Hu moment features of each gesture image and normalize them. The Hu moment feature can be expressed as:
Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7)
where φ1 to φ7 are the seven components of the Hu moment feature.
Step 1.3): defect characteristic extracts:
The defect region of a gesture is the part obtained by subtracting the gesture contour from the gesture convex hull. The number of defects is computed as follows:
Obtain the gesture contour by an eight-neighborhood search procedure, and test whether the polygon enclosing the contour is convex; if it is convex, this convex polygon is the gesture convex hull. Then smooth the convex hull by polygonal approximation, and compute the number of gesture defects from the gesture contour and the gesture convex hull. Each defect contains three points, the start point, the end point and the depth point, together with the distance between the depth point and the convex hull; these are denoted ptStart, ptEnd, ptFar and Depth respectively.
Step 1.4): ratio characteristic extracts:
Contour perimeter to area ratio:
Let the gesture contour perimeter be ConLenght and the gesture contour area be ConArea. The contour perimeter to area ratio feature of the gesture is defined as:
ConLA = ConLenght / ConArea
Contour perimeter to bounding-rectangle perimeter ratio:
Let the bounding-rectangle perimeter of the gesture be RectLenght. The contour perimeter to bounding-rectangle perimeter ratio feature is defined as:
LenCR = ConLenght / RectLenght
Contour area to bounding-rectangle area ratio:
Let the bounding-rectangle area of the gesture be RectArea. The contour area to bounding-rectangle area ratio feature is defined as:
AreaCR = ConArea / RectArea
Bounding-rectangle aspect ratio:
Let the bounding-rectangle width be W and its height be H. The bounding-rectangle aspect ratio is:
α = W / H
Centroid to upper/lower bounding-rectangle boundary distance ratio:
Let O denote the centroid position of the gesture, and let H1 and H2 denote the distances from the gesture centroid to the upper and lower boundaries of the bounding rectangle, respectively. The centroid to upper/lower boundary distance ratio is defined as:
β = H1 / H2
Centroid to left/right bounding-rectangle boundary distance ratio:
Let W1 and W2 denote the distances from the gesture centroid to the left and right boundaries of the bounding rectangle, respectively. The centroid to left/right boundary distance ratio is defined as:
η = W1 / W2
Step 1.5): multiple features fusion:
Fuse the Hu moment feature, the defect feature and the six ratio features extracted in steps 1.2) to 1.4) into a single feature vector, denoted feature, which characterizes the gesture image:
feature = {Hu, numDefects, ConLA, LenCR, AreaCR, α, β, η}
where numDefects is the number of defects and Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7).
Step 1.6): support vector machine is trained:
Input the feature vectors of the above gesture sample images, together with their class labels, into the support vector machine for training.
Step 2): recognition process: perform gesture detection on the input image sequence, extract and fuse multiple features of the detected gesture, and input them into the support vector machine to obtain the SVM recognition result; at the same time, perform defect-based fingertip detection on the gesture, and combine the two results to output the final recognition result.
In the present embodiment, the detailed procedure of the recognition process described in step 2) is as follows (Fig. 1 is the flowchart of the gesture recognition of the present invention):
Step 2.1): gestures detection:
Perform skin color detection on the input video image sequence using the skin color detection method with multiple color space components: establish a new HLS-CbCr color space, transform the image into this space, build a skin color model from pre-extracted skin color samples, and detect the skin color regions in the image according to the distribution of the skin color model in the HLS-CbCr space. At the same time, perform the improved mixture-of-Gaussians background modeling based on spatio-temporal information: establish a Gaussian mixture model for each background pixel to identify the background part of the image and thereby extract the foreground region. A detection zone R(x, y) is set according to the skin color detection result, different learning rates are assigned to the detection zone and the non-detection zone, and the number of times each pixel is judged as background is recorded; learning rates are assigned according to this count so that the foreground region in the image is detected more quickly. After the two detection results are combined, filtering and morphological operations yield the binarized hand gesture segmentation image.
Step 2.2): feature extraction:
Compute the feature vector of the gesture image from its binary map according to steps 1.2) to 1.5);
Step 2.3): SVM identifies:
Input the feature vector of the gesture image into the support vector machine and output the SVM recognition result, which is one of the seven gesture types 0, 1, 2, 3, 4, 5 and 8.
Step 2.4): finger tip detects:
Fig. 3 is the flowchart of fingertip detection. The number of fingertips equals the number of extended fingers; fingertip detection covers both the fingertip count and the fingertip positions. According to step 1.3), each gesture defect contains the three defect points ptStart, ptEnd and ptFar, and the fingertip points are among these defect points. Therefore an effective defect point filter is established, and the effective defect points it selects are the fingertip points. The filter rules are as follows:
i. The distance between the start point and the depth point of a defect is greater than a certain proportion of the gesture bounding-rectangle height H:
Lenght(ptStart, ptFar) > αH, where α is a scale factor.
ii. The distance between the depth point and the end point of a defect is greater than the same proportion of the gesture bounding-rectangle height H:
Lenght(ptEnd, ptFar) > αH
iii. The angle formed by the start point, depth point and end point of a defect is smaller than a threshold T_angle:
angle(ptStart, ptFar, ptEnd) < T_angle
iv. The start point, depth point and end point of a defect lie within a certain range of the gesture bounding rectangle:
y_bounding < y_ptStart < y_bounding + βH
y_bounding < y_ptEnd < y_bounding + βH
y_bounding < y_ptFar < y_bounding + βH
where β is a scale factor and H is the gesture bounding-rectangle height.
v. When the distance between two defect points is smaller than T_dis, the two points approximately coincide and are judged to be the same defect point:
Lenght(pt_i, pt_j) < T_dis
When a defect point satisfies all the above conditions simultaneously, it passes the filter and is judged to be an effective defect point, i.e. a fingertip point; the number and positions of the effective defect points are recorded.
Step 2.5): comprehensive recognition result:
Compare the result output by the support vector machine with the fingertip count output by fingertip detection; when the two are consistent, output the recognition result.
All experiments were run on a PC with an Intel(R) Core(TM) i5 CPU 750 @ 2.67 GHz and 4.00 GB of RAM.

Claims (3)

1. A hand gesture recognition method based on multi-feature fusion and fingertip detection, characterized in that it comprises the following steps:
Step 1): training process: select reasonable gesture features for complex gestures, apply the multi-feature-fusion feature extraction algorithm, and train a support vector machine on the gestures to form a training model;
Step 2): recognition process: perform gesture detection on the input image sequence, extract and fuse multiple features of the detected gesture, and input them into the support vector machine to obtain the SVM recognition result; at the same time, perform defect-based fingertip detection on the gesture, and combine the SVM recognition result and the fingertip detection result to output the final recognition result.
2. The hand gesture recognition method based on multi-feature fusion and fingertip detection according to claim 1, characterized in that the detailed procedure of the training process described in step 1) is as follows:
Step 1.1): feature extraction:
Perform feature extraction on the gesture image, extracting the Hu moment features, defect features and ratio features of the gesture, wherein the ratio features comprise the following six features: contour perimeter to contour area ratio, contour perimeter to bounding-rectangle perimeter ratio, contour area to bounding-rectangle area ratio, bounding-rectangle aspect ratio, centroid to upper/lower bounding-rectangle boundary distance ratio, and centroid to left/right bounding-rectangle boundary distance ratio;
Step 1.2): multiple features fusion:
Fuse the Hu moment feature, the defect feature and the six ratio features extracted in step 1.1) into a single feature vector, which characterizes the gesture image;
Step 1.3): support vector machine is trained:
Input the feature vectors of the above gesture sample images, together with their class labels, into the support vector machine for training, where each class label marks the gesture kind.
3. The hand gesture recognition method based on multi-feature fusion and fingertip detection according to claim 1, characterized in that the detailed procedure of the recognition process described in step 2) is as follows:
Step 2.1): gestures detection:
Perform foreground detection and skin color detection on the input video image sequence, combine the two detection results, and then carry out filtering and morphological operations to obtain the binarized hand gesture segmentation image;
Step 2.2): feature extraction:
Perform feature extraction and fusion on the binary map of the gesture according to steps 1.1) to 1.2) to obtain the feature vector of the gesture image;
Step 2.3): SVM identifies:
Input the feature vector of the gesture image into the support vector machine and output the SVM recognition result;
Step 2.4): finger tip detects:
Adopt the fingertip detection method based on gesture defects: according to step 1.1), each gesture defect contains the three defect points ptStart, ptEnd and ptFar; establish an effective defect point filter, and the effective defect points it selects are the fingertip points; the filter rules are as follows:
i. the distance between the start point and the depth point of a defect is greater than a certain proportion of the gesture bounding-rectangle height H:
Lenght(ptStart, ptFar) > αH, where α is a scale factor;
ii. the distance between the depth point and the end point of a defect is greater than the same proportion of the gesture bounding-rectangle height H:
Lenght(ptEnd, ptFar) > αH;
iii. the angle formed by the start point, depth point and end point of a defect is smaller than a threshold T_angle:
angle(ptStart, ptFar, ptEnd) < T_angle;
iv. the start point, depth point and end point of a defect lie within a certain range of the gesture bounding rectangle:
y_bounding < y_ptStart < y_bounding + βH
y_bounding < y_ptEnd < y_bounding + βH
y_bounding < y_ptFar < y_bounding + βH
where β is a scale factor and H is the gesture bounding-rectangle height;
v. when the distance between two defect points is smaller than T_dis, the two points approximately coincide and are judged to be the same defect point:
Lenght(pt_i, pt_j) < T_dis;
when a defect point satisfies all the above conditions simultaneously, it passes the filter and is judged to be an effective defect point, i.e. a fingertip point, and the number and positions of the effective defect points are recorded;
Step 2.5): comprehensive recognition result:
Compare the result output by the support vector machine with the fingertip count output by fingertip detection; when the two are consistent, output the recognition result.
CN201410568977.3A 2014-10-23 2014-10-23 A kind of gesture identification method based on multiple features fusion and finger tip detection Active CN104299004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410568977.3A CN104299004B (en) 2014-10-23 2014-10-23 A kind of gesture identification method based on multiple features fusion and finger tip detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410568977.3A CN104299004B (en) 2014-10-23 2014-10-23 A kind of gesture identification method based on multiple features fusion and finger tip detection

Publications (2)

Publication Number Publication Date
CN104299004A true CN104299004A (en) 2015-01-21
CN104299004B CN104299004B (en) 2018-05-01

Family

ID=52318725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410568977.3A Active CN104299004B (en) 2014-10-23 2014-10-23 A kind of gesture identification method based on multiple features fusion and finger tip detection

Country Status (1)

Country Link
CN (1) CN104299004B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678150A (en) * 2016-01-11 2016-06-15 成都布林特信息技术有限公司 User authority managing method
CN106295464A (en) * 2015-05-15 2017-01-04 济南大学 Gesture identification method based on Shape context
CN106599771A (en) * 2016-10-21 2017-04-26 上海未来伙伴机器人有限公司 Gesture image recognition method and system
CN107133361A (en) * 2017-05-31 2017-09-05 北京小米移动软件有限公司 Gesture identification method, device and terminal device
CN107133562A (en) * 2017-03-17 2017-09-05 华南理工大学 A kind of gesture identification method based on extreme learning machine
CN108496142A (en) * 2017-04-07 2018-09-04 深圳市柔宇科技有限公司 A kind of gesture identification method and relevant apparatus
CN108932053A (en) * 2018-05-21 2018-12-04 腾讯科技(深圳)有限公司 Drawing practice, device, storage medium and computer equipment based on gesture
CN109271838A (en) * 2018-07-19 2019-01-25 重庆邮电大学 A kind of three parameter attributes fusion gesture identification method based on fmcw radar
CN111160173A (en) * 2019-12-19 2020-05-15 深圳市优必选科技股份有限公司 Robot-based gesture recognition method and robot
CN111626364A (en) * 2020-05-28 2020-09-04 中国联合网络通信集团有限公司 Gesture image classification method and device, computer equipment and storage medium
CN111950514A (en) * 2020-08-26 2020-11-17 重庆邮电大学 Depth camera-based aerial handwriting recognition system and method
CN111160173B (en) * 2019-12-19 2024-04-26 深圳市优必选科技股份有限公司 Gesture recognition method based on robot and robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194097A (en) * 2010-03-11 2011-09-21 范为 Multifunctional method for identifying hand gestures
CN102402680A (en) * 2010-09-13 2012-04-04 株式会社理光 Hand and indication point positioning method and gesture confirming method in man-machine interactive system
US8831379B2 (en) * 2008-04-04 2014-09-09 Microsoft Corporation Cartoon personalization
US20140307919A1 (en) * 2013-04-15 2014-10-16 Omron Corporation Gesture recognition device, gesture recognition method, electronic apparatus, control program, and recording medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Weifang: "Research on SVM-Based Biometric Feature Fusion Technology", Wanfang Data Enterprise Knowledge Service Platform *
Zhang Kai: "Research on Gesture Recognition for Human-Computer Interaction Fusing Depth Data", China Doctoral Dissertations Full-text Database (Information Science and Technology) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295464A (en) * 2015-05-15 2017-01-04 University of Jinan Gesture recognition method based on shape context
CN105678150A (en) * 2016-01-11 2016-06-15 Chengdu Bulinte Information Technology Co Ltd User permission management method
CN106599771A (en) * 2016-10-21 2017-04-26 Shanghai Future Partner Robot Co Ltd Gesture image recognition method and system
CN107133562B (en) * 2017-03-17 2021-05-14 South China University of Technology Gesture recognition method based on extreme learning machine
CN107133562A (en) * 2017-03-17 2017-09-05 South China University of Technology Gesture recognition method based on extreme learning machine
CN108496142A (en) * 2017-04-07 2018-09-04 Shenzhen Royole Technologies Co Ltd Gesture recognition method and related device
CN108496142B (en) * 2017-04-07 2021-04-27 Shenzhen Royole Technologies Co Ltd Gesture recognition method and related device
CN107133361A (en) * 2017-05-31 2017-09-05 Beijing Xiaomi Mobile Software Co Ltd Gesture recognition method, device, and terminal device
CN108932053A (en) * 2018-05-21 2018-12-04 Tencent Technology (Shenzhen) Co Ltd Gesture-based drawing method, device, storage medium, and computer device
CN109271838A (en) * 2018-07-19 2019-01-25 Chongqing University of Posts and Telecommunications Gesture recognition method based on fusion of three parameter features from FMCW radar
CN111160173A (en) * 2019-12-19 2020-05-15 UBTech Robotics Corp Robot-based gesture recognition method and robot
CN111160173B (en) * 2019-12-19 2024-04-26 UBTech Robotics Corp Robot-based gesture recognition method and robot
CN111626364A (en) * 2020-05-28 2020-09-04 China United Network Communications Group Co Ltd Gesture image classification method and device, computer equipment, and storage medium
CN111626364B (en) * 2020-05-28 2023-09-01 China United Network Communications Group Co Ltd Gesture image classification method and device, computer equipment, and storage medium
CN111950514A (en) * 2020-08-26 2020-11-17 Chongqing University of Posts and Telecommunications Aerial handwriting recognition system and method based on depth camera
CN111950514B (en) * 2020-08-26 2022-05-03 Chongqing University of Posts and Telecommunications Aerial handwriting recognition system and method based on depth camera

Also Published As

Publication number Publication date
CN104299004B (en) 2018-05-01

Similar Documents

Publication Publication Date Title
CN104299004A (en) Hand gesture recognition method based on multi-feature fusion and fingertip detecting
Zhang et al. Research on face detection technology based on MTCNN
CN107168527B (en) First-person-view gesture recognition and interaction method based on region-based convolutional neural networks
CN104766046B (en) Traffic sign detection and recognition method using color and shape features
CN102194108B (en) Smiling-face expression recognition method based on clustering linear discriminant analysis with feature selection
CN104063059B (en) Real-time gesture recognition method based on finger segmentation
EP2980755B1 (en) Method for partitioning area, and inspection device
CN107341517A (en) Multi-scale small-object detection method based on deep learning with inter-level feature fusion
CN110796033B (en) Static gesture recognition method based on bounding box model
CN103034852B (en) Detection method for pedestrians of particular colors in still-camera scenes
CN103971102A (en) Static gesture recognition method based on finger contours and decision trees
CN105205480A (en) Human-eye localization method and system for complex scenes
CN108846359A (en) Gesture recognition method based on skin-color region segmentation fused with machine-learning algorithms, and application thereof
CN103310194A (en) Method for detecting pedestrian head and shoulders in video based on overhead pixel gradient direction
CN109255350A (en) New-energy license plate detection method based on video monitoring
CN103440035A (en) Gesture recognition system in three-dimensional space and recognition method thereof
Li et al. Fast and effective text detection
DK2447884T3 (en) A method for the detection and recognition of an object in an image and an apparatus and a computer program therefor
CN109086772A (en) Recognition method and system for distorted adhesive-character image verification codes
CN105678735A (en) Target saliency detection method for foggy images
CN104463138A (en) Text positioning method and system based on visual structure attribute
Vishwakarma et al. Simple and intelligent system to recognize the expression of speech-disabled person
CN103034851A (en) Device and method for hand tracking based on a self-learning skin-color model
CN102184404A (en) Method and device for acquiring the palm region in a palm image
CN107392105B (en) Expression recognition method based on reverse collaborative salient region features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant