CN106503626A - Joined-finger gesture recognition method based on depth image and finger-contour matching - Google Patents


Info

Publication number
CN106503626A
CN106503626A (application CN201610865679.XA)
Authority
CN
China
Prior art keywords
finger
gesture
hand
depth image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610865679.XA
Other languages
Chinese (zh)
Inventor
刘佳
黄静
郑勇
孔剑辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201610865679.XA priority Critical patent/CN106503626A/en
Publication of CN106503626A publication Critical patent/CN106503626A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 — Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a joined-finger gesture recognition method based on depth image and finger-contour matching. Its main steps are: acquiring a depth image, removing noise, segmenting the hand region, detecting finger contours, matching the finger contours against templates, recognizing the joined-finger gesture, and outputting the recognition result. The invention recognizes gestures from captured depth images and performs gesture matching mainly on a global feature of the hand, namely the finger contours, which effectively improves recognition accuracy for hands affected by conditions such as joined fingers (syndactyly). Because a large number of local hand features need not be matched during recognition, the amount of computation is reduced and recognition is accelerated; moreover, using depth images reduces the influence of complex backgrounds and illumination conditions on the recognition result.

Description

Joined-finger gesture recognition method based on depth image and finger-contour matching
Technical field
The invention belongs to the field of human-computer interaction, and in particular relates to a static gesture recognition method based on depth images and the finger Earth Mover's Distance.
Background technology
Static gesture recognition is a key technology in current human-computer interaction. It mainly processes single-frame images, classifies information such as the contour, shape, and size of the hand, and then compares feature points in the image with feature points of the hand shape to recognize the gesture. Current static gesture recognition methods fall into two broad classes. The first class is based on template matching: the feature information of the gesture to be recognized is compared with the gesture feature information in a template library, and the similarity between the two is used for recognition. However, this approach places high demands on the selection of gesture feature points; poorly chosen hand feature points cause problems such as low recognition rate and slow speed, and differences in feature-point positions between different people also introduce varying degrees of recognition error. The second class is based on support vector machines (SVM): the basic idea is to find an optimal hyperplane in the sample space or feature space that maximizes the distance between the hyperplane and the nearest sample sets. However, SVMs are difficult to train on large-scale sample sets, so gesture recognition accuracy cannot be improved further; in addition, multi-class gesture classification with SVMs is considerably difficult, which to some extent limits the range of hand gestures.
For a long time, besides interference from external environmental factors such as background noise and illumination variation, hand conditions in which the fingers are joined together (e.g. syndactyly) have also had a considerable impact on the accuracy of static gesture recognition. Traditional static gesture recognition algorithms cannot improve their accuracy for such hands; the complexity of background noise and random changes in illumination further increase the amount of computation, slow down recognition, and significantly reduce recognition efficiency.
Summary of the invention
To overcome the shortcomings of traditional static gesture recognition algorithms under complex backgrounds and for hand conditions such as joined fingers, the present invention proposes a joined-finger gesture recognition method based on depth image and finger-contour matching.
A joined-finger gesture recognition method based on depth image and finger-contour matching, characterized in that it comprises the following steps:
Step 1: acquire a depth image containing the gesture and remove the noise in the image;
Step 2: segment the gesture contour from the depth image and represent the gesture contour with a time-series curve;
Step 3: from the gesture contour and its time-series curve, segment the finger contours using a threshold decomposition algorithm;
Step 4: drawing on the Earth Mover's Distance (EMD) algorithm, compute FinEMD, the minimum distance the acquired finger contours must be moved during matching against a standard gesture template;
Step 5: compare the computed FinEMD distances against the existing standard templates to obtain the matching result and complete gesture recognition.
Further, in Step 1, an enhanced smoothing filter is used to remove the noise in the depth image.
Further, Step 2 includes: locating the hand using the hand-tracking function in the Kinect for Windows SDK; setting a threshold interval on the depth image to obtain a rough hand region; locating a black marker worn at the wrist using RANSAC so as to segment the hand region from the remaining background more accurately; and representing the hand contour with a time-series curve that records the relative distance between each contour vertex of the hand and a central point.
Further, in Step 3, finger contours are detected with a threshold decomposition algorithm whose threshold is set between 1.5 and 1.7.
Further, in Step 4, FinEMD is computed as follows:

Let $R=\{(r_1,w_{r_1}),\dots,(r_{\bar m},w_{r_{\bar m}})\}$ be the hand signature of the first hand with $\bar m$ clusters, where $r_i$ denotes the $i$-th finger of the whole hand and $w_{r_i}$ the weight of that cluster; let $T=\{(t_1,w_{t_1}),\dots,(t_{\bar n},w_{t_{\bar n}})\}$ be the hand signature of the second hand with $\bar n$ clusters, where $t_j$ denotes its $j$-th finger and $w_{t_j}$ the weight of that cluster.

Unroll the time-series curve of a hand signature: each finger corresponds to one curve segment, so each cluster of the hand signature is regarded as a finger segment of the time-series curve. Each cluster $r_i$ is defined as the angular interval between the left and right endpoints of the $i$-th finger segment on the time-series curve, $r_i=[r_{ia},r_{ib}]$ with $0\le r_{ia}<r_{ib}\le 1$, and the weight $w_{r_i}$ of the cluster is the normalized area of the finger segment.

$D=[d_{ij}]$ is the ground-distance matrix of signatures $R$ and $T$, where the ground distance $d_{ij}$ from cluster $r_i$ to $t_j$ is defined as the minimum sliding distance for the interval $[r_{ia},r_{ib}]$ to completely cover the interval $[t_{ja},t_{jb}]$, namely

$$d_{ij}=\min\{\,|m| : r_{ia}+m\le t_{ja},\; r_{ib}+m\ge t_{jb}\,\}.$$

For two signatures $R$ and $T$, the FinEMD distance is defined as the minimum work needed to move the fingers onto the standard gesture template plus the empty amount left after template matching (in general, some finger regions remain unmatched after the input finger contours are matched against the standard gesture template), i.e.:

$$\mathrm{FinEMD}(R,T)=\beta E_{move}+(1-\beta)E_{empty}=\beta\,\frac{\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} d_{ij}f_{ij}}{\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}}+(1-\beta)\left|\sum_{i=1}^{\bar m} w_{r_i}-\sum_{j=1}^{\bar n} w_{t_j}\right|$$

where $\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}$ is the normalization factor and $f_{ij}$ is the flow from cluster $r_i$ to cluster $t_j$; together the $f_{ij}$ constitute the flow matrix $F$. The parameter $\beta$ balances $E_{move}$ against $E_{empty}$; $E_{empty}$ and $d_{ij}$ are constants for a given pair of signatures. To compute FinEMD, the value of $Fin$ must be found, where $Fin$ is the flow minimizing the work of moving all the fingers:

$$Fin=\arg\min_{F}\,\mathrm{WORK}(R,T,F)=\arg\min_{F}\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} d_{ij}f_{ij}$$

subject to

$$f_{ij}\ge 0,\ 1\le i\le\bar m,\ 1\le j\le\bar n;\qquad \sum_{j=1}^{\bar n} f_{ij}\le w_{r_i},\ 1\le i\le\bar m;\qquad \sum_{i=1}^{\bar m} f_{ij}\le w_{t_j},\ 1\le j\le\bar n;\qquad \sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}=\min\!\Big(\sum_{i=1}^{\bar m} w_{r_i},\ \sum_{j=1}^{\bar n} w_{t_j}\Big).$$

As in the EMD, the flow matrix $F$ is defined so as to find the minimum work needed to match the finger-contour template. The first constraint restricts the direction of movement of the finger contours, from the finger contours to be recognized to the standard gesture template, and the last constraint limits the maximum work expended when matching the finger contours.
Further, in Step 5, the gesture matching formula is:

$$C=\arg\min_{c}\,\mathrm{FinEMD}(H,T_c)$$

where $H$ is the input hand and $T_c$ is the template of class $c$; $\mathrm{FinEMD}(H,T_c)$ denotes the work required to move the fingers to be recognized onto the matching fingers of the gesture template. It is computed by the FinEMD method of Step 4, i.e., the minimum work of moving the fingers onto the standard gesture template plus the empty amount remaining after template matching.
Beneficial effects
Compared with the prior art, the joined-finger gesture recognition method based on finger-contour matching provided by the present invention represents the hand contour with a time-series curve after extracting the hand region, takes the finger contours segmented from the hand as cluster signatures, and computes the displacement FinEMD between the input finger contours and the standard gesture contours to recognize joined-finger gestures. By adding the empty amount remaining after each template match, the method reduces, on top of the global feature (the fingers), the error of locally matching the remaining feature points, so the individual local features of the hand clusters need not be considered. Because gesture recognition is performed on depth images containing the hand, the influence of complex environments and illumination conditions on recognition is greatly reduced; combining a threshold segmentation algorithm to extract the finger regions greatly improves the accuracy of finger segmentation; and matching by the FinEMD distance against standard gesture templates speeds up finger-contour matching and improves recognition accuracy. The invention is applicable to gesture recognition for hand conditions such as joined fingers.
Description of the drawings
Fig. 1 is a flow chart of the gesture recognition method;
Fig. 2 is the hand photograph separated from the depth image;
Fig. 3 is the hand-region image obtained after adjusting the threshold interval;
Fig. 4 is the hand contour image converted from the hand region of Fig. 3;
Fig. 5 is the hand time-series curve of signature R;
Fig. 6 is the hand time-series curve of signature T;
Fig. 7 is the finger time-series curve with $h_f$ between 1.5 and 1.7;
Fig. 8 is the recognition figure for the gesture representing the digit 2;
Fig. 9 is the recognition figure for the gesture representing the digit 3.
Specific embodiments
The technical scheme provided by the present invention is described in detail below with reference to specific embodiments. It should be understood that the following embodiments are only illustrative of the invention and do not limit its scope.
Step 1: pre-process the depth image

Compared with a color image, a depth image eliminates most environmental influences on recognition, but it still contains many random background noises, and this interference degrades the precision of later finger recognition. The depth image therefore needs to be pre-processed; spatial-domain filter enhancement, such as smoothing or sharpening, can be applied to the depth image. The principle of spatial-domain filter enhancement is as follows:

In a general linear spatial filter, the response $T$ at pixel position $(x,y)$ is computed as in formula (1.1) and used as the new gray value of the pixel at $(x,y)$:

$$T = m(-1,-1)f(x-1,y-1) + m(-1,0)f(x-1,y) + \dots + m(1,1)f(x+1,y+1) \qquad (1.1)$$

In general, filtering an image of size $R\times S$ with a template of size $r\times s$ follows formula (1.2):

$$T(x,y)=\sum_{u=-a}^{a}\sum_{v=-b}^{b} m(u,v)\,f(x+u,\,y+v),\qquad a=\tfrac{r-1}{2},\; b=\tfrac{s-1}{2} \qquad (1.2)$$

When the assignment of formula (1.2) has been computed for every image pixel, the whole image has been filtered.

As shown in Fig. 2, the method used here is linear smoothing filtering (neighborhood averaging), which prevents any pixel from differing too much from its surrounding pixels and thus eliminates noise to some extent. Smoothing replaces the gray value of a point with the average of its gray level and the gray levels of the surrounding points under a template; common templates are 3×3, 5×5, and so on. The larger the template, the better the noise elimination, but the blurrier the image becomes. Formula (1.3) is a common linear smoothing template:

$$M=\frac{1}{9}\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}\qquad(1.3)$$

where multiplying by 1/9 keeps each processed pixel from differing too much from the original pixel.

Experiments show that smoothing the image during pre-processing effectively eliminates the random noise around the hand, and the precision of fingertip recognition also increases.
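As a concrete illustration, the 3×3 neighborhood-averaging template of formula (1.3) can be sketched in Python as follows. This is a minimal sketch: the border handling by edge replication is an assumption, since the embodiment does not specify how borders are treated.

```python
import numpy as np

def smooth_3x3(depth):
    """Neighborhood-average smoothing with the 3x3 template of formula (1.3).

    Each output pixel is the mean of the 3x3 window around it (all nine
    coefficients equal to 1/9), which suppresses isolated random noise in
    the depth image. Border pixels are handled by edge replication (an
    assumption of this sketch).
    """
    d = np.pad(depth.astype(float), 1, mode="edge")
    out = np.zeros(depth.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += d[1 + dy : 1 + dy + depth.shape[0],
                     1 + dx : 1 + dx + depth.shape[1]]
    return out / 9.0
```

A larger template (5×5 and so on) would smooth more aggressively at the cost of blur, matching the trade-off described above.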
Step 2: extract the hand region from the depth image pre-processed in Step 1. The specific operations are as follows:
(1) determine the exact position of the hand using the hand-tracking function in the Kinect for Windows SDK;
(2) as shown in Figs. 3 and 4, obtain a rough hand-region picture by setting a threshold interval on the depth image, and convert the picture into a hand contour image;
(3) require the user to wear a black wrist band; by detecting the black pixels, locate the wrist band with RANSAC and thereby segment the hand region from the remaining background more accurately. At this point the hand typically has a resolution of about 100×100 pixels and may show serious distortion;
(4) represent the hand contour with a time-series curve recording the relative distance between each contour vertex of the hand and the central point. The central point is defined as the point of maximum distance from the contour vertices after a distance transform, and the initial point is determined from the black band detected by RANSAC. The horizontal axis of the curve represents the angle of each contour vertex, measured from the initial point and relative to the central point, normalized over 360 degrees; the vertical axis represents the Euclidean distance between the contour vertex and the central point, normalized by the radius of the maximum inscribed circle.
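The time-series representation of operation (4) can be sketched as follows. This is a minimal sketch under the assumptions that the contour is given as vertex coordinates and that the initial point comes from the wrist-band detection; the function and parameter names are illustrative, not the patent's.

```python
import numpy as np

def contour_time_series(contour, center, initial_point, max_inscribed_r):
    """Time-series curve of the hand contour (Step 2, operation (4)).

    contour: (N, 2) array of contour vertices (x, y); center: the hand's
    central point; initial_point: the reference vertex from which angles
    are measured (taken here from the wrist-band detection, an assumption
    of this sketch); max_inscribed_r: radius of the maximum inscribed
    circle. Returns (angles, radii): the angle of each vertex around the
    center, normalized over 360 degrees to [0, 1), and its Euclidean
    distance to the center normalized by max_inscribed_r.
    """
    v = np.asarray(contour, float) - np.asarray(center, float)
    ang = np.arctan2(v[:, 1], v[:, 0])
    v0 = np.asarray(initial_point, float) - np.asarray(center, float)
    ang0 = np.arctan2(v0[1], v0[0])
    angles = np.mod(ang - ang0, 2.0 * np.pi) / (2.0 * np.pi)  # 360° -> [0, 1)
    radii = np.linalg.norm(v, axis=1) / max_inscribed_r       # normalize radius
    return angles, radii
```

On this normalized curve the fingertips appear as peaks well above 1 (distances larger than the inscribed radius), which is what the threshold decomposition of Step 3 exploits.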
Step 3: segment the finger contours from the hand contour and its time-series curve obtained in Step 2, using the threshold decomposition algorithm. The specific operations are as follows:
As shown in Fig. 7, in the time-series curve of the hand from Step 2, each finger corresponds to a peak. Each finger is taken as one signature cluster of the whole hand, and a segment is defined on the time-series curve: any stretch whose vertical-axis value exceeds a threshold $h_f$ is defined as a finger segment, yielding the matching region that represents that finger. Considering the differences in overall hand contour and in individual finger length, the value of $h_f$ strongly affects finger extraction, so a suitable value must be set. Extensive trials with different values of $h_f$, as reflected in the time-series curves, show that if $h_f$ is below 1.5 the finger regions cannot be separated, while if $h_f$ is above 1.7 part of the finger information, such as the thumb region, is lost. To segment the finger regions effectively, $h_f$ is therefore preferably set between 1.5 and 1.7.
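The threshold decomposition of this step can be sketched as a linear scan over the time-series curve. This is a sketch only: the patent does not give the exact scan, and a segment that wraps past the end of the curve is simply closed at the last sample here.

```python
def finger_segments(angles, radii, h_f=1.6):
    """Threshold decomposition (Step 3): return the angular intervals
    [r_ia, r_ib] on which the normalized radius of the time-series curve
    exceeds the threshold h_f (the embodiment reports 1.5-1.7 as the
    workable range). The curve is assumed to be sorted by angle.
    """
    segments, start, prev = [], None, None
    for a, r in zip(angles, radii):
        if r > h_f and start is None:
            start = a                       # segment opens: left endpoint r_ia
        elif r <= h_f and start is not None:
            segments.append((start, prev))  # segment closes: right endpoint r_ib
            start = None
        prev = a
    if start is not None:                   # segment running off the end of the curve
        segments.append((start, prev))
    return segments
```

Each returned interval is one cluster $r_i=[r_{ia}, r_{ib}]$ for the FinEMD computation of Step 4.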
Step 4: drawing on the Earth Mover's Distance (EMD) algorithm, compute FinEMD, the minimum distance the finger contours obtained in Step 3 must be moved during matching against a standard gesture template. The concrete operations for computing the FinEMD distance are as follows:

As shown in Figs. 5 and 6, let $R=\{(r_1,w_{r_1}),\dots,(r_{\bar m},w_{r_{\bar m}})\}$ be the hand signature of the first hand with $\bar m$ clusters, where $r_i$ denotes the $i$-th finger of the whole hand and $w_{r_i}$ the weight of that cluster; let $T=\{(t_1,w_{t_1}),\dots,(t_{\bar n},w_{t_{\bar n}})\}$ be the hand signature of the second hand with $\bar n$ clusters, where $t_j$ denotes its $j$-th finger and $w_{t_j}$ the weight of that cluster.

Unroll the time-series curve of a hand signature: each finger corresponds to one curve segment, so each cluster of the hand signature is regarded as a finger segment of the time-series curve. Each cluster $r_i$ is defined as the angular interval between the left and right endpoints of the $i$-th finger segment on the time-series curve, $r_i=[r_{ia},r_{ib}]$ with $0\le r_{ia}<r_{ib}\le 1$, and the weight $w_{r_i}$ of the cluster is the normalized area of the finger segment.

$D=[d_{ij}]$ is the ground-distance matrix of signatures $R$ and $T$, where the ground distance $d_{ij}$ from cluster $r_i$ to $t_j$ is defined as the minimum sliding distance for the interval $[r_{ia},r_{ib}]$ to completely cover the interval $[t_{ja},t_{jb}]$, namely

$$d_{ij}=\min\{\,|m| : r_{ia}+m\le t_{ja},\; r_{ib}+m\ge t_{jb}\,\}.$$

For two signatures $R$ and $T$, the FinEMD distance is defined as the minimum work needed to move the fingers onto the standard gesture template plus the empty amount left after template matching (in general, some finger regions remain unmatched after the input finger contours are matched against the standard gesture template), i.e.:

$$\mathrm{FinEMD}(R,T)=\beta E_{move}+(1-\beta)E_{empty}=\beta\,\frac{\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} d_{ij}f_{ij}}{\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}}+(1-\beta)\left|\sum_{i=1}^{\bar m} w_{r_i}-\sum_{j=1}^{\bar n} w_{t_j}\right|$$

where $\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}$ is the normalization factor and $f_{ij}$ is the flow from cluster $r_i$ to cluster $t_j$; together the $f_{ij}$ constitute the flow matrix $F$. The parameter $\beta$ balances $E_{move}$ against $E_{empty}$; $E_{empty}$ and $d_{ij}$ are constants for a given pair of signatures. To compute FinEMD, the value of $Fin$ must be found, where $Fin$ is the flow minimizing the work of moving the finger contours to be recognized onto the standard gesture template:

$$Fin=\arg\min_{F}\,\mathrm{WORK}(R,T,F)=\arg\min_{F}\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} d_{ij}f_{ij}$$

subject to

$$f_{ij}\ge 0,\ 1\le i\le\bar m,\ 1\le j\le\bar n;\qquad \sum_{j=1}^{\bar n} f_{ij}\le w_{r_i},\ 1\le i\le\bar m;\qquad \sum_{i=1}^{\bar m} f_{ij}\le w_{t_j},\ 1\le j\le\bar n;\qquad \sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}=\min\!\Big(\sum_{i=1}^{\bar m} w_{r_i},\ \sum_{j=1}^{\bar n} w_{t_j}\Big).$$

As in the EMD, the flow matrix $F$ is defined so as to find the minimum work needed to match the finger-contour template. The first constraint restricts the direction of movement of the finger contours, from the mounds (the finger contours to be recognized) to the holes (the standard gesture template), and the last constraint limits the maximum work expended when matching the finger contours.
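Under the definitions above, FinEMD can be sketched by solving the flow constraints as a linear program. This is a sketch, not the patent's implementation: it uses `scipy.optimize.linprog` (the standard way to solve the EMD transportation problem), and the fallback in `ground_distance` when the input interval is too short to fully cover the template interval is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

def ground_distance(r, t):
    """d_ij: minimum sliding distance for interval r=[ra, rb] to fully
    cover t=[ta, tb]. Sliding r by m requires ra+m <= ta and rb+m >= tb,
    i.e. m in [tb-rb, ta-ra]; the answer is the smallest |m| in that
    interval. If r is shorter than t, full coverage is impossible; the
    closest-alignment fallback used here is an assumption of this sketch.
    """
    lo, hi = t[1] - r[1], t[0] - r[0]
    if lo > hi:                      # r shorter than t: cannot fully cover
        lo, hi = hi, lo
    if lo <= 0.0 <= hi:
        return 0.0
    return min(abs(lo), abs(hi))

def fin_emd(R, T, beta=0.5):
    """FinEMD(R,T) = beta*E_move + (1-beta)*E_empty for two hand
    signatures, each given as a list of ([a, b], weight) pairs, one per
    finger segment, as defined in Step 4.
    """
    wr = np.array([w for _, w in R])
    wt = np.array([w for _, w in T])
    m, n = len(R), len(T)
    d = np.array([[ground_distance(ri, tj) for tj, _ in T] for ri, _ in R])
    # Row sums <= w_ri and column sums <= w_tj (inequality constraints).
    A_ub = np.zeros((m + n, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_ub[m + j, j::n] = 1.0
    b_ub = np.concatenate([wr, wt])
    # Total flow must equal the smaller of the two total weights.
    res = linprog(d.ravel(), A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, m * n)),
                  b_eq=[min(wr.sum(), wt.sum())], method="highs")
    f = res.x
    e_move = float(d.ravel() @ f) / float(f.sum())
    e_empty = abs(wr.sum() - wt.sum())
    return beta * e_move + (1 - beta) * e_empty
```

With a handful of finger clusters per hand, the linear program is tiny, which is consistent with the fast matching reported in the evaluation.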
Step 5: template matching.
Template matching is used for the recognition of the gesture: the hand to be recognized is assigned to the class $c$ whose template has the minimum dissimilarity distance:

$$C=\arg\min_{c}\,\mathrm{FinEMD}(H,T_c)$$

where $H$ is the input hand and $T_c$ is the template of class $c$; $\mathrm{FinEMD}(H,T_c)$ denotes the work required to move the fingers to be recognized onto the matching fingers of the gesture template, computed as described in Step 4.
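The matching rule $C=\arg\min_c \mathrm{FinEMD}(H,T_c)$ amounts to a minimum over the template classes. A minimal sketch follows; the distance function is passed in as a parameter so that any FinEMD implementation can be plugged in, and the names are illustrative.

```python
def recognize(hand, templates, dist):
    """Template matching (Step 5): C = arg min_c dist(H, T_c).

    hand: the input hand signature H; templates: dict mapping class
    label c -> template signature T_c; dist: the FinEMD distance of
    Step 4, passed in as a function so this sketch stays self-contained.
    Returns the best-matching class label.
    """
    return min(templates, key=lambda c: dist(hand, templates[c]))
```

For example, with templates for the digit gestures 2 and 3, the input hand is assigned to whichever digit's template requires the least finger movement plus empty amount.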
Step 6: evaluate the joined-finger gesture recognition method of the present invention through experiments with various gestures. The specific operations are as follows:
(1) acquire multiple different user gestures with the Kinect and identify the gesture digits by template matching; in particular, the recognition results for the user's two-finger gesture (representing the digit 2) and three-finger gesture (representing the digit 3) are shown in Figs. 8 and 9, with the time-series curve and the digit represented by the gesture displayed in a dialog box in real time;
(2) evaluate the gesture recognition method that combines threshold segmentation with the FinEMD algorithm, using $h_f=1.6$ and $\beta=0.5$. The recognition results of the algorithm, presented as a confusion matrix, reach an accuracy of up to 93.2%; moreover, with the threshold-segmentation component, the average processing speed of gesture recognition is as fast as 0.0750 s.
Following the above steps, the number of correct recognitions and the recognition rate of this method are as shown in Table 1. The experimental results show that the proposed joined-finger gesture recognition method based on finger-contour matching not only excludes the influence of complex environments and illumination but also guarantees the accuracy and speed of joined-finger gesture recognition.
The technical means disclosed in the scheme of the present invention are not limited to those disclosed in the above embodiments, and also include technical schemes composed of any combination of the above technical features. It should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the invention, and these improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (6)

1. A joined-finger gesture recognition method based on depth image and finger-contour matching, characterized in that it comprises the following steps:
Step 1: acquiring a depth image containing the gesture and removing the noise in the image;
Step 2: segmenting the gesture contour from the depth image and representing the gesture contour with a time-series curve;
Step 3: segmenting the finger contours from the gesture contour and its time-series curve using a threshold decomposition algorithm;
Step 4: drawing on the Earth Mover's Distance (EMD) algorithm, computing FinEMD, the minimum distance the acquired finger contours must be moved during matching against a standard gesture template;
Step 5: comparing the computed FinEMD distances against the existing standard templates to obtain the matching result and complete gesture recognition.
2. The joined-finger gesture recognition method based on depth image and finger-contour matching according to claim 1, characterized in that, in Step 1, an enhanced smoothing filter is used to remove the noise in the depth image.
3. The joined-finger gesture recognition method based on depth image and finger-contour matching according to claim 1, characterized in that Step 2 includes: locating the hand using the hand-tracking function in the Kinect for Windows SDK; setting a threshold interval on the depth image to obtain a rough hand region; locating a black marker worn at the wrist using RANSAC so as to segment the hand region from the remaining background more accurately; and representing the hand contour with a time-series curve that records the relative distance between each contour vertex of the hand and a central point.
4. The joined-finger gesture recognition method based on depth image and finger-contour matching according to claim 1, characterized in that, in Step 3, finger contours are detected with a threshold decomposition algorithm whose threshold is set between 1.5 and 1.7.
5. The joined-finger gesture recognition method based on depth image and finger-contour matching according to claim 1, characterized in that, in Step 4, FinEMD is computed as follows:

Let $R=\{(r_1,w_{r_1}),\dots,(r_{\bar m},w_{r_{\bar m}})\}$ be the hand signature of the first hand with $\bar m$ clusters, where $r_i$ denotes the $i$-th finger of the whole hand and $w_{r_i}$ the weight of that cluster; let $T=\{(t_1,w_{t_1}),\dots,(t_{\bar n},w_{t_{\bar n}})\}$ be the hand signature of the second hand with $\bar n$ clusters, where $t_j$ denotes its $j$-th finger and $w_{t_j}$ the weight of that cluster.

Unroll the time-series curve of a hand signature: each finger corresponds to one curve segment, so each cluster of the hand signature is regarded as a finger segment of the time-series curve. Each cluster $r_i$ is defined as the angular interval between the left and right endpoints of the $i$-th finger segment on the time-series curve, $r_i=[r_{ia},r_{ib}]$ with $0\le r_{ia}<r_{ib}\le 1$, and the weight $w_{r_i}$ of the cluster is the normalized area of the finger segment.

$D=[d_{ij}]$ is the ground-distance matrix of signatures $R$ and $T$, where the ground distance $d_{ij}$ from cluster $r_i$ to $t_j$ is defined as the minimum sliding distance for the interval $[r_{ia},r_{ib}]$ to completely cover the interval $[t_{ja},t_{jb}]$, namely

$$d_{ij}=\min\{\,|m| : r_{ia}+m\le t_{ja},\; r_{ib}+m\ge t_{jb}\,\}.$$

For two signatures $R$ and $T$, the FinEMD distance is defined as the minimum work needed to move the fingers onto the standard gesture template plus the empty amount left after template matching (in general, some finger regions remain unmatched after the input finger contours are matched against the standard gesture template), i.e.:

$$\mathrm{FinEMD}(R,T)=\beta E_{move}+(1-\beta)E_{empty}=\beta\,\frac{\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} d_{ij}f_{ij}}{\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}}+(1-\beta)\left|\sum_{i=1}^{\bar m} w_{r_i}-\sum_{j=1}^{\bar n} w_{t_j}\right|$$

where $\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}$ is the normalization factor and $f_{ij}$ is the flow from cluster $r_i$ to cluster $t_j$; together the $f_{ij}$ constitute the flow matrix $F$. The parameter $\beta$ balances $E_{move}$ against $E_{empty}$; $E_{empty}$ and $d_{ij}$ are constants for a given pair of signatures. To compute FinEMD, the value of $Fin$ must be found, where $Fin$ is the flow minimizing the work of moving all the fingers:

$$Fin=\arg\min_{F}\,\mathrm{WORK}(R,T,F)=\arg\min_{F}\sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} d_{ij}f_{ij}$$

subject to

$$f_{ij}\ge 0,\ 1\le i\le\bar m,\ 1\le j\le\bar n;\qquad \sum_{j=1}^{\bar n} f_{ij}\le w_{r_i},\ 1\le i\le\bar m;\qquad \sum_{i=1}^{\bar m} f_{ij}\le w_{t_j},\ 1\le j\le\bar n;\qquad \sum_{i=1}^{\bar m}\sum_{j=1}^{\bar n} f_{ij}=\min\!\Big(\sum_{i=1}^{\bar m} w_{r_i},\ \sum_{j=1}^{\bar n} w_{t_j}\Big).$$

As in the EMD, the flow matrix $F$ is defined so as to find the minimum work needed to match the finger-contour template. The first constraint restricts the direction of movement of the finger contours, from the finger contours to be recognized to the standard gesture template, and the last constraint limits the maximum work expended when matching the finger contours.
6. The joined-finger gesture recognition method based on depth image and finger-contour matching according to claim 1, characterized in that, in Step 5, the gesture matching formula is:

$$C=\arg\min_{c}\,\mathrm{FinEMD}(H,T_c)$$

where $H$ is the input hand and $T_c$ is the template of class $c$; $\mathrm{FinEMD}(H,T_c)$ denotes the work required to move the fingers to be recognized onto the matching fingers of the gesture template, computed by the FinEMD method of Step 4, i.e., the minimum work of moving the fingers onto the standard gesture template plus the empty amount remaining after template matching.
CN201610865679.XA 2016-09-29 2016-09-29 Joined-finger gesture recognition method based on depth image and finger-contour matching Pending CN106503626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610865679.XA CN106503626A (en) 2016-09-29 2016-09-29 Joined-finger gesture recognition method based on depth image and finger-contour matching


Publications (1)

Publication Number Publication Date
CN106503626A true CN106503626A (en) 2017-03-15

Family

ID=58291244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610865679.XA Pending CN106503626A (en) Joined-finger gesture recognition method based on depth image and finger-contour matching

Country Status (1)

Country Link
CN (1) CN106503626A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220634A (en) * 2017-06-20 2017-09-29 西安科技大学 Based on the gesture identification method for improving D P algorithms and multi-template matching
CN107885327A (en) * 2017-10-27 2018-04-06 长春理工大学 A kind of Fingertip Detection based on Kinect depth information
CN108806375A (en) * 2018-05-09 2018-11-13 河南工学院 A kind of interactive teaching method and system based on image recognition
CN109710066A (en) * 2018-12-19 2019-05-03 平安普惠企业管理有限公司 Exchange method, device, storage medium and electronic equipment based on gesture identification
WO2019091491A1 (en) * 2017-11-13 2019-05-16 Zyetric Gaming Limited Gesture recognition based on depth information and computer vision
CN111626168A (en) * 2020-05-20 2020-09-04 中移雄安信息通信科技有限公司 Gesture recognition method, device, equipment and medium
CN112613384A (en) * 2020-12-18 2021-04-06 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition device and control method of interactive display equipment
CN113158912A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Gesture recognition method and device, storage medium and electronic equipment
CN113303768A (en) * 2021-06-09 2021-08-27 哈雷医用(广州)智能技术有限公司 Method and device for diagnosing hand disorders
CN114097008A (en) * 2019-11-14 2022-02-25 腾讯美国有限责任公司 System and method for automatic identification of hand activity defined in the Unified Parkinson's Disease Rating Scale

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1838109A (en) * 2006-04-10 2006-09-27 西安交通大学 Modal parameter identification method based on empirical mode decomposition and Laplace wavelet
CN102142084A (en) * 2011-05-06 2011-08-03 北京网尚数字电影院线有限公司 Method for gesture recognition
CN103927555A (en) * 2014-05-07 2014-07-16 重庆邮电大学 Static sign language letter recognition system and method based on Kinect sensor
CN105868715A (en) * 2016-03-29 2016-08-17 苏州科达科技股份有限公司 Hand gesture recognition method, apparatus, and gesture learning system
CN105930784A (en) * 2016-04-15 2016-09-07 济南大学 Gesture recognition method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1838109A (en) * 2006-04-10 2006-09-27 西安交通大学 Modal parameter identification method based on empirical mode decomposition and Laplace wavelet
CN102142084A (en) * 2011-05-06 2011-08-03 北京网尚数字电影院线有限公司 Method for gesture recognition
CN103927555A (en) * 2014-05-07 2014-07-16 重庆邮电大学 Static sign language letter recognition system and method based on Kinect sensor
CN105868715A (en) * 2016-03-29 2016-08-17 苏州科达科技股份有限公司 Hand gesture recognition method, apparatus, and gesture learning system
CN105930784A (en) * 2016-04-15 2016-09-07 济南大学 Gesture recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU REN et al.: "Robust Part-Based Hand Gesture Recognition Using Kinect Sensor", IEEE Transactions on Multimedia *
XU Pengfei et al.: "Gesture segmentation and fingertip detection algorithm based on Kinect depth image information", Journal of Southwest University of Science and Technology *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220634B (en) * 2017-06-20 2019-02-15 西安科技大学 Gesture recognition method based on improved D-P algorithm and multi-template matching
CN107220634A (en) * 2017-06-20 2017-09-29 西安科技大学 Gesture recognition method based on improved D-P algorithm and multi-template matching
CN107885327B (en) * 2017-10-27 2020-11-13 长春理工大学 Fingertip detection method based on Kinect depth information
CN107885327A (en) * 2017-10-27 2018-04-06 长春理工大学 Fingertip detection method based on Kinect depth information
US11340706B2 (en) 2017-11-13 2022-05-24 Zyetric Gaming Limited Gesture recognition based on depth information and computer vision
WO2019091491A1 (en) * 2017-11-13 2019-05-16 Zyetric Gaming Limited Gesture recognition based on depth information and computer vision
CN108806375A (en) * 2018-05-09 2018-11-13 河南工学院 Interactive teaching method and system based on image recognition
CN109710066B (en) * 2018-12-19 2022-03-25 平安普惠企业管理有限公司 Interaction method and device based on gesture recognition, storage medium and electronic equipment
CN109710066A (en) * 2018-12-19 2019-05-03 平安普惠企业管理有限公司 Gesture-recognition-based interaction method, apparatus, storage medium, and electronic device
CN114097008A (en) * 2019-11-14 2022-02-25 腾讯美国有限责任公司 System and method for automatic identification of hand activity defined in the Unified Parkinson's Disease Rating Scale
CN114097008B (en) * 2019-11-14 2024-05-07 腾讯美国有限责任公司 Method, apparatus and readable medium for identifying dyskinesia
CN111626168A (en) * 2020-05-20 2020-09-04 中移雄安信息通信科技有限公司 Gesture recognition method, device, equipment and medium
CN111626168B (en) * 2020-05-20 2022-12-02 中移雄安信息通信科技有限公司 Gesture recognition method, apparatus, device, and medium
CN112613384A (en) * 2020-12-18 2021-04-06 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition device and control method of interactive display equipment
CN112613384B (en) * 2020-12-18 2023-09-19 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition device and control method of interactive display equipment
CN113158912A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Gesture recognition method and device, storage medium and electronic equipment
CN113158912B (en) * 2021-04-25 2023-12-26 北京华捷艾米科技有限公司 Gesture recognition method and device, storage medium and electronic equipment
CN113303768A (en) * 2021-06-09 2021-08-27 哈雷医用(广州)智能技术有限公司 Method and device for diagnosing hand disorders

Similar Documents

Publication Publication Date Title
CN106503626A (en) Gesture recognition method based on depth-image and finger-contour matching
CN104899600B (en) Hand feature point detection method based on depth map
CN101763500B (en) Method for palm-shape extraction and feature positioning in high-degree-of-freedom palm images
CN104063059B (en) Real-time gesture recognition method based on finger segmentation
CN103927532B (en) Handwriting registration method based on stroke features
CN109800648A (en) Face detection and recognition method and device based on face key-point correction
CN102368290B (en) Hand gesture recognition method based on high-level finger features
CN106022228B (en) Three-dimensional face recognition method based on horizontal and vertical mesh local binary patterns
CN103455794B (en) Dynamic gesture recognition method based on frame integration
CN105787442B (en) Wearable assistive system based on visual interaction for impaired people, and its use method
CN103218605B (en) Fast human-eye localization method based on integral projection and edge detection
CN101464948A (en) Object recognition method based on affine invariant moments of key points
CN103984922B (en) Face recognition method based on sparse representation and shape constraints
CN103971102A (en) Static gesture recognition method based on finger contours and decision trees
CN103093196A (en) Gesture-based interactive character input and recognition method
CN111507206B (en) Finger vein recognition method based on multi-scale local feature fusion
CN105005769A (en) Sign language recognition method based on depth information
CN106971130A (en) Gesture recognition method using the face as a reference
CN103336835B (en) Image retrieval method based on a weighted color-SIFT feature dictionary
CN104156690B (en) Gesture recognition method based on image spatial pyramid bag-of-features
CN104866824A (en) Manual alphabet recognition method based on Leap Motion
CN106446911A (en) Hand recognition method based on image edge-line curvature and distance features
CN109558855A (en) Spatial gesture recognition method combining palm contour features with template matching
CN112749646A (en) Interactive point-reading system based on gesture recognition
CN107909003B (en) Gesture recognition method for large vocabularies

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170315