CN103984928B - Finger gesture recognition methods based on depth image - Google Patents
Finger gesture recognition method based on depth image
- Publication number: CN103984928B
- Application number: CN201410213052.7A
- Authority: CN (China)
- Prior art keywords: hand, point, finger, contour, linked list
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Analysis (AREA)
Abstract
The present invention discloses a finger gesture recognition method based on depth images. It comprises: opening a depth camera and acquiring depth video data; inferring the operator's hand length and determining the palm-centre position and hand-length data; segmenting, cropping, and pre-processing the spherical hand-region image; and recognizing fingertips and performing gesture recognition from their geometric relationships. Exploiting the characteristics of depth images, the method quickly trims out the hand region and analyses only that target area, which reduces computational complexity and adapts well to dynamic scene changes. Fingertip recognition uses a maximum-concave-point contour scanning algorithm, improving its robustness; once the fingertips are accurately recognized, each finger is identified from its direction vector and the fingers' geometric relationships, enabling recognition of a variety of gestures. The method of the invention is simple, flexible, and easy to implement.
Description
Technical field
The invention belongs to the field of human-computer interaction, and in particular relates to a finger gesture recognition method based on depth images.
Background art
Current gesture recognition methods, both domestic and international, fall roughly into two classes: those based on wearable devices and those based on conventional vision. Wearable-device approaches capture finger motion characteristics from sensors such as data gloves and position trackers, feed them into a computer for joint-data analysis, and use neural networks to derive the gesture for human-computer interaction. Their main advantage is that finger posture and gesture can be determined precisely, but they are comparatively expensive, which hinders large-scale adoption. Conventional-vision approaches capture gesture video or images with an ordinary camera and then identify the gesture by image processing. Although such methods give the user a natural interaction experience, to make the system robust and to extract features such as hand position, hand shape, and finger direction reliably, the user must wear coloured gloves or clothing meeting particular requirements, and the background must be of a uniform colour. Conventional-vision methods are therefore easily affected by environmental factors such as background, lighting, and camera position.
The Chinese invention patent application No. CN201210331153 discloses "a human-computer interaction method based on gesture recognition". The method processes a gesture image/video stream, performs gesture segmentation, and builds gesture templates; segmentation and modelling rely on HSV skin-colour detection, after which the gesture is recognized and used to simulate mouse control for interaction. Although this method can recognize the two simple gestures of an open palm and a clenched fist, gesture segmentation under complex scenes is time-consuming, so its real-time performance for human-computer interaction is inadequate.
The Chinese invention patent No. CN200910198196 discloses "a fingertip positioning method for pointing gestures". The method automatically locates the fingertip of a pointing gesture, using background subtraction together with skin-colour features to extract the hand region. Its advantages are fast, accurate detection and easy implementation; its shortcoming is a high demand on scene complexity, freedom from various environmental interference, and noise.
The Chinese invention patent application No. CN201010279511 discloses "a hand and pointer positioning method and gesture determination method in a human-computer interaction system". The method first acquires an image/video stream, segments the foreground by image differencing, binarizes the image, and obtains the vertex set of the minimum convex hull; the region constructed around each convex-hull vertex pixel necessarily contains the potential hand region, and pattern recognition then determines the target among the candidate regions. At present it likewise recognizes only two gestures, closed and open. Its shortcoming is that detecting the minimum-convex-hull vertex set over the whole image is computationally heavy, hand localization cannot be settled in one pass and is error-prone, and it also places certain demands on camera resolution.
Summary of the invention
The technical problem to be solved by the invention is that existing gesture and finger recognition methods for human-computer interaction become unreliable when poor external lighting or overly complex scenes cause large changes in skin-colour appearance, place high demands on background simplicity, and suffer from heavy computation and thus poor performance. The invention provides a finger gesture recognition method based on depth images which, exploiting the characteristics of depth images, quickly localizes the hand in the image/video stream with the corresponding algorithms, recognizes the fingertips, and finally recognizes the gesture, thereby improving the flexibility and simplicity of human-computer interaction.
The idea of the invention is as follows: a depth camera captures a depth image, and a circular region with the hand skeleton point as centre and a particular value as radius is segmented out; this part is the target area to be analysed. The target region is then clustered to obtain a hand mask, the hand contour is obtained with a contour-following algorithm and its convex hull detected, an improved concave-convex angle algorithm identifies the fingertips, and the individual fingers are identified from their geometric positions. Finally the gesture is accurately identified from the finger count, the finger direction vectors, the geometric positions, and a number of preset threshold conditions.
To solve the above problems, the present invention is achieved by the following technical solutions:
A finger gesture recognition method based on depth image, comprising the following steps:
(1) Opening the depth camera and acquiring depth video data, i.e.
(1.1) the depth camera directly captures a video stream of depth images of the background and the operator's whole body;
(1.2) the voxel information of every depth-image frame acquired from the video stream is spatially transformed into point-cloud information in real space, thereby obtaining the distance of each pixel from the depth camera as well as the operator's skeleton-point information.
(2) Inferring the operator's hand length and determining the palm-centre position and hand-length data, i.e.
(2.1) the acquired point-cloud information is processed, and the operator's height is calculated according to formula ①, where
Hr = Hp · d · tan θ / HB  ①
In the formula, Hr is the height of the operator to be measured, HB is the background pixel height, Hp is the pixel height of the operator in the captured image, d is the distance of the operator from the depth camera, and θ is the vertical angle of the depth camera's field of view;
(2.2) according to pre-compiled statistics relating human height to hand length, the operator's hand length HL is obtained.
(3) Segmenting, cropping, and pre-processing the spherical hand-region image, i.e.
(3.1) through depth-distance filtering, all point data farther from the palm centre than half the hand length are removed, quickly yielding the hand data, where the hand data are contained in the spherical field centred on the palm with half the hand length as radius,
HandPoint = { p(x, y, z) : (x − x0)² + (y − y0)² + (z − z0)² ≤ (HL/2)² }  ②
In the formula, p(x0, y0, z0) is the palm-centre point, whose position coordinates are found from the hand skeleton point; HL is the hand length; and HandPoint is the resulting hand point set of the target area;
(3.2) the cropped hand data are clustered with the K-means clustering algorithm;
(3.3) a minimum cluster size is set to filter out non-hand pixel clusters, i.e. noise interference, yielding the hand mask.
(4) Recognizing fingertips and performing gesture recognition from their geometric relationships, i.e.
(4.1) the Moore-neighbourhood contour-following algorithm is applied to the obtained hand mask to detect its outer contour, giving the linked list of hand contour points;
(4.2) the Graham scanning algorithm is applied to the obtained outer contour of the hand mask to detect the convex hull of the hand contour, giving the linked list of convex points on the hand contour;
(4.3) the maximum-concave-point contour scanning algorithm is applied to the outer contour of the hand mask and the convex hull of the hand contour to detect the maximum concave point between every pair of convex points, giving the linked list of concave and convex points on the hand contour;
(4.4) the concave-convex angle recognition algorithm is applied to the linked list of concave and convex points on the hand contour to obtain the linked list of fingertip points;
(4.5) once each finger is identified, gesture recognition begins: for each gesture, the number of fingers is identified first, then the finger names and the direction vector of each finger together with the angles between them are obtained; these three conditions form a three-layer decision tree, thereby achieving gesture recognition.
In step (3.2), the value of K in the K-means clustering algorithm is 2.
In step (4.3), the detailed procedure of the maximum-concave-point contour scanning algorithm is as follows:
(4.3.1) the linked list of convex points on the hand contour is copied as the initial concave-convex linked list;
(4.3.2) between each pair of adjacent convex points in the convex-hull linked list, the point-to-line distance formula is applied to every concave contour point in the hand-contour linked list to find the concave point at maximum distance from the straight line joining the two convex points;
(4.3.3) that maximum-distance concave point is inserted into the concave-convex linked list between the two convex points;
(4.3.4) step (4.3.2) is repeated until the whole hand-contour linked list has been examined;
(4.3.5) the maximum-distance point obtained by the above iteration is the maximum concave point, and an ordered linked list of concave and convex points on the hand contour is generated.
In step (4.4), the detailed procedure of the concave-convex angle recognition algorithm is as follows:
(4.4.1) a convex point P0 is taken in order, top to bottom, from the concave-convex linked list on the hand contour, and the adjacent concave points P1 and P2 are chosen from the two directions before and after it;
(4.4.2) two vectors are formed, from concave point P1 to convex point P0 and from convex point P0 to concave point P2, and their angle at P0 is calculated; if the angle is below the preset threshold, P0 is identified as a fingertip and stored in the fingertip linked list;
(4.4.3) if the concave-convex linked list on the hand contour has not yet been exhausted, step (4.4.1) is repeated on the next candidate convex point; otherwise the procedure ends;
(4.4.4) the distances between every pair of adjacent and non-adjacent fingertip points are calculated in turn; the fingertip common to the adjacent pair at maximum distance and the non-adjacent pair at maximum distance is determined to be the thumb; the fingertip adjacent to the thumb at maximum distance is the index finger; the fingertip non-adjacent to the thumb at maximum distance is the little finger; the fingertip nearest the index finger is the middle finger; and the remaining fingertip is the ring finger.
Compared with the prior art, the invention, exploiting the characteristics of depth images, quickly trims out the hand region and analyses only the target area; it is simple to operate, flexible, and easy to implement, avoids segmenting the target with traditional background subtraction, greatly reduces computational complexity, and adapts well to dynamic scene changes. Fingertip recognition uses the maximum-concave-point contour scanning algorithm, which improves on the traditional three-point alignment algorithm for fingertip detection and overcomes, for a given camera resolution, the misidentification of true fingertips when the hand is too near or too far, thereby improving the robustness of fingertip recognition. Once the fingertips are accurately recognized, each finger is identified from its direction vector and the fingers' geometric relationships, enabling recognition of a variety of gestures. The method of the invention is simple, flexible, and easy to implement.
Brief description of the drawings
Fig. 1 is the overall implementation framework of the invention.
Fig. 2 is the height-based hand-image cutting system framework.
Fig. 3 is a gesture cutting schematic of one embodiment of the invention.
Fig. 4 is a hand-mask schematic of one embodiment of the invention.
Fig. 5 is an angle-recognition-algorithm schematic of one embodiment of the invention.
Fig. 6 is a fingertip-detection schematic of one embodiment of the invention.
Fig. 7 is a finger-identification-process schematic of one embodiment of the invention.
Embodiment
A finger gesture recognition method based on depth images; the overall implementation block diagram of the program is shown in Fig. 1. Fig. 2 is the height-based hand-image cutting block diagram, and the hand cutting effect is shown in Fig. 3. Obtaining the hand cutting effect of Fig. 3 comprises the following steps:
(1) Opening the depth camera and acquiring depth video data. I.e.
(1.1) the depth camera directly captures a video stream of depth images of the background and the operator's whole body.
(1.2) the voxel information of every depth-image frame acquired from the video stream is spatially transformed into point-cloud information in real space, thereby obtaining the distance of each pixel from the depth camera as well as the operator's skeleton-point information. The skeleton points are provided by the Microsoft Kinect somatosensory device, which defines 20 joint points to represent a skeleton (in the standing state); at programming time, the position coordinates and other information of the skeleton points can be used simply by calling the API functions that Kinect provides.
The spatial transformation of the per-frame voxel information into point-cloud information in real space is performed automatically by the Microsoft Kinect somatosensory device; calling the API functions yields the coordinate information of the points when needed.
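As a rough sketch of the voxel-to-point-cloud transformation of step (1.2): the Kinect API performs this conversion internally, so the pinhole back-projection and the field-of-view figures below are illustrative assumptions, not the device's actual calibration.

```python
import math

def depth_to_points(depth, fov_h_deg=57.0, fov_v_deg=43.0):
    """Back-project a depth map (metres) to 3-D points under a pinhole
    model. The FOV values are the commonly quoted Kinect v1 figures,
    used here purely for illustration."""
    rows, cols = len(depth), len(depth[0])
    fx = (cols / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)
    fy = (rows / 2.0) / math.tan(math.radians(fov_v_deg) / 2.0)
    cx, cy = cols / 2.0, rows / 2.0
    points = []
    for v in range(rows):
        for u in range(cols):
            z = depth[v][u]
            if z <= 0:          # 0 marks "no depth reading"
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A 2x2 toy depth map with one valid reading; a pixel in the centre
# column back-projects with x = 0.
pts = depth_to_points([[0.0, 2.0], [0.0, 0.0]])
```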
(2) Inferring the operator's hand length and determining the palm-centre position and hand-length data. This comprises two parts: measuring the operator's height, and acquiring the hand data, where the hand data include the palm-centre position, the hand-length value, etc. I.e.
(2.1) the acquired point-cloud information is processed, and the operator's height is calculated according to formula ①, where
Hr = Hp · d · tan θ / HB  ①
In the formula, Hr is the height of the operator to be measured, HB is the background pixel height, Hp is the pixel height of the operator in the captured image, and d is the distance of the operator from the depth camera, obtained from the acquired video-stream image by directly calling the Kinect API functions; θ is the vertical angle of the depth camera's field of view. In the present embodiment, the value of θ is 21.5° and the value of HB is 240.
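A sketch of the height computation of formula ①, under the assumption (not confirmed by the text, whose formula is an image) that Hr = Hp · d · tan θ / HB, a form consistent with the embodiment values θ = 21.5° and HB = 240:

```python
import math

def operator_height(hp_pixels, d_metres, theta_deg=21.5, hb_pixels=240):
    # Assumed form of formula (1): one pixel subtends d*tan(theta)/HB
    # metres of height at distance d, so Hr = Hp * d * tan(theta) / HB.
    return hp_pixels * d_metres * math.tan(math.radians(theta_deg)) / hb_pixels

# e.g. a person 345 pixels tall standing 3 m from the camera
h = operator_height(hp_pixels=345, d_metres=3.0)
```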
(2.2) according to pre-compiled statistics relating human height to hand length, the operator's hand length HL is obtained. The height-to-hand-length correspondence is derived from a large body of data by multiple linear regression analysis using statistical calculation software.
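The height-to-hand-length regression of step (2.2) might be sketched as a simple least-squares fit; the sample values below are fabricated for illustration (hand length taken as roughly 11% of height) and are not the patent's statistics.

```python
# Hypothetical height/hand-length samples; the patent's real regression
# data are not given, so these numbers are invented for the sketch.
heights = [1.50, 1.60, 1.70, 1.80, 1.90]
hand_lengths = [0.165, 0.176, 0.187, 0.198, 0.209]

n = len(heights)
mean_h = sum(heights) / n
mean_l = sum(hand_lengths) / n
slope = (sum((h - mean_h) * (l - mean_l) for h, l in zip(heights, hand_lengths))
         / sum((h - mean_h) ** 2 for h in heights))
intercept = mean_l - slope * mean_h

def hand_length(height):
    """Predicted hand length HL for a given operator height."""
    return slope * height + intercept

HL = hand_length(1.75)
```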
(3) Segmenting, cropping, and pre-processing the spherical hand-region image. This also comprises two parts: segmenting the spherical hand region from the image with the corresponding algorithm, and processing the segmented image with the related algorithms. I.e.
The spherical hand region is segmented with the palm-centre coordinate as origin and the hand length as diameter, which amounts to applying a depth-distance filter to all points, removing all point-cloud data farther from the palm centre than half the hand length. The practical effect is to segment the hand out of a noisy background, quickly and effectively improving the accuracy of target extraction in a complex environment. The image is then processed by clustering and related algorithms, and palm-contour detection is carried out.
(3.1) through depth-distance filtering, all point data farther from the palm centre than half the hand length are removed, quickly yielding the hand data, where the hand data are contained in the spherical field centred on the palm with half the hand length as radius,
HandPoint = { p(x, y, z) : (x − x0)² + (y − y0)² + (z − z0)² ≤ (HL/2)² }  ②
In the formula, p(x0, y0, z0) is the palm-centre point, whose position coordinates are found from the hand skeleton point; HL is the hand length; and HandPoint is the resulting hand point set of the target area.
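The spherical filter of step (3.1) follows directly from the description: keep only the points within HL/2 of the palm-centre skeleton point.

```python
def crop_hand(points, palm, hand_length):
    """Formula (2) as described: keep points whose Euclidean distance
    from the palm centre is at most half the hand length, i.e. points
    inside a sphere of radius HL/2 centred on the palm."""
    r2 = (hand_length / 2.0) ** 2
    return [p for p in points
            if sum((a - b) ** 2 for a, b in zip(p, palm)) <= r2]

# Two points near the palm and one far away (coordinates in metres):
pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.3, 0.0, 0.0)]
hand = crop_hand(pts, palm=(0.0, 0.0, 0.0), hand_length=0.19)
```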
(3.2) the cropped hand data are clustered with the K-means clustering algorithm. The value of K, the number of classes, is specified by the developer; in the present embodiment, K takes the fixed value 2.
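Step (3.2)'s clustering might look like the following plain K-means sketch, with K = 2 per the embodiment; the toy points are illustrative.

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Plain K-means on 3-D points; the embodiment fixes K = 2
    (hand cluster vs. residual noise)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        for c in range(k):
            if clusters[c]:
                centres[c] = tuple(sum(xs) / len(xs) for xs in zip(*clusters[c]))
    return clusters, centres

# Two well-separated blobs split cleanly into two clusters:
pts = [(0.0, 0, 0), (0.01, 0, 0), (1.0, 0, 0), (1.01, 0, 0)]
clusters, centres = kmeans(pts, k=2)
```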
(3.3) a minimum cluster size is set to filter out non-hand pixel clusters, i.e. noise interference, yielding the hand mask, a binary image composed of 0s and 1s, as shown in Fig. 4.
In the present embodiment, the minimum pixel-count threshold is set to 50 pixels.
(4) Recognizing fingertips and performing gesture recognition from their geometric relationships: fingertips are recognized with the related algorithms, and the gesture is recognized by judging geometric relationships such as finger count, direction, and angle. I.e.
Building on the fingertip-detection approach based on contour curvature and combining it with the characteristics of depth images, a concave-convex angle recognition algorithm is proposed. This algorithm overcomes the shortcomings of the conventional three-point alignment method for fingertip detection, such as its lack of relative consistency, its demands on the distance from the camera, and its increased computational load. The fingers are then recognized from their spatial position relationships. Finally a three-level classifier, i.e. a three-layer decision tree, analyses the gesture, thereby recognizing it.
(4.1) the Moore-neighbourhood contour-following algorithm is applied to the obtained hand mask to detect its outer contour, giving the linked list of hand contour points. The Moore-neighbourhood contour-following algorithm is a classical contour-detection algorithm; in the invention it is simply used to determine the hand contour.
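A minimal Moore-neighbourhood tracing sketch for step (4.1); the stopping rule used here (stop on the first return to the start pixel) is a simplification of the published variants.

```python
def moore_trace(mask):
    """Moore-neighbourhood contour tracing on a binary mask (list of
    rows, 1 = hand, 0 = background). Returns the ordered boundary
    pixels, i.e. the 'hand contour point linked list'."""
    rows, cols = len(mask), len(mask[0])
    # first foreground pixel, scanning top-to-bottom, left-to-right
    start = next((r, c) for r in range(rows) for c in range(cols) if mask[r][c])
    # clockwise Moore neighbourhood, starting from the pixel to the left
    nbrs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    contour = [start]
    prev = (start[0], start[1] - 1)      # backtrack: where we "came from"
    cur = start
    while True:
        k = nbrs.index((prev[0] - cur[0], prev[1] - cur[1]))
        for i in range(1, 9):            # walk the ring clockwise
            r = cur[0] + nbrs[(k + i) % 8][0]
            c = cur[1] + nbrs[(k + i) % 8][1]
            if 0 <= r < rows and 0 <= c < cols and mask[r][c]:
                prev = (cur[0] + nbrs[(k + i - 1) % 8][0],
                        cur[1] + nbrs[(k + i - 1) % 8][1])
                cur = (r, c)
                break
        else:
            break                        # isolated single pixel
        if cur == start:                 # simplified stopping criterion
            break
        contour.append(cur)
    return contour

# A 3x3 square of foreground in a 5x5 mask has an 8-pixel boundary:
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
contour = moore_trace(mask)
```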
(4.2) the Graham scanning algorithm is applied to the obtained outer contour of the hand mask to detect the convex hull of the hand contour, giving the linked list of convex points on the hand contour. The Graham scanning algorithm is likewise a classical algorithm; in the invention it is used to detect the convex hull of the hand.
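A convex-hull sketch for step (4.2); note this uses Andrew's monotone chain, a Graham-scan variant, rather than the classical angular-sort Graham scan named in the text.

```python
def convex_hull(points):
    """Monotone-chain convex hull of 2-D points; the hull vertices play
    the role of the 'convex point linked list' (fingertip candidates)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is excluded from the hull of a square:
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```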
(4.3) the maximum-concave-point contour scanning algorithm is applied to the outer contour of the hand mask and the convex hull of the hand contour to detect the maximum concave point between every pair of convex points, giving the linked list of concave and convex points on the hand contour.
The detailed procedure of the maximum-concave-point contour scanning algorithm is as follows:
(4.3.1) the linked list of convex points on the hand contour is copied as the initial concave-convex linked list.
(4.3.2) between each pair of adjacent convex points in the convex-hull linked list, the point-to-line distance formula is applied to every concave contour point in the hand-contour linked list to find the concave point at maximum distance from the straight line joining the two convex points.
(4.3.3) that maximum-distance concave point is inserted into the concave-convex linked list between the two convex points.
(4.3.4) step (4.3.2) is repeated until the whole hand-contour linked list has been examined.
(4.3.5) the maximum-distance point obtained by the above iteration is the maximum concave point, and an ordered linked list of concave and convex points on the hand contour is generated.
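The maximum-concave-point scan of step (4.3) might be sketched as follows, assuming the contour and hull are consistently ordered and every hull vertex lies on the contour.

```python
def max_defect_points(contour, hull):
    """For each pair of adjacent hull (convex) points, find the contour
    point lying between them that is farthest from the straight line
    joining the two hull points: the 'maximum concave point'."""
    def line_dist(p, a, b):
        # perpendicular distance from p to the line through a and b
        num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
        den = ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
        return num / den

    idx = {p: i for i, p in enumerate(contour)}
    concave = []
    for j in range(len(hull)):
        a, b = hull[j], hull[(j + 1) % len(hull)]
        i0, i1 = idx[a], idx[b]
        stop = i1 if i1 > i0 else i1 + len(contour)   # wrap around
        between = [contour[i % len(contour)] for i in range(i0 + 1, stop)]
        if between:
            concave.append(max(between, key=lambda p: line_dist(p, a, b)))
    return concave

# A notched shape: the valley (1, 1) between hull points (0, 2), (2, 2)
# is detected as the deepest concave point of that gap.
defects = max_defect_points([(0, 2), (1, 1), (2, 2), (2, 0), (0, 0)],
                            [(0, 2), (2, 2), (2, 0), (0, 0)])
```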
(4.4) the concave-convex angle recognition algorithm is applied to the linked list of concave and convex points on the hand contour to obtain the linked list of fingertip points.
The detailed procedure of the concave-convex angle recognition algorithm is as follows:
(4.4.1) a convex point P0 is taken in order, top to bottom, from the concave-convex linked list on the hand contour, and the adjacent concave points P1 and P2 are chosen from the two directions before and after it.
(4.4.2) two vectors are formed, from concave point P1 to convex point P0 and from convex point P0 to concave point P2, and their angle at P0 is calculated; if the angle is below the preset threshold, P0 is identified as a fingertip and stored in the fingertip linked list.
(4.4.3) if the concave-convex linked list on the hand contour has not yet been exhausted, step (4.4.1) is repeated on the next candidate convex point; otherwise the procedure ends.
(4.4.4) the distances between every pair of adjacent and non-adjacent fingertip points are calculated in turn; the fingertip common to the adjacent pair at maximum distance and the non-adjacent pair at maximum distance is determined to be the thumb; the fingertip adjacent to the thumb at maximum distance is the index finger; the fingertip non-adjacent to the thumb at maximum distance is the little finger; the fingertip nearest the index finger is the middle finger; and the remaining fingertip is the ring finger.
Fig. 5 and Fig. 6 show the running effect of the invention's contour-curvature-based fingertip detection with the concave-convex angle recognition algorithm proposed from the characteristics of depth images. Traditional fingertip detection almost always uses the three-point alignment method, whose major defect is that, because the threshold value is fixed, a hand too close to or too far from the camera causes misjudgement, and a three-point collinearity test is also needed, increasing the sequential computation. The concave-convex angle recognition algorithm proposed by the invention effectively solves this problem. In the present embodiment the concave-convex angle threshold is set to 40.
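The angle test of step (4.4.2) with the embodiment's 40° threshold might be sketched as below; the vectors are taken from the convex point toward its two neighbouring concave points (the opening angle at the candidate tip), which is how the loosely worded vector construction is interpreted here.

```python
import math

def is_fingertip(p1, p0, p2, thresh_deg=40.0):
    """Angle at convex point P0 between the directions toward its two
    neighbouring concave points P1 and P2; P0 counts as a fingertip
    when the angle falls below the threshold (40 deg in the embodiment)."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p0[0], p2[1] - p0[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))
    return ang < thresh_deg

# A long thin "finger": tip at (0, 10), valleys at (-1, 0) and (1, 0).
tip = is_fingertip((-1, 0), (0, 10), (1, 0))
# A shallow bump: the angle is far above 40 degrees.
bump = is_fingertip((-5, 0), (0, 1), (5, 0))
```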
(4.5) once each finger is identified, gesture recognition begins. For each gesture, the number of fingers is identified first, then the finger names and the direction vector of each finger together with the angles between them are obtained; these three conditions form a three-layer decision tree, thereby achieving gesture recognition. A decision tree is a mathematical method that performs inductive learning on samples to generate a corresponding tree of decision rules and then classifies new data according to those rules; among the various classification algorithms, the decision tree is the most intuitive. The three-layer decision tree simply uses the above three conditions as the decision nodes of one layer each, thereby achieving the classification.
Fig. 7 shows the four processes of finger identification: process 1 identifies the thumb and index finger from the largest distance between adjacent fingertip points; process 2 identifies the little finger as the fingertip farthest from the thumb; process 3 judges the fingertip nearest the index finger to be the middle finger; process 4 takes the remaining fingertip as the ring finger.
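The distance rules of step (4.4.4) might be sketched as follows for five fingertips in contour order; "adjacent" is read as consecutive in the list, and the coordinates in the example are invented.

```python
def name_fingers(tips):
    """Names five fingertip points using the patent's distance rules.
    `tips` are in contour order; adjacency means consecutive entries."""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    n = len(tips)
    adj_pairs = [(i, i + 1) for i in range(n - 1)]
    non_pairs = [(i, j) for i in range(n) for j in range(i + 2, n)]
    a = max(adj_pairs, key=lambda ij: d(tips[ij[0]], tips[ij[1]]))
    b = max(non_pairs, key=lambda ij: d(tips[ij[0]], tips[ij[1]]))
    thumb = (set(a) & set(b)).pop()       # common endpoint of both maxima
    index = max((i for i in (thumb - 1, thumb + 1) if 0 <= i < n),
                key=lambda i: d(tips[thumb], tips[i]))
    little = max((j for j in range(n) if abs(j - thumb) >= 2),
                 key=lambda j: d(tips[thumb], tips[j]))
    rest = [i for i in range(n) if i not in (thumb, index, little)]
    middle = min(rest, key=lambda i: d(tips[index], tips[i]))
    ring = next(i for i in rest if i != middle)
    return {thumb: 'thumb', index: 'index', middle: 'middle',
            ring: 'ring', little: 'little'}

# Invented open-hand fingertip coordinates, thumb first:
names = name_fingers([(0, 8), (4, 0), (6, -1), (8, 0), (10, 2)])
```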
The hand-detection and finger-recognition processing of the invention is carried out each time depth-image data are input. If the same object still exists in the next depth-image frame and its contour is merely slightly deformed relative to the previous frame, all object properties continue to reference the feature points derived from analysing the old depth-image frame, which reduces the program workload and improves efficiency.
Once each finger is identified, gesture recognition begins. For each gesture, the number of fingers is identified first, then the finger names and the direction vector of each finger together with the angles between them are obtained; these three conditions form a three-layer decision tree, thereby achieving gesture recognition. For example, for the gestures "digit 3" and "I love you", three fingers are identified first and then matched to finger names: "I love you" uses the thumb, index finger, and little finger, while "digit 3" uses the index, middle, and ring fingers, so the two gestures are easily distinguished. Again, for the gesture "digit 2" and the Chinese gesture "seven", the finger count and finger names used by the two gestures are identical, and they can be distinguished by the vector angle: for "digit 2" the angle between the two finger direction vectors must be acute and below the preset threshold for the computer to identify it as "digit 2", while the same angle for the Chinese gesture "seven" is greater than the preset threshold.
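The three-layer decision of step (4.5) might be sketched as nested tests; the gesture labels follow the examples in the text, while the 50° angle threshold and the assumption that "digit 2" and "seven" both use the index and middle fingers are illustrative guesses.

```python
def classify(count, names, angle_deg, acute_thresh=50.0):
    """Three-layer decision sketch: finger count, then finger names,
    then the inter-finger vector angle. Labels follow the patent's
    examples; the threshold value is an assumption."""
    if count == 3:
        if names == {'thumb', 'index', 'little'}:
            return 'I love you'
        if names == {'index', 'middle', 'ring'}:
            return 'digit 3'
    if count == 2 and names == {'index', 'middle'}:
        # acute angle -> "digit 2", wide angle -> Chinese "seven"
        return 'digit 2' if angle_deg < acute_thresh else 'seven'
    return 'unknown'

g1 = classify(2, {'index', 'middle'}, 25.0)   # narrow V shape
g2 = classify(2, {'index', 'middle'}, 70.0)   # wide spread
```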
Claims (4)
1. A finger gesture recognition method based on depth images, characterized by comprising the following steps:
(1) opening the depth camera and acquiring depth video data, i.e.
(1.1) the depth camera directly captures a video stream of depth images of the background and the operator's whole body;
(1.2) the voxel information of every depth-image frame acquired from the video stream is spatially transformed into point-cloud information in real space, thereby obtaining the distance of each pixel from the depth camera as well as the operator's skeleton-point information;
(2) inferring the operator's hand length and determining the palm-centre position and hand-length data, i.e.
(2.1) the acquired point-cloud information is processed, and the operator's height is calculated according to formula ①, where, in the formula, Hr is the height of the operator to be measured, HB is the background pixel height, Hp is the pixel height of the operator in the captured image, d is the distance of the operator from the depth camera, and θ is the vertical angle of the depth camera's field of view;
(2.2) according to pre-compiled statistics relating human height to hand length, the operator's hand length HL is obtained;
(3) segmenting, cropping, and pre-processing the spherical hand-region image, i.e.
(3.1) through depth-distance filtering, all point data farther from the palm centre than half the hand length are removed, quickly yielding the hand data, where the hand data are contained in the spherical field centred on the palm with half the hand length as radius;
(3.2) the cropped hand data are clustered with the K-means clustering algorithm;
(3.3) a minimum cluster size is set to filter out non-hand pixel clusters, i.e. noise interference, yielding the hand mask;
(4) recognizing fingertips and performing gesture recognition from their geometric relationships, i.e.
(4.1) the Moore-neighbourhood contour-following algorithm is applied to the obtained hand mask to detect its outer contour, giving the linked list of hand contour points;
(4.2) the Graham scanning algorithm is applied to the obtained outer contour of the hand mask to detect the convex hull of the hand contour, giving the linked list of convex points on the hand contour;
(4.3) the maximum-concave-point contour scanning algorithm is applied to the outer contour of the hand mask and the convex hull of the hand contour to detect the maximum concave point between every pair of convex points, giving the linked list of concave and convex points on the hand contour;
(4.4) the concave-convex angle recognition algorithm is applied to the linked list of concave and convex points on the hand contour to obtain the linked list of fingertip points;
(4.5) once each finger is identified, gesture recognition begins: for each gesture, the number of fingers is identified first, then the finger names and the direction vector of each finger together with the angles between them are obtained, and the finger count, the finger names, and the finger vector angles are used as the three conditions of a three-layer decision tree, thereby achieving gesture recognition.
2. The finger gesture recognition method based on depth images according to claim 1, characterized in that, in step (3.2), the K value of the K-means clustering algorithm is fixed at 2.
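With claim 2 fixing K at 2, the hand segmentation of steps (3.1)-(3.3) can be sketched as follows. This is only a minimal illustration, not the patented implementation; the NumPy point-cloud representation, the Lloyd's-algorithm K-means, and the default minimum cluster size are assumptions:

```python
import numpy as np

def segment_hand(points, palm_center, hand_length, k=2, min_cluster_size=50,
                 iters=20, seed=0):
    """points: (N, 3) float array of depth-camera points; returns the
    points kept as hand data after sphere cropping, K-means clustering,
    and small-cluster (noise) removal."""
    # (3.1) distance filter: keep only points inside the sphere centered
    # at the palm center with radius = half the hand length
    radius = hand_length / 2.0
    dist = np.linalg.norm(points - palm_center, axis=1)
    hand = points[dist <= radius]

    # (3.2) plain K-means (Lloyd's algorithm) on the cropped points;
    # claim 2 fixes k = 2
    rng = np.random.default_rng(seed)
    centers = hand[rng.choice(len(hand), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(hand[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = hand[labels == j].mean(axis=0)

    # (3.3) drop clusters smaller than min_cluster_size as noise
    keep = [j for j in range(k) if np.sum(labels == j) >= min_cluster_size]
    return hand[np.isin(labels, keep)]
```

Cropping first keeps the clustering cheap: K-means runs only on the few hundred points inside the hand sphere rather than on the full depth frame.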
3. The finger gesture recognition method based on depth images according to claim 1, characterized in that, in step (4.3), the detailed procedure of the contour maximum concave point scanning algorithm is as follows:
(4.3.1) copying the linked list of convex points on the hand contour as the initial linked list of convex and concave points;
(4.3.2) for each pair of adjacent convex points in the convex hull linked list, examining in turn the hand contour concave points lying between the two convex points, and using the point-to-line distance formula to find the concave point at the maximum distance from the straight line connecting the two convex points;
(4.3.3) inserting that maximum-distance concave point into the linked list of convex and concave points between the two convex points;
(4.3.4) returning to step (4.3.2) until all points in the hand contour linked list have been examined;
(4.3.5) the maximum-distance point obtained by the above iteration is the maximum concave point, and an ordered linked list of convex and concave points on the hand contour is generated.
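Steps (4.3.1)-(4.3.5) can be sketched as below: for each pair of adjacent hull points, scan the contour points lying between them and keep the one at maximum point-to-line distance. The list-and-index representation (an ordered contour list plus hull vertex indices) is an assumption for illustration; the patent uses linked lists:

```python
import math

def deepest_concave_points(contour, hull_idx):
    """contour: ordered list of (x, y) contour points; hull_idx: indices
    into `contour` of the convex hull vertices, in contour order.
    Returns (contour_index, distance) for the deepest concave point
    between each pair of adjacent hull vertices."""
    result = []
    for i in range(len(hull_idx)):
        a = hull_idx[i]
        b = hull_idx[(i + 1) % len(hull_idx)]
        (x1, y1), (x2, y2) = contour[a], contour[b]
        denom = math.hypot(x2 - x1, y2 - y1)
        if denom == 0:
            continue
        best, best_d = None, 0.0
        j = (a + 1) % len(contour)
        while j != b:                 # contour points strictly between a and b
            x0, y0 = contour[j]
            # point-to-line distance from (x0, y0) to the line through A, B
            d = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / denom
            if d > best_d:
                best, best_d = j, d
            j = (j + 1) % len(contour)
        if best is not None and best_d > 0:
            result.append((best, best_d))
    return result
```

On a square contour with one notch, `deepest_concave_points([(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)], [0, 1, 2, 4])` finds the notch point at index 3, at distance 2.0 from the hull edge above it.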
4. The finger gesture recognition method based on depth images according to claim 1, characterized in that, in step (4.4), the detailed procedure of the concave-convex angle recognition algorithm is as follows:
(4.4.1) traversing the linked list of convex and concave points on the hand contour from top to bottom in order, finding a convex point P0 and selecting its adjacent concave points P1 and P2 in the two directions before and after it;
(4.4.2) forming two vectors, from concave point P1 to convex point P0 and from convex point P0 to concave point P2, and computing their angle at point P0; if the angle is smaller than the set threshold, P0 is identified as a fingertip and stored in the fingertip linked list;
(4.4.3) if the linked list of convex and concave points on the hand contour has not been fully traversed, repeating step (4.4.1) for the next candidate convex point; otherwise terminating;
(4.4.4) computing in turn the distances between every pair of adjacent and every pair of non-adjacent fingertip points in the fingertip linked list; the fingertip point common to the maximum adjacent-pair distance and the maximum non-adjacent-pair distance is identified as the thumb; the fingertip point adjacent to the thumb at the maximum distance is identified as the index finger; the fingertip point non-adjacent to the thumb at the maximum distance is identified as the little finger; the fingertip point nearest the index finger is identified as the middle finger; the remaining fingertip point is identified as the ring finger.
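The angle test of steps (4.4.1)-(4.4.3) can be sketched as below. One reading note: here both vectors are taken outward from P0, toward P1 and toward P2, so that a sharp fingertip yields a small angle, matching the "angle below threshold" criterion; the 60° threshold and the alternating convex/concave list representation are assumptions for illustration:

```python
import math

def fingertips(pts, angle_threshold_deg=60.0):
    """pts: ordered, alternating list of ("convex"|"concave", (x, y))
    pairs around the hand contour. Returns the convex points accepted
    as fingertips."""
    def angle_at(p0, p1, p2):
        # angle at P0 between the vectors P0->P1 and P0->P2, in degrees
        u = (p1[0] - p0[0], p1[1] - p0[1])
        v = (p2[0] - p0[0], p2[1] - p0[1])
        cos = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

    tips = []
    n = len(pts)
    for i, (kind, p0) in enumerate(pts):
        if kind != "convex":
            continue
        k1, p1 = pts[(i - 1) % n]   # adjacent concave point before P0
        k2, p2 = pts[(i + 1) % n]   # adjacent concave point after P0
        if k1 == k2 == "concave" and angle_at(p0, p1, p2) < angle_threshold_deg:
            tips.append(p0)
    return tips
```

On a toy contour, a sharp convex point between two concave valleys (about a 23° angle) is accepted as a fingertip, while a blunt convex point (a 90° angle) is rejected.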
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410213052.7A CN103984928B (en) | 2014-05-20 | 2014-05-20 | Finger gesture recognition methods based on depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103984928A CN103984928A (en) | 2014-08-13 |
CN103984928B true CN103984928B (en) | 2017-08-11 |
Family
ID=51276890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410213052.7A Active CN103984928B (en) | 2014-05-20 | 2014-05-20 | Finger gesture recognition methods based on depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103984928B (en) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104503275B (en) * | 2014-11-21 | 2017-03-08 | 深圳市超节点网络科技有限公司 | Non-contact control method based on gesture and its equipment |
CN104375647B (en) * | 2014-11-25 | 2017-11-03 | 杨龙 | Exchange method and electronic equipment for electronic equipment |
CN104778460B (en) * | 2015-04-23 | 2018-05-04 | 福州大学 | A kind of monocular gesture identification method under complex background and illumination |
CN106886741A (en) * | 2015-12-16 | 2017-06-23 | 芋头科技(杭州)有限公司 | A kind of gesture identification method of base finger identification |
CN106971135A (en) * | 2016-01-14 | 2017-07-21 | 芋头科技(杭州)有限公司 | A kind of slip gesture recognition methods |
CN106971130A (en) * | 2016-01-14 | 2017-07-21 | 芋头科技(杭州)有限公司 | A kind of gesture identification method using face as reference |
CN106971131A (en) * | 2016-01-14 | 2017-07-21 | 芋头科技(杭州)有限公司 | A kind of gesture identification method based on center |
CN106971132A (en) * | 2016-01-14 | 2017-07-21 | 芋头科技(杭州)有限公司 | One kind scanning gesture simultaneously knows method for distinguishing |
CN106970701A (en) * | 2016-01-14 | 2017-07-21 | 芋头科技(杭州)有限公司 | A kind of gesture changes recognition methods |
CN105718776B (en) * | 2016-01-19 | 2018-06-22 | 桂林电子科技大学 | A kind of three-dimension gesture verification method and system |
CN105929956A (en) * | 2016-04-26 | 2016-09-07 | 苏州冰格智能科技有限公司 | Virtual reality-based input method |
CN106648063B (en) * | 2016-10-19 | 2020-11-06 | 北京小米移动软件有限公司 | Gesture recognition method and device |
CN106709951B (en) * | 2017-01-03 | 2019-10-18 | 华南理工大学 | A kind of finger-joint localization method based on depth map |
CN107272878B (en) * | 2017-02-24 | 2020-06-16 | 广州幻境科技有限公司 | Identification method and device suitable for complex gesture |
CN109643372A (en) * | 2017-02-28 | 2019-04-16 | 深圳市大疆创新科技有限公司 | A kind of recognition methods, equipment and moveable platform |
CN107092347B (en) * | 2017-03-10 | 2020-06-09 | 深圳市博乐信息技术有限公司 | Augmented reality interaction system and image processing method |
CN106980828B (en) * | 2017-03-17 | 2020-06-19 | 深圳市魔眼科技有限公司 | Method, device and equipment for determining palm area in gesture recognition |
CN108876968A (en) * | 2017-05-10 | 2018-11-23 | 北京旷视科技有限公司 | Recognition of face gate and its anti-trailing method |
CN107341811B (en) * | 2017-06-20 | 2020-11-13 | 上海数迹智能科技有限公司 | Method for segmenting hand region by using MeanShift algorithm based on depth image |
CN107743219B (en) * | 2017-09-27 | 2019-04-12 | 歌尔科技有限公司 | Determination method and device, projector, the optical projection system of user's finger location information |
CN108073283B (en) * | 2017-12-07 | 2021-02-09 | 广州深灵科技有限公司 | Hand joint calculation method and glove |
CN108062525B (en) * | 2017-12-14 | 2021-04-23 | 中国科学技术大学 | Deep learning hand detection method based on hand region prediction |
CN110889387A (en) * | 2019-12-02 | 2020-03-17 | 浙江工业大学 | Real-time dynamic gesture recognition method based on multi-track matching |
CN110888536B (en) * | 2019-12-12 | 2023-04-28 | 北方工业大学 | Finger interaction recognition system based on MEMS laser scanning |
CN111191632B (en) * | 2020-01-08 | 2023-10-13 | 梁正 | Gesture recognition method and system based on infrared reflective glove |
CN113781182A (en) * | 2021-09-22 | 2021-12-10 | 拉扎斯网络科技(上海)有限公司 | Behavior recognition method, behavior recognition device, electronic apparatus, storage medium, and program product |
CN114170382B (en) * | 2021-12-07 | 2022-11-22 | 深圳职业技术学院 | High-precision three-dimensional reconstruction method and device based on numerical control machine tool |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982557A (en) * | 2012-11-06 | 2013-03-20 | 桂林电子科技大学 | Method for processing space hand signal gesture command based on depth camera |
CN103208002A (en) * | 2013-04-10 | 2013-07-17 | 桂林电子科技大学 | Method and system used for recognizing and controlling gesture and based on hand profile feature |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9696808B2 (en) * | 2006-07-13 | 2017-07-04 | Northrop Grumman Systems Corporation | Hand-gesture recognition method |
- 2014-05-20 CN CN201410213052.7A patent/CN103984928B/en active Active
Non-Patent Citations (2)
Title |
---|
"基于Kinect深度信息的手指检测与手势识别";郑斌汪等;《Transactions on Computer Science and Technology》;20140331;第3卷(第1期);第9-14页 * |
"基于景深图像的身高测量系统设计";周长邵等;《桂林电子科技大学学报》;20130630;第33卷(第3期);第214-217页 * |
Also Published As
Publication number | Publication date |
---|---|
CN103984928A (en) | 2014-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103984928B (en) | Finger gesture recognition methods based on depth image | |
JP6079832B2 (en) | Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method | |
Zhou et al. | A novel finger and hand pose estimation technique for real-time hand gesture recognition | |
Raheja et al. | Real-time robotic hand control using hand gestures | |
Oprisescu et al. | Automatic static hand gesture recognition using tof cameras | |
Wu et al. | Robust fingertip detection in a complex environment | |
Bhuyan et al. | Fingertip detection for hand pose recognition | |
TW201120681A (en) | Method and system for operating electric apparatus | |
Mesbahi et al. | Hand gesture recognition based on convexity approach and background subtraction | |
CN106886741A (en) | A kind of gesture identification method of base finger identification | |
Ma et al. | Real-time and robust hand tracking with a single depth camera | |
Li et al. | A novel hand gesture recognition based on high-level features | |
Wang | Gesture recognition by model matching of slope difference distribution features | |
Tang et al. | Hand tracking and pose recognition via depth and color information | |
Chaudhary et al. | A vision-based method to find fingertips in a closed hand | |
Hoque et al. | Computer vision based gesture recognition for desktop object manipulation | |
Do et al. | Particle filter-based fingertip tracking with circular hough transform features | |
Simion et al. | Finger detection based on hand contour and colour information | |
Jacob et al. | Real time static and dynamic hand gestures cognizance for human computer interaction | |
Badi et al. | Feature extraction technique for static hand gesture recognition | |
Choi et al. | A study on providing natural two-handed interaction using a hybrid camera | |
Xie et al. | Hand posture recognition using kinect | |
Hussain et al. | Tracking and replication of hand movements by teleguided intelligent manipulator robot | |
Jiang et al. | A robust method of fingertip detection in complex background | |
Prabhakar et al. | AI And Hand Gesture Recognition Based Virtual Mouse |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||