CN106886741A - A gesture recognition method based on finger identification - Google Patents
A gesture recognition method based on finger identification
- Publication number
- CN106886741A CN106886741A CN201510943700.9A CN201510943700A CN106886741A CN 106886741 A CN106886741 A CN 106886741A CN 201510943700 A CN201510943700 A CN 201510943700A CN 106886741 A CN106886741 A CN 106886741A
- Authority
- CN
- China
- Prior art keywords
- hand
- user
- information
- palm
- gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a gesture recognition method based on finger identification, belonging to the technical field of gesture recognition. The method includes: acquiring a video data stream covering the user's whole body through an image acquisition device, and processing it to obtain skeleton point information; determining palm-center position information and hand-length information from the skeleton point information; judging, from the palm-center position information, whether the height of the user's palm center above the ground exceeds a preset height threshold, and continuing only if it does; obtaining the image of the palm region, then segmenting, cropping and preprocessing that image to obtain and output a corresponding hand mask; and, from the processing result, identifying the fingertip regions of the hand and recognizing the user's gesture from the geometric relationships of the fingertip regions. The beneficial effects of the above technical solution are: the influence of the external background is eliminated, invalid gestures are prevented from being mistaken for gesture-command input, and the accuracy of gesture recognition is improved.
Description
Technical field
The present invention relates to the technical field of gesture recognition, and more particularly to a gesture recognition method based on finger identification.
Background art
Current gesture recognition methods, both domestic and international, fall roughly into two classes: those based on wearable devices and those based on conventional vision. Wearable-device approaches obtain finger-motion features from sensors such as data gloves and position trackers, feed them to a computer for joint-data analysis with neural networks, and thereby recover the gesture for human-computer interaction. Their main advantage is that they can determine finger postures and gestures accurately, but the equipment is comparatively expensive, which hinders large-scale adoption. Conventional-vision approaches capture gesture video or images with an ordinary camera and then run recognition processing. Although they offer a natural interaction experience, improving robustness and reliably extracting features such as hand position, hand shape and finger orientation typically requires the user to wear colored gloves or clothing meeting particular requirements, and the background to be of uniform color. Conventional-vision methods are therefore easily affected by environmental factors such as background, lighting and camera position.
Summary of the invention
In view of the above problems in the prior art, a technical solution for a finger-based gesture recognition method using depth images is now provided, specifically comprising:
A gesture recognition method, comprising the following steps:
Step S1: acquiring a video data stream covering the user's whole body through an image acquisition device, and processing it to obtain the skeleton point information of each skeleton point associated with the user;
Step S2: determining, from the skeleton point information, palm-center position information representing the position of the user's palm center and hand-length information representing the user's hand length;
Step S3: judging, from the palm-center position information, whether the height of the user's palm center above the ground exceeds a preset height threshold:
if so, continuing with step S4;
if not, exiting;
Step S4: obtaining the image of the palm region, then segmenting, cropping and preprocessing that image to obtain and output a corresponding hand mask;
Step S5: identifying, from the processing result, the fingertip regions of the hand, and recognizing the user's gesture from the geometric relationships of the fingertip regions.
Preferably, in step S1 of the gesture recognition method, the image acquisition device is a depth camera;
the video data is depth video data covering the user's whole body.
Preferably, in the gesture recognition method, step S1 comprises:
Step S11: using the image acquisition device to capture a video data stream of depth images containing the background and the user's whole body;
Step S12: spatially transforming the three-dimensional information of the pixels of each frame's depth image in the video data stream, to obtain the corresponding point cloud information in real space;
Step S13: obtaining, from the point cloud information corresponding to each pixel, the distance between that pixel and the depth camera;
Step S14: processing the distance corresponding to each pixel to obtain the skeleton point information.
Preferably, in the gesture recognition method, step S2 comprises:
Step S21: obtaining the user's palm-center position information from the processed skeleton point information of each skeleton point associated with the user;
Step S22: calculating the user's height information from the processed skeleton point information of each skeleton point associated with the user, according to the following formula:
where H1 represents the user's height value, H2 represents the pixel height of the background, H3 represents the pixel height of the user in the captured video image, d represents the distance between the user and the depth camera, and θ represents the vertical angle of the depth camera relative to the horizontal direction;
Step S23: obtaining the user's hand-length information from a preset correspondence between human height and human hand length.
Preferably, in the gesture recognition method, step S4 comprises:
Step S41: according to the palm-center position information and the hand-length information, removing the information of every pixel whose distance from the palm-center position exceeds half the hand length, and obtaining hand data from the information of all pixels remaining after the removal;
Step S42: clustering the obtained hand data with the K-means clustering algorithm, to obtain the clustered hand data;
Step S43: setting a minimum cluster size and filtering out noise-interference pixel clusters from the hand data, so as to obtain and output the hand mask associated with the hand data.
Preferably, in the gesture recognition method, the hand data is contained in a spherical region whose radius is half the user's hand length and whose center is the user's palm-center position.
Preferably, in the gesture recognition method, step S5 comprises:
Step S51: detecting the edge contour of the hand mask with the Moore-neighborhood contour-tracing algorithm, and obtaining a first point-chain set of all contour points on the edge contour;
Step S52: detecting the convex hull set on the hand contour of the hand mask with the Graham scan algorithm, and obtaining a second point-chain set containing all convex hull points;
Step S53: using the contour maximum-depression-point scanning algorithm to detect, on the edge contour of the hand mask, the maximum depression points between the convex points of the hand contour's convex hull set, and obtaining a third point-chain set of the concave and convex points on the hand contour;
Step S54: using the concave-convex angle recognition algorithm to process the third point-chain set associated with the hand contour, obtaining a fourth point-chain set containing all fingertip points of the hand;
Step S55: identifying each finger of the hand from the fingertip points, then performing the gesture recognition operation.
Preferably, in step S55 of the gesture recognition method, performing the gesture recognition operation specifically comprises:
Step S551: identifying the number of fingers of the hand;
Step S552: determining, from preset information, the name of each finger, its direction vector, and the angles between adjacent fingers, and outputting them;
Step S553: forming a three-layer decision tree from the information output in step S552, and recognizing the gesture according to the three-layer decision tree.
Preferably, in step S42 of the gesture recognition method, the K value in the K-means clustering algorithm is set to the fixed number 2.
The beneficial effect of the above technical solution is to provide a gesture recognition method that eliminates the influence of the external background, prevents invalid gestures from being mistaken for gesture-command input, and improves the accuracy of gesture recognition.
Brief description of the drawings
Fig. 1 is an overall flow diagram of a gesture recognition method in a preferred embodiment of the invention;
Fig. 2 is a flow diagram of capturing and processing to obtain the user's skeleton point information, in a preferred embodiment of the invention;
Fig. 3 is a flow diagram of processing to obtain the palm-center position information and hand-length information, in a preferred embodiment of the invention;
Fig. 4 is a flow diagram of processing to obtain the hand mask, in a preferred embodiment of the invention;
Fig. 5 is a flow diagram of recognizing the gesture, in a preferred embodiment of the invention;
Fig. 6 is a flow diagram of the contour maximum-depression-point scanning algorithm, in a preferred embodiment of the invention;
Fig. 7 is a flow diagram of the concave-convex angle recognition algorithm, in a preferred embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the invention.
It should be noted that, where no conflict arises, the embodiments of the invention and the features in the embodiments may be combined with one another.
The invention is further described below with reference to the accompanying drawings and specific embodiments, which do not limit the invention.
A preferred embodiment of the invention provides a gesture recognition method whose overall flow, shown in Fig. 1, comprises the following steps:
Step S1: acquire a video data stream covering the user's whole body through an image acquisition device, and process it to obtain the skeleton point information of each skeleton point associated with the user.
Specifically, as shown in Fig. 2, step S1 comprises the following steps:
Step S11: use the image acquisition device to capture a video data stream of depth images containing the background and the user's whole body.
In the preferred embodiment, the image acquisition device may be a camera mounted on an intelligent terminal that supports gesture-command interaction, preferably a depth camera, that is, a camera capable of clear imaging over the range of longitudinal distances to the imaged object.
In step S11, the depth camera directly films the background of the scene where the user stands together with the user's whole-body depth image, ultimately forming and outputting the above video data stream.
Step S12: spatially transform the three-dimensional information of the pixels of each frame's depth image in the video data stream, to obtain the corresponding point cloud information in real space.
In the preferred embodiment, in step S12, the voxel information of each pixel in each frame's depth image of the captured video data stream is spatially transformed to obtain its corresponding point cloud information in real space.
Step S13: from the point cloud information corresponding to each pixel, obtain the distance between that pixel and the depth camera.
Using the point cloud information obtained in step S12, step S13 further processes it to obtain the distance between each pixel and the depth camera.
Step S14: process the distance corresponding to each pixel to obtain the skeleton point information.
In step S14, the user's skeleton point information is finally obtained by processing the distance between each pixel and the depth camera. A so-called skeleton point model can be regarded as a human-body markup model containing multiple skeleton points that mark different parts of the body; different skeleton points may mark the different joints of the human body. For example, one class of such human visual models defines 20 skeleton points, each being a joint, and together they represent the skeleton state of a person standing. In other words, before the gesture recognition method is performed, a human-body model containing multiple skeleton points must first be defined; many techniques for presetting such a human visual model exist in the prior art and are not detailed here.
In the preferred embodiment, the specific process of spatially transforming the voxel information of each depth-image frame into point cloud information in real space can be realized with existing software; it suffices to call the API of the relevant software, which is not repeated here.
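As an illustration of the spatial transformation in step S12 and the distance computation in step S13, the following is a minimal sketch that back-projects a depth image into a camera-space point cloud with the standard pinhole model; the intrinsic parameters fx, fy, cx, cy are assumptions for illustration, since the patent leaves this step to existing software APIs:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud in
    camera space using the pinhole model: X = (u-cx)*Z/fx, Y = (v-cy)*Z/fy."""
    v, u = np.indices(depth.shape)          # pixel row/column grids
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]         # keep valid (non-zero) depths

# Distance of each point from the camera origin, as needed by step S13:
# dist = np.linalg.norm(points, axis=1)
```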
Step S2: from the skeleton point information, determine palm-center position information representing the position of the user's palm center, and hand-length information representing the user's hand length.
In the preferred embodiment, the palm-center position information indicates the position of the user's palm center and, by extension, the position of the user's hand.
In the preferred embodiment, the hand-length information indicates the length of the user's hand. This information is usually preset, for example calculated from a height-to-hand-length ratio obtained by prior training; the calculation is detailed below.
In the preferred embodiment, as shown in Fig. 3, step S2 further comprises:
Step S21: obtain the user's palm-center position information from the processed skeleton point information of each skeleton point associated with the user.
Step S22: calculate the user's height information from the processed skeleton point information of each skeleton point associated with the user, according to the following formula:
where H1 represents the user's height value, H2 represents the pixel height of the background, H3 represents the pixel height of the user in the captured video image, d represents the distance between the user and the depth camera, and θ represents the vertical angle of the depth camera relative to the horizontal direction. The value of H2 can be preset, for example to 240; likewise θ can be preset, for example to 21.5°.
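The formula itself is not reproduced in this text. A plausible reconstruction from the variables above, on the assumption that θ is the camera's vertical half field-of-view angle (so that the H2 background pixels span a real-world height of 2·d·tan θ at distance d), would be:

```latex
% Hypothetical reconstruction, not taken from the patent text:
% the H2 background pixels cover a real height of 2*d*tan(theta) at range d,
% so the user's H3 pixels scale proportionally.
H_1 = \frac{H_3}{H_2} \cdot 2\, d \tan\theta
```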
Step S23: obtain the user's hand-length information from a preset correspondence between human height and human hand length.
In the preferred embodiment, the correspondence between human height and hand length can be obtained from a large amount of human-body data by multiple linear regression analysis under big-data statistics.
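As a sketch of how such a height-to-hand-length correspondence might be fitted (the sample data and the simple one-variable linear form below are illustrative assumptions; the patent describes multiple linear regression over a large body-measurement dataset):

```python
import numpy as np

# Hypothetical training pairs: (height in cm, hand length in cm)
heights = np.array([155.0, 162.0, 170.0, 178.0, 185.0])
hand_lengths = np.array([16.8, 17.5, 18.4, 19.2, 20.0])

# Fit hand_length = a * height + b by least squares
a, b = np.polyfit(heights, hand_lengths, deg=1)

def hand_length_from_height(h_cm):
    """Predict hand length from body height using the fitted relation."""
    return a * h_cm + b

print(hand_length_from_height(172.0))  # ~18.6 cm with the sample data above
```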
Step S3: from the palm-center position information, judge whether the height of the user's palm center above the ground exceeds a preset height threshold:
if so, continue with step S4;
if not, exit.
In the prior art, intelligent terminals supporting gesture recognition commonly face the following situation: the user stands within the image-capture range of the depth camera but has no intention of performing gesture operations on the terminal. The user may then unconsciously wave an arm while doing something else (for example, talking with another person), and this series of motions may cause the terminal to misread them as gestures, identifying motions the user made without intent as gesture commands for controlling the terminal.
In the preferred embodiment, to avoid such misreadings, a height threshold is preset before gesture recognition: the calibrated height of the hand above the ground when the user performs a standard gesture motion. In other words, as long as the user's hand is higher above the ground than this threshold, the user can be taken to be attempting to input a gesture command to the terminal; otherwise, the user can be considered not to intend controlling the terminal by gesture.
In step S3, the height of the hand above the ground can first be determined from the palm-center position and the whole-body image of the user. If, when presetting the height threshold, the threshold is set directly to the height above ground of the center of the hand (i.e., the palm center) in a standard gesture motion, then in the actual calculation the ground clearance can be computed directly from the palm-center position and compared with the preset threshold. Alternatively, the threshold may be set to the height above ground of the bottom/top edge of the hand in a standard gesture motion; in that case the actual calculation must first infer the approximate bottom/top edge position of the hand from the palm-center position information, compute the actual ground clearance of the hand, and then compare it with the preset threshold.
Step S4: obtain the image of the palm region, then segment, crop and preprocess that image to obtain and output a corresponding hand mask.
In the preferred embodiment, step S4 comprises the following steps, as shown in Fig. 4:
Step S41: according to the palm-center position information and the hand-length information, remove the information of every pixel whose distance from the palm-center position exceeds half the hand length, and obtain hand data from the information of all pixels remaining after the removal.
In the preferred embodiment, a depth-distance filtering algorithm removes the data of all pixels farther from the palm-center position than half the hand length, so that the hand data can be obtained quickly. In other words, what remains after filtering is a spherical region centered on the palm-center position with radius equal to half the hand length, and all pixels within this spherical region are retained as the pixels of the hand data.
Therefore, in the preferred embodiment, the user's hand data is contained in a spherical region whose radius is half the user's hand length and whose center is the user's palm-center position.
Specifically, in the preferred embodiment, in step S41, the set of pixels within the spherical region, namely the hand data, is calculated according to the following formula:
where p0 represents the set of pixels within the spherical region, p denotes a pixel belonging to the user's hand, p(x, y, z) denotes the pixel p at coordinates (x, y, z), p(x0, y0, z0) denotes the pixel at coordinates (x0, y0, z0), i.e., the pixel at the palm-center position, and H4 represents the numerical value of the hand-length information.
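The formula is likewise not reproduced in this text. From the variables just defined, a plausible reconstruction of the spherical filter would be:

```latex
% Hypothetical reconstruction of the spherical filter in step S41:
p_0 = \Big\{\, p(x,y,z) \;\Big|\;
      \sqrt{(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2} \le \tfrac{H_4}{2} \,\Big\}
```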
Step S42: cluster the obtained hand data with the K-means clustering algorithm, to obtain the clustered hand data.
In the preferred embodiment, the K value in the K-means algorithm of step S42 (the number of classes) can be specified by the developer; in a preferred embodiment of the invention, K takes the fixed value 2.
Step S43: set a minimum cluster size and filter out noise-interference pixel clusters from the hand data, so as to obtain and output the hand mask associated with the hand data.
In the preferred embodiment, the hand mask can be a binary image composed of 0s and 1s. In a preferred embodiment of the invention, the minimum cluster size (minimum pixel-count threshold per cluster) set in step S43 is 50 pixels.
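A minimal sketch of steps S42 and S43, assuming scikit-learn's KMeans as the clustering implementation (the patent does not name a library), with K = 2 and the 50-pixel minimum cluster size mentioned above:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_filter(hand_points, k=2, min_cluster_size=50):
    """Cluster candidate hand points (Nx3) with K-means and drop clusters
    smaller than min_cluster_size, treating them as noise (steps S42-S43)."""
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=0).fit_predict(hand_points)
    keep = np.zeros(len(hand_points), dtype=bool)
    for c in range(k):
        members = labels == c
        if members.sum() >= min_cluster_size:   # keep only large clusters
            keep |= members
    return hand_points[keep]
```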
Step S5: from the processing result, identify the fingertip regions of the hand, and recognize the user's gesture from the geometric relationships of the fingertip regions.
In the preferred embodiment, a concave-convex point angle recognition algorithm is proposed that combines contour-curvature-based fingertip detection with the characteristics of depth images. This algorithm overcomes the deficiencies of the conventional three-point alignment method for fingertip detection (for example, its lack of relative invariance, its higher requirements on the distance between the image and the camera, and its increased computational load). On the basis of this algorithm, each finger of the hand is recognized using the spatial position relationship between the human body and the hand. Finally, a three-layer decision tree can be formed, and the gesture analyzed through the fingertip region of each finger, thereby recognizing the user's gesture motion.
Specifically, in the preferred embodiment, as shown in Fig. 5, step S5 comprises:
Step S51: detect the edge contour of the hand mask with the Moore-neighborhood contour-tracing algorithm, and obtain a first point-chain set of all contour points on the edge contour.
The Moore-neighborhood contour-tracing algorithm is a fairly classical contour-detection algorithm in the prior art and is not repeated here.
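For reference, a minimal sketch of Moore-neighborhood contour tracing on a binary mask; it uses the simple return-to-start stopping test rather than the more robust Jacob's criterion, so it is an illustration rather than a production implementation:

```python
import numpy as np

# 8 Moore neighbours of a pixel, in clockwise order starting from the west
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]

def moore_trace(mask):
    """Trace the outer contour of a binary hand mask (nonzero = hand).
    Returns the ordered list of contour pixels as (row, col) tuples."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return []
    b = (rows.min(), cols[rows == rows.min()].min())  # uppermost-leftmost pixel
    c = (b[0], b[1] - 1)                              # its west neighbour (background)
    start, contour = b, [b]
    while True:
        d = OFFSETS.index((c[0] - b[0], c[1] - b[1]))
        for i in range(1, 9):                         # scan clockwise from c
            k = (d + i) % 8
            nb = (b[0] + OFFSETS[k][0], b[1] + OFFSETS[k][1])
            if (0 <= nb[0] < mask.shape[0] and 0 <= nb[1] < mask.shape[1]
                    and mask[nb]):
                prev = (d + i - 1) % 8                # backtrack point before the hit
                c = (b[0] + OFFSETS[prev][0], b[1] + OFFSETS[prev][1])
                b = nb
                break
        else:
            return contour                            # isolated pixel
        if b == start:                                # simple stopping test
            return contour
        contour.append(b)
```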
Step S52: detect the convex hull set on the hand contour of the hand mask with the Graham scan algorithm, and obtain a second point-chain set containing all convex hull points.
The Graham scan is likewise a classical convex hull algorithm and is not repeated here either.
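A compact sketch of the convex hull step; note it uses Andrew's monotone chain, a Graham-scan variant with the same output, rather than the polar-angle Graham scan itself:

```python
def convex_hull(points):
    """Convex hull of 2-D points (monotone chain, a Graham-scan variant).
    Returns hull vertices in counter-clockwise order for y-up axes."""
    pts = sorted(set(points))                 # lexicographic sort
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower, upper = [], []
    for p in pts:                              # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                    # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]             # drop duplicated endpoints
```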
Step S53: using the contour maximum-depression-point scanning algorithm, detect on the edge contour of the hand mask the maximum depression points between the convex points of the hand contour's convex hull set, and obtain a third point-chain set of the concave and convex points on the hand contour.
Further, in the preferred embodiment, as shown in Fig. 6, the so-called contour maximum-depression-point scanning algorithm in step S53 specifically comprises:
Step S531: take the second point-chain set on the hand contour as the initial third point-chain set.
Step S532: for each pair of adjacent convex points in the second point-chain set, examine in turn every concave point of the hand contour lying between them, and use the point-to-line distance formula to detect the concave point whose distance to the straight line connecting the two adjacent convex points is the greatest.
Step S533: insert this concave point with the greatest distance into the third point-chain set, between the two adjacent convex points.
Step S534: repeat steps S532 to S533 until all points in the third point-chain set have been examined.
Step S535: the point with the greatest distance, obtained by iteration, is the maximum depression point; generate the ordered third point-chain set on the hand contour.
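A sketch of the scan in steps S532-S533, assuming the contour is the ordered point list from step S51 and every hull vertex lies on the contour with hull order consistent with the contour traversal:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    num = abs((b[0]-a[0]) * (a[1]-p[1]) - (a[0]-p[0]) * (b[1]-a[1]))
    den = math.hypot(b[0]-a[0], b[1]-a[1])
    return num / den if den else math.hypot(p[0]-a[0], p[1]-a[1])

def peak_valley_chain(contour, hull):
    """Between each pair of adjacent hull vertices, find the contour point
    farthest from their connecting line (the deepest valley) and insert it,
    yielding the ordered convex/concave point chain of step S53."""
    index = {p: i for i, p in enumerate(contour)}
    chain = []
    for j in range(len(hull)):
        a, b = hull[j], hull[(j + 1) % len(hull)]
        chain.append(a)
        i0, i1 = index[a], index[b]
        between = (contour[i0+1:i1] if i0 < i1
                   else contour[i0+1:] + contour[:i1])   # wrap around the end
        if between:
            deepest = max(between, key=lambda p: point_line_distance(p, a, b))
            if point_line_distance(deepest, a, b) > 0:
                chain.append(deepest)
    return chain
```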
Step S54: using the concave-convex angle recognition algorithm, process the third point-chain set associated with the hand contour to obtain a fourth point-chain set containing all fingertip points of the hand.
Specifically, in the preferred embodiment, as shown in Fig. 7, the so-called concave-convex angle recognition algorithm in step S54 comprises:
Step S541: in the third point-chain set on the hand contour, from top to bottom, find a convex point P1 in order, and choose the adjacent concave points P2 and P3 from its two sides.
Step S542: form the two vectors from P1 to P2 and from P1 to P3, and calculate their angle at P1; if the angle is less than the set threshold, identify P1 as a fingertip point and store it in the fourth point-chain set.
Step S543: if the third point-chain set on the hand contour has not been fully examined, repeat step S541 to test the next candidate convex point; otherwise terminate.
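A sketch of the angle test in steps S541-S542, assuming the chain produced by step S53 alternates convex point, concave point, convex point, … with convex points at even indices (the 40° default comes from the threshold mentioned below):

```python
import math

def fingertip_points(chain, angle_threshold_deg=40.0):
    """For each convex point P1, take its neighbouring concave points P2 and
    P3, compute the angle between vectors P1->P2 and P1->P3, and keep P1 as
    a fingertip if the angle is below the threshold (steps S541-S543)."""
    tips = []
    for i in range(0, len(chain), 2):          # convex points at even indices
        p1 = chain[i]
        p2 = chain[i - 1]                      # preceding concave point
        p3 = chain[(i + 1) % len(chain)]       # following concave point
        v1 = (p2[0] - p1[0], p2[1] - p1[1])
        v2 = (p3[0] - p1[0], p3[1] - p1[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue
        cos_a = (v1[0]*v2[0] + v1[1]*v2[1]) / (n1 * n2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < angle_threshold_deg:
            tips.append(p1)
    return tips
```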
Step S55: identify each finger of the hand from the fingertip points, then perform the gesture recognition operation.
In the preferred embodiment, in step S55, the distances between every two adjacent and non-adjacent fingertip points in the fourth point-chain set can be calculated in turn, and the fingers corresponding to the different fingertip regions determined from these distances.
Specifically, in a preferred embodiment of the invention, the fingertip point shared by the adjacent pair with the greatest distance and the non-adjacent pair with the greatest distance can be defined as the thumb; the fingertip point adjacent to the thumb with the greatest distance is defined as the index finger; the fingertip point non-adjacent to the thumb with the greatest distance is defined as the little finger; the fingertip point nearest the index finger is defined as the middle finger; and the remaining fingertip point is defined as the ring finger.
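As an illustration of the first of these rules, a hypothetical helper that locates the thumb (assuming all five fingertips were detected and are ordered along the contour):

```python
import math
from itertools import combinations

def identify_thumb(tips):
    """Return the index of the thumb tip among fingertip points ordered
    along the contour: the tip shared by the adjacent pair with the
    greatest distance and the non-adjacent pair with the greatest
    distance (hypothetical helper for step S55)."""
    def dist(pair):
        i, j = tuple(pair)
        return math.hypot(tips[i][0] - tips[j][0], tips[i][1] - tips[j][1])

    n = len(tips)
    adjacent = {frozenset((i, (i + 1) % n)) for i in range(n)}
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    far_adj = max((p for p in pairs if p in adjacent), key=dist)
    far_non = max((p for p in pairs if p not in adjacent), key=dist)
    common = far_adj & far_non                 # tip shared by both extremes
    return next(iter(common)) if common else None
```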
In a preferred embodiment of the invention, the preset concave-convex angle threshold can be set to 40°. The technical solution of the invention can then effectively solve the false-positive problems of traditional fingertip detection while reducing the amount of calculation.
In the preferred embodiment, to recognize a gesture, the number of fingers is first identified according to the above steps, and the name of each finger, the direction vector of each finger and the angles between them are obtained; a three-layer decision tree is formed from these three conditions, and recognition of the gesture motion is finally realized according to this three-layer decision tree.
In the preferred embodiment, the three-layer decision tree is a classification technique that inductively learns from samples, generates a corresponding decision tree or decision rules, and then classifies new data according to them; among classification algorithms, the decision tree is the most intuitive. A three-layer decision tree uses the above three conditions as the classification basis of one decision-node layer each, thereby achieving classification.
The hand-detection and finger-identification processing in the invention is performed each time depth image data is input. If the same object still exists in the next depth frame and its contour has not deformed relative to the previous frame, all object attributes continue to reference the feature points derived from analyzing the old depth frame, which reduces the program's workload and improves efficiency.
In a preferred embodiment of the invention, the process of recognizing gestures with the above three-layer decision tree is illustrated by recognizing a gesticulated number and the gesture "I love you":
First, it is identified that the user's current gesture motion involves three fingers, and the corresponding finger names are obtained for further recognition.
It can be known from prior training that the gesture "I love you" uses the thumb, index finger and little finger, while gesticulating a number, for example the Arabic numeral "3", uses the index, middle and ring fingers; the two gestures can therefore be distinguished directly by which fingers are used.
As another example, consider two number gestures, say the Arabic numeral "2" and the Chinese numeral "seven" (七): the two gestures use the same number of fingers with the same names, but can be distinguished by the angle between the two fingers' direction vectors:
For the Arabic numeral "2", the angle between the direction vectors of the user's two fingers is necessarily acute when gesticulating, and will be less than a preset threshold, at which point the computer can recognize the Arabic numeral "2".
Correspondingly, for the Chinese gesture "seven", the angle between the two fingers' direction vectors when gesticulating is greater than that for the Arabic numeral "2"; when the angle exceeds the preset threshold, the current gesture motion is recognized as "seven".
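A minimal sketch of the three-layer decision described here — finger count, then finger names, then inter-finger angle; the gesture labels and the acute-angle threshold are illustrative assumptions:

```python
def classify_gesture(count, names, max_adjacent_angle_deg,
                     acute_threshold_deg=40.0):
    """Three-layer decision: layer 1 tests the finger count, layer 2 the
    set of finger names, layer 3 the angle between finger direction vectors."""
    names = set(names)
    if count == 3:
        if {"thumb", "index", "little"} <= names:
            return "I love you"
        if {"index", "middle", "ring"} <= names:
            return "3"
    if count == 2 and {"index", "middle"} <= names:
        # same fingers for Arabic "2" and Chinese "seven"; the angle decides
        return "2" if max_adjacent_angle_deg < acute_threshold_deg else "seven"
    return "unknown"

print(classify_gesture(2, ["index", "middle"], 25.0))  # -> "2"
```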
In the preferred embodiment, there are many specific ways of recognizing gesture motions with the three-layer decision tree, too numerous to list here; any manner of recognizing gestures with a three-layer decision tree formed from the above three conditions falls within the scope of protection of the invention.
The above are only preferred embodiments of the present invention and do not thereby limit the embodiments or the scope of protection of the invention. Those skilled in the art will appreciate that all schemes obtained by equivalent substitution or obvious variation based on the description and drawings of the invention shall fall within the scope of protection of the invention.
Claims (9)
1. A gesture recognition method, characterized by comprising the following steps:
Step S1: acquiring a video data stream covering the user's whole body through an image acquisition device, and processing it to obtain the skeleton point information of each skeleton point associated with the user;
Step S2: determining, from the skeleton point information, palm-center position information representing the position of the user's palm center and hand-length information representing the user's hand length;
Step S3: judging, from the palm-center position information, whether the height of the user's palm center above the ground exceeds a preset height threshold:
if so, continuing with step S4;
if not, exiting;
Step S4: obtaining the image of the palm region, then segmenting, cropping and preprocessing that image to obtain and output a corresponding hand mask;
Step S5: identifying, from the processing result, the fingertip regions of the hand, and recognizing the user's gesture from the geometric relationships of the fingertip regions.
2. The gesture recognition method of claim 1, characterized in that, in step S1, the image acquisition device is a depth camera;
the video data is depth video data covering the user's whole body.
3. The gesture recognition method of claim 2, characterized in that step S1 comprises:
Step S11: using the image acquisition device to capture a video data stream of depth images containing the background and the user's whole body;
Step S12: spatially transforming the three-dimensional information of the pixels of each frame's depth image in the video data stream, to obtain the corresponding point cloud information in real space;
Step S13: obtaining, from the point cloud information corresponding to each pixel, the distance between that pixel and the depth camera;
Step S14: processing the distance corresponding to each pixel to obtain the skeleton point information.
4. The gesture recognition method of claim 1, characterized in that step S2 comprises:
Step S21: obtaining the user's palm-center position information from the processed skeleton point information of each skeleton point associated with the user;
Step S22: calculating the user's height information from the processed skeleton point information of each skeleton point associated with the user, according to the following formula:
where H1 represents the user's height value, H2 represents the pixel height of the background, H3 represents the pixel height of the user in the captured video image, d represents the distance between the user and the depth camera, and θ represents the vertical angle of the depth camera relative to the horizontal direction;
Step S23: obtaining the user's hand-length information from a preset correspondence between human height and human hand length.
5. The gesture recognition method of claim 1, characterized in that step S4 comprises:
Step S41: according to the palm-center position information and the hand-length information, removing the information of every pixel whose distance from the palm-center position exceeds half the hand length, and obtaining hand data from the information of all pixels remaining after the removal;
Step S42: clustering the obtained hand data with the K-means clustering algorithm, to obtain the clustered hand data;
Step S43: setting a minimum cluster size and filtering out noise-interference pixel clusters from the hand data, so as to obtain and output the hand mask associated with the hand data.
6. The gesture recognition method of claim 5, characterized in that the hand data is contained in a spherical region whose radius is half the user's hand length and whose center is the user's palm-center position.
7. The gesture recognition method of claim 1, characterized in that step S5 comprises:
Step S51: detecting the edge contour of the hand mask with the Moore-neighborhood contour-tracing algorithm, and obtaining a first point-chain set of all contour points on the edge contour;
Step S52: detecting the convex hull set on the hand contour of the hand mask with the Graham scan algorithm, and obtaining a second point-chain set containing all convex hull points;
Step S53: using the contour maximum-depression-point scanning algorithm to detect, on the edge contour of the hand mask, the maximum depression points between the convex points of the hand contour's convex hull set, and obtaining a third point-chain set of the concave and convex points on the hand contour;
Step S54: using the concave-convex angle recognition algorithm to process the third point-chain set associated with the hand contour, obtaining a fourth point-chain set containing all fingertip points of the hand;
Step S55: identifying each finger of the hand from the fingertip points, then performing the gesture recognition operation.
8. The gesture recognition method of claim 7, characterized in that, in step S55, performing the gesture recognition operation specifically comprises:
Step S551: identifying the number of fingers of the hand;
Step S552: determining, from preset information, the name of each finger, its direction vector, and the angles between adjacent fingers, and outputting them;
Step S553: forming a three-layer decision tree from the information output in step S552, and recognizing the gesture according to the three-layer decision tree.
9. The gesture recognition method of claim 5, characterized in that, in step S42, the K value in the K-means clustering algorithm is set to the fixed number 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN201510943700.9A | 2015-12-16 | 2015-12-16 | A gesture recognition method based on finger identification
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN201510943700.9A | 2015-12-16 | 2015-12-16 | A gesture recognition method based on finger identification
Publications (1)
Publication Number | Publication Date |
---|---
CN106886741A (en) | 2017-06-23
Family
ID=59174230
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---
CN201510943700.9A (CN106886741A, pending) | A gesture recognition method based on finger identification | 2015-12-16 | 2015-12-16
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106886741A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
CN103839040A (en) * | 2012-11-27 | 2014-06-04 | 株式会社理光 | Gesture recognition method and device based on depth images
CN103984928A (en) * | 2014-05-20 | 2014-08-13 | 桂林电子科技大学 | Finger gesture recognition method based on depth images
CN104063059A (en) * | 2014-07-13 | 2014-09-24 | 华东理工大学 | Real-time gesture recognition method based on finger segmentation
CN104360811A (en) * | 2014-10-22 | 2015-02-18 | 河海大学 | Single-finger hand gesture recognition method
CN104463146A (en) * | 2014-12-30 | 2015-03-25 | 华南师范大学 | Posture recognition method and device based on near-infrared TOF camera depth information
CN104899600A (en) * | 2015-05-28 | 2015-09-09 | 北京工业大学 | Hand feature point detection method based on depth maps
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
CN111670457B (en) * | 2017-12-03 | 2023-12-01 | 元平台公司 | Optimization of dynamic object instance detection, segmentation and structure mapping |
CN111670457A (en) * | 2017-12-03 | 2020-09-15 | 脸谱公司 | Optimization of dynamic object instance detection, segmentation and structure mapping |
CN108335296A (en) * | 2018-02-28 | 2018-07-27 | 中际山河科技有限责任公司 | A kind of pole plate identification device and method |
CN108335296B (en) * | 2018-02-28 | 2021-10-01 | 中际山河科技有限责任公司 | Polar plate identification device and method |
JP2021519994A (en) * | 2018-09-18 | 2021-08-12 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Data processing methods and devices, electronic devices and storage media |
WO2020057122A1 (en) * | 2018-09-18 | 2020-03-26 | 北京市商汤科技开发有限公司 | Data processing method and apparatus, electronic device, and storage medium |
JP7096910B2 (en) | 2018-09-18 | 2022-07-06 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | Data processing methods and equipment, electronic devices and storage media |
CN111045511B (en) * | 2018-10-15 | 2022-06-07 | 华为技术有限公司 | Gesture-based control method and terminal equipment |
CN111045511A (en) * | 2018-10-15 | 2020-04-21 | 华为技术有限公司 | Gesture-based control method and terminal equipment |
WO2020088092A1 (en) * | 2018-11-01 | 2020-05-07 | 北京达佳互联信息技术有限公司 | Key point position determining method and apparatus, and electronic device |
CN111222486A (en) * | 2020-01-15 | 2020-06-02 | 腾讯科技(深圳)有限公司 | Training method, device and equipment for hand gesture recognition model and storage medium |
CN111222486B (en) * | 2020-01-15 | 2022-11-04 | 腾讯科技(深圳)有限公司 | Training method, device and equipment for hand gesture recognition model and storage medium |
CN111401318A (en) * | 2020-04-14 | 2020-07-10 | 支付宝(杭州)信息技术有限公司 | Action recognition method and device |
CN111401318B (en) * | 2020-04-14 | 2022-10-04 | 支付宝(杭州)信息技术有限公司 | Action recognition method and device |
CN115100747A (en) * | 2022-08-26 | 2022-09-23 | 山东宝德龙健身器材有限公司 | Treadmill intelligent auxiliary system based on visual detection |
Similar Documents
Publication | Title
---|---
CN106886741A (en) | A gesture recognition method based on finger identification
CN103984928B (en) | Finger gesture recognition method based on depth images
CN106971130A (en) | A gesture recognition method using the face as reference
CN101344816B (en) | Human-machine interaction method and device based on gaze tracking and gesture recognition
CN102368290B (en) | Hand gesture identification method based on advanced finger features
CN103971102B (en) | Static gesture recognition method based on finger contour and decision tree
Bhuyan et al. | Fingertip detection for hand pose recognition
CN102831404B (en) | Gesture detection method and system
CN104899600B (en) | Hand feature point detection method based on depth maps
CN102402289B (en) | Mouse recognition method for gestures based on machine vision
CN110221699B (en) | Eye movement behavior recognition method for a front-facing camera video source
CN106971131A (en) | A center-based gesture recognition method
CN109190460B (en) | Hand-shape and arm-vein fusion recognition method based on cumulative matching and equal error rate
CN105975934A (en) | Dynamic gesture recognition method and system for augmented-reality-assisted maintenance
CN106970701A (en) | A gesture change recognition method
CN108846356B (en) | Palm tracking and positioning method based on real-time gesture recognition
Pandey et al. | Hand gesture recognition for sign language recognition: A review
CN103530892A (en) | Kinect-sensor-based two-hand tracking method and device
CN106446911A (en) | Hand recognition method based on image edge line curvature and distance features
CN106503619B (en) | Gesture recognition method based on BP neural network
Vishwakarma et al. | Simple and intelligent system to recognize the expression of speech-disabled person
CN105335711A (en) | Fingertip detection method in complex environments
CN103927555A (en) | Static sign-language letter recognition system and method based on Kinect sensor
CN108614988A (en) | Automatic recognition system for motion gestures under complex backgrounds
CN103544469B (en) | Fingertip detection method and device based on palm ranging
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170623 |