CN107958218A - A real-time gesture recognition method - Google Patents

A real-time gesture recognition method Download PDF

Info

Publication number
CN107958218A
CN107958218A (also published as CN 107958218 A; application CN201711221554.4A)
Authority
CN
China
Prior art keywords
gesture
hand
image
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711221554.4A
Other languages
Chinese (zh)
Inventor
张晖
杨纯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201711221554.4A priority Critical patent/CN107958218A/en
Publication of CN107958218A publication Critical patent/CN107958218A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/117Biometrics derived from hands

Abstract

The invention discloses a real-time gesture recognition method comprising the following steps: 1) decompose the captured gesture video into a chronologically ordered image sequence, preprocess each image and segment the hand region; 2) extract the hand-shape features of the hand region in each image and recognize them as a corresponding gesture value with an SVM (support vector machine); 3) combine the gesture value of each image with the direction feature of the motion trajectory obtained by the iterative pyramidal LK optical flow algorithm into the feature vector of each dynamic gesture image; 4) repeat steps 2) and 3) until all images of the current video have been processed, thereby obtaining a complete feature vector sequence; 5) establish a gesture template library; 6) perform optimized DTW matching between the obtained feature vector sequence and every template in the template library and compute the matching distortion: if it is greater than the distortion threshold, recognition fails; if it is less than the distortion threshold, output the recognition result.

Description

A real-time gesture recognition method
Technical field
The present invention relates to the technical field of image processing, and in particular to a real-time gesture recognition method.
Background technology
Gesture recognition refers to continuously capturing, modeling and recognizing the shape, displacement and other properties of the human hand, so that the acquired gesture information is converted into corresponding instructions used to control certain operations. Dynamic gesture recognition is a combination of static gesture recognitions: it is composed of a series of gesture movements, and its emphasis is on obtaining the hand information in the video stream and extracting the gesture features and the gesture motion trajectory, so as to recognize dynamic gestures.
In the Internet of Things era, human-computer interaction is no longer limited to mechanical buttons and touch screens, but is presented in simpler and more convenient forms such as voice interaction and gesture interaction. The dynamic-gesture interaction mode is more in line with people's daily communication habits, and it can be extended to express more and richer semantics, filling the gap between graphical, mechanical and touch-screen interaction on the one hand and natural-language interaction on the other. This recognition technology therefore has important research significance and broad application prospects in the development of the field of human-computer interaction.
Although existing vision-based dynamic gesture recognition technology has made great progress, there is still no system that can truly be applied in complex environments and become widely available. The main reason is that in real environments the background is highly uncertain, and changes in the color and intensity of the lighting may all affect the recognition rate of a gesture recognition system; at the same time, a real-time dynamic gesture recognition system places high demands on the processing power of the computer. Differences in the movement speed and appearance of dynamic gestures, as well as occlusion by moving objects, also reduce the recognition rate.
Summary of the invention
The technical problem to be solved by the invention is to overcome the deficiencies of the prior art and to provide a real-time gesture recognition method which, during the recognition of dynamic gestures, further optimizes the image information under illumination changes and complex backgrounds as well as the dynamic gesture trajectory, thereby improving the dynamic gesture recognition rate.
The present invention adopts the following technical scheme to solve the above technical problem:
A real-time gesture recognition method proposed according to the present invention comprises the following steps:
Step 1: acquire a gesture video signal in real time and decompose it into a chronologically ordered image sequence;
Step 2: preprocess the image sequence obtained in step 1 to obtain binary images, where the preprocessing comprises median filtering, color space conversion and skin-color threshold segmentation;
Step 3: apply morphological filtering to the binary images preprocessed in step 2, and segment the hand region in the two-dimensional plane using centroid positioning;
Step 4: extract the hand-shape features of the hand region to form a hand-shape feature vector, and recognize it as the corresponding gesture value with the SVM (support vector machine) algorithm;
Step 5: extract the direction feature of the motion trajectory of the hand region using the iterative pyramidal LK optical flow algorithm, and combine it with the gesture value into the feature vector of each dynamic gesture image;
Step 6: repeat steps 4 and 5 until the gesture ends, thereby obtaining the feature vector sequence of the gesture; the length of this feature vector sequence equals the number of images in the sequence of step 1;
Step 7: establish a gesture template library; perform optimized DTW matching between the feature vector sequence F_test of the gesture obtained in step 6 and every template F_ref in the gesture template library, and compute the distortion between F_test and F_ref: if it is greater than the distortion threshold, recognition fails; if it is less than the distortion threshold, output the recognition result.
As a further optimization of the real-time gesture recognition method of the present invention, an infrared camera is used in step 1 to acquire the gesture video signal in real time.
As a further optimization of the real-time gesture recognition method of the present invention, the preprocessing in step 2 converts the median-filtered image into the YCrCb color space by color space conversion and then performs skin-color threshold segmentation.
As a further optimization of the real-time gesture recognition method of the present invention, centroid positioning is used in step 3 to segment the hand region, and the centroid position of the hand region is computed as follows:
Let (x, y) be a pixel location in the hand region and I(x, y) the pixel value at (x, y) in the hand region; the zeroth-order and first-order moments of the hand region are respectively:
M00 = Σx Σy I(x, y)
M10 = Σx Σy x·I(x, y)
M01 = Σx Σy y·I(x, y)
where M00 is the zeroth-order moment and M10, M01 are the first-order moments with respect to x and y;
The centroid position of the hand region is then:
xc = M10/M00, yc = M01/M00
where xc is the abscissa and yc the ordinate of the centroid position.
As a further optimization of the real-time gesture recognition method of the present invention, step 4 is specifically as follows:
4-1-1) extract the hand contour of the hand region, save it as a point sequence, and draw the contour line from this point sequence;
4-1-2) compute the central moments and the 7 Hu invariant moments of the hand contour, take the first four of the seven moment components, and combine them with the area-perimeter ratio of the hand contour into a hand-shape feature vector of 5 hand-shape features in total;
4-1-3) normalize the hand-shape feature vectors of all image sequences obtained by 4-1-1) and 4-1-2), feed them into the SVM trainer for learning and training, and recognize them as the corresponding gesture values.
As a further optimization of the real-time gesture recognition method of the present invention, step 5 uses the iterative pyramidal LK optical flow algorithm to solve the optical flow field of the image sequence and obtain the initial features of the motion trajectory; the tangent angle θ of the motion trajectory is chosen as the direction feature of the motion trajectory, and the continuous-valued θ is uniformly quantized.
As a further optimization of the real-time gesture recognition method of the present invention, the optimized DTW matching in step 7 is specifically as follows:
(1) constrain the slope of the DTW matching path during the search for the globally optimal path, keeping the slope between 1/2 and 2;
(2) set a distortion threshold for the DTW matching: let M and N be the lengths of the two feature vector sequences participating in the matching; the optimal path length lies between max(M, N) and M+N, and the number of mismatches on the optimal path is proportional to the optimal path length, so α × (M+N) is chosen as the distortion threshold, where α is the proportionality coefficient.
As a further optimization of the real-time gesture recognition method of the present invention, α is set to 0.25.
Compared with the prior art, the present invention achieves the following technical effects by adopting the above technical scheme:
(1) the dynamic gesture recognition method provided by the invention further optimizes, during the recognition of dynamic gestures, the image information under illumination changes and complex backgrounds as well as the dynamic gesture trajectory, improving the dynamic gesture recognition rate;
(2) the invention can be applied in a smart home environment, where household appliances are manipulated simply by gestures, letting users enjoy the comfort of human-machine gesture interaction; the gestures are simple, convenient and easy to learn, and the application scenarios are extremely broad.
Brief description of the drawings
Fig. 1 is the flow diagram of the real-time gesture recognition method of the present invention;
Fig. 2 is the image preprocessing schematic of the real-time gesture recognition method of the present invention;
Fig. 3 is the centroid-positioning hand segmentation flow chart of the real-time gesture recognition method of the present invention;
Fig. 4 is the hand-shape feature vector extraction flow chart of the real-time gesture recognition method of the present invention;
Fig. 5 is the iterative pyramidal LK optical flow algorithm flow chart of the real-time gesture recognition method of the present invention;
Fig. 6 is the functional block diagram of the dynamic gesture recognition device provided by the real-time gesture recognition method of the present invention.
Embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
First, gesture video is captured in real time by a terminal camera; the image module collects the image information and sorts the images chronologically. Each obtained frame is preprocessed and the hand region is segmented; the hand-shape features of the hand region and the direction feature of the motion trajectory are extracted; the hand-shape features are fed into the SVM classifier to obtain the corresponding gesture value, which is combined with the extracted trajectory direction feature into the feature vector of the dynamic gesture. The resulting feature vector sequence is DTW-matched against all templates in the template library and the distortion is computed: if the minimum distortion is greater than the distortion threshold, recognition fails; if the minimum distortion is smaller than the distortion threshold, the recognition result is output. The result is sent to the control module, which reads the instruction corresponding to the gesture so as to control the household appliance. The invention can be applied in a smart home environment, letting users enjoy the comfort of human-machine gesture interaction; the gestures are simple, convenient and easy to learn, and the application scenarios are extremely broad. As shown in Fig. 1, the flow of gesture recognition in the smart home scenario of the present invention mainly comprises the following steps:
Step 1: the image acquisition module performs image acquisition on the gesture video recorded by the camera, preprocesses each collected frame and then performs hand segmentation; the preprocessing method is shown in Fig. 2, and the centroid-positioning hand segmentation method is shown in Fig. 3;
Step 2: extract the hand-shape features of the hand region and recognize them as the corresponding gesture value with the SVM; the hand-shape feature extraction flow is shown in Fig. 4;
Step 3: track the dynamic gesture using the iterative pyramidal LK optical flow algorithm, extract the direction feature of the gesture motion trajectory, and combine it with the gesture value into the feature vector of each dynamic gesture image; the iterative pyramidal LK optical flow algorithm flow chart is shown in Fig. 5.
The direction feature of the gesture motion trajectory is extracted as follows:
The motion of the gesture appears on the image as changes of the pixel values at all points of the gesture. The rate of change of the pixel values is defined as the optical flow, and the gray-value changes of all pixels in the gesture image constitute the optical flow field. The present invention uses optical flow vectors to analyze the gesture motion; an optical flow vector reflects how the pixel value at each point of the gesture image changes. Using the iterative pyramidal LK optical flow algorithm, the optical flow field of the image sequence, which reflects the motion information of the gesture, is obtained. Many features could be extracted from the computed optical flow field, but in order to avoid the influence of the gesture occupying different positions and moving at different speeds at a given moment, the present invention selects the tangent angle θ of the motion trajectory as the trajectory feature. Let the tracking point position on the motion trajectory at time t−1 be (x_{t−1}, y_{t−1}) and the corresponding tracking point position at time t be (x_t, y_t); the motion offsets within this time interval are Δx = x_t − x_{t−1} and Δy = y_t − y_{t−1}. When Δx ≠ 0, the tangent angle is θ = arctan(Δy/Δx), adjusted to the correct quadrant; when Δx = 0 and Δy > 0, θ = π/2; when Δx = 0 and Δy < 0, θ = 3π/2. The tangent angle θ both describes the gesture motion trajectory well and makes the gesture motion feature simpler. Since the value of θ is continuous, using it directly would greatly increase the amount of computation; in order to reduce the computational complexity and increase the computation speed, θ is uniformly quantized.
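The tangent-angle computation and its uniform quantization can be sketched as follows (an illustrative sketch, not part of the patent; the use of `atan2` to resolve quadrants, which subsumes the Δx = 0 special cases, and the choice of 8 quantization bins are assumptions):

```python
import math

def tangent_angle(p_prev, p_cur):
    """Tangent angle theta of the trajectory segment p_prev -> p_cur, in [0, 2*pi)."""
    dx = p_cur[0] - p_prev[0]
    dy = p_cur[1] - p_prev[1]
    # atan2 covers the special cases dx == 0 (theta = pi/2 or 3*pi/2) automatically.
    return math.atan2(dy, dx) % (2 * math.pi)

def quantize(theta, n_bins=8):
    """Uniformly quantize a continuous angle into n_bins discrete direction codes."""
    return int(theta / (2 * math.pi / n_bins)) % n_bins

def direction_features(trajectory, n_bins=8):
    """One direction code per consecutive pair of tracking points."""
    return [quantize(tangent_angle(a, b), n_bins)
            for a, b in zip(trajectory, trajectory[1:])]
```

With 8 bins each code covers a 45° sector, so small jitter in the tracked points maps to the same discrete direction.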
Step 4: optimize and improve the original DTW algorithm by adding a path slope constraint and setting a distortion threshold, match the obtained feature vector sequence against all templates in the template library, and output the recognition result. The slope is constrained between 1/2 and 2, reducing the amount of computation. Let M and N be the lengths of the two feature vector sequences participating in the matching; the optimal path length lies between max(M, N) and M+N, and the number of mismatches on the optimal path is proportional to the optimal path length, so α × (M+N) is chosen as the distortion threshold, where α is the proportionality coefficient, set to 0.25. After the DTW optimization, gesture matching and recognition are performed in combination with the adaptive template.
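The optimized DTW matching can be sketched as follows (a simplified illustration on scalar feature sequences rather than the patent's feature vectors; the step pattern (1,1), (1,2), (2,1), one common way to keep the local path slope between 1/2 and 2, and the absolute-difference cost are assumptions, since the patent does not spell out its local path constraint):

```python
def dtw_distance(a, b):
    """DTW with a slope-constrained path: allowed predecessor steps are
    (1,1), (1,2), (2,1), which keeps the local path slope between 1/2 and 2."""
    INF = float("inf")
    M, N = len(a), len(b)
    D = [[INF] * N for _ in range(M)]
    D[0][0] = abs(a[0] - b[0])
    for i in range(M):
        for j in range(N):
            if i == 0 and j == 0:
                continue
            best = INF
            for di, dj in ((1, 1), (1, 2), (2, 1)):
                pi, pj = i - di, j - dj
                if pi >= 0 and pj >= 0 and D[pi][pj] < best:
                    best = D[pi][pj]
            if best < INF:                      # cell reachable under the constraint
                D[i][j] = abs(a[i] - b[j]) + best
    return D[M - 1][N - 1]

def recognize(f_test, templates, alpha=0.25):
    """Match against every template; reject when the minimum distortion
    exceeds the threshold alpha * (M + N)."""
    best_label, best_dist = None, float("inf")
    for label, f_ref in templates.items():
        d = dtw_distance(f_test, f_ref)
        if d < best_dist:
            best_label, best_dist = label, d
    if best_dist > alpha * (len(f_test) + len(templates[best_label])):
        return None  # recognition fails
    return best_label
```

The template labels ("wave", "push") used below are made-up placeholders for gesture classes.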
As shown in Fig. 2, the image preprocessing of the real-time gesture recognition method of the present invention mainly comprises the following steps:
Step 201: fit each image of the acquired sequence with a 3×3 square template; the result is denoted Image1;
Step 202: sort the pixel values of the 9 coordinate points covered by the template of image Image1 and apply median filtering to obtain Image2. The median filtering method replaces the pixel value at the current position with the median; since the median is unrelated to the magnitude of the noise pixel values, it is not easily affected by noise, and detail such as image edges is well preserved.
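Step 202 can be illustrated with a plain-Python 3×3 median filter (a didactic sketch on a nested-list grayscale image; handling border pixels by clamping coordinates at the image edge is an assumption, since the patent does not specify border handling):

```python
def median_filter_3x3(img):
    """Replace every pixel with the median of its 3x3 neighborhood.
    Border pixels are handled by clamping coordinates to the image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the border
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            out[y][x] = window[4]  # median of the 9 sorted values
    return out
```

A single salt-noise pixel (e.g. 255 in a flat region of 10) appears at most a couple of times in any window, so the median removes it entirely without blurring edges the way a mean filter would.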
Step 203: convert image Image2 from the RGB color space to the YCrCb color space to obtain image Image3. The conversion formula (the standard full-range conversion) is as follows:
Y = 0.299R + 0.587G + 0.114B
Cr = 0.5R − 0.4187G − 0.0813B + 128
Cb = −0.1687R − 0.3313G + 0.5B + 128
where Y represents luminance, Cr and Cb represent chrominance, and the values of R, G and B range from 0 to 255.
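The conversion of step 203 can be sketched per pixel as follows (a minimal illustration; the JPEG-style full-range coefficients are an assumption, since the patent does not state which YCrCb variant it uses):

```python
def rgb_to_ycrcb(r, g, b):
    """Full-range RGB -> YCrCb conversion (JPEG-style coefficients)."""
    y  =  0.299  * r + 0.587  * g + 0.114  * b
    cr =  0.5    * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.5    * b + 128
    return y, cr, cb
```

For a neutral gray input (R = G = B) the chrominance channels land at the midpoint 128, which is why skin-color modeling is done on Cr/Cb: they separate color from brightness.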
Step 204: choose suitable skin-color threshold parameters by Gaussian modeling and apply threshold segmentation to image Image3 to obtain image Image4. The process is as follows:
1. Binarize image Image3 using the two chrominance channels Cr and Cb:
g(u, v) = 1 if Cr(u, v) and Cb(u, v) both fall within the skin-color threshold ranges T, and g(u, v) = 0 otherwise,
where T denotes the threshold parameters, g(u, v) is the binarized result for Image3, (u, v) is the pixel position in Image3, 0 represents black and 1 represents white.
2. Assume that Cr and Cb follow normal distributions; compute the mean μ and standard deviation δ of the Image3 pixels in the Cr and Cb channels, and substitute the results into the one-dimensional normal distribution formula to establish a one-dimensional statistical model.
3. Consulting the probability density table of the one-dimensional normal distribution, the probability within the range [μ − 2.8δ, μ + 2.8δ] is very close to 1, so pixels whose values fall within this range can essentially be counted as skin-color points. The interval [μ − 2.8δ, μ + 2.8δ] computed from the Gaussian model of Cr serves as the Cr threshold of the skin color, and the interval [μ − 2.8δ, μ + 2.8δ] computed from the Gaussian model of Cb serves as the Cb threshold of the skin color:
Range_cr = [μ_cr − 2.8δ_cr, μ_cr + 2.8δ_cr]
Range_cb = [μ_cb − 2.8δ_cb, μ_cb + 2.8δ_cb]
where Range_cr denotes the Cr threshold of the skin color and Range_cb denotes the Cb threshold of the skin color.
4. Pixels outside the above ranges are non-skin points; image Image4 is obtained by this threshold segmentation.
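The Gaussian skin-color thresholding of step 204 can be sketched as follows (an illustrative sketch; the sample Cr/Cb values are made up, and in practice μ and δ would be estimated from known skin pixels):

```python
import math

def gaussian_range(samples, k=2.8):
    """[mu - k*delta, mu + k*delta] threshold range from skin-pixel samples."""
    mu = sum(samples) / len(samples)
    delta = math.sqrt(sum((s - mu) ** 2 for s in samples) / len(samples))
    return (mu - k * delta, mu + k * delta)

def is_skin(cr, cb, range_cr, range_cb):
    """A pixel counts as a skin point only if both Cr and Cb fall in their ranges."""
    return range_cr[0] <= cr <= range_cr[1] and range_cb[0] <= cb <= range_cb[1]

# Hypothetical skin-pixel samples for the Cr and Cb channels:
range_cr = gaussian_range([140, 145, 150, 155, 160])
range_cb = gaussian_range([100, 105, 110, 115, 120])
```

Binarization then sets g(u, v) = 1 where `is_skin` holds and 0 elsewhere, producing the Image4 mask of the text.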
As shown in Fig. 3, the centroid-positioning hand segmentation flow of the real-time gesture recognition method of the present invention proceeds as follows. Assuming that the mass of the human hand is uniformly distributed, the area of the hand region can be used in place of its mass, which simplifies the algorithm. The centroid position of the hand region in the current image is computed as follows:
Let (x, y) be a pixel location in the hand region and I(x, y) the pixel value at (x, y) in the hand region; the zeroth-order and first-order moments of the hand region are respectively:
M00 = Σx Σy I(x, y), M10 = Σx Σy x·I(x, y), M01 = Σx Σy y·I(x, y)
where M00 is the zeroth-order moment and M10, M01 are the first-order moments with respect to x and y; the centroid position of the hand region is then:
xc = M10/M00, yc = M01/M00
where xc is the abscissa and yc the ordinate of the centroid position.
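The centroid computation above, applied to a binary (0/1) hand mask stored as a nested list indexed as img[y][x] (the indexing convention is an assumption), can be sketched as:

```python
def centroid(img):
    """Centroid (xc, yc) of a binary mask from its zeroth- and first-order moments."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v          # M00 = sum of I(x, y)
            m10 += x * v      # M10 = sum of x * I(x, y)
            m01 += y * v      # M01 = sum of y * I(x, y)
    return m10 / m00, m01 / m00
```

Using the area (pixel count M00) in place of mass is exactly the uniform-mass assumption stated above.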
As shown in Fig. 4, the hand-shape feature extraction flow of the real-time gesture recognition method of the present invention uses the Hu invariant moments as follows:
Let f(x, y) be the pixel value of the point at image coordinate (x, y), with f(x, y) a continuous function; the geometric moment of order p+q and the central moment of order p+q are obtained respectively by the following formulas:
m_pq = Σx Σy x^p · y^q · f(x, y)
μ_pq = Σx Σy (x − x̄)^p · (y − ȳ)^q · f(x, y)
where p and q are the orders, m_pq is the geometric moment of order p+q and μ_pq is the central moment of order p+q;
x̄ = m10/m00 and ȳ = m01/m00 are respectively the abscissa and ordinate of the image centroid.
The normalized central moments are computed as η_pq = μ_pq / μ00^ρ, where ρ = (p + q)/2 + 1.
The 7 invariant moments M1–M7 are constructed from the second- and third-order normalized central moments:
M1 = η20 + η02
M2 = (η20 − η02)² + 4η11²
M3 = (η30 − 3η12)² + (3η21 − η03)²
M4 = (η30 + η12)² + (η21 + η03)²
M5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
M6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)
M7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
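The first two Hu invariants can be computed directly from the definitions above (a plain-Python sketch on a binary mask; img[y][x] indexing is an assumption). Because the central moments subtract the centroid, the invariants do not change when the shape is translated:

```python
def raw_moment(img, p, q):
    """Geometric moment m_pq = sum of x^p * y^q * f(x, y)."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_first_two(img):
    """M1 and M2 from the second-order normalized central moments."""
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00
    ybar = raw_moment(img, 0, 1) / m00

    def mu(p, q):  # central moment of order p+q
        return sum(((x - xbar) ** p) * ((y - ybar) ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))

    def eta(p, q):  # normalized central moment, rho = (p+q)/2 + 1
        return mu(p, q) / (m00 ** ((p + q) / 2 + 1))

    m1 = eta(2, 0) + eta(0, 2)
    m2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return m1, m2
```

Translating the same 2×2 square inside the image leaves M1 and M2 unchanged, which is exactly the property that makes these moments usable as hand-shape features.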
As shown in Fig. 5, the iterative pyramidal LK optical flow algorithm of the real-time gesture recognition method of the present invention works as follows:
An image pyramid is constructed and the standard optical flow is computed at the top layer; the top-layer image is then successively enlarged back to the original image size, with bilinear interpolation used for motion compensation during the inter-layer enlargement, so as to make up for the information lost during scaling.
As shown in Fig. 6, the functional block diagram of the dynamic gesture recognition device provided by the real-time gesture recognition method of the present invention comprises an image module, a gesture segmentation module, a hand-shape and trajectory feature extraction module, an SVM training and recognition module, a template generation module, a DTW matching and recognition module, and a control module.
Image module: collects and stores real-time gesture images, and preprocesses the collected images;
Gesture segmentation module: applies skin-color threshold segmentation to the images and segments the gesture by centroid positioning;
Hand-shape and trajectory feature extraction module: extracts the hand-shape features and the trajectory features;
SVM training and recognition module: learns and trains on the normalized feature vectors using the SVM;
Template generation module: trains and learns the DTW classification of gesture values and trajectory features, producing the template library;
DTW matching and recognition module: adaptively performs gesture matching and recognition using the optimized DTW in combination with the templates;
Control module: receives the recognition result of the gesture recognition module; if it matches a gesture in the template library, flashes a green light and reads the instruction corresponding to the gesture to control the household appliance; if it cannot be matched with any gesture in the template library, flashes a red light, and the gesture is repeated until matching succeeds.
The above description is merely a specific embodiment, but the protection scope of the present invention is not limited thereto; any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the invention shall be covered within the protection scope of the present invention.

Claims (8)

1. A real-time gesture recognition method, characterized by comprising the following steps:
Step 1: acquire a gesture video signal in real time and decompose it into a chronologically ordered image sequence;
Step 2: preprocess the image sequence obtained in step 1 to obtain binary images, where the preprocessing comprises median filtering, color space conversion and skin-color threshold segmentation;
Step 3: apply morphological filtering to the binary images preprocessed in step 2, and segment the hand region in the two-dimensional plane using centroid positioning;
Step 4: extract the hand-shape features of the hand region to form a hand-shape feature vector, and recognize it as the corresponding gesture value with the SVM (support vector machine) algorithm;
Step 5: extract the direction feature of the motion trajectory of the hand region using the iterative pyramidal LK optical flow algorithm, and combine it with the gesture value into the feature vector of each dynamic gesture image;
Step 6: repeat steps 4 and 5 until the gesture ends, thereby obtaining the feature vector sequence of the gesture; the length of this feature vector sequence equals the number of images in the sequence of step 1;
Step 7: establish a gesture template library; perform optimized DTW matching between the feature vector sequence F_test of the gesture obtained in step 6 and every template F_ref in the gesture template library, and compute the distortion between F_test and F_ref: if it is greater than the distortion threshold, recognition fails; if it is less than the distortion threshold, output the recognition result.
2. The real-time gesture recognition method according to claim 1, characterized in that an infrared camera is used in step 1 to acquire the gesture video signal in real time.
3. The real-time gesture recognition method according to claim 1, characterized in that in the preprocessing of step 2, the median-filtered image is converted into the YCrCb color space by color space conversion and skin-color threshold segmentation is performed.
4. The real-time gesture recognition method according to claim 1, characterized in that centroid positioning is used in step 3 to segment the hand region, and the centroid position of the hand region is computed as follows:
Let (x, y) be a pixel location in the hand region and I(x, y) the pixel value at (x, y) in the hand region; the zeroth-order and first-order moments of the hand region are respectively:
M00 = Σx Σy I(x, y)
M10 = Σx Σy x·I(x, y)
M01 = Σx Σy y·I(x, y)
where M00 is the zeroth-order moment and M10, M01 are the first-order moments with respect to x and y;
The centroid position of the hand region is:
xc = M10/M00, yc = M01/M00
where xc is the abscissa and yc the ordinate of the centroid position.
5. The real-time gesture recognition method according to claim 1, characterized in that step 4 is specifically as follows:
4-1-1) extract the hand contour of the hand region, save it as a point sequence, and draw the contour line from this point sequence;
4-1-2) compute the central moments and the 7 Hu invariant moments of the hand contour, take the first four of the seven moment components, and combine them with the area-perimeter ratio of the hand contour into a hand-shape feature vector of 5 hand-shape features in total;
4-1-3) normalize the hand-shape feature vectors of all image sequences obtained by 4-1-1) and 4-1-2), feed them into the SVM trainer for learning and training, and recognize them as the corresponding gesture values.
6. The real-time gesture recognition method according to claim 1, characterized in that step 5 uses the iterative pyramidal LK optical flow algorithm to solve the optical flow field of the image sequence and obtain the initial features of the motion trajectory; the tangent angle θ of the motion trajectory is chosen as the direction feature of the motion trajectory, and the continuous-valued θ is uniformly quantized.
7. The real-time gesture recognition method according to claim 1, characterized in that the optimized DTW matching in step 7 is specifically as follows:
(1) constrain the slope of the matching path during the search for the globally optimal path, keeping the slope between 1/2 and 2;
(2) set a distortion threshold for the DTW matching: let M and N be the lengths of the two feature vector sequences participating in the matching; the optimal path length lies between max(M, N) and M+N, and the number of mismatches on the optimal path is proportional to the optimal path length, so α × (M+N) is chosen as the distortion threshold, where α is the proportionality coefficient.
8. The real-time gesture recognition method according to claim 7, characterized in that α is set to 0.25.
CN201711221554.4A 2017-11-22 2017-11-22 A kind of real-time gesture knows method for distinguishing Pending CN107958218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711221554.4A CN107958218A (en) 2017-11-22 2017-11-22 A kind of real-time gesture knows method for distinguishing


Publications (1)

Publication Number Publication Date
CN107958218A true CN107958218A (en) 2018-04-24

Family

ID=61962751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711221554.4A Pending CN107958218A (en) 2017-11-22 2017-11-22 A kind of real-time gesture knows method for distinguishing

Country Status (1)

Country Link
CN (1) CN107958218A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055925A (en) * 2009-11-06 2011-05-11 康佳集团股份有限公司 Television supporting gesture remote control and using method thereof
US20120148097A1 (en) * 2010-12-14 2012-06-14 Electronics And Telecommunications Research Institute 3d motion recognition method and apparatus
CN103152626A (en) * 2013-03-08 2013-06-12 苏州百纳思光学科技有限公司 Far infrared three-dimensional hand signal detecting device of intelligent television set
CN104331151A (en) * 2014-10-11 2015-02-04 中国传媒大学 Optical flow-based gesture motion direction recognition method
CN205053686U (en) * 2015-09-18 2016-03-02 深圳市嘉世通科技有限公司 Curtain control system with gesture recognition
CN205453981U (en) * 2015-09-28 2016-08-10 深圳奥视通电子有限公司 But image recognition's STB and smart home systems thereof
CN106815578A (en) * 2017-01-23 2017-06-09 重庆邮电大学 A kind of gesture identification method based on Depth Motion figure Scale invariant features transform


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张前军 (Zhang Qianjun): "Research on Dynamic Gesture Recognition Technology Based on the Fusion of DTW and Optical Flow" (基于DTW及光流法融合的动态手势识别技术研究), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829928A (en) * 2018-12-29 2019-05-31 芜湖哈特机器人产业技术研究院有限公司 A kind of extracting method of target image extracting method and picture position feature
CN110110660A (en) * 2019-05-07 2019-08-09 广东工业大学 Analysis method, device and the equipment of operation by human hand behavior
CN110221717A (en) * 2019-05-24 2019-09-10 李锦华 Virtual mouse driving device, gesture identification method and equipment for virtual mouse
CN110309806B (en) * 2019-07-08 2020-12-11 哈尔滨理工大学 Gesture recognition system and method based on video image processing
CN110309806A (en) * 2019-07-08 2019-10-08 哈尔滨理工大学 A kind of gesture recognition system and its method based on video image processing
CN110888533A (en) * 2019-11-27 2020-03-17 云南电网有限责任公司电力科学研究院 High-precision gesture interaction system and method combined with somatosensory equipment
CN111158491A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Gesture recognition man-machine interaction method applied to vehicle-mounted HUD
CN111158457A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Vehicle-mounted HUD (head Up display) human-computer interaction system based on gesture recognition
CN111222486A (en) * 2020-01-15 2020-06-02 腾讯科技(深圳)有限公司 Training method, device and equipment for hand gesture recognition model and storage medium
CN111222486B (en) * 2020-01-15 2022-11-04 腾讯科技(深圳)有限公司 Training method, device and equipment for hand gesture recognition model and storage medium
CN111665934A (en) * 2020-04-30 2020-09-15 哈尔滨理工大学 Gesture recognition system and method based on ZYNQ software and hardware coprocessing
CN111601129A (en) * 2020-06-05 2020-08-28 北京字节跳动网络技术有限公司 Control method, control device, terminal and storage medium
CN111797709A (en) * 2020-06-14 2020-10-20 浙江工业大学 Real-time dynamic gesture track recognition method based on regression detection
CN111797709B (en) * 2020-06-14 2022-04-01 浙江工业大学 Real-time dynamic gesture track recognition method based on regression detection
CN111914808A (en) * 2020-08-19 2020-11-10 福州大学 Gesture recognition system realized based on FPGA and recognition method thereof
CN111914808B (en) * 2020-08-19 2022-08-12 福州大学 Gesture recognition system realized based on FPGA and recognition method thereof
CN112667088A (en) * 2021-01-06 2021-04-16 湖南翰坤实业有限公司 Gesture application identification method and system based on VR walking platform

Similar Documents

Publication Publication Date Title
CN107958218A (en) A kind of real-time gesture knows method for distinguishing
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
CN103593680B (en) A kind of dynamic gesture identification method based on the study of HMM independent increment
CN110021051B (en) Human image generation method based on generation of confrontation network through text guidance
CN109472198B (en) Gesture robust video smiling face recognition method
CN102880865B (en) Dynamic gesture recognition method based on complexion and morphological characteristics
CN109815826B (en) Method and device for generating face attribute model
Yang et al. Exploring temporal preservation networks for precise temporal action localization
CN104601964B (en) Pedestrian target tracking and system in non-overlapping across the video camera room of the ken
CN102831404B (en) Gesture detecting method and system
CN111385462A (en) Signal processing device, signal processing method and related product
CN102081918B (en) Video image display control method and video image display device
CN109376582A (en) A kind of interactive human face cartoon method based on generation confrontation network
CN102567716B (en) Face synthetic system and implementation method
CN103473801A (en) Facial expression editing method based on single camera and motion capturing data
CN108197534A A kind of head part's attitude detecting method, electronic equipment and storage medium
CN109410168A (en) For determining the modeling method of the convolutional neural networks model of the classification of the subgraph block in image
CN110399809A (en) The face critical point detection method and device of multiple features fusion
CN110738161A (en) face image correction method based on improved generation type confrontation network
CN101308571A (en) Method for generating novel human face by combining active grid and human face recognition
CN108803874A (en) A kind of human-computer behavior exchange method based on machine vision
CN108363973A (en) A kind of unconfined 3D expressions moving method
CN113762201B (en) Mask detection method based on yolov4
CN113963032A (en) Twin network structure target tracking method fusing target re-identification
CN109800659A (en) A kind of action identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180424)