CN103745228A - Dynamic gesture identification method on basis of Frechet distance - Google Patents

Dynamic gesture identification method on basis of Frechet distance

Info

Publication number
CN103745228A
CN103745228A
Authority
CN
China
Prior art keywords
gesture
sequence
frame
characteristic sequence
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310752309.1A
Other languages
Chinese (zh)
Other versions
CN103745228B (en)
Inventor
张长水
侯广东
崔润鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201310752309.1A
Publication of CN103745228A
Application granted
Publication of CN103745228B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a dynamic gesture identification method based on the Fréchet distance, which comprises at least the following steps: acquiring the gesture position information of the dynamic gesture fragment to be identified in an input video; acquiring the gesture state variation feature sequence over consecutive frames at that position; matching the acquired feature sequence against the feature sequence of a preset model according to the Fréchet distance; and acquiring and outputting a similarity result according to the Fréchet distance matching information. The method measures the similarity between the extracted feature sequence and a previously obtained model and determines the class of the gesture to be identified according to the degree of similarity. Because the Fréchet distance is invariant under stretching of a time-series curve along the time dimension, the method adapts well to dynamic gestures whose speed of change is distributed non-uniformly over time.

Description

Dynamic gesture identification method based on the Fréchet distance
Technical field
The present invention relates to a dynamic gesture identification method based on the Fréchet distance.
Background technology
Gestures are an important means by which people express their inner state and communicate with others in daily life. According to how a gesture changes along the time dimension, gestures are conventionally divided into static gestures and dynamic gestures.
A static gesture refers to the particular spatial state of the hand at a given moment: the relatively fixed positions and shapes of the fingers and palm, their orientation, attitude, and similar features. A specific static gesture can be represented by a corresponding point in a feature space. A dynamic gesture, by contrast, consists of the motion and the continuously changing posture sequence of the hand over a period of time. For a dynamic gesture, if we choose suitable features to describe the gesture at each point in time, then as time t = 1, 2, …, T advances we obtain a feature sequence (F_1, F_2, …, F_T) corresponding to this dynamic gesture. If we place each element of the feature sequence in the feature space in turn, the dynamic gesture can always be described by a corresponding curve in feature space.
Once a dynamic gesture has been described, the next problem is recognition. In general, the basic task of gesture recognition is to match gesture features against existing models, that is, to assign the curves or points in the gesture feature parameter space to different sets or classes.
At present, the processing pipeline of a gesture recognition system generally consists of three parts: gesture modeling, gesture analysis, and gesture matching.
Gesture modeling is normally carried out for a given gesture. In general, the choice of gesture model must be made in light of the actual application background. For a dynamic gesture, the chosen model must reflect the gesture's characteristics in both the temporal and spatial dimensions, while treating the motion of the gesture as a process whose successive stages are closely linked.
Once the gesture model is fixed, the relevant feature parameters of the gesture are computed in the gesture analysis stage; these features in fact constitute a description of a particular gesture posture and motion trajectory. Gesture analysis is a relatively complex process, normally comprising gesture localization, feature selection, feature extraction, and model parameter estimation. Through this series of steps, the parts relevant to the gesture are separated from the image or video; based on the localization of the gesture, features describing the observed state of the gesture are extracted from the relevant regions or from the relationship between consecutive frames, and the model parameters are estimated from the features of the training samples. The implementation of each step must likewise be designed according to the concrete application background of the problem.
The basic task of gesture matching is to match features against models, that is, to assign the curves or points in the gesture feature parameter space to different subsets. The common approach at present is to apply some form of similarity measurement between the extracted features and a previously obtained model, and to determine the class of the gesture to be identified according to the degree of similarity.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a dynamic gesture identification method based on the Fréchet distance.
To achieve the above object, the dynamic gesture identification method based on the Fréchet distance according to the present invention comprises at least the following steps:
obtaining the gesture position information of the dynamic gesture fragment to be identified in an input video;
obtaining the gesture state variation feature sequence over consecutive frames at the gesture position;
matching the obtained gesture state variation feature sequence against the feature sequence in a preset model according to the Fréchet distance;
obtaining and outputting a similarity result according to the Fréchet distance matching information.
Preferably, the concrete steps of obtaining the gesture position information of the dynamic gesture fragment to be identified in the input video are:
according to the RGB values of the pixels in the video, obtaining the probability that each pixel of any video frame belongs to a skin-color region;
according to these probability values, determining all the skin-color regions distributed in each video frame;
obtaining the optical flow value of each skin-color region between consecutive frames;
according to the optical flow values of all the distributed skin-color regions, selecting the region with the largest average optical flow value, namely the gesture position region.
Preferably, the gesture state variation feature sequence of consecutive frames at the gesture position comprises a motion trajectory variation feature sequence and a posture variation feature sequence.
Preferably, the matching steps for the motion trajectory variation feature sequence are:
taking the average optical flow between any frame of the video image in the gesture position region and its previous frame as a direction vector, namely F = (x, y), where x and y are the horizontal and vertical components of the average optical flow respectively;
obtaining the motion feature sequence (F_1, F_2, …, F_T) corresponding to the frames of the input video;
obtaining the motion feature sequence (M_1, M_2, …, M_T) set in the model;
choosing any sequence fragment f = (F_i, F_{i+1}, …, F_j) of the motion feature sequence (F_1, F_2, …, F_T), and obtaining the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, …, M_T) as

d_1\big((x_1, y_1), (x_2, y_2)\big) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};

taking the maximum of the values d_1((x_1, y_1), (x_2, y_2)) so obtained and then the infimum of this maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, …, F_j) and (M_1, M_2, …, M_T);
setting a threshold ε_1 and comparing it with δ_F:
if δ_F ≤ ε_1, judging that the gesture motion shown from frame i to frame j of the video to be identified is the gesture matched by the model;
otherwise, it is not.
Preferably, the matching steps for the posture variation feature sequence are:
taking as the posture vector Z the coordinates, relative to the wrist center, of each fingertip and of the root joint of the middle finger in any frame of the video image in the gesture position region, Z being a 12-dimensional feature vector;
obtaining the posture feature sequence (Z_1, Z_2, …, Z_T) corresponding to the frames of the input video;
obtaining the posture feature sequence (N_1, N_2, …, N_T) set in the model;
choosing any sequence fragment f = (Z_i, Z_{i+1}, …, Z_j) of the posture feature sequence (Z_1, Z_2, …, Z_T), and obtaining the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, …, N_T) as

d_2(Z, N) = \lVert Z - N \rVert_2;

taking the maximum of the values d_2(Z, N) so obtained and then the infimum of this maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, …, Z_j) and (N_1, N_2, …, N_T);
setting a threshold ε_2 and comparing it with δ_F:
if δ_F ≤ ε_2, judging that the gesture motion shown from frame i to frame j of the video to be identified is the gesture matched by the model;
otherwise, it is not.
The beneficial effects of the present invention are as follows:
The present invention applies a form of similarity measurement between the extracted feature sequence and a previously obtained model and determines the class of the gesture to be identified according to the degree of similarity. By exploiting the property that the Fréchet distance is invariant under stretching of a time-series curve along the time dimension, the method adapts well to dynamic gestures whose speed of change is distributed non-uniformly along the time dimension.
Brief description of the drawings
Fig. 1 is a schematic framework diagram of the dynamic gesture identification method based on the Fréchet distance according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the trajectory motion of a specific gesture according to an embodiment of the present invention;
Fig. 3 is a diagram of the posture change process of a specific gesture according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of gesture posture feature selection according to an embodiment of the present invention;
Fig. 5 is a schematic framework diagram of posture feature selection according to an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the dynamic gesture identification method based on the Fréchet distance according to an embodiment of the present invention comprises at least the following steps:
obtaining the gesture position information of the dynamic gesture fragment to be identified in an input video;
obtaining the gesture state variation feature sequence over consecutive frames at the gesture position;
matching the obtained gesture state variation feature sequence against the feature sequence in a preset model according to the Fréchet distance;
obtaining and outputting a similarity result according to the Fréchet distance matching information.
Here the Fréchet distance is defined as follows: any parametric curve in ℝ^d can be expressed as a continuous mapping f: [a, a'] → ℝ^d with a < a'. For curves f: [a, a'] → ℝ^d and g: [b, b'] → ℝ^d, the Fréchet distance is

\delta_F(f, g) := \inf_{\alpha: [0,1] \to [a,a'],\ \beta: [0,1] \to [b,b']} \ \max_{t \in [0,1]} d\big(f(\alpha(t)), g(\beta(t))\big)

where α and β are continuous, monotonically increasing functions defined on [0,1] that satisfy α(0) = a, α(1) = a', β(0) = b, β(1) = b'.
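The patent states the continuous definition; the feature sequences actually compared are discrete, so in practice a discrete Fréchet distance (the coupling measure of Eiter and Mannila) is the natural computational stand-in. A minimal Python sketch under that assumption, with the pointwise distance d passed in so that the same routine can serve both the trajectory and posture branches described below:

```python
from typing import Callable, Sequence, TypeVar

P = TypeVar("P")

def discrete_frechet(f: Sequence[P], g: Sequence[P],
                     d: Callable[[P, P], float]) -> float:
    """Discrete Frechet distance between sequences f and g under a
    pointwise distance d, via the standard dynamic program."""
    n, m = len(f), len(g)
    # ca[i][j] = discrete Frechet distance of prefixes f[:i+1] and g[:j+1]
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = d(f[i], g[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], cost)
            else:
                ca[i][j] = max(min(ca[i - 1][j],
                                   ca[i - 1][j - 1],
                                   ca[i][j - 1]), cost)
    return ca[n - 1][m - 1]
```

The min over the three predecessor cells is what realizes the infimum over the monotone reparameterizations α and β in the definition above.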
The above method steps are further explained as follows:
First, the position of the gesture in the video image must be determined, that is, the gesture must be localized in the image. The concrete steps for determining the gesture position information are:
according to the RGB values of the pixels in the video, obtaining the probability that each pixel of any video frame belongs to a skin-color region;
according to these probability values, determining all the skin-color regions distributed in each video frame;
obtaining the optical flow value of each skin-color region between consecutive frames;
according to the optical flow values of all the distributed skin-color regions, selecting the region with the largest average optical flow value, namely the gesture position region.
These steps are explained as follows. In gesture trajectory recognition, the meaning of a specific gesture is expressed through the movement trajectory of the hand as a whole. For each video frame, the moving hand can be localized by means of color and motion constraints.
On the one hand, each video frame consists of the three channels R, G and B. By accumulating statistics of the RGB values of skin-color and non-skin-color regions over a number of images, a lookup table of size 256 × 256 × 256 is obtained, in which each entry represents the probability that the corresponding RGB value occurs as skin color. In this way an approximate skin probability is obtained for every pixel of every video frame, from which the distribution of approximately skin-colored regions in each frame can be roughly estimated.
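The patent describes the table only in prose; the following Python sketch shows one plausible construction, assuming labelled skin and non-skin training pixels are available (the array names are placeholders of this sketch, not from the patent):

```python
import numpy as np

def build_skin_table(skin_pixels: np.ndarray, all_pixels: np.ndarray,
                     eps: float = 1e-6) -> np.ndarray:
    """Estimate P(skin | R, G, B) as a 256x256x256 lookup table from
    labelled training pixels, each array shaped (N, 3) in uint8 RGB."""
    bins, rng = (256, 256, 256), [(0, 256)] * 3
    skin_hist, _ = np.histogramdd(skin_pixels, bins=bins, range=rng)
    all_hist, _ = np.histogramdd(all_pixels, bins=bins, range=rng)
    return skin_hist / (all_hist + eps)  # per-colour skin probability

def skin_probability(frame: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Look up the skin probability of every pixel of an HxWx3 uint8 frame."""
    return table[frame[..., 0], frame[..., 1], frame[..., 2]]
```

A full 256³ table of float64 is large (roughly 128 MB); an implementation would typically quantize each channel (for example to 32 bins) without changing the idea.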
On the other hand, because the hand performing the gesture is in motion as a whole throughout, for a given video frame the optical flow between it and the previous frame is computed, and among the approximately skin-colored regions the one with the largest average optical flow is chosen; this localizes the moving hand.
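As a sketch of the localization itself, one might combine the lookup table above with a dense optical-flow routine; OpenCV's Farnebäck flow is used here purely as an illustrative choice (the patent does not name a flow algorithm), and the 0.5 skin-probability threshold is likewise an assumption:

```python
import cv2
import numpy as np

def locate_hand(prev_gray: np.ndarray, cur_gray: np.ndarray,
                skin_prob: np.ndarray, prob_thresh: float = 0.5):
    """Return the bounding box (x0, y0, x1, y1) of the skin-coloured
    region with the largest average optical-flow magnitude, or None."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)      # per-pixel flow strength
    skin_mask = (skin_prob > prob_thresh).astype(np.uint8)
    n_labels, labels = cv2.connectedComponents(skin_mask)
    best_label, best_flow = None, -1.0
    for lab in range(1, n_labels):                # label 0 is background
        mean_flow = magnitude[labels == lab].mean()
        if mean_flow > best_flow:
            best_label, best_flow = lab, mean_flow
    if best_label is None:
        return None
    ys, xs = np.nonzero(labels == best_label)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```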
After the gesture position has been located, the feature sequences are extracted. Since the gesture state variation feature sequence of consecutive frames at the gesture position comprises both a motion trajectory variation feature sequence and a posture variation feature sequence, the trajectory features and the posture features must be extracted separately.
The matching steps for the motion trajectory variation feature sequence are:
taking the average optical flow between any frame of the video image in the gesture position region and its previous frame as a direction vector, namely F = (x, y), where x and y are the horizontal and vertical components of the average optical flow respectively;
obtaining the motion feature sequence (F_1, F_2, …, F_T) corresponding to the frames of the input video;
obtaining the motion feature sequence (M_1, M_2, …, M_T) set in the model;
choosing any sequence fragment f = (F_i, F_{i+1}, …, F_j) of the motion feature sequence (F_1, F_2, …, F_T), and obtaining the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, …, M_T) as

d_1\big((x_1, y_1), (x_2, y_2)\big) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};

taking the maximum of the values d_1((x_1, y_1), (x_2, y_2)) so obtained and then the infimum of this maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, …, F_j) and (M_1, M_2, …, M_T);
setting a threshold ε_1 and comparing it with δ_F:
if δ_F ≤ ε_1, judging that the gesture motion shown from frame i to frame j of the video to be identified is the gesture matched by the model;
otherwise, it is not.
This is explained in detail as follows:
1. Model establishment
A specific gesture trajectory is composed, in a fixed order, of a number of elementary strokes. We therefore choose a sequence of direction vectors (x, y) as the model representing the gesture trajectory. As shown in Fig. 2, each point in the figure corresponds in turn to one element of the model's time series, and the direction vector (x, y) of each element is marked with an arrow.
2. Feature selection and extraction
The gesture trajectory is represented by a sequence of direction vectors, and the direction of motion of the gesture can be computed via optical flow. Based on the localization of the moving hand, the average optical flow F = (x, y) of the gesture region is computed from the current frame of the video and the adjacent previous frame, where x and y are the horizontal and vertical components of the average optical flow respectively, and this vector is taken as the feature of the current frame.
In this way, for the video frames (V_1, V_2, …, V_T), the corresponding feature sequence (F_1, F_2, …, F_T) is obtained.
3. Fréchet distance matching
Choose any sequence fragment f = (F_i, F_{i+1}, …, F_j) of the motion feature sequence (F_1, F_2, …, F_T), and obtain the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, …, M_T) as

d_1\big((x_1, y_1), (x_2, y_2)\big) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};

take the maximum of these values and then the infimum of the maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, …, F_j) and (M_1, M_2, …, M_T).
Set a threshold ε_1 and compare it with δ_F:
if δ_F ≤ ε_1, the gesture motion shown from frame i to frame j of the video to be identified is judged to be the gesture matched by the model;
otherwise, it is not.
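As a sketch of this trajectory-matching step, the cosine-based distance d_1 of the patent can be plugged into the discrete Fréchet routine given earlier; the threshold value and the zero-vector guard are assumptions of this sketch:

```python
import math
from typing import Sequence, Tuple

Vec2 = Tuple[float, float]

def d1(u: Vec2, v: Vec2) -> float:
    """1 minus the cosine of the angle between two direction vectors."""
    (x1, y1), (x2, y2) = u, v
    denom = math.hypot(x1, y1) * math.hypot(x2, y2)
    if denom == 0.0:          # guard: undefined for zero flow vectors
        return 1.0
    return 1.0 - (x1 * x2 + y1 * y2) / denom

def matches_trajectory(fragment: Sequence[Vec2], model: Sequence[Vec2],
                       eps1: float = 0.3) -> bool:
    """True if the fragment (F_i..F_j) matches the model, i.e. delta_F <= eps1."""
    return discrete_frechet(fragment, model, d1) <= eps1
```

In a full system one would slide over candidate fragments (i, j) of the input sequence and report those whose δ_F falls below ε_1.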
The matching steps for the posture variation feature sequence are:
taking as the posture vector Z the coordinates, relative to the wrist center, of each fingertip and of the root joint of the middle finger in any frame of the video image in the gesture position region, Z being a 12-dimensional feature vector;
obtaining the posture feature sequence (Z_1, Z_2, …, Z_T) corresponding to the frames of the input video;
obtaining the posture feature sequence (N_1, N_2, …, N_T) set in the model;
choosing any sequence fragment f = (Z_i, Z_{i+1}, …, Z_j) of the posture feature sequence (Z_1, Z_2, …, Z_T), and obtaining the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, …, N_T) as

d_2(Z, N) = \lVert Z - N \rVert_2;

taking the maximum of the values d_2(Z, N) so obtained and then the infimum of this maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, …, Z_j) and (N_1, N_2, …, N_T);
setting a threshold ε_2 and comparing it with δ_F:
if δ_F ≤ ε_2, judging that the gesture motion shown from frame i to frame j of the video to be identified is the gesture matched by the model;
otherwise, it is not.
This is explained in detail as follows:
1. Model establishment
For a specific gesture posture change process, a representative sequence of static hand postures is chosen as the model describing that change process. Each element of the posture sequence is taken from a database of static hand postures, and each corresponds one-to-one with the features described below. Fig. 3 illustrates such a posture change.
2. Feature selection and extraction
A specific hand posture is determined by information such as the orientation of the hand and the relative positions of the fingers and palm. We therefore choose the position coordinates, relative to the wrist center, of each fingertip and of the root joint of the middle finger as the features describing a specific gesture posture, as shown in Fig. 4.
Fig. 5 shows the framework of posture feature selection. In the actual feature estimation, the image features of the current frame are used to retrieve, from the database of static hand postures, the static posture model most similar to it, which serves as the estimate of the posture at that moment; the horizontal and vertical position coordinates, relative to the wrist center, of each fingertip and of the middle-finger root joint of that model, 12 real numbers in total, are taken as the real-valued features of the gesture posture at that moment.
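The patent leaves the retrieval mechanism itself unspecified; a minimal nearest-neighbour sketch, assuming each database entry pairs a precomputed image descriptor with its 12-dimensional joint-coordinate vector (the array names and the Euclidean matching criterion are assumptions of this sketch):

```python
import numpy as np

def retrieve_posture(frame_descriptor: np.ndarray,
                     db_descriptors: np.ndarray,
                     db_joint_coords: np.ndarray) -> np.ndarray:
    """Return the 12-dim joint-coordinate vector of the static posture
    whose image descriptor is nearest to the current frame's.

    db_descriptors:  (K, D) descriptors of the K database postures.
    db_joint_coords: (K, 12) fingertip and middle-finger-root coordinates
                     relative to the wrist centre, one row per posture.
    """
    dists = np.linalg.norm(db_descriptors - frame_descriptor, axis=1)
    return db_joint_coords[np.argmin(dists)]
```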
3. Fréchet distance matching
Choose any sequence fragment f = (Z_i, Z_{i+1}, …, Z_j) of the posture feature sequence (Z_1, Z_2, …, Z_T), and obtain the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, …, N_T) as

d_2(Z, N) = \lVert Z - N \rVert_2;

take the maximum of these values and then the infimum of the maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, …, Z_j) and (N_1, N_2, …, N_T).
Set a threshold ε_2 and compare it with δ_F:
if δ_F ≤ ε_2, the gesture motion shown from frame i to frame j of the video to be identified is judged to be the gesture matched by the model;
otherwise, it is not.
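Since the posture branch differs from the trajectory branch only in its pointwise metric, it can reuse the same discrete Fréchet routine; a sketch with the Euclidean d_2 of the patent (the threshold value is again illustrative):

```python
import numpy as np

def d2(z: np.ndarray, n: np.ndarray) -> float:
    """Euclidean distance between two 12-dim posture vectors."""
    return float(np.linalg.norm(z - n))

def matches_posture(fragment, model, eps2: float = 5.0) -> bool:
    """True if the posture fragment (Z_i..Z_j) matches the model, i.e.
    delta_F <= eps2."""
    return discrete_frechet(fragment, model, d2) <= eps2
```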
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be encompassed within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be defined by the claims.

Claims (5)

1. A dynamic gesture identification method based on the Fréchet distance, characterized in that it comprises at least the following steps:
obtaining the gesture position information of the dynamic gesture fragment to be identified in an input video;
obtaining the gesture state variation feature sequence over consecutive frames at the gesture position;
matching the obtained gesture state variation feature sequence against the feature sequence in a preset model according to the Fréchet distance;
obtaining and outputting a similarity result according to the Fréchet distance matching information.
2. The dynamic gesture identification method based on the Fréchet distance according to claim 1, characterized in that the concrete steps of obtaining the gesture position information of the dynamic gesture fragment to be identified in the input video are:
according to the RGB values of the pixels in the video, obtaining the probability that each pixel of any video frame belongs to a skin-color region;
according to these probability values, determining all the skin-color regions distributed in each video frame;
obtaining the optical flow value of each skin-color region between consecutive frames;
according to the optical flow values of all the distributed skin-color regions, selecting the region with the largest average optical flow value, namely the gesture position region.
3. The dynamic gesture identification method based on the Fréchet distance according to claim 1, characterized in that the gesture state variation feature sequence of consecutive frames at the gesture position comprises a motion trajectory variation feature sequence and a posture variation feature sequence.
4. The dynamic gesture identification method based on the Fréchet distance according to claim 3, characterized in that the matching steps for the motion trajectory variation feature sequence are:
taking the average optical flow between any frame of the video image in the gesture position region and its previous frame as a direction vector, namely F = (x, y), where x and y are the horizontal and vertical components of the average optical flow respectively;
obtaining the motion feature sequence (F_1, F_2, …, F_T) corresponding to the frames of the input video;
obtaining the motion feature sequence (M_1, M_2, …, M_T) set in the model;
choosing any sequence fragment f = (F_i, F_{i+1}, …, F_j) of the motion feature sequence (F_1, F_2, …, F_T), and obtaining the distance between any vector (x_1, y_1) in this fragment and any vector (x_2, y_2) in the motion feature sequence (M_1, M_2, …, M_T) as

d_1\big((x_1, y_1), (x_2, y_2)\big) = 1 - \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}};

taking the maximum of the values d_1((x_1, y_1), (x_2, y_2)) so obtained and then the infimum of this maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (F_i, F_{i+1}, …, F_j) and (M_1, M_2, …, M_T);
setting a threshold ε_1 and comparing it with δ_F:
if δ_F ≤ ε_1, judging that the gesture motion shown from frame i to frame j of the video to be identified is the gesture matched by the model;
otherwise, it is not.
5. The dynamic gesture identification method based on the Fréchet distance according to claim 3, characterized in that the matching steps for the posture variation feature sequence are:
taking as the posture vector Z the coordinates, relative to the wrist center, of each fingertip and of the root joint of the middle finger in any frame of the video image in the gesture position region, Z being a 12-dimensional feature vector;
obtaining the posture feature sequence (Z_1, Z_2, …, Z_T) corresponding to the frames of the input video;
obtaining the posture feature sequence (N_1, N_2, …, N_T) set in the model;
choosing any sequence fragment f = (Z_i, Z_{i+1}, …, Z_j) of the posture feature sequence (Z_1, Z_2, …, Z_T), and obtaining the distance between any vector Z in this fragment and any vector N in the posture feature sequence (N_1, N_2, …, N_T) as

d_2(Z, N) = \lVert Z - N \rVert_2;

taking the maximum of the values d_2(Z, N) so obtained and then the infimum of this maximum over all monotone reparameterizations, which yields the Fréchet distance δ_F between f = (Z_i, Z_{i+1}, …, Z_j) and (N_1, N_2, …, N_T);
setting a threshold ε_2 and comparing it with δ_F:
if δ_F ≤ ε_2, judging that the gesture motion shown from frame i to frame j of the video to be identified is the gesture matched by the model;
otherwise, it is not.
CN201310752309.1A 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance Active CN103745228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310752309.1A CN103745228B (en) 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance

Publications (2)

Publication Number Publication Date
CN103745228A true CN103745228A (en) 2014-04-23
CN103745228B CN103745228B (en) 2017-01-11

Family

ID=50502245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310752309.1A Active CN103745228B (en) 2013-12-31 2013-12-31 Dynamic gesture identification method on basis of Frechet distance

Country Status (1)

Country Link
CN (1) CN103745228B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5454043A (en) * 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN101976330A (en) * 2010-09-26 2011-02-16 中国科学院深圳先进技术研究院 Gesture recognition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王云飞 (Wang Yunfei): "Research on Key Technologies in Dynamic Gesture Recognition", China Master's Theses Full-text Database, Information Science and Technology *
赵亚飞 (Zhao Yafei): "Research on Vision-Based Gesture Recognition Technology", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104317391A (en) * 2014-09-24 2015-01-28 华中科技大学 Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
CN104317391B (en) * 2014-09-24 2017-10-03 华中科技大学 A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
CN107368181A (en) * 2016-05-12 2017-11-21 株式会社理光 A kind of gesture identification method and device
CN107368181B (en) * 2016-05-12 2020-01-14 株式会社理光 Gesture recognition method and device
WO2018113405A1 (en) * 2016-12-19 2018-06-28 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream, and corresponding apparatus thereof
CN107133361A (en) * 2017-05-31 2017-09-05 北京小米移动软件有限公司 Gesture identification method, device and terminal device
CN107563286A (en) * 2017-07-28 2018-01-09 南京邮电大学 A kind of dynamic gesture identification method based on Kinect depth information
CN107563286B (en) * 2017-07-28 2020-06-23 南京邮电大学 Dynamic gesture recognition method based on Kinect depth information
CN108509049A (en) * 2018-04-19 2018-09-07 北京华捷艾米科技有限公司 The method and system of typing gesture function
CN108729902A (en) * 2018-05-03 2018-11-02 西安永瑞自动化有限公司 Pumping unit online system failure diagnosis and its diagnostic method
CN112733718A (en) * 2021-01-11 2021-04-30 深圳市瑞驰文体发展有限公司 Foreign matter detection-based billiard game cheating identification method and system
CN112733718B (en) * 2021-01-11 2021-08-06 深圳市瑞驰文体发展有限公司 Foreign matter detection-based billiard game cheating identification method and system

Also Published As

Publication number Publication date
CN103745228B (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN103745228A (en) Dynamic gesture identification method on basis of Frechet distance
CN102968772B (en) A kind of image defogging method capable based on dark channel information
CN109886358B (en) Human behavior recognition method based on multi-time-space information fusion convolutional neural network
CN106780543B (en) A kind of double frame estimating depths and movement technique based on convolutional neural networks
CN107492121B (en) Two-dimensional human body bone point positioning method of monocular depth video
CN102982513B (en) A kind of adapting to image defogging method capable based on texture
CN103700114B (en) A kind of complex background modeling method based on variable Gaussian mixture number
CN101441717B (en) Method and system for detecting eroticism video
CN103003846B (en) Articulation region display device, joint area detecting device, joint area degree of membership calculation element, pass nodular region affiliation degree calculation element and joint area display packing
CN105354865A (en) Automatic cloud detection method and system for multi-spectral remote sensing satellite image
CN104992167A (en) Convolution neural network based face detection method and apparatus
CN113393550B (en) Fashion garment design synthesis method guided by postures and textures
CN106709931B (en) Method for mapping facial makeup to face and facial makeup mapping device
CN107730536B (en) High-speed correlation filtering object tracking method based on depth features
CN104732200A (en) Skin type and skin problem recognition method
CN102013011A (en) Front-face-compensation-operator-based multi-pose human face recognition method
CN110097029B (en) Identity authentication method based on high way network multi-view gait recognition
CN106447630A (en) High-spectral image sharpening method based on probability matrix decomposition
CN105426872A (en) Face age estimation method based on correlation Gaussian process regression
CN106529378A (en) Asian human face age characteristic model generating method and aging estimation method
Liu et al. GMDL: Toward precise head pose estimation via Gaussian mixed distribution learning for students’ attention understanding
CN102663729A (en) Method for colorizing vehicle-mounted infrared video based on contour tracing
CN104408731A (en) Region graph and statistic similarity coding-based SAR (synthetic aperture radar) image segmentation method
CN105139370A (en) Double-wave-band camera real time image fusion method based on visible light and near infrared
CN102779353A (en) High-spectrum color visualization method with distance maintaining property

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant