CN101661556A - Static gesture identification method based on vision - Google Patents


Publication number
CN101661556A
Authority
CN
China
Legal status: Pending
Application number
CN200910190601A
Other languages
Chinese (zh)
Inventor
王轩 (Wang Xuan)
吴堃 (Wu Kun)
于成龙 (Yu Chenglong)
王茂吉 (Wang Maoji)
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology
Priority to CN200910190601A
Publication of CN101661556A

Abstract

The invention provides a vision-based static gesture recognition method comprising the following steps. S1, gesture image preprocessing: the hand region is segmented from the background according to the skin-color characteristics of the human body, and the gesture contour is obtained through image filtering and morphological operations. S2, gesture feature extraction: Hu invariant-moment features, gesture region features, and Fourier-descriptor parameters are extracted to form a feature vector. S3, gesture recognition: a multilayer-perceptron classifier is used; it is self-organizing and self-learning, resists noise effectively, handles incomplete patterns, and generalizes across patterns. The method first preprocesses the original gesture image, binarizing it according to the skin-color characteristics of the human body. Three groups of gesture feature parameters are extracted, namely the Hu invariant-moment features, the gesture region features, and the Fourier-descriptor parameters, which together form the feature vector. This feature set gives a good recognition rate.

Description

Static gesture identification method based on vision
Technical field
The present invention relates to a control method, in particular to a vision-based static gesture recognition method. It can be applied in many areas, such as research on computer-aided sign-language teaching, bilingual broadcasting of television programs, virtual-human control, special effects in film making, animation production, medical research, and entertainment, and it can also help improve the living, study, and working conditions of deaf-mute people and serve them better.
Background technology
In 1991, the Fujitsu laboratory completed recognition of 46 sign-language symbols.
In December 2003, the Cybernet Systems company of Michigan, USA developed a system called Gesture Storm for a weather-forecasting program, in which the host can control the progress of the forecast by simple gestures.
In 2008, scientists at a Toshiba Corporation research laboratory at the University of Cambridge invented "gesture interface technology", which lets a television viewer control the TV with simple gestures.
The prior art faces the following technical difficulties:
(1) Difficulty of gesture target detection
Real-time target detection means detecting a target, such as a person, against a complex background in an image stream; this is one of the main problems studied in machine vision.
(2) Difficulty of gesture target recognition
Gesture recognition interprets the high-level meaning of the hand's pose and its change over time. Gestures have the following characteristics:
a) The hand is an elastic object, so instances of the same gesture can differ greatly;
b) The hand carries a large amount of redundant information: people recognize gestures mainly from finger features, so palm features are redundant;
c) The hand is positioned in three-dimensional space and is hard to localize; the image the computer obtains is a projection from three dimensions to two, so the projection direction is crucial;
d) The surface of the hand is not smooth, so it easily produces shadows.
Summary of the invention
To overcome the above deficiencies of the prior art, the present invention provides a vision-based static gesture recognition method with good recognition performance.
The technical solution adopted by the present invention is to provide a vision-based static gesture recognition method comprising the following steps. S1, gesture image preprocessing: segment the hand region from the background according to the skin-color characteristics of the human body, then obtain the gesture contour through image filtering and morphological operations. S2, gesture feature extraction: extract the Hu invariant-moment features, the gesture region features, and the Fourier-descriptor parameters, and form them into a feature vector. S3, gesture recognition: use a multilayer-perceptron classifier, which is self-organizing and self-learning, resists noise effectively, handles incomplete patterns, and generalizes across patterns.
In a further aspect, step S1 comprises S11 binarizing the gesture image, S12 smoothing and denoising, and S13 contour extraction.
In a further aspect, S11 binarizes the original gesture image according to the skin-color information of the human body to obtain the hand region.
In a further aspect, S12 denoises the hand-region binary map with median filtering and linear smoothing filtering to obtain a clean gesture region.
In a further aspect, S13 adopts the Laplacian edge-extraction algorithm.
In a further aspect, the gesture region features comprise the ratio of the gesture region area to the area of the gesture bounding rectangle, the aspect ratio of the gesture region, the ratio of the two parts into which the centroid of the gesture binary map divides the gesture bounding rectangle, the hand region area, and the perimeter of the gesture contour.
Compared with the prior art, the vision-based static gesture recognition method of the present invention first preprocesses the original gesture image before extracting feature parameters: the image is binarized according to the skin-color characteristics of the human body, the binary map is denoised by image enhancement to obtain a noise-free binary map, and edges are then extracted from the binary map to obtain the gesture contour. Three groups of gesture feature parameters are extracted, namely the Hu invariant-moment features, the gesture region features, and the Fourier descriptors, which together form the feature vector. This feature set has a good recognition rate, so the recognition performance is good and the recognition rate is high.
Description of drawings
Fig. 1 is a schematic block diagram of the vision-based static gesture recognition method of the present invention.
Embodiment
As shown in Fig. 1, the invention provides a vision-based static gesture recognition method comprising the following steps. S1, gesture image preprocessing: segment the hand region from the background according to the skin-color characteristics of the human body, then obtain the gesture contour through image filtering and morphological operations. S2, gesture feature extraction: extract the Hu invariant-moment features, the gesture region features, and the Fourier-descriptor parameters to form a feature vector. S3, gesture recognition: use a multilayer-perceptron classifier, which is self-organizing and self-learning, resists noise effectively, handles incomplete patterns, and generalizes across patterns.
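The S3 step names a multilayer-perceptron classifier, but the patent discloses no topology, weights, or training procedure. As a minimal illustrative sketch only, with every layer size and weight assumed for the example, a one-hidden-layer forward pass over the assembled feature vector could look like this:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the S3 classification step: a one-hidden-layer
// perceptron forward pass. All dimensions and weights are assumptions;
// the patent does not disclose the network used.
static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// weights[i][j] is the weight from input j to neuron i; the last entry of
// each row is the neuron's bias.
static std::vector<double> layerForward(
        const std::vector<double> &in,
        const std::vector<std::vector<double> > &weights) {
    std::vector<double> out(weights.size());
    for (std::size_t i = 0; i < weights.size(); ++i) {
        double s = weights[i].back();                 // bias term
        for (std::size_t j = 0; j < in.size(); ++j)
            s += weights[i][j] * in[j];
        out[i] = sigmoid(s);
    }
    return out;
}

// Classify a feature vector (Hu moments, region features, and Fourier
// descriptors concatenated) by taking the argmax of the output layer.
static int mlpClassify(const std::vector<double> &feature,
                       const std::vector<std::vector<double> > &hidden,
                       const std::vector<std::vector<double> > &output) {
    std::vector<double> h = layerForward(feature, hidden);
    std::vector<double> o = layerForward(h, output);
    int best = 0;
    for (std::size_t k = 1; k < o.size(); ++k)
        if (o[k] > o[best]) best = (int)k;
    return best;
}
```

In the patent's setting the input would be the three-group feature vector of step S2, and each output neuron would correspond to one gesture class.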
Step S1 comprises S11 binarizing the gesture image, S12 smoothing and denoising, and S13 contour extraction.
S11 binarization of the gesture image: binarize the original gesture image according to the skin-color information of the human body to obtain the hand region. The algorithm is as follows:
For each pixel of the original image
{
    compute the Hue and Saturation values of the pixel from its RGB values;
    check whether both the Hue and the Saturation value lie within the person's skin-color interval;
    if so, set the pixel to black;
    otherwise, set the pixel to white;
}
After binarization, the binary map of the gesture region is obtained.
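The S11 loop above can be sketched in self-contained C++. The RGB-to-hue/saturation conversion is the standard HSV one; the skin-interval bounds below are placeholder assumptions, since the patent leaves the per-person thresholds open:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch of the S11 binarization test for a single pixel.
struct HS { double h; double s; };   // hue in degrees [0, 360), saturation in [0, 1]

static HS rgbToHueSat(double r, double g, double b) {
    double mx = std::max(r, std::max(g, b));
    double mn = std::min(r, std::min(g, b));
    double d = mx - mn;
    HS out;
    out.s = (mx == 0.0) ? 0.0 : d / mx;
    if (d == 0.0)      out.h = 0.0;
    else if (mx == r)  out.h = 60.0 * std::fmod((g - b) / d + 6.0, 6.0);
    else if (mx == g)  out.h = 60.0 * ((b - r) / d + 2.0);
    else               out.h = 60.0 * ((r - g) / d + 4.0);
    return out;
}

// A pixel whose hue and saturation both fall inside the (assumed) skin
// interval becomes foreground (black, 0, as in the patent); all others
// become background (white, 255). RGB components are in [0, 1].
static unsigned char binarizePixel(double r, double g, double b) {
    const double H_LO = 0.0,  H_HI = 50.0;   // assumed skin hue range, degrees
    const double S_LO = 0.2,  S_HI = 0.7;    // assumed skin saturation range
    HS hs = rgbToHueSat(r, g, b);
    bool skin = hs.h >= H_LO && hs.h <= H_HI && hs.s >= S_LO && hs.s <= S_HI;
    return skin ? 0 : 255;
}
```

Applying binarizePixel to every pixel of the image reproduces the loop in the text.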
S12 smoothing and denoising: the binary map produced by binarization mainly suffers from salt-and-pepper noise, so the system denoises the hand-region binary map with median filtering and linear smoothing filtering to obtain a clean gesture region.
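A minimal sketch of the S12 median filtering (a plain 3x3 median filter, not the patent's exact OpenCV pipeline):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// 3x3 median filter over a binary map stored row-major as 0/255 bytes.
// For binary data the median of the nine samples is the majority value,
// so isolated salt-and-pepper pixels are removed while solid regions are
// preserved. Border pixels are left unchanged.
static std::vector<unsigned char> medianFilter3x3(
        const std::vector<unsigned char> &img, int w, int h) {
    std::vector<unsigned char> out(img);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            unsigned char win[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = img[(y + dy) * w + (x + dx)];
            std::nth_element(win, win + 4, win + 9);   // median of 9 samples
            out[y * w + x] = win[4];
        }
    return out;
}
```

The patent additionally applies a linear smoothing filter; the median step alone already removes the isolated noise pixels typical of skin-color binarization.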
S13 contour extraction: the Laplacian edge-extraction algorithm is adopted. The Laplace operator is a scalar second-derivative operator on a two-dimensional function. It is therefore quite sensitive to image noise, but it produces a steep zero crossing at an edge, so for a noise-free image with sharp edges the edges can be detected with the Laplace operator.
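A minimal sketch of the S13 step, using the 4-neighbour discrete Laplacian on the denoised binary map; a non-zero response marks boundary pixels:

```cpp
#include <cassert>
#include <vector>

// 4-neighbour discrete Laplacian
//   L(x,y) = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
// applied to a 0/255 binary map stored row-major. On such a map the
// response is non-zero exactly at region boundaries, yielding the contour.
static std::vector<unsigned char> laplacianEdges(
        const std::vector<unsigned char> &img, int w, int h) {
    std::vector<unsigned char> edges(img.size(), 0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int l = img[(y - 1) * w + x] + img[(y + 1) * w + x]
                  + img[y * w + x - 1] + img[y * w + x + 1]
                  - 4 * img[y * w + x];
            edges[y * w + x] = (l != 0) ? 255 : 0;    // mark boundary pixels
        }
    return edges;
}
```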
S2 gesture feature extraction: feature parameters are extracted from the preprocessed gesture model, that is, from the gesture-region binary map and the gesture contour. The invention extracts three groups of gesture feature parameters, the Hu invariant-moment features, the gesture region features, and the Fourier descriptors, which together form the feature vector:
<1> Gesture region features:
struct REGION_FEATURE
{
    double area_ratio;      // ratio of the gesture region area to the gesture bounding-rectangle area
    double aspect_ratio;    // aspect ratio of the gesture region
    double bary_ratio;      // ratio of the two parts into which the centroid of the gesture binary map divides the gesture bounding rectangle
    double hand_area;       // hand region area, in pixels
    double hand_perimeter;  // perimeter of the gesture contour
};
<2> Hu invariant-moment features:
struct CvHuMoments
{
    double hu1, hu2, hu3, hu4, hu5, hu6, hu7;
};
<3> Fourier descriptors:
struct FOURIER_FEATURE
{
    vector<double> feature_data;
};
Calculation of the gesture region rectangle:
int cvFindContours(
    CvArr* image,
    CvMemStorage* storage,
    CvSeq** first_contour,
    int header_size = sizeof(CvContour),
    int mode = CV_RETR_LIST,
    int method = CV_CHAIN_APPROX_SIMPLE,
    CvPoint offset = cvPoint(0, 0));
Main parameters:
image: 8-bit single-channel image; nonzero pixels are treated as 1, zero pixels remain 0.
storage: storage container for the extracted contours.
first_contour: output parameter, a pointer to the first output contour.
The function cvFindContours extracts contours from the binary image and returns the number of contours extracted. The content pointed to by first_contour is filled in by the function: it is a pointer to the first outermost contour, or NULL if no contour was detected (for example, if the image is completely black). The other contours can be reached from first_contour through the h_next and v_next links.
The gesture region rectangle is then obtained as CvRect rect = ((CvContour*)contour)->rect.
Computation of the hand_area feature:
After the gesture region rectangle has been computed, all the feature computations below are carried out within that rectangle. The gesture area hand_area is the total number of pixels in the hand region; it is obtained by scanning the gesture region rectangle of the binary map and counting the black pixels:
area = ΣΣ f(x,y)    (4-1)
where f(x,y) = 1 if pixel (x,y) is black and f(x,y) = 0 otherwise.
We can call OpenCV's built-in function cvContourArea to compute hand_area; its prototype is as follows:
double cvContourArea(
    const CvArr* contour,
    CvSlice slice = CV_WHOLE_SEQ);
The parameter contour is an image contour, and slice is the starting point of the portion of the contour of interest; by default the area of the whole contour is computed. The return value of the function is the area of the whole contour.
Computation of the area_ratio feature:
area_ratio = hand_area / (rect.width * rect.height)
where rect denotes the gesture region rectangle.
Computation of the aspect_ratio feature:
aspect_ratio = (float)rect.width / (float)rect.height
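The two ratios above can be sketched without OpenCV against a plain row-major binary map and a bounding rectangle. The Rect struct here is a stand-in for OpenCV's CvRect, and foreground pixels are black (0) as produced by the binarization step:

```cpp
#include <cassert>
#include <vector>

// Stand-in for OpenCV's CvRect: the gesture region bounding rectangle.
struct Rect { int x, y, width, height; };

// hand_area: count the black (0) pixels inside the rectangle.
static int countForeground(const std::vector<unsigned char> &img, int w,
                           const Rect &r) {
    int n = 0;
    for (int y = r.y; y < r.y + r.height; ++y)
        for (int x = r.x; x < r.x + r.width; ++x)
            if (img[y * w + x] == 0) ++n;
    return n;
}

// area_ratio = hand_area / (rect.width * rect.height)
static double areaRatio(int handArea, const Rect &r) {
    return (double)handArea / (double)(r.width * r.height);
}

// aspect_ratio = rect.width / rect.height
static double aspectRatio(const Rect &r) {
    return (double)r.width / (double)r.height;
}
```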
Computation of the hand_perimeter feature:
The perimeter hand_perimeter of the gesture contour is the total number of pixels of the gesture contour image within the gesture region rectangle; it is obtained by scanning the gesture region rectangle of the contour map and counting the black pixels:
hand_perimeter = ΣΣ f(x,y)
where f(x,y) = 1 if pixel (x,y) lies on the contour (black) and f(x,y) = 0 otherwise.
We can call OpenCV's built-in function cvArcLength to compute hand_perimeter; its prototype is as follows:
double cvArcLength(
    const void* curve,
    CvSlice slice = CV_WHOLE_SEQ,
    int is_closed = -1);
The parameter curve is a sequence or array of curve points, slice is the starting point of the curve (by default the length of the entire curve is computed), and is_closed indicates whether the curve is closed. The return value of the function is the contour perimeter or curve length.
Computation of the Hu invariant-moment features:
In 1962, M. K. Hu first proposed invariant moments. He gave the definition of the moments of a continuous function and their basic properties, proved properties of moments such as translation invariance, rotation invariance, and scale invariance, and gave explicit expressions for seven invariant moments possessing these invariances:
φ1 = μ20 + μ02
φ2 = (μ20 - μ02)^2 + 4μ11^2
φ3 = (μ30 - 3μ12)^2 + (3μ21 - μ03)^2
φ4 = (μ30 + μ12)^2 + (μ21 + μ03)^2
φ5 = (μ30 - 3μ12)(μ30 + μ12)[(μ30 + μ12)^2 - 3(μ21 + μ03)^2] + (3μ21 - μ03)(μ21 + μ03)[3(μ30 + μ12)^2 - (μ21 + μ03)^2]
φ6 = (μ20 - μ02)[(μ30 + μ12)^2 - (μ21 + μ03)^2] + 4μ11(μ30 + μ12)(μ21 + μ03)
φ7 = (3μ21 - μ03)(μ30 + μ12)[(μ30 + μ12)^2 - 3(μ21 + μ03)^2] - (μ30 - 3μ12)(μ21 + μ03)[3(μ30 + μ12)^2 - (μ21 + μ03)^2]
Here the μpq are the central moments; in practice the normalized central moments ηpq are substituted for them. The normalized central moment ηpq is defined as
ηpq = μpq / μ00^r
where r = (p + q + 2)/2 and p + q = 2, 3, ...
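The definitions above can be sketched for a discrete point set. The code below computes central moments, the normalized moments ηpq, and the first two Hu invariants φ1 and φ2; the remaining five invariants follow the same pattern from the formulas above:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Central moment mu_pq of a discrete point set about its centroid.
static double centralMoment(const std::vector<Pt> &pts, int p, int q) {
    double n = (double)pts.size(), cx = 0.0, cy = 0.0;
    for (std::size_t i = 0; i < pts.size(); ++i) { cx += pts[i].x; cy += pts[i].y; }
    cx /= n; cy /= n;                              // centroid
    double mu = 0.0;
    for (std::size_t i = 0; i < pts.size(); ++i)
        mu += std::pow(pts[i].x - cx, p) * std::pow(pts[i].y - cy, q);
    return mu;
}

// Normalized central moment eta_pq = mu_pq / mu_00^r with r = (p+q+2)/2.
static double eta(const std::vector<Pt> &pts, int p, int q) {
    double r = (p + q + 2) / 2.0;
    return centralMoment(pts, p, q) / std::pow(centralMoment(pts, 0, 0), r);
}

// First two Hu invariants.
static double huPhi1(const std::vector<Pt> &pts) {
    return eta(pts, 2, 0) + eta(pts, 0, 2);
}
static double huPhi2(const std::vector<Pt> &pts) {
    double d = eta(pts, 2, 0) - eta(pts, 0, 2);
    double c = eta(pts, 1, 1);
    return d * d + 4.0 * c * c;
}
```

Rotating or translating the point set leaves φ1 and φ2 unchanged, which is the invariance the patent relies on.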
Main code for computing the Hu moments of an image:
void CFeature::CalcHuMoments(const IplImage* img, CvHuMoments& humoments)
{
    CvMoments moments;
    cvMoments(img, &moments, 0);
    cvGetHuMoments(&moments, &humoments);
}
Computation of the Fourier descriptors:
The Fourier transform is a linear transform and provides a way of solving linear-system problems. It plays an important role in the theory of many disciplines; although it can be regarded, like other transforms, as a purely mathematical mapping, in many fields the Fourier transform also has a clear physical meaning.
Sampling a function f(x) at N points spaced Δx apart turns it into the sequence
{f(x0), f(x0 + Δx), f(x0 + 2Δx), ..., f(x0 + [N-1]Δx)}
Writing f(x) = f(x0 + xΔx) with the discrete index x = 0, 1, 2, ..., N-1, the sequence becomes
{f(0), f(1), f(2), ..., f(N-1)}
The discrete Fourier transform pair is then
F(u) = (1/N) Σ_{x=0}^{N-1} f(x) exp(-j2πux/N)
f(x) = Σ_{u=0}^{N-1} F(u) exp(j2πux/N)
where F(u) = F(uΔu) for u = 0, 1, 2, ..., N-1 and Δu = 1/(NΔx), corresponding to the frequencies 0, Δu, 2Δu, ..., (N-1)Δu. The transform pair is written f(x) ⇔ F(u).
The discrete Fourier transform satisfies the orthogonality condition
(1/N) Σ_{x=0}^{N-1} exp(j2πu1x/N) exp(-j2πu2x/N) = 1 if u1 = u2, and 0 otherwise.
In practice N is often taken to be a power of two, N = 2^m.
Fourier descriptors are the Fourier-transform coefficients of the boundary curve of an object's shape. They are the result of frequency-domain analysis of the boundary-curve signal, and they describe the curve in a way that is unaffected by translation of the origin and by rotation.
The basic idea of Fourier descriptors is to assume the object's shape is a closed curve and to take the sequence of all points along the boundary, {x(l), y(l) : l = 0, 1, ..., n-1}, expressed in complex form:
p(l) = x(l) + j y(l)   (l = 0, 1, ..., n-1, j = sqrt(-1))
In this way the boundary can be represented in a one-dimensional space.
The discrete Fourier coefficients of the one-dimensional sequence are defined as
z(k) = (1/n) Σ_{l=0}^{n-1} p(l) exp(-j2πlk/n)   (k = 0, 1, ..., n-1)
z is the Fourier transform of p, the representation of the point sequence in the frequency domain. Its inverse Fourier transform is
p(l) = Σ_{k=0}^{n-1} z(k) exp(j2πlk/n)   (l = 0, 1, ..., n-1)
Using the property z(k) = z*(n-k) of the Fourier coefficients (z* is the complex conjugate of z), the high-frequency components from K+1 (0 ≤ K < n/2 - 1) to n-K-1 are removed from the coefficients z. Taking the inverse Fourier transform then yields a curve that approximates the original one, but in which the abrupt parts of the original curve have become smooth; this approximation is called the K-approximation of the original curve. The subset of Fourier coefficients {z(k) : k ≤ K} in this sense is called the Fourier descriptor. Because the energy of the Fourier coefficients concentrates at low frequencies, a small number of coefficients suffices to distinguish boundaries of different shapes.
Fourier descriptors depend on the scale, the orientation, and the starting point of the shape's boundary curve. To recognize shapes invariantly to rotation, translation, and scale, the Fourier descriptors must be normalized. By the properties of the Fourier transform, if the starting point of the boundary is shifted by a samples, the object is scaled by a factor r, rotated by an angle θ, and translated by (x0, y0), the Fourier coefficients z'(k) of the new shape become
z'(k) = r e^{jθ} e^{j2πka/n} z(k) + (x0 + j y0) δ(k)   (k = 0, 1, ..., n-1)
where x'(l) + j y'(l) = r e^{jθ} [x(l+a) + j y(l+a)] + (x0 + j y0), and δ(k) is 1 for k = 0 and 0 otherwise.
It can be seen from these equations that when a shape is described by its Fourier coefficients, the magnitudes |z(k)|, k = 0, 1, ..., n-1, are rotation- and translation-invariant (except that z(0) is not translation-invariant) and independent of the choice of the starting point of the curve; a translation changes only the z(0) component, by x0 + j y0. Dividing each magnitude |z(k)| (except z(0)) by |z(1)| gives the quantities
|z(k)| / |z(1)|   (k = 1, 2, ..., n-1)
which also do not change under scaling. They are therefore simultaneously rotation-, translation-, and scale-invariant and independent of the starting point of the curve, and we take them as the Fourier descriptors. The normalized Fourier descriptor d(k) is accordingly defined as
d(k) = |z(k)| / |z(1)|   (k = 1, 2, ..., n-1)
where |·| denotes the modulus.
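The DFT pair and the normalized descriptor d(k) defined above can be sketched directly with an O(n^2) DFT (not the patent's cvDFT call); the 1/n factor sits on the forward transform as in the text, and z(0) is dropped because it carries the translation:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Forward DFT of a complex boundary sequence p(l) = x(l) + j y(l):
//   z(k) = (1/n) * sum_l p(l) exp(-j 2 pi l k / n)
static std::vector<std::complex<double> > boundaryDft(
        const std::vector<std::complex<double> > &p) {
    const double PI = std::acos(-1.0);
    std::size_t n = p.size();
    std::vector<std::complex<double> > z(n);
    for (std::size_t k = 0; k < n; ++k) {
        std::complex<double> s(0.0, 0.0);
        for (std::size_t l = 0; l < n; ++l)
            s += p[l] * std::exp(std::complex<double>(
                     0.0, -2.0 * PI * (double)(l * k) / (double)n));
        z[k] = s / (double)n;                  // 1/n on the forward transform
    }
    return z;
}

// Normalized descriptor d(k) = |z(k)| / |z(1)|, k = 1 .. n-1.
static std::vector<double> normalizedDescriptor(
        const std::vector<std::complex<double> > &boundary) {
    std::vector<std::complex<double> > z = boundaryDft(boundary);
    std::vector<double> d;
    for (std::size_t k = 1; k < z.size(); ++k)
        d.push_back(std::abs(z[k]) / std::abs(z[1]));
    return d;
}
```

Translating the boundary changes only z(0), so the descriptor is unchanged, matching the invariance argument above.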
This system takes the first 12 coefficients after z(0), obtaining a 12-dimensional feature vector. This feature vector is rotation-, translation-, and scale-invariant and independent of the choice of the starting point of the curve. The specific algorithm is as follows:
bool CFeature::CalcFourierFeature(const IplImage* img, FOURIER_FEATURE& fourier_feature)
{
    assert(img != NULL);
    int MAX_POINT_NUMBER = 1024;    // assume at most 1024 boundary points
    IplImage* clone_img = cvCloneImage(img);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour;
    int contour_count = cvFindContours(
        clone_img,
        storage,
        &contour,
        sizeof(CvContour),
        CV_RETR_EXTERNAL,
        CV_CHAIN_APPROX_NONE,
        cvPoint(0, 0));
    double img_area = img->width * img->height;
    // keep the first contour covering at least 5% of the image
    for (; contour; contour = contour->h_next)
    {
        double contour_area = fabs(cvContourArea(contour, CV_WHOLE_SEQ));
        if (contour_area / img_area >= 0.05)
            break;
    }
    if (!contour)                    // no sufficiently large contour found
    {
        fourier_feature.feature_data.clear();
        return false;
    }
    int point_count = contour->total;    // number of boundary points
    if (point_count > MAX_POINT_NUMBER)
    {
        fourier_feature.feature_data.clear();
        return false;
    }
    MAX_POINT_NUMBER = point_count;      // use the actual point count as the DFT length
    CvPoint* pt = new CvPoint[point_count];
    cvCvtSeqToArray(contour, pt, CV_WHOLE_SEQ);
    CvMat* real_input = cvCreateMat(1, MAX_POINT_NUMBER, CV_64F);
    CvMat* imag_input = cvCreateMat(1, MAX_POINT_NUMBER, CV_64F);
    CvMat* comp_input = cvCreateMat(1, MAX_POINT_NUMBER, CV_64FC2);
    cvZero(real_input);
    cvZero(imag_input);
    cvZero(comp_input);
    for (int i = 0; i < point_count; i++)
    {
        cvSetReal2D(real_input, 0, i, pt[i].x);    // x becomes the real part
        cvSetReal2D(imag_input, 0, i, pt[i].y);    // y becomes the imaginary part
    }
    cvMerge(real_input, imag_input, NULL, NULL, comp_input);
    cvDFT(comp_input, comp_input, CV_DXT_FORWARD | CV_DXT_SCALE,
          comp_input->height);                     // Fourier transform
    CvMat* real_output = cvCreateMat(1, MAX_POINT_NUMBER, CV_64F);
    CvMat* imag_output = cvCreateMat(1, MAX_POINT_NUMBER, CV_64F);
    CvMat* spectrum = cvCreateMat(1, MAX_POINT_NUMBER, CV_64F);    // magnitude spectrum
    cvSplit(comp_input, real_output, imag_output, NULL, NULL);
    cvPow(real_output, real_output, 2.0);
    cvPow(imag_output, imag_output, 2.0);
    cvAdd(real_output, imag_output, spectrum, NULL);
    cvPow(spectrum, spectrum, 0.5);
    double factor = 1.0 / cvGetReal2D(spectrum, 0, 1);    // normalize by |z(1)| as defined above
    cvScale(spectrum, spectrum, factor, 0);
    for (int i = 1; i <= 12; i++)    // the first 12 coefficients after z(0)
    {
        double d = cvGetReal2D(spectrum, 0, i);
        fourier_feature.feature_data.push_back(d);
    }
    cvReleaseMat(&real_input);
    cvReleaseMat(&imag_input);
    cvReleaseMat(&comp_input);
    cvReleaseMat(&real_output);
    cvReleaseMat(&imag_output);
    cvReleaseMat(&spectrum);
    cvReleaseImage(&clone_img);
    cvReleaseMemStorage(&storage);
    delete[] pt;
    return true;
}
Before extracting feature parameters from the original gesture image, the vision-based static gesture recognition method of the present invention first performs preprocessing: the original gesture image is binarized according to the skin-color characteristics of the human body, the binary map of the gesture image is denoised by image enhancement to obtain a noise-free binary map, and edge extraction is then applied to the binary map to obtain the gesture contour. Three groups of gesture feature parameters are extracted, namely the Hu invariant-moment features, the gesture region features, and the Fourier descriptors, which together form the feature vector. This feature set has a good recognition rate, so the recognition performance is good and the recognition rate is high.

Claims (9)

1. A vision-based static gesture recognition method, comprising the following steps: S1, gesture image preprocessing: segmenting the hand region from the background according to the skin-color characteristics of the human body, then obtaining the gesture contour through image filtering and morphological operations; S2, gesture feature extraction: extracting the Hu invariant-moment features, the gesture region features, and the Fourier-descriptor parameters to form a feature vector; S3, gesture recognition: using a multilayer-perceptron classifier, which is self-organizing and self-learning, resists noise effectively, handles incomplete patterns, and generalizes across patterns.
2. The vision-based static gesture recognition method according to claim 1, wherein step S1 comprises S11 binarizing the gesture image, S12 smoothing and denoising, and S13 contour extraction.
3. The vision-based static gesture recognition method according to claim 2, wherein S11 binarizes the original gesture image according to the skin-color information of the human body to obtain the hand region.
4. The vision-based static gesture recognition method according to claim 2, wherein S12 denoises the hand-region binary map with median filtering and linear smoothing filtering to obtain a clean gesture region.
5. The vision-based static gesture recognition method according to claim 4, wherein S13 adopts the Laplacian edge-extraction algorithm.
6. The vision-based static gesture recognition method according to claim 1, wherein the gesture region features comprise the ratio of the gesture region area to the area of the gesture bounding rectangle, the aspect ratio of the gesture region, the ratio of the two parts into which the centroid of the gesture binary map divides the gesture bounding rectangle, the hand region area, and the perimeter of the gesture contour.
7. The vision-based static gesture recognition method according to claim 6, wherein after the gesture region rectangle is computed, the gesture area (area) is the total number of pixels in the hand region, obtained by scanning the gesture region rectangle of the binary map and counting the black pixels:
area = ΣΣ f(x,y)
where f(x,y) = 1 if pixel (x,y) is black and f(x,y) = 0 otherwise.
8. The vision-based static gesture recognition method according to claim 6, wherein the perimeter of the gesture contour (hand_perimeter) is the total number of pixels of the gesture contour image within the gesture region rectangle, obtained by scanning the gesture region rectangle of the contour map and counting the black pixels:
hand_perimeter = ΣΣ f(x,y)
where f(x,y) = 1 if pixel (x,y) lies on the contour and f(x,y) = 0 otherwise.
9. The vision-based static gesture recognition method according to claim 1, wherein the Fourier-descriptor parameters are normalized when computed.
CN200910190601A 2009-09-25 2009-09-25 Static gesture identification method based on vision Pending CN101661556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910190601A CN101661556A (en) 2009-09-25 2009-09-25 Static gesture identification method based on vision


Publications (1)

Publication Number Publication Date
CN101661556A true CN101661556A (en) 2010-03-03

Family

ID=41789567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910190601A Pending CN101661556A (en) 2009-09-25 2009-09-25 Static gesture identification method based on vision

Country Status (1)

Country Link
CN (1) CN101661556A (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101912676A (en) * 2010-07-30 2010-12-15 湖州海振电子科技有限公司 Treadmill capable of recognizing gesture
CN102592113A (en) * 2011-12-23 2012-07-18 哈尔滨工业大学深圳研究生院 Rapid identification method for static gestures based on apparent characteristics
CN102968642A (en) * 2012-11-07 2013-03-13 百度在线网络技术(北京)有限公司 Trainable gesture recognition method and device based on gesture track eigenvalue
CN102968618A (en) * 2012-10-24 2013-03-13 浙江鸿程计算机系统有限公司 Static hand gesture recognition method fused with BoF model and spectral clustering algorithm
CN103077381A (en) * 2013-01-08 2013-05-01 郑州威科姆科技股份有限公司 Monocular dynamic hand gesture recognition method on basis of fractional Fourier transformation
CN104050442A (en) * 2013-03-11 2014-09-17 霍尼韦尔国际公司 Gesture recognition system operability verification
CN104063703A (en) * 2014-07-22 2014-09-24 清华大学 Gesture identification method based on inverted index mode
CN104182772A (en) * 2014-08-19 2014-12-03 大连理工大学 Gesture recognition method based on deep learning
CN104331158A (en) * 2014-10-29 2015-02-04 山东大学 Gesture-controlled human-computer interaction method and device
TWI475422B (en) * 2012-10-31 2015-03-01 Wistron Corp Method for recognizing gesture and electronic device
CN104463250A (en) * 2014-12-12 2015-03-25 广东工业大学 Sign language recognition translation method based on Davinci technology
CN104866825A (en) * 2015-05-17 2015-08-26 华南理工大学 Gesture language video frame sequence classification method based on Hu moments
CN105451029A (en) * 2015-12-02 2016-03-30 广州华多网络科技有限公司 Video image processing method and device
CN106295531A (en) * 2016-08-01 2017-01-04 乐视控股(北京)有限公司 A kind of gesture identification method and device and virtual reality terminal
CN103793718B (en) * 2013-12-11 2017-01-18 台州学院 Deep study-based facial expression recognition method
CN106599771A (en) * 2016-10-21 2017-04-26 上海未来伙伴机器人有限公司 Gesture image recognition method and system
CN107430431A (en) * 2015-01-09 2017-12-01 雷蛇(亚太)私人有限公司 Gesture identifying device and gesture identification method
US9846816B2 (en) 2015-10-26 2017-12-19 Pixart Imaging Inc. Image segmentation threshold value deciding method, gesture determining method, image sensing system and gesture determining system
CN108205641A (en) * 2016-12-16 2018-06-26 比亚迪股份有限公司 Images of gestures processing method and processing device
CN109190516A (en) * 2018-08-14 2019-01-11 东北大学 A kind of static gesture identification method based on volar edge contour vectorization
CN109634410A (en) * 2018-11-28 2019-04-16 上海鹰觉科技有限公司 Unmanned plane photographic method and system based on gesture identification
CN109684959A (en) * 2018-12-14 2019-04-26 武汉大学 The recognition methods of video gesture based on Face Detection and deep learning and device
CN109934159A (en) * 2019-03-11 2019-06-25 西安邮电大学 A kind of gesture identification method of multiple features fusion
CN110197138A (en) * 2019-05-15 2019-09-03 南京极目大数据技术有限公司 A kind of quick gesture identification method based on video frame feature
CN110363787A (en) * 2018-03-26 2019-10-22 北京市商汤科技开发有限公司 Information acquisition method and system, electronic equipment, program and medium

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101912676A (en) * 2010-07-30 2010-12-15 湖州海振电子科技有限公司 Treadmill capable of recognizing gesture
CN102592113A (en) * 2011-12-23 2012-07-18 哈尔滨工业大学深圳研究生院 Rapid identification method for static gestures based on apparent characteristics
CN102592113B (en) * 2011-12-23 2014-07-30 哈尔滨工业大学深圳研究生院 Rapid identification method for static gestures based on apparent characteristics
CN102968618A (en) * 2012-10-24 2013-03-13 浙江鸿程计算机系统有限公司 Static hand gesture recognition method fused with BoF model and spectral clustering algorithm
TWI475422B (en) * 2012-10-31 2015-03-01 Wistron Corp Method for recognizing gesture and electronic device
CN102968642A (en) * 2012-11-07 2013-03-13 百度在线网络技术(北京)有限公司 Trainable gesture recognition method and device based on gesture track eigenvalue
CN102968642B (en) * 2012-11-07 2018-06-08 百度在线网络技术(北京)有限公司 Trainable gesture recognition method and device based on gesture trajectory feature values
CN103077381A (en) * 2013-01-08 2013-05-01 郑州威科姆科技股份有限公司 Monocular dynamic hand gesture recognition method on basis of fractional Fourier transformation
CN104050442A (en) * 2013-03-11 2014-09-17 霍尼韦尔国际公司 Gesture recognition system operability verification
CN103793718B (en) * 2013-12-11 2017-01-18 台州学院 Deep learning-based facial expression recognition method
CN104063703A (en) * 2014-07-22 2014-09-24 清华大学 Gesture identification method based on inverted index mode
CN104182772A (en) * 2014-08-19 2014-12-03 大连理工大学 Gesture recognition method based on deep learning
CN104182772B (en) * 2014-08-19 2017-10-24 大连理工大学 Gesture recognition method based on deep learning
CN104331158A (en) * 2014-10-29 2015-02-04 山东大学 Gesture-controlled human-computer interaction method and device
CN104331158B (en) * 2014-10-29 2018-05-25 山东大学 Gesture-controlled human-computer interaction method and device
CN104463250B (en) * 2014-12-12 2017-10-27 广东工业大学 Sign language recognition and translation method based on Davinci technology
CN104463250A (en) * 2014-12-12 2015-03-25 广东工业大学 Sign language recognition translation method based on Davinci technology
CN107430431A (en) * 2015-01-09 2017-12-01 雷蛇(亚太)私人有限公司 Gesture recognition device and gesture recognition method
CN104866825B (en) * 2015-05-17 2019-01-29 华南理工大学 Sign language video frame sequence classification method based on Hu moments
CN104866825A (en) * 2015-05-17 2015-08-26 华南理工大学 Gesture language video frame sequence classification method based on Hu moments
US9846816B2 (en) 2015-10-26 2017-12-19 Pixart Imaging Inc. Image segmentation threshold value deciding method, gesture determining method, image sensing system and gesture determining system
CN105451029A (en) * 2015-12-02 2016-03-30 广州华多网络科技有限公司 Video image processing method and device
CN105451029B (en) * 2015-12-02 2019-04-02 广州华多网络科技有限公司 Video image processing method and device
CN106295531A (en) * 2016-08-01 2017-01-04 乐视控股(北京)有限公司 Gesture recognition method and device, and virtual reality terminal
CN106599771A (en) * 2016-10-21 2017-04-26 上海未来伙伴机器人有限公司 Gesture image recognition method and system
CN108205641A (en) * 2016-12-16 2018-06-26 比亚迪股份有限公司 Images of gestures processing method and processing device
CN108205641B (en) * 2016-12-16 2020-08-07 比亚迪股份有限公司 Gesture image processing method and device
CN110363787A (en) * 2018-03-26 2019-10-22 北京市商汤科技开发有限公司 Information acquisition method and system, electronic equipment, program and medium
CN109190516A (en) * 2018-08-14 2019-01-11 东北大学 Static gesture recognition method based on palm edge contour vectorization
CN109634410A (en) * 2018-11-28 2019-04-16 上海鹰觉科技有限公司 Unmanned aerial vehicle photography method and system based on gesture recognition
CN109684959A (en) * 2018-12-14 2019-04-26 武汉大学 Video gesture recognition method and device based on skin color detection and deep learning
CN109934159A (en) * 2019-03-11 2019-06-25 西安邮电大学 Gesture recognition method based on multi-feature fusion
CN110197138A (en) * 2019-05-15 2019-09-03 南京极目大数据技术有限公司 Rapid gesture recognition method based on video frame features
CN110197138B (en) * 2019-05-15 2020-02-04 南京极目大数据技术有限公司 Rapid gesture recognition method based on video frame characteristics

Similar Documents

Publication Publication Date Title
CN101661556A (en) Static gesture identification method based on vision
Rekha et al. Shape, texture and local movement hand gesture features for indian sign language recognition
CN104834922B (en) Gesture identification method based on hybrid neural networks
Adithya et al. Artificial neural network based method for Indian sign language recognition
Mittal et al. Hand detection using multiple proposals.
Ranga et al. American sign language fingerspelling using hybrid discrete wavelet transform-gabor filter and convolutional neural network
CN102682287B (en) Pedestrian detection method based on saliency information
CN105205449B (en) Sign Language Recognition Method based on deep learning
CN102508547A (en) Computer-vision-based gesture input method construction method and system
Kobayashi et al. Three-way auto-correlation approach to motion recognition
Yarlagadda et al. A novel method for human age group classification based on Correlation Fractal Dimension of facial edges
Zhao et al. License plate location based on Haar-like cascade classifiers and edges
Jambhale et al. Gesture recognition using DTW & piecewise DTW
Vishwakarma et al. Simple and intelligent system to recognize the expression of speech-disabled person
CN111126240A (en) Three-channel feature fusion face recognition method
Meng et al. An extended HOG model: SCHOG for human hand detection
Kulkarni et al. Facial expression recognition
Sang et al. Robust palmprint recognition base on touch-less color palmprint images acquired
CN101271465A (en) Shot clustering method based on information bottleneck theory
CN103927555A (en) Static sign language letter recognition system and method based on Kinect sensor
Izzah et al. Translation of sign language using generic fourier descriptor and nearest neighbour
Shan et al. Looking around the backyard helps to recognize faces and digits
Singh et al. Implementation and evaluation of DWT and MFCC based ISL gesture recognition
CN106971143A (en) Face illumination-invariant feature extraction method using logarithmic transformation and smoothing filtering
Kalangi et al. Deployment of Haar Cascade algorithm to detect real-time faces

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20100303