CN104866826A - Static gesture language identification method based on KNN algorithm and pixel ratio gradient features - Google Patents

Static gesture language identification method based on KNN algorithm and pixel ratio gradient features

Info

Publication number
CN104866826A
CN104866826A
Authority
CN
China
Prior art keywords
image
pixel
sign language
gradient features
knn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510254132.1A
Other languages
Chinese (zh)
Other versions
CN104866826B (en)
Inventor
李兆海
徐向民
青春美
倪浩淼
黄爱发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201510254132.1A priority Critical patent/CN104866826B/en
Publication of CN104866826A publication Critical patent/CN104866826A/en
Application granted granted Critical
Publication of CN104866826B publication Critical patent/CN104866826B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a static sign language recognition method based on the KNN algorithm and pixel-ratio gradient features. The method comprises the following steps: 1. capture a color image; 2. binarize the image according to its color features; 3. locate the hand according to the shape features of the image and segment it; 4. normalize the segmented image; 5. extract the pixel-ratio and gradient features of the image as its feature vector; 6. compute the Euclidean distance between the input feature vector and a standard sign language image feature library; 7. determine the best match with the KNN algorithm; 8. output the recognition result. By combining the color features, shape features, and pixel-ratio gradient features of the image and matching with the KNN algorithm, the invention improves the recognition rate and the adaptability to different environments; furthermore, the algorithm is relatively simple, its complexity is low, the system runs fast, and the equipment is inexpensive.

Description

A static sign language recognition method based on KNN and pixel-ratio gradient features
Technical field
The invention belongs to the technical field of static sign language recognition, and in particular to a static sign language recognition method involving image segmentation, localization, feature extraction, and pattern recognition.
Background technology
It is reported that more than 20 million people in China have hearing or speech disabilities, a number growing by 20,000 to 30,000 every year. Studying sign language, the most important means of communication for this group, not only helps improve the living, study, and working conditions of these people with disabilities and provides them with better services, but can also be applied to computer-aided sign language teaching, bilingual television broadcasting, virtual human research, special effects in film production, animation, medical research, entertainment, and many other areas.
The development of computer vision involves many disciplines, including optics, signal processing, artificial intelligence, pattern recognition, image processing, and computer technology. With the progress of signal processing theory, the theory of computation, vision theory, and other related techniques, computer vision has also developed rapidly. Thanks to its non-contact operation, automation, visual output, real-time behavior, and intelligence, it is widely used in industrial production, health care, aerospace, scientific research, national defense, and other fields, and receives great attention from many countries and industries.
According to the gesture input channel, sign language recognition can be divided into two broad classes: data-glove based and computer-vision based. In data-glove based recognition, when the signer makes a gesture, the data glove and a position tracker pass the position of each joint of the hand to the computer fairly accurately for subsequent processing. The data obtained this way are accurate, the approach suits large vocabularies, and the recognition rate is high; however, the user must wear cumbersome data-collection equipment such as gloves, which is inconvenient, and these instruments are expensive. In computer-vision based recognition, a digital camera captures gesture images, computer vision and digital image processing techniques process the images to obtain gesture features, and pattern recognition techniques finally identify the sign. This approach makes human-computer interaction natural and convenient, and it needs only inexpensive equipment such as a digital camera, so the economic cost is low. However, limited by the current state of computer vision and graphics, the visual data obtained from a camera are not accurate enough, so the robustness of subsequent feature extraction and detection is not guaranteed, and the approach is therefore less often applied to large vocabularies. Nevertheless, computer-vision based sign language recognition is the development trend of the field: with the progress of computer and imaging technology and the maturing of pattern recognition, vision-based sign language recognition can overcome the poor robustness and low efficiency of detection and tracking and achieve highly intelligent visual sign language recognition.
Summary of the invention
The object of the invention is to overcome the shortcomings of the prior art described above and to provide a static sign language recognition method based on KNN and pixel-ratio gradient features that has a high recognition rate, strong robustness, and low cost. The method inputs a static sign language image captured by a camera into a computer, processes the image, recognizes the static sign, and converts it into text output.
The object of the invention is achieved through the following technical solution.
To improve the recognition performance of a sign language system, the invention proposes a static sign language recognition method based on KNN and pixel-ratio gradient features. The method presumes that the signer wears blue gloves, and it comprises the following steps: Step 1: capture a color image and input it into the computer; Step 2: binarize the image based on its color features; Step 3: locate the hand based on the shape features of the image and segment it; Step 4: normalize the segmented image; Step 5: extract the pixel-ratio and gradient features of the image as its feature vector; Step 6: compute the weighted Euclidean distance between the input feature vector and the standard sign language image feature library; Step 7: determine the best match with the KNN algorithm; Step 8: output the recognition result.
Further, step 2 comprises the following sub-steps: Step 21: compute the RGB value of each pixel of the color image; Step 22: judge from its RGB value whether a pixel is blue; Step 23: set the RGB values of all pixels judged blue in step 22 to 255 and the RGB values of all other pixels to 0, obtaining a binary image.
Further, step 3 comprises the following steps:
Step 31: dilate the binary image;
Step 32: traverse all contours in the image and enclose each contour region in a minimum bounding rectangle;
Step 33: compute the length, width, and area of every rectangular region and judge whether the region is a hand region; the criterion is:
the length length of the minimum rectangle enclosing the contour satisfies: len_min < length < len_max, and
the width height of the minimum rectangle enclosing the contour satisfies: hgh_min < height < hgh_max; a region that meets this criterion is judged a hand region;
Step 34: segment out the contour judged to be the hand region.
Further and preferably, len_min = 150, len_max = 250; hgh_min = 100, hgh_max = 200. The normalization in step 4 resizes the image to a uniform 20*36 size.
Further, step 5 comprises the following steps:
Step S51: divide the segmented, normalized image 4*4 into 16 small rectangles, each of size 5*9;
Step S52: count the pixels in each small rectangle, storing the number of white pixels in array an and the total number of pixels in array bn;
Step S53: count the pixels on the 6 dividing lines that partition the 16 small rectangles, storing the number of white pixels in array an and the total number of pixels in array bn;
Step S54: use an/bn as the pixel-ratio and gradient feature vector of the image.
Further, when step 6 computes the Euclidean distance between the feature vector and the standard sign language image feature library, the weight of the gradient features is strengthened. Suppose the feature vector of the input sign language image is (a_1, a_2, ..., a_22) and the feature vector of a template in the sign language image feature library is (b_1, b_2, ..., b_22); the weighted Euclidean distance between them is then computed as:
d = sqrt( Σ_{i=1}^{16} (a_i - b_i)^2 + Σ_{i=17}^{22} [9(a_i - b_i)]^2 )
Preferably, the normalization in step 4 resizes the image to a uniform 20*36 size.
Compared with the prior art, the static sign language recognition method based on KNN and pixel-ratio gradient features provided by the invention combines the color features, shape features, and pixel-ratio gradient features of the image and uses the KNN algorithm for feature matching, which improves the recognition rate and the adaptability to different environments; moreover, the algorithm is relatively simple, its complexity is low, the system runs fast, and the equipment is inexpensive, so the method has a certain degree of advancement.
Brief description of the drawings
Fig. 1 is the flowchart of the static sign language recognition method based on KNN and pixel-ratio gradient features of the invention;
Fig. 2 is the sub-flowchart of step S102 in Fig. 1;
Fig. 3 is the sub-flowchart of step S103 in Fig. 1;
Fig. 4 is the sub-flowchart of step S105 in Fig. 1.
Detailed description
The invention is described in further detail below with reference to the accompanying drawings and a specific embodiment, but the specific embodiment and the scope of protection of the invention are not limited thereto. Note that any process or symbol not described in detail below can be implemented by those skilled in the art with reference to the prior art.
Fig. 1 shows the overall flow of the preferred sign language recognition embodiment of the invention. The static sign language recognition method based on KNN and pixel-ratio gradient features comprises the following steps: Step S101: capture a color image; Step S102: binarize the image; Step S103: locate the hand and segment it; Step S104: normalize the segmented image; Step S105: extract the pixel-ratio and gradient features of the image; Step S106: compute the weighted Euclidean distance between the input feature vector and the standard sign language image feature library; Step S107: determine the best match based on KNN; Step S108: output the recognition result.
In step S101 the signer needs to wear a pair of blue gloves. In vision-based gesture recognition, segmenting the hand against a complex background is very difficult, mainly because backgrounds vary and environmental factors are unpredictable; there is no mature theory to serve as guidance, existing methods are hard to implement, their computational complexity is high, and their results are not ideal. The invention therefore adds a constraint: the signer wears a pair of blue gloves. This greatly reduces the difficulty of hand segmentation, since color detection alone can easily locate and segment the gesture. In addition, to facilitate use in wearable devices, the camera is placed in front of the signer and shoots forward.
Fig. 2 shows the flowchart of binarizing the image using its color features, refining step S102. It comprises the following steps: Step S201: compute the RGB value of each pixel of the color image; Step S202: judge from its RGB value whether a pixel is blue; Step S203: set the RGB values of all pixels judged blue in step S202 to 255 and the RGB values of all other pixels to 0, obtaining a binary image. The criterion for judging whether a pixel is blue is as follows:
(1) among the R, G, B values of the pixel, the B value is the largest;
(2) the B value is greater than a threshold B_th; preferably, B_th is set to 50;
(3) because the glove region is distinctly blue, the variance of R, G, B is greater than a threshold Var_th; preferably, Var_th is set to 350.
The variance Var of the RGB values is computed as:
Var = [(R - average)^2 + (G - average)^2 + (B - average)^2] / 3
where
average = (R + G + B) / 3
Since the signer wears blue gloves, after the binarization described above most pixels in the hand region have gray value 255, i.e. they are white pixels, which facilitates the subsequent localization and segmentation.
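The blue-pixel criterion above can be sketched in Python as follows. The thresholds B_th = 50 and Var_th = 350 are the preferred values stated in the text; the function and variable names are illustrative, not part of the patent:

```python
# Sketch of the blue-pixel test of step S202: B must be the largest channel,
# exceed B_th, and the RGB variance must exceed Var_th.
B_TH = 50
VAR_TH = 350

def is_blue(r, g, b):
    """Return True if an (R, G, B) pixel is classified as glove-blue."""
    if b < max(r, g) or b <= B_TH:      # B must dominate and exceed B_th
        return False
    avg = (r + g + b) / 3.0
    var = ((r - avg) ** 2 + (g - avg) ** 2 + (b - avg) ** 2) / 3.0
    return var > VAR_TH                 # only strongly saturated blue passes

def binarize(pixels):
    """Map an iterable of (R, G, B) tuples to 255 (blue) or 0 (step S203)."""
    return [255 if is_blue(r, g, b) else 0 for (r, g, b) in pixels]
```

For example, a pure-blue pixel (0, 0, 255) passes all three conditions, while a gray pixel fails the variance test even though its B value is large.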
Fig. 3 shows the flowchart of locating and segmenting the hand region using shape features, refining step S103. It comprises the following steps: Step S301: dilate the binary image once. In weak light, the crease regions on the blue gloves fail the criterion of step S202 and are binarized into black pixels, which breaks the hand contour; the image is therefore dilated to reconnect the broken regions and aid the subsequent localization. Step S302: traverse all contours in the image, enclose each contour region in a minimum bounding rectangle, compute the length and width of the rectangle, and judge whether the region is a hand region. Because the camera is in front of the signer during shooting, the size of the hand region must fall within a certain range, and this condition filters the hand region out of the contours. Step S303: segment out the contour judged to be the hand region.
The concrete filtering criterion is:
1. the length of the minimum rectangle enclosing the contour satisfies: len_min < length < len_max; preferably, len_min = 150, len_max = 250;
2. the width of the minimum rectangle enclosing the contour satisfies: hgh_min < height < hgh_max; preferably, hgh_min = 100, hgh_max = 200.
After the above steps, the static sign language image has been converted into a binary image containing only the hand region.
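The bounding-box size filter of steps S301 to S303 can be sketched as follows. Contours are reduced here to their bounding rectangles; the helper names and box representation are illustrative, while the thresholds are the preferred values from the text:

```python
# Size filter for candidate hand regions: a bounding box qualifies only if
# its length and height fall inside the patent's preferred ranges.
LEN_MIN, LEN_MAX = 150, 250
HGH_MIN, HGH_MAX = 100, 200

def is_hand_region(length, height):
    """Apply the bounding-box criterion for a hand region."""
    return LEN_MIN < length < LEN_MAX and HGH_MIN < height < HGH_MAX

def find_hand_boxes(boxes):
    """Filter (x, y, length, height) bounding boxes down to hand candidates."""
    return [box for box in boxes if is_hand_region(box[2], box[3])]
```

In a real implementation the boxes would come from a contour-tracing routine on the dilated binary image (e.g. OpenCV's findContours followed by boundingRect); here they are passed in directly.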
Step S104 normalizes the segmented image. The hand regions segmented from different signs differ in size, so the images must be resized to a uniform size to improve the accuracy of subsequent feature extraction and recognition. Preferably, normalization resizes the image to 20*36.
Fig. 4 shows the flowchart of extracting the pixel-ratio and gradient features of the image, refining step S105. The detailed extraction steps are: Step S401: divide the image 4*4 into 16 small rectangles, each of size 5*9; Step S402: count the pixels in each small rectangle, storing the number of white pixels in array an and the total number of pixels in array bn; Step S403: count the pixels on the 6 dividing lines that partition the 16 small rectangles, storing the number of white pixels in array an and the total number of pixels in array bn; Step S404: use an/bn as the pixel-ratio and gradient feature vector of the image. We thus obtain a 22-dimensional feature vector whose dimensions 1 to 16 are the pixel-ratio features and whose dimensions 17 to 22 are the gradient features. This feature reflects the pixel distribution in the binary image directly and in detail, which is very helpful for the subsequent pattern recognition and matching. Moreover, because it uses ratios of pixel counts rather than raw pixel counts, the feature adapts well to images from different scenes.
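A sketch of the 22-dimensional feature extraction of step S105, assuming the binarized hand image is a 36-row by 20-column grid of 0/255 values (the patent's normalized 20*36 size). The text does not spell out the orientation or order of the 6 dividing lines, so taking them as the three internal vertical and three internal horizontal grid lines of the 4*4 partition, and scanning cells row by row, are our assumptions:

```python
# 22-dimensional pixel-ratio and gradient feature vector:
# dims 1-16 are white-pixel ratios of the 16 cells (5*9 each),
# dims 17-22 are white-pixel ratios on the 6 internal dividing lines.
W, H = 20, 36        # normalized width and height
CW, CH = 5, 9        # size of each of the 4x4 cells

def pixel_ratio_features(img):
    feats = []
    # dims 1-16: ratio of white pixels inside each 5*9 cell
    for row in range(4):
        for col in range(4):
            white = sum(1 for y in range(row * CH, (row + 1) * CH)
                          for x in range(col * CW, (col + 1) * CW)
                          if img[y][x] == 255)
            feats.append(white / (CW * CH))
    # dims 17-19: the 3 internal vertical dividing lines (x = 5, 10, 15)
    for x in (CW, 2 * CW, 3 * CW):
        feats.append(sum(1 for y in range(H) if img[y][x] == 255) / H)
    # dims 20-22: the 3 internal horizontal dividing lines (y = 9, 18, 27)
    for y in (CH, 2 * CH, 3 * CH):
        feats.append(sum(1 for x in range(W) if img[y][x] == 255) / W)
    return feats
```

Because every dimension is a ratio in [0, 1] rather than a raw count, the vector is comparable across images, which is the scene-independence property claimed in the text.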
Step S106 computes the weighted Euclidean distance between the input feature vector and the standard sign language image feature library. The Euclidean distance (Euclidean metric) is the most common measure of the difference between data; the differences obtained in this step are an important part of the template matching and are vital to the subsequent recognition. Because the dividing lines contain few pixels, the gradient features would otherwise contribute very little to the distance, so they must be given extra weight; after tuning, a weight of 9 works well.
Suppose the feature vector of the input sign language image is (a_1, a_2, ..., a_22) and the feature vector of a template in the sign language image feature library is (b_1, b_2, ..., b_22); the weighted Euclidean distance between them is then computed as:
d = sqrt( Σ_{i=1}^{16} (a_i - b_i)^2 + Σ_{i=17}^{22} [9(a_i - b_i)]^2 )
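The weighted distance translates directly into code; a and b are 22-dimensional feature vectors (0-indexed here, so indices 16 to 21 are dimensions 17 to 22), and the weight 9 on the gradient dimensions is the tuned value from the text:

```python
import math

def weighted_distance(a, b):
    """Weighted Euclidean distance between two 22-dim feature vectors."""
    s = sum((a[i] - b[i]) ** 2 for i in range(16))               # pixel-ratio dims
    s += sum((9 * (a[i] - b[i])) ** 2 for i in range(16, 22))    # gradient dims, weight 9
    return math.sqrt(s)
```

A unit difference on a gradient dimension thus contributes 81 times as much to the squared distance as a unit difference on a pixel-ratio dimension, compensating for the small pixel counts on the dividing lines.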
Step S107 determines the best match based on the KNN algorithm, i.e. the k-nearest-neighbour classification algorithm. Its core idea is that if most of the k samples nearest to a sample in feature space belong to a certain class, the sample also belongs to that class and shares the characteristics of the samples in it. In the invention it is applied as follows: sort the weighted Euclidean distances computed in step S106 and take the 20 smallest; these correspond to 20 templates, each of which corresponds to a manual alphabet letter, yielding 20 letters; the letter that occurs most often among them is taken as the recognition result.
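The majority vote of step S107 can be sketched as follows. The template library is assumed to be a list of (feature_vector, letter) pairs; k = 20 as in the text, though the test below uses a smaller k on a toy library:

```python
from collections import Counter
import math

def classify(features, templates, k=20):
    """Return the letter occurring most often among the k templates
    nearest to `features` under the patent's weighted distance."""
    def dist(template):
        a, b = features, template[0]
        s = sum((a[i] - b[i]) ** 2 for i in range(16))
        s += sum((9 * (a[i] - b[i])) ** 2 for i in range(16, 22))
        return math.sqrt(s)
    nearest = sorted(templates, key=dist)[:k]          # k nearest templates
    return Counter(letter for _, letter in nearest).most_common(1)[0][0]
```

In practice the library would hold many templates per manual alphabet letter, so that the 20 nearest neighbours can outvote occasional mismatches.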
Step S108 outputs the recognition result in the form of text.

Claims (6)

1. A static sign language recognition method based on KNN and pixel-ratio gradient features, characterized by comprising the following steps:
Step 1: capture a color image and input it into a computer;
Step 2: binarize the image based on its color features;
Step 3: locate the hand based on the shape features of the image and segment it;
Step 4: normalize the segmented image;
Step 5: extract the pixel-ratio and gradient features of the image as its feature vector;
Step 6: compute the Euclidean distance between the input feature vector and a standard sign language image feature library;
Step 7: determine the best match with the KNN algorithm;
Step 8: output the recognition result.
2. The static sign language recognition method based on KNN and pixel-ratio gradient features as claimed in claim 1, characterized in that step 3 comprises the following steps:
Step 31: dilate the binary image;
Step 32: traverse all contours in the image and enclose each contour region in a minimum bounding rectangle;
Step 33: compute the length, width, and area of every rectangular region and judge whether the region is a hand region; the criterion is:
the length length of the minimum rectangle enclosing the contour satisfies: len_min < length < len_max, and
the width height of the minimum rectangle enclosing the contour satisfies: hgh_min < height < hgh_max; a region that meets this criterion is judged a hand region;
Step 34: segment out the contour judged to be the hand region.
3. The static sign language recognition method based on KNN and pixel-ratio gradient features as claimed in claim 1, characterized in that:
len_min = 150, len_max = 250; hgh_min = 100, hgh_max = 200.
4. The static sign language recognition method based on KNN and pixel-ratio gradient features as claimed in claim 1, characterized in that the normalization in step 4 resizes the image to a uniform 20*36 size.
5. The static sign language recognition method based on KNN and pixel-ratio gradient features as claimed in claim 1, characterized in that step 5 comprises the following steps:
Step S51: divide the segmented, normalized image 4*4 into 16 small rectangles, each of size 5*9;
Step S52: count the pixels in each small rectangle, storing the number of white pixels in array an and the total number of pixels in array bn;
Step S53: count the pixels on the 6 dividing lines that partition the 16 small rectangles, storing the number of white pixels in array an and the total number of pixels in array bn;
Step S54: use an/bn as the pixel-ratio and gradient feature vector of the image.
6. The static sign language recognition method based on KNN and pixel-ratio gradient features as claimed in claim 1, characterized in that when step 6 computes the Euclidean distance between the feature vector and the standard sign language image feature library, the weight of the gradient features is strengthened; suppose the feature vector of the input sign language image is (a_1, a_2, ..., a_22) and the feature vector of a template in the standard sign language image feature library is (b_1, b_2, ..., b_22); the weighted Euclidean distance between them is then computed as:
d = sqrt( Σ_{i=1}^{16} (a_i - b_i)^2 + Σ_{i=17}^{22} [9(a_i - b_i)]^2 )
CN201510254132.1A 2015-05-17 2015-05-17 A static sign language recognition method based on KNN and pixel-ratio gradient features Active CN104866826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510254132.1A CN104866826B (en) 2015-05-17 2015-05-17 A static sign language recognition method based on KNN and pixel-ratio gradient features

Publications (2)

Publication Number Publication Date
CN104866826A true CN104866826A (en) 2015-08-26
CN104866826B CN104866826B (en) 2019-01-15

Family

ID=53912647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510254132.1A Active CN104866826B (en) 2015-05-17 2015-05-17 A static sign language recognition method based on KNN and pixel-ratio gradient features

Country Status (1)

Country Link
CN (1) CN104866826B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103311A (en) * 2017-05-31 2017-08-29 西安工业大学 A continuous sign language recognition method and device
CN108121943A (en) * 2016-11-30 2018-06-05 阿里巴巴集团控股有限公司 Picture-based discrimination method, apparatus, and computing device
CN108255303A (en) * 2018-01-24 2018-07-06 重庆邮电大学 A gesture recognition method based on a self-made data glove
CN109816045A (en) * 2019-02-11 2019-05-28 青岛海信智能商用系统股份有限公司 A commodity recognition method and device
CN112115853A (en) * 2020-09-17 2020-12-22 西安羚控电子科技有限公司 Gesture recognition method and device, computer storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090285459A1 (en) * 2008-05-15 2009-11-19 Gaurav Aggarwal Fingerprint representation using gradient histograms
CN102385439A (en) * 2011-10-21 2012-03-21 华中师范大学 Man-machine gesture interactive system based on electronic whiteboard
US20120154618A1 (en) * 2010-12-15 2012-06-21 Microsoft Corporation Modeling an object from image data
CN102930537A (en) * 2012-10-23 2013-02-13 深圳市宜搜科技发展有限公司 Image detection method and system
CN104268507A (en) * 2014-09-15 2015-01-07 南京邮电大学 Manual alphabet identification method based on RGB-D image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAI Jun, "Research on gesture recognition algorithms based on high-order image NMI values", China Master's Theses Full-Text Database, Information Science and Technology series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121943A (en) * 2016-11-30 2018-06-05 阿里巴巴集团控股有限公司 Picture-based discrimination method, apparatus, and computing device
US11126827B2 (en) 2016-11-30 2021-09-21 Alibaba Group Holding Limited Method and system for image identification
CN107103311A (en) * 2017-05-31 2017-08-29 西安工业大学 A continuous sign language recognition method and device
CN108255303A (en) * 2018-01-24 2018-07-06 重庆邮电大学 A gesture recognition method based on a self-made data glove
CN109816045A (en) * 2019-02-11 2019-05-28 青岛海信智能商用系统股份有限公司 A commodity recognition method and device
CN112115853A (en) * 2020-09-17 2020-12-22 西安羚控电子科技有限公司 Gesture recognition method and device, computer storage medium and electronic equipment

Also Published As

Publication number Publication date
CN104866826B (en) 2019-01-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant