CN105354551B - Gesture recognition method based on a monocular camera - Google Patents

Gesture recognition method based on a monocular camera

Info

Publication number
CN105354551B
CN105354551B CN201510738071.6A
Authority
CN
China
Prior art keywords
picture
palm
gesture
sample
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510738071.6A
Other languages
Chinese (zh)
Other versions
CN105354551A (en)
Inventor
朱郁丛
李小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Img Technology Co Ltd
Original Assignee
Beijing Img Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Img Technology Co Ltd filed Critical Beijing Img Technology Co Ltd
Priority to CN201510738071.6A priority Critical patent/CN105354551B/en
Publication of CN105354551A publication Critical patent/CN105354551A/en
Application granted granted Critical
Publication of CN105354551B publication Critical patent/CN105354551B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The invention provides a gesture recognition method based on a monocular camera, comprising: capturing video with a monocular camera and obtaining the video frames it acquires; performing image-feature analysis on each video frame; if a palm-region picture is present, storing each moment and the corresponding palm-region position in a global buffer; analyzing the palm-region positions at multiple moments to obtain the relative displacement of the palm region within a preset duration, and recognizing the corresponding gesture operation from that displacement; and comparing the recognized gesture operation with multiple predefined gestures, obtaining the matching predefined gesture, and invoking the callback function bound to it to complete the corresponding control action. The invention acquires the position of the user's palm region with a monocular camera, performs intelligent gesture recognition, and invokes the operation corresponding to the gesture through a callback function to produce interactive output; it offers high recognition accuracy and a simple hardware structure.

Description

Gesture recognition method based on a monocular camera
Technical field
The present invention relates to the field of image recognition technology, and in particular to a gesture recognition method based on a monocular camera.
Background art
Existing body-language recognition techniques usually acquire images of the user's limbs with a binocular camera. Because the user's limbs are embedded in a large amount of background imagery, separating the limb image from the background is relatively difficult. Moreover, since users usually express control intent through hand gestures, accurately separating the gesture image from the background is what matters most for solving practical problems. In addition, existing image-acquisition devices are usually binocular.
How to use a monocular image-acquisition device to accurately separate and recognize gesture images from the background is therefore a technical problem that image recognition technology currently needs to solve.
Summary of the invention
The purpose of the present invention is to solve at least one of the technical deficiencies described above.
To this end, an object of the present invention is to propose a gesture recognition method based on a monocular camera, which acquires the position of the user's palm region with a monocular camera, performs intelligent gesture recognition, and invokes the operation corresponding to the gesture through a callback function to produce interactive output, offering high recognition accuracy and a simple hardware structure.
To achieve the above goals, an embodiment of the present invention provides a gesture recognition method based on a monocular camera, comprising the following steps:
Step S1: capture video with the monocular camera and obtain the video frames it acquires;
Step S2: perform image-feature analysis on each video frame to judge whether a palm-region picture is present in it; the image-feature analysis comprises the following steps:
extract the image features of the video frame;
use a classifier to screen the image features by category, and judge from the screening result whether a palm-region picture is present in the video frame; if it is, execute step S3, otherwise discard the video frame; before the classifier screens the image features, sample data is obtained, the classifier is selected by training on the sample data, and the classifier is initialized, wherein the sample data comprises positive-sample pictures, which are picture samples containing a palm region, and negative-sample pictures, which are picture samples containing no palm region;
Step S3: if a palm-region picture is present, obtain the palm-region position at the current moment and store it in a global buffer; repeat steps S1 to S3 to obtain the palm-region positions at multiple moments within a preset duration, storing each moment together with the corresponding palm-region position in the global buffer;
Step S4: analyze the palm-region positions at the multiple moments to obtain the relative displacement of the palm region within the preset duration, and recognize the corresponding gesture operation from that displacement;
Step S5: compare the recognized gesture operation with multiple predefined gestures, obtain the matching predefined gesture, and invoke the callback function bound to it to complete the corresponding control action; callback functions follow a prescribed function format, are placed in a dynamic library, and are exported for the system to call, wherein each predefined gesture, callback function and control action correspond one to one.
Further, in step S1, before the monocular camera captures video, the monocular camera is initialized and its resolution is set.
Further, obtaining the positive-sample pictures comprises the following steps: manually screen the pre-selected sample picture set for pictures that contain a palm region, annotate the region where the palm lies on each picture, and save the annotation information to obtain the positive-sample pictures.
Further, obtaining the negative-sample pictures comprises the following steps:
manually screen the pre-selected sample picture set for background pictures that contain no palm region and match the practical application;
split the background pictures into tiles matching the size of the positive-sample pictures;
remove tiles with duplicated features from the background pictures, obtaining the negative-sample pictures.
Further, selecting the classifier by training on the sample data comprises the following steps:
extract the image features of the positive-sample and negative-sample pictures separately;
merge the image features of the positive-sample and negative-sample pictures to generate a training sample set;
draw from the training sample set to obtain a detection sample set;
train a classifier on the training sample set, and test the trained classifier with the detection sample set to obtain the accuracy of the current classifier;
iterate over training-parameter settings to obtain the accuracy of multiple classifiers, and choose the classifier with the highest accuracy as the classifier used in step S2.
Further, the method comprises the following step: regularly update the data in the global buffer and delete stale entries.
Further, in step S3, the palm-region picture corresponding to the palm-region position is copied into the global buffer, and a unique global UUID is generated for the palm-region picture as its name.
The gesture recognition method based on a monocular camera according to the embodiments of the present invention acquires the position of the user's palm region with a monocular camera, performs intelligent gesture recognition, and invokes the operation corresponding to the gesture through a callback function to produce interactive output; it offers high recognition accuracy and a simple hardware structure. The invention uses a camera and computer-vision algorithms to interpret body language, thereby enriching the communication bridge between machine and human: gesture recognition lets people communicate with machines without any additional tools.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become obvious from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become obvious and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flow diagram of the gesture recognition method based on a monocular camera according to an embodiment of the present invention;
Fig. 2 is a flowchart of the gesture recognition method based on a monocular camera according to an embodiment of the present invention;
Fig. 3 is a flowchart of initializing the camera and the classifier according to an embodiment of the present invention;
Fig. 4 is a flowchart of obtaining the image features of a video frame and screening them with the classifier according to an embodiment of the present invention;
Fig. 5 is a flowchart of gesture judgment according to an embodiment of the present invention;
Fig. 6 is a flowchart of sampling and saving hand-region pictures according to an embodiment of the present invention;
Fig. 7 is a flowchart of obtaining positive-sample pictures according to an embodiment of the present invention;
Fig. 8 is a flowchart of obtaining negative-sample pictures according to an embodiment of the present invention;
Fig. 9 is a flowchart of extracting positive- and negative-sample features according to an embodiment of the present invention;
Fig. 10 is a flowchart of classifier training according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numbers throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting it.
The gesture recognition method based on a monocular camera according to the embodiments of the present invention is described below with reference to Fig. 1 to Fig. 10.
As shown in Fig. 1, the gesture recognition method based on a monocular camera of the embodiment of the present invention comprises the following steps:
Step S1: capture video with the monocular camera and obtain the video frames it acquires.
The present invention uses the monocular camera as the human-machine interface, reading the user's gestures through it as interactive input.
Before the monocular camera captures video, the monocular camera is first initialized and its resolution is set.
Fig. 3 is a flowchart of initializing the camera and the classifier according to an embodiment of the present invention.
Step S301: load the configuration file.
On startup, the program reads its configuration from the configuration file.
Step S302: judge whether loading succeeded; if so, execute steps S304 and S305, otherwise execute step S303.
Step S303: report that loading the configuration file failed.
The initialization failure is reported together with the corresponding error.
Step S304: initialize the camera.
The camera is initialized with the configured parameters and its resolution is set.
Step S305: initialize the classifier.
The user can perform gesture-swipe operations at a certain distance from the monocular camera. The monocular camera captures video of the user, and the video frames it acquires are then obtained.
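As a concrete sketch (not part of the patent text itself), steps S1 and S304 might look as follows in Python with OpenCV; the device index and the 640x480 resolution are illustrative assumptions:

```python
import cv2

def open_camera(device_index=0, width=640, height=480):
    """Initialize the monocular camera and set its resolution (step S304)."""
    cap = cv2.VideoCapture(device_index)  # assumed default camera index
    if not cap.isOpened():
        raise RuntimeError("failed to open the monocular camera")
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap

cap = open_camera()
ok, frame = cap.read()  # step S1: one video frame acquired by the camera
```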
Step S2: perform image-feature analysis on the video frame to judge whether a palm-region picture is present in it.
First, the image features of the video frame are extracted. Then, a classifier screens the image features by category, and the screening result determines whether a palm-region picture is present in the video frame; if it is, step S3 is executed, otherwise the video frame is discarded.
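A minimal sketch of this screening step, assuming HOG features over a 64x64 window and a pre-trained binary SVM; the patent does not fix a particular feature type or classifier, so both are assumptions:

```python
import cv2
import numpy as np

# Assumed feature setup: HOG over a 64x64 detection window.
HOG = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def window_has_palm(frame, svm, window):
    """Extract the image features of one window of the frame and screen
    them with the classifier (step S2); returns True if palm-like."""
    x, y, w, h = window
    patch = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    patch = cv2.resize(patch, (64, 64))
    feat = HOG.compute(patch).reshape(1, -1).astype(np.float32)
    _, label = svm.predict(feat)
    return int(label[0, 0]) == 1  # assumed labels: 1 = palm, 0 = background
```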
In one embodiment of the invention, before the classifier screens the image features by category, sample data is obtained; pictures automatically judged to contain a palm in actual use are added for iterative training, the classifier is selected accordingly by training on the sample data, and the classifier is initialized.
It should be noted that the classifier may be initialized at the same time as the camera in step S1; for convenience of description, the initialization of the classifier is described here.
In one embodiment of the invention, the sample data comprises positive-sample pictures and negative-sample pictures: a positive-sample picture is a picture sample that contains a palm region, and a negative-sample picture is a picture sample that does not.
First, the positive-sample pictures are obtained, comprising the following steps: manually screen the pre-selected sample picture set for pictures that contain a palm region, annotate the region where the palm lies on each picture, and save the annotation information to obtain the positive-sample pictures.
Fig. 7 is a flowchart of obtaining positive-sample pictures according to an embodiment of the present invention.
Step S701: pre-select positive-sample pictures.
Step S702: judge whether a palm region is present; if so, execute step S704, otherwise execute step S703.
The pre-selected positive-sample pictures are screened manually to judge whether they contain a palm-region picture.
Step S703: discard the picture.
Step S704: annotate the palm region.
Pictures with a palm region are annotated manually, i.e. the position of the palm is marked on the picture.
Step S705: save the annotation information.
The palm annotation information from step S704 is saved.
Next, the negative-sample pictures are obtained, comprising the following steps:
(1) manually screen the pre-selected sample picture set for background pictures that contain no palm region and match the practical application;
(2) split the background pictures into tiles matching the size of the positive-sample pictures;
(3) remove tiles whose background features are heavily duplicated, obtaining the negative-sample pictures.
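A minimal sketch of steps (2) and (3), assuming 64x64 positive-sample tiles and a gray-histogram correlation test as the measure of duplicated background features (the patent does not specify how duplication is measured):

```python
import cv2
import numpy as np

def tile_background(background, tile=64):
    """Split a palm-free background picture into positive-sample-sized tiles."""
    h, w = background.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield background[y:y+tile, x:x+tile]

def dedup_tiles(tiles, threshold=0.9):
    """Drop tiles whose gray histogram correlates too strongly with a tile
    already kept, removing heavily duplicated background features."""
    kept, hists = [], []
    for t in tiles:
        gray = cv2.cvtColor(t, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if all(cv2.compareHist(hist, h, cv2.HISTCMP_CORREL) < threshold
               for h in hists):
            kept.append(t)
            hists.append(hist)
    return kept
```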
Fig. 8 is a flowchart of obtaining negative-sample pictures according to an embodiment of the present invention.
Step S801: pre-select negative-sample pictures.
Step S802: judge whether a palm region is present; if so, execute step S803, otherwise execute step S804.
The pre-selected negative-sample pictures are screened manually to judge whether they contain a palm-region picture.
Step S803: discard the picture.
Step S804: split automatically.
The background pictures are split into tiles matching the size of the positive-sample pictures.
Step S805: discard heavily duplicated features.
After splitting, tiles with heavily duplicated background features are discarded from the negative-sample pictures.
Finally, the classifier is selected by training on the sample data, comprising the following steps:
(1) extract the image features of the positive-sample and negative-sample pictures separately;
(2) merge the image features of the positive-sample and negative-sample pictures to generate a training sample set;
(3) draw a certain proportion of the training sample set to obtain a detection sample set.
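A minimal sketch of the proportional draw, assuming NumPy feature matrices and a 20% detection fraction (the proportion is not fixed by the patent):

```python
import numpy as np

def draw_detection_set(features, labels, fraction=0.2, seed=0):
    """Draw a proportion of the merged training sample set to serve as
    the detection sample set (steps S905-S906)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    n_test = int(len(features) * fraction)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])
```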
Fig. 9 is a flowchart of extracting positive- and negative-sample features according to an embodiment of the present invention.
Step S901: input the positive-sample set.
Step S902: extract the image features of the positive-sample set.
Step S903: input the negative-sample set.
Step S904: extract the image features of the negative-sample set.
Step S905: merge the features and generate the training sample set.
Step S906: draw the detection sample set.
Fig. 10 is a flowchart of classifier training according to an embodiment of the present invention.
Step S1001: input the training samples.
Step S1002: train the classifier.
The classifier is trained on the training sample set: the training parameters are initialized, and the positive and negative samples are trained on.
Step S1003: input the detection sample set.
Step S1004: detect the training result.
The trained classifier is tested with the detection sample set to obtain the accuracy of the current classifier.
Step S1005: set the training parameters.
Training-parameter settings are iterated to obtain the accuracy of multiple classifiers, and the classifier with the highest accuracy is chosen as the classifier used in step S2.
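A minimal sketch of steps S1002–S1005, assuming the HOG feature vectors above, an OpenCV SVM as the classifier, and a small grid of C values as the iterated training parameter; all three choices are illustrative assumptions, not fixed by the patent:

```python
import cv2
import numpy as np

def train_and_select(train_x, train_y, test_x, test_y, c_grid=(0.1, 1.0, 10.0)):
    """Train one SVM per parameter setting, measure its accuracy on the
    detection sample set, and keep the most accurate classifier."""
    best_svm, best_acc = None, -1.0
    for c in c_grid:                                   # S1005: iterate parameters
        svm = cv2.ml.SVM_create()
        svm.setType(cv2.ml.SVM_C_SVC)
        svm.setKernel(cv2.ml.SVM_LINEAR)
        svm.setC(c)
        svm.train(train_x, cv2.ml.ROW_SAMPLE, train_y)  # S1002: train
        _, pred = svm.predict(test_x)                   # S1004: detect result
        acc = float(np.mean(pred.ravel() == test_y.ravel()))
        if acc > best_acc:
            best_svm, best_acc = svm, acc
    return best_svm, best_acc
```

Here train_x and test_x would be float32 matrices with one feature vector per row, and train_y, test_y int32 label vectors (1 for palm, 0 for background), with the detection set drawn as in draw_detection_set above.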
The image features are classified with the most accurate classifier chosen in step S1005 to judge whether a palm feature is present; if so, step S3 is executed, otherwise the frame is discarded.
Step S3: if a palm-region picture is present, obtain the palm-region position at the current moment and store it in the global buffer.
Fig. 4 is a flowchart of obtaining the image features of a video frame and screening them with the classifier according to an embodiment of the present invention.
Step S401: judge whether the program should exit; if so, execute step S402, otherwise execute step S403.
Step S402: exit the program.
Step S403: obtain a video frame.
A video frame is obtained from the monocular camera.
Step S404: obtain the image features corresponding to each frame.
Step S405: classify the features with the classifier.
From the classifier's screening result, judge whether a palm-region picture is present in the video frame; if so, execute step S406.
Step S406: extract the hand-region position.
The position of the hand region in the video is extracted from the palm-region picture.
Step S407: judge whether it is a palm feature; if so, execute step S409, otherwise execute step S408.
Step S408: discard the frame.
Step S409: store the current moment and the palm-region position.
Step S410: the global buffer.
The current moment and the palm-region position are stored in the global buffer.
Steps S1 to S3 are repeated to obtain the palm-region positions at multiple moments within the preset duration, and each moment together with its corresponding palm-region position is stored in the global buffer. That is, video frames are read from the monocular camera in a loop, the image features of each frame are obtained, and the classifier classifies the features to find the palm-region pictures.
In an embodiment of the present invention, the palm-region picture corresponding to each palm-region position is copied into the global buffer, and a unique global UUID is generated for the palm-region picture as its name.
Fig. 6 is a flowchart of sampling and saving hand-region pictures according to an embodiment of the present invention.
Step S601: input a video frame.
Step S602: extract the image features in the video frame.
Step S603: obtain the palm-region position.
Step S604: sample the image features with a small probability.
Step S605: obtain the palm-region picture.
Step S606: save the palm-region picture.
When saving a picture, the unique global UUID corresponding to each palm-region picture is used as that palm-region picture's name.
It should be noted that the data in the global buffer is regularly updated and deleted to remove stale entries.
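A minimal sketch of the global buffer, assuming Python's uuid module for the global UUID and a fixed two-second window for pruning (the preset duration is not specified in the patent):

```python
import time
import uuid
from collections import deque

MAX_AGE_S = 2.0     # assumed preset duration for pruning stale entries
BUFFER = deque()    # global buffer of (timestamp, position, name, picture)

def store_palm_region(position, picture):
    """Store the current moment, the palm-region position and the
    palm-region picture under a unique global UUID name."""
    name = str(uuid.uuid4())
    BUFFER.append((time.monotonic(), position, name, picture))

def prune_buffer(now=None):
    """Regularly update the buffer: delete entries older than the window."""
    now = time.monotonic() if now is None else now
    while BUFFER and now - BUFFER[0][0] > MAX_AGE_S:
        BUFFER.popleft()
```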
Step S4: obtain the palm-region information at multiple moments from the global buffer, and analyze the palm-region positions at the multiple moments to obtain the relative displacement of the palm region within the preset duration; the corresponding gesture operation is recognized from that displacement, i.e. the gesture the user intends to input is predicted from the positions of the palm pictures within a certain time range.
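A minimal sketch of this relative-position analysis, assuming the buffer entries from the sketch above, (x, y, w, h) bounding boxes as positions, and a simple four-direction swipe vocabulary; the patent does not enumerate its gesture set, so the names are illustrative:

```python
def judge_gesture(entries, min_shift=80):
    """Recognize a gesture from the palm-region positions at multiple
    moments; entries are (timestamp, (x, y, w, h), name, picture)."""
    if len(entries) < 2:
        return None
    _, (x0, y0, _w0, _h0), _, _ = entries[0]
    _, (x1, y1, _w1, _h1), _, _ = entries[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < min_shift and abs(dy) < min_shift:
        return None                     # palm barely moved: no gesture
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```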
Step S5: compare the recognized gesture operation with the multiple predefined gestures, obtain the matching predefined gesture, and invoke the callback function bound to it to complete the corresponding control action; each predefined gesture, callback function and control action correspond one to one.
Fig. 5 is a flowchart of gesture judgment according to an embodiment of the present invention.
Step S501: obtain the palm regions and corresponding moments held in the global buffer.
The global buffer holds multiple moments and the palm-region pictures corresponding to each of them; the palm regions and the corresponding input-frame moments are obtained from the global buffer.
Step S502: update the buffer and delete stale data.
Step S503: judge the gesture the user input.
The gesture operation the user input is recognized from the relative displacement of the palm region.
Step S504: obtain the predefined gestures.
The gesture operation recognized in step S503 is compared with the multiple predefined gestures; when the positional relationship matches, the user's input is judged to be that gesture, i.e. the matching predefined gesture is obtained.
Step S505: invoke the callback function bound to the gesture.
In one embodiment of the invention, callback functions follow a prescribed function format and are placed in a dynamic library exported for the system to call. Invoking the callback function corresponding to the gesture completes the corresponding gesture operation and realizes the interaction with the machine, such as changing what a display device shows or starting and stopping a household appliance.
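A minimal sketch of the one-to-one gesture/callback/action binding (step S505); in the patent the callbacks live in a dynamic library in a prescribed function format, so this in-process dict is a stand-in, and the control actions shown are assumptions:

```python
def next_page():      print("display: next page")       # assumed control action
def previous_page():  print("display: previous page")
def toggle_power():   print("appliance: toggle power")

# Each predefined gesture is bound to exactly one callback (one-to-one).
CALLBACKS = {
    "swipe_left": previous_page,
    "swipe_right": next_page,
    "swipe_up": toggle_power,
}

def dispatch(gesture):
    """Match the recognized gesture against the predefined gestures and
    invoke the bound callback to complete the control action."""
    callback = CALLBACKS.get(gesture)
    if callback is not None:
        callback()
```

For example, dispatch(judge_gesture(list(BUFFER))) would complete one recognize-and-control cycle under the sketches above.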
Further, the present invention also provides an error-handling design: in the debugging environment, errors are reported through assertions and console output of the error information; during actual operation, debugging information is output through the system log.
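A minimal sketch of that split, assuming a Linux /dev/log syslog socket and Python's __debug__ flag as the debug/production switch; both are assumptions outside the patent text:

```python
import logging
import logging.handlers

log = logging.getLogger("gesture")
log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))  # assumed Linux syslog

def report_error(condition, message):
    """Assert with console output in the debugging environment; write to
    the system log during actual operation."""
    if __debug__:
        assert condition, message   # raises and prints while debugging
    elif not condition:
        log.error(message)          # syslog entry in production
```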
The overall flow of the gesture recognition method based on a monocular camera according to an embodiment of the present invention is described below with reference to Fig. 2.
Step S201: load the configuration file.
Step S202: initialize the camera.
Step S203: initialize the classifier.
Step S204: obtain a video frame.
Step S205: obtain the image features.
Step S206: classify the features.
Step S207: obtain the palm region.
Step S208: store the current palm region.
Step S209: retrieve the stored palm regions.
Step S210: match the palm regions against the gesture library.
Step S211: judge the current gesture.
The gesture recognition method based on a monocular camera according to the embodiments of the present invention acquires the position of the user's palm region with a monocular camera, performs intelligent gesture recognition, and invokes the operation corresponding to the gesture through a callback function to produce interactive output; it offers high recognition accuracy and a simple hardware structure. The invention uses a camera and computer-vision algorithms to interpret body language, thereby enriching the communication bridge between machine and human: gesture recognition lets people communicate with machines without any additional tools.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention without departing from its principle and purpose. The scope of the present invention is defined by the appended claims and their equivalents.

Claims (4)

1. A gesture recognition method based on a monocular camera, characterized by comprising the following steps:
Step S1: capture video with the monocular camera and obtain the video frames it acquires;
Step S2: perform image-feature analysis on the video frame to judge whether a palm-region picture is present in it; the image-feature analysis comprises the following steps:
extract the image features of the video frame;
use a classifier to screen the image features by category, and judge from the screening result whether a palm-region picture is present in the video frame; if it is, execute step S3, otherwise discard the video frame; before the classifier screens the image features by category, sample data is obtained, the classifier is selected by training on the sample data, and the classifier is initialized, wherein the sample data comprises positive-sample pictures, which are picture samples containing a palm region, and negative-sample pictures, which are picture samples containing no palm region;
(1) obtain the positive-sample pictures, comprising the following steps: manually screen the pre-selected sample picture set for pictures that contain a palm region, annotate the region where the palm lies on each picture by manually marking the position of the palm on the picture, and save the palm annotation information;
(2) obtain the negative-sample pictures, comprising the following steps: manually screen the pre-selected sample picture set for background pictures that contain no palm region and match the practical application; split the background pictures into tiles matching the size of the positive-sample pictures; after splitting, discard tiles whose background features are heavily duplicated, obtaining the negative-sample pictures;
(3) finally, select the classifier by training on the sample data, comprising the following steps: extract the image features of the positive-sample and negative-sample pictures separately; merge the image features of the positive-sample and negative-sample pictures to generate a training sample set; draw a certain proportion of the training sample set to obtain a detection sample set; input the training samples and train the classifier on the training sample set, wherein the training parameters are initialized and the positive and negative samples are trained on; then input the detection sample set and detect the training result, testing the trained classifier with the detection sample set to obtain the accuracy of the current classifier; iterate over training-parameter settings to obtain the accuracy of multiple classifiers, and choose the classifier with the highest accuracy as the classifier to use; classify the image features with the chosen most accurate classifier to judge whether a palm feature is present; if so, execute step S3, otherwise discard the frame;
Step S3: if a palm-region picture is present, obtain the palm-region position at the current moment and store it in a global buffer; repeat steps S1 to S3 to obtain the palm-region positions at multiple moments within a preset duration, and store each moment together with the corresponding palm-region position in the global buffer; Step S4: analyze the palm-region positions at the multiple moments to obtain the relative displacement of the palm region within the preset duration, and recognize the corresponding gesture operation from that displacement;
Step S5: compare the recognized gesture operation with multiple predefined gestures, obtain the matching predefined gesture, and invoke the callback function bound to it to complete the corresponding control action; callback functions follow a prescribed function format, are placed in a dynamic library, and are exported for the system to call, wherein each predefined gesture, callback function and control action correspond one to one.
2. The gesture recognition method based on a monocular camera according to claim 1, characterized in that in step S1, before the monocular camera captures video, the monocular camera is initialized and its resolution is set.
3. The gesture recognition method based on a monocular camera according to claim 1, characterized by further comprising the following step: regularly update the data in the global buffer and delete stale entries.
4. The gesture recognition method based on a monocular camera according to claim 1, characterized in that in step S3, the palm-region picture corresponding to the palm-region position is copied into the global buffer, and a unique global UUID is generated for the palm-region picture as its name.
CN201510738071.6A 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera Active CN105354551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510738071.6A CN105354551B (en) 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510738071.6A CN105354551B (en) 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera

Publications (2)

Publication Number Publication Date
CN105354551A CN105354551A (en) 2016-02-24
CN105354551B (en) 2019-07-16

Family

ID=55330519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510738071.6A Active CN105354551B (en) 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera

Country Status (1)

Country Link
CN (1) CN105354551B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450715A (en) * 2016-05-31 2017-12-08 大唐电信科技股份有限公司 Man-machine interaction multifunctional wristband terminal based on gesture recognition
CN108121350B (en) * 2016-11-29 2021-08-24 腾讯科技(深圳)有限公司 Method for controlling aircraft to land and related device
CN108520228A (en) * 2018-03-30 2018-09-11 百度在线网络技术(北京)有限公司 Gesture matching process and device
CN108830148A (en) * 2018-05-04 2018-11-16 北京汽车集团有限公司 Traffic gesture recognition method and device, computer-readable storage medium, and vehicle
CN109145803B (en) * 2018-08-14 2022-07-22 京东方科技集团股份有限公司 Gesture recognition method and device, electronic equipment and computer readable storage medium
CN111007806B (en) * 2018-10-08 2022-04-08 珠海格力电器股份有限公司 Smart home control method and device
CN110164060B (en) * 2019-05-23 2020-11-03 哈尔滨拓博科技有限公司 Gesture control method for doll machine, storage medium and doll machine
CN112203015B (en) * 2020-09-28 2022-03-25 北京小米松果电子有限公司 Camera control method, device and medium system


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930270A (en) * 2012-09-19 2013-02-13 东莞中山大学研究院 Method and system for hand recognition based on skin-color detection and background elimination
CN103530613A (en) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Target person hand gesture interaction method based on monocular video sequence
CN103593680A (en) * 2013-11-19 2014-02-19 南京大学 Dynamic hand gesture recognition method based on self incremental learning of hidden Markov model
CN104992171A (en) * 2015-08-04 2015-10-21 易视腾科技有限公司 Method and system for gesture recognition and man-machine interaction based on 2D video sequence

Also Published As

Publication number Publication date
CN105354551A (en) 2016-02-24

Similar Documents

Publication Publication Date Title
CN105354551B (en) Gesture recognition method based on monocular camera
CN103488283B (en) Information processing apparatus, control method thereof, and background determination method
CN110472082B (en) Data processing method, data processing device, storage medium and electronic equipment
Zhai et al. Automatic identification of mycobacterium tuberculosis from ZN-stained sputum smear: Algorithm and system design
US10395091B2 (en) Image processing apparatus, image processing method, and storage medium identifying cell candidate area
CN105957521A (en) Voice and image composite interaction execution method and system for robot
CN107944427A (en) Dynamic human face recognition methods and computer-readable recording medium
CN108647557A (en) Information processing equipment, information processing method and storage medium
EP3293699A1 (en) Method, system for removing background of a video, and a computer-readable storage device
KR101753097B1 (en) Vehicle detection method, data base for the vehicle detection, providing method of data base for the vehicle detection
CN105718954B (en) Object attribute recognition and classification method based on visual-tactile fusion
CN111061898A (en) Image processing method, image processing device, computer equipment and storage medium
US9152857B2 (en) System and method for detecting object using depth information
CN104463827B (en) Automatic testing method for an image capture module and corresponding electronic device
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
CN104077597A (en) Image classifying method and device
CN111860448A (en) Hand washing action recognition method and system
US20120169860A1 (en) Method for detection of a body part gesture to initiate a web application
CN104793068A (en) Image acquisition-based automatic test method
CN106777071B (en) Method and device for acquiring reference information by image recognition
CN105608411B (en) Image classification method and device for a preset monitoring camera
KR101967858B1 (en) Apparatus and method for separating objects based on 3D depth image
CN110275658A (en) Display control method, device, mobile terminal and storage medium
CN110322470A (en) Action recognition device, action recognition method and recording medium
CN110110660B (en) Method, device and equipment for analyzing hand operation behaviors

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant