CN105354551A - Gesture recognition method based on monocular camera - Google Patents


Publication number
CN105354551A
Authority
CN
China
Prior art keywords
picture
gesture
palm region
classifier
monocular camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510738071.6A
Other languages
Chinese (zh)
Other versions
CN105354551B (en
Inventor
朱郁丛
李小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Img Technology Co Ltd
Original Assignee
Beijing Img Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Img Technology Co Ltd filed Critical Beijing Img Technology Co Ltd
Priority to CN201510738071.6A priority Critical patent/CN105354551B/en
Publication of CN105354551A publication Critical patent/CN105354551A/en
Application granted granted Critical
Publication of CN105354551B publication Critical patent/CN105354551B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The present invention provides a gesture recognition method based on a monocular camera. The method comprises: shooting video with a monocular camera and acquiring the video frames it captures; performing image feature analysis on the video frames; if a palm region picture exists, storing the palm region position for each moment in a global cache; analyzing the palm region positions of multiple moments to obtain the relative position of the palm region over a preset time period, and recognizing the corresponding gesture operation from that relative position; and comparing the recognized gesture operation with a number of predefined gestures to find the matching predefined gesture, then calling the callback function corresponding to the matching predefined gesture to complete the corresponding control action. Because the monocular camera captures the position of the user's palm region, gestures are recognized intelligently, and the callback function invokes the operation corresponding to each gesture to produce interactive output, the method achieves high recognition accuracy with a simple hardware structure.

Description

Gesture recognition method based on a monocular camera
Technical field
The present invention relates to the field of image recognition technology, and in particular to a gesture recognition method based on a monocular camera.
Background art
Existing limb recognition technology typically uses a binocular camera to capture images of a user's limbs. Because the user's limb image is mixed into a large amount of background imagery, isolating it from the background is difficult. Moreover, when controlling a device, a user usually expresses intent through gestures, so accurately isolating gesture images from the background is more meaningful for solving practical problems. Furthermore, existing image capture devices are generally binocular.
How to use a monocular image capture device to accurately identify and isolate gesture images from the background is a technical problem that current image recognition technology needs to solve.
Summary of the invention
The object of the present invention is to solve at least one of the technical defects described above.
To this end, the invention proposes a gesture recognition method based on a monocular camera: the monocular camera captures the position of the user's palm region, gestures are recognized intelligently, and a callback function invokes the operation corresponding to each gesture to produce interactive output. The method offers high recognition accuracy and a simple hardware structure.
To achieve these goals, embodiments of the invention provide a gesture recognition method based on a monocular camera, comprising the following steps:
Step S1: capture video with the monocular camera and obtain the video frames it collects;
Step S2: perform image feature analysis on the video frames to judge whether a palm region picture exists in them;
Step S3: if a palm region picture exists, obtain the palm region position at the current moment and store it in a global cache; repeat steps S1 to S3 to obtain the palm region positions of multiple moments within a preset duration, storing each moment and its corresponding palm region position in the global cache;
Step S4: analyze the palm region positions of the multiple moments to obtain the relative position of the palm region over the preset duration, and recognize the corresponding gesture operation from that relative position;
Step S5: compare the recognized gesture operation with multiple predefined gestures to obtain the matching predefined gesture, and call the callback function corresponding to the matching gesture to complete the corresponding control action, where the predefined gestures, callback functions, and control actions are in one-to-one correspondence.
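As an illustration only (the patent does not prescribe code), steps S1 to S5 can be sketched as a small processing loop. Everything here is a hypothetical stand-in: `detect_palm` represents the classifier of step S2 (any callable returning a palm bounding box or `None`), and the gesture names, thresholds, and callbacks are invented for the example.

```python
import time
from collections import deque

class GestureRecognizer:
    """Skeleton of steps S1-S5: detect the palm in each frame, cache timed
    positions, recognize a gesture from the trajectory, fire a callback."""

    def __init__(self, detect_palm, window_s=1.0):
        self.detect_palm = detect_palm      # step S2: classifier stand-in
        self.window_s = window_s            # the "preset duration"
        self.cache = deque()                # global cache: (moment, (x, y, w, h))
        self.callbacks = {}                 # predefined gesture -> callback

    def bind(self, gesture, callback):
        self.callbacks[gesture] = callback  # one-to-one binding (step S5)

    def feed(self, frame, t=None):
        """Steps S1-S3: analyze one video frame, cache the palm position."""
        t = time.monotonic() if t is None else t
        box = self.detect_palm(frame)
        if box is None:
            return None                     # no palm region: discard the frame
        self.cache.append((t, box))
        # drop cache entries older than the preset duration
        while self.cache and t - self.cache[0][0] > self.window_s:
            self.cache.popleft()
        return self.recognize()

    def recognize(self):
        """Steps S4-S5: derive relative motion, match a predefined gesture."""
        if len(self.cache) < 2:
            return None
        (_, (x0, y0, _, _)), (_, (x1, y1, _, _)) = self.cache[0], self.cache[-1]
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < 40 and abs(dy) < 40:   # motion threshold is illustrative
            return None
        gesture = ("swipe_right" if dx > 0 else "swipe_left") \
            if abs(dx) >= abs(dy) else ("swipe_down" if dy > 0 else "swipe_up")
        cb = self.callbacks.get(gesture)
        if cb:
            cb()                            # step S5: invoke the bound callback
        return gesture
```

In a real system `detect_palm` would wrap the trained classifier of step S2, and `feed` would be called once per frame read from the monocular camera.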
Further, in step S1, before the monocular camera captures video, the camera is initialized and its resolution is set.
Further, in step S2, performing image feature analysis on the video frames comprises the following steps:
Extract the image features of the video frame;
Use a classifier to filter the image features, and judge from the filtering result whether a palm region picture exists in the frame; if it exists, perform step S3, otherwise discard the video frame.
Further, before the classifier filters the image features, the method also comprises: obtaining sample data, training on the sample data to select the classifier, and initializing the classifier, where the sample data comprises positive sample pictures (picture samples that contain a palm region) and negative sample pictures (picture samples that do not contain a palm region).
Further, obtaining the positive sample pictures comprises: manually screening, from a set of preselected sample pictures, the pictures that contain a palm region; marking the region where the palm is located on each picture; and saving the annotation information to obtain the positive sample pictures.
Further, obtaining the negative sample pictures comprises the following steps:
Manually screen, from the set of preselected sample pictures, background pictures that do not contain a palm region and that occur in practical applications;
Split the background pictures to match the size of the positive sample pictures;
Remove overly repetitive features from the background pictures to obtain the negative sample pictures.
Further, training on the sample data to select the classifier comprises the following steps:
Extract the image features of the positive and negative sample pictures respectively;
Merge the image features of the positive and negative sample pictures to generate a training sample set;
Extract a detection sample set from the training sample set;
Train the classifier with the training sample set, and test the trained classifier with the detection sample set to obtain its accuracy;
Iteratively adjust the training parameters to obtain the accuracy of multiple classifiers, and choose the classifier with the highest accuracy as the classifier used in step S2.
Further, the method also comprises regularly updating and deleting the data in the global cache.
Further, in step S3, the palm region picture corresponding to the palm region position is copied into the global cache, and a globally unique UUID is generated for the picture and used as its name.
According to the gesture recognition method based on a monocular camera of the embodiments of the present invention, the monocular camera captures the position of the user's palm region, gestures are recognized intelligently, and callback functions invoke the operations corresponding to the gestures to produce interactive output, yielding high recognition accuracy and a simple hardware structure. The invention uses a camera and computer vision algorithms to interpret body language, thereby enriching the communication bridge between people and machines; gesture recognition lets people communicate with a machine without any extra instrument.
Additional aspects and advantages of the invention will be given in part in the following description, will in part become apparent from it, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments taken together with the accompanying drawings, in which:
Fig. 1 is a block flow diagram of the gesture recognition method based on a monocular camera according to an embodiment of the present invention;
Fig. 2 is a flowchart of the gesture recognition method based on a monocular camera according to an embodiment of the present invention;
Fig. 3 is a flowchart of initializing the camera and the classifier according to an embodiment of the present invention;
Fig. 4 is a flowchart of acquiring the image features of a video frame and screening them with the classifier according to an embodiment of the present invention;
Fig. 5 is a flowchart of gesture judgment according to an embodiment of the present invention;
Fig. 6 is a flowchart of sampling and saving palm region pictures according to an embodiment of the present invention;
Fig. 7 is a flowchart of obtaining positive sample pictures according to an embodiment of the present invention;
Fig. 8 is a flowchart of obtaining negative sample pictures according to an embodiment of the present invention;
Fig. 9 is a flowchart of extracting positive and negative sample features according to an embodiment of the present invention;
Fig. 10 is a flowchart of classifier training according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, where the same or similar reference numbers throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present invention, and are not to be construed as limiting it.
The gesture recognition method based on a monocular camera of the embodiments of the present invention is described below with reference to Fig. 1 to Fig. 10.
As shown in Fig. 1, the gesture recognition method based on a monocular camera of the embodiments comprises the following steps:
Step S1: capture video with the monocular camera and obtain the video frames it collects.
The present invention adopts the monocular camera as the human-machine interface, reading the user's gesture input through the monocular camera.
Before the monocular camera captures video, the camera is first initialized and its resolution is set.
Fig. 3 is a flowchart of initializing the camera and the classifier according to an embodiment of the present invention.
Step S301: load the configuration file.
At startup, the program reads its configuration from the configuration file.
Step S302: judge whether loading succeeded; if so, perform steps S304 and S305, otherwise perform step S303.
Step S303: report that loading the configuration file failed.
The program reports the initialization failure together with the corresponding error.
Step S304: initialize the camera.
The camera is initialized from the configuration parameters, which set its resolution.
Step S305: initialize the classifier.
The user can perform a waving gesture at a certain distance from the monocular camera. The monocular camera shoots video of the user, and the video frames it collects are then obtained.
Step S2: perform image feature analysis on the video frames to judge whether a palm region picture exists in them.
First, the image features of the video frame are extracted. Then a classifier filters the image features, and the filtering result determines whether a palm region picture exists in the frame; if it exists, step S3 is performed, otherwise the frame is discarded.
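As a hedged sketch of this screening step: the patent does not fix a particular image feature or classifier, so the `extract` and `classify` callables below are injected stand-ins, and a sliding-window scan is just one conventional way a per-frame classifier can yield a palm region picture or discard the frame.

```python
def find_palm_region(frame, extract, classify, win=64, step=32):
    """Step S2 sketch: slide a window over the frame, extract features per
    window, and let the classifier keep or discard each candidate.
    Returns the best-scoring (x, y, win, win) box, or None to drop the frame."""
    h, w = len(frame), len(frame[0])
    best, best_score = None, 0.0
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            window = [row[x:x + win] for row in frame[y:y + win]]
            score = classify(extract(window))   # classifier's confidence
            if score > best_score:
                best, best_score = (x, y, win, win), score
    return best if best_score >= 0.5 else None  # 0.5 is an assumed threshold
```

Here `frame` is any 2-D array of pixel values; a production detector would typically run on downsampled images or use an integral-image trick rather than this naive scan.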
In one embodiment of the present invention, before the classifier filters the image features, sample data is obtained: pictures that are judged in actual use to show a palm are collected for iterative training, the classifier is selected from the training samples, and the classifier is initialized.
It should be noted that the classifier can be initialized at the same time as the camera initialization of step S1; for convenience of description, the classifier initialization is described here.
In one embodiment of the present invention, the sample data comprises positive sample pictures (picture samples that contain a palm region) and negative sample pictures (picture samples that do not contain a palm region).
First, the positive sample pictures are obtained as follows: the pictures that contain a palm region are manually screened from a set of preselected sample pictures, the region where the palm is located is marked on each picture, and the annotation information is saved to obtain the positive sample pictures.
Fig. 7 is a flowchart of obtaining positive sample pictures according to an embodiment of the present invention.
Step S701: preselect positive sample pictures.
Step S702: judge whether a palm region exists; if so, perform step S704, otherwise perform step S703.
The preselected positive sample pictures are screened manually to judge whether a palm region picture exists in each.
Step S703: discard the picture.
Step S704: mark the palm region.
Pictures with a palm region are annotated manually, that is, the position of the palm is marked on the picture.
Step S705: save the annotation information.
The palm annotation information of step S704 is saved.
Next, the negative sample pictures are obtained as follows:
(1) Manually screen, from the set of preselected sample pictures, background pictures that do not contain a palm region and that occur in practical applications.
(2) Split the background pictures to approximately match the size of the positive sample pictures.
(3) Segment out and remove overly repetitive background features from the background pictures to obtain the negative sample pictures.
Fig. 8 is a flowchart of obtaining negative sample pictures according to an embodiment of the present invention.
Step S801: preselect negative sample pictures.
Step S802: judge whether a palm region exists; if so, perform step S803, otherwise perform step S804.
The preselected negative sample pictures are screened manually to judge whether a palm region picture exists in each; pictures containing a palm cannot serve as negative samples.
Step S803: discard the picture.
Step S804: automatic splitting.
The background pictures are split to approximately match the size of the positive sample pictures.
Step S805: discard overly repetitive features.
Overly repetitive background features in the split negative sample pictures are discarded.
Finally, the classifier is selected from the training samples as follows:
(1) Extract the image features of the positive sample pictures and the negative sample pictures respectively.
(2) Merge the image features of the positive and negative sample pictures to generate a training sample set.
(3) Extract a certain proportion of the training sample set to obtain a detection sample set.
Fig. 9 is a flowchart of extracting positive and negative sample features according to an embodiment of the present invention.
Step S901: input the positive sample set.
Step S902: extract the image features of the positive sample set.
Step S903: input the negative sample set.
Step S904: extract the image features of the negative sample set.
Step S905: merge the features to generate the training sample set.
Step S906: extract the detection sample set.
Fig. 10 is a flowchart of classifier training according to an embodiment of the present invention.
Step S1001: input the training samples.
Step S1002: train the classifier.
The classifier is trained with the training sample set; the training parameters are initialized, and the classifier is trained on the positive and negative samples.
Step S1003: input the detection sample set.
Step S1004: evaluate the training result.
The trained classifier is tested with the detection sample set to obtain its accuracy.
Step S1005: adjust the training parameters.
The training parameters are adjusted iteratively to obtain the accuracy of multiple classifiers, and the classifier with the highest accuracy is chosen as the classifier used in step S2.
The classifier with the highest accuracy chosen in step S1005 is then used to classify the image features and judge whether palm features exist; if so, step S3 is performed, otherwise the frame is discarded.
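The iterate-and-select loop of steps S1002 to S1005 can be illustrated with a deliberately tiny stand-in, where the "training parameter" is a decision threshold over a one-dimensional feature. The real method would train a full image-feature classifier, which this sketch does not attempt.

```python
def train_stump(threshold):
    """Toy 'classifier' whose training parameter is a decision threshold:
    it predicts palm (1) when the feature value exceeds the threshold.
    A stand-in only; the patent does not fix a particular model."""
    return lambda x: 1 if x > threshold else 0

def select_classifier(detect_set, params):
    """Fig. 10 sketch (steps S1002-S1005): for each training parameter,
    train a classifier, measure its accuracy on the detection sample set
    of (feature, label) pairs, and keep the most accurate classifier."""
    best, best_acc = None, -1.0
    for p in params:                                  # S1005: iterate parameters
        clf = train_stump(p)                          # S1002: train
        hits = sum(clf(x) == y for x, y in detect_set)  # S1003-S1004: test
        acc = hits / len(detect_set)
        if acc > best_acc:
            best, best_acc = clf, acc
    return best, best_acc                             # highest-accuracy classifier
```

The winning classifier is then the one used for the per-frame screening of step S2.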
Step S3: if a palm region picture exists, obtain the palm region position at the current moment and store it in the global cache.
Fig. 4 is a flowchart of acquiring the image features of a video frame and screening them with the classifier according to an embodiment of the present invention.
Step S401: judge whether the program should exit; if so, perform step S402, otherwise perform step S403.
Step S402: exit the program.
Step S403: obtain a video frame.
A video frame is obtained from the monocular camera.
Step S404: obtain the image features of the frame.
Step S405: classify the features with the classifier.
According to the classifier's filtering result, judge whether a palm region picture exists in the video frame; if so, perform step S406.
Step S406: extract the palm region position.
The position of the palm region in the video is extracted from the palm region picture.
Step S407: judge whether palm features are present; if so, perform step S409, otherwise perform step S408.
Step S408: discard the frame.
Step S409: store the current moment and the palm region position.
Step S410: global cache.
The current moment and the palm region position are stored in the global cache.
Steps S1 to S3 are repeated to obtain the palm region positions of multiple moments within the preset duration; each moment and its corresponding palm region position are stored in the global cache. That is, the program loops, reading video frames from the monocular camera, obtaining the image features of each frame, and classifying the features to find the palm region pictures.
In an embodiment of the present invention, the palm region picture corresponding to each palm region position is copied into the global cache, and a globally unique UUID is generated for the picture and used as its name.
Fig. 6 is a flowchart of sampling and saving palm region pictures according to an embodiment of the present invention.
Step S601: input a video frame.
Step S602: extract the image features of the video frame.
Step S603: obtain the palm region position.
Step S604: sample the image features with a small probability.
Step S605: obtain the palm region picture.
Step S606: save the palm region picture.
When a picture is saved, the unique UUID corresponding to the palm region picture is used as the picture's name.
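A minimal sketch of the UUID naming of step S606, using Python's standard `uuid` module; the in-memory dictionary stands in for the global cache, and a real implementation would write the picture to storage instead.

```python
import uuid

def save_palm_picture(cache, picture):
    """Step S606 sketch: store a palm region picture in the global cache
    under a globally unique UUID used as the picture's name."""
    name = f"{uuid.uuid4()}.png"   # unique name; real code would write a file
    cache[name] = picture
    return name
```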
It should be noted that the data in the global cache needs to be regularly updated and deleted to remove stale entries.
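The regular update and deletion might look like the following sketch, which drops cached (moment, palm position) entries older than a chosen age; the age limit is an assumption, not a value from the patent.

```python
def prune_cache(cache, now, max_age_s):
    """Regular-update sketch: drop cached (timestamp, palm position) entries
    older than max_age_s, keeping the global cache bounded."""
    return [(t, pos) for t, pos in cache if now - t <= max_age_s]
```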
Step S4: obtain the palm region information of multiple moments from the global cache, analyze the palm region positions of those moments to obtain the relative position of the palm region over the preset duration, and recognize the corresponding gesture operation from that relative position; that is, predict from the palm picture positions the gesture the user intends to input within the time window.
Step S5: compare the recognized gesture operation with multiple predefined gestures, obtain the matching predefined gesture, and call the callback function corresponding to the matching gesture to complete the corresponding control action, where the predefined gestures, callback functions, and control actions are in one-to-one correspondence.
Fig. 5 is a flowchart of gesture judgment according to an embodiment of the present invention.
Step S501: obtain the palm regions and their corresponding moments kept in the global cache.
The global cache stores the palm region pictures of multiple moments together with their respective incoming-frame moments; the palm regions and moments are read from it.
Step S502: update the cache and delete stale data.
Step S503: judge the gesture input by the user.
The gesture operation input by the user is recognized from the relative position of the palm region.
Step S504: obtain the predefined gestures.
The gesture operation recognized in step S503 is compared with the multiple predefined gestures; when a positional relationship matches, the gesture input by the user is judged to be that gesture, i.e., the matching predefined gesture is obtained.
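One possible realization of the positional matching of steps S503 and S504 reduces the cached positions to a net displacement and picks the predefined direction with the highest cosine similarity. The predefined gesture library, the four swipe directions, and the motion threshold are all assumptions made for illustration.

```python
import math

# Hypothetical predefined gesture library: name -> unit displacement direction.
PREDEFINED = {"swipe_left": (-1, 0), "swipe_right": (1, 0),
              "swipe_up": (0, -1), "swipe_down": (0, 1)}

def match_gesture(positions, min_dist=40.0):
    """Steps S503-S504 sketch: reduce the cached palm positions to a net
    displacement and find the predefined gesture whose direction fits best."""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    dist = math.hypot(dx, dy)
    if dist < min_dist:                  # too little motion: no gesture yet
        return None
    # cosine similarity against each predefined direction vector
    return max(PREDEFINED, key=lambda g: (dx * PREDEFINED[g][0] +
                                          dy * PREDEFINED[g][1]) / dist)
```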
Step S505: call the callback function bound to the gesture.
In one embodiment of the present invention, the callback functions are written in a prescribed function format, placed in a dynamic library, and exported for the system to call. Calling the callback function corresponding to the gesture completes the corresponding gesture operation and realizes interaction with machines and equipment, for example changing the content shown on a display device or starting and stopping a household appliance.
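Since the callbacks are described as exported functions in a dynamic library with a prescribed format, they could be bound with Python's `ctypes`; the library path, the void-function format, and the symbol names below are hypothetical, not taken from the patent.

```python
import ctypes

# Assumed "prescribed function format": a callback taking and returning nothing.
CALLBACK_PROTO = ctypes.CFUNCTYPE(None)

def load_callbacks(lib_path, gesture_to_symbol):
    """Bind each predefined gesture to the exported symbol of that name in
    the dynamic library, yielding a gesture -> callable mapping."""
    lib = ctypes.CDLL(lib_path)
    return {gesture: CALLBACK_PROTO((symbol, lib))
            for gesture, symbol in gesture_to_symbol.items()}
```

For example, `load_callbacks("libgesture.so", {"swipe_left": "on_swipe_left"})` would bind a hypothetical exported `on_swipe_left` function to the swipe-left gesture.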
Furthermore, the present invention also provides an error-handling design: in the debugging environment, errors are reported through assertions and console output; during actual operation, debugging information is output through the system log.
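A sketch of this two-mode error handling in Python: console output (together with assertions) in debug mode, the system log in production. The logger name and the `/dev/log` syslog address are assumptions, and the fallback covers environments without a syslog socket.

```python
import logging
import logging.handlers

def make_logger(debug):
    """Error-handling sketch: in a debug build, report to the console (and
    rely on assertions); in production, route messages to the system log."""
    logger = logging.getLogger("gesture")
    logger.setLevel(logging.DEBUG if debug else logging.INFO)
    if debug:
        handler = logging.StreamHandler()       # console output for debugging
    else:
        try:
            handler = logging.handlers.SysLogHandler(address="/dev/log")
        except OSError:                         # no syslog socket available
            handler = logging.StreamHandler()
    logger.addHandler(handler)
    return logger
```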
The overall flow of the gesture recognition method based on a monocular camera of the embodiments is described below with reference to Fig. 2.
Step S201: load the configuration file.
Step S202: initialize the camera.
Step S203: initialize the classifier.
Step S204: obtain a video frame.
Step S205: obtain the image features.
Step S206: classify the features.
Step S207: obtain the palm region.
Step S208: store the current palm region.
Step S209: obtain the cached palm regions.
Step S210: match the palm regions against the gesture library.
Step S211: judge the current gesture.
According to the gesture recognition method based on a monocular camera of the embodiments of the present invention, the monocular camera captures the position of the user's palm region, gestures are recognized intelligently, and callback functions invoke the operations corresponding to the gestures to produce interactive output, yielding high recognition accuracy and a simple hardware structure. The invention uses a camera and computer vision algorithms to interpret body language, thereby enriching the communication bridge between people and machines; gesture recognition lets people communicate with a machine without any extra instrument.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is to be understood that the embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art can change, modify, replace, and vary the embodiments within the scope of the present invention without departing from its principle and purpose. The scope of the present invention is defined by the claims and their equivalents.

Claims (9)

1. A gesture recognition method based on a monocular camera, characterized by comprising the following steps:
Step S1: capture video with the monocular camera and obtain the video frames collected by the monocular camera;
Step S2: perform image feature analysis on the video frames to judge whether a palm region picture exists in the video frames;
Step S3: if the palm region picture exists, obtain the palm region position at the current moment and store it in a global cache; repeat steps S1 to S3 to obtain the palm region positions of multiple moments within a preset duration, storing each moment and its corresponding palm region position in the global cache;
Step S4: analyze the palm region positions of the multiple moments to obtain the relative position of the palm region over the preset duration, and recognize the corresponding gesture operation from the relative position of the palm region;
Step S5: compare the recognized gesture operation with multiple predefined gestures, obtain the matching predefined gesture, and call the callback function corresponding to the matching predefined gesture to complete the corresponding control action, wherein the predefined gestures, callback functions, and control actions are in one-to-one correspondence.
2. The gesture recognition method based on a monocular camera of claim 1, characterized in that in step S1, before the monocular camera captures video, the monocular camera is initialized and its resolution is set.
3. The gesture recognition method based on a monocular camera of claim 1, characterized in that in step S2, performing image feature analysis on the video frames comprises the following steps:
Extract the image features of the video frame;
Use a classifier to filter the image features, and judge from the filtering result whether a palm region picture exists in the video; if it exists, perform step S3, otherwise discard the video frame.
4. The gesture recognition method based on a monocular camera of claim 3, characterized by further comprising, before the classifier filters the image features, the steps of: obtaining sample data, training on the sample data to select the classifier, and initializing the classifier, wherein the sample data comprises positive sample pictures and negative sample pictures, the positive sample pictures being picture samples that contain a palm region and the negative sample pictures being picture samples that do not contain a palm region.
5. The gesture recognition method based on a monocular camera of claim 4, characterized in that obtaining the positive sample pictures comprises the following steps:
Manually screen, from a set of preselected sample pictures, the pictures that contain a palm region, mark the region where the palm is located on each picture, and save the annotation information to obtain the positive sample pictures.
6. The gesture recognition method based on a monocular camera of claim 5, characterized in that obtaining the negative sample pictures comprises the following steps:
Manually screen, from the set of preselected sample pictures, background pictures that do not contain a palm region and that occur in practical applications;
Split the background pictures to match the size of the positive sample pictures;
Remove the repetitive features from the background pictures to obtain the negative sample pictures.
7. The gesture recognition method based on a monocular camera of claim 6, characterized in that training on the sample data to select the classifier comprises the following steps:
Extract the image features of the positive sample pictures and the negative sample pictures respectively;
Merge the image features of the positive and negative sample pictures to generate a training sample set;
Extract a detection sample set from the training sample set;
Train the classifier with the training sample set, and test the trained classifier with the detection sample set to obtain the accuracy of the current classifier;
Iteratively adjust the training parameters to obtain the accuracy of multiple classifiers, and choose the classifier with the highest accuracy as the classifier used in step S2.
8. The gesture recognition method based on a monocular camera of claim 1, characterized by further comprising the step of regularly updating and deleting the data in the global cache.
9. The gesture recognition method based on a monocular camera of claim 1, characterized in that in step S3, the palm region picture corresponding to the palm region position is copied into the global cache, and a globally unique UUID is generated for the palm region picture and used as its name.
CN201510738071.6A 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera Active CN105354551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510738071.6A CN105354551B (en) 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510738071.6A CN105354551B (en) 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera

Publications (2)

Publication Number Publication Date
CN105354551A true CN105354551A (en) 2016-02-24
CN105354551B CN105354551B (en) 2019-07-16

Family

ID=55330519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510738071.6A Active CN105354551B (en) 2015-11-03 2015-11-03 Gesture recognition method based on monocular camera

Country Status (1)

Country Link
CN (1) CN105354551B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450715A (en) * 2016-05-31 2017-12-08 大唐电信科技股份有限公司 A kind of man-machine interaction multifunctional wrist strap terminal based on gesture identification
CN108121350A (en) * 2016-11-29 2018-06-05 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus for controlling aircraft landing
CN108520228A (en) * 2018-03-30 2018-09-11 百度在线网络技术(北京)有限公司 Gesture matching process and device
CN108830148A (en) * 2018-05-04 2018-11-16 北京汽车集团有限公司 Traffic gesture identification method, device, computer readable storage medium and vehicle
CN109145803A (en) * 2018-08-14 2019-01-04 京东方科技集团股份有限公司 Gesture identification method and device, electronic equipment, computer readable storage medium
CN110164060A (en) * 2019-05-23 2019-08-23 哈尔滨拓博科技有限公司 A kind of gestural control method, storage medium and doll machine for doll machine
CN111007806A (en) * 2018-10-08 2020-04-14 珠海格力电器股份有限公司 Smart home control method and device
CN112203015A (en) * 2020-09-28 2021-01-08 北京小米松果电子有限公司 Camera control method, device and medium system
CN112686169A (en) * 2020-12-31 2021-04-20 深圳市火乐科技发展有限公司 Gesture recognition control method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930270A (en) * 2012-09-19 2013-02-13 东莞中山大学研究院 Method and system for identifying hands based on complexion detection and background elimination
CN103530613A (en) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Target person hand gesture interaction method based on monocular video sequence
CN103593680A (en) * 2013-11-19 2014-02-19 南京大学 Dynamic hand gesture recognition method based on self incremental learning of hidden Markov model
CN104992171A (en) * 2015-08-04 2015-10-21 易视腾科技有限公司 Method and system for gesture recognition and man-machine interaction based on 2D video sequence



Also Published As

Publication number Publication date
CN105354551B (en) 2019-07-16

Similar Documents

Publication Publication Date Title
CN105354551A (en) Gesture recognition method based on monocular camera
US20170285916A1 (en) Camera effects for photo story generation
CN113095124A (en) Face living body detection method and device and electronic equipment
EP3001354A1 (en) Object detection method and device for online training
CN105956059A (en) Emotion recognition-based information recommendation method and apparatus
CN107771391B (en) Method and apparatus for determining exposure time of image frame
US10395091B2 (en) Image processing apparatus, image processing method, and storage medium identifying cell candidate area
CN103425257B (en) A kind of reminding method of uncommon character information and device
EP3293699A1 (en) Method, system for removing background of a video, and a computer-readable storage device
WO2018000643A1 (en) Method and device for sorting photographs
CN103140862A (en) User interface system and method of operation thereof
CN107944427A (en) Dynamic human face recognition methods and computer-readable recording medium
CN111061898A (en) Image processing method, image processing device, computer equipment and storage medium
CN104077597B (en) Image classification method and device
CN105512255A (en) Picture screening method and device and mobile terminal
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
CN115699082A (en) Defect detection method and device, storage medium and electronic equipment
KR102075111B1 (en) Ui function test system and method
US20220044147A1 (en) Teaching data extending device, teaching data extending method, and program
CN110363190A (en) A kind of character recognition method, device and equipment
CN113516113A (en) Image content identification method, device, equipment and storage medium
CN109359029A (en) It is a kind of automate non-intrusion type Android apply accessible support detection method
CN113050860A (en) Control identification method and related device
CN106777071B (en) Method and device for acquiring reference information by image recognition
CN109961403A (en) Method of adjustment, device, storage medium and the electronic equipment of photo

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant