CN107846555A - Automatic shooting method, device, user terminal and computer-readable storage medium based on gesture identification - Google Patents
- Publication number
- CN107846555A CN107846555A CN201711080252.XA CN201711080252A CN107846555A CN 107846555 A CN107846555 A CN 107846555A CN 201711080252 A CN201711080252 A CN 201711080252A CN 107846555 A CN107846555 A CN 107846555A
- Authority
- CN
- China
- Prior art keywords
- gesture
- feature
- automatic shooting
- face
- identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
Abstract
The invention provides an automatic shooting method based on gesture recognition, comprising: capturing image information within the field of view of an imaging unit; identifying a face frame from the image information; extracting gesture features within a preset range of the face frame; judging whether the gesture features match a preset gesture action; and, when the gesture features match a preset gesture action, triggering shooting. The invention also provides an automatic shooting device based on gesture recognition. The method effectively reduces false detections, eliminates cases where gestures are detected in non-gesture regions, and greatly improves gesture detection accuracy.
Description
Technical field
The invention belongs to the field of camera technology, and in particular relates to an automatic shooting method and device based on gesture recognition, a user terminal, and a computer-readable storage medium.
Background technology
Gesture-based automatic shooting allows a user to take a photo simply by making a gesture, without manually pressing the camera's shutter button, thereby achieving true automation. However, current gesture-based shooting triggers whenever a preset gesture action is detected, regardless of whether the gesture was made by the photographer. As a result, the camera is prone to accidental shots, which severely exhausts its storage space and degrades the user experience.
Summary of the invention
In view of the above deficiencies in the prior art, the object of the present invention is to provide an automatic shooting method and device based on gesture recognition, a user terminal, and a computer-readable storage medium, so as to overcome the shortcomings of the prior art.
Specifically, the present invention provides the following embodiments:
An embodiment of the invention provides an automatic shooting method based on gesture recognition, comprising:
capturing image information within the field of view of an imaging unit;
identifying a face frame from the image information;
extracting gesture features within a preset range of the face frame;
judging whether the gesture features match a preset gesture action;
when the gesture features match a preset gesture action, triggering shooting.
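The five steps above can be sketched as one pass of a hypothetical control loop. All callables here are illustrative placeholders supplied by the caller, not names or interfaces from the patent:

```python
def auto_shoot(capture, detect_face, extract_gesture, is_preset_gesture):
    """One illustrative pass of the claimed pipeline.

    capture: returns one frame from the imaging unit (S101)
    detect_face: frame -> face box or None (S102)
    extract_gesture: (frame, face box) -> gesture features near the face (S103)
    is_preset_gesture: features -> bool, match against the preset action (S104)
    Returns True when shooting would be triggered (S105).
    """
    frame = capture()
    face_box = detect_face(frame)
    if face_box is None:
        return False  # no face: keep monitoring instead of matching gestures
    features = extract_gesture(frame, face_box)
    if is_preset_gesture(features):
        return True   # S105: trigger shooting
    return False
```

Note that the face-gating happens before any gesture matching, which is the core of the claimed false-detection reduction.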
As a further improvement of the above technical solution, the preset gesture action is obtained through autonomous learning; the learning process includes:
collecting a gesture sample set;
extracting features from the sample set;
training a classifier with the extracted features.
As a further improvement of the above technical solution, the features of the sample set are histogram-of-oriented-gradients (HOG) features.
As a further improvement of the above technical solution, the gesture features are extracted using a gesture segmentation algorithm based on skin color detection.
As a further improvement of the above technical solution, the skin color detection uses the RGB color space, and the three RGB color values satisfy condition 1: {R>95, G>40, B>20, max{R,G,B}-min{R,G,B}>15, |R-G|>15, R>G, R>B} or condition 2: {R>220, G>210, B>170, |R-G|≤15, R>B, G>B}.
An embodiment of the present invention also provides an automatic shooting device based on gesture recognition, comprising:
an acquisition module, for capturing image information within the field of view of an imaging unit;
a face recognition module, for identifying a face frame from the image information;
a gesture segmentation module, for extracting gesture features within a preset range of the face frame;
a judgment module, for judging whether the gesture features match a preset gesture action;
a shooting module, for triggering shooting when the gesture features match a preset gesture action.
As a further improvement of the above technical solution, the preset gesture action is obtained through autonomous learning; the learning process includes:
collecting a gesture sample set;
extracting features from the sample set;
training a classifier with the extracted features.
As a further improvement of the above technical solution, the features of the sample set are histogram-of-oriented-gradients (HOG) features.
As a further improvement of the above technical solution, the gesture segmentation module extracts gesture features using a gesture segmentation algorithm based on skin color detection; the skin color detection uses the RGB color space, and the three RGB color values satisfy condition 1: {R>95, G>40, B>20, max{R,G,B}-min{R,G,B}>15, |R-G|>15, R>G, R>B} or condition 2: {R>220, G>210, B>170, |R-G|≤15, R>B, G>B}.
An embodiment of the present invention also provides a user terminal comprising a memory and a processor, the memory storing a program that supports the processor in executing the above method, and the processor being configured to execute the program stored in the memory.
An embodiment of the present invention also provides a computer-readable storage medium for storing the computer software instructions used by the above device.
Compared with the known prior art, the technical solution provided by the present invention has at least the following advantages: applying face detection technology to gesture recognition effectively reduces false detections and improves the correct detection rate; it eliminates cases where gestures are detected in non-gesture regions and greatly improves gesture detection accuracy.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope; those of ordinary skill in the art may derive other related drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an automatic shooting method based on gesture recognition according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an automatic shooting device based on gesture recognition according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a user terminal according to an embodiment of the present invention.
Description of main element symbols:
101 - acquisition module; 102 - face recognition module; 103 - gesture segmentation module; 104 - judgment module; 105 - shooting module; 10 - memory; 11 - processor.
Detailed description of the embodiments
Hereinafter, the various embodiments of the present disclosure are described more fully. The disclosure can have various embodiments, and adjustments and changes can be made within them. It should be understood, however, that there is no intention to limit the scope of the disclosure to the specific embodiments disclosed herein; rather, the disclosure should be interpreted as covering all adjustments, equivalents, and/or alternatives falling within the spirit and scope of its various embodiments.
Hereinafter, the terms "comprising" or "may comprise" used in the various embodiments of the disclosure indicate the presence of the disclosed functions, operations, or elements, and do not preclude the addition of one or more functions, operations, or elements. Furthermore, as used in the various embodiments of the disclosure, the terms "comprising", "having", and their cognates are meant only to denote particular features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be understood as excluding the presence or possible addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Expressions such as "first" and "second" used in the various embodiments of the disclosure may modify various elements in the various embodiments, but do not limit the corresponding elements. For example, they do not limit the order and/or importance of the elements; they serve only to distinguish one element from another. For instance, a first user device and a second user device denote different user devices, although both are user devices. Likewise, without departing from the scope of the various embodiments of the disclosure, a first element may be called a second element, and similarly, a second element may be called a first element.
It should be noted that if one element is described as being "connected" to another element, the first element may be directly connected to the second element, or a third element may be "connected" between the first and second elements. Conversely, when one element is "directly connected" to another element, it is to be understood that no third element exists between the first and second elements.
The terms used in the various embodiments of the disclosure serve only to describe specific embodiments and are not intended to limit the various embodiments of the disclosure. Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meanings as those commonly understood by a person skilled in the art of the various embodiments of the disclosure. Such terms (such as those defined in commonly used dictionaries) are to be interpreted as having meanings consistent with their contextual meanings in the relevant technical field, and are not to be interpreted as having idealized or overly formal meanings unless expressly so defined in the various embodiments of the disclosure.
Embodiment 1
As shown in Fig. 1, an embodiment of the invention provides an automatic shooting method based on gesture recognition, comprising:
S101: capturing image information within the field of view of an imaging unit.
In this embodiment, image information within the field of view of the imaging unit is captured in real time, and the captured image contains at least one person, so that the automatic shooting function is triggered by that person's gesture; the photographer does not need to touch the screen or press a physical button to shoot.
S102: identifying a face frame from the image information.
Face detection is performed on the image. If a face is detected, the next step of gesture feature extraction is carried out; otherwise, face detection continues on the image information obtained in real time.
Applying face detection technology to gesture recognition means that the user's face is detected first, then the user's gesture action is recognized, and the relevant shooting operation is performed according to that gesture. Adding face recognition effectively reduces the false detection rate and improves the correct detection rate; it eliminates cases where gestures are detected in non-gesture regions and greatly improves gesture detection accuracy.
The face frame may be a rectangular frame or an elliptical frame.
After a face is detected, the face region and its neighboring area are enlarged by a preset ratio, and gesture feature extraction is then performed within this region. When a single face is detected in the image, the enlargement is based on the boundary of the current face; when multiple faces are detected, the enlargement is based on a rectangular or elliptical frame covering all the faces. Enlarging the partial image around the face highlights the gesture feature information, which facilitates its extraction and recognition.
S103: extracting gesture features within the preset range of the face frame.
Gesture features are extracted within the preset range of the face frame because the user's gesture image is necessarily located within a preset range around the user's face image; the gesture image region is segmented within the preset range of the face frame, so that gesture features can be extracted and recognized quickly.
Limiting the range of gesture feature extraction to the preset range of the face frame helps increase recognition speed and reduce the recognition error rate.
Gesture features are extracted using a gesture segmentation algorithm based on skin color detection.
To facilitate gesture recognition, this embodiment extracts gesture features using a skin color detection method based on the RGB color model, which separates human skin regions (including the hand) from the background. Other skin color detection methods may be used in other embodiments. With the RGB-based method, the three RGB color values must satisfy condition 1: {R>95, G>40, B>20, max{R,G,B}-min{R,G,B}>15, |R-G|>15, R>G, R>B} or condition 2: {R>220, G>210, B>170, |R-G|≤15, R>B, G>B}.
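The two conditions translate directly into a per-pixel predicate; a minimal sketch:

```python
def is_skin(r, g, b):
    """Per-pixel skin test using the patent's two RGB conditions.

    Condition 1 targets skin under ordinary lighting; condition 2 targets
    skin washed out by strong light (high, nearly equal R and G).
    """
    cond1 = (r > 95 and g > 40 and b > 20
             and max(r, g, b) - min(r, g, b) > 15
             and abs(r - g) > 15 and r > g and r > b)
    cond2 = (r > 220 and g > 210 and b > 170
             and abs(r - g) <= 15 and r > b and g > b)
    return cond1 or cond2
```

In practice this predicate would be applied to every pixel of the enlarged face region, yielding a binary mask from which the hand region is segmented.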
With the skin-color-based gesture segmentation method, the background after segmentation is simpler and interference information is reduced. Because the background is simple, various complex scenes need not be considered, which greatly reduces the number of positive gesture image samples that must be collected for classifier training.
S104: judging whether the gesture features match a preset gesture action.
If the gesture features match a preset gesture action, the next step, automatic shooting, is performed; otherwise, the process returns to face recognition.
The preset gesture action is obtained through autonomous learning; the learning process includes:
A. Collecting a gesture sample set.
The gesture sample set includes positive and negative samples: a positive sample is an image containing the gesture, and a negative sample is an image not containing the gesture.
B. Extracting features from the sample set.
The extracted feature is a vector that describes the image. Here the feature is the HOG (Histogram of Oriented Gradients) feature. HOG is a feature descriptor used for object detection in image processing and computer vision; it forms the feature by computing and accumulating histograms of gradient orientations over local regions of the image.
C. Training a classifier with the extracted features.
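The patent does not name a specific classifier; as an illustration only, here is a tiny linear perceptron trained on feature vectors, a stand-in for the SVM-style classifiers typically paired with HOG features:

```python
import numpy as np

def train_classifier(features, labels, epochs=20, lr=0.1):
    """Train a linear perceptron on feature vectors.

    labels: +1 for positive (gesture) samples, -1 for negative samples.
    Returns the weight vector and bias of the learned decision boundary.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified sample: update the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

def predict(w, b, x):
    """Classify one feature vector: True if recognized as the preset gesture."""
    return float(np.asarray(x, dtype=float) @ w + b) > 0
```

At runtime, step S104 would feed the HOG vector of the segmented hand region into `predict` to decide whether to shoot.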
S105: when the gesture features match a preset gesture action, triggering shooting.
Embodiment 2
As shown in Fig. 2, an embodiment of the present invention also provides an automatic shooting device based on gesture recognition, comprising: an acquisition module 101, a face recognition module 102, a gesture segmentation module 103, a judgment module 104, and a shooting module 105.
The acquisition module 101 is used for capturing image information within the field of view of an imaging unit;
the face recognition module 102 is used for identifying a face frame from the image information;
the gesture segmentation module 103 is used for extracting gesture features within a preset range of the face frame;
the judgment module 104 is used for judging whether the gesture features match a preset gesture action;
the shooting module 105 is used for triggering shooting when the gesture features match a preset gesture action.
The acquisition module 101 captures image information within the field of view of the imaging unit in real time, and the captured image contains at least one person, so that the automatic shooting function is triggered by that person's gesture; the photographer does not need to touch the screen or press a physical button to shoot.
The face recognition module 102 performs face detection on the image. If a face is detected, the next step of gesture feature extraction is carried out; otherwise, face detection continues on the image information obtained in real time.
Applying face detection technology to gesture recognition means that the user's face is detected first, then the user's gesture action is recognized, and the relevant shooting operation is performed according to that gesture. Adding face recognition effectively reduces the false detection rate and improves the correct detection rate; it eliminates cases where gestures are detected in non-gesture regions and greatly improves gesture detection accuracy.
After a face is detected, the face region and its neighboring area are enlarged by a preset ratio, and gesture feature extraction is then performed within this region. When a single face is detected in the image, the enlargement is based on the boundary of the current face; when multiple faces are detected, the enlargement is based on a rectangular or elliptical frame covering all the faces. Enlarging the partial image around the face highlights the gesture feature information, which facilitates its extraction and recognition.
The gesture segmentation module 103 extracts gesture features within the preset range of the face frame. Because the user's gesture image is necessarily located within a preset range around the user's face image, the gesture image region is segmented within the preset range of the face frame, so that gesture features can be extracted and recognized quickly.
Limiting the range of gesture feature extraction to the preset range of the face frame helps increase recognition speed and reduce the recognition error rate.
Gesture features are extracted using a gesture segmentation algorithm based on skin color detection.
To facilitate gesture recognition, this embodiment extracts gesture features using a skin color detection method based on the RGB color model, which separates human skin regions (including the hand) from the background. Other skin color detection methods may be used in other embodiments. With the RGB-based method, the three RGB color values must satisfy condition 1: {R>95, G>40, B>20, max{R,G,B}-min{R,G,B}>15, |R-G|>15, R>G, R>B} or condition 2: {R>220, G>210, B>170, |R-G|≤15, R>B, G>B}.
With the skin-color-based gesture segmentation method, the background after segmentation is simpler and interference information is reduced. Because the background is simple, various complex scenes need not be considered, which greatly reduces the number of positive gesture image samples that must be collected for classifier training.
The judgment module 104 judges whether the gesture features match a preset gesture action. If so, the next step, automatic shooting, is performed; otherwise, the process returns to face recognition.
As shown in Fig. 3, an embodiment of the present invention also provides a user terminal comprising a memory 10 and a processor 11. The memory 10 stores a program that supports the processor 11 in executing the method of Embodiment 1, and the processor 11 is configured to execute the program stored in the memory 10.
An embodiment of the present invention also provides a computer-readable storage medium for storing the computer software instructions used by the device in Embodiment 2.
Those skilled in the art will appreciate that the accompanying drawings are schematic diagrams of a preferred implementation scenario, and that the modules or flows in the drawings are not necessarily required for implementing the present invention.
Those skilled in the art will also appreciate that the modules in the devices of an implementation scenario may be distributed among the devices of that scenario as described, or may be relocated, with corresponding changes, to one or more devices other than those of the scenario. The modules of the above implementation scenario may be merged into one module or further split into multiple sub-modules.
The above sequence numbers are for description only and do not indicate the relative merits of the implementation scenarios. The above disclosure presents only several specific implementation scenarios of the present invention; however, the invention is not limited thereto, and any variation conceivable to a person skilled in the art shall fall within the protection scope of the present invention.
Claims (10)
- 1. An automatic shooting method based on gesture recognition, characterized by comprising: capturing image information within the field of view of an imaging unit; identifying a face frame from the image information; extracting gesture features within a preset range of the face frame; judging whether the gesture features match a preset gesture action; and, when the gesture features match a preset gesture action, triggering shooting.
- 2. The automatic shooting method based on gesture recognition according to claim 1, characterized in that the preset gesture action is obtained through autonomous learning, the learning process comprising: collecting a gesture sample set; extracting features from the sample set; and training a classifier with the extracted features.
- 3. The automatic shooting method based on gesture recognition according to claim 2, characterized in that the features of the sample set are histogram-of-oriented-gradients features.
- 4. The automatic shooting method based on gesture recognition according to claim 1, characterized in that the gesture features are extracted using a gesture segmentation algorithm based on skin color detection.
- 5. The automatic shooting method based on gesture recognition according to claim 4, characterized in that the skin color detection uses the RGB color space, the three RGB color values satisfying condition 1: {R>95,G>40,B>20,max{R,G,B}-min{R,G,B}>15,|R-G|>15,R>G,R>B} or condition 2: {R>220,G>210,B>170,|R-G|≤15,R>B,G>B}.
- 6. An automatic shooting device based on gesture recognition, characterized by comprising: an acquisition module, for capturing image information within the field of view of an imaging unit; a face recognition module, for identifying a face frame from the image information; a gesture segmentation module, for extracting gesture features within a preset range of the face frame; a judgment module, for judging whether the gesture features match a preset gesture action; and a shooting module, for triggering shooting when the gesture features match a preset gesture action.
- 7. The automatic shooting device based on gesture recognition according to claim 6, characterized in that the preset gesture action is obtained through autonomous learning, the learning process comprising: collecting a gesture sample set; extracting features from the sample set; and training a classifier with the extracted features.
- 8. The automatic shooting device based on gesture recognition according to claim 6, characterized in that the gesture segmentation module extracts gesture features using a gesture segmentation algorithm based on skin color detection, the skin color detection using the RGB color space, the three RGB color values satisfying condition 1: {R>95,G>40,B>20,max{R,G,B}-min{R,G,B}>15,|R-G|>15,R>G,R>B} or condition 2: {R>220,G>210,B>170,|R-G|≤15,R>B,G>B}.
- 9. A user terminal, characterized in that the user terminal comprises a memory and a processor, the memory storing a program that supports the processor in executing the method of any one of claims 1 to 5, and the processor being configured to execute the program stored in the memory.
- 10. A computer-readable storage medium, characterized by storing the computer software instructions used by the device of any one of claims 6 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711080252.XA CN107846555A (en) | 2017-11-06 | 2017-11-06 | Automatic shooting method, device, user terminal and computer-readable storage medium based on gesture identification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711080252.XA CN107846555A (en) | 2017-11-06 | 2017-11-06 | Automatic shooting method, device, user terminal and computer-readable storage medium based on gesture identification |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107846555A true CN107846555A (en) | 2018-03-27 |
Family
ID=61681804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711080252.XA Pending CN107846555A (en) | 2017-11-06 | 2017-11-06 | Automatic shooting method, device, user terminal and computer-readable storage medium based on gesture identification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107846555A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738118A (en) * | 2019-09-16 | 2020-01-31 | 平安科技(深圳)有限公司 | Gesture recognition method and system, management terminal and computer readable storage medium |
CN111062312A (en) * | 2019-12-13 | 2020-04-24 | RealMe重庆移动通信有限公司 | Gesture recognition method, gesture control method, device, medium and terminal device |
CN111901681A (en) * | 2020-05-04 | 2020-11-06 | 东南大学 | Intelligent television control device and method based on face recognition and gesture recognition |
CN112565602A (en) * | 2020-11-30 | 2021-03-26 | 北京地平线信息技术有限公司 | Method and apparatus for controlling image photographing apparatus, and computer-readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509079A (en) * | 2011-11-04 | 2012-06-20 | 康佳集团股份有限公司 | Real-time gesture tracking method and tracking system |
CN102592115A (en) * | 2011-12-26 | 2012-07-18 | Tcl集团股份有限公司 | Hand positioning method and system |
CN103530607A (en) * | 2013-09-30 | 2014-01-22 | 智慧城市系统服务(中国)有限公司 | Method and device for hand detection and hand recognition |
CN104700088A (en) * | 2015-03-23 | 2015-06-10 | 南京航空航天大学 | Gesture track recognition method based on monocular vision motion shooting |
CN106454071A (en) * | 2016-09-09 | 2017-02-22 | 捷开通讯(深圳)有限公司 | Terminal and automatic shooting method based on gestures |
CN106909884A (en) * | 2017-01-17 | 2017-06-30 | 北京航空航天大学 | A kind of hand region detection method and device based on hierarchy and deformable part sub-model |
- 2017-11-06: CN application CN201711080252.XA filed; published as patent CN107846555A; status Pending.
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509079A (en) * | 2011-11-04 | 2012-06-20 | 康佳集团股份有限公司 | Real-time gesture tracking method and tracking system |
CN102592115A (en) * | 2011-12-26 | 2012-07-18 | Tcl集团股份有限公司 | Hand positioning method and system |
CN103530607A (en) * | 2013-09-30 | 2014-01-22 | 智慧城市系统服务(中国)有限公司 | Method and device for hand detection and hand recognition |
CN104700088A (en) * | 2015-03-23 | 2015-06-10 | 南京航空航天大学 | Gesture track recognition method based on monocular vision motion shooting |
CN106454071A (en) * | 2016-09-09 | 2017-02-22 | 捷开通讯(深圳)有限公司 | Terminal and automatic shooting method based on gestures |
CN106909884A (en) * | 2017-01-17 | 2017-06-30 | 北京航空航天大学 | A kind of hand region detection method and device based on hierarchy and deformable part sub-model |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738118A (en) * | 2019-09-16 | 2020-01-31 | 平安科技(深圳)有限公司 | Gesture recognition method and system, management terminal and computer readable storage medium |
CN110738118B (en) * | 2019-09-16 | 2023-07-07 | 平安科技(深圳)有限公司 | Gesture recognition method, gesture recognition system, management terminal and computer readable storage medium |
CN111062312A (en) * | 2019-12-13 | 2020-04-24 | RealMe重庆移动通信有限公司 | Gesture recognition method, gesture control method, device, medium and terminal device |
WO2021115181A1 (en) * | 2019-12-13 | 2021-06-17 | RealMe重庆移动通信有限公司 | Gesture recognition method, gesture control method, apparatuses, medium and terminal device |
CN111062312B (en) * | 2019-12-13 | 2023-10-27 | RealMe重庆移动通信有限公司 | Gesture recognition method, gesture control device, medium and terminal equipment |
CN111901681A (en) * | 2020-05-04 | 2020-11-06 | 东南大学 | Intelligent television control device and method based on face recognition and gesture recognition |
CN111901681B (en) * | 2020-05-04 | 2022-09-30 | 东南大学 | Intelligent television control device and method based on face recognition and gesture recognition |
CN112565602A (en) * | 2020-11-30 | 2021-03-26 | 北京地平线信息技术有限公司 | Method and apparatus for controlling image photographing apparatus, and computer-readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10372226B2 (en) | Visual language for human computer interfaces | |
CN107846555A (en) | Automatic shooting method, device, user terminal and computer-readable storage medium based on gesture identification | |
US10318797B2 (en) | Image processing apparatus and image processing method | |
CN105608447B (en) | To the detection method of human body face smile expression depth convolutional neural networks | |
US10691940B2 (en) | Method and apparatus for detecting blink | |
CN108012081B (en) | Intelligent beautifying method, device, terminal and computer readable storage medium | |
CN104683692B (en) | A kind of continuous shooting method and device | |
CN109255324A (en) | Gesture processing method, interaction control method and equipment | |
CN106056064A (en) | Face recognition method and face recognition device | |
CN110458059A (en) | A kind of gesture identification method based on computer vision and identification device | |
CN109165589A (en) | Vehicle based on deep learning recognition methods and device again | |
CN111563435A (en) | Sleep state detection method and device for user | |
CN106033539A (en) | Meeting guiding method and system based on video face recognition | |
CN107368806A (en) | Image correction method, device, computer-readable recording medium and computer equipment | |
CN110415212A (en) | Abnormal cell detection method, device and computer readable storage medium | |
WO2018076484A1 (en) | Method for tracking pinched fingertips based on video | |
CN109711309A (en) | A kind of method whether automatic identification portrait picture closes one's eyes | |
CN111259757B (en) | Living body identification method, device and equipment based on image | |
CN110188722A (en) | A kind of method and terminal of local recognition of face image duplicate removal | |
Shanmugavadivu et al. | Rapid face detection and annotation with loosely face geometry | |
CN105988580A (en) | Screen control method and device of mobile terminal | |
EP3200092A1 (en) | Method and terminal for implementing image sequencing | |
CN112766028A (en) | Face fuzzy processing method and device, electronic equipment and storage medium | |
CN106650583A (en) | Face detection method, face detection device and terminal equipment | |
CN104063041B (en) | A kind of information processing method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
20180327 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180327 |