CN104850234A - Unmanned plane control method and unmanned plane control system based on facial expression recognition
- Publication number: CN104850234A (application CN201510280895.3A)
- Authority: CN (China)
- Prior art keywords: image, expression, unmanned plane, expression recognition, feature
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Abstract
The invention relates to the technical field combining unmanned aerial vehicle (UAV) control and computer vision, and discloses a UAV control method based on facial expression recognition. The method comprises the following steps: (1) collecting various human facial expressions as training sample images, extracting features of local regions to form feature groups, and performing classification training on the feature groups to obtain an expression recognizer based on facial expressions; (2) acquiring images within the viewing angle of an image acquisition device mounted on the UAV, judging whether a person is present in the image, starting the expression recognition step to obtain the person's face image if so, and continuing to acquire images otherwise; (3) recognizing the facial expression of the obtained face on the basis of the extracted feature information and the expression recognizer; and (4) translating the person's expression into a control command for the UAV. The method is convenient to use and improves the user's operating experience.
Description
Technical field
The present invention relates to the technical field combining unmanned aerial vehicles (UAVs) and computer vision, and in particular to a UAV control method and system based on expression recognition.
Background art
UAVs in the prior art come in many different types, each with a wide range of applications. Taking the rotor UAV as an example: it is an aircraft capable of vertical take-off and landing that is small, lightweight, stable in flight, inexpensive, and able to hover, and therefore has great application and research value. Because a UAV carries no pilot, its flight is usually controlled by remote control.
Remote control methods in common use today are customized handheld remote controllers and general-purpose mobile terminals (e.g., tablets and mobile phones). Their advantage is precise, real-time control of the UAV with high reliability; their disadvantages are the extra cost, the inconvenience of carrying additional equipment, and the loss of control that can occur when the hardware fails. It is therefore worthwhile, without increasing the hardware cost of existing UAVs, to study new UAV control methods, either as independent remote control solutions or as useful supplements to existing methods.
Summary of the invention
To address the technical problem that prior-art UAV control requires extra hardware, which raises cost and is inconvenient to carry, the invention discloses a UAV control method and system based on expression recognition.
A specific implementation of the present invention is as follows. A UAV control method based on expression recognition comprises the following steps. Step 1: collect various human facial expressions as training sample images, where each sample image comprises multiple local regions; extract the features of each local region to form feature groups; and perform classification training on the feature groups to obtain an expression recognizer based on facial expressions. Step 2: acquire images within the viewing angle of an image acquisition device mounted on the UAV and judge whether a person is present in the image; if so, start the expression recognition step to obtain the person's face image; otherwise continue to acquire images. Step 3: extract the feature information of the face image obtained in step 2 and recognize the facial expression of the face on the basis of the extracted feature information and the expression recognizer. Step 4: translate the person's expression into a control command for the UAV and send it to the UAV.
Further, step 1 above comprises two stages: feature extraction and classification training.
Further, the feature extraction described above comprises the following steps. Step 1: collect training sample images, where each sample image comprises 49 local regions and each expression is expressed by the feature group of the 49 local regions; extract the Haar feature of each local region. Step 2: classify with a weak classifier and compute the error rate of each group of local features. Step 3: select the n features with the lowest error rates to form a new combined feature set. Step 4: repeatedly add the remaining local features to the combined set, continuing to classify with the weak classifier, and keep a new local feature only if it reduces the error rate. Step 5: repeat step 4 until the error rate reaches an optimum or the number of features in the set reaches N. Step 6: record the combined feature set.
Further, the classification training described above uses the AdaBoost method to train the feature groups.
Further, the training of the feature groups described above comprises the following steps. Step 1: collect sample images and calibrate each sample. Step 2: find a group of combined features through feature extraction. Step 3: train a weak classifier and compute its error rate. Step 4: obtain the optimal classifier by iteration. Step 5: output the optimal classifier. Further, the method also comprises finding, according to the position of the face in the current frame, the best matching position in subsequent frames, and adjusting the viewing angle of the image acquisition device according to the matching result, so that the image acquisition device tracks the face.
Further, the image acquisition device described above is mounted on the UAV via a gimbal, and the gimbal can rotate the image acquisition device to any angle so as to capture the user's expression.
Further, the method described above also comprises, after the image acquisition device captures the person's expression, adjusting the rotation angle of the gimbal according to the change of the face's position in the image, so that the viewing angle of the image acquisition device always faces the person controlling the UAV.
Further, the UAV described above also comprises a changeover switch for switching the UAV's current control mode, where the control modes include remote controller control and/or mobile terminal control and/or gesture control and/or expression recognition control.
The invention also discloses a UAV control system based on expression recognition, which comprises an image acquisition device mounted on the UAV, an image processing module, an expression recognition module, and an expression translator. The image acquisition device acquires images within its viewing angle in real time. The image processing module judges from the acquired image whether a person is present; when a person is present it starts the expression recognition module, and otherwise it continues to acquire images. The expression recognition module comprises a feature extraction module and a classification training module. The feature extraction module collects various human facial expressions as training sample images, where each sample image comprises multiple local regions, and extracts the features of each local region to form feature groups. The classification training module performs classification training on the feature groups to obtain an expression recognizer based on facial expressions. The expression translator translates the person's expression in the image into a control command for the UAV and sends it to the UAV.
By adopting the above technical scheme, the present invention has the following beneficial effects. By arranging an image acquisition device (for example a camera, video camera, or high-speed camera) on the UAV, the invention can acquire images within the viewing angle in real time, analyze whether a person's control expression is present in the image and, if so, recognize it and translate it into a UAV control command, so that the UAV is controlled by the person's expression. No additional control hardware needs to be carried, which is convenient for the user and reduces the user's acquisition cost. Incorporating expression recognition into UAV control can control the UAV independently or assist a remote control device; it not only reduces UAV hardware cost but also increases the enjoyment of operating the UAV and strengthens the interaction between people and UAVs. Using feature extraction and classification training to train and recognize expression features improves the success rate of expression recognition.
Brief description of the drawings
To illustrate the technical schemes of the embodiments of the present invention more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should not be regarded as limiting its scope; those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative work.
Fig. 1 shows the processing steps after the UAV captures an image.
Fig. 2 is a flowchart of the implementation of the expression-based UAV control method.
Detailed description of the embodiments
To make the objects, technical schemes, and advantages of the embodiments of the present invention clearer, the technical schemes in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments described and illustrated in the drawings can generally be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
The invention discloses a UAV control method based on expression recognition, which comprises the following steps. Step 1: collect various human facial expressions as training sample images, where each sample image comprises multiple local regions; extract the features of each local region to form feature groups; and perform classification training on the feature groups to obtain an expression recognizer based on facial expressions. Step 2: acquire images within the viewing angle of an image acquisition device mounted on the UAV and judge whether a person is present in the image; if so, start the expression recognition step to obtain the person's face image; otherwise continue to acquire images. Step 3: extract the feature information of the face image obtained in step 2 and recognize the facial expression of the face on the basis of the extracted feature information and the expression recognizer. Step 4: translate the person's expression into a control command for the UAV and send it to the UAV. The present invention first collects sample images of multiple different facial expressions for training and divides the face into multiple regions, thereby obtaining feature values for multiple regions of multiple faces; these features are then combined into feature groups and used for classification training, yielding an expression recognizer based on facial action units. When a new facial expression is captured, its facial feature values are extracted and matched against the expression recognizer to obtain the facial expression, which is then mapped to a UAV control command to control the UAV. This is convenient for the user and improves the user's operating experience.
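The capture-detect-recognize-translate loop described above can be sketched as follows. All names (`detect_face`, `recognize_expression`, `control_step`) and the particular expression-to-command mapping are illustrative assumptions, not details taken from the patent:

```python
# Step 4 stand-in: which UAV command each recognized expression maps to
# (the mapping itself is an assumption for illustration).
EXPRESSION_TO_COMMAND = {
    "smile": "TAKE_OFF",
    "neutral": "HOVER",
    "frown": "LAND",
}

def detect_face(frame):
    """Step 2 stand-in: return a face crop if a person is in view, else None."""
    return frame.get("face")       # frames are plain dicts in this toy sketch

def recognize_expression(face):
    """Step 3 stand-in: a trained recognizer would classify the face here."""
    return face["expression"]

def control_step(frame):
    """One iteration of the capture -> detect -> recognize -> translate loop."""
    face = detect_face(frame)
    if face is None:               # no person in view: keep acquiring images
        return None
    expr = recognize_expression(face)
    return EXPRESSION_TO_COMMAND.get(expr, "HOVER")

commands = [control_step(f) for f in (
    {"face": {"expression": "smile"}},
    {},                            # empty frame: nobody in view
    {"face": {"expression": "frown"}},
)]
print(commands)                    # ['TAKE_OFF', None, 'LAND']
```

In a real system the dict-based frames would be replaced by camera images, and `detect_face` / `recognize_expression` by the trained detector and expression recognizer.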
Fig. 1 shows the processing steps after the UAV captures an image: it first captures video images, then performs face detection and localization, then recognizes the expression of the located face, and finally translates the expression into a control command.
Fig. 2 is a flowchart of the implementation of the expression-based UAV control method: multiple training sets are first obtained to form the expression recognizer, which is then used to recognize the captured expression.
Further, step 1 above comprises two stages: feature extraction and classification training. The feature extraction comprises the following steps. Step 1: collect training sample images. Each sample image comprises 49 local regions, and each local region contains a motion unit. Facial action units are characteristics unique to facial expressions and carry a great deal of information (for example: raised mouth corners, a wrinkled nose, lowered eyebrows). Because action units are difficult to recognize accurately, many expression recognition methods do not apply them. Compared with classical features (Gabor, Haar, and LBP features), facial action units describe different information: Gabor-like features focus on low-level properties of the image itself, such as pixel gray level, contrast, and texture, some of which have no relation to expression classification, whereas facial action units focus on describing the expression-related information in the image. Determining the features that distinguish expression classes through facial action units therefore helps the classifier recognize facial expressions. Each expression is expressed by the feature group of the 49 local regions, and the Haar feature of each local region is extracted; common expressions such as happiness, sadness, anger, surprise, disgust, and fear can all be expressed by combinations of the features of these 49 regions. Step 2: classify with a weak classifier and compute the error rate of each group of local features. (The purpose here is feature integration: each integrated feature group corresponds to a group of facial action units. Because a facial expression consists of a series of facial action units, each combined feature can be regarded as an action-unit-based feature of some expression class. The method seeks an integrated feature group that represents a group of facial action units; features extracted with Haar generally deviate from the true action-unit features, and the lower the error rate, the better a feature matches an action unit, so such features are retained.) Step 3: select the n features with the lowest error rates to form a new combined feature set (n is a threshold that is adjusted repeatedly during training to reach the best result). Step 4: repeatedly add the remaining local features to the combined set, continuing to classify with the weak classifier, and keep a new local feature only if it reduces the error rate. Step 5: repeat step 4 until the error rate reaches an optimum or the number of features in the set reaches N (N is the sample size). Step 6: record the combined feature set. The above method extracts the optimal combined features of the training samples.
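Steps 2 through 6 above amount to a greedy forward selection over local features. A minimal sketch, assuming an abstract `error_rate` function as a stand-in for the weak-classifier evaluation (all function and variable names here are hypothetical):

```python
def greedy_combine(features, error_rate, n, N):
    """Greedily build a combined feature set (steps 2-6 above).

    features: iterable of local-feature identifiers.
    error_rate: maps a set of features to a classification error in [0, 1]
                (stand-in for running the weak classifier).
    n: size of the initial lowest-error seed set (step 3).
    N: upper bound on the combined set's size (step 5).
    """
    ranked = sorted(features, key=lambda f: error_rate({f}))  # step 2
    combined = set(ranked[:n])                                # step 3
    for f in ranked[n:]:                                      # step 4
        if len(combined) >= N:                                # step 5
            break
        if error_rate(combined | {f}) < error_rate(combined):
            combined.add(f)        # keep only features that reduce error
    return combined                # step 6: record the combined set

# Toy demonstration: features 0-2 are "informative" and lower the error.
informative = {0, 1, 2}
err = lambda s: 0.5 - 0.1 * len(s & informative)
print(sorted(greedy_combine(range(6), err, n=1, N=5)))  # [0, 1, 2]
```

The seed size `n` and the cap `N` play exactly the roles the text assigns them: `n` fixes the initial lowest-error group and `N` stops the growth of the set.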
Further, the classification training described above uses the AdaBoost method to train the feature groups, so as to find the feature groups that are truly suitable for expression classification.
Further, the training of the feature groups described above comprises the following steps. Step 1: collect training sample images and calibrate each sample (for example, each sample is given a weight label: 1 for a positive sample and -1 for a negative sample). Step 2: find a group of combined features through feature extraction (the feature extraction above effectively builds the model, which is then trained here). Step 3: train a weak classifier and compute its error rate (like the error rate of the local features above, this measures how well a feature fits, but for a different kind of feature). Step 4: obtain the optimal classifier by iteration. Step 5: output the optimal classifier. Through training and screening, the combinations based on facial action units (that is, the features that can represent expressions) can be found, thereby realizing facial expression recognition.
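The five training steps above can be sketched as a small discrete-AdaBoost loop over one-dimensional threshold "stump" weak classifiers, with the ±1 labels mirroring the positive/negative sample calibration of step 1. This is a simplified illustration under stated assumptions, not the patent's implementation:

```python
import math

def stump(x, t, s):
    """Threshold weak classifier: predict +s above t, -s otherwise."""
    return s if x > t else -s

def adaboost(xs, ys, rounds=3):
    """Discrete AdaBoost over 1-D samples xs with labels ys in {+1, -1}."""
    n = len(xs)
    w = [1.0 / n] * n                       # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        # step 3: pick the stump with the lowest weighted error rate
        err, t, s = min(
            (sum(wi for wi, x, y in zip(w, xs, ys) if stump(x, t, s) != y), t, s)
            for t in xs for s in (1, -1)
        )
        err = min(max(err, 1e-10), 1 - 1e-10)        # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)      # weak-classifier weight
        # step 4: re-weight samples, emphasizing the misclassified ones
        w = [wi * math.exp(-alpha * y * stump(x, t, s))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, t, s))
    return ensemble                          # step 5: the strong classifier

def predict(ensemble, x):
    """Weighted vote of the weak classifiers (the strong classifier)."""
    return 1 if sum(a * stump(x, t, s) for a, t, s in ensemble) > 0 else -1

model = adaboost([0.0, 1.0, 2.0, 3.0], [-1, -1, 1, 1])
print([predict(model, x) for x in (0.5, 2.5)])  # [-1, 1]
```

A real recognizer would boost over the combined Haar feature groups rather than raw 1-D values, but the weighting and iteration scheme is the same.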
In addition, feature extraction can also use Gabor features, LBP features, and SIFT descriptors, and alternative classification methods include SVMs and neural networks. The method of the present invention can learn a strong classifier from many weak classifiers: a weak classifier is generally only slightly better than random guessing, whereas the strong classifier is almost entirely correct.
Further, the method described above also comprises finding, according to the position of the face in the current frame, the best matching position in subsequent frames, and adjusting the viewing angle of the image acquisition device according to the matching result, so that the image acquisition device tracks the face. Face recognition and tracking can use mean-shift combined with skin color segmentation.
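The mean-shift tracking mentioned above can be sketched in a few lines: starting from the face's previous position, a search window is repeatedly moved to the centroid of the skin-colored pixels it contains until it stops moving. Here the skin mask is an abstract set of (x, y) pixels; a real system would obtain it by skin color segmentation of the frame. All names and parameter values are illustrative assumptions:

```python
def mean_shift(skin_pixels, start, radius=3, iters=20, eps=0.5):
    """Move a window of the given radius to the local centroid of the mask."""
    cx, cy = start
    for _ in range(iters):
        window = [(x, y) for x, y in skin_pixels
                  if abs(x - cx) <= radius and abs(y - cy) <= radius]
        if not window:
            break                          # target lost: keep last position
        nx = sum(x for x, _ in window) / len(window)
        ny = sum(y for _, y in window) / len(window)
        if abs(nx - cx) < eps and abs(ny - cy) < eps:
            return nx, ny                  # converged on the face region
        cx, cy = nx, ny
    return cx, cy

# A 3x3 "skin" blob centered at (10, 10); start the search nearby.
blob = [(x, y) for x in range(9, 12) for y in range(9, 12)]
print(mean_shift(blob, (8, 8)))  # (10.0, 10.0)
```

The converged window center gives the face's new position, which is what the viewing-angle adjustment in the text is driven by.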
Further, the image acquisition device described above is mounted on the UAV via a gimbal, and the gimbal can rotate the image acquisition device to any angle so as to capture the user's expression.
Further, after the image acquisition device captures the person's expression, the rotation angle of the gimbal is adjusted according to the change of the face's position in the image, so that the viewing angle of the image acquisition device always faces the person controlling the UAV.
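The camera-platform (gimbal) adjustment described above can be sketched as a proportional correction: the further the face drifts from the image center, the larger the rotation toward it. The field-of-view value, the gain, and all names are illustrative assumptions, not values from the patent:

```python
FOV_DEG = 60.0   # assumed camera field of view, degrees
GAIN = 0.5       # assumed proportional gain

def gimbal_correction(face_center, image_size):
    """Return (pan, tilt) adjustments in degrees for one frame."""
    (fx, fy), (w, h) = face_center, image_size
    off_x = (fx - w / 2) / (w / 2)        # normalized offset in [-1, 1]
    off_y = (fy - h / 2) / (h / 2)
    pan = GAIN * off_x * (FOV_DEG / 2)    # rotate toward the face
    tilt = -GAIN * off_y * (FOV_DEG / 2)  # image y axis points down
    return pan, tilt

# Face to the right of center in a 640x480 frame: pan right, no tilt.
print(gimbal_correction((480, 240), (640, 480)))  # (7.5, -0.0)
```

Applying such a correction every frame keeps the face near the image center, which is the behavior the text requires of the gimbal.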
Further, the UAV described above also comprises a changeover switch for switching the UAV's current control mode, where the control modes include remote controller control and/or mobile terminal control and/or gesture control and/or expression recognition control.
The invention also discloses a UAV control system based on expression recognition, which comprises an image acquisition device mounted on the UAV, an image processing module, an expression recognition module, and an expression translator. The image acquisition device acquires images within its viewing angle in real time. The image processing module judges from the acquired image whether a person is present; when a person is present it starts the expression recognition module, and otherwise it continues to acquire images. The expression recognition module comprises a feature extraction module and a classification training module. The feature extraction module collects various human facial expressions as training sample images, where each sample image comprises multiple local regions, and extracts the features of each local region to form feature groups. The classification training module performs classification training on the feature groups to obtain an expression recognizer based on facial expressions. The expression translator translates the person's expression in the image into a control command for the UAV and sends it to the UAV. By arranging an image acquisition device (for example a camera) on the UAV, the invention acquires in real time the images within the camera's viewing angle and analyzes whether a person's control expression is present; if so, the expression is recognized and translated into a UAV control command, so that the UAV is controlled by the person's expression. No additional control hardware needs to be carried, which is convenient for the user and reduces the user's acquisition cost. Incorporating expression recognition into UAV control can control the UAV independently or assist a remote control device; it not only reduces UAV hardware cost but also increases the enjoyment of operating the UAV and strengthens the interaction between people and UAVs.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the system and devices described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, intended to illustrate rather than limit its technical schemes, and the scope of protection of the present invention is not limited to them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the art can, within the technical scope disclosed by the present invention, still modify the technical schemes described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some technical features; such modifications, changes, or replacements do not make the essence of the corresponding technical scheme depart from the spirit and scope of the technical schemes of the embodiments, and shall all be covered by the scope of protection of the present invention. The scope of protection of the present invention shall therefore be determined by the scope of protection of the claims.
Claims (10)
1. A UAV control method based on expression recognition, comprising the following steps: step 1, collecting various human facial expressions as training sample images, where each sample image comprises multiple local regions, extracting the features of each local region to form feature groups, and performing classification training on the feature groups to obtain an expression recognizer based on facial expressions; step 2, acquiring images within the viewing angle of an image acquisition device mounted on the UAV, judging whether a person is present in the image, starting the expression recognition step to obtain the person's face image if so, and continuing to acquire images otherwise; step 3, extracting the feature information of the face image obtained in step 2, and recognizing the facial expression of the face on the basis of the extracted feature information and the expression recognizer; and step 4, translating the person's expression in the image into a control command for the UAV, and sending it to the UAV.
2. The UAV control method based on expression recognition of claim 1, wherein said step 1 comprises two stages: feature extraction and classification training.
3. The UAV control method based on expression recognition of claim 2, wherein said feature extraction comprises the following steps: step 1, collecting training sample images, where each sample image comprises 49 local regions and each expression is expressed by the feature group of the 49 local regions, and extracting the Haar feature of each local region; step 2, classifying with a weak classifier, and computing the error rate of each group of local features; step 3, selecting the n features with the lowest error rates to form a new combined feature set; step 4, repeatedly adding the remaining local features to the combined set, continuing to classify with the weak classifier, and keeping a new local feature only if it reduces the error rate; step 5, repeating step 4 until the error rate reaches an optimum or the number of features in the set reaches N; and step 6, recording the new combined feature set.
4. The UAV control method based on expression recognition of claim 3, wherein said classification training uses the AdaBoost method to train the feature groups.
5. The UAV control method based on expression recognition of claim 4, wherein training the feature groups comprises the following steps: step 1, collecting sample images and calibrating each sample; step 2, finding a group of combined features through feature extraction; step 3, training a weak classifier and computing its error rate; step 4, obtaining the optimal classifier by iteration; and step 5, outputting the optimal classifier.
6. The UAV control method based on expression recognition of claim 1, wherein the method further comprises finding, according to the position of the face in the current frame, the best matching position in subsequent frames, and adjusting the viewing angle of the image acquisition device according to the matching result, so that the image acquisition device tracks the face.
7. The UAV control method based on expression recognition of claim 1, wherein said image acquisition device is mounted on the UAV via a gimbal, and the gimbal can rotate the image acquisition device to any angle so as to capture the user's expression.
8. The UAV control method based on expression recognition of claim 1, wherein the method further comprises, after the image acquisition device captures the person's expression, adjusting the rotation angle of the gimbal according to the change of the face's position in the image, so that the viewing angle of the image acquisition device always faces the person controlling the UAV.
9. The UAV control method based on expression recognition of claim 1, wherein the UAV further comprises a changeover switch for switching the UAV's current control mode, and the control modes include remote controller control and/or mobile terminal control and/or gesture control and/or expression recognition control.
10. A UAV control system based on expression recognition, comprising an image acquisition device mounted on the UAV, an image processing module, an expression recognition module, and an expression translator, wherein the image acquisition device acquires images within its viewing angle in real time; the image processing module judges from the acquired image whether a person is present, starts the expression recognition module when a person is present, and otherwise continues to acquire images; the expression recognition module comprises a feature extraction module and a classification training module; the feature extraction module collects various human facial expressions as training sample images, where each sample image comprises multiple local regions, and extracts the features of each local region to form feature groups; the classification training module performs classification training on the feature groups to obtain an expression recognizer based on facial expressions; and the expression translator translates the person's expression in the image into a control command for the UAV and sends it to the UAV.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510280895.3A | 2015-05-28 | 2015-05-28 | Unmanned plane control method and unmanned plane control system based on facial expression recognition |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN104850234A (en) | 2015-08-19 |
Family
ID=53849930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510280895.3A Pending CN104850234A (en) | 2015-05-28 | 2015-05-28 | Unmanned plane control method and unmanned plane control system based on facial expression recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104850234A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120169895A1 (en) * | 2010-03-24 | 2012-07-05 | Industrial Technology Research Institute | Method and apparatus for capturing facial expressions |
CN101825947A (en) * | 2010-05-04 | 2010-09-08 | 中兴通讯股份有限公司 | Method and device for intelligently controlling mobile terminal and mobile terminal thereof |
CN104463100A (en) * | 2014-11-07 | 2015-03-25 | 重庆邮电大学 | Intelligent wheelchair man-machine interaction system and method based on facial expression recognition mode |
Non-Patent Citations (1)
Title |
---|
Hao Jingsong (郝敬松): "Research on an Automatic Face Recognition System Based on an Improved AdaBoost Algorithm and Local Feature Methods", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105159452A (en) * | 2015-08-28 | 2015-12-16 | 成都通甲优博科技有限责任公司 | Control method and system based on estimation of human face posture |
CN105159452B (en) * | 2015-08-28 | 2018-01-12 | 成都通甲优博科技有限责任公司 | A kind of control method and system based on human face modeling |
WO2017049816A1 (en) * | 2015-09-24 | 2017-03-30 | 北京零零无限科技有限公司 | Method and device for controlling unmanned aerial vehicle to rotate along with face |
CN105589466A (en) * | 2016-02-24 | 2016-05-18 | 谭圆圆 | Flight control device of unmanned aircraft and flight control method thereof |
CN105607647A (en) * | 2016-02-25 | 2016-05-25 | 谭圆圆 | Shooting scope adjusting system of aerial equipment and corresponding adjusting method |
CN108292141A (en) * | 2016-03-01 | 2018-07-17 | 深圳市大疆创新科技有限公司 | Method and system for target following |
US10802491B2 (en) | 2016-03-01 | 2020-10-13 | SZ DJI Technology Co., Ltd. | Methods and systems for target tracking |
CN105955474A (en) * | 2016-04-27 | 2016-09-21 | 努比亚技术有限公司 | Prompting method of application evaluation, and mobile terminal |
CN105979147A (en) * | 2016-06-22 | 2016-09-28 | 上海顺砾智能科技有限公司 | Intelligent shooting method of unmanned aerial vehicle |
CN111524339A (en) * | 2016-08-18 | 2020-08-11 | 深圳市大疆创新科技有限公司 | Unmanned aerial vehicle frequency alignment method and system, unmanned aerial vehicle and remote controller |
US10409276B2 (en) | 2016-12-21 | 2019-09-10 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for controller-free user drone interaction |
CN110687902A (en) * | 2016-12-21 | 2020-01-14 | 杭州零零科技有限公司 | System and method for controller-free user drone interaction |
WO2018116028A1 (en) * | 2016-12-21 | 2018-06-28 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for controller-free user drone interaction |
CN110687902B (en) * | 2016-12-21 | 2020-10-20 | 杭州零零科技有限公司 | System and method for controller-free user drone interaction |
US11340606B2 (en) | 2016-12-21 | 2022-05-24 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for controller-free user drone interaction |
US10726248B2 (en) | 2018-02-01 | 2020-07-28 | Ford Global Technologies, Llc | Validating gesture recognition capabilities of automated systems |
CN108804893A (en) * | 2018-03-30 | 2018-11-13 | 百度在线网络技术(北京)有限公司 | A kind of control method, device and server based on recognition of face |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104850234A (en) | Unmanned plane control method and unmanned plane control system based on facial expression recognition | |
KR102174595B1 (en) | System and method for identifying faces in unconstrained media | |
CN107239728B (en) | Unmanned aerial vehicle interaction device and method based on deep learning attitude estimation | |
Zheng et al. | Recent advances of deep learning for sign language recognition | |
Kadhim et al. | A Real-Time American Sign Language Recognition System using Convolutional Neural Network for Real Datasets. | |
CN106598226A (en) | UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning | |
Shanta et al. | Bangla sign language detection using sift and cnn | |
Liu et al. | Heterogeneous face image matching using multi-scale features | |
CN104463172A (en) | Face feature extraction method based on face feature point shape drive depth model | |
CN107741781A (en) | Flight control method, device, unmanned plane and the storage medium of unmanned plane | |
CN103971137A (en) | Three-dimensional dynamic facial expression recognition method based on structural sparse feature study | |
CN103488299A (en) | Intelligent terminal man-machine interaction method fusing human face and gestures | |
Pandey et al. | Hand gesture recognition for sign language recognition: A review | |
Sharma et al. | Recognition of single handed sign language gestures using contour tracing descriptor | |
CN105159452A (en) | Control method and system based on estimation of human face posture | |
CN110046544A (en) | Digital gesture identification method based on convolutional neural networks | |
Balasuriya et al. | Learning platform for visually impaired children through artificial intelligence and computer vision | |
CN103034851A (en) | Device and method of self-learning skin-color model based hand portion tracking | |
Sarma et al. | Hand gesture recognition using deep network through trajectory-to-contour based images | |
CN104656884A (en) | Intelligent terminal human-computer interaction method capable of fusing human face and gesture | |
Singh et al. | A Review For Different Sign Language Recognition Systems | |
WO2021203368A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
CN112069898A (en) | Method and device for recognizing human face group attribute based on transfer learning | |
Li et al. | A survey of face recognition methods | |
Afdhal et al. | Emotion recognition using the shapes of the wrinkles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20150819 |