CN105654049A - Facial expression recognition method and device - Google Patents

Facial expression recognition method and device

Info

Publication number
CN105654049A
Authority
CN
China
Prior art keywords
calibration
expression
image
recognition
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511016827.2A
Other languages
Chinese (zh)
Other versions
CN105654049B (en)
Inventor
谭莲芝
李志锋
乔宇
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201511016827.2A priority Critical patent/CN105654049B/en
Publication of CN105654049A publication Critical patent/CN105654049A/en
Application granted granted Critical
Publication of CN105654049B publication Critical patent/CN105654049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/175Static expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of machine learning and provides a facial expression recognition method and device. The method includes the following steps: a face image satisfying a preset condition is obtained; the positions of N feature points of a face in the face image are detected, where N is an integer greater than zero; the eyes, nose and/or mouth corners are selected as key feature points according to the detected positions of the N feature points, and the face image is calibrated accordingly; facial expression recognition is performed on the calibrated face image through a trained deep convolutional neural network, and a recognition result is output. With the method and device of the invention, the accuracy of facial expression recognition can be effectively improved, even for faces in images with large-area occlusion, low resolution, or profile views.

Description

Facial expression recognition method and device
Technical field
The invention belongs to the technical field of machine learning, and in particular relates to a facial expression recognition method and device.
Background technology
Facial expression recognition refers to analyzing the expression state of a face in a given face image to determine which category it belongs to, thereby inferring the mental and emotional state of the identified subject, such as happiness, sadness, surprise, fear, disgust or anger. Facial expression recognition has been widely applied in areas such as human-computer interaction and machine learning, and is a hot topic in fields such as machine learning and pattern recognition.
However, some existing facial expression recognition methods, such as IntraFace, mainly use geometric methods to extract facial feature points, and have difficulty handling images in which the face is occluded, the resolution is low, or the face appears in profile, so that the range of application of facial expression recognition is restricted.
Summary of the invention
In view of this, embodiments of the present invention provide a facial expression recognition method and device to improve the performance of facial expression recognition.
In a first aspect, an embodiment of the present invention provides a facial expression recognition method, the method comprising:
obtaining a face image satisfying a preset condition;
detecting the positions of N feature points of a face in the face image, where N is an integer greater than zero;
selecting the eyes, the nose and/or the mouth corners as key feature points according to the detected positions of the N feature points, and calibrating the face image accordingly;
performing facial expression recognition on the calibrated face image through a trained deep convolutional neural network, and outputting a recognition result.
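The four steps of the first aspect can be sketched as a minimal pipeline. All component functions below are hypothetical placeholders standing in for the components described in the detailed embodiments, and the landmark indices are illustrative (assuming a common 68-point layout), not taken from the patent:

```python
# Minimal sketch of the claimed four-step pipeline. Every component
# function is a hypothetical placeholder, not the patented implementation.

def meets_preset_condition(image):
    # Placeholder: accept any non-empty image (the patent's conditions
    # involve resolution, occlusion, and profile faces).
    return image is not None

def detect_feature_points(image, n=68):
    # Placeholder landmark detector: returns n dummy (x, y) positions.
    return [(float(i), float(i)) for i in range(n)]

def calibrate(image, key_points):
    # Placeholder calibration: a real version maps the key points to fixed
    # canonical positions via a similarity transformation.
    return image

def recognize_expression(image):
    # Placeholder for the trained deep convolutional neural network.
    return "Neutral"

def facial_expression_recognition(image):
    if not meets_preset_condition(image):          # step 1
        return None
    points = detect_feature_points(image)          # step 2
    # Eyes, nose, mouth corners -- indices assume the common 68-point layout.
    key_points = [points[36], points[45], points[30], points[48], points[54]]
    calibrated = calibrate(image, key_points)      # step 3
    return recognize_expression(calibrated)        # step 4

print(facial_expression_recognition("dummy-image"))
```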
In a second aspect, an embodiment of the present invention provides a facial expression recognition device, the device comprising:
a face image acquiring unit, configured to obtain a face image satisfying a preset condition;
a detecting unit, configured to detect the positions of N feature points of a face in the face image, where N is an integer greater than zero;
a calibration unit, configured to select the eyes, the nose and/or the mouth corners as key feature points according to the detected positions of the N feature points, and to calibrate the face image accordingly;
a recognition unit, configured to perform facial expression recognition on the calibrated face image through a trained deep convolutional neural network, and to output a recognition result.
Compared with the prior art, the embodiments of the present invention have the following beneficial effect: through face image detection, calibration, and recognition based on a trained deep convolutional neural network, the accuracy of facial expression recognition can be effectively improved; even for faces that are heavily occluded, captured at low resolution, or shown in profile, a high recognition rate can be reached, so the method has strong ease of use and practicality.
Accompanying drawing explanation
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the facial expression recognition method provided by an embodiment of the present invention;
Fig. 2 is an example of facial expressions provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the facial expression recognition device provided by an embodiment of the present invention.
Embodiment
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, those skilled in the art should understand that the present invention can also be implemented in other embodiments without these specific details. In other cases, detailed explanations of well-known systems, devices, circuits and methods are omitted, lest unnecessary detail obscure the description of the invention.
The technical solutions of the present invention are described below through specific embodiments.
Referring to Fig. 1, Fig. 1 shows the implementation flow of the facial expression recognition method provided by an embodiment of the present invention. The method is applicable to all kinds of terminal devices, such as personal computers, tablet computers and mobile phones. The method flow is described in detail as follows.
In step S101, a face image satisfying a preset condition is obtained.
In this embodiment, the face image satisfying the preset condition includes a face image containing a profile face, a face image whose resolution is less than a first preset value, and/or a face image in which the occluded part of the face is greater than a second preset value and less than a third preset value.
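The preset condition above can be expressed as a simple predicate. The three preset values are not specified in the patent, so the thresholds below are hypothetical placeholders:

```python
# Illustrative check of the "preset condition" on a candidate face image.
# The three preset values are assumptions, not values from the patent.

FIRST_PRESET_RESOLUTION = 100    # resolution threshold in pixels (assumed)
SECOND_PRESET_OCCLUSION = 0.1    # lower bound on occluded fraction (assumed)
THIRD_PRESET_OCCLUSION = 0.6     # upper bound on occluded fraction (assumed)

def meets_preset_condition(width, height, occluded_fraction, is_profile):
    low_resolution = min(width, height) < FIRST_PRESET_RESOLUTION
    partly_occluded = (SECOND_PRESET_OCCLUSION < occluded_fraction
                       < THIRD_PRESET_OCCLUSION)
    return is_profile or low_resolution or partly_occluded

# A 48x48 image (the size used in the ICML dataset) qualifies by low
# resolution alone.
print(meets_preset_condition(48, 48, 0.0, False))
```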
Exemplarily, this embodiment is mainly directed at datasets such as the ICML dataset, in which the images have low resolution, heavy occlusion and profile faces.
In step S102, the positions of N feature points of the face in the face image are detected, where N is an integer greater than zero.
Preferably, this embodiment adopts a multi-task learning deep convolutional neural network (that is, a trained deep convolutional neural network) to detect the positions of 68 facial feature points in the face image.
In step S103, the eyes, the nose and/or the mouth corners are selected as key feature points according to the detected positions of the N feature points, and the face image is calibrated accordingly.
In this embodiment, according to the detected positions of the 68 feature points, the eyes, the nose and/or the mouth corners are selected as key feature points and the face image is calibrated. That is, according to the mapping relation between the positions of the key feature points in the original face image and their positions in the calibrated face image, the positions of the key feature points in the calibrated face image are fixed. The calibration may adopt a similarity transformation based on an affine transformation, so that the key feature points on the original face image are mapped proportionally onto the calibrated face image.
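A minimal sketch of such a similarity-transform calibration, using only the two eye key points (a common simplification; the canonical target positions below are assumptions, not values from the patent). The transform rotates, scales and translates coordinates so that the detected eyes land on fixed canonical positions:

```python
import math

# Sketch of calibration by similarity transformation: map the detected eye
# key points onto fixed canonical positions. Canonical positions are assumed.

CANONICAL_LEFT_EYE = (30.0, 40.0)
CANONICAL_RIGHT_EYE = (70.0, 40.0)

def similarity_from_eyes(left_eye, right_eye):
    """Return (scale, angle, tx, ty) mapping the detected eyes onto the
    canonical eye positions: p -> scale * R(angle) * p + (tx, ty)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    src_dist = math.hypot(dx, dy)
    dst_dist = CANONICAL_RIGHT_EYE[0] - CANONICAL_LEFT_EYE[0]
    scale = dst_dist / src_dist
    angle = -math.atan2(dy, dx)        # rotate the eye line to horizontal
    c, s = math.cos(angle), math.sin(angle)
    x, y = left_eye
    # Choose the translation so the left eye maps exactly onto its target.
    tx = CANONICAL_LEFT_EYE[0] - scale * (c * x - s * y)
    ty = CANONICAL_LEFT_EYE[1] - scale * (s * x + c * y)
    return scale, angle, tx, ty

def apply_transform(point, scale, angle, tx, ty):
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    return (scale * (c * x - s * y) + tx, scale * (s * x + c * y) + ty)
```

By construction both eyes land exactly on the canonical positions; all other pixels (and the remaining key feature points) follow the same proportional mapping.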
In step S104, facial expression recognition is performed on the calibrated face image through a trained deep convolutional neural network, and a recognition result is output.
Specifically, a facial expression database is obtained;
the facial expression images in the facial expression database are used as a training dataset, and the above calibration is applied to the training dataset;
data augmentation is applied to the calibrated training dataset;
a deep convolutional neural network is trained with the augmented training dataset to obtain the trained deep convolutional neural network;
facial expression recognition is performed on the calibrated face image through the trained deep convolutional neural network, and a recognition result is output.
That is, the deep convolutional neural network is trained with the training dataset to obtain the parameters of the network; the network with these parameters (namely the trained deep convolutional neural network) then recognizes the calibrated face image and outputs a recognition result.
In this embodiment, the data augmentation includes, but is not limited to, horizontal mirroring, rotation and translation.
Applying data augmentation to the facial expression images in the calibrated training dataset comprises:
horizontally flipping the facial expression images in the calibrated training dataset to obtain horizontal mirror images; and/or,
rotating the facial expression images in the calibrated training dataset to obtain M rotated images, where M is an integer greater than zero; and/or,
For example, each calibrated facial expression image is rotated to obtain 12 rotated images, where the rotation angle ranges from minus 60 degrees to plus 60 degrees with a step of 10 degrees, namely -60, -50, -40, -30, -20, -10, 10, 20, 30, 40, 50 and 60 degrees. Adding the calibrated facial expression image itself (that is, the image without any rotation) to the 12 rotated images yields 13 facial expression images in total.
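The rotation schedule above (12 angles, symmetric around zero, zero excluded) can be generated directly:

```python
# Rotation augmentation angles as described above: -60 to +60 degrees in
# steps of 10, excluding 0, giving 12 rotated copies per image.

def rotation_angles(limit=60, step=10):
    return [a for a in range(-limit, limit + step, step) if a != 0]

angles = rotation_angles()
print(len(angles))        # 12 rotated images
print(len(angles) + 1)    # plus the unrotated image: 13 in total
```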
translating the facial expression images in the calibrated training dataset to obtain L translated images.
For example, each calibrated facial expression image is translated to obtain 5 translated images: the image is translated by 3 pixels toward each of its four diagonal corners, yielding 4 translated facial expression images, and an image of size (width of the original image minus 3) by (height of the original image minus 3) is taken from the middle of the image. Finally, the 5 facial expression images obtained by translation are uniformly resized to one size, namely the size of the calibrated facial expression image.
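One common reading of the translation step above is to take five crop windows of size (w-3) x (h-3): one anchored at each corner (translation toward the four diagonal corners by 3 pixels) and one centered, each later resized back to the calibrated image size. This is a hedged interpretation, not the patent's exact procedure:

```python
# Translation augmentation sketch: five (w - shift) x (h - shift) crop
# windows per image -- four corner-anchored windows plus one centered window.

def translation_crops(width, height, shift=3):
    cw, ch = width - shift, height - shift
    corners = [(0, 0), (shift, 0), (0, shift), (shift, shift)]
    center = (shift // 2, shift // 2)    # approximate center for an odd shift
    # Each tuple is (left, top, right, bottom) of one crop window.
    return [(x, y, x + cw, y + ch) for x, y in corners + [center]]

boxes = translation_crops(48, 48)
print(len(boxes))    # 5 crop windows per image
```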
It should be noted that existing methods that use a deep convolutional neural network to recognize facial expressions are prone to overfitting on databases with relatively few face images, because the training data is limited. By applying data augmentation to the face images, the present invention can expand the training data to 20 times its original size, thus effectively preventing the deep convolutional neural network from overfitting on databases with few face images.
As an optional example of the present invention, this embodiment may also apply the data augmentation to the calibrated face image at the recognition stage, perform facial expression recognition on the augmented face images through the trained deep convolutional neural network, and output a recognition result.
In this embodiment, the facial expression database is the database of the 2013 Kaggle facial expression contest. The facial expression images in this database are used as the training dataset, which is divided into 8 parts. Each time, one part is held out (a different part each time) while the other parts are input into the deep convolutional neural network as training data; after training 8 times, the results are averaged as the recognition result. Preferably, this embodiment may also adopt the network model used by the VGG team on the ILSVRC-2012 dataset and fine-tune it. The learning rate for fine-tuning is initialized to 0.001, and the features of the two layers, the fourth convolutional layer and the fourth pooling layer, of the fine-tuned network model are input to the fifth convolutional layer, so that more of the information learned by the deep convolutional neural network before pooling is retained.
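The 8-way split described above can be sketched as follows; `train_and_evaluate` is a hypothetical stand-in for training and scoring the deep convolutional neural network:

```python
# Sketch of the 8-part split: each round holds out a different part, trains
# on the other 7, and the 8 resulting scores are averaged.

def split_into_parts(samples, n_parts=8):
    # Round-robin split into n_parts roughly equal parts.
    return [samples[i::n_parts] for i in range(n_parts)]

def average_over_splits(samples, train_and_evaluate, n_parts=8):
    parts = split_into_parts(samples, n_parts)
    scores = []
    for i in range(n_parts):
        held_out = parts[i]
        training = [s for j, p in enumerate(parts) if j != i for s in p]
        scores.append(train_and_evaluate(training, held_out))
    return sum(scores) / n_parts

# Dummy evaluator: the "score" is just the fraction of data used for training,
# so with 80 samples each round trains on 70 and the average is 0.875.
demo = list(range(80))
print(average_over_splits(demo, lambda tr, ho: len(tr) / (len(tr) + len(ho))))
```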
Facial expression recognition is performed on the two test sets of the ICML dataset through the trained deep convolutional neural network, and the recognition results are output. It should be noted that, compared with the training network, the test network does not need to back-propagate errors, but directly outputs the expression recognition results on the test sets.
To verify the feasibility of the method of the present invention, tests were conducted on the ICML dataset. The face images of the ICML dataset (as shown in Fig. 2) are 48*48 in size; the training set contains 28709 images and the test set contains 3589 images (not all shown in Fig. 2). The dataset contains images without faces, images in which the face is heavily occluded, and images with two-thirds profile faces. In the experiments, according to the standard of the dataset, facial expressions are divided into seven categories: happiness, sadness, surprise, fear, disgust, anger and neutral.
Through testing, the accuracy of facial expression recognition of the method of the present invention on the ICML dataset reaches 70.1% (as shown in Table 1), while according to the description of the 2013 Kaggle facial expression contest, the accuracy of human observation of expressions on this dataset is 66% to 68%.
           Fear   Angry  Detest   Glad    Sad  Surprised  Neutral
Fear      61.51    1.22    9.78   4.48  14.87       1.63     6.52
Angry     27.27   65.45    1.82   1.82   0.00       1.82     1.82
Detest     8.80    1.33   56.22   3.79  11.83       8.90     8.14
Glad       1.37    0.00    1.37  88.74   3.07       1.71     3.75
Sad        9.09    0.00   10.61   5.22  57.07       0.00    16.67
Surprised  2.16    0.00    7.45   4.33   2.64      81.01     2.40
Neutral    5.11    0.00    4.63   5.91  12.62       1.44    70.29

Table 1. Confusion matrix of recognition rates (%); rows are actual expressions, columns are recognized expressions.
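Reading Table 1 as a row-normalized confusion matrix (each row sums to roughly 100%), the diagonal entries give the per-expression recognition rates. A small sketch extracting them:

```python
# Per-expression recognition rates from Table 1: the diagonal of the
# row-normalized confusion matrix (%).

LABELS = ["Fear", "Angry", "Detest", "Glad", "Sad", "Surprised", "Neutral"]
MATRIX = [
    [61.51,  1.22,  9.78,  4.48, 14.87,  1.63,  6.52],
    [27.27, 65.45,  1.82,  1.82,  0.00,  1.82,  1.82],
    [ 8.80,  1.33, 56.22,  3.79, 11.83,  8.90,  8.14],
    [ 1.37,  0.00,  1.37, 88.74,  3.07,  1.71,  3.75],
    [ 9.09,  0.00, 10.61,  5.22, 57.07,  0.00, 16.67],
    [ 2.16,  0.00,  7.45,  4.33,  2.64, 81.01,  2.40],
    [ 5.11,  0.00,  4.63,  5.91, 12.62,  1.44, 70.29],
]

rates = {label: row[i] for i, (label, row) in enumerate(zip(LABELS, MATRIX))}
best = max(rates, key=rates.get)
worst = min(rates, key=rates.get)
print(best, rates[best])      # happiness ("Glad") is recognized best
print(worst, rates[worst])    # disgust ("Detest") is the hardest
```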
Fig. 3 shows the composition structure of the facial expression recognition device provided by an embodiment of the present invention. For convenience of explanation, only the parts relevant to the embodiment of the present invention are shown.
The facial expression recognition device can be applied to various terminal devices, such as pocket PCs (Pocket Personal Computer, PPC), palmtop computers, computers, notebook computers and personal digital assistants (Personal Digital Assistant, PDA). It may be a software unit, a hardware unit, or a combined software and hardware unit running in these terminals, or be integrated into these terminals as an independent component or run in the application systems of these terminals.
The facial expression recognition device comprises:
a face image acquiring unit 31, configured to obtain a face image satisfying a preset condition;
a detecting unit 32, configured to detect the positions of N feature points of a face in the face image, where N is an integer greater than zero;
a calibration unit 33, configured to select the eyes, the nose and/or the mouth corners as key feature points according to the detected positions of the N feature points, and to calibrate the face image accordingly;
a recognition unit 34, configured to perform facial expression recognition on the calibrated face image through a trained deep convolutional neural network, and to output a recognition result.
Further, the recognition unit 34 comprises:
a facial expression acquiring module 341, configured to obtain a facial expression database;
a calibration module 342, configured to use the facial expression images in the facial expression database as a training dataset, and to apply the calibration to the training dataset;
a data augmentation module 343, configured to apply data augmentation to the calibrated training dataset;
a training module 344, configured to train a deep convolutional neural network with the augmented training dataset to obtain the trained deep convolutional neural network;
a recognition module 345, configured to perform facial expression recognition on the calibrated face image through the trained deep convolutional neural network, and to output a recognition result.
Further, the data augmentation module 343 is specifically configured to:
apply horizontal mirroring, rotation and translation to the facial expression images in the calibrated training dataset.
Further, the data augmentation module 343 is specifically configured to:
horizontally flip the facial expression images in the calibrated training dataset to obtain horizontal mirror images; and/or,
rotate the facial expression images in the calibrated training dataset to obtain M rotated images, where M is an integer greater than zero; and/or,
translate the facial expression images in the calibrated training dataset to obtain L translated images, where L is an integer greater than zero.
In summary, compared with the prior art, the embodiments of the present invention have the following beneficial effects: the multi-task learning deep convolutional neural network adopted by the embodiments can detect the 68 facial feature points even when the resolution of the face image is low. For images that traditional algorithms fail to handle, such as face images with low resolution, heavy occlusion or profile faces, the multi-task learning deep convolutional neural network detects most of them accurately. In addition, compared with existing deep convolutional neural network methods for facial expression recognition, the embodiments of the present invention fine-tune the network model used by the VGG team on the ILSVRC-2012 dataset, and input the features of the two layers, the fourth convolutional layer and the fourth pooling layer, to the fifth convolutional layer, so that more of the local expression changes on the face image are retained, improving the accuracy of facial expression recognition. Moreover, the embodiments of the present invention do not require additional hardware in the above process, which can effectively reduce costs, and have strong ease of use and practicality.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units is used for illustration. In practical applications, the above functions may be assigned to different functional units or modules as required; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working process of the units in the above device, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Those of ordinary skill in the art should appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiment described above is only schematic: the division of the units is only a logical functional division, and there may be other division modes in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the object of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A facial expression recognition method, characterized in that the method comprises:
obtaining a face image satisfying a preset condition;
detecting the positions of N feature points of a face in the face image, where N is an integer greater than zero;
selecting the eyes, the nose and/or the mouth corners as key feature points according to the detected positions of the N feature points, and calibrating the face image accordingly;
performing facial expression recognition on the calibrated face image through a trained deep convolutional neural network, and outputting a recognition result.
2. The facial expression recognition method according to claim 1, characterized in that performing facial expression recognition on the calibrated face image through the trained deep convolutional neural network and outputting a recognition result comprises:
obtaining a facial expression database;
using the facial expression images in the facial expression database as a training dataset, and applying the calibration to the training dataset;
applying data augmentation to the calibrated training dataset;
training a deep convolutional neural network with the augmented training dataset to obtain the trained deep convolutional neural network;
performing facial expression recognition on the calibrated face image through the trained deep convolutional neural network, and outputting a recognition result.
3. The facial expression recognition method according to claim 2, characterized in that applying data augmentation to the calibrated training dataset comprises:
applying horizontal mirroring, rotation and translation to the facial expression images in the calibrated training dataset.
4. The facial expression recognition method according to claim 3, characterized in that applying horizontal mirroring, rotation and translation to the facial expression images in the calibrated training dataset comprises:
horizontally flipping the facial expression images in the calibrated training dataset to obtain horizontal mirror images; and/or,
rotating the facial expression images in the calibrated training dataset to obtain M rotated images, where M is an integer greater than zero; and/or,
translating the facial expression images in the calibrated training dataset to obtain L translated images, where L is an integer greater than zero.
5. The facial expression recognition method according to any one of claims 1 to 4, characterized in that the deep convolutional neural network is a five-layer convolutional neural network, the learning rate is initialized to 0.001, and the features of the two layers, the fourth convolutional layer and the fourth pooling layer, are input to the fifth convolutional layer.
6. A facial expression recognition device, characterized in that the device comprises:
a face image acquiring unit, configured to obtain a face image satisfying a preset condition;
a detecting unit, configured to detect the positions of N feature points of a face in the face image, where N is an integer greater than zero;
a calibration unit, configured to select the eyes, the nose and/or the mouth corners as key feature points according to the detected positions of the N feature points, and to calibrate the face image accordingly;
a recognition unit, configured to perform facial expression recognition on the calibrated face image through a trained deep convolutional neural network, and to output a recognition result.
7. The facial expression recognition device of claim 6, characterized in that the recognition unit comprises:
a facial expression acquisition module, configured to acquire a facial expression database;
a calibration module, configured to take the facial expression images in the facial expression database as a training data set, and to perform the calibration on the training data set;
a data augmentation module, configured to perform data augmentation on the calibrated training data set;
a training module, configured to train a deep convolutional neural network on the augmented training data set, obtaining the trained deep convolutional neural network;
a recognition module, configured to perform facial expression recognition on the calibrated facial image through the trained deep convolutional neural network, and to output a recognition result.
8. The facial expression recognition device of claim 7, characterized in that the data augmentation module is specifically configured to:
perform horizontal mirroring, rotation, and translation augmentation on the calibrated facial expression images in the training data set.
9. The facial expression recognition device of claim 8, characterized in that the data augmentation module is specifically configured to:
horizontally flip the calibrated facial expression images in the training data set to obtain horizontal mirror images; and/or
rotate the calibrated facial expression images in the training data set to obtain M rotated images, where M is an integer greater than zero; and/or
translate the calibrated facial expression images in the training data set to obtain L translated images, where L is an integer greater than zero.
CN201511016827.2A 2015-12-29 2015-12-29 The method and device of facial expression recognition Active CN105654049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511016827.2A CN105654049B (en) 2015-12-29 2015-12-29 The method and device of facial expression recognition

Publications (2)

Publication Number Publication Date
CN105654049A true CN105654049A (en) 2016-06-08
CN105654049B CN105654049B (en) 2019-08-16

Family

ID=56478352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511016827.2A Active CN105654049B (en) 2015-12-29 2015-12-29 The method and device of facial expression recognition

Country Status (1)

Country Link
CN (1) CN105654049B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110222724A1 (en) * 2010-03-15 2011-09-15 Nec Laboratories America, Inc. Systems and methods for determining personal characteristics
CN104463172A (en) * 2014-12-09 2015-03-25 中国科学院重庆绿色智能技术研究院 Face feature extraction method based on face feature point shape drive depth model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HE MENGHUA: "Research on the Analysis and Recognition of Spontaneous and Posed Expressions", China Master's Theses Full-text Database *
WANG KAI: "Research on the Application of Face Recognition in E-commerce Security Authentication", China Master's Theses Full-text Database *
WANG JIANYUN et al.: "A Facial Expression Recognition Method Based on Deep Learning", Computer and Modernization *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203298A (en) * 2016-06-30 2016-12-07 北京集创北方科技股份有限公司 Biometric feature recognition method and device
CN106169073A (en) * 2016-07-11 2016-11-30 北京科技大学 Facial expression recognition method and system
CN106203628A (en) * 2016-07-11 2016-12-07 深圳先进技术研究院 Optimization method and system for enhancing the robustness of deep learning algorithms
CN106203628B (en) * 2016-07-11 2018-12-14 深圳先进技术研究院 Optimization method and system for enhancing the robustness of deep learning algorithms
CN106295566B (en) * 2016-08-10 2019-07-09 北京小米移动软件有限公司 Facial expression recognition method and device
CN106295566A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 Facial expression recognition method and device
CN106447604A (en) * 2016-09-30 2017-02-22 北京奇虎科技有限公司 Method and device for transforming facial frames in videos
CN108090408A (en) * 2016-11-21 2018-05-29 三星电子株式会社 Method and apparatus for performing facial expression recognition and training
CN106599800A (en) * 2016-11-25 2017-04-26 哈尔滨工程大学 Facial micro-expression recognition method based on deep learning
US11182591B2 (en) 2016-12-31 2021-11-23 Shenzhen Sensetime Technology Co, Ltd Methods and apparatuses for detecting face, and electronic devices
CN108229268A (en) * 2016-12-31 2018-06-29 商汤集团有限公司 Expression recognition and convolutional neural network model training method, device and electronic equipment
WO2018121777A1 (en) * 2016-12-31 2018-07-05 深圳市商汤科技有限公司 Face detection method and apparatus, and electronic device
CN108509828A (en) * 2017-02-28 2018-09-07 深圳市朗驰欣创科技股份有限公司 Face recognition method and face recognition device
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 Partially occluded face recognition method based on data augmentation
CN107463888B (en) * 2017-07-21 2020-05-19 竹间智能科技(上海)有限公司 Face emotion analysis method and system based on multi-task learning and deep learning
CN107463888A (en) * 2017-07-21 2017-12-12 竹间智能科技(上海)有限公司 Face emotion analysis method and system based on multi-task learning and deep learning
CN107633204B (en) * 2017-08-17 2019-01-29 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107644158A (en) * 2017-09-05 2018-01-30 维沃移动通信有限公司 Face recognition method and mobile terminal
CN107844782A (en) * 2017-11-29 2018-03-27 济南浪潮高新科技投资发展有限公司 Face recognition method based on a multi-task serial deep network
CN108171176A (en) * 2017-12-29 2018-06-15 中车工业研究院有限公司 Subway driver emotion recognition method and device based on deep learning
CN108171176B (en) * 2017-12-29 2020-04-24 中车工业研究院有限公司 Subway driver emotion recognition method and device based on deep learning
CN108197593A (en) * 2018-01-23 2018-06-22 深圳极视角科技有限公司 Multi-size facial expression recognition method and device based on three-point positioning
CN108197593B (en) * 2018-01-23 2022-02-18 深圳极视角科技有限公司 Multi-size facial expression recognition method and device based on three-point positioning
US10776614B2 (en) 2018-02-09 2020-09-15 National Chiao Tung University Facial expression recognition training system and facial expression recognition training method
TWI711980B (en) * 2018-02-09 2020-12-01 國立交通大學 Facial expression recognition training system and facial expression recognition training method
CN108710829A (en) * 2018-04-19 2018-10-26 北京红云智胜科技有限公司 Expression classification and micro-expression detection method based on deep learning
CN109034079A (en) * 2018-08-01 2018-12-18 中国科学院合肥物质科学研究院 Facial expression recognition method for faces in non-standard poses
CN109508623A (en) * 2018-08-31 2019-03-22 杭州千讯智能科技有限公司 Item recognition method and device based on image processing
CN109344744B (en) * 2018-09-14 2021-10-29 北京师范大学 Facial micro-expression action unit detection method based on deep convolutional neural networks
CN109344744A (en) * 2018-09-14 2019-02-15 北京师范大学 Facial micro-expression action unit detection method based on deep convolutional neural networks
CN109447729A (en) * 2018-09-17 2019-03-08 平安科技(深圳)有限公司 Product recommendation method, terminal device and computer-readable storage medium
CN109376625A (en) * 2018-10-10 2019-02-22 东北大学 Facial expression recognition method based on convolutional neural networks
CN109508654A (en) * 2018-10-26 2019-03-22 中国地质大学(武汉) Face analysis method and system fusing multi-task and multi-scale convolutional neural networks
CN109522945A (en) * 2018-10-31 2019-03-26 中国科学院深圳先进技术研究院 Group emotion recognition method, device, smart device and storage medium
CN110097107A (en) * 2019-04-23 2019-08-06 安徽大学 Apple Alternaria leaf spot disease recognition and classification method based on convolutional neural networks
CN110110672A (en) * 2019-05-10 2019-08-09 广东工业大学 Facial expression recognition method, device and equipment
CN110705430A (en) * 2019-09-26 2020-01-17 江苏科技大学 Multi-person facial expression recognition method and system based on deep learning
CN112825117A (en) * 2019-11-20 2021-05-21 北京眼神智能科技有限公司 Behavior attribute judgment method, device, medium and equipment based on head features
CN110991433A (en) * 2020-03-04 2020-04-10 支付宝(杭州)信息技术有限公司 Face recognition method, device, equipment and storage medium
CN110991433B (en) * 2020-03-04 2020-06-23 支付宝(杭州)信息技术有限公司 Face recognition method, device, equipment and storage medium
WO2021196389A1 (en) * 2020-04-03 2021-10-07 平安科技(深圳)有限公司 Facial action unit recognition method and apparatus, electronic device, and storage medium
CN111634233A (en) * 2020-05-25 2020-09-08 杭州鸿泉物联网技术股份有限公司 Safe driving system and method
CN112528978A (en) * 2021-02-10 2021-03-19 腾讯科技(深圳)有限公司 Face key point detection method and device, electronic equipment and storage medium
CN112528978B (en) * 2021-02-10 2021-05-14 腾讯科技(深圳)有限公司 Face key point detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN105654049B (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN105654049A (en) Facial expression recognition method and device
CN109685135B (en) Few-sample image classification method based on improved metric learning
Li et al. Edge detection algorithm of cancer image based on deep learning
Wang et al. Image inpainting detection based on multi-task deep learning network
Lehmann et al. General projective maps for multidimensional data projection
Wang et al. Auxiliary segmentation method of osteosarcoma in MRI images based on denoising and local enhancement
Li et al. A PCB electronic components detection network design based on effective receptive field size and anchor size matching
Yu et al. ex-vit: A novel explainable vision transformer for weakly supervised semantic segmentation
CN114281950B (en) Data retrieval method and system based on multi-graph weighted fusion
Yi et al. Cross-stage multi-scale interaction network for RGB-D salient object detection
Liu et al. Salient object detection via hybrid upsampling and hybrid loss computing
Wang et al. Incremental Template Neighborhood Matching for 3D anomaly detection
Wang et al. Towards optimal deep fusion of imaging and clinical data via a model‐based description of fusion quality
Hu et al. Modeling images using transformed Indian buffet processes
Huang et al. MSSA‐Net: A novel multi‐scale feature fusion and global self‐attention network for lesion segmentation
Sun et al. Nonparametric point cloud filter
Sun et al. An efficient interactive segmentation framework for medical images without pre‐training
Ling et al. EPolar‐UNet: An edge‐attending polar UNet for automatic medical image segmentation with small datasets
Guo et al. Unsupervised anomaly detection and segmentation on dirty datasets
Jung et al. Depth map upsampling with image decomposition
Khan et al. Active contours in the complex domain for salient object detection
Wen et al. Hierarchical two-stage modal fusion for triple-modality salient object detection
He et al. Multi-attention embedded network for salient object detection
Zhou et al. Improved GCN framework for human motion recognition
Xu et al. Skeleton extraction of hard‐pen regular script based on stroke characterization and ambiguous zone detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant