CN108268838A - Facial expression recognizing method and facial expression recognition system - Google Patents
Facial expression recognizing method and facial expression recognition system
- Publication number
- CN108268838A (application CN201810001358.4A)
- Authority
- CN
- China
- Prior art keywords
- expression
- face
- facial
- feature
- adopting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
This application discloses a facial expression recognition method, including: detecting a face from an original image; performing face alignment and feature point positioning on the detected face; extracting facial feature information from the face image; and performing expression classification according to the acquired feature data to realize facial expression recognition. Through face detection, feature point positioning, feature extraction and expression classification, the application predicts the most likely facial expression by maximum likelihood, which guarantees the accuracy of expression recognition and gives the method broad application prospects.
Description
Technical Field
The application relates to a facial expression recognition method and a facial expression recognition system, and belongs to the technical field of facial expression recognition.
Background
The generation of human emotion is a very complex psychological process, and the expression of emotion is accompanied by multiple modes of expression. The modes usually studied in computer science are mainly of three types: facial expression, voice, and action. Among these three modes, facial expression contributes as much as 55% of the conveyed emotion, and as human-computer interaction technology becomes ever more widely applied, facial expression recognition technology is of great significance in the field of human-computer interaction. As one of the main research topics in pattern recognition and machine learning, a large number of facial expression recognition algorithms have been proposed.
However, facial expression recognition technology also has its weaknesses: 1. variation across individuals: facial expressions differ according to how different people express themselves; 2. variation in context for the same person: the same person's expressions vary in real time in daily life; 3. external conditions, such as background, illumination, angle and distance, have a large influence on emotion recognition. All of the above affect the accuracy of facial expression recognition.
Disclosure of Invention
The present application aims to provide a facial expression recognition method and system capable of accurately recognizing expressions.
In order to achieve the purpose, the invention provides a facial expression recognition method.
The facial expression recognition method is characterized by comprising the following steps:
detecting a human face from an original image;
carrying out face alignment and feature point positioning on the detected face;
extracting facial feature information from the face image;
and carrying out expression classification according to the acquired feature data to realize facial expression recognition.
Face detection comprises the following: detecting the presence of human faces in original images of various scenes, and accurately separating out the face regions.
Further, the detecting the human face from the original image includes:
scanning an original image line by line based on a local binary pattern to obtain a response image;
adopting an AdaBoost algorithm to carry out face detection on the response image, and detecting the existence of a face;
and adopting an AdaBoost algorithm to carry out human eye detection and separate out the human face area.
Optionally, the multi-scale detection is performed according to 1.25-0.9 in the detection process by using the AdaBoost algorithm.
Further, the performing face alignment and feature point positioning on the detected face includes:
and marking the face characteristic points by adopting a local constraint model.
Optionally, after the facial feature points are labeled with the local constraint model and the feature point coordinates are obtained, regions reflecting the differences among the various expressions are selected, and two types of features are extracted: deformation-based expression features and motion-based expression features;
and feature evaluation is performed with recursive feature elimination and a linear support vector machine, with feature selection further carried out on the selected features.
Further, the extracting facial feature information from the face image includes:
selecting regions that represent the differences among the various expressions, and extracting two types of features: deformation-based expression features and motion-based expression features;
and performing feature evaluation with recursive feature elimination and a linear support vector machine, and further carrying out feature selection on the selected features.
Optionally, the regions representing the differences among the expressions include the eyes, nose tip, mouth corner points, eyebrows, and the contour points of the parts of the face.
Further, the extracting facial feature information from the face image further includes: and performing feature selection on the extracted facial feature information, acquiring a facial feature subset, and storing the facial feature information for expression recognition.
Further, the classifying the expressions according to the acquired feature data, and implementing facial expression recognition includes:
selecting samples according to the extracted facial feature information, training an expression classifier by using priori knowledge, wherein each sample corresponds to a corresponding expression label;
and realizing expression classification by adopting a least square rule through an expression classifier.
Further, the classifying the expressions according to the acquired feature data, and implementing facial expression recognition further includes:
and constructing a basis vector space from the expression features with known labels, and projecting the features of the expression to be detected onto this space to determine the expression category, thereby recognizing the facial expression.
As a specific implementation manner, the facial expression recognition method includes the following steps: (1) detecting a human face from an original image; (2) carrying out face alignment and feature point positioning on the detected face; (3) extracting facial feature information from the face image; (4) and carrying out expression classification according to the acquired feature data to realize facial expression recognition.
Wherein, step (1) further includes: (11) scanning an original image line by line based on a local binary pattern to obtain a response image; (12) adopting an AdaBoost algorithm to carry out face detection on the response image and detect the presence of a face; (13) adopting an AdaBoost algorithm to carry out human eye detection and separate out the human face area.
Further, the AdaBoost algorithm is adopted to carry out face detection or human eye detection, and multi-scale detection is carried out according to 1.25-0.9.
The step (2) further comprises: and marking the face characteristic points by adopting a local constraint model.
The step (3) further comprises: (31) selecting the three main regions that represent the differences among the various expressions, namely the mouth, eyebrows and eyes, and extracting two types of features: deformation-based expression features and motion-based expression features; (32) performing feature evaluation with recursive feature elimination and a linear support vector machine, and further carrying out feature selection on the selected features.
Further, feature selection is carried out on the extracted facial feature information, a facial feature subset is obtained, and the facial feature information is stored and used for expression recognition.
The step (4) further comprises: (41) selecting samples according to the extracted facial feature information, training an expression classifier by using priori knowledge, wherein each sample corresponds to a corresponding expression label; (42) and realizing expression classification by adopting a least square rule through an expression classifier.
Furthermore, a basis vector space is constructed from the expression features with known labels; the expression category of the expression to be detected is determined by projecting its features onto this space, and facial expression recognition is thereby performed.
In another aspect of the present application, a facial expression recognition system is provided, where the system includes: the system comprises a face detection module, a feature point positioning module, a feature extraction module and a facial expression recognition module;
the face detection module is used for detecting a human face from an original image;
the characteristic point positioning module is connected with the face detection module and is used for carrying out face alignment and characteristic point positioning on the detected face;
the feature extraction module is connected with the feature point positioning module and used for extracting facial feature information from the face image;
the facial expression recognition module is connected with the feature extraction module and is used for predicting, according to the extracted facial feature information and through the trained expression classifier, the most likely category of the facial expression data to be recognized, finding the expression category with the highest likelihood and realizing facial expression recognition.
Optionally, the face detection module scans the original image line by line based on a local binary pattern to obtain a response image;
adopting an AdaBoost algorithm to carry out face detection on the response image, and detecting the existence of a face;
and adopting an AdaBoost algorithm to carry out human eye detection and separate out the human face area.
Optionally, the multi-scale detection is performed according to 1.25-0.9 in the detection process by using the AdaBoost algorithm.
Optionally, the feature point positioning module labels the face feature points by using a local constraint model.
Optionally, the feature extraction module selects regions reflecting the differences among the various expressions, and extracts two types of features: deformation-based expression features and motion-based expression features;
and performs feature evaluation with recursive feature elimination and a linear support vector machine, further carrying out feature selection on the selected features.
Optionally, the regions representing the differences between the types of expressions comprise at least one of a mouth, eyebrows, eyes, and nose tips.
Optionally, the feature extraction module performs feature selection on the extracted facial feature information, obtains a facial feature subset, and stores the facial feature information for expression recognition.
Optionally, the facial expression recognition module classifies expressions according to the acquired feature data, and implementing facial expression recognition includes: selecting samples according to the extracted facial feature information, training an expression classifier by using priori knowledge, wherein each sample corresponds to a corresponding expression label;
and realizing expression classification by adopting a least square rule through an expression classifier.
Optionally, the facial expression recognition module constructs a basis vector space from the expression features with known labels, and the expression to be detected determines its expression category by projecting its features onto this space, so as to perform facial expression recognition.
The beneficial effects that this application can produce include:
the method and the device have the advantages that the maximum possibility of the facial expressions is predicted by face detection, feature point positioning, feature extraction and expression classification, so that the accuracy of expression recognition is guaranteed, and the method and the device have wide application prospects.
Drawings
Fig. 1 is a schematic flow chart of a facial expression recognition method according to the present application.
Fig. 2 is a schematic diagram of the architecture of a facial expression recognition system according to the present application.
Detailed Description
The present application will be described in detail with reference to examples, but the present application is not limited to these examples.
Example 1
The following describes the facial expression recognition method and system provided by the present invention in detail with reference to the accompanying drawings.
Referring to fig. 1, a flow diagram of a facial expression recognition method according to the present invention is shown. The method comprises the following steps: s11: detecting a human face from an original image; s12: carrying out face alignment and feature point positioning on the detected face; s13: extracting facial feature information from the face image; s14: and carrying out expression classification according to the acquired feature data to realize facial expression recognition. The above steps are described in detail below with reference to the accompanying drawings.
S11: a face is detected from an original image.
Face detection: the presence of human faces is detected in original images of various scenes, and the face regions are accurately separated. As a preferred embodiment, step S11 can be further completed by the following steps: 11) scanning the original image line by line based on the local binary pattern to obtain a response image; 12) adopting an AdaBoost algorithm to carry out face detection on the response image and detect the presence of a face; 13) adopting an AdaBoost algorithm to carry out human eye detection and separate out the human face area.
Local Binary Pattern (LBP) is an effective texture descriptor with an excellent ability to characterize the local texture features of an image. The LBP operator works like the template operation in filtering, scanning the original image line by line: for each pixel in the original image, its gray value is taken as a threshold and the eight neighbors in its 3×3 neighborhood are binarized; the binary results are then assembled, in a fixed order, into an 8-bit binary number whose value (0-255) is taken as the response of that point.
As shown in Table 1, in one embodiment the original image has the following gray values. For the center point of the 3×3 area in Table 1, the 8 neighbors are binarized using the gray value 88 as the threshold, and the binarization results are assembled clockwise from the top-left point (the order may be arbitrary but must be consistent) into the binary number 10001011, i.e. 139 in decimal, which is taken as the response of the center. After the whole line-by-line scan is finished, an LBP response image is obtained, which can be used as the feature for subsequent work; the gray values of the resulting response image are shown in Table 2.
| 180 | 52 | 5 |
| 213 | 88 | 79 |
| 158 | 84 | 156 |
Table 1: gray values of the original image in one embodiment.
| 1 | 0 | 0 |
| 1 | 139 | 0 |
| 1 | 0 | 1 |
Table 2: gray values of the resulting response image.
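Expressed in code, the LBP response computation is only a few lines. The sketch below is a minimal illustration assuming an 8-bit grayscale numpy array; the function name lbp_response is a hypothetical label, and the bit order is fixed clockwise from the top-left neighbor to match Tables 1 and 2.

```python
import numpy as np

def lbp_response(gray):
    """3x3 LBP: threshold the 8 neighbors of each pixel against the center
    gray value and pack the bits clockwise from the top-left neighbor."""
    h, w = gray.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for dy, dx in offsets:
                code = (code << 1) | int(gray[y + dy, x + dx] >= center)
            out[y, x] = code
    return out

# The 3x3 example of Table 1: center 88, neighbors clockwise from the
# top-left give 10001011 in binary, i.e. 139, matching Table 2.
patch = np.array([[180, 52, 5], [213, 88, 79], [158, 84, 156]], dtype=np.uint8)
print(lbp_response(patch)[1, 1])  # -> 139
```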
The AdaBoost algorithm, proposed by Freund and Schapire on the basis of the online allocation algorithm, lets the designer keep adding new weak classifiers until a predetermined, sufficiently small error rate is reached. In AdaBoost, each training sample is assigned a weight characterizing the probability that a component classifier selects it into the training set. If a sample point has been classified accurately, its probability of being selected is reduced when the next training set is constructed; conversely, if a sample point is not classified correctly, its weight is increased. Through T rounds of training, AdaBoost focuses on the samples that are difficult to classify and combines the results into a strong classifier for target detection.
The AdaBoost algorithm is described as follows:
1) Given a calibrated training sample set (x_1, y_1), (x_2, y_2), …, (x_L, y_L), where g_j(x_i) denotes the j-th Haar-Like feature of the i-th training image, x_i ∈ X is an input training sample, and y_i ∈ Y = {1, -1} marks true and false samples, respectively.
2) Initialize the weights w_{1,i} = 1/(2m) for true samples and w_{1,i} = 1/(2n) for false samples, where m and n are the numbers of true and false samples, and the total number of samples is L = m + n.
3) For t = 1, 2, …, T rounds of training:
Normalize the weights of all samples: w_{t,i} ← w_{t,i} / Σ_{j=1}^{L} w_{t,j}.
For the j-th Haar-Like feature of each sample, a simple classifier is obtained by determining the threshold θ_j and the parity p_j so that the error ε_j = Σ_i w_i · 1[h_j(x_i) ≠ y_i] reaches its minimum, where
h_j(x) = 1 if p_j g_j(x) < p_j θ_j, and h_j(x) = -1 otherwise.
The parity p_j determines the direction of the inequality and takes only the two values ±1.
Among the simple classifiers so determined, the weak classifier h_t with the minimum error ε_t is selected.
4) Update the weights of all samples: w_{t+1,i} = w_{t,i} β_t^{1-e_i}, where β_t = ε_t/(1 - ε_t); e_i = 0 if x_i is correctly classified by h_t, and e_i = 1 otherwise.
5) The final strong classifier is H(x) = sign(Σ_{t=1}^{T} α_t h_t(x)), where α_t = ln(1/β_t) weighs each weak classifier according to the prediction error of h_t.
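The training loop of steps 1)-5) can be sketched as follows. This is a plain textbook version assuming the Haar-Like feature values are precomputed into a feature matrix and using ±1 labels throughout; the exhaustive stump search is for illustration, not the optimized cascade training used in practical detectors.

```python
import numpy as np

def train_adaboost(X, y, T):
    """Discrete AdaBoost over threshold stumps, following steps 1)-5) above.
    X: (L, D) matrix of precomputed feature values; y: labels in {+1, -1}."""
    L, D = X.shape
    m, n = np.sum(y == 1), np.sum(y == -1)
    w = np.where(y == 1, 1.0 / (2 * m), 1.0 / (2 * n))  # step 2)
    classifiers = []
    for _ in range(T):                                  # step 3)
        w = w / w.sum()                                 # normalize weights
        best = None
        for j in range(D):                              # search (j, theta, p)
            for theta in np.unique(X[:, j]):
                for p in (1, -1):
                    h = np.where(p * X[:, j] < p * theta, 1, -1)
                    eps = np.sum(w * (h != y))
                    if best is None or eps < best[0]:
                        best = (eps, j, theta, p, h)
        eps, j, theta, p, h = best
        eps = max(eps, 1e-12)                           # guard against eps = 0
        beta = eps / (1 - eps)
        classifiers.append((np.log(1 / beta), j, theta, p))
        w = w * beta ** (h == y)                        # step 4): beta^(1-e_i)
    return classifiers

def strong_classify(classifiers, X):
    """Step 5): sign of the alpha-weighted vote of the weak classifiers."""
    score = sum(a * np.where(p * X[:, j] < p * theta, 1, -1)
                for a, j, theta, p in classifiers)
    return np.where(score >= 0, 1, -1)
```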
The human face can thus be detected through the steps above. During detection, multi-scale detection can be performed according to 1.25-0.9, and the resulting windows are finally merged to output the result.
On the basis of the detected face, the AdaBoost algorithm is further used for human eye detection. The basic principle of eye detection is the same as that of face detection and is not repeated here. In the eye detection process, multi-scale detection can likewise be performed according to 1.25-0.9, and a rejection mechanism is established (for example, based on characteristics such as the position and size of the eyes).
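In practice this face-then-eye, multi-scale cascade detection is commonly run through a library API such as OpenCV's. The sketch below uses OpenCV's stock Haar cascade files as a stand-in for the classifiers trained above (not the patent's own LBP-response front end), with a 1.25 scale factor and a simple positional rejection rule for the eyes; the input file name is a placeholder.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.25, minNeighbors=3)
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.25)
    # rejection rule: keep only eye candidates in the upper half of the face
    eyes = [e for e in eyes if e[1] + e[3] / 2 < h / 2]
```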
S12: and carrying out face alignment and feature point positioning on the detected face.
Feature point positioning: according to the input face image, key feature points of the face are automatically located, such as the eyes, nose tip, mouth corner points, eyebrows, and the contour points of the parts of the face. As a preferred embodiment, step S12 can be further completed as follows: marking the facial feature points with a local constraint model.
The local constraint model (CLM) accomplishes the detection of facial feature points by initializing the location of the average face and then letting the feature points on each average face search for matches in their neighborhood locations. The whole process is divided into two stages: a model building phase and a point fitting phase. The model construction phase can subdivide the construction of two different models: shape model construction and Patch model construction. Shape model construction is the modeling of the shape of a face model, which describes the criteria followed by shape changes. The Patch model models the neighborhood around each feature point, establishes a feature point matching criterion, and judges the optimal matching of the feature points.
The local constraint model (CLM) algorithm is described as follows:
1) shape model construction
After all face samples in the training set are aligned so that all shapes lie in a common frame, their average is computed. Suppose there are M pictures, each with N feature points whose coordinates are (x_i, y_i); the vector formed by the coordinates of the N feature points of one image is x = [x_1 y_1 x_2 y_2 … x_N y_N]^T, and the average face over all images is:
x̄ = (1/M) Σ_{j=1}^{M} x_j
The difference between the shape vector of each sample image and the average face gives a zero-mean shape change matrix X:
X = [x_1 - x̄, x_2 - x̄, …, x_M - x̄]
The principal components of the change in face shape are obtained by a PCA transformation of the matrix X, i.e. by the eigen-decomposition of its covariance:
(1/M) X X^T p_i = λ_i p_i
which determines the main eigenvalues λ_i and the corresponding eigenvectors p_i. Since the eigenvectors corresponding to the larger eigenvalues carry the main information of the samples, the eigenvectors of the largest k eigenvalues are selected to form the orthogonal matrix P = (p_1, p_2, …, p_k).
The weight vector of the shape change is b = (b_1, b_2, …, b_k)^T; each component of b gives the magnitude of the change along the corresponding eigenvector:
b = P^T (x - x̄)
Then, for any face test image, its sample shape vector can be expressed as:
x ≈ x̄ + P b
2) patch model construction
Suppose there are M face images in the training sample; on each image, N key feature points of the face are selected, a patch region of fixed size is taken around each feature point, and the patch regions containing the feature points are marked as positive samples; patches of the same size are then cut from non-feature-point areas and marked as negative samples.
Assume there are r patches in total for each feature point, forming a vector (x^(1), x^(2), …, x^(r))^T for each image in the sample set. The output then contains only positive and negative examples, i.e. a patch is either a feature-point region or a non-feature-point region: y^(i) ∈ {-1, 1}, i = 1, 2, …, r, where y^(i) = 1 marks a positive sample and y^(i) = -1 a negative sample. The trained linear support vector machine is:
f(x) = Σ_{i=1}^{M_s} α_i y_i (x_i · x) + b
where x_i denotes a subspace vector of the sample set, i.e. a support vector, α_i is a weight coefficient, M_s is the number of support vectors per feature point, and b is the offset. From this the following is obtained:
y^(i) = w^T · x^(i) + θ
where w^T = [w_1 w_2 … w_n] holds the weight coefficients of the support vectors and θ is the offset. Thus a patch model is established for each feature point.
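Given a trained linear patch model, the response at every candidate position is just w · x + θ evaluated on the patch at that position. A sketch, assuming a grayscale numpy image and that w and theta come from the linear SVM above:

```python
import numpy as np

def patch_response_map(image, w, theta, center, radius, patch_size):
    """Evaluate the linear patch model over a (2*radius+1)^2 search region
    around the current feature point estimate; returns the response map R."""
    half = patch_size // 2
    cy, cx = center
    R = np.full((2 * radius + 1, 2 * radius + 1), -np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = cy + dy, cx + dx
            if (half <= y0 < image.shape[0] - half and
                    half <= x0 < image.shape[1] - half):
                patch = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
                R[dy + radius, dx + radius] = w @ patch.ravel() + theta
    return R
```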
3) Point fitting
Within a bounded region around the currently estimated feature point location, a local search is performed to generate a response map for each feature point, denoted R(x, y).
A quadratic function is then fitted to the response map. Assume R(x, y) attains its maximum at (x_0, y_0) within the neighborhood range; a function is fitted around this position so that the position corresponds one-to-one with the maximum of R(x, y). The quadratic function can be described as follows:
r(x, y) = a(x - x_0)^2 + b(y - y_0)^2 + c
where a, b and c are the coefficients of the quadratic function, solved by minimizing the error between r(x, y) and R(x, y), i.e. by a least-squares computation:
min_{a,b,c} Σ_{x,y} (R(x, y) - a(x - x_0)^2 - b(y - y_0)^2 - c)^2
With the parameters a, b and c determined, r(x, y) serves as the cost function for the feature point location; a deformation constraint cost term is then added to form the objective function for the feature point search:
f(b) = Σ_{i=1}^{N} r_i(x_i, y_i) + β Σ_{j=1}^{k} b_j^2 / λ_j
Each optimization of this objective yields new feature point positions, which are updated iteratively until convergence, completing the fitting of the face points.
S13: facial feature information is extracted from the face image.
Feature extraction: representative feature information of the face is extracted from the normalized face image. As a preferred embodiment, step S13 can be further completed by the following steps: (31) selecting the three main regions that represent the differences among the various expressions, namely the mouth, eyebrows and eyes, and extracting two types of features: deformation-based expression features and motion-based expression features; (32) performing feature evaluation with recursive feature elimination and a linear support vector machine, and further carrying out feature selection on the selected features.
The facial feature points are labeled with the local constraint model and the feature point coordinates are obtained; the shape features of the three main regions (mouth, eyebrows and eyes) are selected, the slope information relating the key points within the three regions is calculated, and the deformation-based expression features are extracted. Meanwhile, the key points in the three regions are tracked and the corresponding displacement information is extracted; the distances between specific feature points of an expression picture are computed and subtracted from those of the neutral picture to obtain the change in the distances, from which the motion-based expression features are extracted.
Feature evaluation is performed with recursive feature elimination and a linear support vector machine, using the weights computed by the support vector machine as the ranking criterion to further denoise the selected features.
The feature selection algorithm is described as follows:
Input: training sample set, where l is the number of categories.
Output: feature ranking set R.
1) Initialize the original feature set S = {1, 2, …, D} and the feature ranking set R = [ ].
2) Generate l(l - 1)/2 training subsets: from the training samples, take the pairwise combinations of different categories to obtain the final training subsets.
Repeat the following process until S = [ ]:
3) Obtain the training subsets X_j (j = 1, 2, …, l(l - 1)/2);
train a support vector machine on each X_j to obtain the weight vectors w_j;
compute the ranking criterion score c_p = Σ_j (w_{j,p})^2 for each remaining feature p;
find the feature with the minimum ranking criterion score, p = argmin_p c_p;
update the feature ranking set R = {p} ∪ R;
remove this feature from S: S = S \ {p}.
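The listing above is the standard SVM-RFE scheme. A compact sketch using scikit-learn, with a one-vs-rest LinearSVC standing in for the pairwise class subsets X_j of the listing:

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_rfe(X, y):
    """Recursive feature elimination with a linear SVM: repeatedly train,
    score each remaining feature p by c_p = sum_j w_{j,p}^2 over the class
    weight vectors, and move the lowest-scoring feature to the ranking."""
    remaining = list(range(X.shape[1]))         # feature set S = {1, ..., D}
    ranking = []                                # feature ranking set R
    while remaining:
        svm = LinearSVC(dual=False).fit(X[:, remaining], y)
        scores = (svm.coef_ ** 2).sum(axis=0)   # ranking criterion c_p
        worst = int(np.argmin(scores))          # feature with minimum score
        ranking.insert(0, remaining.pop(worst)) # R = {p} ∪ R ;  S = S \ {p}
    return ranking
```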
S14: and carrying out expression classification according to the acquired feature data to realize facial expression recognition.
Classification: human expressions are roughly divided into seven categories: happiness, anger, sadness, disgust, surprise, fear, and neutral. As a preferred embodiment, step S14 can be further completed by the following steps: (41) selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample corresponding to an expression label; (42) realizing expression classification through the expression classifier by adopting the least squares rule.
Training the expression classifier: the extracted facial features are trained with a support vector machine algorithm, and the expression classifier is obtained after training is finished.
Support Vector Machine (SVM) algorithm description:
Input: training set {(x_i, y_i)}, i = 1, …, N, where x_i ∈ R^D, y_i ∈ {+1, -1}, x_i is the i-th sample, N is the sample size, and D is the number of sample features. The SVM seeks the optimal classification hyperplane w · x + b = 0.
The optimization problem the SVM must solve is:
min_{w,b,ξ} (1/2)‖w‖^2 + C Σ_{i=1}^{N} ξ_i
s.t. y_i(w · x_i + b) ≥ 1 - ξ_i, i = 1, 2, …, N
ξ_i ≥ 0, i = 1, 2, …, N
The original problem can be converted into its dual problem:
max_α Σ_{i=1}^{N} α_i - (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j (x_i · x_j)
s.t. Σ_{i=1}^{N} α_i y_i = 0, 0 ≤ α_i ≤ C, i = 1, 2, …, N
where α_i is a Lagrange multiplier.
The final solution for w is:
w = Σ_{i=1}^{N} α_i y_i x_i
The discriminant function of the SVM is:
f(x) = sign(Σ_{i=1}^{N} α_i y_i (x_i · x) + b)
and (4) classifying expressions: and inputting the extracted facial feature information into a trained classifier, and enabling the classifier to give a value of expression prediction. I.e., applying the least squares rule, the best functional match of the data is found by minimizing the sum of the squares of the errors. Thus, a complete face recognition process is completed.
Referring to fig. 2, an architecture of a facial expression recognition system according to the present invention is schematically illustrated; the system comprises: a face detection module 21, a feature point positioning module 22, a feature extraction module 23 and a facial expression recognition module 24.
The face detection module 21 is configured to detect a face from an original image. The face detection module 21 may scan the original image line by line based on a local binary mode to obtain a response image; then, adopting an AdaBoost algorithm to carry out face detection on the response image, and detecting the existence of a face; and then, adopting an AdaBoost algorithm to carry out human eye detection, and separating out a human face area. The specific implementation of face detection refers to the aforementioned method flow, and is not described herein again.
The feature point positioning module 22 is connected to the face detection module 21, and is configured to perform face alignment and feature point positioning on the detected face. And marking the face characteristic points by adopting a local constraint model, and positioning key characteristic points of the face, such as eyes, nose tips, corner points of the mouth, eyebrows and contour points of all parts of the face. The specific implementation of feature point positioning refers to the aforementioned method flow, and is not described herein again.
The feature extraction module 23 is connected to the feature point positioning module 22, and is configured to extract facial feature information from a face image. The feature extraction module 23 may extract two types of features of expression features based on deformation and expression features based on movement by selecting three main regions, i.e., mouth, eyebrow, and eye, that represent differences among various expressions; and then, performing feature evaluation by adopting recursive feature elimination and a linear vector machine, and further performing feature selection on the selected features. And in the characteristic extraction stage, the extracted facial characteristic information is subjected to characteristic selection, a facial characteristic subset is obtained, and the facial characteristic information is stored and used for expression recognition. The specific implementation manner refers to the aforementioned method flow, and is not described herein again.
The facial expression recognition module 24 is connected to the feature extraction module 23 and is configured to classify expressions according to the acquired feature data, thereby implementing facial expression recognition. The facial expression recognition module 24 may select samples according to the extracted facial feature information and train an expression classifier with prior knowledge, each sample corresponding to an expression label; the expression classifier then realizes expression classification using the least squares rule. The classification process constructs a basis vector space from the expression features with known labels; the expression to be detected is projected onto this space to determine its expression category, thereby recognizing the facial expression. The specific implementation manner refers to the aforementioned method flow, which is not described herein again.
Embodiment 2 facial expression recognition method
The method for recognizing the facial expressions in the embodiment comprises the following steps:
step 11: detecting a human face from an original image;
in this step, a specific embodiment includes step 101, step 102, and step 103.
Step 101: scanning the original image line by line based on the local binary pattern to obtain a response image.
Step 102: and adopting an AdaBoost algorithm to detect the human face of the response image, and detecting the existence of the human face.
Step 103: adopting an AdaBoost algorithm to carry out human eye detection and separating out the human face area.
In a specific mode, the AdaBoost algorithm is adopted to carry out human face detection or human eye detection, and multi-scale detection is carried out according to 1.25-0.9.
Step 12: carrying out face alignment and feature point positioning on the detected face;
in this step, a specific implementation manner is: and marking the face characteristic points by adopting a local constraint model.
Step 13: extracting facial feature information from the face image;
in this step, a specific embodiment includes step 301 and step 302.
Step 301: selecting the three main regions that represent the differences among the various expressions, namely the mouth, eyebrows and eyes, and extracting two types of features: deformation-based expression features and motion-based expression features;
in this step, another specific embodiment is: selecting, as the main regions representing the differences among the various expressions, the eyes, nose tip, mouth corner points, eyebrows and the contour points of the parts of the face, and extracting the two types of features: deformation-based expression features and motion-based expression features;
Step 302: performing feature evaluation with recursive feature elimination and a linear support vector machine, and further carrying out feature selection on the selected features.
In a specific implementation mode, feature selection is carried out on the extracted facial feature information, facial feature subsets are obtained, and the facial feature information is stored and used for expression recognition.
Step 14: and carrying out expression classification according to the acquired feature data to realize facial expression recognition.
In this step, a specific embodiment includes step 401 and step 402.
Step 401: selecting samples according to the extracted facial feature information, training an expression classifier by using priori knowledge, wherein each sample corresponds to a corresponding expression label;
step 402: and realizing expression classification by adopting a least square rule through an expression classifier.
In a specific implementation mode, a basis vector space is constructed from the expression features with known labels, and the expression category is determined by projecting the features of the expression to be detected onto this space, so that facial expression recognition is performed.
The various algorithms involved in this example are the same as those in example 1.
Embodiment 3 facial expression recognition system
The facial expression recognition system in the embodiment includes: the system comprises a face detection module, a feature point positioning module, a feature extraction module and a facial expression recognition module;
the face detection module is used for detecting a human face from an original image;
in a specific implementation manner, the face detection module scans an original image line by line based on a local binary pattern to obtain a response image;
adopting an AdaBoost algorithm to carry out face detection on the response image, and detecting the existence of a face;
and adopting an AdaBoost algorithm to carry out human eye detection and separate out the human face area.
In a specific embodiment, the multi-scale detection is performed according to 1.25-0.9 in the detection process by using the AdaBoost algorithm.
The characteristic point positioning module is connected with the face detection module and is used for carrying out face alignment and characteristic point positioning on the detected face;
in a specific embodiment, the feature point positioning module labels the face feature points by using a local constraint model.
The feature extraction module is connected with the feature point positioning module and used for extracting facial feature information from the face image;
In a specific embodiment, the feature extraction module selects regions reflecting the differences among the various expressions, and extracts two types of features: deformation-based expression features and motion-based expression features;
and feature evaluation is performed with recursive feature elimination and a linear support vector machine, with feature selection further carried out on the selected features.
In a specific embodiment, the region that represents the difference between the expressions includes an eye, a nose tip, a mouth corner point, an eyebrow, and contour points of each part of the human face.
The facial expression recognition module is connected with the feature extraction module and used for predicting the maximum possibility of facial expression data to be recognized through a trained expression classifier according to the extracted facial feature information, finding out the expression category with the highest possibility and realizing facial expression recognition;
In a specific embodiment, the facial expression recognition module classifies expressions according to the acquired feature data, and implementing facial expression recognition includes: selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample corresponding to an expression label;
the expression classification is realized through the expression classifier by adopting the least squares rule;
the facial expression recognition module constructs a basis vector space from the expression features with known labels, and the expression to be detected determines its expression category by projecting its features onto this space, thereby recognizing the facial expression.
The various algorithms involved in this example are the same as those in example 1.
Although the present application has been described with reference to a few embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the application as defined by the appended claims.
Claims (10)
1. A facial expression recognition method is characterized by comprising the following steps:
detecting a human face from an original image;
carrying out face alignment and feature point positioning on the detected face;
extracting facial feature information from the face image;
and carrying out expression classification according to the acquired feature data to realize facial expression recognition.
2. The method of claim 1, wherein detecting the face from the original image comprises:
scanning an original image line by line based on a local binary pattern to obtain a response image;
adopting an AdaBoost algorithm to carry out face detection on the response image, and detecting the existence of a face;
adopting an AdaBoost algorithm to carry out human eye detection and separating out a human face area;
preferably, the multiscale detection is performed according to 1.25-0.9 in the detection process by adopting the AdaBoost algorithm.
3. The method of claim 1, wherein the face alignment and feature point positioning of the detected face comprises:
and marking the face characteristic points by adopting a local constraint model.
4. The method of claim 1, wherein extracting facial feature information from the face image comprises:
selecting areas which represent differences among various expressions, and extracting two types of characteristics of expression characteristics based on deformation and expression characteristics based on movement;
performing feature evaluation by adopting recursive feature elimination and a linear support vector machine, and further performing feature selection on the selected features;
preferably, the regions reflecting differences among various expressions comprise the eyes, nose tip, mouth corner points, eyebrows and the contour points of the parts of the human face;
preferably, the extracting facial feature information from the face image further includes: and performing feature selection on the extracted facial feature information, acquiring a facial feature subset, and storing the facial feature information for expression recognition.
5. The method of claim 1, wherein the performing expression classification according to the acquired feature data to realize facial expression recognition comprises:
selecting samples according to the extracted facial feature information, training an expression classifier by using priori knowledge, wherein each sample corresponds to a corresponding expression label;
the expression classification is realized by adopting a least square rule through an expression classifier;
preferably, the classifying the expression according to the acquired feature data, and implementing facial expression recognition further includes:
and constructing a basis vector space from the expression features with known labels, and projecting the features of the expression to be detected onto this space to determine the expression category, thereby recognizing the facial expression.
6. A system for facial expression recognition, the system comprising: the system comprises a face detection module, a feature point positioning module, a feature extraction module and a facial expression recognition module;
the face detection module is used for detecting a human face from an original image;
the characteristic point positioning module is connected with the face detection module and is used for carrying out face alignment and characteristic point positioning on the detected face;
the feature extraction module is connected with the feature point positioning module and used for extracting facial feature information from the face image;
the facial expression recognition module is connected with the feature extraction module and used for predicting the maximum possibility of facial expression data to be recognized through the trained expression classifier according to the extracted facial feature information, finding out the expression category with the highest possibility and realizing facial expression recognition.
7. The system of claim 6, wherein the face detection module scans the original image line by line based on a local binary pattern to obtain a response image;
adopting an AdaBoost algorithm to carry out face detection on the response image, and detecting the existence of a face;
and adopting an AdaBoost algorithm to carry out human eye detection and separate out the human face area.
8. The system of claim 6, wherein the feature point localization module labels facial feature points using a local constraint model.
9. The system according to claim 6, wherein the feature extraction module selects regions representing differences between various expressions, and extracts two types of features of expression features based on deformation and expression features based on motion;
performing feature evaluation by adopting recursive feature elimination and a linear support vector machine, and further performing feature selection on the selected features;
preferably, the regions reflecting differences among various expressions comprise the eyes, nose tip, mouth corner points, eyebrows and the contour points of the parts of the human face;
preferably, the feature extraction module performs feature selection on the extracted facial feature information, acquires a facial feature subset, and stores the facial feature information for expression recognition.
10. The system of claim 6, wherein the facial expression recognition module performs expression classification according to the acquired feature data, and implementing facial expression recognition comprises: selecting samples according to the extracted facial feature information, training an expression classifier by using priori knowledge, wherein each sample corresponds to a corresponding expression label;
the expression classification is realized by adopting a least square rule through an expression classifier;
preferably, the facial expression recognition module constructs a basis vector space from the expression features with known labels, and the expression to be detected determines its expression category by projecting its features onto this space, so as to perform facial expression recognition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810001358.4A CN108268838B (en) | 2018-01-02 | 2018-01-02 | Facial expression recognition method and facial expression recognition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810001358.4A CN108268838B (en) | 2018-01-02 | 2018-01-02 | Facial expression recognition method and facial expression recognition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108268838A true CN108268838A (en) | 2018-07-10 |
CN108268838B CN108268838B (en) | 2020-12-29 |
Family
ID=62773093
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810001358.4A Active CN108268838B (en) | 2018-01-02 | 2018-01-02 | Facial expression recognition method and facial expression recognition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108268838B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109409273A (en) * | 2018-10-17 | 2019-03-01 | 中联云动力(北京)科技有限公司 | A kind of motion state detection appraisal procedure and system based on machine vision |
CN109712144A (en) * | 2018-10-29 | 2019-05-03 | 百度在线网络技术(北京)有限公司 | Processing method, training method, equipment and the storage medium of face-image |
CN109948672A (en) * | 2019-03-05 | 2019-06-28 | 张智军 | A kind of wheelchair control method and system |
CN109948541A (en) * | 2019-03-19 | 2019-06-28 | 西京学院 | A kind of facial emotion recognition methods and system |
CN110020638A (en) * | 2019-04-17 | 2019-07-16 | 唐晓颖 | Facial expression recognizing method, device, equipment and medium |
CN110059650A (en) * | 2019-04-24 | 2019-07-26 | 京东方科技集团股份有限公司 | Information processing method, device, computer storage medium and electronic equipment |
CN110166836A (en) * | 2019-04-12 | 2019-08-23 | 深圳壹账通智能科技有限公司 | A kind of TV program switching method, device, readable storage medium storing program for executing and terminal device |
CN110334643A (en) * | 2019-06-28 | 2019-10-15 | 广东奥园奥买家电子商务有限公司 | A kind of feature evaluation method and device based on recognition of face |
CN110348899A (en) * | 2019-06-28 | 2019-10-18 | 广东奥园奥买家电子商务有限公司 | A kind of commodity information recommendation method and device |
CN110941993A (en) * | 2019-10-30 | 2020-03-31 | 东北大学 | Dynamic personnel classification and storage method based on face recognition |
CN111144374A (en) * | 2019-12-31 | 2020-05-12 | 泰康保险集团股份有限公司 | Facial expression recognition method and device, storage medium and electronic equipment |
WO2020125386A1 (en) * | 2018-12-18 | 2020-06-25 | 深圳壹账通智能科技有限公司 | Expression recognition method and apparatus, computer device, and storage medium |
WO2020133072A1 (en) * | 2018-12-27 | 2020-07-02 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for target region evaluation and feature point evaluation |
CN112132117A (en) * | 2020-11-16 | 2020-12-25 | 黑龙江大学 | Fusion identity authentication system assisting coercion detection |
CN112307942A (en) * | 2020-10-29 | 2021-02-02 | 广东富利盛仿生机器人股份有限公司 | Facial expression quantitative representation method, system and medium |
CN112560685A (en) * | 2020-12-16 | 2021-03-26 | 北京嘀嘀无限科技发展有限公司 | Facial expression recognition method and device and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794265A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and device for distinguishing face expression based on video frequency |
CN1996344A (en) * | 2006-12-22 | 2007-07-11 | 北京航空航天大学 | Method for extracting and processing human facial expression information |
US20130163829A1 (en) * | 2011-12-21 | 2013-06-27 | Electronics And Telecommunications Research Institute | System for recognizing disguised face using gabor feature and svm classifier and method thereof |
CN104021384A (en) * | 2014-06-30 | 2014-09-03 | 深圳市创冠智能网络技术有限公司 | Face recognition method and device |
CN104268580A (en) * | 2014-10-15 | 2015-01-07 | 南京大学 | Class cartoon layout image management method based on scene classification |
CN104951743A (en) * | 2015-03-04 | 2015-09-30 | 苏州大学 | Active-shape-model-algorithm-based method for analyzing face expression |
CN105069447A (en) * | 2015-09-23 | 2015-11-18 | 河北工业大学 | Facial expression identification method |
CN106022391A (en) * | 2016-05-31 | 2016-10-12 | 哈尔滨工业大学深圳研究生院 | Hyperspectral image characteristic parallel extraction and classification method |
CN106407958A (en) * | 2016-10-28 | 2017-02-15 | 南京理工大学 | Double-layer-cascade-based facial feature detection method |
US20170132408A1 (en) * | 2015-11-11 | 2017-05-11 | Samsung Electronics Co., Ltd. | Methods and apparatuses for adaptively updating enrollment database for user authentication |
CN106919884A (en) * | 2015-12-24 | 2017-07-04 | 北京汉王智远科技有限公司 | Human facial expression recognition method and device |
CN106934375A (en) * | 2017-03-15 | 2017-07-07 | 中南林业科技大学 | The facial expression recognizing method of distinguished point based movement locus description |
US20170301121A1 (en) * | 2013-05-02 | 2017-10-19 | Emotient, Inc. | Anonymization of facial images |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794265A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and device for distinguishing face expression based on video frequency |
CN1996344A (en) * | 2006-12-22 | 2007-07-11 | 北京航空航天大学 | Method for extracting and processing human facial expression information |
US20130163829A1 (en) * | 2011-12-21 | 2013-06-27 | Electronics And Telecommunications Research Institute | System for recognizing disguised face using gabor feature and svm classifier and method thereof |
US20170301121A1 (en) * | 2013-05-02 | 2017-10-19 | Emotient, Inc. | Anonymization of facial images |
CN104021384A (en) * | 2014-06-30 | 2014-09-03 | 深圳市创冠智能网络技术有限公司 | Face recognition method and device |
CN104268580A (en) * | 2014-10-15 | 2015-01-07 | 南京大学 | Comic-style layout image management method based on scene classification |
CN104951743A (en) * | 2015-03-04 | 2015-09-30 | 苏州大学 | Active-shape-model-algorithm-based method for analyzing face expression |
CN105069447A (en) * | 2015-09-23 | 2015-11-18 | 河北工业大学 | Facial expression identification method |
US20170132408A1 (en) * | 2015-11-11 | 2017-05-11 | Samsung Electronics Co., Ltd. | Methods and apparatuses for adaptively updating enrollment database for user authentication |
CN106919884A (en) * | 2015-12-24 | 2017-07-04 | 北京汉王智远科技有限公司 | Human facial expression recognition method and device |
CN106022391A (en) * | 2016-05-31 | 2016-10-12 | 哈尔滨工业大学深圳研究生院 | Hyperspectral image characteristic parallel extraction and classification method |
CN106407958A (en) * | 2016-10-28 | 2017-02-15 | 南京理工大学 | Double-layer-cascade-based facial feature detection method |
CN106934375A (en) * | 2017-03-15 | 2017-07-07 | 中南林业科技大学 | Facial expression recognition method based on feature point motion trajectory description |
Non-Patent Citations (5)
Title |
---|
KHAN, MASOOD MEHMOOD ET AL: "Automated Facial Expression Classification and affect interpretation using infrared measurement of facial skin temperature variations", TRANSACTIONS ON AUTONOMOUS AND ADAPTIVE SYSTEMS * |
PAUL VIOLA ET AL: "Rapid Object Detection using a Boosted Cascade of Simple Features", PROCEEDINGS OF THE 2001 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION * |
SHI, YATING ET AL: "Facial feature point localization algorithm based on mouth state constraints", CAAI TRANSACTIONS ON INTELLIGENT SYSTEMS * |
JIANG, ZHENG: "Research and implementation of feature extraction algorithms in face recognition", CHINA MASTERS' THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY * |
MA, FEI: "Research on expression recognition based on geometric features", CHINA MASTERS' THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109409273A (en) * | 2018-10-17 | 2019-03-01 | 中联云动力(北京)科技有限公司 | Machine-vision-based motion state detection and assessment method and system |
CN109712144A (en) * | 2018-10-29 | 2019-05-03 | 百度在线网络技术(北京)有限公司 | Face image processing method, training method, device, and storage medium |
WO2020125386A1 (en) * | 2018-12-18 | 2020-06-25 | 深圳壹账通智能科技有限公司 | Expression recognition method and apparatus, computer device, and storage medium |
CN113302619B (en) * | 2018-12-27 | 2023-11-14 | 浙江大华技术股份有限公司 | System and method for evaluating target area and characteristic points |
US12026600B2 (en) | 2018-12-27 | 2024-07-02 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for target region evaluation and feature point evaluation |
WO2020133072A1 (en) * | 2018-12-27 | 2020-07-02 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for target region evaluation and feature point evaluation |
CN109948672A (en) * | 2019-03-05 | 2019-06-28 | 张智军 | Wheelchair control method and system |
CN109948541A (en) * | 2019-03-19 | 2019-06-28 | 西京学院 | Facial emotion recognition method and system |
CN110166836B (en) * | 2019-04-12 | 2022-08-02 | 深圳壹账通智能科技有限公司 | Television program switching method and device, readable storage medium and terminal equipment |
CN110166836A (en) * | 2019-04-12 | 2019-08-23 | 深圳壹账通智能科技有限公司 | Television program switching method and device, readable storage medium, and terminal device |
CN110020638A (en) * | 2019-04-17 | 2019-07-16 | 唐晓颖 | Facial expression recognition method, device, equipment, and medium |
CN110059650A (en) * | 2019-04-24 | 2019-07-26 | 京东方科技集团股份有限公司 | Information processing method, device, computer storage medium and electronic equipment |
CN110348899A (en) * | 2019-06-28 | 2019-10-18 | 广东奥园奥买家电子商务有限公司 | Commodity information recommendation method and device |
CN110334643A (en) * | 2019-06-28 | 2019-10-15 | 广东奥园奥买家电子商务有限公司 | Feature evaluation method and device based on face recognition |
CN110334643B (en) * | 2019-06-28 | 2023-05-23 | 知鱼智联科技股份有限公司 | Feature evaluation method and device based on face recognition |
CN110941993A (en) * | 2019-10-30 | 2020-03-31 | 东北大学 | Dynamic personnel classification and storage method based on face recognition |
CN111144374A (en) * | 2019-12-31 | 2020-05-12 | 泰康保险集团股份有限公司 | Facial expression recognition method and device, storage medium and electronic equipment |
CN111144374B (en) * | 2019-12-31 | 2023-10-13 | 泰康保险集团股份有限公司 | Facial expression recognition method and device, storage medium and electronic equipment |
CN112307942A (en) * | 2020-10-29 | 2021-02-02 | 广东富利盛仿生机器人股份有限公司 | Facial expression quantitative representation method, system and medium |
CN112132117A (en) * | 2020-11-16 | 2020-12-25 | 黑龙江大学 | Fusion identity authentication system assisting coercion detection |
CN112560685A (en) * | 2020-12-16 | 2021-03-26 | 北京嘀嘀无限科技发展有限公司 | Facial expression recognition method and device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108268838B (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108268838B (en) | Facial expression recognition method and facial expression recognition system | |
CN108510000B (en) | Method for detecting and identifying fine-grained attribute of pedestrian in complex scene | |
CN106682598B (en) | Multi-pose face feature point detection method based on cascade regression | |
CN109800648B (en) | Face detection and recognition method and device based on face key point correction | |
CN108520226B (en) | Pedestrian re-identification method based on body decomposition and significance detection | |
Sung et al. | Example-based learning for view-based human face detection | |
CN100478979C (en) | Identity recognition method using body information matched with face information | |
CN104978549B (en) | Three-dimensional face image feature extraction method and system | |
US20050226509A1 (en) | Efficient classification of three dimensional face models for human identification and other applications | |
CN103279768B (en) | Video face recognition method based on incremental learning of face block visual features | |
US20110141258A1 (en) | Emotion recognition method and system thereof | |
JP2008310796A (en) | Computer-implemented method for constructing a classifier from training data and detecting moving objects in test data using the classifier | |
CN107341447A (en) | Face verification mechanism based on deep convolutional neural networks and evidential k-nearest neighbors | |
CN111126482A (en) | Remote sensing image automatic classification method based on multi-classifier cascade model | |
Li et al. | Efficient 3D face recognition handling facial expression and hair occlusion | |
CN107045621A (en) | Facial expression recognition method based on LBP and LDA | |
CN112381047B (en) | Enhanced recognition method for facial expression image | |
Liliana et al. | Human emotion recognition based on active appearance model and semi-supervised fuzzy C-means | |
CN113486902A (en) | Three-dimensional point cloud classification algorithm automatic selection method based on meta-learning | |
Amores et al. | Fast spatial pattern discovery integrating boosting with constellations of contextual descriptors | |
CN114399731B (en) | Target positioning method under supervision of single coarse point | |
Poostchi et al. | Feature selection for appearance-based vehicle tracking in geospatial video | |
CN112183215B (en) | Human eye positioning method and system combining multi-feature cascading SVM and human eye template | |
Rasines et al. | Feature selection for hand pose recognition in human-robot object exchange scenario | |
Hariri et al. | Geometrical and visual feature quantization for 3d face recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |