CN103400105A - Method identifying non-front-side facial expression based on attitude normalization - Google Patents

Method identifying non-front-side facial expression based on attitude normalization

Info

Publication number
CN103400105A
Authority
CN
China
Prior art keywords
attitude
front face
facial
normalized
expression recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310261775XA
Other languages
Chinese (zh)
Other versions
CN103400105B (en)
Inventor
郑文明
冯天从
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201310261775.XA priority Critical patent/CN103400105B/en
Publication of CN103400105A publication Critical patent/CN103400105A/en
Application granted granted Critical
Publication of CN103400105B publication Critical patent/CN103400105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for recognizing non-frontal facial expressions based on pose normalization. In the method, the expression images in a training sample set are learned via a nonlinear regression model to obtain mapping functions from non-frontal facial feature points to frontal facial feature points; pose estimation and feature point localization are carried out on the non-frontal face images to be tested via a multi-template Active Appearance Model, and the feature points of the non-frontal face are normalized to the frontal pose via the mapping function of the corresponding pose; and the geometric positions of the frontal facial feature points are classified into expressions via a support vector machine. The method is simple and effective, solves the problem that different face poses make the same expression appear different, and satisfies the requirement of recognizing non-frontal facial expressions in real time.

Description

Pose-normalization-based non-frontal facial expression recognition method
Technical field
The present invention relates to the fields of pattern recognition and image processing, and in particular to a pose-normalization-based method for recognizing non-frontal facial expressions.
Background technology
Facial expression is the external presentation of mood and emotion. According to the basic-emotion theory, expressions can be divided into six classes: anger, disgust, fear, happiness, sadness, and surprise. Facial expression recognition has long been of great research significance and has enormous commercial value in fields such as human-computer interaction, public safety, and intelligent video. Traditional expression recognition methods mainly take frontal or near-frontal face images as their research object. However, statistics show that in real life, owing to the randomness of image acquisition, 75% of face images are non-frontal. Applying traditional methods directly to these non-frontal face images often fails to produce satisfactory results. The present invention therefore addresses the practical problem of how to effectively recognize non-frontal facial expressions.
Compared with frontal expression images, non-frontal expression images mostly have part of the face occluded, causing the loss of some expression information; at the same time, the diversity of face pose variation inevitably introduces large intra-class differences into expression classification; in addition, it is very difficult to find expression features that are independent of face pose, and features extracted by traditional frontal expression recognition methods carry much redundant information, including the pose variation itself. Non-frontal facial expression recognition therefore mainly needs to solve new problems such as the partial occlusion of expressions caused by non-frontal poses, the large intra-class differences among expressions, and the weak targeting of feature extraction, so as to make facial expression recognition systems more practical.
Summary of the invention
Object of the invention: in order to address the above problems, the present invention proposes a pose-normalization-based non-frontal facial expression recognition method.
Technical solution: in the pose-normalization-based non-frontal facial expression recognition method, the expression images in the training sample set are first learned through a nonlinear regression model to obtain the mapping functions from non-frontal facial feature points to frontal facial feature points; face pose estimation and feature point localization are then performed on the expression images in the test samples, and the mapping function of the corresponding pose is used to normalize the feature points of the non-frontal face to the frontal pose; finally, a support vector machine classifies the expression from the geometric positions of the frontal facial feature points.
The present invention adopts the above technical solution and has the following beneficial effects:
1. The pose-normalization-based non-frontal expression recognition scheme converts the non-frontal expression recognition problem into an ordinary frontal expression recognition problem, so a single frontal-face classifier can realize expression recognition under different poses, reducing the number of classifiers.
2. Pose normalization realized by Gaussian process regression is robust to feature point localization noise, and its feature point prediction is very accurate; using the geometric positions of the feature points as input features gives a low feature dimensionality and a small computational load, fully guaranteeing the real-time performance of the method.
3. The multi-template AAM can estimate the pose of a non-frontal face and locate the facial feature points simultaneously. On the Multi-PIE database, the face pose estimation accuracy reaches 96.5% with good robustness, and the average root-mean-square error between the located feature points and the manually calibrated points is only 1.49 pixels.
4. The expression recognition method using geometric features and a one-versus-one multi-class SVM can effectively recognize expressions of relatively high intensity, achieves a good recognition rate and recognition speed, and is robust to the variations among the expression classes.
Brief description of the drawings
Fig. 1 is the flowchart of the method of the present invention;
Fig. 2 is the flowchart of the multi-template Active Appearance Model pose estimation method of the present invention;
Fig. 3 shows the feature point calibration and localization results of the multi-template Active Appearance Model of the present invention;
Fig. 4 illustrates the facial feature point localization errors of the multi-template Active Appearance Model under different poses on the Multi-PIE database;
Fig. 5 illustrates the pose estimation accuracy of the multi-template Active Appearance Model under different poses on the Multi-PIE database;
Fig. 6 compares the pose normalization results of different nonlinear regression models;
Fig. 7 shows the facial feature points used for expression classification;
Fig. 8 compares the root-mean-square errors of feature point pose normalization;
Fig. 9 compares the feature point pose normalization results under different noise conditions;
Fig. 10 is the confusion matrix of non-frontal expression recognition on the Multi-PIE database;
Fig. 11 compares the non-frontal and frontal expression recognition results on the Multi-PIE database.
Embodiment
The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are intended only to illustrate the present invention and not to limit its scope; after reading the present invention, modifications of its various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims of this application.
The concrete steps of the method, as shown in Fig. 1, are:
Step 1: use the three-point method to align the facial feature points of the face images in the training sample set under different poses. Because the nose tip and the two inner eye-corner points are not affected by facial expression, the corresponding affine transformation is solved with these three points fixed; the affine transformation aligns the facial feature points to the corresponding standard face under each pose, eliminating as far as possible the influence of different face shapes on facial expression recognition.
Step 2: after the face images of non-frontal pose k in the training sample set have been aligned by Step 1, use Gaussian process regression to learn the mapping function f^(k) from pose k to the frontal pose. Each pose k yields a different f^(k), and the corresponding f^(k) pose-normalizes the facial feature point positions of pose k, mapping the non-frontal feature point positions to the frontal feature point positions; the non-frontal expression is thus transformed to the frontal pose for recognition.
Step 3: train an Active Appearance Model (AAM) of the corresponding pose on the faces of each pose in the training sample set, obtaining AAM templates under different poses; the multi-template AAM method then locates the facial feature points while estimating the face pose.
Step 4: for an input test expression image, estimate the face pose k and locate the facial feature points by Step 3, then normalize the feature points of the non-frontal pose to the frontal pose by Step 2, obtaining the positions of the facial feature points in the frontal pose.
Step 5: use a support vector machine with the radial basis kernel to classify the expression, taking the geometric positions of the frontal facial feature points obtained in Step 4 as features, and obtain the expression recognition result.
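The five steps above can be sketched end-to-end as follows. Every component here is a trivial hypothetical stand-in (a hard-coded pose, a fixed shift for f^(k), a dummy classification rule), not the patent's actual AAM, Gaussian-process, or SVM implementations:

```python
import numpy as np

def estimate_pose_and_landmarks(image):
    # Step 3/4 stand-in: pretend the detected face is at pose k = 30 degrees
    # and return three fixed landmark coordinates.
    return 30, np.array([[10.0, 20.0], [30.0, 20.0], [20.0, 35.0]])

def pose_mapping(pose_k):
    # Step 2 stand-in: the learned f^(k) would be a Gaussian process
    # regressor per pose; here it is a fixed horizontal shift.
    return lambda pts: pts + np.array([5.0, 0.0])

def classify_expression(frontal_pts):
    # Step 5 stand-in for the SVM: a dummy rule on the landmark centroid.
    return "happy" if frontal_pts[:, 1].mean() < 30 else "neutral"

def recognize(image):
    pose_k, landmarks = estimate_pose_and_landmarks(image)  # pose + feature points
    frontal = pose_mapping(pose_k)(landmarks)               # normalize to frontal pose
    return classify_expression(frontal)                     # classify the geometry

print(recognize(image=None))  # → happy
```

The point of the sketch is the data flow: pose estimation and localization feed pose-specific normalization, which feeds a single frontal-pose classifier.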
In Step 1 of this embodiment, the face preprocessing method based on three-point alignment is:
Face alignment refers to the process of aligning face images of different sizes, shapes, and inclinations to a "standard face" through affine transformations such as rotation, translation, and scaling. To align a face, the positions of several facial feature points, such as the eyes, nose, and mouth corners, must first be extracted. Related studies show that the nose tip and the two inner eye-corner points are not affected by facial expression, so the present invention fixes these three points and realizes face alignment by an affine transformation. Suppose (x, y) is the position of a feature point on the face image before alignment and (x', y') is its corresponding position after alignment; the relation between the two under the affine transformation is as follows:
$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} = \begin{bmatrix} a & -b & t_x \\ b & a & t_y \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad (1)$$
where $a = s\cos\theta$ and $b = s\sin\theta$.
Given the left inner eye-corner position $(x_l, y_l)$, the right inner eye-corner position $(x_r, y_r)$, and the nose tip position $(x_n, y_n)$, the above formula can further be written as
$$\begin{bmatrix} x_l & -y_l & 1 & 0 \\ y_l & x_l & 0 & 1 \\ x_r & -y_r & 1 & 0 \\ y_r & x_r & 0 & 1 \\ x_n & -y_n & 1 & 0 \\ y_n & x_n & 0 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_l' \\ y_l' \\ x_r' \\ y_r' \\ x_n' \\ y_n' \end{bmatrix} \qquad (2)$$
The above formula can be solved by the pseudoinverse method. The variable to be solved is $T = [a, b, t_x, t_y]^{\mathsf T}$; denoting the matrix to the left of $T$ by $A$ and the vector on the right-hand side of the equals sign by $B$:

$$T = (A^{\mathsf T} A)^{-1} A^{\mathsf T} B \qquad (3)$$
The affine transformation T realizes the alignment of the facial feature points under different poses, facilitating the subsequent expression recognition.
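A minimal numerical sketch of formulas (1)–(3), using made-up anchor coordinates: it stacks the three anchor points into the matrix A of formula (2), solves T = (AᵀA)⁻¹AᵀB via the pseudoinverse, and applies the resulting similarity transform.

```python
import numpy as np

def solve_alignment(src, dst):
    """Solve T = [a, b, tx, ty] of formula (3) from the three anchor points
    (two inner eye corners + nose tip); the rows follow formula (2)."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1, 0])   # row producing x'
        rows.append([y,  x, 0, 1])   # row producing y'
        rhs.extend([xp, yp])
    A, B = np.array(rows, float), np.array(rhs, float)
    return np.linalg.pinv(A) @ B     # pseudoinverse solution (A^T A)^-1 A^T B

def apply_alignment(T, pts):
    a, b, tx, ty = T
    M = np.array([[a, -b], [b, a]])  # a = s*cos(theta), b = s*sin(theta), formula (1)
    return pts @ M.T + np.array([tx, ty])

# Hypothetical anchors before/after alignment (here a pure +2,+2 translation,
# purely for illustration).
src = np.array([[30.0, 40.0], [60.0, 40.0], [45.0, 60.0]])
dst = src + np.array([2.0, 2.0])
T = solve_alignment(src, dst)
aligned = apply_alignment(T, src)
print(np.allclose(aligned, dst))  # → True (the three anchors map exactly)
```

For three exactly-corresponding anchors under a similarity transform, the 6×4 system is consistent and the pseudoinverse recovers T exactly; with noisy anchors it returns the least-squares fit.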
In Step 2 of this embodiment, when the horizontal rotation of the face exceeds 45°, some facial feature points (such as the eyes and eyebrows) become occluded because the pose change is too large; the faces usable for pose normalization therefore mainly refer to non-frontal expression images whose horizontal rotation lies within the range of −45° to 45°.
The concrete pose normalization method based on Gaussian process regression is:
Pose normalization here mainly refers to pose normalization of the facial feature point positions, i.e., mapping the positions of the non-frontal feature points to the frontal face by Gaussian process regression. Suppose the training sample set consists of $N_k$ feature point sets of non-frontal pose k and the corresponding frontal-pose feature points, denoted $\{D_k, D_0\}$. The elements of $D_k$ and $D_0$ are denoted $p_i^k$ and $p_i^0$ ($i = 1, 2, \ldots, N_k$) respectively; each is a vector of dimension 2d (d is the number of facial feature points). The goal is to learn, by Gaussian process regression, the mapping function $f^{(k)}$ that maps $p_i^k$ to $p_i^0$:

$$p_i^0 = f^{(k)}(p_i^k) + \epsilon_i \qquad (4)$$
where the noise $\epsilon_i \sim N(0, \sigma^2)$ and $\sigma^2$ is the noise variance. $f^{(k)}$ can be solved by Gaussian process regression. The statistical properties of a Gaussian process are fully determined by its mean function and covariance function. For a new facial feature point input $p_*^k$, the corresponding predictive mean $f^{(k)}(p_*^k)$ and covariance $\mathrm{cov}^{(k)}(p_*^k)$ are respectively:

$$f^{(k)}(p_*^k) = k_*^{\mathsf T}(K + \sigma^2 I)^{-1} D_0 \qquad (5)$$

$$\mathrm{cov}^{(k)}(p_*^k) = k(p_*^k, p_*^k) - k_*^{\mathsf T}(K + \sigma^2 I)^{-1} k_* \qquad (6)$$

where $k_* = [k(p_*^k, p_1^k), \ldots, k(p_*^k, p_{N_k}^k)]^{\mathsf T}$, $K$ is the $N_k \times N_k$ kernel matrix with entries $K_{ij} = k(p_i^k, p_j^k)$, and $k(\cdot, \cdot)$ is the kernel function of the Gaussian process regression, which can be set to the following form:

$$k(p_i^k, p_j^k) = \sigma_f^2 \exp\!\left(-\tfrac{1}{2}(p_i^k - p_j^k)^{\mathsf T}\Lambda^{-1}(p_i^k - p_j^k)\right) + \sigma_s\, (p_i^k)^{\mathsf T} p_j^k \qquad (7)$$
This kernel function is in fact a combination of the squared exponential kernel and the linear kernel, and can effectively handle both linear and nonlinear data. Here $i, j = 1, 2, \ldots, N_k$, and $\sigma_f^2$, $\Lambda$, and $\sigma_s$ are the hyperparameters: $\sigma_f^2$ is the signal variance, which determines the uncertainty of the predicted values; $\Lambda$ is the variance measure corresponding to the different input components $p_i$; and the linear-kernel hyperparameter $\sigma_s$ controls the output scale of $f^{(k)}$.
Once the form of the kernel function is determined, the hyperparameter values can be determined by training the Gaussian process regression on the training sample set, establishing the regression model from pose k to the frontal pose. Let $\theta$ denote the vector formed by all hyperparameters, and let $X$ and $y$ be the inputs and outputs of the sample set; by Bayes' rule:

$$p(\theta \mid X, y, k) = \frac{p(y \mid X, k, \theta)\, p(\theta)}{p(y \mid X, k)} \qquad (8)$$

The denominator is independent of the hyperparameters $\theta$ and can be ignored; the prior distribution $p(\theta)$ is usually assumed to be approximately uniform and can also be ignored. Finding the extremum of $p(\theta \mid X, y, k)$ therefore reduces to finding the extremum of $p(y \mid X, k, \theta)$; denoting the hyperparameter estimate by $\hat{\theta}$:

$$\hat{\theta} = \arg\max_{\theta}\, p(y \mid X, k, \theta) \qquad (9)$$
Taking the logarithm, the extremum is that of:

$$L(\theta) = -\tfrac{1}{2}\left(n \log(2\pi) + \log\left|K(X, X) + \sigma^2 I\right| + y^{\mathsf T}\left(K(X, X) + \sigma^2 I\right)^{-1} y\right) \qquad (10)$$

This is a typical unconstrained optimization problem; its extremum can be found by the conjugate gradient algorithm, yielding the estimated hyperparameter values.
After training yields the hyperparameter values of the regression model for pose k, whenever a feature-point test sample of pose k is input, formula (5) can be used with the Gaussian process regression model to predict the positions of its facial feature points in the frontal pose.
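A from-scratch numerical sketch of formulas (4)–(7) on synthetic data (here d = 1 feature point for brevity, so inputs are 2-dimensional, and the "frontal" targets are simply a shifted copy of the pose-k points). The hyperparameters are fixed by hand, Λ is taken isotropic, and the conjugate-gradient estimation of formulas (8)–(10) is omitted:

```python
import numpy as np

def kernel(P, Q, sf2=1.0, lam=25.0, ss=0.1):
    """Formula (7): squared exponential kernel plus linear kernel.
    Lambda is taken isotropic (lam * I) for simplicity."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / lam) + ss * (P @ Q.T)

rng = np.random.default_rng(0)
Dk = rng.uniform(0.0, 10.0, size=(50, 2))       # N_k = 50 pose-k samples, 2d = 2
true_shift = np.array([1.0, 0.0])               # synthetic pose-to-frontal offset
D0 = Dk + true_shift + 0.01 * rng.standard_normal(Dk.shape)  # frontal targets

sigma2 = 1e-4                                   # noise variance sigma^2
K = kernel(Dk, Dk)
alpha = np.linalg.solve(K + sigma2 * np.eye(len(Dk)), D0)  # (K + s^2 I)^-1 D0

p_star = np.array([[5.0, 5.0]])                 # new pose-k feature point input
k_star = kernel(p_star, Dk)
pred = k_star @ alpha                           # predictive mean, formula (5)
print(pred - p_star)                            # approximately the [1, 0] shift
```

In a faithful implementation, one such regressor is trained per pose k and its hyperparameters are fitted by maximizing formula (10); here the prediction merely demonstrates that the formula-(5) mean recovers the synthetic offset.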
In Step 3 of the present invention, before the multi-template AAM method estimates the face pose, the face region is obtained with the face detector shipped with the OpenCV library. 68 facial feature points were selected for training the AAM templates: 5 each for the left and right eyebrows, 6 each for the left and right eyes, 9 for the nose, 20 for the lips, and 17 for the facial contour. The multi-template AAM pose estimation and feature point localization method is:
Step1: at a certain angular interval, build a corresponding AAM for the faces of each pose by training, obtaining multiple AAM templates under different poses;
Step2: fit the test face sample with each of these AAM templates, and take the AAM template with the minimum fitting error as the fitting result;
Step3: the pose corresponding to this AAM template is the output of face pose estimation, and the feature point localization result of this template is taken as the output of facial feature point localization.
The fitting algorithm in Step2 uses the inverse compositional algorithm, which allows most of the computation to be completed in the preprocessing stage and thus effectively improves the running efficiency of the AAM. The geometric positions of the facial feature points obtained in Step3 are subsequently used for expression classification. Through experiments, the present invention chose 37 of the facial feature points located by the multi-template AAM as input features, specifically: 5 feature points each for the left and right eyebrows, 6 feature points each for the left and right eyes, 3 feature points for the nose, and 12 feature points on the outer lip contour.
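The multi-template selection loop of Step1–Step3 can be sketched as follows. A real AAM with inverse compositional fitting is far beyond a short snippet, so each template here is a hypothetical stand-in whose fit() returns a (fitting error, landmarks) pair:

```python
import numpy as np

class DummyTemplate:
    """Stand-in for a trained per-pose AAM template (not a real AAM)."""
    def __init__(self, pose_deg):
        self.pose_deg = pose_deg
    def fit(self, image):
        # Stand-in error model: templates closer to the image's true pose
        # fit better (the true pose is hard-coded to 15 degrees here).
        err = abs(self.pose_deg - 15) + 0.5
        landmarks = np.full((68, 2), float(self.pose_deg))  # placeholder points
        return err, landmarks

# Step1: one template per pose on a 15-degree grid from -45 to +45 degrees.
templates = [DummyTemplate(p) for p in range(-45, 46, 15)]

def estimate_pose_and_landmarks(image, templates):
    # Step2/Step3: fit with every template, keep the minimum-error one;
    # its pose is the pose estimate, its landmarks the localization output.
    results = [(t.fit(image), t.pose_deg) for t in templates]
    (err, landmarks), pose = min(results, key=lambda r: r[0][0])
    return pose, landmarks

pose, pts = estimate_pose_and_landmarks(None, templates)
print(pose)  # → 15
```

The design point is that pose estimation and landmark localization come out of the same minimum-error fit, which is what lets the method feed Step 2's pose-specific mapping directly.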
In Step 5 of the present invention, the expression recognition results are divided into six classes, namely anger, disgust, fear, happiness, sadness, and surprise; the support vector machine with the radial basis kernel uses the libsvm package. Concretely, the classification method based on the support vector machine is as follows. The Support Vector Machine (SVM) is a learning algorithm built on statistical learning theory and the structural risk minimization principle, and shows many distinctive advantages in small-sample, nonlinear, and high-dimensional pattern recognition. As a supervised learning algorithm, the SVM handles nonlinear problems by introducing a kernel function: the positive and negative samples are regarded as two sets in a high-dimensional space, and a hyperplane is sought that divides the space into two parts such that the positive and negative sample sets fall into different half-spaces and the margin between the two sets is maximized. The pose-normalization non-frontal expression recognition solution uses a support vector machine for classification, with the Radial Basis Function (RBF) as the kernel function, as in formula (12):

$$K(u, v) = \exp(-\gamma \|u - v\|^2), \quad \gamma > 0 \qquad (12)$$
In the concrete implementation, the C-SVM form provided by libsvm, designed and developed by Professor Lin Chih-Jen and colleagues, is adopted, and the "one-versus-one" strategy extends the two-class problem to the multi-class pattern recognition problem. The parameter γ of the RBF kernel and the C-SVM penalty parameter C, which controls the degree of punishment for misclassified samples, are optimized by grid search: the γ and C with the highest cross-validation accuracy on the training set are taken as the optimal parameters, and when several (γ, C) pairs share the same highest accuracy, the pair with the smallest C is selected.
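A sketch of the Step 5 classifier, using scikit-learn's SVC (which wraps libsvm) as a stand-in for the libsvm package itself: an RBF-kernel C-SVM, libsvm's built-in one-versus-one multi-class handling, and a grid search over (C, γ) by cross-validation accuracy. The 74-dimensional inputs (37 feature points × 2 coordinates) are synthetic clusters, not real landmark data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["angry", "disgust", "fear", "happy", "sad", "surprised"]
X_parts, y = [], []
for i, name in enumerate(classes):
    center = np.zeros(74)            # 37 feature points -> 74 coordinates
    center[i] = 5.0                  # well-separated synthetic clusters
    X_parts.append(center + 0.1 * rng.standard_normal((30, 74)))
    y += [name] * 30
X = np.vstack(X_parts)

# Grid search over (C, gamma); the winner is the pair with the highest
# cross-validation accuracy on the training set.
grid = {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(SVC(kernel="rbf", decision_function_shape="ovo"),
                      grid, cv=3)
search.fit(X, y)
print(search.best_score_ >= 0.9)    # clusters are separable
print(search.predict(X[:1])[0])     # first sample belongs to "angry"
```

GridSearchCV breaks accuracy ties by parameter order rather than by smallest C, so the patent's tie-breaking rule would need a small custom loop over the CV results; the sketch keeps the default behavior for brevity.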

Claims (5)

1. A pose-normalization-based non-frontal facial expression recognition method, characterized in that:
the expression images in the training sample set are learned through a nonlinear regression model to obtain the mapping functions from non-frontal facial feature points to frontal facial feature points;
face pose estimation and feature point localization are performed on the expression images in the test samples, and the mapping function of the corresponding pose is used to normalize the feature points of the non-frontal face to the frontal pose;
a support vector machine classifies the expression from the geometric positions of the frontal facial feature points.
2. The pose-normalization-based non-frontal facial expression recognition method according to claim 1, characterized in that: the nonlinear regression model adopts a Gaussian process regression model, and the kernel function of the Gaussian process regression model uses a form combining the squared exponential kernel with the linear kernel.
3. The pose-normalization-based non-frontal facial expression recognition method according to claim 1, characterized in that: the face pose estimation and feature point localization adopt a multi-template Active Appearance Model.
4. The pose-normalization-based non-frontal facial expression recognition method according to claim 1, characterized in that: the geometric positions of the frontal facial feature points comprise the two-dimensional geometric coordinates of 37 feature points in total on the eyes, nose, and outer lip contour.
5. The pose-normalization-based non-frontal facial expression recognition method according to claim 1, characterized in that: the support vector machine adopts the libsvm package based on the radial basis kernel.
CN201310261775.XA 2013-06-26 2013-06-26 Method identifying non-front-side facial expression based on attitude normalization Active CN103400105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310261775.XA CN103400105B (en) 2013-06-26 2013-06-26 Method identifying non-front-side facial expression based on attitude normalization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310261775.XA CN103400105B (en) 2013-06-26 2013-06-26 Method identifying non-front-side facial expression based on attitude normalization

Publications (2)

Publication Number Publication Date
CN103400105A true CN103400105A (en) 2013-11-20
CN103400105B CN103400105B (en) 2017-05-24

Family

ID=49563724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310261775.XA Active CN103400105B (en) 2013-06-26 2013-06-26 Method identifying non-front-side facial expression based on attitude normalization

Country Status (1)

Country Link
CN (1) CN103400105B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447473A (en) * 2015-12-14 2016-03-30 江苏大学 PCANet-CNN-based arbitrary attitude facial expression recognition method
CN106125004A (en) * 2016-08-29 2016-11-16 哈尔滨理工大学 Lithium battery health status Forecasting Methodology based on neutral net kernel function GPR
CN106405427A (en) * 2016-08-29 2017-02-15 哈尔滨理工大学 Lithium battery state of health prediction method based on neural network and Maternard kernel function GPR
CN106845327A (en) * 2015-12-07 2017-06-13 展讯通信(天津)有限公司 The training method of face alignment model, face alignment method and device
CN108073855A (en) * 2016-11-11 2018-05-25 腾讯科技(深圳)有限公司 A kind of recognition methods of human face expression and system
CN108197547A (en) * 2017-12-26 2018-06-22 深圳云天励飞技术有限公司 Face pose estimation, device, terminal and storage medium
CN108256426A (en) * 2017-12-15 2018-07-06 安徽四创电子股份有限公司 A kind of facial expression recognizing method based on convolutional neural networks
CN108304800A (en) * 2018-01-30 2018-07-20 厦门启尚科技有限公司 A kind of method of Face datection and face alignment
CN108363413A (en) * 2018-01-18 2018-08-03 深圳市中科智诚科技有限公司 A kind of face recognition device that the accuracy of identification with Face detection function is high
CN108805009A (en) * 2018-04-20 2018-11-13 华中师范大学 Classroom learning state monitoring method based on multimodal information fusion and system
CN108985257A (en) * 2018-08-03 2018-12-11 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109711378A (en) * 2019-01-02 2019-05-03 河北工业大学 Human face expression automatic identifying method
CN105095827B (en) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition device and method
CN109993063A (en) * 2019-03-05 2019-07-09 福建天晴数码有限公司 A kind of method and terminal identified to rescue personnel
CN110909618A (en) * 2019-10-29 2020-03-24 泰康保险集团股份有限公司 Pet identity recognition method and device
CN112215050A (en) * 2019-06-24 2021-01-12 北京眼神智能科技有限公司 Nonlinear 3DMM face reconstruction and posture normalization method, device, medium and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794265A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN102013011A (en) * 2010-12-16 2011-04-13 重庆大学 Front-face-compensation-operator-based multi-pose human face recognition method
CN102663351A (en) * 2012-03-16 2012-09-12 江南大学 Face characteristic point automation calibration method based on conditional appearance model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU Yuening et al.: "Application of AAM in multi-pose face feature point detection", Computer Engineering and Applications (《计算机工程与应用》) *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095827B (en) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition device and method
CN106845327A (en) * 2015-12-07 2017-06-13 展讯通信(天津)有限公司 The training method of face alignment model, face alignment method and device
CN106845327B (en) * 2015-12-07 2019-07-02 展讯通信(天津)有限公司 Training method, face alignment method and the device of face alignment model
CN105447473A (en) * 2015-12-14 2016-03-30 江苏大学 PCANet-CNN-based arbitrary attitude facial expression recognition method
CN105447473B (en) * 2015-12-14 2019-01-08 江苏大学 A kind of any attitude facial expression recognizing method based on PCANet-CNN
CN106125004A (en) * 2016-08-29 2016-11-16 哈尔滨理工大学 Lithium battery health status Forecasting Methodology based on neutral net kernel function GPR
CN106405427A (en) * 2016-08-29 2017-02-15 哈尔滨理工大学 Lithium battery state of health prediction method based on neural network and Maternard kernel function GPR
CN108073855A (en) * 2016-11-11 2018-05-25 腾讯科技(深圳)有限公司 A kind of recognition methods of human face expression and system
CN108256426A (en) * 2017-12-15 2018-07-06 安徽四创电子股份有限公司 A kind of facial expression recognizing method based on convolutional neural networks
CN108197547B (en) * 2017-12-26 2019-12-17 深圳云天励飞技术有限公司 Face pose estimation method, device, terminal and storage medium
CN108197547A (en) * 2017-12-26 2018-06-22 深圳云天励飞技术有限公司 Face pose estimation, device, terminal and storage medium
CN108363413A (en) * 2018-01-18 2018-08-03 深圳市中科智诚科技有限公司 A kind of face recognition device that the accuracy of identification with Face detection function is high
CN108363413B (en) * 2018-01-18 2020-12-18 深圳市海清视讯科技有限公司 Face recognition equipment with face positioning function
CN108304800A (en) * 2018-01-30 2018-07-20 厦门启尚科技有限公司 A kind of method of Face datection and face alignment
CN108805009A (en) * 2018-04-20 2018-11-13 华中师范大学 Classroom learning state monitoring method based on multimodal information fusion and system
CN108985257A (en) * 2018-08-03 2018-12-11 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109711378A (en) * 2019-01-02 2019-05-03 河北工业大学 Human face expression automatic identifying method
CN109711378B (en) * 2019-01-02 2020-12-22 河北工业大学 Automatic facial expression recognition method
CN109993063A (en) * 2019-03-05 2019-07-09 福建天晴数码有限公司 A kind of method and terminal identified to rescue personnel
CN112215050A (en) * 2019-06-24 2021-01-12 北京眼神智能科技有限公司 Nonlinear 3DMM face reconstruction and posture normalization method, device, medium and equipment
CN110909618A (en) * 2019-10-29 2020-03-24 泰康保险集团股份有限公司 Pet identity recognition method and device
CN110909618B (en) * 2019-10-29 2023-04-21 泰康保险集团股份有限公司 Method and device for identifying identity of pet

Also Published As

Publication number Publication date
CN103400105B (en) 2017-05-24

Similar Documents

Publication Publication Date Title
CN103400105A (en) Method identifying non-front-side facial expression based on attitude normalization
Zhang et al. Pedestrian detection method based on Faster R-CNN
CN102332086B (en) Facial identification method based on dual threshold local binary pattern
CN103198303B (en) A kind of gender identification method based on facial image
CN109902590A (en) Pedestrian's recognition methods again of depth multiple view characteristic distance study
CN104392241B (en) A kind of head pose estimation method returned based on mixing
CN105046197A (en) Multi-template pedestrian detection method based on cluster
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
CN104834941A (en) Offline handwriting recognition method of sparse autoencoder based on computer input
CN104680144A (en) Lip language recognition method and device based on projection extreme learning machine
CN103226835A (en) Target tracking method and system based on on-line initialization gradient enhancement regression tree
CN104392246A (en) Inter-class inner-class face change dictionary based single-sample face identification method
CN108960258A (en) A kind of template matching method based on self study depth characteristic
CN103927554A (en) Image sparse representation facial expression feature extraction system and method based on topological structure
CN102799872A (en) Image processing method based on face image characteristics
CN103714554A (en) Video tracking method based on spread fusion
CN105809713A (en) Object tracing method based on online Fisher discrimination mechanism to enhance characteristic selection
CN105976397A (en) Target tracking method based on half nonnegative optimization integration learning
Chen et al. Robust vehicle detection and viewpoint estimation with soft discriminative mixture model
Liang et al. Dynamic and combined gestures recognition based on multi-feature fusion in a complex environment
Tan et al. L1-norm latent SVM for compact features in object detection
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning
CN103530651A (en) Head attitude estimation method based on label distribution
Zhang et al. Dynamic gesture recognition based on fusing frame images
CN106250818A (en) A kind of total order keeps the face age estimation method of projection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant