CN101908149A - Method for identifying facial expressions from human face image sequence - Google Patents

Method for identifying facial expressions from human face image sequence

Info

Publication number
CN101908149A
CN101908149A
Authority
CN
China
Prior art keywords
image
mfv
image sequence
value
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010102185432A
Other languages
Chinese (zh)
Inventor
吕坤
张欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN2010102185432A priority Critical patent/CN101908149A/en
Publication of CN101908149A publication Critical patent/CN101908149A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for recognizing facial expressions from a face image sequence, and belongs to the technical field of facial expression analysis and recognition. The method comprises the following steps: first, using a feature-point tracking method, the normalized displacements of facial key points and the lengths of specific geometric features are extracted from each frame of an expression image sequence and combined into a feature column vector; second, all feature column vectors of the sequence are arranged in order to form a feature matrix, each feature matrix representing one facial expression image sequence; finally, the similarities between feature matrices are compared by canonical correlation analysis, so that the face images to be recognized are assigned to one of the six basic expressions of happiness, sadness, fear, disgust, surprise and anger. The invention successfully applies canonical correlation analysis to facial expression recognition, makes effective use of the dynamic information in the expression production process, and achieves a higher recognition rate with a shorter CPU computation time.

Description

A method for recognizing facial expressions from a face image sequence
Technical field
The present invention relates to a method for recognizing facial expressions from a face image sequence, and belongs to the technical field of facial expression analysis and recognition.
Background art
With the rapid development of computer technology, automatic facial expression analysis and recognition will make facial expression a new channel of human-computer interaction and make the interaction process more natural and effective. Facial expression analysis and recognition involve three basic problems: 1. how to detect and locate faces in an image; 2. how to extract effective expression features from the detected face image or face image sequence; 3. how to design an appropriate classification method to identify the expression category. In recent years much research has been devoted to recognizing facial expressions from image sequences. Cohn et al., in "Feature-Point Tracking by Optical Flow Discriminates Subtle Differences in Facial Expression" (Int'l Conf. Automatic Face and Gesture Recognition, pp. 396-401, 1998), proposed an optical-flow-based method for recognizing subtle changes in facial expression. Lajevardi et al., in "Facial expression recognition from image sequences using optimized feature selection" (IVCNZ 2008, New Zealand, pp. 1-6, 2008), disclosed a method that selects features by optimization and recognizes facial expressions with a naive Bayes (NB) classifier. Sun Zhengxing et al., in "An LSVM algorithm for expression classification of video sequences" (Journal of Computer-Aided Design & Computer Graphics, Vol. 21, No. 4, 2009), disclosed a method that extracts geometric expression features from video faces with a point-tracking-based active shape model (ASM) and classifies the expressions with a local support vector machine (LSVM) classifier. The drawback of these methods is that they extract features only from the peak expression frame and ignore the important temporal dynamic information contained in the expression production process, so their recognition accuracy is not high.
In addition, although the methods proposed in "Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences" (IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 5, May 2005), "Manifold based analysis of facial expression" (Image and Vision Computing 24 (2006) 605-614) and "Facial expression recognition from video sequences: temporal and static modeling" (Computer Vision and Image Understanding 91 (2003) 160-187) do use the temporal dynamic information of the expression production process, their computation is complex and their computational cost is high.
An important piece of prior art used by the present invention is canonical correlation analysis (CCA).
Canonical correlation analysis is a classical tool in statistical analysis that measures the linear relationship between two or more data sets. The canonical correlation coefficients are defined as the cosines of the principal angles $\theta_i$ between two $d$-dimensional linear subspaces $L_1$ and $L_2$:

$\cos\theta_i = \max_{u_i \in L_1,\, v_i \in L_2} u_i^T v_i \quad (1 \le i \le d)$,

subject to

$u_i^T u_i = v_i^T v_i = 1, \qquad u_i^T u_j = v_i^T v_j = 0 \ (i \ne j)$,

where the parameter $d$ denotes the dimension of the linear subspaces.
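For illustration only (this sketch is not part of the original disclosure), the canonical correlation coefficients $\cos\theta_i$ between the column spans of two matrices can be computed in numpy as the singular values of the product of orthonormal bases of the two subspaces:

    import numpy as np

    def canonical_correlations(A, B, d=10):
        """Cosines of the principal angles between span(A) and span(B).

        A, B: matrices whose columns span the two subspaces (here, the
              54 x m feature matrices of two image sequences).
        d:    number of leading canonical correlations to keep.
        """
        Qa, _ = np.linalg.qr(A)   # orthonormal basis of span(A)
        Qb, _ = np.linalg.qr(B)   # orthonormal basis of span(B)
        # Singular values of Qa^T Qb are cos(theta_i), in descending order.
        return np.linalg.svd(Qa.T @ Qb, compute_uv=False)[:d]

The similarity of two sequences can then be scored as the sum of these coefficients.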
In recent years the canonical correlation analysis technique has been successfully applied to fields such as image-set matching and face or object recognition, so applying it to the expression recognition problem is, in theory, a simple but effective approach. In expression recognition, however, the face images of different expressions of the same person differ little; even the images of two opposite expressions are not very different. Simply applying canonical correlation analysis to expression recognition therefore does not give good results. To date, no literature or practical application of canonical correlation analysis to facial expression recognition has been found.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a method for recognizing facial expressions from a face image sequence. The invention uses a facial feature-point tracking method to extract, frame by frame, the normalized displacements of facial key points and the lengths of specific geometric features from each expression image sequence, and combines these data into a feature column vector; all feature column vectors of a sequence are arranged in order to form a feature matrix, each feature matrix representing one expression image sequence; the similarities between feature matrices are then compared by canonical correlation analysis, so that the face images to be recognized are assigned to one of the six basic expressions (happiness, sadness, fear, disgust, surprise and anger).
The object of the invention is achieved through the following technical solution.
A method for recognizing facial expressions from a face image sequence, whose specific operation steps are:
Step 1: selecting image sequences
Select image sequences representing the six basic expressions of happiness, sadness, fear, disgust, surprise and anger from a facial expression database, with more than 20 image sequences per basic expression; choose m frames (m ≥ 10, m a positive integer) from each expression image sequence, each sequence running from a neutral expression image to the peak expression image.
Step 2: marking facial feature points
On the basis of step 1, mark the facial feature points; specifically:
Step 2.1: mark 20 facial feature points in the first frame of each expression image sequence. Feature points 1 and 2 lie at the brow heads of the right and left eyebrows; points 3 and 4 at the brow tails of the right and left eyebrows; points 5 and 6 at the inner corners of the right and left eyes; points 7 and 8 at the lowest points of the right and left eyes; points 9 and 10 at the outer corners of the right and left eyes; points 11 and 12 at the highest points of the right and left eyes; points 13 and 14 at the rightmost point and the leftmost point of the nose wings; point 15 at the nose tip; points 16 and 17 at the rightmost and leftmost points of the mouth corners; points 18 and 19 at the highest and lowest points where the lip midline crosses the lip contour; and point 20 at the lowest point where the face midline crosses the face contour.
Methods for marking the 20 facial feature points include, but are not limited to: 1. manual marking; 2. the Gabor-feature-based boosted classifier method proposed by Vukadinovic et al. in "Fully automatic facial feature point detection using gabor feature based boosted classifiers" (Proc. IEEE Int'l Conf. on Systems, Man and Cybernetics, pp. 1692-1698, 2005), which locates the 20 facial feature points automatically.
Step 2.2: compute the positions of feature points 21 and 22 in the first frame of each expression image sequence from the statistics of the spatial relationships between the eyes and cheeks and between the nose and cheeks reported by Farkas in "Anthropometry of the Head and Face" (New York: Raven Press, 1994); points 21 and 22 lie at the cheekbone positions of the right and left cheeks.
Step 2.3: using the particle filter tracking method based on factorized likelihoods proposed by Patras et al. in "Particle filtering with factorized likelihoods for tracking facial features" (Proc. Int'l Conf. Automatic Face & Gesture Recognition, pp. 97-102, 2004), track the 22 facial feature points in the subsequent frames of each expression image sequence, starting from their positions in the first frame.
Step 2.4: use an affine transformation to adjust the position and size of the face in each image so that the faces in the same image sequence are equal in size and consistent in position. Specifically:
First make the line joining the two inner eye corners in the first frame of each image sequence horizontal; then, according to the positions of three points in the first frame of the sequence, namely the two inner eye corners and the topmost point of the philtrum, map and normalize the 22 facial feature points in each remaining frame. After the affine transformation, the faces in all images of the same sequence are equal in size, and the positions of these three points coincide with their positions in the first frame.
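The text does not give an implementation of this transform; the following numpy sketch, with illustrative names, solves the affine map exactly from the three reference-point correspondences and applies it to the tracked points:

    import numpy as np

    def affine_from_3pts(src, dst):
        """Solve the 2x3 affine matrix M with M @ [x, y, 1]^T mapping src to dst.

        src, dst: (3, 2) arrays holding the two inner-eye-corner points and the
                  topmost philtrum point in the current frame and the first frame.
        """
        src_h = np.hstack([src, np.ones((3, 1))])        # homogeneous coordinates
        M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # exact for 3 points
        return M.T                                        # shape (2, 3)

    def align_points(points, M):
        """Apply the affine transform to an (N, 2) array of feature points."""
        pts_h = np.hstack([points, np.ones((len(points), 1))])
        return pts_h @ M.T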
Step 3: extracting facial expression features
On the basis of step 2, extract the facial expression features from each image in turn; specifically:
Step 3.1: establish an Xb-Yb coordinate system with its origin at the lower-left corner of each image, the Xb axis pointing horizontally to the right and the Yb axis pointing vertically upward; from the pixel locations of the 22 feature points in each image, obtain their coordinates $(xb_i, yb_i)$ in the Xb-Yb system, where i = 1, 2, ..., 22, together with the coordinates $(x_{origin}, y_{origin})$ of the topmost point of the philtrum of the face in each image;
Step 3.2: establish an X-Y coordinate system with its origin at the topmost point of the philtrum of the face in each image, the X axis pointing horizontally to the right and the Y axis pointing vertically upward; obtain the coordinates $(x_i, y_i)$ of the 22 feature points in the X-Y system by formulas 1 and 2:

$x_i = xb_i - x_{origin}$   (1)
$y_i = yb_i - y_{origin}$   (2)
Step 3.3: obtain the abscissa displacement $\Delta x_i'$ and the ordinate displacement $\Delta y_i'$ of the 22 feature points of each image by formulas 3 and 4:

$\Delta x_i' = x_i - \bar{x}_i$   (3)
$\Delta y_i' = y_i - \bar{y}_i$   (4)

where $\bar{x}_i$ and $\bar{y}_i$ are the abscissa and ordinate of the corresponding feature point in the first frame of the image sequence to which this image belongs.
Step 3.4: obtain the normalized abscissa displacement $\Delta x_i$ and ordinate displacement $\Delta y_i$ of the 22 feature points of each image by formulas 5 and 6:

$\Delta x_i = \Delta x_i' / x_{base}$   (5)
$\Delta y_i = \Delta y_i' / y_{base}$   (6)

where $x_{base} = x_6 - x_5$ and $y_{base} = y_6 - y_5$; $x_5$ and $x_6$ are the abscissas, and $y_5$ and $y_6$ the ordinates, of feature points 5 and 6 of this image.
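A literal numpy transcription of equations (1)-(6), with illustrative names (note that the text's definition $y_{base} = y_6 - y_5$ assumes the two inner eye corners are not at exactly the same height):

    import numpy as np

    def normalized_displacements(pts, pts_first, origin, origin_first):
        """Normalized feature-point displacements per equations (1)-(6).

        pts, pts_first:       (22, 2) feature points of the current frame and
                              the first frame, in image (Xb-Yb) coordinates.
        origin, origin_first: topmost philtrum point of each of the two frames.
        """
        xy = pts - origin                  # eqs. (1)-(2): shift into X-Y coords
        xy0 = pts_first - origin_first
        d = xy - xy0                       # eqs. (3)-(4): raw displacements
        x_base = xy[5, 0] - xy[4, 0]       # x_6 - x_5 (0-based rows 5 and 4)
        y_base = xy[5, 1] - xy[4, 1]       # y_6 - y_5, as defined in the text
        return d[:, 0] / x_base, d[:, 1] / y_base   # eqs. (5)-(6)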
Step 3.5: obtain the 10 geometric distance features $mfv_1$ to $mfv_{10}$ of each image; specifically:

Obtain the eye openness value $mfv_1$ by formula 7:

$mfv_1 = ((y_{11} - y_7) + (y_{12} - y_8))/2$   (7)

where $y_7$, $y_8$, $y_{11}$, $y_{12}$ are the ordinates of feature points 7, 8, 11 and 12 of this image.

Obtain the eye width value $mfv_2$ by formula 8:

$mfv_2 = ((x_5 - x_9) + (x_{10} - x_6))/2$   (8)

where $x_5$, $x_6$, $x_9$, $x_{10}$ are the abscissas of feature points 5, 6, 9 and 10 of this image.

Obtain the brow-head height value $mfv_3$ by formula 9:

$mfv_3 = (y_1 + y_2)/2$   (9)

where $y_1$, $y_2$ are the ordinates of feature points 1 and 2 of this image.

Obtain the brow-tail height value $mfv_4$ by formula 10:

$mfv_4 = (y_3 + y_4)/2$   (10)

where $y_3$, $y_4$ are the ordinates of feature points 3 and 4 of this image.

Obtain the eyebrow width value $mfv_5$ by formula 11:

$mfv_5 = ((x_1 - x_3) + (x_4 - x_2))/2$   (11)

where $x_1$, $x_2$, $x_3$, $x_4$ are the abscissas of feature points 1, 2, 3 and 4 of this image.

Obtain the mouth openness value $mfv_6$ by formula 12:

$mfv_6 = y_{18} - y_{19}$   (12)

where $y_{18}$, $y_{19}$ are the ordinates of feature points 18 and 19 of this image.

Obtain the mouth width value $mfv_7$ by formula 13:

$mfv_7 = x_{17} - x_{16}$   (13)

where $x_{16}$, $x_{17}$ are the abscissas of feature points 16 and 17 of this image.

Obtain the nose-to-mouth-corner distance value $mfv_8$ by formula 14:

$mfv_8 = ((y_{15} - y_{16}) + (y_{15} - y_{17}))/2$   (14)

where $y_{15}$, $y_{16}$, $y_{17}$ are the ordinates of feature points 15, 16 and 17 of this image.

Obtain the eye-to-cheek distance value $mfv_9$ by formula 15:

$mfv_9 = (((y_{11} + y_7)/2 - y_{21}) + ((y_{12} + y_8)/2 - y_{22}))/2$   (15)

where $y_{21}$, $y_{22}$ are the ordinates of feature points 21 and 22 of this image.

Obtain the nose-to-chin distance value $mfv_{10}$ by formula 16:

$mfv_{10} = y_{15} - y_{20}$   (16)

where $y_{15}$, $y_{20}$ are the ordinates of feature points 15 and 20 of this image.
Step 3.6: normalize the 10 geometric distance features $mfv_1$ to $mfv_{10}$ of each image by formula 17:

$mfv_j = mfv_j / \overline{mfv}_j$   (17)

where j = 1, 2, ..., 10, and $\overline{mfv}_j$ is the corresponding geometric distance feature in the first frame of the image sequence to which this image belongs.
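A numpy sketch of steps 3.5 and 3.6 (illustrative only; the 1-based feature-point numbers of the text are mapped to 0-based array indices):

    import numpy as np

    def geometric_features(x, y):
        """The ten geometric distance features mfv_1..mfv_10, eqs. (7)-(16).

        x, y: length-22 arrays of feature-point coordinates in the X-Y system,
              indexed so that x[i-1], y[i-1] belong to feature point i.
        """
        return np.array([
            ((y[10] - y[6]) + (y[11] - y[7])) / 2,        # (7)  eye openness
            ((x[4] - x[8]) + (x[9] - x[5])) / 2,          # (8)  eye width
            (y[0] + y[1]) / 2,                            # (9)  brow-head height
            (y[2] + y[3]) / 2,                            # (10) brow-tail height
            ((x[0] - x[2]) + (x[3] - x[1])) / 2,          # (11) eyebrow width
            y[17] - y[18],                                # (12) mouth openness
            x[16] - x[15],                                # (13) mouth width
            ((y[14] - y[15]) + (y[14] - y[16])) / 2,      # (14) nose-mouth-corner
            (((y[10] + y[6]) / 2 - y[20])
             + ((y[11] + y[7]) / 2 - y[21])) / 2,         # (15) eye-cheek
            y[14] - y[19],                                # (16) nose-chin
        ])

    # Normalization, eq. (17): divide by the first frame's features, e.g.
    # mfv = geometric_features(x, y) / geometric_features(x0, y0)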
Step 3.7: for each image, combine the 10 geometric distance features $mfv_1$ to $mfv_{10}$ with the normalized abscissa and ordinate displacements $\Delta x_i$, $\Delta y_i$ of its 22 feature points to form a 54-dimensional column vector $z_k$ representing the expression information of one face image, where $z_k \in R^{54}$, $1 \le k \le m$, and R denotes the real numbers;
Step 3.8: represent each facial expression image sequence by the feature matrix $Z = \{z_1, z_2, \ldots, z_m\} \in R^{54 \times m}$, where $z_1$ corresponds to the neutral expression and $z_m$ to the peak expression.
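As a small illustrative sketch (the ordering of the 54 components within $z_k$ is not specified in the text; any fixed order will do):

    import numpy as np

    def feature_matrix(frames_mfv, frames_dx, frames_dy):
        """Stack the per-frame features into Z in R^{54 x m} (step 3.8).

        frames_mfv:           list of m length-10 normalized geometric features.
        frames_dx, frames_dy: lists of m length-22 normalized displacements.
        """
        cols = [np.concatenate([mfv, dx, dy])        # 10 + 22 + 22 = 54 dims
                for mfv, dx, dy in zip(frames_mfv, frames_dx, frames_dy)]
        return np.stack(cols, axis=1)                # columns z_1 ... z_m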
Step 4: classifying the test image sequences
On the basis of step 3, classify the test image sequences by canonical correlation analysis; specifically:
Step 4.1: randomly divide the image sequences of each basic expression chosen in step 1 into two parts, one as training data and one as test data; the number of training sequences is Q, with Q ≥ 20 and Q a positive integer; each item of training data is one expression image sequence, and each item of test data is likewise one expression image sequence.
Step 4.2: apply to the training data obtained in step 4.1 the canonical correlation discriminant analysis method proposed by T.-K. Kim et al. in "Discriminative Learning and Recognition of Image Set Classes Using Canonical Correlations" (IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, 2007), obtaining a transformation matrix $T \in R^{54 \times n}$, where n < 54 and n is a positive integer; then use T to transform the feature matrix Z of every image sequence chosen in step 1 (both training and test data), obtaining $Z' = T^T Z$.
Step 4.3: randomly pick one expression image sequence from the test data of step 4.1, and compute the sum of the canonical correlation coefficients between its feature matrix Z' and the feature matrix Z' of each item of training data.
Step 4.4: on the basis of the results of step 4.3, compute for each of the six basic expressions the mean of the canonical-correlation sums between this expression image sequence and that expression's training sequences; the expression corresponding to the largest of the six mean values is taken as the classification result.
Through the above steps, the expression recognition of the test image sequence is completed.
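An illustrative sketch of the decision rule of step 4, reusing canonical_correlations from the sketch above (the discriminative learning of T follows Kim et al. and is not reproduced here):

    import numpy as np

    def classify(Z_test, train_sets, T):
        """Assign a test sequence to the expression whose training sequences
        give the largest mean canonical-correlation score (steps 4.2-4.4).

        Z_test:     54 x m feature matrix of the test sequence.
        train_sets: dict mapping expression label -> list of 54 x m matrices.
        T:          54 x n transformation matrix learned on the training data.
        """
        Zt = T.T @ Z_test                            # Z' = T^T Z
        scores = {}
        for label, seqs in train_sets.items():
            sums = [canonical_correlations(Zt, T.T @ Z).sum() for Z in seqs]
            scores[label] = float(np.mean(sums))     # mean score per expression
        return max(scores, key=scores.get)           # most similar expression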
Beneficial effects
Compared with existing recognition methods, the method of the present invention for recognizing facial expressions from a face image sequence successfully applies canonical correlation analysis to facial expression recognition, makes effective use of the dynamic information in the expression production process, and achieves a high recognition rate with a short CPU computation time.
Description of the drawings
Fig. 1 shows 7 frames of one 15-frame image sequence used in the specific embodiment of the present invention;
Fig. 2 shows the feature points in a face image and the Xb-Yb and X-Y coordinate systems in the specific embodiment of the present invention;
Fig. 3 is a schematic diagram of the overall structural framework of the method of the invention.
Embodiment
The present invention is described in detail below with reference to the drawings and a specific embodiment.
This embodiment uses the Cohn-Kanade facial expression database, from which 212 image sequences of 50 subjects representing the six basic expressions of happiness, sadness, fear, disgust, surprise and anger were selected. From each expression image sequence, 15 frames were chosen, running from the neutral expression image to the peak expression image. Fig. 1 shows 7 frames of one such 15-frame sequence. The 22 feature points in each face image and the Xb-Yb and X-Y coordinate systems are shown in Fig. 2. The image sequences of 35 subjects were used as the training set and the remaining sequences as the test set, thereby ensuring a subject-independent expression classification result. Each test was run 5 times with randomly chosen training and test sets, and the average results were computed. In the learning and classification process, the parameter d, the dimension of the linear subspaces, was set to 10, and the parameter n, the dimension of the transformation matrix T, was set to 20.
Fig. 3 shows the overall structural framework of the method of the invention. Using this method, the confusion matrix of the recognition results is given in Table 1. The diagonal elements of the matrix are the percentages of facial expressions correctly classified, and the off-diagonal elements are the corresponding misclassification percentages. The average accuracy of the proposed method exceeds 90%.
Table 1. Confusion matrix of the recognition results of the proposed method (%)

              Happiness  Sadness  Fear   Disgust  Surprise  Anger
  Happiness     95.2       0       2.6     0        2.2       0
  Sadness        0        92.5     0       2.8      0         4.7
  Fear           8.8       0      87.2     0        0         4.0
  Disgust        0         1.5     7.4    91.1      0         0
  Surprise       5.2       0       1.2     0       90.5       3.1
  Anger          0        10.3     0       4.2      0        85.5
To illustrate the effect of the present invention, the optimized feature selection method and the LSVM method were tested on the same data; the results are given in Table 2.

Table 2. Comparison of the recognition rates of different methods (%)

                                Happiness  Sadness  Fear   Disgust  Surprise  Anger  Average accuracy
  Proposed method                 95.2      92.5    87.2    91.1     90.5     85.5        90.3
  Optimized feature selection     76.3      67.6    55      100      78       100         79.5
  LSVM                            91.3      86.4    89.6    88.3     92.5     86.5        89.1
The tests show that the method of the invention achieves higher accuracy, and, as can be seen from its operation steps, the method is simple.
The above is only a preferred embodiment of the present invention. It should be understood that those skilled in the art can make improvements, or replace some of the technical features with equivalents, without departing from the principle of the invention; such improvements and replacements shall also be regarded as falling within the protection scope of the present invention.

Claims (3)

1. A method for recognizing facial expressions from a face image sequence, characterized in that its specific operation steps are:
    Step 1: selecting image sequences
    Select image sequences representing the six basic expressions of happiness, sadness, fear, disgust, surprise and anger from a facial expression database, with more than 20 image sequences per basic expression; choose m frames from each expression image sequence, where m ≥ 10 and m is a positive integer; each expression image sequence runs from a neutral expression image to the peak expression image;
    Step 2: marking facial feature points
    On the basis of step 1, mark the facial feature points; specifically:
    Step 2.1: mark 20 facial feature points in the first frame of each expression image sequence, wherein feature points 1 and 2 lie at the brow heads of the right and left eyebrows; points 3 and 4 at the brow tails of the right and left eyebrows; points 5 and 6 at the inner corners of the right and left eyes; points 7 and 8 at the lowest points of the right and left eyes; points 9 and 10 at the outer corners of the right and left eyes; points 11 and 12 at the highest points of the right and left eyes; points 13 and 14 at the rightmost point and the leftmost point of the nose wings; point 15 at the nose tip; points 16 and 17 at the rightmost and leftmost points of the mouth corners; points 18 and 19 at the highest and lowest points where the lip midline crosses the lip contour; and point 20 at the lowest point where the face midline crosses the face contour;
    Step 2.2: compute the positions of feature points 21 and 22 in the first frame of each expression image sequence from the statistics of the spatial relationships between the eyes and cheeks and between the nose and cheeks reported by Farkas in "Anthropometry of the Head and Face"; points 21 and 22 lie at the cheekbone positions of the right and left cheeks;
    Step 2.3: using the particle filter tracking method based on factorized likelihoods proposed by Patras et al. in "Particle filtering with factorized likelihoods for tracking facial features", track the 22 facial feature points in the subsequent frames of each expression image sequence, starting from their positions in the first frame;
    Step 2.4: use an affine transformation to adjust the position and size of the face in each image so that the faces in the same image sequence are equal in size and consistent in position;
    Step 3: extracting facial expression features
    On the basis of step 2, extract the facial expression features from each image in turn; specifically:
    Step 3.1: establish an Xb-Yb coordinate system with its origin at the lower-left corner of each image, the Xb axis pointing horizontally to the right and the Yb axis pointing vertically upward; from the pixel locations of the 22 feature points in each image, obtain their coordinates $(xb_i, yb_i)$ in the Xb-Yb system, where i = 1, 2, ..., 22, together with the coordinates $(x_{origin}, y_{origin})$ of the topmost point of the philtrum of the face in each image;
    Step 3.2: establish an X-Y coordinate system with its origin at the topmost point of the philtrum of the face in each image, the X axis pointing horizontally to the right and the Y axis pointing vertically upward; obtain the coordinates $(x_i, y_i)$ of the 22 feature points in the X-Y system by formulas 1 and 2:

    $x_i = xb_i - x_{origin}$   (1)
    $y_i = yb_i - y_{origin}$   (2)
    Step 3.3: obtain the abscissa displacement $\Delta x_i'$ and the ordinate displacement $\Delta y_i'$ of the 22 feature points of each image by formulas 3 and 4:

    $\Delta x_i' = x_i - \bar{x}_i$   (3)
    $\Delta y_i' = y_i - \bar{y}_i$   (4)

    where $\bar{x}_i$ and $\bar{y}_i$ are the abscissa and ordinate of the corresponding feature point in the first frame of the image sequence to which this image belongs;
    Step 3.4: obtain the normalized abscissa displacement $\Delta x_i$ and ordinate displacement $\Delta y_i$ of the 22 feature points of each image by formulas 5 and 6:

    $\Delta x_i = \Delta x_i' / x_{base}$   (5)
    $\Delta y_i = \Delta y_i' / y_{base}$   (6)

    where $x_{base} = x_6 - x_5$ and $y_{base} = y_6 - y_5$; $x_5$ and $x_6$ are the abscissas, and $y_5$ and $y_6$ the ordinates, of feature points 5 and 6 of this image;
    Step 3.5: obtain the 10 geometric distance features $mfv_1$ to $mfv_{10}$ of each image; specifically:

    Obtain the eye openness value $mfv_1$ by formula 7:

    $mfv_1 = ((y_{11} - y_7) + (y_{12} - y_8))/2$   (7)

    where $y_7$, $y_8$, $y_{11}$, $y_{12}$ are the ordinates of feature points 7, 8, 11 and 12 of this image;

    Obtain the eye width value $mfv_2$ by formula 8:

    $mfv_2 = ((x_5 - x_9) + (x_{10} - x_6))/2$   (8)

    where $x_5$, $x_6$, $x_9$, $x_{10}$ are the abscissas of feature points 5, 6, 9 and 10 of this image;

    Obtain the brow-head height value $mfv_3$ by formula 9:

    $mfv_3 = (y_1 + y_2)/2$   (9)

    where $y_1$, $y_2$ are the ordinates of feature points 1 and 2 of this image;

    Obtain the brow-tail height value $mfv_4$ by formula 10:

    $mfv_4 = (y_3 + y_4)/2$   (10)

    where $y_3$, $y_4$ are the ordinates of feature points 3 and 4 of this image;

    Obtain the eyebrow width value $mfv_5$ by formula 11:

    $mfv_5 = ((x_1 - x_3) + (x_4 - x_2))/2$   (11)

    where $x_1$, $x_2$, $x_3$, $x_4$ are the abscissas of feature points 1, 2, 3 and 4 of this image;

    Obtain the mouth openness value $mfv_6$ by formula 12:

    $mfv_6 = y_{18} - y_{19}$   (12)

    where $y_{18}$, $y_{19}$ are the ordinates of feature points 18 and 19 of this image;

    Obtain the mouth width value $mfv_7$ by formula 13:

    $mfv_7 = x_{17} - x_{16}$   (13)

    where $x_{16}$, $x_{17}$ are the abscissas of feature points 16 and 17 of this image;

    Obtain the nose-to-mouth-corner distance value $mfv_8$ by formula 14:

    $mfv_8 = ((y_{15} - y_{16}) + (y_{15} - y_{17}))/2$   (14)

    where $y_{15}$, $y_{16}$, $y_{17}$ are the ordinates of feature points 15, 16 and 17 of this image;

    Obtain the eye-to-cheek distance value $mfv_9$ by formula 15:

    $mfv_9 = (((y_{11} + y_7)/2 - y_{21}) + ((y_{12} + y_8)/2 - y_{22}))/2$   (15)

    where $y_{21}$, $y_{22}$ are the ordinates of feature points 21 and 22 of this image;

    Obtain the nose-to-chin distance value $mfv_{10}$ by formula 16:

    $mfv_{10} = y_{15} - y_{20}$   (16)

    where $y_{15}$, $y_{20}$ are the ordinates of feature points 15 and 20 of this image;
    Step 3.6: normalize the 10 geometric distance features $mfv_1$ to $mfv_{10}$ of each image by formula 17:

    $mfv_j = mfv_j / \overline{mfv}_j$   (17)

    where j = 1, 2, ..., 10, and $\overline{mfv}_j$ is the corresponding geometric distance feature in the first frame of the image sequence to which this image belongs;
    Step 3.7: for each image, combine the 10 geometric distance features $mfv_1$ to $mfv_{10}$ with the normalized abscissa and ordinate displacements $\Delta x_i$, $\Delta y_i$ of its 22 feature points to form a 54-dimensional column vector $z_k$ representing the expression information of one face image, where $z_k \in R^{54}$, $1 \le k \le m$, and R denotes the real numbers;
    Step 3.8: represent each facial expression image sequence by the feature matrix $Z = \{z_1, z_2, \ldots, z_m\} \in R^{54 \times m}$, where $z_1$ corresponds to the neutral expression and $z_m$ to the peak expression;
    Step 4: classifying the test image sequences
    On the basis of step 3, classify the test image sequences by canonical correlation analysis; specifically:
    Step 4.1: randomly divide the image sequences of each basic expression chosen in step 1 into two parts, one as training data and one as test data; the number of training sequences is Q, with Q ≥ 20 and Q a positive integer; each item of training data is one expression image sequence, and each item of test data is likewise one expression image sequence;
    Step 4.2: apply to the training data obtained in step 4.1 the canonical correlation discriminant analysis method proposed by T.-K. Kim et al. in "Discriminative Learning and Recognition of Image Set Classes Using Canonical Correlations", obtaining a transformation matrix $T \in R^{54 \times n}$, where n < 54 and n is a positive integer; then use T to transform the feature matrix Z of every image sequence chosen in step 1, obtaining $Z' = T^T Z$;
    Step 4.3: randomly pick one expression image sequence from the test data of step 4.1, and compute the sum of the canonical correlation coefficients between its feature matrix Z' and the feature matrix Z' of each item of training data;
    Step 4.4: on the basis of the results of step 4.3, compute for each of the six basic expressions the mean of the canonical-correlation sums between this expression image sequence and that expression's training sequences; the expression corresponding to the largest of the six mean values is taken as the classification result;
    Through the above steps, the expression recognition of the test image sequence is completed.
2. The method for recognizing facial expressions from a face image sequence according to claim 1, characterized in that the methods for marking the 20 facial feature points in the first frame of each expression image sequence described in step 2.1 include, but are not limited to: 1. manual marking; 2. the Gabor-feature-based boosted classifier method proposed by Vukadinovic et al. in "Fully automatic facial feature point detection using gabor feature based boosted classifiers", which locates the 20 facial feature points automatically.
3. The method for recognizing facial expressions from a face image sequence according to claim 1, characterized in that the affine transformation of step 2.4, which adjusts the position and size of the face in each image so that the faces in the same image sequence are equal in size and consistent in position, is specifically:
    First make the line joining the two inner eye corners in the first frame of each image sequence horizontal; then, according to the positions of three points in the first frame of the sequence, namely the two inner eye corners and the topmost point of the philtrum, map and normalize the 22 facial feature points in each remaining frame; after the affine transformation, the faces in all images of the same sequence are equal in size, and the positions of these three points coincide with their positions in the first frame.
CN2010102185432A 2010-07-06 2010-07-06 Method for identifying facial expressions from human face image sequence Pending CN101908149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102185432A CN101908149A (en) 2010-07-06 2010-07-06 Method for identifying facial expressions from human face image sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010102185432A CN101908149A (en) 2010-07-06 2010-07-06 Method for identifying facial expressions from human face image sequence

Publications (1)

Publication Number Publication Date
CN101908149A true CN101908149A (en) 2010-12-08

Family

ID=43263604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102185432A Pending CN101908149A (en) 2010-07-06 2010-07-06 Method for identifying facial expressions from human face image sequence

Country Status (1)

Country Link
CN (1) CN101908149A (en)

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945361B (en) * 2012-10-17 2016-10-05 北京航空航天大学 Feature based point vector and the facial expression recognizing method of texture deformation energy parameter
CN102945361A (en) * 2012-10-17 2013-02-27 北京航空航天大学 Facial expression recognition method based on feature point vectors and texture deformation energy parameter
CN103400145B (en) * 2013-07-19 2016-08-10 北京理工大学 Voice based on clue neutral net-vision merges emotion identification method
CN103400145A (en) * 2013-07-19 2013-11-20 北京理工大学 Voice-vision fusion emotion recognition method based on hint nerve networks
CN103679143A (en) * 2013-12-03 2014-03-26 北京航空航天大学 Method for capturing facial expressions in real time without supervising
CN103679143B (en) * 2013-12-03 2017-02-15 北京航空航天大学 Method for capturing facial expressions in real time without supervising
CN104866807A (en) * 2014-02-24 2015-08-26 腾讯科技(深圳)有限公司 Face positioning method and system
CN104866807B (en) * 2014-02-24 2019-09-13 腾讯科技(深圳)有限公司 A kind of Face detection method and system
CN105095827B (en) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition device and method
CN103971137A (en) * 2014-05-07 2014-08-06 上海电力学院 Three-dimensional dynamic facial expression recognition method based on structural sparse feature study
CN103996029A (en) * 2014-05-23 2014-08-20 安庆师范学院 Expression similarity measuring method and device
CN103996029B (en) * 2014-05-23 2017-12-05 安庆师范学院 Expression method for measuring similarity and device
US10339369B2 (en) 2015-09-16 2019-07-02 Intel Corporation Facial expression recognition using relations determined by class-to-class comparisons
WO2017045157A1 (en) * 2015-09-16 2017-03-23 Intel Corporation Facial expression recognition using relations determined by class-to-class comparisons
WO2017045404A1 (en) * 2015-09-16 2017-03-23 Intel Corporation Facial expression recognition using relations determined by class-to-class comparisons
CN106650555A (en) * 2015-11-02 2017-05-10 苏宁云商集团股份有限公司 Real person verifying method and system based on machine learning
CN105559804A (en) * 2015-12-23 2016-05-11 上海矽昌通信技术有限公司 Mood manager system based on multiple monitoring
CN105740688A (en) * 2016-02-01 2016-07-06 腾讯科技(深圳)有限公司 Unlocking method and device
CN107203734A (en) * 2016-03-17 2017-09-26 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment for obtaining mouth state
CN108074203A (en) * 2016-11-10 2018-05-25 中国移动通信集团公司 A kind of teaching readjustment method and apparatus
CN108108651A (en) * 2016-11-25 2018-06-01 广东亿迅科技有限公司 The non-wholwe-hearted driving detection method of driver and system based on video human face analysis
CN106940792A (en) * 2017-03-15 2017-07-11 中南林业科技大学 The human face expression sequence truncation method of distinguished point based motion
CN107341460B (en) * 2017-06-26 2022-04-22 北京小米移动软件有限公司 Face tracking method and device
CN107341460A (en) * 2017-06-26 2017-11-10 北京小米移动软件有限公司 Face tracking method and device
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
US10922533B2 (en) 2017-10-23 2021-02-16 Beijing Kuangshi Technology Co., Ltd. Method for face-to-unlock, authentication device, and non-volatile storage medium
CN108875335B (en) * 2017-10-23 2020-10-09 北京旷视科技有限公司 Method for unlocking human face and inputting expression and expression action, authentication equipment and nonvolatile storage medium
CN109784123A (en) * 2017-11-10 2019-05-21 浙江思考者科技有限公司 The analysis and judgment method of real's expression shape change
CN108540863B (en) * 2018-03-29 2021-03-12 武汉斗鱼网络科技有限公司 Bullet screen setting method, storage medium, equipment and system based on facial expressions
CN108540863A (en) * 2018-03-29 2018-09-14 武汉斗鱼网络科技有限公司 Barrage setting method, storage medium, equipment and system based on human face expression
CN109034099A (en) * 2018-08-14 2018-12-18 华中师范大学 A kind of expression recognition method and device
CN109034099B (en) * 2018-08-14 2021-07-13 华中师范大学 Expression recognition method and device
CN109063679A (en) * 2018-08-24 2018-12-21 广州多益网络股份有限公司 A kind of human face expression detection method, device, equipment, system and medium
CN109255322A (en) * 2018-09-03 2019-01-22 北京诚志重科海图科技有限公司 A kind of human face in-vivo detection method and device
CN109409273A (en) * 2018-10-17 2019-03-01 中联云动力(北京)科技有限公司 A kind of motion state detection appraisal procedure and system based on machine vision
CN111382648A (en) * 2018-12-30 2020-07-07 广州市百果园信息技术有限公司 Method, device and equipment for detecting dynamic facial expression and storage medium
CN109948569B (en) * 2019-03-26 2022-04-22 重庆理工大学 Three-dimensional mixed expression recognition method using particle filter framework
CN109948569A (en) * 2019-03-26 2019-06-28 重庆理工大学 A kind of three-dimensional hybrid expression recognition method using particle filter frame
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium
CN110909680A (en) * 2019-11-22 2020-03-24 咪咕动漫有限公司 Facial expression recognition method and device, electronic equipment and storage medium
CN111626253A (en) * 2020-06-02 2020-09-04 上海商汤智能科技有限公司 Expression detection method and device, electronic equipment and storage medium
CN111951930A (en) * 2020-08-19 2020-11-17 陈霄 Emotion identification system based on big data
CN111951930B (en) * 2020-08-19 2021-10-15 中食安泓(广东)健康产业有限公司 Emotion identification system based on big data
CN112132084A (en) * 2020-09-29 2020-12-25 上海松鼠课堂人工智能科技有限公司 Eye micro-expression analysis method and system based on deep learning
CN112132084B (en) * 2020-09-29 2021-07-09 上海松鼠课堂人工智能科技有限公司 Eye micro-expression analysis method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN101908149A (en) Method for identifying facial expressions from human face image sequence
Feng et al. Facial expression recognition based on local binary patterns
Tian Evaluation of face resolution for expression analysis
Vishwakarma et al. Hybrid classifier based human activity recognition using the silhouette and cells
CN101763503B (en) Face recognition method of attitude robust
CN105139039A (en) Method for recognizing human face micro-expressions in video sequence
Feng et al. A coarse-to-fine classification scheme for facial expression recognition
CN107563312A (en) Facial expression recognizing method
Sarode et al. Facial expression recognition
CN105335732A (en) Method for identifying shielded face on basis of blocks and identification of non-negative matrix factorization
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
Linder et al. Real-time full-body human gender recognition in (RGB)-D data
CN104778472B (en) Human face expression feature extracting method
Meng et al. An extended HOG model: SCHOG for human hand detection
Chang et al. Applications of Block Linear Discriminant Analysis for Face Recognition.
Lee et al. Head and body orientation estimation using convolutional random projection forests
Hotta Support vector machine with local summation kernel for robust face recognition
CN103077383B (en) Based on the human motion identification method of the Divisional of spatio-temporal gradient feature
Li et al. Location-sensitive sparse representation of deep normal patterns for expression-robust 3D face recognition
Lu et al. Real-time facial expression recognition based on pixel-pattern-based texture feature
Zhao et al. Experiments with facial expression recognition using spatiotemporal local binary patterns
Zhao et al. Facial expression recognition based on local binary patterns and least squares support vector machines
Huang et al. Dynamic facial expression recognition using boosted component-based spatiotemporal features and multi-classifier fusion
Halidou et al. Pedestrian detection based on multi-block local binary pattern and biologically inspired feature
Said et al. Wavelet networks for facial emotion recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20101208