CN105069447A - Facial expression identification method - Google Patents

Facial expression identification method

Info

Publication number
CN105069447A
CN105069447A
Authority
CN
China
Prior art keywords
human face
textural characteristics
facial expression
face expression
sub-block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510621774.0A
Other languages
Chinese (zh)
Other versions
CN105069447B (en)
Inventor
郭迎春
唐红梅
乔帆帆
师硕
于洋
刘依
翟艳东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201510621774.0A
Publication of CN105069447A
Application granted
Publication of CN105069447B
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a facial expression identification method and relates to methods for identifying images. The method extracts expression texture features with a Center Symmetrical Ternary Patterns (CSTP) algorithm. It comprises the steps of preprocessing a facial expression image; extracting facial expression texture features from each sub-block of the image; determining the final facial expression texture features of the image; and thereby completing the facial expression identification. The method overcomes the prior-art defects of coarse texture description caused by complex identification backgrounds and of low recognition rates.

Description

Facial expression recognition method
Technical field
The technical scheme of the present invention relates to methods for identifying images, specifically to a facial expression recognition method.
Background art
Facial expression recognition is a challenging research topic in computer vision, with significance for psychology, human-computer interaction research, and related fields. Expression recognition technology has developed rapidly in recent years; the mainstream methods include Gabor filtering, Active Shape Models (ASM), Principal Component Analysis (PCA), and texture-feature methods. Gabor wavelets can extract facial expression texture features at different scales and in different directions, but the high dimensionality produced by the computation may exhaust computer memory, and the computation is quite time-consuming. ASM can intuitively reflect changes of expression, but the construction of the model is quite complex, which limits the automation of the algorithm to a certain extent. PCA can extract the global texture features of an expression image, but it is difficult to avoid the excessively high dimensionality of the covariance matrix produced when processing the data. Two-dimensional PCA (2DPCA), an extension of PCA, alleviates this problem to some extent, but it extracts image features only in the horizontal direction and ignores features in the vertical direction. The Local Binary Pattern (LBP) operator, first proposed by Ojala et al., effectively describes local texture information in images; LBP and its extensions have been applied to facial expression detection with outstanding performance, but their reliability degrades when the illumination varies strongly. To address this limitation of LBP, Fu Xiaofeng et al. applied the Centralized Binary Pattern (CBP) to expression recognition. CBP adopts a "diagonal principle": it computes the gray values of diagonally opposite elements, incorporates the center pixel into the traditional LBP operator, and assigns the center pixel the highest weight. However, both CBP and LBP use only the brightness-change information of neighboring pixels and do not consider the region-level local information of the facial expression, leading to low expression recognition rates.
In short, existing facial expression recognition methods suffer from coarse texture description caused by complex identification backgrounds and from low recognition rates.
Summary of the invention
The technical problem to be solved by this invention is to provide a facial expression recognition method, one that extracts expression texture features using the Center Symmetrical Ternary Patterns (hereinafter CSTP) algorithm, overcoming the prior-art defects of coarse texture description caused by complex identification backgrounds and of low recognition rates.
The technical scheme adopted by the present invention to solve this problem is a facial expression recognition method that uses the CSTP algorithm to extract facial expression texture features; the concrete steps are as follows:
First step: facial expression image preprocessing.
Take facial expression images from a facial expression database in which face detection and geometric normalization have already been performed, and apply Gaussian filtering to them with the following formula (1), thereby completing the facial expression image preprocessing:
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (1),$$
where (x, y) are the pixel coordinates and σ² is the variance;
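As an editorial illustration only, a minimal Python sketch of this preprocessing step builds the kernel directly from formula (1); the kernel size and σ are assumptions, since the patent does not specify them.

    import numpy as np
    from scipy.signal import convolve2d

    def gaussian_kernel(size=5, sigma=1.0):
        """Kernel sampled from formula (1), then normalised to unit sum."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
        return g / g.sum()  # normalise so overall brightness is preserved

    def preprocess(face):
        """Gaussian smoothing of an already detected, geometrically normalised face."""
        return convolve2d(face, gaussian_kernel(), mode='same', boundary='symm')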
Second step: extract the facial expression texture features in each sub-block of the facial expression image.
Partition the preprocessed facial expression image into 5 × 5 non-overlapping sub-blocks, and use the CSTP algorithm to extract the facial expression texture features in each sub-block, as follows:
(1) Compute the differences Δg_i between the center-symmetric pixel pairs of the (P, R) neighborhood:
$$\Delta g_i = g_i - g_{i+P/2}, \quad i = 0, 1, \ldots, P/2 - 1 \qquad (2),$$
where g denotes a pixel gray value, P is the number of surrounding neighborhood pixels, and R is the radius, with R = 2;
(2) Compute the mean of the pixel-pair differences of the (P, R) neighborhood:
$$\overline{\Delta g} = \Big( \sum_{i=0}^{P/2-1} \Delta g_i \Big) \Big/ (P/2) \qquad (3);$$
(3) Set the upper limit U and the lower limit L:
$$U = \overline{\Delta g}\,(1 + t) \qquad (4),$$
$$L = \overline{\Delta g}\,(1 - t) \qquad (5),$$
where t is a threshold parameter chosen empirically, with t ∈ (0, 1), adjusted as the illumination changes;
(4) Extract the CSTP facial expression texture feature of each pixel:
$$CSTP_{P,R} = \sum_{i=0}^{P/2-1} 2^i \cdot S(\Delta g_i, U, L) \qquad (6),$$
where
$$S(\Delta g_i, U, L) = \begin{cases} 1, & \Delta g_i > U \\ 0, & L \le \Delta g_i \le U \\ -1, & \Delta g_i < L \end{cases} \qquad (7);$$
The CSTP encoding process runs clockwise from the top-left corner: when the difference Δg_i between the gray value g_i of a surrounding pixel and its center-symmetric counterpart g_{i+P/2} is greater than U, that position is encoded as 1; when Δg_i is less than L, it is encoded as −1. Setting all "−1" entries of the code to "0" yields the positive pattern; setting all "1" entries to "0" and then all "−1" entries to "1" yields the negative pattern. After both patterns have been encoded, a bitwise inclusive-OR is applied to the positive and negative patterns. CSTP feature extraction is performed for every pixel in every sub-block, i.e. the CSTP facial expression texture feature of each pixel is extracted, and the CSTP histogram H accumulated over each sub-block is the facial expression texture feature extracted from that sub-block;
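A minimal Python sketch of the per-pixel CSTP code for P = 8, R = 2 follows. The clockwise neighbor offsets and the bit order (first difference as the most significant bit, matching the worked example of Fig. 3(b)) are our reading of the text, not literal patent code.

    import numpy as np

    # Clockwise neighbour offsets (row, col) from the top-left corner for
    # P = 8, R = 2; pairs (i, i+4) are symmetric about the centre pixel.
    OFFSETS_R2 = [(-2, -2), (-2, 0), (-2, 2), (0, 2),
                  (2, 2), (2, 0), (2, -2), (0, -2)]

    def cstp_code(patch, t):
        """CSTP code of the centre pixel of a 5x5 numpy patch (P = 8, R = 2)."""
        c = 2                                            # centre of the 5x5 patch
        g = [float(patch[c + dr, c + dc]) for dr, dc in OFFSETS_R2]
        diffs = [g[i] - g[i + 4] for i in range(4)]      # formula (2)
        mean = sum(diffs) / 4.0                          # formula (3)
        upper, lower = mean * (1 + t), mean * (1 - t)    # formulas (4), (5)
        pos = neg = 0
        for i, d in enumerate(diffs):
            bit = 1 << (3 - i)     # first difference = MSB, as in Fig. 3(b)
            if d > upper:          # S = 1  -> bit of the positive pattern
                pos |= bit
            elif d < lower:        # S = -1 -> bit of the negative pattern
                neg |= bit
        return pos | neg           # inclusive-OR of the two patterns

Applying this to every interior pixel of a sub-block and histogramming the resulting codes (with P = 8 the OR-ed code takes one of 16 values on this reading) yields that sub-block's histogram H.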
Third step: determine the final facial expression texture features of the facial expression image.
(1) Compute the information entropy E_j of each facial expression image sub-block from the second step:
$$E_j = -\sum_{i=0}^{m-1} P_j^i \log P_j^i \qquad (8),$$
where P_j^i is the probability that a pixel of the i-th gray level occurs in the j-th sub-block, and m is the number of gray levels, taken as 256;
(2) Compute the weight W_j of each facial expression image sub-block:
$$W_j = E_j \Big/ \sum_{j=0}^{n-1} E_j \qquad (9),$$
where n is the number of sub-blocks per facial expression image;
(3) Compute the weighted histogram vector of each sub-block, $\tilde{H}_j = W_j \cdot H_j$, where H_j is the histogram of the j-th sub-block extracted in the second step;
Concatenating the weighted histogram vectors of all sub-blocks of the image yields the final facial expression texture feature; a sketch of this step follows below;
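A minimal sketch of the entropy weighting under the stated formulas; the logarithm base in formula (8) is not specified in the patent, so base 2 is assumed here.

    import numpy as np

    def block_entropy(block, m=256):
        """Information entropy of a sub-block's grey levels, formula (8)."""
        p = np.bincount(block.ravel().astype(np.uint8), minlength=m) / block.size
        p = p[p > 0]                              # treat 0 * log(0) as 0
        return float(-np.sum(p * np.log2(p)))

    def weighted_feature(blocks, hists):
        """Entropy-weighted, concatenated block histograms, formulas (8)-(9)."""
        e = np.array([block_entropy(b) for b in blocks])
        w = e / e.sum()                           # formula (9): weights W_j
        return np.concatenate([w[j] * np.asarray(h, dtype=float)
                               for j, h in enumerate(hists)])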
Fourth step: complete the facial expression identification.
Train an SVM classifier with the SVM Kernel Methods toolbox; the concrete flow is as follows:
(1) Input the facial expression texture features extracted from the training and test samples, obtained through the second and third steps above; from these features, construct the training-sample and test-sample facial expression texture feature matrices together with the corresponding training and test class matrices, whose entries are the class labels of the samples;
(2) A Gaussian kernel function is adopted for the local facial expression texture features, with the number of kernels set to 8, the Lagrange factor c = 100, and the regularization parameter of the quadratic optimization λ = 10⁻⁷. First, the training-sample feature matrix and the training class matrix are fed to the svmmulticlassoneagainstall function to obtain the support vectors, weights, and biases; next, the training-sample feature matrix, the support vectors, weights, biases, and the Gaussian kernel are fed to the svmmultival function for training; finally, the test-sample feature matrix, the support vectors, weights, biases, and the Gaussian kernel are fed to the svmmultival function for prediction, thereby completing the facial expression identification. Experiments on the JAFFE database recognize seven expressions (anger, disgust, fear, happiness, neutral, sadness, and surprise), and experiments on the CMU-AMP database recognize three (anger, happiness, and surprise), completing the facial expression identification.
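The svmmulticlassoneagainstall and svmmultival functions belong to the MATLAB SVM Kernel Methods toolbox. As an illustrative stand-in only, the same train-and-predict flow can be sketched with scikit-learn's SVC using an RBF (Gaussian) kernel and a one-vs-rest strategy; C = 100 mirrors the Lagrange factor c above, while gamma and the placeholder feature shapes (25 sub-blocks × an assumed 16-bin histogram) are our assumptions.

    import numpy as np
    from sklearn.svm import SVC

    # Placeholder shapes: 140 training / 70 test JAFFE samples, 25 sub-blocks,
    # and an assumed 16-bin CSTP histogram per sub-block.
    X_train = np.random.rand(140, 25 * 16)
    y_train = np.random.randint(0, 7, 140)       # 7 expression class labels
    X_test = np.random.rand(70, 25 * 16)

    clf = SVC(kernel='rbf', C=100, gamma='scale', decision_function_shape='ovr')
    clf.fit(X_train, y_train)                    # rows: weighted CSTP features
    pred = clf.predict(X_test)                   # predicted expression labels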
In the above facial expression recognition method, the SVM classifier itself is known in the art.
The beneficial effects of the invention are as follows. Compared with the prior art, the outstanding substantive features and marked improvements of the invention are:
(1) The method of the invention removes the dependence on the center pixel and overcomes the prior-art defects of coarse texture description caused by complex identification backgrounds and of low recognition performance; local texture is described more finely, and the inclusive-OR operation reinforces the important expression information, so that both the representation of the facial expression texture feature space and the classification performance are further enhanced and improved;
(2) The method of the invention computes the information entropy of each sub-block to construct an adaptively weighted feature vector for each sub-block, and concatenates the adaptively weighted feature vectors of all sub-blocks as the final facial expression texture feature of the image, fusing local and global features and improving the expression recognition performance.
Brief description of the drawings
The present invention is further described below in conjunction with the drawings and embodiments.
Fig. 1 is a schematic flow chart of the method of the invention.
Fig. 2(a) shows facial expression images from the JAFFE database after preprocessing.
Fig. 2(b) shows facial expression images from the CMU-AMP database after preprocessing.
Fig. 3(a) is a schematic diagram of CSTP ternarization.
Fig. 3(b) is a schematic diagram of the CSTP inclusive-OR operation.
Fig. 4(a) shows the recognition rate of CSTP on the JAFFE database for different values of t in the method of the invention.
Fig. 4(b) shows the recognition rate of CSTP on the CMU-AMP database for different values of t in the method of the invention.
Detailed description of the embodiments
The embodiment shown in Fig. 1 illustrates the flow of the facial expression recognition method of the invention: facial expression image preprocessing → extraction of the facial expression texture features in each sub-block → determination of the final facial expression texture features of the image → completion of the facial expression identification.
In more detail, the extraction of the texture features in each sub-block comprises: computing the differences Δg_i between the center-symmetric pixel pairs of the (P, R) neighborhood → computing the mean of the pixel-pair differences of the (P, R) neighborhood → setting the upper limit U and the lower limit L → extracting the CSTP texture feature of each pixel. Determining the final texture features of the image comprises: computing the information entropy E_j of each sub-block from the previous step → computing the weight W_j of each sub-block → computing the weighted histogram vector of each sub-block.
Fig. 2 shows the preprocessed facial expression images used in the method of the invention. Fig. 2(a) shows images from the JAFFE database after facial expression preprocessing, covering the seven expressions anger, disgust, fear, happiness, neutral, sadness, and surprise. Fig. 2(b) shows images from the CMU-AMP database after facial expression preprocessing, covering the three expressions surprise, anger, and happiness.
The embodiment shown in Fig. 3 illustrates the CSTP texture feature extraction performed for each pixel in a sub-block in the method of the invention, defined as:
$$CSTP_{P,R} = \sum_{i=0}^{(P/2)-1} 2^i \cdot S(\Delta g_i, U, L) \qquad (11),$$
where
$$S(\Delta g_i, U, L) = \begin{cases} 1, & \Delta g_i > U \\ 0, & L \le \Delta g_i \le U \\ -1, & \Delta g_i < L \end{cases} \qquad (12).$$
Fig. 3(a) is a schematic diagram of CSTP ternarization. Reading from left to right, the first image sub-block contains nine small squares, each holding the gray value of one pixel; the two pixels marked with the same pattern form a center-symmetric pixel pair, giving four pairs in total: (251, 178), (135, 90), (53, 23), and (246, 198). The differences Δg_i of the center-symmetric pairs are computed clockwise from the top-left corner; their values, shown in the four numbered squares of the second sub-block, are 73, 45, 30, and 48. The mean of these differences is then computed, the upper limit U and the lower limit L are obtained from it and the threshold t, and finally the three patterns (−1, 0, 1) are decided according to formula (12); the result, shown in the four numbered squares of the third sub-block, is 1, 0, −1, 0.
Fig. 3(b) is a schematic diagram of the CSTP OR operation. The four coded values from the third sub-block of Fig. 3(a) are arranged clockwise into a column vector and split into positive and negative patterns, each weighted separately. For the positive pattern, every −1 in the code is replaced by 0, giving the binary code 1000, which converts to decimal 8. For the negative pattern, every 1 is replaced by 0 and the −1 entries are replaced by their absolute value, giving the binary code 0010, which converts to decimal 2. Finally the inclusive-OR is applied, giving the binary code 1010 and the decimal value 10. The inclusive-OR operation reinforces the expression information and prevents the loss of important information during feature extraction. A small check of this arithmetic appears below.
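The arithmetic of Fig. 3(b) can be verified with a few lines of Python (a toy check, not part of the patent):

    codes = [1, 0, -1, 0]                  # ternary codes from Fig. 3(a)
    pos = int(''.join('1' if c == 1 else '0' for c in codes), 2)   # positive pattern 1000 -> 8
    neg = int(''.join('1' if c == -1 else '0' for c in codes), 2)  # negative pattern 0010 -> 2
    assert (pos, neg, pos | neg) == (8, 2, 10)                     # inclusive-OR 1010 -> 10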
The embodiment of Fig. 4(a) shows the relationship between the CSTP threshold t and the average facial expression recognition rate on the JAFFE database. The best results are obtained when t is 0.5 or 0.7, where the facial expression recognition rate reaches 95.71%.
The embodiment of Fig. 4(b) shows the corresponding relationship between the CSTP threshold t and the average facial expression recognition rate on the CMU-AMP database; the average recognition rate is highest when t is 0.9.
The experiments of Figs. 4(a) and 4(b) divide the training and test samples into 5 × 5 sub-blocks, with P = 8, R = 2, and the number of SVM kernels set to 8.
Embodiment
The facial expression recognition method of this embodiment is a facial expression recognition method that uses the CSTP algorithm to extract facial expression texture features; the concrete steps are as follows:
First step: facial expression image preprocessing.
Take facial expression images from a facial expression database in which face detection and geometric normalization have already been performed, and apply Gaussian filtering to them with the following formula (1), thereby completing the facial expression image preprocessing:
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (1),$$
where (x, y) are the pixel coordinates and σ² is the variance;
Second step: extract the facial expression texture features in each sub-block of the facial expression image.
Partition the preprocessed facial expression image into 5 × 5 non-overlapping sub-blocks, and use the CSTP algorithm to extract the facial expression texture features in each sub-block, as follows:
(1) Compute the differences Δg_i between the center-symmetric pixel pairs of the (P, R) neighborhood:
$$\Delta g_i = g_i - g_{i+P/2}, \quad i = 0, 1, \ldots, P/2 - 1 \qquad (2),$$
where g denotes a pixel gray value, P is the number of surrounding neighborhood pixels, and R is the radius, with R = 2;
(2) Compute the mean of the pixel-pair differences of the (P, R) neighborhood:
$$\overline{\Delta g} = \Big( \sum_{i=0}^{P/2-1} \Delta g_i \Big) \Big/ (P/2) \qquad (3);$$
(3) Set the upper limit U and the lower limit L:
$$U = \overline{\Delta g}\,(1 + t) \qquad (4),$$
$$L = \overline{\Delta g}\,(1 - t) \qquad (5),$$
where t is a threshold parameter chosen empirically, with t ∈ (0, 1), adjusted as the illumination changes;
(4) Extract the CSTP facial expression texture feature of each pixel:
$$CSTP_{P,R} = \sum_{i=0}^{P/2-1} 2^i \cdot S(\Delta g_i, U, L) \qquad (6),$$
where
$$S(\Delta g_i, U, L) = \begin{cases} 1, & \Delta g_i > U \\ 0, & L \le \Delta g_i \le U \\ -1, & \Delta g_i < L \end{cases} \qquad (7);$$
The CSTP encoding process runs clockwise from the top-left corner: when the difference Δg_i between the gray value g_i of a surrounding pixel and its center-symmetric counterpart g_{i+P/2} is greater than U, that position is encoded as 1; when Δg_i is less than L, it is encoded as −1. Setting all "−1" entries of the code to "0" yields the positive pattern; setting all "1" entries to "0" and then all "−1" entries to "1" yields the negative pattern. After both patterns have been encoded, a bitwise inclusive-OR is applied to the positive and negative patterns. CSTP feature extraction is performed for every pixel in every sub-block, i.e. the CSTP facial expression texture feature of each pixel is extracted, and the CSTP histogram H accumulated over each sub-block is the facial expression texture feature extracted from that sub-block;
Third step: determine the final facial expression texture features of the facial expression image.
(1) Compute the information entropy E_j of each facial expression image sub-block from the second step:
$$E_j = -\sum_{i=0}^{m-1} P_j^i \log P_j^i \qquad (8),$$
where P_j^i is the probability that a pixel of the i-th gray level occurs in the j-th sub-block, and m is the number of gray levels, taken as 256;
(2) Compute the weight W_j of each facial expression image sub-block:
$$W_j = E_j \Big/ \sum_{j=0}^{n-1} E_j \qquad (9),$$
where n is the number of sub-blocks per facial expression image;
(3) Compute the weighted histogram vector of each sub-block, $\tilde{H}_j = W_j \cdot H_j$, where H_j is the histogram of the j-th sub-block extracted in the second step;
Concatenating the weighted histogram vectors of all sub-blocks of the image yields the final facial expression texture feature;
Fourth step: complete the facial expression identification.
Train an SVM classifier with the SVM Kernel Methods toolbox; the concrete flow is as follows:
(1) Input the facial expression texture features extracted from the training and test samples, obtained through the second and third steps above; from these features, construct the training-sample and test-sample facial expression texture feature matrices together with the corresponding training and test class matrices, whose entries are the class labels of the samples;
(2) A Gaussian kernel function is adopted for the local facial expression texture features, with the number of kernels set to 8, the Lagrange factor c = 100, and the regularization parameter of the quadratic optimization λ = 10⁻⁷. First, the training-sample feature matrix and the training class matrix are fed to the svmmulticlassoneagainstall function to obtain the support vectors, weights, and biases; next, the training-sample feature matrix, the support vectors, weights, biases, and the Gaussian kernel are fed to the svmmultival function for training; finally, the test-sample feature matrix, the support vectors, weights, biases, and the Gaussian kernel are fed to the svmmultival function for prediction, thereby completing the facial expression identification. Experiments on the JAFFE database recognize seven expressions (anger, disgust, fear, happiness, neutral, sadness, and surprise), and experiments on the CMU-AMP database recognize three (surprise, anger, and happiness), completing the facial expression identification.
This embodiment uses the Gaussian kernel of the SVM classifier as the classification function and performs multi-class classification.
This embodiment was tested on the JAFFE and CMU-AMP facial expression databases. The JAFFE database contains 10 subjects, each with 7 expressions (anger, disgust, fear, happiness, neutral, sadness, and surprise) and 3 images per expression, for 210 images in total. In the experiments, 2 images per expression per subject were chosen as training samples (140 in total) and the remaining 70 as test samples; the image size was normalized to 128 × 128. The CMU-AMP database contains 13 subjects, each with the 3 expressions surprise, anger, and happiness and 16 images per expression, for 624 images in total. In the experiments, 8 images per expression per subject were randomly selected as training samples (312 in total), and another 8 per subject were chosen from the remaining images as test samples (312 in total); the image size was normalized to 64 × 64. The experiments were run on the MATLAB R2013b platform under Windows 8. Fig. 2 shows sample images of the 7 expressions in the JAFFE facial expression database and the 3 expressions in the CMU-AMP facial expression database. The training and test samples of both databases were divided into 5 × 5 sub-blocks, with P = 8, R = 2, and the number of SVM kernels set to 8.
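Note that neither 128 nor 64 is divisible by 5, and the patent does not say how the remainder pixels are handled; a simple partition sketch that truncates the remainder at the right and bottom edges (one possible reading) is:

    def split_blocks(img, grid=5):
        """Split a normalised face image into grid x grid non-overlapping
        sub-blocks, truncating any remainder pixels at the right/bottom edges."""
        h, w = img.shape
        bh, bw = h // grid, w // grid
        return [img[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
                for r in range(grid) for c in range(grid)]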
Table 1 lists the facial expression recognition rates (%) of this embodiment on the JAFFE database, and Table 2 lists the expression recognition rates (%) on the CMU-AMP database.
Table 1. Facial expression recognition rates (%) on the JAFFE database
Table 2. Expression recognition rates (%) on the CMU-AMP database
Combining the data of Tables 1 and 2, the average recognition rate of the method of the invention clearly exceeds those of the four classical algorithms 2DPCA, Gabor+PCA, LBP, and CBP. On the JAFFE database, the recognition rate for fear is lower than that of Gabor+PCA and CBP: fear is highly similar to disgust and surprise, so their CSTP features are similar and fear is easily misclassified as disgust or surprise; this remains a problem to be solved. On the CMU-AMP database, the method extracting expression texture features with the CSTP algorithm outperforms the other algorithms on the anger and happiness expressions and in average recognition rate, while for surprise it is lower than 2DPCA, because the CMU-AMP database has few expression classes and many training samples; this also shows that the CSTP-based facial expression recognition method has an advantage in handling small-sample problems.

Claims (1)

1. A facial expression recognition method, characterized in that it is a facial expression recognition method that uses the CSTP algorithm to extract facial expression texture features, with the following concrete steps:
First step: facial expression image preprocessing.
Take facial expression images from a facial expression database in which face detection and geometric normalization have already been performed, and apply Gaussian filtering to them with the following formula (1), thereby completing the facial expression image preprocessing:
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (1),$$
where (x, y) are the pixel coordinates and σ² is the variance;
Second step: extract the facial expression texture features in each sub-block of the facial expression image.
Partition the preprocessed facial expression image into 5 × 5 non-overlapping sub-blocks, and use the CSTP algorithm to extract the facial expression texture features in each sub-block, as follows:
(1) Compute the differences Δg_i between the center-symmetric pixel pairs of the (P, R) neighborhood:
$$\Delta g_i = g_i - g_{i+P/2}, \quad i = 0, 1, \ldots, P/2 - 1 \qquad (2),$$
where g denotes a pixel gray value, P is the number of surrounding neighborhood pixels, and R is the radius, with R = 2;
(2) Compute the mean of the pixel-pair differences of the (P, R) neighborhood:
$$\overline{\Delta g} = \Big( \sum_{i=0}^{P/2-1} \Delta g_i \Big) \Big/ (P/2) \qquad (3);$$
(3) Set the upper limit U and the lower limit L:
$$U = \overline{\Delta g}\,(1 + t) \qquad (4),$$
$$L = \overline{\Delta g}\,(1 - t) \qquad (5),$$
where t is a threshold parameter chosen empirically, with t ∈ (0, 1), adjusted as the illumination changes;
(4) Extract the CSTP facial expression texture feature of each pixel:
$$CSTP_{P,R} = \sum_{i=0}^{P/2-1} 2^i \cdot S(\Delta g_i, U, L) \qquad (6),$$
where
$$S(\Delta g_i, U, L) = \begin{cases} 1, & \Delta g_i > U \\ 0, & L \le \Delta g_i \le U \\ -1, & \Delta g_i < L \end{cases} \qquad (7);$$
The CSTP encoding process runs clockwise from the top-left corner: when the difference Δg_i between the gray value g_i of a surrounding pixel and its center-symmetric counterpart g_{i+P/2} is greater than U, that position is encoded as 1; when Δg_i is less than L, it is encoded as −1. Setting all "−1" entries of the code to "0" yields the positive pattern; setting all "1" entries to "0" and then all "−1" entries to "1" yields the negative pattern. After both patterns have been encoded, a bitwise inclusive-OR is applied to the positive and negative patterns. CSTP feature extraction is performed for every pixel in every sub-block, i.e. the CSTP facial expression texture feature of each pixel is extracted, and the CSTP histogram H accumulated over each sub-block is the facial expression texture feature extracted from that sub-block;
Third step: determine the final facial expression texture features of the facial expression image.
(1) Compute the information entropy E_j of each facial expression image sub-block from the second step:
$$E_j = -\sum_{i=0}^{m-1} P_j^i \log P_j^i \qquad (8),$$
where P_j^i is the probability that a pixel of the i-th gray level occurs in the j-th sub-block, and m is the number of gray levels, taken as 256;
(2) Compute the weight W_j of each facial expression image sub-block:
$$W_j = E_j \Big/ \sum_{j=0}^{n-1} E_j \qquad (9),$$
where n is the number of sub-blocks per facial expression image;
(3) Compute the weighted histogram vector of each sub-block, $\tilde{H}_j = W_j \cdot H_j$, where H_j is the histogram of the j-th sub-block extracted in the second step;
Concatenating the weighted histogram vectors of all sub-blocks of the image yields the final facial expression texture feature;
Fourth step: complete the facial expression identification.
Train an SVM classifier with the SVM Kernel Methods toolbox; the concrete flow is as follows:
(1) Input the facial expression texture features extracted from the training and test samples, obtained through the second and third steps above; from these features, construct the training-sample and test-sample facial expression texture feature matrices together with the corresponding training and test class matrices, whose entries are the class labels of the samples;
(2) A Gaussian kernel function is adopted for the local facial expression texture features, with the number of kernels set to 8, the Lagrange factor c = 100, and the regularization parameter of the quadratic optimization λ = 10⁻⁷. First, the training-sample feature matrix and the training class matrix are fed to the svmmulticlassoneagainstall function to obtain the support vectors, weights, and biases; next, the training-sample feature matrix, the support vectors, weights, biases, and the Gaussian kernel are fed to the svmmultival function for training; finally, the test-sample feature matrix, the support vectors, weights, biases, and the Gaussian kernel are fed to the svmmultival function for prediction, thereby completing the facial expression identification. Experiments on the JAFFE database recognize seven expressions (anger, disgust, fear, happiness, neutral, sadness, and surprise), and experiments on the CMU-AMP database recognize three (anger, happiness, and surprise), completing the facial expression identification.
CN201510621774.0A 2015-09-23 2015-09-23 Facial expression recognition method Expired - Fee Related CN105069447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510621774.0A CN105069447B (en) 2015-09-23 2015-09-23 Facial expression recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510621774.0A CN105069447B (en) 2015-09-23 2015-09-23 Facial expression recognition method

Publications (2)

Publication Number Publication Date
CN105069447A true CN105069447A (en) 2015-11-18
CN105069447B CN105069447B (en) 2018-05-29

Family

ID=54498809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510621774.0A Expired - Fee Related CN105069447B (en) 2015-09-23 2015-09-23 Facial expression recognition method

Country Status (1)

Country Link
CN (1) CN105069447B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913053A (en) * 2016-06-07 2016-08-31 合肥工业大学 Monogenic multi-characteristic face expression identification method based on sparse fusion
CN106096557A (en) * 2016-06-15 2016-11-09 浙江大学 A kind of semi-supervised learning facial expression recognizing method based on fuzzy training sample
CN106127196A (en) * 2016-09-14 2016-11-16 河北工业大学 The classification of human face expression based on dynamic texture feature and recognition methods
CN106778529A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of face identification method based on improvement LDP
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN107392151A (en) * 2017-07-21 2017-11-24 竹间智能科技(上海)有限公司 Face image various dimensions emotion judgement system and method based on neutral net
CN108229552A (en) * 2017-12-29 2018-06-29 咪咕文化科技有限公司 A kind of model treatment method, apparatus and storage medium
CN108268838A (en) * 2018-01-02 2018-07-10 中国科学院福建物质结构研究所 Facial expression recognizing method and facial expression recognition system
CN109299653A (en) * 2018-08-06 2019-02-01 重庆邮电大学 A kind of human face expression feature extracting method based on the complete three value mode of part of improvement
CN109886091A (en) * 2019-01-08 2019-06-14 东南大学 Three-dimensional face expression recognition methods based on Weight part curl mode
CN114140865A (en) * 2022-01-29 2022-03-04 深圳市中讯网联科技有限公司 Intelligent early warning method and device, storage medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140055609A1 (en) * 2012-08-22 2014-02-27 International Business Machines Corporation Determining foregroundness of an object in surveillance video data
CN103778412A (en) * 2014-01-16 2014-05-07 重庆邮电大学 Face recognition method based on local ternary pattern adaptive threshold

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140055609A1 (en) * 2012-08-22 2014-02-27 International Business Machines Corporation Determining foregroundness of an object in surveillance video data
CN103778412A (en) * 2014-01-16 2014-05-07 重庆邮电大学 Face recognition method based on local ternary pattern adaptive threshold

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUKANYA SAGARIKA MEHER et al.: "Face Recognition and Facial Expression Identification Using PCA", Advance Computing Conference (IACC), 2014 IEEE International *
LI Dening (李德宁): "Research and Application of Improved LTP in Face Recognition" (改进型LTP在人脸识别中的研究与应用), China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN105913053B (en) * 2016-06-07 2019-03-08 合肥工业大学 A kind of facial expression recognizing method for singly drilling multiple features based on sparse fusion
CN105913053A (en) * 2016-06-07 2016-08-31 合肥工业大学 Monogenic multi-characteristic face expression identification method based on sparse fusion
CN106096557B (en) * 2016-06-15 2019-01-18 浙江大学 A kind of semi-supervised learning facial expression recognizing method based on fuzzy training sample
CN106096557A (en) * 2016-06-15 2016-11-09 浙江大学 A kind of semi-supervised learning facial expression recognizing method based on fuzzy training sample
CN106127196A (en) * 2016-09-14 2016-11-16 河北工业大学 The classification of human face expression based on dynamic texture feature and recognition methods
CN106127196B (en) * 2016-09-14 2020-01-14 河北工业大学 Facial expression classification and identification method based on dynamic texture features
CN106778529A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of face identification method based on improvement LDP
CN107392151A (en) * 2017-07-21 2017-11-24 竹间智能科技(上海)有限公司 Face image various dimensions emotion judgement system and method based on neutral net
CN108229552A (en) * 2017-12-29 2018-06-29 咪咕文化科技有限公司 A kind of model treatment method, apparatus and storage medium
CN108268838A (en) * 2018-01-02 2018-07-10 中国科学院福建物质结构研究所 Facial expression recognizing method and facial expression recognition system
CN108268838B (en) * 2018-01-02 2020-12-29 中国科学院福建物质结构研究所 Facial expression recognition method and facial expression recognition system
CN109299653A (en) * 2018-08-06 2019-02-01 重庆邮电大学 A kind of human face expression feature extracting method based on the complete three value mode of part of improvement
CN109886091A (en) * 2019-01-08 2019-06-14 东南大学 Three-dimensional face expression recognition methods based on Weight part curl mode
CN109886091B (en) * 2019-01-08 2021-06-01 东南大学 Three-dimensional facial expression recognition method based on weighted local rotation mode
CN114140865A (en) * 2022-01-29 2022-03-04 深圳市中讯网联科技有限公司 Intelligent early warning method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN105069447B (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN105069447A (en) Facial expression identification method
Jiang et al. HDCB-Net: A neural network with the hybrid dilated convolution for pixel-level crack detection on concrete bridges
CN110738207B (en) Character detection method for fusing character area edge information in character image
CN108876780B (en) Bridge crack image crack detection method under complex background
CN105956560B (en) A kind of model recognizing method based on the multiple dimensioned depth convolution feature of pondization
CN101859382B (en) License plate detection and identification method based on maximum stable extremal region
CN103854016B (en) Jointly there is human body behavior classifying identification method and the system of feature based on directivity
CN103605972A (en) Non-restricted environment face verification method based on block depth neural network
CN107784288A (en) A kind of iteration positioning formula method for detecting human face based on deep neural network
CN103279770B (en) Based on the person&#39;s handwriting recognition methods of stroke fragment and contour feature
CN104298981A (en) Face microexpression recognition method
Cai et al. Traffic sign recognition algorithm based on shape signature and dual-tree complex wavelet transform
CN104778472B (en) Human face expression feature extracting method
CN104143091A (en) Single-sample face recognition method based on improved mLBP
CN105868734A (en) Power transmission line large-scale construction vehicle recognition method based on BOW image representation model
CN104951793A (en) STDF (standard test data format) feature based human behavior recognition algorithm
CN103186776A (en) Human detection method based on multiple features and depth information
CN110751195A (en) Fine-grained image classification method based on improved YOLOv3
CN111104924B (en) Processing algorithm for identifying low-resolution commodity image
CN105046278A (en) Optimization method of Adaboost detection algorithm on basis of Haar features
Shitole et al. Recognition of handwritten Devanagari characters using linear discriminant analysis
Sathya et al. Perspective vehicle license plate transformation using deep neural network on genesis of CPNet
Cho et al. Modified perceptual cycle generative adversarial network-based image enhancement for improving accuracy of low light image segmentation
Zhou et al. Feature extraction based on local directional pattern with svm decision-level fusion for facial expression recognition
Rajithkumar et al. Template matching method for recognition of stone inscripted Kannada characters of different time frames based on correlation analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180529