CN103984919A - Facial expression recognition method based on rough set and mixed features - Google Patents


Info

Publication number
CN103984919A
CN103984919A (application CN201410168960.9A)
Authority
CN
China
Prior art date
Legal status (an assumption, not a legal conclusion): Pending
Application number
CN201410168960.9A
Other languages
Chinese (zh)
Inventor
段丽
钟晓
乔亦民
Current Assignee
SHANGHAI UNISCOPE COMMUNICATION TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI UNISCOPE COMMUNICATION TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by SHANGHAI UNISCOPE COMMUNICATION TECHNOLOGY Co Ltd
Priority to CN201410168960.9A
Publication of CN103984919A


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a facial expression recognition method based on a rough set and mixed features. The method comprises the following steps: (1) face detection is carried out; (2) local geometric deformation features are extracted with a method combining an active appearance model and the rough set; (3) global features of the expression are extracted by combining an improved weighted principal component analysis method with the rough set; (4) the extracted local geometric deformation features and global features are fused by kernel canonical correlation analysis for high-dimensional small samples to eliminate feature redundancy, yielding the fused canonical features; (5) the fused canonical features serve as the observation vectors of a discrete HMM for classification, and a classification result is obtained. Experiments show that the improved method shortens the recognition time and improves the recognition rate of facial expression recognition.

Description

Facial expression recognition method based on rough set and mixed features
[technical field]
The invention belongs to the technical field of computer information processing, and in particular relates to a facial expression recognition method.
[background technology]
Facial expression recognition (FER) is the process by which a computer extracts features from and classifies the expression information of a human face. It enables a computer to understand a person's expression and thereby infer their psychological state, realizing high-level intelligent human-computer interaction. It is a challenging problem at the intersection of pattern recognition, physiology, psychology, and computer vision.
Although facial expression recognition has been widely studied, existing methods do not selectively emphasize the regions that contribute most to expression changes, such as the eyes, eyebrows, and mouth; a single feature extraction method cannot capture all of the effective information; and the importance of each attribute feature and the redundancy of its contribution to the information system are not considered. As a result, the recognition rate of facial expression recognition is low and the recognition time is long.
[summary of the invention]
The object of the invention is to provide a facial expression recognition method that addresses the low recognition rate and long recognition time of facial expression recognition in the prior art.
To achieve the above object, the facial expression recognition method of the present invention comprises the following steps:
Carry out face detection;
Extract the local geometric deformation features of the expression with a method combining an active appearance model and the rough set;
Extract the global features of the expression by combining an improved weighted principal component analysis method with the rough set;
Fuse the extracted local geometric deformation features and global features by kernel canonical correlation analysis for high-dimensional small samples to eliminate feature redundancy, obtaining the fused canonical features;
Use the fused features as the observation vectors of a discrete HMM for classification, obtaining the classification result.
According to the above principal feature, in the step of extracting the local geometric deformation features of the expression with the method combining the active appearance model and the rough set, the AAM algorithm extracts geometric descriptions of the shapes and structural relations of the eyebrows, eyes, nose, and mouth of the face as recognition features. The AAM search locates each key feature point of the face; 32 key points in the eyebrow, eye, nose, mouth, and jaw regions are selected, and the distances between different feature points are then computed as feature parameters.
According to the above principal feature, in the step of extracting the local geometric deformation features of the expression with the method combining the active appearance model and the rough set, the improved rough-set attribute reduction method selects among the obtained distance features; after feature selection, 14 principal distance parameters are retained for each image.
According to the above principal feature, in the step of extracting the global features of the expression by combining the improved weighted principal component analysis method with the rough set, for a test image sequence, the improved weighted principal component analysis method first constructs a bidirectional Gaussian weighting function with the two eyes and the mouth as its three feathering centers, and adds gradual scale coefficients in the horizontal and vertical directions so that the weighting region is adjustable in both directions. This emphasizes, in a distributed manner, the positions of the eyes, eyebrows, and mouth, which contribute most to expression changes, making the facial expression features more salient.
According to the above principal feature, in the step of extracting the global features of the expression by combining the improved weighted principal component analysis method with the rough set, the improved weighted principal component analysis method yields a 52 × 10 eigenvector matrix. To address the high dimensionality, the improved rough-set attribute reduction method is again used to reduce the 52-dimensional feature vector of each image, giving a 24 × 10 eigenvector matrix for the expression sequence. This matrix is vectorized into a 240-dimensional column vector that forms the global feature of the image sequence.
According to the above principal feature, in the step of fusing the extracted local geometric deformation features and global features by kernel canonical correlation analysis for high-dimensional small samples to eliminate feature redundancy and obtain the fused canonical features, canonical correlation analysis is used to fuse the two kinds of features extracted above. Before fusion, each data set is standardized and a suitable kernel function type is selected; the local geometric features and global features extracted above are then fused. The fused feature vector z' has 40 dimensions and, after normalization, serves as the observation vector of the discrete HMM.
According to the above principal feature, in the step of classification using the fused features as the observation vectors of the discrete HMM, the normalized 40-dimensional fused feature vector serves as the observation vector of the HMM. For the input observation o, p(o|λ_i), 1 ≤ i ≤ 6, is computed for each trained model. If i* = argmax_{1≤i≤6} p(o|λ_i), then i* is the expression class of the sequence.
Compared with the prior art, the present invention improves the recognition rate of facial expression recognition and shortens the recognition time, and is therefore easier to implement.
[brief description of the drawings]
Fig. 1 is a flowchart of the method of the present invention.
Fig. 2 is the positioning result figure of AAM.
Fig. 3 is the face characteristic point diagram of demarcating.
Fig. 4 is distance feature description list.
Fig. 5 is HMM structural representation.
Fig. 6 is a table of test results for recognition with the fused features obtained by kernel canonical correlation analysis.
Fig. 7 is a comparison of the results of four expression recognition methods.
[embodiment]
After an extensive survey of the domestic and international literature on facial expression feature extraction and expression recognition, the present invention compares and draws on existing successful facial expression recognition methods, improves and refines the key techniques of feature extraction and recognition, and proposes its own algorithm, improving the recognition rate of facial expression recognition and shortening the recognition time.
Referring to Fig. 1, the facial expression recognition method of the present invention comprises the following steps:
Step 1: carry out face detection;
Step 2: extract the local geometric deformation features of the expression with a method combining an active appearance model and the rough set;
Step 3: extract the global features of the expression by combining an improved weighted principal component analysis method with the rough set;
Step 4: fuse the extracted local geometric deformation features and global features by kernel canonical correlation analysis for high-dimensional small samples to eliminate feature redundancy, obtaining the fused canonical features;
Step 5: use the fused features as the observation vectors of a discrete HMM for classification, obtaining the classification result.
Steps 2 through 5 are described in detail below.
1) Local geometric deformation feature extraction
The local geometric deformation features of the expression are extracted with a method combining the active appearance model and the rough set (AAM-RS). The eyebrows, eyes, nose, and mouth form the rich expression changes of the face; the AAM algorithm extracts geometric descriptions of the shapes and structural relations of these organs as recognition features. The AAM search locates each key feature point of the face, as shown in Fig. 2; 32 key points in the eyebrow, eye, nose, mouth, and jaw regions are selected, as shown in Fig. 3, and the distances between different feature points are then computed as feature parameters. The distance d between the two inner eye corners (between feature points 6 and 13 in Fig. 3) is used as the feature normalization factor.
After normalization, the Euclidean distance Dis(i, j) from point i to point j is defined as:
Dis(i, j) = √((x_i − x_j)² + (y_i − y_j)²)/d   (1-1)
After normalization, the vertical distance Hei(i, j) from point i to point j is defined as:
Hei(i, j) = |y_i − y_j|/d   (1-2)
After normalization, the horizontal distance Wid(i, j) from point i to point j is defined as:
Wid(i, j) = |x_i − x_j|/d   (1-3)
where x_i and x_j are the abscissas of points i and j, and y_i and y_j are their ordinates. The distance features for expression recognition derived from the feature points are described in Fig. 4.
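The three normalized distances (1-1) to (1-3) can be sketched as follows. The landmark array layout, the zero-based indexing, and the use of indices 6 and 13 for the two inner eye corners (following the numbering of Fig. 3) are illustrative assumptions, not part of the patent's specification.

```python
import numpy as np

def distance_features(points):
    """Normalized distance features between AAM landmarks (Eqs. 1-1 to 1-3).

    `points` is an (N, 2) array of key-point coordinates; indices 6 and 13
    (assumed here to be the two inner eye corners, as in Fig. 3) give the
    normalizing factor d.
    """
    d = np.linalg.norm(points[6] - points[13])  # inter-ocular distance

    def Dis(i, j):  # Euclidean distance, Eq. (1-1)
        return np.linalg.norm(points[i] - points[j]) / d

    def Hei(i, j):  # vertical distance, Eq. (1-2)
        return abs(points[i][1] - points[j][1]) / d

    def Wid(i, j):  # horizontal distance, Eq. (1-3)
        return abs(points[i][0] - points[j][0]) / d

    return Dis, Hei, Wid
```

Normalizing by the inter-ocular distance makes the features invariant to face scale, which is why Dis(6, 13) is always 1 by construction.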
To address feature redundancy and excessive dimensionality, the redundant features are removed from the original features; this both ensures that the selected features retain strong discriminative power and reduces the running time of the whole system. Here the improved rough-set attribute reduction method selects among the obtained distance features. The rough set theory used in the invention is defined as follows:
Definition 1. A knowledge representation system S can be expressed as S = <U, C ∪ D, V, f>, where U is the set of objects, R = C ∪ D is the attribute set, whose subsets C and D are called the condition attributes and decision attributes, V = ∪_{a∈R} V_a is the set of attribute values, V_a is the range of attribute a ∈ R, and f is an information function that assigns an attribute value to each object x in U.
Definition 2. Given a domain U and a nonempty family P of equivalence relations on U, if ∩P (the intersection of all equivalence relations in P) is still an equivalence relation on U, it is called the indiscernibility relation of P, denoted IND(P) and often abbreviated P, where
IND(P) = {(x, y) ∈ U × U : f(x, a) = f(y, a), ∀a ∈ P}. If (x, y) ∈ IND(P), then x and y are said to be indiscernible with respect to P.
Definition 3. Let R be a family of equivalence relations and r ∈ R. If IND(R) = IND(R − {r}), then r is dispensable in R; otherwise r is indispensable. If every r ∈ R is indispensable, R is independent; otherwise R is dependent. If Q ⊆ P is independent and IND(Q) = IND(P), then Q is a reduct of P. The set of all indispensable relations in P is called the core of P, denoted CORE(P). The core and the reducts satisfy CORE(P) = ∩RED(P), where RED(P) is the set of all reducts of P. The core can be understood as the set of knowledge that cannot be eliminated during knowledge reduction.
Definition 4. If an attribute set partitions the domain U into m equivalence classes with n_1, n_2, …, n_m elements respectively, the knowledge quantity of that attribute set is W(n_1, n_2, …, n_m) = W(1, 1) × Σ_{1≤i<j≤m} n_i n_j. It satisfies:
(1) W(n) = 0;
(2) W(n_1, …, n_i, …, n_j, …, n_m) = W(n_1, …, n_j, …, n_i, …, n_m);
(3) W(n_1, n_2, …, n_m) = W(n_1, n_2 + n_3 + … + n_m) + W(n_2, n_3, …, n_m);
(4) W(n_1, n_i + n_j) = W(n_1, n_i) + W(n_1, n_j).
Definition 5. Let U be the domain, P a set of attributes in the information table, and Q another attribute set in the information table. The relative knowledge quantity of Q with respect to P is W_{U,Q/P} = W_{U,P∪Q} − W_{U,P}.
Definition 6. Given an information system S, let a(x) be the value of object x on attribute a, and let c_ij be the element in row i, column j of the matrix. The discernibility matrix can be defined by c_ij = {a ∈ C : a(x_i) ≠ a(x_j)}, where i, j = 1, 2, …, n and n = |U|.
Definition 7. Let S = (U, R, V, f) be a decision table with R = C ∪ D, where C is the condition attribute set and D is the decision attribute set. If
U/C = {X_1, X_2, …, X_n}
U/D = {Y_1, Y_2, …, Y_n}
the support of D with respect to C is defined as:
K_C(D) = (1/|U|) Σ_{i=1}^{n} |C̲Y_i| = (1/|U|) Σ_{i=1}^{n} |POS_C(Y_i)|,  Y_i ∈ U/D
where |·| denotes the number of elements in a set. The decision attribute support measures the overall classification ability of the decision table and is therefore also called the classification quality.
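As a minimal sketch of Definitions 4 and 7, the following computes equivalence classes under the indiscernibility relation, the knowledge quantity W (taking W(1, 1) as the unit), and the classification quality K_C(D) for a toy decision table. The table layout (tuples of values indexed by attribute position) is an assumption for illustration only.

```python
from collections import defaultdict
from itertools import combinations

def equivalence_classes(table, attrs):
    """Partition object indices by equal values on `attrs` (the IND relation)."""
    blocks = defaultdict(list)
    for idx, row in enumerate(table):
        blocks[tuple(row[a] for a in attrs)].append(idx)
    return list(blocks.values())

def knowledge_quantity(table, attrs):
    """W(n1, ..., nm) = sum over pairs of block sizes n_i * n_j
    (Definition 4, with W(1, 1) taken as the unit)."""
    sizes = [len(b) for b in equivalence_classes(table, attrs)]
    return sum(ni * nj for ni, nj in combinations(sizes, 2))

def support(table, cond, dec):
    """Classification quality K_C(D): fraction of objects whose C-class
    lies entirely inside one D-class (Definition 7)."""
    d_blocks = [set(b) for b in equivalence_classes(table, dec)]
    pos = 0
    for c_block in equivalence_classes(table, cond):
        if any(set(c_block) <= y for y in d_blocks):  # in the positive region
            pos += len(c_block)
    return pos / len(table)
```

When K_C(D) = 1 every condition class determines its decision class, i.e. the decision table is consistent.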
A feature selection algorithm is introduced to remove the redundant features from the original features; this both ensures that the selected features retain strong discriminative power and reduces the running time of the whole system. The improved heuristic attribute reduction method is used here; the algorithm proceeds as follows:
Input: a decision system S = (U, C ∪ D), where U is the domain (the object set), C is the condition attribute set, and D is the decision attribute set (here a single decision attribute).
Output: a reduct RED of the condition attribute set C.
Step 1. Initialization: CORE = ∅, RED = ∅, W(a_i) = 0.
Step 2. Compute the discernibility matrix M and generate the core CORE of the attribute set from it: for every matrix entry containing a single attribute {a_i}, add that attribute to the core, CORE = CORE ∪ {a_i}.
Step 3. Set RED = CORE, delete from M all entries that contain a core attribute, compute the knowledge quantity W(a_i) of each attribute, and compute |M|.
Step 4. While |M| ≠ 0:
{ AR = C − RED;
delete from every entry of M the attribute in AR with the minimum knowledge quantity W(a_i); add the attribute of every entry of M that now contains a single element to RED_TEMP; delete from M all entries that intersect RED_TEMP; recompute W(a_i) and |M|; RED = RED ∪ RED_TEMP. // Delete the attributes with small knowledge quantity, then add to the reduct those attributes that become indispensable (entries with a single attribute) after the deletion. }
Step 5. Return RED.
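A compact sketch of the reduction procedure above, under simplifying assumptions: the discernibility matrix follows Definition 6 restricted to object pairs with different decision values, and a greedy set-cover ordering stands in for the knowledge-quantity ordering W(a_i). It illustrates the core-plus-extension structure of the algorithm, not the paper's exact procedure.

```python
def discernibility_matrix(table, cond, dec):
    """Entries c_ij: condition attributes distinguishing each pair of
    objects that have different decision values (Definition 6)."""
    n = len(table)
    items = []
    for i in range(n):
        for j in range(i + 1, n):
            if table[i][dec] != table[j][dec]:
                entry = frozenset(a for a in cond if table[i][a] != table[j][a])
                if entry:
                    items.append(entry)
    return items

def heuristic_reduct(table, cond, dec):
    """Core + greedy extension: start from the core (singleton matrix
    entries), then repeatedly absorb the attribute covering the most
    remaining entries until the matrix is exhausted."""
    m = discernibility_matrix(table, cond, dec)
    red = {next(iter(e)) for e in m if len(e) == 1}   # core attributes
    m = [e for e in m if not (e & red)]               # drop covered entries
    while m:
        # greedy stand-in for the knowledge-quantity ordering W(a_i)
        best = max(set().union(*m), key=lambda a: sum(a in e for e in m))
        red.add(best)
        m = [e for e in m if best not in e]
    return red
```

Every entry of the discernibility matrix must intersect the returned set, so the result distinguishes all decision-relevant object pairs, as a reduct must.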
Reducing these 36 features with the improved attribute reduction method above yields 14 features with a significant contribution to the system (D1, D3, D4, D6, D8, D9, D17, D18, D23, D24, D26, D29, D31, D32), which describe the structural changes of the facial components under different expressions.
2) Global feature extraction
For a test image sequence, the improved weighted principal component analysis (UWPCA) method first constructs a bidirectional Gaussian weighting function with the two eyes and the mouth as its three feathering centers, and adds gradual scale coefficients in the horizontal and vertical directions so that the weighting region is adjustable in both directions, giving the following three-center, bidirectionally scale-adjustable weighting function:
ω(i, j) = exp{−[((i − x_1)²/a_1² + (j − y_1)²/b_1²) · ((i − x_2)²/a_2² + (j − y_2)²/b_2²) · ((i − x_3)²/a_3² + (j − y_3)²/b_3²)]^(1/3)}   (2-1)
where (i, j) is the position of the pixel in the image and (x_1, y_1), (x_2, y_2), (x_3, y_3) are the positions of the three weighting centers. This function has three feathering centers; unlike the original feathering function, it does not emphasize the information of one or two single points, but emphasizes the positions of three points in a distributed manner, and adds gradual scale coefficients in the horizontal and vertical directions. It emphasizes the positions of the eyes, eyebrows, and mouth, which contribute most to expression changes, making the facial expression features more salient. Experiments show that the recognition effect is better when the three bright-spot regions of the feathering function are elliptical rather than circular, and is best when a_1 = a_2 = 15, b_1 = b_2 = 20, a_3 = 15, b_3 = 25.
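The three-center weighting function (2-1) can be sketched as a weighting window over the image. The image size, center coordinates, and zero-based pixel indexing below are illustrative assumptions; the axis parameters follow the paper's reported best setting.

```python
import numpy as np

def uwpca_weight(shape, centers, axes):
    """Three-center bidirectional Gaussian weighting window, Eq. (2-1).

    centers: [(x1, y1), (x2, y2), (x3, y3)], e.g. eye, eye, mouth positions.
    axes:    [(a1, b1), (a2, b2), (a3, b3)] horizontal/vertical scales;
    the paper reports a1 = a2 = 15, b1 = b2 = 20, a3 = 15, b3 = 25 as best.
    """
    i, j = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    prod = np.ones(shape, dtype=float)
    for (x, y), (a, b) in zip(centers, axes):
        prod *= (i - x) ** 2 / a ** 2 + (j - y) ** 2 / b ** 2
    # cube root = geometric mean of the three elliptical terms
    return np.exp(-prod ** (1.0 / 3.0))
```

The weight equals 1 exactly at each center (the corresponding elliptical term vanishes, so the product is zero) and decays smoothly away from all three, which is what spreads the emphasis over the eyes and mouth rather than a single point.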
The above method yields a 52 × 10 eigenvector matrix. To address the high dimensionality, the improved rough-set attribute reduction method is again used to reduce the 52-dimensional feature vector of each image, giving a 24 × 10 eigenvector matrix for the expression sequence. This matrix is vectorized into a 240-dimensional column vector that forms the global feature of the image sequence.
3) Fusion of local and global expression features
Canonical correlation analysis (CCA) is used to fuse the two kinds of features extracted above. In the fusion process, the two data sets x and y may differ in dimensionality, or their components may differ greatly in magnitude, which hinders the extraction of correlated features; before fusion, each data set must therefore be standardized:
x* = (x − μ_x)/σ_x,  y* = (y − μ_y)/σ_y   (3-1)
where μ_x = E(x) and μ_y = E(y) are the mean vectors of the samples, and σ_x, σ_y are the standard deviation vectors of the samples on each component. Define the kernel function after mapping as K'_ij = <φ'(x_i), φ'(x_j)>, which gives:
K'_ij = <φ(x_i) − (1/n) Σ_{p=1}^{n} φ(x_p), φ(x_j) − (1/n) Σ_{q=1}^{n} φ(x_q)>   (3-2)
From the theory of reproducing kernel spaces:
M = (1/N) K_x^T J K_y,  P = (1/N) K_x^T J K_x + η_1 K_x,  Q = (1/N) K_y^T J K_y + η_2 K_y,  J = I − (1/N) A A^T   (3-3)
where A is an N-dimensional vector whose elements are all 1, and η_1 = η_2 = η.
To solve for the vectors α and β, one need only solve for the generalized eigenvectors of P and M, or obtain them by Cholesky decomposition. Once α and β are found, the nonlinear canonical correlation features between x and y can be extracted.
The nonlinear correlation features extracted from the local and global features are combined as their joint feature:
z' = (α^T K_x ; β^T K_y)^T   (3-4)
where z' is the fused feature.
The detailed fusion procedure is as follows:
1) Let X and Y be the feature input vectors obtained by the two feature extraction methods; standardize them with formula (3-1) before fusion to obtain X' and Y'.
2) Select a suitable kernel function type, compute the kernel matrices K_x and K_y, and center them with formula (3-2) to obtain K'_x and K'_y.
3) Build the matrices M, P, Q according to formula (3-3).
4) Obtain α and β by solving for the generalized eigenvectors of P and M, or by Cholesky decomposition, and then compute the fused feature set z' according to formula (3-4).
Following steps 1) to 4) above, the local geometric features and global features extracted earlier are fused. The fused feature vector z' has 40 dimensions and, after normalization, serves as the observation vector of the discrete HMM.
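Steps 1) to 4) can be sketched as follows, under simplifying assumptions that are mine rather than the paper's: linear kernels, an extra ridge term η·I added to P and Q for numerical stability, and a dense generalized eigen-solver in place of the Cholesky route. It illustrates regularized kernel CCA fusion, not the paper's exact implementation.

```python
import numpy as np

def kcca_fuse(Kx, Ky, eta=1e-3, dim=1):
    """Regularized kernel CCA fusion sketch (Eqs. 3-2 to 3-4).

    Kx, Ky: precomputed n x n kernel matrices of the two feature sets.
    Returns the fused feature z' stacking the top canonical projections.
    """
    n = Kx.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kx, Ky = J @ Kx @ J, J @ Ky @ J              # kernel centering (Eq. 3-2)
    M = Kx.T @ J @ Ky / n                        # cross term (Eq. 3-3)
    P = Kx.T @ J @ Kx / n + eta * Kx + eta * np.eye(n)  # ridge: assumption
    Q = Ky.T @ J @ Ky / n + eta * Ky + eta * np.eye(n)
    # generalized eigenproblem for the canonical directions alpha, beta:
    #   [0 M; M^T 0] v = lambda [P 0; 0 Q] v
    A = np.block([[np.zeros((n, n)), M], [M.T, np.zeros((n, n))]])
    B = np.block([[P, np.zeros((n, n))], [np.zeros((n, n)), Q]])
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(-vals.real)
    top = vecs[:, order[:dim]].real
    alpha, beta = top[:n], top[n:]
    return np.vstack([alpha.T @ Kx, beta.T @ Ky])  # fused z' (Eq. 3-4)
```

Stacking α^T K_x over β^T K_y keeps both views' canonical projections in the fused vector, matching the joint-feature form of (3-4).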
4) Classification and recognition based on HMM
The normalized 40-dimensional fused feature vector serves as the observation vector of the HMM. For the input observation o, p(o|λ_i), 1 ≤ i ≤ 6, is computed for each trained model. If i* = argmax_{1≤i≤6} p(o|λ_i), then i* is the expression class of the sequence. The structure of the HMM expression recognition system is shown in Fig. 5. The proposed algorithm is tested on the Cohn-Kanade expression database (Cohn-Kanade AU-coded Facial Expression Database): the expression images of 40 randomly chosen subjects are used, with the image sequences of the first 10 subjects as training samples and those of the remaining 30 subjects as test samples. For each expression, each subject contributes 10 images ordered from weak to strong, 2400 images in total.
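The decision rule i* = argmax_i p(o|λ_i) can be sketched with a scaled forward algorithm for a discrete HMM. The model parameters, the two-state toy structure, and the label names below are illustrative assumptions; the paper's six models would be trained on the fused observation sequences.

```python
import numpy as np

LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log p(o | lambda) for a discrete HMM
    with start probs pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

def classify(obs, models):
    """i* = argmax_i p(o | lambda_i) over the trained expression models."""
    scores = [log_likelihood(obs, *m) for m in models]
    return LABELS[int(np.argmax(scores))]
```

Comparing log-likelihoods is equivalent to comparing p(o|λ_i) directly, but numerically stable for the length-10 observation sequences used here.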
The two kinds of features are fused by kernel canonical correlation analysis to eliminate feature redundancy, and the fused canonical features are used to train six hidden Markov models. After training, 30 randomly selected sequences of each expression are tested; the results are shown in Fig. 6. As the table shows, the recognition rates of fear, happiness, and surprise are relatively high, while those of disgust and sadness are relatively low. The average recognition rate is 91.12%. Happiness and surprise are recognized relatively well because, in the image library used, these two expressions are mostly captured at their peak and are posed quite accurately. Disgust is recognized worse than the other expressions because its amplitude in the image library is mostly small and its motion is not standardized, so it is easily mistaken for sadness.
To verify the effectiveness of the proposed feature fusion, under identical training and test image sequences, the present method is compared with canonical correlation analysis fusion and with the two single-feature extraction methods combined with HMM recognition; the results are shown in Fig. 7, where the abscissa represents the six basic expressions and the ordinate represents the recognition rate. As Fig. 7 shows, the average recognition rate of the present method is higher than that of the other three.
Compared with the prior art, the present invention improves the recognition rate of facial expression recognition and shortens the recognition time, and is therefore easier to implement.

Claims (7)

1. A facial expression recognition method based on a rough set and mixed features, comprising the steps of:
carrying out face detection;
extracting the local geometric deformation features of the expression with a method combining an active appearance model and the rough set;
extracting the global features of the expression by combining an improved weighted principal component analysis method with the rough set;
fusing the extracted local geometric deformation features and global features by kernel canonical correlation analysis for high-dimensional small samples to eliminate feature redundancy, obtaining the fused canonical features;
using the fused features as the observation vectors of a discrete HMM for classification, obtaining the classification result.
2. The facial expression recognition method based on a rough set and mixed features of claim 1, wherein in the step of extracting the local geometric deformation features of the expression with the method combining the active appearance model and the rough set, the AAM algorithm extracts geometric descriptions of the shapes and structural relations of the eyebrows, eyes, nose, and mouth of the face as recognition features; the AAM search locates each key feature point of the face; 32 key points in the eyebrow, eye, nose, mouth, and jaw regions are selected; and the distances between different feature points are then computed as feature parameters.
3. The facial expression recognition method based on a rough set and mixed features of claim 2, wherein in the step of extracting the local geometric deformation features of the expression with the method combining the active appearance model and the rough set, the improved rough-set attribute reduction method selects among the obtained distance features, and after feature selection 14 principal distance parameters are retained for each image.
4. The facial expression recognition method based on a rough set and mixed features of claim 3, wherein in the step of extracting the global features of the expression by combining the improved weighted principal component analysis method with the rough set, for a test image sequence, the improved weighted principal component analysis method first constructs a bidirectional Gaussian weighting function with the two eyes and the mouth as its three feathering centers, and adds gradual scale coefficients in the horizontal and vertical directions so that the weighting region is adjustable in both directions, emphasizing in a distributed manner the positions of the eyes, eyebrows, and mouth, which contribute most to expression changes, and making the facial expression features more salient.
5. The facial expression recognition method based on a rough set and mixed features of claim 4, wherein in the step of extracting the global features of the expression by combining the improved weighted principal component analysis method with the rough set, the improved weighted principal component analysis method yields a 52 × 10 eigenvector matrix; to address the high dimensionality, the improved rough-set attribute reduction method is again used to reduce the 52-dimensional feature vector of each image, giving a 24 × 10 eigenvector matrix for the expression sequence, which is vectorized into a 240-dimensional column vector forming the global feature of the image sequence.
6. The facial expression recognition method based on a rough set and mixed features of claim 5, wherein in the step of fusing the extracted local geometric deformation features and global features by kernel canonical correlation analysis for high-dimensional small samples to eliminate feature redundancy and obtain the fused canonical features, canonical correlation analysis is used to fuse the two kinds of features extracted above; before fusion, each data set is standardized and a suitable kernel function type is selected; the local geometric features and global features extracted above are then fused, and the fused feature vector z' has 40 dimensions and, after normalization, serves as the observation vector of the discrete HMM.
7. The facial expression recognition method based on a rough set and mixed features of claim 6, wherein in the step of classification using the fused features as the observation vectors of the discrete HMM, the normalized 40-dimensional fused feature vector serves as the observation vector of the HMM; for the input observation o, p(o|λ_i), 1 ≤ i ≤ 6, is computed for each trained model, and if i* = argmax_{1≤i≤6} p(o|λ_i), then i* is the expression class of the sequence.
CN201410168960.9A 2014-04-24 2014-04-24 Facial expression recognition method based on rough set and mixed features Pending CN103984919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410168960.9A CN103984919A (en) 2014-04-24 2014-04-24 Facial expression recognition method based on rough set and mixed features

Publications (1)

Publication Number Publication Date
CN103984919A true CN103984919A (en) 2014-08-13

Family

ID=51276881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410168960.9A Pending CN103984919A (en) 2014-04-24 2014-04-24 Facial expression recognition method based on rough set and mixed features

Country Status (1)

Country Link
CN (1) CN103984919A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8369608B2 (en) * 2009-06-22 2013-02-05 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for detecting drowsy facial expressions of vehicle drivers under changing illumination conditions

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DUAN LI et al.: "Facial expression feature selection based on rough set", Computer Engineering and Applications *
DUAN LI: "Research on facial expression recognition based on rough set and mixed features", China Master's Theses Full-text Database, Information Science and Technology *
SHAO XIAOGEN et al.: "Expression recognition based on the combination of UWPCA and rough set", Computer Engineering and Applications *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408468A (en) * 2014-11-26 2015-03-11 西安电子科技大学 Face recognition method based on rough set and integrated learning
CN105447446A (en) * 2015-11-12 2016-03-30 易程(苏州)电子科技股份有限公司 Face recognition method and system based on principal component of rough set
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN105809113B (en) * 2016-03-01 2019-05-21 湖南拓视觉信息技术有限公司 Three-dimensional face identification method and the data processing equipment for applying it
CN105809113A (en) * 2016-03-01 2016-07-27 湖南拓视觉信息技术有限公司 Three-dimensional human face identification method and data processing apparatus using the same
CN107292218A (en) * 2016-04-01 2017-10-24 中兴通讯股份有限公司 A kind of expression recognition method and device
CN105938561A (en) * 2016-04-13 2016-09-14 南京大学 Canonical-correlation-analysis-based computer data attribute reduction method
CN106257489A (en) * 2016-07-12 2016-12-28 乐视控股(北京)有限公司 Expression recognition method and system
CN110678878A (en) * 2017-03-20 2020-01-10 华为技术有限公司 Apparent feature description attribute identification method and device
CN110678878B (en) * 2017-03-20 2022-12-13 华为技术有限公司 Apparent feature description attribute identification method and device
US11410411B2 (en) 2017-03-20 2022-08-09 Huawei Technologies Co., Ltd. Method and apparatus for recognizing descriptive attribute of appearance feature
US11941536B2 (en) 2017-03-29 2024-03-26 International Business Machines Corporation Entity learning recognition
US11093796B2 (en) 2017-03-29 2021-08-17 International Business Machines Corporation Entity learning recognition
CN107016416A (en) * 2017-04-12 2017-08-04 中国科学院重庆绿色智能技术研究院 The data classification Forecasting Methodology merged based on neighborhood rough set and PCA
CN107016416B (en) * 2017-04-12 2021-02-12 中国科学院重庆绿色智能技术研究院 Data classification prediction method based on neighborhood rough set and PCA fusion
CN108090513A (en) * 2017-12-19 2018-05-29 天津科技大学 Multi-biological characteristic blending algorithm based on particle cluster algorithm and typical correlation fractal dimension
CN108446658A (en) * 2018-03-28 2018-08-24 百度在线网络技术(北京)有限公司 The method and apparatus of facial image for identification
CN109409273A (en) * 2018-10-17 2019-03-01 中联云动力(北京)科技有限公司 A kind of motion state detection appraisal procedure and system based on machine vision
CN111227789A (en) * 2018-11-29 2020-06-05 百度在线网络技术(北京)有限公司 Human health monitoring method and device
CN109711378B (en) * 2019-01-02 2020-12-22 河北工业大学 Automatic facial expression recognition method
CN109711378A (en) * 2019-01-02 2019-05-03 河北工业大学 Human face expression automatic identifying method
CN112163133A (en) * 2020-09-25 2021-01-01 南通大学 Breast cancer data classification method based on multi-granularity evidence neighborhood rough set
CN112686978A (en) * 2021-01-07 2021-04-20 网易(杭州)网络有限公司 Expression resource loading method and device and electronic equipment
CN112686978B (en) * 2021-01-07 2021-09-03 网易(杭州)网络有限公司 Expression resource loading method and device and electronic equipment
CN113049606A (en) * 2021-03-11 2021-06-29 云南电网有限责任公司电力科学研究院 Large-area high-precision insulator pollution distribution assessment method
CN113076916A (en) * 2021-04-19 2021-07-06 山东大学 Dynamic facial expression recognition method and system based on geometric feature weighted fusion

Similar Documents

Publication Publication Date Title
CN103984919A (en) Facial expression recognition method based on rough set and mixed features
US10929649B2 (en) Multi-pose face feature point detection method based on cascade regression
Ali et al. Boosted NNE collections for multicultural facial expression recognition
CN108229330A (en) Face fusion recognition methods and device, electronic equipment and storage medium
Liu et al. Composite components-based face sketch recognition
CN104392246B (en) It is a kind of based between class in class changes in faces dictionary single sample face recognition method
Mirza et al. Gender classification from offline handwriting images using textural features
CN103679158A (en) Face authentication method and device
Huber et al. Mask-invariant face recognition through template-level knowledge distillation
CN103366160A (en) Objectionable image distinguishing method integrating skin color, face and sensitive position detection
CN106909946A (en) A kind of picking system of multi-modal fusion
CN107133651A (en) The functional magnetic resonance imaging data classification method of subgraph is differentiated based on super-network
CN105894050A (en) Multi-task learning based method for recognizing race and gender through human face image
Presti et al. Boosting Hankel matrices for face emotion recognition and pain detection
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN101169830A (en) Human face portrait automatic generation method based on embedded type hidden markov model and selective integration
Ilmini et al. Computational personality traits assessment: A review
CN109614866A (en) Method for detecting human face based on cascade deep convolutional neural networks
Ali et al. Fusion based fastica method: Facial expression recognition
CN104021381A (en) Human movement recognition method based on multistage characteristics
CN105809713A (en) Object tracing method based on online Fisher discrimination mechanism to enhance characteristic selection
Kajla et al. Graph neural networks using local descriptions in attributed graphs: an application to symbol recognition and hand written character recognition
CN103258186A (en) Integrated face recognition method based on image segmentation
Fu et al. Personality trait detection based on ASM localization and deep learning
Yuan et al. Children's drawing psychological analysis using shallow convolutional neural network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140813