CN107977630A - Smile type determination method based on facial expression recognition - Google Patents

Smile type determination method based on facial expression recognition

Info

Publication number
CN107977630A
Authority
CN
China
Prior art keywords
feature
smile
face
picture
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711261098.6A
Other languages
Chinese (zh)
Inventor
杨世鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201711261098.6A
Publication of CN107977630A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/086: Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of smile type determination methods, and in particular relates to a smile type determination method based on facial expression recognition that can improve the accuracy of facial smile recognition. The method comprises the following steps: establish the salient-feature standard parameters of several classes of smiles and determine the comparison relation between the standard parameters and the parameters to be identified; pre-process the facial smile picture of the person; construct a point distribution shape model for the pre-processed picture, perform feature extraction based on uniform LGBP, and establish salient picture features; perform feature selection on the picture features with a genetic algorithm to obtain an elite population; take the resulting elite population as the reference value of each smile class's features and establish two new objective functions, one within-class and one between-class, the goal being to minimise the within-class function and maximise the between-class function; classify the face features: after the optimal features have been chosen, classify them with the random forest method, completing the recognition and judgement of the smile class.

Description

Smile type determination method based on facial expression recognition
Technical field
The present invention relates to smile type determination methods, and in particular to a smile type determination method based on facial expression recognition.
Background technology
Facial smile recognition is the process of processing the facial smile information of a person with software technology, extracting its features and recognising and judging its class. Facial smile recognition is typically used in human-computer interaction and in the field of artificial intelligence. For example, facial smile recognition can strengthen the direct interaction between intelligent robots and humans: an intelligent robot can plan and arrange its actions according to the information recognised from images. Facial smile recognition further analyses, on the basis of facial expression recognition, the smile detected in a face image in order to determine the smile class of the identified subject, for example the professional smile, the passive smile, the active smile or the Duchenne smile. Current facial smile recognition methods mainly proceed as follows: models or topological structures are built for various moods, such as happiness and sadness, in the corresponding feature regions, such as the mouth, the eyes and the eyebrows; the required feature points, such as the pixels of the eyes and their surroundings or the pixels of the eyebrows and their surroundings, are then extracted from the imaged picture, or geometric feature regions are used, such as the geometric region of the mouth or of the eyes; finally these feature points or geometric regions are compared with the models or topological structures that have been built. In this process, only a primitive decision about general facial expressions such as happiness or sadness is made. Methods such as local Gabor binary patterns (LGBP) transform the image feature regions with Gabor wavelets, extract with LBP the relation between a feature pixel and its surrounding pixels, and present this relation as a histogram. These traditional facial feature extraction models do not consider the differences between expression classes during feature selection, cannot distinguish levels of facial smiles, and cannot evaluate the smile class of the identified subject.
Summary of the invention
The technical problem mainly solved by the invention is to provide a smile type determination method based on facial expression recognition that can improve the accuracy of facial smile recognition.
In order to solve the above technical problem, the present invention relates to a smile type determination method based on facial expression recognition, comprising the following steps:
(1) Establish the salient-feature standard parameters of the professional smile, the passive smile, the active smile and the Duchenne smile, and determine the comparison relation between the standard parameters and the parameters to be identified. The professional smile presents a smiling mouth shape with contraction of the muscles around the mouth. The passive smile presents simultaneous contraction of the cheekbone and periocular muscles, with the eyes narrowed. The active smile presents cheekbone contraction and outward contraction of the periocular muscles. The Duchenne smile presents cheekbone contraction, outward contraction of the periocular muscles and obvious crow's feet at the corners of the eyes;
(2) Pre-process the facial smile picture of the person;
(3) Construct a point distribution shape model for the pre-processed facial smile picture, perform feature extraction based on uniform LGBP, and establish salient picture features;
(4) Perform a first round of feature selection on the picture features with a genetic algorithm to obtain an elite population;
(5) Take the resulting elite population as the reference value of each smile class's features, establish two new objective functions, one within-class and one between-class, the goal being to minimise the within-class function and maximise the between-class function, and optimise them with a Pareto optimisation algorithm;
(6) Classify the face features: after the optimal features have been chosen, classify them with the random forest method, completing the recognition and judgement of the smile class.
Preferably, the professional smile presents a smiling mouth shape with contraction of the muscles around the mouth; the passive smile presents simultaneous contraction of the cheekbone and periocular muscles, with the eyes narrowed; the active smile presents cheekbone contraction and outward contraction of the periocular muscles; the Duchenne smile presents cheekbone contraction, outward contraction of the periocular muscles and obvious crow's feet at the corners of the eyes.
Preferably, the pre-processing in step (2) comprises face recognition, noise processing and face alignment.
Preferably, the uniform-LGBP-based feature extraction in step (3) is performed as follows: first apply a Gabor filtering transform to the picture; then apply the LBP operator to all pixels of the picture to extract a histogram as the face feature, i.e. take the value of the central pixel as a threshold and set a pixel in the neighbourhood to 1 if its value is greater than the threshold and to 0 otherwise; after this binarisation, 2^Q binary patterns are obtained; a uniform pattern is then defined over the descriptors formed by the binarisation: a pattern with at most two transitions from 0 to 1 or from 1 to 0 is a uniform pattern, and any other pattern is non-uniform; with this definition, the 2^Q original binary patterns are reduced to Q² − Q + 2.
Preferably, the establishment of salient picture features based on uniform LGBP in step (3) is performed as follows: for a given picture of n × n pixels, first divide it into m × m face feature blocks; each face feature block is itself a picture of l × l pixels, and uniform-LGBP-based feature extraction is applied to each l × l picture; in order to pick out the salient face features, a threshold 0.2q is set, where q is the maximum intensity value in the uniform LGBP; then the maximum intensity value of each pixel in the l × l picture is compared with this threshold, and if the intensity value of the pixel is greater than or equal to the threshold the corresponding pixel is regarded as salient; for each face feature block, if four or more points are regarded as salient, the face feature block is set to 1, otherwise it is set to 0.
Preferably, step (4) is specifically: first randomly initialise the population, then calculate the accuracy of each population member, divide the members into four classes according to accuracy, and give the fitness function for evaluating each class of population member; finally, optimise with the genetic algorithm to obtain the elite population.
Preferably, in the fitness function, α is the proportion correctly classified into its class, ε, ρ1 and ρ2 are parameters, a_rc is the state of a face feature block, and m is the number of face feature blocks.
Preferably, the within-class objective function in step (5) is $F_1(S_k)=\frac{1}{N_w}\sum_{r=1}^{m}\sum_{c=1}^{m}(S_k-m_w)^2$, and the between-class objective function in step (5) is $F_2(S_k)=\frac{1}{l}\sum_{i=1}^{l}\Big(\frac{1}{N_b}\sum_{r=1}^{m}\sum_{c=1}^{m}(S_i-m_b)^2\Big)$.
Preferably, $m_w=\frac{1}{N_w}\sum_{i=1}^{N_w}M_i$ and $m_b=\frac{1}{N_b}\sum_{j=1}^{N_b}M_j$, where $M_i$ is a solution obtained by the genetic algorithm from one smile class, $N_w$ is the number of such solutions, $M_j$ is a solution obtained by the genetic algorithm from the other classes, $N_b$ is the number of those solutions, $l$ is the number of smile classes, and $S_k$ is a population member.
Advantageous effects:
The smile type determination method based on facial expression recognition of the present invention builds on traditional LGBP and uniform-LGBP facial expression recognition and uses the fitness function of a GA together with the objective functions of a new Pareto optimisation algorithm, so that the smile class can be judged more accurately. The present invention makes an active, exploratory attempt in the field of facial smile recognition and judgement.
Brief description of the drawings
Fig. 1 is the flow diagram of the present invention.
Embodiment
The present invention is further explained below with reference to specific embodiments. It should be understood that these embodiments are merely illustrative of the invention and do not limit its scope. In addition, it should also be understood that, after reading the teachings of the present invention, those skilled in the art can make various changes or modifications to the invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
The embodiment of the present invention relates to a smile type determination method based on facial expression recognition which, as shown in Fig. 1, comprises the following steps: pre-process the face picture; perform uniform-LGBP-based feature extraction on the pre-processed face picture and establish the saliency of the picture features; perform a first round of feature selection on the picture features with a genetic algorithm to obtain an elite population; take the resulting elite population as the reference value of each smile class's features, establish two new objective functions, one within-class and one between-class, the goal being to minimise the within-class function and maximise the between-class function, and optimise them with a Pareto optimisation algorithm; classify the face features: after the optimal features have been chosen, classify them with the random forest method. The details are as follows:
Step 1. Given a group of face pictures, pre-process them first; the pre-processing mainly comprises face recognition, noise processing and face alignment. The specific steps are as follows:
Step 1.1.1. In the face recognition step, Haar-like face detection is used. Haar-like features represent the face in four forms: edge features, line features, centre features and diagonal features. The feature of each form contains white and black rectangles, and the feature value is the sum of all pixels in the white rectangles minus the sum of all pixels in the black rectangles. The features of the four forms thus form a standard template that is finally used to detect the face.
Step 1.1.2. Noise processing is done with a bilateral filter. As is well known, a Gaussian filter considers only the spatial distance between pixels at each sample and not the similarity between pixel values; Gaussian filtering therefore tends to blur the picture being processed. By contrast, a bilateral filter has two parameters, determined respectively by the geometric distance between pixels and by the difference between pixel values. The bilateral filter can therefore protect the edges of the picture effectively while also removing its noise.
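For illustration only, the following is a minimal Python sketch of this pre-processing stage using OpenCV; the Haar cascade file, the detection parameters and the bilateral-filter values are assumptions chosen for demonstration and are not prescribed by the patent.

```python
import cv2

def preprocess_face(image_path):
    """Detect a face with a Haar cascade (step 1.1.1) and denoise it with a
    bilateral filter (step 1.1.2). Returns the filtered grayscale face crop."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Haar-like feature cascade shipped with OpenCV (assumed stand-in detector)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]          # keep the first detected face
    face = gray[y:y + h, x:x + w]

    # Bilateral filter: one parameter for geometric distance (sigmaSpace) and
    # one for pixel-value difference (sigmaColor); values are illustrative only.
    return cv2.bilateralFilter(face, 9, 75, 75)
```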
Step 1.1.3. Because the CK+ database (the database used in the experiments) already contains facial landmark points, there is no need to track landmark points again in order to describe the facial contour. It should be pointed out that the CK+ database provides 68 landmark points, but the present invention only needs the points that describe the face boundary. Only the pixel values within the face boundary are retained and the unwanted background information in the picture is removed, which improves the precision of picture extraction.
Step 2. In this step, a Gabor filtering method is first applied to the picture to represent the face picture. Then, for the representation of the picture texture, uniform LGBP is used instead of traditional LGBP. Finally, the salient face features are generated from the resulting uniform LGBP. The uniform-LGBP method and the process of generating salient face features based on uniform LGBP are described in detail below.
As in LBP, a label is assigned to each pixel in the picture by the following formula:
$H(x_p, y_p) = I\big(f(x_p, y_p) \ge f(x_c, y_c)\big)$    (1)
where $f(x_c, y_c)$ is the pixel value at the centre point $(x_c, y_c)$, $f(x_p, y_p)$ $(p = 0, 1, \ldots, Q-1)$ is the pixel value of the p-th point around the centre point $(x_c, y_c)$, and $I(A)$ is 1 when the statement A is true and 0 when A is false. The shape considered here is no longer the traditional square but a circle. Through the above step, $2^Q$ binary patterns are obtained. Next, those binary patterns with at most two transitions from 0 to 1 or from 1 to 0 are defined as uniform patterns; the others are non-uniform patterns. With the proposed method, the number of original binary patterns is reduced to $Q^2 - Q + 2$.
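A minimal Python sketch of this labelling, assuming Q = 8 neighbours sampled on a circle of radius 1 and leaving out the Gabor filtering and histogram bookkeeping of full LGBP, could look as follows (border pixels are not handled here):

```python
import numpy as np

def lbp_bits(img, y, x, radius=1, Q=8):
    """Binary pattern of pixel (y, x): bit p is 1 when the p-th circular
    neighbour is >= the centre value, as in Eq. (1)."""
    centre = img[y, x]
    bits = []
    for p in range(Q):
        theta = 2 * np.pi * p / Q
        yp = int(round(y + radius * np.sin(theta)))
        xp = int(round(x + radius * np.cos(theta)))
        bits.append(1 if img[yp, xp] >= centre else 0)
    return bits

def is_uniform(bits):
    """Uniform pattern: at most two 0<->1 transitions around the circle."""
    transitions = sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))
    return transitions <= 2

# For Q = 8 there are 2**8 = 256 raw patterns, of which Q*Q - Q + 2 = 58 are
# uniform; the non-uniform patterns can all be pooled into one extra bin.
```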
Step 3. Establish the salient face features based on uniform LGBP. For a given picture of n × n pixels, first divide it into m × m face feature blocks. Each face feature block is itself a picture of l × l pixels, and the uniform-LGBP feature extraction is then applied to each l × l picture. In order to pick out the salient face features, a threshold 0.2q is set, where q is the maximum intensity value in the uniform LGBP. Then the maximum intensity value of each pixel in the l × l picture is compared with this threshold. If the intensity value of the pixel is greater than or equal to the threshold, the corresponding pixel is regarded as salient. For each face feature block, if four or more points are regarded as salient, the face feature block is set to 1; otherwise it is set to 0.
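The block labelling can be sketched as follows in Python; the per-pixel uniform-LGBP response map is assumed to be precomputed, and the cut-off of four salient pixels follows the text:

```python
import numpy as np

def salient_blocks(response, m, min_salient=4):
    """response: n x n array of per-pixel maximum uniform-LGBP intensities.
    Returns the m x m binary matrix of face feature block states a_rc."""
    n = response.shape[0]
    l = n // m                        # side length of one face feature block
    threshold = 0.2 * response.max()  # 0.2 * q, with q the maximum intensity value
    blocks = np.zeros((m, m), dtype=int)
    for r in range(m):
        for c in range(m):
            patch = response[r * l:(r + 1) * l, c * l:(c + 1) * l]
            if np.count_nonzero(patch >= threshold) >= min_salient:
                blocks[r, c] = 1      # block holds a significant face feature
    return blocks
```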
Step 4. Define the initial population: let a_ij be the state of a face feature block. a_ij is set to 1 when the face feature block is salient and to 0 otherwise. All face feature blocks a_ij form a matrix S_k, as follows:
where S_k is one solution of the optimal features in a specific smile population, and N is the number of population members.
Step 5. Calculate the parameter α: let D be a training picture in the training set of the same smile class, and obtain the face feature matrix of this training picture D, denoted I_D. When the following rule is satisfied, the training picture D belongs to this smile class:
$|I_D \cap S_k| \ge \Omega \sum_{r=1}^{m}\sum_{c=1}^{m} a_{rc}$    (3)
where Ω is a threshold set to 0.8, which means that a picture must have 80% similarity with the chosen features before it is declared to belong to this smile class.
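For illustration, a short Python sketch of rule (3) and of the parameter α, under the assumption that the feature matrices are binary arrays; the names below are illustrative, not taken from the patent:

```python
import numpy as np

def belongs_to_class(ID, Sk, omega=0.8):
    """Rule (3): the overlap |ID ∩ Sk| must reach omega times the number of
    selected blocks (the sum of a_rc)."""
    overlap = np.count_nonzero(np.logical_and(ID == 1, Sk == 1))
    return overlap >= omega * Sk.sum()

def alpha(training_matrices, Sk, omega=0.8):
    """alpha: percentage of same-class training pictures satisfying rule (3)."""
    hits = sum(belongs_to_class(ID, Sk, omega) for ID in training_matrices)
    return 100.0 * hits / len(training_matrices)
```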
Step 6. The above steps define all the required parameters; the GA is used next to search for the solution of the optimal features. Because the unwanted background has already been removed in the face alignment step, in the initial matrix S_k the boundary elements a_1j = a_mj = 0 (j = 1, 2, …, m) and a_i1 = a_im = 0 (i = 1, 2, …, m), while the other elements are selected at random. For the new population NR, its initial solutions are also randomly selected from these candidate solutions. After initialisation, the solution S_k evolves according to the following fitness function:
where α is the proportion correctly classified into its class and ε, ρ1 and ρ2 are parameters. The fitness value F(S_k) depends mainly on the parameter α and on the number of features, and in general a smaller fitness value corresponds to a better solution. When α = 0 the solution is usually highly infeasible, so to avoid α = 0 recurring, ε is set to 0.0001 so that F(S_k) becomes as large as possible. When α = 100 the solution is highly feasible, so to prevent irrelevant features from entering the solution, ε is set to 0.005; the number of selected features thus both preserves the advantage of the population and reduces the feature dimensionality. When 90 ≤ α < 100, more features tend to enter the solution set because of the high α value; so that F(S_k) in this range is not as small as when α = 100, a threshold constant ρ1 is added and set to 1. When 0 ≤ α < 90, F(S_k) needs to be larger still than when 90 ≤ α < 100, so ρ2 is set to 2.
With the fitness function defined by the above formula, the GA-based feature selection algorithm is summarised as Algorithm 1.
Algorithm 1: GA-based feature selection algorithm
Input:
Fitness function, F()
Maximum number of iterations, G
Population size, n
Fraction of the population replaced by crossover at each step, r
Mutation rate, m%
New population, NR
Output:
Population P
Begin
Step 1: Generate a random population P(S1, S2, …, Sk) (k = 1, 2, …, n);
Step 2: Evaluate each population member and calculate F(Sk) (k = 1, 2, …, n);
Step 3: Selection: probabilistically select (1 − r)·n members from population P and add them to NR;
Step 4: Crossover: probabilistically select pairs from population P; for each pair, produce two offspring with the crossover operator and add all offspring to NR;
Step 5: Mutation: select m%·(1 − r)·n members from NR with uniform probability; for each selected member, randomly choose one bit of its representation and invert it;
Step 6: Update: copy the values in NR into P;
Step 7: If the number of iterations ≤ G
Continue from Step 2;
else
Stop;
End
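As a rough illustration of Algorithm 1 (not the patent's exact operators), the following Python skeleton evolves binary feature-block vectors with roulette selection, single-point crossover and one-bit mutation; the fitness function F is supplied by the caller, since the patent's formula is described above only in terms of its parameters:

```python
import random

def ga_select_features(F, n=50, length=64, G=100, r=0.6, mut_rate=0.05):
    """Minimal GA loop over bit vectors (flattened feature-block matrices)."""
    P = [[random.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(G):
        scores = [F(s) for s in P]
        # smaller F is better, so invert the scores for roulette-wheel selection
        weights = [1.0 / (1e-9 + sc) for sc in scores]
        NR = [s[:] for s in random.choices(P, weights=weights, k=int((1 - r) * n))]
        # crossover: fill the remaining slots with single-point offspring
        while len(NR) < n:
            a, b = random.choices(P, weights=weights, k=2)
            cut = random.randint(1, length - 1)
            NR.append(a[:cut] + b[cut:])
            if len(NR) < n:
                NR.append(b[:cut] + a[cut:])
        # mutation: invert one random bit in a fraction of the new members
        for s in random.sample(NR, max(1, int(mut_rate * len(NR)))):
            s[random.randrange(length)] ^= 1
        P = NR
    return min(P, key=F)
```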
Step 7. Based on the solution obtained by the improved GA optimisation (Step 6), a further round of feature selection is performed next with the Pareto optimisation algorithm.
In order to obtain the solution S_k, the following multi-objective optimisation problem is considered:
min/max F(S_k) = (F_1(S_k), F_2(S_k), …, F_n(S_k))
In the Pareto-optimisation-based feature selection, two new optimisation objective functions are established using the Fisher linear discriminant criterion:
$F_1(S_k)=\frac{1}{N_w}\sum_{r=1}^{m}\sum_{c=1}^{m}(S_k-m_w)^2$ and $F_2(S_k)=\frac{1}{l}\sum_{i=1}^{l}\Big(\frac{1}{N_b}\sum_{r=1}^{m}\sum_{c=1}^{m}(S_i-m_b)^2\Big)$    (6)
and
$m_w=\frac{1}{N_w}\sum_{i=1}^{N_w}M_i$, $m_b=\frac{1}{N_b}\sum_{j=1}^{N_b}M_j$    (7)
where M_i is a solution obtained by the GA from one smile class and N_w is the number of such solutions, M_j is a solution obtained by the GA from the other classes and N_b is the number of those solutions, and l is the number of smile classes.
From the objective functions given above it can be seen that F_1(S_k) and F_2(S_k) correspond respectively to reducing the within-class gap and enlarging the between-class gap.
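A small Python sketch of Eqs. (6) and (7), assuming the solutions are m × m binary block matrices; `within_class` holds the GA solutions M_i from one smile class, `between_class` the solutions M_j from the other classes, and `class_reps` one representative solution S_i for each of the l classes (the variable names are illustrative):

```python
import numpy as np

def objectives(Sk, within_class, between_class, class_reps):
    """Return (F1, F2) for a candidate solution Sk, following Eqs. (6)-(7)."""
    mw = np.mean(np.stack(within_class), axis=0)    # within-class mean, Eq. (7)
    mb = np.mean(np.stack(between_class), axis=0)   # between-class mean, Eq. (7)

    # F1: squared deviation of Sk from the within-class mean, scaled by 1/Nw (minimise)
    F1 = np.sum((Sk - mw) ** 2) / len(within_class)

    # F2: average over the l classes of each representative's squared deviation
    # from the between-class mean, scaled by 1/Nb (maximise)
    F2 = np.mean([np.sum((Si - mb) ** 2) / len(between_class) for Si in class_reps])
    return F1, F2
```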
Using SPEA (the Strength Pareto Evolutionary Algorithm), the two-objective optimisation problem given by the formulas above can be solved. The flow of the Pareto-optimisation-based feature selection is given in Algorithm 2.
Algorithm 2: Feature selection algorithm based on the Pareto optimisation algorithm
Input:
A population of size k, P(S1, S2, …, Sk)
The objective functions F1, F2 of each solution, based on formulas (6) and (7)
Maximum number of iterations, H
Output:
Populations P and P'
Begin
Step 1: Generate the initial population P and create an empty external non-dominated set P';
Step 2: Copy the non-dominated solutions in P into the external set P';
Step 3: Delete the solutions in P' that are dominated by other members of P';
Step 4: If the number of solutions in P' > N'
Trim P' with a clustering method;
end
Step 5: Calculate the fitness of each solution in P and P';
Step 6: Select Sm members from P + P' by uniform-sampling statistical selection;
Step 7: Apply crossover to the Sm members;
Step 8: Apply mutation to the Sm members;
Step 9: If the number of iterations ≤ H
Continue from Step 2;
else
Stop;
End
Step 8. After the optimal features have been selected, they are classified into the four smile classes, namely the professional smile, the passive smile, the active smile and the Duchenne smile. The random forest classifier method, which can effectively improve the precision of facial smile classification, is given next.
A random forest is an ensemble classifier; in essence it is a set of tree classifiers whose base classifiers are unpruned classification decision trees built with the classification and regression tree (CART) algorithm, and whose output is determined by a simple majority vote.
The Gini index is the splitting criterion of the classification and regression trees in the random forest; it is calculated as follows:
$\mathrm{Gini}(S) = 1 - \sum_{i=1}^{m_{try}} P_i^2$    (8)
where P_i denotes the probability that class Y_i occurs in the sample set S.
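A minimal Python sketch of this impurity measure, treating the sample set S as a list of class labels and computing each P_i as the relative class frequency:

```python
from collections import Counter

def gini(labels):
    """Gini index of a sample set given as a list of class labels, per Eq. (8)."""
    total = len(labels)
    if total == 0:
        return 0.0
    return 1.0 - sum((count / total) ** 2 for count in Counter(labels).values())

# Example: a pure node scores 0, an evenly mixed two-class node scores 0.5.
# gini(["professional"] * 10)                      -> 0.0
# gini(["professional"] * 5 + ["duchenne"] * 5)    -> 0.5
```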
The random-forest-based face feature classification algorithm is given in Algorithm 3.
Algorithm 3: Face feature classification algorithm based on random forest
Input:
A training set (X, Y), where X are the features and Y the classes
Number of classes, c
Size of the original training set, N
Feature dimensionality of each training sample, M
Number of classification and regression trees, T
Feature dimensionality used at each node, mtry (mtry ≤ M)
Minimum number of samples on a node, s (stop condition)
Output:
The feature on which each node split is based
Begin
Step 1: Build the i-th classification and regression tree;
Step 2: i = 1;
Step 3: Draw N samples with replacement from the original training set (X, Y) to form a new training set S, which serves as the root node of the i-th tree, and start training from the root node;
Step 4: If the current node satisfies the stop condition s
The current node is a leaf node;
else
Randomly draw mtry features from the M-dimensional features as candidate features, calculate the Gini index of the current node for the mtry candidate features, and split on the feature with the smallest Gini index into a left child node and a right child node;
endif
Compute the other nodes of the i-th tree;
Step 5: if i ≠ T
i = i + 1; continue from Step 3;
else
Stop;
End
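In practice, scikit-learn's RandomForestClassifier already implements this scheme (bootstrap samples, unpruned CART trees, mtry candidate features per split, Gini criterion, majority voting), so a hedged stand-in for Algorithm 3 could look as follows; X and y are assumed to be the selected feature vectors and the four smile-class labels from the earlier steps:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def classify_smiles(X, y, T=100, mtry="sqrt", s=1):
    """Train a random forest on the selected features and report test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(
        n_estimators=T,          # number of classification and regression trees
        max_features=mtry,       # features drawn per node (mtry <= M)
        min_samples_leaf=s,      # minimum number of samples on a node
        criterion="gini",        # Gini-index split criterion of Eq. (8)
        random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)
```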
Finally, the facial smile recognition algorithm based on uniform-LGBP feature extraction and on GA- and Pareto-optimisation-based feature selection is given:
Algorithm 4: Facial smile recognition algorithm
Input:
The training pictures of all smile classes
The salient face feature blocks
Other parameter settings
Output:
The optimal solution representing each smile class
Begin
Step 1: Initialise the population P(S1, S2, …, Sk) (k = 1, 2, …, n);
Step 2: Apply the GA (Algorithm 1) to the initial population P;
Step 3: Pick the optimal solution Si from the GA;
Step 4: Divide the solution Si into two parts, Ui and Li;
Step 5: Fix Ui and apply the GA (Algorithm 1) to Li; fix Li and apply the GA (Algorithm 1) to Ui;
Step 6: Form a new Si from the new Ui and Li and copy it into P;
Step 7: Choose the optimal feature result P from the GA;
Step 8: Evaluate each solution in P with formulas (6) and (7) to obtain F1 and F2;
Step 9: Apply the Pareto optimisation algorithm (Algorithm 2) to the solutions in P;
Step 10: Return the Pareto optimal solutions;
Step 11: Classify the selected features with the random forest classifier (Algorithm 3);
End
Step 9. Finally, a database is selected to train the algorithm of the invention until it converges. Many open platforms currently provide facial smile databases, such as CK+, eNTERFACE and MMI. One database is selected to train and test the algorithm of the invention, and a good classification model is obtained from the final test results.
It is not difficult to see that the present invention effectively and successfully transplants facial expression recognition technology into facial smile recognition technology, with the following advantages and positive effects: the present invention builds on traditional LGBP and uniform-LGBP facial expression recognition and uses the fitness function of a GA together with the objective functions of a new Pareto optimisation algorithm, so that the smile class can be judged more accurately. The present invention makes an active, exploratory attempt in the field of facial smile recognition and judgement.
Of course, the above description does not limit the present invention, and the present invention is not limited to the above examples. Changes, modifications, additions or substitutions made by those of ordinary skill in the art within the essential scope of the present invention also fall within the protection scope of the present invention.

Claims (9)

1. A smile type determination method based on facial expression recognition, comprising the following steps:
(1) establish the salient-feature standard parameters of the professional smile, the passive smile, the active smile and the Duchenne smile, and determine the comparison relation between the standard parameters and the parameters to be identified;
(2) pre-process the facial smile picture of the person;
(3) construct a point distribution shape model for the pre-processed facial smile picture, perform feature extraction based on uniform LGBP, and establish salient picture features;
(4) perform a first round of feature selection on the picture features with a genetic algorithm to obtain an elite population;
(5) take the resulting elite population as the reference value of each smile class's features, establish two new objective functions, one within-class and one between-class, the goal being to minimise the within-class function and maximise the between-class function, and optimise them with a Pareto optimisation algorithm;
(6) classify the face features: after the optimal features have been chosen, classify them with the random forest method, completing the recognition and judgement of the smile class.
2. The smile type determination method based on facial expression recognition according to claim 1, characterised in that: the professional smile presents a smiling mouth shape with contraction of the muscles around the mouth; the passive smile presents simultaneous contraction of the cheekbone and periocular muscles, with the eyes narrowed; the active smile presents cheekbone contraction and outward contraction of the periocular muscles; the Duchenne smile presents cheekbone contraction, outward contraction of the periocular muscles and obvious crow's feet at the corners of the eyes.
3. The smile type determination method based on facial expression recognition according to claim 1, characterised in that: the pre-processing in step (2) comprises face recognition, noise processing and face alignment.
4. The smile type determination method based on facial expression recognition according to claim 1, characterised in that: the uniform-LGBP-based feature extraction in step (3) is performed as follows: first apply a Gabor filtering transform to the picture; then apply the LBP operator to all pixels of the picture to extract a histogram as the face feature, i.e. take the value of the central pixel as a threshold and set a pixel in the neighbourhood to 1 if its value is greater than the threshold and to 0 otherwise; after this binarisation, 2^Q binary patterns are obtained; a uniform pattern is then defined over the descriptors formed by the binarisation: a pattern with at most two transitions from 0 to 1 or from 1 to 0 is a uniform pattern, and any other pattern is non-uniform; with this definition, the 2^Q original binary patterns are reduced to Q² − Q + 2.
5. The smile type determination method based on facial expression recognition according to claim 1, characterised in that: the establishment of salient picture features based on uniform LGBP in step (3) is performed as follows: for a given picture of n × n pixels, first divide it into m × m face feature blocks; each face feature block is itself a picture of l × l pixels, and uniform-LGBP-based feature extraction is applied to each l × l picture; in order to pick out the salient face features, a threshold 0.2q is set, where q is the maximum intensity value in the uniform LGBP; then the maximum intensity value of each pixel in the l × l picture is compared with this threshold, and if the intensity value of the pixel is greater than or equal to the threshold the corresponding pixel is regarded as salient; for each face feature block, if four or more points are regarded as salient, the face feature block is set to 1, otherwise it is set to 0.
6. The smile type determination method based on facial expression recognition according to claim 1, characterised in that: step (4) is specifically: first randomly initialise the population, then calculate the accuracy of each population member, divide the members into four classes according to accuracy, and give the fitness function for evaluating each class of population member; finally, optimise with the genetic algorithm to obtain the elite population.
7. The smile type determination method based on facial expression recognition according to claim 6, characterised in that: in the fitness function, α is the proportion correctly classified into its class, ε, ρ1 and ρ2 are parameters, a_rc is the state of a face feature block, and m is the number of face feature blocks.
8. The smile type determination method based on facial expression recognition according to claim 1, characterised in that: the within-class objective function in step (5) is $F_1(S_k)=\frac{1}{N_w}\sum_{r=1}^{m}\sum_{c=1}^{m}(S_k-m_w)^2$, and the between-class objective function in step (5) is $F_2(S_k)=\frac{1}{l}\sum_{i=1}^{l}\Big(\frac{1}{N_b}\sum_{r=1}^{m}\sum_{c=1}^{m}(S_i-m_b)^2\Big)$.
9. The smile type determination method based on facial expression recognition according to claim 8, characterised in that: $m_w=\frac{1}{N_w}\sum_{i=1}^{N_w}M_i$ and $m_b=\frac{1}{N_b}\sum_{j=1}^{N_b}M_j$, where $M_i$ is a solution obtained by the genetic algorithm from one smile class, $N_w$ is the number of such solutions, $M_j$ is a solution obtained by the genetic algorithm from the other classes, $N_b$ is the number of those solutions, $l$ is the number of smile classes, and $S_k$ is a population member.
CN201711261098.6A 2017-12-04 2017-12-04 Smile type determination method based on facial expression recognition Pending CN107977630A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711261098.6A CN107977630A (en) Smile type determination method based on facial expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711261098.6A CN107977630A (en) Smile type determination method based on facial expression recognition

Publications (1)

Publication Number Publication Date
CN107977630A true CN107977630A (en) 2018-05-01

Family

ID=62009115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711261098.6A Pending CN107977630A (en) 2017-12-04 2017-12-04 A kind of smile's kind judging method based on character face's Expression Recognition

Country Status (1)

Country Link
CN (1) CN107977630A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858379A (en) * 2019-01-03 2019-06-07 深圳壹账通智能科技有限公司 Smile's sincerity degree detection method, device, storage medium and electronic equipment
CN110532971A (en) * 2019-09-02 2019-12-03 京东方科技集团股份有限公司 Image procossing and device, training method and computer readable storage medium
CN114333024A (en) * 2021-12-31 2022-04-12 郑州工程技术学院 Method, device, equipment and storage medium for recognizing facial expressions of students based on confrontation training network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036255A (en) * 2014-06-21 2014-09-10 电子科技大学 Facial expression recognition method
CN104504365A (en) * 2014-11-24 2015-04-08 闻泰通讯股份有限公司 System and method for smiling face recognition in video sequence
CN105469080A (en) * 2016-01-07 2016-04-06 东华大学 Facial expression recognition method
CN106096641A (en) * 2016-06-07 2016-11-09 南京邮电大学 A kind of multi-modal affective characteristics fusion method based on genetic algorithm
US20170109571A1 (en) * 2010-06-07 2017-04-20 Affectiva, Inc. Image analysis using sub-sectional component evaluation to augment classifier usage

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170109571A1 (en) * 2010-06-07 2017-04-20 Affectiva, Inc. Image analysis using sub-sectional component evaluation to augment classifier usage
CN104036255A (en) * 2014-06-21 2014-09-10 电子科技大学 Facial expression recognition method
CN104504365A (en) * 2014-11-24 2015-04-08 闻泰通讯股份有限公司 System and method for smiling face recognition in video sequence
CN105469080A (en) * 2016-01-07 2016-04-06 东华大学 Facial expression recognition method
CN106096641A (en) * 2016-06-07 2016-11-09 南京邮电大学 A kind of multi-modal affective characteristics fusion method based on genetic algorithm

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858379A (en) * 2019-01-03 2019-06-07 深圳壹账通智能科技有限公司 Smile's sincerity degree detection method, device, storage medium and electronic equipment
CN110532971A (en) * 2019-09-02 2019-12-03 京东方科技集团股份有限公司 Image procossing and device, training method and computer readable storage medium
CN110532971B (en) * 2019-09-02 2023-04-28 京东方科技集团股份有限公司 Image processing apparatus, training method, and computer-readable storage medium
US11961327B2 (en) 2019-09-02 2024-04-16 Boe Technology Group Co., Ltd. Image processing method and device, classifier training method, and readable storage medium
CN114333024A (en) * 2021-12-31 2022-04-12 郑州工程技术学院 Method, device, equipment and storage medium for recognizing facial expressions of students based on confrontation training network
CN114333024B (en) * 2021-12-31 2024-01-26 郑州工程技术学院 Method, device, equipment and storage medium for student facial expression recognition based on countermeasure training network

Similar Documents

Publication Publication Date Title
CN105469080B (en) A kind of facial expression recognizing method
CN109389074B (en) Facial feature point extraction-based expression recognition method
CN104036255B (en) A kind of facial expression recognizing method
CN107977630A (en) Smile type determination method based on facial expression recognition
CN108108760A (en) A kind of fast human face recognition
CN113239839B (en) Expression recognition method based on DCA face feature fusion
Akhand et al. Convolutional Neural Network based Handwritten Bengali and Bengali-English Mixed Numeral Recognition.
Warrell et al. Labelfaces: Parsing facial features by multiclass labeling with an epitome prior
Chen et al. Offline handwritten digits recognition using machine learning
CN106611156A (en) Pedestrian recognition method and system capable of self-adapting to deep space features
Ibarra-Vazquez et al. Brain programming is immune to adversarial attacks: Towards accurate and robust image classification using symbolic learning
Sahlol et al. A proposed OCR algorithm for the recognition of handwritten Arabic characters
Chouchane et al. 3D and 2D face recognition using integral projection curves based depth and intensity images
Fatemifar et al. Particle swarm and pattern search optimisation of an ensemble of face anomaly detectors
Shayegan et al. A new dataset size reduction approach for PCA-based classification in OCR application
Teredesai et al. Issues in evolving GP based classifiers for a pattern recognition task
Zheng et al. Capturing micro deformations from pooling layers for offline signature verification
Gona et al. Multimodal biometric reorganization system using deep learning convolutional neural network
Sahloul et al. OFF-line system for the recognition of handwritten arabic character
Brimblecombe Face detection using neural networks
Zhao et al. A head pose estimation method based on multi-feature fusion
CN111950403A (en) Iris classification method and system, electronic device and storage medium
CN107341485B (en) Face recognition method and device
CN111898473A (en) Driver state real-time monitoring method based on deep learning
Deaney et al. A comparison of facial feature representation methods for automatic facial expression recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination