CN103258204B - Automatic micro-expression recognition method based on Gabor and EOH features - Google Patents


Info

Publication number: CN103258204B (other versions: CN103258204A)
Application number: CN201210041341.4A
Authority: CN (China)
Inventors: Wu Qi (吴奇), Shen Xunbing (申寻兵), Fu Xiaolan (傅小兰)
Applicant/Assignee: Institute of Psychology of CAS
Legal status: Active (granted)
Classification: Image Analysis (AREA)

Abstract

The present invention provides an automatic facial expression recognition method, comprising: step 10), capturing the face region in each frame of a video and preprocessing it; step 20), extracting Gabor features and EOH features from the image of the corresponding face region; step 30), fusing the corresponding features to obtain the final representation of the target video, and obtaining an expression label sequence for each video frame by means of a trained classifier; step 40), scanning the expression label sequence, judging the duration of each expression, and outputting the expression category of each micro-expression obtained.

Description

Automatic micro-expression recognition method based on Gabor and EOH features
Technical field
The present invention relates to expression recognition and image recognition technology, and more particularly to an automatic micro-expression recognition method.
Background technology
The current international political situation is turbulent, and terrorist activity breaks out frequently in many places. Scientists and engineers around the world are striving to find behavioral cues associated with violence and extremism, and are attempting to develop technologies and methods that can detect such behavior.
Micro-expressions are closely tied to the processing of emotional information in humans: they cannot be faked, are not under conscious control, and reveal a person's true feelings and intentions. Micro-expressions have therefore become an effective cue for detecting lies and dangerous intent. The U.S. Department of Defense, the Central Intelligence Agency, the Department of Homeland Security, the Transportation Security Administration and other agencies have even begun to adopt Ekman's training courses, using micro-expressions in counter-terrorism work.
However, people's understanding of the nature of micro-expressions, and the practical application of micro-expressions, remain extremely limited. This is mainly because research on micro-expression expression, the foundation of micro-expression research, is still in its infancy. Because the duration of a micro-expression is extremely short, untrained observers find it difficult to identify micro-expressions correctly. Although researchers have proposed the Micro Expression Training Tool (METT), the research of Frank, Herbasz, Sinuk, Keller and Nolan found that even after METT training, subjects' performance in recognizing real micro-expressions remained very poor (accuracy only around 40%). At present, if micro-expression researchers and practitioners want to identify micro-expressions accurately, they must use the Facial Action Coding System (FACS) to code, frame by frame, any video that may contain micro-expressions. However, FACS coding training is time-consuming — coders generally need about 100 hours of training to reach adequate proficiency — and FACS coding itself is also very slow: coding a 10-minute video takes at least 10 hours.
In the study of Porter et al., only a low-speed camera (30 frames/second) was used to make short recordings of subjects' facial expressions, yet the video to be analyzed already amounted to 300,000 frames. Even so, because very few micro-expressions were captured, the data could only be described statistically; the amount of data was too small for inferential statistical analysis. If high-speed cameras were used to meet the demands of statistical analysis, the number of video frames requiring manual coding would multiply. If facial expressions were recorded continuously (for example, during a hearing), the amount of video requiring manual coding would grow even faster. The slow speed of coding and the mass of video data requiring manual processing make micro-expression coding an arduous, time-consuming task, and make both research and practice hard to sustain; this is the biggest obstacle currently facing micro-expression research and micro-expression applications. Whether for micro-expression research or for its application, there is therefore a great demand for automated micro-expression analysis tools, and developing such tools is the top priority for micro-expression researchers today.
By combining the techniques and methods of computer vision with the relevant findings of psychology, it should be possible to develop an automated micro-expression recognition system. In fact, two independent research groups, one in the United States and one in Japan, have already explored this problem.
Polikovsky, Kameda and Ohta proposed in 2009 a method that uses 3D gradient orientation histograms to extract facial motion information. They divided the face into 12 regions of interest (ROI) and, by tracking each region in the video, extracted a 3D gradient orientation histogram representation of each region. In addition, they collected a 'micro-expression database': subjects were asked to produce facial expressions at the lowest possible intensity and the highest possible speed, and the process was filmed with a high-speed camera. K-means clustering results showed that, on this database, the 3D gradient orientation histograms could effectively characterize the facial action units of different facial regions and the phases they were in.
Between 2009 and 2011, Shreve et al. proposed a new method for segmenting expression videos. They used optical flow to compute optical strain. They then divided the face into 8 regions of interest and masked out the eyes, nose and mouth with a T-shaped marker. By comparing the optical strain computed in each region of interest against a threshold obtained by training, they segmented videos into macro-expressions (i.e., ordinary expressions) and micro-expressions.
Both lines of research are still far from the goal of automatically identifying micro-expressions; they are merely preliminary studies. The method of Polikovsky et al. can only distinguish the different AUs of different facial regions and the phases they are in; it cannot directly measure expressions or their durations, yet measuring expression duration is an indispensable function for an automatic micro-expression recognition system. Moreover, the way micro-expressions were induced in that study is highly problematic: the micro-expressions were imitated on request, at the lowest possible intensity, whereas researchers currently hold that micro-expressions are difficult to fake and that what distinguishes a micro-expression from an ordinary expression is its very short duration, which has no relation to intensity. The method Shreve et al. proposed in 2009 could not segment videos containing both micro-expressions and macro-expressions; in 2011 they were the first to segment both within a unified framework, but the results showed a very low capture rate for micro-expressions (50%) and a very high false alarm rate (50%). The micro-expression dataset collected by Shreve et al. also has a serious problem: it was collected by asking subjects to imitate micro-expressions. Crucially, the method of Shreve et al. is an expression video segmentation method; it cannot identify the category of the expressions contained in a video.
Summary of the invention
To overcome the above drawbacks of the prior art, the present invention proposes an automatic micro-expression recognition method.
According to one aspect of the present invention, an automatic micro-expression recognition method is proposed, comprising: step 10), capturing the face region in each frame of a video and preprocessing it; step 20), extracting Gabor features and EOH features from the image of the corresponding face region; step 30), fusing the corresponding features to obtain the final representation of the target video, and obtaining an expression label sequence for each video frame by means of a trained classifier; step 40), scanning the expression label sequence, judging the duration of each expression, and outputting the expression category of each micro-expression obtained.
The method of the present application improves recognition speed and accuracy, and improves the speed and efficiency of training, bringing micro-expression recognition within reach of practical application.
Accompanying drawing explanation
Fig. 1 is a flow chart of the micro-expression recognition method according to the present invention.
Specific structures and devices are labelled in the drawings so that the embodiments of the invention can be clearly realized, but this is for illustration only and is not intended to limit the invention to those particular structures, devices and environments. According to specific needs, those of ordinary skill in the art may adjust or modify these devices and environments; such adjustments or modifications are still included within the scope of the appended claims.
Detailed description of the invention
The automatic micro-expression recognition method provided by the present invention is described in detail below through specific embodiments, with reference to the accompanying drawings. In the following description, several different aspects of the invention are described; however, those of ordinary skill in the art may practice the invention using only some or all of its structures or processes. Specific numbers, configurations and orders are set forth for clarity of explanation, but it will be apparent that the invention may also be practiced without these specific details. In other cases, well-known features are not elaborated, so as not to obscure the invention.
In general, when performing micro-expression recognition, both the Gabor features and the EOH features of the micro-expression are extracted. After the combined Gabor+EOH representation of the target is obtained, however, the features are not all used directly for training; they are first filtered. Experiments show that filtering is effective only when applied to the EOH features. The filtering method is as follows: with the initial training samples, after the sample weights have been initialized, Gentleboost (MI+DWT) is run for one round to obtain the weighted error rate of a weak classifier on each feature dimension; if this error rate is above a preset threshold a, the feature is discarded, otherwise it is retained. After the filtered features are obtained, feature selection is carried out by PreAvgGentleboost_EOHfilter. This method greatly reduces the computational complexity of the algorithm (cutting the number of features by about 74%) while preserving the classification accuracy of the classifier, making it feasible to use Gabor+EOH features for micro-expression recognition.
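The per-dimension filtering step described above can be sketched as follows. This is a minimal sketch, assuming decision stumps as the per-feature weak classifiers and a uniform initial weight distribution; the threshold value passed as `a` and all function names are illustrative, not taken from the patent:

```python
import numpy as np

def stump_weighted_error(x, y, w):
    """Weighted error of the best threshold stump on one feature column.

    x: (n,) feature values; y: (n,) labels in {-1, +1}; w: (n,) sample weights
    summing to 1. Searches candidate thresholds and both polarities.
    """
    best = 1.0
    for theta in np.unique(x):
        for polarity in (1, -1):
            pred = polarity * np.where(x > theta, 1, -1)
            err = float(np.sum(w * (pred != y)))
            best = min(best, err)
    return best

def filter_features(X, y, a=0.45):
    """Keep only feature columns whose best one-round weak classifier has a
    weighted error rate below the preset threshold `a` (features with error
    above the threshold are discarded, as described above)."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # initialized (uniform) sample weights
    return [j for j in range(X.shape[1])
            if stump_weighted_error(X[:, j], y, w) < a]
```

On a toy set where only the first column separates the classes, only that column survives the filter.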
When PreAvgGentleboost_EOHfilter performs feature selection, it considers both the weighted error and the weighted squared error of each weak classifier, rather than the weighted squared error alone. PreAvgGentleboost_EOHfilter sorts by this composite index, arranging the weak classifiers obtained during the training loop in ascending order of the index, and weak classifiers are then selected from this list according to the selection method described earlier. This solves the problem that classifier accuracy is damaged when Gabor+EOH features are used directly. SVM is then combined with PreAvgGentleboost_EOHfilter and trained to obtain the new classifier.
Automatic micro-expression recognition based on Gabor features
The human visual system can be regarded, approximately, as a hierarchical structure in which visual information is processed layer by layer, with the primary visual cortex as the foundation of the hierarchy. It can therefore be inferred that the response of the human primary visual cortex to static expression images must contain the information needed to distinguish different expressions; that is, the way the human primary visual cortex represents static expression images is an effective representation for recognizing expressions. If a computer vision system can simulate or imitate the primary visual cortex's representation of expressions and analyze it in some way, then, given the high processing speed of computer systems, the resulting computer vision system would in effect recognize expressions the way a human scanning them would, but at extremely high speed. Such a system could identify every expression of every person in every frame of a video, and after analyzing the recognition results, the computer system should be able to determine, with a certain accuracy, whether the video contains micro-expressions and what each of those micro-expressions is.
The wavelet transform is a mathematical tool developed over the past decade or so. Among wavelets, the Gabor wavelet transform is widely used in pattern recognition as a feature extraction and image representation method. The Gabor wavelet transform achieves optimal localization in the spatial domain and the frequency domain simultaneously, and can therefore describe well the local structural information corresponding to spatial frequency (also called scale), spatial position, and orientation selectivity. Researchers have shown that the filter responses of most primary visual cortex simple cells in mammals (including humans) can be simulated by a family of similar two-dimensional Gabor wavelets.
Face detection and preprocessing
To improve the degree of automation of the system, the face detection algorithm proposed by Kienzle, Bakir, Franz and Scholkopf is used to capture faces in the video automatically. This algorithm is a cascade-based face detection system built from 5 layers of support vector machines (SVMs), in which the first-layer SVM is improved with a rank-deficient method to increase the recognition speed of the system. On a Pentium 4 at 2.8 GHz, the algorithm can detect faces in video in real time. Its test data on the MIT-CMU face database show very high accuracy (hit rate 93.1%, false alarm rate 0.034%).
The image preprocessing procedure is as follows: after automatic face detection is completed, the captured face image is first converted to an 8-bit grayscale image; then the image is normalized to 48 × 48 pixels by bilinear interpolation. In this application, both Embodiment 1 and Embodiment 2 use this method for image preprocessing.
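This preprocessing step can be sketched in NumPy as follows. The patent specifies only "8-bit grayscale" and "48 × 48 by bilinear interpolation"; the Rec. 601 luminance weights used for the grayscale conversion are an assumption:

```python
import numpy as np

def to_gray8(rgb):
    """Convert an RGB image (H, W, 3) to 8-bit grayscale.
    Uses the common Rec. 601 weights (an assumption; the patent does not
    specify the conversion)."""
    g = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(np.rint(g), 0, 255).astype(np.uint8)

def resize_bilinear(img, out_h=48, out_w=48):
    """Bilinear resize of a 2-D image to out_h x out_w."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy
```

In practice a library routine (e.g. an image-processing package's resize) would replace the hand-rolled interpolation; it is written out here only to make the normalization explicit.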
Gabor representation of facial expressions
A bank of two-dimensional Gabor filters is applied to the captured face image to extract features and form the Gabor representation of the facial expression. A two-dimensional Gabor filter is a plane wave with a Gaussian envelope; it can accurately extract local features of an image and has a certain robustness to displacement, deformation, rotation, scale change and illumination change. Its Gabor kernel is defined as:
\Psi_{u,v}(z) = \frac{\|k_{u,v}\|^2}{\sigma^2} \, e^{-\|k_{u,v}\|^2 \|z\|^2 / 2\sigma^2} \left( e^{i k_{u,v} z} - e^{-\sigma^2/2} \right). \quad (1)
where k_{u,v} = k_v e^{i\varphi_u} is the wave vector, with k_v = k_{max}/f^v; k_{max} is the peak frequency and f is the spacing factor between Gabor kernels in the frequency domain; \varphi_u \in [0, \pi) specifies the orientation of the Gabor filter; z = (x, y) denotes position; \|\cdot\| denotes the modulus; and the parameters u and v control the orientation and the scale of the Gabor filter, respectively.
The first term of formula (1) determines the oscillatory part of the filter, while the second term compensates for the DC component, eliminating the filter's sensitivity to global changes in image brightness. The parameter \sigma controls the number of oscillations under the envelope function. When extracting Gabor features from an expression image, one generally selects several scales and orientations to form a bank of Gabor kernels, then convolves the kernels with the given image to produce the image's Gabor features. The design of the Gabor filter bank depends on the choice of the parameters u, v, k_{max}, f and \sigma. In this application, a Gabor filter bank with 9 scales and 8 orientations is used to extract features from the captured face image. The specific parameters are as follows:
\sigma = 2\pi, \quad k_{max} = \pi/2, \quad f = \sqrt{2}, \quad v = 0, \ldots, 8, \quad u = 0, \ldots, 7. \quad (2)
The Gabor representation |O(z)|_{u,v} of the facial expression image I(z) is produced by convolving the image with the Gabor filters, i.e.:

|O(z)|_{u,v} = \sqrt{\mathrm{Re}(O(z))_{u,v}^2 + \mathrm{Im}(O(z))_{u,v}^2},

\mathrm{Re}(O(z))_{u,v} = I(z) * \mathrm{Re}(\Psi_{u,v}(z)),

\mathrm{Im}(O(z))_{u,v} = I(z) * \mathrm{Im}(\Psi_{u,v}(z)). \quad (3)
Each |O(z)|_{u,v} is converted into a column vector O_{u,v}; the O_{u,v} are then concatenated in order, scale by scale and orientation by orientation, to form a single column vector G(I), i.e.:

G(I) = O = (O_{0,0}, O_{0,1}, \ldots, O_{8,7}). \quad (4)
Because there are multiple orientations and scales, the resulting Gabor feature vector has as many as 48 × 48 × 9 × 8 = 165888 dimensions, with very high redundancy.
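The construction of equations (1)-(4) can be sketched as follows. This is a sketch under the stated parameters, with the convolution of equation (3) performed in the frequency domain for speed; the kernel window size, the orientation formula \varphi_u = \pi u/8, and f = \sqrt{2} are assumptions where the patent text is ambiguous:

```python
import numpy as np

def gabor_kernel(u, v, size=48, sigma=2 * np.pi, kmax=np.pi / 2, f=np.sqrt(2)):
    """Gabor kernel of equation (1) for orientation u (0..7) and scale v (0..8)."""
    k = (kmax / f**v) * np.exp(1j * np.pi * u / 8)  # k_{u,v} = k_v e^{i phi_u}
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    z2 = x**2 + y**2
    kn2 = np.abs(k)**2
    # plane-wave term minus the DC-compensation term of equation (1)
    wave = np.exp(1j * (k.real * x + k.imag * y)) - np.exp(-sigma**2 / 2)
    return (kn2 / sigma**2) * np.exp(-kn2 * z2 / (2 * sigma**2)) * wave

def gabor_features(img):
    """Response magnitudes |O(z)|_{u,v} over 9 scales x 8 orientations,
    concatenated into one vector (equations (3)-(4))."""
    F = np.fft.fft2(img)
    feats = []
    for v in range(9):            # scales, outer loop
        for u in range(8):        # orientations, inner loop
            kern = gabor_kernel(u, v, size=img.shape[0])
            resp = np.fft.ifft2(F * np.fft.fft2(kern, s=img.shape))
            feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)
```

For a 48 × 48 input this yields exactly the 165888-dimensional representation described above.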
Embodiment 1: facial expression recognition combining Gabor features and Gentleboost
To analyze the micro-expressions in a video, the ordinary expressions in static images must first be recognized. Embodiment 1 therefore first evaluates the expression recognition performance of the algorithm on the Cohn-Kanade facial expression database (also known as CK).
This database contains videos of the 6 basic expressions performed by 100 university students (age range: 18-30 years). 65% of the models are female, 15% are Black, and 3% are Asian or Latino. The videos were captured as follows: the models performed the prescribed expression actions as instructed, and a camera recorded the subjects' frontal facial expressions as an analog S-Video signal. The videos were finally stored as 640 × 480 8-bit grayscale images.
In Embodiment 1, the neutral expression and one or two peak-phase images of the 6 basic expressions were selected from 374 video segments of 97 of the models to form the data set; 518 expression images were finally chosen. To test the generalization performance of the algorithm, 10-fold cross validation is used to assess its performance.
Gentleboost is initially adopted as the classifier. Gentleboost is a kind of committee machine: it applies the principle of divide and conquer to solve complex learning tasks. It simulates the process of human group decision-making, that is, it finds 'experts' who have acquired knowledge about the problem, and forms the final description of the problem by letting these experts vote. By changing the distribution of the samples, Gentleboost converts a weak learning model into a strong learning model. The Gentleboost algorithm is therefore well suited to a complex classification problem such as expression recognition, and performs outstandingly. Because the Gentleboost algorithm performs implicit feature selection during training, and the Gabor representation used here has very high feature dimensionality, Gentleboost will select a small number of features for the final classification, and the resulting classifier should have very high classification speed and good generalization performance.
Gentleboost is chosen over the more common Adaboost because researchers have shown that Gentleboost converges faster and is more accurate for object recognition. In Embodiment 1, a Gentleboost improved with mutual information (MI) and dynamical weight trimming (DWT) is used for expression recognition.
Because the Gabor features used are highly redundant, mutual information is used here to remove the information redundancy between the selected weak classifiers, rejecting ineffective weak classifiers and boosting the performance of Gentleboost.
Mutual information is a measure of the dependency between two random variables. The mutual information of two random variables X and Y can be defined as:
I(X; Y) = H(X) + H(Y) - H(X, Y) = H(X) - H(X|Y) = H(Y) - H(Y|X). \quad (5)
If the random variable is discrete, the entropy H(X) can be defined as:
H(X) = -\sum_x p(x) \lg p(x). \quad (6)
Suppose that at round T+1 of training, T weak classifiers \{h_{v(1)}, h_{v(2)}, \ldots, h_{v(T)}\} have already been chosen. The function computing the maximum MI between a candidate classifier and the chosen classifiers can then be defined as:

R(h_j) = \max_{t=1,2,\ldots,T} \mathrm{MI}(h_j, h_{v(t)}), \quad (7)

where \mathrm{MI}(h_j, h_{v(t)}) is the MI between the random variables h_j and h_{v(t)}.
By comparing R(h_j) with a predetermined threshold TMI (threshold of mutual information), the algorithm judges whether a weak classifier should be discarded during training, ensuring that the information of a newly trained weak classifier is not already contained in the set of chosen weak classifiers. If R(h_j) < TMI, the weak classifier is considered effective and is added to the weak classifier set; otherwise it is discarded and the algorithm selects a new weak classifier from the candidate set, until a qualified weak classifier is found. If no weak classifier qualifies, training terminates. Throughout this process, besides the MI of the weak classifiers, the algorithm also takes their performance into account: in each round of selection, the algorithm selects the weak classifier that has the minimum weighted-squared error on the training set and satisfies the MI condition.
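The entropy and MI computations of equations (5)-(7), and the TMI screening test built on them, can be sketched as follows. MI is estimated empirically from the weak classifiers' outputs on the training set; `TMI` and all names are illustrative:

```python
from collections import Counter
import math

def entropy(xs):
    """Empirical entropy H(X) = -sum p(x) lg p(x) of equation (6)
    (base-2 logarithm), estimated from a discrete sample."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) of equation (5), on paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def is_effective(candidate, selected, TMI):
    """Accept a candidate weak classifier (given its output sequence) only if
    R(h_j) = max_t MI(h_j, h_v(t)) of equation (7) stays below TMI."""
    if not selected:
        return True
    return max(mutual_information(candidate, s) for s in selected) < TMI
```

A candidate whose outputs duplicate an already-selected classifier is rejected, while an independent one passes the screen.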
Dynamical weight trimming
After MI is added, selecting weak classifiers according to MI increases the number of loop iterations of the algorithm, so the training time becomes even longer than before the modification, making the training speed problem more serious. Since training is slow, as noted above, DWT is extended to Gentleboost here; the method is still referred to as DWT.
In each round of iteration, the samples in the training set are filtered according to their sample weights: if a sample's weight w_i < t(\beta), the sample is rejected from the current training set, where t(\beta) is the \beta-th percentile of the sample weight distribution of the previous cycle.
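This trimming rule is a one-liner on the weight vector. A minimal sketch, assuming linear interpolation for the percentile (the patent does not specify how t(\beta) is computed):

```python
import numpy as np

def dwt_trim(weights, beta=0.1):
    """Dynamical weight trimming: keep only samples whose weight is at least
    t(beta), the beta-th percentile of the previous round's weight
    distribution. Returns a boolean mask over the training set."""
    t = np.percentile(weights, beta * 100)  # t(beta)
    return weights >= t                     # samples with w_i < t(beta) are rejected
```

Each boosting round would then fit its weak classifier only on `X[mask]`, `y[mask]`, which is where the training-speed gain comes from.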
Weak classifiers and the multi-class classification problem
A regression stump is used as the weak classifier of Gentleboost, i.e.:

h_t(x) = a \, \delta(x_f > \theta) + b, \quad (8)

where x_f is the f-th feature of the feature vector x, \theta is a threshold, \delta is the indicator function, and a and b are regression parameters, with

b = \frac{\sum_i w_i y_i \, \delta(x_f \le \theta)}{\sum_i w_i \, \delta(x_f \le \theta)}, \quad (9)

a + b = \frac{\sum_i w_i y_i \, \delta(x_f > \theta)}{\sum_i w_i \, \delta(x_f > \theta)}. \quad (10)

In formulas (9) and (10), y_i is the class label (\pm 1) of sample i.
The one-versus-all method for the Gentleboost algorithm can be regarded as the Adaboost.MH algorithm under particular conditions. For a multi-class classification problem such as expression recognition, the one-versus-all method is applied by taking the samples of one class as the positive class in training and the samples of all other classes as the negative class, and finally determining the class of a sample by voting, i.e.:

F(x) = \arg\max_l H(x, l), \quad l = 1, 2, \ldots, K, \quad (11)

where l is the class label of the sample, H(x, l) is the discriminant function of the two-class classifier, and K is the number of classes to be classified.
Experiments compared the performance of the Gentleboost algorithm with MI and DWT each switched on and off. Table 1 lists the accuracy and training time of each algorithm.

Table 1. Expression recognition accuracy and training time of the Gentleboost variants
Among the Gentleboost variants, the Gentleboost based on MI and DWT has the highest accuracy (88.61%), but such accuracy is still not satisfactory for micro-expression recognition.
The results show that the MI+DWT combination improves the training speed of Gentleboost: training runs 1.44 times as fast as the original Gentleboost, saving nearly half of the training time. This effect is contributed by DWT. Notably, the acceleration effect of DWT should vary with the problem: the simpler the classification boundary, the better the acceleration. Using a higher threshold brings faster training but also reduces the accuracy of the classifier, so when using DWT, researchers must strike a balance between speed and accuracy. The parameter used here is set to \beta = 0.1, which should ensure that the Gentleboost algorithm has both high performance and fast training speed on most classification problems.
Embodiment 2: facial expression recognition combining Gabor features and GentleSVM
SVM is a kind of general feed-forward neural network; by minimizing structural risk, it builds a hyperplane as the decision surface so that the margin between positive and negative examples is maximized. For expression recognition, SVM is similar in nature to Gentleboost; both deliver top performance in this field.
Gentleboost is used to select the Gabor features, and SVM is then trained on the new representation formed by feature selection to produce the final classifier. In this study, this combination is called GentleSVM.
The data set used in Embodiment 2 is the same as in Embodiment 1. To test the generalization performance of the algorithm, 10-fold cross validation is used for evaluation.
After Gentleboost training is complete, the Gabor features used by the weak classifiers selected by Gentleboost are reconnected, after removing redundant features (rejecting duplicates), to form a new Gabor representation of the facial expression, and the SVM is trained on this new representation. SVM is a binary classifier; a multi-class pattern recognition problem such as expression recognition can be handled by decomposing it into a combination of several two-class problems. Embodiment 2 uses the one-versus-all approach to realize multi-class classification with SVM. Specifically:
F(x) = \arg\max_i \frac{(w^i)^T \phi(x) + b^i}{\|w^i\|}, \quad i = 1, 2, \ldots, K. \quad (12)
In the formula above, i is the sample class label, w^i is the weight vector, b^i is the bias, \phi(x) is the feature vector, and K is the number of classes.
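The decision rule of equation (12) is a one-line computation once the K linear classifiers are trained. A sketch only: training the per-class SVMs themselves is omitted:

```python
import numpy as np

def ova_predict(phi_x, W, b):
    """One-versus-all decision of equation (12): each row of W is a weight
    vector w^i with bias b[i]; the winner is the class with the largest
    normalized signed distance (w^i . phi(x) + b^i) / ||w^i||."""
    scores = (W @ phi_x + b) / np.linalg.norm(W, axis=1)
    return int(np.argmax(scores))
```

Normalizing by \|w^i\| makes the K per-class scores comparable as distances to their respective hyperplanes, which is why equation (12) divides by the weight-vector norm.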
Considering that when Adaboost is used for feature selection, the performance of linear SVM is very close to that of nonlinear SVM, while the classification speed of linear SVM is far faster than nonlinear SVM, Embodiment 2 uses a 1-norm soft margin linear SVM as the classifier.
Embodiment 2 compares the expression recognition performance of the various GentleSVM combinations with that of the original SVM. The results are shown in Tables 2 and 3.
Table 2. Expression recognition accuracy and training time of the GentleSVM variants
Table 3. Recognition accuracy for each expression category
As shown in Table 2, the accuracy of every GentleSVM combination exceeds both the original SVM and the improved Gentleboost of Embodiment 1. Moreover, every GentleSVM combination completed all 10 training runs within 20 seconds, meaning that after Gentleboost training is complete, the system need pay only a small additional time cost to improve performance further. Among all GentleSVM combinations, the MI+DWT combination has the highest expression recognition accuracy (92.66%). Taken together with the results of Embodiment 1, these results show that the MI+DWT combination effectively improves the performance of Gentleboost for both classification and feature selection.
As shown in Table 3, all GentleSVM combinations, as well as SVM, can recognize surprise, disgust, happiness and neutral expressions with very high accuracy. Sadness, anger and fear are harder for these algorithms to recognize; however, except for SVM, all GentleSVM combinations achieve over 80% accuracy on these 3 expressions. For a fully automatic expression recognition system, such accuracy is within an acceptable range.
Embodiment 3: Automatic micro-expression recognition based on Gabor features
On the basis of Embodiment 2, Embodiment 3 builds automatic micro-expression recognition based on Gabor features.
To improve the generalization performance of the system, a new training set was collected for training. It contains 1135 expression pictures in total: 518 from the aforementioned CK database, 146 from the MMI expression database, 185 from the FG-NET database, 143 from the JAFFE database, 63 from the STOIC database, 54 from the POFA database, and 26 downloaded from the Internet.
METT contains 56 micro-expression videos expressing 7 basic expressions: sadness, surprise, anger, disgust, fear, happiness and contempt. The 48 videos expressing the first 6 basic expressions were selected as the test set for evaluating system performance. The performance metrics for system evaluation are: micro-expression capture accuracy, i.e. whether the system correctly reports whether a video contains micro-expressions and how many; and micro-expression recognition accuracy, i.e. whether the system, besides correctly capturing a micro-expression, also correctly identifies its expression category.
To handle differences in illumination among face images from different expression databases, an extra preprocessing step is added to the micro-expression recognition system. Besides the preprocessing of Embodiments 1 and 2, the image gray levels are additionally normalized so that the gray-level mean is 0 and the gray-level variance is 1.
To recognize micro-expressions, each frame of the video is first classified by the algorithm obtained in Embodiment 2, yielding an expression label output for the video. This label output is then scanned to confirm the turning points of expression changes in the video. The system then measures the duration of each expression from the obtained turning points and the video frame rate. For example, suppose a video has a frame rate of 30 frames/second and its expression label output is 111222; then the expression turning points are the first frame, the midpoint of the 1→2 transition, and the last frame, so the durations of expressions 1 and 2 are both 1/10 s.
The system then extracts micro-expressions and their labels according to the definition of a micro-expression: an expression lasting between 1/25 s and 1/5 s is considered a micro-expression, while an expression lasting longer than 1/5 s is identified as an ordinary expression and discarded.
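The scanning and duration-filtering steps above can be sketched as follows. This simplified version measures each segment's duration as (segment length)/fps and omits the midpoint refinement at label transitions described in the example:

```python
def find_micro_expressions(labels, fps, min_dur=1/25, max_dur=1/5):
    """Scan a per-frame expression-label sequence, measure how long each
    expression lasts from the frame rate, and keep only segments whose
    duration falls in the micro-expression range [1/25 s, 1/5 s]."""
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        # turning point: end of sequence or label change
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], (i - start) / fps))
            start = i
    return [(lab, dur) for lab, dur in segments if min_dur <= dur <= max_dur]
```

With the worked example from the text (30 fps, labels 111222), both segments last 0.1 s and are therefore reported as micro-expressions, while a 1-second segment would be discarded as an ordinary expression.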
At present, the universality of the contempt expression is considered weaker than that of the other 6 basic expressions (Frank et al., 2009), so the system is currently trained to recognize 6 micro-expressions (sadness, surprise, anger, disgust, fear, happiness).
This embodiment compares performance under different training sets and preprocessing methods. If the training set is CK and no extra preprocessing is performed, the system's recognition accuracy on METT is quite low, only about 50%, which is even lower than the scores of untrained human subjects on the METT pre-test. When the new training set is used instead, the system's capture accuracy rises to 91.67% and the recognition accuracy reaches 81.25%. If the aforementioned additional preprocessing is also performed, the capture accuracy rises to 95.83% and the recognition accuracy reaches 85.42%. This is better than the post-test scores of trained human subjects on METT (about 70%-80%; see Frank et al., 2009; Russell et al., 2006). These results indicate the importance of a large, representative training sample for automatic micro-expression recognition. Considering that with the new training set the additional preprocessing contributes only about 4%, the results suggest that sample representativeness may matter more for micro-expression recognition, while complex preprocessing methods may bring only limited performance gains.
Table 4: System recognition performance on micro-expressions of different ethnic groups
Automatic micro-expression recognition based on Gabor+EOH fused features
The above embodiments have yielded an automatic micro-expression recognition system that can preliminarily analyze micro-expressions in video. But as the preceding results show, the accuracy of this system can still be improved. One important problem is that of small samples. Since most researchers in automatic expression recognition are currently based abroad, the expression databases they use mostly contain Caucasian faces; it is therefore not easy to obtain a sufficiently large, representative training set.
How can the accuracy of the algorithm be improved when only a small sample is available? This is the problem faced here. One way to address it is to change the representation used by the system. A good representation makes the between-class distance between different concepts large and the within-class distance small; at the same time, a good representation also has a higher tolerance for error. In short, a good representation can describe the classification boundary well even with small samples.
EOH (Edge Orientation Histogram) can provide such properties. EOH extracts information about image edges and their directions, i.e. information related to image shape; it is insensitive to global illumination changes and to small-scale translation and rotation. This feature extraction method also has a physiological basis. For example, researchers have found that the receptive fields of cells in the primary visual cortex show significant directional sensitivity: a single neuron responds only to stimuli within its receptive field, and often responds strongly only to information in a certain frequency range, such as edges, line segments or stripes of a specific direction. EOH can be regarded as a simulation of this property. As mentioned earlier, using visual-information representations with a biological basis can help researchers construct better computer vision systems. Meanwhile, research has also demonstrated that EOH can successfully extract features for distinguishing smiles from sneers in faces. More importantly, researchers have found that EOH can provide outstanding performance even when the training set is relatively small.
Extraction of Gabor+EOH fused features
Gabor features are extracted as previously described. EOH feature extraction and feature fusion proceed as follows:
First, edge extraction is performed on the image, here using the Sobel operator. The gradient of image I at point (x, y) is obtained by convolving the image with the Sobel operator of the corresponding direction, i.e.:
Gx(x, y) = Sobelx * I(x, y)
Gy(x, y) = Sobely * I(x, y)    (13)

where:

Sobelx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],  Sobely = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    (14)
The edge strength of image I at point (x, y) is then:

G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2)    (15)

To filter out noise in the edge information, we set:

G'(x, y) = G(x, y) if G(x, y) ≥ T, and 0 if G(x, y) < T    (16)

The edge direction of image I at point (x, y) is defined as:

θ(x, y) = arctan(Gy(x, y) / Gx(x, y))    (17)
If the edge directions are divided into K intervals, the edge orientation histogram of image I at point (x, y) can be computed as:

ψk(x, y) = G'(x, y) if θ(x, y) falls in the k-th interval, and 0 otherwise    (18)

Then the edge integral image can be computed as:

Ek(R) = Σ(x, y)∈R ψk(x, y)    (19)

where R is any region in image I.
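Equations (13)-(19) can be sketched in code as follows. This illustrative version computes the histogram over the whole image rather than over sub-regions via integral images, and uses a signed-orientation variant of equation (17); the function name and parameter defaults are ours, not the patent's:

```python
import numpy as np

def eoh_features(img, K=4, T=100.0):
    """Edge Orientation Histogram sketch: Sobel gradients, thresholded
    edge strength, and K signed-orientation bins over [-pi, pi)."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)

    def conv3(im, k):                     # valid 3x3 filtering via slicing
        h, w = im.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * im[i:i + h - 2, j:j + w - 2]
        return out

    gx, gy = conv3(img, sobel_x), conv3(img, sobel_y)
    g = np.hypot(gx, gy)                  # edge strength, eq. (15)
    g[g < T] = 0.0                        # noise threshold, eq. (16)
    theta = np.arctan2(gy, gx)            # signed variant of eq. (17)
    bins = np.clip(((theta + np.pi) / (2 * np.pi) * K).astype(int), 0, K - 1)
    return np.array([g[bins == k].sum() for k in range(K)])   # E_k, eq. (19)
```

For a vertical step edge, the gradient is horizontal (orientation 0), so all edge energy falls into the single bin containing angle 0; the full method would additionally scale the face to 24 × 24 pixels and form the ratio features of equation (20).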
Since the dimensionality of the EOH features is very high, the captured face is scaled to 24 × 24 pixels before EOH feature extraction. In addition, only ratio features are used, i.e.:

Ak1,k2(R) = (Ek1(R) + ε) / (Ek2(R) + ε)    (20)

where ε is a smoothing factor.
After Gabor and EOH feature extraction is completed, feature fusion is performed: the EOH features are converted into a column vector in the same way as the Gabor features and concatenated with the Gabor features to form a new column vector, i.e.:

F = {fGabor, fEOH}    (21)
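The fusion of equation (21) is plain vector concatenation; a minimal sketch:

```python
import numpy as np

def fuse_features(f_gabor, f_eoh):
    """Feature fusion per eq. (21): flatten each representation into a
    vector and concatenate them into one new vector."""
    return np.concatenate([np.ravel(f_gabor), np.ravel(f_eoh)])
```

The fused vector simply stacks all Gabor responses followed by all EOH ratios, so its length is the sum of the two feature dimensionalities.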
All subsequent feature extraction uses the method described above.
Embodiment 4: Expression recognition based on Gabor+EOH fused features
Embodiment 4 uses the same data set, face detection method and preprocessing method as Embodiments 1 and 2. To test the generalization performance of the algorithms, 10-fold cross-validation is used to evaluate algorithm performance.
The classifiers in Embodiment 4 are the Gentleboost obtained in Embodiment 1 and the GentleSVM obtained in Embodiment 2, to examine system performance with different classifiers.
The algorithm is modified to remove the error-rate check condition, i.e. DWT becomes WT: the sample-filtering threshold does not change, and there is no retraining when the error rate is too high. The experimental results show that with Gabor+EOH fused features the recognition accuracy of Gentleboost is quite low, only about 50%, even lower than with Gabor features alone. This shows that, with Gabor+EOH fused features, Gentleboost is unsuitable for recognition.
The recognition results of GentleSVM are less affected by Gentleboost feature selection (accuracy 85%-88%), mainly because GentleSVM fuses the features selected by Gentleboost: the features selected for the different expressions are concatenated and redundant features among them are removed. However, the recognition accuracy of both GentleSVM variants is still below that obtained with Gabor features alone. This result again shows that Gentleboost fails with mixed Gabor and EOH features.
Embodiment 5: Gentleboost based on average error rank
In Embodiment 5, Gentleboost is further improved using an average-error-rank method; the resulting algorithm is called AvgGentleboost. This method solves the failure of the Gentleboost algorithm under the mixed Gabor and EOH representation. Suppose that in iteration t, Gentleboost obtains N weak classifiers h_{i,t}, i = 1, 2, ..., N on a training set A_t; let E_{i,t} be the weighted error of h_{i,t} on A_t and ε_{i,t} its weighted squared error, and let E_t = {E_{i,t} | i = 1, 2, ..., N} and ε_t = {ε_{i,t} | i = 1, 2, ..., N}. Then:

r_{1,i,t} = Rank(E_t, E_{i,t}),
r_{2,i,t} = Rank(ε_t, ε_{i,t}).    (22)

The function Rank(x, y) is a ranking function: it returns the position of y after the elements of x are sorted in ascending order.

The average error rank of weak classifier h_{i,t} is defined as:

R_{i,t} = (r_{1,i,t} + r_{2,i,t}) / 2.    (23)

In iteration t, Gentleboost then selects the weak classifier with the minimum average error rank, i.e.:

j = argmin_i R_{i,t}, i = 1, 2, ..., N,
h_t = h_{j,t}    (24)

If MI is also used in training, Gentleboost selects the first weak classifier satisfying the MI filtering condition from the candidate weak-classifier list sorted by average error rank in ascending order.
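The average-error-rank selection of equations (22)-(24) can be sketched as follows. The double-argsort rank computation and its tie-breaking by index are implementation choices of this sketch, not specified by the method:

```python
import numpy as np

def select_by_avg_error_rank(weighted_errors, weighted_sq_errors):
    """Pick the weak-classifier index with the lowest average error rank.
    Rank(x, y) is computed as the 0-based position of y in the ascending
    sort of x, via the double-argsort idiom."""
    e = np.asarray(weighted_errors)
    s = np.asarray(weighted_sq_errors)
    r1 = np.argsort(np.argsort(e))        # rank of each E_{i,t} within E_t
    r2 = np.argsort(np.argsort(s))        # rank of each eps_{i,t} within eps_t
    avg_rank = (r1 + r2) / 2.0            # R_{i,t}, eq. (23)
    return int(np.argmin(avg_rank))       # j = argmin_i R_{i,t}, eq. (24)
```

For example, with weighted errors [0.3, 0.1, 0.2] and weighted squared errors [0.2, 0.3, 0.1], classifier 2 has ranks 1 and 0, the lowest average, and is selected even though neither of its individual errors is the minimum.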
Embodiment 5 uses the same data set, face detection method and preprocessing method as Embodiment 4. To test the generalization performance of the algorithms, 10-fold cross-validation is used to evaluate algorithm performance.
Embodiment 5 uses Gabor+EOH features and Gabor features respectively, to compare system performance under the different feature extraction methods.
The classifiers in Embodiment 5 are the newly obtained AvgGentleboost and AvgGentleSVM, for comparison with Embodiment 4 and to examine system performance with different classifiers.
With the Gabor representation, the performance of AvgGentleboost and Gentleboost is very close, almost indistinguishable, except that when fewer features are used AvgGentleboost performs better than Gentleboost. This result shows that within the range where Gentleboost's error-minimization method applies, the performance of AvgGentleboost is comparable to Gentleboost. When the representation is changed to the Gabor+EOH mixture, AvgGentleboost significantly outperforms Gentleboost, demonstrating the effectiveness of AvgGentleboost. With AvgGentleboost as the classifier, the Gabor+EOH representation performs better than Gabor features alone; at the same time, the AvgGentleboost+Gabor+EOH combination also outperforms the Gentleboost+Gabor combination, again demonstrating the effectiveness of the new algorithm for the expression recognition problem.
Table 5.1: Best expression recognition accuracy obtained by the various Gentleboost algorithm combinations
Table 5.2: Best expression recognition accuracy obtained by the various GentleSVM algorithm combinations
A similar comparison holds for AvgGentleSVM and GentleSVM under the various conditions. With the Gabor representation, the performance of AvgGentleSVM and GentleSVM is nearly identical. When the representation used by AvgGentleSVM is changed to Gabor+EOH, the accuracy of the algorithm reaches up to 94.01%, better than when using Gabor alone; the difference between the classifiers now appears mainly when more features are used. With the Gabor+EOH representation, AvgGentleSVM outperforms GentleSVM, showing the effectiveness of AvgGentleboost for feature selection.
From the above results a conclusion can be drawn: relative to Gentleboost, AvgGentleboost improves performance in both classification and feature selection. However, the experimental results show that, relative to the algorithm obtained in Embodiment 2, the accuracy improvement of the algorithm obtained in Embodiment 5 is still limited, leaving room for further improvement.
Embodiment 6: AvgGentleboost based on feature pre-filtering
In practical applications, slow training speed severely limits the scope of parameter optimization and the size of the training set that can actually be used. To actually use the Gabor+EOH mixed representation for micro-expression recognition, the slow training of the Gentleboost algorithm must be resolved. Training is slow because, when training weak classifiers, Gentleboost performs an exhaustive search to select the optimal weak classifier. The time complexity of this training method is O(NMT log N), where N is the number of samples in the training set, M is the feature dimensionality of a sample, and T is the number of weak classifiers to be obtained. Clearly, when the value of any one of N, M or T is large, the training time becomes very long; if two or all three of them are large, the training speed may become unacceptable.
Since the previously proposed method for decomposing the variable M cannot be combined with the present AvgGentleboost, this study proposes a feature pre-filtering method to decompose M and thereby further accelerate the training process. Combining this method with DWT optimizes both N and M simultaneously, so the algorithm can train quickly even with large samples and high dimensionality.
Before the Gentleboost iterations are performed, the sample features are pre-filtered: using the Gentleboost algorithm, one weak classifier is trained for each feature on all training samples. If the error rate of this weak classifier on the training set is greater than or equal to a preset threshold α, the feature corresponding to this weak classifier is discarded; otherwise it is retained. Finally, all retained features are concatenated to form a new representation and a new training set, on which AvgGentleboost is then trained. We call this combination PreAvgGentleboost.
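The pre-filtering step can be sketched as follows. The threshold-stump search here is a simplified stand-in for the per-feature weak classifiers, and the data shapes in the usage example are hypothetical:

```python
import numpy as np

def prefilter_features(X, y, alpha):
    """Feature pre-filtering sketch: train one threshold stump per feature
    on all samples and keep the feature only if the stump's training error
    is below alpha (features at or above alpha are discarded).
    Labels y are in {-1, +1}."""
    X = np.asarray(X, float)
    y = np.asarray(y)
    keep = []
    for j in range(X.shape[1]):
        col = X[:, j]
        best_err = 1.0
        for thr in np.unique(col):        # exhaustive stump search on feature j
            for sign in (1, -1):
                pred = np.where(sign * (col - thr) >= 0, 1, -1)
                best_err = min(best_err, float(np.mean(pred != y)))
        if best_err < alpha:
            keep.append(j)
    return keep
```

A constant or uninformative feature cannot beat the trivial all-negative strategy, so with the α = m/N choice described below it is filtered out, while a feature that separates the classes survives.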
Embodiment 6 uses the same data set, face detection method and preprocessing method as Embodiment 5. To test the generalization performance of the algorithms, 10-fold cross-validation is used to evaluate algorithm performance.
In Embodiment 6 we use Gabor+EOH fused features, but with different parameter settings for feature extraction, to examine system performance under different parameter settings.
The classifiers in Embodiment 6 are AvgGentleboost and AvgGentleSVM, together with the newly obtained PreAvgGentleboost and PreAvgGentleSVM, to verify the effectiveness of feature pre-filtering.
The EOH feature extraction parameters used in Embodiment 6 are:

K = {4, 6, 8}, θ ∈ [-π, π) or [-π/2, π/2), T = 100, ε = 0.01    (25)
Note that in the training set, the number of positive samples of any given expression class is far smaller than the number of negative samples, so there is a trivial strategy for this two-class classification: assign all samples to the negative class. If the accuracy of a weak classifier obtained during feature pre-filtering is less than or equal to the accuracy of this strategy, the feature used by that weak classifier presumably cannot distinguish the positive class from the negative class well. Therefore, in this embodiment the feature pre-filtering parameter is α = m/N, where m is the number of positive samples of a given expression class and N is the number of training samples in the training set.
Preliminary experimental results show that the training speed of AvgGentleboost is greatly improved: under the conditions K = 4, θ ∈ [-π, π), the training time is shortened from nearly 20 days to 10 hours. The best expression recognition accuracy obtained by the feature pre-filtering algorithm under different parameters is shown in Table 6.1:
Table 6.1: Best expression recognition accuracy obtained by the feature pre-filtering algorithm under different parameters
The table above shows that the feature pre-filtering method can greatly reduce the number of features used in training, filtering out about 97% of the features and thereby greatly reducing the computational complexity of training. However, the experimental results show that directly filtering the Gabor+EOH features in this way, while reducing training complexity, damages the classification precision of both the Gentleboost and GentleSVM classifiers to varying degrees. The accuracy of both classifiers drops below the level seen before the EOH features were added. Conditions using EOH with signed edge gradients are better than those with unsigned edge gradients. For Gentleboost, 6-direction EOH is the optimal parameter, with 8 directions giving results similar to 4. For GentleSVM, the more directions the EOH features have, the more its performance is harmed; with 8-direction EOH, its performance is even below that of Gentleboost with identical parameters.
Such a result presumably arises because the feature pre-filtering algorithm removes too many features, greatly reducing the redundancy of the features. It is known that Boosting algorithms tend to perform better when processing samples with highly redundant features. Therefore, directly filtering the Gabor+EOH features can destroy the working conditions required by the feature selection algorithm used here.
How, then, to keep the high redundancy of the Gabor+EOH features while still accelerating training without harming classifier performance? Since the Gabor features are 160,000-dimensional, a dimensionality still within the acceptable range, the overly long training time and the resulting curse of dimensionality arise mainly because the EOH feature dimensionality is too high. Combining the results of the table above with those of Embodiment 5, we believe that adding many finer EOH features cannot substantially improve the performance of the algorithm, likely because many EOH features are invalid for expression recognition. Therefore, by retaining the complete Gabor features and filtering only the EOH features, we may achieve our goal of accelerating training while preserving classification precision.
Based on the experimental results in the tables above, we selected some of the more significant parameters and carried out a new experiment in which only the EOH features are filtered; we call the resulting algorithms PreAvgGentleboost_EOHfilter and PreAvgGentleSVM_EOHfilter.
Table 6.2: Best expression recognition accuracy obtained by PreAvgGentleboost_EOHfilter under different parameters
From the results of Table 6.2, we can see that when using 6-direction signed EOH features, the performance of PreAvgGentleboost_EOHfilter is optimal, a large improvement over the results in Table 6.1. Under the two parameter conditions of Table 6.2, the highest recognition accuracy of this algorithm is slightly below the highest accuracy of 91.31% obtained by PreAvgGentleboost under the Gabor+EOH condition in Embodiment 5, but its computational complexity is greatly reduced.
Table 6.3: Best expression recognition accuracy obtained by PreAvgGentleSVM_EOHfilter under different parameters
From the results of Table 6.3, we can see that when using 4-direction signed EOH features, the performance of PreAvgGentleSVM_EOHfilter is optimal, far better than the 6-direction signed condition. Under both parameter conditions of Table 6.3, PreAvgGentleSVM_EOHfilter outperforms PreAvgGentleboost_EOHfilter. Thus, across all six of our embodiments, GentleSVM has consistently outperformed Gentleboost, so we have reason to believe that for expression recognition the performance of GentleSVM is better than that of Gentleboost.
In Table 6.3, the best result (94.4%) obtained with the (4, 2) parameters is similar to the best accuracy obtained by AvgGentleSVM under the Gabor+EOH condition in Embodiment 5 (94.01%). However, with the same parameters, the computational complexity of PreAvgGentleSVM_EOHfilter training is reduced by 74.08% relative to AvgGentleSVM, and the training time is greatly shortened, which makes constructing a micro-expression recognition system based on Gabor+EOH features feasible. In Embodiment 7, we use this algorithm to actually construct a micro-expression recognition system based on Gabor+EOH features.
Embodiment 7: Automatic micro-expression recognition based on Gabor+EOH fused features
On the basis of Embodiment 6, an automatic micro-expression recognition system based on Gabor+EOH fused features is built. Embodiment 7 uses two training sets. One is identical to the new training set obtained in Embodiment 3, referred to in this embodiment as training set 1. The other, referred to as training set 2, is formed by adding to training set 1 another 977 Asian facial expression images extracted from existing expression databases. By using different training sets, Embodiment 7 examines the effect of training-set sample size on system performance.
The test set used in Embodiment 7 is the same as in Embodiment 3.
The face detection algorithm of Embodiment 7 is the same as in Embodiment 6. The Gabor feature extraction preprocessing is the same as in Embodiment 3, the EOH feature extraction preprocessing the same as in Embodiment 6, and the feature extraction method the same as in Embodiment 5. The classifiers in Embodiment 7 are GentleSVM, AvgGentleSVM, and the newly obtained PreAvgGentleSVM_EOHfilter, to examine system performance with different classifiers. The experimental comparison results of Embodiment 7 are shown in the tables below.
Table 7.1: Comparison of the micro-expression capture performance of the different algorithms under different parameters
Table 7.2: Comparison of the micro-expression recognition performance of the different algorithms under different parameters
From Table 7.1, we can see that for micro-expression capture, with the smaller training set 1, the capture accuracy of AvgGentleSVM-Gabor is better than GentleSVM-Gabor, and PreAvgGentleSVM_EOHfilter reaches the ceiling of 100% when more features are used, better than AvgGentleSVM-Gabor. When the training set is replaced by training set 2 with more samples, the capture accuracy of GentleSVM-Gabor and AvgGentleSVM-Gabor improves further, both reaching the ceiling of 100% under certain parameters, while PreAvgGentleSVM_EOHfilter now reaches 100% under every parameter setting. This result shows that PreAvgGentleSVM_EOHfilter captures micro-expressions better than the other two algorithms, and that for micro-expression capture the modifications we made are effective.
From Table 7.2, we can see that for micro-expression recognition, with the smaller training set 1, the recognition accuracy of AvgGentleSVM-Gabor is basically indistinguishable from GentleSVM-Gabor, being only slightly better when more features are used. Under these conditions PreAvgGentleSVM_EOHfilter reaches up to 87.5% recognition accuracy, better than AvgGentleSVM-Gabor.
When the training set is replaced by training set 2 with more samples, the recognition accuracy of GentleSVM-Gabor and AvgGentleSVM-Gabor improves further, but only to a limited extent, by about 2%, and the performance of the two algorithms is then indistinguishable. However, after switching to the training set with more samples, the recognition accuracy of PreAvgGentleSVM_EOHfilter improves by a larger margin, reaching 91.67%, far better than the average level of trained human subjects (about 80%).
Finally, it should be noted that the above embodiments only describe the technical scheme of the present invention and do not limit it; the present invention can be extended to other modifications, variations, applications and embodiments, and all such modifications, variations, applications and embodiments are considered to be within the spirit and teaching scope of the present invention.

Claims (11)

1. An automatic micro-expression recognition method, comprising:
Step 10), capturing a face region in a frame image of a video, and performing preprocessing;
Step 20), extracting Gabor features and EOH features from the image of the corresponding face region;
Step 30), fusing the corresponding features to obtain a preliminary representation of the target video; obtaining a final representation of the target video through training and producing a classifier, said final representation being obtained by a Gentleboost algorithm based on average error rank; training an SVM feed-forward neural network on the final representation to form the classifier; and
Step 40), obtaining an expression label sequence of a video image to be tested, scanning this expression label sequence, judging the duration of each expression, obtaining micro-expressions, and outputting the expression category according to said classifier.
2. The method according to claim 1, wherein step 10) comprises: first converting the captured face image into an 8-bit gray-level image; and normalizing the image to 48 × 48 pixels by bilinear interpolation.
3. The method according to claim 1, wherein step 20) comprises:
performing feature extraction on the captured face image using a two-dimensional Gabor filter bank to form a Gabor representation of the facial expression; a two-dimensional Gabor filter is a group of plane waves with a Gaussian envelope, and its Gabor kernel is defined as:

ψ_{u,v}(z) = (||k_{u,v}||^2 / σ^2) exp(-||k_{u,v}||^2 ||z||^2 / (2σ^2)) [exp(i k_{u,v} · z) - exp(-σ^2 / 2)]

where k_{u,v} = k_v e^{iφ_u}, k_v = k_max / f^v, k_max is the peak frequency, f is the spacing factor between Gabor kernels in the frequency domain, φ_u ∈ [0, π), z = (x, y) denotes position, the parameters u and v respectively denote the direction and scale of the Gabor filter, and the parameter σ controls the number of oscillations of the envelope function.
4. The method according to claim 3, wherein step 20) further comprises: performing edge extraction on the image, here using a Sobel operator, wherein the gradient of the image at a given point is obtained by convolving the image with the Sobel operator of the corresponding direction, and the captured face is scaled to 24 × 24 pixels.
5. The method according to claim 4, wherein step 20) further comprises: after Gabor and EOH feature extraction is completed, converting the EOH features into a column vector in the same way as the Gabor features, concatenating it with the Gabor features to form a new column vector, and performing feature fusion.
6. The method according to claim 1, wherein step 30) further comprises:
using a mutual-information method to remove information redundancy among the selected weak classifiers, so as to reject invalid weak classifiers; and
at each round of selection, selecting the weak classifier that has the minimum weighted squared error on the training set and satisfies the mutual-information condition; and
filtering the samples in the training set according to their sample weights.
7. The method according to claim 1, wherein the final representation in step 30) consists of the features remaining after redundant features are removed.
8. The method according to claim 5, wherein the training step of step 30) further comprises: using the mutual-information method in said training, the final representation being the first representation that satisfies the mutual-information filtering condition in the candidate representation list sorted by average error rank in ascending order.
9. The method according to claim 5, wherein the preprocessing of step 10) further comprises normalizing the gray-level mean of the image to 0 and the gray-level variance to 1.
The method according to claim 1, wherein step 40) comprises:
step 410), identifying each frame image in the video to obtain the expression labels for that video;
step 420), scanning the output expression label sequence to locate the turning points at which the expression changes;
step 430), measuring the duration of each expression from the obtained turning points and the frame rate of the video, and extracting the micro-expression and its label according to the definition of micro-expression.
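Steps 410)–430) amount to a run-length scan over the per-frame labels; a minimal sketch follows. The 0.5 s ceiling is one commonly used micro-expression duration bound, and the `'neutral'` label and frame rate are illustrative assumptions, not values fixed by the patent.

```python
def extract_micro_expressions(labels, fps, max_duration=0.5):
    """Scan a per-frame expression label sequence, find turning points, and
    keep the runs short enough to count as micro-expressions.

    Returns (label, start_frame, end_frame, duration_seconds) tuples.
    """
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        # A turning point is where the label changes (or the sequence ends)
        if i == len(labels) or labels[i] != labels[start]:
            duration = (i - start) / fps
            if labels[start] != 'neutral' and duration <= max_duration:
                segments.append((labels[start], start, i, duration))
            start = i
    return segments
```

For example, 8 consecutive `'fear'` frames at 60 fps last about 0.13 s and are kept, while a 2-second `'happy'` run is rejected as an ordinary expression.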
11. The method according to claim 1, wherein the training in step 30) further comprises:
pre-filtering the sample features before performing the GentleBoost training iterations;
using the GentleBoost algorithm to train one weak classifier for each feature on all training samples; if the error rate of a weak classifier on the training set is greater than or equal to a preset threshold, the feature corresponding to that weak classifier is discarded;
concatenating all retained features to form a new representation, and thereby a new training set;
for the combined Gabor and EOH features, filtering only the EOH features.
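The pre-filtering step of claim 11 can be sketched as follows, with a decision stump standing in for the per-feature weak classifier; the 0.45 error threshold is illustrative, not the patent's preset value.

```python
import numpy as np

def prefilter_features(X, y, max_error=0.45):
    """Drop feature columns whose best single-threshold stump cannot beat
    max_error on the training set; return the filtered matrix and kept indices.

    y must be in {-1, +1}. A stump stands in for the claim's per-feature weak
    classifier; the threshold value is an illustrative choice.
    """
    keep = []
    for j in range(X.shape[1]):
        col = X[:, j]
        best_err = 1.0
        for thr in np.unique(col):                 # exhaustive threshold search
            for sign in (1, -1):
                pred = np.where(col >= thr, sign, -sign)
                best_err = min(best_err, np.mean(pred != y))
        if best_err < max_error:                   # feature survives pre-filtering
            keep.append(j)
    return X[:, keep], keep
```

For a composite Gabor + EOH vector, the claim applies this filter to the EOH columns only, leaving the Gabor columns untouched.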
CN201210041341.4A 2012-02-21 2012-02-21 A kind of automatic micro-expression recognition method based on Gabor and EOH feature Active CN103258204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210041341.4A CN103258204B (en) 2012-02-21 2012-02-21 A kind of automatic micro-expression recognition method based on Gabor and EOH feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210041341.4A CN103258204B (en) 2012-02-21 2012-02-21 A kind of automatic micro-expression recognition method based on Gabor and EOH feature

Publications (2)

Publication Number Publication Date
CN103258204A CN103258204A (en) 2013-08-21
CN103258204B true CN103258204B (en) 2016-12-14

Family

ID=48962108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210041341.4A Active CN103258204B (en) 2012-02-21 2012-02-21 A kind of automatic micro-expression recognition method based on Gabor and EOH feature

Country Status (1)

Country Link
CN (1) CN103258204B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440509B (en) * 2013-08-28 2016-05-11 山东大学 A kind of effective micro-expression automatic identifying method
CN104123562A (en) * 2014-07-10 2014-10-29 华东师范大学 Human body face expression identification method and device based on binocular vision
CN104298981A (en) * 2014-11-05 2015-01-21 河北工业大学 Face microexpression recognition method
CN104820495B (en) * 2015-04-29 2019-06-21 姜振宇 A kind of micro- Expression Recognition of exception and based reminding method and device
CN105047194B (en) * 2015-07-28 2018-08-28 东南大学 A kind of self study sound spectrograph feature extracting method for speech emotion recognition
CN105184285A (en) * 2015-10-20 2015-12-23 南京信息工程大学 Posture-spanning colored image facial expression recognition of direct push type migration group sparse discriminant analysis
CN105913038B (en) * 2016-04-26 2019-08-06 哈尔滨工业大学深圳研究生院 A kind of micro- expression recognition method of dynamic based on video
CN106127131A (en) * 2016-06-17 2016-11-16 安徽理工大学 A kind of face identification method based on mutual information printenv locality preserving projections algorithm
CN106228145B (en) * 2016-08-04 2019-09-03 网易有道信息技术(北京)有限公司 A kind of facial expression recognizing method and equipment
CN106485227A (en) * 2016-10-14 2017-03-08 深圳市唯特视科技有限公司 A kind of Evaluation of Customer Satisfaction Degree method that is expressed one's feelings based on video face
CN106485228A (en) * 2016-10-14 2017-03-08 深圳市唯特视科技有限公司 A kind of children's interest point analysis method that is expressed one's feelings based on video face
CN106570474B (en) * 2016-10-27 2019-06-28 南京邮电大学 A kind of micro- expression recognition method based on 3D convolutional neural networks
CN108229268A (en) * 2016-12-31 2018-06-29 商汤集团有限公司 Expression Recognition and convolutional neural networks model training method, device and electronic equipment
CN106934382A (en) * 2017-03-20 2017-07-07 许彐琼 Method and apparatus based on video identification terror suspect
CN107242876B (en) * 2017-04-20 2020-12-15 合肥工业大学 Computer vision method for mental state
CN106971180B (en) * 2017-05-16 2019-05-07 山东大学 A kind of micro- expression recognition method based on the sparse transfer learning of voice dictionary
CN107578014B (en) * 2017-09-06 2020-11-03 上海寒武纪信息科技有限公司 Information processing apparatus and method
CN107273876B (en) * 2017-07-18 2019-09-10 山东大学 A kind of micro- expression automatic identifying method of ' the macro micro- transformation model of to ' based on deep learning
CN108073888A (en) * 2017-08-07 2018-05-25 中国科学院深圳先进技术研究院 A kind of teaching auxiliary and the teaching auxiliary system using this method
CN108256469A (en) * 2018-01-16 2018-07-06 华中师范大学 facial expression recognition method and device
CN108537160A (en) * 2018-03-30 2018-09-14 平安科技(深圳)有限公司 Risk Identification Method, device, equipment based on micro- expression and medium
CN108734570A (en) * 2018-05-22 2018-11-02 深圳壹账通智能科技有限公司 A kind of Risk Forecast Method, storage medium and server
CN110688874B (en) * 2018-07-04 2022-09-30 杭州海康威视数字技术股份有限公司 Facial expression recognition method and device, readable storage medium and electronic equipment
CN109145837A (en) * 2018-08-28 2019-01-04 厦门理工学院 Face emotion identification method, device, terminal device and storage medium
CN109190564A (en) * 2018-09-05 2019-01-11 厦门集微科技有限公司 A kind of method, apparatus of image analysis, computer storage medium and terminal
CN109271977A (en) * 2018-11-23 2019-01-25 四川长虹电器股份有限公司 The automatic classification based training method, apparatus of bill and automatic classification method, device
CN109934278B (en) * 2019-03-06 2023-06-27 宁夏医科大学 High-dimensionality feature selection method for information gain mixed neighborhood rough set
CN110363066A (en) * 2019-05-23 2019-10-22 闽南师范大学 Utilize the mood automatic identification method of adjustment of Internet of Things and LED light mixing technology
CN112800951B (en) * 2021-01-27 2023-08-08 华南理工大学 Micro-expression recognition method, system, device and medium based on local base characteristics
JP7323248B2 (en) * 2021-07-21 2023-08-08 株式会社ライフクエスト STRESS DETERMINATION DEVICE, STRESS DETERMINATION METHOD, AND PROGRAM
CN118081793B (en) * 2024-04-08 2024-10-18 深圳市麦驰物联股份有限公司 Intelligent accompanying robot with emotion recognition and interaction functions

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163285A (en) * 2011-03-09 2011-08-24 北京航空航天大学 Cross-domain video semantic concept detection method based on active learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163285A (en) * 2011-03-09 2011-08-24 北京航空航天大学 Cross-domain video semantic concept detection method based on active learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
The machine knows what you are hiding: An automatic micro-expression recognition system; Wu Q et al.; Affective Computing and Intelligent Interaction: Lecture Notes in Computer Science; 2011-10-12; vol. 6975; 152-162 *
Multi-feature analysis and system implementation for face detection and recognition; Zhu Huaiyi; China Master's Theses Full-text Database, Information Science and Technology; 2008-06-15 (No. 06); I138-458, 1-83 *
Research on gender classification based on facial features; Sun He; China Master's Theses Full-text Database, Information Science and Technology; 2008-06-15 (No. 06); I138-471, 11-13 *

Also Published As

Publication number Publication date
CN103258204A (en) 2013-08-21

Similar Documents

Publication Publication Date Title
CN103258204B (en) A kind of automatic micro-expression recognition method based on Gabor and EOH feature
CN107145842B (en) Face recognition method combining LBP characteristic graph and convolutional neural network
CN106650806B (en) A kind of cooperating type depth net model methodology for pedestrian detection
CN108596039B (en) Bimodal emotion recognition method and system based on 3D convolutional neural network
CN106599797B (en) A kind of infrared face recognition method based on local parallel neural network
CN100395770C (en) Hand-characteristic mix-together identifying method based on characteristic relation measure
CN107341506A (en) A kind of Image emotional semantic classification method based on the expression of many-sided deep learning
Salman et al. Classification of real and fake human faces using deep learning
CN108875674A (en) A kind of driving behavior recognition methods based on multiple row fusion convolutional neural networks
CN109117744A (en) A kind of twin neural network training method for face verification
CN106650693A (en) Multi-feature fusion identification algorithm used for human face comparison
CN106096535A (en) A kind of face verification method based on bilinearity associating CNN
CN110309861A A kind of multi-modal human activity recognition method based on generative adversarial networks
CN102136024B (en) Biometric feature identification performance assessment and diagnosis optimizing system
CN110197729A (en) Tranquillization state fMRI data classification method and device based on deep learning
CN105426875A (en) Face identification method and attendance system based on deep convolution neural network
CN105005765A (en) Facial expression identification method based on Gabor wavelet and gray-level co-occurrence matrix
CN107844760A (en) Three-dimensional face identification method based on curved surface normal direction component map Neural Networks Representation
CN107145830A (en) Hyperspectral image classification method with depth belief network is strengthened based on spatial information
CN110321862B (en) Pedestrian re-identification method based on compact ternary loss
CN110046550A Pedestrian attribute recognition system and method based on multilayer feature study
CN104318221A (en) Facial expression recognition method based on ELM
CN110070078A (en) A kind of drunk driving detection method and system based on sensor and machine vision
CN103077399B (en) Based on the biological micro-image sorting technique of integrated cascade
CN106127230B (en) Image-recognizing method based on human visual perception

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant