CN105913038B - A video-based dynamic micro-expression recognition method - Google Patents

A video-based dynamic micro-expression recognition method

Info

Publication number
CN105913038B
CN105913038B (application CN201610265428.8A)
Authority
CN
China
Prior art keywords
video
micro
block
expression
sub
Prior art date
Legal status
Expired - Fee Related
Application number
CN201610265428.8A
Other languages
Chinese (zh)
Other versions
CN105913038A (en)
Inventor
马婷 (Ma Ting)
陈梦婷 (Chen Mengting)
王焕焕 (Wang Huanhuan)
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School, Harbin Institute of Technology
Priority to CN201610265428.8A
Publication of CN105913038A
Application granted
Publication of CN105913038B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a video-based dynamic micro-expression recognition method, belonging to the field of dynamic recognition technology. The present invention comprises the following steps: pre-processing the video sequence; computing a certain number of key frames of the pre-processed video sequence, interpolating the sequence, anchored at those frames, into a video of specified length, and performing accurate alignment; dividing the specified-length video into video blocks to obtain video subsets; extracting the dynamic features of the video subsets and computing the video subset weights and the video subset feature weights; and classifying and recognizing the video sequence according to the computed results. The benefits of the invention are that the micro-expression features extracted from face regions carrying more expression information are effectively emphasized, while the micro-expression features extracted from face regions carrying less expression information are weakened; the influence of factors such as uneven illumination, noise and occlusion is reduced, increasing the robustness of the system.

Description

A video-based dynamic micro-expression recognition method
Technical field
The present invention relates to the field of dynamic recognition technology, and more particularly to a video-based dynamic micro-expression recognition method.
Background technique
The most distinctive characteristics of micro-expressions are that they are very short in duration, weak in intensity, and are complex expressions produced involuntarily and beyond conscious control. The generally accepted upper limit of their duration is 1/5 of a second. Because these transient facial expressions are so short and so weak, they are easily missed by the naked eye. To better analyze micro-expressions in real time and reveal people's true emotions, an automatic micro-expression recognition system is urgently needed.
In the field of psychology, research reports on micro-expressions point out that humans are poor at recognizing them: because of their short duration and weak intensity, micro-expressions can hardly be perceived. To address this problem, Ekman developed the Micro Expression Training Tool (METT), but its recognition rate is only about 40%. Even after METT was proposed, Frank et al. reported that it remains extremely difficult for humans to detect micro-expressions in real situations. And since automatic micro-expression recognition is still immature, researchers have had to analyze video frame by frame, an undoubted waste of time and effort.
Cross-disciplinary research between computer science and psychology can well meet the needs of micro-expression recognition, and several independent groups of computer scientists have begun working in this direction. Existing automatic micro-expression recognition methods fall into two categories: strain-model methods and machine-learning methods. In the strain-model approach, Shreve et al. divided the face into sub-regions such as the mouth, cheeks, forehead and eyes, and combined this segmentation of the facial image with optical flow to compute the facial strain in each sub-region; by analyzing the strain patterns computed in the sub-regions, micro-expressions in video are detected.
Among the machine-learning methods, Pfister et al. proposed a framework for recognizing spontaneous micro-expressions based on a temporal interpolation model and multiple kernel learning. It uses temporal interpolation to solve the problem of overly short videos, uses a spatio-temporal local texture descriptor to handle dynamic features, and solves the classification problem with a Support Vector Machine (SVM), Multiple Kernel Learning (MKL) and Random Forests (RF). However, the spatio-temporal local texture descriptor extracts complete local binary patterns along the XY, XT and YT planes of the expression sequence and cannot truly capture the inter-frame dynamic information of the video. Moreover, because it treats the contribution of every part of the face as identical, it ignores the fact that different facial regions carry different amounts of information when expressing emotion: regions such as the eyes, eyebrows and mouth corners carry more expression information, while regions such as the cheeks and forehead carry less.
Polikovsky et al. used a three-dimensional gradient orientation histogram descriptor to represent motion information for micro-expression recognition. Ruiz-Hernandez et al. proposed applying LBP (Local Binary Patterns) coding after a local second-order Gaussian re-parameterization, generating more robust and reliable histograms with which to describe micro-expressions.
Meanwhile, patent document CN104933416A, "Micro-expression sequence feature extraction method based on optical flow field", also concerns micro-expression recognition, but that method has the following shortcomings:
(1) extracting the dense optical flow field between consecutive frames to describe the dynamic information of the video is time-consuming;
(2) the optical flow field is partitioned into a series of spatio-temporal blocks and a principal direction is extracted within each block to characterize the motion pattern of the majority of points in that block, so the subtle micro-expression changes at a minority of facial points are ignored;
(3) the method is easily affected by factors such as uneven illumination, noise and occlusion; its accuracy is low while its computational cost is high;
(4) although the method divides the face into blocks, it treats the amount of micro-expression information conveyed by each facial block as identical, so that facial regions carrying little or no relevant information still influence the final recognition result.
Summary of the invention
To solve the problems in the prior art, the present invention provides a video-based dynamic micro-expression recognition method.
The present invention comprises the following steps:
Step 1: video sequence pre-processing;
Step 2: computing a certain number of key frames of the pre-processed video sequence, and interpolating the sequence, anchored at those frames, into a video of specified length with accurate alignment;
Step 3: dividing the specified-length video into video blocks to obtain video subsets Y1, Y2, …, YM, where M is the number of video blocks;
Step 4: extracting the dynamic features of the video subsets and computing the video subset weight information;
Step 5: classifying and recognizing the video sequence according to the computed results.
The present invention completes the temporal normalization of the video sequence by interpolation, making the video sequence easier to analyze and improving recognition performance to a certain degree.
In a further refinement of the present invention, the pre-processing in step 1 includes color-image graying, histogram equalization, registration using an affine transformation, and size normalization.
In a further refinement of the present invention, in step 2 three frames of the pre-processed video sequence are computed: the micro-expression onset frame, the micro-expression apex frame and the micro-expression offset frame. By interpolating the micro-expression video sequence into a fixed-length micro-expression video sequence anchored at these three specified frames, the recognition effect is improved.
In a further refinement of the present invention, the computation in step 2 uses the 3D gradient projection method.
In a further refinement of the present invention, the concrete implementation of step 4 comprises the following steps:
(1) extracting the dynamic motion information of all video sub-blocks in each video subset;
(2) separately computing the weight of each dimension of the feature vectors of the video sub-blocks in each video subset;
(3) describing the features of the video sub-blocks: multiplying the dynamic motion information of each video sub-block in all video subsets extracted in step (1) by the per-dimension weights computed in step (2) and accumulating the results, yielding the final dynamic-information descriptor of each video sub-block in the video subset;
(4) computing the video sub-block weight vector W = [ω1,ω2,…,ωM]T, where M is the number of video blocks and ωi denotes the ability of the i-th video sub-block to separate different micro-expression classes when the sub-block features are described with the dynamic feature descriptor.
The present invention combines the motion features extracted from dynamic micro-expression video sequences with block weighting and feature weighting, producing a weighted dynamic feature extraction method. This method assigns different weights according to each feature's contribution to recognition, which cancels the influence of noise and weakens the influence of factors such as uneven illumination, increasing the robustness of the algorithm and markedly improving recognition. At the same time, the weighted dynamic feature extraction method blocks the video sequence, making feature matching more accurate in position. In addition, by extracting the motion information of the video sequence, the present invention deepens, to a certain degree, the understanding of the dynamic patterns of micro-expressions.
In a further refinement of the present invention, the implementation of step (1) includes gradient-based methods and optical-flow-based methods.
In a further refinement of the present invention, in step (1) the spatio-temporal gradient descriptor HOG3D is used to extract HOG3D features from all video subsets.
In a further refinement of the present invention, in step (4) the video sub-block weights are computed with a weighting method that strengthens the contribution of local features; for example, computing the weight ωi of each video subset with a variant of the KNN procedure can effectively strengthen the contribution of local features.
In a further refinement of the present invention, the recognition and classification method in step 5 is as follows:
A1: the pre-processed fixed-length video sequences are divided into a training set and a test set; all test video sub-blocks partitioned from the test set and all training video sub-blocks partitioned from the training set are described, and the distance between each test video sub-block and the corresponding sub-blocks of all training videos is computed;
A2: each video block partitioned from each test video in the test set is fuzzily classified with a weighted fuzzy classification method;
A3: the membership of each video block with respect to the corresponding video sub-blocks of all training videos is computed, yielding the classification result of each video block;
A4: the classification results of the individual video blocks are merged, yielding the weighted blockwise membership and the weighted total membership of each video block;
A5: the dynamic micro-expression of the facial image is classified by the maximum-membership principle.
Compared with the prior art, the beneficial effects of the present invention are: (1) within the overall micro-expression features of the video, the features extracted from face regions carrying more expression information are effectively emphasized, while the features extracted from face regions carrying less expression information are weakened. (2) The micro-expression sequence is interpolated into a fixed-length micro-expression video sequence anchored at three specified frames: the micro-expression onset frame, apex frame and offset frame. The micro-expression sequence is thereby normalized in time, which facilitates the subsequent feature description of the video sequence; because the three anchor frames are specified, interpolation errors are less likely to occur in the interpolated images, and a fine alignment performed after interpolation eliminates the error introduced by inter-frame interpolation. (3) The motion features extracted from the dynamic micro-expression sequence are combined with block weighting and feature weighting, producing a weighted dynamic feature extraction method that assigns different weights according to each feature's contribution to recognition; this cancels the influence of noise, weakens the influence of factors such as uneven illumination, increases the robustness of the algorithm, and clearly improves recognition. (4) The weighted dynamic feature extraction method blocks the video sequence, making feature matching more accurate in position. (5) The weighted fuzzy classification method performs fuzzy classification on the video subsets, computes memberships, accumulates the membership of each subset and, by the maximum-membership principle, obtains the final classification result, which effectively reduces the misclassification rate of samples and increases the robustness on test samples. (6) Dynamic micro-expression recognition can be performed on continuous facial micro-expression video sequences, deepening, to a certain degree, the understanding of the dynamic patterns of micro-expressions.
Detailed description of the invention
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a flow chart of an embodiment of the present invention;
Fig. 3 is a flow chart of the classification and recognition of video sequences in an embodiment of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
On the whole, the present invention is a facial micro-expression recognition method that extracts the motion information of micro-expression sequences with a weighted dynamic feature extraction algorithm and classifies them using weighted fuzzy set theory.
As shown in Fig. 1, an embodiment of the invention specifically comprises the following steps:
Step 1: video sequence pre-processing;
Step 2: computing the micro-expression onset frame, apex frame and offset frame of the pre-processed video sequence, then interpolating, anchored at these three frames, into a video of specified length and performing accurate alignment; of course, other frames may also be chosen in this step, with interpolation to the specified-length video performed accordingly;
Step 3: dividing the specified-length video into video blocks to obtain video subsets Y1, Y2, …, YM, where M is the number of video blocks;
Step 4: extracting the dynamic features of the video subsets and computing the weight information of the video subsets;
Step 5: classifying and recognizing the video sequence according to the computed results.
As shown in Fig. 2, in practical application the existing video sequences must be pre-processed. As an embodiment of the present invention, the method directly processes the video frames in the Cropped.zip file of the CASME II database. We divide the micro-expressions in CASME II into four classes (happiness, surprise, disgust, repression), each class containing non-repeated micro-expression sequences from 9 subjects. The development platform is MATLAB 2015a.
Step 1: the given aligned continuous facial micro-expression video sequences are pre-processed. Color-image graying and histogram equalization are performed with MATLAB 2015a built-in functions, the preliminarily pre-processed video frames are registered with an affine transformation, and size normalization is finally carried out.
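A minimal sketch of this pre-processing chain, written here in Python with OpenCV rather than the MATLAB built-ins used in this example (the landmark inputs driving the affine registration are hypothetical and would come from an external detector):

```python
import cv2
import numpy as np

def preprocess_frame(frame, src_pts, dst_pts, size=(128, 128)):
    """Step-1 sketch: graying, histogram equalization, affine registration,
    size normalization. src_pts/dst_pts are three corresponding landmarks
    (assumed inputs, e.g. eye centers and nose tip in the current frame and
    in the reference frame)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # color-image graying
    eq = cv2.equalizeHist(gray)                           # histogram equalization
    m = cv2.getAffineTransform(np.float32(src_pts), np.float32(dst_pts))
    reg = cv2.warpAffine(eq, m, size)                     # affine registration
    return cv2.resize(reg, size)                          # size normalization
```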
Step 2: for the pre-processed continuous facial micro-expression video sequence, the micro-expression onset frame (Onset), apex frame (Apex) and offset frame (Offset) are computed with the 3D gradient projection method.
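The 3D-gradient key-frame search can be sketched as follows; the rule used here to pick onset and offset (a fraction of the apex energy) is an assumption, since the projection details are not spelled out at this point:

```python
import numpy as np

def find_onset_apex_offset(video, rel_thresh=0.2):
    """video: (T, H, W) grayscale array. The apex is taken as the frame of
    maximum temporal-gradient energy; onset/offset as the first/last frames
    whose energy exceeds a fraction of the apex energy (assumed rule)."""
    gt = np.gradient(video.astype(np.float64), axis=0)  # gradient along t
    energy = np.abs(gt).mean(axis=(1, 2))               # per-frame motion energy
    apex = int(np.argmax(energy))
    active = np.flatnonzero(energy >= rel_thresh * energy[apex])
    return int(active[0]), apex, int(active[-1])        # onset, apex, offset
```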
Step 3: using the computed Onset, Apex and Offset frames as reference frames, the optical flow field between every pair of frames is computed, yielding the motion pattern of each pixel between the two images. Corresponding pixels of the two frames are then linearly interpolated with motion compensation, producing a facial sequence with a unified number of frames; this example interpolates to a 5-frame facial sequence, though other frame counts can be used as needed. On the resulting 5-frame facial sequence, an information-entropy method is used for fine alignment, eliminating the error introduced by inter-frame interpolation.
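A simplified sketch of this flow-compensated interpolation (Farneback flow is an assumed stand-in for the example's optical-flow estimator, and the information-entropy fine alignment is omitted):

```python
import cv2
import numpy as np

def interpolate_fixed_length(video, n_out=5):
    """Sketch of step 3 for a (T, H, W) grayscale array: resample to n_out
    frames, blending each pair of neighbouring frames after warping along an
    optical-flow field (approximate motion compensation)."""
    t_in = len(video)
    out = []
    for s in np.linspace(0, t_in - 1, n_out):
        i = int(np.floor(s))
        a = float(s - i)
        if a == 0.0 or i + 1 >= t_in:
            out.append(video[i].copy())
            continue
        f0, f1 = video[i], video[i + 1]
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = f0.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (gx + a * flow[..., 0]).astype(np.float32)
        map_y = (gy + a * flow[..., 1]).astype(np.float32)
        warped = cv2.remap(f0, map_x, map_y, cv2.INTER_LINEAR)  # motion compensation
        out.append(((1 - a) * warped + a * f1).astype(video.dtype))
    return np.stack(out)
```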
Step 4, video blocking: in the obtained 5-frame facial sequence, the face is divided, according to facial proportions, into three equally proportioned blocks (upper, middle, lower) corresponding to the eyes, nose and mouth of the face, yielding three video blocks. All class-labelled video sub-blocks are then regrouped by eye, nose and mouth part, forming three new video subsets Yi (i = 1, 2, 3).
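A sketch of this blocking step; equal thirds are used here as a stand-in for the exact facial proportions:

```python
import numpy as np

def split_face_video(video):
    """Split a (T, H, W) facial video into top/middle/bottom blocks, roughly
    covering eyes, nose and mouth. Equal thirds are an assumption; the
    embodiment divides according to facial proportions."""
    t, h, w = video.shape
    b = h // 3
    eyes = video[:, :b, :]
    nose = video[:, b:2 * b, :]
    mouth = video[:, 2 * b:, :]
    return [eyes, nose, mouth]
```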
The concrete implementation of step 4 comprises the following steps:
(1) The dynamic motion information of all video sub-blocks in each video subset is extracted. The present invention here uses a gradient method, for example performing HOG3D feature extraction on all video subsets with the spatio-temporal gradient descriptor HOG3D (3D gradients) proposed by Alexander Kläser. In each of the three video subsets, the gradient information in the x, y and t directions between every two adjacent frames of each video sub-block is computed and projected onto the axes of the 20 faces of a regular icosahedron, and the values on axes differing by 180° are merged; this directly reflects the motion of each pixel between adjacent images. The resulting HOG3D feature is finally flattened into a column vector λi,j = [di,j,1 di,j,2 … di,j,r]T, and the N HOG3D feature column vectors extracted from the same video subset are arranged into a feature matrix ψi = [λi,1, λi,2, …, λi,N]. Of course, besides the HOG3D feature description, other video dynamic descriptors can also be used in this example, such as optical flow or spatio-temporal local texture descriptors.
The HOG3D computation proceeds as follows: the gradient information in the x, y and t directions is computed between every two adjacent frames of the video sequence and projected onto the axes of the 20 icosahedron faces; the values on axes differing by 180° are then merged, and the values on the 10 merged axes are quantized. This information directly reflects the motion of each pixel between adjacent images.
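A hedged sketch of this projection-and-merge computation; the per-cell spatio-temporal binning of the full HOG3D descriptor is omitted, so a single 10-bin histogram is returned per video block:

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2

def icosahedron_axes():
    """The 20 face normals of a regular icosahedron, i.e. the vertices of a
    regular dodecahedron: (±1,±1,±1) plus cyclic permutations of (0,±1/φ,±φ)."""
    v = [[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    for s in (-1, 1):
        for t in (-1, 1):
            a, b = s / PHI, t * PHI
            v += [[0, a, b], [a, b, 0], [b, 0, a]]
    v = np.asarray(v, float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def hog3d(block):
    """Project (t, y, x) gradients onto the 20 axes and merge each antipodal
    pair into one of 10 bins (the 180-degree merge described above)."""
    axes = icosahedron_axes()
    reps = []                                   # one axis per antipodal pair
    for u in axes:
        if not any(np.allclose(u, -r) for r in reps):
            reps.append(u)
    reps = np.asarray(reps)                                  # (10, 3)
    g = np.stack(np.gradient(block.astype(float)), axis=-1)  # (T, H, W, 3)
    proj = np.abs(g.reshape(-1, 3) @ reps.T)                 # fold opposite axes
    hist = proj.sum(axis=0)
    return hist / (np.linalg.norm(hist) + 1e-12)             # unit-norm feature
```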
(2) The ReliefF algorithm is used to compute the weights αi = [αi,1, αi,2, …, αi,r] of the HOG3D features of the video sub-blocks in each extracted video subset, where i denotes the i-th video subset, i = 1, 2, …, M, and r is the dimensionality of the HOG3D features. To handle multi-class problems, ReliefF repeatedly draws a sample R at random from the training set, finds the k nearest-neighbour samples of R within the same class (near hits) and the k nearest-neighbour samples within each different class (near misses), and then updates the weight of each feature. In this example the video is divided into three blocks, so i = 1, 2, 3.
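A minimal ReliefF sketch along the lines just described; parameter defaults and the sampling scheme are illustrative, not taken from the patent:

```python
import numpy as np

def relieff_weights(X, y, k=5, n_iter=None, seed=0):
    """Minimal ReliefF sketch for step 4(2). X: (N, r) feature matrix,
    y: (N,) class labels; assumes every class has more than k samples."""
    rng = np.random.default_rng(seed)
    n, r = X.shape
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    w = np.zeros(r)
    picks = rng.choice(n, n_iter or n, replace=False)
    for i in picks:
        diff = np.abs(X - X[i])                 # per-dimension distances to sample R
        d = diff.sum(axis=1)
        d[i] = np.inf                           # exclude R itself
        p_hit = priors[classes == y[i]][0]
        for c, p in zip(classes, priors):
            near = np.argsort(np.where(y == c, d, np.inf))[:k]
            mean_diff = diff[near].mean(axis=0)
            if c == y[i]:
                w -= mean_diff                  # near hits pull weights down
            else:
                w += (p / (1 - p_hit)) * mean_diff  # near misses push weights up
    return w / len(picks)
```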
(3) Feature description is performed on the video sub-blocks: the dynamic motion information of each video sub-block in every video subset extracted in step (1) is multiplied by the feature weights computed in step (2) and accumulated, yielding the final dynamic-information descriptor Yi,j of each video sub-block in the video subset:
Yi,j = αi,1·di,j,1 + αi,2·di,j,2 + … + αi,r·di,j,r,
where Yi,j denotes the final dynamic-information descriptor of the j-th video sub-block in the i-th video subset, i = 1, 2, …, M, j = 1, 2, …, N, and di,j,1, di,j,2, …, di,j,r are the r feature dimensions of the j-th video sub-block in the i-th video subset. The weighted HOG3D features Yi,j computed for the N video sub-blocks in the i-th video subset are arranged by column into a feature matrix, which can be written in the following form:
Ψi = [Yi,1, Yi,2, …, Yi,N],
representing the dynamic HOG3D motion information extracted from the N video sub-blocks in the i-th video subset Yi, so that every video sub-block is described individually.
(4) The video sub-block weight vector, i.e. the contribution rate W = [ω1,ω2,…,ωM]T, is computed. This example uses a variant of the KNN procedure to compute the contribution rates W = [ω1,ω2,ω3]T of the three video blocks during classification, where ωi denotes the ability of the i-th video sub-block to separate different micro-expression classes when the sub-block features are described with the dynamic feature descriptor. The larger the value of the block weight ωi, the larger the contribution of that block of a video to the whole recognition process; the contribution rate ωi therefore reflects how much effective information the corresponding face region carries when a micro-expression occurs. Suppose that in the i-th training set the sample points of different classes are well separated while sample points of the same class lie close together: if ωi tends to 1, the i-th video sub-block is important for recognition. Conversely, if sample points of different classes overlap in the training set formed by the i-th video sub-block, the computed ωi is relatively small; that is, the contribution of that training set to recognition should be smaller.
The variant of the KNN procedure is implemented as follows:
For the three obtained video subsets Yi (i = 1, 2, 3): first, the distance between the dynamic motion information of each video sub-block in the i-th video subset and that of all other video sub-blocks in the same subset is computed (Euclidean distance, Chebyshev distance, Manhattan distance, cosine distance, etc. may be used), and its K nearest neighbours are found. The weight of the i-th video subset can then be computed from the same-class neighbour counts,
where Ki,n denotes the number of videos among the K nearest neighbours of the n-th video in the i-th video subset that belong to the same expression class as that video.
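Since the weight formula itself is not reproduced above, the sketch below assumes the natural reading ωi = (1/(N·K))·Σn Ki,n, i.e. the average fraction of same-class videos among each video's K nearest neighbours:

```python
import numpy as np

def subset_weight(features, labels, k=5):
    """Sketch of the KNN-variant contribution rate for one video subset Yi.
    Assumed form: omega_i = (1/(N*K)) * sum_n K_{i,n}; Euclidean distance
    is used, as permitted by the text above."""
    features = np.asarray(features, float)
    labels = np.asarray(labels)
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a video is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]        # K nearest neighbours per video
    return float((labels[nn] == labels[:, None]).mean())
```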
The present invention combines the motion features extracted from dynamic micro-expression video sequences with block weighting and feature weighting, producing a weighted dynamic feature extraction method. This method assigns different weights according to each feature's contribution to recognition, which cancels the influence of noise and weakens the influence of factors such as uneven illumination, increasing the robustness of the algorithm and markedly improving recognition. At the same time, the weighted dynamic feature extraction method blocks the video sequence, making feature matching more accurate in position. In addition, by extracting the motion information of the video sequence, the present invention deepens, to a certain degree, the understanding of the dynamic patterns of micro-expressions.
Step 5: the video sequence is classified and recognized according to the computed results.
As shown in Fig. 3, the recognition and classification method of this example is as follows:
A1: the pre-processed fixed-length video sequences are divided into a training set and a test set; using the foregoing steps, all test video sub-blocks partitioned from the test set and all training video sub-blocks partitioned from the training set are described, and the distance between each test video sub-block and the corresponding sub-blocks of all training videos is computed. Combining fuzzy set theory, this example proposes a weighted fuzzy classification method.
Here, each video in the training set has a calibrated expression class and is used to establish the rules of the model; each video in the test set carries no calibrated class, and classification is completed by computing video features to obtain a predicted label. The predicted label is compared with the expression class to which the video actually belongs, and the error between the model's rules and the training set is monitored to determine whether the rules are correct.
Of course, besides the weighted fuzzy classification method, this example may also use a weighted-distance fuzzy classification method, a weighted fuzzy support vector machine classification method, and the like.
A2: each video block partitioned from each test video in the test set is fuzzily classified with the weighted fuzzy classification method;
A3: the membership ui,j of each video block with respect to the corresponding video sub-blocks of all training videos is computed, yielding the classification result of each video sub-block, where i denotes the i-th training video and j the j-th video sub-block of the i-th training video;
A4: the classification results of the individual video blocks are merged, yielding the weighted blockwise membership σi,j of each video block and the weighted total membership β;
A5: the dynamic micro-expression of the facial image is classified by the maximum-membership principle.
In step A3, the membership ui,j is computed in the standard fuzzy form
ui,j = (1/disti,j)^(2/(t-1)) / Σn=1,…,N (1/distn,j)^(2/(t-1)),
where n = 1, 2, …, N, N is the number of training videos, t is the fuzzy factor, and disti,j denotes the distance between the feature descriptor of the j-th video sub-block in the i-th training video subset and the feature descriptor of the video block at the corresponding spatial position of the current test video; the distances may further be normalized by the average distance between the corresponding-position video block of the current test video and the N video sub-blocks in the i-th training video set.
In step A4, the weighted blockwise membership σi,j of a video block and the weighted total membership β are computed as follows:
σi,j = ωi·ui,j, (i = 1, 2, …, M, j = 1, 2, …, N),
β = Σi=1,…,M Σj=1,…,N σi,j,
where M is the number of video subsets and N is the number of video blocks.
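Steps A1 to A5 can be sketched end to end as follows; the array shapes and the per-class accumulation of β are assumptions consistent with the formulas above:

```python
import numpy as np

def classify_micro_expression(test_blocks, train_blocks, train_labels, w, t=2.0):
    """Sketch of A1-A5. test_blocks: (M, r) descriptors of one test video's
    M blocks; train_blocks: (N, M, r) for N training videos; w: (M,) block
    weights omega_i; t: fuzzy factor."""
    dist = np.linalg.norm(train_blocks - test_blocks[None], axis=-1)  # (N, M)
    u = (1.0 / np.maximum(dist, 1e-12)) ** (2.0 / (t - 1.0))
    u /= u.sum(axis=0, keepdims=True)        # membership u_{i,j} over training videos
    sigma = u * w[None, :]                   # sigma_{i,j} = omega_i * u_{i,j}
    beta = sigma.sum(axis=1)                 # weighted total membership per training video
    train_labels = np.asarray(train_labels)
    scores = {c: beta[train_labels == c].sum() for c in np.unique(train_labels)}
    return max(scores, key=scores.get)       # maximum-membership principle
```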
The present invention has the following innovative points:
(1) The present invention interpolates the micro-expression sequence into a fixed-length micro-expression video sequence anchored at three specified frames: the micro-expression onset frame, apex frame and offset frame. The micro-expression sequence is thereby normalized in time, facilitating the subsequent feature description of the video sequence. Because the onset, apex and offset frames are specified, interpolation errors are less likely to occur in the interpolated images; a fine alignment performed after interpolation eliminates the error introduced by inter-frame interpolation.
(2) The present invention combines the motion features extracted from the dynamic micro-expression sequence with block weighting and feature weighting, producing a weighted dynamic feature extraction method. This method assigns different weights according to each feature's contribution to recognition, which cancels the influence of noise, weakens the influence of factors such as uneven illumination, increases the robustness of the algorithm, and clearly improves recognition.
(3) The weighted dynamic feature extraction method blocks the video sequence, making feature matching more accurate in position.
(4) The weighted fuzzy classification method performs fuzzy classification on the video subsets, computes memberships, accumulates the membership of each subset and obtains the final classification result according to the maximum-membership principle, which effectively reduces the misclassification rate and increases robustness on the samples. At the same time, weighted fuzzy set theory makes the classification of micro-expressions more accurate.
(5) Dynamic micro-expression recognition can be performed on continuous facial micro-expression video sequences, deepening, to a certain degree, the understanding of the dynamic patterns of micro-expressions.
The specific embodiments described above are preferred embodiments of the present invention and do not limit its scope of practice; the scope of the present invention includes, but is not limited to, these embodiments, and all equivalent changes made in accordance with the present invention fall within its scope.

Claims (8)

1. A video-based dynamic micro-expression recognition method, characterized by comprising the following steps:
step 1: pre-processing a video sequence;
step 2: computing a certain number of key frames of the pre-processed video sequence, interpolating the sequence, anchored at those frames, into a video of specified length, and performing accurate alignment;
step 3: dividing the specified-length video into video blocks to obtain video subsets Y1, Y2, …, YM, where M is the number of video blocks;
step 4: extracting the dynamic features of the video subsets and computing the weight information of the video subsets;
step 5: classifying and recognizing the video sequence according to the computed results,
wherein the recognition and classification method is as follows:
A1: dividing the pre-processed fixed-length video sequences into a training set and a test set, describing all test video sub-blocks partitioned from the test set and all training video sub-blocks partitioned from the training set, and computing the distance between each test video sub-block and the corresponding sub-blocks of all training videos;
A2: fuzzily classifying, with a weighted fuzzy classification method, each video block partitioned from each test video in the test set;
A3: computing the membership of each video sub-block of each video block of the test video with respect to the corresponding video sub-blocks of all training videos, obtaining the classification result of each video sub-block;
A4: merging the classification results of the individual video sub-blocks to obtain the weighted blockwise membership and the weighted total membership of each video block;
A5: classifying the dynamic micro-expression of the facial image by the maximum-membership principle.
2. The dynamic micro-expression recognition method according to claim 1, characterized in that in step 1 the pre-processing includes color-image graying, histogram equalization, registration using an affine transformation, and size normalization.
3. The dynamic micro-expression recognition method according to claim 1, characterized in that in step 2 three frames of the pre-processed video sequence are computed: the micro-expression onset frame, the micro-expression apex frame and the micro-expression offset frame.
4. The dynamic micro-expression recognition method according to claim 3, characterized in that the computation in step 2 uses the 3D gradient projection method.
5. The dynamic micro-expression recognition method according to any one of claims 1-4, characterized in that the concrete implementation of step 4 comprises the following steps:
(1) extracting the dynamic motion information of all video sub-blocks in each video subset;
(2) separately computing the weight of each dimension of the video-block feature vectors in each video subset;
(3) describing the features of the video sub-blocks: multiplying the dynamic motion information of each video sub-block in all video subsets extracted in step (1) by the per-dimension weights computed in step (2) and accumulating the results, yielding the final dynamic feature descriptor of each video sub-block in the video subset;
(4) computing the video-block weight vector W = [ω1,ω2,…,ωM]T, where M is the number of video blocks and ωi denotes the ability of the i-th video sub-block to separate different micro-expression classes when the sub-block features are described with the dynamic feature descriptor.
6. The dynamic micro-expression recognition method according to claim 5, characterized in that the implementation of step (1) includes gradient-based methods and optical-flow-based methods.
7. The dynamic micro-expression recognition method according to claim 6, characterized in that in step (1) the spatio-temporal gradient descriptor HOG3D is used to extract HOG3D features from all video subsets.
8. The dynamic micro-expression recognition method according to claim 5, characterized in that in step (4) the video sub-block weights are computed with a weighting method that strengthens the contribution of local features.
CN201610265428.8A 2016-04-26 2016-04-26 A video-based dynamic micro-expression recognition method Expired - Fee Related CN105913038B (en)

Priority Applications (1)

Application Number: CN201610265428.8A; Priority/Filing Date: 2016-04-26; Title: A video-based dynamic micro-expression recognition method

Publications (2)

Publication Number: CN105913038A; Publication Date: 2016-08-31
Publication Number: CN105913038B; Publication Date: 2019-08-06

Family

ID=56752673

Family Applications (1)

Application Number: CN201610265428.8A; Status: Expired - Fee Related; Priority/Filing Date: 2016-04-26; Title: A video-based dynamic micro-expression recognition method

Country Status (1)

Country: CN; Publication: CN105913038B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485227A * 2016-10-14 2017-03-08 深圳市唯特视科技有限公司 A customer satisfaction evaluation method based on facial expressions in video
CN106485228A * 2016-10-14 2017-03-08 深圳市唯特视科技有限公司 A method for analyzing children's points of interest based on facial expressions in video
CN106897671B * 2017-01-19 2020-02-25 济南中磁电子科技有限公司 Micro-expression recognition method based on optical flow and Fisher Vector coding
CN106909907A * 2017-03-07 2017-06-30 佛山市融信通企业咨询服务有限公司 A video-communication sentiment analysis assistance system
CN107358206B (en) * 2017-07-13 2020-02-18 山东大学 Micro-expression detection method based on region-of-interest optical flow features
CN110688874B (en) * 2018-07-04 2022-09-30 杭州海康威视数字技术股份有限公司 Facial expression recognition method and device, readable storage medium and electronic equipment
CN110197107B (en) * 2018-08-17 2024-05-28 平安科技(深圳)有限公司 Micro-expression recognition method, micro-expression recognition device, computer equipment and storage medium
CN109190582B (en) * 2018-09-18 2022-02-08 河南理工大学 Novel micro-expression recognition method
CN109398310B (en) * 2018-09-26 2021-01-29 中创博利科技控股有限公司 Unmanned automobile
CN109508644B (en) * 2018-10-19 2022-10-21 陕西大智慧医疗科技股份有限公司 Facial paralysis grade evaluation system based on deep video data analysis
CN109815793A * 2018-12-13 2019-05-28 平安科技(深圳)有限公司 Micro-expression description method and apparatus, computer device and readable storage medium
CN111832351A (en) * 2019-04-18 2020-10-27 杭州海康威视数字技术股份有限公司 Event detection method and device and computer equipment
CN110037693A * 2019-04-24 2019-07-23 中央民族大学 An emotion classification method based on facial expressions and EEG
CN110532911B (en) * 2019-08-19 2021-11-26 南京邮电大学 Covariance measurement driven small sample GIF short video emotion recognition method and system
CN110598608B * 2019-09-02 2022-01-14 中国航天员科研训练中心 Intelligent monitoring system for psychological and physiological states using cooperating non-contact and contact sensing
CN111274978B (en) * 2020-01-22 2023-05-09 广东工业大学 Micro expression recognition method and device
CN111652159B (en) * 2020-06-05 2023-04-14 山东大学 Micro-expression recognition method and system based on multi-level feature combination


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258204A (en) * 2012-02-21 2013-08-21 中国科学院心理研究所 Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features
US8848068B2 (en) * 2012-05-08 2014-09-30 Oulun Yliopisto Automated recognition algorithm for detecting facial expressions
CN104298981A (en) * 2014-11-05 2015-01-21 河北工业大学 Face microexpression recognition method
CN104933416A (en) * 2015-06-26 2015-09-23 复旦大学 Micro expression sequence feature extracting method based on optical flow field

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sze-Teng Liong et al., "Subtle Expression Recognition Using Optical Strain Weighted Features", Computer Vision - ACCV 2014 Workshops, 2015-04-11, pp. 644-654.

Also Published As

Publication number Publication date
CN105913038A, published 2016-08-31


Legal Events

PB01 (C06): Publication
SE01 (C10): Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20190806)