CN105913038A - Video-based dynamic micro-expression recognition method - Google Patents

Video-based dynamic micro-expression recognition method

Info

Publication number
CN105913038A
CN105913038A (application CN201610265428.8A)
Authority
CN
China
Prior art keywords
video
block
micro
expression
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610265428.8A
Other languages
Chinese (zh)
Other versions
CN105913038B (en)
Inventor
马婷
陈梦婷
王焕焕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201610265428.8A priority Critical patent/CN105913038B/en
Publication of CN105913038A publication Critical patent/CN105913038A/en
Application granted granted Critical
Publication of CN105913038B publication Critical patent/CN105913038B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a video-based dynamic micro-expression recognition method, which belongs to the technical field of dynamic recognition. The method comprises the following steps: preprocessing a video sequence; computing a certain number of key frames of the preprocessed video sequence; interpolating the sequence at those frames into a video of specified length and performing fine alignment; dividing the specified-length video into video blocks to form video subsets; extracting the dynamic features of the video subsets and computing the weights of the subsets and of their features; and classifying and recognizing the video sequence according to the computed results. The method effectively emphasizes the subtle facial-expression features extracted from facial regions that carry more expression information while weakening those extracted from regions that carry less; it also reduces the influence of uneven illumination, noise, and occlusion and enhances the robustness of the system.

Description

A video-based dynamic micro-expression recognition method
Technical field
The present invention relates to the technical field of dynamic recognition, and in particular to a video-based dynamic micro-expression recognition method.
Background art
The defining characteristics of micro-expressions are their extremely short duration, weak intensity, and the fact that they are complex expressions produced involuntarily and beyond conscious control. The generally accepted upper bound on their duration is 1/5 of a second. Because these transient facial expressions are so brief and of such low intensity, they are easily missed by the naked eye. To analyze micro-expressions in real time and reveal a person's true emotions, an automatic micro-expression recognition system is urgently needed.
In the field of psychology, research reports on micro-expressions point out that humans are poor at recognizing them: because of their short duration and weak intensity, micro-expressions are barely perceptible to human observers. To address this problem, Ekman developed the Micro Expression Training Tool (METT), yet even its recognition rate is only about 40%. Even after METT was introduced, Frank et al. reported that detecting micro-expressions in practice remains very difficult for humans. Since automatic micro-expression recognition is still immature, researchers have had to analyze video frame by frame, a process that is undoubtedly an enormous waste of time and effort.
Interdisciplinary research between computer science and psychology can well satisfy the demand for micro-expression recognition, and several independent groups of computer scientists have begun work in this direction. Existing automatic micro-expression recognition methods fall into two classes: strain-model methods and machine-learning methods. In the strain-model approach, Shreve et al. divide the face into sub-regions such as the mouth, cheeks, forehead, and eyes, and combine this segmentation of the facial image with optical flow to compute the facial strain in each sub-region; analyzing the strain model computed in each sub-region then allows the micro-expressions in the video to be detected.
Among machine-learning methods, Pfister et al. proposed a framework for spontaneous micro-expression recognition built on a temporal interpolation model and multiple kernel learning. It uses temporal interpolation to overcome the problem of overly short videos, describes dynamic features with a spatio-temporal local texture descriptor, and solves the classification problem with support vector machines (SVM), multiple kernel learning (MKL), and random forests (RF). However, the spatio-temporal local texture descriptor extracts complete local binary patterns of the expression sequence only in the XY, XT, and YT planes, so it cannot truly capture the inter-frame dynamic information of the video. Moreover, because it treats the contribution of every facial part as identical, it ignores the fact that different facial regions carry different amounts of information when expressing emotion: regions such as the eyes, eyebrows, and mouth corners carry more expression information, while regions such as the cheeks and forehead carry less.
Polikovsky et al. used a histogram of 3D gradients descriptor to represent motion information for micro-expression recognition. Ruiz-Hernandez et al. proposed performing LBP (Local Binary Patterns) encoding after a re-parametrization of the local second-order Gaussian jet, producing more robust and reliable histograms to describe micro-expressions.
Meanwhile, patent document CN104933416A also addresses micro-expression recognition with a micro-expression sequence feature extraction method based on optical flow fields, but that method has the following disadvantages:
(1) extracting the dense optical flow field between consecutive frames to describe the dynamic information of the video makes the algorithm very time-consuming;
(2) the optical flow field is partitioned into a series of spatio-temporal blocks and a principal direction is extracted for each block to characterize the motion pattern of the majority of points in that block; in doing so, the subtle micro-expression changes at local facial points are ignored;
(3) it is easily affected by factors such as uneven illumination, noise, and occlusion; its accuracy is not high, while its computational cost is large;
(4) although the method processes the face in blocks, it treats the amount of temporal micro-expression information carried by each block as identical, so facial regions that carry little or no relevant information still influence the final recognition result.
Summary of the invention
To solve the problems in the prior art, the present invention provides a video-based dynamic micro-expression recognition method.
The present invention comprises the following steps:
Step 1: preprocess the video sequence;
Step 2: compute a certain number of key frames of the preprocessed video sequence, interpolate the sequence at these fixed points into a video of specified length, and perform fine alignment;
Step 3: divide the specified-length video into video blocks to obtain video subsets Y1, Y2, …, YM, where M is the number of video blocks;
Step 4: extract the dynamic features of the video subsets and compute their weight information;
Step 5: classify and recognize the video sequence according to the computed results.
The present invention uses interpolation to normalize the video sequence in time, which makes the sequence easier to analyze and improves recognition performance to a certain degree.
In a further refinement of the invention, the preprocessing in Step 1 includes color-image grayscale conversion, histogram equalization, registration by affine transformation, and size normalization.
In a further refinement, Step 2 computes three frames of the preprocessed video sequence: the micro-expression onset frame, the apex frame, and the offset frame. By anchoring the micro-expression video sequence at these three specified frames and interpolating it with an interpolation algorithm into a fixed-length micro-expression video sequence, a better recognition effect is obtained.
In a further refinement, the computation in Step 2 is performed by the 3D gradient projection method.
In a further refinement, the concrete implementation of Step 4 comprises the following steps:
(1) extract the dynamic motion information of all video sub-blocks in each video subset;
(2) compute, for each video subset, the per-dimension weights of the feature vectors of its video sub-blocks;
(3) describe each video sub-block: multiply the dynamic motion information of each video sub-block in all video subsets, extracted in step (1), by the per-dimension feature-vector weights computed in step (2) and accumulate the products, obtaining the final dynamic-information descriptor of each video sub-block in the subset;
(4) compute the video sub-block weight vector W = [ω1, ω2, …, ωM]T, where M is the number of video blocks and ωi denotes the discriminative power of the i-th video sub-block over the different micro-expression classes when the dynamic feature descriptor is used to describe the sub-block features.
The present invention combines the motion features extracted from the dynamic micro-expression video sequence with a weighted blocking method and a feature-weighting method to generate a weighted dynamic feature extraction method. This method assigns different weights according to each feature's contribution to the recognition result, which cancels the influence of noise, weakens the influence of factors such as uneven illumination, and increases the robustness of the algorithm, so that recognition improves significantly. At the same time, the weighted dynamic feature extraction method divides the video sequence into blocks, making the positions of feature matching more accurate. In addition, by extracting the motion information of the video sequence, the invention deepens, to a certain degree, the understanding of micro-expression dynamic patterns.
In a further refinement, step (1) may be implemented with a gradient method or an optical flow method.
In a further refinement, step (1) uses the spatio-temporal gradient descriptor HOG3D to extract HOG3D features from all video subsets.
In a further refinement, step (4) computes the video sub-block weights with a weighting method that strengthens the contribution of local features. For example, a process variant of KNN is used to compute the weight ωi of each video subset, which effectively strengthens the contribution of local features.
In a further refinement, the recognition and classification method of Step 5 is:
A1: divide the preprocessed fixed-length video sequences into a training set and a test set, describe all test video sub-blocks divided in the test set and all training video sub-blocks divided in the training set, and compute the distance from each test video sub-block to the corresponding sub-blocks of all training videos;
A2: apply the weighted fuzzy classification method to the video blocks divided from each test video in the test set;
A3: compute the membership degree of each video block with respect to the corresponding video sub-blocks of all training videos, obtaining the classification result of each video sub-block;
A4: fuse the classification results of all video blocks to obtain the weighted block membership degrees and the weighted total membership degree of each video block;
A5: classify the dynamic micro-expression of the facial image by the maximum-membership principle.
Compared with the prior art, the beneficial effects of the invention are as follows. (1) Among the overall micro-expression features of the video, the micro-expression features extracted from facial regions carrying more expression information are effectively emphasized, while those extracted from regions carrying less expression information are weakened. (2) The micro-expression sequence is interpolated by an interpolation algorithm into a fixed-length micro-expression video sequence anchored at three specified frames: the micro-expression onset frame, apex frame, and offset frame. This normalizes the micro-expression sequence in time and makes subsequent feature description of the video sequence convenient; because the onset, apex, and offset frames are specified, the interpolated images are less prone to interpolation errors, and a fine alignment performed after interpolation eliminates the error introduced by inter-frame interpolation. (3) The motion features extracted from the dynamic micro-expression sequence are combined with a weighted blocking method and a feature-weighting method to generate a weighted dynamic feature extraction method, which assigns different weights according to each feature's contribution to recognition; this cancels the influence of noise, weakens the influence of factors such as uneven illumination, and increases the robustness of the algorithm, so that recognition improves significantly. (4) The weighted dynamic feature extraction method divides the video sequence into blocks, making the positions of feature matching more accurate. (5) A weighted fuzzy classification method performs fuzzy classification on the video subsets, computes membership degrees, accumulates the membership degree of each subset, and obtains the final classification result by the maximum-membership principle, which effectively reduces the misclassification rate of samples and increases robustness to test samples. (6) Performing dynamic micro-expression recognition on continuous facial micro-expression video sequences deepens, to a certain degree, the understanding of micro-expression dynamic patterns.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention;
Fig. 2 is a flowchart of one embodiment of the invention;
Fig. 3 is a flowchart of one embodiment of classifying and recognizing a video sequence according to the invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
On the whole, the present invention is a facial micro-expression recognition method that extracts the motion information of a micro-expression sequence with a weighted dynamic feature extraction algorithm and classifies it using weighted fuzzy set theory.
As shown in Fig. 1, one embodiment of the present invention specifically includes the following steps:
Step 1: preprocess the video sequence;
Step 2: compute the micro-expression onset frame, apex frame, and offset frame of the preprocessed video sequence, interpolate the sequence at these fixed points into a video of specified length, and perform fine alignment. Of course, other frames may be chosen in this step, with interpolation to the specified length performed in the same way;
Step 3: divide the specified-length video into video blocks to obtain video subsets Y1, Y2, …, YM, where M is the number of video blocks;
Step 4: extract the dynamic features of the video subsets and compute their weight information;
Step 5: classify and recognize the video sequence according to the computed results.
As shown in Fig. 2, in practical application the existing video sequences must first be preprocessed. As one embodiment of the invention, this method directly processes the video frames in the Cropped.zip file of the CASME II database. We divide the micro-expressions in the CASME II database into four classes (happiness, surprise, disgust, repression), each class containing non-repeating micro-expression sequences from 9 experimental subjects. The development platform used is MATLAB 2015a.
Step 1: preprocess the given, roughly aligned, continuous facial micro-expression video sequence. Built-in MATLAB 2015a functions perform color-image grayscale conversion and histogram equalization; an affine transformation then registers the preliminarily preprocessed video frames, and finally size normalization is performed.
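By way of illustration only (the embodiment above relies on MATLAB built-ins), a minimal Python/OpenCV sketch of the same preprocessing chain is given below; the landmark arrays driving the affine registration are hypothetical placeholders assumed to come from an external facial-landmark detector:

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr, src_landmarks, ref_landmarks, out_size=(128, 128)):
    """Grayscale conversion -> histogram equalization -> affine registration -> size normalization.

    src_landmarks / ref_landmarks: (N, 2) float32 arrays of matching facial points
    (hypothetical inputs from a landmark detector; not specified by the patent).
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # color-image grayscale conversion
    gray = cv2.equalizeHist(gray)                        # histogram equalization
    # Affine transform registering this frame onto the reference face geometry.
    M, _ = cv2.estimateAffinePartial2D(src_landmarks, ref_landmarks)
    registered = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
    return cv2.resize(registered, out_size)              # size normalization
```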
Step 2: for the preprocessed continuous facial micro-expression video sequence, compute its micro-expression onset frame (Onset), apex frame (Apex), and offset frame (Offset) by the 3D gradient projection method.
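The patent does not spell out the 3D gradient projection computation in detail; the sketch below shows one plausible reading in which per-frame temporal-gradient energy locates the apex and a threshold locates the onset and offset. The threshold fraction is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def onset_apex_offset(frames, thresh_frac=0.2):
    """frames: (T, H, W) grayscale video. Returns (onset, apex, offset) frame indices.

    Uses temporal-gradient energy as a proxy for expression intensity;
    the 0.2 threshold fraction is an illustrative assumption.
    """
    video = np.asarray(frames, dtype=np.float64)
    gt = np.gradient(video, axis=0)                 # temporal gradient at every pixel
    energy = (gt ** 2).sum(axis=(1, 2))             # per-frame motion energy
    apex = int(energy.argmax())
    active = energy >= thresh_frac * energy.max()   # frames with noticeable motion
    onset = int(np.argmax(active))                  # first active frame
    offset = int(len(energy) - 1 - np.argmax(active[::-1]))  # last active frame
    return onset, apex, offset
```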
Step 3: using the computed onset frame (Onset), apex frame (Apex), and offset frame (Offset) as reference frames, compute the optical flow field between each pair of frames to obtain the motion pattern of every pixel between the two images. Corresponding pixels of the two frames are then linearly interpolated with motion compensation, yielding a face sequence with a unified frame count; this example interpolates to a 5-frame face sequence, though other frame counts may of course be used as required. On the resulting 5-frame face sequence we apply an information-entropy method for fine alignment, eliminating the error introduced by inter-frame interpolation.
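A simplified sketch of this flow-compensated frame interpolation follows; Farneback optical flow and the warping scheme are assumptions for illustration, since the patent names neither the flow algorithm nor the warping details. Sampling alpha at uniform points between the onset/apex and apex/offset reference pairs yields the unified 5-frame sequence:

```python
import cv2
import numpy as np

def interpolate_pair(f0, f1, alpha):
    """Synthesize an intermediate grayscale frame between f0 and f1 at time alpha in (0, 1)."""
    flow = cv2.calcOpticalFlowFarneback(f0, f1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = f0.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Motion compensation: sample f0 part-way along the per-pixel flow toward f1 ...
    warped = cv2.remap(f0, gx + alpha * flow[..., 0], gy + alpha * flow[..., 1],
                       cv2.INTER_LINEAR)
    # ... then blend linearly between the two corresponding frames.
    return cv2.addWeighted(warped, 1.0 - alpha, f1, alpha, 0)
```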
Step 4, video blocking: the face in the resulting 5-frame sequence is divided in equal proportion, according to facial proportions, into three blocks (upper, middle, and lower) corresponding respectively to the eyes, nose, and mouth, yielding three video blocks. All class-labeled video sub-blocks are then regrouped by their eye, nose, and mouth parts, forming three new video subsets Yi (i = 1, 2, 3).
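A minimal sketch of this blocking step, assuming equal horizontal thirds for simplicity (the patent appeals to facial proportions but does not give exact ratios):

```python
import numpy as np

def split_into_blocks(video, n_blocks=3):
    """video: (T, H, W) face sequence -> [eyes, nose, mouth] sub-videos, top to bottom."""
    T, H, W = video.shape
    h = H // n_blocks
    return [video[:, i * h:(i + 1) * h, :] for i in range(n_blocks)]
```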
The concrete implementation of Step 4 comprises the following steps:
(1) Extract the dynamic motion information of all video sub-blocks in each video subset. Here the invention uses a gradient method: for example, the spatio-temporal gradient descriptor HOG3D (3D gradients) proposed by Alexander Kläser is used to extract HOG3D features from all video subsets. In each of the three video subsets, the gradient information in the x, y, and t directions of the video images between adjacent frames of each video sub-block is computed and projected onto the 20 face axes of a regular icosahedron, and the values on axes 180 degrees apart are merged; this information directly reflects the motion of every pixel across adjacent images. The resulting HOG3D features are flattened into column vectors λi,j = [di,j,1 di,j,2 … di,j,r]T, and the N HOG3D feature column vectors extracted from the same video subset are arranged into the feature matrix ψi = [λi,1, λi,2, …, λi,N]. Of course, besides the HOG3D feature descriptor, other video dynamic descriptors, such as optical flow or spatio-temporal local texture descriptors, may also be used in this example.
The HOG3D computation proceeds as follows: compute the gradient information in the x, y, and t directions of the video images between adjacent frames of the video sequence; project it onto the 20 face axes of the regular icosahedron; merge the values on axes 180 degrees apart; and quantize the values on the 10 merged axes. This information directly reflects the motion of every pixel across adjacent images.
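The following sketch condenses this gradient-projection step. The ten axes are built from the golden ratio as one representative of each antipodal pair of icosahedron face normals; taking absolute projections and accumulating magnitude-weighted votes into a 10-bin histogram is a simplification of the full HOG3D descriptor, which also subdivides each block into cells:

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2

def icosahedron_axes():
    """10 unit axes: 20 regular-icosahedron face normals with antipodal pairs merged."""
    a, b = 1 / PHI, PHI
    pts = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1),  # one of each (+-1,+-1,+-1) pair
           (0, a, b), (0, a, -b),        # from (0, +-1/phi, +-phi)
           (a, b, 0), (a, -b, 0),        # from (+-1/phi, +-phi, 0)
           (b, 0, a), (b, 0, -a)]        # from (+-phi, 0, +-1/phi)
    axes = np.asarray(pts, dtype=np.float64)
    return axes / np.linalg.norm(axes, axis=1, keepdims=True)

def hog3d_histogram(block):
    """block: (T, H, W) grayscale sub-block -> normalized 10-bin orientation histogram."""
    gt, gy, gx = np.gradient(block.astype(np.float64))   # gradients along t, y, x
    g = np.stack([gx.ravel(), gy.ravel(), gt.ravel()], axis=1)
    proj = np.abs(g @ icosahedron_axes().T)   # |dot| merges axes 180 degrees apart
    bins = proj.argmax(axis=1)                # dominant axis per voxel
    mags = np.linalg.norm(g, axis=1)          # vote with gradient magnitude
    hist = np.bincount(bins, weights=mags, minlength=10)
    return hist / (hist.sum() + 1e-12)
```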
(2) Use the ReliefF algorithm to compute the weights αi = [αi,1, αi,2, …, αi,r] of the HOG3D features extracted from the video sub-blocks in each video subset, where i denotes the i-th video subset, i = 1, 2, …, M, and r is the dimensionality of the HOG3D features. When handling a multi-class problem, ReliefF repeatedly draws one sample R at random from the training set, finds the k nearest neighbor samples of R within its own class (near hits) and the k nearest neighbor samples within each different class (near misses), and then updates the weight of each feature. In this example the video is divided into three blocks, so i = 1, 2, 3.
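A compact sketch of the ReliefF weight update described above (the iteration count, neighborhood size, and L1 neighbor search are illustrative choices, not values from the patent):

```python
import numpy as np

def relieff_weights(X, y, n_iter=100, k=5, seed=None):
    """Minimal multi-class ReliefF for numeric features.

    X: (n_samples, r) feature matrix; y: (n_samples,) class labels.
    Returns an r-vector of weights (larger = more discriminative).
    """
    rng = np.random.default_rng(seed)
    n, r = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12      # normalizes per-feature diffs
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes.tolist(), counts / n))
    w = np.zeros(r)
    for _ in range(n_iter):
        idx = int(rng.integers(n))
        R, cR = X[idx], y[idx]
        for c in classes:
            pool = np.where((y == c) & (np.arange(n) != idx))[0]
            dists = np.abs(X[pool] - R).sum(axis=1)   # L1 distance to R
            nearest = pool[np.argsort(dists)[:k]]
            diff = (np.abs(X[nearest] - R) / span).mean(axis=0)
            if c == cR:
                w -= diff / n_iter                    # near hits pull weights down
            else:
                w += (prior[c] / (1 - prior[cR])) * diff / n_iter  # near misses push up
    return w
```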
(3) Describe each video sub-block: multiply the dynamic motion information of each video sub-block in all video subsets, extracted in step (1), by the feature weights computed in step (2) and accumulate the products, obtaining the final dynamic-information descriptor Yi,j of each video sub-block in the subset: Yi,j = αi,1di,j,1 + αi,2di,j,2 + … + αi,rdi,j,r, where Yi,j denotes the final dynamic-information descriptor of the j-th video sub-block in the i-th video subset, i = 1, 2, …, M, j = 1, 2, …, N, and di,j,1, di,j,2, …, di,j,r are the r feature dimensions of the j-th video sub-block in the i-th video subset. The weighted HOG3D features Yi,j computed for the N video sub-blocks in the i-th video subset are arranged into a feature matrix, which can be written as Ỹi = [Yi,1, Yi,2, …, Yi,N]; it represents the dynamic HOG3D motion information extracted from the N video sub-blocks of the i-th video subset, and in this way all video sub-blocks are described.
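Read literally, the accumulation above reduces each sub-block to a scalar; the sketch below computes both the per-dimension weighted features and their accumulated form, since the text leaves open which of the two serves as the descriptor downstream:

```python
import numpy as np

def weighted_descriptors(features, alpha):
    """features: (N, r) HOG3D vectors of the N sub-blocks in one subset.
    alpha: (r,) ReliefF weights for that subset.
    Returns the per-dimension weighted descriptors (N, r) and the accumulated
    scalar form (N,) matching Y_{i,j} = sum_r alpha_{i,r} * d_{i,j,r} as written."""
    weighted = features * alpha          # per-dimension weighting
    accumulated = weighted.sum(axis=1)   # the literal accumulation in the formula
    return weighted, accumulated
```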
(4) Compute the video sub-block weight vector, i.e. the contribution rates, W = [ω1, ω2, …, ωM]T. This example uses a process variant of KNN to compute the contribution rates W = [ω1, ω2, ω3]T of the three video blocks during classification and recognition. ωi denotes the discriminative power of the i-th video sub-block over the different micro-expression classes when the dynamic feature descriptor is used to describe the sub-block features: the larger the weight ωi of a video block, the larger that block's contribution to recognizing the whole video. The contribution rate ωi therefore reflects how much effective information the corresponding facial region contains when a micro-expression occurs. Suppose the sample points of different classes in the i-th training set are well separated from each other while sample points of the same class lie close together; then ωi tends to 1, indicating that the i-th video sub-block is important for recognition. Conversely, if the training set formed from the i-th video sub-block contains overlap between sample points of different classes, the computed ωi is comparatively small; that is, the contribution this training set makes to recognition should be correspondingly smaller.
The concrete steps of the KNN process variant are as follows:
For each of the three video subsets Yi (i = 1, 2, 3): first compute the distance between the dynamic motion information of each video sub-block in the i-th subset and that of all other video sub-blocks in the same subset (Euclidean distance, Chebyshev distance, Manhattan distance, cosine distance, and so on may be used), and then find its K nearest neighbors. The weight of the i-th video subset is computed from these neighborhoods, where Ki,n denotes the number of videos among the K nearest neighbors in the i-th video subset that carry the same expression class.
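The weight equation itself is reproduced in the source only as an image; the sketch below assumes the natural reading that ωi averages the same-class fraction Ki,n/K over the sub-blocks of the subset, which agrees with the stated behavior that ωi tends to 1 when same-class samples cluster and shrinks when classes overlap:

```python
import numpy as np

def subset_weight(descriptors, labels, K=5):
    """descriptors: (N, r) sub-block descriptors of one video subset.
    labels: (N,) expression classes. Returns omega_i in [0, 1]."""
    N = len(labels)
    # Pairwise Euclidean distances (any metric named in the text would do).
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # exclude self-matches
    same_class_fraction = np.empty(N)
    for n in range(N):
        neighbors = np.argsort(d[n])[:K]         # K nearest neighbors of sub-block n
        same_class_fraction[n] = np.mean(labels[neighbors] == labels[n])  # K_{i,n}/K
    return float(same_class_fraction.mean())     # assumed form: mean of K_{i,n}/K
```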
Step 5: classify and recognize the video sequence according to the computed results.
As shown in Fig. 3, the recognition and classification method of this example is:
A1: divide the preprocessed fixed-length video sequences into a training set and a test set; using the foregoing steps, describe all test video sub-blocks divided in the test set and all training video sub-blocks divided in the training set, and compute the distance from each test video sub-block to the corresponding sub-blocks of all training videos. This example combines fuzzy set theory to propose a weighted fuzzy classification method;
Here, each video in the training set has a calibrated expression class and is used to build the model and discover its regularities; each video in the test set has no calibrated class, and classification is completed by computing video features to obtain a predicted label. The predicted label is compared with the expression class to which the video actually belongs, and the error of the model against the training set is computed and monitored to determine whether its rule is correct.
Of course, besides the weighted fuzzy classification method, this example may also use a weighted-distance fuzzy classification method, a weighted fuzzy support vector machine classification method, or the like.
A2: apply the weighted fuzzy classification method to the video blocks divided from each test video in the test set;
A3: compute the membership degree ui,j of each video block with respect to the corresponding video sub-blocks of all training videos to obtain the classification result of each video sub-block, where i denotes the i-th training video and j the j-th video sub-block of the i-th training video;
A4: fuse the classification results of all video blocks to obtain the weighted block membership degree σi,j and the weighted total membership degree β of each video block;
A5: classify the dynamic micro-expression of the facial image by the maximum-membership principle.
In step A3, the membership degree ui,j is computed from the following quantities: n = 1, 2, …, N, where N is the number of training videos; the fuzzy factor t; disti,j, the distance between the feature descriptor of the j-th training video sub-block in the i-th training video subset and the feature descriptor of the test video block at the corresponding spatial position; and the average distance between the test video block at the corresponding spatial position and the N video sub-blocks of the i-th training video subset.
In step A4, the weighted block membership degree σi,j of each video block and the weighted total membership degree β are computed as follows:
σi,j = ωi · ui,j (i = 1, 2, …, M; j = 1, 2, …, N),
with β obtained by accumulating the weighted block membership degrees σi,j over the subsets; here M is the number of video subsets and N is the number of video blocks.
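A sketch of the complete weighted fuzzy decision stage follows. Because the membership equation is not reproduced in the text, a standard fuzzy-KNN-style form with fuzzy factor t is assumed for ui,j; the weighting σi,j = ωi·ui,j and the class-wise accumulation into β follow the description above:

```python
import numpy as np

def classify_test_video(test_blocks, train_blocks, train_labels, omega, t=2.0):
    """test_blocks: list of M descriptors, one per spatial block of the test video.
    train_blocks: (M, N, r) descriptors of the corresponding blocks of N training videos.
    train_labels: (N,) expression classes; omega: (M,) block contribution rates.
    Assumed membership: u proportional to (1/dist)^(2/(t-1)), a standard fuzzy-KNN form."""
    M, N, _ = train_blocks.shape
    classes = np.unique(train_labels)
    beta = dict.fromkeys(classes.tolist(), 0.0)     # total weighted membership per class
    for i in range(M):
        dist = np.linalg.norm(train_blocks[i] - test_blocks[i], axis=1) + 1e-12
        u = dist ** (-2.0 / (t - 1.0))
        u /= u.sum()                                # memberships u_{i,j} over training videos
        for c in classes:
            beta[c] += omega[i] * u[train_labels == c].sum()   # sigma_{i,j} = omega_i * u_{i,j}
    return max(beta, key=beta.get)                  # maximum-membership principle
```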
The present invention has the following innovative points:
(1) The invention interpolates the micro-expression sequence, by an interpolation algorithm, into a fixed-length micro-expression video sequence anchored at three specified frames: the micro-expression onset frame, apex frame, and offset frame. This normalizes the micro-expression sequence in time and makes subsequent feature description of the video sequence convenient. Because the onset, apex, and offset frames are specified, the interpolated images are less prone to interpolation errors; in addition, a fine alignment performed after interpolation eliminates the error introduced by inter-frame interpolation.
(2) The motion features extracted from the dynamic micro-expression sequence are combined with a weighted blocking method and a feature-weighting method to generate a weighted dynamic feature extraction method, which assigns different weights according to each feature's contribution to recognition; this cancels the influence of noise, weakens the influence of factors such as uneven illumination, and increases the robustness of the algorithm, so that recognition improves significantly.
(3) The weighted dynamic feature extraction method divides the video sequence into blocks, making the positions of feature matching more accurate.
(4) The weighted fuzzy classification method performs fuzzy classification on the video subsets, computes membership degrees, accumulates the membership degree of each subset, and obtains the final classification result by the maximum-membership principle, which effectively reduces the misclassification rate of samples and increases robustness. At the same time, the weighted fuzzy set theory makes the classification of micro-expressions more accurate.
(5) Dynamic micro-expression recognition can be performed on continuous facial micro-expression video sequences, deepening to a certain degree the understanding of micro-expression dynamic patterns.
The embodiments described above are preferred embodiments of the present invention and do not thereby limit its scope of implementation; the scope of the present invention is not limited to these embodiments, and all equivalent changes made according to the present invention fall within its protection scope.

Claims (9)

1. A video-based dynamic micro-expression recognition method, characterized by comprising the following steps:
Step 1: preprocess the video sequence;
Step 2: compute a certain number of key frames of the preprocessed video sequence, interpolate the sequence at these fixed points into a video of specified length, and perform fine alignment;
Step 3: divide the specified-length video into video blocks to obtain video subsets Y1, Y2, …, YM, where M is the number of video blocks;
Step 4: extract the dynamic features of the video subsets and compute their weight information;
Step 5: classify and recognize the video sequence according to the computed results.
2. The dynamic micro-expression recognition method according to claim 1, characterized in that: in Step 1, the preprocessing includes color-image grayscale conversion, histogram equalization, registration by affine transformation, and size normalization.
3. The dynamic micro-expression recognition method according to claim 1, characterized in that: in Step 2, three frames are computed for the preprocessed video sequence: the micro-expression onset frame, the apex frame, and the offset frame.
4. The dynamic micro-expression recognition method according to claim 3, characterized in that: the computation in Step 2 is performed by the 3D gradient projection method.
5. The dynamic micro-expression recognition method according to any one of claims 1 to 4, characterized in that: the concrete implementation of Step 4 comprises the following steps:
(1) extract the dynamic motion information of all video sub-blocks in each video subset;
(2) compute, for each video subset, the per-dimension weights of the feature vectors of its video sub-blocks;
(3) describe each video sub-block: multiply the dynamic motion information of each video sub-block in all video subsets, extracted in step (1), by the per-dimension feature-vector weights computed in step (2) and accumulate the products, obtaining the final dynamic-information descriptor of each video sub-block in the subset;
(4) compute the video sub-block weight vector W = [ω1, ω2, …, ωM]T, where M is the number of video blocks and ωi denotes the discriminative power of the i-th video sub-block over the different micro-expression classes when the dynamic feature descriptor is used to describe the sub-block features.
6. The dynamic micro-expression recognition method according to claim 5, characterized in that: step (1) is implemented with a gradient method or an optical flow method.
7. The dynamic micro-expression recognition method according to claim 6, characterized in that: in step (1), the spatio-temporal gradient descriptor HOG3D is used to extract HOG3D features from all video subsets.
8. The dynamic micro-expression recognition method according to claim 5, characterized in that: in step (4), the video sub-block weights are computed with a weighting method that strengthens the contribution of local features.
9. The dynamic micro-expression recognition method according to any one of claims 1 to 4, characterized in that: in Step 5, the recognition and classification method is:
A1: divide the preprocessed fixed-length video sequences into a training set and a test set, describe all test video sub-blocks divided in the test set and all training video sub-blocks divided in the training set, and compute the distance from each test video sub-block to the corresponding sub-blocks of all training videos;
A2: apply the weighted fuzzy classification method to the video blocks divided from each test video in the test set;
A3: compute the membership degree of each video block of the test video with respect to the corresponding video sub-blocks of all training videos, obtaining the classification result of each video sub-block;
A4: fuse the classification results of all video blocks to obtain the weighted block membership degrees and the weighted total membership degree of each video block;
A5: classify the dynamic micro-expression of the facial image by the maximum-membership principle.
CN201610265428.8A 2016-04-26 2016-04-26 Video-based dynamic micro-expression recognition method Expired - Fee Related CN105913038B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610265428.8A CN105913038B (en) 2016-04-26 2016-04-26 Video-based dynamic micro-expression recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610265428.8A CN105913038B (en) 2016-04-26 2016-04-26 Video-based dynamic micro-expression recognition method

Publications (2)

Publication Number Publication Date
CN105913038A true CN105913038A (en) 2016-08-31
CN105913038B CN105913038B (en) 2019-08-06

Family

ID=56752673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610265428.8A Expired - Fee Related CN105913038B (en) 2016-04-26 2016-04-26 Video-based dynamic micro-expression recognition method

Country Status (1)

Country Link
CN (1) CN105913038B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258204A (en) * 2012-02-21 2013-08-21 中国科学院心理研究所 Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features
US8848068B2 (en) * 2012-05-08 2014-09-30 Oulun Yliopisto Automated recognition algorithm for detecting facial expressions
CN104298981A (en) * 2014-11-05 2015-01-21 河北工业大学 Face microexpression recognition method
CN104933416A (en) * 2015-06-26 2015-09-23 复旦大学 Micro expression sequence feature extracting method based on optical flow field

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sze-Teng Liong et al., "Subtle Expression Recognition Using Optical Strain Weighted Features", Computer Vision - ACCV 2014 Workshops. *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485227A (en) * 2016-10-14 2017-03-08 深圳市唯特视科技有限公司 A kind of Evaluation of Customer Satisfaction Degree method that is expressed one's feelings based on video face
CN106485228A (en) * 2016-10-14 2017-03-08 深圳市唯特视科技有限公司 A kind of children's interest point analysis method that is expressed one's feelings based on video face
CN106897671A (en) * 2017-01-19 2017-06-27 山东中磁视讯股份有限公司 A kind of micro- expression recognition method encoded based on light stream and FisherVector
CN106909907A (en) * 2017-03-07 2017-06-30 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis accessory system
CN107358206A (en) * 2017-07-13 2017-11-17 山东大学 Micro- expression detection method that a kind of Optical-flow Feature vector modulus value and angle based on area-of-interest combine
CN107358206B (en) * 2017-07-13 2020-02-18 山东大学 Micro-expression detection method based on region-of-interest optical flow features
CN110688874A (en) * 2018-07-04 2020-01-14 杭州海康威视数字技术股份有限公司 Facial expression recognition method and device, readable storage medium and electronic equipment
CN110688874B (en) * 2018-07-04 2022-09-30 杭州海康威视数字技术股份有限公司 Facial expression recognition method and device, readable storage medium and electronic equipment
CN110197107B (en) * 2018-08-17 2024-05-28 平安科技(深圳)有限公司 Micro-expression recognition method, micro-expression recognition device, computer equipment and storage medium
CN110197107A (en) * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 Micro- expression recognition method, device, computer equipment and storage medium
CN109190582A (en) * 2018-09-18 2019-01-11 河南理工大学 A kind of new method of micro- Expression Recognition
CN109190582B (en) * 2018-09-18 2022-02-08 河南理工大学 Novel micro-expression recognition method
CN109398310A (en) * 2018-09-26 2019-03-01 深圳万智联合科技有限公司 A kind of pilotless automobile
CN109508644A (en) * 2018-10-19 2019-03-22 陕西大智慧医疗科技股份有限公司 Facial paralysis grade assessment system based on the analysis of deep video data
WO2020119058A1 (en) * 2018-12-13 2020-06-18 平安科技(深圳)有限公司 Micro-expression description method and device, computer device and readable storage medium
CN111832351A (en) * 2019-04-18 2020-10-27 杭州海康威视数字技术股份有限公司 Event detection method and device and computer equipment
CN110037693A (en) * 2019-04-24 2019-07-23 中央民族大学 A kind of mood classification method based on facial expression and EEG
CN110532911B (en) * 2019-08-19 2021-11-26 南京邮电大学 Covariance measurement driven small sample GIF short video emotion recognition method and system
CN110532911A (en) * 2019-08-19 2019-12-03 南京邮电大学 Covariance measurement drives the short-sighted frequency emotion identification method of small sample GIF and system
CN110598608A (en) * 2019-09-02 2019-12-20 中国航天员科研训练中心 Non-contact and contact cooperative psychological and physiological state intelligent monitoring system
CN110598608B (en) * 2019-09-02 2022-01-14 中国航天员科研训练中心 Non-contact and contact cooperative psychological and physiological state intelligent monitoring system
CN111274978B (en) * 2020-01-22 2023-05-09 广东工业大学 Micro expression recognition method and device
CN111274978A (en) * 2020-01-22 2020-06-12 广东工业大学 Micro-expression recognition method and device
CN111652159A (en) * 2020-06-05 2020-09-11 山东大学 Micro-expression recognition method and system based on multi-level feature combination
CN111652159B (en) * 2020-06-05 2023-04-14 山东大学 Micro-expression recognition method and system based on multi-level feature combination
CN118334324A (en) * 2024-06-17 2024-07-12 中国南方电网有限责任公司超高压输电公司贵阳局 Multi-target recognition and tracking method in transformer substation based on image processing technology
CN118334324B (en) * 2024-06-17 2024-08-16 中国南方电网有限责任公司超高压输电公司贵阳局 Multi-target recognition and tracking method in transformer substation based on image processing technology

Also Published As

Publication number Publication date
CN105913038B (en) 2019-08-06


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190806