CN107403142B - Method for detecting micro-expressions - Google Patents

Method for detecting micro-expressions

Info

Publication number
CN107403142B
CN107403142B (application CN201710541472.1A)
Authority
CN
China
Prior art keywords
frame
micro-expression
detected
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710541472.1A
Other languages
Chinese (zh)
Other versions
CN107403142A (en)
Inventor
贾伟光
贲晛烨
牟骏
李明
邢辰
吴晨
任亿
王建超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong baoshengxin Information Technology Co.,Ltd.
Original Assignee
SHANDONG CHINA MAGNETIC VIDEO CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG CHINA MAGNETIC VIDEO CO Ltd
Priority to CN201710541472.1A
Publication of CN107403142A
Application granted
Publication of CN107403142B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The present invention provides a method for detecting micro-expressions, comprising: extracting N feature points from each image frame in an image sequence to be detected and aligning the faces; selecting K key feature points from the N feature points and dividing them into M feature point clusters; selecting clusters to be detected from the M feature point clusters; selecting a basis frame; computing the key-point vector of the cluster to be detected in each image frame; taking the key-point vector of the basis frame as the basis vector and computing the deformation vector of each image frame; taking D times the largest deformation vector as the deformation threshold; adding every image frame whose deformation vector exceeds the deformation threshold to a quasi micro-expression frame sequence; and, when the quasi micro-expression frame sequence contains consecutive frames whose number is greater than or equal to a preset frame threshold, taking those consecutive frames as the micro-expression frame sequence. With the present invention, the desired micro-expressions can be detected from the image sequence to be detected, and both the recognition capability for micro-expressions and the robustness of the detection method are greatly improved.

Description

Method for detecting micro-expressions
Technical field
This application relates to the technical field of pattern recognition and computer vision, and in particular to a method for detecting micro-expressions.
Background technology
In recent years, technologies that enable human-computer communication by recognizing emotion from features such as voice, facial expression and body language have developed rapidly. Among these, facial expression plays an important role in analyzing human emotion; in many situations, however, people can hide or suppress their true emotions.
A micro-expression is a very brief expression lasting only 1/25 to 1/5 of a second. It can reveal the genuine feelings that a person is trying to conceal, and therefore shows good application prospects in fields such as national security, clinical diagnosis, criminal investigation, danger early warning and personal defense, and is of particular value in lie detection. Research on micro-expressions started relatively late, however, and a large number of problems remain to be solved.
Micro-expression detection refers to determining the positions of the onset frame, climax frame and offset frame of a micro-expression in an image sequence. It is a crucial step in building micro-expression databases and in micro-expression recognition algorithms. Accurate and efficient detection techniques can greatly promote the development of micro-expression databases and of automatic micro-expression recognition, and have very important application prospects and value in clinical examination, case investigation, public security and related fields.
In real life, micro-expressions are difficult to recognize with the naked eye because of their short duration and low intensity. At present only people who have undergone intensive training can distinguish micro-expressions, and even after correct training the recognition accuracy achieved manually is only about 47%. Researchers in computer vision and pattern recognition therefore need to develop micro-expression detection techniques to detect micro-expressions, and this has become a popular research topic in recent years.
With the rapid development of computer vision and pattern recognition techniques, automatic micro-expression detection has achieved many results. For example, in 2009 Shreve et al. divided the face into several main regions, extracted image feature values with a dense optical flow method, estimated optical flow changes with central differences, and detected micro-expressions by comparison against a preset threshold; the method, however, simply divides the face region into 8 blocks and ignores many important expressive regions such as the eyes. In the same year, Polikovsky et al. used 3D gradient orientation histograms on their own micro-expression database to detect the durations of the onset, apex and offset phases of micro-expressions. In 2011, Shreve et al. carried out detection experiments on two kinds of expressions (macro-expressions and micro-expressions) on a mixed expression/micro-expression database they had built using an optical flow method. Wu et al. subsequently captured micro-expressions by extracting Gabor features from the images and training an SVM classifier. In 2014, Moilanen et al. proposed detecting micro-expressions by computing the spatio-temporal information of the image sequence with LBP histogram features; Davison et al. then replaced the LBP features with HOG features for sequence feature extraction and detected micro-expressions by comparison against a baseline threshold. In 2016, Xia et al. proposed a micro-expression detection method based on a geometric deformation model, which uses a random-walk model to estimate the probability that the current frame belongs to a micro-expression frame sequence; in the same year, Qu et al. released a database for expression and micro-expression detection and detected micro-expressions by extracting sample features with the LBP-TOP algorithm, achieving a certain detection performance.
However, existing micro-expression detection methods still have problems: the accuracy of their detection results is not yet high enough, and their automatic recognition capability is still not very strong.
Summary of the invention
In view of this, the present invention provides a method for detecting micro-expressions, so that the desired micro-expressions can be detected from an image sequence to be detected, and both the recognition capability for micro-expressions and the robustness of the detection method are greatly improved.
The technical solution of the present invention is specifically realized as follows.
A method for detecting micro-expressions, the method comprising:
for each image frame in an image sequence to be detected, performing feature point detection on the face in the image frame and extracting N feature points;
aligning the faces in the image frames according to the positions of the N feature points in each image frame;
according to the positions of the feature points and the motion patterns of the facial muscles, selecting K key feature points from the N feature points on the face of each image frame and dividing the K key feature points into M feature point clusters, each feature point cluster containing at least two key feature points;
according to the micro-expression to be detected, selecting at least one feature point cluster from the M feature point clusters as a cluster to be detected;
selecting, from the image sequence to be detected, an image frame representing a neutral expression as the basis frame;
according to the coordinates of the key feature points in the cluster to be detected, calculating the key-point vector of the cluster to be detected in each image frame of the image sequence;
taking the key-point vector of the basis frame as the basis vector, calculating the Euclidean distance between the key-point vector of each image frame of the sequence and the basis vector, and taking the computed Euclidean distance as the deformation vector of the corresponding image frame;
taking the image frame with the largest deformation vector in the image sequence as the climax frame, and taking D times the deformation vector of the climax frame as the deformation threshold, where 0 < D < 1;
adding every image frame of the sequence whose deformation vector exceeds the deformation threshold, as a quasi micro-expression frame, to a quasi micro-expression frame sequence;
when the quasi micro-expression frame sequence contains consecutive frames and the number of those consecutive frames is greater than or equal to a preset frame threshold, taking the consecutive frames as the micro-expression frame sequence.
Preferably, the key-point vector of the cluster to be detected in an image frame is calculated as follows:
the abscissas (x coordinates) of the key feature points of the cluster in the image frame, arranged in a preset order, form the first column of the key-point vector;
the ordinates (y coordinates) of the key feature points, arranged in the same order, form the second column of the key-point vector.
Preferably, the Euclidean distance is calculated by the following formula:
P_i = \lVert b_i - a \rVert = \sqrt{\sum_j \left( b_i(j) - a(j) \right)^2}
where a is the basis vector, b_i is the key-point vector of the i-th image frame of the image sequence to be detected, the sum runs over the corresponding elements of the two arrays, and P_i is the Euclidean distance between the key-point vector of the i-th image frame and the basis vector.
Preferably, the first image frame of the image sequence to be detected is used as the basis frame.
Preferably, the value of N is 68.
Preferably, the value of M is 10.
Preferably, the value of D is 0.6.
Preferably, the value of the frame threshold is 8.
As can be seen from the above, in the micro-expression detection method of the present invention, N feature points are first extracted from the face in each image frame and the faces in the frames are aligned; K key feature points are then selected according to the positions of the feature points and the motion patterns of the facial muscles and divided into M feature point clusters. Next, clusters to be detected are chosen from the feature point clusters and a basis frame is chosen from the image sequence; the key-point vector of the cluster to be detected is computed for each image frame, and from it the deformation vector of each frame; the frames whose deformation vectors exceed the deformation threshold are added to the quasi micro-expression frame sequence as quasi micro-expression frames; finally, consecutive frames whose number is greater than or equal to the preset frame threshold are taken as the micro-expression frame sequence, so that the desired micro-expressions are detected from the image sequence. Because the method extracts facial feature points, it can emphasize important expressive regions such as the eyes, eyebrows, nose and mouth, and because the key feature points are grouped into different clusters according to the muscle motion patterns, more comprehensive and more discriminative features are obtained for detecting micro-expressions, which greatly improves the recognition capability for micro-expressions and the robustness of the detection method. Moreover, since only Euclidean distances need to be computed, the computational load and the time consumption are greatly reduced; the computation is simple, easy to understand and easy to implement, and can be widely applied to automatic micro-expression recognition.
Description of the drawings
Fig. 1 is a flowchart of the micro-expression detection method in a specific embodiment of the present invention.
Fig. 2 is a schematic diagram of the facial feature point detection result in one image frame in a specific embodiment of the present invention.
Fig. 3 is a schematic diagram of the distribution of the feature point clusters in a specific embodiment of the present invention.
Fig. 4 is a schematic diagram of the parallelogram law of vector addition in two cases in a specific embodiment of the present invention.
Fig. 5 is a schematic diagram of a deformation vector curve in a specific embodiment of the present invention.
Detailed description of the embodiments
To make the technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flowchart of the micro-expression detection method in a specific embodiment of the present invention.
As shown in Fig. 1, in a specific embodiment of the present invention, the micro-expression detection method may include the following steps.
Step 101: for each image frame in the image sequence to be detected, perform feature point detection on the face in the image frame and extract N feature points.
In this step, feature point detection is performed on the face in every image frame of the image sequence to be detected, so that N feature points are extracted from the face of each frame.
In the technical solution of the present invention, N is a natural number, and its specific value can be preset according to practical needs. For example, preferably, in a specific embodiment of the present invention the value of N may be 68; of course, other preset values may also be used.
In addition, in the technical solution of the present invention, various concrete implementations can be used to perform the feature point detection and extract the N feature points. For example, preferably, in a specific embodiment of the present invention, the "shape_predictor" function of the open-source DLIB library (a cross-platform general-purpose library written in C++) can be used to perform the feature point detection on the face, yielding the N feature points on the face in the image frame. Fig. 2 is a schematic diagram of the facial feature point detection result in one image frame in a specific embodiment of the present invention; as shown in Fig. 2, after feature point detection, 68 feature points, numbered 0 to 67 in Fig. 2, are obtained on the face in the image frame.
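As a concrete illustration only (the patent does not mandate any particular implementation), the 68-point detection described above could be sketched with dlib roughly as follows; the model file path and the helper function name are assumptions made for this sketch.

```python
import cv2
import dlib

# Assumed path to the standard pretrained 68-point landmark model distributed for dlib.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()       # locates the face rectangle
predictor = dlib.shape_predictor(PREDICTOR_PATH)  # 68-point shape predictor

def detect_landmarks(frame_bgr):
    """Return the N = 68 landmarks of the first detected face as (x, y) tuples,
    or None if no face is found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```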
Step 102: align the faces in the image frames according to the positions of the N feature points in each image frame.
Since N feature points have been extracted from every image frame in step 101, in this step the faces in the image frames can be aligned according to the positions of these feature points, so that the positions of the facial feature points are consistent across all frames.
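The patent does not spell out how the alignment itself is performed; one plausible sketch, assuming OpenCV is available, estimates a similarity transform from each frame's landmarks to those of a reference frame and warps both the frame and its landmark coordinates. The function and variable names here are illustrative only.

```python
import cv2
import numpy as np

def align_to_reference(frame, landmarks, ref_landmarks):
    """Warp `frame` so that its landmarks best match `ref_landmarks` under a
    similarity transform (rotation, uniform scale, translation); also return
    the transformed landmark coordinates."""
    src = np.asarray(landmarks, dtype=np.float32)
    dst = np.asarray(ref_landmarks, dtype=np.float32)
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # 2x3 similarity transform
    if M is None:                                  # estimation failed: leave frame unchanged
        return frame, src
    h, w = frame.shape[:2]
    aligned_frame = cv2.warpAffine(frame, M, (w, h))
    aligned_pts = cv2.transform(src.reshape(-1, 1, 2), M).reshape(-1, 2)
    return aligned_frame, aligned_pts
```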
Step 103: according to the positions of the feature points and the motion patterns of the facial muscles, select K key feature points from the N feature points on the face of each image frame and divide them into M feature point clusters, each cluster containing at least two key feature points.
In the technical solution of the present invention, the N extracted feature points are distributed over all regions of the face, whereas the micro-expressions to be detected generally appear only in certain specific regions. Therefore, K key feature points can be selected in the regions of interest on the face (i.e. the regions where micro-expressions may appear) and divided into M feature point clusters, so that micro-expressions can be detected in the subsequent operations.
Accordingly, in the technical solution of the present invention, the K key feature points can be chosen from the regions where micro-expressions may appear, for example the eyebrows, eyes, nose, mouth and chin.
Since the positions of the feature points differ and the motion patterns of the muscles in different facial regions are not identical (for example, an eyebrow has an inner corner and an outer corner, the muscle groups at the two corners differ, and so do their motion patterns), the K key feature points are selected from the N feature points on the face of each image frame according to the positions of the feature points and the motion patterns of the facial muscles, and are divided into M feature point clusters, each cluster containing at least two key feature points.
In addition, in the technical solution of the present invention, M and K are natural numbers, and their specific values can be preset according to practical needs.
For example, preferably, in a specific embodiment of the present invention the value of M may be 10; of course, other preset values may also be used.
For example, Fig. 3 is a schematic diagram of the distribution of the feature point clusters in a specific embodiment of the present invention. As shown in Fig. 3, the following 10 feature point clusters, feature1 to feature10, can be set on the face in each image frame:
feature1 is located at the outer corner of the left eyebrow and contains 2 key feature points: 17-18;
feature2 is located at the inner corner of the left eyebrow and contains 3 key feature points: 19-21;
feature3 is located at the inner corner of the right eyebrow and contains 3 key feature points: 22-24;
feature4 is located at the outer corner of the right eyebrow and contains 2 key feature points: 25-26;
feature5 is located in the left eye region and contains 6 key feature points: 36-41;
feature6 is located in the right eye region and contains 6 key feature points: 42-47;
feature7 is located in the nose region and contains 5 key feature points: 31-35;
feature8 is located in the left mouth corner region and contains 4 key feature points: 48, 49, 59 and 60;
feature9 is located in the right mouth corner region and contains 4 key feature points: 53, 54, 55 and 64;
feature10 is located in the chin region and contains 3 key feature points: 7-9.
In addition, in the technical solution of the present invention, various concrete implementations can be used to set the M feature point clusters on the face in an image frame. For example, preferably, in a specific embodiment of the present invention, the K key feature points can be selected from the N feature points on the face of each image frame, and divided into M feature point clusters, according to the positions of the feature points and the muscle motion patterns of the action units (AU, Action Unit) defined by the Facial Action Coding System (FACS).
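The ten clusters listed above translate directly into a lookup table of landmark indices; a minimal sketch follows (the variable name FEATURE_CLUSTERS is ours, the indices are those of Fig. 3):

```python
# Clusters feature1..feature10 as listed above, using the 68-point numbering of Fig. 2.
FEATURE_CLUSTERS = {
    "feature1":  [17, 18],                  # outer corner of the left eyebrow
    "feature2":  [19, 20, 21],              # inner corner of the left eyebrow
    "feature3":  [22, 23, 24],              # inner corner of the right eyebrow
    "feature4":  [25, 26],                  # outer corner of the right eyebrow
    "feature5":  [36, 37, 38, 39, 40, 41],  # left eye region
    "feature6":  [42, 43, 44, 45, 46, 47],  # right eye region
    "feature7":  [31, 32, 33, 34, 35],      # nose region
    "feature8":  [48, 49, 59, 60],          # left mouth corner region
    "feature9":  [53, 54, 55, 64],          # right mouth corner region
    "feature10": [7, 8, 9],                 # chin region
}
```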
Step 104: according to the micro-expression to be detected, select at least one feature point cluster from the M feature point clusters as a cluster to be detected.
In the technical solution of the present invention, different micro-expressions involve different feature point clusters. For example, a micro-expression involving raised eyebrows usually involves clusters feature1-feature4 or feature1-feature6, whereas a micro-expression involving a curled mouth corner usually involves clusters feature8-feature9. Therefore, in this step, one or more feature point clusters can be selected from the M feature point clusters as clusters to be detected, according to the micro-expression to be detected.
Step 105: select, from the image sequence to be detected, an image frame representing a neutral expression as the basis frame.
In this step, an image frame representing a neutral expression must be selected from the image sequence, i.e. a frame in which the facial expression is neutral rather than a micro-expression, and this frame is used as the basis frame.
Under normal circumstances the expression of the face in the first image frame of the sequence is neutral, so preferably, in a specific embodiment of the present invention, the first image frame of the image sequence to be detected can be used as the basis frame.
Step 106: according to the coordinates of the key feature points in the cluster to be detected, calculate the key-point vector of the cluster to be detected in each image frame of the image sequence.
In the technical solution of the present invention, various concrete implementations can be used to calculate the key-point vectors of the cluster to be detected in the image frames. For example, preferably, in a specific embodiment of the present invention, the key-point vector of the cluster to be detected in an image frame can be calculated as follows:
the abscissas (x coordinates) of the key feature points of the cluster in the image frame, arranged in a preset order, form the first column of the key-point vector;
the ordinates (y coordinates) of the key feature points, arranged in the same order, form the second column of the key-point vector.
For example, suppose the cluster feature2 shown in Fig. 3 is chosen as the cluster to be detected; this cluster contains the 3 key feature points 19-21. Suppose that in the 1st image frame of the sequence the coordinates of these 3 key feature points are (x1, y1), (x2, y2) and (x3, y3); the key-point vector of the cluster in the 1st frame is then the two-dimensional array of three rows and two columns a = [x1, y1; x2, y2; x3, y3]. Suppose that in the 2nd image frame the coordinates of the 3 key feature points are (t1, z1), (t2, z2) and (t3, z3); the key-point vector of the cluster in the 2nd frame is then b2 = [t1, z1; t2, z2; t3, z3]; and so on.
In the same way, the key-point vector of the cluster to be detected can be computed for every image frame of the image sequence to be detected.
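A minimal sketch of this construction, assuming the landmarks of a frame are available as a list of (x, y) tuples indexed as in Fig. 2:

```python
import numpy as np

def keypoint_vector(landmarks, cluster_indices):
    """Build the key-point vector of one cluster in one frame: row j holds the
    (x, y) coordinates of the j-th key feature point of the cluster, giving an
    array with len(cluster_indices) rows and two columns."""
    return np.asarray([landmarks[i] for i in cluster_indices], dtype=np.float64)

# Example: key-point vector of cluster feature2 (landmarks 19-21) in one frame.
# a = keypoint_vector(frame_landmarks, [19, 20, 21])   # shape (3, 2)
```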
Step 107: taking the key-point vector of the basis frame as the basis vector, calculate the Euclidean distance between the key-point vector of each image frame of the sequence and the basis vector, and take the computed Euclidean distance as the deformation vector of the corresponding image frame.
For example, preferably, in a specific embodiment of the present invention, the Euclidean distance can be calculated by the following formula:
P_i = \lVert b_i - a \rVert = \sqrt{\sum_j \left( b_i(j) - a(j) \right)^2}
where a is the basis vector, b_i is the key-point vector of the i-th image frame of the image sequence to be detected, the sum runs over the corresponding elements of the two arrays, and P_i is the Euclidean distance between the key-point vector of the i-th image frame and the basis vector.
Through step 107, the deformation vector of every image frame of the sequence is obtained (the deformation vector of the basis frame being 0), which yields a deformation vector sequence P = [P1, P2, P3, ..., PL], where L is the total number of image frames in the image sequence to be detected.
Step 108: take the image frame with the largest deformation vector in the image sequence as the climax frame, and take D times the deformation vector of the climax frame as the deformation threshold, where 0 < D < 1.
In the technical solution of the present invention, the specific value of D can be preset according to practical needs. For example, preferably, in a specific embodiment of the present invention the value of D may be 0.6; of course, other preset values may also be used.
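Steps 107 and 108 can be sketched together: the deformation value of each frame is the Euclidean distance between its key-point vector and the basis vector, the climax frame is the frame with the largest value, and the deformation threshold is D times that maximum. The following is an illustrative sketch under the notation introduced above, not a reference implementation:

```python
import numpy as np

def deformation_sequence(keypoint_vectors, basis_index=0, d=0.6):
    """Given one key-point vector (a (K, 2) array) per frame, return the deformation
    value P_i of every frame (Euclidean distance to the basis frame's key-point
    vector), the index of the climax frame, and the deformation threshold d * max(P)."""
    a = keypoint_vectors[basis_index]
    P = np.array([np.linalg.norm(b - a) for b in keypoint_vectors])
    climax = int(np.argmax(P))
    threshold = d * P[climax]
    return P, climax, threshold
```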
Step 109: add every image frame of the sequence whose deformation vector exceeds the deformation threshold, as a quasi micro-expression frame, to the quasi micro-expression frame sequence.
Suppose there are two n-dimensional vectors X = {x1, ..., xn} and Y = {y1, ..., yn}, and let Z be their vector sum; according to the parallelogram law, Z represents the sum of X and Y. For a particular facial feature point, if X is the position vector of that feature point at time t and Y is its position vector at time t+1, then Z represents the accumulated deformation of that feature point between times t and t+1. Fig. 4 shows the parallelogram law of vector addition in two cases: if the principal directions of the position vectors X and Y are consistent, Z grows in amplitude, as shown in the left part of Fig. 4; if their principal directions are inconsistent, Z becomes smaller, as shown in the right part of Fig. 4.
In the technical solution of the present invention, the deformation vector of an image frame represents the accumulated deformation of the cluster to be detected in that frame at different moments, i.e. the motion tendency of the region covered by the cluster. Therefore, if the deformation vector of an image frame exceeds the preset deformation threshold, the cluster to be detected in that frame may be exhibiting a micro-expression. So, in this step, the image frames whose deformation vectors exceed the deformation threshold are taken as quasi micro-expression frames, and all quasi micro-expression frames of the image sequence are added to the quasi micro-expression frame sequence.
Step 110: when the quasi micro-expression frame sequence contains consecutive frames and the number of those consecutive frames is greater than or equal to the preset frame threshold, take the consecutive frames as the micro-expression frame sequence.
In the technical solution of the present invention, the specific value of the frame threshold can be preset according to practical needs. For example, preferably, in a specific embodiment of the present invention the value of the frame threshold may be 8 (because the minimum frame sequence length of a micro-expression sample in the SDU database is 8 frames); of course, other preset values may also be used.
In this step, the consecutive frames whose number is greater than or equal to the preset frame threshold are taken as the micro-expression frame sequence; the onset frame, climax frame and offset frame of this micro-expression frame sequence are therefore the onset frame, climax frame and offset frame of the detected micro-expression.
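Steps 109 and 110 amount to thresholding the deformation sequence and keeping every run of consecutive frame indices whose length reaches the frame threshold; a sketch under the same assumptions as above:

```python
import numpy as np

def micro_expression_sequences(P, threshold, frame_threshold=8):
    """Collect the quasi micro-expression frames (deformation above the threshold)
    and return each run of consecutive frame indices of length >= frame_threshold
    as a (start_frame, end_frame) pair, i.e. a detected micro-expression frame sequence."""
    quasi = np.flatnonzero(P > threshold)     # indices of quasi micro-expression frames
    runs, start = [], 0
    for k in range(1, len(quasi) + 1):
        if k == len(quasi) or quasi[k] != quasi[k - 1] + 1:
            run = quasi[start:k]
            if len(run) >= frame_threshold:
                runs.append((int(run[0]), int(run[-1])))
            start = k
    return runs
```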
Through steps 101 to 110, the desired micro-expression is detected from the image sequence to be detected; that is, the faces in the image frames of the micro-expression frame sequence exhibit the micro-expression to be detected.
In addition, in the technical solution of the present invention, the validity of the proposed micro-expression detection method can be verified in many ways.
For example, in a specific embodiment of the present invention, the proposed algorithm can be assessed by verifying it on the CASME II and SDU micro-expression databases.
For example, in the verification experiments the samples of the CASME II and SDU micro-expression databases can be divided into four types, eyebrow region, eye region, nose region and mouth region, and detected separately. Fig. 5 is a schematic diagram of a deformation vector curve in a specific embodiment of the present invention; it shows the deformation vector variation of a "happiness" micro-expression sample.
As shown in Fig. 5, this "happiness" micro-expression sample is an unsegmented original video clip from the SDU micro-expression database with 120 frames in total; the detected micro-expression sequence is frames 37-97 and the main action unit is the right mouth corner, i.e. the feature point cluster feature9 in Fig. 3. Each point on the curve gives a frame index and a deformation value; for example, (62, 22) means that the deformation vector of the 62nd frame is 22. The 62nd frame has the largest deformation vector and is therefore the climax frame. Now suppose D = 0.6; the deformation threshold is then 22 * 0.6 = 13.2. Suppose the frame threshold is 8; since the total number of frames from frame 37 to frame 97 is greater than 8, the consecutive frames 37-97 constitute the micro-expression sequence.
The same processing is applied to every sample of the CASME II and SDU micro-expression databases, yielding a deformation vector curve for each sample, from which the presence or absence of a micro-expression frame sequence can be determined.
In addition, to evaluate the accuracy of the detection results, the experiments can use the hand-coded climax frame as a reference. Since there is always some deviation between automatic detection and hand coding, for the SDU micro-expression database (sample frame rate 90 fps) a result is regarded as correct when the deviation of the detected climax frame index from the hand-coded one is within 8 frames; in other words, if the hand-coded climax frame is denoted W, the detection is regarded as successful when the climax frame detected with the method of the present invention falls within the range [W-8, W+8]. Similarly, for the CASME II micro-expression database (sample frame rate 200 fps), a result is regarded as correct when the deviation of the detected climax frame index from the hand-coded one is within 18 frames.
To evaluate the accuracy of the detection results, let the total number of samples in the micro-expression detection experiments be M_total and the number of successfully detected samples be M_success; the micro-expression detection success rate f can then be expressed as f = M_success / M_total × 100%.
For the SDU database the statistics give M_success = 63 + 8 + 24 + 79 = 174 and M_total = 300, so the overall success rate is f = 174/300 = 58%. For the CASME II database, M_total = 255 and M_success = 52 + 11 + 10 + 61 = 134, so the overall success rate is f = 134/255 ≈ 52.5%. The experimental results are shown in Table 1.
Table 1
The experimental results verify the validity of the micro-expression detection method based on deformation vectors of feature point clusters proposed in the present invention. Viewed by region of interest, however, the detection success rates for the eye and nose regions are lower than those for the eyebrow and mouth regions. From the experimental results it can be inferred that this is because the muscle fibres of the eyebrow and mouth regions are relatively orderly and move regularly, whereas the muscle fibres around the eyes and nose are relatively complicated and their motion patterns are weaker; these factors affect the detection success rate.
In summary, in the technical solution of the present invention, the feature points are first extracted from the face in each image frame and the faces in the frames are aligned; the key feature points are then selected according to their positions and the motion patterns of the facial muscles and grouped into feature point clusters. The clusters to be detected and the basis frame are chosen, the key-point vector of the cluster to be detected is computed for each image frame and from it the deformation vector of each frame; the frames whose deformation vectors exceed the deformation threshold are collected as quasi micro-expression frames, and the runs of consecutive quasi micro-expression frames whose length reaches the preset frame threshold are taken as micro-expression frame sequences, so that the desired micro-expressions are detected from the image sequence. Because the method extracts facial feature points, it can emphasize important expressive regions such as the eyes, eyebrows, nose and mouth, and because the key feature points are grouped into different clusters according to the muscle motion patterns, more comprehensive and more discriminative features are obtained for detecting micro-expressions, which greatly improves the recognition capability for micro-expressions and the robustness of the detection method. Moreover, since only Euclidean distances need to be computed, the computational load and time consumption are greatly reduced; the computation is simple, easy to understand and easy to implement, and can be widely applied to automatic micro-expression recognition.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (7)

1. A method for detecting micro-expressions, characterized in that the method comprises:
for each image frame in an image sequence to be detected, performing feature point detection on the face in the image frame and extracting N feature points;
aligning the faces in the image frames according to the positions of the N feature points in each image frame;
according to the positions of the feature points and the motion patterns of the facial muscles, selecting K key feature points from the N feature points on the face of each image frame and dividing the K key feature points into M feature point clusters, each feature point cluster containing at least two key feature points;
according to the micro-expression to be detected, selecting at least one feature point cluster from the M feature point clusters as a cluster to be detected;
selecting, from the image sequence to be detected, an image frame representing a neutral expression as the basis frame;
according to the coordinates of the key feature points in the cluster to be detected, calculating the key-point vector of the cluster to be detected in each image frame of the image sequence;
taking the key-point vector of the basis frame as the basis vector, calculating the Euclidean distance between the key-point vector of each image frame of the sequence and the basis vector, and taking the computed Euclidean distance as the deformation vector of the corresponding image frame;
taking the image frame with the largest deformation vector in the image sequence as the climax frame, and taking D times the deformation vector of the climax frame as the deformation threshold, where 0 < D < 1;
adding every image frame of the sequence whose deformation vector exceeds the deformation threshold, as a quasi micro-expression frame, to a quasi micro-expression frame sequence;
when the quasi micro-expression frame sequence contains consecutive frames and the number of those consecutive frames is greater than or equal to a preset frame threshold, taking the consecutive frames as the micro-expression frame sequence.
2. The method according to claim 1, characterized in that the key-point vector of the cluster to be detected in an image frame is calculated as follows:
the abscissas (x coordinates) of the key feature points of the cluster to be detected in the image frame, arranged in a preset order, form the first column of the key-point vector;
the ordinates (y coordinates) of the key feature points of the cluster to be detected in the image frame, arranged in the same order, form the second column of the key-point vector.
3. The method according to claim 1, characterized in that the Euclidean distance is calculated by the following formula:
P_i = \lVert b_i - a \rVert = \sqrt{\sum_j \left( b_i(j) - a(j) \right)^2}
where a is the basis vector, b_i is the key-point vector of the i-th image frame of the image sequence to be detected, and P_i is the Euclidean distance between the key-point vector of the i-th image frame and the basis vector.
4. The method according to claim 1, characterized in that:
the value of N is 68.
5. The method according to claim 1 or 4, characterized in that:
the value of M is 10.
6. The method according to claim 1 or 4, characterized in that:
the value of D is 0.6.
7. The method according to claim 1 or 4, characterized in that:
the value of the frame threshold is 8.
CN201710541472.1A 2017-07-05 2017-07-05 Method for detecting micro-expressions Active CN107403142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710541472.1A CN107403142B (en) 2017-07-05 2017-07-05 Method for detecting micro-expressions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710541472.1A CN107403142B (en) 2017-07-05 2017-07-05 Method for detecting micro-expressions

Publications (2)

Publication Number Publication Date
CN107403142A CN107403142A (en) 2017-11-28
CN107403142B (en) 2018-08-21

Family

ID=60404920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710541472.1A Active CN107403142B (en) 2017-07-05 2017-07-05 Method for detecting micro-expressions

Country Status (1)

Country Link
CN (1) CN107403142B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569795B (en) * 2018-03-13 2022-10-14 腾讯科技(深圳)有限公司 Image identification method and device and related equipment
CN108647628B (en) * 2018-05-07 2021-10-26 山东大学 Micro-expression recognition method based on multi-feature multi-task dictionary sparse transfer learning
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109190582B (en) * 2018-09-18 2022-02-08 河南理工大学 Novel micro-expression recognition method
US11093886B2 (en) * 2018-11-27 2021-08-17 Fujifilm Business Innovation Corp. Methods for real-time skill assessment of multi-step tasks performed by hand movements using a video camera
CN109800771B (en) * 2019-01-30 2021-03-05 杭州电子科技大学 Spontaneous micro-expression positioning method of local binary pattern of mixed space-time plane
CN110807394A (en) * 2019-10-23 2020-02-18 上海能塔智能科技有限公司 Emotion recognition method, test driving experience evaluation method, device, equipment and medium
CN110991294B (en) * 2019-11-26 2023-06-02 吉林大学 Face action unit recognition method and system capable of being quickly constructed
CN111461021A (en) * 2020-04-01 2020-07-28 中国科学院心理研究所 Micro-expression detection method based on optical flow
CN111582212B (en) * 2020-05-15 2023-04-18 山东大学 Multi-domain fusion micro-expression detection method based on motion unit
CN111626179B (en) * 2020-05-24 2023-04-28 中国科学院心理研究所 Micro-expression detection method based on optical flow superposition
CN112329663B (en) * 2020-11-10 2023-04-07 西南大学 Micro-expression time detection method and device based on face image sequence

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440509A (en) * 2013-08-28 2013-12-11 山东大学 Effective micro-expression automatic identification method
CN104933416A (en) * 2015-06-26 2015-09-23 复旦大学 Micro-expression sequence feature extraction method based on optical flow field
CN105139039A (en) * 2015-09-29 2015-12-09 河北工业大学 Method for recognizing facial micro-expressions in video sequences
CN106096537A (en) * 2016-06-06 2016-11-09 山东大学 Micro-expression automatic identification method based on multi-scale sampling
CN106548149A (en) * 2016-10-26 2017-03-29 河北工业大学 Method for recognizing facial micro-expression image sequences in surveillance video sequences
CN106599800A (en) * 2016-11-25 2017-04-26 哈尔滨工程大学 Facial micro-expression recognition method based on deep learning
CN206147665U (en) * 2016-09-08 2017-05-03 哈尔滨理工大学 Micro-expression anomaly recognition device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2009330607B2 (en) * 2008-12-04 2015-04-09 Cubic Corporation System and methods for dynamically injecting expression information into an animated facial mesh

Also Published As

Publication number Publication date
CN107403142A (en) 2017-11-28

Similar Documents

Publication Publication Date Title
CN107403142B (en) Method for detecting micro-expressions
Yuan et al. Fingerprint liveness detection using an improved CNN with image scale equalization
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
Shan Smile detection by boosting pixel differences
Ko et al. Development of a Facial Emotion Recognition Method based on combining AAM with DBN
CN107463920A Face recognition method that eliminates the influence of partial occlusions
CN103839042B (en) Face identification method and face identification system
CN105335719A (en) Living body detection method and device
CN111126240B (en) Three-channel feature fusion face recognition method
Kantarcı et al. Thermal to visible face recognition using deep autoencoders
CN113537027B (en) Face depth counterfeiting detection method and system based on face division
Cornejo et al. Emotion recognition from occluded facial expressions using weber local descriptor
Wang et al. Forgerynir: deep face forgery and detection in near-infrared scenario
Paul et al. Extraction of facial feature points using cumulative histogram
CN110427881A Cross-database micro-expression recognition method and device based on facial local feature learning
CN111666845B (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
Zhang et al. Real-time automatic deceit detection from involuntary facial expressions
CN113627256B (en) False video inspection method and system based on blink synchronization and binocular movement detection
CN111259759A (en) Cross-database micro-expression recognition method and device based on domain selection migration regression
Faria et al. Interface framework to drive an intelligent wheelchair using facial expressions
CN110598719A (en) Method for automatically generating face image according to visual attribute description
CN113221655A (en) Face spoofing detection method based on feature space constraint
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation
CN114898137A (en) Face recognition-oriented black box sample attack resisting method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180123

Address after: 250101 Shandong Province, Ji'nan City Shun Tai Plaza Building 2, 1201

Applicant after: Shandong China Magnetic Video Co.,Ltd.

Applicant after: Harvest Technology (Beijing) Co., Ltd.

Address before: 250101 Shandong Province, Ji'nan City Shun Tai Plaza Building 2, 1201

Applicant before: Shandong China Magnetic Video Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20180626

Address after: No. 2, No. 2, Shun Tai square, Shandong, Shandong

Applicant after: Shandong China Magnetic Video Co.,Ltd.

Address before: No. 2, No. 2, Shun Tai square, Shandong, Shandong

Applicant before: Shandong China Magnetic Video Co.,Ltd.

Applicant before: Harvest Technology (Beijing) Co., Ltd.

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Micro-facial expression detection method

Effective date of registration: 20191212

Granted publication date: 20180821

Pledgee: Li Yanyan

Pledgor: Shandong China Magnetic Video Co.,Ltd.

Registration number: Y2019370000115

PE01 Entry into force of the registration of the contract for pledge of patent right
TR01 Transfer of patent right

Effective date of registration: 20210728

Address after: 20 / F, east area, building 8, Shuntai Plaza, 2000 Shunhua Road, Jinan District, China (Shandong) pilot Free Trade Zone, Jinan City, Shandong Province

Patentee after: Shandong baoshengxin Information Technology Co.,Ltd.

Address before: No. 2, No. 2, Shun Tai square, Shandong, Shandong

Patentee before: SHANDONG CHINA MAGNETIC VIDEO Co.,Ltd.

TR01 Transfer of patent right