CN107403142A - Micro-expression detection method - Google Patents

Micro-expression detection method

Info

Publication number
CN107403142A
Authority
CN
China
Prior art keywords
frame
expression
micro
detected
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710541472.1A
Other languages
Chinese (zh)
Other versions
CN107403142B (en)
Inventor
贾伟光
贲晛烨
牟骏
李明
邢辰
吴晨
任亿
王建超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong baoshengxin Information Technology Co.,Ltd.
Original Assignee
SHANDONG CHINA MAGNETIC VIDEO CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG CHINA MAGNETIC VIDEO CO Ltd filed Critical SHANDONG CHINA MAGNETIC VIDEO CO Ltd
Priority to CN201710541472.1A priority Critical patent/CN107403142B/en
Publication of CN107403142A publication Critical patent/CN107403142A/en
Application granted granted Critical
Publication of CN107403142B publication Critical patent/CN107403142B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The invention provides a micro-expression detection method, including: extracting N feature points from each image frame in an image sequence to be detected and aligning the faces; selecting K key feature points from the N feature points and dividing them into M feature point clusters; selecting clusters to be detected from the M feature point clusters; selecting a base frame; calculating the key point vector of the cluster to be detected in each image frame; taking the key point vector of the base frame as the base vector and calculating the deformation vector of each image frame; taking D times the largest deformation vector as the deformation threshold; adding every image frame whose deformation vector exceeds the deformation threshold to a quasi micro-expression frame sequence; and, when the quasi micro-expression frame sequence contains consecutive frames whose number is greater than or equal to a preset frame threshold, taking those consecutive frames as the micro-expression frame sequence. With the present invention, the desired micro-expressions can be detected from the image sequence to be detected, and the recognition capability for micro-expressions and the robustness of the detection method are greatly improved.

Description

Micro-expression detection method
Technical field
The present application relates to the technical fields of pattern recognition and computer vision, and in particular to a micro-expression detection method.
Background technology
In recent years, technologies that realize human-computer communication by performing emotion recognition on features such as voice, facial expression, and body language have developed rapidly. Among these features, facial expression plays a key role in analyzing human emotion; however, in many situations people hide or suppress their real emotions.
A micro-expression is a very fast expression lasting only 1/25 to 1/5 of a second, and it can reveal the real feelings that a person is trying to hide. It therefore shows good application prospects in fields such as national security, clinical diagnosis, criminal investigation, danger early warning, and personal defense, and is of particular value in lie detection. However, research on micro-expressions started relatively late, and many problems remain to be solved.
Micro-expression detection refers to determining the positions of the onset frame, climax frame, and end frame of a micro-expression in an image sequence. It is a crucial step in building micro-expression databases and in micro-expression recognition algorithms. Accurate and efficient detection techniques can greatly promote the development of micro-expression databases and of automatic micro-expression recognition, and have important application prospects and value in clinical examination, case investigation, public security, and related fields.
In real life, because micro-expressions are short in duration and low in intensity, they are difficult to identify with the naked eye. At present, only people who have undergone intensive training can distinguish micro-expressions, and even after proper training the correct recognition rate of manual observation is only about 47%. Researchers in computer vision and pattern recognition therefore need to develop micro-expression detection techniques, which have become a popular research topic in recent years.
With the rapid development of computer vision and pattern recognition, automatic micro-expression detection has achieved many results. For example, in 2009, Shreve et al. divided the face into several main regions, extracted image feature values using a dense optical flow method, estimated optical flow changes using central interpolation, and detected micro-expressions by comparison with a set threshold; however, this method simply divides the face region into 8 blocks and ignores many important expressive regions such as the eyes. In the same year, Polikovsky et al. used 3D gradient orientation histograms to detect the durations of the onset, apex, and offset phases of micro-expressions in their own database. In 2011, Shreve et al. carried out detection experiments on two kinds of expressions (macro-expressions and micro-expressions) using optical flow on an expression and micro-expression hybrid database they had established; subsequently, Wu et al. extracted Gabor features from images and captured micro-expressions by SVM classification. In 2014, Moilanen et al. proposed detecting micro-expressions by computing the spatio-temporal information of image sequences with LBP histogram features; later, Davison et al. replaced LBP features with HOG features to extract image sequence features and detected micro-expressions by comparison with a baseline threshold. In 2016, Xia et al. proposed a micro-expression detection method based on a geometric deformation model, which uses a random walk model to estimate the probability that the current frame belongs to a micro-expression frame sequence; in the same year, Qu et al. released a database for expression and micro-expression detection and used the LBP-TOP algorithm to extract sample features for micro-expression detection, achieving certain detection results.
However, existing micro-expression detection methods still have some problems: the accuracy of their detection results is still not very high, and their automatic recognition capability is still not very strong.
Summary of the invention
In view of this, the present invention provides a micro-expression detection method, so that the desired micro-expressions can be detected from an image sequence to be detected, and the recognition capability for micro-expressions and the robustness of the detection method are greatly improved.
The technical solution of the present invention is specifically realized as follows:
A micro-expression detection method, the method comprising:
for each image frame in the image sequence to be detected, performing feature point detection on the face in the image frame and extracting N feature points;
aligning the faces in the image frames according to the positions of the N feature points in each image frame;
according to the positions of the feature points and the motion law of the facial muscles, selecting K key feature points from the N feature points on the face of each image frame, and dividing the K key feature points into M feature point clusters, each feature point cluster including at least two key feature points;
according to the micro-expression to be detected, selecting at least one feature point cluster from the M feature point clusters as a cluster to be detected;
selecting an image frame representing a neutral expression from the image sequence to be detected as a base frame;
according to the coordinate parameters of the key feature points in the cluster to be detected, calculating the key point vector of the cluster to be detected in each image frame of the image sequence to be detected;
taking the key point vector of the base frame as a base vector, calculating the Euclidean distance between the key point vector of each image frame in the image sequence to be detected and the base vector, and taking the calculated Euclidean distance as the deformation vector of the corresponding image frame;
taking the image frame with the largest deformation vector in the image sequence to be detected as the climax frame, and taking D times the deformation vector of the climax frame as the deformation threshold, where 0 < D < 1;
adding every image frame in the image sequence to be detected whose deformation vector is greater than the deformation threshold to a quasi micro-expression frame sequence as a quasi micro-expression frame;
when the quasi micro-expression frame sequence contains consecutive frames and the number of the consecutive frames is greater than or equal to a preset frame threshold, taking the consecutive frames as the micro-expression frame sequence.
Preferably, the key point vector of the cluster to be detected in an image frame is calculated as follows:
the abscissas of the key feature points of the cluster to be detected in the image frame, arranged in a preset order, form the first column of the key point vector;
the ordinates of the key feature points of the cluster to be detected in the image frame, arranged in the same preset order, form the second column of the key point vector.
Preferably, the Euclidean distance is calculated by the following formula:
$P_i = \sqrt{\mathrm{Tr}\left((b_i - a)^{T}(b_i - a)\right)}$
where $a$ is the base vector, $b_i$ is the key point vector of the i-th image frame in the image sequence to be detected, and $P_i$ is the Euclidean distance between the key point vector of the i-th image frame and the base vector.
Preferably, the first image frame in the image sequence to be detected is taken as the base frame.
Preferably, the value of N is 68.
Preferably, the value of M is 10.
Preferably, the value of D is 0.6.
Preferably, the value of the frame threshold is 8.
As can be seen from the above, in the micro-expression detection method of the present invention, N feature points are first extracted from the face in each image frame and the faces in the image frames are aligned; then, according to the positions of the feature points and the motion law of the facial muscles, K key feature points are selected and divided into M feature point clusters. Next, clusters to be detected are selected from the feature point clusters, a base frame is selected from the image sequence to be detected, the key point vector of the cluster to be detected in each image frame is calculated, and the deformation vector of each image frame in the image sequence is further calculated. Image frames whose deformation vector exceeds the deformation threshold are then added to a quasi micro-expression frame sequence as quasi micro-expression frames, and finally consecutive frames whose number is greater than or equal to the preset frame threshold are taken as the micro-expression frame sequence, so that the desired micro-expressions are detected from the image sequence to be detected. Because the above micro-expression detection method emphasizes important expressive regions such as the eyes, eyebrows, nose, and mouth by extracting facial feature points, and divides the key feature points into different feature point clusters according to the motion law, more comprehensive and more discriminative features can be obtained for detecting micro-expressions, which greatly improves the recognition capability for micro-expressions and the robustness of the detection method. At the same time, since only Euclidean distances need to be computed, the amount of calculation is greatly reduced, time consumption is reduced, and the computation is simple and easy to understand and implement, so the method can be widely applied to automatic micro-expression recognition.
Brief description of the drawings
Fig. 1 is a flowchart of the micro-expression detection method in a specific embodiment of the present invention.
Fig. 2 is a schematic diagram of the facial feature point detection result in an image frame in a specific embodiment of the present invention.
Fig. 3 is a schematic diagram of the distribution of the feature point clusters in a specific embodiment of the present invention.
Fig. 4 is a schematic diagram of the parallelogram law of vector addition in two cases in a specific embodiment of the present invention.
Fig. 5 is a schematic diagram of a deformation vector change curve in a specific embodiment of the present invention.
Embodiment
In order to make the technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flowchart of the micro-expression detection method in a specific embodiment of the present invention.
As shown in Fig. 1, in a specific embodiment of the present invention, the micro-expression detection method may include the following steps:
Step 101: for each image frame in the image sequence to be detected, perform feature point detection on the face in the image frame and extract N feature points.
In this step, feature point detection needs to be performed on the face in each image frame of the image sequence to be detected, so that N feature points are extracted from the face in each image frame.
In the technical solution of the present invention, the value of N is a natural number, and its specific value can be preset according to the needs of the practical application. For example, preferably, in a specific embodiment of the present invention, the value of N can be 68; of course, according to actual needs, the value of N can also be another preset value.
In addition, in the technical solution of the present invention, a variety of specific implementations can be used to perform feature point detection on the face in an image frame and extract the N feature points. For example, preferably, in a specific embodiment of the present invention, the "shape_predictor" function in the DLIB open-source library (a cross-platform general-purpose library written in C++) can be used to perform feature point detection on the face, so that N feature points are finally obtained on the face in the image frame. For example, Fig. 2 is a schematic diagram of the facial feature point detection result in an image frame in a specific embodiment of the present invention; as shown in Fig. 2, after feature point detection is performed on the face in the image frame, 68 feature points on the face are obtained, namely the feature points labeled 0 to 67 in Fig. 2.
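As an illustration of step 101, the following minimal sketch uses the dlib Python bindings together with the publicly available 68-point landmark model; the model file name and the helper function are assumptions of this example rather than requirements of the patent.

```python
# Sketch of step 101 with dlib (assumed tooling): detect the face in a frame
# and return its 68 facial landmarks as an (N, 2) coordinate array.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(gray_frame):
    """Return a (68, 2) array of (x, y) landmarks for the first detected face,
    or None if no face is found in the frame."""
    faces = detector(gray_frame, 1)          # upsample once to catch small faces
    if len(faces) == 0:
        return None
    shape = predictor(gray_frame, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)
```

The subsequent sketches assume that such a landmark array has been computed for every image frame of the image sequence to be detected.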
Step 102: align the faces in the image frames according to the positions of the N feature points in each image frame.
Since, in step 101, N feature points have been extracted from each image frame of the image sequence to be detected, in this step the faces in the image frames can be aligned according to the positions of the N feature points, so that the positions of the corresponding facial feature points are consistent across the image frames.
Step 103: according to the positions of the feature points and the motion law of the facial muscles, select K key feature points from the N feature points on the face of each image frame, and divide the K key feature points into M feature point clusters, each feature point cluster including at least two key feature points.
In the technical solution of the present invention, although the extracted N feature points are distributed over all regions of the face, the micro-expressions to be detected usually appear only in certain specific regions of the face. Therefore, K key feature points can be selected in the regions of interest on the face (i.e. the regions where micro-expressions may appear), and the K key feature points can be divided into M feature point clusters, so that micro-expressions can be detected in the subsequent operations.
Thus, in the technical solution of the present invention, the K key feature points can be selected from the regions where micro-expressions may appear (for example, the eyebrows, eyes, nose, mouth, and chin).
Since the positions of the feature points differ and the motion laws of the muscles in different regions of the face are not identical (for example, an eyebrow has an inner corner and an outer corner, the muscle groups at the inner corner and at the outer corner are different, and their motion laws naturally differ as well), in the technical solution of the present invention the K key feature points can be selected from the N feature points on the face of each image frame according to the positions of the feature points and the motion law of the facial muscles, and divided into M feature point clusters, each cluster including at least two key feature points.
In addition, in the technical solution of the present invention, the values of M and K are natural numbers, and their specific values can be preset according to the needs of the practical application.
For example, preferably, in a specific embodiment of the present invention, the value of M can be 10; of course, the value of M can also be another preset value.
For example, Fig. 3 is a schematic diagram of the distribution of the feature point clusters in a specific embodiment of the present invention. As shown in Fig. 3, the following 10 feature point clusters, feature1 to feature10, can be set on the face in each image frame:
Feature1 is located in the outer corner region of the left eyebrow and includes 2 key feature points: 17-18;
Feature2 is located in the inner corner region of the left eyebrow and includes 3 key feature points: 19-21;
Feature3 is located in the inner corner region of the right eyebrow and includes 3 key feature points: 22-24;
Feature4 is located in the outer corner region of the right eyebrow and includes 2 key feature points: 25-26;
Feature5 is located in the left eye region and includes 6 key feature points: 36-41;
Feature6 is located in the right eye region and includes 6 key feature points: 42-47;
Feature7 is located in the nose region and includes 5 key feature points: 31-35;
Feature8 is located in the left mouth corner region and includes 4 key feature points: 48, 49, 59, and 60;
Feature9 is located in the right mouth corner region and includes 4 key feature points: 53, 54, 55, and 64;
Feature10 is located in the chin region and includes 3 key feature points: 7-9.
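To make the cluster division concrete, the following dictionary (an illustrative encoding for this sketch, not part of the patent) maps the 10 clusters of Fig. 3 to the landmark indices of the 68-point numbering shown in Fig. 2.

```python
# Illustrative landmark-index lists for the clusters feature1-feature10 of Fig. 3.
FEATURE_CLUSTERS = {
    "feature1":  [17, 18],                  # outer corner of the left eyebrow
    "feature2":  [19, 20, 21],              # inner corner of the left eyebrow
    "feature3":  [22, 23, 24],              # inner corner of the right eyebrow
    "feature4":  [25, 26],                  # outer corner of the right eyebrow
    "feature5":  [36, 37, 38, 39, 40, 41],  # left eye
    "feature6":  [42, 43, 44, 45, 46, 47],  # right eye
    "feature7":  [31, 32, 33, 34, 35],      # nose
    "feature8":  [48, 49, 59, 60],          # left mouth corner
    "feature9":  [53, 54, 55, 64],          # right mouth corner
    "feature10": [7, 8, 9],                 # chin
}
```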
In addition, in the technical solution of the present invention, a variety of specific implementations can be used to set the M feature point clusters on the face in an image frame. For example, preferably, in a specific embodiment of the present invention, the K key feature points can be selected from the N feature points on the face of each image frame, and divided into M feature point clusters, according to the positions of the feature points and the muscle motion laws of the action units (AU, Action Unit) in the Facial Action Coding System (FACS).
Step 104: according to the micro-expression to be detected, select at least one feature point cluster from the M feature point clusters as a cluster to be detected.
In the technical solution of the present invention, different micro-expressions involve different feature point clusters. For example, a micro-expression related to raising the eyebrows usually involves feature1 to feature4 or feature1 to feature6, while a micro-expression related to curling the mouth corners usually involves feature8 to feature9. Therefore, in this step, one or more feature point clusters can be selected from the M feature point clusters as clusters to be detected, according to the micro-expression to be detected, in order to detect the desired micro-expression.
Step 105: select an image frame representing a neutral expression from the image sequence to be detected as the base frame.
In this step, an image frame representing a neutral expression needs to be selected from the image sequence to be detected, i.e. an image frame in which the facial expression is a neutral expression rather than a micro-expression, and the selected image frame is taken as the base frame.
In general, the facial expression in the first image frame of the image sequence to be detected is a neutral expression. Therefore, preferably, in a specific embodiment of the present invention, the first image frame of the image sequence to be detected can be taken as the base frame.
Step 106: according to the coordinate parameters of the key feature points in the cluster to be detected, calculate the key point vector of the cluster to be detected in each image frame of the image sequence to be detected.
In the technical solution of the present invention, a variety of specific implementations can be used to calculate the key point vector of the cluster to be detected in each image frame of the image sequence to be detected. For example, preferably, in a specific embodiment of the present invention, the key point vector of the cluster to be detected in an image frame can be calculated as follows:
the abscissas of the key feature points of the cluster to be detected in the image frame, arranged in a preset order, form the first column of the key point vector;
the ordinates of the key feature points of the cluster to be detected in the image frame, arranged in the same preset order, form the second column of the key point vector.
For example, suppose that feature2 shown in Fig. 3 is selected as the cluster to be detected, and that this cluster contains the 3 key feature points 19 to 21. Suppose that, in the 1st image frame of the image sequence to be detected, the coordinates of these 3 key feature points are $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$; then the key point vector of the cluster to be detected in the 1st image frame is the three-row, two-column matrix $a = [x_1, x_2, x_3; y_1, y_2, y_3]^{T}$, whose first column holds the abscissas and whose second column holds the ordinates. Suppose that, in the 2nd image frame, the coordinates of these 3 key feature points are $(t_1, z_1)$, $(t_2, z_2)$ and $(t_3, z_3)$; then the key point vector of the cluster to be detected in the 2nd image frame is $b_2 = [t_1, t_2, t_3; z_1, z_2, z_3]^{T}$; and so on.
By analogy, the key point vector of the cluster to be detected can be obtained in the above manner for each image frame of the image sequence to be detected.
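A minimal sketch of step 106 under the convention above (first column abscissas, second column ordinates) might look as follows; the function name and argument layout are assumptions of this example.

```python
# Build the key point vector of one cluster in one frame.
# "landmarks" is the (68, 2) array from step 101; "cluster_indices" is a list
# such as FEATURE_CLUSTERS["feature2"] in the earlier sketch.
import numpy as np

def key_point_vector(landmarks, cluster_indices):
    """Return a (K, 2) matrix whose first column holds the abscissas and whose
    second column holds the ordinates of the cluster's key feature points,
    taken in the preset order given by cluster_indices."""
    xs = landmarks[cluster_indices, 0]
    ys = landmarks[cluster_indices, 1]
    return np.column_stack((xs, ys))
```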
Step 107: take the key point vector of the base frame as the base vector, calculate the Euclidean distance between the key point vector of each image frame in the image sequence to be detected and the base vector, and take the calculated Euclidean distance as the deformation vector of the corresponding image frame.
For example, preferably, in a specific embodiment of the present invention, the above Euclidean distance can be calculated by the following formula:
$P_i = \sqrt{\mathrm{Tr}\left((b_i - a)^{T}(b_i - a)\right)}$
where $a$ is the base vector, $b_i$ is the key point vector of the i-th image frame in the image sequence to be detected, and $P_i$ is the Euclidean distance between the key point vector of the i-th image frame and the base vector.
Through the above step 107, the deformation vector of each image frame in the image sequence to be detected can be calculated (the deformation vector of the base frame being 0), so that a deformation vector sequence $P = [P_1, P_2, P_3, \ldots, P_L]$ is obtained, where $L$ is the total number of image frames in the image sequence to be detected.
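Since the key point vectors are K-by-2 matrices, the "Euclidean distance" of step 107 is the Frobenius norm of the difference between a frame's key point vector and the base vector. A sketch of computing the whole deformation sequence P, assuming the key point vectors have already been built as in the previous sketch, is:

```python
# Sketch of step 107: deformation vector P_i = sqrt(Tr((b_i - a)^T (b_i - a)))
# for every frame, with the base frame taken as frame 0 by default.
import numpy as np

def deformation_sequence(key_point_vectors, base_index=0):
    """key_point_vectors: list of (K, 2) matrices, one per image frame.
    Returns the deformation vector sequence P = [P_1, ..., P_L]."""
    a = key_point_vectors[base_index]
    return np.array([
        np.sqrt(np.trace((b - a).T @ (b - a)))   # equal to np.linalg.norm(b - a)
        for b in key_point_vectors
    ])
```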
Step 108: take the image frame with the largest deformation vector in the image sequence to be detected as the climax frame, and take D times the deformation vector of the climax frame as the deformation threshold, where 0 < D < 1.
In the technical solution of the present invention, the specific value of D can be preset according to the needs of the practical application. For example, preferably, in a specific embodiment of the present invention, the value of D can be 0.6; of course, according to actual needs, the value of D can also be another preset value.
Step 109: add every image frame in the image sequence to be detected whose deformation vector is greater than the deformation threshold to the quasi micro-expression frame sequence as a quasi micro-expression frame.
Suppose there are two n-dimensional vectors $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_n\}$ whose sum is the vector $Z$; according to the parallelogram law, $Z$ represents the sum of $X$ and $Y$. For a certain feature point of the face, if $X$ is the position vector of the feature point at time $t$ and $Y$ is its position vector at time $t+1$, then $Z$ represents the accumulated deformation of the feature point over times $t$ and $t+1$. Fig. 4 is a schematic diagram of the parallelogram law of vector addition in two cases in a specific embodiment of the present invention. As shown in Fig. 4, if the principal directions of the position vectors $X$ and $Y$ are consistent, the amplitude of $Z$ increases, as shown in the left part of Fig. 4; if the principal directions of $X$ and $Y$ are inconsistent, $Z$ becomes smaller, as shown in the right part of Fig. 4. For example, with $X = (1, 0)$ and $Y = (0.9, 0.1)$, $|Z| \approx 1.9$, whereas with $X = (1, 0)$ and $Y = (-0.9, 0.1)$, $|Z| \approx 0.14$.
In the technical solution of the present invention, the deformation vector of an image frame represents the accumulated deformation of the cluster to be detected at different times, i.e. the motion trend of the region where the cluster to be detected is located. Therefore, if the deformation vector of an image frame is greater than the preset deformation threshold, it indicates that a micro-expression may have occurred in the cluster to be detected in that image frame. Thus, in this step, image frames whose deformation vector is greater than the deformation threshold can be taken as quasi micro-expression frames, and all the quasi micro-expression frames in the image sequence to be detected are added to the quasi micro-expression frame sequence.
Step 110: when the quasi micro-expression frame sequence contains consecutive frames and the number of the consecutive frames is greater than or equal to the preset frame threshold, take the consecutive frames as the micro-expression frame sequence.
In the technical solution of the present invention, the specific value of the frame threshold can be preset according to the needs of the practical application. For example, preferably, in a specific embodiment of the present invention, the value of the frame threshold can be 8 (because the minimum frame sequence length of the micro-expression samples in the SDU database is 8 frames); of course, according to actual needs, the value of the frame threshold can also be another preset value.
In this step, consecutive frames whose number is greater than or equal to the preset frame threshold can be taken as the micro-expression frame sequence; the onset frame, climax frame, and end frame of this micro-expression frame sequence are then the onset frame, climax frame, and end frame of the detected micro-expression.
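Steps 108 to 110 can then be sketched as follows; returning the onset, climax, and end frame indices of the first sufficiently long run of quasi micro-expression frames is an assumption of this example (the patent only requires that the consecutive frames be taken as the micro-expression frame sequence).

```python
# Sketch of steps 108-110: threshold the deformation sequence at D times its
# maximum and keep a run of consecutive over-threshold frames that is at least
# frame_threshold frames long.
import numpy as np

def detect_micro_expression(P, D=0.6, frame_threshold=8):
    climax = int(np.argmax(P))                 # step 108: climax frame
    threshold = D * P[climax]                  # deformation threshold
    quasi = [i for i, p in enumerate(P) if p > threshold]   # step 109

    # step 110: split the quasi micro-expression frames into consecutive runs
    runs, current = [], []
    for i in quasi:
        if current and i == current[-1] + 1:
            current.append(i)
        else:
            if current:
                runs.append(current)
            current = [i]
    if current:
        runs.append(current)

    for run in runs:
        if len(run) >= frame_threshold:
            return run[0], climax, run[-1]     # onset, climax, end frame indices
    return None                                # no micro-expression detected
```

With D = 0.6 and a frame threshold of 8, applying this sketch to the deformation sequence of the sample in Fig. 5 would reproduce the deformation threshold 22 × 0.6 = 13.2 and the detected frame range 37 to 97 discussed below.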
Through the above steps 101 to 110, the desired micro-expressions can be detected from the image sequence to be detected; that is, the faces in the image frames of the above micro-expression frame sequence exhibit the micro-expression to be detected.
In addition, in the technical solution of the present invention, the validity of the micro-expression detection method proposed in the present invention can be verified in many ways.
For example, in a specific embodiment of the present invention, the CASME II and SDU micro-expression databases can be used to evaluate the validity of the algorithm proposed in the present invention.
For example, in the test experiments, the samples in the CASME II and SDU micro-expression databases can be divided into four types, namely the eyebrow region, the eye region, the nose region, and the mouth region, which are detected separately. For example, Fig. 5 is a schematic diagram of a deformation vector change curve in a specific embodiment of the present invention; what is shown in Fig. 5 is the deformation vector change of a happy micro-expression sample.
As shown in Fig. 5, the above happy micro-expression sample is an original, unsegmented video clip from the SDU micro-expression database, with 120 frames in total. The detected micro-expression sequence is frames 37 to 97, and its main motion unit is the right mouth corner, i.e. the feature point cluster feature9 in Fig. 3. Each point on the curve represents a frame index and the corresponding deformation magnitude; for example, (62, 22) means that the deformation vector of frame 62 is 22. The deformation vector of frame 62 is the largest, so frame 62 is the climax frame. Now, assuming that the value of D is 0.6, the deformation threshold is 22 × 0.6 = 13.2; assuming that the frame threshold is 8, since the total number of frames from frame 37 to frame 97 is greater than 8, the consecutive frames composed of frames 37 to 97 form the micro-expression sequence.
By processing every sample in the CASME II and SDU micro-expression databases in this way, the deformation vector change curve of each sample can be obtained, and whether a micro-expression frame sequence exists can be determined from the deformation vector change curve.
In addition, in order to evaluate the accuracy of the detection results, the climax frames obtained by manual coding can be used as the reference in the experiments. Since there is a certain deviation between computer detection and manual coding, for the SDU micro-expression database (sample frame rate 90 fps) a detection result whose climax frame index deviates from the manually coded one by no more than 8 frames can be regarded as correct; in other words, if the climax frame obtained by manual coding is denoted W, a climax frame detected by the method of the present invention that falls within the range [W-8, W+8] is considered a successful detection. Similarly, for the CASME II micro-expression database (sample frame rate 200 fps), a detection result whose climax frame index deviates from the manually coded one by no more than 18 frames can be regarded as correct.
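The evaluation rule above reduces to a simple tolerance check; a sketch (the function name is illustrative) is:

```python
# True if the detected climax frame lies within [W - tolerance, W + tolerance],
# where W is the manually coded climax frame; tolerance is 8 frames for the
# SDU database (90 fps) and 18 frames for CASME II (200 fps).
def is_detection_successful(detected_climax, coded_climax, tolerance):
    return abs(detected_climax - coded_climax) <= tolerance
```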
In order to evaluate the accuracy of the detection results, according to the micro-expression detection experiments, suppose that the total number of samples is $M_{total}$ and the number of successfully detected samples is $M_{success}$; the micro-expression detection success rate $f$ can then be expressed as $f = M_{success} / M_{total}$.
For the SDU database, the statistics give $M_{success} = 63 + 8 + 24 + 79 = 174$ and $M_{total} = 300$, so the overall success rate is $f = 174/300 = 58\%$. For the CASME II database, $M_{total} = 255$ and $M_{success} = 52 + 11 + 10 + 61 = 134$, so the overall success rate is $f = 134/255 \approx 52.5\%$. The experimental results are shown in Table 1.
Table 1
The experimental results demonstrate the validity of the micro-expression detection method based on the deformation vector features of feature point clusters proposed in the present invention. However, in terms of the regions of interest, the detection success rates for the eye and nose regions are lower than those for the eyebrow and mouth regions. It can be inferred from the experimental results that this is because the muscle fibers in the eyebrow and mouth regions are relatively well ordered and move regularly, whereas the muscle fibers in the eye and nose regions are relatively complex and their motion law is weaker; these factors affect the detection success rate.
In summary, in the technical solution of the present invention, N feature points are first extracted from the face in each image frame and the faces in the image frames are aligned; then, according to the positions of the feature points and the motion law of the facial muscles, K key feature points are selected and divided into M feature point clusters. Next, clusters to be detected are selected from the feature point clusters and a base frame is selected from the image sequence to be detected, the key point vector of the cluster to be detected in each image frame is calculated, and the deformation vector of each image frame in the image sequence is further calculated. Image frames whose deformation vector exceeds the deformation threshold are then added to a quasi micro-expression frame sequence as quasi micro-expression frames, and finally consecutive frames whose number is greater than or equal to the preset frame threshold are taken as the micro-expression frame sequence, so that the desired micro-expressions are detected from the image sequence to be detected. Because the above micro-expression detection method emphasizes important expressive regions such as the eyes, eyebrows, nose, and mouth by extracting facial feature points, and divides the key feature points into different feature point clusters according to the motion law, more comprehensive and more discriminative features can be obtained for detecting micro-expressions, which greatly improves the recognition capability for micro-expressions and the robustness of the detection method. At the same time, since only Euclidean distances need to be computed, the amount of calculation is greatly reduced, time consumption is reduced, and the computation is simple and easy to understand and implement, so the method can be widely applied to automatic micro-expression recognition.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (8)

1. A micro-expression detection method, characterized in that the method comprises:
for each image frame in an image sequence to be detected, performing feature point detection on the face in the image frame and extracting N feature points;
aligning the faces in the image frames according to the positions of the N feature points in each image frame;
according to the positions of the feature points and the motion law of the facial muscles, selecting K key feature points from the N feature points on the face of each image frame, and dividing the K key feature points into M feature point clusters, each feature point cluster including at least two key feature points;
according to the micro-expression to be detected, selecting at least one feature point cluster from the M feature point clusters as a cluster to be detected;
selecting an image frame representing a neutral expression from the image sequence to be detected as a base frame;
according to the coordinate parameters of the key feature points in the cluster to be detected, calculating the key point vector of the cluster to be detected in each image frame of the image sequence to be detected;
taking the key point vector of the base frame as a base vector, calculating the Euclidean distance between the key point vector of each image frame in the image sequence to be detected and the base vector, and taking the calculated Euclidean distance as the deformation vector of the corresponding image frame;
taking the image frame with the largest deformation vector in the image sequence to be detected as the climax frame, and taking D times the deformation vector of the climax frame as the deformation threshold, where 0 < D < 1;
adding every image frame in the image sequence to be detected whose deformation vector is greater than the deformation threshold to a quasi micro-expression frame sequence as a quasi micro-expression frame;
when the quasi micro-expression frame sequence contains consecutive frames and the number of the consecutive frames is greater than or equal to a preset frame threshold, taking the consecutive frames as the micro-expression frame sequence.
2. The method according to claim 1, characterized in that the key point vector of the cluster to be detected in an image frame is calculated as follows:
the abscissas of the key feature points of the cluster to be detected in the image frame, arranged in a preset order, form the first column of the key point vector;
the ordinates of the key feature points of the cluster to be detected in the image frame, arranged in the same preset order, form the second column of the key point vector.
3. The method according to claim 1, characterized in that the Euclidean distance is calculated by the following formula:
$P_i = \sqrt{\mathrm{Tr}\left((b_i - a)^{T}(b_i - a)\right)}$
where $a$ is the base vector, $b_i$ is the key point vector of the i-th image frame in the image sequence to be detected, and $P_i$ is the Euclidean distance between the key point vector of the i-th image frame and the base vector.
4. The method according to claim 1, characterized in that:
the first image frame in the image sequence to be detected is taken as the base frame.
5. The method according to claim 1, characterized in that:
the value of N is 68.
6. The method according to claim 1 or 5, characterized in that:
the value of M is 10.
7. The method according to claim 1 or 5, characterized in that:
the value of D is 0.6.
8. The method according to claim 1 or 5, characterized in that:
the value of the frame threshold is 8.
CN201710541472.1A 2017-07-05 2017-07-05 A kind of detection method of micro- expression Active CN107403142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710541472.1A CN107403142B (en) 2017-07-05 2017-07-05 A kind of detection method of micro- expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710541472.1A CN107403142B (en) 2017-07-05 2017-07-05 A kind of detection method of micro- expression

Publications (2)

Publication Number Publication Date
CN107403142A true CN107403142A (en) 2017-11-28
CN107403142B CN107403142B (en) 2018-08-21

Family

ID=60404920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710541472.1A Active CN107403142B (en) 2017-07-05 2017-07-05 A kind of detection method of micro- expression

Country Status (1)

Country Link
CN (1) CN107403142B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647628A (en) * 2018-05-07 2018-10-12 山东大学 A kind of micro- expression recognition method based on the sparse transfer learning of multiple features multitask dictionary
CN109190582A (en) * 2018-09-18 2019-01-11 河南理工大学 A kind of new method of micro- Expression Recognition
CN109800771A (en) * 2019-01-30 2019-05-24 杭州电子科技大学 Mix spontaneous micro- expression localization method of space-time plane local binary patterns
WO2019174439A1 (en) * 2018-03-13 2019-09-19 腾讯科技(深圳)有限公司 Image recognition method and apparatus, and terminal and storage medium
WO2020029406A1 (en) * 2018-08-07 2020-02-13 平安科技(深圳)有限公司 Human face emotion identification method and device, computer device and storage medium
CN110807394A (en) * 2019-10-23 2020-02-18 上海能塔智能科技有限公司 Emotion recognition method, test driving experience evaluation method, device, equipment and medium
CN110991294A (en) * 2019-11-26 2020-04-10 吉林大学 Method and system for identifying rapidly-constructed human face action unit
CN111222737A (en) * 2018-11-27 2020-06-02 富士施乐株式会社 Method and system for real-time skill assessment and computer readable medium
CN111461021A (en) * 2020-04-01 2020-07-28 中国科学院心理研究所 Micro-expression detection method based on optical flow
CN111582212A (en) * 2020-05-15 2020-08-25 山东大学 Multi-domain fusion micro-expression detection method based on motion unit
CN111626179A (en) * 2020-05-24 2020-09-04 中国科学院心理研究所 Micro-expression detection method based on optical flow superposition
CN112329663A (en) * 2020-11-10 2021-02-05 西南大学 Micro-expression time detection method and device based on face image sequence

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440509A (en) * 2013-08-28 2013-12-11 山东大学 Effective micro-expression automatic identification method
US20140240324A1 (en) * 2008-12-04 2014-08-28 Intific, Inc. Training system and methods for dynamically injecting expression information into an animated facial mesh
CN104933416A (en) * 2015-06-26 2015-09-23 复旦大学 Micro expression sequence feature extracting method based on optical flow field
CN105139039A (en) * 2015-09-29 2015-12-09 河北工业大学 Method for recognizing human face micro-expressions in video sequence
CN106096537A (en) * 2016-06-06 2016-11-09 山东大学 A kind of micro-expression automatic identifying method based on multi-scale sampling
CN106548149A (en) * 2016-10-26 2017-03-29 河北工业大学 The recognition methods of the micro- facial expression image sequence of face in monitor video sequence
CN106599800A (en) * 2016-11-25 2017-04-26 哈尔滨工程大学 Face micro-expression recognition method based on deep learning
CN206147665U (en) * 2016-09-08 2017-05-03 哈尔滨理工大学 Unusual facial expression recognition device a little

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140240324A1 (en) * 2008-12-04 2014-08-28 Intific, Inc. Training system and methods for dynamically injecting expression information into an animated facial mesh
CN103440509A (en) * 2013-08-28 2013-12-11 山东大学 Effective micro-expression automatic identification method
CN104933416A (en) * 2015-06-26 2015-09-23 复旦大学 Micro expression sequence feature extracting method based on optical flow field
CN105139039A (en) * 2015-09-29 2015-12-09 河北工业大学 Method for recognizing human face micro-expressions in video sequence
CN106096537A (en) * 2016-06-06 2016-11-09 山东大学 A kind of micro-expression automatic identifying method based on multi-scale sampling
CN206147665U (en) * 2016-09-08 2017-05-03 哈尔滨理工大学 Unusual facial expression recognition device a little
CN106548149A (en) * 2016-10-26 2017-03-29 河北工业大学 The recognition methods of the micro- facial expression image sequence of face in monitor video sequence
CN106599800A (en) * 2016-11-25 2017-04-26 哈尔滨工程大学 Face micro-expression recognition method based on deep learning

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569795B (en) * 2018-03-13 2022-10-14 腾讯科技(深圳)有限公司 Image identification method and device and related equipment
US11393206B2 (en) * 2018-03-13 2022-07-19 Tencent Technology (Shenzhen) Company Limited Image recognition method and apparatus, terminal, and storage medium
WO2019174439A1 (en) * 2018-03-13 2019-09-19 腾讯科技(深圳)有限公司 Image recognition method and apparatus, and terminal and storage medium
CN110569795A (en) * 2018-03-13 2019-12-13 腾讯科技(深圳)有限公司 Image identification method and device and related equipment
CN108647628B (en) * 2018-05-07 2021-10-26 山东大学 Micro-expression recognition method based on multi-feature multi-task dictionary sparse transfer learning
CN108647628A (en) * 2018-05-07 2018-10-12 山东大学 A kind of micro- expression recognition method based on the sparse transfer learning of multiple features multitask dictionary
WO2020029406A1 (en) * 2018-08-07 2020-02-13 平安科技(深圳)有限公司 Human face emotion identification method and device, computer device and storage medium
CN109190582B (en) * 2018-09-18 2022-02-08 河南理工大学 Novel micro-expression recognition method
CN109190582A (en) * 2018-09-18 2019-01-11 河南理工大学 A kind of new method of micro- Expression Recognition
CN111222737A (en) * 2018-11-27 2020-06-02 富士施乐株式会社 Method and system for real-time skill assessment and computer readable medium
CN111222737B (en) * 2018-11-27 2024-04-05 富士胶片商业创新有限公司 Method and system for real-time skill assessment and computer readable medium
CN109800771A (en) * 2019-01-30 2019-05-24 杭州电子科技大学 Mix spontaneous micro- expression localization method of space-time plane local binary patterns
CN109800771B (en) * 2019-01-30 2021-03-05 杭州电子科技大学 Spontaneous micro-expression positioning method of local binary pattern of mixed space-time plane
CN110807394A (en) * 2019-10-23 2020-02-18 上海能塔智能科技有限公司 Emotion recognition method, test driving experience evaluation method, device, equipment and medium
CN110991294A (en) * 2019-11-26 2020-04-10 吉林大学 Method and system for identifying rapidly-constructed human face action unit
CN111461021A (en) * 2020-04-01 2020-07-28 中国科学院心理研究所 Micro-expression detection method based on optical flow
CN111582212A (en) * 2020-05-15 2020-08-25 山东大学 Multi-domain fusion micro-expression detection method based on motion unit
CN111582212B (en) * 2020-05-15 2023-04-18 山东大学 Multi-domain fusion micro-expression detection method based on motion unit
CN111626179A (en) * 2020-05-24 2020-09-04 中国科学院心理研究所 Micro-expression detection method based on optical flow superposition
CN111626179B (en) * 2020-05-24 2023-04-28 中国科学院心理研究所 Micro-expression detection method based on optical flow superposition
CN112329663A (en) * 2020-11-10 2021-02-05 西南大学 Micro-expression time detection method and device based on face image sequence

Also Published As

Publication number Publication date
CN107403142B (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN107403142B (en) A kind of detection method of micro- expression
Yuan et al. Fingerprint liveness detection using an improved CNN with image scale equalization
CN105005765B (en) A kind of facial expression recognizing method based on Gabor wavelet and gray level co-occurrence matrixes
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN103996195B (en) Image saliency detection method
CN102013011B (en) Front-face-compensation-operator-based multi-pose human face recognition method
CN107358206A (en) Micro- expression detection method that a kind of Optical-flow Feature vector modulus value and angle based on area-of-interest combine
CN106446811A (en) Deep-learning-based driver&#39;s fatigue detection method and apparatus
CN103839042B (en) Face identification method and face identification system
CN108182409A (en) Biopsy method, device, equipment and storage medium
CN110516616A (en) A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set
CN106203256A (en) A kind of low resolution face identification method based on sparse holding canonical correlation analysis
CN105335719A (en) Living body detection method and device
CN106446849B (en) A kind of method for detecting fatigue driving
CN113537027B (en) Face depth counterfeiting detection method and system based on face division
CN109766785A (en) A kind of biopsy method and device of face
Wang et al. Forgerynir: deep face forgery and detection in near-infrared scenario
CN109344763A (en) A kind of strabismus detection method based on convolutional neural networks
CN111666845B (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
CN109101925A (en) Biopsy method
Zhang et al. Real-time automatic deceit detection from involuntary facial expressions
Lanz et al. Automated classification of therapeutic face exercises using the Kinect
CN106778797A (en) A kind of identity intelligent identification Method
Zheng et al. Age classification based on back-propagation network
Li et al. Learning State Assessment in Online Education Based on Multiple Facial Features Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180123

Address after: 250101 Shandong Province, Ji'nan City Shun Tai Plaza Building 2, 1201

Applicant after: Shandong China Magnetic Video Co.,Ltd.

Applicant after: Harvest Technology (Beijing) Co., Ltd.

Address before: 250101 Shandong Province, Ji'nan City Shun Tai Plaza Building 2, 1201

Applicant before: Shandong China Magnetic Video Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20180626

Address after: No. 2, No. 2, Shun Tai square, Shandong, Shandong

Applicant after: Shandong China Magnetic Video Co.,Ltd.

Address before: No. 2, No. 2, Shun Tai square, Shandong, Shandong

Applicant before: Shandong China Magnetic Video Co.,Ltd.

Applicant before: Harvest Technology (Beijing) Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Micro-facial expression detection method

Effective date of registration: 20191212

Granted publication date: 20180821

Pledgee: Li Yanyan

Pledgor: Shandong China Magnetic Video Co.,Ltd.

Registration number: Y2019370000115

TR01 Transfer of patent right

Effective date of registration: 20210728

Address after: 20 / F, east area, building 8, Shuntai Plaza, 2000 Shunhua Road, Jinan District, China (Shandong) pilot Free Trade Zone, Jinan City, Shandong Province

Patentee after: Shandong baoshengxin Information Technology Co.,Ltd.

Address before: No. 2, No. 2, Shun Tai square, Shandong, Shandong

Patentee before: SHANDONG CHINA MAGNETIC VIDEO Co.,Ltd.