CN111259759A - Cross-database micro-expression recognition method and device based on domain selection migration regression - Google Patents
Cross-database micro-expression recognition method and device based on domain selection migration regression
- Publication number: CN111259759A (application CN202010030236.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/174 — Facial expression recognition
- G06F16/75 — Information retrieval of video data; clustering; classification
- G06F16/7847 — Video retrieval using metadata automatically derived from low-level visual features of the content
- G06F16/7867 — Video retrieval using manually generated metadata, e.g. tags, keywords, comments
- G06F17/18 — Complex mathematical operations for evaluating statistical data, e.g. regression analysis
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a cross-database micro-expression recognition method and device based on domain selection migration regression, comprising the following steps: (1) acquiring two micro-expression databases to serve respectively as the training database and the test database, each containing a number of micro-expression videos and the corresponding micro-expression category labels; (2) converting the micro-expression videos in the training and test databases into micro-expression image sequences, extracting grayscale face images from the sequences, partitioning the faces into blocks, and extracting local facial region features; (3) establishing a domain selection migration regression model and training it on the local facial region features to obtain a sparse projection matrix connecting the local facial region features and the micro-expression category labels; (4) for a micro-expression to be recognized, obtaining its local facial region features according to step (2) and using the learned sparse projection matrix to obtain the corresponding micro-expression category label. The invention achieves higher recognition accuracy.
Description
Technical Field
The invention relates to image processing, in particular to a cross-database micro-expression recognition method and device based on domain selection migration regression.
Background
Micro-expressions are facial expressions revealed involuntarily when a person tries to hide or suppress a genuine inner emotion; they are not under the person's conscious control. As an important non-verbal signal for detecting concealed emotions, micro-expressions can effectively reveal a person's true psychological state, are regarded as a key cue for lie detection, and play an important role in better understanding human emotion. Their effective use therefore matters in social production and daily life. In criminal investigation, an interrogator trained in micro-expression recognition can better detect a suspect's lies; in public security, observing micro-expressions can help identify dangerous individuals hidden in everyday life and prevent terrorist or violent incidents; in clinical medicine, micro-expressions help doctors understand a patient's real thoughts, such as concealment of a condition, so that doctors can communicate with patients more effectively, analyze the condition more accurately, and improve the treatment plan. However, training people to recognize micro-expressions manually is costly and hard to popularize at scale. Hence, in recent years there has been growing demand for micro-expression recognition based on computer vision technology and artificial intelligence methods.
Traditional micro-expression recognition is usually trained and tested on a single micro-expression database. In real applications, however, the training database and the test database often differ considerably, for example in the class balance of the micro-expression samples or in the ethnicity of the subjects, so the recognition results are inaccurate.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a cross-database micro-expression recognition method and device based on domain selection migration regression with higher recognition accuracy.
The technical scheme is as follows: the cross-database micro-expression recognition method based on domain selection migration regression comprises the following steps:
(1) acquiring two micro expression databases which are respectively used as a training database and a testing database, wherein each micro expression database comprises a plurality of micro expression videos and corresponding micro expression category labels;
(2) converting the micro expression videos in the training database and the testing database into micro expression image sequences, extracting gray face images from the micro expression image sequences, and extracting local area features of the face after blocking;
(3) establishing a domain selection migration regression model, and training the model on the local facial region features to obtain a sparse projection matrix connecting the local facial region features and the micro-expression category labels; the domain selection migration regression model is given by formula (1):
in formula (1), L_s ∈ R^{c×N_s} is the micro-expression category label matrix of the training database, c is the number of micro-expression categories, and N_s and N_t are respectively the numbers of micro-expression videos in the training database X_s and the test database X_t; X_s^i and X_t^i are the local facial region features of the i-th block after the blocking operation on the training and test databases, K is the number of blocks, and d is the feature dimension of each block; w_i is the selection weight of the i-th block, and w = [w_i | i = 1, ..., K] is the weight vector; ‖·‖_1 is the ℓ1-norm of a vector; C_i is the relation matrix between the i-th block's local facial region features and the micro-expression category labels L_s, and C_i^T is its transpose; λ, μ and γ are the coefficients of the corresponding constraint terms; 1_{N_s} and 1_{N_t} are all-ones matrices, and R^{m×n} denotes an m-row, n-column real matrix; ψ(·) denotes the kernel mapping;
(4) for the micro-expression to be recognized, obtaining its local facial region features according to step (2), and using the learned sparse projection matrix to obtain the corresponding micro-expression category label.
Further, the step (2) specifically comprises:
(2-1) converting each micro expression video in the training database and the testing database into a micro expression image sequence;
(2-2) performing graying processing on the micro expression image sequence;
(2-3) cropping a rectangular face image from the grayed micro-expression image sequence and scaling it;
(2-4) processing all the scaled face images with an interpolation and key-frame selection algorithm so that each micro-expression video yields the same number of face image frames;
and (2-5) partitioning the face image processed in the step (2-4), and extracting features in each partition to be used as face local area features.
Further, when the face images are partitioned in the step (2-5), each face image is partitioned for multiple times, and the partitions obtained in each time of partitioning are different in size.
Further, the method for learning the domain selection migration regression model comprises the following steps:
(3-1) converting the domain selection migration regression model into:
where C = [C_i | i = 1, ..., K] is the sparse projection matrix connecting the local facial region features with the micro-expression category labels L_s, and P satisfies formula (3):
ψ(C) = [ψ(X_s), ψ(X_t)] P    formula (3)
The ℓ1-norm of P, ‖P‖_1, is defined by formulas (4), (5), (6) and (7), where P_i is the i-th column of P and
P = [P_1 ... P_c]    formula (7)
(3-2) solving the converted domain selection migration regression model to obtain the projection matrix estimate P̂ and the weight estimate ŵ.
Further, the step (3-2) specifically comprises:
(3-2-1) keeping w unchanged, updating P:
A. converting formula (2) to formula (8)
The Lagrange function is formula (9):
where T denotes the Lagrange multiplier matrix, κ denotes the penalty coefficient of the sparse constraint term, and tr[·] denotes the trace of a matrix,
B. solving the Lagrangian function of formula (9), specifically with the following steps:
I. keeping P, T and kappa unchanged, updating Q:
converting formula (8) into the following formula (10), which has a closed-form solution given by formula (11), where I is an identity matrix;
II. Keeping Q, T and kappa unchanged, updating P:
formula (8) is converted into formula (12)
The optimal solution for formula (12) is as in formula (13)
III, update T and κ:
updating T and kappa according to equations (14) and (15)
T = T + κ(P − Q)    formula (14)
κ = min(ρκ, κ_max)    formula (15)
where κ_max is the preset maximum value of κ, and ρ is a scaling factor with ρ > 1;
IV, checking whether convergence occurs:
checking whether formula (16) has converged; if it has not, return to step I; if it has, or if the iteration count exceeds the set value, output the current matrices P, Q, T and κ,
‖P − Q‖_∞ < ε    formula (16)
where ‖·‖_∞ takes the element of maximum magnitude, and ε denotes the convergence threshold;
(3-2-2) keeping P unchanged, updating w:
A. converting formula (9) into formula (17):
where the vectors are formed by stacking the columns of the corresponding matrices in sequence, and each label vector denotes the corresponding column of L_s,
B. solving formula (17) with the SLEP algorithm, and outputting w;
(3-2-3) checking for convergence:
when the preset maximum number of iteration steps is reached, or the value of objective function (18) falls below the preset value, output the current values of P and w as the projection matrix estimate P̂ and the weight estimate ŵ; otherwise, return to step (3-2-1).
further, the step (4) specifically comprises:
Using the learned sparse projection matrix P̂ and weight ŵ, the emotion category of the micro-expression to be recognized is predicted by formula (19):
where the intermediate quantity is determined by formula (20), x_te is the local facial region feature of the face to be recognized, l_te is the predicted emotion category of the micro-expression to be recognized, and w_i is the i-th element of w;
The cross-database micro-expression recognition device based on domain selection migration regression comprises a processor and a computer program stored in a memory and executable on the processor; the processor implements the above method when executing the program.
Advantageous effects: the invention achieves higher recognition accuracy.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a cross-database micro-expression recognition method based on domain selection migration regression according to the present invention;
fig. 2 is a schematic diagram of image-sequence blocking.
Detailed Description
The embodiment provides a cross-database micro-expression recognition method based on domain selection migration regression, as shown in fig. 1, including the following steps:
(1) acquiring two micro-expression databases to serve respectively as the training database and the test database, each containing a number of micro-expression videos and the corresponding micro-expression category labels.
(2) converting the micro-expression videos in the training and test databases into micro-expression image sequences, extracting grayscale face images from the sequences, partitioning the faces into blocks, and extracting local facial region features.
The method specifically comprises the following steps:
(2-1) converting each micro expression video in the training database and the testing database into a micro expression image sequence;
(2-2) performing graying processing on the micro-expression image sequence; the graying is implemented with OpenCV's COLOR_BGR2GRAY conversion;
(2-3) cropping a rectangular face image from the grayed micro-expression image sequence and scaling it. Before cropping, face detection is performed using the face_landmarks function of the face_recognition library. When a video is cut into face images, all frames are located according to the face position detected in the first frame of the video; the minimum and maximum values on the horizontal and vertical axes are respectively x_min = x_left_cheek − 10, x_max = x_right_cheek + 10, y_min = y_eyebrow_top − 30, y_max = y_jaw. The face image is scaled to 112×112 pixels;
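The cropping rule above can be sketched as follows. This is a minimal numpy-only illustration: it assumes the cheek, eyebrow and jaw landmark coordinates have already been obtained from a detector (the embodiment uses face_recognition's face_landmarks), and substitutes a nearest-neighbour resize for the scaling step.

```python
import numpy as np

def crop_face(gray, left_cheek_x, right_cheek_x, eyebrow_top_y, jaw_y, out_size=112):
    """Crop a face rectangle using the margins described above, then scale.

    Landmark coordinates are plain ints assumed to come from an external
    detector; the -10/+10/-30 margins follow the patent's cropping rule.
    """
    h, w = gray.shape
    x_min = max(left_cheek_x - 10, 0)
    x_max = min(right_cheek_x + 10, w)
    y_min = max(eyebrow_top_y - 30, 0)
    y_max = min(jaw_y, h)
    face = gray[y_min:y_max, x_min:x_max]
    # Nearest-neighbour resize to out_size x out_size (stand-in for cv2.resize).
    rows = (np.arange(out_size) * face.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * face.shape[1] / out_size).astype(int)
    return face[np.ix_(rows, cols)]

frame = np.random.rand(240, 320)  # one grayed video frame
face = crop_face(frame, left_cheek_x=100, right_cheek_x=220,
                 eyebrow_top_y=60, jaw_y=200)
print(face.shape)  # (112, 112)
```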
(2-4) processing all the scaled face images with an interpolation and key-frame selection algorithm so that each micro-expression video yields the same number of face image frames; the interpolation uses the TIM temporal interpolation method proposed in the TPAMI 2014 paper "A Compact Representation of Visual Speech Data Using Latent Variables", selecting 16 face images for each video;
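The fixed-length normalisation step can be made concrete with plain linear interpolation along the time axis. Note this is only a stand-in for the TIM latent-variable model actually cited; it shows the resampling of a variable-length clip to 16 frames.

```python
import numpy as np

def interpolate_sequence(frames, target_len=16):
    """Resample a variable-length frame sequence to target_len frames.

    Simple linear interpolation between neighbouring frames; the patent uses
    the TIM model instead, so this only illustrates the normalisation step.
    """
    frames = np.asarray(frames, dtype=float)       # shape (T, H, W)
    T = frames.shape[0]
    pos = np.linspace(0, T - 1, target_len)        # target positions on [0, T-1]
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    frac = (pos - lo)[:, None, None]
    return (1 - frac) * frames[lo] + frac * frames[hi]

clip = np.random.rand(9, 112, 112)                 # a 9-frame micro-expression clip
fixed = interpolate_sequence(clip)
print(fixed.shape)  # (16, 112, 112)
```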
(2-5) partitioning the face images processed in step (2-4), and extracting a feature within each block as the local facial region features. Each face image is partitioned several times with different block sizes; as shown in fig. 2, the face image can be divided into 85 blocks in total, using 1×1, 2×2, 4×4 and 8×8 grids. A feature is extracted for each block, i.e. each local facial region; the feature type is not limited and can be any descriptor, such as LBP-TOP, LPQ-TOP or LBP-SIP.
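The multi-scale blocking into 1 + 4 + 16 + 64 = 85 regions can be sketched as below; this is a minimal sketch that only produces the sub-images, with the per-block spatio-temporal feature extraction (LBP-TOP etc.) omitted.

```python
import numpy as np

def multiscale_blocks(img, grids=(1, 2, 4, 8)):
    """Split a face image into 1x1 + 2x2 + 4x4 + 8x8 = 85 blocks.

    Returns the list of sub-images; in the method each block would then be
    described by a spatio-temporal feature such as LBP-TOP or LPQ-TOP.
    """
    h, w = img.shape
    blocks = []
    for g in grids:
        ys = np.linspace(0, h, g + 1).astype(int)  # row boundaries of the g x g grid
        xs = np.linspace(0, w, g + 1).astype(int)  # column boundaries
        for i in range(g):
            for j in range(g):
                blocks.append(img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]])
    return blocks

face = np.random.rand(112, 112)
blocks = multiscale_blocks(face)
print(len(blocks))  # 85
```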
(3) establishing a domain selection migration regression model, and training the model on the local facial region features to obtain a sparse projection matrix connecting the local facial region features and the micro-expression category labels.
The domain selection migration regression model is given by formula (1):
in formula (1), L_s ∈ R^{c×N_s} is the micro-expression category label matrix of the training database, c is the number of micro-expression categories, and N_s and N_t are respectively the numbers of micro-expression videos in the training database X_s and the test database X_t; X_s^i and X_t^i are the local facial region features of the i-th block after the blocking operation on the training and test databases, K is the number of blocks, and d is the feature dimension of each block; w_i is the selection weight of the i-th block, and w = [w_i | i = 1, ..., K] is the weight vector; ‖·‖_1 is the ℓ1-norm of a vector; C_i is the relation matrix between the i-th block's local facial region features and the micro-expression category labels L_s, and C_i^T is its transpose; λ, μ and γ are the coefficients of the corresponding constraint terms; 1_{N_s} and 1_{N_t} are all-ones matrices, and R^{m×n} denotes an m-row, n-column real matrix; ψ(·) denotes the kernel mapping.
The method for learning the domain selection migration regression model specifically comprises the following steps:
(3-1) converting the domain selection migration regression model into:
where C = [C_i | i = 1, ..., K] is the sparse projection matrix connecting the local facial region features with the micro-expression category labels L_s, and P satisfies formula (3):
ψ(C) = [ψ(X_s), ψ(X_t)] P    formula (3)
The ℓ1-norm of P, ‖P‖_1, is defined by formulas (4), (5), (6) and (7), where P_i is the i-th column of P and
P = [P_1 ... P_c]    formula (7)
(3-2) solving the converted domain selection migration regression model to obtain the projection matrix estimate P̂ and the weight estimate ŵ. The solver is the alternating direction method (ADM), with the following steps:
(3-2-1) keeping w unchanged, updating P:
A. the above formula can be further written as formula (8)
The Lagrange function is as follows (9):
where T denotes the Lagrange multiplier matrix, κ denotes the penalty coefficient of the sparse constraint term, and tr[·] denotes the trace of a matrix,
B. solving the Lagrangian function of formula (9), specifically with the following steps:
I. keeping P, T and kappa unchanged, updating Q:
converting formula (8) into the following formula (10), which has a closed-form solution given by formula (11), where I is an identity matrix;
II. Keeping Q, T and kappa unchanged, updating P:
formula (8) is converted into formula (12)
The optimal solution for formula (12) is as in formula (13)
III, update T and κ:
updating T and kappa according to equations (14) and (15)
T = T + κ(P − Q)    formula (14)
κ = min(ρκ, κ_max)    formula (15)
where κ_max is the preset maximum value of κ, and ρ is a scaling factor with ρ > 1; here κ_max is set to 10^8 and ρ to 1.1.
IV, checking whether convergence occurs:
checking whether formula (16) has converged; if it has not, return to step I; if it has, or if the iteration count exceeds the set value (the maximum number of iterations is set to 10^6), output the current matrices P, Q, T and κ,
‖P − Q‖_∞ < ε    formula (16)
where ‖·‖_∞ takes the element of maximum magnitude, and ε denotes the convergence threshold;
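Steps I–IV above follow the standard ADM pattern: a closed-form quadratic update, a soft-thresholding sparse update, a multiplier/penalty update, and an infinity-norm convergence check. The sketch below applies that same pattern to a generic sparse regression min ‖AQ − B‖² + λ‖P‖₁ s.t. P = Q; it illustrates the update structure only, not the patent's exact formula (2), and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(X, t):
    """Elementwise soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def adm_sparse_regression(A, B, lam=0.1, rho_scale=1.1, kappa=1.0,
                          kappa_max=1e8, eps=1e-6, max_iter=5000):
    """Illustrative ADM loop with the same I-IV structure as the steps above."""
    n = A.shape[1]
    P = np.zeros((n, B.shape[1]))
    Q = P.copy()
    T = np.zeros_like(P)
    I = np.eye(n)
    for _ in range(max_iter):
        # I. closed-form Q update (ridge-type linear system, cf. formula (11))
        Q = np.linalg.solve(2 * A.T @ A + kappa * I, 2 * A.T @ B + kappa * P + T)
        # II. sparse P update by soft-thresholding (cf. formula (13))
        P = soft_threshold(Q - T / kappa, lam / kappa)
        # III. multiplier and penalty updates (cf. formulas (14), (15))
        T = T + kappa * (P - Q)
        kappa = min(rho_scale * kappa, kappa_max)
        # IV. infinity-norm convergence check (cf. formula (16))
        if np.max(np.abs(P - Q)) < eps:
            break
    return P

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
B = A @ rng.standard_normal((8, 3))
P = adm_sparse_regression(A, B)
print(P.shape)  # (8, 3)
```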
(3-2-2) keeping P unchanged, updating w:
A. converting formula (9) into formula (17):
where the vectors are formed by stacking the columns of the corresponding matrices in sequence, and each label vector denotes the corresponding column of L_s,
B. solving formula (17) with the SLEP algorithm, and outputting w;
(3-2-3) checking for convergence:
when the preset maximum number of iteration steps is reached, or the value of objective function (18) falls below the preset value, output the current values of P and w as the projection matrix estimate P̂ and the weight estimate ŵ; otherwise, return to step (3-2-1). Here, the maximum number of iterations is set to 10 and the objective-function threshold to 10^−7.
(4) for the micro-expression to be recognized, obtaining its local facial region features according to step (2), and using the learned sparse projection matrix to obtain the corresponding micro-expression category label. Specifically:
Using the learned sparse projection matrix P̂ and weight ŵ, the emotion category of the micro-expression to be recognized is predicted by formula (19):
where the intermediate quantity is determined by formula (20), x_te is the local facial region feature of the face to be recognized, l_te is the predicted emotion category of the micro-expression to be recognized, and w_i is the i-th element of w;
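A hedged sketch of the prediction step: each block feature is projected by a learned relation matrix C_i, the projections are combined with the block selection weights w_i, and the class with the largest score wins. The plain linear projection stands in for the kernelised formulas (19)/(20), and all matrices here are random placeholders rather than actually learned values.

```python
import numpy as np

def predict_emotion(block_feats, C_blocks, w,
                    labels=("positive", "negative", "surprise")):
    """Combine per-block projections with selection weights and take argmax."""
    scores = sum(w_i * (C_i.T @ x_i)          # c-dimensional score per block
                 for w_i, C_i, x_i in zip(w, C_blocks, block_feats))
    return labels[int(np.argmax(scores))]

rng = np.random.default_rng(1)
K, d, c = 85, 16, 3                           # blocks, feature dim, classes
C_blocks = [rng.standard_normal((d, c)) for _ in range(K)]
w = np.abs(rng.standard_normal(K))            # sparse in the method; dense here
block_feats = [rng.standard_normal(d) for _ in range(K)]
print(predict_emotion(block_feats, C_blocks, w))
```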
the embodiment also provides a cross-database micro-expression recognition device based on domain selection migration regression, which comprises a processor and a computer program stored on a memory and capable of running on the processor, wherein the processor executes the computer program to realize the method.
In order to verify the effectiveness of the invention, cross-database micro-expression recognition was performed between the HS, VIS and NIR sub-databases of the SMIC micro-expression database and the CASME II database, and the verification results are shown in Table 1:
TABLE 1
Training database | Test database | Evaluation index (meanF1/Acc) |
SMIC_HS | SMIC_VIS | 0.8721/87.32 |
SMIC_VIS | SMIC_HS | 0.6401/64.02 |
SMIC_HS | SMIC_NIR | 0.7466/74.65 |
SMIC_NIR | SMIC_HS | 0.5765/57.32 |
SMIC_VIS | SMIC_NIR | 0.7506/76.06 |
SMIC_NIR | SMIC_VIS | 0.8428/84.51 |
CASME II | SMIC_HS | 0.5297/54.27 |
SMIC_HS | CASME II | 0.5622/60.77 |
CASME II | SMIC_VIS | 0.5882/59.15 |
SMIC_VIS | CASME II | 0.7021/70.77 |
CASME II | SMIC_NIR | 0.5009/50.70 |
SMIC_NIR | CASME II | 0.4693/50.77 |
The expression labels of the CASME II database are processed as follows: expressions in the happiness category are mapped to positive; expressions in the sadness, disgust and fear categories are mapped to negative; and labels in the surprise category are kept as surprise. The classes of the SMIC database are positive, negative and surprise.
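The label unification described above can be written as a simple mapping; the English category spellings are an assumption of this sketch.

```python
# Collapse CASME II's emotion labels onto SMIC's three classes,
# per the experimental protocol described above.
CASME2_TO_SMIC = {
    "happiness": "positive",
    "sadness": "negative",
    "disgust": "negative",
    "fear": "negative",
    "surprise": "surprise",
}

casme2_labels = ["disgust", "happiness", "surprise", "fear"]
print([CASME2_TO_SMIC[l] for l in casme2_labels])
# ['negative', 'positive', 'surprise', 'negative']
```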
Experimental results show that the micro-expression recognition method provided by the invention achieves a higher cross-database micro-expression recognition rate.
Claims (7)
1. A cross-database micro-expression recognition method based on domain selection migration regression is characterized by comprising the following steps:
(1) acquiring two micro expression databases which are respectively used as a training database and a testing database, wherein each micro expression database comprises a plurality of micro expression videos and corresponding micro expression category labels;
(2) converting the micro expression videos in the training database and the testing database into micro expression image sequences, extracting gray face images from the micro expression image sequences, and extracting local area features of the face after blocking;
(3) establishing a domain selection migration regression model, and training the model on the local facial region features to obtain a sparse projection matrix connecting the local facial region features and the micro-expression category labels; the domain selection migration regression model is given by formula (1):
in formula (1), L_s ∈ R^{c×N_s} is the micro-expression category label matrix of the training database, c is the number of micro-expression categories, and N_s and N_t are respectively the numbers of micro-expression videos in the training database X_s and the test database X_t; X_s^i and X_t^i are the local facial region features of the i-th block after the blocking operation on the training and test databases, K is the number of blocks, and d is the feature dimension of each block; w_i is the selection weight of the i-th block, and w = [w_i | i = 1, ..., K] is the weight vector; ‖·‖_1 is the ℓ1-norm of a vector; C_i is the relation matrix between the i-th block's local facial region features and the micro-expression category labels L_s, and C_i^T is its transpose; λ, μ and γ are the coefficients of the corresponding constraint terms; 1_{N_s} and 1_{N_t} are all-ones matrices, and R^{m×n} denotes an m-row, n-column real matrix; ψ(·) denotes the kernel mapping;
(4) for the micro-expression to be recognized, obtaining its local facial region features according to step (2), and using the learned sparse projection matrix to obtain the corresponding micro-expression category label.
2. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: the step (2) specifically comprises the following steps:
(2-1) converting each micro expression video in the training database and the testing database into a micro expression image sequence;
(2-2) performing graying processing on the micro expression image sequence;
(2-3) cropping a rectangular face image from the grayed micro-expression image sequence and scaling it;
(2-4) processing all the scaled face images with an interpolation and key-frame selection algorithm so that each micro-expression video yields the same number of face image frames;
and (2-5) partitioning the face image processed in the step (2-4), and extracting features in each partition to be used as face local area features.
3. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: and (3) when the face images are partitioned in the step (2-5), partitioning each face image for multiple times, wherein the partitions obtained in each partitioning are different in size.
4. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: the method for learning the domain selection migration regression model comprises the following steps:
(3-1) converting the domain selection migration regression model into:
where C = [C_i | i = 1, ..., K] is the sparse projection matrix connecting the local facial region features with the micro-expression category labels L_s, and P satisfies formula (3):
ψ(C) = [ψ(X_s), ψ(X_t)] P    formula (3)
The ℓ1-norm of P, ‖P‖_1, is defined by formulas (4), (5), (6) and (7), where P_i is the i-th column of P and
P = [P_1 ... P_c]    formula (7)
5. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 4, wherein: the step (3-2) specifically comprises the following steps:
(3-2-1) keeping w unchanged, updating P:
A. converting formula (2) to formula (8)
The Lagrange function is given by formula (9):
where T represents the Lagrange multiplier matrix, κ represents the sparse constraint term coefficient, and tr[·] denotes the trace of a matrix,
B. solving the lagrangian function of the formula (9), specifically comprising the following steps:
I. keeping P, T and κ unchanged, updating Q:
converting formula (8) to the following formula (10)
Formula (10) has a closed-form solution as in formula (11)
Wherein I is an identity matrix;
II. keeping Q, T and κ unchanged, updating P:
formula (8) is converted into formula (12)
The optimal solution for formula (12) is as in formula (13)
III. updating T and κ:
T and κ are updated according to formulas (14) and (15)
T = T + κ(P − Q)    formula (14)
κ = min(ρκ, κ_max)    formula (15)
where κ_max is the preset maximum value of κ, ρ is the scaling factor, and ρ > 1;
IV. checking for convergence:
checking whether formula (16) is satisfied; if not, returning to step I; if it is satisfied, or if the number of iterations exceeds the set value, outputting the matrices P, Q, T and κ at this time,
‖P − Q‖_∞ < ε    formula (16)
where ‖·‖_∞ denotes the maximum element of its argument, and ε represents the convergence threshold;
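Steps I–IV form a standard augmented-Lagrangian loop. Since the closed-form subproblem solutions of formulas (11) and (13) are not reproduced in this text, the sketch below exercises the same loop structure, including the updates of formulas (14)–(15) and the stopping rule of formula (16), on a simple sparse-approximation surrogate; all names and the surrogate objective are assumptions, not the patent's model:

```python
import numpy as np

def soft_threshold(X, t):
    """Element-wise shrinkage operator used in the Q-subproblem."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def alm_scaffold(A, lam=0.1, kappa=1.0, rho=1.2, kappa_max=1e8,
                 eps=1e-6, max_iter=400):
    """Augmented-Lagrangian loop mirroring steps I-IV of (3-2-1),
    demonstrated on the surrogate problem
        min_P 0.5*||P - A||_F^2 + lam*||Q||_1  subject to  P = Q.
    T is the Lagrange multiplier matrix and kappa the penalty
    coefficient, updated exactly as in formulas (14)-(16); the two
    closed-form subproblem solves stand in for formulas (11), (13)."""
    P = A.copy()
    Q = np.zeros_like(A)
    T = np.zeros_like(A)
    for it in range(max_iter):
        # I. Q-update (closed form: shrinkage), stand-in for formula (11)
        Q = soft_threshold(P + T / kappa, lam / kappa)
        # II. P-update (closed form), stand-in for formula (13)
        P = (A - T + kappa * Q) / (1.0 + kappa)
        # III. multiplier and penalty updates, formulas (14) and (15)
        T = T + kappa * (P - Q)
        kappa = min(rho * kappa, kappa_max)
        # IV. convergence check, formula (16): ||P - Q||_inf < eps
        if np.max(np.abs(P - Q)) < eps:
            break
    return P, Q, T, kappa, it

A = np.random.default_rng(0).standard_normal((5, 4))
P, Q, T, kappa, it = alm_scaffold(A)
print(np.max(np.abs(P - Q)))
```

As κ grows geometrically toward κ_max, the gap P − Q is forced toward zero, which is exactly why the ‖P − Q‖_∞ < ε test of formula (16) is a sensible stopping rule.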
(3-2-2) keeping P unchanged, updating w:
A. conversion of formula (9) to formula (17)
in the formula, the two vectors are formed, respectively, by sequentially stacking the columns of their corresponding matrices, and the remaining symbol represents the c-th column of L_s,
B. solving formula (17) by the SLEP algorithm, and outputting w;
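SLEP solves l1-regularized problems of this general family; as the exact form of formula (17) is not reproduced here, the sketch below substitutes a plain ISTA solver for a generic l1-regularized least-squares problem (D and y are placeholders, not the patent's actual stacked quantities):

```python
import numpy as np

def ista_l1(D, y, lam=0.1, max_iter=500, tol=1e-8):
    """Plain ISTA for  min_w 0.5*||D w - y||^2 + lam*||w||_1,
    the problem family the SLEP package addresses; used here only as a
    stand-in for the w-update of step (3-2-2)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    w = np.zeros(D.shape[1])
    for _ in range(max_iter):
        grad = D.T @ (D @ w - y)           # gradient of the smooth part
        z = w - step * grad
        # proximal (soft-threshold) step for the l1 term
        w_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    return w

rng = np.random.default_rng(2)
D = rng.standard_normal((60, 20))
w_true = np.zeros(20)
w_true[[1, 5, 7]] = [1.0, -2.0, 1.5]
y = D @ w_true
w_hat = ista_l1(D, y, lam=0.1)
print(np.count_nonzero(np.abs(w_hat) > 1e-3))
```

With the step size set to 1/L, each ISTA iteration is guaranteed not to increase the objective, so the recovered w_hat fits the data while the l1 term keeps most of its entries near zero.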
(3-2-3) checking for convergence:
when the preset maximum number of iterations is reached or the value of the objective function (18) falls below a preset value, the values of the matrices P and w at this time are output as the projection matrix estimate and the weight estimate; otherwise, return to step (3-2-1),
6. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: the step (4) specifically comprises the following steps:
using the learned sparse projection matrix and weights, the emotion category of the micro-expression to be recognized is predicted by formula (19):
in the formula, the intermediate quantity is determined by formula (20), x_te is the face local area feature of the sample to be recognized, l_te is the predicted emotion category of the micro-expression to be recognized, and w_i is the i-th element of w;
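Since formulas (19) and (20) are not reproduced in this text, the following is only a plausible sketch of the prediction step, assuming each region feature is projected to the label space and the weighted responses are combined by argmax; the combination rule and all names are assumptions:

```python
import numpy as np

def predict_emotion(x_regions, P_list, w):
    """Hypothetical sketch of the prediction step: each face region's
    feature x_i is projected by its learned matrix P_i onto the label
    space, the responses are combined with the region weights w, and
    the class with the largest combined response is returned."""
    score = sum(w_i * (P_i.T @ x_i)
                for w_i, P_i, x_i in zip(w, P_list, x_regions))
    return int(np.argmax(score))

# Toy example: 4 face regions, 6-dim features, 3 emotion classes.
rng = np.random.default_rng(1)
x_regions = [rng.standard_normal(6) for _ in range(4)]
P_list = [rng.standard_normal((6, 3)) for _ in range(4)]
w = rng.random(4)
label = predict_emotion(x_regions, P_list, w)
print(0 <= label < 3)  # True
```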
7. a cross-database micro-expression recognition apparatus based on domain selection migration regression, comprising a processor and a computer program stored on a memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010030236.5A CN111259759B (en) | 2020-01-13 | 2020-01-13 | Cross-database micro-expression recognition method and device based on domain selection migration regression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259759A true CN111259759A (en) | 2020-06-09 |
CN111259759B CN111259759B (en) | 2023-04-28 |
Family
ID=70948688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010030236.5A Active CN111259759B (en) | 2020-01-13 | 2020-01-13 | Cross-database micro-expression recognition method and device based on domain selection migration regression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259759B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832426A (en) * | 2020-06-23 | 2020-10-27 | 东南大学 | Cross-library micro-expression recognition method and device based on double-sparse transfer learning |
CN112307923A (en) * | 2020-10-30 | 2021-02-02 | 北京中科深智科技有限公司 | Partitioned expression migration method and system |
CN112800951A (en) * | 2021-01-27 | 2021-05-14 | 华南理工大学 | Micro-expression identification method, system, device and medium based on local base characteristics |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647628A (en) * | 2018-05-07 | 2018-10-12 | 山东大学 | A kind of micro- expression recognition method based on the sparse transfer learning of multiple features multitask dictionary |
CN110427881A (en) * | 2019-08-01 | 2019-11-08 | 东南大学 | The micro- expression recognition method of integration across database and device based on the study of face local features |
Non-Patent Citations (5)
Title |
---|
YUAN ZONG et al.: "Domain Regeneration for Cross-Database Micro-Expression Recognition", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
YUAN ZONG et al.: "Learning from Hierarchical Spatiotemporal Descriptors for Micro-Expression Recognition", 《IEEE TRANSACTIONS ON MULTIMEDIA》 *
DING Zechao et al.: "Discriminative facial expression recognition method based on multi-feature joint sparse representation", 《Journal of Chinese Computer Systems》 *
LU Guanming et al.: "Micro-expression recognition based on LBP-TOP features", 《Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition)》 *
ZONG Yuan: "Research on micro-expression recognition based on subspace learning", 《China Master's Theses Full-text Database (Electronic Journals), Information Science and Technology》 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||