CN111259759A - Cross-database micro-expression recognition method and device based on domain selection migration regression - Google Patents

Cross-database micro-expression recognition method and device based on domain selection migration regression

Publication number: CN111259759A (granted as CN111259759B)
Application number: CN202010030236.5A
Applicant and current assignee: Southeast University
Inventors: 宗源, 江星洵, 郑文明, 李阳, 路成, 唐传高, 李溯南
Original language: Chinese (zh)
Legal status: Active (granted)

Classifications

    • G06V40/174 Facial expression recognition
    • G06F16/75 Information retrieval of video data; clustering; classification
    • G06F16/7847 Video retrieval using low-level visual features of the content
    • G06F16/7867 Video retrieval using manually generated metadata, e.g. tags, keywords
    • G06F17/18 Complex mathematical operations for evaluating statistical data, e.g. regression analysis
    • Y02D10/00 Energy efficient computing


Abstract

The invention discloses a cross-database micro-expression recognition method and device based on domain selection migration regression, comprising the following steps: (1) acquiring two micro-expression databases to serve respectively as a training database and a testing database, each comprising a plurality of micro-expression videos and corresponding micro-expression category labels; (2) converting the micro-expression videos in the training and testing databases into micro-expression image sequences, extracting grayscale face images from them, and extracting facial local region features after blocking; (3) establishing a domain selection migration regression model and learning it with the facial local region features to obtain a sparse projection matrix connecting the facial local region features and the micro-expression category labels; (4) for the micro-expression to be recognized, obtaining the facial local region features according to step (2), and obtaining the corresponding micro-expression category label using the learned sparse projection matrix. The invention achieves higher recognition accuracy.

Description

Cross-database micro-expression recognition method and device based on domain selection migration regression
Technical Field
The invention relates to image processing, in particular to a cross-database micro-expression recognition method and device based on domain selection migration regression.
Background
Micro-expressions are facial expressions revealed involuntarily when a person attempts to hide or suppress a true inner emotion, and they are not controlled by the person's subjective awareness. Micro-expressions are an important non-verbal signal for detecting hidden emotions: they can effectively reveal a person's real psychological state, are regarded as a key clue for detecting lies, and play an important role in better understanding human emotion. Effective application of micro-expressions therefore matters in social production and life. In criminal investigation, personnel trained in micro-expression recognition can better detect the lies of criminal suspects; in public security, observing micro-expressions can help identify dangerous individuals hidden in daily life and prevent terrorist and violent incidents; in clinical medicine, micro-expressions help doctors better understand patients' real thoughts, such as concealment of their condition, so that doctors can communicate with patients more effectively, analyze conditions more accurately, and improve treatment plans. However, the cost of training people to recognize micro-expressions manually is high, making large-scale adoption difficult. Hence, in recent years there has been growing demand for micro-expression recognition using computer vision and artificial intelligence methods.
Traditional micro-expression recognition is usually trained and tested on a single micro-expression database. In real applications, however, the training and testing databases usually differ greatly; for example, micro-expression samples may be imbalanced across categories, or the samples may come from different ethnic groups, so the recognition results are inaccurate.
Disclosure of Invention
Purpose of the invention: aiming at the problems in the prior art, the invention provides a cross-database micro-expression recognition method and device based on domain selection migration regression with higher recognition accuracy.
Technical scheme: the cross-database micro-expression recognition method based on domain selection migration regression of the invention comprises the following steps:
(1) acquiring two micro expression databases which are respectively used as a training database and a testing database, wherein each micro expression database comprises a plurality of micro expression videos and corresponding micro expression category labels;
(2) converting the micro expression videos in the training database and the testing database into micro expression image sequences, extracting gray face images from the micro expression image sequences, and extracting local area features of the face after blocking;
(3) establishing a domain selection migration regression model, and learning the model by adopting the local facial region characteristics to obtain a sparse projection matrix connecting the local facial region characteristics and the micro-expression class labels; the domain selection migration regression model specifically comprises the following steps:
[Formula (1): the domain selection migration regression model, reproduced as an image in the original]
wherein L_s is the micro-expression category label matrix of the training database, c is the number of micro-expression categories, and N_s, N_t are respectively the numbers of micro-expression videos in the training database X_s and the testing database X_t; X_s^i and X_t^i are respectively the facial local region features of the i-th block after blocking the training database and the testing database, K is the number of blocks, and d is the feature dimension of each block; w_i is the selection weight of the i-th block, and w = [w_i | i = 1, ..., K] is the weight vector; ‖·‖₁ is the 1-norm of a vector; C^i is the relationship matrix between the facial local region features of the i-th block and the micro-expression category labels L_s; (C^i)^T is the transpose of C^i; λ, μ and γ are the corresponding constraint-term coefficients; 1_{N_s} and 1_{N_t} are matrices whose elements are all 1, where notation of the form R^{m×n} denotes a real matrix with m rows and n columns; ψ(·) denotes the kernel mapping operation;
(4) for the micro-expression to be recognized, obtaining the facial local region features according to step (2), and obtaining the corresponding micro-expression category label using the learned sparse projection matrix.
Further, the step (2) specifically comprises:
(2-1) converting each micro expression video in the training database and the testing database into a micro expression image sequence;
(2-2) performing graying processing on the micro expression image sequence;
(2-3) cutting out a rectangular face image from the micro-expression image sequence subjected to graying processing and zooming;
(2-4) processing all the zoomed face images by utilizing an interpolation and key frame selection algorithm to obtain the same frame number of face images corresponding to each micro expression video;
and (2-5) partitioning the face image processed in the step (2-4), and extracting features in each partition to be used as face local area features.
Further, when the face images are partitioned in the step (2-5), each face image is partitioned for multiple times, and the partitions obtained in each time of partitioning are different in size.
Further, the method for learning the domain selection migration regression model comprises the following steps:
(3-1) converting the domain selection migration regression model into:
[Formula (2): the converted model, reproduced as an image in the original]
wherein C = [C^i | i = 1, ..., K] is the sparse projection matrix connecting the facial local region features with the micro-expression category labels L_s, and P satisfies formula (3):
ψ(C) = [ψ(X_s), ψ(X_t)] P    formula (3)
the auxiliary quantities and the norm of P are as shown in formulas (4), (5), (6) and (7), wherein P_i is the i-th column of P:
[Formulas (4)-(6): reproduced as images in the original]
P = [P₁ ... P_c]    formula (7)
(3-2) solving the converted domain selection migration regression model to obtain the projection matrix estimate and the weight estimate.
Further, the step (3-2) specifically comprises:
(3-2-1) keeping w unchanged, updating P:
A. converting formula (2) into formula (8):
[Formula (8): reproduced as an image in the original]
whose Lagrange function is formula (9):
[Formula (9): reproduced as an image in the original]
wherein T denotes the Lagrange multiplier matrix, κ denotes the sparse constraint-term coefficient, and tr[·] denotes the trace of a matrix;
B. solving the Lagrange function of formula (9), specifically comprising the following steps:
I. keeping P, T and κ unchanged, updating Q:
converting formula (8) into the following formula (10):
[Formula (10): reproduced as an image in the original]
formula (10) has a closed-form solution as formula (11):
[Formula (11): reproduced as an image in the original]
wherein I is an identity matrix;
II. keeping Q, T and κ unchanged, updating P:
formula (8) is converted into formula (12):
[Formula (12): reproduced as an image in the original]
the optimal solution of formula (12) is formula (13):
[Formula (13): reproduced as an image in the original]
III. updating T and κ:
updating T and κ according to formulas (14) and (15):
T = T + κ(P − Q)    formula (14)
κ = min(ρκ, κ_max)    formula (15)
wherein κ_max is the preset maximum value of κ, ρ is a scaling factor, and ρ > 1;
IV. checking whether convergence occurs:
checking whether formula (16) is satisfied; if not, returning to step I; if it is satisfied, or if the number of iterations exceeds the set value, outputting the matrices P, Q, T and κ at this time,
‖P − Q‖_∞ < ε    formula (16)
wherein ‖·‖_∞ takes the maximum element of its argument, and ε denotes the convergence threshold;
(3-2-2) keeping P unchanged, updating w:
A. converting formula (9) into formula (17):
[Formula (17): reproduced as an image in the original]
wherein the vectors appearing in formula (17) are formed by stacking the columns of the corresponding matrices in sequence, and [a symbol reproduced as an image in the original] denotes the c-th column of L_s;
B. solving formula (17) with the SLEP algorithm and outputting w;
(3-2-3) checking for convergence:
when a preset maximum number of iteration steps is reached or the value of the objective function (18) is smaller than a preset value, taking the current values of the matrices P and w as the projection matrix estimate and the weight estimate and outputting them; otherwise, returning to step (3-2-1),
[Formula (18): the objective function, reproduced as an image in the original]
Further, step (4) specifically comprises:
predicting, from the learned sparse projection matrix and weights, the emotion category of the micro-expression to be recognized by formula (19):
[Formula (19): reproduced as an image in the original]
wherein [a term reproduced as an image in the original] is determined by formula (20), x_te is the facial local region feature of the micro-expression to be recognized, l_te is the predicted emotion classification result of the micro-expression to be recognized, and w_i is the i-th element of w;
[Formula (20): reproduced as an image in the original]
The cross-database micro-expression recognition device based on domain selection migration regression comprises a processor and a computer program stored in a memory and executable on the processor, and the processor implements the above method when executing the program.
Beneficial effects: the invention achieves higher recognition accuracy.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a cross-database micro-expression recognition method based on domain selection migration regression according to the present invention;
fig. 2 is a schematic diagram of sequential image tiles.
Detailed Description
The embodiment provides a cross-database micro-expression recognition method based on domain selection migration regression, as shown in fig. 1, including the following steps:
(1) acquiring two micro-expression databases to serve respectively as a training database and a testing database, each comprising a plurality of micro-expression videos and corresponding micro-expression category labels.
(2) converting the micro-expression videos in the training and testing databases into micro-expression image sequences, extracting grayscale face images from them, and extracting facial local region features after blocking.
The method specifically comprises the following steps:
(2-1) converting each micro expression video in the training database and the testing database into a micro expression image sequence;
(2-2) performing graying processing on the micro-expression image sequence; the graying is implemented with OpenCV's color conversion using the COLOR_BGR2GRAY flag;
(2-3) cutting out a rectangular face image from the grayed micro-expression image sequence and scaling it; before cutting, face detection is performed using the face_landmarks function of the face_recognition library. When a video is cut into face images, all frames are positioned according to the face position detected in the first frame of the video, with the minimum and maximum values of the horizontal and vertical axes given by x_min = x_left_cheek − 10, x_max = x_right_cheek + 10, y_min = y_eyebrow_top − 30, y_max = y_chin; the face image is scaled to 112×112 pixels;
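The cropping rule above can be sketched as follows. This is an illustrative sketch only: the landmark coordinates (left/right cheek, eyebrow top, chin) are assumed to have been extracted already, e.g. with face_recognition's face_landmarks, and the nearest-neighbour resize is a minimal stand-in for OpenCV's scaling.

```python
import numpy as np

def crop_face(gray, left_cheek_x, right_cheek_x, eyebrow_top_y, chin_y):
    """Crop per the rule x_min = x_left_cheek - 10, x_max = x_right_cheek + 10,
    y_min = y_eyebrow_top - 30, y_max = y_chin, then scale to 112x112."""
    h, w = gray.shape
    x_min = max(left_cheek_x - 10, 0)
    x_max = min(right_cheek_x + 10, w)
    y_min = max(eyebrow_top_y - 30, 0)
    y_max = min(chin_y, h)
    face = gray[y_min:y_max, x_min:x_max]
    # nearest-neighbour resize to 112x112 (stand-in for a proper resize)
    ys = np.arange(112) * face.shape[0] // 112
    xs = np.arange(112) * face.shape[1] // 112
    return face[np.ix_(ys, xs)]
```

In a full pipeline the four landmark coordinates would come from the first frame's detected landmarks and be reused for every frame of that video, as the patent describes.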
(2-4) processing all the scaled face images with an interpolation and key-frame selection algorithm so that each micro-expression video corresponds to the same number of face images; the interpolation uses the TIM temporal interpolation method proposed by Zhou et al. in the TPAMI 2014 paper "A Compact Representation of Visual Speech Data Using Latent Variables", selecting 16 face images for each video;
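The TIM model itself (a graph-embedding based interpolation) is beyond a short sketch; as a simple illustrative stand-in, plain linear interpolation along the time axis also resamples each video to a fixed 16 frames:

```python
import numpy as np

def interpolate_sequence(frames, n_out=16):
    """Resample a variable-length frame sequence to n_out frames by linear
    interpolation along time; a simple stand-in for the TIM method."""
    frames = np.asarray(frames, dtype=float)      # shape (n_in, H, W)
    n_in = frames.shape[0]
    if n_in == 1:
        return np.repeat(frames, n_out, axis=0)
    pos = np.linspace(0.0, n_in - 1.0, n_out)     # target positions in time
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n_in - 1)
    t = (pos - lo)[:, None, None]                 # fractional part per frame
    return (1.0 - t) * frames[lo] + t * frames[hi]
```

This guarantees every video yields exactly 16 frames whether the original clip is shorter or longer than that.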
(2-5) blocking the face images processed in step (2-4) and extracting features in each block as the facial local region features. When blocking, each face image is blocked multiple times with different block sizes; as shown in fig. 2, the face image can be divided into 85 blocks in total, namely 1x1, 2x2, 4x4 and 8x8 grids. Features are extracted for each block, i.e., each facial local region; the feature type is not limited and can be any feature, such as LBP-TOP, LPQ-TOP or LBP-SIP.
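The multi-scale blocking of step (2-5) can be sketched as follows; the per-block mean/std feature used here is only a toy placeholder for LBP-TOP-style spatio-temporal descriptors.

```python
import numpy as np

def block_features(face, grids=(1, 2, 4, 8),
                   feat=lambda b: np.array([b.mean(), b.std()])):
    """Split a 112x112 face into 1x1 + 2x2 + 4x4 + 8x8 = 85 blocks and apply
    a per-block feature extractor (mean/std toy here, in place of LBP-TOP)."""
    h, w = face.shape
    feats = []
    for g in grids:
        bh, bw = h // g, w // g
        for r in range(g):
            for c in range(g):
                block = face[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                feats.append(feat(block))
    return feats   # list of K = 85 per-block feature vectors
```

Each element of the returned list corresponds to one facial local region X^i in the model, so K = 85 for the grids shown in fig. 2.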
(3) And establishing a domain selection migration regression model, and learning the model by adopting the local facial region characteristics to obtain a sparse projection matrix connecting the local facial region characteristics and the micro expression class labels.
The domain selection migration regression model specifically comprises the following steps:
[Formula (1): the domain selection migration regression model, reproduced as an image in the original]
wherein L_s is the micro-expression category label matrix of the training database, c is the number of micro-expression categories, and N_s, N_t are respectively the numbers of micro-expression videos in the training database X_s and the testing database X_t; X_s^i and X_t^i are respectively the facial local region features of the i-th block after blocking the training database and the testing database, K is the number of blocks, and d is the feature dimension of each block; w_i is the selection weight of the i-th block, and w = [w_i | i = 1, ..., K] is the weight vector; ‖·‖₁ is the 1-norm of a vector; C^i is the relationship matrix between the facial local region features of the i-th block and the micro-expression category labels L_s; (C^i)^T is the transpose of C^i; λ, μ and γ are the corresponding constraint-term coefficients; 1_{N_s} and 1_{N_t} are matrices whose elements are all 1, where notation of the form R^{m×n} denotes a real matrix with m rows and n columns; ψ(·) denotes the kernel mapping operation.
The method for learning the domain selection migration regression model specifically comprises the following steps:
(3-1) converting the domain selection migration regression model into:
[Formula (2): the converted model, reproduced as an image in the original]
wherein C = [C^i | i = 1, ..., K] is the sparse projection matrix connecting the facial local region features with the micro-expression category labels L_s, and P satisfies formula (3):
ψ(C) = [ψ(X_s), ψ(X_t)] P    formula (3)
the auxiliary quantities and the norm of P are as shown in formulas (4), (5), (6) and (7), wherein P_i is the i-th column of P:
[Formulas (4)-(6): reproduced as images in the original]
P = [P₁ ... P_c]    formula (7)
(3-2) solving the converted domain selection migration regression model to obtain the projection matrix estimate and the weight estimate.
The solving method is the ADM (alternating direction method), specifically comprising the following steps:
(3-2-1) keeping w unchanged, updating P:
A. formula (2) can be rewritten as:
[intermediate formula, reproduced as an image in the original]
the above formula can be further written as formula (8):
[Formula (8): reproduced as an image in the original]
whose Lagrange function is formula (9):
[Formula (9): reproduced as an image in the original]
wherein T denotes the Lagrange multiplier matrix, κ denotes the sparse constraint-term coefficient, and tr[·] denotes the trace of a matrix;
B. solving the Lagrange function of formula (9), specifically comprising the following steps:
I. keeping P, T and κ unchanged, updating Q:
converting formula (8) into the following formula (10):
[Formula (10): reproduced as an image in the original]
formula (10) has a closed-form solution as formula (11):
[Formula (11): reproduced as an image in the original]
wherein I is an identity matrix;
II. keeping Q, T and κ unchanged, updating P:
formula (8) is converted into formula (12):
[Formula (12): reproduced as an image in the original]
the optimal solution of formula (12) is formula (13):
[Formula (13): reproduced as an image in the original]
III. updating T and κ:
updating T and κ according to formulas (14) and (15):
T = T + κ(P − Q)    formula (14)
κ = min(ρκ, κ_max)    formula (15)
wherein κ_max is the preset maximum value of κ, ρ is a scaling factor, and ρ > 1; here, κ_max is set to 10^8 and ρ is set to 1.1.
IV. checking whether convergence occurs:
checking whether formula (16) is satisfied; if not, returning to step I; if it is satisfied, or if the number of iterations exceeds the set value, outputting the matrices P, Q, T and κ at this time; the maximum number of iterations is set to 10^6,
‖P − Q‖_∞ < ε    formula (16)
wherein ‖·‖_∞ takes the maximum element of its argument, and ε denotes the convergence threshold;
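Steps I-IV follow the standard alternating-direction pattern. Since formulas (8)-(13) survive only as images, the sketch below applies the same pattern (a closed-form ridge-type Q update, a soft-threshold P update, the dual update of formula (14), the penalty growth of formula (15), and the stopping rule of formula (16)) to a generic sparse regression problem; it illustrates the scheme, not the patent's exact updates.

```python
import numpy as np

def soft(V, t):
    # entrywise soft-thresholding, the closed-form proximal map of the l1 norm
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def adm_sparse_regression(X, L, lam=0.1, kappa=1.0, kappa_max=1e8,
                          rho=1.1, eps=1e-6, max_iter=10**6):
    """Solve min_P ||L - X P||_F^2 + lam * ||P||_1 by ADM with splitting
    P = Q, mirroring steps I-IV above."""
    d, c = X.shape[1], L.shape[1]
    P = np.zeros((d, c)); Q = np.zeros((d, c)); T = np.zeros((d, c))
    G = 2.0 * X.T @ X
    B = 2.0 * X.T @ L
    for _ in range(max_iter):
        # I. closed-form Q update (ridge-like, with an identity matrix as in (11))
        Q = np.linalg.solve(G + kappa * np.eye(d), B + kappa * P + T)
        # II. P update via the l1 proximal operator (stand-in for formula (13))
        P = soft(Q - T / kappa, lam / kappa)
        # III. dual and penalty updates, formulas (14) and (15)
        T = T + kappa * (P - Q)
        kappa = min(rho * kappa, kappa_max)
        # IV. convergence check, formula (16)
        if np.max(np.abs(P - Q)) < eps:
            break
    return P
```

As κ grows geometrically toward κ_max, the two copies P and Q are forced together, which is what makes the ‖P − Q‖_∞ stopping rule effective.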
(3-2-2) keeping P unchanged, updating w:
A. converting formula (9) into formula (17):
[Formula (17): reproduced as an image in the original]
wherein the vectors appearing in formula (17) are formed by stacking the columns of the corresponding matrices in sequence, and [a symbol reproduced as an image in the original] denotes the c-th column of L_s;
B. solving formula (17) with the SLEP algorithm and outputting w;
(3-2-3) checking for convergence:
when the preset maximum number of iteration steps is reached or the value of the objective function (18) is smaller than the preset value, taking the current values of the matrices P and w as the projection matrix estimate and the weight estimate and outputting them; otherwise, returning to step (3-2-1),
[Formula (18): the objective function, reproduced as an image in the original]
here, the maximum number of iteration steps is set to 10 and the objective-function threshold to 10^−7.
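The w-update of step (3-2-2) is an l1-regularized least-squares problem solved with the SLEP package in the patent. As an illustrative stand-in under that assumption, a plain ISTA loop solves the same form of problem:

```python
import numpy as np

def l1_ls_ista(H, y, gamma=0.1, n_iter=500):
    """Solve min_w ||y - H w||_2^2 + gamma * ||w||_1 with ISTA, as an
    illustrative stand-in for the SLEP solver used for the w-update."""
    step = 1.0 / (2.0 * np.linalg.norm(H, 2) ** 2)   # 1 / Lipschitz constant
    w = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ w - y)               # gradient of the LS term
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)
    return w
```

The l1 penalty is what drives many block weights w_i to exactly zero, which is the "domain selection" effect: only a sparse subset of facial regions contributes to the final regression.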
(4) For the micro-expression to be recognized, obtaining the facial local region features according to step (2), and obtaining the corresponding micro-expression category label using the learned sparse projection matrix. The method specifically comprises:
predicting, from the learned sparse projection matrix and weights, the emotion category of the micro-expression to be recognized by formula (19):
[Formula (19): reproduced as an image in the original]
wherein [a term reproduced as an image in the original] is determined by formula (20), x_te is the facial local region feature of the micro-expression to be recognized, l_te is the predicted emotion classification result of the micro-expression to be recognized, and w_i is the i-th element of w;
[Formula (20): reproduced as an image in the original]
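Since formulas (19) and (20) are reproduced only as images, the following is a hedged sketch of the prediction step they describe: per-block regression scores are combined using the learned block weights w_i, and the arg-max class is taken.

```python
import numpy as np

def predict_label(x_blocks, C_blocks, w):
    """Score each emotion class by the weighted sum of per-block regressions
    (in the spirit of formula (19): blocks with larger w_i contribute more),
    then return the arg-max class index."""
    scores = sum(w_i * (C_i.T @ x_i)            # per-block class scores
                 for w_i, C_i, x_i in zip(w, C_blocks, x_blocks))
    return int(np.argmax(scores))
```

Here x_blocks are the K facial local region features of the test sample, C_blocks the learned per-block relationship matrices C^i, and w the learned selection weights; blocks whose weight was driven to zero simply drop out of the sum.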
the embodiment also provides a cross-database micro-expression recognition device based on domain selection migration regression, which comprises a processor and a computer program stored on a memory and capable of running on the processor, wherein the processor executes the computer program to realize the method.
In order to verify the effectiveness of the invention, cross-database micro-expression recognition is performed between the HS, VIS and NIR sub-databases of the SMIC micro-expression database and the CASME II database; the verification results are shown in Table 1:
TABLE 1
Training database Test database Evaluation index (meanF1/Acc)
SMIC_HS SMIC_VIS 0.8721/87.32
SMIC_VIS SMIC_HS 0.6401/64.02
SMIC_HS SMIC_NIR 0.7466/74.65
SMIC_NIR SMIC_HS 0.5765/57.32
SMIC_VIS SMIC_NIR 0.7506/76.06
SMIC_NIR SMIC_VIS 0.8428/84.51
CASME II SMIC_HS 0.5297/54.27
SMIC_HS CASME II 0.5622/60.77
CASME II SMIC_VIS 0.5882/59.15
SMIC_VIS CASME II 0.7021/70.77
CASME II SMIC_NIR 0.5009/50.70
SMIC_NIR CASME II 0.4693/50.77
Wherein the expressions of the CASME II database are processed as follows: expressions in the happiness category are classified as positive; expressions in the sadness, disgust and fear categories are classified as negative; and labels in the surprise category are kept as surprise. The classes of the SMIC database are positive, negative and surprise.
The experimental results show that the micro-expression recognition method provided by the invention achieves a higher cross-database micro-expression recognition rate.
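The evaluation indices in Table 1 (meanF1/Acc) can be computed as below, assuming meanF1 is the unweighted average of per-class F1 scores, which is the usual convention for this benchmark:

```python
import numpy as np

def mean_f1_and_acc(y_true, y_pred, n_classes=3):
    """Unweighted mean F1 over classes and overall accuracy, matching the
    assumed meaning of the (meanF1/Acc) indices in Table 1."""
    y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
    return float(np.mean(f1s)), float(np.mean(y_true == y_pred))
```

Mean F1 is reported alongside accuracy because the micro-expression classes are imbalanced, so accuracy alone can overstate performance on the majority class.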

Claims (7)

1. A cross-database micro-expression recognition method based on domain selection migration regression is characterized by comprising the following steps:
(1) acquiring two micro expression databases which are respectively used as a training database and a testing database, wherein each micro expression database comprises a plurality of micro expression videos and corresponding micro expression category labels;
(2) converting the micro expression videos in the training database and the testing database into micro expression image sequences, extracting gray face images from the micro expression image sequences, and extracting local area features of the face after blocking;
(3) establishing a domain selection migration regression model, and learning the model by adopting the local facial region characteristics to obtain a sparse projection matrix connecting the local facial region characteristics and the micro-expression class labels; the domain selection migration regression model specifically comprises the following steps:
[Formula (1): the domain selection migration regression model, reproduced as an image in the original]
wherein L_s is the micro-expression category label matrix of the training database, c is the number of micro-expression categories, and N_s, N_t are respectively the numbers of micro-expression videos in the training database X_s and the testing database X_t; X_s^i and X_t^i are respectively the facial local region features of the i-th block after blocking the training database and the testing database, K is the number of blocks, and d is the feature dimension of each block; w_i is the selection weight of the i-th block, and w = [w_i | i = 1, ..., K] is the weight vector; ‖·‖₁ is the 1-norm of a vector; C^i is the relationship matrix between the facial local region features of the i-th block and the micro-expression category labels L_s; (C^i)^T is the transpose of C^i; λ, μ and γ are the corresponding constraint-term coefficients; 1_{N_s} and 1_{N_t} are matrices whose elements are all 1, where notation of the form R^{m×n} denotes a real matrix with m rows and n columns; ψ(·) denotes the kernel mapping operation;
(4) for the micro-expression to be recognized, obtaining the facial local region features according to step (2), and obtaining the corresponding micro-expression category label using the learned sparse projection matrix.
2. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: the step (2) specifically comprises the following steps:
(2-1) converting each micro expression video in the training database and the testing database into a micro expression image sequence;
(2-2) performing graying processing on the micro expression image sequence;
(2-3) cutting out a rectangular face image from the micro-expression image sequence subjected to graying processing and zooming;
(2-4) processing all the zoomed face images by utilizing an interpolation and key frame selection algorithm to obtain the same frame number of face images corresponding to each micro expression video;
and (2-5) partitioning the face image processed in the step (2-4), and extracting features in each partition to be used as face local area features.
3. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: when the face images are partitioned into blocks in step (2-5), each face image is partitioned multiple times, and the blocks obtained in each partitioning differ in size.
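A minimal sketch of the multi-scale partitioning that claim 3 describes, assuming square grids (for instance 2×2, 4×4 and 8×8 blocks) and image sides divisible by each grid size; the actual grid sizes are not specified in the source.

```python
import numpy as np

def partition(image, grid):
    """Split an H x W image into a grid x grid array of equal blocks
    (H and W are assumed divisible by grid for simplicity)."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    return [image[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            for r in range(grid) for c in range(grid)]

def multi_scale_blocks(image, grids=(2, 4, 8)):
    """Partition the same face image several times with different block
    sizes, as claim 3 describes; the grid sizes here are assumptions."""
    blocks = []
    for g in grids:
        blocks.extend(partition(image, g))
    return blocks
```

Each resulting block then receives its own feature vector and its own selection weight w_i in the regression model.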
4. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: the method for learning the domain selection migration regression model comprises the following steps:
(3-1) converting the domain selection migration regression model into:
(formula (2); reproduced only as an image in the source)
where C = [C_i | i = 1, ..., K] is the sparse projection matrix connecting the local facial region features with the micro-expression class labels L_s, and P satisfies formula (3):
ψ(C) = [ψ(X_s), ψ(X_t)] P    formula (3)
and the norm of P appearing in formula (2) is defined by formulas (4), (5), (6) and (7), where P_i is the i-th column of P:
(formulas (4), (5) and (6); reproduced only as images in the source)
P = [P_1 ... P_c]    formula (7)
(3-2) solving the converted domain selection migration regression model to obtain the projection matrix estimate P̂ and the weight estimate ŵ.
5. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 4, wherein: the step (3-2) specifically comprises the following steps:
(3-2-1) keeping w unchanged, updating P:
A. Convert formula (2) into formula (8):
(formula (8); reproduced only as an image in the source)
The Lagrange function is as in formula (9):
(formula (9); reproduced only as an image in the source)
where T denotes the Lagrange multiplier matrix, κ denotes the sparse constraint-term coefficient, and tr[·] denotes the trace of a matrix;
B. Solve the Lagrangian function of formula (9), specifically by the following steps:
I. Keeping P, T and κ unchanged, update Q:
Convert formula (8) into the following formula (10):
(formula (10); reproduced only as an image in the source)
Formula (10) has a closed-form solution given by formula (11):
(formula (11); reproduced only as an image in the source)
Wherein I is an identity matrix;
II. Keeping Q, T and κ unchanged, update P:
Formula (8) is converted into formula (12):
(formula (12); reproduced only as an image in the source)
The optimal solution of formula (12) is given by formula (13):
(formula (13); reproduced only as an image in the source)
III. Update T and κ:
Update T and κ according to formulas (14) and (15):
T = T + κ(P - Q)    formula (14)
κ = min(ρκ, κ_max)    formula (15)
where κ_max is the preset maximum value of κ, ρ is the scaling factor, and ρ > 1;
IV. Check for convergence:
Check whether formula (16) is satisfied; if not, return to step I; if it is satisfied, or if the number of iterations exceeds the set value, output the matrices P, Q, T and κ at this point:
||P - Q||_∞ < ε    formula (16)
where ||·||_∞ takes the maximum element of its argument (the infinity norm), and ε denotes the convergence threshold;
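Since formulas (8)-(16) are reproduced only as images, the following is a hedged sketch of the alternating structure of steps I-IV on a stand-in objective: a least-squares fitting term plus an ℓ1 penalty, with the splitting constraint P = Q. The closed-form subproblem solutions below belong to that stand-in objective, not to the patent's own formulas (10)-(13); only the loop skeleton (Q-update, P-update, multiplier update T = T + κ(P - Q), penalty update κ = min(ρκ, κ_max), and the ||P - Q||_∞ < ε test) mirrors the claim.

```python
import numpy as np

def soft_threshold(V, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def alm_lasso(X, L, alpha=0.1, kappa=1.0, rho=1.1, kappa_max=1e6,
              eps=1e-6, max_iter=500):
    """Augmented-Lagrangian loop mirroring steps I-IV of claim 5 on the
    stand-in objective  min ||X P - L||_F^2 + alpha ||Q||_1  s.t. P = Q."""
    d, c = X.shape[1], L.shape[1]
    P = np.zeros((d, c)); Q = np.zeros((d, c)); T = np.zeros((d, c))
    XtX, XtL = X.T @ X, X.T @ L
    for _ in range(max_iter):
        # step I analogue: closed-form Q-update via soft-thresholding
        Q = soft_threshold(P + T / kappa, alpha / kappa)
        # step II analogue: closed-form P-update of the quadratic subproblem
        P = np.linalg.solve(2*XtX + kappa*np.eye(d), 2*XtL + kappa*Q - T)
        # step III: T = T + kappa (P - Q); kappa = min(rho*kappa, kappa_max)
        T = T + kappa * (P - Q)
        kappa = min(rho * kappa, kappa_max)
        # step IV: convergence test, as in formula (16)
        if np.max(np.abs(P - Q)) < eps:
            break
    return P, Q
```

The geometric growth of κ (capped at κ_max) progressively tightens the P = Q coupling, which is what makes the infinity-norm test of formula (16) eventually succeed.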
(3-2-2) keeping P unchanged, updating w:
A. Convert formula (9) into formula (17):
(formula (17); reproduced only as an image in the source)
where the symbols in formula (17), likewise reproduced only as images, are the vectors formed by stacking the columns of the corresponding matrices in order, and one of them denotes the c-th column of L_s;
B. Solve formula (17) using the SLEP (Sparse Learning with Efficient Projections) algorithm, and output w;
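SLEP is a solver package for ℓ1-regularized problems of the kind formula (17) takes. Because formula (17) itself is only an image in the source, the sketch below substitutes a plain coordinate-descent lasso solver for SLEP on a generic problem of the same form, min_w ||A w - b||² + λ||w||₁; the matrix A standing in for the stacked per-block vectors of step (3-2-2), the function name, and the placement of λ are all assumptions.

```python
import numpy as np

def lasso_cd(A, b, lam, n_iter=200):
    """Coordinate-descent solver for  min_w ||A w - b||_2^2 + lam ||w||_1,
    a stand-in for the SLEP solver named in the claim. Column i of A plays
    the role of the i-th block's stacked vector; w_i is its weight."""
    n, k = A.shape
    w = np.zeros(k)
    col_sq = (A ** 2).sum(axis=0)          # per-coordinate curvature
    for _ in range(n_iter):
        for i in range(k):
            # residual with coordinate i's contribution removed
            r = b - A @ w + A[:, i] * w[i]
            rho_i = A[:, i] @ r
            # closed-form 1-D minimizer: soft-thresholding
            w[i] = np.sign(rho_i) * max(abs(rho_i) - lam / 2, 0.0) / col_sq[i]
    return w
```

The ℓ1 penalty drives many w_i exactly to zero, which is what lets the model select only the face blocks that transfer well between databases.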
(3-2-3) Check for convergence:
When the preset maximum number of iterations is reached, or the value of objective function (18) falls below a preset value, take the current values of the matrices P and w as the projection matrix estimate P̂ and the weight estimate ŵ and output them; otherwise, return to step (3-2-1).
(formula (18); reproduced only as an image in the source)
6. The cross-database micro-expression recognition method based on domain selection migration regression as claimed in claim 1, wherein: the step (4) specifically comprises the following steps:
Using the learned sparse projection matrix P̂ and weight ŵ, the emotion class of the micro-expression to be recognized is predicted by formula (19):
(formula (19); reproduced only as an image in the source)
where the quantity appearing in formula (19) is determined by formula (20); x_te is the local facial region feature of the face to be recognized; l_te is the predicted emotion classification result of the micro-expression to be recognized; w_i is the i-th element of w;
(formula (20); reproduced only as an image in the source)
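Formulas (19)-(20) are reproduced only as images, so the following is one plausible reading of the prediction step, not the patent's definitive rule: each block's feature is projected into class-score space by its learned per-block matrix, the per-block scores are combined with the learned selection weights, and the arg-max class is returned. Every name in this sketch is hypothetical.

```python
import numpy as np

def predict_label(block_feats, C_hat, w_hat):
    """Hypothetical reading of formulas (19)-(20): score each emotion
    class by the weighted sum, over the K face blocks, of each block's
    projected feature, then return the arg-max class index.
    C_hat[i] (d x c) maps block i's d-dim feature to c class scores;
    w_hat[i] is the learned selection weight of block i."""
    K = len(block_feats)
    scores = sum(w_hat[i] * (C_hat[i].T @ block_feats[i]) for i in range(K))
    return int(np.argmax(scores))
```

Blocks whose weights were driven to zero during learning contribute nothing here, so prediction relies only on the selected, transferable face regions.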
7. a cross-database micro-expression recognition apparatus based on domain selection migration regression, comprising a processor and a computer program stored on a memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the program.
CN202010030236.5A 2020-01-13 2020-01-13 Cross-database micro-expression recognition method and device based on domain selection migration regression Active CN111259759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010030236.5A CN111259759B (en) 2020-01-13 2020-01-13 Cross-database micro-expression recognition method and device based on domain selection migration regression


Publications (2)

Publication Number Publication Date
CN111259759A true CN111259759A (en) 2020-06-09
CN111259759B CN111259759B (en) 2023-04-28

Family

ID=70948688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010030236.5A Active CN111259759B (en) 2020-01-13 2020-01-13 Cross-database micro-expression recognition method and device based on domain selection migration regression

Country Status (1)

Country Link
CN (1) CN111259759B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647628A (en) * 2018-05-07 2018-10-12 山东大学 A kind of micro- expression recognition method based on the sparse transfer learning of multiple features multitask dictionary
CN110427881A (en) * 2019-08-01 2019-11-08 东南大学 The micro- expression recognition method of integration across database and device based on the study of face local features


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
YUAN ZONG等: "Domain Regeneration for Cross-Database Micro-Expression", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
YUAN ZONG等: "Learning from Hierarchical Spatiotemporal Descriptors", 《IEEE TRANSACTIONS ON MULTIMEDIA》 *
DING, ZECHAO et al.: "Discriminative facial expression recognition method with multi-feature joint sparse representation", Journal of Chinese Computer Systems *
LU, GUANMING et al.: "Micro-expression recognition based on LBP-TOP features", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) *
ZONG, YUAN: "Research on micro-expression recognition based on subspace learning", China Masters' Theses Full-text Database (Electronic Journal), Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832426A (en) * 2020-06-23 2020-10-27 东南大学 Cross-library micro-expression recognition method and device based on double-sparse transfer learning
CN112307923A (en) * 2020-10-30 2021-02-02 北京中科深智科技有限公司 Partitioned expression migration method and system
CN112800951A (en) * 2021-01-27 2021-05-14 华南理工大学 Micro-expression identification method, system, device and medium based on local base characteristics
CN112800951B (en) * 2021-01-27 2023-08-08 华南理工大学 Micro-expression recognition method, system, device and medium based on local base characteristics

Also Published As

Publication number Publication date
CN111259759B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN110287805B (en) Micro-expression identification method and system based on three-stream convolutional neural network
CN110532900B (en) Facial expression recognition method based on U-Net and LS-CNN
CN110427881B (en) Cross-library micro-expression recognition method and device based on face local area feature learning
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
Bishay et al. Schinet: Automatic estimation of symptoms of schizophrenia from facial behaviour analysis
CN111523462B (en) Video sequence expression recognition system and method based on self-attention enhanced CNN
CN107403142B (en) A kind of detection method of micro- expression
CN111797683A (en) Video expression recognition method based on depth residual error attention network
CN113011357B (en) Depth fake face video positioning method based on space-time fusion
CN111259759A (en) Cross-database micro-expression recognition method and device based on domain selection migration regression
CN111222457B (en) Detection method for identifying authenticity of video based on depth separable convolution
WO2021047190A1 (en) Alarm method based on residual network, and apparatus, computer device and storage medium
CN106295501A (en) The degree of depth based on lip movement study personal identification method
KR20210066697A (en) Apparatus and method for predicting human depression level using multi-layer bi-lstm with spatial and dynamic information of video frames
CN114511912A (en) Cross-library micro-expression recognition method and device based on double-current convolutional neural network
CN112149616A (en) Figure interaction behavior recognition method based on dynamic information
CN110705428A (en) Facial age recognition system and method based on impulse neural network
CN116230234A (en) Multi-mode feature consistency psychological health abnormality identification method and system
CN108197593B (en) Multi-size facial expression recognition method and device based on three-point positioning method
CN113591797B (en) Depth video behavior recognition method
CN115909438A (en) Pain expression recognition system based on depth time-space domain convolutional neural network
CN113963421B (en) Dynamic sequence unconstrained expression recognition method based on hybrid feature enhanced network
CN110287761A (en) A kind of face age estimation method analyzed based on convolutional neural networks and hidden variable
CN111898533B (en) Gait classification method based on space-time feature fusion
Bhattacharya et al. Simplified face quality assessment (sfqa)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant