CN113591660A - Micro-expression recognition method based on meta-learning - Google Patents

Micro-expression recognition method based on meta-learning

Info

Publication number
CN113591660A
Authority
CN
China
Prior art keywords
micro
expression
learning
meta
sets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110840137.8A
Other languages
Chinese (zh)
Inventor
宫文娟
张悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202110840137.8A priority Critical patent/CN113591660A/en
Publication of CN113591660A publication Critical patent/CN113591660A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a micro-expression recognition method based on meta-learning, comprising: data preprocessing; micro-expression feature extraction and fusion; model pre-training; and micro-expression recognition. Because the method is built on meta-learning, it is suited to small-sample learning: the fused optical-flow and frame-difference features are fed into meta-learning models pre-trained on macro-expressions and on micro-expressions respectively, and the resulting feature vectors are concatenated and classified. This greatly improves the generalization of the recognition model and the micro-expression recognition performance.

Description

Micro-expression recognition method based on meta-learning
Technical Field
The invention belongs to the technical field of deep learning and relates to a method for recognizing micro-expressions using a meta-learning algorithm.
Background
Deep learning has recently made considerable progress in many areas. However, because of its dependence on large-scale data, deep learning cannot solve many real-world problems well. A fast learning capability is needed: humans can draw on past experience and knowledge to guide the learning of new knowledge from only a small number of examples. On this basis, few-sample (few-shot) learning methods have gained wide attention.
Meta-learning is currently a key approach to few-sample learning: experience is first extracted from learning a series of known tasks, and the acquired experience is then used to handle a new task with only a few samples.
Expression is an intuitive reflection of human emotion and an important means of interpersonal communication, so expression recognition has long been a focus of computer-vision research and has achieved remarkable results over the past decades. A micro-expression is a special facial expression produced spontaneously in an unconscious state. Compared with an ordinary expression, a micro-expression has a short duration and low motion intensity, is difficult to mask or disguise, and must be analyzed from video. Because micro-expressions are difficult to mask or disguise, they better reflect a person's real emotion, making them more genuine and reliable for emotional and psychological analysis, with potential value and broad application prospects in negotiation, psychological intervention, teaching evaluation, and related areas.
Disclosure of Invention
The invention aims to provide a micro-expression recognition method based on meta-learning that addresses the small volume of micro-expression data and the slightness of the motion changes. The method has the beneficial effect that, by exploiting the characteristics of meta-learning, micro-expression recognition with few samples can achieve good generalization.
The technical scheme adopted by the invention is carried out according to the following steps:
Step 1: data preprocessing;
Step 2: micro-expression feature extraction and fusion;
Step 3: model pre-training;
Step 4: micro-expression recognition.
Further, in Step 1, a micro-expression data set is preprocessed; the data set comprises a plurality of micro-expression video segments, and the facial feature points are aligned by face registration: face registration detects the facial feature points and maps the points extracted from each new image onto a template.
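As an illustrative sketch only (the patent does not specify the landmark detector or template), face registration can be implemented with a generic 68-point landmark model and a similarity transform onto fixed template coordinates; the model file name and the template points are assumptions:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The 68-landmark model file is an assumption; any landmark detector that
# yields consistent facial feature points would serve the same purpose.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def register_face(frame, template_pts, out_size=(128, 128)):
    """Detect facial landmarks and warp the face onto a fixed template."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)
    # Similarity transform mapping the detected landmarks to the template.
    M, _ = cv2.estimateAffinePartial2D(pts, template_pts)
    return cv2.warpAffine(frame, M, out_size)
```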
Further, in Step 2, the optical-flow feature and the frame-difference feature between the start frame and the vertex frame of the micro-expression sequence are computed and fused; the optical-flow feature is computed as follows:
The optical-flow feature is computed with the Gunnar Farnebäck algorithm: the dense optical flow between the start frame and the vertex frame is extracted to obtain the x- and y-component flow values, the vector magnitude is computed, and each quantity is divided by its maximum value so that it lies in the range 0 to 1. The separately normalized x-component, y-component, and vector magnitude constitute the three channels of the optical-flow feature representation.
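A minimal sketch of this computation using OpenCV's Farnebäck implementation follows; the pyramid and window parameters are assumptions (the patent does not state them), and min-max scaling is used here as one plausible reading of the normalization to [0, 1]:

```python
import cv2
import numpy as np

def optical_flow_feature(start_frame, vertex_frame):
    """Three-channel optical-flow feature between start and vertex frames."""
    g0 = cv2.cvtColor(start_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(vertex_frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow (Gunnar Farnebäck); parameter values are assumed.
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.sqrt(fx ** 2 + fy ** 2)  # vector magnitude
    eps = 1e-8
    # Normalize each map independently into [0, 1].
    chans = [(c - c.min()) / (c.max() - c.min() + eps) for c in (fx, fy, mag)]
    return np.stack(chans, axis=-1)
```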
Further, in Step 3, the feature extraction model of the framework is optimized; the feature extraction model adopts ResNet-18 as its backbone network, and the loss is set to the cross-entropy loss. The model parameters are optimized on the macro-expression data set and on the micro-expression data set respectively, yielding two different models. After the fully connected layer is removed, the network structure together with the parameters optimized on the different data sets serves as two different feature extractors (classifiers).
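A hedged PyTorch sketch of this pre-training step (hyperparameters and the data loader are assumptions); it would be run once on the macro-expression set and once on the micro-expression set to obtain the two feature extractors:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def pretrain_backbone(loader, num_classes, epochs=30, lr=1e-3, device="cuda"):
    """Train ResNet-18 with cross-entropy loss, then drop the FC head."""
    model = resnet18(num_classes=num_classes).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    # Removing the fully connected layer turns the network into the
    # feature extractor (classifier backbone) used in the meta stage.
    model.fc = nn.Identity()
    return model
```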
Further, in Step 4, the micro-expression data set is divided according to the meta-learning definition; its features are input into the meta-fusion learning networks pre-trained on the macro-expression data set and on the micro-expression data set respectively, producing two different feature vectors, which are concatenated to classify the micro-expression.
The data are divided according to the meta-learning definition as follows (an episode-sampling sketch is given after this paragraph):
The data set is divided into training samples and test samples, each part consisting of tasks, and each task has its own training set and test set. To avoid confusion, the training set within a task is called the support set and the test set the query set. N-way K-shot is a common experimental setting in few-sample learning: N-way means the training data contain N categories, and K-shot means each category contains K labeled samples. Leave-one-subject-out (LOSO) validation is used; that is, all videos of one subject are taken as the validation set each time, so that the micro-expression distribution of that subject is not included in the model. The single subject of the validation set is regarded as the new class, and the sets of all subjects other than that subject are regarded as base classes (training samples) and trained in turn. The query set in the test sample is composed of the new class, and the support set is drawn from the base classes.
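The episode construction implied by this protocol can be sketched as follows (the data structure and function name are assumptions):

```python
import random

def sample_episode(features_by_class, n_way, k_shot, n_query):
    """Sample one N-way K-shot task: a support set and a query set.

    features_by_class: dict mapping class label -> list of feature tensors.
    """
    classes = random.sample(sorted(features_by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        picks = random.sample(features_by_class[c], k_shot + n_query)
        support += [(x, episode_label) for x in picks[:k_shot]]
        query += [(x, episode_label) for x in picks[k_shot:]]
    return support, query
```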
The meta-fusion learning module is constructed as follows:
The meta-fusion learning module solves the N-way K-shot classification problem on the fused frame-difference and optical-flow features. On the basis of the pre-trained feature extractors, the preprocessed micro-expression features are divided into a number of tasks; within each task, the support set is encoded by the classifier to obtain feature vectors. In the meta-training phase, the average feature representation of each class is computed as the mean of the features of all samples of that class:
$$\omega_c = \frac{1}{|S_c|} \sum_{x \in S_c} f_\theta(x)$$

where the average feature $\omega_c$ is the centroid of class $c$, $S$ is the support set, and $S_c$ denotes the small-sample set of class $c$. Using cosine similarity, the cosine distance $\langle f_\theta(x), \omega_c \rangle$ between the encoded query-set vector $f_\theta(x)$ and the centroid $\omega_c$ is computed, and the query-set samples are classified through the softmax probability over the cosine distances:

$$p(y = c \mid x) = \frac{\exp\big(\langle f_\theta(x), \omega_c \rangle\big)}{\sum_{c'=1}^{C} \exp\big(\langle f_\theta(x), \omega_{c'} \rangle\big)}$$

The loss function of the N-way K-shot classification problem uses the cross-entropy computed from the predicted distribution and the true distribution on the query set:

$$\mathcal{L} = -\sum_{i} \sum_{c=1}^{C} y_{i,c} \log \hat{p}_{i,c}$$

where $\hat{p}_{i,c}$ denotes the predicted distribution, $C$ is the number of classes, and $y_{i,c}$ denotes the true distribution of the $i$-th data sample.
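The centroid computation and cosine-softmax classification above can be sketched in PyTorch as follows; this is a minimal illustration of a meta-baseline-style classifier, not the exact patented implementation:

```python
import torch
import torch.nn.functional as F

def cosine_prototype_logits(support_feats, support_labels, query_feats, n_way):
    """Logits for query features from cosine similarity to class centroids."""
    # Centroid of each class: mean of its support-set feature vectors.
    centroids = torch.stack([support_feats[support_labels == c].mean(dim=0)
                             for c in range(n_way)])
    # Cosine similarity between every query vector and every centroid.
    return F.cosine_similarity(query_feats.unsqueeze(1),
                               centroids.unsqueeze(0), dim=-1)

# Meta-training step: softmax over the cosine distances plus cross-entropy,
# matching the loss defined above.
# loss = F.cross_entropy(
#     cosine_prototype_logits(s_feats, s_labels, q_feats, n_way), q_labels)
```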
Drawings
FIG. 1 is a flow chart of the present invention for performing micro-expression recognition;
FIG. 2 is an overall architecture of the present invention;
FIG. 3 is a meta-baseline algorithm model.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in FIG. 1, micro-expression recognition is divided into four modules: data preprocessing, micro-expression feature extraction and fusion, model pre-training, and micro-expression recognition.
The data preprocessing stage preprocesses the micro-expression data set to be processed; the data set comprises a plurality of micro-expression video segments, and the facial feature points are aligned by face registration: face registration detects the facial feature points and maps the points extracted from each new image onto a template.
The micro-expression feature extraction and fusion stage computes the optical-flow feature and the frame-difference feature between the start frame and the vertex frame of the micro-expression sequence and fuses the two. The optical-flow feature is computed with the Gunnar Farnebäck algorithm: the dense optical flow between the start frame and the vertex frame is extracted to obtain the x- and y-component flow values, the vector magnitude is computed, and each quantity is divided by its maximum value so that it lies in the range 0 to 1. The separately normalized x-component, y-component, and vector magnitude constitute the three channels of the optical-flow feature representation.
The model pre-training stage optimizes the feature extraction model of the framework; the feature extraction model adopts ResNet-18 as its backbone network, and the loss is set to the cross-entropy loss. The model parameters are optimized on the macro-expression data set and on the micro-expression data set respectively, yielding two different models. After the fully connected layer is removed, the network structure together with the parameters optimized on the different data sets serves as two different feature extractors (classifiers).
The micro-expression recognition stage divides the micro-expression data set according to the meta-learning definition, inputs its features into the meta-fusion learning networks pre-trained on the macro-expression data set and on the micro-expression data set respectively to obtain two different feature vectors, and concatenates the two feature vectors to classify the micro-expressions.
FIG. 2 shows the overall architecture of the invention. Before the micro-expression features are extracted, the micro-expression video segments undergo data preprocessing, including face alignment and face cropping. The optical-flow and frame-difference features are then computed from the preprocessed start frame and vertex frame, the fused features are input into the meta-learning networks pre-trained on micro-expressions and on macro-expressions respectively, and the resulting feature vectors are concatenated for cosine-distance computation, thereby classifying the micro-expressions.
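A sketch of the two-branch inference just described (channel-wise concatenation as the fusion and the encoder input shape are assumptions; the patent states only that the features are fused and the output vectors concatenated):

```python
import numpy as np
import torch

def fuse_features(flow_feat, frame_diff):
    """Fuse optical-flow and frame-difference maps into one input tensor."""
    # Channel-wise concatenation is assumed here; the backbone's first
    # convolution must accept the fused channel count.
    fused = np.concatenate([flow_feat, frame_diff], axis=-1)
    return torch.from_numpy(fused).permute(2, 0, 1).unsqueeze(0).float()

def encode_and_concat(x, encoder_micro, encoder_macro):
    """Run both pre-trained encoders and splice their feature vectors."""
    with torch.no_grad():
        v_micro = encoder_micro(x)  # backbone pre-trained on micro-expressions
        v_macro = encoder_macro(x)  # backbone pre-trained on macro-expressions
    return torch.cat([v_micro, v_macro], dim=-1)
```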
FIG. 3 shows the meta-baseline algorithm model of the invention, which comprises a pre-training module and a meta-learning module. The pre-training module trains a ResNet-18 network with cross-entropy loss on the macro-expression data set and on the micro-expression data set respectively, obtaining two different sets of model parameters; the network with the fully connected layer removed is then used as the classifier.
On the basis of the pre-trained feature extractors, the preprocessed micro-expression features are divided into a number of tasks; within each task, the support set is encoded by the classifier to obtain feature vectors. In the meta-training phase, the average feature representation of each class is computed as the mean of the features of all samples of that class:
$$\omega_c = \frac{1}{|S_c|} \sum_{x \in S_c} f_\theta(x)$$

where the average feature $\omega_c$ is the centroid of class $c$, $S$ is the support set, and $S_c$ denotes the small-sample set of class $c$. Using cosine similarity, the cosine distance $\langle f_\theta(x), \omega_c \rangle$ between the encoded query-set vector $f_\theta(x)$ and the centroid $\omega_c$ is computed, and the query-set samples are classified through the softmax probability over the cosine distances:

$$p(y = c \mid x) = \frac{\exp\big(\langle f_\theta(x), \omega_c \rangle\big)}{\sum_{c'=1}^{C} \exp\big(\langle f_\theta(x), \omega_{c'} \rangle\big)}$$

The loss function of the N-way K-shot classification problem uses the cross-entropy computed from the predicted distribution and the true distribution on the query set:

$$\mathcal{L} = -\sum_{i} \sum_{c=1}^{C} y_{i,c} \log \hat{p}_{i,c}$$

where $\hat{p}_{i,c}$ denotes the predicted distribution, $C$ is the number of classes, and $y_{i,c}$ denotes the true distribution of the $i$-th data sample.
The technical effects are as follows:
To verify the effectiveness of the proposed method, it is validated on three public micro-expression data sets: the Spontaneous Micro-Expression database (SMIC), the Chinese Academy of Sciences Micro-Expression (CASME) data set, and the CASME II data set.
SMIC: designed and collected in 2012 by Guoying Zhao's team at the Center for Machine Vision Research, University of Oulu, Finland, SMIC is the world's first public spontaneous micro-expression database. The image resolution is 640 × 480 pixels and the face-region resolution is 190 × 230 pixels. SMIC contains 164 micro-expression video segments from 16 subjects, and the database divides the micro-expressions into three categories, positive, negative, and surprise: positive corresponds to the "happy" expression, while negative covers the four emotions "sad", "angry", "fear", and "disgust". Note that SMIC provides neither vertex-frame nor AU annotations.
CASME: the database contains 195 micro-expression videos from 35 subjects (13 female, 22 male). Xiaolan Fu's team drew on the emotion-elicitation methods published by Ekman, used 17 video clips eliciting emotions such as "disgust", "repression", "surprise", and "tension", and asked the subjects to suppress their own expressions; the entire micro-expression process was captured by a camera at 60 frames per second. There are two image resolutions, 640 × 480 and 1280 × 720, with a facial-expression resolution of 150 × 190. The collected micro-expression samples are AU-coded and include three annotated frames: start (onset), vertex (apex), and end (offset).
The CASME II data set currently contains 255 micro-expression video segments shot with a 200 FPS high-speed camera; the face resolution of the video segments reaches about 280 × 340 pixels, and the picture resolution is 640 × 480 pixels. The CASME II data set labels the micro-expressions in 7 categories: Happiness, Disgust, Surprise, Repression, Sadness, Fear, and Others. In addition, the CASME II data set labels the start (Onset), vertex (Apex), and end (Offset) of micro-expression activity, where the apex helps micro-expression recognition. Beyond labeling emotions, the CASME II data set also labels the AUs (facial Action Units) of each micro-expression, which can serve as a basis for classifying micro-expressions. Only the four categories surprise, repression, happiness, and disgust are used in the present method.
For a fair comparison, all experiments use LOSO (leave-one-subject-out) cross-validation, in which the samples of one subject are used as the test set and the rest as the training set. Taking CASME II as an example, micro-expression video sequences of 26 subjects were collected; in each fold the video sequences of 25 subjects serve as the training set and the remaining subject serves as the test set.
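The LOSO protocol can be sketched as follows (train_fn and eval_fn stand for hypothetical training and evaluation routines):

```python
def loso_evaluate(videos_by_subject, train_fn, eval_fn):
    """Leave-one-subject-out: each subject in turn becomes the test set."""
    accuracies = []
    for subject in sorted(videos_by_subject):
        test_set = videos_by_subject[subject]
        train_set = [v for s, vids in videos_by_subject.items()
                     if s != subject for v in vids]
        model = train_fn(train_set)            # hypothetical training routine
        accuracies.append(eval_fn(model, test_set))
    return sum(accuracies) / len(accuracies)   # mean LOSO accuracy
```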
The classification accuracy (acc) is the ratio of the number of correctly classified samples to the total number of samples in the experiment.
The proposed method is compared with state-of-the-art methods; Tables 1, 2, and 3 give the comparison results on the SMIC, CASME, and CASME II data sets, respectively.
Table 1: comparison results on SMIC datasets
Method Rate of accuracy
LBP-SIP 0.4212
FHOFO 0.5122
FR 0.579
3DFCNN 0.5549
The invention 0.5915
Table 2: comparison results on CASME datasets
Figure BDA0003178580490000051
Figure BDA0003178580490000061
Table 3: comparison results on CASME II dataset
Method Rate of accuracy
DTCM 0.7206
The invention 0.7933
The above description covers only the preferred embodiments of the present invention and is not intended to limit the invention in any way; all simple modifications, equivalent variations, and adaptations made to the above embodiments in keeping with the technical spirit of the present invention fall within the scope of the invention.

Claims (5)

1. A micro-expression recognition method based on meta-learning, characterized by comprising the following steps:
Step 1: data preprocessing;
Step 2: micro-expression feature extraction and fusion;
Step 3: model pre-training;
Step 4: micro-expression recognition.
2. The micro-expression recognition method based on meta-learning according to claim 1, characterized in that: in Step 1, a micro-expression data set is preprocessed; the data set comprises a plurality of micro-expression video segments, and the facial feature points are aligned by face registration: face registration detects the facial feature points and maps the points extracted from each new image onto a template.
3. The micro-expression recognition method based on meta-learning according to claim 1, characterized in that: in Step 2, the optical-flow feature and the frame-difference feature between the start frame and the vertex frame of the micro-expression sequence are computed and fused, the optical-flow feature being computed as follows:
the optical-flow feature is computed with the Gunnar Farnebäck algorithm: the dense optical flow between the start frame and the vertex frame is extracted to obtain the x- and y-component flow values, the vector magnitude is computed, and each quantity is divided by its maximum value so that it lies in the range 0 to 1; the separately normalized x-component, y-component, and vector magnitude constitute the three channels of the optical-flow feature representation.
4. The micro-expression recognition method based on meta-learning according to claim 1, characterized in that: in Step 3, the feature extraction model of the framework is optimized; the feature extraction model adopts ResNet-18 as its backbone network, and the loss is set to the cross-entropy loss; the model parameters are optimized on the macro-expression data set and on the micro-expression data set respectively, yielding two different models. After the fully connected layer is removed, the network structure together with the parameters optimized on the different data sets serves as two different feature extractors (classifiers).
5. The micro-expression recognition method based on meta-learning according to claim 1, characterized in that: in Step 4, the micro-expression data set is divided according to the meta-learning definition; its features are input into the meta-fusion learning networks pre-trained on the macro-expression data set and on the micro-expression data set respectively to obtain two different feature vectors, and the two feature vectors are concatenated to classify the micro-expressions;
the data division method according to the meta-learning definition is as follows:
the data set is divided into training samples and testing samples, each part is composed of tasks, the tasks have own training sets and testing sets, in order to avoid confusion, the training sets in the tasks are called support sets, and the testing sets are called query sets. N-way K-shot is common experimental setup in few-sample learning, wherein N-way means that N categories exist in training data, and K-shot means that K marked data exist under each category; using a verification mode of LOSO (Leave-one-subject-out), namely, taking all videos of one object as a verification set at a time, and the micro-representation distribution of the object is not contained in the model; the single target of the verification set is regarded as a new class, and all other target sets except the target are regarded as base classes (training samples) to be trained sequentially. Wherein the query set in the test sample is composed of a new class, and the support set is obtained from the base class;
the method for constructing the meta-fusion learning module comprises the following steps:
the element fusion learning module solves the N-way K-shot classification problem of the frame difference and light stream fusion characteristics, divides the micro-expression data characteristics obtained through preprocessing into a plurality of tasks on the basis of a pre-trained characteristic extractor, and encodes a support set by a classifier in each task to obtain a characteristic vector; in the meta-training phase, the average feature representation of each class is calculated by using the feature average of all samples of the class:
$$\omega_c = \frac{1}{|S_c|} \sum_{x \in S_c} f_\theta(x)$$

wherein the average feature $\omega_c$ is the centroid of class $c$, $S$ is the support set, and $S_c$ denotes the small-sample set of class $c$; the cosine distance $\langle f_\theta(x), \omega_c \rangle$ between the encoded query-set vector $f_\theta(x)$ and the centroid $\omega_c$ is computed using cosine similarity, and the query-set samples are classified through the softmax probability over the cosine distances:

$$p(y = c \mid x) = \frac{\exp\big(\langle f_\theta(x), \omega_c \rangle\big)}{\sum_{c'=1}^{C} \exp\big(\langle f_\theta(x), \omega_{c'} \rangle\big)}$$

the loss function of the N-way K-shot classification problem uses the cross-entropy computed from the predicted distribution and the true distribution on the query set:

$$\mathcal{L} = -\sum_{i} \sum_{c=1}^{C} y_{i,c} \log \hat{p}_{i,c}$$

wherein $\hat{p}_{i,c}$ denotes the predicted distribution, $C$ is the number of classes, and $y_{i,c}$ denotes the true distribution of the $i$-th data sample.
CN202110840137.8A 2021-07-24 2021-07-24 Micro-expression recognition method based on meta-learning Pending CN113591660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110840137.8A CN113591660A (en) 2021-07-24 2021-07-24 Micro-expression recognition method based on meta-learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110840137.8A CN113591660A (en) 2021-07-24 2021-07-24 Micro-expression recognition method based on meta-learning

Publications (1)

Publication Number Publication Date
CN113591660A true CN113591660A (en) 2021-11-02

Family

ID=78249435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110840137.8A Pending CN113591660A (en) 2021-07-24 2021-07-24 Micro-expression recognition method based on meta-learning

Country Status (1)

Country Link
CN (1) CN113591660A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778268A (en) * 2023-04-20 2023-09-19 江苏济远医疗科技有限公司 Sample selection deviation relieving method suitable for medical image target classification

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175596A (en) * 2019-06-04 2019-08-27 重庆邮电大学 The micro- Expression Recognition of collaborative virtual learning environment and exchange method based on double-current convolutional neural networks
CN110175588A (en) * 2019-05-30 2019-08-27 山东大学 A kind of few sample face expression recognition method and system based on meta learning
CN112115993A (en) * 2020-09-11 2020-12-22 昆明理工大学 Zero sample and small sample evidence photo anomaly detection method based on meta-learning
CN112183419A (en) * 2020-10-09 2021-01-05 福州大学 Micro-expression classification method based on optical flow generation network and reordering
CN112949560A (en) * 2021-03-24 2021-06-11 四川大学华西医院 Method for identifying continuous expression change of long video expression interval under two-channel feature fusion
CN113139479A (en) * 2021-04-28 2021-07-20 山东大学 Micro-expression recognition method and system based on optical flow and RGB modal contrast learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175588A (en) * 2019-05-30 2019-08-27 山东大学 A kind of few sample face expression recognition method and system based on meta learning
CN110175596A (en) * 2019-06-04 2019-08-27 重庆邮电大学 The micro- Expression Recognition of collaborative virtual learning environment and exchange method based on double-current convolutional neural networks
CN112115993A (en) * 2020-09-11 2020-12-22 昆明理工大学 Zero sample and small sample evidence photo anomaly detection method based on meta-learning
CN112183419A (en) * 2020-10-09 2021-01-05 福州大学 Micro-expression classification method based on optical flow generation network and reordering
CN112949560A (en) * 2021-03-24 2021-06-11 四川大学华西医院 Method for identifying continuous expression change of long video expression interval under two-channel feature fusion
CN113139479A (en) * 2021-04-28 2021-07-20 山东大学 Micro-expression recognition method and system based on optical flow and RGB modal contrast learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sze-Teng Liong et al., "OFF-ApexNet on Micro-expression Recognition System," arXiv, 10 May 2018 (2018-05-10), pages 1-13 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778268A (en) * 2023-04-20 2023-09-19 江苏济远医疗科技有限公司 Sample selection deviation relieving method suitable for medical image target classification

Similar Documents

Publication Publication Date Title
Song et al. Recognizing spontaneous micro-expression using a three-stream convolutional neural network
Kim et al. Groupface: Learning latent groups and constructing group-based representations for face recognition
CN111523462B (en) Video sequence expression recognition system and method based on self-attention enhanced CNN
Tariq et al. Recognizing emotions from an ensemble of features
CN108509880A (en) A kind of video personage behavior method for recognizing semantics
CN108765279A (en) A kind of pedestrian&#39;s face super-resolution reconstruction method towards monitoring scene
CN104933414A (en) Living body face detection method based on WLD-TOP (Weber Local Descriptor-Three Orthogonal Planes)
CN112766159A (en) Cross-database micro-expression identification method based on multi-feature fusion
CN105488519B (en) A kind of video classification methods based on video size information
Jumani et al. Facial expression recognition with histogram of oriented gradients using CNN
Wang et al. Improving human action recognition by non-action classification
CN111666845A (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
CN115439884A (en) Pedestrian attribute identification method based on double-branch self-attention network
Cormier et al. UPAR Challenge: Pedestrian Attribute Recognition and Attribute-Based Person Retrieval--Dataset, Design, and Results
Satapathy et al. A lite convolutional neural network built on permuted Xceptio-inception and Xceptio-reduction modules for texture based facial liveness recognition
CN113591660A (en) Micro-expression recognition method based on meta-learning
CN114022905A (en) Attribute-aware domain expansion pedestrian re-identification method and system
Zhao et al. Pooling the convolutional layers in deep convnets for action recognition
She et al. Micro-expression recognition based on multiple aggregation networks
Pham et al. Vietnamese scene text detection and recognition using deep learning: An empirical study
CN108197593B (en) Multi-size facial expression recognition method and device based on three-point positioning method
CN116246305A (en) Pedestrian retrieval method based on hybrid component transformation network
CN115830643A (en) Light-weight pedestrian re-identification method for posture-guided alignment
Wei et al. Attention based relation network for facial action units recognition
Wang et al. Face recognition algorithm using wavelet decomposition and Support Vector Machines

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination