CN110175588A - Meta-learning-based few-sample facial expression recognition method and system - Google Patents
Meta-learning-based few-sample facial expression recognition method and system
- Publication number
- CN110175588A (application number CN201910465071.1A)
- Authority
- CN
- China
- Prior art keywords
- sample
- expression recognition
- facial
- main model
- meta learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
Description
Technical Field
The present disclosure relates to the field of image recognition in computer vision, and in particular to a meta-learning-based few-sample facial expression recognition method and system.
Background
With the development of artificial intelligence and deep learning, facial expression recognition has attracted growing attention as an important problem in image recognition. Facial expressions are among the most powerful, natural, and universal signals by which people convey their emotions and intentions; facial expression recognition determines a person's emotional state from seven basic expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral). As an application of deep learning, facial expression recognition has made great progress and is widely used in fields such as mental-state analysis, medical diagnosis, and advertising-effect research.
In the course of this research, the inventors observed that deep learning has recently made great strides in computer vision, for example in object detection, image segmentation, and image classification. Deep neural networks automatically extract high-level semantic features end-to-end from input images and are considered among the artificial intelligence techniques most likely to approach human-level performance. However, these supervised models require large amounts of labeled data and many training iterations to fit their numerous parameters. Because labeling is time-consuming and labor-intensive, scalability to new categories is limited; more importantly, emerging or rare categories lack large numbers of labeled samples, which restricts applicability. How to train a classification model with high recognition accuracy and good generalization from only a small number of labeled samples is therefore a meaningful research direction: the few-sample (few-shot) learning problem.
A few-sample learning problem generally involves three data sets: a training set, a support set, and a query set. When the support set contains C classes with K labeled samples per class, the task is called a C-way K-shot problem. Humans, by contrast, are very good at recognizing objects without direct supervision, even objects of categories they have never seen before; this is the innate human capacity for meta-learning.
Summary of the Invention
An object of the embodiments of this specification is to provide a meta-learning-based few-sample facial expression recognition method. A model trained with this method achieves satisfactory recognition accuracy from very few training samples, avoiding the time and labor cost of the large labeled data sets that model training normally requires.
The embodiments of this specification provide a meta-learning-based few-sample facial expression recognition method, realized through the following technical solution.
The method comprises:
receiving a facial sample set and performing data preprocessing;
constructing a main model for expression recognition;
partitioning the data set according to the meta-learning approach: of the seven expression classes, three are randomly selected for the training set, and three of the remaining four are randomly selected for the testing set;
constructing a C-way K-shot task with the episode-based method, i.e., building a sample set and a query set from the partitioned training set;
feeding the constructed facial sample set into the main model and optimizing the model parameters;
receiving facial data to be recognized and performing facial expression recognition with the optimized main model.
The embodiments of this specification further provide a meta-learning-based few-sample facial expression recognition system, realized through the following technical solution.
The system comprises:
a data preprocessing module configured to receive a facial sample set and perform data preprocessing;
a main-model construction module configured to construct a main model for expression recognition;
a data-set construction module configured to partition the data set according to the meta-learning approach: of the seven expression classes, three are randomly selected for the training set, and three of the remaining four are randomly selected for the testing set;
and further configured to construct a C-way K-shot task with the episode-based method, i.e., to build a sample set and a query set from the partitioned training set;
a main-model optimization module configured to feed the constructed facial sample set into the main model and optimize the model parameters;
a facial expression recognition module configured to receive facial data to be recognized and perform facial expression recognition with the optimized main model.
Compared with the prior art, the beneficial effects of the present disclosure are:
The present disclosure addresses few-sample facial expression recognition based on meta-learning: it explores the training set with an episode-based method and constructs a convolutional neural network that extracts transferable knowledge to recognize expressions.
The present disclosure takes into account that conventional facial expression recognition requires a large amount of labeled data, and labeling is both time-consuming and labor-intensive. To make more effective use of a small amount of labeled data for few-sample expression recognition, the present disclosure introduces a meta-learning training strategy into facial expression recognition, so that a small number of facial expression samples suffice to recognize sample classes that never appeared during training. The episode-based exploration of the training set exploits it more fully and extracts transferable knowledge more effectively, which improves performance on the support set and, in turn, classification of the testing set.
Brief Description of the Drawings
The accompanying drawings, which form a part of the present disclosure, provide a further understanding of it; the exemplary embodiments and their descriptions explain the present disclosure and do not unduly limit it.
Fig. 1 is a flowchart of a meta-learning-based few-sample facial expression recognition method according to one or more embodiments;
Figs. 2(a)-2(b) are schematic diagrams of the main expression recognition network model according to one or more embodiments;
Fig. 3 is a schematic framework diagram of the facial expression recognition method according to one or more embodiments.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
Note that the terminology used herein describes specific embodiments only and is not intended to limit exemplary embodiments according to the present disclosure. As used herein, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well; it should further be understood that the terms "comprises" and/or "comprising", when used in this specification, indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
Embodiment 1
As shown in Fig. 1, according to an aspect of one or more embodiments of the present disclosure, a flowchart of a meta-learning-based few-sample facial expression recognition method is provided.
A meta-learning-based few-sample facial expression recognition method comprises:
S101: receiving a facial sample set and performing data preprocessing;
S102: constructing the main expression recognition network model;
S103: partitioning the data set according to the meta-learning approach into training and support/query sets, and constructing a sample set and a query set satisfying C-way K-shot with the episode-based method;
S104: feeding the preprocessed samples into the main network model and optimizing the model;
S105: receiving facial data to be recognized and performing facial expression recognition with the optimized model.
In step S101 of this embodiment, the facial sample data in the facial sample set are facial sample pictures, and the data preprocessing comprises normalizing each picture and normalizing each pixel. The specific preprocessing steps in this embodiment are:
S1011, normalize each picture: subtract the mean from each picture, then set its standard deviation to 3.125;
S1012, normalize each pixel: first compute a mean-pixel image and subtract the corresponding mean pixel from each picture; then set the standard deviation of each pixel, computed over all training-set pictures, to 1.
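The two normalization steps above can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: the array layout (a stack of grayscale images) and the guard against zero-variance pixels are assumptions.

```python
import numpy as np

def normalize_images(images, target_std=3.125):
    """S1011: zero-mean each picture, then rescale it so its
    standard deviation equals target_std (3.125 in the embodiment)."""
    out = np.empty(images.shape, dtype=np.float64)
    for i, img in enumerate(images):
        centered = img - img.mean()
        std = centered.std()
        # guard against a constant image (assumption, not from the patent)
        out[i] = centered * (target_std / std) if std > 0 else centered
    return out

def normalize_pixels(train_images):
    """S1012: subtract the per-position mean image, then divide by the
    per-position standard deviation so every pixel has unit variance
    across the training set."""
    centered = train_images - train_images.mean(axis=0)
    std = centered.std(axis=0)
    std[std == 0] = 1.0  # guard against constant pixels (assumption)
    return centered / std
```

Note that S1011 operates within each picture, while S1012 operates across the training set at each pixel position; the two steps are independent.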
In step S102 of this embodiment, as shown in Figs. 2(a)-2(b), the main expression recognition network model comprises four Convolution Blocks in series followed by a final Flatten layer.
Each Convolution Block mainly comprises a convolutional layer, a batch-normalization layer, a ReLU activation function, and a pooling layer. The convolutional layer uses 3×3 kernels with padding=SAME; the pooling layer uses max pooling.
The Flatten layer turns the features extracted by the model into a one-dimensional vector, yielding an n = 64-dimensional feature.
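The spatial bookkeeping of this embedding network can be traced with a small sketch: a 3×3 convolution with SAME padding preserves height and width, while 2×2 max pooling halves them (integer floor). The 64 filters per block and the 28×28 input resolution below are assumptions chosen so that the Flatten output matches the 64-dimensional feature stated above; the patent does not give the input size.

```python
def feature_dim(height, width, channels=64, num_blocks=4):
    """Flattened feature size after num_blocks Convolution Blocks:
    3x3 SAME conv keeps the spatial size, 2x2 max pooling halves it."""
    for _ in range(num_blocks):
        height, width = height // 2, width // 2
    return height * width * channels

# Under the assumed 28x28 input, four blocks shrink 28 -> 14 -> 7 -> 3 -> 1,
# so the Flatten layer emits 1 * 1 * 64 = 64 features.
```

With a larger input such as 48×48 (common for expression data sets), the same four blocks would leave a 3×3 spatial map and a 576-dimensional flattened feature, so the 64-dimensional figure constrains the input resolution.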
Fig. 3 shows the framework of the meta-learning-based few-sample facial expression recognition method. The main innovation of the method is its treatment of few-sample facial expression recognition, divided into three parts: data-set construction, feature extraction, and model optimization.
Few-sample learning uses three data sets: training, support, and test. The training set is used for training, the support set for validation, and the test set for the final model test. During training, the training set is used to construct the sample set and query set on which the algorithm then operates.
The specific steps of the S103 data-set construction are:
S1031, construct the training/support/testing sets from the original data set according to the meta-learning principle (i.e., the training set and the support/test sets have disjoint label spaces, while the support and test sets share the same label space). To maximize use of the data, the data set is partitioned as follows: three of the seven expression classes are randomly selected for the training set, and three of the remaining four are randomly selected for the support/query set; hence the classes used for training and testing differ from one run of the program to the next.
In the embodiments of this application, a slash uniformly denotes "and".
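The S1031 split can be sketched as follows. The expression names are the standard seven from the background section; the use of Python's random module (rather than whatever the patent's implementation used) is an assumption.

```python
import random

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

def split_classes(classes, rng=None):
    """Pick 3 classes for the training set, then 3 of the remaining 4
    for the support/query set, so the two label spaces are disjoint."""
    rng = rng or random.Random()
    train_classes = rng.sample(classes, 3)
    remaining = [c for c in classes if c not in train_classes]
    test_classes = rng.sample(remaining, 3)
    return train_classes, test_classes
```

Because both draws are random, each run of the program uses a different training/testing class combination, as the embodiment notes.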
Constructing a C-way K-shot task with the episode-based method means selecting C of the seven expression classes, selecting K samples from each class to build the sample set used during training, and selecting q samples from those C classes to build the query set.
In a specific implementation, S1032 constructs a sample set and a query set satisfying C-way K-shot with the episode-based method. From the C = 3 selected expression classes, K pictures per class (K = 1 or K > 1) are chosen to build the sample set; from the remaining pictures of those three classes, q pictures per class are chosen to build the query set, with q = 5, 15, 20, etc.
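One episode of the S1032 construction might look like the following sketch; the per-class dictionary layout of the data is an assumption for illustration.

```python
import random

def make_episode(data_by_class, classes, k=1, q=5, rng=None):
    """Draw one episode: k support images per class (the 'sample set')
    and q query images per class, with no image appearing in both."""
    rng = rng or random.Random()
    sample_set, query_set = [], []
    for label in classes:
        # draw k + q distinct images, then split them
        picks = rng.sample(data_by_class[label], k + q)
        sample_set += [(img, label) for img in picks[:k]]
        query_set += [(img, label) for img in picks[k:]]
    return sample_set, query_set
```

Repeatedly sampling fresh episodes in this way is what lets the method "explore" the training set rather than iterate over it once.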
Feature extraction passes the input samples through the main network model, whose Flatten layer outputs their 64-dimensional features.
S104 model optimization uses the designed loss function and stochastic gradient descent (SGD) to optimize the model parameters so that the total loss is minimized. The specific process is as follows:
S1041, compute the prototype of each class in the sample set from the extracted features. In the 3-way K-shot sample set: if K = 1, the prototype of a class is simply the 64-dimensional feature of its single sample, pk = f(xi); if K > 1, following the principle of Bregman divergence, the prototype of class k is pk = (1/Ns) · Σ_{(xi,yi)∈Sk} f(xi), where Ns is the number of samples selected per expression class, Sk is the set of class-k samples in the sample set, (xi, yi) is a labeled sample in that set, and f(xi) is the feature extracted by the main model.
S1042, compute in turn the distance d(xq, pk) between each query-set sample xq and every class prototype pk, and convert these distances into the probability of belonging to each class with a softmax over negative distances: p(xq, pk) = exp(−d(xq, pk)) / Σ_{k′} exp(−d(xq, pk′)). Either the Euclidean distance or the cosine distance may be used.
S1043, use the loss function L(xq) = −log p(xq, pk) and optimize with stochastic gradient descent (SGD).
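Steps S1041-S1043 together form the prototypical-network computation. A NumPy sketch follows; the squared Euclidean distance is one of the two options named in S1042, and the feature arrays are assumed to have already been extracted by the main model (the gradient step itself is omitted).

```python
import numpy as np

def class_prototypes(features, labels):
    """S1041: prototype p_k is the mean embedded support sample of class k."""
    return {k: features[labels == k].mean(axis=0) for k in np.unique(labels)}

def class_probabilities(query_feature, prototypes):
    """S1042: softmax over negative squared Euclidean distances."""
    keys = sorted(prototypes)
    dists = np.array([np.sum((query_feature - prototypes[k]) ** 2) for k in keys])
    logits = -dists
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return dict(zip(keys, exp / exp.sum()))

def query_loss(query_feature, true_label, prototypes):
    """S1043: negative log-probability of the true class, to be
    minimized over episodes with SGD."""
    return -np.log(class_probabilities(query_feature, prototypes)[true_label])
```

A query sample close to its own class prototype yields a probability near 1 and a loss near 0; minimizing the summed loss over an episode pushes the embedding f to cluster samples of a class around their prototype.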
Table 1 gives the pseudocode for data-set construction and model optimization.
Table 1
Embodiment 2
According to an aspect of one or more embodiments of the present disclosure, a meta-learning-based few-sample facial expression recognition device is provided.
A meta-learning-based few-sample facial expression recognition device, based on the meta-learning-based few-sample facial expression recognition method described above, comprises a data preprocessing module, a main-model construction module, a data-set construction module, a model optimization module, and a facial expression recognition module, connected in sequence.
The data preprocessing module receives the facial sample set and performs data preprocessing.
The main-model construction module constructs the main expression recognition network model.
The data-set construction module constructs the meta-learning-based training set and support/testing set, so that the training set and the support/test set have different label spaces, and constructs the episode-based sample set and query set satisfying C-way K-shot.
The model optimization module optimizes the model on the constructed data set using the chosen loss function and stochastic gradient descent.
The facial expression recognition module receives the facial data to be recognized and performs facial expression recognition with the optimized model.
It should be noted that although several modules or sub-modules of the device are mentioned in the detailed description above, this division is merely exemplary and not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules described above may be embodied in one module; conversely, the features and functions of one module described above may be further divided among and embodied by multiple modules.
Embodiment 3
A computer device comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the meta-learning-based few-sample facial expression recognition method.
For the steps of the method in this embodiment, see the specific technical content of Embodiment 1; they are not described again here.
Embodiment 4
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the meta-learning-based few-sample facial expression recognition method.
For the steps of the method in this embodiment, see the specific technical content of Embodiment 1; they are not described again here.
It is to be understood that, in the description of this specification, references to the terms "an embodiment", "another embodiment", "other embodiments", or "the first through Nth embodiments" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Such schematic expressions do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The above are merely preferred embodiments of the present disclosure and are not intended to limit it; those skilled in the art may make various modifications and variations to the present disclosure. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall fall within its scope of protection.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910465071.1A CN110175588B (en) | 2019-05-30 | 2019-05-30 | A meta-learning-based few-sample facial expression recognition method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110175588A true CN110175588A (en) | 2019-08-27 |
CN110175588B CN110175588B (en) | 2020-12-29 |
Family
ID=67696889
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111737426A (en) * | 2020-05-09 | 2020-10-02 | 中国科学院深圳先进技术研究院 | Question answering model training method, computer device and readable storage medium |
CN113591660A (en) * | 2021-07-24 | 2021-11-02 | 中国石油大学(华东) | Micro-expression recognition method based on meta-learning |
WO2022011493A1 (en) * | 2020-07-13 | 2022-01-20 | 广东石油化工学院 | Neural semantic memory storage method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376625A (en) * | 2018-10-10 | 2019-02-22 | Northeastern University | A Facial Expression Recognition Method Based on Convolutional Neural Network |
CN109685135A (en) * | 2018-12-21 | 2019-04-26 | University of Electronic Science and Technology of China | A few-sample image classification method based on improved metric learning |
Non-Patent Citations (5)
- Flood Sung et al.: "Learning to Compare: Relation Network for Few-Shot Learning", arXiv:1711.06025v2
- Jake Snell et al.: "Prototypical Networks for Few-shot Learning", 31st Conference on Neural Information Processing Systems (NIPS 2017)
- Oriol Vinyals et al.: "Matching Networks for One Shot Learning", 30th Conference on Neural Information Processing Systems (NIPS 2016)
- Yong Wang et al.: "Large Margin Meta-Learning for Few-Shot Classification", 2nd Workshop on Meta-Learning at NeurIPS 2018
- Yan Leiming et al.: "Twitter Classification Based on Sentence-Pattern Meta-Learning", Journal of Peking University (Natural Science Edition)
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 