CN110175588B - Meta learning-based few-sample facial expression recognition method and system - Google Patents


Info

Publication number
CN110175588B
CN110175588B (application CN201910465071.1A)
Authority
CN
China
Prior art keywords
sample
facial
expressions
meta
main model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910465071.1A
Other languages
Chinese (zh)
Other versions
CN110175588A (en)
Inventor
周风余
刘晓倩
常致富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201910465071.1A
Publication of CN110175588A
Application granted
Publication of CN110175588B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition


Abstract

The disclosure provides a meta-learning based few-sample facial expression recognition method and system, comprising: receiving a face sample set and performing data preprocessing; constructing a main model for recognizing expressions; dividing the face sample set based on a meta-learning method: for the seven kinds of expressions, randomly selecting three kinds for the training set, and randomly selecting three kinds for the testing set from the remaining four; constructing a C-way K-shot task by using an episode-based method, namely constructing a sample set and a query set from the partitioned training set; inputting the constructed face sample set into the main model and performing parameter optimization on the main model; and receiving the facial data to be recognized and recognizing facial expressions with the optimized main model. The method addresses few-sample facial expression recognition based on meta-learning: it explores the training set with an episode-based method and constructs a convolutional neural network to extract transferable knowledge for expression recognition.

Description

Meta learning-based few-sample facial expression recognition method and system
Technical Field
The disclosure relates to the technical field of image recognition of computer vision, in particular to a method and a system for recognizing facial expressions with few samples based on meta-learning.
Background
With the development of artificial intelligence and deep learning techniques, facial expression recognition has received more and more attention as an important problem in the field of image recognition. Facial expression is one of the most powerful, natural, and common signals by which people convey their emotions and intentions, and facial expression recognition determines a person's emotion from the seven facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral). As an application of deep learning techniques, facial expression recognition has made great progress and is widely applied in many fields such as psychological analysis, medical diagnosis, and advertisement effectiveness research.
The inventors have found in their research that in recent years deep learning techniques have made great progress in the field of computer vision, such as object detection, image segmentation, and image classification. A deep neural network can automatically extract high-level semantic features from an input image end to end, and is considered one of the artificial intelligence techniques most likely to approach human-level performance. However, these supervised learning models require a large amount of labeled data and many iterations to train their numerous parameters. Because labeling is time-consuming and labor-intensive, this limits scalability when new categories appear. More importantly, applicability is limited because emerging or rare classes do not have large numbers of labeled samples. Therefore, how to train a classification model with high recognition accuracy and good generalization performance using only a small number of labeled samples is a significant research direction, namely the few-sample learning problem.
The few-sample learning problem generally involves three data sets: the training set, the support set, and the query set. When the support set contains C classes with K labeled samples each, the task is called the C-way K-shot problem. By contrast, humans are very adept at identifying targets without direct supervision, even for classes that have never appeared before; this is an innate human meta-learning ability.
Disclosure of Invention
The purpose of the embodiments of the present specification is to provide a meta-learning based few-sample facial expression recognition method; a model trained with the disclosed method can achieve satisfactory recognition accuracy with very few training samples, thereby alleviating the time- and labor-consuming requirement of a large number of labeled samples for training a model.
The embodiment of the specification provides a meta-learning-based few-sample facial expression recognition method, which is realized by the following technical scheme:
the method comprises the following steps:
receiving a face sample set, and performing data preprocessing;
constructing a main model for recognizing the expression;
dividing a data set based on a meta-learning method: for the seven kinds of expressions, randomly selecting three kinds for the training set, and randomly selecting three kinds for the testing set from the remaining four kinds;
constructing a C-way K-shot task by using an episode-based method, namely constructing a sample set and a query set from the partitioned training set;
inputting the constructed face sample set into a main model, and performing parameter optimization on the main model;
and receiving the facial data to be recognized, and recognizing the facial expressions according to the optimized main model.
The embodiment of the specification provides a meta-learning-based few-sample facial expression recognition system, which is realized by the following technical scheme:
the method comprises the following steps:
a data pre-processing module configured to: receiving a face sample set, and performing data preprocessing;
a master model building module configured to: constructing a main model for recognizing the expression;
a dataset construction module configured to: dividing a data set based on a meta-learning method: for the seven kinds of expressions, randomly selecting three kinds for the training set, and randomly selecting three kinds for the testing set from the remaining four kinds;
constructing a C-way K-shot task by using an episode-based method, namely constructing a sample set and a query set from the partitioned training set;
a master model optimization module configured to: inputting the constructed face sample set into a main model, and performing parameter optimization on the main model;
a facial expression recognition module configured to: and receiving the facial data to be recognized, and recognizing the facial expressions according to the optimized main model.
Compared with the prior art, the beneficial effect of this disclosure is:
the method mainly considers the problem of facial expression recognition of few samples based on meta-learning, explores training sets by using an epicode-based method, and constructs a convolutional neural network to extract transitive knowledge to realize the recognition of the expression.
Original facial expression recognition requires a large amount of labeled data, and labeling is time-consuming and labor-intensive; the disclosure therefore considers how to use a small amount of labeled data more effectively. By exploring the training set with the episode-based method, the training set is used more fully and effectively and transferable knowledge is extracted more effectively, so that the representations in the support set are better and the testing set is classified better.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
FIG. 1 is a flow diagram of a meta-learning based few-sample facial expression recognition method in accordance with one or more embodiments;
FIGS. 2(a)-2(b) are schematic diagrams of the expression recognition network main model in accordance with one or more embodiments;
FIG. 3 is a block diagram of a facial expression recognition method in accordance with one or more embodiments.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example I
As shown in fig. 1, according to an aspect of one or more embodiments of the present disclosure, a flow diagram of a meta-learning based few-sample facial expression recognition method is provided.
A meta-learning based few-sample facial expression recognition method, the method comprising:
s101, receiving a face sample set and carrying out data preprocessing;
s102, constructing an expression recognition network main model;
s103, dividing a data set based on a meta-learning method to form a training set and a support/query set, and constructing a sample set and a query set which meet the C-way K-shot by using an epoxy-based method;
s104, inputting the preprocessed sample into a network main model, and optimizing the model;
and S105, receiving the facial data to be recognized, and recognizing the facial expressions according to the optimized model.
In step S101 of this embodiment, the face sample data in the face sample set are face sample pictures, and the data preprocessing of the face sample pictures includes normalizing each face sample picture and normalizing each pixel in the face sample pictures. The specific operation steps of the data preprocessing in this embodiment are as follows:
S1011 normalizes each picture: the mean of each picture is subtracted, and the picture is then rescaled so that its standard deviation is 3.125;
S1012 normalizes each pixel: first the mean-pixel-value picture is computed and the mean pixel at the corresponding position is subtracted from each picture; the standard deviation of each pixel over all training set pictures is then set to 1.
In step S102 of this embodiment, as shown in FIGS. 2(a)-2(b), the expression recognition network main model includes 4 concatenated Convolution Blocks and a final Flatten layer.
Each Convolution Block mainly comprises: a convolutional layer, a batch normalization layer, a ReLU activation function, and a pooling layer. Parameter settings of the convolutional layer: the convolution kernel is 3 × 3 and padding uses the SAME mode; the pooling layer adopts max pooling.
The Flatten layer unifies the features extracted by the training model into an n = 64-dimensional feature vector.
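As a sanity check on the 64-dimensional Flatten feature, the shape arithmetic of the four blocks can be sketched as follows. The 28 x 28 input resolution and the 64 filters per block are assumptions for illustration; the description does not state them.

```python
def conv_block_out(size, pool=2):
    """One Convolution Block (sketch): the 3x3 convolution with SAME
    padding preserves the spatial size, and the subsequent 2x2 max
    pooling halves it (integer floor division)."""
    return size // pool

def flatten_dim(input_size=28, n_blocks=4, n_filters=64):
    """Length of the Flatten-layer feature vector: spatial size after
    the stacked blocks, squared, times the channel count."""
    s = input_size
    for _ in range(n_blocks):
        s = conv_block_out(s)
    return s * s * n_filters
```

With a hypothetical 28 x 28 input, the spatial size shrinks 28 to 14 to 7 to 3 to 1, so the flattened feature has 1 x 1 x 64 = 64 dimensions, matching the n = 64 stated above.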
Fig. 3 presents a framework diagram of the meta-learning based few-sample facial expression recognition method. The main innovation of the method lies in the study of few-sample facial expression recognition, which is divided into three parts: data set construction, feature extraction, and model optimization.
Few-sample learning uses three data sets: training, support, and testing. The training set is used for training, the support set for validation, and the testing set for the final model test. During training, the training set is further used to construct the sample set and the query set required by the algorithm described next.
S103, the specific steps of the data set construction are as follows:
S1031 constructs the training/support/testing sets from the original data set according to the concept of meta-learning (i.e., the training set and the support/testing sets have different label spaces, while the support and testing sets share the same label space). To maximize use of the data set, it is partitioned as follows: three of the seven expressions are randomly selected for the training set, and three expressions are randomly selected from the remaining four for the support/query set, so the sample categories used for training and testing differ each time the program is run.
In the present embodiment, the slash ('/') denotes 'and'.
The C-way K-shot task is constructed by the episode-based method: C classes are selected from the seven expression classes, K samples are selected from each class to construct the sample set used during training, and q further samples are selected from the same C classes to construct the query set.
In a specific implementation, S1032 constructs the sample set and the query set satisfying the C-way K-shot setting by the episode-based method. From the C = 3 selected expression classes, K pictures are selected, where K = 1 or K > 1, for constructing the sample set; from the remaining pictures of the three classes, q pictures are selected for constructing the query set, with q = 5, 15, 20, and so on.
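The label-space split of S1031 and the episode construction of S1032 can be sketched as follows. Whether the q query pictures are drawn per class or in total is ambiguous in the text, so this sketch draws q per class; all names are illustrative.

```python
import random

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

def split_label_spaces(labels=EXPRESSIONS, seed=None):
    """S1031 (sketch): pick 3 of the 7 expressions as the training
    label space, then 3 of the remaining 4 as the support/testing
    label space, so the two spaces are disjoint."""
    rng = random.Random(seed)
    train = rng.sample(labels, 3)
    rest = [c for c in labels if c not in train]
    return train, rng.sample(rest, 3)

def sample_episode(data, classes, k=1, q=5, seed=None):
    """S1032 (sketch): one C-way K-shot episode. `data` maps a class
    name to its list of pictures; k pictures per class form the sample
    (support) set and q further pictures per class form the query set."""
    rng = random.Random(seed)
    sample_set, query_set = [], []
    for c in classes:
        picked = rng.sample(data[c], k + q)
        sample_set += [(x, c) for x in picked[:k]]
        query_set += [(x, c) for x in picked[k:]]
    return sample_set, query_set
```

Each training iteration would draw a fresh episode, which is how the episode-based method explores the training set.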
Feature extraction obtains the 64-dimensional feature of each input sample from the output of the network main model's Flatten layer.
S104, model optimization optimizes the model parameters with the designed loss function and stochastic gradient descent (SGD) so that the total loss is minimized. The specific process is as follows:
s1041, calculating the prototype of each sample in sample set according to the extracted features. In sample set of 3-wayK-shot: if K is 1, the class sampleThe prototype is the feature p of the 64 dimensions of the samplek=f(xi) (ii) a If K>1, according to the Bregman divergence principle, the prototype of class k samples is
Figure BDA0002079177660000061
Wherein N issThe number of samples selected for each category of the expression, SkIs the set of samples of this class in sample set, (x)i,yi) Is the labeled sample in the set, f (x)i) Is a feature extracted through the main model.
S1042 calculates in turn, for each sample $x_q$ in the query set, the distance $d(x_q, p_k)$ to each class prototype $p_k$, and then the probability of $x_q$ belonging to class $k$ from the distances to the prototypes:

$$p(x_q, p_k) = \frac{\exp(-d(x_q, p_k))}{\sum_{k'} \exp(-d(x_q, p_{k'}))}$$

Here, the distance may be the Euclidean distance or the cosine distance.
S1043 applies the loss function $L(x_q) = -\log p(x_q, p_k)$ and optimizes by stochastic gradient descent (SGD).
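Steps S1041 to S1043 together form a prototypical-network style loss, which can be sketched in NumPy as follows. Squared Euclidean distance is used here (cosine distance is the stated alternative), the softmax form of the class probability is reconstructed from the loss -log p(x_q, p_k), and the function names are illustrative.

```python
import numpy as np

def prototypes(features, labels, n_classes):
    """S1041 (sketch): the class-k prototype is the mean of the embedded
    sample-set features of class k (which reduces to the single feature
    when K = 1)."""
    return np.stack([features[labels == k].mean(axis=0)
                     for k in range(n_classes)])

def class_probabilities(query, protos):
    """S1042 (sketch): softmax over negative squared Euclidean distances
    from each query embedding to each prototype."""
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def episode_loss(query, query_labels, protos):
    """S1043 (sketch): mean negative log-likelihood of the true classes,
    to be minimised by SGD over the main-model parameters."""
    p = class_probabilities(query, protos)
    return -np.log(p[np.arange(len(query_labels)), query_labels]).mean()
```

In a full implementation, the gradient of this loss would be propagated back through the feature extractor f and the parameters updated with SGD, as stated in S1043.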
Table 1 gives the pseudo code for the dataset construction and model optimization. (Table 1 appears only as images in the original document and is not reproduced here.)
Example II
According to an aspect of one or more embodiments of the present disclosure, there is provided a sample-less facial expression recognition apparatus based on meta learning.
A meta-learning based few-sample facial expression recognition device, implementing the meta-learning based few-sample facial expression recognition method above, comprises: a data preprocessing module, a main model building module, a data set construction module, a model optimization module and a facial expression recognition module which are connected in sequence.
The data preprocessing module is used for receiving the face sample set and preprocessing data;
the main model building module is used for building an expression recognition network main model;
the data set construction module is used for constructing a training set and a support set/testing set based on meta-learning, so that the training set and the support set/testing set have different label spaces; sample set and query set meeting C-way K-shot for constructing an epicode-based;
and the model optimization module is used for optimizing the model according to the constructed data set and the determined loss function and the random gradient descent method.
And the facial expression recognition module is used for receiving facial data to be recognized and recognizing facial expressions according to the optimized model.
It should be noted that although several modules or sub-modules of the device are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the modules described above may be embodied in one module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into embodiments by a plurality of modules.
Example III
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the meta-learning based few-sample facial expression recognition method.
The steps of a method for identifying facial expressions with few samples based on meta learning in this embodiment are described in detail in the first embodiment, and will not be described in detail here.
Example IV
A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the steps of a method for few-sample facial expression recognition based on meta-learning.
The steps of a method for identifying facial expressions with few samples based on meta learning in this embodiment are described in detail in the first embodiment, and will not be described in detail here.
It is to be understood that throughout the description of the present specification, reference to the term "one embodiment", "another embodiment", "other embodiments", or "first through nth embodiments", etc., is intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, or materials described may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (9)

1. A meta-learning based method for recognizing facial expressions with few samples is characterized by comprising the following steps:
receiving a face sample set, and performing data preprocessing;
constructing a main model for recognizing the expression;
dividing the face sample set based on a meta-learning method: for the seven kinds of expressions, randomly selecting three kinds for the training set, and randomly selecting three kinds for the testing set from the remaining four kinds;
constructing a C-way K-shot task by using an episode-based method, namely constructing a sample set and a query set from the partitioned training set;
inputting the constructed face sample set into a main model, and performing parameter optimization on the main model;
receiving facial data to be recognized, and recognizing facial expressions according to the optimized main model; the main model for expression recognition outputs the 64-dimensional feature of an input sample through its Flatten layer;
based on the extracted features, calculating the prototype of each class of samples in the sample set; in the sample set of the 3-way K-shot task: if K = 1, the class-k prototype is the 64-dimensional feature of the single sample, $p_k = f(x_i)$; if K > 1, according to the Bregman divergence principle, the prototype of the class-k samples is

$$p_k = \frac{1}{N_s} \sum_{(x_i, y_i) \in S_k} f(x_i)$$

wherein $N_s$ is the number of samples selected for each expression category, $S_k$ is the set of class-k samples in the sample set, $(x_i, y_i)$ is a labeled sample in that set, and $f(x_i)$ is a feature extracted through the main model;
sequentially calculating, for each sample $x_q$ in the query set, the distance $d(x_q, p_k)$ to each class prototype $p_k$, and then calculating the probability of belonging to each class from the distance to that class's prototype, wherein the distance is the Euclidean distance or the cosine distance;
using the loss function $L(x_q) = -\log p(x_q, p_k)$ and optimizing by stochastic gradient descent (SGD).
2. The method of claim 1, wherein the facial sample data in the facial sample set are facial sample pictures, and the pre-processing of the facial sample pictures comprises normalizing each facial sample picture and normalizing each pixel in the facial sample pictures.
3. The meta-learning based few-sample facial expression recognition method as claimed in claim 2, wherein each sample picture is normalized by subtracting the mean value from the picture and then setting its standard deviation.
4. The meta-learning based few-sample facial expression recognition method as claimed in claim 2, wherein each pixel in the facial sample pictures is normalized by first calculating the mean-pixel-value picture and subtracting the mean pixel at the corresponding position from each picture, and then setting the standard deviation of each pixel over all training set pictures.
5. The method of claim 1, wherein the main model for facial expression recognition comprises 4 concatenated Convolution Blocks and a final Flatten layer;
each Convolution Block mainly comprises: a convolutional layer, a batch normalization layer, a ReLU activation function and a pooling layer;
parameter settings of the convolutional layer: the convolution kernel is 3 × 3 and padding uses the SAME mode; the pooling layer adopts max pooling;
the Flatten layer is used for unifying the features extracted by the training model into an n = 64-dimensional feature.
6. The method for identifying facial expressions with few samples based on meta-learning as claimed in claim 1, wherein the sample set and the query set satisfying the C-way K-shot setting are constructed by the episode-based method; from the C = 3 classes of expression pictures, K pictures are selected per class, where K = 1 or K > 1, for constructing the sample set; from the remaining pictures of the three classes, q pictures are selected for constructing the query set.
7. A meta-learning based few-sample facial expression recognition system, comprising:
a data pre-processing module configured to: receiving a face sample set, and performing data preprocessing;
a master model building module configured to: constructing a main model for recognizing the expression;
a dataset construction module configured to: dividing a data set based on a meta-learning method: for the seven kinds of expressions, randomly selecting three kinds for the training set, and randomly selecting three kinds for the testing set from the remaining four kinds;
constructing a C-way K-shot task by using an episode-based method, namely constructing a sample set and a query set from the partitioned training set;
a master model optimization module configured to: inputting the constructed face sample set into a main model, and performing parameter optimization on the main model;
a facial expression recognition module configured to: receiving facial data to be recognized, and recognizing facial expressions according to the optimized main model; the main model for expression recognition outputs the 64-dimensional feature of an input sample through its Flatten layer;
based on the extracted features, calculating the prototype of each class of samples in the sample set; in the sample set of the 3-way K-shot task: if K = 1, the class-k prototype is the 64-dimensional feature of the single sample, $p_k = f(x_i)$; if K > 1, according to the Bregman divergence principle, the prototype of the class-k samples is

$$p_k = \frac{1}{N_s} \sum_{(x_i, y_i) \in S_k} f(x_i)$$

wherein $N_s$ is the number of samples selected for each expression category, $S_k$ is the set of class-k samples in the sample set, $(x_i, y_i)$ is a labeled sample in that set, and $f(x_i)$ is a feature extracted through the main model;
sequentially calculating, for each sample $x_q$ in the query set, the distance $d(x_q, p_k)$ to each class prototype $p_k$, and then calculating the probability of belonging to each class from the distance to that class's prototype, wherein the distance is the Euclidean distance or the cosine distance;
using the loss function $L(x_q) = -\log p(x_q, p_k)$ and optimizing by stochastic gradient descent (SGD).
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the method steps of a meta-learning based few-sample facial expression recognition method of any of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method steps of a meta-learning based few-sample facial expression recognition method according to any one of claims 1 to 6.
CN201910465071.1A 2019-05-30 2019-05-30 Meta learning-based few-sample facial expression recognition method and system Active CN110175588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910465071.1A CN110175588B (en) 2019-05-30 2019-05-30 Meta learning-based few-sample facial expression recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910465071.1A CN110175588B (en) 2019-05-30 2019-05-30 Meta learning-based few-sample facial expression recognition method and system

Publications (2)

Publication Number Publication Date
CN110175588A (en) 2019-08-27
CN110175588B (en) 2020-12-29

Family

ID=67696889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910465071.1A Active CN110175588B (en) 2019-05-30 2019-05-30 Meta learning-based few-sample facial expression recognition method and system

Country Status (1)

Country Link
CN (1) CN110175588B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737426B (en) * 2020-05-09 2021-06-01 中国科学院深圳先进技术研究院 Method for training question-answering model, computer equipment and readable storage medium
WO2022011493A1 (en) * 2020-07-13 2022-01-20 广东石油化工学院 Neural semantic memory storage method
CN113591660A (en) * 2021-07-24 2021-11-02 中国石油大学(华东) Micro-expression recognition method based on meta-learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376625A (en) * 2018-10-10 2019-02-22 东北大学 A kind of human facial expression recognition method based on convolutional neural networks
CN109685135B (en) * 2018-12-21 2022-03-25 电子科技大学 Few-sample image classification method based on improved metric learning

Also Published As

Publication number Publication date
CN110175588A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
CN111259142B (en) Specific target emotion classification method based on attention coding and graph convolution network
CN108536679B (en) Named entity recognition method, device, equipment and computer readable storage medium
CN110175588B (en) Meta learning-based few-sample facial expression recognition method and system
CN111143576A (en) Event-oriented dynamic knowledge graph construction method and device
CN112015859A (en) Text knowledge hierarchy extraction method and device, computer equipment and readable medium
CN112819023B (en) Sample set acquisition method, device, computer equipment and storage medium
CN109460737A (en) A kind of multi-modal speech-emotion recognition method based on enhanced residual error neural network
CN108509411A (en) Semantic analysis and device
CN107766324A (en) A kind of text coherence analysis method based on deep neural network
CN111078887B (en) Text classification method and device
CN111950596A (en) Training method for neural network and related equipment
CN110738102A (en) face recognition method and system
CN110046356B (en) Label-embedded microblog text emotion multi-label classification method
CN113761259A (en) Image processing method and device and computer equipment
CN115438215A (en) Image-text bidirectional search and matching model training method, device, equipment and medium
CN113051914A (en) Enterprise hidden label extraction method and device based on multi-feature dynamic portrait
CN113434688B (en) Data processing method and device for public opinion classification model training
CN113722474A (en) Text classification method, device, equipment and storage medium
CN111400494A (en) Sentiment analysis method based on GCN-Attention
CN112418059A (en) Emotion recognition method and device, computer equipment and storage medium
CN112071429A (en) Medical automatic question-answering system construction method based on knowledge graph
CN116775872A (en) Text processing method and device, electronic equipment and storage medium
CN112307048A (en) Semantic matching model training method, matching device, equipment and storage medium
CN110717013B (en) Vectorization of documents
CN112837466B (en) Bill recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant