CN113705527A - Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network - Google Patents

Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network Download PDF

Info

Publication number
CN113705527A
CN113705527A
Authority
CN
China
Prior art keywords
network
classification
coarse
recognition
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111049291.XA
Other languages
Chinese (zh)
Other versions
CN113705527B (en)
Inventor
Li Yunfei
Cheng Jixiang
Li Zhidan
Liu Jiawei
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202111049291.XA priority Critical patent/CN113705527B/en
Publication of CN113705527A publication Critical patent/CN113705527A/en
Application granted granted Critical
Publication of CN113705527B publication Critical patent/CN113705527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network, which comprises the following steps: to address the problem that the expression features extracted by a convolutional neural network have too small an inter-class distance and too large an intra-class distance, which degrades classification accuracy, the loss function is improved and four other loss functions are introduced to replace the common Softmax loss, so as to enlarge the inter-class distance of the expression features and reduce the intra-class distance; to address the confusion between certain expression classes in expression recognition, a coarse and fine hierarchical convolutional neural network recognition method is proposed; and to unify the functions of the expression recognition task, an expression recognition system based on convolutional neural networks is designed and developed. The method improves the expression recognition accuracy obtained with different loss functions, the recognition accuracy of the integrated network, and the recognition accuracy for easily confused expression classes.

Description

Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network.
Background
Human beings express emotions in many ways, such as body posture, tone of voice and facial expressions, among which facial expressions play an especially important role. With the advancement of science and technology and the continuous improvement of living standards, people have higher expectations and requirements for intelligent life, and research on facial expression recognition keeps growing; it is becoming increasingly important for computers to produce correct emotion classification results from human expressions. As early as the twentieth century, Ekman and Friesen studied human expressions and defined six basic expressions shared across ethnic groups: anger, disgust, fear, happiness, sadness and surprise. In 1992, contempt was added to the basic expressions, and these categories became the ones adopted by most current expression recognition work.
With the rise of deep learning research, the improvement of computing power and the availability of rich expression data sets in recent years, more and more scholars have studied expression recognition methods based on deep learning. In general, facial expression recognition can be divided into picture-based recognition and video-stream-based recognition; the present invention focuses on picture-based recognition. Picture-based expression recognition comprises two main steps, feature extraction and feature classification, which are also the research focus of the field. Expression recognition plays an increasingly important role in real life, and unlike face recognition, most of its application scenarios have certain particularities.
Disclosure of Invention
The invention provides an expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network, which improves the expression recognition accuracy obtained with different loss functions, the recognition accuracy of the integrated network, and the recognition accuracy of easily confused expressions. The technical scheme adopted by the invention is as follows:
1. An expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network comprises the following steps:
step 1, introducing four loss functions from the field of face recognition
CenterLoss is an auxiliary loss function, usually used together with the Softmax cross-entropy loss, which further reduces the intra-class distance of expression features of the same class while maintaining the distinctiveness of features of different classes; the SphereFace loss function starts from the term w^T x = ||w|| ||x|| cos θ in the Softmax cross-entropy loss, i.e. the product of the feature vector and the weight vector contains angle information, so SphereFace makes the learned features follow an angular distribution, and, to make the features learn more separable angular characteristics, it improves on Softmax by normalizing the weights and setting the bias to zero; the CosFace loss function is similar to SphereFace and is likewise an improvement on Softmax that moves the classification of features into the angle space, and compared with SphereFace, CosFace additionally normalizes the feature x_i to unit length so that the final classification result depends only on the angle between the weight vector and the feature vector, but since a too small normalized value of x_i would make the training loss too large, a scaling factor s and a penalty factor m are introduced, yielding more separable features; the ArcFace loss function is similar to the CosFace loss function, except that the penalty m is applied to the angle θ itself, directly imposing a penalty constraint on the classification boundary in the angle space, and likewise the ArcFace loss function normalizes the features and the weights and introduces a scaling factor s;
step 2, constructing an expression recognition network ResNet-EnLoss that integrates several different loss functions
A single network often suffers from insufficient feature extraction, so a multi-network integration approach can be adopted to alleviate this problem; moreover, integrating several convolutional neural networks of different types can further improve the accuracy and generalization ability of the model in classification problems. Therefore, on the basis of the expression recognition networks trained with the four different loss functions, an ensemble learning method is adopted to integrate the classification results of the individual networks, using average voting and majority voting as the decision strategies;
step 3, designing a coarse and fine hierarchical expression recognition network based on a convolutional neural network
Regarding the structure of the coarse and fine hierarchical convolutional neural network: because the features extracted by the first few layers of a convolutional neural network are shallow features, the idea of transfer learning is borrowed and the coarse classification network and the fine classification networks share these first few layers. The overall network is divided into 3 parts: the coarse classification network, the fine classification network for non-confusable classes and the fine classification network for confusable classes. The coarse classification network separates the confusable-class expressions from the non-confusable-class expressions and thus performs coarse classification; the two fine classification networks then classify accurately on top of the coarse result, handling the non-confusable classes and the confusable classes respectively. The main purpose of this structure is to strip the confusable-class expressions from the full set of categories and classify them separately, thereby reducing confusion among the confusable classes.
Step 4, designing a facial expression recognition system
The system mainly provides selection of the expression recognition network model, facial expression picture recognition and real-time facial expression recognition, and it also displays the recognition time, the recognition result, the probability of each expression class, and the picture or real-time recognition effect;
2. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein: the four loss functions CenterLoss, SphereFace, CosFace and ArcFace are introduced into the network to enlarge the inter-class distance of the expression features and reduce the intra-class distance, while also alleviating the problem that a single network extracts only a single kind of feature.
3. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein: in order to further improve the recognition accuracy and generalization ability of the model, two ensemble learning methods, average voting and majority voting, are adopted to integrate the four networks.
4. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein the main function of the coarse network is to separate the confusable-class expressions and provide a preliminary classification for the subsequent fine classification networks; the coarse network is a binary classification network built on ResNet-18. The fine networks are divided into a classification network for non-confusable expressions and a classification network for confusable expressions; because the features extracted by the first few layers of a convolutional neural network are shallow features, the two fine classification networks adopt the idea of transfer learning and share the first few layers with the coarse network, which strengthens the relevance between the coarse and fine networks and reduces the network size.
5. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein the system realizes facial expression picture recognition and real-time video recognition and provides a network model switching function; the picture recognition function recognizes facial expressions in static pictures, and the real-time video recognition function performs real-time facial expression recognition with a camera.
The invention has the following technical characteristics:
1. Based on improvements to four loss functions, two integrated expression recognition networks using average voting and majority voting are constructed, which improves expression recognition accuracy.
2. The invention designs a coarse and fine hierarchical convolutional neural network expression recognition method, which improves the recognition accuracy of easily confused expression classes.
3. The invention designs a facial expression recognition system based on deep learning, which has a certain degree of real-time performance and applicability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a coarse network structure according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a fine network structure according to an embodiment of the present invention.
FIG. 4 is a system UI layout diagram of an embodiment of the invention.
Detailed Description
The invention will be further illustrated with reference to specific examples:
The invention discloses an expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network, which comprises the following steps:
step 1, introducing four loss functions from the field of face recognition
CenterLoss is an auxiliary loss function, usually used together with the Softmax cross-entropy loss, that further reduces the intra-class distance of expression features of the same class while preserving the distinctiveness of features of different classes. Its formula is

L_C = \frac{1}{2} \sum_{i=1}^{m} \left\| x_i - c_{y_i} \right\|_2^2

where m denotes the number of training samples per batch, x_i denotes the feature to be classified, and c_{y_i} denotes the feature center of the y_i-th class. This auxiliary function is combined with the Softmax loss in the form L = L_{Softmax} + \lambda L_{Center} as the loss function of the network, where \lambda is a balance factor controlling the two losses; when \lambda = 0 the loss degenerates into the Softmax cross-entropy loss.
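For concreteness, the following PyTorch-style sketch illustrates the CenterLoss term and its combination with the Softmax loss as L = L_Softmax + λ·L_Center. The class count, feature dimension, λ value and batch averaging are illustrative assumptions and do not come from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterLoss(nn.Module):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2, averaged over the batch here."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center c_y per expression class.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        batch_centers = self.centers[labels]              # (m, feat_dim)
        return 0.5 * ((features - batch_centers) ** 2).sum(dim=1).mean()


# Joint loss L = L_Softmax + lambda * L_Center; lambda = 0 recovers plain Softmax.
num_classes, feat_dim, lam = 7, 512, 0.01                 # illustrative values
center_loss = CenterLoss(num_classes, feat_dim)
classifier = nn.Linear(feat_dim, num_classes)

features = torch.randn(32, feat_dim)                      # features x_i from the backbone
labels = torch.randint(0, num_classes, (32,))
loss = F.cross_entropy(classifier(features), labels) + lam * center_loss(features, labels)
```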
The SphereFace loss function starts from the term w^T x = ||w|| ||x|| cos θ in the Softmax cross-entropy loss, i.e. the product of the feature vector and the weight vector contains angle information, so the learned features follow an angular distribution. To make the features learn more separable angular characteristics, SphereFace improves on Softmax by normalizing the weights and setting the bias to zero. The SphereFace loss function has the form

L = -\frac{1}{N} \sum_{i} \log \frac{ e^{\|x_i\| \cos(m\,\theta_{y_i,i})} }{ e^{\|x_i\| \cos(m\,\theta_{y_i,i})} + \sum_{j \neq y_i} e^{\|x_i\| \cos\theta_{j,i}} }

where x_i denotes the feature to be classified, \theta_{y_i,i} is the angle between the weight vector w_{y_i} and the feature vector x_i, \theta_{j,i} is the angle between the weight vector w_j and the feature vector x_i, and m is a penalty factor that controls the aggregation degree of features of the same class and thereby the distance between different classes. SphereFace can both reduce the intra-class distance of the expression features and increase their inter-class distance.
The CosFace loss function is similar to SphereFace and is likewise an improvement on Softmax that moves the classification of features into the angle space. Compared with SphereFace, CosFace additionally normalizes the feature x_i to unit length, so that the final classification result depends only on the angle between the weight vector and the feature vector. However, because a too small normalized value of x_i would make the training loss too large, a scaling factor s and a penalty margin m are introduced, which yields more separable features. The CosFace loss function has the form

L = -\frac{1}{N} \sum_{i} \log \frac{ e^{s(\cos\theta_{y_i,i} - m)} }{ e^{s(\cos\theta_{y_i,i} - m)} + \sum_{j \neq y_i} e^{s \cos\theta_{j,i}} }

where s is the scaling factor, m is the penalty margin, \theta_{y_i,i} is the angle between the weight vector w_{y_i} and the feature vector x_i, and \theta_{j,i} is the angle between the weight vector w_j and the feature vector x_i.
The ArcFace loss function is similar to the CosFace loss function, except that the penalty m is applied to the angle \theta itself, directly imposing a penalty constraint on the classification boundary in the angle space. Likewise, the ArcFace loss function normalizes the features and the weights and introduces a scaling factor s. The ArcFace loss function has the form

L = -\frac{1}{N} \sum_{i} \log \frac{ e^{s \cos(\theta_{y_i} + m)} }{ e^{s \cos(\theta_{y_i} + m)} + \sum_{j \neq y_i} e^{s \cos\theta_j} }

where \theta_{y_i} is the angle between the input feature and the weight vector of the ArcFace loss layer, s is the feature scaling parameter, and m is the angle penalty parameter.
Step 2, constructing an expression recognition network ResNet-EnLoss that integrates several different loss functions
A single network often suffers from insufficient feature extraction, so a multi-network integration approach can be adopted to alleviate this problem; moreover, integrating several convolutional neural networks of different types can further improve the accuracy and generalization ability of the model in classification problems. Therefore, on the basis of the expression recognition networks trained with the four different loss functions, an ensemble learning method is adopted to integrate the classification results of the individual networks, using average voting and majority voting as the decision strategies. Average voting determines the class with the highest score by averaging the posterior probabilities output by the individual networks, calculated as

H(x) = \frac{1}{N} \sum_{i=1}^{N} h_i(x)

where N is the number of integrated networks and h_i(x) is the output of the i-th network in the ensemble; the class with the largest entry of H(x) is selected. Majority voting follows the principle that the minority obeys the majority: the final predicted label is obtained from the predicted labels of the individual networks. For each sample, assuming the predicted classes of the N networks are {C_1, C_2, ..., C_N}, the class c_i that appears most often is taken as the prediction of the ensemble.
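A minimal sketch of the two ensemble strategies follows, assuming the four networks output class posterior probabilities of the same shape; the batch size, class count and network outputs are illustrative.

```python
import torch


def average_voting(prob_list):
    # H(x) = (1/N) * sum_i h_i(x); the class with the largest averaged posterior wins.
    mean_prob = torch.stack(prob_list, dim=0).mean(dim=0)     # (batch, num_classes)
    return mean_prob.argmax(dim=1)


def majority_voting(pred_list):
    # The label predicted by the most networks wins (minority obeys majority).
    preds = torch.stack(pred_list, dim=0)                     # (N, batch)
    return preds.mode(dim=0).values


# Usage with four networks (e.g. the CenterLoss / SphereFace / CosFace / ArcFace variants).
probs = [torch.softmax(torch.randn(8, 7), dim=1) for _ in range(4)]
print(average_voting(probs))
print(majority_voting([p.argmax(dim=1) for p in probs]))
```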
Step 3, designing a coarse and fine hierarchical expression recognition network based on a convolutional neural network
The output of ResNet-18 is adjusted to a binary output. To adapt to the data set used here, the convolution kernel of the first layer is reduced to 3x3, the convolution kernels of the convolutional layers in the middle of the network are modified from 3x3 to 2x2, the fully connected layer is modified to size (512, 64), and a further fully connected layer of size (64, 2) is added; in addition, interfaces are reserved at specific nodes of the network so that the fine networks can share its first few layers, leaving room for later experiments on the optimal interface position. The fine networks are divided into a classification network for non-confusable expressions and a classification network for confusable expressions; because the features extracted by the first few layers of a convolutional neural network are shallow features, the two fine classification networks adopt the idea of transfer learning and share the first few layers with the coarse network, which strengthens the relevance between the coarse and fine networks and reduces the network size. Each fine classification network therefore consists of the first few layers shared with the coarse network plus its own branch, so structurally the overall network is parallel. The fine classification network for non-confusable classes handles the expressions that are not easily confused; since the preceding analysis shows that there are more non-confusable classes than confusable ones and that the confusable classes are mostly confused in pairs, the non-confusable fine classification network is designed as a multi-class network and the confusable fine classification network as a binary classification network. A sketch of one possible realization of this structure is given below.
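The following PyTorch sketch illustrates one possible realization of the coarse-and-fine structure under stated assumptions: a ResNet-18 stem with a 3x3 first convolution shared by all branches, a binary coarse head with the (512, 64) and (64, 2) fully connected layers, and two parallel fine heads. The split point, the fine-branch channel sizes, the input resolution and the class counts are assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


class CoarseFineNet(nn.Module):
    def __init__(self, num_easy: int = 5, num_confusable: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        # First convolution reduced to 3x3 as described above (stride/padding assumed).
        backbone.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        # Shallow layers shared by the coarse and fine branches (split point assumed).
        self.shared = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.layer1)
        # Coarse branch: remaining ResNet stages plus the (512, 64) and (64, 2) heads.
        self.coarse_branch = nn.Sequential(
            backbone.layer2, backbone.layer3, backbone.layer4,
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))

        def fine_head(num_classes: int) -> nn.Sequential:
            # Lightweight parallel fine branch on top of the shared shallow features.
            return nn.Sequential(
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes))

        self.fine_easy = fine_head(num_easy)               # non-confusable classes
        self.fine_confusable = fine_head(num_confusable)   # confusable classes (binary)

    def forward(self, x):
        feat = self.shared(x)
        # The coarse decision would route the sample to one of the fine branches;
        # both fine outputs are returned here for simplicity.
        return self.coarse_branch(feat), self.fine_easy(feat), self.fine_confusable(feat)


# Usage on a dummy batch of 48x48 face crops.
model = CoarseFineNet()
coarse_out, fine_easy_out, fine_conf_out = model(torch.randn(2, 3, 48, 48))
```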
Step 4, designing a facial expression recognition system
The main modules of the system can be roughly divided into four types: model selection, facial expression picture recognition, real-time facial expression recognition, and auxiliary functions. The three main function buttons are placed at the upper left of the interface, the recognition time and recognition result displays at the upper right, the recognition effect display at the lower left, and the display of the recognition probabilities of the expression classes at the lower right. According to the functions of each module, the three main functions of the system (model selection, facial expression picture recognition and real-time facial expression recognition) are designed as tool buttons, while the other modules use QLabel controls. In addition, a status description column is placed behind the three main function buttons to describe and prompt their current state. Initial-state interfaces are designed for the recognition effect display and for the per-class recognition probability display: for the recognition effect display, the laboratory logo is shown when the system is initialized and when real-time recognition or picture recognition finishes; for the per-class probability display, the probability values of all expression classes are set to zero at initialization and when recognition finishes. The UI of the system is designed and laid out in Qt Designer and then converted into a .py file with pyuic5 for the programming and implementation of the subsequent system functions. A minimal wiring sketch follows.
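As an illustration only, the following PyQt5 sketch shows how a Qt Designer .ui file converted with pyuic5 might be loaded and how the three main tool buttons could be wired to the status description column. The module name main_window_ui, the class Ui_MainWindow and all widget names are hypothetical and are not taken from the patent.

```python
import sys
from PyQt5 import QtWidgets
# Generated beforehand with:  pyuic5 main_window.ui -o main_window_ui.py
from main_window_ui import Ui_MainWindow   # hypothetical module and class names


class ExpressionWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        # Hypothetical tool buttons for the three main functions.
        self.ui.btnSelectModel.clicked.connect(self.on_select_model)
        self.ui.btnPicture.clicked.connect(self.on_recognize_picture)
        self.ui.btnRealtime.clicked.connect(self.on_realtime)

    def on_select_model(self):
        self.ui.statusLabel.setText("Model selected")          # status description column

    def on_recognize_picture(self):
        self.ui.statusLabel.setText("Recognizing picture ...")

    def on_realtime(self):
        self.ui.statusLabel.setText("Real-time recognition running ...")


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = ExpressionWindow()
    window.show()
    sys.exit(app.exec_())
```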

Claims (5)

1. An expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network comprises the following steps:
step 1, introducing four loss functions from the field of face recognition
CenterLoss is an auxiliary loss function, usually used together with the Softmax cross-entropy loss, which further reduces the intra-class distance of expression features of the same class while maintaining the distinctiveness of features of different classes; the SphereFace loss function starts from the term w^T x = ||w|| ||x|| cos θ in the Softmax cross-entropy loss, i.e. the product of the feature vector and the weight vector contains angle information, so SphereFace makes the learned features follow an angular distribution, and, to make the features learn more separable angular characteristics, it improves on Softmax by normalizing the weights and setting the bias to zero; the CosFace loss function is similar to SphereFace and is likewise an improvement on Softmax that moves the classification of features into the angle space, and compared with SphereFace, CosFace additionally normalizes the feature x_i to unit length so that the final classification result depends only on the angle between the weight vector and the feature vector, but since a too small normalized value of x_i would make the training loss too large, a scaling factor s and a penalty factor m are introduced, yielding more separable features; the ArcFace loss function is similar to the CosFace loss function, except that the penalty m is applied to the angle θ itself, directly imposing a penalty constraint on the classification boundary in the angle space, and likewise the ArcFace loss function normalizes the features and the weights and introduces a scaling factor s;
step 2, constructing an expression recognition network ResNet-EnLoss that integrates several different loss functions
A single network often suffers from insufficient feature extraction, so a multi-network integration approach can be adopted to alleviate this problem; moreover, integrating several convolutional neural networks of different types can further improve the accuracy and generalization ability of the model in classification problems. Therefore, on the basis of the expression recognition networks trained with the four different loss functions, an ensemble learning method is adopted to integrate the classification results of the individual networks, using average voting and majority voting as the decision strategies;
step 3, designing a coarse and fine hierarchical expression recognition network based on a convolutional neural network
Regarding the structure of the coarse and fine hierarchical convolutional neural network: because the features extracted by the first few layers of a convolutional neural network are shallow features, the idea of transfer learning is borrowed and the coarse classification network and the fine classification networks share these first few layers. The overall network is divided into 3 parts: the coarse classification network, the fine classification network for non-confusable classes and the fine classification network for confusable classes. The coarse classification network separates the confusable-class expressions from the non-confusable-class expressions and thus performs coarse classification; the two fine classification networks then classify accurately on top of the coarse result, handling the non-confusable classes and the confusable classes respectively. The main purpose of this structure is to strip the confusable-class expressions from the full set of categories and classify them separately, thereby reducing confusion among the confusable classes.
Step 4, designing a facial expression recognition system
The system mainly comprises an expression recognition network model selection function, a facial expression picture recognition function and a real-time facial expression recognition function; in addition, the system displays the recognition time, the recognition result, the probability of each expression class, and the picture or real-time recognition effect.
2. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein: the four loss functions CenterLoss, SphereFace, CosFace and ArcFace are introduced into the network to enlarge the inter-class distance of the expression features and reduce the intra-class distance, while also alleviating the problem that a single network extracts only a single kind of feature.
3. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein: in order to further improve the recognition accuracy and generalization ability of the model, two ensemble learning methods, average voting and majority voting, are adopted to integrate the four networks.
4. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein the main function of the coarse network is to separate the confusable-class expressions and provide a preliminary classification for the subsequent fine classification networks; the coarse network is a binary classification network built on ResNet-18. The fine networks are divided into a classification network for non-confusable expressions and a classification network for confusable expressions; because the features extracted by the first few layers of a convolutional neural network are shallow features, the two fine classification networks adopt the idea of transfer learning and share the first few layers with the coarse network, which strengthens the relevance between the coarse and fine networks and reduces the network size.
5. The expression recognition method based on loss function integration and a coarse and fine hierarchical convolutional neural network as claimed in claim 1, wherein the system realizes facial expression picture recognition and real-time video recognition and provides a network model switching function; the picture recognition function recognizes facial expressions in static pictures, and the real-time video recognition function performs real-time facial expression recognition with a camera.
CN202111049291.XA 2021-09-08 2021-09-08 Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network Active CN113705527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111049291.XA CN113705527B (en) 2021-09-08 2021-09-08 Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111049291.XA CN113705527B (en) 2021-09-08 2021-09-08 Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network

Publications (2)

Publication Number Publication Date
CN113705527A true CN113705527A (en) 2021-11-26
CN113705527B CN113705527B (en) 2023-09-22

Family

ID=78659209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111049291.XA Active CN113705527B (en) 2021-09-08 2021-09-08 Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network

Country Status (1)

Country Link
CN (1) CN113705527B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107077625A (en) * 2014-10-27 2017-08-18 电子湾有限公司 The deep convolutional neural networks of layering
KR20200000824A (en) * 2018-06-25 2020-01-03 한국과학기술원 Method for recognizing facial expression based on deep-learning model using center-dispersion loss function
WO2020114118A1 (en) * 2018-12-07 2020-06-11 深圳光启空间技术有限公司 Facial attribute identification method and device, storage medium and processor
CN109919177A (en) * 2019-01-23 2019-06-21 西北工业大学 Feature selection approach based on stratification depth network
CN110309888A (en) * 2019-07-11 2019-10-08 南京邮电大学 A kind of image classification method and system based on layering multi-task learning
CN110852288A (en) * 2019-11-15 2020-02-28 苏州大学 Cell image classification method based on two-stage convolutional neural network
CN110909785A (en) * 2019-11-18 2020-03-24 西北工业大学 Multitask Triplet loss function learning method based on semantic hierarchy

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JINGYING CHEN et al.: "Automatic social signal analysis: Facial expression recognition using difference convolution neural network", J. Parallel Distrib. Comput., pages 97-102 *
RUICONG ZHI et al.: "Development of a direct mapping model between hedonic rating and facial responses by dynamic facial expression representation", Food Research International, pages 1-10 *
FU XIAOLONG et al.: "Facial expression recognition with residual network and loss function ensemble", Control Engineering of China, vol. 29, no. 3, pages 522-529 *
YANG CHUNJIAN: "Facial expression recognition based on local amplitude coding", China Master's Theses Full-text Database, Information Science and Technology Series, pages 138-1601 *
CHEN KEFAN: "Research on vision-based defect detection methods for underground pipelines", China Master's Theses Full-text Database, Engineering Science and Technology II Series, pages 038-1252 *

Also Published As

Publication number Publication date
CN113705527B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
Savchenko et al. Classifying emotions and engagement in online learning based on a single facial expression recognition neural network
Wen et al. Ensemble of deep neural networks with probability-based fusion for facial expression recognition
Punyani et al. Neural networks for facial age estimation: a survey on recent advances
Wang et al. Brain-inspired deep networks for image aesthetics assessment
WO2021057056A1 (en) Neural architecture search method, image processing method and device, and storage medium
Jiang et al. A survey on artificial intelligence in Chinese sign language recognition
CN108027899A (en) Method for the performance for improving housebroken machine learning model
CN104463191A (en) Robot visual processing method based on attention mechanism
CN113128369B (en) Lightweight network facial expression recognition method fusing balance loss
CN107851198A (en) Media categories
CN108804453A (en) A kind of video and audio recognition methods and device
CN110135251B (en) Group image emotion recognition method based on attention mechanism and hybrid network
Chen et al. Discriminative BoW framework for mobile landmark recognition
Chen et al. Online control programming algorithm for human–robot interaction system with a novel real-time human gesture recognition method
CN109886281A (en) One kind is transfinited learning machine color image recognition method based on quaternary number
CN110110724A (en) The text authentication code recognition methods of function drive capsule neural network is squeezed based on exponential type
Kirana et al. Emotion recognition using fisher face-based viola-jones algorithm
CN113255602A (en) Dynamic gesture recognition method based on multi-modal data
Bari et al. AestheticNet: deep convolutional neural network for person identification from visual aesthetic
CN114764869A (en) Multi-object detection with single detection per object
Fujii et al. Hierarchical group-level emotion recognition
Wu et al. Generic proposal evaluator: A lazy learning strategy toward blind proposal quality assessment
Wang et al. A novel multiface recognition method with short training time and lightweight based on ABASNet and H-softmax
Jadhav et al. HDL-PI: hybrid DeepLearning technique for person identification using multimodal finger print, iris and face biometric features
Behera et al. Regional attention network (ran) for head pose and fine-grained gesture recognition

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant