CN109993100B - Method for realizing facial expression recognition based on deep feature clustering - Google Patents


Info

Publication number
CN109993100B
Authority
CN
China
Prior art keywords
facial expression
network
clustering
loss function
pictures
Prior art date
Legal status
Active
Application number
CN201910240401.7A
Other languages
Chinese (zh)
Other versions
CN109993100A (en)
Inventor
吴晨
李雷
吴婧漪
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910240401.7A
Publication of CN109993100A
Application granted
Publication of CN109993100B


Classifications

    • G06F18/23213 Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Neural network architectures; combinations of networks
    • G06N3/08 Learning methods for neural networks
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT], using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06V40/165 Face detection; localisation; normalisation using facial parts and geometric relationships
    • G06V40/168 Feature extraction; face representation
    • G06V40/174 Facial expression recognition
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a method for realizing facial expression recognition based on deep feature clustering, which comprises the following steps. S1: collect facial expression pictures of various kinds and classify them one by one according to the expression they show. S2: preprocess the pictures by removing blurred ones, obtain facial key points with a convolutional neural network-based cascaded multi-task face detection algorithm, and crop the face pictures uniformly according to the key points. S3: construct a facial expression recognition network based on a convolutional neural network, input the preprocessed facial expression pictures into the network, calculate the loss function, and train the network to minimize it. S4: acquire the trained facial expression recognition network and apply it to actual measurement. The method addresses the low accuracy, overfitting, and similar problems of facial expression recognition.

Description

Method for realizing facial expression recognition based on deep feature clustering
Technical Field
The invention relates to a method for realizing facial expression recognition based on deep feature clustering, applicable to the technical field of computer vision and image processing.
Background
In recent years, with the rapid development of artificial intelligence, deep learning has become an area of intense research. Deep learning excels at solving many problems such as image object recognition, speech recognition, and natural language processing. Among the various types of neural networks, convolutional neural networks are the most intensively studied. Early on, owing to the lack of training data and computational power, it was difficult to train a high-performance convolutional neural network without overfitting. The emergence of large-scale labeled datasets such as ImageNet and the rapid improvement of GPU performance have since led to explosive growth in convolutional neural network research.
With the continuous development of convolutional neural networks, models have become ever better at fitting and analyzing real data; at the same time, to balance speed and accuracy, researchers have proposed many lightweight convolutional neural networks. A lightweight convolutional neural network achieves high inference speed and high accuracy while making full use of its parameters. MobileNet-V2 is a lightweight convolutional neural network developed by Google, characterized by few parameters and the ability to run in real time on a mobile phone.
Facial expression recognition is a fine-grained recognition task, and applying MobileNet-V2 to it directly easily leads to low recognition accuracy or overfitting. For fine-grained facial expression features, how to let the network divide the expressions accurately and easily is a technical problem in urgent need of a solution.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a method for realizing facial expression recognition based on deep feature clustering.
The purpose of the invention is realized by the following technical scheme: the method for realizing facial expression recognition based on deep feature clustering comprises the following steps:
S1: collecting facial expression pictures of various kinds, and classifying them one by one according to facial expression to obtain a classified facial expression data set;
S2: preprocessing the classified facial expression data set pictures obtained in step S1: removing blurred pictures, obtaining facial key points with a convolutional neural network-based cascaded multi-task face detection algorithm, and cropping the face pictures uniformly according to the key points to obtain a preprocessed facial expression data set;
S3: constructing a facial expression recognition network based on a convolutional neural network, inputting the preprocessed facial expression data set pictures obtained in step S2 into the network, calculating the loss function, and training the network to minimize it, thereby obtaining a trained facial expression recognition network;
S4: applying the trained facial expression recognition network obtained in step S3 to actual measurement.
Preferably, in step S1, the collected facial expression pictures need to be balanced across categories, with at least two thousand pictures per expression category, and the faces need to be clear and in a frontal pose.
Preferably, in step S2, the pictures are preprocessed to remove blurred ones, facial key points are then obtained with the convolutional neural network-based cascaded multi-task face detection algorithm, the face pictures are cropped uniformly according to the key points and saved separately according to facial expression; if an expression category has few pictures, data enhancement is applied to its face pictures.
Preferably, in step S3, the convolutional neural network structure is MobileNet-V2; its input layer takes the cropped face picture and it outputs the probability value of each facial expression.
Preferably, in step S3, a deep feature clustering loss is added to the loss function of the convolutional neural network, so that the deep features obtained from the various categories of facial expression pictures through the convolutional neural network differ more strongly.
Preferably, the training of the facial expression recognition algorithm based on deep feature clustering in step S3 comprises the steps of:
S31: inputting the facial expression data preprocessed in step S2 into a pre-trained MobileNet-V2 network in order of expression category, extracting the 1 x 1280 high-dimensional features of the penultimate layer in turn, and clustering the high-dimensional features of each category of expression with the K-means clustering algorithm to obtain K cluster centers for each facial expression, the cluster centers being iteratively updated once per training cycle;
S32: comparing the K cluster centers of each facial expression from step S31 with the same-layer high-dimensional features of each training sample to obtain the clustering loss function;
S33: training the convolutional neural network model to minimize the loss function of the network.
Preferably, the loss function in step S3 is designed as

L = L_CE(ŷ, a) + L_k-means(f, a, c)

wherein the first term is the standard categorical cross-entropy

L_CE(ŷ, a) = -Σ_i a_i log ŷ_i

and the second term is the clustering loss

L_k-means(f, a, c) = ||max(f, c_a) - min(f, c_-a)||

wherein L is the total loss function, L_CE(ŷ, a) is the classification cross-entropy loss function, L_k-means(f, a, c) is the clustering loss function, x is an input facial expression training image, a is the facial expression label corresponding to the input image x, ŷ is the predicted label the MobileNet-V2 network produces for the input image x, f is the 1 x 1280 high-dimensional feature of the penultimate layer that the input image x produces in the MobileNet-V2 network, and c is the set of cluster centers obtained after the high-dimensional features of all training pictures from the pre-trained MobileNet-V2 network are clustered: K cluster centers for each of the N expressions, N x K in total; c_a is the K cluster centers of expression a, and c_-a is the cluster centers of all expressions other than a, (N-1) x K in total.
Compared with the prior art, the technical scheme of the invention has the following technical effects: the invention enlarges the distance between deep image features, which lets the network divide expressions accurately and easily. The facial expression recognition algorithm based on deep feature clustering enlarges the distance between the deep features of facial expression pictures inside the MobileNet-V2 network, making fine-grained facial expression classification more accurate. The method thereby addresses the low accuracy, overfitting, and similar problems of facial expression recognition.
Drawings
Fig. 1 is a structural diagram of MobileNet-V2 in the deep feature clustering-based facial expression recognition algorithm of the present invention.
Fig. 2 is a structural diagram of a residual network block in the deep feature clustering-based facial expression recognition algorithm of the present invention.
Detailed Description
Objects, advantages, and features of the present invention will be illustrated and explained by the following non-limiting description of preferred embodiments. The embodiments are merely exemplary applications of the technical solutions of the invention, and any technical solution formed by equivalent replacement or transformation of them falls within the claimed scope of the invention.
The invention discloses a method for realizing facial expression recognition based on deep feature clustering, which comprises the following steps:
s1: various facial expression pictures are collected and classified one by one according to facial expressions.
The method comprises the following specific steps: find picture websites that carry facial expression pictures and make sure the pictures are relatively clear, then crawl the various facial expression pictures from the websites with a crawler, ensuring more than twenty thousand pictures for each type of facial expression.
S2: preprocessing the picture, removing the fuzzy picture, obtaining five key points of the face by using a convolutional neural network-based cascade multitask face detection algorithm (MTCNN), and uniformly cutting the face picture according to the key points.
And screening the pictures one by one, and removing the pictures with blurs and inconsistent picture contents. And uniformly cutting the screened pictures into 128-by-128 sizes, and respectively storing according to various expressions of the face images.
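By way of a hedged illustration only, this preprocessing step might look as follows in Python. The sketch assumes the facenet-pytorch package for MTCNN and uses the variance of the Laplacian as the blur test; the is_blurred helper and its threshold are hypothetical choices, not details given by the patent.

import cv2
from PIL import Image
from facenet_pytorch import MTCNN

# Hypothetical blur test: variance of the Laplacian (the patent does not
# specify how blurred pictures are detected).
def is_blurred(path, threshold=100.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# MTCNN detects the face with its five key points and crops it uniformly.
mtcnn = MTCNN(image_size=128, margin=0)

def preprocess(path):
    if is_blurred(path):
        return None                       # discard blurred pictures
    img = Image.open(path).convert('RGB')
    boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
    if boxes is None:
        return None                       # no face found, discard
    return mtcnn(img)                     # 3 x 128 x 128 cropped face tensor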
S3: and constructing a facial expression recognition network based on a convolutional neural network, and respectively inputting the preprocessed facial expression pictures into the network to calculate a loss function and train the loss function.
The network structure of MobileNet-V2 is shown in fig. 1. MobileNet-V2 is composed of convolutional layers, residual network blocks, and a global average pooling layer. The convolutional layers extract feature information from the picture through convolution operations, and the extracted information becomes more and more abstract as convolution layers are stacked. The residual network blocks in the structure are shown in fig. 2; a residual network block passes bottom-layer features up into higher layers and suppresses vanishing gradients. The input to MobileNet-V2 is a facial expression picture, and the output is the predicted facial expression label.
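As a rough sketch of how such a network could be set up, torchvision's stock mobilenet_v2 can stand in for the structure of fig. 1, with its classifier head swapped for the expression categories. This is an assumption for illustration, not the patent's exact architecture, and N_CLASSES = 7 is a hypothetical count.

import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 7  # hypothetical number of expression categories

model = models.mobilenet_v2(pretrained=True)
# Replace the 1000-way ImageNet head with an N-way expression head.
model.classifier[1] = nn.Linear(model.last_channel, N_CLASSES)  # last_channel == 1280

def penultimate_features(x):
    # The 1 x 1280 feature of the layer just before the classifier,
    # used later for K-means clustering.
    h = model.features(x)
    return nn.functional.adaptive_avg_pool2d(h, 1).flatten(1)  # (batch, 1280)

logits = model(torch.randn(1, 3, 128, 128))  # expression scores for one picture
probs = logits.softmax(dim=1)                # probability of each expression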
The loss function is composed of a classification cross-entropy loss function and a clustering loss function. The classification cross-entropy loss improves the classification accuracy of the network, while the clustering loss enlarges the difference between the high-dimensional features that different categories of facial expression images produce in the network.
The deep feature clustering loss added to the loss function of the convolutional neural network in step S3 makes the deep features obtained from the various categories of facial expression pictures differ more strongly, which helps the network distinguish fine-grained facial expression features.
The training process specifically comprises the following steps:
s31: and (4) sequentially inputting the preprocessed facial expression data in the step (S2) into a pre-trained Mobilene-V2 network according to the expression classes, sequentially extracting high latitude features of the 1 x 1280 on the penultimate layer in the network, and clustering the high latitude features of the N types of expressions by adopting a K-means clustering algorithm to obtain K clustering centers (clusters) of each facial expression, wherein the N clusters are N x K clusters.
S32: and comparing the N x K clustering centers in the step of S31 with the high latitude characteristics of the same layer of each training sample to obtain a clustering loss function. And calculating 1 x 1280 latitude characteristics of the input facial expression picture during training, finding out the distance between the same type expression cluster farthest from the characteristics and the nearest non-same type expression cluster, and then respectively calculating the distance between the characteristics and the two clusters. The difference of the two distances, i.e. the cluster loss function, is maximized. And after all the training pictures are trained for one round, the network model is stored, N x K clusters are recalculated, and iterative training is performed again.
S33: the convolutional neural network model is trained to minimize a loss function of the network.
The loss function is:

L = L_CE(ŷ, a) + L_k-means(f, a, c)

wherein the first term is the standard categorical cross-entropy

L_CE(ŷ, a) = -Σ_i a_i log ŷ_i

and the second term is the clustering loss

L_k-means(f, a, c) = ||max(f, c_a) - min(f, c_-a)||

In the formula, L is the overall loss function, L_CE(ŷ, a) is the classification cross-entropy loss function, L_k-means(f, a, c) is the clustering loss function, x is the input facial expression training image, a is the facial expression label corresponding to x, ŷ is the predicted label the MobileNet-V2 network produces for x, and f is the 1 x 1280 high-dimensional feature of the penultimate layer that x produces in the network. c is the set of cluster centers obtained after the high-dimensional features of all training pictures from the pre-trained MobileNet-V2 network are clustered: K cluster centers for each of the N expressions (N x K in total). c_a is the K cluster centers of expression a, and c_-a is the cluster centers of every other expression ((N-1) x K in total); here max(f, c_a) denotes the center in c_a farthest from f and min(f, c_-a) the center in c_-a nearest to f, so minimizing the loss tightens each class around its own centers while pushing it away from the others.
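Putting the two terms together, one training step might be sketched as follows, reusing clustering_loss from the sketch above; the weight lam on the clustering term is a hypothetical knob, since the patent gives no explicit weighting between the two terms.

import torch
import torch.nn.functional as F

def total_loss(logits, feats, labels, centers, lam=1.0):
    # logits: (B, N) network outputs; feats: (B, 1280) penultimate features;
    # labels: (B,) expression labels; centers: (N, K, 1280) from step S31.
    ce = F.cross_entropy(logits, labels)
    cl = torch.stack([clustering_loss(f, int(a), centers)
                      for f, a in zip(feats, labels)]).mean()
    return ce + lam * cl

# Typical step: optimizer.zero_grad(); total_loss(...).backward(); optimizer.step()
# Cluster centers are recomputed after each full training round, as step S32 describes.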
S4: and acquiring a trained facial expression recognition network, and applying the trained facial expression recognition network to actual measurement.
In conclusion, the invention obtains a facial expression recognition network model that balances precision and speed, and the generalization capability of the network is strong. The trained facial expression recognition network is obtained by inputting facial expression pictures into the MobileNet-V2 network and training the model with the facial expression recognition algorithm based on deep feature clustering; the resulting network recognizes fine-grained facial expressions better. Applying the algorithm to facial expression recognition enlarges the difference between facial expression classes and eases the problem that fine-grained images are difficult to recognize.
The invention admits various embodiments, and all technical solutions formed by equivalent replacement or equivalent transformation fall within the protection scope of the invention.

Claims (6)

1. A method for realizing facial expression recognition based on deep feature clustering, characterized by comprising the following steps:
S1: collecting facial expression pictures of various kinds, and classifying them one by one according to facial expression to obtain a classified facial expression data set;
S2: preprocessing the classified facial expression data set pictures obtained in step S1: removing blurred pictures, obtaining facial key points with a convolutional neural network-based cascaded multi-task face detection algorithm, and cropping the face pictures uniformly according to the key points to obtain a preprocessed facial expression data set;
S3: constructing a facial expression recognition network based on a convolutional neural network, inputting the preprocessed facial expression data set pictures obtained in step S2 into the network, calculating the loss function, and training the network to minimize it, thereby obtaining a trained facial expression recognition network;
the loss function is designed as
Figure FDA0003736546020000011
Wherein,
Figure FDA0003736546020000012
L k-means (f,a,c)=||max(f,c a )-min(f,c -a )||
wherein L in the formula is a total loss function,
Figure FDA0003736546020000013
for categorizing the cross-entropy loss function, L k-means (f, a, c) is clusteringA loss function, x is an input facial expression training image, a is a facial expression label corresponding to the input image x,
Figure FDA0003736546020000014
obtaining a predicted label of an input image x through a Mobilene-V2 network, f obtaining a high-dimensional feature of a penultimate layer 1280 x 1 of the input image x through a Mobilene-V2 network, c obtaining K clustering centers of N expressions after all the high-dimensional features of a training picture are clustered through a pre-trained Mobilene-V2 network, wherein the K clustering centers are N x K clustering centers in total, and c is a K cluster centers expressed as a, c -a The total number of the cluster centers of all expressions except the expression a is (N-1) × K;
s4: and identifying the trained facial expression obtained in the step S3, and applying the identified network to actual measurement.
2. The method for realizing facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S1, the collected facial expression pictures need to be balanced across categories, with at least two thousand pictures per expression category, and the faces need to be clear and in a frontal pose.
3. The method for realizing facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S2, the pictures are preprocessed to remove blurred ones, facial key points are then obtained with the convolutional neural network-based cascaded multi-task face detection algorithm, the face pictures are cropped uniformly according to the key points and saved separately according to facial expression; if an expression category has few pictures, data enhancement is applied to its face pictures.
4. The method for realizing facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S3, the convolutional neural network structure is MobileNet-V2; its input layer takes the cropped face picture and it outputs the probability value of each facial expression.
5. The method for realizing facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S3, a deep feature clustering loss is added to the loss function of the convolutional neural network, so that the deep features obtained from the various categories of facial expression pictures through the convolutional neural network differ more strongly.
6. The method for realizing facial expression recognition based on deep feature clustering according to claim 1, characterized in that: training the facial expression recognition algorithm based on deep feature clustering in step S3 comprises the steps of:
S31: inputting the facial expression data preprocessed in step S2 into a pre-trained MobileNet-V2 network in order of expression category, extracting the 1 x 1280 high-dimensional features of the penultimate layer in turn, and clustering the high-dimensional features of each category of expression with the K-means clustering algorithm to obtain K cluster centers for each facial expression, the cluster centers being iteratively updated once per training cycle;
S32: comparing the K cluster centers of each facial expression from step S31 with the same-layer high-dimensional features of each training sample to obtain the clustering loss function;
S33: training the convolutional neural network model to minimize the loss function of the network.
CN201910240401.7A 2019-03-27 2019-03-27 Method for realizing facial expression recognition based on deep feature clustering Active CN109993100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910240401.7A CN109993100B (en) 2019-03-27 2019-03-27 Method for realizing facial expression recognition based on deep feature clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910240401.7A CN109993100B (en) 2019-03-27 2019-03-27 Method for realizing facial expression recognition based on deep feature clustering

Publications (2)

Publication Number Publication Date
CN109993100A CN109993100A (en) 2019-07-09
CN109993100B 2022-09-20

Family

ID=67131863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910240401.7A Active CN109993100B (en) 2019-03-27 2019-03-27 Method for realizing facial expression recognition based on deep feature clustering

Country Status (1)

Country Link
CN (1) CN109993100B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569878B (en) * 2019-08-08 2022-06-24 上海汇付支付有限公司 Photograph background similarity clustering method based on convolutional neural network and computer
CN110781784A (en) * 2019-10-18 2020-02-11 高新兴科技集团股份有限公司 Face recognition method, device and equipment based on double-path attention mechanism
CN111126244A (en) * 2019-12-20 2020-05-08 南京邮电大学 Security authentication system and method based on facial expressions
CN111401193B (en) * 2020-03-10 2023-11-28 海尔优家智能科技(北京)有限公司 Method and device for acquiring expression recognition model, and expression recognition method and device
CN111414862B (en) * 2020-03-22 2023-03-24 西安电子科技大学 Expression recognition method based on neural network fusion key point angle change
CN111507224B (en) * 2020-04-09 2022-08-30 河海大学常州校区 CNN facial expression recognition significance analysis method based on network pruning
CN112232116A (en) * 2020-09-08 2021-01-15 深圳微步信息股份有限公司 Facial expression recognition method and device and storage medium
CN113033374A (en) * 2021-03-22 2021-06-25 开放智能机器(上海)有限公司 Artificial intelligence dangerous behavior identification method and device, electronic equipment and storage medium
CN113076930B (en) * 2021-04-27 2022-11-08 东南大学 Face recognition and expression analysis method based on shared backbone network
CN117542106B (en) * 2024-01-10 2024-04-05 成都同步新创科技股份有限公司 Static face detection and data elimination method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304826A (en) * 2018-03-01 2018-07-20 河海大学 Facial expression recognizing method based on convolutional neural networks
CN108764207A (en) * 2018-06-07 2018-11-06 厦门大学 A kind of facial expression recognizing method based on multitask convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190042952A1 (en) * 2017-08-03 2019-02-07 Beijing University Of Technology Multi-task Semi-Supervised Online Sequential Extreme Learning Method for Emotion Judgment of User


Also Published As

Publication number Publication date
CN109993100A (en) 2019-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant