CN109993100A - Method for realizing facial expression recognition based on deep feature clustering - Google Patents
- Publication number
- CN109993100A (application CN201910240401.7A; granted as CN109993100B)
- Authority
- CN
- China
- Prior art keywords
- expression
- human face
- cluster
- picture
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The present invention discloses an implementation method of facial expression recognition based on deep feature clustering, comprising the following steps. S1: collect various facial expression pictures and classify them one by one by expression. S2: pre-process the pictures by removing blurred photos, then obtain the facial key points with a cascaded multi-task face detection algorithm based on convolutional neural networks, and crop the face pictures uniformly according to the key points. S3: build a facial expression recognition network based on a convolutional neural network, feed the pre-processed facial expression pictures into the network, compute the loss function, and train. S4: obtain the trained facial expression recognition network and apply it to real-world testing. The method addresses the low accuracy and over-fitting problems of facial expression recognition.
Description
Technical field
The present invention relates to an implementation method of facial expression recognition based on deep feature clustering, and belongs to the technical field of computer vision.
Background technique
In recent years, with the rapid development of artificial intelligence, deep learning has become a popular research field. Deep learning performs well in many areas, such as image object recognition, speech recognition, and natural language processing. Among the various types of neural networks, convolutional neural networks are the most intensively studied. Early on, owing to the lack of training data and computing power, it was very difficult to train high-performance convolutional neural networks without over-fitting. The appearance of large-scale labelled data such as ImageNet and the rapid improvement of GPU performance led to an explosion of research on convolutional neural networks.
With the continuous development of convolutional neural networks, models have become ever stronger at fitting real data. To balance speed and accuracy, researchers have proposed many lightweight convolutional neural networks. A lightweight convolutional neural network can reach good accuracy and make full use of its parameters while achieving high inference speed. Mobilenet-V2 is a lightweight convolutional neural network developed by Google; its main feature is that it has few parameters and can run in real time on a mobile phone.
Facial expression recognition is a fine-grained recognition task. Applying Mobilenet-V2 directly to facial expression recognition easily leads to low accuracy or over-fitting. For fine-grained facial expression features, how to make the network divide expressions accurately is a technical problem in urgent need of a solution.
Summary of the invention
The object of the invention is to solve the above problems in the prior art by proposing an implementation method of facial expression recognition based on deep feature clustering.
The purpose of the invention is achieved through the following technical solution: an implementation method of facial expression recognition based on deep feature clustering, the method comprising the following steps:
S1: collect various facial expression pictures and classify them one by one by expression, obtaining a classified facial expression data set.
S2: pre-process the classified facial expression data set obtained in step S1 by removing blurred photos, then obtain facial key points with a cascaded multi-task face detection algorithm based on convolutional neural networks, and crop the face pictures uniformly according to the key points, obtaining a pre-processed facial expression data set.
S3: build a facial expression recognition network based on a convolutional neural network, feed the pre-processed facial expression data set pictures obtained in step S2 into the network, compute the loss function, and train, obtaining a trained facial expression recognition network.
S4: apply the trained facial expression recognition network obtained in step S3 to real-world testing.
Preferably, in step S1, the collected facial expression pictures should be balanced across classes, with at least two thousand pictures per expression class, and the faces should be clear and in an upright pose.
Preferably, in step S2, the pictures are pre-processed by removing blurred photos, facial key points are obtained with the cascaded multi-task face detection algorithm based on convolutional neural networks, the face pictures are cropped uniformly according to the key points and saved separately by expression class; if a certain expression class has few pictures, data augmentation is applied to that class.
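The patent does not specify which transforms the data augmentation uses; the sketch below oversamples an under-represented expression class with simple label-preserving transforms in NumPy. The helper name and the transform set (flip, shift, brightness jitter) are illustrative assumptions, not part of the patent.

```python
import numpy as np

def augment_minority_class(images, target_count, rng=None):
    # Oversample a small expression class until it reaches target_count.
    # `images` is a list of HxWxC uint8 arrays of one expression class.
    rng = rng if rng is not None else np.random.default_rng(0)
    out = list(images)
    while len(out) < target_count:
        img = images[rng.integers(len(images))]
        choice = rng.integers(3)
        if choice == 0:
            aug = img[:, ::-1].copy()                            # horizontal flip
        elif choice == 1:
            aug = np.roll(img, int(rng.integers(1, 5)), axis=1)  # small shift
        else:
            delta = int(rng.integers(-20, 21))                   # brightness jitter
            aug = np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)
        out.append(aug)
    return out
```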
Preferably, in step S3, the convolutional neural network structure is Mobilenet-V2; the input layer takes the cropped face picture, and the output is the probability of each facial expression class.
Preferably, in step S3, a deep-feature cluster loss is added to the loss function of the convolutional neural network, so that the deep features obtained by the network for facial expression pictures of different classes differ more strongly.
Preferably, in step S3, training the facial expression recognition algorithm based on deep feature clustering comprises the steps:
S31: feed the facial expression data pre-processed in step S2, class by class, into the pre-trained Mobilenet-V2 network, extract the 1280×1 high-dimensional feature of the penultimate layer, and cluster the high-dimensional features of each expression class with the K-means clustering algorithm, obtaining K cluster centres for each facial expression; the cluster centres are updated once per training iteration.
S32: compare the K cluster centres of each facial expression from step S31 with the same-layer high-dimensional feature of each training sample, obtaining the cluster loss function.
S33: train the convolutional neural network model so that the loss function of the network is minimized.
Preferably, in step S3 the loss function is designed as

$$L = L_{CE}(a, \hat{a}) + L_{k\text{-}means}(f, a, c)$$

where

$$L_{k\text{-}means}(f, a, c) = \left\lVert \max_{k}\,\lVert f - c_a^{(k)} \rVert - \min_{j}\,\lVert f - c_{-a}^{(j)} \rVert \right\rVert$$

Here $L$ is the total loss function, $L_{CE}(a, \hat{a})$ is the classification cross-entropy loss, and $L_{k\text{-}means}(f, a, c)$ is the cluster loss; $x$ is the input facial expression training image, $a$ is the facial expression label of input image $x$, $\hat{a}$ is the label predicted for image $x$ by the Mobilenet-V2 network, $f$ is the 1280×1 penultimate-layer feature of image $x$ produced by the Mobilenet-V2 network, and $c$ denotes the cluster centres obtained by clustering all high-dimensional features from the pre-trained Mobilenet-V2 network: K centres for each of the N expression classes, N*K centres in total. $c_a$ denotes the K cluster centres of expression $a$, and $c_{-a}$ the K cluster centres of all expressions other than $a$, (N-1)*K centres in total.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effect: the invention enlarges the spacing between deep image features, so that the network can easily achieve an accurate division of expressions. The facial expression recognition algorithm based on deep feature clustering widens the distance between the deep features of facial expression pictures in the Mobilenet-V2 network, making fine-grained facial expression classification more accurate. The method solves the low accuracy and over-fitting problems of facial expression recognition.
Detailed description of the invention
Fig. 1 is the structure diagram of Mobilenet-V2 in the facial expression recognition algorithm based on deep feature clustering of the present invention.
Fig. 2 is the structure diagram of the residual network block in the facial expression recognition algorithm based on deep feature clustering of the present invention.
Specific embodiment
The purpose, advantages, and features of the present invention are illustrated and explained by the following non-limiting description of preferred embodiments. These embodiments are only prominent examples of applying the technical solution of the invention; all technical solutions formed by equivalent replacement or equivalent transformation fall within the scope of protection of the present invention.
The present invention discloses an implementation method of facial expression recognition based on deep feature clustering, comprising the following steps.
S1: collect various facial expression pictures and classify them one by one by expression.
Specifically: find picture websites, locate facial expression pictures, and make sure the pictures are reasonably clear. Use web-crawling techniques to crawl each class of facial expression pictures from the websites, ensuring more than 2,000 pictures per class.
S2: pre-process the pictures by removing blurred photos, then obtain the five facial key points with the cascaded multi-task face detection algorithm based on convolutional neural networks (MTCNN), and crop the face pictures uniformly according to the key points.
Screen the pictures one by one, removing those that are blurred or whose content does not match. Crop the screened pictures uniformly to 128*128 and save the face images separately by expression class.
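The key-point-based cropping above can be sketched as follows: a square region is taken around the five MTCNN key points and would then be resized to 128*128. The `crop_face` helper and the margin value are illustrative assumptions; the patent only states that faces are cut uniformly by key point (libraries such as facenet-pytorch provide MTCNN detectors that return these five points).

```python
import numpy as np

def crop_face(img, keypoints, margin=0.3):
    # Square crop around the five face key points (two eyes, nose tip,
    # two mouth corners); the caller would then resize the crop to 128x128.
    pts = np.asarray(keypoints, dtype=float)          # shape (5, 2) as (x, y)
    cx, cy = pts.mean(axis=0)
    half = (1.0 + margin) * (pts.max(axis=0) - pts.min(axis=0)).max() / 2.0
    h, w = img.shape[:2]
    x0, x1 = int(max(cx - half, 0)), int(min(cx + half, w))
    y0, y1 = int(max(cy - half, 0)), int(min(cy + half, h))
    return img[y0:y1, x0:x1]
```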
S3: build a facial expression recognition network based on a convolutional neural network, feed the pre-processed facial expression pictures into the network, compute the loss function, and train.
The network structure of Mobilenet-V2 is shown in Fig. 1. Mobilenet-V2 is composed of convolutional layers, residual network blocks, a global average pooling layer, and a classification layer. The convolutional layers extract feature information from the picture through convolution operations; as the convolution operations stack layer by layer, the extracted information becomes more and more abstract. The residual network block in the network structure is shown in Fig. 2; its purpose is to pass low-level features to higher layers and to suppress the vanishing-gradient phenomenon. The input of Mobilenet-V2 is a facial expression picture, and the output is the predicted facial expression label.
The loss function is composed of a classification cross-entropy loss function and a cluster loss function. The classification cross-entropy loss improves the classification accuracy of the network, while the cluster loss widens the differences between the high-dimensional features the network produces for facial expression images of different classes.
In step S3, the deep-feature cluster loss is added to the loss function of the convolutional neural network, so that the deep features obtained for facial expression pictures of different classes differ more strongly, which helps to distinguish fine-grained facial features.
The training process is as follows.
S31: feed the facial expression data pre-processed in step S2, class by class, into the pre-trained Mobilenet-V2 network, extract the 1*1*1280 high-dimensional feature of the penultimate layer, and cluster the high-dimensional features of the N expression classes with the K-means clustering algorithm, obtaining K cluster centres for each facial expression (N*K clusters in total).
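The per-class clustering in step S31 could be sketched with scikit-learn's K-means (an assumption: the patent does not name an implementation; `class_cluster_centres` is an illustrative helper operating on already-extracted 1280-dimensional features).

```python
import numpy as np
from sklearn.cluster import KMeans

def class_cluster_centres(features, labels, n_classes, k):
    # Cluster the deep features of each expression class into K centres,
    # giving the N*K centres c used by the cluster loss.
    dim = features.shape[1]
    centres = np.zeros((n_classes, k, dim))
    for cls in range(n_classes):
        f_cls = features[labels == cls]
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(f_cls)
        centres[cls] = km.cluster_centers_
    return centres
```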
S32: compare the N*K cluster centres from step S31 with the same-layer high-dimensional feature of each training sample, obtaining the cluster loss function. During training, compute the 1*1*1280-dimensional feature of the input facial expression picture, find the same-expression cluster centre farthest from this feature and the different-expression cluster centre nearest to it, and compute the distances between the feature and these two centres. The difference of the two distances is the cluster loss function. After each full training epoch, save the network model, recompute the N*K clusters, and continue the iterative training.
S33: train the convolutional neural network model so that the loss function of the network is minimized.
The loss function is:

$$L = L_{CE}(a, \hat{a}) + L_{k\text{-}means}(f, a, c)$$

where

$$L_{k\text{-}means}(f, a, c) = \left\lVert \max_{k}\,\lVert f - c_a^{(k)} \rVert - \min_{j}\,\lVert f - c_{-a}^{(j)} \rVert \right\rVert$$

In the formula, $L$ is the total loss function, $L_{CE}(a, \hat{a})$ is the classification cross-entropy loss, and $L_{k\text{-}means}(f, a, c)$ is the cluster loss; $x$ is the input facial expression training image, $a$ is the facial expression label of input image $x$, $\hat{a}$ is the label predicted for image $x$ by the Mobilenet-V2 network, $f$ is the 1*1*1280 penultimate-layer feature of image $x$ produced by the Mobilenet-V2 network, and $c$ denotes the cluster centres obtained by clustering all high-dimensional features from the pre-trained Mobilenet-V2 network: K centres for each of the N expression classes (N*K centres in total). $c_a$ denotes the K cluster centres of expression $a$, and $c_{-a}$ the K cluster centres of all expressions other than $a$ ((N-1)*K centres in total).
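A minimal NumPy reading of the cluster loss above. The outer norm in the formula is interpreted here as a signed difference, so that minimising the loss pulls a feature toward all centres of its own expression and away from the nearest centre of any other expression; this sign convention is an assumption consistent with the stated training goal.

```python
import numpy as np

def cluster_loss(f, a, centres):
    # f: deep feature of one sample; a: its expression label index;
    # centres: (N, K, dim) array of the N*K cluster centres.
    d = np.linalg.norm(centres - f, axis=2)        # (N, K) distances
    own_far = d[a].max()                           # max_k ||f - c_a^(k)||
    other_near = np.delete(d, a, axis=0).min()     # min_j ||f - c_-a^(j)||
    return own_far - other_near
```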
S4: obtain the trained facial expression recognition network and apply it to real-world testing.
In summary, the present invention can obtain a facial recognition network model that balances accuracy and speed, with strong generalization ability. By feeding facial expression pictures into the Mobilenet-V2 network and training the model with the facial expression recognition algorithm based on deep feature clustering, a trained facial expression recognition network is obtained. The network can then recognize fine-grained facial expressions well. Applying the algorithm based on deep feature clustering to facial expression recognition widens the inter-class differences between facial expressions and alleviates the difficulty of recognizing fine-grained images.
The present invention still has many other embodiments; all technical solutions formed by equivalent replacement or equivalent transformation fall within the scope of protection of the present invention.
Claims (7)
1. An implementation method of facial expression recognition based on deep feature clustering, characterized in that the method comprises the following steps:
S1: collecting various facial expression pictures and classifying them one by one by expression, obtaining a classified facial expression data set;
S2: pre-processing the classified facial expression data set obtained in step S1 by removing blurred photos, then obtaining facial key points with a cascaded multi-task face detection algorithm based on convolutional neural networks, and cropping the face pictures uniformly according to the key points, obtaining a pre-processed facial expression data set;
S3: building a facial expression recognition network based on a convolutional neural network, feeding the pre-processed facial expression data set pictures obtained in step S2 into the network, computing the loss function, and training, obtaining a trained facial expression recognition network;
S4: applying the trained facial expression recognition network obtained in step S3 to real-world testing.
2. The implementation method of facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S1, the collected facial expression pictures are balanced across classes, each expression class has at least two thousand pictures, and the faces are clear and in an upright pose.
3. The implementation method of facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S2, the pictures are pre-processed by removing blurred photos, facial key points are obtained with the cascaded multi-task face detection algorithm based on convolutional neural networks, the face pictures are cropped uniformly according to the key points and saved separately by expression class; if a certain expression class has few pictures, data augmentation is applied to that class.
4. The implementation method of facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S3, the convolutional neural network structure is Mobilenet-V2, the input layer takes the cropped face picture, and the output is the probability of each facial expression class.
5. The implementation method of facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S3, a deep-feature cluster loss is added to the loss function of the convolutional neural network, so that the deep features obtained by the network for facial expression pictures of different classes differ more strongly.
6. The implementation method of facial expression recognition based on deep feature clustering according to claim 1, characterized in that: in step S3, training the facial expression recognition algorithm based on deep feature clustering comprises the steps of:
S31: feeding the facial expression data pre-processed in step S2, class by class, into the pre-trained Mobilenet-V2 network, extracting the 1280×1 high-dimensional feature of the penultimate layer, and clustering the high-dimensional features of each expression class with the K-means clustering algorithm, obtaining K cluster centres for each facial expression, the cluster centres being updated once per training iteration;
S32: comparing the K cluster centres of each facial expression from step S31 with the same-layer high-dimensional feature of each training sample, obtaining the cluster loss function;
S33: training the convolutional neural network model so that the loss function of the network is minimized.
7. The implementation method of facial expression recognition based on deep feature clustering according to claim 6, characterized in that: in step S3 the loss function is designed as

$$L = L_{CE}(a, \hat{a}) + L_{k\text{-}means}(f, a, c)$$

where

$$L_{k\text{-}means}(f, a, c) = \left\lVert \max_{k}\,\lVert f - c_a^{(k)} \rVert - \min_{j}\,\lVert f - c_{-a}^{(j)} \rVert \right\rVert$$

in which $L$ is the total loss function, $L_{CE}(a, \hat{a})$ is the classification cross-entropy loss, and $L_{k\text{-}means}(f, a, c)$ is the cluster loss; $x$ is the input facial expression training image, $a$ is the facial expression label of input image $x$, $\hat{a}$ is the label predicted for image $x$ by the Mobilenet-V2 network, $f$ is the 1280×1 penultimate-layer feature of image $x$ produced by the Mobilenet-V2 network, and $c$ denotes the cluster centres obtained by clustering all high-dimensional features from the pre-trained Mobilenet-V2 network: K centres for each of the N expression classes (N*K centres in total). $c_a$ denotes the K cluster centres of expression $a$, and $c_{-a}$ the K cluster centres of all expressions other than $a$ ((N-1)*K centres in total).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910240401.7A CN109993100B (en) | 2019-03-27 | 2019-03-27 | Method for realizing facial expression recognition based on deep feature clustering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910240401.7A CN109993100B (en) | 2019-03-27 | 2019-03-27 | Method for realizing facial expression recognition based on deep feature clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109993100A true CN109993100A (en) | 2019-07-09 |
CN109993100B CN109993100B (en) | 2022-09-20 |
Family
ID=67131863
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910240401.7A Active CN109993100B (en) | 2019-03-27 | 2019-03-27 | Method for realizing facial expression recognition based on deep feature clustering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109993100B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110569878A (en) * | 2019-08-08 | 2019-12-13 | 上海汇付数据服务有限公司 | Photograph background similarity clustering method based on convolutional neural network and computer |
CN110781784A (en) * | 2019-10-18 | 2020-02-11 | 高新兴科技集团股份有限公司 | Face recognition method, device and equipment based on double-path attention mechanism |
CN111126244A (en) * | 2019-12-20 | 2020-05-08 | 南京邮电大学 | Security authentication system and method based on facial expressions |
CN111401193A (en) * | 2020-03-10 | 2020-07-10 | 海尔优家智能科技(北京)有限公司 | Method and device for obtaining expression recognition model and expression recognition method and device |
CN111414862A (en) * | 2020-03-22 | 2020-07-14 | 西安电子科技大学 | Expression recognition method based on neural network fusion key point angle change |
CN111507224A (en) * | 2020-04-09 | 2020-08-07 | 河海大学常州校区 | CNN facial expression recognition significance analysis method based on network pruning |
CN112232116A (en) * | 2020-09-08 | 2021-01-15 | 深圳微步信息股份有限公司 | Facial expression recognition method and device and storage medium |
CN113033374A (en) * | 2021-03-22 | 2021-06-25 | 开放智能机器(上海)有限公司 | Artificial intelligence dangerous behavior identification method and device, electronic equipment and storage medium |
CN113076930A (en) * | 2021-04-27 | 2021-07-06 | 东南大学 | Face recognition and expression analysis method based on shared backbone network |
CN117542106A (en) * | 2024-01-10 | 2024-02-09 | 成都同步新创科技股份有限公司 | Static face detection and data elimination method, device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304826A (en) * | 2018-03-01 | 2018-07-20 | 河海大学 | Facial expression recognizing method based on convolutional neural networks |
CN108764207A (en) * | 2018-06-07 | 2018-11-06 | 厦门大学 | A kind of facial expression recognizing method based on multitask convolutional neural networks |
US20190042952A1 (en) * | 2017-08-03 | 2019-02-07 | Beijing University Of Technology | Multi-task Semi-Supervised Online Sequential Extreme Learning Method for Emotion Judgment of User |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110569878B (en) * | 2019-08-08 | 2022-06-24 | 上海汇付支付有限公司 | Photograph background similarity clustering method based on convolutional neural network and computer |
CN110569878A (en) * | 2019-08-08 | 2019-12-13 | 上海汇付数据服务有限公司 | Photograph background similarity clustering method based on convolutional neural network and computer |
CN110781784A (en) * | 2019-10-18 | 2020-02-11 | 高新兴科技集团股份有限公司 | Face recognition method, device and equipment based on double-path attention mechanism |
CN111126244A (en) * | 2019-12-20 | 2020-05-08 | 南京邮电大学 | Security authentication system and method based on facial expressions |
CN111401193A (en) * | 2020-03-10 | 2020-07-10 | 海尔优家智能科技(北京)有限公司 | Method and device for obtaining expression recognition model and expression recognition method and device |
CN111401193B (en) * | 2020-03-10 | 2023-11-28 | 海尔优家智能科技(北京)有限公司 | Method and device for acquiring expression recognition model, and expression recognition method and device |
CN111414862A (en) * | 2020-03-22 | 2020-07-14 | 西安电子科技大学 | Expression recognition method based on neural network fusion key point angle change |
CN111414862B (en) * | 2020-03-22 | 2023-03-24 | 西安电子科技大学 | Expression recognition method based on neural network fusion key point angle change |
CN111507224A (en) * | 2020-04-09 | 2020-08-07 | 河海大学常州校区 | CNN facial expression recognition significance analysis method based on network pruning |
CN111507224B (en) * | 2020-04-09 | 2022-08-30 | 河海大学常州校区 | CNN facial expression recognition significance analysis method based on network pruning |
CN112232116A (en) * | 2020-09-08 | 2021-01-15 | 深圳微步信息股份有限公司 | Facial expression recognition method and device and storage medium |
CN113033374A (en) * | 2021-03-22 | 2021-06-25 | 开放智能机器(上海)有限公司 | Artificial intelligence dangerous behavior identification method and device, electronic equipment and storage medium |
CN113076930A (en) * | 2021-04-27 | 2021-07-06 | 东南大学 | Face recognition and expression analysis method based on shared backbone network |
CN117542106A (en) * | 2024-01-10 | 2024-02-09 | 成都同步新创科技股份有限公司 | Static face detection and data elimination method, device and storage medium |
CN117542106B (en) * | 2024-01-10 | 2024-04-05 | 成都同步新创科技股份有限公司 | Static face detection and data elimination method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109993100B (en) | 2022-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109993100A (en) | Method for realizing facial expression recognition based on deep feature clustering | |
CN112308158A (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
CN107622104A (en) | A kind of character image identification mask method and system | |
CN109101938B (en) | Multi-label age estimation method based on convolutional neural network | |
CN111160533A (en) | Neural network acceleration method based on cross-resolution knowledge distillation | |
CN106599800A (en) | Face micro-expression recognition method based on deep learning | |
CN111414862A (en) | Expression recognition method based on neural network fusion key point angle change | |
CN111738303B (en) | Long-tail distribution image recognition method based on hierarchical learning | |
CN108256630A (en) | A kind of over-fitting solution based on low dimensional manifold regularization neural network | |
CN110472652A (en) | A small amount of sample classification method based on semanteme guidance | |
CN110414587A (en) | Depth convolutional neural networks training method and system based on progressive learning | |
CN112507800A (en) | Pedestrian multi-attribute cooperative identification method based on channel attention mechanism and light convolutional neural network | |
CN110263174A (en) | - subject categories the analysis method based on focus | |
CN112766229A (en) | Human face point cloud image intelligent identification system and method based on attention mechanism | |
CN107220598A (en) | Iris Texture Classification based on deep learning feature and Fisher Vector encoding models | |
CN106227836B (en) | Unsupervised joint visual concept learning system and unsupervised joint visual concept learning method based on images and characters | |
CN110110724A (en) | The text authentication code recognition methods of function drive capsule neural network is squeezed based on exponential type | |
CN110334584A (en) | A kind of gesture identification method based on the full convolutional network in region | |
CN110765285A (en) | Multimedia information content control method and system based on visual characteristics | |
WO2021128704A1 (en) | Open set classification method based on classification utility | |
CN115062727A (en) | Graph node classification method and system based on multi-order hypergraph convolutional network | |
Ali et al. | Sindhi handwritten-digits recognition using machine learning techniques | |
CN112149556B (en) | Face attribute identification method based on deep mutual learning and knowledge transfer | |
CN105844299B (en) | A kind of image classification method based on bag of words | |
CN112200260A (en) | Figure attribute identification method based on discarding loss function |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |