CN112036281B - Facial expression recognition method based on improved capsule network - Google Patents
Facial expression recognition method based on improved capsule network
- Publication number
- CN112036281B (application CN202010860025.4A)
- Authority
- CN
- China
- Prior art keywords
- capsule
- layer
- expression
- capsules
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a facial expression recognition method based on an improved capsule network, which comprises the following steps: inputting sample pictures into the improved capsule network for training; inputting a real-scene picture into the improved capsule network for recognition, and extracting the facial expression in the real-scene picture. Inputting sample pictures into the improved capsule network for training specifically comprises: S1, extracting a face region from a picture through a multi-task convolutional neural network; S2, labeling the extracted face region to obtain the expression and the head pose of the face region; S3, inputting the expression and the head pose of the face region into a generative adversarial network, which generates a face region with the expression; S4, inputting the face region with the expression into the improved capsule network to train it. The method accurately recognizes facial expressions under different poses without considering the pose of the human body, so recognition accuracy is ensured while recognition efficiency is effectively improved.
Description
Technical Field
The invention relates to a facial expression recognition method, in particular to a facial expression recognition method based on an improved capsule network.
Background
Facial expression recognition is widely applied in modern production and life. It mainly relies on deep convolutional neural network frameworks, which can recognize a human face to a certain extent but struggle to recognize facial expressions under different poses; the angle of the face must be adjusted during recognition, which reduces the efficiency of facial expression recognition. On the other hand, facial expression recognition relies on the individual features of organs such as the eyes, nose and mouth, and existing methods cannot capture the relative positions among these organs, so recognition accuracy is low.
Therefore, a technical means is needed to solve the above problems.
Disclosure of Invention
In view of the above, the present invention aims to provide a facial expression recognition method based on an improved capsule network, which can accurately recognize facial expressions under different poses without considering the pose of the human body, so that recognition accuracy is ensured and recognition efficiency is effectively improved.
The invention provides a facial expression recognition method based on an improved capsule network, which comprises the following steps:
inputting sample pictures into an improved capsule network for training;
inputting a real-scene picture into the improved capsule network for recognition, and extracting the facial expression in the real-scene picture;
inputting sample pictures into the improved capsule network for training specifically comprises:
S1, extracting a face region from the picture through a multi-task convolutional neural network;
S2, labeling the extracted face region to obtain the expression and the head pose of the face region;
S3, inputting the expression and the head pose of the face region into a generative adversarial network, the generative adversarial network generating a face region with the expression;
S4, inputting the face region with the expression into the improved capsule network to train the improved capsule network.
Further, in step S3, generating the face region with the expression by the generative adversarial network specifically comprises:
the generative adversarial network comprises an encoder and a decoder;
inputting the expression and the head pose of the face region into the encoder for processing, the encoder outputting the face picture features, the expression and the pose;
inputting the face picture features, the expression and the pose into the decoder for processing, the decoder outputting a face picture with the expression;
constructing the objective function of the generative adversarial network:

$$\min_G \max_D \; E_{x,y\sim p_d(x,y)}\left[\log D(x,y)\right] + E_{x,y\sim p_d(x,y)}\left[\log\left(1-D\left(G(x,y),y\right)\right)\right] \tag{1}$$

where x is the face region, y denotes the expression label and the pose label, D(x,y) is the discriminator, whose output is true or false; G(x,y) is the generator, whose output is a generated face picture; D(G(x,y),y) is the result of the discriminator D judging the face picture generated by the generator G; $p_d(x,y)$ is the joint probability of x and y; and $E_{x,y\sim p_d(x,y)}$ is the expectation with respect to $p_d(x,y)$;
the face picture with the expression output by the decoder is judged with the objective function of the generative adversarial network, and the face picture whose judging result is true is output as the face picture with the expression.
Further, the step S4 specifically includes:
the improved capsule network has a relu convolutional layer, an initial capsule layer prim_cap, a first convolutional capsule layer conv_cap1, a second convolutional capsule layer conv_cap2, and a classification capsule layer class_cap;
inputting the face picture with the facial expression of the countermeasure network into a relu convolution layer for processing, and outputting the local characteristics of the face picture;
the initial capsule layer prim_cap processes local features of the face picture output by the relu convolution layer, and outputs 32 capsules;
the first capsule convolution layer conv_cap1 processes the initial capsule layer and outputs 32 capsules;
the second capsule convolution layer conv_cap2 processes the 32 capsules output by the first capsule convolution layer conv_cap1 and outputs 32 capsules;
the classified capsule layer class_cap processes the 32 capsules output by the second capsule convolution layer conv_cap2 and outputs 7 capsules, the 7 capsules corresponding to the facial expression in 7.
Further, routing of the 32 capsules output by the initial capsule layer prim_cap through the first convolutional capsule layer conv_cap1 and the second convolutional capsule layer conv_cap2 to the classification capsule layer class_cap is realized by T-EM routing, which specifically comprises:
determining the voting matrix $V_{ij}$ from lower-layer capsule i to higher-layer capsule j:
$$V_{ij} = P_i \cdot W_{ij};$$
where $P_i$ is the pose matrix of the lower-layer capsule i, and $W_{ij}$ is the viewpoint-invariant transformation matrix from lower-layer capsule i to higher-layer capsule j;
the probability that the k-th element $v_{ij}^{k}$ of the voting matrix $V_{ij}$ belongs to the higher-layer capsule j is determined by the T distribution:
$$p\left(v_{ij}^{k}\right)=\frac{\Gamma\left(\frac{\nu_{j}^{k}+1}{2}\right)}{\Gamma\left(\frac{\nu_{j}^{k}}{2}\right)\sqrt{\pi\,\nu_{j}^{k}\left(\sigma_{j}^{k}\right)^{2}}}\left(1+\frac{\left(v_{ij}^{k}-\mu_{j}^{k}\right)^{2}}{\nu_{j}^{k}\left(\sigma_{j}^{k}\right)^{2}}\right)^{-\frac{\nu_{j}^{k}+1}{2}} \tag{2}$$
where $\Gamma(\cdot)$ is the gamma function, $\left(v_{ij}^{k}-\mu_{j}^{k}\right)$ is the Mahalanobis distance from element $v_{ij}^{k}$ to the mean $\mu_{j}^{k}$; $\mu_{j}^{k}$ is the expectation of the T distribution, $\nu_{j}^{k}$ is the degrees of freedom of the T distribution, $\left(\sigma_{j}^{k}\right)^{2}$ is the variance of the T distribution, and $\pi$ is the circular constant;
the loss function C for classifying the I lower-layer capsules into the J higher-layer capsules is given by formula (3);
the pose matrix $P_j$ and activation matrix $a_j$ of a higher-layer capsule are obtained from the pose matrices $P_i$ and activation matrices $a_i$ of the lower-layer capsules by minimizing formula (3) through the T-EM routing process, specifically:
initializing the parameters;
M step:
$$R_{ij} = R_{ij}\times a_i,\quad i=[1,I];$$
the mean $\mu_{j}^{k}$, the variance $\left(\sigma_{j}^{k}\right)^{2}$ and the degrees of freedom $\nu_{j}^{k}$ of the T distribution and the activation $a_j$ of the higher-layer capsule are then re-estimated from the weighted votes; $\beta_a$ and $\beta_v$ denote trainable variables, and $\lambda$ is a temperature coefficient with a value of 0.01;
E step: updating the routing assignment based on the t distribution using the parameters calculated in the M step;
after the set number of iterations of the M step and the E step, the pose matrix $P_j$ of the higher-layer capsule is obtained, each element of the pose matrix $P_j$ being the mean of the corresponding elements $v_{ij}^{k}$ of the voting matrices $V_{ij}$.
Further, the capsule network is trained by a propagation loss function; the propagation loss for the lower-layer capsules activating the t-th higher-layer capsule is:
$$L=\sum_{i\neq t}\left(\max\left(0,\,m-\left(a_t-a_i\right)\right)\right)^{2};$$
where m is a variable margin with an initial value of 0.2 and a maximum value of 0.9, $a_t$ is the activation value of the activated parent capsule, and $a_i$ is the activation value of an inactive parent capsule.
The invention has the beneficial effects that: the facial expressions under different poses can be accurately recognized without considering the pose of the human body, so that recognition accuracy is ensured and recognition efficiency is effectively improved.
Drawings
The invention is further described below with reference to the accompanying drawings and examples:
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention is described in further detail below with reference to the attached drawing figures:
the invention provides a facial expression recognition method based on an improved capsule network, which comprises the following steps:
inputting sample pictures into an improved capsule network for training;
inputting a real-scene picture into the improved capsule network for recognition, and extracting the facial expression in the real-scene picture;
inputting sample pictures into the improved capsule network for training specifically comprises:
S1, extracting a face region from the picture through a multi-task convolutional neural network;
S2, labeling the extracted face region to obtain the expression and the head pose of the face region;
S3, inputting the expression and the head pose of the face region into a generative adversarial network, the generative adversarial network generating a face region with the expression;
S4, inputting the face region with the expression into the improved capsule network to train the improved capsule network (a high-level sketch of this training flow is given below); with the invention, facial expressions under different poses can be accurately recognized without considering the pose of the human body, so that recognition accuracy is ensured and recognition efficiency is effectively improved.
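As a non-limiting illustration, the following Python sketch shows how steps S1-S4 can be chained. The helper callables detect_face, label_face, gan_generate and capsnet_train_step are hypothetical placeholders (they are not named in the patent) standing in for the multi-task convolutional neural network, the annotation step, the generative adversarial network and the capsule-network training step respectively.

```python
# Illustrative sketch of the training flow in steps S1-S4.
# All helper callables are hypothetical placeholders, injected as arguments.

def train_improved_capsule_network(sample_pictures, detect_face, label_face,
                                   gan_generate, capsnet_train_step, epochs=10):
    """Train the improved capsule network on GAN-generated expressive face regions."""
    labelled = []
    for picture in sample_pictures:
        face_region = detect_face(picture)                 # S1: multi-task CNN face extraction
        expression, head_pose = label_face(face_region)    # S2: expression and head-pose labels
        labelled.append((face_region, expression, head_pose))

    # S3: the generative adversarial network synthesises a face region with the expression.
    generated = [(gan_generate(face, expr, pose), expr)
                 for face, expr, pose in labelled]

    # S4: train the improved capsule network on the generated expressive face regions.
    for _ in range(epochs):
        for face_with_expression, expr in generated:
            capsnet_train_step(face_with_expression, expr)
```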
In this embodiment, in step S3, generating the face region with the expression by the generative adversarial network specifically comprises:
the generative adversarial network comprises an encoder and a decoder;
inputting the expression and the head pose of the face region into the encoder for processing, the encoder outputting the face picture features, the expression and the pose; the input of the encoder is a 224 × 224 × 3 face picture and the output is a 50-dimensional face picture feature f; the encoder is composed of five convolution layers and one fully connected layer, wherein the convolution kernels are 5 × 5 with a ReLU activation function, and the fully connected layer has a tanh activation function;
inputting the face picture features, the expression and the pose into the decoder for processing, the decoder outputting a face picture with the expression; the decoder consists of seven deconvolution layers with 5 × 5 convolution kernels, the first six deconvolution layers having a ReLU activation function and the last deconvolution layer having a tanh activation function;
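A minimal PyTorch sketch of such an encoder and decoder is given below for illustration. Only the layer counts, the 5 × 5 kernels, the activations and the 50-dimensional feature f come from the text; the channel widths, strides, padding and the sizes assumed for the expression and pose inputs are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the encoder/decoder described above. Channel widths, strides,
# padding and the expression/pose input sizes are illustrative assumptions.

class Encoder(nn.Module):
    def __init__(self, feat_dim=50):
        super().__init__()
        chans = [3, 32, 64, 128, 256, 512]                 # assumed channel progression
        self.convs = nn.ModuleList(
            nn.Conv2d(chans[i], chans[i + 1], kernel_size=5, stride=2, padding=2)
            for i in range(5))                             # five 5x5 convolution layers (ReLU)
        self.fc = nn.Linear(512 * 7 * 7, feat_dim)         # one fully connected layer (tanh)

    def forward(self, x):                                  # x: (N, 3, 224, 224)
        for conv in self.convs:
            x = torch.relu(conv(x))                        # 224 -> 112 -> 56 -> 28 -> 14 -> 7
        return torch.tanh(self.fc(x.flatten(1)))           # 50-dim face picture feature f

class Decoder(nn.Module):
    def __init__(self, feat_dim=50, expr_dim=7, pose_dim=3):  # expr/pose sizes are assumptions
        super().__init__()
        self.fc = nn.Linear(feat_dim + expr_dim + pose_dim, 512 * 7 * 7)
        chans = [512, 256, 128, 64, 32, 16, 8, 3]          # assumed channel progression
        strides = [2, 2, 2, 2, 2, 1, 1]                    # assumed: five upsampling + two refining
        self.deconvs = nn.ModuleList(
            nn.ConvTranspose2d(chans[i], chans[i + 1], kernel_size=5, stride=strides[i],
                               padding=2, output_padding=strides[i] - 1)
            for i in range(7))                             # seven 5x5 deconvolution layers

    def forward(self, f, expr, pose):
        z = torch.cat([f, expr, pose], dim=1)
        x = self.fc(z).view(-1, 512, 7, 7)
        for deconv in self.deconvs[:-1]:
            x = torch.relu(deconv(x))                      # first six deconvolutions: ReLU
        return torch.tanh(self.deconvs[-1](x))             # last deconvolution: tanh output
```

With these assumed strides, a 224 × 224 × 3 input is reduced to a 7 × 7 × 512 map before the fully connected layer, and the decoder upsamples back to a 224 × 224 × 3 face picture.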
constructing the objective function of the generative adversarial network:

$$\min_G \max_D \; E_{x,y\sim p_d(x,y)}\left[\log D(x,y)\right] + E_{x,y\sim p_d(x,y)}\left[\log\left(1-D\left(G(x,y),y\right)\right)\right] \tag{1}$$

where x is the face region, y denotes the expression label and the pose label, D(x,y) is the discriminator, whose output is true or false; G(x,y) is the generator, whose output is a generated face picture; D(G(x,y),y) is the result of the discriminator D judging the face picture generated by the generator G; $p_d(x,y)$ is the joint probability of x and y; and $E_{x,y\sim p_d(x,y)}$ is the expectation with respect to $p_d(x,y)$;
the face picture with the expression output by the decoder is judged with the objective function of the generative adversarial network, and the face picture whose judging result is true is output as the face picture with the expression; by this method, the facial expression in the sample picture can be accurately extracted, which facilitates the subsequent training and final recognition.
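For illustration, the objective of formula (1) can be evaluated as a pair of discriminator/generator losses. The sketch below assumes the discriminator outputs a probability in (0, 1); D and G stand for any modules with the D(x, y) and G(x, y) signatures used above.

```python
import torch

# Sketch of the conditional adversarial objective of formula (1), assuming the
# discriminator D returns a probability in (0, 1).

def discriminator_loss(D, G, x, y):
    """Ascent direction for D: maximise log D(x,y) + log(1 - D(G(x,y), y))."""
    real = D(x, y)
    fake = D(G(x, y).detach(), y)          # detach so only D is updated here
    return -(torch.log(real + 1e-8) + torch.log(1.0 - fake + 1e-8)).mean()

def generator_loss(D, G, x, y):
    """Descent direction for G: push D towards judging the generated picture as true."""
    fake = D(G(x, y), y)
    return -torch.log(fake + 1e-8).mean()
```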
In this embodiment, step S4 specifically includes:
the improved capsule network has a ReLU convolution layer, an initial capsule layer prim_cap, a first convolutional capsule layer conv_cap1, a second convolutional capsule layer conv_cap2, and a classification capsule layer class_cap; the input of the ReLU convolution layer conv_relu is a 28 × 28 × 3 facial expression picture and the output is a 14 × 14 × 32 local feature map; the layer consists of a 5 × 5 convolution layer, a batch normalization layer and a ReLU layer;
inputting the face picture with the expression generated by the generative adversarial network into the ReLU convolution layer for processing, and outputting the local features of the face picture;
the initial capsule layer prim_cap processes the local features of the face picture output by the ReLU convolution layer and outputs 32 capsules; the input of the initial capsule layer (prim_cap) is the local features output by the ReLU convolution layer and the output is 32 capsules; the initial capsule layer is composed of two branches of 1 × 1 convolution layers with stride 1, which respectively produce the pose matrices and the activation matrices of the output capsules;
the first convolutional capsule layer conv_cap1 processes the 32 capsules output by the initial capsule layer and outputs 32 capsules;
the second convolutional capsule layer conv_cap2 processes the 32 capsules output by the first convolutional capsule layer conv_cap1 and outputs 32 capsules;
the classification capsule layer class_cap processes the 32 capsules output by the second convolutional capsule layer conv_cap2 and outputs 7 capsules, the 7 capsules corresponding to 7 classes of facial expression (a sketch of the front of this network is given below).
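The front of this network can be sketched in PyTorch as follows. The 4 × 4 pose-matrix size is an assumption borrowed from matrix-capsule networks; the text fixes only the 28 × 28 × 3 input, the 14 × 14 × 32 feature map, the 32 capsules and the two 1 × 1 convolution branches.

```python
import torch
import torch.nn as nn

# Sketch of the ReLU convolution layer conv_relu and the initial capsule layer
# prim_cap. The 4x4 pose-matrix size is an assumption.

class ConvRelu(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2)   # 28x28x3 -> 14x14x32
        self.bn = nn.BatchNorm2d(32)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

class PrimaryCaps(nn.Module):
    def __init__(self, num_caps=32, pose_size=4):
        super().__init__()
        self.num_caps, self.pose_size = num_caps, pose_size
        # Two 1x1 convolution branches with stride 1: pose matrices and activations.
        self.pose_conv = nn.Conv2d(32, num_caps * pose_size * pose_size, kernel_size=1)
        self.act_conv = nn.Conv2d(32, num_caps, kernel_size=1)

    def forward(self, feat):                               # feat: (N, 32, 14, 14)
        n, _, h, w = feat.shape
        pose = self.pose_conv(feat).view(n, self.num_caps, self.pose_size,
                                         self.pose_size, h, w)
        act = torch.sigmoid(self.act_conv(feat))           # one activation per capsule per location
        return pose, act
```

The convolutional capsule layers conv_cap1 and conv_cap2 and the classification capsule layer class_cap would then connect these capsules through the T-EM routing described below.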
Specifically: the 32 capsules output by the initial capsule layer prim_cap are sequentially realized from a first capsule convolution layer conv_cap1, a second capsule convolution layer conv_cap2 to a classification capsule layer class_cap through a T-EM route, and the method specifically comprises the following steps:
determining a voting matrix V of lower layer capsules i to higher layer capsules j ij :
V ij =P i ·W ij ;
wherein ,Pi Is the gesture matrix of the capsule i of the lower layer, W ij A view invariant matrix for lower layer capsules i to higher layer capsules j;
wherein the voting matrix V ij The kth element in (a)The capsules j belonging to the higher layer are determined by the T distribution: />
Wherein Γ (·) is a gamma function,is element->To the average valueμ j Is the mahalanobis distance; />For the desire of T distribution, +.>For the degree of freedom of the T distribution, +.>Is the variance of the T distribution, pi is the circumference ratio;
the loss function C for classifying the I lower-layer capsules into the J higher-layer capsules is given by formula (3);
the pose matrix $P_j$ and activation matrix $a_j$ of a higher-layer capsule are obtained from the pose matrices $P_i$ and activation matrices $a_i$ of the lower-layer capsules by minimizing formula (3) through the T-EM routing process, specifically:
initializing the parameters;
M step:
$$R_{ij} = R_{ij}\times a_i,\quad i=[1,I];$$
the mean $\mu_{j}^{k}$, the variance $\left(\sigma_{j}^{k}\right)^{2}$ and the degrees of freedom $\nu_{j}^{k}$ of the T distribution and the activation $a_j$ of the higher-layer capsule are then re-estimated from the weighted votes; $\beta_a$ and $\beta_v$ denote trainable variables, and $\lambda$ is a temperature coefficient with a value of 0.01;
E step: updating the routing assignment based on the t distribution using the parameters calculated in the M step;
after iterating the M step and the E step the set number of times, the pose matrix $P_j$ of the higher-layer capsule is obtained, each element of the pose matrix $P_j$ being the mean of the corresponding elements $v_{ij}^{k}$ of the voting matrices $V_{ij}$; in this way each capsule can be trained, ensuring the accuracy of the final recognition (a sketch of one T-EM routing pass is given below).
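A compact NumPy sketch of one T-EM routing pass is given below, treating the vote matrices as flattened vectors. The R_ij × a_i step, β_a, β_v and λ = 0.01 come from the text; the uniform initialization, the fixed degrees of freedom ν, and the logistic activation update are assumptions patterned on the cited EM routing scheme.

```python
import numpy as np
from scipy.special import gammaln

# Sketch of T-EM routing between I lower-layer and J higher-layer capsules.
# votes: (I, J, K) flattened vote elements; a_in: (I,) lower-layer activations.
# The fixed nu and the logistic activation update are assumptions.

def t_log_pdf(v, mu, var, nu):
    """Element-wise log density of a location-scale Student-t distribution."""
    z2 = (v - mu) ** 2 / (nu * var)
    return (gammaln((nu + 1) / 2) - gammaln(nu / 2)
            - 0.5 * np.log(np.pi * nu * var) - (nu + 1) / 2 * np.log1p(z2))

def t_em_routing(votes, a_in, beta_a, beta_v, lam=0.01, nu=5.0, iters=3):
    I, J, K = votes.shape
    R = np.full((I, J), 1.0 / J)                       # uniform initial assignment (assumed)
    for _ in range(iters):
        # ---- M step ----
        R_a = R * a_in[:, None]                        # R_ij = R_ij * a_i
        w = R_a / (R_a.sum(axis=0, keepdims=True) + 1e-8)
        mu = np.einsum('ij,ijk->jk', w, votes)         # per-element mean of the votes
        var = np.einsum('ij,ijk->jk', w, (votes - mu[None]) ** 2) + 1e-8
        cost = (beta_v + 0.5 * np.log(var)) * R_a.sum(axis=0)[:, None]
        a_out = 1.0 / (1.0 + np.exp(-lam * (beta_a - cost.sum(axis=1))))   # logistic activation
        # ---- E step ----
        logp = t_log_pdf(votes, mu[None], var[None], nu).sum(axis=2)       # (I, J)
        logp = logp - logp.max(axis=1, keepdims=True)                      # numerical stability
        R = a_out[None] * np.exp(logp)
        R = R / (R.sum(axis=1, keepdims=True) + 1e-8)
    # Higher-layer poses are the (weighted) means of the votes; activations are a_out.
    return mu, a_out
```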
In this embodiment, the capsule network is trained by the propagation loss function, and the trainable variables $\beta_a$ and $\beta_v$ are obtained during training; the propagation loss for the lower-layer capsules activating the t-th higher-layer capsule is:
$$L=\sum_{i\neq t}\left(\max\left(0,\,m-\left(a_t-a_i\right)\right)\right)^{2};$$
where m is a variable margin with an initial value of 0.2 and a maximum value of 0.9, $a_t$ is the activation value of the activated parent capsule, and $a_i$ is the activation value of an inactive parent capsule.
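A direct NumPy transcription of this propagation loss, for illustration:

```python
import numpy as np

# Sketch of the propagation (spread) loss: penalise inactive higher-layer capsules
# whose activation comes within the margin m of the target capsule's activation.

def propagation_loss(activations, target, m=0.2):
    """activations: (J,) higher-layer activations; target: index t of the true class."""
    a_t = activations[target]
    gaps = np.maximum(0.0, m - (a_t - np.delete(activations, target)))
    return np.sum(gaps ** 2)
```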
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered by the scope of the claims of the present invention.
Claims (2)
1. A facial expression recognition method based on an improved capsule network, characterized by comprising the following steps:
inputting sample pictures into an improved capsule network for training;
inputting a real-scene picture into the improved capsule network for recognition, and extracting the facial expression in the real-scene picture;
inputting sample pictures into the improved capsule network for training specifically comprises:
S1, extracting a face region from the picture through a multi-task convolutional neural network;
S2, labeling the extracted face region to obtain the expression and the head pose of the face region;
S3, inputting the expression and the head pose of the face region into a generative adversarial network, the generative adversarial network generating a face region with the expression;
S4, inputting the face region with the expression into the improved capsule network to train the improved capsule network;
in step S3, generating the face region with the expression by the generative adversarial network specifically comprises:
the generative adversarial network comprises an encoder and a decoder;
inputting the expression and the head pose of the face region into the encoder for processing, the encoder outputting the face picture features, the expression and the pose;
inputting the face picture features, the expression and the pose into the decoder for processing, the decoder outputting a face picture with the expression;
constructing the objective function of the generative adversarial network:

$$\min_G \max_D \; E_{x,y\sim p_d(x,y)}\left[\log D(x,y)\right] + E_{x,y\sim p_d(x,y)}\left[\log\left(1-D\left(G(x,y),y\right)\right)\right] \tag{1}$$

where x is the face region, y denotes the expression label and the pose label, D(x,y) is the discriminator, whose output is true or false; G(x,y) is the generator, whose output is a generated face picture; D(G(x,y),y) is the result of the discriminator D judging the face picture generated by the generator G; $p_d(x,y)$ is the joint probability of x and y; and $E_{x,y\sim p_d(x,y)}$ is the expectation with respect to $p_d(x,y)$;
the face picture with the expression output by the decoder is judged with the objective function of the generative adversarial network, and the face picture whose judging result is true is output as the face picture with the expression;
the step S4 specifically comprises:
the improved capsule network has a ReLU convolution layer, an initial capsule layer prim_cap, a first convolutional capsule layer conv_cap1, a second convolutional capsule layer conv_cap2, and a classification capsule layer class_cap;
inputting the face picture with the expression generated by the generative adversarial network into the ReLU convolution layer for processing, and outputting the local features of the face picture;
the initial capsule layer prim_cap processes the local features of the face picture output by the ReLU convolution layer and outputs 32 capsules;
the first convolutional capsule layer conv_cap1 processes the 32 capsules output by the initial capsule layer and outputs 32 capsules;
the second convolutional capsule layer conv_cap2 processes the 32 capsules output by the first convolutional capsule layer conv_cap1 and outputs 32 capsules;
the classification capsule layer class_cap processes the 32 capsules output by the second convolutional capsule layer conv_cap2 and outputs 7 capsules, the 7 capsules corresponding to 7 classes of facial expression;
routing of the 32 capsules output by the initial capsule layer prim_cap through the first convolutional capsule layer conv_cap1 and the second convolutional capsule layer conv_cap2 to the classification capsule layer class_cap is realized by T-EM routing, which specifically comprises:
determining the voting matrix $V_{ij}$ from lower-layer capsule i to higher-layer capsule j:
$$V_{ij} = P_i \cdot W_{ij};$$
where $P_i$ is the pose matrix of the lower-layer capsule i, and $W_{ij}$ is the viewpoint-invariant transformation matrix from lower-layer capsule i to higher-layer capsule j;
the probability that the k-th element $v_{ij}^{k}$ of the voting matrix $V_{ij}$ belongs to the higher-layer capsule j is determined by the T distribution:
$$p\left(v_{ij}^{k}\right)=\frac{\Gamma\left(\frac{\nu_{j}^{k}+1}{2}\right)}{\Gamma\left(\frac{\nu_{j}^{k}}{2}\right)\sqrt{\pi\,\nu_{j}^{k}\left(\sigma_{j}^{k}\right)^{2}}}\left(1+\frac{\left(v_{ij}^{k}-\mu_{j}^{k}\right)^{2}}{\nu_{j}^{k}\left(\sigma_{j}^{k}\right)^{2}}\right)^{-\frac{\nu_{j}^{k}+1}{2}} \tag{2}$$
where $\Gamma(\cdot)$ is the gamma function, $\left(v_{ij}^{k}-\mu_{j}^{k}\right)$ is the Mahalanobis distance from element $v_{ij}^{k}$ to the mean $\mu_{j}^{k}$; $\mu_{j}^{k}$ is the expectation of the T distribution, $\nu_{j}^{k}$ is the degrees of freedom of the T distribution, $\left(\sigma_{j}^{k}\right)^{2}$ is the variance of the T distribution, and $\pi$ is the circular constant;
the loss function C for classifying the I lower-layer capsules into the J higher-layer capsules is given by formula (3);
the pose matrix $P_j$ and activation matrix $a_j$ of a higher-layer capsule are obtained from the pose matrices $P_i$ and activation matrices $a_i$ of the lower-layer capsules by minimizing formula (3) through the T-EM routing process, specifically:
initializing the parameters;
M step:
$$R_{ij} = R_{ij}\times a_i,\quad i=[1,I];$$
E step: determining the routing based on the t distribution using the parameters calculated in the M step.
2. The facial expression recognition method based on the improved capsule network according to claim 1, characterized in that: the capsule network is trained through a propagation loss function, and the propagation loss for the lower-layer capsules activating the t-th higher-layer capsule is:
$$L=\sum_{i\neq t}\left(\max\left(0,\,m-\left(a_t-a_i\right)\right)\right)^{2};$$
where m is a variable margin with an initial value of 0.2 and a maximum value of 0.9, $a_t$ is the activation value of the activated parent capsule, and $a_i$ is the activation value of an inactive parent capsule.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010746073 | 2020-07-29 | ||
CN2020107460730 | 2020-07-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112036281A CN112036281A (en) | 2020-12-04 |
CN112036281B true CN112036281B (en) | 2023-06-09 |
Family
ID=73581053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010860025.4A Active CN112036281B (en) | 2020-07-29 | 2020-08-24 | Facial expression recognition method based on improved capsule network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112036281B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112507916B (en) * | 2020-12-16 | 2021-07-27 | 苏州金瑞阳信息科技有限责任公司 | Face detection method and system based on facial expression |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190303742A1 (en) * | 2018-04-02 | 2019-10-03 | Ca, Inc. | Extension of the capsule network |
-
2020
- 2020-08-24 CN CN202010860025.4A patent/CN112036281B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446609A (en) * | 2018-03-02 | 2018-08-24 | 南京邮电大学 | A kind of multi-angle human facial expression recognition method based on generation confrontation network |
CN108764031A (en) * | 2018-04-17 | 2018-11-06 | 平安科技(深圳)有限公司 | Identify method, apparatus, computer equipment and the storage medium of face |
CN109063724A (en) * | 2018-06-12 | 2018-12-21 | 中国科学院深圳先进技术研究院 | A kind of enhanced production confrontation network and target sample recognition methods |
CN109934116A (en) * | 2019-02-19 | 2019-06-25 | 华南理工大学 | A kind of standard faces generation method based on generation confrontation mechanism and attention mechanism |
CN110197125A (en) * | 2019-05-05 | 2019-09-03 | 上海资汇信息科技有限公司 | Face identification method under unconfined condition |
CN110533004A (en) * | 2019-09-07 | 2019-12-03 | 哈尔滨理工大学 | A kind of complex scene face identification system based on deep learning |
CN111241958A (en) * | 2020-01-06 | 2020-06-05 | 电子科技大学 | Video image identification method based on residual error-capsule network |
Non-Patent Citations (9)
Title |
---|
Capsule GAN Using Capsule Network for Generator Architecture; Kanako Marusaki et al.; arXiv; 2020-03-18; 1-7 *
Capsule Networks Need an Improved Routing Algorithm; Inyoung Paik et al.; arXiv; 2019-07-31; 1-14 *
Dynamic Routing Between Capsules; Sara Sabour et al.; arXiv; 2017-10-26; 1-11 *
Matrix Capsules with EM Routing; Geoffrey E. Hinton et al.; ICLR 2018; 2019-02-18; last paragraph of page 3, Figure 1 and paragraphs 1-2 of page 4 *
Facial expression recognition based on GAN networks; Chen Lin et al.; Electronic Technology & Software Engineering; 2020-01-15 (No. 01); Figure 1 on page 1, first paragraph of the left column of page 2 *
Robust facial expression recognition based on generative adversarial networks; Yao Naiming et al.; Acta Automatica Sinica; 2018-04-18 (No. 05); 865-877 *
Research on facial expression feature extraction and recognition algorithms based on capsule networks; Yao Yuqian; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15 (No. 01); Figures 2-4 on page 14, pages 29 and 48-51 *
A survey of generative adversarial networks; Luo Jia et al.; Chinese Journal of Scientific Instrument; 2019-03-15 (No. 03); 74-84 *
A survey of capsule network models; Yang Jucheng et al.; Journal of Shandong University (Engineering Science); 2019-11-05 (No. 06); 1-10 *
Also Published As
Publication number | Publication date |
---|---|
CN112036281A (en) | 2020-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109409297B (en) | Identity recognition method based on dual-channel convolutional neural network | |
US9317785B1 (en) | Method and system for determining ethnicity category of facial images based on multi-level primary and auxiliary classifiers | |
JP4571628B2 (en) | Face recognition system and method | |
Tivive et al. | A gender recognition system using shunting inhibitory convolutional neural networks | |
CN107704813B (en) | Face living body identification method and system | |
CN106529504B (en) | A kind of bimodal video feeling recognition methods of compound space-time characteristic | |
Kaluri et al. | An enhanced framework for sign gesture recognition using hidden Markov model and adaptive histogram technique. | |
CN112733627B (en) | Finger vein recognition method based on fusion local and global feature network | |
CN112329683A (en) | Attention mechanism fusion-based multi-channel convolutional neural network facial expression recognition method | |
CN112001215B (en) | Text irrelevant speaker identity recognition method based on three-dimensional lip movement | |
CN110555463B (en) | Gait feature-based identity recognition method | |
CN112036281B (en) | Facial expression recognition method based on improved capsule network | |
US20220207305A1 (en) | Multi-object detection with single detection per object | |
CN109063626A (en) | Dynamic human face recognition methods and device | |
CN112200074A (en) | Attitude comparison method and terminal | |
Garg et al. | Facial expression recognition & classification using hybridization of ICA, GA, and neural network for human-computer interaction | |
Tautkutė et al. | Classifying and visualizing emotions with emotional DAN | |
CN113159002B (en) | Facial expression recognition method based on self-attention weight auxiliary module | |
CN111680550A (en) | Emotion information identification method and device, storage medium and computer equipment | |
CN113076916A (en) | Dynamic facial expression recognition method and system based on geometric feature weighted fusion | |
Abedi et al. | Modification of deep learning technique for face expressions and body postures recognitions | |
US11048926B2 (en) | Adaptive hand tracking and gesture recognition using face-shoulder feature coordinate transforms | |
Xu et al. | Skeleton guided conflict-free hand gesture recognition for robot control | |
Nimitha et al. | Supervised chromosomal anomaly detection using VGG-16 CNN model | |
Boulahia et al. | 3D multistroke mapping (3DMM): Transfer of hand-drawn pattern representation for skeleton-based gesture recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |