CN113361307A - Facial expression classification method and device and storage equipment - Google Patents
Facial expression classification method and device and storage equipment
- Publication number
- CN113361307A (application CN202010153454.8A)
- Authority
- CN
- China
- Prior art keywords: prior, face, training, model, size
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214 — Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/217 — Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Validation; Performance evaluation; Active pattern learning techniques
- G06N3/045 — Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
Abstract
The invention provides a facial expression classification method, apparatus, and storage device that fully and systematically extract facial expression features through adaptive learning of facial region division. A facial expression picture is preprocessed and the face is mapped to a uniform size within the picture; an expression recognition model is established, with two neural network layers added to it, one for training the size of the prior frames that divide the face into prior regions and the other for learning the weights of those prior regions; the prior-region weights are set according to the importance of different facial regions, and the sensitivity of each region under different expressions is trained; finally, the model is trained with a classifier so as to classify the expression.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a facial expression classification method, apparatus, and storage device.
Background
Existing expression recognition methods generally suffer from poor accuracy because the face region as a whole is not divided scientifically. Facial expressions result from the linkage of complex facial muscle groups, and the dynamic details of an expression are difficult to capture through manually defined regions alone. Therefore, training the division of the face region with a deep learning method, while simultaneously learning the sensitivity of different expressions to different regions, can greatly improve the facial expression recognition effect.
Disclosure of Invention
The invention aims to provide a facial expression classification method, apparatus, and storage device, so that facial expression features can be fully and systematically extracted through adaptive learning of facial region division.
In order to achieve the above object, an aspect of the present invention provides a facial expression classification method, including:
preprocessing the facial expression picture and mapping the face to a uniform size within the picture;
establishing an expression recognition model and adding two neural network layers to it, wherein one layer is used for training the size of the prior frame so as to divide the face into prior regions, and the other layer is used for learning the weights of the prior regions;
setting the weights of the prior regions according to the importance of different facial regions, and training the sensitivity of each region under different expressions;
and training the model with a classifier so as to classify the expression.
Further, the preprocessing comprises:
performing preprocessing operations such as grayscale conversion and normalization on the picture, and mapping the face to a uniform size within the picture.
Further, establishing the expression recognition model comprises:
setting the size of the prior frame, and dividing the face into several prior regions according to the prior frame;
and reversely updating, through model training, the weight of each prior region and the parameters describing the influence of facial muscles on different expressions according to the size and position of each prior frame.
Further, the face is initially divided into 9 prior regions of equal size.
Further, the face is divided into at most 30 prior regions, likewise of equal size.
Further, the expression classifications include happiness, fear, sadness, disgust, anger, neutrality, and surprise.
In another aspect, the present invention also provides a facial expression classification apparatus, comprising:
the preprocessing module is used for preprocessing the facial expression picture and mapping the face to be uniform in size in the picture;
the model generation module is used for establishing an expression recognition model, two layers of neural networks are added in the model, one layer is used for training the size of a prior frame to divide a prior region of a face, and the other layer is used for learning the weight of the prior region;
the model training module is used for setting the weight of the prior region according to the importance of different regions of the face and training the sensitivity of each region under different expressions;
and the model classification module is used for training the models by adopting a classifier so as to classify the expressions.
In another aspect, the present invention also provides a storage device, wherein the storage medium stores instructions adapted to be loaded by a processor to perform the steps of the facial expression classification method described above.
The invention provides a facial expression classification method, apparatus, and storage device that fully and systematically extract facial expression features through adaptive learning of facial region division. A facial expression picture is preprocessed and the face is mapped to a uniform size within the picture; an expression recognition model is established, with two neural network layers added to it, one for training the size of the prior frames that divide the face into prior regions and the other for learning the weights of those prior regions; the prior-region weights are set according to the importance of different facial regions, and the sensitivity of each region under different expressions is trained; finally, the model is trained with a classifier so as to classify the expression.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a method for classifying facial expressions according to an embodiment of the present invention.
Fig. 2 is a system architecture diagram of a facial expression classification apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where language such as "first/second" appears in the specification, the terms "first", "second", and "third" are used merely to distinguish similar items and do not indicate a particular ordering of items; it is to be understood that, where appropriate, they may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
A facial expression classification method, apparatus, and storage device according to an embodiment of the present invention will be described below with reference to the accompanying drawings, and first, a facial expression classification method according to an embodiment of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for classifying facial expressions according to an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
and step S1, preprocessing the facial expression picture, and mapping the face to be uniform in size in the picture.
In the preprocessing, the preprocessing operation such as graying and homogenization is performed on the picture, and the face is mapped to be uniform in size in the picture.
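The preprocessing just described can be sketched in plain Python. The function name, the luminance weights for grayscale conversion, the min-max normalization, and the nearest-neighbour resampling are all illustrative assumptions; the patent does not specify these details:

```python
def preprocess(pixels, target=48):
    """Hypothetical sketch of the preprocessing step: grayscale
    conversion, normalization, and mapping to a uniform size.
    `pixels` is a nested list of (R, G, B) tuples."""
    # Grayscale via standard luminance weights (an assumption; the
    # patent does not give the conversion formula).
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in pixels]
    # Normalize intensities to [0, 1].
    flat = [v for row in gray for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0
    norm = [[(v - lo) / scale for v in row] for row in gray]
    # Map to a uniform target x target size by nearest-neighbour sampling.
    h, w = len(norm), len(norm[0])
    return [[norm[int(i * h / target)][int(j * w / target)]
             for j in range(target)] for i in range(target)]
```

Any input picture, whatever its original resolution, thus comes out as a `target` x `target` grid of values in [0, 1], so every face enters the model at a uniform size.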
Step S2, establishing an expression recognition model and adding two neural network layers to it, wherein one layer is used for training the size of the prior frame so as to divide the face into prior regions, and the other layer is used for learning the weights of the prior regions.
Specifically, the face is initially divided into 9 prior regions of equal size, and is divided into at most 30 prior regions, likewise of equal size.
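The initial division into equal-size prior regions might look like the sketch below. The patent starts with 9 regions and caps the count at 30 but does not publish the tiling scheme, so a square grid (covering square counts such as 9, 16, or 25) is assumed here:

```python
import math

def prior_regions(img_size, n_regions=9):
    """Divide a square img_size x img_size face into n_regions equal
    prior boxes, returned as (y, x, height, width) tuples.
    A square grid is an assumption; the patent gives no tiling scheme."""
    side = math.isqrt(n_regions)
    assert side * side == n_regions, "this sketch only covers square grids"
    step = img_size // side
    return [(i * step, j * step, step, step)
            for i in range(side) for j in range(side)]
```

For example, `prior_regions(48, 9)` tiles a 48 x 48 face into nine 16 x 16 boxes, matching the requirement that the initial prior regions all have the same size.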
Step S3, setting the weights of the prior regions according to the importance of different facial regions, and training the sensitivity of each region under different expressions.
In establishing the model, the size of the prior frame is set and the face is divided into several prior regions accordingly; a VGG16 model is used for training, and the weight of each prior region and the parameters describing the influence of facial muscles on different expressions are reversely updated according to the size and position of each prior frame.
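The "reverse update" of the prior-region weights is ordinary backpropagation; a single update step can be sketched as below. The patent names no optimizer, so plain gradient descent with an illustrative learning rate is assumed:

```python
def update_region_weights(weights, grads, lr=0.01):
    """One hypothetical reverse-update step for the prior-region
    weights: vanilla gradient descent (an assumption; the patent does
    not specify the optimizer or learning rate)."""
    return [w - lr * g for w, g in zip(weights, grads)]
```

Regions whose loss gradient is negative (i.e. more weight would reduce the error) have their weight increased, which over training concentrates weight on the facial regions most sensitive to each expression.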
As will be appreciated by those skilled in the art, the VGG16 model is a deep convolutional neural network with 16 weight layers.
VGG16 comprises:
- 13 convolutional layers (Convolutional Layer), denoted conv3-XXX;
- 3 fully connected layers (Fully Connected Layer), denoted FC-XXXX;
- 5 pooling layers (Pooling Layer), denoted maxpool.
Among them, the convolutional layers and fully connected layers carry weight coefficients and are therefore also called weight layers; there are 13 + 3 = 16 of them in total, which is the origin of the "16" in VGG16. (Pooling layers involve no weights, do not belong to the weight layers, and are not counted.)
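The layer arithmetic can be checked against the commonly published VGG16 configuration; the channel-width list below is that standard layout, assumed here rather than taken from the patent:

```python
# Standard VGG16 convolutional stack: channel widths, with "M" marking
# a max-pooling layer (widely published configuration, assumed here).
VGG16_CONV = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
              512, 512, 512, "M", 512, 512, 512, "M"]
conv_layers = sum(1 for v in VGG16_CONV if v != "M")   # 13 conv layers
pool_layers = VGG16_CONV.count("M")                    # 5 pooling layers
fc_layers = 3                                          # FC-4096, FC-4096, FC-1000
# Only convolutional and fully connected layers carry weights.
weight_layers = conv_layers + fc_layers                # 13 + 3 = 16
```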
Specifically, the method fits the size of each prior region according to the region's weight through the convolutional and fully connected layers, so as to ensure that the region division accurately extracts the regional features of the expression.
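One way to picture this fitting step: grow or shrink each prior box around its centre in proportion to its learned weight. The patent states only that region size is fitted to region weight and publishes no formula, so the linear scaling, the `gain` parameter, and the clamping to the image bounds below are all assumptions:

```python
def fit_region_sizes(boxes, weights, img_size, gain=0.2):
    """Illustrative sketch: rescale each (y, x, h, w) prior box around
    its centre according to how its weight compares with the mean
    weight. Purely an assumed formula, not the patent's."""
    mean_w = sum(weights) / len(weights)
    fitted = []
    for (y, x, h, w), wt in zip(boxes, weights):
        scale = 1.0 + gain * (wt - mean_w)   # heavier regions grow
        nh, nw = h * scale, w * scale
        cy, cx = y + h / 2, x + w / 2        # keep the box centre fixed
        ny = min(max(cy - nh / 2, 0), img_size - nh)
        nx = min(max(cx - nw / 2, 0), img_size - nw)
        fitted.append((ny, nx, nh, nw))
    return fitted
```

With equal weights the boxes are unchanged; a region weighted above the mean expands so that more of its surroundings contribute to the expression features it extracts.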
Step S4, training the model with a classifier so as to classify the expression.
Specifically, the expression classifications include happiness, fear, sadness, disgust, anger, neutrality, and surprise.
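A minimal sketch of this final classification step follows. The patent says only that a classifier is trained on the model output, so a softmax over per-region score vectors combined with the learned prior-region weights is assumed, and all names are illustrative:

```python
import math

EXPRESSIONS = ["happiness", "fear", "sadness", "disgust",
               "anger", "neutrality", "surprise"]

def classify(region_scores, region_weights):
    """Combine one 7-way score vector per prior region with the learned
    region weights, then pick a class by softmax. A hypothetical
    classifier head; the patent does not specify this form."""
    logits = [sum(w * scores[k]
                  for w, scores in zip(region_weights, region_scores))
              for k in range(len(EXPRESSIONS))]
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return EXPRESSIONS[probs.index(max(probs))], probs
```

Because each region contributes in proportion to its learned weight, regions trained to be highly sensitive to a given expression dominate that expression's logit.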
As shown in fig. 2, in another aspect, the present invention further provides a facial expression classifying device, including:
the preprocessing module 101 is used for preprocessing the facial expression picture and mapping the face to be uniform in size in the picture;
the model generation module 102 is used for establishing an expression recognition model, adding two layers of neural networks in the model, wherein one layer is used for training the size of a prior frame to divide a prior region of a face, and the other layer is used for learning the weight of the prior region;
the model training module 103 is used for setting the weight of the prior region according to the importance of different regions of the face and training the sensitivity of each region under different expressions;
and the model classification module 104, which trains the model with a classifier so as to classify the expressions.
In another aspect, the present invention further provides a storage device, wherein the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps of the facial expression classification method described above.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A facial expression classification method is characterized by comprising the following steps:
preprocessing the facial expression picture and mapping the face to a uniform size within the picture;
establishing an expression recognition model and adding two neural network layers to it, wherein one layer is used for training the size of the prior frame so as to divide the face into prior regions, and the other layer is used for learning the weights of the prior regions;
setting the weights of the prior regions according to the importance of different facial regions, and training the sensitivity of each region under different expressions;
and training the model with a classifier so as to classify the expression.
2. The method for classifying facial expressions according to claim 1, wherein the preprocessing process comprises:
and carrying out preprocessing operations such as graying, homogenization and the like on the picture, and mapping the human face into a uniform size in the picture.
3. The method for classifying facial expressions according to claim 1, wherein the expression recognition model establishing process comprises:
setting the size of a prior frame, and dividing the face into a plurality of prior areas according to the prior frame;
training with a VGG16 model, and reversely updating the weight of each prior region and the parameters describing the influence of facial muscles on different expressions according to the size and position of each prior frame.
4. The method of claim 3, wherein the facial expression is classified,
the face is initially divided into 9 blocks of prior regions, the 9 blocks of prior regions being of uniform size.
5. The method of claim 3, wherein the facial expression is classified,
the face is divided into 30 blocks of prior regions at most, and the 30 blocks of prior regions are consistent in size.
6. The method of claim 1, wherein the facial expression is classified,
the expression classifications include happy, frightened, sad, hate, angry, neutral, surprised.
7. A facial expression classification apparatus, characterized by comprising:
the preprocessing module is used for preprocessing the facial expression picture and mapping the face to be uniform in size in the picture;
the model generation module is used for establishing an expression recognition model, two layers of neural networks are added in the model, one layer is used for training the size of a prior frame to divide a prior region of a face, and the other layer is used for learning the weight of the prior region;
the model training module is used for setting the weight of the prior region according to the importance of different regions of the face and training the sensitivity of each region under different expressions;
and the model classification module is used for training the models by adopting a classifier so as to classify the expressions.
8. A storage device, wherein the storage medium stores instructions adapted to be loaded by a processor to perform the steps of the facial expression classification method according to any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010153454.8A CN113361307A (en) | 2020-03-06 | 2020-03-06 | Facial expression classification method and device and storage equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010153454.8A CN113361307A (en) | 2020-03-06 | 2020-03-06 | Facial expression classification method and device and storage equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113361307A (en) | 2021-09-07 |
Family
ID=77524206
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010153454.8A Pending CN113361307A (en) | 2020-03-06 | 2020-03-06 | Facial expression classification method and device and storage equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113361307A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101449744B1 (en) * | 2013-09-06 | 2014-10-15 | 한국과학기술원 | Face detection device and method using region-based feature |
US20140362091A1 (en) * | 2013-06-07 | 2014-12-11 | Ecole Polytechnique Federale De Lausanne | Online modeling for real-time facial animation |
KR20160053749A (en) * | 2014-11-05 | 2016-05-13 | 한국과학기술원 | Method and systems of face expression features classification robust to variety of face image appearance |
WO2018060993A1 (en) * | 2016-09-27 | 2018-04-05 | Faception Ltd. | Method and system for personality-weighted emotion analysis |
CN108875833A (en) * | 2018-06-22 | 2018-11-23 | 北京智能管家科技有限公司 | Training method, face identification method and the device of neural network |
CN109086663A (en) * | 2018-06-27 | 2018-12-25 | 大连理工大学 | The natural scene Method for text detection of dimension self-adaption based on convolutional neural networks |
CN109344693A (en) * | 2018-08-13 | 2019-02-15 | 华南理工大学 | A kind of face multizone fusion expression recognition method based on deep learning |
CN109902660A (en) * | 2019-03-18 | 2019-06-18 | 腾讯科技(深圳)有限公司 | A kind of expression recognition method and device |
US20190311188A1 (en) * | 2018-12-05 | 2019-10-10 | Sichuan University | Face emotion recognition method based on dual-stream convolutional neural network |
CN110738160A (en) * | 2019-10-12 | 2020-01-31 | 成都考拉悠然科技有限公司 | human face quality evaluation method combining with human face detection |
Non-Patent Citations (4)
Title |
---|
ZHANG, H: "Face-selective regions differ in their ability to classify facial expressions", NEUROIMAGE, vol. 130, pages 77 - 90, XP029470312, DOI: 10.1016/j.neuroimage.2016.01.045 * |
方东东 (Fang Dongdong): "Research on Face Detection Algorithms Based on Deep Learning", China Master's Theses Full-text Database (Information Science and Technology), no. 2019, pages 138 - 615 *
胡振寰 (Hu Zhenhuan): "Occluded Pedestrian Detection Based on Deep Learning Algorithms", China Master's Theses Full-text Database (Information Science and Technology), no. 2019, pages 138 - 701 *
郭凯 (Guo Kai): "Research and System Implementation of an SSD-based Real-time Face Occlusion Detection Method", China Master's Theses Full-text Database (Information Science and Technology), no. 2020, pages 138 - 1102 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113688789A (en) * | 2021-09-17 | 2021-11-23 | 华中师范大学 | Online learning investment recognition method and system based on deep learning |
CN113688789B (en) * | 2021-09-17 | 2023-11-10 | 华中师范大学 | Online learning input degree identification method and system based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||