CN107169413B - Facial expression recognition method based on feature block weighting - Google Patents


Info

Publication number
CN107169413B
Authority
CN
China
Prior art keywords
features
geometric
feature
weighting
neural network
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710234709.1A
Other languages
Chinese (zh)
Other versions
CN107169413A (en
Inventor
Xu Shuo
Zhang Erdong
Jiang Yuanguang
Zhang Peng
Wang Yang
Zhou Kepu
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201710234709.1A
Publication of CN107169413A
Application granted
Publication of CN107169413B
Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/175 Static expression
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention relates to a facial expression recognition method based on feature block weighting. The method comprises the following operation steps: 1) extracting Gabor texture features and geometric features from the expression images; 2) reducing the dimensionality of the extracted Gabor texture features with the PCA algorithm, and aligning the extracted geometric features in blocks: the geometric features are divided into three feature blocks (mouth, left eye, right eye), and each block is aligned with the Procrustes Analysis method; 3) fusing the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form the fused features; 4) feeding the fused features into a feature-block-weighted Bp neural network, training the network, and searching for suitable weight coefficients for each layer. The invention improves the generality of the geometric expression features and addresses the problem that different feature forms and different facial regions contribute differently to expression recognition.

Description

Facial expression recognition method based on feature block weighting
Technical Field
The invention relates to facial expression recognition technology, and in particular to a method that weights features at the level of feature blocks and feeds them into a correspondingly weighted Bp (back-propagation) neural network.
Background
The biggest problem facing facial expression recognition research is how to improve recognition accuracy. Owing to differences in region, face size, skin color, culture, and so on, existing facial expression recognition methods generalize poorly and are not robust across different people.
Feature extraction is crucial to facial expression recognition: different extraction methods describe the face from different angles, and different features contribute differently to recognition. To distinguish the importance of different features and different facial regions, many researchers assign a weight factor to each feature dimension and search for the factors under optimization criteria such as maximizing the inter-class distance and minimizing the intra-class distance, thereby separating the contribution of each feature to expression recognition and improving the recognition rate. However, these methods all face the following three problems:
1. The feature dimensionality extracted from an expression image can reach several thousand. Weighting every dimension therefore introduces a very large number of weight factors, and searching for them adds substantial computation, resulting in poor real-time performance.
2. Weighting each dimension separately destroys the original representation form of each feature.
3. The optimization of the weight factors and the training of the classifier are two independent processes, even though it is the classifier that ultimately determines whether a set of weight factors helps correct classification.
To address these issues, the invention provides a facial expression recognition method based on feature block weighting. For problems 1 and 2, features of different forms and of different facial regions are weighted at the level of feature blocks. For problem 3, a weighted Bp neural network is proposed in which the weighting factors are optimized simultaneously with the weights and thresholds of each network layer.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a facial expression recognition method based on feature block weighting, solving the problem that different feature forms and different facial regions contribute differently to facial expression recognition.
In order to achieve the above object, the idea of the present invention is:
the facial expression recognition method based on the feature block weighting comprises facial Gabor feature extraction, facial geometric feature extraction, block alignment and a Bp neural network based on the feature block weighting.
A Gabor filter bank is constructed to extract the Gabor texture features of the facial expression; because the dimensionality of the Gabor features is very high, PCA (principal component analysis) is used for dimension reduction. The positions of facial key points are extracted with the Face++ function library and used as geometric features. Since faces differ in position and size, the geometric features must be aligned to reduce the influence of inaccurate localization, size differences, and so on; many researchers have aligned facial geometric features with Procrustes Analysis to good effect. As is well known, humans judge expressions mainly from the shapes of the mouth and the eyes, and the changes of the mouth and the eyes are largely independent of one another. The method therefore divides the facial geometric features into three geometric feature blocks, the left eye, the right eye, and the mouth, and aligns each block separately with Procrustes Analysis. Unlike whole-face alignment, this reduces interference between the feature blocks and aligns the sample geometry better than aligning the geometric features of the entire face at once. This operation helps address the low recognition rates caused by differences in face size and in the size of facial organs across people.
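As a concrete illustration, the texture pipeline described above (a Gabor filter bank followed by PCA dimension reduction) can be sketched as follows. The filter-bank parameters (scales, orientations, kernel size) and the plain-NumPy implementation are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real part of a Gabor kernel, built directly with NumPy."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, scales=(4, 8), orientations=4, ksize=9):
    """Filter the image with a small Gabor bank (circular FFT convolution)
    and concatenate the response magnitudes into one texture vector."""
    feats = []
    Fimg = np.fft.fft2(img)
    for lam in scales:
        for k in range(orientations):
            kern = gabor_kernel(ksize, sigma=0.56 * lam,
                                theta=k * np.pi / orientations, lam=lam)
            kpad = np.zeros_like(img, dtype=float)
            kpad[:kern.shape[0], :kern.shape[1]] = kern
            resp = np.real(np.fft.ifft2(Fimg * np.fft.fft2(kpad)))
            feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

In practice `gabor_features` would be run on every training image and `pca_reduce` fitted on the resulting feature matrix; the number of retained components `k` is a tuning choice the patent leaves open.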
Different facial regions and different feature representation forms contribute differently to expression recognition. The traditional approach weights every feature dimension and iteratively optimizes the weight factors under criteria such as maximizing the inter-class distance and minimizing the intra-class distance, but it has three drawbacks: 1. the original feature representation and the relationships among features are destroyed, so the advantage of treating a region as a whole is lost; 2. weighting each dimension separately loses the original representation form of each feature; 3. weight optimization and classification remain two separate processes. For drawbacks 1 and 2, the method proposes feature-block weighting: the Gabor features, left-eye geometric features, right-eye geometric features, and mouth geometric features of the facial expression are treated as four independent feature blocks, and each block is weighted as a whole. For drawback 3, a Bp neural network based on feature block weighting is proposed: a weighting layer is added in front of the input layer of the network, the weighting layer weights each feature block, the weighting process is thereby combined with the classifier, and the weighting factors are found through training on the training samples.
According to the inventive concept, the invention adopts the following technical scheme:
a facial expression recognition method based on feature block weighting is characterized by comprising the following operation steps: 1) extracting Gabor texture features and geometric features of the expression pictures; 2) reducing feature dimensionality of the extracted Gabor texture features by adopting a PCA algorithm, aligning the extracted geometric features in blocks, dividing the geometric features into three feature blocks of a mouth, a left eye and a right eye, and aligning the geometric features by respectively adopting a Procrustes Analysis method; 3) fusing the Gabor texture features subjected to PCA dimension reduction with the three geometric feature blocks subjected to Procrustes Analysis to form fused features; 4) and inputting the fusion features into a Bp neural network weighted by the feature block, training the neural network, and seeking a proper weight coefficient of each layer.
The Gabor texture features and geometric features of the expression images are extracted as follows: a Gabor filter extracts the Gabor texture features of each expression image, and the Face++ function library extracts its geometric features.
The geometric feature block alignment is as follows: the geometric features are divided into a left-eye, a right-eye, and a mouth geometric feature block, and each block is then aligned separately with Procrustes Analysis.
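A minimal sketch of per-block Procrustes alignment, assuming each block is an n x 2 array of landmark coordinates (the landmark counts are hypothetical): translation is removed by centering, scale by normalization, and rotation by the orthogonal Procrustes solution.

```python
import numpy as np

def procrustes_align(ref, pts):
    """Align landmark set pts (n x 2) to ref: remove translation and scale,
    then rotate by the orthogonal Procrustes solution."""
    refc = ref - ref.mean(axis=0)
    ptsc = pts - pts.mean(axis=0)
    refn = refc / np.linalg.norm(refc)   # unit Frobenius norm removes scale
    ptsn = ptsc / np.linalg.norm(ptsc)
    # Rotation maximizing trace(R^T ptsn^T refn) maps ptsn onto refn
    U, _, Vt = np.linalg.svd(ptsn.T @ refn)
    return ptsn @ (U @ Vt)
```

Aligning the left-eye, right-eye, and mouth blocks separately, as the method proposes, means calling this once per block against a per-block reference shape, so a large mouth movement cannot distort the eye alignment.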
The feature fusion is as follows: the Gabor features and the geometric feature blocks are stacked into a single column vector.
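The column-vector fusion itself is a plain concatenation; the block dimensions below are hypothetical, chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
F_g = rng.normal(size=120)  # Gabor features after PCA (dimension assumed)
F_l = rng.normal(size=16)   # left-eye geometric block (dimension assumed)
F_r = rng.normal(size=16)   # right-eye geometric block (dimension assumed)
F_m = rng.normal(size=36)   # mouth geometric block (dimension assumed)

# Fusion: stack the four blocks into one column vector
F = np.concatenate([F_g, F_l, F_r, F_m])
```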
The feature-block-weighted Bp neural network method is as follows: a weighting layer containing four weighting factors is added in front of the input layer of the neural network, and the four factors are trained and optimized together with the parameters of each Bp layer, realizing the weighting of the four feature blocks.
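The weighting layer itself reduces to scaling each block, as a whole, by its factor before concatenation; a sketch (the block contents and factor values below are made up):

```python
import numpy as np

def apply_weighting_layer(blocks, w):
    """Weighting layer placed before the input layer: each feature block is
    multiplied, as a whole, by its own weighting factor, and the weighted
    blocks are concatenated into the network input."""
    return np.concatenate([wi * np.asarray(b, dtype=float)
                           for wi, b in zip(w, blocks)])
```

During training the four factors are updated together with the ordinary layer weights, so the block weighting is learned jointly with the classifier rather than hand-tuned in a separate pass.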
Compared with the prior art, the invention has the following obvious and prominent substantive features and remarkable technical progress: the method improves the generality of the geometric expression features, resolves the differing contributions of different feature forms and different facial regions to expression recognition, and thereby improves facial expression recognition accuracy.
Drawings
FIG. 1 is an overall flow diagram of an embodiment of the present invention.
Fig. 2 is a diagram of a weighted Bp neural network according to an embodiment of the present invention.
FIG. 3 is a flow chart of the calculation of weighted Bp neural network input features according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The first embodiment is as follows:
Referring to fig. 1, the facial expression recognition method based on feature block weighting comprises the following operation steps: 1) extracting Gabor texture features and geometric features from the expression images; 2) reducing the dimensionality of the extracted Gabor texture features with the PCA algorithm, and aligning the extracted geometric features in blocks: the geometric features are divided into three feature blocks (mouth, left eye, right eye), each aligned with the Procrustes Analysis method; 3) fusing the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form the fused features; 4) feeding the fused features into the feature-block-weighted Bp neural network, training the network, and searching for suitable weight coefficients for each layer.
Example two:
this embodiment is substantially the same as the first embodiment, and is characterized in that:
the Gabor texture and the geometric characteristics of the expression picture are extracted as follows: and extracting Gabor texture features of the expression image by adopting a Gabor filter, and extracting geometric features of the expression image by adopting a Face + + function library.
The geometric feature block alignment is as follows: the geometric features are divided into a left-eye geometric feature block, a right-eye geometric feature block and a mouth geometric feature block, and then the feature blocks are respectively aligned by Procrustes Analysis.
The feature fusion is as follows: and arranging and combining the Gabor features and the geometric feature blocks in a column vector mode.
The characteristic block weighted Bp neural network method comprises the following steps: a weighting layer is added in front of an input layer of the neural network, the weighting layer comprises four weighting factors, and the four weighting factors and parameters of each layer of the Bp neural network are trained and optimized together to realize weighting of the four feature blocks.
Example three:
As shown in fig. 1, a Gabor filter is used to extract the Gabor features of the facial expression. Because the Gabor features have many dimensions, and high-dimensional representations are generally linearly correlated and contain many useless or barely useful variables, feature selection is performed on the extracted Gabor features with the PCA algorithm. The Face++ function library is used to extract the facial geometric features of the expression images. Since face structure and size vary, as do the positions and sizes of the eyes and the mouth, the extracted geometric features are divided into mouth, left-eye, and right-eye feature blocks, and each block is aligned independently with Procrustes Analysis. The extracted Gabor features and the block-aligned geometric features are then fused into the fused features, in the following manner:
F = [F_g; F_l; F_r; F_m]
In the above formula, F denotes the fused features, F_g denotes the Gabor features after PCA dimension reduction, and F_l, F_r, and F_m denote the left-eye, right-eye, and mouth geometric features after Procrustes Analysis, respectively.
For each feature block in the fused features, a block-level weight is defined: the features of each region are given a single weight as a whole. The extracted features are divided into four independent feature blocks, the Gabor texture feature block and the left-eye, right-eye, and mouth geometric feature blocks; each is treated as an independent whole and assigned its own weight. The weight factors are defined by the following rules:
[The two defining equations appear only as images in the source; they specify the four block weighting factors, one per feature block.]
when the Bp neural network receives input feature vectors, all feature variables are treated equally, and actually different facial expression feature areas have different contribution rates to the expression recognition, so that the method provides the Bp neural network with the weighted feature blocks, as shown in fig. 2, a weighting layer is added in front of an input layer of the original Bp neural network, the weighting layer is composed of four weighting factors defined by the above two formulas, and the weighting factors of the four feature blocks are respectively defined. The weighting layer will weight each feature block first, then input the weighted features into the input layer, then the hidden layer, and finally the output layer, which is the forward propagation of the input features. Then, according to the calculation error, the weight and the threshold of the output layer are updated, and then the weights of the hidden layer, the input layer and the weight layer are updated, namely the back propagation of the error is carried out. The specific flow chart is as shown in fig. 3: after the weight and the threshold value of the network are initialized, firstly, weighting operation of a feature block is carried out on input features, then the input features are input into an input layer of a neural network, the results of each layer are calculated layer by layer, the difference between actual output and expected output is analyzed and compared, then the weight of each layer is updated in a back-to-back mode, and the calculation mode of the weight layer is as follows:
and multiplying the four feature blocks by corresponding weight factors respectively, and weighting the four feature blocks based on the feature blocks. The following equation:
F' = [w_1 F_g; w_2 F_l; w_3 F_r; w_4 F_m]
In the above equation, the fused feature F is the input of the weighting layer, and the weighted fused feature F' is both the output of the weighting layer and the input of the input layer. Calculation then proceeds step by step according to the Bp neural network procedure, and the weight coefficients of each layer are updated iteratively until the error requirement is met.

Claims (1)

1. A facial expression recognition method based on feature block weighting is characterized by comprising the following operation steps:
1) extracting Gabor texture features and geometric features of the expression pictures;
2) reducing feature dimensionality of the extracted Gabor texture features by adopting a PCA algorithm, aligning the extracted geometric features in blocks, dividing the geometric features into three feature blocks of a mouth, a left eye and a right eye, and aligning the geometric features by respectively adopting a Procrustes Analysis method;
3) fusing the Gabor texture features subjected to PCA dimension reduction with the three geometric feature blocks subjected to Procrustes Analysis to form fused features;
4) inputting the fusion features into a Bp neural network weighted by the feature block, training the neural network, and seeking a proper weight coefficient of each layer;
the step 1) of extracting Gabor textures and geometric features of the expression pictures is as follows: extracting Gabor texture features of the expression image by adopting a Gabor filter, and extracting geometric features of the expression image by adopting the Face++ function library;
the geometric feature block alignment in the step 2) is as follows: dividing the geometric features into a left-eye geometric feature block, a right-eye geometric feature block and a mouth geometric feature block, and then respectively carrying out alignment processing on each feature block by using Procrustes Analysis;
the feature fusion in the step 3) is as follows: arranging and combining the Gabor characteristics and each geometric characteristic block according to a column vector mode;
the Bp neural network method for weighting the feature blocks in the step 4) comprises the following steps: a weighting layer is added in front of an input layer of the neural network, the weighting layer comprises four weighting factors, and the four weighting factors and parameters of each layer of the Bp neural network are trained and optimized together to realize weighting of the four feature blocks.
CN201710234709.1A 2017-04-12 2017-04-12 Facial expression recognition method based on feature block weighting Expired - Fee Related CN107169413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710234709.1A CN107169413B (en) 2017-04-12 2017-04-12 Facial expression recognition method based on feature block weighting


Publications (2)

Publication Number / Publication Date
CN107169413A (en) 2017-09-15
CN107169413B (en) 2021-01-12

Family

ID=59849968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710234709.1A Expired - Fee Related CN107169413B (en) 2017-04-12 2017-04-12 Facial expression recognition method based on feature block weighting

Country Status (1)

Country Link
CN (1) CN107169413B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108288023B (en) * 2017-12-20 2020-10-16 深圳和而泰数据资源与云技术有限公司 Face recognition method and device
KR102564855B1 (en) * 2018-01-08 2023-08-08 삼성전자주식회사 Device and method to recognize object and face expression, and device and method to train obejct and face expression robust to facial change
CN110263681B (en) * 2019-06-03 2021-07-27 腾讯科技(深圳)有限公司 Facial expression recognition method and device, storage medium and electronic device
US11244206B2 (en) * 2019-09-06 2022-02-08 Fujitsu Limited Image normalization for facial analysis

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105512273A (en) * 2015-12-03 2016-04-20 中山大学 Image retrieval method based on variable-length depth hash learning
CN105892287A (en) * 2016-05-09 2016-08-24 河海大学常州校区 Crop irrigation strategy based on fuzzy judgment and decision making system

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN100557625C (en) * 2008-04-18 2009-11-04 清华大学 Face identification method and device thereof that face component feature and Gabor face characteristic merge
CN101620669B (en) * 2008-07-01 2011-12-07 邹采荣 Method for synchronously recognizing identities and expressions of human faces
CN101388075B (en) * 2008-10-11 2011-11-16 大连大学 Human face identification method based on independent characteristic fusion
CN101719223B (en) * 2009-12-29 2011-09-14 西北工业大学 Identification method for stranger facial expression in static image
CN101799919A (en) * 2010-04-08 2010-08-11 西安交通大学 Front face image super-resolution rebuilding method based on PCA alignment
CN103020654B (en) * 2012-12-12 2016-01-13 北京航空航天大学 The bionical recognition methods of SAR image with core Local Feature Fusion is produced based on sample
CN104517104B (en) * 2015-01-09 2018-08-10 苏州科达科技股份有限公司 A kind of face identification method and system based under monitoring scene
CN105117708A (en) * 2015-09-08 2015-12-02 北京天诚盛业科技有限公司 Facial expression recognition method and apparatus


Also Published As

Publication number Publication date
CN107169413A (en) 2017-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210112