CN107169413A - Facial expression recognition method based on feature block weighting - Google Patents
Facial expression recognition method based on feature block weighting
- Publication number
- CN107169413A CN107169413A CN201710234709.1A CN201710234709A CN107169413A CN 107169413 A CN107169413 A CN 107169413A CN 201710234709 A CN201710234709 A CN 201710234709A CN 107169413 A CN107169413 A CN 107169413A
- Authority
- CN
- China
- Prior art keywords
- feature
- features
- weight
- geometric
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention relates to a facial expression recognition method based on feature block weighting. The method operates as follows: 1) extract the Gabor texture features and geometric features of the expression picture; 2) reduce the dimensionality of the extracted Gabor texture features with the PCA algorithm, and align the extracted geometric features block by block: divide them into three feature blocks (mouth, left eye, right eye) and align each block separately with the Procrustes Analysis method; 3) fuse the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature; 4) feed the fused feature into the feature-block-weighted BP neural network and train the network to find suitable weight coefficients for each layer. The invention improves the commonality of geometric expression features and solves the problem that different feature forms and features of different facial regions contribute to expression recognition at different rates.
Description
Technical field
The invention relates to facial expression recognition technology, and in particular to a method that weights each feature block at the block level and uses a weighted BP (backpropagation) neural network.
Background
The biggest challenge in facial expression recognition research is improving recognition accuracy. Because face size, skin color, culture, and other factors vary across regions and ethnic groups, current facial expression recognition methods generalize poorly and are not robust across different people.
Feature extraction is critical to expression recognition: different extraction methods represent features from different perspectives, and different features contribute to recognition at different rates. To distinguish the importance of different features and of different facial regions, many researchers use weight analysis, assigning a weight factor to each feature dimension and searching for the factors under optimization principles such as maximizing inter-class distance and minimizing intra-class distance; this distinguishes each feature's contribution to expression recognition and raises the recognition rate. However, these methods all face the following three problems:
1. The feature vectors extracted from expression images run to tens of thousands of dimensions. Weighting every dimension yields a correspondingly large number of weight factors, and searching for them adds computational cost, so real-time performance suffers.
2. Weighting each dimension separately inevitably makes each feature lose its original representation.
3. Optimizing the weight factors and training the classifier are two independent processes, yet the quality of a weight factor can only be judged through the classifier: a factor is good only if it helps the classifier classify correctly.
Based on these requirements, the present invention proposes a facial expression recognition method based on feature block weighting. For problems 1 and 2, features of different forms and of different facial regions are weighted at the feature block level. For problem 3, a weighted BP neural network is proposed in which the weight factors are optimized jointly with the weights and thresholds of each network layer.
Summary of the invention
In view of the defects of the prior art, the purpose of the present invention is to propose a facial expression recognition method based on feature block weighting, solving the problem that different feature forms and features of different facial regions contribute to facial expression recognition at different rates.
In order to achieve the above object, the conception of the present invention is as follows:
This facial expression recognition method based on feature block weighting comprises facial Gabor feature extraction; facial geometric feature extraction and block-wise alignment; and a BP neural network based on feature block weighting.
A Gabor filter bank is constructed to extract the Gabor texture features of facial expressions; because the Gabor features are very high-dimensional, PCA is applied for dimensionality reduction. The Face++ function library is used to extract the positions of facial key points as geometric features. Because faces differ in position and size, the geometric features must be aligned to reduce the impact of imprecise localization and varying scale. Many researchers align the facial geometric features with Procrustes Analysis, with good results. Humans judge expressions mainly from the different shapes of the mouth and eyes, and mouth and eye movements are independent of each other and do not interfere with one another. This method therefore divides the facial geometric features into three blocks (left eye, right eye, mouth) and aligns each block separately with Procrustes Analysis. Unlike whole-face alignment, this reduces interference between feature blocks and aligns the sample geometry better than aligning the geometric features of the entire face at once. This helps address the low recognition rates caused by differences in face size and facial organ size across people.
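The per-block Procrustes alignment described above can be sketched as follows. This is a minimal illustration of ordinary Procrustes analysis (removing translation, scale, and rotation) applied to a single landmark block; the function name and the use of NumPy are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def procrustes_align(points, reference):
    """Align one (k, 2) landmark block to a reference block by removing
    translation, scale, and rotation (ordinary Procrustes analysis)."""
    # Remove translation: center both shapes at the origin.
    p = points - points.mean(axis=0)
    r = reference - reference.mean(axis=0)
    # Remove scale: normalise both shapes to unit Frobenius norm.
    p = p / np.linalg.norm(p)
    r = r / np.linalg.norm(r)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(p.T @ r)
    return p @ (u @ vt)
```

In the method described here, this alignment would be run separately on the mouth, left-eye, and right-eye landmark blocks, so the blocks do not interfere with one another.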
Different facial regions and different facial expression feature representations contribute to expression recognition at different rates. Traditional methods address this by weighting each feature dimension and iteratively optimizing the factors under principles such as maximizing inter-class distance and minimizing intra-class distance, but this has three drawbacks: 1) it destroys the original feature representation and the relationships between features, so the weighted features inevitably lose their collective advantage; 2) weighting each dimension separately makes each feature lose its original representation; 3) optimizing the feature weights and training the classifier are separate processes. For drawbacks 1 and 2, this method introduces the concept of feature block weighting: the Gabor features, left-eye geometric features, right-eye geometric features, and mouth geometric features of the facial expression are treated as four independent feature blocks, and each block is weighted as a whole. For drawback 3, a feature-block-weighted BP neural network is proposed: a weight layer is added before the input layer of the BP network. This weight layer weights each feature block, coupling the block weighting process to the classifier; through training on the sample set, the weight factors of the weight layer are searched for and optimized, weighting the feature blocks.
According to the above inventive conception, the present invention adopts the following technical solution:
A facial expression recognition method based on feature block weighting, characterized in that the operation steps are as follows: 1) extract the Gabor texture features and geometric features of the expression picture; 2) reduce the dimensionality of the extracted Gabor texture features with the PCA algorithm, and align the extracted geometric features block by block: divide them into three feature blocks (mouth, left eye, right eye) and align each block separately with the Procrustes Analysis method; 3) fuse the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature; 4) feed the fused feature into the feature-block-weighted BP neural network and train the network to find suitable weight coefficients for each layer.
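Step 2's PCA dimensionality reduction can be sketched as below: a minimal SVD-based projection onto the top principal components. The function name and shapes are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project row-vector samples X (n_samples, n_features) onto the
    top principal components, as used here to shrink the
    high-dimensional Gabor texture features."""
    X_centered = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal
    # axes, ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ vt[:n_components].T
```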
The above-mentioned extraction of the Gabor texture and geometric features of the expression picture is: a Gabor filter extracts the Gabor texture features of the expression image, and the Face++ function library extracts its geometric features.
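As an illustration of the Gabor filter mentioned above, a single real-valued Gabor kernel can be generated as below. The parameterization (sigma, theta, wavelength, aspect ratio) is a common textbook form; the patent does not specify its filter-bank parameters, so these are assumptions.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope times a
    cosine carrier with orientation theta and wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation.
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    y_theta = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_theta**2 + (gamma * y_theta)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_theta / lam + psi)
    return envelope * carrier
```

A Gabor texture feature vector would then come from convolving the face image with a bank of such kernels at several scales and orientations and collecting the responses.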
The above-mentioned geometric feature block alignment is: the geometric features are divided into a left-eye geometric feature block, a right-eye geometric feature block, and a mouth geometric feature block, and each block is then aligned separately with Procrustes Analysis.
The above-mentioned feature fusion is: the Gabor features and the geometric feature blocks are arranged and combined in the form of a column vector.
The above-mentioned feature-block-weighted BP neural network method is: a weight layer containing four weight factors is added before the input layer of the network, and these four factors are trained and optimized together with the parameters of the other layers of the BP network, weighting the four feature blocks.
Compared with the prior art, the present invention has the following obvious substantive features and significant technical progress: it improves the commonality of geometric expression features and solves the problem that different feature representations and features of different facial regions contribute to expression recognition at different rates, thereby improving the recognition accuracy of facial expressions.
Brief description of the drawings
Fig. 1 is the overall flow diagram of an embodiment of the present invention.
Fig. 2 is a structural diagram of the weighted BP neural network of an embodiment of the present invention.
Fig. 3 is a flow chart of the computation of the input features of the weighted BP neural network of an embodiment of the present invention.
Detailed description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1:
Referring to Fig. 1, this facial expression recognition method based on feature block weighting is characterized in that the operation steps are as follows: 1) extract the Gabor texture features and geometric features of the expression picture; 2) reduce the dimensionality of the extracted Gabor texture features with the PCA algorithm, and align the extracted geometric features block by block: divide them into three feature blocks (mouth, left eye, right eye) and align each block separately with the Procrustes Analysis method; 3) fuse the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature; 4) feed the fused feature into the feature-block-weighted BP neural network and train the network to find suitable weight coefficients for each layer.
Embodiment 2:
This embodiment is basically the same as Embodiment 1, with the following particulars:
The Gabor texture and geometric features of the expression picture are extracted as follows: a Gabor filter extracts the Gabor texture features of the expression image, and the Face++ function library extracts its geometric features.
The geometric feature block alignment is as follows: the geometric features are divided into a left-eye geometric feature block, a right-eye geometric feature block, and a mouth geometric feature block, and each block is then aligned separately with Procrustes Analysis.
The feature fusion is as follows: the Gabor features and the geometric feature blocks are arranged and combined in the form of a column vector.
The feature-block-weighted BP neural network method is as follows: a weight layer containing four weight factors is added before the input layer of the network, and these four factors are trained and optimized together with the parameters of the other layers of the BP network, weighting the four feature blocks.
Embodiment 3:
As shown in Fig. 1, a Gabor filter extracts the Gabor features of the facial expression. Because Gabor features are high-dimensional, in such high-dimensional representations the features are usually linearly correlated and contain many useless or barely useful variables, so the PCA algorithm is applied to the extracted Gabor features for feature selection. The Face++ function library extracts the facial geometric features of the expression image. Because faces differ in structure and size, and eyes and mouths differ in position and scale, the extracted geometric features are divided into mouth, left-eye, and right-eye feature blocks, and Procrustes Analysis is applied to each block separately. The extracted Gabor features are then fused with the block-aligned geometric features to form the fused feature, as follows:
F = [F_g^T, F_l^T, F_r^T, F_m^T]^T, where F is the fused feature, F_g is the Gabor feature after PCA dimensionality reduction, and F_l, F_r, F_m are the left-eye, right-eye, and mouth geometric features after Procrustes Analysis, respectively.
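The fusion step above amounts to stacking the four blocks into a single column vector, which can be sketched as follows; the function name and block sizes are illustrative.

```python
import numpy as np

def fuse_features(f_gabor, f_left, f_right, f_mouth):
    """Stack the PCA-reduced Gabor block and the three Procrustes-aligned
    geometric blocks into the single fused feature vector F."""
    return np.concatenate([np.ravel(f_gabor), np.ravel(f_left),
                           np.ravel(f_right), np.ravel(f_mouth)])
```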
For each feature block in the fused feature, a weight is defined at the block level, so that the features of each region share one overall weight. The method divides the extracted features into four independent feature blocks: the Gabor texture feature block, the left-eye geometric feature block, the right-eye geometric feature block, and the mouth geometric feature block; each of these four parts is treated as an independent whole and assigned its own weight factor, one factor per block and four factors in total.
A BP neural network treats every input feature variable equally, whereas different facial expression feature regions actually contribute to expression recognition at different rates. This method therefore proposes the feature-block-weighted BP neural network of Fig. 2: a weight layer is added before the input layer of the original BP network, consisting of the four weight factors defined above, one per feature block. The weight layer first weights each feature block, then passes the weighted features to the input layer, next the hidden layer, and finally the output layer; this is the forward propagation of the input features. Then, according to the computed error, the weights and thresholds of the output layer are updated, followed by those of the hidden layer, the input layer, and the weight layer; this is the backward propagation of the error. The flow is shown in Fig. 3: after the network's weights and thresholds are initialized, the input features first undergo the block weighting operation and are fed to the network's input layer; the result of each layer is computed layer by layer, the gap between the actual output and the expected output is analyzed, and the weights of each layer are then updated backward. The weight layer is computed as follows:
Each of the four feature blocks is multiplied by its corresponding weight factor, weighting the blocks at the block level: F~ = [w_g·F_g^T, w_l·F_l^T, w_r·F_r^T, w_m·F_m^T]^T, where w_g, w_l, w_r, w_m are the weight factors of the Gabor, left-eye, right-eye, and mouth blocks, respectively. In this formula, F is the input of the weight layer, and F~, the fused feature after block weighting, is its output and serves as the input of the input layer. Computation then proceeds step by step through the BP neural network, and the weight coefficients of each layer are updated iteratively until the error requirement is met.
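The feature-block-weighted BP network described above can be sketched as below: one scalar weight per block scales the corresponding slice of the fused feature before a small sigmoid MLP, and backpropagation updates the block weights jointly with the layer weights. The block sizes, learning rate, layer sizes, and six expression classes are illustrative assumptions, and thresholds (biases) are omitted for brevity; this is a sketch of the technique, not the patent's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative block sizes: Gabor, left eye, right eye, mouth.
sizes = [8, 4, 4, 6]
bounds = np.cumsum([0] + sizes)
slices = [slice(bounds[k], bounds[k + 1]) for k in range(4)]
d, n_hidden, n_out = sum(sizes), 10, 6        # 6 expression classes

w_blocks = np.ones(4)                          # the four block weight factors
W1 = rng.normal(0.0, 0.5, (d, n_hidden))       # input  -> hidden
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))   # hidden -> output

def train_step(F, target, lr=0.2):
    """One forward/backward pass; updates W1, W2, and the block weights."""
    global w_blocks, W1, W2
    # Forward: the weight layer scales each block of the fused feature F.
    scaled = np.concatenate([w_blocks[k] * F[slices[k]] for k in range(4)])
    h = sigmoid(scaled @ W1)
    y = sigmoid(h @ W2)
    # Backward: squared-error loss, deltas via the chain rule.
    delta_out = (y - target) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    grad_scaled = delta_hid @ W1.T
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(scaled, delta_hid)
    # Block-weight gradient: sum of per-feature gradients within the block.
    for k in range(4):
        w_blocks[k] -= lr * grad_scaled[slices[k]] @ F[slices[k]]
    return 0.5 * np.sum((y - target) ** 2)
```

Because the block weights sit in the same computational graph as the layer weights, the error signal optimizes both at once, which is the point of coupling the weighting to the classifier.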
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710234709.1A (granted as CN107169413B) | 2017-04-12 | 2017-04-12 | Facial expression recognition method based on feature block weighting |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN107169413A | 2017-09-15 |
| CN107169413B | 2021-01-12 |
Family
ID=59849968
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710234709.1A (Expired - Fee Related; granted as CN107169413B) | Facial expression recognition method based on feature block weighting | 2017-04-12 | 2017-04-12 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169413B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108288023A | 2017-12-20 | 2018-07-17 | 深圳和而泰数据资源与云技术有限公司 | Face recognition method and apparatus |
CN110020580A | 2018-01-08 | 2019-07-16 | 三星电子株式会社 | Method for recognizing objects and facial expressions, and method for training facial expression recognition |
WO2020244434A1 (en) * | 2019-06-03 | 2020-12-10 | 腾讯科技(深圳)有限公司 | Method and apparatus for recognizing facial expression, and electronic device and storage medium |
CN112464699A (en) * | 2019-09-06 | 2021-03-09 | 富士通株式会社 | Image normalization method, system and readable medium for face analysis |
US12236712B2 (en) | 2019-06-03 | 2025-02-25 | Tencent Technology (Shenzhen) Company Limited | Facial expression recognition method and apparatus, electronic device and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101276421A (en) * | 2008-04-18 | 2008-10-01 | 清华大学 | Face recognition method and device for fusion of face part features and Gabor face features |
CN101388075A (en) * | 2008-10-11 | 2009-03-18 | 大连大学 | Face Recognition Method Based on Independent Feature Fusion |
CN101620669A (en) * | 2008-07-01 | 2010-01-06 | 邹采荣 | Method for synchronously recognizing identities and expressions of human faces |
CN101719223A (en) * | 2009-12-29 | 2010-06-02 | 西北工业大学 | Identification method for stranger facial expression in static image |
CN101799919A (en) * | 2010-04-08 | 2010-08-11 | 西安交通大学 | Front face image super-resolution rebuilding method based on PCA alignment |
CN103020654A (en) * | 2012-12-12 | 2013-04-03 | 北京航空航天大学 | Synthetic aperture radar (SAR) image bionic recognition method based on sample generation and nuclear local feature fusion |
CN104517104A (en) * | 2015-01-09 | 2015-04-15 | 苏州科达科技股份有限公司 | Face recognition method and face recognition system based on monitoring scene |
CN105117708A (en) * | 2015-09-08 | 2015-12-02 | 北京天诚盛业科技有限公司 | Facial expression recognition method and apparatus |
CN105512273A (en) * | 2015-12-03 | 2016-04-20 | 中山大学 | Image retrieval method based on variable-length depth hash learning |
CN105892287A (en) * | 2016-05-09 | 2016-08-24 | 河海大学常州校区 | Crop irrigation strategy based on fuzzy judgment and decision making system |
2017-04-12 (CN): application CN201710234709.1A, granted as CN107169413B; status not active (Expired - Fee Related).
Non-Patent Citations (2)
Title |
---|
ZHANG ERDONG et al., "Facial expression recognition research based on blocked local feature", Proceedings of the 2016 7th International Conference on Mechatronics, Control and Materials (ICMCM 2016) |
张静 (Zhang Jing), "Research on expression recognition based on facial image block processing and the PCA algorithm" (基于面部图像分块处理和PCA算法的表情识别研究), China Master's Theses Full-text Database, Information Science and Technology series |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108288023A | 2017-12-20 | 2018-07-17 | 深圳和而泰数据资源与云技术有限公司 | Face recognition method and apparatus |
CN108288023B (en) * | 2017-12-20 | 2020-10-16 | 深圳和而泰数据资源与云技术有限公司 | Face recognition method and device |
CN110020580A | 2018-01-08 | 2019-07-16 | 三星电子株式会社 | Method for recognizing objects and facial expressions, and method for training facial expression recognition |
CN110020580B | 2018-01-08 | 2024-06-04 | 三星电子株式会社 | Method for recognizing objects and facial expressions, and method for training facial expression recognition |
WO2020244434A1 (en) * | 2019-06-03 | 2020-12-10 | 腾讯科技(深圳)有限公司 | Method and apparatus for recognizing facial expression, and electronic device and storage medium |
US12236712B2 (en) | 2019-06-03 | 2025-02-25 | Tencent Technology (Shenzhen) Company Limited | Facial expression recognition method and apparatus, electronic device and storage medium |
CN112464699A (en) * | 2019-09-06 | 2021-03-09 | 富士通株式会社 | Image normalization method, system and readable medium for face analysis |
CN112464699B (en) * | 2019-09-06 | 2024-08-20 | 富士通株式会社 | Image normalization method, system and readable medium for face analysis |
Also Published As
Publication number | Publication date |
---|---|
CN107169413B (en) | 2021-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113496217B (en) | Face micro-expression recognition method in video image sequence | |
CN108133188B (en) | Behavior identification method based on motion history image and convolutional neural network | |
WO2021115123A1 (en) | Method for footprint image retrieval | |
CN104268593B (en) | The face identification method of many rarefaction representations under a kind of Small Sample Size | |
CN111709304B (en) | Behavior recognition method based on space-time attention-enhancing feature fusion network | |
CN110458038B (en) | Small data cross-domain action identification method based on double-chain deep double-current network | |
CN109766858A (en) | Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering | |
CN108470320A (en) | A kind of image stylizing method and system based on CNN | |
CN108898180A (en) | Depth clustering method for single-particle cryoelectron microscope images | |
CN110414371A (en) | A real-time facial expression recognition method based on multi-scale kernel convolutional neural network | |
CN109410247A (en) | A kind of video tracking algorithm of multi-template and adaptive features select | |
CN110309835B (en) | A method and device for extracting local features of an image | |
CN109829353B (en) | Face image stylizing method based on space constraint | |
CN107590831B (en) | Stereo matching method based on deep learning | |
CN108009222B (en) | Three-dimensional model retrieval method based on better view and deep convolutional neural network | |
CN112418330A (en) | Improved SSD (solid State drive) -based high-precision detection method for small target object | |
CN110689599A (en) | 3D visual saliency prediction method for generating countermeasure network based on non-local enhancement | |
CN107292250A (en) | A Gait Recognition Method Based on Deep Neural Network | |
CN107169413A (en) | A kind of human facial expression recognition method of feature based block weight | |
Zhou et al. | Pose-robust face recognition with Huffman-LBP enhanced by divide-and-rule strategy | |
CN106570183B (en) | A Color Image Retrieval and Classification Method | |
CN111709331B (en) | Pedestrian re-recognition method based on multi-granularity information interaction model | |
CN110807420A (en) | A facial expression recognition method integrating feature extraction and deep learning | |
CN114758293B (en) | Deep learning crowd counting method based on auxiliary branch optimization and local density block enhancement | |
CN113420794A (en) | Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2021-01-12 |