CN109753942B - Facial expression recognition method and device based on spatial pyramid FHOG characteristics

Info

Publication number
CN109753942B
CN109753942B (application CN201910030221.6A; published as CN109753942A, granted as CN109753942B)
Authority
CN
China
Prior art keywords
facial expression
fhog
expression image
image sample
cells
Prior art date
Legal status
Active
Application number
CN201910030221.6A
Other languages
Chinese (zh)
Other versions
CN109753942A (en)
Inventor
赵运基
范存良
刘晓光
张新良
张海波
Current Assignee
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date
Filing date
Publication date
Application filed by Henan University of Technology
Priority to CN201910030221.6A
Publication of CN109753942A
Application granted
Publication of CN109753942B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a facial expression recognition method based on spatial pyramid FHOG features, which comprises the following steps: S1, acquiring a facial expression image sample set and extracting the SP-FHOG feature of each facial expression image sample in the set; S2, inputting the SP-FHOG features of the samples into a neural network for training to obtain a facial expression recognition model; S3, obtaining a target facial expression image and extracting its SP-FHOG feature; and S4, inputting the SP-FHOG feature of the target facial expression image into the recognition model for recognition. The invention also provides a facial expression recognition device based on spatial pyramid FHOG features. The method processes the cell-level FHOG features in a spatial pyramid manner, finally obtaining both the overall and the local features of the face image and improving the recognition effect.

Description

Facial expression recognition method and device based on spatial pyramid FHOG characteristics
Technical Field
The invention relates to the technical field of image processing, in particular to a facial expression recognition method and device based on spatial pyramid FHOG characteristics.
Background
Facial expression recognition is the most direct and effective mode of emotion recognition. How to let computers better understand human emotion is an important research topic in human-computer interaction, and since facial expression is the activity that most directly reflects inner emotion, enabling a computer to understand facial expressions is an indispensable part of realizing human-computer interaction. Existing facial expression recognition generally first extracts features and then feeds the extracted features into a neural network model for training and recognition, yielding the recognition result. There are many feature extraction methods, such as SIFT, LBP, HOG, and FHOG. For FHOG, a facial expression image is divided into a number of cells, the FHOG feature of each cell is computed, and training or recognition is then carried out.
Disclosure of Invention
In order to overcome the defects of the prior art, the first purpose of the present invention is to provide a facial expression recognition method based on the spatial pyramid FHOG feature, which processes the FHOG features of the cells in a spatial pyramid manner so as to finally obtain both the overall and the local features of the face image, further enhancing the descriptive power of the features and improving the recognition effect.
The second purpose of the present invention is to provide a facial expression recognition device based on the spatial pyramid FHOG feature, which likewise processes the FHOG features of the cells in a spatial pyramid manner so as to finally obtain both the overall and the local features of the face image, further enhancing the descriptive power of the features and improving the recognition effect.
In order to achieve one of the above purposes, the invention provides the following technical scheme:
a facial expression recognition method based on spatial pyramid FHOG features comprises the following steps:
s1, acquiring a facial expression image sample set, and extracting SP-FHOG characteristics of each facial expression image sample in the facial expression image sample set;
s2, inputting the SP-FHOG characteristics of each facial expression image sample into a neural network for training to obtain a facial expression recognition model;
s3, obtaining a target facial expression image, and extracting SP-FHOG characteristics of the target facial expression image;
s4, inputting the SP-FHOG characteristics of the target facial expression image into a facial expression recognition model for recognition;
the method for extracting the SP-FHOG characteristics of the facial expression image sample and the target facial expression image comprises the following steps:
s11, segmenting the facial expression image sample or the target facial expression image to obtain 3 x 3 cells, and calculating FHOG characteristics of each cell in the 3 x 3 cells and recording the FHOG characteristics as local FHOG characteristics; each facial expression image sample or target facial expression image comprises 9 local FHOG characteristics;
s12, applying an overlapping statistical pooling method with 2 × 2 size step length being one cell to the 3 × 3 cells, converting the 3 × 3 cells into 2 × 2 cells, and calculating FHOG characteristics of each cell in the 2 × 2 cells and recording the FHOG characteristics as FHOGR characteristics; each facial expression image sample or target facial expression image comprises 4 FHOGR characteristics in total;
s13, taking the 2 x 2 cells as a whole to obtain the FHOG characteristic of the whole, and recording the FHOG characteristic as the FHOG characteristic;
and S14, connecting 9 local FHOG characteristics, 4 FHOG characteristics and 1 FHOG characteristic corresponding to each facial expression image sample or the target facial expression image in series to obtain a final SP-FHOG characteristic.
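To make the pyramid structure concrete, here is a minimal structural sketch of steps S11 to S14 in Python. The helper fhog(), standing for the per-cell 31-dimensional FHOG computation described in the embodiments below, and all other names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def sp_fhog(image, fhog):
    """image: 2-D gray array; fhog: assumed helper mapping a region to a 31-dim vector."""
    h, w = image.shape
    ch, cw = h // 3, w // 3  # size of one cell in the 3 x 3 grid

    # S11: 9 local FHOG features, one per cell of the 3 x 3 grid (FHOG1..FHOG9).
    local = [fhog(image[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw])
             for i in range(3) for j in range(3)]

    # S12: overlapping 2 x 2 pooling with a stride of one cell turns the
    # 3 x 3 grid into 4 blocks of 2 x 2 neighbouring cells (FHOGR1..FHOGR4).
    pooled = [fhog(image[i * ch:(i + 2) * ch, j * cw:(j + 2) * cw])
              for i in range(2) for j in range(2)]

    # S13: the whole image taken as one region (FHOGAll).
    whole = fhog(image)

    # S14: concatenate 9 + 4 + 1 = 14 blocks of 31 dims each into one SP-FHOG vector.
    return np.concatenate(local + pooled + [whole])
```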
Preferably, the neural network employs a DBN network.
Preferably, before extracting the SP-FHOG features of the facial expression image sample or the target facial expression image, the following operations are performed:
and performing Gamma correction on the facial expression image sample or the target facial expression image.
Preferably, before step S14, the method further comprises:
calibrating the FHOGAll feature corresponding to each facial expression image sample or target facial expression image, wherein the calibration method comprises:
cyclically shifting the direction-sensitive part of the SP-FHOG feature corresponding to each facial expression image sample or the target facial expression image to the left or to the right, so that the principal element lies at the initial position of the direction-sensitive feature.
In order to achieve the second purpose, the invention provides the following technical scheme:
a spatial pyramid FHOG feature-based facial expression recognition device, comprising:
the first acquisition module is used for acquiring a facial expression image sample set and extracting the SP-FHOG characteristic of each facial expression image sample in the facial expression image sample set;
the training module is used for inputting the SP-FHOG characteristics of each facial expression image sample into a neural network for training to obtain a facial expression recognition model;
the second acquisition module is used for acquiring a target facial expression image and extracting SP-FHOG characteristics of the target facial expression image;
the recognition module is used for inputting the SP-FHOG characteristics of the target facial expression image into a facial expression recognition model for recognition;
the extraction of the SP-FHOG characteristics of the facial expression image sample and the target facial expression image comprises the following steps:
the first calculating unit is used for segmenting the facial expression image sample or the target facial expression image to obtain 3 × 3 cells, and calculating FHOG characteristics of each cell in the 3 × 3 cells and recording the FHOG characteristics as local FHOG characteristics; each facial expression image sample or target facial expression image comprises 9 local FHOG characteristics;
a second calculating unit, configured to apply an overlapping statistical pooling method with a size step of 2 × 2 for one cell to the 3 × 3 cells, convert the 3 × 3 cells into 2 × 2 cells, and calculate an FHOG feature of each cell in the 2 × 2 cells, which is denoted as an FHOGR feature; each facial expression image sample or target facial expression image comprises 4 FHOGR characteristics in total;
a third computing unit, configured to use the 2 × 2 cells as a whole to obtain an FHOG feature of the whole, which is recorded as an FHOGAll feature;
and the series unit is used for connecting 9 local FHOG characteristics, 4 FHOGR characteristics and 1 FHOGLL characteristic corresponding to each facial expression image sample or target facial expression image in series to obtain the final SP-FHOG characteristic.
Compared with the prior art, the method and the device for identifying the facial expression based on the spatial pyramid FHOG characteristic have the advantages that:
1. The FHOG features of the cells are processed in a spatial pyramid manner, finally obtaining both the overall and the local features of the face image, which further enhances the descriptive power of the features and improves the recognition effect;
2. On the basis of the FHOG feature, principal-direction calibration is performed, and the calibrated FHOG feature has stronger rotation robustness.
Drawings
Fig. 1 is a flowchart of a facial expression recognition method based on a spatial pyramid FHOG feature according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the extraction of SP-FHOG features;
FIG. 3 is a schematic diagram of computing FHOG characteristics of a cell;
FIG. 4 is an example of a Gamma corrected image and an original image;
FIG. 5 is a calibration schematic of the SP-FHOG feature;
fig. 6 is a schematic structural diagram of the facial expression recognition apparatus based on the spatial pyramid FHOG feature according to embodiment two of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Example one
Aiming at the problem of facial expression recognition, this embodiment provides a facial expression recognition method based on the spatial pyramid FHOG feature. The method performs principal-direction calibration on the FHOG features, and the calibrated FHOG features have stronger rotation robustness. Meanwhile, the FHOG features of the cells are processed in a spatial pyramid manner, finally obtaining both the overall and the local features of the face image and further enhancing the descriptive power of the features.
Referring to fig. 1, the method for recognizing facial expressions based on spatial pyramid FHOG features of the present invention includes the following steps:
s1, obtaining a facial expression image sample set, and extracting SP-FHOG characteristics of each facial expression image sample in the facial expression image sample set.
The facial expression image sample set adopts the JAFFE and CK+ original sample libraries. To enhance the robustness of the model to scale and rotation, the samples are expanded in scale and rotation on the basis of the original JAFFE and CK+ libraries; before expansion, the color images in the original libraries are first converted to gray-scale images. For the scale expansion, the scale of the original image is defined as S = 1, S is varied over [0.6, 1.6], and the step size is 0.1. The rotation expansion rotates the original image: the rotation angle of the original image is defined as R = 0, R is varied over [-10, 10] degrees, and the step size is 1 degree. This finally yields the rotation- and scale-expanded sample set; the sample labels are unchanged, the label of each transformed sample being the same as that of its original sample. SP-FHOG features (i.e., spatial pyramid FHOG features) are then extracted from the expanded sample library by the method of the invention, and the labels corresponding to the samples are recorded in the sample feature library.
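As a hedged illustration of this expansion, the sketch below applies the stated scale and rotation ranges with OpenCV; loading of the JAFFE/CK+ images is assumed and the function name is illustrative.

```python
import cv2
import numpy as np

def expand_sample(gray, label):
    """gray: one gray-scale face image; returns a list of (image, label) pairs."""
    h, w = gray.shape
    out = []
    # Scale expansion: S = 1 is the original; S runs over [0.6, 1.6] with step 0.1.
    for s in np.arange(0.6, 1.6 + 1e-9, 0.1):
        out.append((cv2.resize(gray, None, fx=s, fy=s), label))
    # Rotation expansion: R = 0 is the original; R runs over [-10, 10] degrees, step 1.
    for r in range(-10, 11):
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), r, 1.0)
        out.append((cv2.warpAffine(gray, M, (w, h)), label))
    return out  # the label of every transformed sample equals that of the original
```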
Referring to fig. 2, the method for extracting SP-FHOG features of each facial expression image sample includes the following steps:
A. segmenting the facial expression image sample into 3 × 3 cells, and computing the FHOG feature of each of the 3 × 3 cells, recorded as a local FHOG feature; each facial expression image sample thus has 9 local FHOG features, recorded as FHOG1 to FHOG9.
The method of calculating the FHOG characteristic of each cell is conventional in the art and is only briefly described here.
As shown in fig. 3, for the pixel points in each 8 × 8 cell, the gradient magnitude and direction of every pixel point are calculated. Before this, the original image is normalized: local surface exposure contributes heavily to the texture intensity of an image, so normalization and compression effectively reduce local shadow and illumination variation. Denote the original image as T(x, y). The specific process of Gamma correction is as follows: the pixel values of T(x, y) are first normalized to between 0 and 1, the normalized result image still being denoted T(x, y); the normalized image is then Gamma-compressed according to formula (1), with Gamma = 1/2; finally, the Gamma-corrected result is inversely normalized back to between 0 and 255. The final corrected result image is T(x, y). An example of a Gamma-corrected image and the original image is shown in fig. 4.
T(x, y) = T(x, y)^Gamma    (1)
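A minimal sketch of this correction for an 8-bit gray image; the function name is illustrative.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Gamma correction per formula (1), with Gamma = 1/2 by default."""
    t = img.astype(np.float64) / 255.0   # normalize pixel values to [0, 1]
    t = np.power(t, gamma)               # formula (1): T(x, y) = T(x, y)^Gamma
    return (t * 255.0).astype(np.uint8)  # inverse normalization back to [0, 255]
```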
For the corrected result image, the gradient magnitude and direction of each pixel point are computed according to formulas (2) and (3). In the actual operation, the image is convolved with the [-1, 0, 1] kernel to obtain the gradient component along the X axis, and with the [1, 0, -1]^T kernel to obtain the gradient component along the Y axis; the magnitude and direction of the gradient at each pixel point are then computed from these two components. The gradient direction over [0, 360) is divided into 18 discrete intervals and into 9 discrete intervals: the magnitude statistics over the 18 intervals form an 18-dimensional direction-sensitive feature, while the magnitude statistics over the 9 intervals are insensitive to direction.
Gx(x, y) = I(x+1, y) - I(x-1, y),   Gy(x, y) = I(x, y+1) - I(x, y-1)

r(x, y) = sqrt( Gx(x, y)^2 + Gy(x, y)^2 )    (2)

θ(x, y) = arctan( Gy(x, y) / Gx(x, y) )    (3)
wherein Gx(x, y) and Gy(x, y) are the horizontal and vertical gradients of the target pixel point P(x, y); r(x, y) and θ(x, y) are respectively the gradient magnitude and gradient direction of P(x, y); and I(x+1, y), I(x-1, y), I(x, y+1) and I(x, y-1) are respectively the gray values of the right, left, upper and lower neighbours of P(x, y). The target pixel point P(x, y) is the pixel whose gradient magnitude and direction are to be obtained.
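A sketch of formulas (2) and (3), assuming a gray image as input; scipy's correlate is used so the kernel orientation matches the written definitions (correlating with [-1, 0, 1] equals convolving with [1, 0, -1]), and the boundary handling is an implementation choice.

```python
import numpy as np
from scipy.ndimage import correlate

def gradients(img):
    """Return per-pixel gradient magnitude r and direction theta in degrees."""
    img = img.astype(np.float64)
    kx = np.array([[-1.0, 0.0, 1.0]])
    gx = correlate(img, kx)    # Gx(x, y) = I(x+1, y) - I(x-1, y)
    gy = correlate(img, kx.T)  # Gy(x, y) = I(x, y+1) - I(x, y-1)
    r = np.hypot(gx, gy)       # formula (2): gradient magnitude
    theta = np.degrees(np.arctan2(gy, gx)) % 360.0  # formula (3), in [0, 360)
    return r, theta
```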
For an 8 × 8 cell as shown in fig. 3, there are 64 pixel points in the cell (64 pixels per cell is taken only as an example; the computation is analogous for other sizes). Within the corresponding 8 × 8 region of the original image, the gradient magnitude and direction of every pixel in the region are computed. The direction of each pixel is quantized into an 18-dimensional direction-sensitive feature and a 9-dimensional direction-insensitive feature, and the gradient energies of the 4 cells surrounding the cell are also collected, so that the FHOG feature finally extracted from each cell has 18 + 9 + 4 = 31 dimensions. If the gradient direction of a pixel is θ, the direction is quantized into the 18-dimensional directional gradient by setting the corresponding bin b_i to 1, i ∈ {1, 2, …, 18}, with all other positions zero. Similarly, for any M × N image region, the gradient magnitudes are accumulated per gradient-direction bin (the 18-dimensional directional gradients at the same position are summed), finally yielding the 18-dimensional direction-sensitive feature of the M × N region. The 9-dimensional direction-insensitive feature is computed in the same way as the 18-dimensional one. The direction-sensitive and direction-insensitive features of the M × N image region are obtained by the above process.
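A sketch of the orientation binning, following the text's partition of [0, 360) into 18 intervals of 20 degrees and 9 intervals of 40 degrees (classic FHOG folds opposite directions instead; the text's coarser partition is assumed here).

```python
import numpy as np

def orientation_histograms(r, theta):
    """Accumulate the gradient magnitudes of one region into 18 + 9 orientation bins."""
    b18 = (theta // 20.0).astype(int) % 18  # direction-sensitive bin index per pixel
    b9 = (theta // 40.0).astype(int) % 9    # direction-insensitive bin index per pixel
    sens = np.bincount(b18.ravel(), weights=r.ravel(), minlength=18)
    insens = np.bincount(b9.ravel(), weights=r.ravel(), minlength=9)
    return sens, insens  # 18-dim sensitive and 9-dim insensitive features
```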
To enhance the invariance of the gradient features to illumination shifts, the direction-sensitive and direction-insensitive features in each cell are normalized. For an 8 × 8 cell as shown in fig. 3, the normalization factor is N_{α,β}(i, j), where i, j ∈ {1, 2, …, 8} and α, β ∈ {-1, 1}. The normalization factor N_{α,β}(i, j) is computed according to formula (4), in which C(i, j) denotes cell (i, j). In fig. 3, cell (i, j) represents the concatenation of the direction-sensitive and direction-insensitive features of the middle 8 × 8 cell.
N_{α,β}(i, j) = ( ||C(i, j)||^2 + ||C(i+α, j)||^2 + ||C(i, j+β)||^2 + ||C(i+α, j+β)||^2 )^{1/2}    (4)
The truncation operator T_γ(v) denotes the vector obtained by truncating vector v at γ (all components of v greater than γ are set to γ). The truncated normalized features are given by formula (5). Summing the corresponding positions of the truncation results finally yields the FHOG feature of the 8 × 8 cell in fig. 3.
H(i, j) = [ T_γ( C(i, j) / N_{-1,-1}(i, j) );  T_γ( C(i, j) / N_{+1,-1}(i, j) );  T_γ( C(i, j) / N_{-1,+1}(i, j) );  T_γ( C(i, j) / N_{+1,+1}(i, j) ) ]    (5)
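A sketch of formulas (4) and (5) under the reconstruction above; cells is assumed to be an indexable grid of per-cell 27-dim histograms whose neighbours exist, and the truncation threshold gamma = 0.2 is an illustrative assumption (the text does not state a value).

```python
import numpy as np

def normalized_truncated(cells, i, j, gamma=0.2):
    """cells[i][j]: histogram C(i, j); returns the four truncated normalized vectors."""
    def e(a, b):  # gradient energy ||C(a, b)||^2
        return float(np.sum(np.asarray(cells[a][b]) ** 2))
    parts = []
    for alpha in (-1, 1):
        for beta in (-1, 1):
            # formula (4): normalization factor N_{alpha,beta}(i, j)
            n = np.sqrt(e(i, j) + e(i + alpha, j)
                        + e(i, j + beta) + e(i + alpha, j + beta))
            # formula (5): truncate the normalized vector at gamma
            parts.append(np.minimum(np.asarray(cells[i][j]) / (n + 1e-12), gamma))
    return parts  # summing corresponding positions yields the cell's FHOG feature
```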
B. applying to the 3 × 3 cells an overlapping statistical pooling with a 2 × 2 window and a stride of one cell, converting the 3 × 3 cells into 2 × 2 cells, and computing the FHOG feature of each of the 2 × 2 cells (see step A for the computation), recorded as an FHOGR feature; each facial expression image sample thus has 4 FHOGR features, recorded as FHOGR1 to FHOGR4;
C. taking the 2 × 2 cells as a whole and computing the FHOG feature of the whole (see step A for the computation), recorded as the FHOGAll feature;
D. concatenating FHOG1 to FHOG9, FHOGR1 to FHOGR4 and the 1 FHOGAll feature corresponding to each facial expression image sample to obtain the final SP-FHOG feature, so that the SP-FHOG feature corresponding to each facial expression image sample is a vector of 14 × 31 dimensions.
S2, inputting the SP-FHOG characteristics of each facial expression image sample into a neural network for training to obtain a facial expression recognition model.
The neural network adopts a DBN (Deep Belief Network). The sample features obtained in step S1 and the corresponding sample labels are input into the DBN-based recognition model for training and testing; when the recognition rate exceeds 95%, the facial expression recognition model is accepted, otherwise the DBN recognition model is trained again.
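The text names the DBN but not its architecture. As one hedged stand-in, the sketch below stacks scikit-learn RBMs in front of a softmax classifier; the layer sizes, learning rates, and scaling of the SP-FHOG features to [0, 1] are illustrative assumptions, and the 95% check mirrors the acceptance rule above.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

def train_recognizer(X_train, y_train, X_test, y_test):
    """X_*: SP-FHOG feature matrices scaled to [0, 1]; y_*: expression labels."""
    model = Pipeline([
        ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
        ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
        ("softmax", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)
    # Accept the model only when the recognition rate exceeds 95%; otherwise retrain.
    return model if model.score(X_test, y_test) >= 0.95 else None
```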
S3, obtaining a target facial expression image, and extracting the SP-FHOG characteristic of the target facial expression image.
The target facial expression image is the facial expression image to be recognized, and the method for extracting the SP-FHOG characteristics of the target facial expression image is similar to the step S1, and is not repeated here.
And S4, inputting the SP-FHOG characteristics of the target facial expression image into a facial expression recognition model for recognition, thereby obtaining a recognition result.
During the computation of the SP-FHOG feature, the feature has weak robustness to rotation of the original image, and the invention therefore provides a method for calibrating the SP-FHOG feature. The method finds the position of the principal element in the direction-sensitive part of the FHOGAll feature of the extracted SP-FHOG feature, then cyclically shifts the direction-sensitive part of FHOGAll to the left or to the right until the principal element lies at the initial position of the direction-sensitive feature. When FHOGAll is shifted, the direction-sensitive parts of the corresponding FHOG1 to FHOG9 and FHOGR1 to FHOGR4 must be shifted according to the shift of FHOGAll; that is, the direction-sensitive parts of the SP-FHOG features of all facial expression image samples and of the target facial expression image are cyclically shifted to the left or to the right consistently, which guarantees the rotation invariance of the final SP-FHOG feature. If the direction-sensitive feature must be shifted cyclically by S steps, the direction-insensitive feature must be shifted in the same direction by S/2 steps, since each direction-insensitive interval spans twice the angle of a direction-sensitive interval. The shifting of the direction-sensitive and direction-insensitive features does not affect the accumulated-sum part of the FHOG feature, so that part of the SP-FHOG feature needs no adjustment. The calibrated SP-FHOG feature has stronger rotation invariance. A calibration schematic of the SP-FHOG feature is shown in fig. 5.
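A sketch of this calibration under the stated per-block layout (18 direction-sensitive + 9 direction-insensitive + 4 energy dimensions, FHOGAll last); the list representation and the S/2 shift of the insensitive part follow the reconstruction above and are assumptions.

```python
import numpy as np

def calibrate(blocks):
    """blocks: the 14 31-dim blocks of one SP-FHOG feature, FHOGAll last."""
    s = int(np.argmax(blocks[-1][:18]))  # principal element of FHOGAll's sensitive part
    out = []
    for b in blocks:
        sens = np.roll(b[:18], -s)             # shift the principal element to index 0
        insens = np.roll(b[18:27], -(s // 2))  # insensitive part moves S/2 steps
        out.append(np.concatenate([sens, insens, b[27:]]))  # summed part unchanged
    return out
```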
Example two
Referring to fig. 6, this embodiment provides a facial expression recognition device based on the spatial pyramid FHOG feature, a virtual device comprising:
a first obtaining module 10, configured to obtain a facial expression image sample set, and extract SP-FHOG features of each facial expression image sample in the facial expression image sample set;
the training module 20 is used for inputting the SP-FHOG characteristics of each facial expression image sample into a neural network for training to obtain a facial expression recognition model;
the second obtaining module 30 is configured to obtain a target facial expression image, and extract SP-FHOG features of the target facial expression image;
the recognition module 40 is used for inputting the SP-FHOG characteristics of the target facial expression image into a facial expression recognition model for recognition;
the extraction of the SP-FHOG characteristics of the facial expression image sample and the target facial expression image comprises the following steps:
the first calculating unit is used for segmenting the facial expression image sample or the target facial expression image to obtain 3 x 3 cells, and calculating FHOG characteristics of each cell in the 3 x 3 cells and recording the FHOG characteristics as local FHOG characteristics; each facial expression image sample or target facial expression image comprises 9 local FHOG characteristics;
a second calculating unit, configured to apply an overlapping statistical pooling method with a size step of 2 × 2 for one cell to the 3 × 3 cells, convert the 3 × 3 cells into 2 × 2 cells, and calculate an FHOG feature of each cell in the 2 × 2 cells, which is denoted as an FHOGR feature; each facial expression image sample or target facial expression image comprises 4 FHGOGR characteristics in total;
a third computing unit, configured to use the 2 × 2 cells as a whole to obtain an FHOG feature of the whole, which is recorded as an FHOGAll feature;
and the series unit is used for connecting 9 local FHOG characteristics, 4 FHOGR characteristics and 1 FHOGLL characteristic corresponding to each facial expression image sample or target facial expression image in series to obtain the final SP-FHOG characteristic.
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are within the protection scope of the present invention.

Claims (5)

1. A facial expression recognition method based on spatial pyramid FHOG characteristics is characterized by comprising the following steps:
s1, obtaining a facial expression image sample set, and extracting SP-FHOG characteristics of each facial expression image sample in the facial expression image sample set;
s2, inputting the SP-FHOG characteristics of each facial expression image sample into a neural network for training to obtain a facial expression recognition model;
s3, obtaining a target facial expression image, and extracting SP-FHOG characteristics of the target facial expression image;
s4, inputting the SP-FHOG characteristics of the target facial expression image into a facial expression recognition model for recognition;
the method for extracting the SP-FHOG characteristics of the facial expression image sample and the target facial expression image comprises the following steps:
s11, segmenting the facial expression image sample or the target facial expression image to obtain 3 × 3 cells, and calculating FHOG characteristics of each cell in the 3 × 3 cells and recording the FHOG characteristics as local FHOG characteristics; each facial expression image sample or target facial expression image comprises 9 local FHOG characteristics;
s12, applying an overlapping statistical pooling method with 2 × 2 size step length being one cell to the 3 × 3 cells, converting the 3 × 3 cells into 2 × 2 cells, and calculating FHOG characteristics of each cell in the 2 × 2 cells and recording the FHOG characteristics as FHOGR characteristics; each facial expression image sample or target facial expression image comprises 4 FHOGR characteristics in total;
s13, taking the 2 x 2 cells as a whole to obtain FHOG characteristics of the whole, and recording the FHOG characteristics as FHOGALL characteristics;
s14, connecting 9 local FHOG characteristics, 4 FHOGR characteristics and 1 FHOGLL characteristic corresponding to each facial expression image sample or target facial expression image in series to obtain the final SP-FHOG characteristic.
2. The method of claim 1, wherein the neural network is a DBN network.
3. The method of claim 1, wherein the following operations are performed before the extraction of the SP-FHOG feature of the facial expression image sample or the target facial expression image:
and performing Gamma correction on the facial expression image sample or the target facial expression image.
4. The method for facial expression recognition based on spatial pyramid FHOG features of claim 1, further comprising before step S14:
calibrating the FHOGAll feature corresponding to each facial expression image sample or target facial expression image, wherein the calibration method comprises:
cyclically shifting the direction-sensitive part of the SP-FHOG feature corresponding to each facial expression image sample or the target facial expression image to the left or to the right, so that the principal element lies at the initial position of the direction-sensitive feature.
5. A facial expression recognition device based on a spatial pyramid FHOG feature, comprising:
the first acquisition module is used for acquiring a facial expression image sample set and extracting SP-FHOG characteristics of each facial expression image sample in the facial expression image sample set;
the training module is used for inputting the SP-FHOG characteristics of each facial expression image sample into a neural network for training to obtain a facial expression recognition model;
the second acquisition module is used for acquiring a target facial expression image and extracting SP-FHOG characteristics of the target facial expression image;
the recognition module is used for inputting the SP-FHOG characteristics of the target facial expression image into a facial expression recognition model for recognition;
the extraction of the SP-FHOG features of the facial expression image sample and the target facial expression image being performed by the following units:
a first calculating unit, configured to segment the facial expression image sample or the target facial expression image into 3 × 3 cells and compute the FHOG feature of each of the 3 × 3 cells, recorded as a local FHOG feature; each facial expression image sample or target facial expression image comprises 9 local FHOG features;
a second calculating unit, configured to apply to the 3 × 3 cells an overlapping statistical pooling with a 2 × 2 window and a stride of one cell, converting the 3 × 3 cells into 2 × 2 cells, and compute the FHOG feature of each of the 2 × 2 cells, recorded as an FHOGR feature; each facial expression image sample or target facial expression image comprises 4 FHOGR features;
a third calculating unit, configured to take the 2 × 2 cells as a whole and compute the FHOG feature of the whole, recorded as the FHOGAll feature;
and a concatenating unit, configured to concatenate the 9 local FHOG features, 4 FHOGR features and 1 FHOGAll feature corresponding to each facial expression image sample or target facial expression image to obtain the final SP-FHOG feature.
CN201910030221.6A 2019-01-14 2019-01-14 Facial expression recognition method and device based on spatial pyramid FHOG characteristics Active CN109753942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910030221.6A CN109753942B (en) 2019-01-14 2019-01-14 Facial expression recognition method and device based on spatial pyramid FHOG characteristics


Publications (2)

Publication Number Publication Date
CN109753942A CN109753942A (en) 2019-05-14
CN109753942B (en) 2022-11-04

Family

ID=66405590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910030221.6A Active CN109753942B (en) 2019-01-14 2019-01-14 Facial expression recognition method and device based on spatial pyramid FHOG characteristics

Country Status (1)

Country Link
CN (1) CN109753942B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103488974A (en) * 2013-09-13 2014-01-01 南京华图信息技术有限公司 Facial expression recognition method and system based on simulated biological vision neural network
WO2015090126A1 (en) * 2013-12-16 2015-06-25 北京天诚盛业科技有限公司 Facial characteristic extraction and authentication method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Facial expression recognition with a multi-scale deformable part model; Meng Yanbin et al.; Science Technology and Engineering; 2017-12-18 (No. 35); full text *
Face recognition algorithm using Gabor features based on the Laplacian pyramid; Wu Dingxiong et al.; Journal of Computer Applications; 2017-12-20; full text *
Facial expression recognition fusing local features and deep belief networks; Wang Linlin et al.; Laser & Optoelectronics Progress; 2017-08-17 (No. 01); full text *

Also Published As

Publication number Publication date
CN109753942A (en) 2019-05-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant