CN110046587B - Facial expression feature extraction method based on Gabor differential weight - Google Patents

Facial expression feature extraction method based on Gabor differential weight

Info

Publication number
CN110046587B
Authority
CN
China
Prior art keywords
gabor
image
expression
regions
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910321321.4A
Other languages
Chinese (zh)
Other versions
CN110046587A (en)
Inventor
周华平
张道义
汪晓燕
张晓宇
殷凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Science and Technology
Priority to CN201910321321.4A
Publication of CN110046587A
Application granted
Publication of CN110046587B
Legal status: Active

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A facial expression feature extraction method based on Gabor differential weight comprises the following steps: extracting the three effective expression regions of the eyes, nose and mouth from a face image to be processed; calculating the Gabor features within each region according to the gray-level features of the three effective regions; calculating the differential weights of the three expression regions by comparing the effective regions with a pre-stored neutral image; and calculating the Gabor features of the face image according to the differential weights and the Gabor features of the three regions. The method considers only the effective regions of the eyes, nose and mouth when extracting the Gabor features of the face image, which effectively reduces the dimensionality of the face image and improves the performance of the algorithm. When calculating the Gabor features of the face image, it considers both the difference between each effective region and the neutral expression and the weight carried by each region, and combines these weights with the Gabor features of the three regions, thereby avoiding the loss of detail information from the original image.

Description

Facial expression feature extraction method based on Gabor differential weight
Technical Field
The invention relates to an expression feature extraction method, in particular to a facial expression feature extraction method based on Gabor differential weight.
Background
Facial expression is the basis of interpersonal interaction and a component of affective computing, and it is a research hotspot in fields such as computer vision, human-computer interaction and image processing. Most researchers classify facial expressions into 7 types: anger, disgust, fear, happiness, sadness, surprise, and the neutral expression that carries no emotion.
At present, research on expression recognition focuses mainly on expression feature extraction. A common method is feature extraction based on the Local Binary Pattern (LBP), which effectively extracts the local texture features of an image but is easily affected by noise. Linear Discriminant Analysis (LDA) extracts features quickly but depends too heavily on the correlation between gray-level images. Principal Component Analysis (PCA) can effectively reduce the dimensionality of expression features. Gabor wavelets can extract effective expression features over multiple directions and scales, and thanks to their strong robustness and a certain tolerance to noise they have remained a research hotspot for expression feature extraction.
LBP describes the local texture features of an image well, and Gabor filtering effectively weakens noise interference across multiple directions and scales, so fusing Gabor and LBP features both describes the local texture of the image and weakens the interference caused by noise. Partitioning the face region according to the saliency of the expression features, then fusing the multi-direction, multi-scale Gabor features of the parts and combining them with LBP, effectively reduces the feature dimensionality, enhances the global feature representation and greatly improves the expression recognition rate, which demonstrates the superiority of Gabor feature extraction; however, the algorithm complexity is high and the time cost is large.
A Gabor wavelet can extract the frequency features of specific image regions over multiple directions and scales and can amplify local gray-level features such as the eyes, nose and mouth. However, the multi-direction, multi-scale expansion gives the extracted face features a high dimensionality, which hurts the performance of the algorithm. A difference image of an expression depicts the gray-level changes of the corresponding face regions more intuitively, and using the differential texture as the classification feature for expressions effectively reduces inter-individual differences, but the differential texture loses some of the detail information of the original image.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a facial expression feature extraction method based on Gabor differential weight.
In order to attain the above technical purpose and effect, the invention is realized by the following technical scheme:
a facial expression feature extraction method based on Gabor differential weight comprises the following steps:
extracting three expression effective areas of eyes, a nose and a mouth in a face image to be processed;
calculating Gabor characteristics in each region according to the obtained three expression effective region gray characteristics;
calculating to obtain the difference weights of the three expression areas according to the comparison between the effective areas and the pre-stored neutral images;
and calculating the Gabor characteristics of the face image according to the differential weight and the Gabor characteristics of the three regions.
In the facial expression feature extraction method based on Gabor differential weight provided by the invention, extracting the three effective expression regions of the eyes, nose and mouth from the face image to be processed comprises:
selecting face regions of the same size in the image to be processed;
acquiring the effective regions of the eyes, nose and mouth;
and processing each selected region to its preset size.
In the facial expression feature extraction method based on Gabor differential weight provided by the invention, extracting the Gabor features within each region from the obtained regions comprises:
extracting the image gray-level distribution of the corresponding region;
calculating the Gabor features of the corresponding region according to the two-dimensional Gabor function G(x, y, v, u) = f(x, y) * Ψ_{u,v}(z), where

Ψ_{u,v}(z) = (||k_{u,v}||^2 / σ^2) · exp(−||k_{u,v}||^2 ||z||^2 / (2σ^2)) · [exp(i k_{u,v} · z) − exp(−σ^2 / 2)]

and where the parameters u and v denote the direction and scale of the filter; z = (x, y) is the position of a pixel in the image; σ is the ratio of window width to wavelength, i.e. the bandwidth of the filter; i is the imaginary unit; and k_{u,v} is the wave vector determined by u and v.
In the facial expression feature extraction method based on Gabor differential weight provided by the invention, calculating the differential weights of the three expression regions by comparing the effective regions with a pre-stored neutral image comprises:
acquiring neutral expression image data;
calculating a difference image from the corresponding region and the neutral expression database:

Subimg(i, j) = Abs(f_1(i, j) − f_2(i, j)),  1 ≤ i ≤ M, 1 ≤ j ≤ N

where f_1(i, j) is the pixel value of the expression image at (i, j) and f_2(i, j) is the pixel value of the neutral image at the corresponding position (i, j); the size of each face image is M × N (width × height); Abs() is the absolute-value function;
and calculating the differential weights of the three regions from the difference image.
In the facial expression feature extraction method based on Gabor differential weight provided by the invention, calculating the differential weights of the three regions from the difference image comprises:
presetting a threshold T;
comparing the pixel values of the difference image with the preset threshold T and counting the numbers Ecount, Ncount and Mcount of pixels in the three regions whose values exceed T;
and calculating the proportion of each of the three regions in the total.
In the facial expression feature extraction method based on Gabor differential weight provided by the invention, calculating the Gabor features of the face image according to the differential weights and the Gabor features of the three regions comprises: multiplying the Gabor features of the three regions by the weights of the corresponding regions, and taking the resulting data sets of the three regions as the Gabor feature data of the face image.
Compared with the prior art, the invention has the following beneficial effects:
in this embodiment, only the effective regions of the eyes, nose and mouth are considered when the Gabor features of the face image are extracted, which reduces the area to be processed, effectively reduces the dimensionality of the face image and improves the performance of the algorithm. When the Gabor features of the face image are calculated, the difference between each effective region and the neutral expression and the weight carried by each region are considered together, and the Gabor features of the three regions are combined with these weights to compute the Gabor features of the face image. This distinguishes the degree of influence that different regions have on facial expression recognition while accounting for the latent relationship among the three regions, so the accuracy of expression recognition can be improved and the loss of original-image detail information during processing is avoided.
Drawings
FIG. 1 is a schematic view of the overall flow of the process of the present invention;
FIG. 2 is a schematic flow chart of method example 1 of the present invention;
FIG. 3 is a schematic diagram of image cropping according to the method of the present invention;
FIG. 4 is a schematic diagram of the extraction of effective area according to the method of the present invention;
FIG. 5 is a schematic diagram of the normalized effective area of the method of the present invention;
FIG. 6 is an expression difference image of the present invention;
FIG. 7 is a comparison of difference pictures under different thresholds T according to the method of the present invention.
Detailed Description
As shown in fig. 1, this embodiment is a method for extracting facial expression features based on Gabor differential weight, comprising:
S100, extracting the three effective expression regions of the eyes, nose and mouth from the face image to be processed;
S200, extracting the Gabor features within each region according to the gray-level features of the three effective regions;
S300, calculating the differential weights of the three expression regions by comparing the effective regions with pre-stored neutral images;
and S400, calculating the Gabor features of the face image according to the differential weights and the Gabor features of the three regions.
It should be noted that when the effective regions are extracted, three rectangular regions formed by four points each (12 points over the three regions) near the eyes, nose and mouth are selected manually in the algorithm as the intercepted regions, and the coordinates of the 12 manually selected points become the fixed coordinates of the corresponding regions. In this embodiment, only the effective regions of the eyes, nose and mouth are considered when the Gabor features of the face image are extracted, which reduces the area that needs to be processed, effectively reduces the dimensionality of the face image and improves the performance of the algorithm. When the Gabor features of the face image are calculated, the difference between each effective region and the neutral expression and the weight carried by each region are considered together, and the Gabor features of the three regions are combined with these weights to compute the Gabor features of the face image. This distinguishes the degree of influence that different regions have on facial expression recognition while accounting for the latent relationship among the three regions, so the accuracy of expression recognition can be improved and the loss of detail information from the original image during processing is avoided.
In the method for extracting facial expression features based on Gabor differential weight provided in this embodiment 1, extracting the three effective expression regions of the eyes, nose and mouth from the face image to be processed comprises:
S110, selecting face regions of the same size in the image to be processed;
S120, acquiring the effective regions of the eyes, nose and mouth;
S130, processing each selected region to its preset size.
For example, as shown in fig. 3, (b) represents the region after the image is directly cropped, and (c) represents the cropped region after preprocessing such as histogram equalization, image enhancement and noise reduction, which enhance the contrast and sharpness of the image; these are common prior art in the field and are not described here. It should be noted that in the embodiment of the invention all processed images undergo histogram equalization before calculation, although equalization is not strictly necessary and has only a small influence on the calculation result. The ROI (Region of Interest) generally refers to the region of an image that needs to be processed in image processing and machine vision, outlined in any of various forms (such as a circle, square or ellipse). Typically, the ROI that best embodies the expressive features consists of three parts, the Eye (E), Nose (N) and Mouth (M), denoted ENM in this embodiment. Other regions, such as the rest of the face and the hair, provide little information for expression feature extraction.
The expression image is preprocessed prior to ROI acquisition: the image is uniformly cropped to a 140 × 100 face picture, keeping only the required face area; the cropping effect is shown in fig. 3.
After the face is cropped, ROI extraction is performed on the preprocessed face image. If the facial expression library is not too large, manual selection is the more accurate choice. The ENM region selection process is illustrated in fig. 4: four points are marked in the image window that pops up while the program runs to determine the approximate area of each region; after marking is finished, the region is outlined with a green dotted frame, cropped and saved, until all ENM regions have been obtained.
After all ENM regions are acquired, each selected region is processed to its preset size, i.e. the three regions are size-normalized and simultaneously gray-level equalized. In this embodiment, as shown in fig. 5, the eye region is normalized to 30 × 100, the nose region to 30 × 45 and the mouth region to 30 × 50. The normalization can be performed with the MATLAB function imresize: if the selected region is smaller than the specified size, it is padded by bilinear interpolation; if it is larger, the excess part is cropped off. This can also be implemented automatically with other functions. Note that normalization merely adjusts the images to the same size for convenience of processing, and the specific normalization size is determined by the actual situation. Gray-level equalization adopts histogram equalization, which enhances the contrast and improves the sharpness of the image; histogram equalization is common prior art in the field and is not described in detail here.
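As a concrete illustration of this normalization step, here is a minimal Python sketch. The patent itself only mentions MATLAB's imresize; the use of OpenCV, the function names and the placeholder corner coordinates below are illustrative assumptions, not part of the disclosed method:

```python
import cv2

# Target (width, height) of each region in this embodiment:
# eyes 30 x 100, nose 30 x 45, mouth 30 x 50 (height x width).
ROI_SIZES = {"eye": (100, 30), "nose": (45, 30), "mouth": (50, 30)}

def extract_enm(face, corners):
    """Crop and normalize the ENM regions of an 8-bit grayscale face image.

    corners maps a region name to (x0, y0, x1, y1), the rectangle derived
    from the four manually marked points of that region.
    """
    rois = {}
    for name, (x0, y0, x1, y1) in corners.items():
        patch = face[y0:y1, x0:x1]
        # Bilinear interpolation pads small selections; larger selections
        # lose the excess detail when shrunk, as described above.
        patch = cv2.resize(patch, ROI_SIZES[name], interpolation=cv2.INTER_LINEAR)
        rois[name] = cv2.equalizeHist(patch)  # gray-level (histogram) equalization
    return rois

# Hypothetical coordinates standing in for the 12 manually marked points
# on a 140 x 100 (height x width) face picture:
corners = {"eye": (5, 25, 95, 55), "nose": (30, 55, 70, 90), "mouth": (25, 90, 75, 120)}
# Usage once a face image is loaded:
# rois = extract_enm(face, corners)
```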
In the method for extracting facial expression features based on Gabor differential weight according to embodiment 2, the Gabor features within each region are extracted from the obtained regions as follows:
S210, extracting the image gray-level distribution of the corresponding region;
S220, calculating the Gabor features of the image according to the two-dimensional Gabor function G(x, y, v, u) = f(x, y) * Ψ_{u,v}(z), where

Ψ_{u,v}(z) = (||k_{u,v}||^2 / σ^2) · exp(−||k_{u,v}||^2 ||z||^2 / (2σ^2)) · [exp(i k_{u,v} · z) − exp(−σ^2 / 2)]

and where the parameters u and v denote the direction and scale of the filter; z = (x, y) is the position of a pixel in the image; σ is the ratio of window width to wavelength, i.e. the bandwidth of the filter; i is the imaginary unit; and k_{u,v} is the wave vector determined by u and v.
Illustratively, a two-dimensional Gabor filter is a set of plane waves with a Gaussian envelope. It is extremely effective at extracting features from expression images, extracts local features accurately, and has a certain tolerance to changes in brightness and face pose.
The Gabor wavelet kernel has the same characteristics as the two-dimensional receptive fields of simple cells in the human cerebral cortex, and the two-dimensional Gabor filter can be represented by the following kernel function:

Ψ_{u,v}(z) = (||k_{u,v}||^2 / σ^2) · exp(−||k_{u,v}||^2 ||z||^2 / (2σ^2)) · [exp(i k_{u,v} · z) − exp(−σ^2 / 2)]   (1)

In this function, the parameters u and v denote the direction and scale of the filter; z = (x, y) is the position of a pixel in the image; σ is the ratio of window width to wavelength, i.e. the bandwidth of the filter, and generally takes the value 2π; i is the imaginary unit; k_{u,v} is the wave vector determined by u and v. In the Gabor feature representation, 8 directions and 5 scales are generally taken, i.e. 40 Gabor filters. Multiple directions and scales provide more detailed texture information, but more is not always better: taking more directions and scales increases the dimensionality of the features, so 8 directions and 5 scales are generally used.
The Gabor features of an image are obtained by convolving this kernel with the image, where the convolution is defined as:

G(x, y, v, u) = f(x, y) * Ψ_{u,v}(z)   (2)

where f(x, y) is the gray-level distribution of the image.
After the convolution is performed on the image f(x, y), 40 complex values are obtained at each pixel point, representing 40 amplitudes for that pixel; that is, the feature of each pixel is represented by its 40 amplitudes. Repeating the amplitude calculation for every pixel yields the full Gabor feature representation of the image f(x, y).
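As a minimal sketch of this filter bank, the following Python code builds the 40 kernels of equation (1) and collects the 40 amplitudes per pixel via equation (2). The values k_max = π/2, f = √2 and the 31 × 31 kernel size are conventional choices assumed here; the patent does not fix them:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(u, v, sigma=2 * np.pi, kmax=np.pi / 2, f=np.sqrt(2), size=31):
    """Kernel of eq. (1) for direction u in 0..7 and scale v in 0..4."""
    k = kmax / f ** v                      # magnitude of the wave vector k_{u,v}
    phi = np.pi * u / 8                    # orientation of the wave vector
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = (k ** 2 / sigma ** 2) * np.exp(-k ** 2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    # Complex plane wave; the exp(-sigma^2/2) term removes the DC response.
    wave = np.exp(1j * k * (x * np.cos(phi) + y * np.sin(phi))) - np.exp(-sigma ** 2 / 2)
    return envelope * wave

def gabor_features(region):
    """40 amplitude maps (8 directions x 5 scales) for one grayscale region."""
    maps = []
    for v in range(5):
        for u in range(8):
            resp = fftconvolve(region.astype(float), gabor_kernel(u, v), mode="same")
            maps.append(np.abs(resp))      # amplitude of the complex response
    return np.stack(maps, axis=-1)         # shape: H x W x 40
```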
In the method for extracting facial expression features based on Gabor differential weight according to this embodiment 3, the differential weights of the three expression regions are calculated by comparing the effective regions with a pre-stored neutral image, as follows:
acquiring neutral expression image data;
calculating a difference image from the corresponding region and the neutral expression database:

Subimg(i, j) = Abs(f_1(i, j) − f_2(i, j)),  1 ≤ i ≤ M, 1 ≤ j ≤ N

where f_1(i, j) is the pixel value of the expression image at (i, j) and f_2(i, j) is the pixel value of the neutral image at the corresponding position (i, j); the size of each face image is M × N (width × height); Abs() is the absolute-value function;
and calculating the differential weights of the three regions from the difference image.
For example, in the field of expression recognition, the neutral expression is often treated as a category of its own. In fact, a neutral expression is the emotionless state of a face, and it can provide a deviation reference for the other expressions during classification. The difference between an emotion-rich expression image and the neutral face image should therefore best reflect the deviating character of the expression.
A difference image is obtained by taking the difference between the expression image and the neutral image; it is recorded as Subimg(i, j) and can be defined as:

Subimg(i, j) = Abs(f_1(i, j) − f_2(i, j)),  1 ≤ i ≤ M, 1 ≤ j ≤ N   (3)

where f_1(i, j) is the pixel value of the expression image at (i, j) and f_2(i, j) is the pixel value of the neutral image at the corresponding position (i, j); the size of each face image is M × N (width × height); Abs() is the absolute-value function.
The difference of the two images is the difference of the pixels at corresponding positions: subtracting the pixels at a given position of the two images gives the pixel value of the difference image at that point, and traversing all pixel values of both images in the same way yields the complete difference image. Fig. 6 shows the difference image obtained from a "happy" expression image and the neutral image. As can be seen from the figure, the difference image preserves the change characteristics of the original expression image well.
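Equation (3) reduces to a single array operation; a minimal sketch, assuming two same-size 8-bit grayscale images (the signed cast avoids uint8 wrap-around during subtraction):

```python
import numpy as np

def difference_image(expr, neutral):
    """Subimg(i, j) = Abs(f1(i, j) - f2(i, j)) over the whole M x N image."""
    f1 = expr.astype(np.int16)
    f2 = neutral.astype(np.int16)
    return np.abs(f1 - f2).astype(np.uint8)
```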
In the method for extracting facial expression features based on Gabor differential weight provided in this embodiment 4, the differential weights of the three regions are calculated from the difference image as follows:
presetting a threshold T;
comparing the pixel values of the difference image with the preset threshold T and counting the numbers Ecount, Ncount and Mcount of pixels in the three regions whose values exceed T;
and calculating the proportion of each of the three regions in the total.
Illustratively, considering that different regions of the face matter to different degrees in expression recognition, this embodiment adaptively weights the three extracted regions of the eyes, nose and mouth. The essence of an expression image is the change of its pixel values relative to the expressive features of the neutral image. Taking the difference between the expression image and the neutral image gives the variation amplitude of each pixel value at the corresponding position on the face, but either image is often strongly affected by noise such as uneven illumination. Therefore, this embodiment sets a threshold T when differencing the images: pixels whose difference value exceeds T are retained, and the rest are discarded. The idea of the ENM weight-distribution algorithm is as follows:
(1) Obtain the pixel value Subimg(i, j) of the difference image at point (i, j).
(2) Count the pixel-value changes of the ENM sub-regions.
Set a count value to record how many pixels of the expression image change relative to the neutral image. Traverse all pixel values of each expression sub-region, and when a difference pixel exceeds the threshold, keep it and count it, i.e. when Subimg(i, j) > T, add 1 to the count. Traverse all pixels of the eye, nose and mouth regions, and record the number of changed pixel values in each region as Ecount, Ncount and Mcount.
(3) Calculate the ENM region weights.
Let the weights of the eye, nose and mouth regions be Eweight, Nweight and Mweight respectively, and let Allweight = Ecount + Ncount + Mcount; the weight of each region is then expressed as

Eweight = Ecount / Allweight,  Nweight = Ncount / Allweight,  Mweight = Mcount / Allweight
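Steps (1) to (3) reduce to a few lines of array code. The sketch below takes the three regional difference images from the previous step as numpy arrays; the default T = 50 and the equal-weight fallback are illustrative assumptions, since the embodiment leaves the threshold to be chosen as required:

```python
def enm_weights(sub_eye, sub_nose, sub_mouth, T=50):
    """Count difference pixels above T per region, then normalize to weights."""
    ecount = int((sub_eye > T).sum())      # Ecount
    ncount = int((sub_nose > T).sum())     # Ncount
    mcount = int((sub_mouth > T).sum())    # Mcount
    allweight = ecount + ncount + mcount
    if allweight == 0:
        # Degenerate case (no pixel changed anywhere): assume equal weights.
        return 1 / 3, 1 / 3, 1 / 3
    return ecount / allweight, ncount / allweight, mcount / allweight
```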
The method for extracting facial expression features based on Gabor differential weight according to this embodiment calculates the Gabor features of the face image from the differential weights and the Gabor features of the three regions as follows: multiply the Gabor features of the three regions by the weights of the corresponding regions, and take the resulting data sets of the three regions as the Gabor feature data of the face image.
Illustratively, (4) Gabor feature representation of facial expressions:
In this embodiment, Gabor filtering is performed on the three ENM regions of the facial expression image, the Gabor features of the regions are denoted G_E, G_N and G_M, and the Gabor feature of the image, denoted ENM_Gabor, is defined as:

ENM_Gabor = (Eweight * G_E, Nweight * G_N, Mweight * G_M)   (4)
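Equation (4) is then a weighted combination of the three regional feature maps; a minimal sketch (flattening the weighted maps into a single vector is an implementation choice assumed here, not mandated by the patent):

```python
import numpy as np

def enm_gabor(g_e, g_n, g_m, eweight, nweight, mweight):
    """ENM_Gabor = (Eweight*G_E, Nweight*G_N, Mweight*G_M) as one feature vector."""
    return np.concatenate([(eweight * g_e).ravel(),
                           (nweight * g_n).ravel(),
                           (mweight * g_m).ravel()])
```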
Illustratively, the difference image obtained by subtracting the neutral image from the expression image is affected by pixel changes in either image, so this embodiment sets a threshold T when differencing the two images to balance the error caused by image noise. Differencing the two images means differencing the pixels at corresponding positions. A pixel is recorded when the absolute value of the difference (between the corresponding pixels of the two images) exceeds T, and its value is set to 0 (black); it is left unrecorded when the absolute difference is below T, and its value is set to 255 (white). In this embodiment, the number of recorded pixel changes is the expression of the weight.
The value of T reflects the error in the expression-feature changes: if T is too small, part of the error is retained and cannot be eliminated, while if T is too large, part of the expression's features is lost, which affects recognition accuracy. As shown in fig. 7, when T = 0 the contour of the face can hardly be made out, which has a certain influence on the weight calculation, and when T = 100 the features of parts such as the eyes and the nose are obviously lost. The value of T is selected as required.
The above-described embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope. Various modifications and improvements made to the technical solution of the present invention by those skilled in the art without departing from the spirit of the invention shall fall within the protection scope defined by the claims.

Claims (6)

1. A facial expression feature extraction method based on Gabor differential weight, characterized by comprising the following steps:
extracting the three effective expression regions of the eyes, nose and mouth from the face image to be processed;
calculating the Gabor features within each region according to the gray-level features of the three effective regions;
calculating the differential weights of the three expression regions by comparing the effective regions with a pre-stored neutral image;
and calculating the Gabor features of the face image according to the differential weights and the Gabor features of the three regions.
2. The method for extracting facial expression features based on Gabor differential weight according to claim 1, wherein extracting the three effective expression regions of the eyes, nose and mouth from the face image to be processed comprises:
selecting face regions of the same size in the image to be processed;
acquiring the effective regions of the eyes, nose and mouth;
and processing each selected region to its preset size.
3. The method for extracting facial expression features based on Gabor differential weight according to claim 1, wherein extracting the Gabor features within each region from the obtained regions comprises:
extracting the image gray-level distribution of the corresponding region;
calculating the Gabor features of the corresponding region according to the two-dimensional Gabor function G(x, y, v, u) = f(x, y) * Ψ_{u,v}(z), where

Ψ_{u,v}(z) = (||k_{u,v}||^2 / σ^2) · exp(−||k_{u,v}||^2 ||z||^2 / (2σ^2)) · [exp(i k_{u,v} · z) − exp(−σ^2 / 2)]

and where the parameters u and v denote the direction and scale of the filter; z = (x, y) is the position of a pixel in the image; σ is the ratio of window width to wavelength, i.e. the bandwidth of the filter; i is the imaginary unit; and k_{u,v} is the wave vector determined by u and v.
4. The method for extracting facial expression features based on Gabor differential weight according to claim 1, wherein the differential weights of the three expression regions are calculated by comparing the effective regions with a pre-stored neutral image, comprising:
acquiring neutral expression image data;
calculating a difference image from the corresponding region and the neutral expression database:

Subimg(i, j) = Abs(f_1(i, j) − f_2(i, j)),  1 ≤ i ≤ M, 1 ≤ j ≤ N

where f_1(i, j) is the pixel value of the expression image at (i, j) and f_2(i, j) is the pixel value of the neutral image at the corresponding position (i, j); the size of each face image is M × N (width × height); Abs() is the absolute-value function;
and calculating the differential weights of the three regions from the difference image.
5. The method for extracting facial expression features based on Gabor differential weight according to claim 4, wherein calculating the differential weights of the three regions from the difference image comprises:
presetting a threshold T;
comparing the pixel values of the difference image with the preset threshold T and counting the numbers Ecount, Ncount and Mcount of pixels in the three regions whose values exceed the threshold T;
and calculating the proportion of each of the three regions in the total.
6. The method of claim 1, wherein the step of calculating the Gabor features of the face image according to the differential weights and the Gabor features of the three regions comprises: multiplying the Gabor features of the three regions by the weights of the corresponding regions, and taking the resulting data sets of the three regions as the Gabor feature data of the face image.
CN201910321321.4A 2019-04-22 2019-04-22 Facial expression feature extraction method based on Gabor differential weight Active CN110046587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910321321.4A CN110046587B (en) 2019-04-22 2019-04-22 Facial expression feature extraction method based on Gabor differential weight

Publications (2)

Publication Number Publication Date
CN110046587A CN110046587A (en) 2019-07-23
CN110046587B 2022-11-25

Family

ID=67278212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910321321.4A Active CN110046587B (en) 2019-04-22 2019-04-22 Facial expression feature extraction method based on Gabor differential weight

Country Status (1)

Country Link
CN (1) CN110046587B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401324A (en) * 2020-04-20 2020-07-10 Oppo广东移动通信有限公司 Image quality evaluation method, device, storage medium and electronic equipment
CN111814697B (en) * 2020-07-13 2024-02-13 伊沃人工智能技术(江苏)有限公司 Real-time face recognition method and system and electronic equipment
CN114005153A (en) * 2021-02-01 2022-02-01 南京云思创智信息科技有限公司 Real-time personalized micro-expression recognition method for face diversity
CN113240881B (en) * 2021-07-12 2021-10-29 环球数科集团有限公司 Fire identification system based on multi-feature fusion
CN114973374A (en) * 2022-05-31 2022-08-30 平安银行股份有限公司 Expression-based risk evaluation method, device, equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016110005A1 (en) * 2015-01-07 2016-07-14 深圳市唯特视科技有限公司 Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
WO2016169219A1 (en) * 2015-04-21 2016-10-27 深圳Tcl数字技术有限公司 Method and device for extracting human facial textures
CN106127196A (en) * 2016-09-14 2016-11-16 河北工业大学 The classification of human face expression based on dynamic texture feature and recognition methods
CN106599854A (en) * 2016-12-19 2017-04-26 河北工业大学 Method for automatically recognizing face expressions based on multi-characteristic fusion
CN106980848A (en) * 2017-05-11 2017-07-25 杭州电子科技大学 Facial expression recognizing method based on warp wavelet and sparse study

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Local Gabor wavelet facial expression recognition algorithm based on automatic segmentation; Liu Shanshan et al.; Journal of Computer Applications; 2009-11-01 (No. 11); full text *
Facial expression recognition combining difference images and Gabor wavelets; Ding Zhiqi et al.; Computer Applications and Software; 2011-04-15 (No. 04); full text *

Also Published As

Publication number Publication date
CN110046587A (en) 2019-07-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant