CN108197529B - Expression feature extraction method integrating DLDP and sobel - Google Patents

Expression feature extraction method integrating DLDP and sobel

Info

Publication number
CN108197529B
CN108197529B (application CN201711208327.8A)
Authority
CN
China
Prior art keywords
dldp
sobel
sub
image
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711208327.8A
Other languages
Chinese (zh)
Other versions
CN108197529A (en)
Inventor
罗元
余朝靖
张毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201711208327.8A priority Critical patent/CN108197529B/en
Publication of CN108197529A publication Critical patent/CN108197529A/en
Application granted granted Critical
Publication of CN108197529B publication Critical patent/CN108197529B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G06V40/175 - Static expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an expression feature extraction method fusing DLDP and sobel, which comprises the steps of dividing the 8 direction masks of a Kirsch operator into two sub-direction mask sets, calculating the edge response values of each set separately to obtain two codes (DLDP1 and DLDP2), and concatenating the histograms of the two codes to obtain the expression feature DLDP. To enhance the local features and obtain more useful information, DLDP and the Sobel operator are fused to obtain the Sobel-DLDP features. Experiments on the JAFFE and Cohn-Kanade facial expression databases show that, compared with other feature extraction algorithms based on local texture, the method not only shortens the running time of feature extraction but also improves the recognition rate.

Description

Expression feature extraction method integrating DLDP and sobel
Technical Field
The invention belongs to the field of image processing and pattern recognition, and particularly relates to an expression feature extraction method fusing DLDP and sobel.
Background
Facial expression recognition is used to analyze human emotions and intentions, and has important applications in human-computer interaction, computer vision, psychological theory, emotion simulation research and other fields. Extracting effective expression features is a key step in facial expression recognition and directly influences the recognition rate of expression classification. Expression feature extraction methods are divided into methods based on global features and methods based on local texture features. Compared with global feature extraction, local feature extraction is more robust to changes such as illumination and pose. LBP is simple in principle and computationally cheap, and is widely used in image recognition, target detection, tracking and other fields. LBP is insensitive to uniform illumination changes, but it represents images poorly under non-uniform illumination and random noise, which limits the achievable recognition accuracy. For this reason, in 2010 Jabid et al. proposed the Local Directional Pattern (LDP), a face description method that is more robust to noise. In 2013, Rivera et al. encoded the two directions with the largest positive and negative responses among the eight edge response values, enhancing discriminability and improving the recognition rate. In 2015, Rivera et al. further improved the recognition rate by jointly encoding the directional information of the image with the largest and second-largest intensity information. In 2016, R. Srinivasa Perumal et al. XOR-ed the LDP codes obtained within a block into a single code per block, generating the compact DR-LDP code for efficient face recognition and improving the recognition rate. However, the above studies all focus on improving the recognition rate and do not consider that computing edge responses in all 8 directions, taking the absolute values of the 8 responses and then sorting them together makes the whole expression feature extraction process time-consuming.
Disclosure of Invention
The present invention aims to solve the above problems of the prior art by providing an expression feature extraction method fusing DLDP and sobel that shortens the feature extraction time and improves the recognition rate. The technical scheme of the invention is as follows:
an expression feature extraction method fusing DLDP and sobel comprises the following steps:
S1, convolving the facial expression image with the sobel (Sobel) operator to obtain the edge feature information of the facial expression image;
S2, dividing the image obtained in S1 into M×N non-overlapping sub-regions;
S3, taking any pixel point X of the face image as the center point, its 8 neighborhood pixels forming a square; according to the 8-direction masks of the Kirsch operator, the four masks pointing to the corners of the square serve as the first sub-direction masks and the four masks pointing to the midpoints of its edges serve as the second sub-direction masks, and the edge response values in the four corner directions and in the up, down, left and right directions are calculated respectively by convolving the first and second sub-direction masks with the 8 neighborhood pixel values;
S4, setting the positions of the top-3 edge response values to 1 and the other positions to 0, carrying out binary coding directly according to the response signs of the image in the 4 directions, and constructing an image local descriptor to obtain the DLDP1 and DLDP2 codes;
S5, obtaining a histogram of DLDP-coded gradient values from DLDP1 and DLDP2, concatenating the histograms of the M×N sub-regions to obtain the feature information of the facial expression image, and obtaining the recognition rate of the expression feature extraction method fusing DLDP and sobel through a classifier.
Further, the step S1 convolves the facial expression image with the sobel operator to obtain the edge feature of the facial expression image, and specifically includes the steps of:
the sobel operator is convolved with the facial expression image to obtain horizontal and vertical filtering results:
Ix = Sx * I,   Sx = [-1 0 1; -2 0 2; -1 0 1]
Iy = Sy * I,   Sy = [-1 -2 -1; 0 0 0; 1 2 1]
By combining Ix and Iy, the edge features are obtained:
I = sqrt(Ix^2 + Iy^2)
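A minimal sketch of this S1 step in Python, assuming a grayscale image stored as a 2-D NumPy array; the kernel values are the standard 3x3 Sobel masks, the gradient-magnitude fusion of Ix and Iy is an assumption consistent with the formula above, and the function name is illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels for the horizontal and vertical directions.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edge_map(image):
    """Convolve the expression image with the Sobel operator and fuse the
    horizontal and vertical responses into a single edge-feature map."""
    img = image.astype(np.float64)
    ix = convolve(img, SOBEL_X, mode='nearest')   # horizontal filtering result Ix
    iy = convolve(img, SOBEL_Y, mode='nearest')   # vertical filtering result Iy
    return np.sqrt(ix ** 2 + iy ** 2)             # assumed fusion: gradient magnitude
```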
Further, step S3 is performed according to the mask set {M0~M7} of the 8-direction Kirsch operator:
M0 = [-3 -3 5; -3 0 5; -3 -3 5]   M1 = [-3 5 5; -3 0 5; -3 -3 -3]   M2 = [5 5 5; -3 0 -3; -3 -3 -3]   M3 = [5 5 -3; 5 0 -3; -3 -3 -3]
M4 = [5 -3 -3; 5 0 -3; 5 -3 -3]   M5 = [-3 -3 -3; 5 0 -3; 5 5 -3]   M6 = [-3 -3 -3; -3 0 -3; 5 5 5]   M7 = [-3 -3 -3; -3 0 5; -3 5 5]
(the standard Kirsch masks, with M0, M2, M4, M6 responding to the right, up, left and down directions and M1, M3, M5, M7 to the four corner directions)
Here M1, M3, M5, M7 form the first sub-direction mask set and M0, M2, M4, M6 form the second sub-direction mask set; the edge response values in the four corner directions and in the up, down, left and right directions are obtained by convolving the first and second sub-direction masks with the 8 neighborhood pixel values in the corresponding directions. Binary coding is then carried out directly according to the response signs of the image in the 4 directions, and the image local descriptor is constructed, defined as:
DLDP1 = Σ_{i=0..3} f(m1i - mk) × 2^i
DLDP2 = Σ_{i=0..3} f(m2i - mk) × 2^i
where
f(a) = 1 if a ≥ 0, and f(a) = 0 if a < 0
m1i denotes the i-th edge response value in the first sub-direction, m2i denotes the i-th edge response value in the second sub-direction, and mk is the k-th largest edge response value.
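A per-image sketch of this sub-direction coding in Python, assuming the standard Kirsch masks listed above; the comparison on absolute values and the shared top-3 threshold follow the coding example described for Fig. 2 below, and all function and variable names are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Kirsch masks; odd indices point to the corners (first sub-direction),
# even indices point up/down/left/right (second sub-direction).
KIRSCH = [np.array(m, dtype=np.float64) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],   # M0 east
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],   # M1 north-east
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],   # M2 north
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],   # M3 north-west
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],   # M4 west
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],   # M5 south-west
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],   # M6 south
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],   # M7 south-east
)]

def dldp_codes(image, k=3):
    """Return the DLDP1 (corner sub-direction) and DLDP2 (up/down/left/right
    sub-direction) 4-bit code maps of a grayscale image."""
    img = image.astype(np.float64)
    resp = np.stack([convolve(img, m, mode='nearest') for m in KIRSCH])  # (8, H, W)
    m1 = np.abs(resp[[1, 3, 5, 7]])          # first sub-direction responses
    m2 = np.abs(resp[[0, 2, 4, 6]])          # second sub-direction responses
    m_k = np.sort(np.abs(resp), axis=0)[-k]  # k-th largest absolute response per pixel
    weights = (2 ** np.arange(4)).reshape(4, 1, 1)
    dldp1 = ((m1 >= m_k) * weights).sum(axis=0).astype(np.uint8)
    dldp2 = ((m2 >= m_k) * weights).sum(axis=0).astype(np.uint8)
    return dldp1, dldp2
```

Each code is a 4-bit value in [0, 15], so DLDP1 and DLDP2 together mark the three strongest of the eight directional responses at every pixel.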
Further, the step S5 of obtaining the histogram of DLDP (decomposed local directional pattern) coded gradient values specifically includes the steps of:
The DLDP codes of the pixels are arranged according to the original image coordinates to form the corresponding DLDP code maps. After computing the DLDP code of each pixel (x, y), the histogram H_DLDPi is defined as follows:
H_DLDPi = Σ_{x,y} f(DLDPi(x,y), Ci)
where
Figure BDA0001484125550000036
Ci is the DLDP code value of the i-th sub-direction, i = 1, 2. The resulting DLDP descriptor is a histogram of DLDP-coded gradient values, and the histograms of DLDP1 and DLDP2 are concatenated by the following formula to obtain the DLDP histogram H_DLDP, which is used as the image feature representation.
H_DLDP = {H_DLDP1, H_DLDP2}
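A minimal sketch of this histogram step, assuming 16 bins per code map (the codes are 4-bit values); names are illustrative:

```python
import numpy as np

def dldp_histogram(dldp1, dldp2, bins=16):
    """Concatenate the histograms of the two DLDP code maps: H_DLDP = {H_DLDP1, H_DLDP2}."""
    h1 = np.bincount(dldp1.ravel(), minlength=bins)  # H_DLDP1
    h2 = np.bincount(dldp2.ravel(), minlength=bins)  # H_DLDP2
    return np.concatenate([h1, h2]).astype(np.float64)
```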
Further, the method for extracting and calculating the expression features fusing the DLDP and the sobel in the step 5) comprises the following steps:
combining the DLDP operators in the horizontal direction and the vertical direction to obtain a Sobel-DLDP operator:
Sobel-DLDP = {Sobel-DLDP_x, Sobel-DLDP_y}
where
Sobel-DLDP_x = {Sobel-DLDP1_x, Sobel-DLDP2_x}
Sobel-DLDP_y = {Sobel-DLDP1_y, Sobel-DLDP2_y}
DLDP_x denotes the horizontal component of the DLDP code, DLDP_y denotes the vertical component of the DLDP code, DLDP1_x and DLDP2_x denote the horizontal components of the first and second sub-direction DLDP respectively, and DLDP1_y and DLDP2_y denote the vertical components of the first and second sub-direction DLDP respectively.
The Sobel-DLDP histogram H_Sobel-DLDP of each sub-block is extracted by the following formula:
H_Sobel-DLDP = Σ_{x,y} f(Sobel-DLDPi(x,y), Ci)
where
f(a, c) = 1 if a = c, and f(a, c) = 0 otherwise
Ci is the sobel-DLDP code of the i-th sub-direction, i = 1, 2. The feature histograms of the sub-blocks of the expression image are then concatenated in left-to-right, top-to-bottom order, and the concatenated histogram represents the facial expression features. Finally, the fused information is classified with an SVM (support vector machine) to recognize the facial expression.
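A sketch of the fused Sobel-DLDP feature and of the SVM classification, reusing the sobel_edge_map, dldp_codes and dldp_histogram helpers sketched above; the 4x4 block grid and scikit-learn's linear-kernel SVC are assumptions (the text only specifies M×N sub-regions and an SVM), and the binarization step mentioned elsewhere in the description is omitted for brevity:

```python
import numpy as np
from sklearn.svm import SVC

def sobel_dldp_feature(image, grid=(4, 4)):
    """Block-wise Sobel-DLDP histogram feature of one expression image."""
    edge = sobel_edge_map(image)                       # step S1
    feature = []
    for row in np.array_split(edge, grid[0], axis=0):  # M x N non-overlapping blocks
        for block in np.array_split(row, grid[1], axis=1):
            d1, d2 = dldp_codes(block)                 # steps S3-S4 on the edge map
            feature.append(dldp_histogram(d1, d2))     # step S5, per block
    return np.concatenate(feature)                     # left-to-right, top-to-bottom order

def train_expression_classifier(images, labels):
    """Fit an SVM on the Sobel-DLDP features of labelled expression images."""
    features = np.stack([sobel_dldp_feature(img) for img in images])
    clf = SVC(kernel='linear')
    clf.fit(features, labels)
    return clf
```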
Further, in step S1 the JAFFE facial expression database is used as the experimental database. First, the edge features of the expression image in the horizontal and vertical directions are obtained with the sobel operator, and the features of the two directions are fused to obtain the edge features of the expression image; the data volume is then reduced by binarization; feature extraction is then performed with DLDP to obtain a facial expression feature histogram for the classifier to perform expression classification and recognition.
The invention has the following advantages and beneficial effects:
To address the high time cost of the LDP algorithm, the invention provides an improved algorithm, the decomposed local directional pattern (DLDP), and further provides the Sobel-DLDP expression feature extraction algorithm, which fuses DLDP with the Sobel operator to improve the recognition rate. Sobel-DLDP not only remedies the slow feature extraction of LDP, but also achieves a higher recognition rate than several of the currently best-performing local feature extraction methods. Experiments on the JAFFE and Cohn-Kanade facial expression databases verify the effectiveness and superiority of the method in both processing speed and recognition rate.
Drawings
Fig. 1 is a flowchart of an expression feature extraction method based on fusion of DLDP and sobel in the preferred embodiment of the present invention.
Fig. 2 is an example of DLDP encoding.
FIG. 3 shows a Sobel-DLDP feature extraction process.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
as shown in fig. 1, the invention provides an expression feature extraction method fusing DLDP and sobel, which is characterized by comprising the following steps:
S1, convolving the facial expression image with the sobel operator to obtain the edge features of the facial expression image:
The edge features are obtained as follows:
the sobel operator is convolved with the original image I to obtain horizontal and vertical filtering results:
Ix = Sx * I,   Sx = [-1 0 1; -2 0 2; -1 0 1]
Iy = Sy * I,   Sy = [-1 -2 -1; 0 0 0; 1 2 1]
By combining Ix and Iy, the edge features are obtained:
I = sqrt(Ix^2 + Iy^2)
S2, dividing the facial expression image into M×N non-overlapping sub-regions;
S3, dividing the 8-direction masks of the Kirsch operator into two sub-direction mask sets (the four corner directions, and the up, down, left and right directions) to obtain two sets of four-direction edge response values:
The 8-direction masks of the Kirsch operator are divided into two sub-direction mask sets according to the four corners and the up, down, left and right directions. With any pixel point X of the face image as the center, the edge response values in the four corner directions and in the up, down, left and right directions are computed with the two sub-mask sets respectively, giving two sets of four-direction edge response values, namely m10, m11, m12, m13 and m20, m21, m22, m23. The procedure for obtaining the two sets of four-direction edge response values is shown in Table 1:
Table 1: computation of the two sets of four-direction edge response values (table image not reproduced)
Binary coding is carried out directly according to the response signs of the image in the 4 directions, and the image local descriptor is constructed, defined as:
DLDP1 = Σ_{i=0..3} f(m1i - mk) × 2^i
DLDP2 = Σ_{i=0..3} f(m2i - mk) × 2^i
where
f(a) = 1 if a ≥ 0, and f(a) = 0 if a < 0
m1i denotes the i-th edge response value in the first sub-direction, m2i denotes the i-th edge response value in the second sub-direction, and mk is the k-th largest edge response value.
S4, setting the positions of the top-3 edge response values to 1 and the other positions to 0 to obtain the DLDP1 and DLDP2 codes;
S5, obtaining a histogram of DLDP-coded gradient values from DLDP1 and DLDP2, concatenating the histograms of the M×N sub-regions to obtain the feature information of the facial expression image, and obtaining the recognition rate of the expression feature extraction method fusing DLDP and sobel through a classifier.
The histogram of DLDP-coded gradient values is obtained as follows:
The DLDP codes of the pixels are arranged according to the original image coordinates to form the corresponding DLDP code map. After computing the DLDP code of each pixel (x, y), its DLDP descriptor, the histogram H_DLDPi, is defined as follows:
H_DLDPi = Σ_{x,y} f(DLDPi(x,y), Ci)
where
f(a, c) = 1 if a = c, and f(a, c) = 0 otherwise
Ci denotes the sobel-DLDP code of the i-th sub-direction, i = 1, 2. The resulting DLDP descriptor is a histogram of DLDP-coded gradient values. The DLDP descriptor contains detailed information of the image, such as edges, corners, points and other local texture features. The histograms of DLDP1 and DLDP2 are obtained by the above formula and are concatenated by the following formula to obtain the DLDP histogram H_DLDP, which is used as the image feature representation.
H_DLDP = {H_DLDP1, H_DLDP2}
Finally, the expression feature extraction and calculation method fusing DLDP and sobel comprises the following steps:
combining the DLDP operators in the horizontal direction and the vertical direction to obtain a Sobel-DLDP operator:
Sobel-DLDP = {Sobel-DLDP_x, Sobel-DLDP_y}
where
Sobel-DLDP_x = {Sobel-DLDP1_x, Sobel-DLDP2_x}
Sobel-DLDP_y = {Sobel-DLDP1_y, Sobel-DLDP2_y}
DLDP_x denotes the horizontal component of the DLDP code, DLDP_y denotes the vertical component of the DLDP code, DLDP1_x and DLDP2_x denote the horizontal components of the first and second sub-direction DLDP respectively, and DLDP1_y and DLDP2_y denote the vertical components of the first and second sub-direction DLDP respectively.
The Sobel-DLDP histogram H_Sobel-DLDP of each sub-block is extracted by the following formula:
H_Sobel-DLDP = Σ_{x,y} f(Sobel-DLDPi(x,y), Ci)
where
f(a, c) = 1 if a = c, and f(a, c) = 0 otherwise
Ci is the sobel-DLDP code of the i-th sub-direction, i = 1, 2. The feature histograms of the sub-blocks of the expression image are then concatenated in left-to-right, top-to-bottom order, and the concatenated histogram represents the facial expression features. Finally, the fused information is classified with an SVM (support vector machine) to recognize the facial expression.
Fig. 2 shows an example of DLDP coding in the present invention. The 8-direction masks are divided into two sub-direction sets according to the four corners and the up, down, left and right directions, two sets of four-direction edge response values are obtained, the absolute values of the eight edge response values are sorted, the positions of the top-3 edge response values are set to 1 and the other positions to 0, and the DLDP1 and DLDP2 codes are obtained.
Fig. 3 shows the Sobel-DLDP feature extraction process of the present invention. The JAFFE facial expression database is used as the experimental database. First, the edge features of the expression image in the horizontal and vertical directions are obtained with the sobel operator, and the features of the two directions are fused to obtain the edge features of the expression image; the data volume is then further reduced by binarization; feature extraction is then performed with DLDP to obtain the facial expression feature histogram for the classifier to perform expression classification and recognition.
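A hypothetical end-to-end run of the sketches given earlier in this description on a single image; the file path, the 4x4 grid and the use of Pillow for loading are placeholders and not part of the patent:

```python
import numpy as np
from PIL import Image

# Load one expression image as grayscale (the path is a placeholder).
img = np.asarray(Image.open('path/to/expression_image.png').convert('L'))

feature = sobel_dldp_feature(img, grid=(4, 4))
print(feature.shape)   # 4 * 4 blocks * 32 bins per block -> (512,)

# With a labelled training set (lists of images and expression labels):
# clf = train_expression_classifier(train_images, train_labels)
# prediction = clf.predict(feature.reshape(1, -1))
```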
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (6)

1. An expression feature extraction method fusing DLDP and sobel is characterized by comprising the following steps:
s1, convolving the facial expression image with a sobel operator to obtain edge feature information of the facial expression image;
S2, dividing the image obtained in S1 into M×N non-overlapping sub-regions, M and N being positive integers;
S3, taking any pixel point X of the face image as the center point, its 8 neighborhood pixels forming a square; according to the 8-direction masks of the Kirsch operator, the four masks pointing to the corners of the square serve as the first sub-direction masks and the four masks pointing to the midpoints of its edges serve as the second sub-direction masks, and the edge response values in the four corner directions and in the up, down, left and right directions are calculated respectively by convolving the first and second sub-direction masks with the 8 neighborhood pixel values;
S4, setting the positions of the top-3 edge response values to 1 and the other positions to 0, carrying out binary coding directly according to the response signs of the image in the 4 directions, and constructing an image local descriptor to obtain the DLDP1 and DLDP2 codes;
S5, obtaining a histogram of DLDP-coded gradient values from DLDP1 and DLDP2, concatenating the histograms of the M×N sub-regions to obtain the feature information of the facial expression image, and obtaining the recognition rate of the expression feature extraction method fusing DLDP and sobel through a classifier.
2. The method for extracting expression features fusing DLDP and sobel as claimed in claim 1, wherein said step S1 convolves the facial expression image with a sobel operator to obtain edge features of the facial expression image, specifically comprising the steps of:
the sobel operator is convolved with the facial expression image to obtain horizontal and vertical filtering results:
Ix = Sx * I,   Sx = [-1 0 1; -2 0 2; -1 0 1]
Iy = Sy * I,   Sy = [-1 -2 -1; 0 0 0; 1 2 1]
By combining Ix and Iy, the edge features are obtained:
I = sqrt(Ix^2 + Iy^2)
3. The method for extracting expression features fusing DLDP and sobel as claimed in claim 1, wherein the step S3 is performed according to the mask set {M0~M7} of the 8-direction Kirsch operator:
M0 = [-3 -3 5; -3 0 5; -3 -3 5]   M1 = [-3 5 5; -3 0 5; -3 -3 -3]   M2 = [5 5 5; -3 0 -3; -3 -3 -3]   M3 = [5 5 -3; 5 0 -3; -3 -3 -3]
M4 = [5 -3 -3; 5 0 -3; 5 -3 -3]   M5 = [-3 -3 -3; 5 0 -3; 5 5 -3]   M6 = [-3 -3 -3; -3 0 -3; 5 5 5]   M7 = [-3 -3 -3; -3 0 5; -3 5 5]
wherein M1, M3, M5, M7 serve as the first sub-direction masks and M0, M2, M4, M6 serve as the second sub-direction masks; the edge response values in the four corner directions and in the up, down, left and right directions are calculated by convolving the first sub-direction masks and the second sub-direction masks with the 8 neighborhood pixel values in the corresponding directions, and binary coding is then carried out directly according to the response signs of the image in the 4 directions to construct the image local descriptor, which is defined as:
DLDP1 = Σ_{i=0..3} f(m1i - mk) × 2^i
DLDP2 = Σ_{i=0..3} f(m2i - mk) × 2^i
where
f(a) = 1 if a ≥ 0, and f(a) = 0 if a < 0
m1i denotes the i-th edge response value in the first sub-direction, m2i denotes the i-th edge response value in the second sub-direction, and mk is the k-th largest edge response value.
4. The method for extracting expression features fusing DLDP and sobel as claimed in claim 1, wherein the step S5 of obtaining the histogram of DLDP (decomposed local directional pattern) coded gradient values specifically includes the steps of:
the DLDP codes of the pixels are arranged according to the original image coordinates to form the corresponding DLDP code maps, and after computing the DLDP code of each pixel (x, y), the histogram H_DLDPi is defined as follows:
H_DLDPi = Σ_{x,y} f(DLDPi(x,y), Ci)
where
f(a, c) = 1 if a = c, and f(a, c) = 0 otherwise
Ci is the DLDP code value of the i-th sub-direction, i = 1, 2; the resulting DLDP descriptor is a histogram of DLDP-coded gradient values, and the histograms of DLDP1 and DLDP2 are concatenated by the following formula to obtain the DLDP histogram H_DLDP as the image feature representation;
H_DLDP = {H_DLDP1, H_DLDP2}
5. The method for extracting expression features fusing DLDP and sobel according to claim 4, wherein the method for extracting and computing the expression features fusing DLDP and sobel in step S5 comprises:
combining the DLDP operators in the horizontal direction and the vertical direction to obtain a Sobel-DLDP operator:
Sobel-DLDP = {Sobel-DLDP_x, Sobel-DLDP_y}
where
Sobel-DLDP_x = {Sobel-DLDP1_x, Sobel-DLDP2_x}
Sobel-DLDP_y = {Sobel-DLDP1_y, Sobel-DLDP2_y}
DLDP_x denotes the horizontal component of the DLDP code, DLDP_y denotes the vertical component of the DLDP code, DLDP1_x and DLDP2_x denote the horizontal components of the first and second sub-direction DLDP respectively, and DLDP1_y and DLDP2_y denote the vertical components of the first and second sub-direction DLDP respectively;
the Sobel-DLDP histogram H_Sobel-DLDP of each sub-block is extracted by the following formula:
H_Sobel-DLDP = Σ_{x,y} f(Sobel-DLDPi(x,y), Ci)
where
f(a, c) = 1 if a = c, and f(a, c) = 0 otherwise
Ci denotes the sobel-DLDP code of the i-th sub-direction, i being 1, 2; the feature histograms of the sub-blocks of the expression image are then concatenated in left-to-right, top-to-bottom order, and the concatenated histogram is used to represent the facial expression features; finally, the fused information is classified with an SVM (support vector machine) to recognize the facial expression.
6. The method for extracting expression features fusing DLDP and sobel as claimed in claim 5, wherein in the step S1 the JAFFE facial expression database is used as the experimental database; first, the edge features of the expression image in the horizontal and vertical directions are obtained through the sobel operator, and the features of the two directions are then fused to obtain the edge features of the expression image; the data volume is then reduced through binarization; feature extraction is then performed through DLDP to obtain a facial expression feature histogram for the classifier to perform expression classification and recognition.
CN201711208327.8A 2017-11-27 2017-11-27 Expression feature extraction method integrating DLDP and sobel Active CN108197529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711208327.8A CN108197529B (en) 2017-11-27 2017-11-27 Expression feature extraction method integrating DLDP and sobel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711208327.8A CN108197529B (en) 2017-11-27 2017-11-27 Expression feature extraction method integrating DLDP and sobel

Publications (2)

Publication Number Publication Date
CN108197529A CN108197529A (en) 2018-06-22
CN108197529B true CN108197529B (en) 2021-09-10

Family

ID=62573201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711208327.8A Active CN108197529B (en) 2017-11-27 2017-11-27 Expression feature extraction method integrating DLDP and sobel

Country Status (1)

Country Link
CN (1) CN108197529B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086692A * 2018-07-16 2018-12-25 北京建筑大学 Face recognition device and method
CN110348284A * 2019-05-20 2019-10-18 天津科技大学 Occluded-face recognition feature extraction method based on local directional texture patterns

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205460A (en) * 2015-09-17 2015-12-30 电子科技大学 Face expression feature extraction and recognition method based on maximum direction encoding
CN105893941A (en) * 2016-03-28 2016-08-24 电子科技大学 Facial expression identifying method based on regional images
CN106778529A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of face identification method based on improvement LDP

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205460A (en) * 2015-09-17 2015-12-30 电子科技大学 Face expression feature extraction and recognition method based on maximum direction encoding
CN105893941A (en) * 2016-03-28 2016-08-24 电子科技大学 Facial expression identifying method based on regional images
CN106778529A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of face identification method based on improvement LDP

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Facial expression recognition based on fused Feature of PCA and LDP";Zhang Yi etc.;《Image Processing and Pattern Recognition》;20141124;论文第2-3节 *
"一种改进的LDP面部表情特征提取方法";罗元等;《光电技术应用》;20160229;第37卷(第1期);第122-124页 *
"基于改进LDP特征的人脸识别方法";王绎博;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170715(第07期);I138-763 *

Also Published As

Publication number Publication date
CN108197529A (en) 2018-06-22

Similar Documents

Publication Publication Date Title
CN109753885B (en) Target detection method and device and pedestrian detection method and system
Shao et al. Spatio-temporal Laplacian pyramid coding for action recognition
Pietikäinen Image analysis with local binary patterns
Chen et al. Detection evolution with multi-order contextual co-occurrence
Yi et al. Text extraction from scene images by character appearance and structure modeling
Ahmed et al. Automated facial expression recognition using gradient-based ternary texture patterns
Anil et al. Literature survey on face and face expression recognition
Zhen et al. Embedding motion and structure features for action recognition
CN110738207A (en) character detection method for fusing character area edge information in character image
Ungureanu et al. Toward unconstrained palmprint recognition on consumer devices: A literature review
Yu et al. Anchor cascade for efficient face detection
Wang et al. Traffic sign detection using a cascade method with fast feature extraction and saliency test
Mohamed et al. An improved LBP algorithm for avatar face recognition
Ahmed et al. Directional ternary pattern (DTP) for facial expression recognition
Zakaria et al. Hierarchical skin-adaboost-neural network (h-skann) for multi-face detection
Lin et al. Saliency detection within a deep convolutional architecture
Huang et al. Human action recognition based on self organizing map
Alphonse et al. Novel directional patterns and a Generalized Supervised Dimension Reduction System (GSDRS) for facial emotion recognition
CN108197529B (en) Expression feature extraction method integrating DLDP and sobel
Saeed A framework for recognition of facial expression using HOG features
Bencheriet New face features to detect multiple faces in complex background
Mohamed et al. Face recognition based on Wavelet transform and adaptive local binary pattern
Zaaraoui et al. A new local descriptor based on strings for face recognition
Khemakhem et al. Facial expression recognition using convolution neural network enhancing with pre-processing stages
CN112818779B (en) Human behavior recognition method based on feature optimization and multiple feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant