CN109376680A - Fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images - Google Patents
Fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images
- Publication number
- CN109376680A (application CN201811311715.3A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- dimensionality reduction
- feature
- image
- hog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention provides a fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images, belonging to the technical field of pattern recognition and image processing. The method comprises: extracting HOG features and Gabor features from near-infrared face image samples; performing a first dimensionality reduction on each of the two feature matrices using non-negative matrix factorization (NMF); serially fusing the reduced HOG and Gabor features into a fused feature matrix; performing a second dimensionality reduction on the fused feature matrix using linear discriminant analysis (LDA) to obtain the secondary projection matrix and the feature vectors of the training samples after the second reduction; and classifying the twice-reduced test samples with the k-nearest-neighbor algorithm (KNN) based on the distribution of the training samples. The invention ensures that comprehensive feature information is obtained, effectively improves the efficiency of feature representation, reduces the storage and time costs of the algorithm, and improves the efficiency of the face recognition process.
Description
Technical field
The invention belongs to the technical field of pattern recognition and image processing, and in particular relates to a fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images.
Background technique
Face recognition, as an important biometric identification method, has high research value in scientific research. It is one of the key areas of artificial intelligence and involves multiple research fields such as image processing, pattern recognition and computer vision; at present it is a focus and hot spot of researchers at home and abroad.
Research on face recognition generally comprises five aspects: image acquisition, face detection and landmark localization, face normalization, facial feature extraction, and classification. Feature extraction is the most important link: the quality of the extracted features fundamentally determines the merits of a face recognition method. Current feature extraction algorithms can be divided into face recognition methods based on geometric features, on statistical features, on connectionist mechanisms, and on neural networks. Different feature extraction methods emphasize different types of features; therefore, describing a face with a single feature limits the feature extraction process and makes the extracted information incomplete. In order to extract the intrinsic characteristics of a face image efficiently and comprehensively and to express them concisely, multi-feature fusion can be studied to obtain a more efficient face recognition algorithm and overcome the limitation of a single feature. Although multi-feature fusion improves recognition efficiency to a certain extent, the fusion process also tends to produce new matrices of excessively high dimension, making the subsequent classification stage inefficient.
To solve the above problems, HOG features and Gabor features are first extracted. The HOG descriptor captures contour information well and can thus describe the target shape; because it processes the target in local cells, it also reduces the influence of geometric and photometric deformations to a certain extent. The Gabor transform enhances image edge features and thus strengthens key parts of the face image; it is robust to illumination and pose, reflects the perception of the human visual system, and extracts both the local features and the frequency-domain information of the image. Multi-feature fusion therefore extracts more comprehensive information than a single feature, with better results. Secondly, a two-step dimensionality reduction with NMF and LDA improves efficiency: the first reduction (NMF) greatly lowers storage and computation costs, significantly improving efficiency, and its sparsity suppresses external interference to a certain extent; the second reduction (LDA) is supervised, uses the prior knowledge of class labels to select the most discriminative directions, efficiently extracts the dimensions that contribute most to classification, significantly reduces algorithm complexity and shortens running time, which benefits classification. Finally, KNN is used for classification: it is theoretically mature, accurate and of low time complexity, ensuring an efficient face recognition process.
Summary of the invention
The present invention provides a fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images. Its purpose is to improve the recognition rate and recognition speed of face recognition, guarantee that comprehensive feature information is obtained, effectively improve the efficiency of feature representation, and reduce the storage and time costs of the algorithm.
Technical solution of the present invention:
A fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images, with the following steps:
Step 1: extract HOG features and Gabor features from the near-infrared face image training samples to obtain two feature matrices. Specifically:
(1.1) HOG feature extraction from the near-infrared face image training samples:
First, perform grayscale conversion and color-space (gamma) normalization on the training samples. The normalization formula is L(x1, y1) = E(x1, y1)^γ, where L(x1, y1) is the pixel value at pixel (x1, y1) after normalization, E(x1, y1) is the gray value at (x1, y1), and γ is the gamma-correction exponent.
Then compute the horizontal gradient Gx(x1, y1) and vertical gradient Gy(x1, y1) of each pixel:
Gx(x1, y1) = L(x1+1, y1) − L(x1−1, y1)
Gy(x1, y1) = L(x1, y1+1) − L(x1, y1−1)
Next, compute the gradient magnitude G(x1, y1) and gradient direction θ(x1, y1):
G(x1, y1) = sqrt(Gx(x1, y1)² + Gy(x1, y1)²)
θ(x1, y1) = arctan(Gy(x1, y1) / Gx(x1, y1))
Later, divide the image into cells of equal size, build a gradient histogram for each cell, combine adjacent cells into blocks, and normalize the gradient strength within each block.
Finally, concatenate the histogram vectors of all blocks to form the HOG feature vector; the HOG feature vectors of all training sample images form the training-sample HOG feature matrix V(m×n), where n is the number of training images and m is the image dimension after HOG extraction; each column of V(m×n) is one m-dimensional image.
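Step (1.1) above can be sketched as follows. This is a minimal illustration, not the patent's exact implementation: the gamma exponent (0.5), cell size (8) and bin count (9) are assumed values, and block normalization is omitted for brevity.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG sketch: gamma normalization, central-difference gradients,
    per-cell orientation histograms. Block normalization is omitted."""
    L = np.power(img.astype(np.float64) / 255.0, 0.5)    # L = E^gamma, gamma assumed 0.5
    gx = np.zeros_like(L); gy = np.zeros_like(L)
    gx[:, 1:-1] = L[:, 2:] - L[:, :-2]                   # Gx = L(x+1,y) - L(x-1,y)
    gy[1:-1, :] = L[2:, :] - L[:-2, :]                   # Gy = L(x,y+1) - L(x,y-1)
    mag = np.hypot(gx, gy)                               # gradient magnitude
    ang = np.mod(np.rad2deg(np.arctan2(gy, gx)), 180.0)  # unsigned direction in [0, 180)
    h, w = L.shape
    feats = []
    for i in range(0, h - cell + 1, cell):               # cells of equal size
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i+cell, j:j+cell].ravel()
            a = ang[i:i+cell, j:j+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)                         # HOG feature vector

v = hog_features(np.random.default_rng(0).integers(0, 256, (64, 64)))
print(v.shape)  # (64/8)^2 cells x 9 bins = (576,)
```

Stacking one such vector per training image as columns yields the matrix V(m×n) described in the text.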
(1.2) Gabor feature extraction from the near-infrared face image training samples:
First, construct a bank of 40 Gabor kernels with 8 directions and 5 scales:
G_{μ,ν}(x, y) = (‖k_{μ,ν}‖²/δ²) · exp(−‖k_{μ,ν}‖²(x² + y²)/(2δ²)) · [exp(i k_{μ,ν}·(x, y)) − exp(−δ²/2)]
where x and y are the coordinates of a pixel in the image, and μ and ν are the spatial-direction and spatial-scale indices: μ ∈ {0, …, 7} corresponds to the 8 directions and ν ∈ {0, …, 4} to the 5 scales; k_{μ,ν} is the wave vector determined by μ and ν; δ = 2π; i is the imaginary unit.
Then convolve the training sample images with the Gabor kernels; each face image yields 40 Gabor feature maps. The convolution formula is O_{μ,ν}(x, y) = I(x, y) * G_{μ,ν}(x, y), where I(x, y) is the input sample image and O_{μ,ν}(x, y) is the Gabor feature map obtained after filtering.
Next, fuse the feature maps of the five scales in each direction into one map B_μ(x, y), obtaining 8 fused Gabor feature maps, and concatenate the 8 maps in series to form the Gabor feature vector.
Finally, the Gabor feature vectors of all training sample images form the training-sample Gabor feature matrix, where n is the number of training images and m1 is the image dimension after Gabor extraction; each column is one m1-dimensional image.
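The kernel bank of step (1.2) can be sketched as below. The patent only states δ = 2π; the maximum frequency `kmax = π/2`, spacing factor `f = √2`, and kernel size are assumed values taken from the common face-recognition form of the Gabor wavelet.

```python
import numpy as np

def gabor_bank(size=31, n_dirs=8, n_scales=5, kmax=np.pi/2, f=np.sqrt(2), delta=2*np.pi):
    """Bank of 40 complex Gabor kernels (8 directions x 5 scales).
    kmax and f are assumed; the source only gives delta = 2*pi."""
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    kernels = {}
    for v in range(n_scales):
        kv = kmax / f**v                        # scale-dependent wave number
        for mu in range(n_dirs):
            phi = np.pi * mu / n_dirs           # direction angle
            kx, ky = kv * np.cos(phi), kv * np.sin(phi)
            k2, z2 = kv**2, x**2 + y**2
            # (||k||^2/d^2) * exp(-||k||^2 ||z||^2 / (2 d^2)) * (e^{i k.z} - e^{-d^2/2})
            kernels[(mu, v)] = (k2 / delta**2) * np.exp(-k2 * z2 / (2 * delta**2)) \
                * (np.exp(1j * (kx * x + ky * y)) - np.exp(-delta**2 / 2))
    return kernels

bank = gabor_bank()
print(len(bank))  # 40
```

Convolving an image with each kernel (e.g. via FFT-based convolution) yields the 40 feature maps O_{μ,ν} described in the text.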
Step 2: perform the first dimensionality reduction on the two feature matrices using non-negative matrix factorization (NMF), obtaining two first projection matrices. Specifically:
(2.1) First dimensionality reduction of the HOG feature matrix V(m×n):
First, decompose the HOG feature matrix with NMF: V(m×n) = W(m×k) × H(k×n), where W(m×k) is the basis matrix and H(k×n) is the coefficient matrix.
The NMF problem is then cast as minimizing the Euclidean distance between V and WH:
E(W, H) = Σ_{h,j} (V_hj − (WH)_hj)²
where h is the matrix row index and j is the column index.
The iteration rules are:
W_hk ← W_hk · (V Hᵀ)_hk / (W H Hᵀ)_hk
H_kj ← H_kj · (Wᵀ V)_kj / (Wᵀ W H)_kj
where W_hk is the element in row h, column k of the basis matrix, and H_kj is the element in row k, column j of the coefficient matrix.
The matrix W(m×k) obtained by NMF is the first projection matrix.
Then project the HOG feature matrix V(m×n) onto the space of W(m×k): V′(k×n) = W(m×k)ᵀ × V(m×n). V′(k×n) is the training-sample HOG feature matrix after the first reduction; the image dimension is reduced from m to k, completing the first dimensionality reduction of V(m×n).
(2.2) In the same way as step (2.1), complete the first dimensionality reduction of the training-sample Gabor feature matrix, obtaining the first Gabor projection matrix X(m1×k) and the reduced training-sample Gabor feature matrix U′(k×n).
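Step 2 can be sketched with the standard multiplicative updates for Euclidean-loss NMF (Lee–Seung form); the iteration count, rank and random data below are illustrative, not the patent's settings.

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF for min ||V - WH||^2 with W, H >= 0:
    W <- W * (V H^T)/(W H H^T),  H <- H * (W^T V)/(W^T W H)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # eps guards against division by zero
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((100, 20)))   # stand-in feature matrix (m x n)
W, H = nmf(V, k=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)      # relative reconstruction error
V_reduced = W.T @ V                                      # first reduction: V' = W^T V, (k x n)
print(W.shape, H.shape, V_reduced.shape)
```

The same routine applied to the Gabor feature matrix yields the second projection matrix X(m1×k) and the reduced matrix U′(k×n).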
Step 3: using the two first projection matrices, serially fuse the reduced HOG features with the reduced Gabor features to obtain the fused feature matrix M(2k×n) = [V′(k×n); U′(k×n)], i.e., the two reduced k×n matrices stacked column by column.
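Serial fusion here amounts to concatenation: the fused matrix has 2k rows, which implies stacking the two reduced k×n matrices vertically. A minimal sketch with illustrative dimensions:

```python
import numpy as np

k, n = 50, 10
Vp = np.random.rand(k, n)   # reduced HOG features V' (k x n), illustrative data
Up = np.random.rand(k, n)   # reduced Gabor features U' (k x n), illustrative data
M = np.vstack([Vp, Up])     # serial fusion: M = [V'; U'], shape (2k x n)
print(M.shape)  # (100, 10)
```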
Step 4: perform the second dimensionality reduction on the fused feature matrix using linear discriminant analysis (LDA), obtaining the secondary projection matrix and the feature vectors of the training samples after the second reduction. Specifically:
(4.1) Compute the optimal projection matrix with LDA:
First compute the within-class scatter matrix S_w and the between-class scatter matrix S_b:
S_w = Σ_{p=1}^{C} Σ_{q=1}^{N} (x_(p,q) − μ_p)(x_(p,q) − μ_p)ᵀ
S_b = Σ_{p=1}^{C} N (μ_p − μ_a)(μ_p − μ_a)ᵀ
where μ_p is the mean of the p-th class, μ_a is the mean of all samples, the image set contains C classes of people with N face images each, and x_(p,q) is the feature vector of the q-th face image of the p-th person.
Then, using S_w and S_b, obtain the optimal projection matrix W_LDA = [w1, w2, …, wr] from the Fisher criterion
J_LDA(w) = (wᵀ S_b w) / (wᵀ S_w w)
where r is the required projection dimension. The projection directions w = [w1, w2, …] are chosen to maximize J_LDA(w), and the r eigenvectors with the largest eigenvalues are selected to form W_LDA.
(4.2) Project the fused feature matrix M(2k×n) of step 3 onto the r-dimensional space of W_LDA: R(r×n) = W_LDAᵀ × M(2k×n), obtaining the training-sample feature vectors after the second reduction. The projected samples have maximum between-class distance and minimum within-class distance in the new space, realizing the second dimensionality reduction.
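Step 4 can be sketched via the generalized eigenproblem S_w⁻¹ S_b w = λw. This is a minimal illustration on random data; the small ridge term is an assumption added to keep S_w invertible when the feature dimension is high relative to the sample count.

```python
import numpy as np

def lda_projection(X, labels, r):
    """Fisher LDA sketch: X is (d x n), columns are samples.
    Returns W_LDA (d x r) maximizing w^T S_b w / w^T S_w w."""
    d, n = X.shape
    mu_all = X.mean(axis=1, keepdims=True)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mu_c = Xc.mean(axis=1, keepdims=True)
        Sw += (Xc - mu_c) @ (Xc - mu_c).T                       # within-class scatter
        Sb += Xc.shape[1] * (mu_c - mu_all) @ (mu_c - mu_all).T  # between-class scatter
    # small ridge (assumed) keeps S_w invertible; solve S_w^{-1} S_b eigenproblem
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:r]].real

X = np.random.default_rng(2).random((8, 30))   # fused features M (2k x n), illustrative
y = np.repeat(np.arange(3), 10)                # 3 classes, 10 samples each
W = lda_projection(X, y, r=2)
R = W.T @ X                                    # second reduction: R = W_LDA^T M, (r x n)
print(W.shape, R.shape)
```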
Step 5: extract HOG and Gabor features from the near-infrared face image test samples; use the two first projection matrices from step 2 to perform the first dimensionality reduction on the test-sample HOG and Gabor features; use the method of step 3 to obtain the fused feature matrix of the test samples; then use the secondary projection matrix from step 4 to perform the second reduction on the fused test matrix, obtaining the twice-reduced feature vectors of the face images to be detected. Specifically:
(5.1) Obtain the test-sample HOG feature matrix and test-sample Gabor feature matrix with the methods of steps (1.1) and (1.2), where n1 is the number of test images, m is the image dimension after HOG extraction and m1 is the image dimension after Gabor extraction.
(5.2) Using the two first projection matrices W(m×k) and X(m1×k) obtained in steps (2.1) and (2.2), perform the first dimensionality reduction on the test-sample HOG and Gabor feature matrices, obtaining the reduced test-sample HOG feature matrix and the reduced test-sample Gabor feature matrix.
(5.3) Obtain the fused feature matrix of the test samples with the method of step 3.
(5.4) Use the secondary projection matrix W_LDA obtained in step 4 to perform the second dimensionality reduction on the fused feature matrix of the test samples, obtaining the twice-reduced feature vectors of the face images to be detected.
Step 6: classify the twice-reduced feature vectors of the test samples with the k-nearest-neighbor algorithm (KNN), based on the distribution of the training samples. Specifically:
(6.1) Let α_z be the z-th column of the twice-reduced test-sample feature matrix, representing the feature data of the z-th individual. Compute the Euclidean distance between α_z and each column β1, β2, …, βn of the twice-reduced training-sample matrix R(r×n):
D_s(α_z) = ‖α_z − β_s‖₂, where s ∈ {1, …, n}.
(6.2) Find the twice-reduced training sample closest to α_z; its label, obtained from the training-sample distribution, is the class of the z-th individual in the twice-reduced test set.
(6.3) Apply the method of (6.1)-(6.2) to every column of the twice-reduced test-sample matrix to obtain the class of each individual in the test set.
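Step 6 reduces to 1-nearest-neighbor classification over the column vectors. A minimal sketch with tiny hand-made data (names and values illustrative):

```python
import numpy as np

def knn1_classify(R_train, train_labels, R_test):
    """1-NN on column vectors: each test column receives the label of the
    training column at minimum Euclidean distance D_s = ||a_z - b_s||_2."""
    preds = []
    for z in range(R_test.shape[1]):
        d = np.linalg.norm(R_train - R_test[:, [z]], axis=0)  # distances to all columns
        preds.append(train_labels[np.argmin(d)])
    return np.array(preds)

R_train = np.array([[0., 0., 5., 5.],
                    [0., 1., 5., 6.]])       # 2 features x 4 training samples
labels = np.array([0, 0, 1, 1])
R_test = np.array([[0.2, 4.9],
                   [0.3, 5.2]])              # 2 test samples
print(knn1_classify(R_train, labels, R_test))  # [0 1]
```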
Beneficial effects of the present invention:
1. Comprehensive information
The HOG descriptor used in the present invention captures contour information well and can describe the target shape; because it processes the target in local cells, it reduces the influence of geometric and photometric deformations to a certain extent. The Gabor transform enhances image edge features, strengthening key parts of the face image; it is robust to illumination and pose and reflects the perception of the human visual system, extracting both local features and frequency-domain information of the image. By fusing HOG and Gabor features, the invention extracts more comprehensive information than a single feature, with better results.
2. Efficient feature representation
One of the most important goals of a face recognition system is to extract the intrinsic characteristics of a face image efficiently and comprehensively and to express them concisely. To overcome the limitation of a single feature, a multi-feature fusion method is studied. It combines the strong contour description of HOG with Gabor's strength at extracting key features, plus the robustness of both feature types to geometric deformation, photometric deformation, illumination and pose; it retains the effective information to the greatest extent while removing redundancy, optimizing real-time processing capability. The fused feature obtained after processing significantly increases the recognition rate compared with a single feature. During training, two dimensionality reductions are performed with NMF and LDA: the first (NMF) greatly lowers storage occupancy and computation cost, significantly improving efficiency, and has a sparsity that suppresses external interference to a certain extent; the second (LDA) is supervised, uses the prior knowledge of class labels to select the most discriminative directions, efficiently extracts the dimensions that contribute most to classification, significantly reduces algorithm complexity and shortens running time, benefiting the final classification.
3. Low storage and time costs
The HOG features used in the invention are computed on cells of uniform size, giving small computation, fast speed and good detection performance. The high-dimensional and redundant Gabor features of different scales in the same direction are fused, reducing the feature dimension and improving recognition efficiency. In addition, non-negative matrix factorization (NMF) is used to reduce the excessively high feature dimension, improving operating efficiency and reducing time cost; NMF computes the basis and coefficient matrices by iterative updates and occupies little storage space.
Detailed description of the invention
Fig. 1 is a flow chart of the fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images.
Specific embodiment
A specific embodiment of the invention is described in detail below in conjunction with the technical solution and the accompanying drawing.
Step 1: HOG feature extraction from the near-infrared face image training samples:
First, perform grayscale conversion and color-space (gamma) normalization on the training samples. The normalization formula is L(x1, y1) = E(x1, y1)^γ, where L(x1, y1) is the pixel value at pixel (x1, y1) after normalization, E(x1, y1) is the gray value at (x1, y1), and γ is the gamma-correction exponent.
Then compute the horizontal gradient Gx(x1, y1) and vertical gradient Gy(x1, y1) of each pixel:
Gx(x1, y1) = L(x1+1, y1) − L(x1−1, y1)
Gy(x1, y1) = L(x1, y1+1) − L(x1, y1−1)
Next, compute the gradient magnitude G(x1, y1) and gradient direction θ(x1, y1):
G(x1, y1) = sqrt(Gx(x1, y1)² + Gy(x1, y1)²)
θ(x1, y1) = arctan(Gy(x1, y1) / Gx(x1, y1))
Later, divide the image into cells of equal size, build a gradient histogram for each cell, combine adjacent cells into blocks, and normalize the gradient strength within each block.
Finally, concatenate the histogram vectors of all blocks to form the HOG feature vector; the HOG feature vectors of all training sample images form the training-sample HOG feature matrix V(m×n), where n is the number of training images and m is the image dimension after HOG extraction; each column of V(m×n) is one m-dimensional image.
Step 2: Gabor feature extraction from the near-infrared face image training samples:
First, construct a bank of 40 Gabor kernels with 8 directions and 5 scales:
G_{μ,ν}(x, y) = (‖k_{μ,ν}‖²/δ²) · exp(−‖k_{μ,ν}‖²(x² + y²)/(2δ²)) · [exp(i k_{μ,ν}·(x, y)) − exp(−δ²/2)]
where x and y are the coordinates of a pixel in the image, and μ and ν are the spatial-direction and spatial-scale indices: μ ∈ {0, …, 7} corresponds to the 8 directions and ν ∈ {0, …, 4} to the 5 scales; k_{μ,ν} is the wave vector determined by μ and ν; δ = 2π; i is the imaginary unit.
Then convolve the training sample images with the Gabor kernels; each face image yields 40 Gabor feature maps. The convolution formula is O_{μ,ν}(x, y) = I(x, y) * G_{μ,ν}(x, y), where I(x, y) is the input sample image and O_{μ,ν}(x, y) is the Gabor feature map obtained after filtering.
Next, fuse the feature maps of the five scales in each direction into one map B_μ(x, y), obtaining 8 fused Gabor feature maps, and concatenate the 8 maps in series to form the Gabor feature vector.
Finally, the Gabor feature vectors of all training sample images form the training-sample Gabor feature matrix, where n is the number of training images and m1 is the image dimension after Gabor extraction; each column is one m1-dimensional image.
Step 3: perform the first dimensionality reduction on the two feature matrices using non-negative matrix factorization (NMF), obtaining two first projection matrices. Specifically:
(A) First dimensionality reduction of the HOG feature matrix V(m×n):
First, decompose the HOG feature matrix with NMF: V(m×n) = W(m×k) × H(k×n), where W(m×k) is the basis matrix and H(k×n) is the coefficient matrix.
The NMF problem is then cast as minimizing the Euclidean distance between V and WH:
E(W, H) = Σ_{h,j} (V_hj − (WH)_hj)²
where h is the matrix row index and j is the column index.
The iteration rules are:
W_hk ← W_hk · (V Hᵀ)_hk / (W H Hᵀ)_hk
H_kj ← H_kj · (Wᵀ V)_kj / (Wᵀ W H)_kj
where W_hk is the element in row h, column k of the basis matrix, and H_kj is the element in row k, column j of the coefficient matrix.
The matrix W(m×k) obtained by NMF is the first projection matrix.
Then project the HOG feature matrix onto the space of W(m×k): V′(k×n) = W(m×k)ᵀ × V(m×n). V′(k×n) is the training-sample HOG feature matrix after the first reduction; the image dimension is reduced from m to k, completing the first dimensionality reduction of V(m×n).
(B) In the same way as step (A), complete the first dimensionality reduction of the training-sample Gabor feature matrix, obtaining the first Gabor projection matrix X(m1×k) and the reduced training-sample Gabor feature matrix U′(k×n).
Step 4: using the two first projection matrices, serially fuse the reduced HOG features with the reduced Gabor features to obtain the fused feature matrix M(2k×n) = [V′(k×n); U′(k×n)].
Step 5: perform the second dimensionality reduction on the fused feature matrix using linear discriminant analysis (LDA), obtaining the secondary projection matrix and the feature vectors of the training samples after the second reduction. Specifically:
(C) Compute the optimal projection matrix with LDA:
First compute the within-class scatter matrix S_w and the between-class scatter matrix S_b:
S_w = Σ_{p=1}^{C} Σ_{q=1}^{N} (x_(p,q) − μ_p)(x_(p,q) − μ_p)ᵀ
S_b = Σ_{p=1}^{C} N (μ_p − μ_a)(μ_p − μ_a)ᵀ
where μ_p is the mean of the p-th class, μ_a is the mean of all samples, the image set contains C classes of people with N face images each, and x_(p,q) is the feature vector of the q-th face image of the p-th person.
Then, using S_w and S_b, obtain the optimal projection matrix W_LDA = [w1, w2, …, wr] from the Fisher criterion
J_LDA(w) = (wᵀ S_b w) / (wᵀ S_w w)
where r is the required projection dimension; the r eigenvectors with the largest eigenvalues are selected from the maximizers of J_LDA(w) to form W_LDA.
(D) Project the fused feature matrix M(2k×n) of Step 4 onto the r-dimensional space of W_LDA: R(r×n) = W_LDAᵀ × M(2k×n), obtaining the training-sample feature vectors after the second reduction. The projected samples have maximum between-class distance and minimum within-class distance in the new space, realizing the second dimensionality reduction.
Step 6: extract HOG and Gabor features from the near-infrared face image test samples; use the two first projection matrices from Step 3 to perform the first dimensionality reduction on the test-sample HOG and Gabor features; use the method of Step 4 to obtain the fused feature matrix of the test samples; then use the secondary projection matrix from Step 5 to perform the second reduction, obtaining the twice-reduced feature vectors of the face images to be detected. Specifically:
(E) Obtain the test-sample HOG feature matrix and test-sample Gabor feature matrix with the methods of Step 1 and Step 2, where n1 is the number of test images, m is the image dimension after HOG extraction and m1 is the image dimension after Gabor extraction.
(F) Using the two first projection matrices W(m×k) and X(m1×k) obtained in steps (A) and (B), perform the first dimensionality reduction on the test-sample HOG and Gabor feature matrices, obtaining the reduced test-sample HOG feature matrix and the reduced test-sample Gabor feature matrix.
(G) Obtain the fused feature matrix of the test samples with the method of Step 4.
(H) Use the secondary projection matrix W_LDA obtained in Step 5 to perform the second dimensionality reduction on the fused feature matrix of the test samples, obtaining the twice-reduced feature vectors of the face images to be detected.
Step 7: classify the twice-reduced feature vectors of the test samples with the k-nearest-neighbor algorithm (KNN), based on the distribution of the training samples. Specifically:
(I) Let α_z be the z-th column of the twice-reduced test-sample feature matrix, representing the feature data of the z-th individual. Compute the Euclidean distance between α_z and each column β1, β2, …, βn of the twice-reduced training-sample matrix R(r×n):
D_s(α_z) = ‖α_z − β_s‖₂, where s ∈ {1, …, n}.
(J) Find the twice-reduced training sample closest to α_z; its label, obtained from the training-sample distribution, is the class of the z-th individual in the twice-reduced test set.
(K) Apply the method of (I)-(J) to every column of the twice-reduced test-sample matrix to obtain the class of each individual in the test set.
The present invention was tested on the FERET face database: 200 people with six face images each, of which 1000 images were used for training and 200 for testing. A recognition rate of 96% was obtained on the 200 test images, and the time to detect one face was 0.08 s, ensuring both the recognition rate and a short recognition time.
The above specific example describes in detail the fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images provided by the present invention. The example is not intended to limit the invention but to aid understanding of its core content. Those skilled in the art may make modifications when implementing the basic idea of the invention; such improvements are all included in the scope of protection of the present invention.
Claims (1)
1. A fast face recognition method based on efficient fusion of HOG and Gabor features of near-infrared face images, characterized in that the steps are as follows:
Step 1: extract HOG features and Gabor features from the near-infrared face image training samples to obtain two feature matrices. Specifically:
(1.1) HOG feature extraction on the near-infrared face image training samples:
First, the training images are converted to grayscale and color-space normalized. The color-space normalization formula is L(x1, y1) = E(x1, y1)^γ, where L(x1, y1) is the pixel value at pixel (x1, y1) after color-space normalization, E(x1, y1) is the gray value of the image at pixel (x1, y1), and γ is a preset gamma correction coefficient.
Next, the horizontal gradient Gx(x1, y1) and the vertical gradient Gy(x1, y1) at pixel (x1, y1) are computed:
Gx(x1, y1) = L(x1+1, y1) − L(x1−1, y1)
Gy(x1, y1) = L(x1, y1+1) − L(x1, y1−1)
Then the gradient magnitude G(x1, y1) and gradient direction θ(x1, y1) of the pixel are:
G(x1, y1) = sqrt(Gx(x1, y1)² + Gy(x1, y1)²)
θ(x1, y1) = arctan(Gy(x1, y1) / Gx(x1, y1))
Next, the image is divided into cells of identical size, a gradient histogram is built for each cell, adjacent cells are grouped into blocks, and the gradient strength is normalized within each block.
Finally, the histogram vectors of all blocks are concatenated to form the HOG feature vector; the HOG feature vectors of all training images are combined into the training-sample HOG feature matrix V_{m×n}, where n is the number of training images and m is the image dimension after HOG extraction; each column vector of V_{m×n} represents one m-dimensional image.
(1.2) Gabor feature extraction on the near-infrared face image training samples:
First, a bank of 40 Gabor kernel functions with 8 directions and 5 scales is constructed:
G_{μ,ν}(x, y) = (||k_{μ,ν}||² / δ²) · exp(−||k_{μ,ν}||² (x² + y²) / (2δ²)) · [exp(i k_{μ,ν}·(x, y)) − exp(−δ²/2)]
where x and y are the horizontal and vertical coordinates of a pixel in the image, μ and ν are the spatial-direction and spatial-scale adjustment coefficients, and k_{μ,ν} is the wave vector of direction μ and scale ν; μ ∈ {0, ..., 7} corresponds to 8 different spatial directions and ν ∈ {0, ..., 4} to 5 different spatial scales; δ = 2π; i is the imaginary unit.
Then each Gabor kernel is convolved with each training image, so that every face image yields 40 Gabor feature maps. The convolution formula is:
O_{μ,ν}(x, y) = I(x, y) * G_{μ,ν}(x, y), where I(x, y) is the input sample image and O_{μ,ν}(x, y) is the Gabor feature map obtained after Gabor filtering.
Next, the Gabor feature maps sharing the same spatial direction but different spatial scales are fused, giving 8 Gabor feature maps, and the 8 fused maps are concatenated in series to form the Gabor feature vector. In the fusion formula, B_μ(x, y) is the fused Gabor feature map.
Finally, the Gabor feature vectors of all training images are combined into the training-sample Gabor feature matrix U_{m1×n}, where n is the number of training images and m1 is the image dimension after Gabor extraction; each column vector of U_{m1×n} represents one m1-dimensional image.
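Step (1.2) can be sketched as follows. The kernel form is the standard Gabor wavelet consistent with the parameters given above (δ = 2π, 8 directions, 5 scales); the maximum frequency kmax, the scale factor f, the kernel window size, and the per-direction fusion rule (sum of magnitudes) are assumed, since the patent shows its fusion formula only as an image.

```python
import numpy as np

def gabor_bank(size=15, kmax=np.pi/2, f=np.sqrt(2), delta=2*np.pi):
    """40 Gabor kernels: 5 scales (nu) x 8 directions (mu). kmax, f, size assumed."""
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    kernels = []
    for nu in range(5):              # 5 spatial scales
        k = kmax / f**nu             # wave-vector magnitude of scale nu
        for mu in range(8):          # 8 spatial directions
            phi = np.pi * mu / 8
            env = (k**2 / delta**2) * np.exp(-k**2 * (x**2 + y**2) / (2 * delta**2))
            osc = np.exp(1j * k * (x*np.cos(phi) + y*np.sin(phi))) - np.exp(-delta**2 / 2)
            kernels.append(env * osc)
    return kernels                   # list of 40 complex kernels

def conv_same(img, ker):
    """FFT-based 2-D convolution, cropped to the input image size."""
    H, W = img.shape
    kh, kw = ker.shape
    F = np.fft.fft2(img, (H + kh - 1, W + kw - 1))
    K = np.fft.fft2(ker, (H + kh - 1, W + kw - 1))
    full = np.fft.ifft2(F * K)
    return full[kh//2:kh//2 + H, kw//2:kw//2 + W]

def gabor_features(img, kernels):
    """One fused map per direction, then serial combination of the 8 maps."""
    parts = []
    for mu in range(8):
        # Fuse the 5 scales of one direction; summing the filter-response
        # magnitudes is an assumed stand-in for the patent's fusion formula.
        fused = sum(np.abs(conv_same(img, kernels[nu*8 + mu])) for nu in range(5))
        parts.append(fused.ravel())
    return np.concatenate(parts)
```

Column-stacking these vectors over all training images gives the m1×n matrix U described above.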
Step 2: Apply the non-negative matrix factorization method (NMF) to each of the two feature matrices for a first dimensionality reduction, obtaining two primary projection transformation matrices. Specifically:
(2.1) First dimensionality reduction of the HOG feature matrix V_{m×n}:
First, decompose the HOG feature matrix by NMF: V_{m×n} = W_{m×k} × H_{k×n}, where W_{m×k} is the basis matrix and H_{k×n} is the coefficient matrix.
The non-negative matrix factorization problem is then posed as minimizing
E(W, H) = Σ_h Σ_j (V_hj − (WH)_hj)², subject to W ≥ 0, H ≥ 0
where E(W, H) is the Euclidean distance between V and WH, h is the matrix row index, and j is the matrix column index.
The iteration rules are:
W_hk ← W_hk · (V Hᵀ)_hk / (W H Hᵀ)_hk
H_kj ← H_kj · (Wᵀ V)_kj / (Wᵀ W H)_kj
where W_hk is the element in row h, column k of the basis matrix W_{m×k}, and H_kj is the element in row k, column j of the coefficient matrix H_{k×n}.
The matrix W_{m×k} obtained through the NMF process is the primary projection transformation matrix.
Then project the HOG feature matrix V_{m×n} onto the space of W_{m×k}:
V'_{k×n} = (W_{m×k})ᵀ × V_{m×n}
where V'_{k×n} is the training-sample HOG feature matrix after the first dimensionality reduction; the image dimension is reduced from m to k, completing the first dimensionality reduction of the training-sample HOG feature matrix V_{m×n}.
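Step (2.1) corresponds to the classic Lee–Seung multiplicative updates; a minimal sketch follows. The iteration count, initialization, and stabilizing epsilon are assumptions, not values from the patent.

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing E(W, H) = ||V - WH||_F^2
    with W, H >= 0. iters/eps/random init are assumed choices."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # H_kj update rule
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # W_hk update rule
    return W, H

# The basis matrix W is the primary projection transformation matrix;
# the once-reduced features are V_reduced = W.T @ V, of shape k x n.
```

The same routine applied to the Gabor feature matrix yields the second primary projection matrix of step (2.2).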
(2.2) In the same way as step (2.1), complete the first dimensionality reduction of the training-sample Gabor feature matrix, obtaining the Gabor primary projection transformation matrix and the training-sample Gabor feature matrix U'_{k×n} after the first dimensionality reduction;
Step 3: Using the two projection transformation matrices, perform serial feature fusion of the reduced HOG features and Gabor features, obtaining the fusion feature matrix. The fusion feature matrix is computed as:
M_{2k×n} = [V'_{k×n}; U'_{k×n}]
i.e., the two once-reduced feature matrices are stacked in series.
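Serial fusion reduces to a concatenation; a one-line NumPy sketch with stand-in data follows. Vertical stacking is inferred from the 2k×n size the fusion matrix has in step (4.2).

```python
import numpy as np

k, n = 50, 10
V1 = np.random.rand(k, n)   # once-reduced HOG features (stand-in data)
U1 = np.random.rand(k, n)   # once-reduced Gabor features (stand-in data)
# Serial (concatenation) fusion: stack the two k x n matrices into 2k x n
M = np.vstack([V1, U1])
print(M.shape)              # (100, 10)
```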
Step 4: Apply linear discriminant analysis (Fisherface, LDA) to the fusion feature matrix for a second dimensionality reduction, obtaining the secondary projection transformation matrix and the feature vectors of the training samples after the second reduction. Specifically:
(4.1) Compute the optimal projection matrix using the LDA method:
First, compute the within-class scatter matrix Sw and the between-class scatter matrix Sb:
Sw = Σ_p Σ_q (x(p,q) − μp)(x(p,q) − μp)ᵀ
Sb = Σ_p N (μp − μa)(μp − μa)ᵀ
where μp is the mean of the p-th class of samples, μa is the mean of all samples, the image samples contain C classes of persons with N face images per person, and x(p,q) is the feature vector of the q-th face image of the p-th person.
Then, using the within-class scatter matrix Sw and the between-class scatter matrix Sb, the optimal projection matrix W_LDA is obtained from the Fisher criterion function: W_LDA = [w1, w2, ..., wr];
where the Fisher criterion function is:
J_LDA(w) = (wᵀ Sb w) / (wᵀ Sw w)
r is the required projection dimension; w = [w1, w2, ...] constructs the Fisher projection matrix so that J_LDA(w) is maximized; the r eigenvectors with the largest eigenvalues are selected from w, as required, to form the matrix W_LDA;
(4.2) Project the fusion feature matrix M_{2k×n} obtained in step 3 onto the r-dimensional space of W_LDA:
R_{r×n} = (W_LDA)ᵀ × M_{2k×n}
obtaining the training-sample feature vectors R_{r×n} after the second dimensionality reduction; in the new space the projected samples have maximum between-class distance and minimum within-class distance, thereby achieving the second dimensionality reduction.
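Step 4 amounts to solving the generalized eigenproblem Sb w = λ Sw w and keeping the top-r eigenvectors. A minimal sketch, assuming a small ridge term to keep Sw invertible (the patent does not specify how singularity is handled):

```python
import numpy as np

def lda_projection(X, labels, r):
    """Fisher LDA on column-sample data X (d x n): build Sw and Sb as above,
    then take the r leading eigenvectors of Sw^-1 Sb."""
    d, _ = X.shape
    mu_all = X.mean(axis=1, keepdims=True)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mu_c = Xc.mean(axis=1, keepdims=True)
        Sw += (Xc - mu_c) @ (Xc - mu_c).T                        # within-class scatter
        Sb += Xc.shape[1] * (mu_c - mu_all) @ (mu_c - mu_all).T  # between-class scatter
    # Generalized eigenproblem Sb w = lambda Sw w; the small ridge on Sw
    # is an assumed regularization for scarce samples.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)[:r]
    return evecs[:, order].real    # W_LDA (d x r); reduced features: W_LDA.T @ X
```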
Step 5: Extract HOG features and Gabor features from the near-infrared face image test samples to be detected; using the two projection transformation matrices obtained in step 2, perform the first dimensionality reduction on the test-sample HOG features and Gabor features respectively; using the method of step 3, obtain the fusion feature matrix of the test samples; then, using the secondary projection transformation matrix obtained in step 4, perform the second dimensionality reduction on the test-sample fusion feature matrix, obtaining the twice-reduced feature vectors of the face images to be detected. Specifically:
(5.1) Using the methods of steps (1.1) and (1.2), obtain the test-sample HOG feature matrix and the test-sample Gabor feature matrix, where n1 is the number of test images, m is the image dimension after HOG extraction, and m1 is the image dimension after Gabor extraction;
(5.2) Using the two projection transformation matrices obtained in steps (2.1) and (2.2), namely W_{m×k} and the Gabor projection matrix, perform the first dimensionality reduction on the test-sample HOG feature matrix and Gabor feature matrix respectively, obtaining the once-reduced test-sample HOG feature matrix and the once-reduced test-sample Gabor feature matrix;
(5.3) Using the method of step 3, obtain the fusion feature matrix of the test samples;
(5.4) Using the secondary projection transformation matrix W_LDA obtained in step 4, perform the second dimensionality reduction on the test-sample fusion feature matrix, obtaining the twice-reduced feature vectors of the face images to be detected.
Step 6: Using the nearest-neighbor algorithm (KNN) based on the distribution of the training samples, classify the twice-reduced feature vectors of the test samples. Specifically:
(6.1) Let αz be the z-th column vector of the twice-reduced test-sample feature matrix; αz represents the feature data of the z-th individual in the twice-reduced test sample. Compute the Euclidean distance Ds(αz) between αz and each column vector β1, β2, ..., βn of the twice-reduced training-sample feature matrix R_{r×n}:
Ds(αz) = ||αz − βs||₂, where s ∈ {1, ..., n};
(6.2) Find the individual feature data in the twice-reduced training samples at the smallest distance from αz; from the distribution of the training samples, obtain that training sample's individual label, which is the class of the z-th individual in the twice-reduced test sample;
(6.3) Apply the method of (6.1) and (6.2) to each column vector of the twice-reduced test-sample feature matrix, obtaining the class of every individual in the twice-reduced test sample.
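Step 6 is 1-nearest-neighbor classification on the columns of the reduced feature matrices; a minimal vectorized sketch (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def nn_classify(R_train, y_train, R_test):
    """1-NN per steps (6.1)-(6.2): each test column receives the label of
    the training column at the smallest Euclidean distance."""
    # Pairwise Euclidean distances Ds, shape (n_test, n_train)
    d = np.linalg.norm(R_test[:, :, None] - R_train[:, None, :], axis=0)
    return y_train[np.argmin(d, axis=1)]
```

For example, with two training columns at (0, 0) and (10, 10) labeled 7 and 3, a test column at (1, 1) is assigned label 7.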
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811311715.3A CN109376680A (en) | 2018-11-06 | 2018-11-06 | A kind of Hog and Gabor characteristic based on near-infrared facial image efficiently merges fast human face recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109376680A true CN109376680A (en) | 2019-02-22 |
Family
ID=65397592
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811311715.3A Withdrawn CN109376680A (en) | 2018-11-06 | 2018-11-06 | A kind of Hog and Gabor characteristic based on near-infrared facial image efficiently merges fast human face recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109376680A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103390154A (en) * | 2013-07-31 | 2013-11-13 | 中国人民解放军国防科学技术大学 | Face recognition method based on extraction of multiple evolution features |
CN106203528A (en) * | 2016-07-19 | 2016-12-07 | 华侨大学 | A kind of feature based merges and the 3D of KNN draws intelligent classification algorithm |
CN106991385A (en) * | 2017-03-21 | 2017-07-28 | 南京航空航天大学 | A kind of facial expression recognizing method of feature based fusion |
Non-Patent Citations (3)
Title |
---|
Yang Yong et al., "An expression recognition method based on two-step dimensionality reduction and parallel feature fusion", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) * |
Wang Xiaohua et al., "Face recognition fusing an improved Gabor transform and two-dimensional NMF", Computer Engineering and Applications * |
Nie Yile, "Sparse representation face recognition method based on Gabor and HOG features", China Masters' Theses Full-text Database, Information Science and Technology * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112052344A (en) * | 2020-09-29 | 2020-12-08 | 北京邮电大学 | Method for acquiring converged media information based on knowledge graph and ScSIFT |
CN112052344B (en) * | 2020-09-29 | 2022-09-09 | 北京邮电大学 | Method for acquiring converged media information based on knowledge graph and ScSIFT |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | Application publication date: 20190222 |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | |