CN106778522A - Single-sample face recognition method based on Gabor feature extraction and spatial transformation - Google Patents

Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Info

Publication number
CN106778522A
CN106778522A (application CN201611059543.6A)
Authority
CN
China
Prior art keywords
gabor
matrix
feature
transformation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611059543.6A
Other languages
Chinese (zh)
Other versions
CN106778522B (en)
Inventor
葛洪伟
李莉
江明
朱嘉钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huiyouba Technology Co ltd
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201611059543.6A priority Critical patent/CN106778522B/en
Publication of CN106778522A publication Critical patent/CN106778522A/en
Application granted granted Critical
Publication of CN106778522B publication Critical patent/CN106778522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a single-sample face recognition method based on Gabor feature extraction and spatial transformation. It mainly solves the problem that traditional face recognition methods cannot be applied when only one training sample image per person is available, because the intra-class scatter matrix is then zero. The method extracts spatial feature vectors from the original single sample image using Gabor wavelets, fuses the extracted spatial feature vectors with the original spectral feature vector, applies a feature space transformation to project the fused feature matrix into a low-dimensional subspace, and finally performs recognition with a nearest neighbor classifier. The method accurately accomplishes single-sample face recognition, improves recognition accuracy, and reduces computational cost. Compared with the prior art, the proposed face recognition method is more effective and robust.

Description

Single-sample face recognition method based on Gabor feature extraction and spatial transformation
Technical Field
The invention belongs to the field of pattern recognition and image processing, and relates to the face recognition problem in which traditional face recognition methods cannot be used for a single face image because the intra-class scatter matrix is zero. In particular, it relates to a single-sample face recognition method based on Gabor feature extraction and spatial transformation, which can be used for video surveillance, identity recognition and the like in single-sample scenarios.
Background
Face recognition is one of the most important biometric recognition technologies and is now widely applied in video surveillance, supervision and law enforcement, multimedia, process control, identity recognition and other fields. Many researchers have studied it extensively. However, harsh or specific environments often pose completely new challenges to a face recognition system; for example, law enforcement officers may have only the single face image on a criminal's identity card against which to compare surveillance footage. In such a scenario with only one face image per person, face recognition becomes very difficult, mainly because the intra-class scatter matrix in commonly used classification models is zero, so traditional methods such as Fisher linear discriminant analysis and maximum scatter difference cannot be used directly. This situation is often called the single-training-sample face recognition problem in an unconstrained environment. Accurately and automatically identifying a single, previously unseen face captured under poor illumination and large variations in pose and expression is a great challenge, and the single-training-sample face recognition problem has not yet been well solved.
In recent years, researchers at home and abroad have studied the single-sample training image problem. Gao et al. propose a Fisher linear discriminant analysis (FLDA) method based on Singular Value Decomposition (SVD) (Gao Q, Zhang L, Zhang D. Face recognition using FLDA with single training image per person [J]. Applied Mathematics and Computation, 2008, 205(2): 726-), which decomposes the single training sample image by SVD to obtain a set of base images and uses them, together with the original image, to form a new training set for each class; the intra-class scatter matrix in the FLDA model is thereby obtained. Koc and Barkana propose a method based on column-pivoted orthogonal triangular decomposition (QRCP) (Koc M, Barkana A. A new solution to one sample problem in face recognition using FLDA [J]. Applied Mathematics and Computation, 2011, 217(24): 10368-10376.), which processes the single training sample image with QRCP, likewise obtains a set of base images, constructs an approximate image from them (the approximate image contains 97% of the energy of the original image), and finally forms each new class of training images from the approximate image and the original image, thereby obtaining the intra-class scatter matrix in the FLDA model. Li et al. propose another way to obtain the intra-class scatter matrix (Li L, Gao J, Ge H. A new face recognition method via semi-discrete decomposition for one sample problem [J]. Optik - International Journal for Light and Electron Optics, 2016, 127(19): 7408-7417.), which uses semi-discrete decomposition (SDD) instead of SVD or QRCP and an artificially set decomposition energy parameter to obtain an approximate image, again yielding the intra-class scatter matrix in the FLDA model.
Although both the SVD-based and the QRCP-based Fisher linear discriminant analysis methods can address single-training-sample face recognition, they have three disadvantages: (1) the reconstructed approximate images are not very satisfactory or convincing; (2) in the QRCP-based method, no theoretical analysis or interpretation is given for the requirement that the approximate image contain at least 97% of the energy of the original image, and in the SVD-based method there is only a slight difference between the base images and the original image when the number of approximate images is greater than 4; (3) in the SVD- and QRCP-based methods, the decomposition and storage of large-scale images cannot be avoided.
The SDD-based method outperforms the SVD- and QRCP-based methods in recognition rate and recognition time and requires less storage space, but it still has major drawbacks: (1) its stopping criterion requires manual control; (2) it still relies on the Fisher criterion, that is, it still uses the intra-class and inter-class scatter matrices to obtain effective discriminant information.
Disclosure of Invention
In view of the above problems, the invention provides a single-sample face recognition method based on Gabor feature extraction and spatial transformation, which aims to solve the problem that the intra-class scatter matrix is zero in the single-sample image scenario and thereby to improve the accuracy and robustness of face recognition in practical applications.
The key technique of the invention is as follows: with only a single training sample image, a spatial feature vector is first extracted from the original image using Gabor wavelets; the extracted spatial feature vector and the original spectral feature vector are then combined into a fused feature matrix; a feature space transformation is applied to project the fused feature matrix into a low-dimensional subspace; finally, recognition is performed with a nearest neighbor classifier. The single-sample face recognition method based on Gabor feature extraction and spatial transformation not only greatly improves the recognition rate and reduces computational complexity, but is also more effective and robust than the prior art.
In order to achieve the above object, the specific implementation steps are as follows:
(1) spatial information of a single image is extracted using a Gabor wavelet.
(1.1) Constructing the Gabor filter function: the invention adopts a two-dimensional Gabor filter, which is a Gaussian kernel function modulated by a complex sinusoidal plane wave, to extract the spatial information of a single image. It is defined as:
G(x, y) = exp(-(x'^2 + γ^2 y'^2)/(2σ^2)) · exp(i(2πf x' + φ)),  x' = x cos θ + y sin θ,  y' = -x sin θ + y cos θ
where f is the central frequency of the complex sinusoidal plane wave, θ is the orientation of the normal to the parallel stripes of the Gabor function, φ is the phase, σ is the standard deviation, and γ is the spatial ratio specifying the ellipticity of the support of the Gabor function.
(1.2) Constructing the Gabor filter bank: a Gabor filter bank is composed of a group of Gabor filters with different frequencies and directions. The invention uses a Gabor filter bank with five different scales and eight different directions, given by the following two formulas:
f_u = f_max/(√2)^u (u = 0, 1, …, 4),  θ_v = vπ/8 (v = 0, 1, …, 7)
where f_max is the maximum central frequency.
(1.3) Gabor representation of the face image: for a face image A(x, y), its Gabor representation is obtained by convolving the original image with the Gabor filters, i.e.:
G_{u,v}(x, y) = A(x, y) * g_{u,v}(x, y)
where G_{u,v}(x, y) denotes the two-dimensional convolution result of the Gabor filter at scale u and direction v and * denotes two-dimensional convolution. The size of G_{u,v}(x, y) is determined by the down-sampling factor ξ, and G_{u,v}(x, y) is normalized to zero mean and unit variance to obtain the filtered feature matrix Z_{u,v} ∈ R^{m×n}.
(1.4) Constructing the Gabor direction block feature matrix: the filtered feature matrices Z_{u,v} obtained in (1.3) are converted into one-dimensional column vectors, and Z_0 denotes the Gabor direction block feature matrix of the face image A(x, y) over the five scales and eight directions:
Z_0 = [z_{0,0}, z_{0,1}, …, z_{4,7}]
where z_{u,v} ∈ R^{(m·n)×1} is the one-dimensional representation of Z_{u,v} at scale u and direction v, and Z_0 ∈ R^{(m·n)×40} is the Gabor direction block feature matrix obtained from the convolution results G_{u,v}(x, y).
2. Fusing the spatial information extracted by the Gabor wavelet with the spectral information of the original image.
The Gabor direction block feature matrix obtained in step 1 is Z_0 ∈ R^{(m·n)×40}. On the other hand, since the single training sample image itself contains very important spectral information, the Gabor spatial feature matrix Z_0 and the spectral feature vector Y_0 ∈ R^{(m·n)×1} are fused to obtain the fused feature matrix F ∈ R^{(m·n)×41}:
F = [Z_0/σ_1, Y_0/σ_2]
where σ_1 and σ_2 are the standard deviations of Z_0 and Y_0, respectively, obtained as the square roots of the variances of the feature vectors.
3. Feature space transformation based on the fused feature matrix.
(3.1) establishing a fusion characteristic optimization model
(3.2) performing feature space transformation to obtain a transformation matrix
4. Constructing projection feature vectors
For a test feature vector f ∈ R^{n×1}, the projected feature vector is obtained by the linear transformation x = W^T f, where W = W_1 W_2 is the optimal transformation matrix.
It is clear that the computational complexity described above is significantly reduced.
5. After the projected feature vectors are obtained, the nearest neighbor classifier is used for identification.
The method of the invention has the following advantages:
(1) It overcomes the difficulty posed by a single training sample: since the intra-class scatter matrix in the single-training-sample model is zero, the conventional Fisher criterion fails; the method reconstructs the intra-class scatter matrix through Gabor filtering and feature space transformation.
(2) The invention makes full use of both the spatial information and the spectral information of the original image. Moreover, the Gabor-based spatial feature information is more robust than the spectral feature information of the image and can tolerate local distortions caused by changes in expression, pose, illumination and the like. The recognition rate and recognition time are greatly improved and the computational cost is reduced. Compared with the prior art, the proposed face recognition method is more effective and robust.
Drawings
FIG. 1 is a flow chart of the method of the present invention
FIG. 2 shows the real parts of the Gabor filters at 5 different scales and 8 different directions
FIG. 3 shows the convolution results of 2 Gabor filters on a single face image
FIG. 4 shows 5 different face images from each data set
FIG. 5 shows the recognition rates of the four methods under different numbers of projection vectors (ORL face database)
FIG. 6 shows the recognition rates of the four methods under different numbers of projection vectors (Yale face database)
FIG. 7 shows the recognition rates of the four methods under different numbers of projection vectors (FERET face database)
Detailed Description
The invention relates to a single-sample face recognition method based on Gabor feature extraction and spatial transformation. Referring to fig. 1, the specific steps of the invention are as follows.
And step 1, extracting the spatial information of a single image by using Gabor wavelet.
(1.1) Constructing the Gabor filter function: the invention adopts a two-dimensional Gabor filter, which is a Gaussian kernel function modulated by a complex sinusoidal plane wave, to extract the spatial information of a single image. It is defined as:
G(x, y) = exp(-(x'^2 + γ^2 y'^2)/(2σ^2)) · exp(i(2πf x' + φ)),  x' = x cos θ + y sin θ,  y' = -x sin θ + y cos θ
where f is the central frequency of the complex sinusoidal plane wave, θ is the orientation of the normal to the parallel stripes of the Gabor function, φ is the phase, σ is the standard deviation, and γ is the spatial ratio specifying the ellipticity of the support of the Gabor function.
(1.2) Constructing the Gabor filter bank: a Gabor filter bank is composed of a group of Gabor filters with different frequencies and directions. The invention uses a Gabor filter bank with five different scales and eight different directions, given by the following two formulas:
f_u = f_max/(√2)^u (u = 0, 1, …, 4),  θ_v = vπ/8 (v = 0, 1, …, 7)
where f_max is the maximum central frequency.
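For illustration only, the following minimal Python sketch builds such a filter bank from the Gabor function of step (1.1); the kernel size, σ, γ, f_max and the √2 scale progression are example values assumed here rather than values fixed by the invention.

```python
import numpy as np

def gabor_kernel(f, theta, phi=0.0, sigma=4.0, gamma=0.5, size=31):
    """Sample the 2-D Gabor function of step (1.1) on a size-by-size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)            # coordinates rotated by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * (2.0 * np.pi * f * xr + phi))
    return envelope * carrier

def gabor_bank(n_scales=5, n_orients=8, f_max=0.25):
    """Build the filter bank of step (1.2): five scales u and eight orientations v."""
    bank = {}
    for u in range(n_scales):
        f_u = f_max / (np.sqrt(2.0) ** u)                 # assumed scale progression
        for v in range(n_orients):
            theta_v = v * np.pi / n_orients               # orientations v*pi/8
            bank[(u, v)] = gabor_kernel(f_u, theta_v)
    return bank
```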
(1.3) Gabor representation of the face image: for a face image A(x, y), its Gabor representation is obtained by convolving the original image with the Gabor filters, i.e.:
G_{u,v}(x, y) = A(x, y) * g_{u,v}(x, y)
where G_{u,v}(x, y) denotes the two-dimensional convolution result of the Gabor filter at scale u and direction v and * denotes two-dimensional convolution. The size of G_{u,v}(x, y) is determined by the down-sampling factor ξ, and G_{u,v}(x, y) is normalized to zero mean and unit variance to obtain the filtered feature matrix Z_{u,v} ∈ R^{m×n}.
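A possible sketch of this step is given below; taking the magnitude of the complex filter response before normalization, and down-sampling by simple striding, are assumptions made here for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_response(img, kernel, xi=4):
    """Convolve with one Gabor kernel, down-sample by xi, normalize to zero mean and unit variance."""
    g = fftconvolve(np.asarray(img, dtype=float), kernel, mode='same')   # 2-D convolution G_{u,v}
    mag = np.abs(g)[::xi, ::xi]                                          # magnitude + down-sampling factor xi
    return (mag - mag.mean()) / (mag.std() + 1e-12)                      # Z_{u,v} in R^{m x n}
```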
(1.4) Constructing the Gabor direction block feature matrix: the filtered feature matrices Z_{u,v} obtained in (1.3) are converted into one-dimensional column vectors, and Z_0 denotes the Gabor direction block feature matrix of the face image A(x, y) over the five scales and eight directions:
Z_0 = [z_{0,0}, z_{0,1}, …, z_{4,7}]
where z_{u,v} ∈ R^{(m·n)×1} is the one-dimensional representation of Z_{u,v} at scale u and direction v, and Z_0 ∈ R^{(m·n)×40} is the Gabor direction block feature matrix obtained from the convolution results G_{u,v}(x, y).
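Continuing the sketches above, the following helper stacks the 40 vectorized responses into Z_0; the (u, v) ordering of the columns is an assumption of this sketch.

```python
import numpy as np

def gabor_block_matrix(img, bank, xi=4):
    """Column-stack the 40 vectorized responses into Z_0 in R^{(m*n) x 40}."""
    cols = [gabor_response(img, kernel, xi).reshape(-1, 1)   # gabor_response from the sketch above
            for (u, v), kernel in sorted(bank.items())]
    return np.hstack(cols)
```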
And 2, fusing the spatial feature information extracted by the Gabor features with the spectral information of the original image.
Since the single training sample image itself contains very important spectral information, the Gabor spatial feature matrix Z_0 and the spectral feature vector Y_0 ∈ R^{(m·n)×1} are fused to obtain the fused feature matrix F ∈ R^{(m·n)×41}:
F = [Z_0/σ_1, Y_0/σ_2]
where σ_1 and σ_2 are the standard deviations of Z_0 and Y_0, respectively, obtained as the square roots of the variances of the feature vectors.
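An illustrative sketch of the fusion, assuming the spectral column Y_0 is the original image down-sampled by the same factor ξ (so its length matches the rows of Z_0) and that each block is normalized by its standard deviation as in the formula above.

```python
import numpy as np

def fused_feature_matrix(Z0, img, xi=4):
    """Fuse Gabor spatial features with the spectral feature: F = [Z_0/sigma_1, Y_0/sigma_2]."""
    Y0 = np.asarray(img, dtype=float)[::xi, ::xi].reshape(-1, 1)   # spectral (gray-level) column
    sigma1, sigma2 = Z0.std(), Y0.std()                            # standard deviations of Z_0 and Y_0
    return np.hstack([Z0 / sigma1, Y0 / sigma2])                   # F in R^{(m*n) x 41}
```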
And 3, performing feature space transformation on the fusion feature matrix F.
(3.1) establishing a fusion characteristic optimization model
In order to distinguish the classes of all training images, it is desirable that the differences between sample images from the same class be as small as possible, while the differences between sample images from different classes be as large as possible. Inspired by the Fisher criterion, the following fused feature optimization model is established: the fused feature matrix is projected into a low-dimensional feature subspace through a feature space transformation, and an optimal linear transformation matrix that maximizes the separation between classes is sought.
(3.2) performing feature space transformation to obtain a transformation matrix W2
(3.2a) Constructing the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix.
Assume that n-dimensional training samples are obtained from the fused feature matrix, c is the number of classes, and n_i (i = 1, 2, …, c) is the number of training samples of class i. The Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix are then defined as:
S_b^G = Σ_{i=1}^{c} n_i (f_i − f_0)(f_i − f_0)^T,  S_w^G = Σ_{i=1}^{c} Σ_{j=1}^{n_i} (f_{ij} − f_i)(f_{ij} − f_i)^T
where f_{ij} is the j-th fused feature vector from class i, f_i is the mean vector of class i, and f_0 is the mean vector of all training samples.
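For illustration, a Python sketch of these two definitions; it assumes the fused feature vectors are supplied as the rows of an array F with one class label per row.

```python
import numpy as np

def gabor_scatter_matrices(F, labels):
    """Gabor inter-class (S_b) and intra-class (S_w) scatter matrices of step (3.2a)."""
    F, labels = np.asarray(F, dtype=float), np.asarray(labels)
    f0 = F.mean(axis=0)                                  # mean vector of all training samples
    n_dim = F.shape[1]
    Sb, Sw = np.zeros((n_dim, n_dim)), np.zeros((n_dim, n_dim))
    for c in np.unique(labels):
        Fc = F[labels == c]
        fi = Fc.mean(axis=0)                             # mean vector of class c
        Sb += len(Fc) * np.outer(fi - f0, fi - f0)       # n_i (f_i - f_0)(f_i - f_0)^T
        Sw += sum(np.outer(fj - fi, fj - fi) for fj in Fc)
    return Sb, Sw
```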
The transformation matrix W is obtained by simultaneously maximizing S_b^G and minimizing S_w^G; the optimization model maximizes the ratio of the projected Gabor inter-class scatter to the projected Gabor intra-class scatter. However, when the matrices S_b^G and S_w^G in this model are singular, the above criterion is no longer valid. In this case, matrix decomposition and feature space transformation play an important role. In the feature space transformation, since the goal is to maximize the separation between different classes, the null space of S_b^G should be discarded because it contains no useful discriminant information, while the null space of S_w^G should be retained because it contains important discriminant information.
(3.2b) Projecting the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix into an s_1-dimensional subspace and obtaining the transformation W_1.
In this step, first consider the Singular Value Decomposition (SVD) of S_b^G:
S_b^G = U_b Λ_b U_b^T
Partition U_b into blocks U_b = [U_{b1}, U_{b2}], where U_{b1} ∈ R^{n×s_1} corresponds to the s_1 nonzero eigenvalues. Therefore,
U_{b1}^T S_b^G U_{b1} = Λ_{b1}
where U_{b1} is a column-orthogonal matrix and Λ_{b1} ∈ R^{s_1×s_1} is a diagonal matrix with non-increasing, positive diagonal elements. In practical applications, the singularity of S_b^G may reduce the discriminating ability, so its zero eigenvalues and the corresponding eigenvectors should be discarded. Based on the above considerations, the transformation W_1 = U_{b1} Λ_{b1}^{-1/2} is first used to transform the original data into the s_1-dimensional space.
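A sketch of this step, assuming the whitening form W_1 = U_{b1} Λ_{b1}^{-1/2}; since S_b^G is symmetric positive semidefinite, its SVD coincides with its eigenvalue decomposition, so an eigendecomposition is used here.

```python
import numpy as np

def transform_W1(Sb, tol=1e-10):
    """Discard the null space of S_b and whiten its range: W_1 = U_b1 Lambda_b1^(-1/2)."""
    eigvals, U = np.linalg.eigh(Sb)                      # S_b is symmetric positive semidefinite
    order = np.argsort(eigvals)[::-1]                    # non-increasing eigenvalues
    eigvals, U = eigvals[order], U[:, order]
    s1 = int(np.sum(eigvals > tol))                      # keep only the nonzero eigenvalues
    return U[:, :s1] / np.sqrt(eigvals[:s1])             # n x s1 transformation W_1
```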
(3.2c) Performing the related transformation in the s_1-dimensional transform space to obtain the transformation W_2.
With the transformation W_1 obtained in step (3.2b), the inter-class scatter matrix and the intra-class scatter matrix in the resulting transform space become, respectively,
S_b' = W_1^T S_b^G W_1,  S_w' = W_1^T S_w^G W_1
so that the original n×n inter-class and intra-class scatter matrices are reduced to s_1×s_1 matrices. Now consider the eigenvalue decomposition of S_w':
S_w' = U_w Λ_w U_w^T
where U_w is an orthogonal matrix and Λ_w is a diagonal matrix. In most applications n is much larger than s_1 and S_w' is nonsingular. Taking
W_2 = U_w Λ_w^{-1/2}
the optimal transformation matrix is therefore obtained as
W = W_1 W_2.
in fact, the above optimization problemThe following eigenvalue problem can be transformed to solve:
and the solution of the above characteristic problem can be obtained by solving the generalized characteristic value. Let λ be1,λ2,…,λtT maximum eigenvalues in descending order, w, of the eigenvalue problem1,w2,…,wtIs the corresponding feature vector.
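As an illustration of this step, the following Python sketch computes the top-t generalized eigenvectors of S_b^G w = λ S_w^G w; the small ridge added to S_w^G is an assumption of this sketch to keep the problem well posed when S_w^G is singular, not part of the method described above.

```python
import numpy as np
from scipy.linalg import eigh

def top_generalized_eigenvectors(Sb, Sw, t, ridge=1e-8):
    """Solve S_b w = lambda S_w w and return the t eigenvectors with the largest eigenvalues."""
    n = Sw.shape[0]
    eigvals, W = eigh(Sb, Sw + ridge * np.eye(n))        # generalized symmetric eigenproblem
    order = np.argsort(eigvals)[::-1]                    # eigenvalues in descending order
    return W[:, order[:t]]                               # columns w_1, ..., w_t
```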
Solving the problem mainly involves two steps: the first maximizes the inter-class scatter matrix through Singular Value Decomposition (SVD), and the second solves the generalized eigenvalue problem. The key of the first step is to find a column-orthogonal transformation that maximizes the projected inter-class scatter; since U_{b1} is a column-orthogonal matrix and Λ_{b1} is a diagonal matrix with non-increasing, positive diagonal elements, U_{b1} is a solution of this problem.
In addition, the pseudo-inverse is typically used to handle singular matrices, and the pseudo-inverse of a matrix can be computed by Singular Value Decomposition (SVD). A natural extension of using the pseudo-inverse is to use the eigendecompositions of the scatter matrices. More specifically, let M = UΣV^T be the singular value decomposition of M, where U and V are column-orthogonal matrices and Σ is a diagonal matrix with positive diagonal elements; then the pseudo-inverse of M is M^+ = VΣ^{-1}U^T. Based on the above discussion, the optimal transformation matrix W = W_1 W_2 is obtained.
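The pseudo-inverse computation described above can be sketched as follows; the tolerance tol used to decide which singular values count as positive is an assumed numerical detail (NumPy's built-in np.linalg.pinv implements the same idea).

```python
import numpy as np

def pinv_svd(M, tol=1e-10):
    """Pseudo-inverse M^+ = V Sigma^{-1} U^T, keeping only the positive singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    keep = s > tol
    return Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T
```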
Thus, based on the above transformations and derivations, W = W_1 W_2 is the optimal transformation matrix in the transformed s_1-dimensional space.
Step 4, constructing the projection feature vector
For a face test image I ∈ R^{M×N}, the Gabor-based spatial feature matrix is obtained by the formulas of step 1 and the fused feature matrix is obtained by the formula of step 2. The new Gabor direction block feature vector of the face test image is then obtained by projecting its fused feature vector with the optimal transformation matrix, i.e. x_test = W^T f.
And 5, finally, utilizing a nearest neighbor classifier to identify.
Based on the above description, for a face image A(x, y), spatial feature information is first extracted with the Gabor wavelet, then fused with the spectral information of the original image, and the feature space transformation is performed. In the transformed s_1-dimensional subspace we obtain the optimal transformation matrix W = W_1 W_2 and the projected feature vectors, which are then classified with a nearest neighbor classifier. The nearest neighbor classifier (NNC) is a non-parametric classifier whose main idea is as follows: let X = {(x_1, l_1), (x_2, l_2), …, (x_n, l_n)} be the training sample set, where l_i (i = 1, 2, …, n) is the class label of sample x_i; if the test sample x has the minimum distance to the training sample x_i, then x is assigned to class l_i.
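A minimal sketch of the nearest neighbor classification on the projected feature vectors; the Euclidean distance is assumed as the distance measure.

```python
import numpy as np

def nearest_neighbor_label(x_test, X_train, train_labels):
    """Assign x_test to the class label of the closest projected training vector."""
    dists = np.linalg.norm(X_train - x_test, axis=1)     # distance to every training projection
    return train_labels[int(np.argmin(dists))]
```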
Suppose F_test is a test image. According to the formulas above, its Gabor direction block feature vector x_test is obtained. F_test is assigned to the i-th class according to
i = arg min_j || x_test − x_j ||
where x_j denotes the projected feature vector of the j-th training class.
the effects of the present invention can be further illustrated by the following simulation experiments.
1. Simulation conditions and parameters
The present example uses the ORL, Yale and FERET face databases. The ORL database contains 400 face images of size 112×92 from 40 persons, 10 per person. Since these images were taken at different times, they vary in pose, angle, scale, expression, glasses, and so on. Fig. 4(a) shows 5 different face images from this data set. In the experiment, 1 image of each person was used for training and the remaining 9 for testing. The Yale face database contains 165 images of 15 individuals, 11 per person, which vary in facial expression and lighting conditions, such as happy, sad, surprised, neutral, with glasses, and so on. Fig. 4(b) shows 5 different face images from this data set. In the experiment, each image was resized to 100×100, 1 image of each person was used for training, and the remaining images were used for testing. The FERET face database was created by the U.S. Department of Defense through the DARPA program; it contains 14051 facial grayscale images from 1199 different people with varying poses, facial expressions, etc. Fig. 4(c) shows 5 different face images from this data set. In the experiment we selected 5 different images of 15 persons, 75 face images in total, and resized them to 80×80. We performed 50 experiments on the above three face databases and compared the proposed method with existing single-sample methods, namely the SVD-based, QRCP-based and SDD-based Fisher linear discriminant analysis methods. Fig. 2 shows the Gabor filter bank with 5 different scales and 8 different orientations. Fig. 3 shows the convolution results of 2 Gabor filters on a single face image.
2. Simulation content and result analysis
In a simulation test, the method of the invention is compared and analyzed with the traditional Fisher linear discriminant analysis method based on SVD, the Fisher linear discriminant analysis method based on QRCP and the Fisher linear discriminant analysis method based on SDD, and the test is developed on three data sets.
Experiment one:
Experiment one was carried out on the ORL face database according to the five steps above; the recognition-rate results are shown in Fig. 5. From Fig. 5 we can see that, among the four methods, the maximum recognition rate of the proposed method is 76.67%, higher than that of the other methods (the maximum recognition rate of the SVD-based Fisher linear discriminant analysis method is 56.67%, that of the QRCP-based method is 68.89%, and that of the SDD-based method is 71.94%). Compared with the other three methods, the proposed method has the highest recognition rate and achieves the best recognition performance; its robustness is attributable to the use of the Gabor direction feature block and the fused feature space transformation.
Experiment two:
Experiment two was carried out on the Yale face database according to the five steps above. Fig. 6 shows how the recognition rates of the four methods vary with the number of projection vectors. From Fig. 6 we can draw the following conclusions: for the proposed method, the recognition rate gradually increases as the number of projection vectors increases, and the recognition performance is gradually enhanced. The maximum recognition rates of the SVD-based, QRCP-based and SDD-based Fisher linear discriminant analysis methods and of the proposed method are 24.00%, 38.67%, 45.33% and 64.67%, respectively. In terms of recognition rate, the proposed method is the best, the SDD-based and QRCP-based methods are suboptimal, and the SVD-based method performs worst, as Fig. 6 fully illustrates.
Experiment three:
Experiment three was performed on the FERET face database, again following the five steps above. The relationships between recognition rate and the number of projection vectors for the SVD-based, QRCP-based and SDD-based Fisher linear discriminant analysis methods and the proposed method are shown in Fig. 7. As is clear from Fig. 7, the recognition performance of the proposed method is higher than that of the other three methods, and its recognition rate gradually increases as the number of projection vectors increases, showing excellent recognition performance. The best recognition rates of the SVD-based, QRCP-based and SDD-based Fisher linear discriminant analysis methods and of the proposed method are 88.83%, 86.67%, 93.33% and 96.67%, respectively.
The three experiments show that, in actual face recognition, the proposed single-sample face recognition method based on Gabor feature extraction and spatial transformation gives better recognition results, because the Gabor feature block is robust to local distortions caused by changes in expression, pose and illumination. Table 1 shows the maximum recognition rate (rr, %) and recognition time (t, s) of the four methods on the three data sets, where #1, #2 and #3 denote the ORL, Yale and FERET data sets, respectively.
Table 1 maximum recognition rate (rr,%) and recognition time (t, s) for the four methods on different datasets.
As can be seen from Table 1, the recognition rate of the proposed method is higher than that of the other three methods (the SVD-based, QRCP-based and SDD-based Fisher linear discriminant analysis methods), and its recognition time is far shorter. In other words, across all of the above experiments the proposed method ran approximately 22.28, 98.08 and 52.60 times faster than the SDD-based method on the ORL, Yale and FERET data sets, respectively.
It is also evident from the experimental result figures that the recognition rate of the proposed method is clearly higher than that of the SVD-based, QRCP-based and SDD-based Fisher linear discriminant analysis methods, and its average recognition time is clearly lower than that of the other three algorithms; this difference in time is mainly caused by image vectorization. Therefore, the proposed method is a very effective and robust single-sample face recognition method.

Claims (3)

1. A single-sample face recognition method based on Gabor feature extraction and spatial transformation comprises the following steps:
(1) extracting spatial information of a single image by using a Gabor wavelet:
(1.1) constructing a Gabor filter function: a two-dimensional Gabor filter, which is a Gaussian kernel function modulated by a complex sinusoidal plane wave, is adopted to extract the spatial information of a single image; it is defined as:
G(x, y) = exp(-(x'^2 + γ^2 y'^2)/(2σ^2)) · exp(i(2πf x' + φ)),  x' = x cos θ + y sin θ,  y' = -x sin θ + y cos θ
wherein f is the central frequency of the complex sinusoidal plane wave, θ is the orientation of the normal to the parallel stripes of the Gabor function, φ is the phase, σ is the standard deviation, and γ is the spatial ratio specifying the ellipticity of the support of the Gabor function;
(1.2) constructing a Gabor filter bank: a Gabor filter bank is composed of a group of Gabor filters with different frequencies and directions; a Gabor filter bank with five different scales and eight different directions is used, given by the following two formulas:
f_u = f_max/(√2)^u (u = 0, 1, …, 4),  θ_v = vπ/8 (v = 0, 1, …, 7)
wherein f_max is the maximum central frequency;
(1.3) Gabor representation of the face image: for a face image A(x, y), its Gabor representation is obtained by convolving the original image with the Gabor filters, i.e.:
G_{u,v}(x, y) = A(x, y) * g_{u,v}(x, y)
wherein G_{u,v}(x, y) represents the two-dimensional convolution result of the Gabor filter at scale u and direction v, * denotes two-dimensional convolution, the size of G_{u,v}(x, y) is determined by a down-sampling factor ξ, and G_{u,v}(x, y) is normalized to zero mean and unit variance to obtain the filtered feature matrix Z_{u,v} ∈ R^{m×n};
(1.4) constructing the Gabor direction block feature matrix: the filtered feature matrices Z_{u,v} obtained in (1.3) are converted into one-dimensional column vectors, and Z_0 denotes the Gabor direction block feature matrix of the face image A(x, y) over the five scales and eight directions:
Z_0 = [z_{0,0}, z_{0,1}, …, z_{4,7}]
wherein z_{u,v} ∈ R^{(m·n)×1} is the one-dimensional representation of Z_{u,v} at scale u and direction v, and Z_0 ∈ R^{(m·n)×40} is the Gabor direction block feature matrix obtained from the convolution results G_{u,v}(x, y);
(2) fusing the spatial information extracted by the Gabor wavelet with the spectral information of the original image: the Gabor direction block feature matrix Z_0 ∈ R^{(m·n)×40} is obtained from step (1); on the other hand, since the single training sample image itself contains very important spectral information, the Gabor spatial feature matrix Z_0 and the spectral feature vector Y_0 ∈ R^{(m·n)×1} are fused to obtain the fused feature matrix F ∈ R^{(m·n)×41}:
F = [Z_0/σ_1, Y_0/σ_2]
wherein σ_1 and σ_2 are the standard deviations of Z_0 and Y_0, respectively, which can be obtained by calculating the square roots of the variances of the feature vectors;
(3) feature space transformation based on the fusion feature matrix:
(3.1) establishing a fusion characteristic optimization model
(3.2) performing feature space transformation to obtain a transformation matrix
(4) Constructing a projection feature vector;
(5) after the projected feature vectors are obtained, the nearest neighbor classifier is used for identification.
2. The single-sample face recognition method based on Gabor feature extraction and spatial transformation according to claim 1, characterized in that the specific process of the feature space transformation in step (3.2) is as follows:
(3.2a) defining the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix:
assume that n-dimensional training samples are obtained from the fused feature matrix, c is the number of classes, and n_i (i = 1, 2, …, c) is the number of training samples of class i; the Gabor inter-class scatter matrix and the Gabor intra-class scatter matrix are then defined as:
S_b^G = Σ_{i=1}^{c} n_i (f_i − f_0)(f_i − f_0)^T,  S_w^G = Σ_{i=1}^{c} Σ_{j=1}^{n_i} (f_{ij} − f_i)(f_{ij} − f_i)^T
wherein f_{ij} is the j-th fused feature vector from class i, f_i is the mean vector of class i, and f_0 is the mean vector of all training samples;
(3.2b) transforming the inter-class and intra-class scatter matrices into the s_1-dimensional space and obtaining the transformation W_1:
first consider the Singular Value Decomposition (SVD) of S_b^G:
S_b^G = U_b Λ_b U_b^T
partition U_b into blocks U_b = [U_{b1}, U_{b2}], wherein U_{b1} ∈ R^{n×s_1}; therefore the formula can be converted into the form:
U_{b1}^T S_b^G U_{b1} = Λ_{b1}
wherein U_{b1} is a column-orthogonal matrix and Λ_{b1} is a diagonal matrix with non-increasing and positive diagonal elements; in practical applications, the singularity of S_b^G may cause a reduction in discriminating ability; therefore, its zero eigenvalues and the corresponding eigenvectors should be discarded; based on the above considerations, the transformation W_1 = U_{b1} Λ_{b1}^{-1/2} is used to transform the original data into the s_1-dimensional space;
(3.2c) performing the related transformation in the s_1-dimensional transform space to obtain the final transformation W_2:
in the obtained s_1-dimensional transform space, the inter-class scatter matrix becomes S_b' = W_1^T S_b^G W_1 and the intra-class scatter matrix becomes S_w' = W_1^T S_w^G W_1, both of size s_1×s_1; now consider the eigenvalue decomposition of S_w':
S_w' = U_w Λ_w U_w^T
wherein U_w is an orthogonal matrix and Λ_w is a diagonal matrix; in most applications n is much larger than s_1 and S_w' is nonsingular; therefore, taking W_2 = U_w Λ_w^{-1/2}, the optimal transformation matrix is obtained as:
W = W_1 W_2.
3. The single-sample face recognition method based on Gabor feature extraction and spatial transformation according to claim 1, characterized in that the specific method for constructing the projection feature vector in step (4) is as follows:
for a test feature vector f ∈ R^{n×1}, the projected feature vector is obtained by the linear transformation x = W^T f, where W = W_1 W_2, so that the computational complexity is significantly reduced; for a face test image I ∈ R^{M×N}, the Gabor-based spatial feature matrix is obtained by the formulas of step (1) and the fused feature matrix by the formula of step (2); the new Gabor direction block feature vector of the face test image is then obtained as x = W^T f, and this x is the required Gabor direction block feature vector, denoted x_test.
CN201611059543.6A 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation Active CN106778522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611059543.6A CN106778522B (en) 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611059543.6A CN106778522B (en) 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Publications (2)

Publication Number Publication Date
CN106778522A true CN106778522A (en) 2017-05-31
CN106778522B CN106778522B (en) 2020-08-04

Family

ID=58911568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611059543.6A Active CN106778522B (en) 2016-11-25 2016-11-25 Single-sample face recognition method based on Gabor feature extraction and spatial transformation

Country Status (1)

Country Link
CN (1) CN106778522B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403194A (en) * 2017-07-26 2017-11-28 广州慧扬健康科技有限公司 Skin cancer image recognition visualization model based on t-SNE
CN107798308A (en) * 2017-11-09 2018-03-13 石数字技术成都有限公司 A face recognition method based on a short-video training method
CN107886090A (en) * 2017-12-15 2018-04-06 苏州大学 A kind of single sample face recognition method, system, equipment and readable storage medium storing program for executing
CN114445720A (en) * 2021-12-06 2022-05-06 西安电子科技大学 Hyperspectral anomaly detection method based on spatial-spectral depth synergy

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2428916A2 (en) * 2010-09-09 2012-03-14 Samsung Electronics Co., Ltd. Method and apparatus to generate object descriptor using extended curvature gabor filter
CN102855468A (en) * 2012-07-31 2013-01-02 东南大学 Single sample face recognition method in photo recognition
CN103514443A (en) * 2013-10-15 2014-01-15 中国矿业大学 Single-sample face recognition transfer learning method based on LPP (Low Power Point) feature extraction
CN104239856A (en) * 2014-09-04 2014-12-24 电子科技大学 Face recognition method based on Gabor characteristics and self-adaptive linear regression
CA2931348A1 (en) * 2013-11-25 2015-05-28 Ehsan Fazl Ersi System and method for face recognition

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2428916A2 (en) * 2010-09-09 2012-03-14 Samsung Electronics Co., Ltd. Method and apparatus to generate object descriptor using extended curvature gabor filter
CN102855468A (en) * 2012-07-31 2013-01-02 东南大学 Single sample face recognition method in photo recognition
CN103514443A (en) * 2013-10-15 2014-01-15 中国矿业大学 Single-sample face recognition transfer learning method based on LPP (Low Power Point) feature extraction
CA2931348A1 (en) * 2013-11-25 2015-05-28 Ehsan Fazl Ersi System and method for face recognition
CN104239856A (en) * 2014-09-04 2014-12-24 电子科技大学 Face recognition method based on Gabor characteristics and self-adaptive linear regression

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NIE Xiangfei: "Using the Gabor wavelet transform to solve the small-sample problem in face recognition" (利用Gabor小波变换解决人脸识别中的小样本问题), Optics and Precision Engineering (《光学精密工程》) *
ZOU Jianfa: "Face recognition based on enhanced Gabor features and direct fractional-step linear discriminant analysis" (基于增强Gabor特征和直接分步线性判别分析的人脸识别), Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403194A (en) * 2017-07-26 2017-11-28 广州慧扬健康科技有限公司 Cutaneum carcinoma image recognition visualization model based on t SNE
CN107403194B (en) * 2017-07-26 2020-12-18 广州慧扬健康科技有限公司 Skin cancer image recognition visualization system based on t-SNE
CN107798308A (en) * 2017-11-09 2018-03-13 石数字技术成都有限公司 A kind of face identification method based on short-sighted frequency coaching method
CN107798308B (en) * 2017-11-09 2020-09-22 一石数字技术成都有限公司 Face recognition method based on short video training method
CN107886090A (en) * 2017-12-15 2018-04-06 苏州大学 A kind of single sample face recognition method, system, equipment and readable storage medium storing program for executing
CN107886090B (en) * 2017-12-15 2021-07-30 苏州大学 Single-sample face recognition method, system, equipment and readable storage medium
CN114445720A (en) * 2021-12-06 2022-05-06 西安电子科技大学 Hyperspectral anomaly detection method based on spatial-spectral depth synergy

Also Published As

Publication number Publication date
CN106778522B (en) 2020-08-04

Similar Documents

Publication Publication Date Title
Li et al. Overview of principal component analysis algorithm
CN103136516B (en) The face identification method that visible ray and Near Infrared Information merge and system
Kortli et al. A comparative study of cfs, lbp, hog, sift, surf, and brief for security and face recognition
CN106778522B (en) Single-sample face recognition method based on Gabor feature extraction and spatial transformation
Timotius et al. Face recognition between two person using kernel principal component analysis and support vector machines
Ouarda et al. MLP Neural Network for face recognition based on Gabor Features and Dimensionality Reduction techniques
CN103984920A (en) Three-dimensional face identification method based on sparse representation and multiple feature points
Tathe et al. Face detection and recognition in videos
CN107194314A (en) The fuzzy 2DPCA and fuzzy 2DLDA of fusion face identification method
Kaur et al. Comparative study of facial expression recognition techniques
CN111259780A (en) Single-sample face recognition method based on block linear reconstruction discriminant analysis
Sudhakar et al. Facial identification of twins based on fusion score method
KR101727833B1 (en) Apparatus and method for constructing composite feature vector based on discriminant analysis for face recognition
Guo et al. Palmprint Recognition Based on Local Fisher Discriminant Analysis.
Chater et al. Comparison of robust methods for extracting descriptors and facial matching
Mráček et al. 3D face recognition on low-cost depth sensors
CN108830163B (en) Customs identity verification system and method based on local judgment CCA
Huang et al. Regularized trace ratio discriminant analysis with patch distribution feature for human gait recognition
Gatto et al. Kernel two dimensional subspace for image set classification
Das Comparative analysis of PCA and 2DPCA in face recognition
JP2005202673A (en) Image recognition apparatus
Vázquez et al. Real time face identification using a neural network approach
Purahong et al. Hybrid Facial Features with Application in Person Identification
Naveen et al. A robust novel method for face recognition from 2d depth images using DWT and DFT score fusion
Shen et al. A near-infrared face detection and recognition system using ASM and PCA+ LDA

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210923

Address after: 710016 B 1616, Tiandi Times Plaza, Fengcheng two road, Weiyang District, Xi'an, Shaanxi.

Patentee after: Liu Jiaojiao

Address before: No. 1800 Lihu Avenue, Binhu District, Wuxi, Jiangsu 214122

Patentee before: Jiangnan University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221026

Address after: No.06, 20th Floor, Building C, Huihai Square, Chuangye Road, Longhua Street, Longhua District, Shenzhen, Guangdong 518109

Patentee after: SHENZHEN QIANKEDUO INFORMATION TECHNOLOGY CO.,LTD.

Address before: 710016 B 1616, Tiandi Times Plaza, Fengcheng two road, Weiyang District, Xi'an, Shaanxi.

Patentee before: Liu Jiaojiao

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231220

Address after: W338, 3rd Floor, Port Building, Shipping Center, No. 1167 Yihai Avenue, Nanshan Street, Qianhai Shenzhen Hong Kong Cooperation Zone, Shenzhen, Guangdong Province, 518000

Patentee after: Shenzhen Huiyouba Technology Co.,Ltd.

Address before: No.06, 20th Floor, Building C, Huihai Square, Chuangye Road, Longhua Street, Longhua District, Shenzhen, Guangdong 518109

Patentee before: SHENZHEN QIANKEDUO INFORMATION TECHNOLOGY CO.,LTD.